QNAP – TR-004 Firmware Issue Workaround

I’ve been a user of QNAP products for over 10 years now. I have a couple of home systems running at the moment, including a TS-831X with a TR-004 enclosure attached to it. Last week I was prompted to update the external enclosure firmware to 1.1.0. After I did that, I had an issue where, once the unit spun down its disks, the volume would be marked as “Not active” by the system and I’d lose access to the data. Recovery was simple enough – I could either reboot the box or manually recover the enclosure via the QTS interface. I raised a job with QNAP web support, and we went back and forth with troubleshooting over the course of a week. The ticket was eventually escalated, and it was acknowledged that the current fix was to roll back to version 1.0.4 of the enclosure firmware.

The box is only used for media storage for Plex, but I figured it was worth backing up the contents of the external enclosure to another location in case something went wrong with the rollback. In any case, I’ve not done a downgrade on a QNAP device before, so I thought it was worth documenting the procedure here.
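Since backing up the enclosure contents was the first step, here’s roughly what I mean by that. This is a minimal sketch only – the paths are made up, and it assumes both the source and destination volumes are mounted somewhere you can run Python (rsync from the NAS shell would do the job just as well):

```python
import filecmp
import shutil
import sys
from pathlib import Path

# Hypothetical paths - adjust for your environment. The source is the
# volume on the TR-004; the destination lives on a different volume.
SOURCE = Path("/share/external/media")
DEST = Path("/share/backup/media-copy")

def mirror(src: Path, dst: Path) -> None:
    """Copy the tree, preserving timestamps and skipping files already present."""
    for item in src.rglob("*"):
        target = dst / item.relative_to(src)
        if item.is_dir():
            target.mkdir(parents=True, exist_ok=True)
        elif not target.exists():
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(item, target)  # copy2 preserves file metadata

def verify(src: Path, dst: Path) -> bool:
    """Top-level sanity check: no missing or differing files."""
    comparison = filecmp.dircmp(src, dst)
    return not (comparison.left_only or comparison.diff_files)

if __name__ == "__main__":
    mirror(SOURCE, DEST)
    if not verify(SOURCE, DEST):
        sys.exit("Backup verification failed - check before rolling back!")
    print("Backup looks complete.")
```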

For some reason I needed to use Chrome rather than Safari for this exercise – I don’t know why, but there you go. In QTS, click on Storage & Snapshots, then Storage. Click on External RAID Management and then click on Check for Update.

You’ll see that, in this example, the installed TR-004 firmware version is 1.1.0. Click on Browse to select the firmware file you want to roll back to.

You’ll get a stern warning that this kind of thing might cause problems.

Take a backup. Then tick the box.

The update will progress. It doesn’t take too long.

You then need to power off the enclosure and power it back on.

And, hopefully, your data will still be there. One side effect I noted was that the shared folder on that particular volume no longer had the correct permissions associated with the share. Fortunately, this is a home environment, and I’m using one user account to provide access to the share. I don’t know what you’d do if you had a complicated permissions situation in place.
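If you did find yourself with a messier permissions situation, a bulk reset over SSH is one option. The sketch below is illustrative only – the share path, user, and group are made up, and if you rely on Windows ACLs rather than plain Unix modes you’ll likely need more than this:

```python
import grp
import os
import pwd
from pathlib import Path

# Hypothetical share path and account - substitute your own.
SHARE = Path("/share/external/media")
USER, GROUP = "mediauser", "everyone"

uid = pwd.getpwnam(USER).pw_uid
gid = grp.getgrnam(GROUP).gr_gid

for item in SHARE.rglob("*"):
    os.chown(item, uid, gid)
    # Directories need the execute bit so they can be traversed.
    os.chmod(item, 0o775 if item.is_dir() else 0o664)
os.chown(SHARE, uid, gid)
os.chmod(SHARE, 0o775)
```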

And there you go. Like most things with QNAP, it’s a fairly simple process. This is the first time I’ve had to use QNAP support, and I found them responsive and helpful. I’ll report back if I get any other issues with the enclosure.

Scale Computing and Leostream – Is It Finally VDI’s Year?

Scale Computing announced a partnership with Leostream a little while ago. With the global pandemic drastically changing the way a large number of organisations are working, it seemed like a good time to talk to Alan Conboy about how this all worked from a Scale Computing and Leostream perspective.

 

Easy As 1, 2

Getting started with Leostream is surprisingly simple. To start with, you’ll need to deploy a Gateway and a Broker VM. These are CentOS machines (if you’re a Scale Computing customer, you can likely get some minimally configured, pre-packaged qcow appliances from Alan). You’ll need to punch a hole through your firewall for SSL traffic, and run a couple of simple commands on the VMs, but that’s it.
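Once the Gateway is up, it’s worth confirming from outside the network that the firewall hole is actually working. Here’s a quick sketch of that check – the hostname is made up, and certificate validation is deliberately skipped because the appliance may well be using a self-signed certificate:

```python
import socket
import ssl

GATEWAY = "gateway.example.com"  # hypothetical external hostname
PORT = 443

context = ssl.create_default_context()
context.check_hostname = False   # appliance may present a self-signed cert
context.verify_mode = ssl.CERT_NONE

with socket.create_connection((GATEWAY, PORT), timeout=5) as sock:
    with context.wrap_socket(sock, server_hostname=GATEWAY) as tls:
        print(f"TLS handshake OK, negotiated {tls.version()}")
```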

But I’m getting ahead of myself. The way it works is that Leostream has a small agent that you can deploy across the PCs in your fleet. When users hit the Gateway, they can be directed to their own (physical) desktops inside the organisation. They can then access those desktops remotely (using RDP, SSH, or VNC) from any browser that supports SSL and HTML5. So, rather than having to go out and grab a bunch of laptops, set up a VPN (or scale it out), and have a desktop image ready to go (along with the prerequisite VDI resources hosted somewhere), you can have your remote workforce working remotely from day 1. It comes with Windows, Java, and Linux agents, so users running macOS or Linux can still come to the party.

I know I’ve done a bad job of describing the solution, so I recommend you check out this blog post instead.

 

Thoughts

I’m not at all passionate about VDI and End User Computing in the same way some people I know are. I always thought it was a neat solution that was frequently poorly executed and oftentimes cost a lot of money. But it’s a weird time for the world and, sadly, it might take something like a global pandemic for VDI to finally get its due as a useful solution for remote workers. I’d also like to point out that this is just a part of what Leostream can do, so if you’re after something outside of the Scale Computing alliance, they can probably still help you out.

I’ve spoken to Alan and the Scale Computing team about Leostream a few times now, and I really do like the idea of being able to bring users back into the network, rather than extending the network out to your users. You don’t have to go crazy acquiring a bunch of laptops or mobile devices for traditionally desk-bound users and then re-imaging those machines before they can work remotely. You don’t need to spend a tonne of cash on extra VPN connectivity or compute to support a bunch of new “desktop” VMs. Instead, in a fairly short amount of time, you can get users working the way they always have, with a minimum of fuss. This is exactly the kind of approach that I’ve come to expect from Scale Computing – keep it simple, easy to deploy, cost-conscious, and functional.

As I said before – VDI solutions don’t really excite me. But I do appreciate the flexibility they can offer in terms of the ability to access corporate workloads from non-traditional locales. This solution takes it a step further, and does a great job of delivering what could be a complicated solution in a simple and functional fashion. This is the kind of thing we need more of at the moment.

FalconStor Announces StorSafe

Remember FalconStor? You might have used its VTL product years ago, or perhaps the Network Storage Server product. Anyway, it’s still around, and recently announced a new product. I had the opportunity to speak to Todd Brooks (CEO) and David Morris (VP Products) to discuss StorSafe, something FalconStor is positioning as “the industry’s first enterprise-class persistent data storage container”.

 

What Is It?

StorSafe is essentially a way to store long-term archive data in portable virtual storage containers. It has the following features:

  • Capacity reduction for long-term archive storage, driving lower costs;
  • Multi-cloud archive storage;
  • Automatic archive integrity validation & journaling in the cloud;
  • Data egress fee optimisation; and
  • Unified Management and Analytics Console.

Persistent Virtual Storage Container

StorSafe is a bit different to the type of container you might expect from a company with a VTL heritage:

  • Does not rely on traditional tape formats, e.g. LTO constraints
  • Variable Payload Capacity of Archival Optimisation by Type
  • Execution capabilities for Advanced Features
  • Encryption, Compression, and Best-in-Class Deduplication
  • Erasure coding for Redundancy across On-premises/Clouds
  • Portable – Transfer Container to Storage System or any S3 Cloud
  • Archival Retention for 10, 25, 50, & 100 years

[image courtesy of FalconStor]

Multi-Cloud Erasure Coding

  • The VSC is sharded into multiple Mini-Containers that are protected with Erasure Coding
  • These Mini-Containers can then be moved out to multiple destinations – local, private data centre, or cloud – for archive
  • Containers can be tiered depending on access criticality or limited access needs (a toy sketch of the erasure coding idea follows the image below)

[image courtesy of FalconStor]
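To make the erasure coding idea a little more concrete, here’s a toy sketch of the principle. To be clear, this isn’t FalconStor’s implementation – a real system would use something like Reed-Solomon coding across many shards – but it shows how a container can be split into data shards plus parity so that losing any one destination doesn’t lose the data:

```python
def shard(data: bytes, k: int) -> list:
    """Split data into k padded shards and append one XOR parity shard."""
    size = -(-len(data) // k)  # ceiling division
    shards = [data[i * size:(i + 1) * size].ljust(size, b"\0") for i in range(k)]
    parity = bytearray(size)
    for s in shards:
        for i, b in enumerate(s):
            parity[i] ^= b
    return shards + [bytes(parity)]

def rebuild(shards):
    """Recover a single missing shard (marked None) by XORing the others."""
    missing = shards.index(None)
    size = len(next(s for s in shards if s is not None))
    recovered = bytearray(size)
    for s in shards:
        if s is not None:
            for i, b in enumerate(s):
                recovered[i] ^= b
    shards[missing] = bytes(recovered)
    return shards

# Shard a "container", lose one destination, and rebuild the lost piece.
pieces = shard(b"archive container payload", k=4)
pieces[2] = None  # pretend one archive destination is unreachable
restored = rebuild(pieces)
assert b"".join(restored[:4]).rstrip(b"\0") == b"archive container payload"
```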

 

Thoughts And Further Reading

If you’re familiar with my announcement posts, you’ll know that I try to touch on the highlights provided to me by the vendor about its product, and add my own interpretation. I feel like I haven’t really done StorSafe justice, however. It’s a cool idea in a lot of ways: take a bunch of data, dump it all over the place in a distributed fashion, and have it remain highly accessible and resilient. This isn’t designed for high performance storage requirements. This is very much focused on the kinds of data you’d be keen to store long-term, maybe on tape. I can’t tell you what this looks like from an implementation or performance perspective, so I can’t tell you whether the execution matches up with the idea that FalconStor has had. I find the promise of portability, particularly for data that you want to keep for a long time, extremely compelling. So let’s agree that this idea seems interesting, and watch this space for more on this as I learn more about it. You can read the press release here, and check out Mellor’s take on it here.

Random Short Take #32

Welcome to Random Short Take #32. A lot of good players have worn 32 in the NBA. I’m a big fan of Magic Johnson, but honourable mentions go to Jimmer Fredette and Blake Griffin. It’s a bit of a weird time around the world at the moment, but let’s get to it.

  • Veeam 10 was finally announced a little while ago and is now available for deployment. I work for a service provider, and we use Veeam, so this article from Anthony was just what I was after. There’s a What’s New article from Veeam you can view here too.
  • I like charts, and I like Apple laptops, so this chart was a real treat. The lack of ports is nice to look at, I guess, but carrying a bag of dongles around with me is a bit of a pain.
  • VMware recently made some big announcements around vSphere 7, amongst other things. Ather Beg did a great job of breaking down the important bits. If you like to watch videos, this series from VMware’s recent presentations at Tech Field Day 21 is extremely informative.
  • Speaking of VMware announcements, Cormac Hogan recently wrote a great article on getting started with VCF 4.0. If you’re new to VCF, this is a great resource.
  • Leaseweb Global recently announced the availability of 2nd Generation AMD EPYC powered hosts as part of its offering. I had a chance to speak with Mathijs Heikamph about it a little while ago. One of the most interesting things he said, when I questioned him about the market appetite for dedicated servers, was “[t]here’s no beating a dedicated server when you know the workload”. You can read the press release here.
  • This article is just … ugh. I used to feel a little sorry for businesses being disrupted by new technologies. My sympathy is rapidly diminishing though.
  • There’s a whole bunch of misinformation on the Internet about COVID-19 at the moment, but sometimes a useful nugget pops up. This article from Kieren McCarthy over at El Reg delivers some great tips on working from home – something more and more of us (at least in the tech industry) are doing right now. It’s not all about having a great webcam or killer standup desk.
  • Speaking of things to do when you’re working at home, JB posted a handy note on what he’s doing when it comes to lifting weights and getting in some regular exercise. I’ve been using this opportunity to get back into garage weights, but apparently it’s important to lift stuff more than once a month.

Retrospect Announces Backup 17 And Virtual 2020

Retrospect recently announced new versions of its Backup (17) and Virtual (2020) products. I had the opportunity to speak to JG Heithcock (GM, Retrospect) about the announcement and thought I’d share some thoughts here.

 

What’s New?

Retrospect Backup 17 has the following new features:

  • Automatic Onboarding: Simplified and automated deployment and discovery;
  • Nexsan E-Series / Unity Certification;
  • 10x Faster ProactiveAI; and
  • Restore Preflight for restores from cold storage.

Retrospect Virtual 2020 has the following enhancements:

  • Automatic Onboarding: Physical and Virtual monitoring from a single website;
  • 50% Faster;
  • Wasabi Cloud Support;
  • Backblaze B2 Cloud Support; and
  • Flexible licensing between VMware and Hyper-V.

Automatic Onboarding?

So what exactly is automatic onboarding? In short, it lets you onboard new servers and endpoints for faster deployment and automatic discovery.

  • Share one link with your team. No agent password required.
  • Retrospect Backup finds and protects new clients with ProactiveAI.
  • Add servers, desktops, and laptops to Retrospect Backup.
  • Single pane of glass for entire backup infrastructure with Retrospect Management Console.
  • Available for Windows, Mac, and Linux.

You can also onboard a new Retrospect Backup server for faster, simplified deployment.

  • Protect group or site.
  • Customised installer with license built-in.
  • Seamless Management Console integration.
  • Available for Windows and Mac.

You can also onboard a new Retrospect Virtual server for complete physical and virtual monitoring.

  • Customised installer.
  • Seamless Management Console integration.
  • Monitor physical and virtual.

Pricing

There’s a variety of pricing options available. When you buy a perpetual license, you get access to any new minor or major version upgrades for 12 months. With the monthly subscription model, you have access to the latest version of the product for as long as you keep the subscription active.

[image courtesy of Retrospect]

 

Thoughts And Further Reading

Retrospect was acquired by StorCentric in June 2019 after bouncing around a few different owners over the years. It’s been around for a long time, and has a rich history of delivering data protection solutions for small business and “prosumer” markets. I have reasonably fond memories of Retrospect from the time when it was shipped with Maxtor OneTouch external hard drives. Platform support is robust, with protection options available across Windows, macOS and some Linux, and the pricing is competitive. Retrospect is also benefitting from joining the StorCentric family, and I’m looking forward to hearing about more product integrations as time goes on.

Why would I cover a data protection product that isn’t squarely targeted at the enterprise or cloud market? Because I’m interested in data protection solutions across all areas of IT. I think the small business and home market is particularly under-represented when it comes to easy to deploy and run solutions. There is a growing market for cloud-based solutions, but simple local protection options still seem to be pretty rare. The number of people I talk to who are just manually copying data from one spot to another is pretty crazy. Why is it so hard to get good backup and recovery happening on endpoints? It shouldn’t be. You could argue that, with the advent of SaaS services and cloud-based storage solutions, the requirement to protect endpoints the way we used to has changed. But local protection options still make it a whole lot quicker and easier to recover.

If you’re in the market for a solution that is relatively simple to operate, has solid support for endpoint operating systems and workloads, and is competitively priced, then I think Retrospect is worth evaluating. You can read the announcement here.

Random Short Take #31

Welcome to Random Short Take #31. A lot of good players have worn 31 in the NBA. You’d think I’d call this the Reggie edition (and I appreciate him more after watching Winning Time), but this one belongs to Brent Barry. This may be related to some recency bias I have, based on the fact that Brent is a commentator in NBA 2K19, but I digress …

  • Late last year I wrote about Scale Computing’s big bet on a small form factor. Scale Computing recently announced that Jerry’s Foods is using the HE150 solution for in-store computing.
  • I find Plex to be a pretty rock solid application experience, and most of the problems I’ve had with it have been client-related. I recently had a problem with a server update that borked my installation though, and had to roll back. Here’s the quick and dirty way to do that on macOS.
  • Here are 7 contentious thoughts on data protection from Preston. I think there are some great ideas here and I recommend taking the time to read this article.
  • I recently had the chance to speak with Michael Jack from Datadobi about the company’s announcement about its new DIY Starter Pack for NAS migrations. Whilst it seems that the professional services market for NAS migrations has diminished over the last few years, there’s still plenty of data out there that needs to be moved from one box to another. Robocopy and rsync aren’t always the best option when you need to move this much data around.
  • There are a bunch of things that people need to learn to do operations well. A lot of them are learnt the hard way. This is a great list from Jan Schaumann.
  • Analyst firms are sometimes misunderstood. My friend Enrico Signoretti has been working at GigaOm for a little while now, and I really enjoyed this article on the thinking behind the GigaOm Radar.
  • Nexsan recently announced some enhancements to its “BEAST” storage platforms. You can read more on that here.
  • Alastair isn’t just a great writer and moustache aficionado, he’s also a trainer across a number of IT disciplines, including AWS. He recently posted this useful article on what AWS newcomers can expect when it comes to managing EC2 instances.

VMware – vExpert 2020

I’m very happy to have been listed as a vExpert for 2020. This is the eighth time that they’ve forgotten to delete my name from the list (I’m like Rick Astley with that joke). Read about it here, and more news about this year’s programme is coming shortly. Thanks again to Corey Romero and the rest of the VMware Social Media & Community Team for making this kind of thing happen. And thanks also to the vExpert community for being such a great community to be part of. There are now 1730 of us in over 40 countries. I think that’s pretty cool.

Storage Field Day 19 – Wrap-up and Link-o-rama

Disclaimer: I recently attended Storage Field Day 19.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

This is a quick post to say thanks once again to Stephen and Ben, and the presenters at Storage Field Day 19. I had a super fun and educational time. For easy reference, here’s a list of the posts I did covering the events (they may not match the order of the presentations).

Storage Field Day – I’ll Be at Storage Field Day 19

Storage Field Day 19 – (Fairly) Full Disclosure

Tiger Technology Is Bridging The Gap

Western Digital, Composable Infrastructure, Hyperscalers, And You

Infrascale Protects Your Infrastructure At Scale

MinIO – Not Your Father’s Object Storage Platform

Dell EMC Isilon – Cloudy With A Chance Of Scale Out

NetApp And The StorageGRID Evolution

Komprise – Non-Disruptive Data Management

Stellus Is Doing Something With All That Machine Data

Dell EMC PowerOne – Not V(x)block 2.0

WekaIO And A Fresh Approach

Dell EMC, DevOps, And The World Of Infrastructure Automation

Also, here are a number of links to posts by my fellow delegates (in no particular order). They’re all very smart people, and you should check out their stuff, particularly if you haven’t before. I’ll attempt to keep this updated as more posts are published. But if it gets stale, the Storage Field Day 19 landing page will have updated links.

 

Becky Elliott (@BeckyLElliott)

SFD19: No Komprise on Knowing Thy Data

SFD19: DellEMC Does DevOps

 

Chin-Fah Heoh (@StorageGaga)

Hadoop is truly dead – LOTR version

Zoned Technologies With Western Digital

Is General Purpose Object Storage Disenfranchised?

Tiger Bridge extending NTFS to the cloud

Open Source and Open Standards open the Future

Komprise is a Winner

Rebooting Infrascale

DellEMC Project Nautilus Re-imagine Storage for Streams

Paradigm shift of Dev to Storage Ops

StorageGRID gets gritty

Dell EMC Isilon is an Emmy winner!

 

Chris M Evans (@ChrisMEvans)

Storage Field Day 19 – Vendor Previews

Storage Management and DevOps – Architecting IT

Stellus delivers scale-out storage with NVMe & KV tech – Architecting IT

Can Infrascale Compete in the Enterprise Backup Market?

 

Ray Lucchesi (@RayLucchesi)

097: GreyBeards talk open source S3 object store with AB Periasamy, CEO MinIO

Gaming is driving storage innovation at WDC

 

Enrico Signoretti (@ESignoretti)

Storage Field Day 19 RoundUp

Tiers, Tiers, and More Storage Tiers

The Hard Disk is Dead! (But Only in Your Datacenter)

Dell EMC PowerOne is Next-Gen Converged Infrastructure

Voices in Data Storage – Episode 35: A Conversation with Krishna Subramanian of Komprise

 

Gina Rosenthal (@GMinks)

Storage Field Day 19: Getting Back to My Roots

Is storage still relevant?

Tiger Technology Brings the Cloud to You

Taming Unstructured Data with Dell EMC Isilon

Project Nautilus emerged as Dell’s Streaming Data Platform

 

Joey D’Antoni (@JDAnton)

Storage Field Day 19–Current State of the Storage Industry #SFD19

Storage Field Day 19–Western Digital #SFD19

Storage Field Day 19 MinIO #SFD19

 

Keiran Shelden (@Keiran_Shelden)

California, Show your teeth… Storage Field Day 19

Western Digital Presents at SFD19

 

Ruairi McBride (@McBride_Ruairi)

 

Arjan Timmerman (@ArjanTim)

TECHunplugged at Storage Field Day 19

TECHunplugged VideoCast SFD19 Part 1

Preview Storage Field Day 19 – Day 1

 

Vuong Pham (@Digital_KungFu)

 

[photo courtesy of Stephen Foskett]

WekaIO And A Fresh Approach

Disclaimer: I recently attended Storage Field Day 19.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

WekaIO recently presented at Storage Field Day 19. You can see videos of their presentation here, and download my rough notes from here.

 

More Data And New Architectures

Liran Zvibel (Co-founder and CEO) spent some time talking about the explosion in data storage requirements over the next 4 – 5 years, with most of that growth expected to come in the form of unstructured data. The problem with today’s storage systems, he suggested, is that storage is broken into “Islands of Compromise” categories, each with its own leader. What does that mean exactly? DAS and SAN cannot share data easily, and the performance of a number of NAS and Object architectures isn’t great.

A New Storage Category

WekaIO is positioning itself in a new storage category. One that delivers:

  • The highest performance for any workload
  • Complete data shareability
  • Cloud native, hybrid cloud support
  • Full enterprise features
  • Simple management

Unique Product Differentiation

So what is it that sets WekaIO apart from the rest of the storage industry? Zvibel listed a number of differentiators, including:

  • Only POSIX namespace that scales to exabytes of capacity and trillions of files
  • Only networked file system that is faster than local storage
    • Massively parallel
    • Lowest latency
  • Snap to object
    • Unique blend of All-Flash and Object storage for instant backup to cloud storage (no backup software required)
  • Cloud burst from on-premises to public cloud
    • Fully hybrid cloud enabled with highest performance
  • End-to-end data encryption with no performance degradation
    • Critical for modern workloads and compliance

[image courtesy of Barbara Murphy]

 

Customer Examples

This all sounds great, but where is WekaIO really being used effectively? Barbara Murphy spent some time talking with the delegates about a number of customer examples across the following market verticals.

Life sciences

  • Genomics sequencing and analytics
  • Drug discovery
  • Microscopy

Deep Learning

  • Machine Learning / Artificial Intelligence
  • Real-time analytics
  • IoT

 

Thoughts and Further Reading

I’ve written enthusiastically about WekaIO before. It’s easy to get caught up in some of the hype that seems to go hand in hand with WekaIO presentations. But WekaIO has a lot of data to back up its claims, and it’s taken an interesting approach to solving traditional storage problems in a non-traditional fashion. I like that there’s a strong cloud story there, as well as the potential to leverage the latest hardware advancements to deliver the performance companies need.

The analysts and storage vendors drone on and on about the explosion in data growth over the coming years, but it’s a real problem. Our workload challenges are changing as well, and it seems like a fresh approach is needed to some of these challenges. The scale of the data that needs to be crunched doesn’t always mean that DAS is a good option. You’re more likely to see these kinds of challenges show up in the science and technology industries. And WekaIO seems to be well-positioned to meet these challenges, whether it’s in public cloud or on-premises. It strikes me that WekaIO’s focus on performance and resilience, along with a robust software-defined architecture, has it in a good position to tackle the types of workload problems we’re seeing at the edge and in AI / ML focused environments. I’m really looking forward to seeing what comes next for WekaIO.

Dell EMC PowerOne – Not V(x)block 2.0

Disclaimer: I recently attended Storage Field Day 19.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Dell EMC recently presented at Storage Field Day 19. You can see videos of the presentation here, and download my rough notes from here.

 

Not VxBlock 2.0?

Dell EMC describes PowerOne as “all-in-one autonomous infrastructure”. It’s converged infrastructure, meaning your storage, compute, and networking are all built into the rack. It’s a transportation-tested package and fully assembled when it ships. When it arrives, you can plug it in, fire up the API, and be up and going “within a few hours”.
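I haven’t seen the PowerOne API documentation, so treat the following as a purely hypothetical illustration of what that day-one, API-driven experience might look like. The endpoint, credentials, and payload are all made up – the point is the declarative shape of the request, not Dell EMC’s actual schema:

```python
import requests

# Entirely hypothetical endpoint and payload - consult the real
# PowerOne API documentation for the actual schema.
CONTROLLER = "https://powerone-controller.example.com/api/v1"

session = requests.Session()
session.verify = False  # day-one appliance certs are often self-signed
session.auth = ("admin", "changeme")

# Declare the outcome you want; the controller works out the steps.
cluster_request = {
    "name": "prod-cluster-01",
    "computeNodes": 8,
    "storage": {"capacityTB": 100, "serviceLevel": "diamond"},
}

resp = session.post(f"{CONTROLLER}/clusters", json=cluster_request, timeout=30)
resp.raise_for_status()
print("Provisioning job accepted:", resp.json())
```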

Trey Layton is no stranger to Vblock / VxBlock, and he was very clear with the delegates that PowerOne is not replacing VxBlock. After all, VxBlock lets them sell Dell EMC external storage into Cisco UCS customers.

 

So What Is It Then?

It’s a rack (or racks) full of gear, all of which is now Dell EMC gear. It’s highly automated, and has some proper management around it too.

[image courtesy of Dell EMC]

So what’s in those racks?

  • PowerMax Storage – World’s “fastest” storage array
  • PowerEdge MX – industry leading compute
  • PowerSwitch – Declarative system fabric
  • PowerOne Controller – API-powered automation engine

PowerMax Storage

  • Zero-touch SAN config
  • Discovery / inventory of storage resources
  • Dynamically create storage volumes for clusters
  • Intelligent load balancing

PowerEdge MX Compute

  • Dynamically provision compute resources into clusters
  • Automated chassis expansion
  • Telemetry aggregation
  • Kinetic infrastructure

System Fabrics

  • Switches are 32Gbps
  • 98% reduction in network configuration steps
  • System fabric visibility and lifecycle management
  • Intent-based automated deployment and provision
  • PowerSwitch open networking

PowerOne Controller

  • Automates thousands of tasks
  • Powered by Kubernetes and Ansible
  • Delivers next-gen autonomous outcomes via robust API capabilities (a toy sketch of this declarative approach follows below)

From a scalability perspective, you can go to 275 nodes in a pod, and you can look after up to 32 pods (I think). The technical specifications are here.
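That “autonomous” language is essentially the declarative, Kubernetes-style pattern: you state the outcome you want, and a controller loop reconciles the actual state toward it. Here’s a toy illustration of the pattern – nothing to do with the actual PowerOne code, just the shape of the idea:

```python
import time

# Toy model of intent-based automation: declare desired state and let
# a controller loop converge the actual state toward it. Illustrative only.
desired = {"compute_nodes": 8, "volumes": 4}
actual = {"compute_nodes": 5, "volumes": 1}

def reconcile(desired: dict, actual: dict) -> None:
    for resource, want in desired.items():
        have = actual.get(resource, 0)
        if have < want:
            print(f"Provisioning {want - have} more {resource}...")
            actual[resource] = want  # stand-in for the real provisioning call
        elif have > want:
            print(f"Decommissioning {have - want} {resource}...")
            actual[resource] = want

while desired != actual:
    reconcile(desired, actual)
    time.sleep(1)  # a real controller would keep watching for drift
print("System converged on the declared state.")
```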

 

Thoughts and Further Reading

Converged infrastructure has always been an interesting architectural choice for the enterprise. When VCE first came into being 10+ years ago via Acadia, delivering consistent infrastructure experiences in the average enterprise was a time-consuming endeavour and not a lot of fun. It was also hard to do well. VCE changed a lot of that with Vblock, but you paid a premium. The reason you paid that premium was that VCE did a pretty decent job of putting together an architecture that was reliable and, more importantly, supportable by the vendor. It wasn’t just the IP behind this that made it successful though, it was the effort put into logistics and testing. And yes, a lot of that was built on the strength of spreadsheets and the blood, sweat and tears of the deployment engineers out in the field.

PowerOne feels like a very different beast in this regard. Dell EMC took us through a demo of the “unboxing” experience, and talked extensively about the lifecycle of the product. They also demonstrated many of the automation features included in the solution that weren’t always there with Vblock. I’ve been responsible for Vblock environments over the years, and a lot of the lifecycle management activities were very thoroughly documented, and extremely manual. PowerOne, on the other hand, doesn’t look like it relies extensively on documentation and spreadsheets to be managed effectively. But maybe that’s just because Trey and the team were able to demonstrate things so effectively.

So why would the average enterprise get tangled up in converged infrastructure nowadays? What with all the kids and their HCI solutions, and the public cloud, and the plethora of easy to consume infrastructure solutions available via competitive consumption models? Well, some enterprises don’t like relying on people within the organisation to deliver solutions for mission critical applications. These enterprises would rather leave that type of outcome in the hands of one trusted vendor. But they might still want that outcome to be hosted on-premises. Think of big financial institutions, and various government agencies looking after very important things. These are the kinds of customers that PowerOne is well suited to.

That doesn’t mean that what Dell EMC is doing with PowerOne isn’t innovative. In fact, I think what they’ve managed to do, within the confines of converged infrastructure, is very innovative. This type of approach isn’t for everyone though. There’ll always be organisations that can do it faster and cheaper themselves, but they may or may not have as much at stake as some of the other guys. I’m curious to see how much uptake this particular solution gets in the market, particularly in environments where HCI and public cloud adoption is on the rise. It strikes me that Dell EMC has turned a corner in terms of system integration too, as the out of the box experience looks really well thought out compared to some of its previous attempts at integration.