Random Short Take #65

Welcome to Random Short Take #65. Last one for the year, I think.

  • First up, this handy article from Steve Onofaro on replacing certificates in VMware Cloud Director 10.3.1.
  • Speaking of cloud, I enjoyed this article from Chris M. Evans on the AWS “wobble” (as he puts it) in us-east-1 recently. Speaking of articles Chris has written recently, check out his coverage of the Pure Storage FlashArray//XL announcement.
  • Speaking of Pure Storage, my friend Jon wrote about his experience with ActiveCluster in the field recently. You can find that here. I always find these articles to be invaluable, if only because they demonstrate what’s happening out there in the real world.
  • Want some press releases? Here’s one from Datadobi announcing it has released new Starter Packs for DobiMigrate ranging from 1PB up to 7PB.
  • Data protection isn’t just something you do at the office – it’s a problem for home too. I’m always interested to hear how other people tackle the problem. This article from Jeff Geerling (and the associated documentation on Github) was great.
  • John Nicholson is a smart guy, so I think you should check out his articles on benchmarking (and what folks are getting wrong). At the moment this is a 2-part series, but I suspect that could be expanded. You can find Part 1 here and Part 2 here. He makes a great point that benchmarking can be valuable, but benchmarking like it’s 1999 may not be the best thing to do (I’m paraphrasing).
  • Speaking of smart people, Tom Andry put together a great article recently on dispelling myths around subwoofers. If you or a loved one are getting worked up about subwoofers, check out this article.
  • I had people ask me if I was doing a predictions post this year. I’m not crazy enough to do that, but Mellor is. You can read his article here.

In some personal news (and it’s not LinkedIn official yet) I recently quit my job and will be taking up a new role in the new year. I’m not shutting the blog down, but you might see a bit of a change in the content. I can’t see myself stopping these articles, but it’s likely there’ll be fewer of the data protection how-to articles published. But we’ll see. In any case, wherever you are, stay safe, happy holidays, and see you on the line next year.

Pure Storage – A Few Thoughts on Pure as-a-Service

I caught up with Matt Oostveen from Pure Storage in August to talk about Pure as-a-Service. It’s been a while since any announcements were made, but I’ve been meaning to write up a few notes on the offering and what I thought of it. So here we are.

 

What Is It?

Oostveen describes Pure Storage as a “software company that sells storage arrays”. The focus at Pure has always been on giving the customer an exceptional experience, which invariably means controlling the stack from end to end. To that end, Pure as-a-Service could be described more as a feat of financial, rather than technical, engineering. You’re “billed on actual consumption, with minimum commitments starting at 50 TiB”. Also of note is the burst capability, which gives you a clear understanding of both the floor and the ceiling of your consumption. You can choose what kind of storage you want – block, file, or object – and you get access to orchestration tools to manage everything. You also get access to Evergreen Storage, so your hardware stays up to date, and it’s all available in four easy-to-understand tiers of storage.

 

Why Is It?

In this instance, I think the what isn’t as interesting as the why. Oostveen and I spoke about the need for a true utility model to enable companies to deliver on the promise of digital transformation. He noted that many of the big transactions that were occurring were CFO to CFO engagements, rather than the CTO deciding on the path forward for applications and infrastructure. In short, price is always a driver, and simplicity is also very important. Pure has worked to ensure that the offering delivers on both of those fronts.

 

Thoughts

IT is complicated nowadays. You’re dealing with cloud, SaaS, micro-SaaS, distributed, and personalised IT. You’re invariably trying to accommodate the role of data in your organisation, and you’re no doubt facing challenges with getting applications running not just in your core, but also in the cloud and at the edge. We talk a lot about how infrastructure can be used to solve a number of the challenges facing organisations, but I have no doubt that if most business leaders never had to deal with infrastructure and the challenges it presents they’d be over the moon. Offerings like Pure as-a-Service go some of the way to elevating that conversation from speeds and feeds to something more aligned with business outcomes. It strikes me that these kinds of offerings will have great appeal both to the folks in charge of finance inside big enterprises and, potentially, to the technical folks trying to keep the lights on whilst a budget decrease gets lobbed at them every year.

I’ve written about Pure enthusiastically in the past because I think the company has a great grasp of some of the challenges that many organisations are facing nowadays. I think that the expansion into other parts of the cloud ecosystem, combined with a willingness to offer flexible consumption models for solutions that were traditionally offered as lease or buy is great. But I don’t think this makes sense without everything that Pure has done previously as a company, from the focus on getting the most out of All-Flash hardware, to a relentless drive for customer satisfaction, to the willingness to take a chance on solutions that are a little outside the traditional purview of a storage array company.

As I’ve said many times before, IT can be hard. There are a lot of things that you need to consider when evaluating the most suitable platform for your applications. Pure Storage isn’t the only game in town, but in terms of storage vendors offering flexible and powerful storage solutions across a variety of topologies, it seems to be a pretty compelling one, and definitely worth checking out.

Random Short Take #63

Welcome to Random Short Take #63. It’s Friday morning, and the weekend is in sight.

  • I really enjoyed this article from Glenn K. Lockwood about how just looking for an IOPS figure can be a silly thing to do, particularly with HPC workloads. “If there’s one constant in HPC, it’s that everyone hates I/O.  And there’s a good reason: it’s a waste of time because every second you wait for I/O to complete is a second you aren’t doing the math that led you to use a supercomputer in the first place.”
  • Speaking of things that are a bit silly, it seems like someone thought getting on the front foot with some competitive marketing videos was a good idea. It rarely is though.
  • Switching gears a little, you may have been messing about with Tanzu Community Edition and asking yourself how you could SSH to a node. Ask no more, as Mark has your answer.
  • Speaking of storage companies that are pretty pleased with how things are going, Weka has put out this press release on its growth.
  • Still on press releases, Imply had some good news to share at Druid Summit recently.
  • Intrigued by Portworx and want to know more? Check out these two blog posts on configuring multi-cloud application portability (here and here) – they are excellent. Hat tip to my friend Mike at Pure Storage for the links.
  • I loved this article on project heroics from Chris Wahl. I’ve got a lot more to say about this and the impact this behaviour can have on staff but some of it is best not committed to print at this stage.
  • Finally, I replaced one of my receivers recently and cursed myself once again for not using banana plugs. They just make things a bit easier to deal with.

Pure Storage – Pure1 Makes Life Easy

Disclaimer: I recently attended Storage Field Day 22.  Some expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Pure Storage recently presented at Storage Field Day 22. You can see videos of the presentation here, and download my rough notes from here.

 

What You Need

If you’ve spent any time working with storage infrastructure, you’ll know that it can be a pain to manage and operate efficiently. Pure1 has always been a great tool for managing your Pure Storage fleet, and Pure has now pushed that idea of collecting and analysing a whole bunch of telemetry data even further. So what is it you need?

Management and Observation

  • Setup needs to be easy to reduce risk and accelerate delivery
  • Alerting needs to be predictive to prevent downtime
  • Management has to be done anywhere to be responsive

Planning and Upgrades

  • Determining when to buy requires forecasting to manage costs
  • Workload optimisations should be intuitive to help keep users happy
  • Upgrades have to be non-disruptive to prevent downtime

Purchasing and Scaling

  • Resources should be available as a service for on-demand scaling
  • Data service purchasing should be self-service for speed and simplicity
  • Hybrid cloud should be available from one vendor, in one place

 

Pure1 Has It

Sounds great, so how do you get that with Pure1? Pure breaks it down into three key areas:

  • Optimise
  • Recommend
  • Empower

Optimise

Reduce the time you spend on management and take the guesswork out of support. With aggregated fleet / group metrics, you get:

  • Capacity utilisation
  • Performance
  • Data reduction savings
  • Alerts and support cases

[image courtesy of Pure Storage]

Recommend

Every organisation wants to improve the speed and accuracy of resource planning while enhancing user experience. Pure1 provides the ability to use “What-If” modelling to stay ahead of demands.

  • Select application to be added
  • Provide sizing details
  • Get recommendations based on Pure best practices and AI analysis of our telemetry databases

[image courtesy of Pure Storage]

The process is alarmingly simple:

  • Pick a Workload Type – Choose a preset application type from a list of the most deployed enterprise applications, including SAP HANA, Microsoft SQL, and more.
  • Set Application Parameter – Define size of the deployment. Attributes are auto-populated based on Pure1 analytics across its global database. Adjust as needed for your environment.
  • Simulate Deployment – Identify where you want to deploy the application data. Pure1 analyses the impact on performance and capacity.
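The three steps above can be sketched as a simple headroom calculation. To be clear, this is a hypothetical illustration only – the workload presets, numbers, and function names here are mine, not Pure1’s actual telemetry model or API:

```python
# Hypothetical sketch of the "What-If" sizing flow described above.
# The presets, numbers, and function names are illustrative only --
# they are not Pure1's actual telemetry model or API.

# Step 1: pick a workload type from a list of common enterprise presets.
WORKLOAD_PRESETS = {
    "sql-server": {"capacity_tib_per_unit": 2.0, "iops_per_unit": 5000},
    "sap-hana": {"capacity_tib_per_unit": 4.0, "iops_per_unit": 8000},
}

def simulate_deployment(workload, units, array_free_tib, array_free_iops):
    """Steps 2 and 3: size the deployment, then check its impact on the
    target array's remaining capacity and performance headroom."""
    preset = WORKLOAD_PRESETS[workload]
    needed_tib = preset["capacity_tib_per_unit"] * units
    needed_iops = preset["iops_per_unit"] * units
    return {
        "needed_tib": needed_tib,
        "needed_iops": needed_iops,
        "fits": needed_tib <= array_free_tib and needed_iops <= array_free_iops,
    }

result = simulate_deployment("sql-server", units=10,
                             array_free_tib=50, array_free_iops=80000)
print(result)  # {'needed_tib': 20.0, 'needed_iops': 50000, 'fits': True}
```

The value of the real thing, of course, is that the attributes are auto-populated from analytics across Pure1’s global database rather than guessed at like this.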

Empower

Build your hybrid-cloud infrastructure your way and on demand without the headaches of legacy purchasing. Pure has a great story to tell when it comes to Pure as-a-Service and OpEx acquisition models.

 

Thoughts and Further Reading

In a previous job, I was a Pure1 user and found the overall experience to be tremendous. Much has changed with Pure1 since I first installed it on my phone, and it’s my opinion that the integration and usefulness of the service have both increased exponentially. The folks at Pure have always understood that it’s not enough to deliver high-performance storage solutions built on All-Flash. This is considered table-stakes nowadays. Instead, Pure has done a great job of focussing on the management and operation of these high-performance storage solutions to ensure that users get what they need from the system. I sound like a broken record, I’m sure, but it’s this relentless focus on the customer experience that I think sets Pure apart from many of its competitors.

Most of the tier 1 storage vendors have had a chop at delivering management and operations systems that make extensive use of field telemetry data and support knowledge to deliver proactive support for customers. Everyone is talking about how they use advanced analytics, AI / ML, and so on to deliver a great support experience. But I think it’s the other parts of the equation that really brings it together nicely for Pure: the “evergreen” hardware lifecycle options, the consumption flexibility, and the focus on constantly improving the day 2 operations experience that’s required when managing storage at scale in the enterprise. Add to that the willingness to embrace hybrid cloud technologies, and the expanding product portfolio, and I’m looking forward to seeing what’s next for Pure. Finally, shout out to Stan Yanitskiy for jumping in at the last minute to present when his colleague had a comms issue – I think the video shows that he handled it like a real pro.

Ransomware? More Like Ransom Everywhere …

Stupid title, but ransomware has been in the news quite a bit recently. I’ve had some tabs open in my browser for over twelve months with articles about ransomware that I found interesting. I thought it was time to share them and get this post out there. This isn’t comprehensive by any stretch, but rather it’s a list of a few things to look at when looking into anti-ransomware solutions, particularly for NAS environments.

 

It Kicked Him Right In The NAS

The way I see it (and I’m really not the world’s strongest security person), there are (at least) three approaches to NAS and ransomware concerns.

The Endpoint

This seems to be where most companies operate – addressing ransomware as it enters the organisation via the end users. There are a bunch of solutions out there that are designed to protect humans from themselves. But this approach doesn’t always help with alternative attack vectors and it’s only as good as the update processes you have in place to keep those endpoints updated. I’ve worked in a few shops where endpoint protection solutions were deployed and then inadvertently clobbered by system updates or users with too many privileges. The end result was that the systems didn’t do what they were meant to and there was much angst.

The NAS Itself

There are things you can do with NetApp solutions, for example, that are kind of interesting. Something like Stealthbits looks neat, and Varonis also uses FPolicy to get a similar result. Your mileage will vary with some of these solutions, and, again, it comes down to the ability to effectively ensure that these systems are doing what they say they will, when they will.

Data Protection

A number of the data protection vendors are talking about their ability to recover quickly from ransomware attacks. The capabilities vary, as they always do, but most of them have a solid handle on quick recovery once an infection is discovered. They can even help you discover that infection by analysing patterns in your data protection activities. For example, if a whole bunch of data changes overnight, it’s likely that you have a bit of a problem. But, some of the effectiveness of these solutions is limited by the frequency of data protection activity, and whether anyone is reading the alerts. The challenge here is that it’s a reactive approach, rather than something preventative. That said, companies like Rubrik are working hard to enhance its Radar capability into something a whole lot more interesting.
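The change-rate heuristic described above – a whole bunch of data changing overnight being a red flag – can be sketched in a few lines. The thresholds here are illustrative, not anything a particular vendor ships:

```python
# A minimal sketch of the change-rate heuristic described above: if the
# amount of changed data in last night's backup is far above the recent
# norm, flag it for a human to investigate. Thresholds are illustrative.
from statistics import mean, stdev

def looks_like_ransomware(daily_changed_gib, sigma=3.0):
    """Flag the latest backup if its changed-data volume is more than
    `sigma` standard deviations above the historical mean."""
    history, latest = daily_changed_gib[:-1], daily_changed_gib[-1]
    mu, sd = mean(history), stdev(history)
    return latest > mu + sigma * max(sd, 1e-9)

# A week of normal incremental backups, then a massive overnight change.
changes = [40, 42, 38, 45, 41, 39, 900]
print(looks_like_ransomware(changes))  # True
```

As noted above, this only helps if the backups run frequently enough and someone is actually reading the alerts.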

Other Things

Other things that can help limit your exposure to ransomware include adopting generally robust security practices across the board, monitoring all of your systems, and talking to your users about not clicking on unknown links in emails. Some of these things are easier to do than others.

 

Thoughts

I don’t think any of these solutions provide everything you need in isolation, but the challenge is going to be coming up with something that is supportable and, potentially, affordable. It would also be great if it works too. Ransomware is a problem, and becoming a bigger problem every day. I don’t want to sound like I’m selling you insurance, but it’s almost not a question of if, but when. But paying attention to some of the above points will help you on your way. Of course, sometimes Sod’s Law applies, and things will go badly for you no matter how well you think you’ve designed your systems. At that point, it’s going to be really important that you’ve set up your data protection systems correctly, otherwise you’re in for a tough time. Remember, it’s always worth thinking about what your data is worth to you when you’re evaluating the relative value of security and data protection solutions. This article from Chin-Fah had some interesting insights into the problem. And this article from Cohesity outlined a comprehensive approach to holistic cyber security. This article from Andrew over at Pure Storage did a great job of outlining some of the challenges faced by organisations when rolling out these systems. This list of NIST ransomware resources from Melissa is great. And if you’re looking for a useful resource on ransomware from VMware’s perspective, check out this site.

Random Short Take #56

Welcome to Random Short Take #56. Only three players have worn 56 in the NBA. I may need to come up with a new bit of trivia. Let’s get random.

  • Are we nearing the end of blade servers? I’d hoped the answer was yes, but it’s not that simple, sadly. It’s not that I hate them, exactly. I bought blade servers from Dell when they first sold them. But they can present challenges.
  • 22dot6 emerged from stealth mode recently. I had the opportunity to talk to them and I’ll post something soon about that. In the meantime, this post from Mellor covers it pretty well.
  • It may be a Northern Hemisphere reference that I don’t quite understand, but Retrospect is running a “Dads and Grads” promotion offering 90 days of free backup subscriptions. Worth checking out if you don’t have something in place to protect your desktop.
  • Running VMware Cloud Foundation and want to stretch your vSAN cluster across two sites? Tony has you covered.
  • The site name in VMware Cloud Director can look a bit ugly. Steve O gives you the skinny on how to change it.
  • Pure//Accelerate happened recently / is still happening, and there was a bit of news from the event, including the new and improved Pure1 Digital Experience. As a former Pure1 user I can say this was a big part of the reason why I liked using Pure Storage.
  • Speaking of press releases, this one from PDI and its investment intentions caught my eye. It’s always good to see companies willing to spend a bit of cash to make progress.
  • I stumbled across Oxide on Twitter and fell for the aesthetic and design principles. Then I read some of the articles on the blog and got even more interested. Worth checking out. And I’ll be keen to see just how it goes for the company.

*Bonus Round*

I was recently on the Restore it All podcast with W. Curtis Preston and Prasanna Malaiyandi. It was a lot of fun as always, despite the fact that we talked about something that’s a pretty scary subject (data (centre) loss). No, I’m not a DC manager in real life, but I do have responsibility for what goes into our DC so I sort of am. Don’t forget there’s a discount code for the book in the podcast too.

Random Short Take #55

Welcome to Random Short Take #55. A few players have worn 55 in the NBA. I wore some Mutombo sneakers in high school, and I enjoy watching Duncan Robinson light it up for the Heat. My favourite ever to wear 55 was “White Chocolate” Jason Williams. Let’s get random.

  • This article from my friend Max around Intel Optane and VMware Cloud Foundation provided some excellent insights.
  • Speaking of friends writing about VMware Cloud Foundation, this first part of a 4-part series from Vaughn makes a compelling case for VCF on FlashStack. Sure, he gets paid to say nice things about the company he works for, but there is plenty of info in here that makes a lot of sense if you’re evaluating which hardware platform pairs well with VCF.
  • Speaking of VMware, if you’re a VCD shop using NSX-V, it’s time to move on to NSX-T. This article from VMware has the skinny.
  • You want an open source version of BMC? Fine, you got it. Who would have thought securing BMC would be a thing? (Yes, I know it should be)
  • Stuff happens, hard drives fail. Backblaze recently published its drive stats report for Q1. You can read about that here.
  • Speaking of drives, check out this article from Netflix on its Netflix Drive product. I find it amusing that I get more value from Netflix’s tech blog than I do its streaming service, particularly when one is free.
  • The people in my office laugh nervously when I say I hate being in meetings where people feel the need to whiteboard. It’s not that I think whiteboard sessions can’t be valuable, but oftentimes the information on those whiteboards should be documented somewhere and easy to bring up on a screen. But if you find yourself in a lot of meetings and need to start drawing pictures about new concepts or whatever, this article might be of some use.
  • Speaking of office things not directly related to tech, this article from Preston de Guise on interruptions was typically insightful. I loved the “Got a minute?” reference too.

 

Pure Storage Acquires Portworx

Pure Storage announced its intention to acquire Portworx in mid-September. Around that time I had the opportunity to talk about the news with Goutham Rao (Portworx CTO) and Matt Kixmoeller (Pure Storage VP, Strategy) and thought I’d share some brief thoughts here.

 

The News

Pure and Portworx have entered an agreement that will see Pure pay approximately $370M US in cash. Portworx will form a new Cloud Native Business Unit inside Pure to be led by Portworx CEO Murli Thirumale. All Portworx founders are joining Pure, with Pure investing significantly to grow the new business unit. According to Pure, “Portworx software to continue as-is, supporting deployments in any cloud and on-premises, and on any bare metal, VM, or array-based storage”. It was also noted that “Portworx solutions to be integrated with Pure yet maintain a commitment to an open ecosystem”.

About Portworx

Described as the “leading Kubernetes data services platform”, Portworx was founded in 2014 in Los Altos, CA. It runs a 100% software, subscription, and cloud business model with development and support sites in California, India, and Eastern Europe. The product has been GA since 2017, and is used by some of the largest enterprise and Cloud / SaaS companies globally.

 

What’s A Portworx?

The idea behind Portworx is that it gives you data services for any application, on any Kubernetes distribution, running on any cloud, any infrastructure, and at any stage of the application lifecycle. To that end, it’s broken up into a bunch of different components, and runs in the K8s control plane adjacent to the applications.

PX-Store

  • Software-defined storage layer that automates container storage for developers and admins
  • Consistent storage APIs: cloud, bare metal, or arrays

PX-Migrate

  • Easily move applications between clusters
  • Enables hybrid cloud and multi-cloud mobility

PX-Backup

  • Application-consistent backup for cloud native apps with all k8s artefacts and state
  • Backup to any cloud or on-premises object storage

PX-Secure

  • Implement consistent encryption and security policies across clouds
  • Enable multi-tenancy with access controls

PX-DR

  • Sync and async replication between Availability Zones and regions
  • Zero RPO active / active for high resiliency

PX-Autopilot

  • GitOps-driven automation makes it easier for non-storage experts to deploy stateful applications; it monitors everything about an application and reacts to prevent problems before they happen
  • Auto-scale storage as your app grows to reduce costs

 

How It Fits Together

When you bring Portworx into the Pure Storage picture, you start to see how well it fits with the existing portfolio. In the image below you’ll also see support for the standard Container Storage Interface (CSI), allowing it to work with other vendors’ storage.

[image courtesy of Pure Storage]

Also worth noting is that PX-Essentials remains free forever for workloads under 5TB and 5 nodes.

 

Thoughts and Further Reading

I think this is a great move by Pure, mainly because it lends them a whole lot more credibility with the DevOps folks. Pure was starting to make inroads with Pure Storage Orchestrator, and I think this move will strengthen that story. Giving Portworx access to Pure’s salesforce globally is also going to broaden its visibility in the market and open up doors to markets that may have been difficult to get into previously.

Persistent storage for containers is heating up. As Rao pointed out in our discussion, “as container adoption grows, storage becomes a problem”. Portworx already had a good story to tell in this space, and Pure is no slouch when it comes to delivering advanced storage capabilities across a variety of platforms. I like that the messaging has been firmly based in maintaining the openness of the platform and I’m interested to see what other integrations happen as the two companies start working more closely together. If you’d like another perspective on the news, check out Chris Evans’s article here.

Pure Storage Announces Second Generation FlashArray//C with QLC

Disclaimer: I recently attended Storage Field Day 20.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Pure Storage recently announced its second generation FlashArray//C – an all-QLC offering delivering scads of capacity in a dense form factor. Pure Storage presented on this topic at Storage Field Day 20. You can see videos of the presentation here, and download my rough notes from here.

 

It’s A Box!

FlashArray//C burst onto the scene last year as an all-flash, capacity-optimised storage option for customers looking for storage that didn’t need to go quite as fast as the FlashArray//X, but that wasn’t built on spinning disk. Available capacities range from 1.3PB to 5.2PB (effective).

[image courtesy of Pure Storage]

There are a number of models available, with a variety of capacities and densities.

| Model | Capacity | Physical |
|-------|----------|----------|
| //C60-366 | Up to 1.3PB effective capacity**; 366TB raw capacity** | 3U; 1000–1240 watts (nominal–peak); 97.7 lbs (44.3 kg) fully loaded; 5.12” x 18.94” x 29.72” chassis |
| //C60-494 | Up to 1.9PB effective capacity**; 494TB raw capacity** | 3U; 1000–1240 watts (nominal–peak); 97.7 lbs (44.3 kg) fully loaded; 5.12” x 18.94” x 29.72” chassis |
| //C60-840 | Up to 3.2PB effective capacity**; 840TB raw capacity** | 6U; 1480–1760 watts (nominal–peak); 177.0 lbs (80.3 kg) fully loaded; 10.2” x 18.94” x 29.72” chassis |
| //C60-1186 | Up to 4.6PB effective capacity**; 1.2PB raw capacity** | 6U; 1480–1760 watts (nominal–peak); 185.4 lbs (84.1 kg) fully loaded; 15.35” x 18.94” x 29.72” chassis |
| //C60-1390 | Up to 5.2PB effective capacity**; 1.4PB raw capacity** | 9U; 1960–2280 watts (nominal–peak); 273.2 lbs (123.9 kg) fully loaded; 15.35” x 18.94” x 29.72” chassis |
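The effective figures above depend on data reduction over the raw capacity. As a quick back-of-the-envelope check (my arithmetic, assuming 1PB = 1000TB, not a Pure-published figure), the implied reduction ratio is fairly consistent across the range:

```python
# Back-of-the-envelope: the data-reduction ratio implied by each model's
# effective vs. raw capacity above (assuming 1PB = 1000TB). Real-world
# effective capacity depends on how reducible your data actually is.
models = {
    "//C60-366": (1300, 366),
    "//C60-494": (1900, 494),
    "//C60-840": (3200, 840),
    "//C60-1186": (4600, 1200),
    "//C60-1390": (5200, 1400),
}
for name, (effective_tb, raw_tb) in models.items():
    print(f"{name}: ~{effective_tb / raw_tb:.1f}:1 implied reduction")
# Each model lands in the region of roughly 3.5:1 to 3.9:1.
```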

Workloads

There are reasons why the FlashArray//C could be a really compelling option for workload consolidation. More and more workloads are “business critical” in terms of both performance and availability. There’s a requirement to do more with less, while battling complexity, and a strong desire to manage everything via a single pane of glass.

There are some other cool things you could use the //C for as well, including:

  • Automated policy-based VM tiering between //X and //C arrays;
  • DR using the //X at production and //C at your secondary site;
  • Consolidating multiple //X array workloads on a single //C array for test and dev; and
  • Consolidating multiple //X array snapshots to a single //C array for long-term retention.
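The first use case in the list above – policy-based tiering between //X and //C – boils down to a placement decision. This is a hypothetical sketch of that kind of policy; the function name and thresholds are mine, not Pure’s:

```python
# Illustrative sketch of a policy-based placement rule for the VM
# tiering use case above: hot, latency-sensitive VMs land on //X,
# while colder, capacity-heavy VMs land on //C. Thresholds are made up.
def place_vm(avg_iops, days_since_last_access):
    if avg_iops > 1000 and days_since_last_access < 7:
        return "FlashArray//X"   # performance tier
    return "FlashArray//C"       # capacity tier

vms = {
    "prod-sql01": (5000, 0),
    "archive-fs02": (50, 45),
}
for name, (iops, idle_days) in vms.items():
    print(name, "->", place_vm(iops, idle_days))
# prod-sql01 -> FlashArray//X
# archive-fs02 -> FlashArray//C
```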

 

It’s a QLC World, Sort Of

The second-generation FlashArray//C means you can potentially now have flash all through the data centre.

  • Apps and VMs – provision your high performance workloads to //X, lower performance / high capacity workloads to //C
  • Modern Data Protection & Disaster Recovery – on-premises production applications on //X efficiently replicated or backed up to //C at DR site
  • User File Shares – User file access with Purity 6.0 via SMB, NFS

QLC nonetheless presents significant engineering challenges with traditionally high write latency and low endurance (when compared to SLC, MLC, and TLC). Pure Storage’s answer to that problem has been to engineer the crap out of DirectFlash to get the required results. I’d do a bad job of explaining it, so instead I recommend you check out Pete Kirkpatrick’s explanation.

 

Thoughts And Further Reading

I covered the initial FlashArray//C announcement here and many of the reasons why this type of offering is appealing remain the same. The knock on Pure Storage in the last few years has been that, while FlashArray//X is nice and fast and a snap to use, it couldn’t provide the right kind of capacity (i.e. cheap and deep) that a number of price-sensitive punters wanted.  Sure, they could go and buy the FlashArray//X and then look to another vendor for a dense storage option, but the motivation to run with a number of storage vendors in smaller enterprise shops is normally fairly low. The folks in charge of technology in these environments are invariably stretched in terms of bodies on the floor to run the environments, and cash in the bank to procure those solutions. A single vendor solution normally makes sense for them (as opposed to some of the larger shops, or specialist organisations that really have very specific requirements that can only be serviced by particular solutions).

So now Pure Storage has the FlashArray//C, and you can get it with some decent density, some useful features (thanks in part to some new features in Purity 6), and integration with the things you know and like about Pure Storage, such as Pure1 and Evergreen storage. It seems like Pure Storage has done an awful lot of work to squeeze performance out of QLC whilst ensuring that the modules don’t need replacing every other week. There’s a lot to like about the evolving Pure Storage story, and I’m interested to see how they tie it all together as the portfolio continues to expand. You can read the press release here, access the data sheet here, and read Mellor’s take on the news here.

Pure Storage and Cohesity Announce Strategic Partnership and Pure FlashRecover

Pure Storage and Cohesity announced a strategic partnership and a new joint solution today. I had the opportunity to speak with Amy Fowler and Biswajit Mishra from Pure Storage, along with Anand Nadathur and Chris Wiborg from Cohesity, and thought I’d share my notes here.

 

Friends In The Market

The announcement comes in two parts, with the first being that Pure Storage and Cohesity are forming a strategic partnership. The idea behind this is that, together, the companies will deliver “industry-leading storage innovations from Pure Storage with modern, flash-optimised backup from Cohesity”.  There are plenty of things in common between the companies, including the fact that they’re both, as Wiborg puts it, “keenly focused on doing the right thing for the customer”.

 

Pure FlashRecover Powered By Cohesity

Partnerships are exciting and all, but what was of more interest was the Pure FlashRecover announcement. What is it exactly? It’s basically Cohesity DataProtect running on Cohesity-certified compute nodes (the whitebox gear you might be familiar with if you’ve bought Cohesity tin previously), using Pure’s FlashBlades as the storage backend.

[image courtesy of Pure Storage]

FlashRecover has a targeted general availability of Q4 CY2020 (October). It will be released in the US initially, with other regions to follow. From a go-to-market perspective, Pure will handle level 1 and level 2 support, with Cohesity support being engaged for escalations. Cohesity DataProtect will be added to the Pure price list, and Pure becomes a Cohesity Technology Partner.

 

Thoughts

My first thought when I heard about this was why would you? I’ve traditionally associated scalable data protection and secondary storage with slower, high-capacity appliances. But as we talked through the use cases, it started to make sense. FlashBlades by themselves aren’t super high capacity devices, but neither are the individual nodes in Cohesity appliances. String a few together and you have enough capacity to do data protection and fast recovery in a predictable fashion. FlashBlade supports 75 nodes (I think) [Edit: it scales up to 150x 52TB nodes. Thanks for the clarification from Andrew Miller] and up to 1PB of data in a single namespace. Throw in some of the capabilities that Cohesity DataProtect brings to the table and you’ve got an interesting solution. The knock on some of the next-generation data protection solutions has been that recovery can still be quite time-consuming. The use of all-flash takes away a lot of that pain, especially when coupled with a solution like FlashBlade that delivers some pretty decent parallelism in terms of getting data recovered back to production quickly.

An evolving use case for protection data is data reuse. For years, application owners have been stuck with fairly clunky ways of getting test data into environments to use with application development and testing. Solutions like FlashRecover provide a compelling story around protection data being made available for reuse, not just recovery. Another cool thing is that when you invest in FlashBlade, you’re not locking yourself into a particular silo, you can use the FlashBlade solution for other things too.

I don’t work with Pure Storage and Cohesity on a daily basis anymore, but in my previous role I had the opportunity to kick the tyres extensively with both the Cohesity DataProtect solution and the Pure Storage FlashBlade. I’m an advocate of both of these companies because of the great support I received, from pre-sales through to post-sales. They are relentlessly customer focused, and that really translates in both the technology and the field experience. I can’t speak highly enough of the engagement I’ve experienced with both companies, both as a blogger and as an end user.

FlashRecover isn’t going to be appropriate for every organisation. Most places, at the moment, can probably still get away with taking a little time to recover large amounts of data if required. But for industries where time is money, solutions like FlashRecover can absolutely make sense. If you’d like to know more, there’s a comprehensive blog post over at the Pure Storage website, and the solution brief can be found here.