Pure Storage – A Few Thoughts on Pure as-a-Service

I caught up with Matt Oostveen from Pure Storage in August to talk about Pure as-a-Service. It’s been a while since any announcements were made, but I’ve been meaning to write up a few notes on the offering and what I thought of it. So here we are.

 

What Is It?

Oostveen describes Pure Storage as a “software company that sells storage arrays”. The focus at Pure has always been on giving the customer an exceptional experience, which invariably means controlling the stack from end to end. To that end, Pure as-a-Service could be described as a feat of financial, rather than technical, engineering. You’re “billed on actual consumption, with minimum commitments starting at 50 TiB”. Also of note is the burst capability, which gives you a clear view of both the floor and the ceiling of your consumption. You can choose what kind of storage you want – block, file, or object – and you get access to orchestration tools to manage everything. You also get Evergreen Storage, so your hardware stays up to date, and the service is available in four easy-to-understand tiers.

 

Why Is It?

In this instance, I think the what isn’t as interesting as the why. Oostveen and I spoke about the need for a true utility model to enable companies to deliver on the promise of digital transformation. He noted that many of the big transactions that were occurring were CFO to CFO engagements, rather than the CTO deciding on the path forward for applications and infrastructure. In short, price is always a driver, and simplicity is also very important. Pure has worked to ensure that the offering delivers on both of those fronts.

 

Thoughts

IT is complicated nowadays. You’re dealing with cloud, SaaS, micro-SaaS, distributed, and personalised IT. You’re invariably trying to accommodate the role of data in your organisation, and you’re no doubt facing challenges with getting applications running not just in your core, but also in the cloud and at the edge. We talk a lot about how infrastructure can be used to solve a number of the challenges facing organisations, but I have no doubt that if most business leaders never had to deal with infrastructure and the associated challenges it presents they’d be over the moon. Offerings like Pure as-a-Service go some of the way to elevating that conversation from speeds and feeds to something more aligned with business outcomes. It strikes me that these kinds of offerings will have great appeal both to the folks in charge of finance inside big enterprises and to the technical folks trying to keep the lights on whilst a budget decrease gets lobbed at them every year.

I’ve written about Pure enthusiastically in the past because I think the company has a great grasp of some of the challenges that many organisations are facing nowadays. I think that the expansion into other parts of the cloud ecosystem, combined with a willingness to offer flexible consumption models for solutions that were traditionally offered as lease or buy is great. But I don’t think this makes sense without everything that Pure has done previously as a company, from the focus on getting the most out of All-Flash hardware, to a relentless drive for customer satisfaction, to the willingness to take a chance on solutions that are a little outside the traditional purview of a storage array company.

As I’ve said many times before, IT can be hard. There are a lot of things that you need to consider when evaluating the most suitable platform for your applications. Pure Storage isn’t the only game in town, but in terms of storage vendors offering flexible and powerful storage solutions across a variety of topologies, it seems to be a pretty compelling one, and definitely worth checking out.

Random Short Take #64

Welcome to Random Short Take #64. It’s the start of the last month of the year. We’re almost there.

  • Want to read an article that’s both funny and informative? Look no further than this beginner’s guide to subnetting. I did Elizabethan literature at uni, so it was good to get a reminder on Shakespeare’s involvement in IP addressing.
  • Continuing with the amusing articles, Chris Colotti published a video of outtakes from some Cohesity lightboard sessions that had me cracking up. It’s always nice when people don’t take themselves too seriously.
  • On a more serious note, data hoarding is a problem (I know this because I’ve been guilty of it), and this article from Preston outlines some of the reasons why it can be a bad thing for business.
  • Still on data protection, Howard Oakley looks at checking the integrity of Time Machine backups in this post. I’ve probably mentioned this a few times previously, but if you find macOS behaviour baffling at times, Howard likely has an article that can explain why you’re seeing what you’re seeing.
  • Zerto recently announced Zerto In-Cloud for AWS – you read more about that here. Zerto is really starting to put together a comprehensive suite of DR solutions. Worth checking out.
  • Still on press releases, Datadobi has announced new enhancements to DobiMigrate with 5.13. The company also recently validated Google Cloud Storage as an endpoint for its DobiProtect solution.
  • Leaseweb Global is also doing stuff with Google Cloud – you can read more about that here.
  • Finally, this article over at Blocks and Files on what constitutes a startup made for some interesting reading. Some companies truly are Peter Pans at this point, whilst others are holding on to the idea that they’re still in startup mode.

22dot6 Releases TASS Cloud Suite

22dot6 sprang from stealth in May 2021, and recently announced its TASS Cloud Suite. I had the opportunity to once again catch up with Diamond Lauffin about the announcement, and thought I’d share some thoughts here.

 

The Product

If you’re unfamiliar with the 22dot6 product, it’s basically a software- or hardware-based storage offering that delivers:

  • File and storage management
  • Enterprise-class data services
  • Data and systems profiling and analytics
  • Performance and scalability
  • Virtual, physical, and cloud capabilities, with NFS, SMB, and S3 mixed protocol support

According to Lauffin, it’s built on a scale-out, parallel architecture, and can deliver great pricing and performance per GiB.

Components

It’s Linux-based, and can leverage any bare-metal machine or VM. Metadata services live on scale-out, redundant nodes (VSR nodes), and data services are handled via single, clustered, or redundant nodes (DSX nodes).

[image courtesy of 22dot6]

TASS

The key to this all making some kind of sense is TASS (the Transcendent Abstractive Storage System). 22dot6 describes this as a “purpose-built, objective based software integrating users, applications and data services with physical, virtual and cloud-based architectures globally”. Sounds impressive, doesn’t it? Valence is the software that drives everything, providing the ability to deliver NAS and object over physical and virtual storage, in on-premises, hybrid, or public cloud deployments. It’s multi-vendor capable, offering support for third-party storage systems, and does some really neat stuff with analytics to ensure your storage is performing the way you need it to.

 

The Announcement

22dot6 has announced the TASS Cloud Suite, an “expanded collection of cloud specific features to enhance its universal storage software Valence”. Aimed at solving many of the typical problems users face when using cloud storage, it addresses:

  • Private cloud, with a “point-and-click transcendent capability to easily create an elastic, scale-on-demand, any storage, anywhere, private cloud architecture”
  • Hybrid cloud, by combining local and cloud resources into one big pool of storage
  • Cloud migration and mobility, with a “zero stub, zero pointer” architecture
  • Cloud-based NAS / Block / S3 Object consolidation, with a “transparent, multi-protocol, cross-platform support for all security and permissions with a single point-and-click”

There’s also support for cloud-based data protection, WORM encoding of data, and a comprehensive suite of analytics and reporting.

 

Thoughts and Further Reading

I’ve had the pleasure of speaking to Lauffin about 22dot6 on two occasions now, and I’m convinced that he’s probably one of the most enthusiastic storage company founders / CEOs I’ve ever been briefed by. He’s certainly been around for a while, and has seen a whole bunch of stuff. In writing this post I’ve had a hard time articulating everything that Lauffin tells me 22dot6 can do, while staying focused on the cloud part of the announcement. Clearly I should have done an overview post in May, and then I could just point you to that. In short, go have a look at the website and you’ll see that there’s quite a bit going on with this product.

The solution seeks to address a whole raft of issues that anyone familiar with modern storage systems will have come across at one stage or another. I remain continually intrigued by how various solutions work to address storage virtualisation challenges, while still making a system that works in a seamless manner. Then try and do that at scale, and in multiple geographical locations across the world. It’s not a terribly easy problem to solve, and if Lauffin and his team can actually pull it off, they’ll be well placed to dominate the storage market in the near future.

Spend any time with Lauffin and you realise that everything about 22dot6 speaks to many of the lessons learned over years of experience in the storage industry, and it’s refreshing to see a company trying to take on such a wide range of challenges and fix everything that’s wrong with modern storage systems. What I can’t say for sure, having never had any real stick time with the solution, is whether it works. In Lauffin’s defence, he has offered to get me in contact with some folks for a demo, and I’ll be taking him up on that offer. There’s a lot to like about what 22dot6 is trying to do here, with the TASS Cloud Suite being a small part of the bigger picture. I’m looking forward to seeing how this goes for 22dot6 over the next year or two, and will report back after I’ve had a demo.

StorONE Announces S1:Backup

StorONE recently announced details of its S1:Backup product. I had the opportunity to talk about the announcement with Gal Naor and George Crump about the news and thought I’d share some brief thoughts here.

 

The Problem

Talk to people in the tech sector today, and you’ll possibly hear a fair bit about how ransomware is a real problem for them, and a scary one at that. Most of the data protection solution vendors are talking about how they can help customers quickly recover from ransomware events, and some are particularly excited about how they can let you know you’ve been hit in a timely fashion. Which is great. A good data protection solution is definitely important to an organisation’s ability to rapidly recover when things go pop. But what about those software-based solutions that have themselves become targets of the ransomware gangs? What do you do when someone goes after both your primary and secondary storage solution? It costs a lot of money to deliver immutable solutions that are resilient to the nastiness associated with ransomware. Unfortunately, most organisations continue to treat data protection as an overpriced insurance policy and are reluctant to spend more than the bare minimum to keep these types of solutions going. It’s alarming how often I’ve spoken to customers using software-based data protection solutions that are out of support with the vendor, just to save a few thousand dollars a year in maintenance costs.

 

The StorONE Solution

So what do you get with S1:Backup? Quite a bit, as it happens.

[image courtesy of StorONE]

You get Flash-based data ingestion in an immutable format, with snapshots being taken every 30 seconds.

[image courtesy of StorONE]

You also get fast consolidation of multiple incremental backup jobs (think synthetic fulls, etc.), thanks to the high performance of the StorONE platform. Speaking of performance, you also get quick recovery capabilities, and the other benefits of the StorONE platform (namely high availability and high performance).

And if you’re looking for long term retention that’s affordable, you can take advantage of StorONE’s ability to cope well with 90% capacity utilisation, rapid RAID rebuild times, and the ability to start small and grow.
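To put that 30-second snapshot cadence in perspective, here’s a quick back-of-the-envelope sketch (my own arithmetic, not anything from StorONE) of how many snapshots accumulate over a retention window:

```python
SNAPSHOT_INTERVAL_S = 30  # S1:Backup's stated cadence

def snapshots_per_window(hours: float, interval_s: int = SNAPSHOT_INTERVAL_S) -> int:
    """How many snapshots accumulate over a retention window of the given length."""
    return int(hours * 3600 // interval_s)

per_day = snapshots_per_window(24)  # 2880 snapshots in a single day
```

At nearly 3,000 snapshots a day, it’s easy to see why fast consolidation of incrementals matters so much on the platform.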

 

Thoughts and Further Reading

Ransomware is a big problem, particularly when it hits you across both primary and secondary storage platforms. Storage immutability has become a super important piece of the puzzle that vendors are trying to solve. Like many things though, it does require some level of co-operation to make sure non-integrated systems are functioning across the stack in an integrated fashion. There are all kinds of ways to attack this issue, with some hardware vendors insisting that their particular interpretation of immutability is the only way to go, while some software vendors are quite keen on architecting air gaps into solutions to get around the problem. And I’m sure there’s a tape guy sitting up the back muttering about how tape is the ultimate air gap. Whichever way you want to look at it, I don’t think any one vendor has a solution that is 100% guaranteed to keep you safe from the folks in hoodies intent on trashing your data. So I’m pleased that StorONE is looking at this problem and wanting to work with the major vendors to develop a cost-effective solution to the issue. It may not be right for everyone, and that’s fine. But on the face of it, it certainly looks like a compelling solution when compared to rolling your own storage platforms and hoping that you don’t get hit.

Doing data protection well is hard, and made harder by virtue of the fact that many organisations treat it as a necessary evil. Sadly, it seems that CxOs only really start to listen after they’ve been rolled, not beforehand. Sometimes the best you can do is be prepared for when disaster strikes. If something like the StorONE solution is going to be the difference between losing the whole lot, or coming back from an attack quickly, it seems like it’s worth checking out. I can assure you that ignoring the problem will only end in tears. It’s also important to remember that a robust data protection solution is just another piece of the puzzle. You still need to look at your overall security posture, including securing your assets and teaching your staff good habits. Finally, if it seems like I’m taking aim at software-based solutions, I’m not. I’m the first to acknowledge that any system is susceptible if it isn’t architected and deployed in a secure fashion – regardless of whether it’s integrated or not. Anyway, if you’d like another take on the announcement, Mellor covered it here.

Random Short Take #63

Welcome to Random Short Take #63. It’s Friday morning, and the weekend is in sight.

  • I really enjoyed this article from Glenn K. Lockwood about how just looking for an IOPS figure can be a silly thing to do, particularly with HPC workloads. “If there’s one constant in HPC, it’s that everyone hates I/O.  And there’s a good reason: it’s a waste of time because every second you wait for I/O to complete is a second you aren’t doing the math that led you to use a supercomputer in the first place.”
  • Speaking of things that are a bit silly, it seems like someone thought getting on the front foot with some competitive marketing videos was a good idea. It rarely is though.
  • Switching gears a little, you may have been messing about with Tanzu Community Edition and asking yourself how you could SSH to a node. Ask no more, as Mark has your answer.
  • Speaking of storage companies that are pretty pleased with how things are going, Weka has put out this press release on its growth.
  • Still on press releases, Imply had some good news to share at Druid Summit recently.
  • Intrigued by Portworx and want to know more? Check out these two blog posts on configuring multi-cloud application portability (here and here) – they are excellent. Hat tip to my friend Mike at Pure Storage for the links.
  • I loved this article on project heroics from Chris Wahl. I’ve got a lot more to say about this and the impact this behaviour can have on staff but some of it is best not committed to print at this stage.
  • Finally, I replaced one of my receivers recently and cursed myself once again for not using banana plugs. They just make things a bit easier to deal with.

Datadobi, DobiProtect, and Forward Progress

I recently had the opportunity to speak with Carl D’Halluin from Datadobi about DobiProtect, and thought I’d share some thoughts here. I wrote about DobiProtect in the past, particularly in relation to disaster recovery and air gaps. Things have progressed since then, as they invariably do, and there’s a bit more to the DobiProtect story now.

 

Ransomware Bad, Data Protection Good

If you’re paying attention to any data protection solution vendors at the moment, you’re no doubt hearing about ransomware attacks. These are considered to be Very Bad Things™.

What Happens

  • Ransomware comes in through zero-day exploits or email attachments
  • Local drive content encrypted
  • Network shares encrypted – might be fast, might be slow
  • Encrypted file accessed and ransom message appears

How It Happens

Ransomware attacks are executed via many means, including social engineering, software exploits, and “malvertising” (my second favourite non-word next to performant). The timing of these attacks is important to note as well, as some ransomware will lie dormant and launch during a specific time period (a public holiday, for example). Sometimes ransomware will slowly and periodically encrypt content, but generally speaking it will begin encrypting files as quickly as possible. It might not encrypt everything either, but you can bet that it will be a pain regardless.
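As an aside, one common heuristic defenders use to spot mass encryption in progress is a jump in file entropy – well-encrypted data looks close to random. A minimal sketch of that idea (purely illustrative on my part, not anything Datadobi ships):

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_encrypted(data: bytes, threshold: float = 7.5) -> bool:
    """Flag content whose entropy approaches that of random (encrypted) data."""
    return shannon_entropy(data) >= threshold

# Plain text sits well below the threshold; ciphertext sits near 8.0 bits/byte.
```

Real products are far more sophisticated (compressed files are also high-entropy, for a start), but it illustrates why rapid, wholesale changes to file content are detectable in principle.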

Defense In Depth

Ransomware protection isn’t just about data protection though. There are many layers you need to consider (and protect), including:

  • Human – hard to control, not very good at doing what they’re told.
  • Physical – securing the locations where data is stored is important.
  • End Points – BYOD can be a pain to manage effectively, and keeping stuff up to date seems to be challenging for even the most mature organisations.
  • Networks – there’s a lot of work that needs to go into making sure workloads are both secure and accessible.
  • Application – sometimes they’re just slapped in there and we’re happy they run.
  • Data – It’s everything, but super exposed if you don’t get the rest of this right.

 

DobiProtect Then?

The folks at Datadobi tell me DobiProtect is the ideal solution for protecting the data layer as part of your defence in depth strategy as it is:

  • Software defined
  • Designed for the scale and complexity of file and / or object datasets
  • A solution that complements existing capabilities such as storage system snapshots
  • Easy to deploy and does not impact existing configurations
  • A solution that is cost effective and flexible

 

Where Does It Fit?

DobiProtect plays to Datadobi’s strengths – file and object storage. As such, it’s not designed to handle your traditional VM and DB protection; that remains the domain of the usual suspects.

[image courtesy of Datadobi]

Simple Deployment

The software-only nature of the solution, and the flexibility of going between file and object, means that it’s pretty easy to deploy as well.

[image courtesy of Datadobi]

Architecture

From an architecture perspective, it’s pretty straightforward as well, with the Core handling the orchestration and monitoring, and software proxies used for data movement.

[image courtesy of Datadobi]

 

Thoughts

I’ve been involved in the data protection business in some form or another for over two decades now. As you can imagine, I’ve seen a whole bunch of different ways to solve problems. In my day job I generally promote modern approaches to solving the challenge of protecting data in an efficient and cost-effective fashion. It can be hard to do this well, at scale, across the variety of workloads that you find in the modern enterprise. It’s not just some home directories, a file server, and one database that you have to protect. Now there are SaaS workloads, 5000 different database options, containers, endpoints, and all kinds of other crazy stuff. The thing linking it all together is data, and the requirement to protect that data in order for the business to do its business – whether that’s selling widgets or providing services to the general public.

Protecting file and object workloads can be a pain. But why not just use a vendor that can roughly do the job rather than a very specific solution like DobiProtect? I asked D’Halluin the same question, and his response was along the following lines. The kind of customers Datadobi is working with on a regular basis have petabytes of unstructured data they need to protect, and they absolutely need to be sure that it’s being protected properly. Not just from a quality of recovered data perspective, but also from a defensible compliance position. It’s not enough to point out to the auditors that the data protection solution “should” be working. There’s plenty of legislation in place that demands more than that. So it’s oftentimes worth investing in a solution that can reliably deliver against that compliance requirement.

Ransomware attacks can be the stuff of nightmares, particularly if you aren’t prepared. Any solution that is helping you to protect yourself (and, more importantly, recover) from attacks is a Very Good Thing™. Just be sure to check that the solution you’re looking at does what you think it will do. And then check again, because it’s not a matter of if, but when.

Random Short Take #62

Welcome to Random Short Take #62. It’s Friday afternoon, so I’ll try and keep this one brief.

  • Tony was doing some stuff in his lab and needed to clean up a bunch of ports in his NSX-T segment. Read more about what happened next here.
  • Speaking of people I think of when I think automation, Chris Wahl wrote a thought-provoking article on deep work that is well worth checking out.
  • While we’re talking about work, Nitro has published its 2022 Productivity Report. You can read more here.
  • This article from Backblaze on machine learning and predicting hard drive failure rates was interesting. Speaking of Backblaze, if you’re thinking about signing up with them, use my code and we’ll both get some free time.
  • Had a security problem? Need to recover? How do you know when to hit the big red button? Preston can help.
  • Speaking of doom and gloom (i.e. losing data), Curtis’s recent podcast episode covering ZFS and related technologies made for some great listening.
  • Have you been looking for a “A Unique Technology to Scan and Interrogate Petabyte-Scale Unstructured Data Lakes”? Maybe, maybe not. If you have, Datadobi has you covered with Datadobi Query Language. You can read the press release here.
  • I love when bloggers take the time to do hands-on articles, and this one from Dennis Faucher covering VMware Tanzu Community Edition was fantastic.

Random Short Take #61

Welcome to Random Short Take #61.

  • VMworld is on this week. I still find the virtual format (and timezones) challenging, and I miss the hallway track and the jet lag. There’s nonetheless some good news coming out of the event. One thing that was announced prior to the event was Tanzu Community Edition. William Lam talks more about that here.
  • Speaking of VMworld news, Viktor provided a great summary on the various “projects” being announced. You can read more here.
  • I’ve been a Mac user for a long time, and there’s stuff I’m learning every week via Howard Oakley’s blog. Check out this article covering the Recovery Partition. While I’m at it, this presentation he did on Time Machine is also pretty ace.
  • Facebook had a little problem this week, and the Cloudflare folks have provided a decent overview of what happened. As someone who works for a service provider, this kind of stuff makes me twitchy.
  • Fibre Channel? Cloud? Chalk and cheese? Maybe. Read Chin-Fah’s article for some more insights. Personally, I miss working with FC, but I don’t miss the arguing I had to do with systems and networks people when it came to the correct feeding and watering of FC environments.
  • Remote working has been a challenge for many organisations, with some managers not understanding that their workers weren’t just watching streaming video all day, but actually being more productive. Not everything needs to be a video call, however, and this post / presentation has a lot of great tips on what does and doesn’t work with distributed teams.
  • I’ve had to ask this question before. And Jase has apparently had to answer it too, so he’s posted an article on vSAN and external storage here.
  • This is the best response to a trio of questions I’ve read in some time.

Storage Field Day 22 – Wrap-up and Link-o-rama

Disclaimer: I recently attended Storage Field Day 22.  Some expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

This is a quick post to say thanks once again to Stephen and Ben, and the presenters at Storage Field Day 22. I had a great time. For easy reference, here’s a list of the posts I did covering the events (they may not match the order of the presentations).

Storage Field Day 22 – I’ll Be At Storage Field Day 22

Storage Field Day 22 – (Fairly) Full Disclosure

Komprise – It’s About Data, Not Storage

Infrascale Puts The Customer First

Fujifilm Object Archive – Not Your Father’s Tape Library

Intel – It’s About Getting The Right Kind Of Fast At The Edge

CTERA – Storage The Way Your Users Want It

Pure Storage – Pure1 Makes Life Easy

Also, here are a number of links to posts by my fellow delegates (in no particular order). They’re all very smart people, and you should check out their stuff, particularly if you haven’t before. I’ll attempt to keep this updated as more posts are published. But if it gets stale, the Storage Field Day 22 landing page will have updated links.

Erik Ableson (@EAbleson)

 

Jason Benedicic (@JABenedicic)

 

Brandon Graves (@BrandonGraves08)

 

Mikael Korsgaard Jensen (@Jekomi)

 

David Klee (@KleeGeek)

 

Rob Koper (@50mu)

 

Ray Lucchesi (@RayLucchesi)

CTERA, Cloud NAS on steroids

 

Christian Mohn (@h0bbel)

Storage Field Day #22 — Here We Go

 

Enrico Signoretti (@esignoretti)

 

Wolfgang Stief (@SpeicherStief)

Storage Field Day 22: Ein Ausblick

Data://express 10: Datenmanagement, Backup Und Die Cloud

Endlich: LTO-9 ist da!

 

Justin Warren (@JPWarren)

Komprise Is Klever

Disclosure: #SFD22 Edition

 

Gestalt IT (@GestaltIT)

CTERA: Multi-cloud Unstructured Data Management

Handling Object Storage at Scale with Fujifilm Object Archive

Intel and Lightbits Labs Make Storage Optimization Easy

Pure1 by Pure Storage Optimizes Hybrid Storage Management

Streamlining BDR with DRaaS from Infrascale

Analytics-Driven Unstructured Data Management from Komprise

[image courtesy of Stephen Foskett]

Pure Storage – Pure1 Makes Life Easy

Disclaimer: I recently attended Storage Field Day 22.  Some expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Pure Storage recently presented at Storage Field Day 22. You can see videos of the presentation here, and download my rough notes from here.

 

What You Need

If you’ve spent any time working with storage infrastructure, you’ll know that it can be a pain to manage and operate in an efficient manner. Pure1 has always been a great tool to manage your Pure Storage fleet. But Pure has taken that idea of collecting and analysing a whole bunch of telemetry data even further. So what is it you need?

Management and Observation

  • Setup needs to be easy to reduce risk and accelerate delivery
  • Alerting needs to be predictive to prevent downtime
  • Management has to be done anywhere to be responsive

Planning and Upgrades

  • Determining when to buy requires forecasting to manage costs
  • Workload optimisations should be intuitive to help keep users happy
  • Upgrades need to be non-disruptive to avoid outages

Purchasing and Scaling

  • Resources should be available as a service for on-demand scaling
  • Data service purchasing should be self-service for speed and simplicity
  • Hybrid cloud should be available from one vendor, in one place

 

Pure1 Has It

Sounds great, so how do you get that with Pure1? Pure breaks it down into three key areas:

  • Optimise
  • Recommend
  • Empower

Optimise

Reduce the time you spend on management and take the guesswork out of support. With aggregated fleet / group metrics, you get:

  • Capacity utilisation
  • Performance
  • Data reduction savings
  • Alerts and support cases

[image courtesy of Pure Storage]

Recommend

Every organisation wants to improve the speed and accuracy of resource planning while enhancing user experience. Pure1 provides the ability to use “What-If” modelling to stay ahead of demands.

  • Select application to be added
  • Provide sizing details
  • Get recommendations based on Pure best practices and AI analysis of Pure’s telemetry databases

[image courtesy of Pure Storage]

The process is alarmingly simple:

  • Pick a Workload Type – Choose a preset application type from a list of the most deployed enterprise applications, including SAP HANA, Microsoft SQL, and more.
  • Set Application Parameter – Define size of the deployment. Attributes are auto-populated based on Pure1 analytics across its global database. Adjust as needed for your environment.
  • Simulate Deployment – Identify where you want to deploy the application data. Pure1 analyses the impact on performance and capacity.
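Pure1’s modelling is driven by AI analysis of its global telemetry, so the real thing is far more sophisticated, but the shape of a what-if calculation might look something like this sketch (all preset names and numbers here are hypothetical, not Pure’s):

```python
# Hypothetical per-workload sizing presets (illustrative values only).
PRESETS = {
    "sql":      {"gib_per_unit": 500,  "iops_per_unit": 4000},
    "sap_hana": {"gib_per_unit": 2048, "iops_per_unit": 10000},
}

def simulate_deployment(workload: str, units: int,
                        free_gib: float, headroom_iops: float) -> dict:
    """Estimate whether a target array has capacity and performance headroom
    for a proposed workload of the given size."""
    preset = PRESETS[workload]
    need_gib = preset["gib_per_unit"] * units
    need_iops = preset["iops_per_unit"] * units
    return {
        "needed_gib": need_gib,
        "needed_iops": need_iops,
        "capacity_fits": need_gib <= free_gib,
        "performance_fits": need_iops <= headroom_iops,
    }

# Four SQL instances against an array with 10 TiB free and 20K IOPS headroom:
result = simulate_deployment("sql", units=4, free_gib=10000, headroom_iops=20000)
# Needs 2000 GiB and 16000 IOPS, so it fits on both axes here.
```

The point of the real feature is that those per-workload attributes come from analytics across Pure1’s installed base rather than guesswork, which is where the value lies.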

Empower

Build your hybrid-cloud infrastructure your way and on demand without the headaches of legacy purchasing. Pure has a great story to tell when it comes to Pure as-a-Service and OpEx acquisition models.

 

Thoughts and Further Reading

In a previous job, I was a Pure1 user and found the overall experience to be tremendous. Much has changed with Pure1 since I first installed it on my phone, and it’s my opinion that the integration and usefulness of the service have both increased exponentially. The folks at Pure have always understood that it’s not enough to deliver high-performance storage solutions built on All-Flash. This is considered table-stakes nowadays. Instead, Pure has done a great job of focussing on the management and operation of these high-performance storage solutions to ensure that users get what they need from the system. I sound like a broken record, I’m sure, but it’s this relentless focus on the customer experience that I think sets Pure apart from many of its competitors.

Most of the tier 1 storage vendors have had a chop at delivering management and operations systems that make extensive use of field telemetry data and support knowledge to deliver proactive support for customers. Everyone is talking about how they use advanced analytics, AI / ML, and so on to deliver a great support experience. But I think it’s the other parts of the equation that really bring it together nicely for Pure: the “evergreen” hardware lifecycle options, the consumption flexibility, and the focus on constantly improving the day 2 operations experience that’s required when managing storage at scale in the enterprise. Add to that the willingness to embrace hybrid cloud technologies, and the expanding product portfolio, and I’m looking forward to seeing what’s next for Pure. Finally, shout out to Stan Yanitskiy for jumping in at the last minute to present when his colleague had a comms issue – I think the video shows that he handled it like a real pro.