Random Short Take #52

Welcome to Random Short Take #52. A few players have worn 52 in the NBA including Victor Alexander (I thought he was getting dunked on by Shawn Kemp but it was Chris Gatling). My pick is Greg Oden though. If only his legs were the same length. Let’s get random.

  • Penguin Computing and Seagate have been doing some cool stuff with the Exos E 5U84 platform. You can read more about that here. I think it’s slightly different to the AP version that StorONE uses, but I’ve been wrong before.
  • I still love Fibre Channel (FC), as unhealthy as that seems. I never really felt the same way about FCoE though, and it does seem to be deader than tape.
  • VMware vSAN 7.0 U2 is out now, and Cormac dives into what’s new here. If you’re in the ANZ timezone, don’t forget that Cormac, Duncan and Frank will be presenting (virtually) at the Sydney VMUG *soon*.
  • This article on data mobility from my preferred Chris Evans was great. We talk a lot about data mobility in this industry, but I don’t know that we’ve all taken the time to understand what it really means.
  • I’m a big fan of Tech Field Day, and it’s nice to see presenting companies take on feedback from delegates and put out interesting articles. Kit’s a smart fellow, and this article on using VMware Cloud for application modernisation is well worth reading.
  • Preston wrote about some experiences he had recently with almost-failing drives in his home environment, and raised some excellent points about resilience, failure, and caution.
  • Speaking of people I worked with briefly, I’ve enjoyed Siobhán’s series of articles on home automation. I would never have the patience to do this, but I’m awfully glad that someone did.
  • Datadobi appears to be enjoying some success, and has appointed Paul Repice to VP of Sales for the Americas. As the clock runs down on the quarter, I’m going two for one, and also letting you know that Zerto has done some work to enhance its channel program.

StorONE and Seagate Team Up

This news came out a little while ago, but I thought I’d cover it here nonetheless. Seagate and StorONE recently announced that the Seagate Exos AP 5U84 Application Platform would support StorONE’s S1:Enterprise Storage Platform.


It’s A Box!

[image courtesy of StorONE]

The Exos 5U84 Dual Node supports:

  • 2x 1.8 GHz CPU (E5-2648L v4)
  • 2x 256GB RAM
  • Storage capacities between 250TB and 1.3PB


It’s Software!

Hardware is fun, but it’s the software that really helps here, with support for:

  • Full High Availability
  • Automated Tiering
  • No Write Cache
  • Rapid RAID Rebuilds
  • Unlimited Snapshots
  • Cascading Replication
  • Self Encrypting Drives

It offers support for multiple access protocols, including iSCSI, NFS, SMB, and S3. Note that there is no FC support with this unit.


Thoughts and Further Reading

I’ve had positive things to say about StorONE in the past, particularly when it comes to transparent pricing and the ability to run this storage solution on commodity hardware. I’ve been on the fence about whether hybrid storage solutions are really on the way out. It felt like they were, for a while, and then folks kept coming up with tweaks to software that meant you could get even more bang for your buck (per GB). As with tape, I think it would be premature to say that hybrid storage using spinning disk is dead just yet.

Obviously, the folks at StorONE have skin in this particular game, so they’re going to talk about how hybrid isn’t going anywhere. It’s much the same as Michael Dell telling me that the on-premises server market is hotting up. When a vendor is selling something, it’s in their interest to convince you that a market exists for that thing, and that the market is hot. That said, some of the numbers Crump and the team at StorONE have shown me are indeed compelling. When you couple those numbers with the cost of the solution (which you can work out for yourself here), it becomes difficult to dismiss out of hand.

When I look at storage solutions I like to look at the numbers, and the hardware, and how it’s supported. But what’s really important is whether the solution is up to the task of the workload I need to throw at it. I also want to know that someone can fix my problem when the magic smoke escapes said storage solution. After a while in the industry, you start to realise that, regardless of what the brochures look like, there are a few different ways that these kinds of things get put together. Invariably, unless the solution is known for being reckless with data integrity, or super slow, there’s going to be a point at which the technical advantages become less of a point of differentiation. It’s at that point where the economics really come into play.

The world is software-defined in a lot of ways, but this doesn’t mean you can run your favourite storage code on any old box and expect a great outcome. It does, however, mean that you no longer have to pay a premium to get good performance, good capacity, and a reliable outcome for your workload. You also get the opportunity to enjoy performance improvements as the code improves, without necessarily needing to update your hardware. Which is kind of neat, particularly if you’ve ever paid a pretty penny for golden screwdriver upgrades from big brand disk slingers in the past. This solution might not be for everyone, particularly if you already have a big arrangement with some of the bigger vendors. But if you’re looking to do something new with your storage, and can’t stretch the economics to an All-Flash solution, this is worth a look.

Storage Field Day 21 – Wrap-up and Link-o-rama

Disclaimer: I recently attended Storage Field Day 21.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

This is a quick post to say thanks once again to Stephen and Ben, and the presenters at Storage Field Day 21. I had a great time. For easy reference, here’s a list of the posts I did covering the events (they may not match the order of the presentations).

Storage Field Day 21 – I’ll Be At Storage Field Day 21

Storage Field Day 21 – (Fairly) Full Disclosure

Back To The Future With Tintri

Hammerspace, Storageless Data, And One Tough Problem

Intel Optane – Challenges and Triumphs

NetApp Keystone – How Do You Want It?

Pliops – Can We Take Fast And Make It Faster?

Nasuni Puts Your Data Where You Need It

MinIO – Cloud, Edge, Everywhere …

Also, here are a number of links to posts by my fellow delegates (in no particular order). They’re all very smart people, and you should check out their stuff, particularly if you haven’t before. I’ll attempt to keep this updated as more posts are published. But if it gets stale, the Storage Field Day 21 landing page will have updated links.


Jason Collier (@BocaNuts)


Barry Coombs (@VirtualisedReal)

#SFD21 – Storage Field Day 21 – Tintri

#SFD21 – Storage Field Day 21 – NetApp

#SFD21 – Storage Field Day 21 – Nasuni

#SFD21 – Storage Field Day 21 – MinIO Session

#SFD21 – Storage Field Day 21 – Pliops

#SFD21 – Storage Field Day 21 – Hammerspace

#SFD21 – Storage Field Day 21 – Intel


Becky Elliott (@BeckyLElliott)


Matthew Leib (@MBLeib)


Ray Lucchesi (@RayLucchesi)

The rise of MinIO object storage

Data Science storage with NetApp’s Python Toolkit

Storageless data!?

115-GreyBeards talk database acceleration with Moshe Twitto, CTO&Co-founder, Pliops


Andrea Mauro (@Andrea_Mauro)


Max Mortillaro (@DarkkAvenger)

Nasuni – Cloud-Scale NAS Without Cloud Worries

Storage Field Day 21 – The TECHunplugged Take on Nasuni

Pliops: Re-Imagining Storage, Crushing Bottlenecks and a Bright Future in the Cloud


Keiran Shelden (@Keiran_Shelden)


Enrico Signoretti (@esignoretti)

Object Storage Is Heating Up

Storage Options for the Distributed Enterprise


Paul Stringfellow (@TechStringy)

Looking ahead with Storage Field Day 21 – Barry Coombs, Jason Collier, Max Mortillaro – Ep 149

Storageless data, really? – Doug Fallstrom – Ep156


Frederic Van Haren (@FredericVHaren)


On-Premise IT Podcast

Is Storageless Storage Just Someone Else’s Storage?


Now please enjoy this group photo.

[image courtesy of Gestalt IT]

MinIO – Cloud, Edge, Everywhere …

Disclaimer: I recently attended Storage Field Day 21.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

MinIO recently presented at Storage Field Day 21. You can see videos of the presentation here, and download my rough notes from here.


What Is It?

To quote the good folks at MinIO, it is a “high performance, Kubernetes-native object store”. It is designed to be used for large-scale data infrastructure, and was built from scratch to be cloud native.

[image courtesy of MinIO]

Design Principles

MinIO has been built with the following principles in mind:

  • Cloud Native – born in the cloud with “cloud native DNA”
  • Performance Focussed – MinIO believes it is the fastest object store in existence
  • Simplicity – designed for simplicity because “simplicity scales”

S3 Compatibility

MinIO is heavily focussed on S3 compatibility. It was first to market with support for S3 V4 (signature version 4), and is one of the few vendors to support S3 Select. It has also been strictly consistent from inception.
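To make the compatibility claim concrete, here’s a minimal sketch (my own, not from the MinIO presentation) that points the stock AWS SDK for Python at a MinIO endpoint. The endpoint address and credentials are placeholders for a local test deployment with out-of-the-box settings.

```python
import boto3

# Point the standard AWS SDK at a MinIO endpoint instead of AWS.
# The endpoint and credentials below are placeholders for a local
# MinIO instance running with default settings.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",
    aws_access_key_id="minioadmin",
    aws_secret_access_key="minioadmin",
    region_name="us-east-1",
)

s3.create_bucket(Bucket="demo-bucket")
s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello, object storage")

for obj in s3.list_objects_v2(Bucket="demo-bucket").get("Contents", []):
    print(obj["Key"], obj["Size"])
```

If the compatibility is as strict as MinIO says, anything written against the S3 API should behave the same way when it’s pointed at an endpoint like this.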

Put Me In Your Favourite Box

The cloud native part of MinIO was no accident, and as a result more than 62% of MinIO instances run in containers (according to MinIO). 43% of those instances are also managed via Kubernetes. It’s not just about jamming this solution into your favourite container platform though. Its lightweight nature means you can deploy it pretty much anywhere. As the MinIO folks pointed out during the presentation, MinIO is going everywhere that AWS S3 isn’t.


Thoughts And Further Reading

I love object storage. Maybe not in the way I love my family or listening to records or beer, but I do love it. It’s not just useful as storage for the great unwashed of the Internet, but also for backup and recovery, disaster recovery, data archives, and analytics. And I’m a big fan of MinIO, primarily because of the S3 compatibility and simplicity of deployment. Like it or not, S3 is the way forward in terms of a standard for object storage for cloud native (and a large number of enterprise) workloads. I’ve written before about other vendors being focussed on this compatibility, and I think it’s great that MinIO has approached this challenge with just as much vigour. There are plenty of problems to be had deploying applications at the best of times, and being able to rely on the storage vendor sticking to the script in terms of S3 compatibility takes one more potential headache away.

The simplicity of deployment is a big part of what intrigues me about MinIO too. I’m old enough to remember some deployments of early generation on-premises object storage systems that involved a bunch of hardware and complicated software interactions for what ultimately wasn’t a great experience. Something like MinIO can be up and running on some pretty tiny footprints in no time at all. A colleague of mine shared some insights into that process here.

And that’s what makes this cool. It’s not that MinIO is trying to take a piece of the AWS pie. Rather, it’s positioning the solution as one that can operate everywhere that the hyperscalers aren’t. Putting object storage solutions in edge locations has historically been a real pain to do. That’s no longer the case. Part of this has to do with the fact that we’ve got access to really small computers and compact storage. But it also has a bit to do with lightweight code that can be up and running in a snap. Like some of the other on-premises object vendors, MinIO has done a great job of turning people on to the possibility of doing cool storage for cloud native workloads outside of the cloud. It seems a bit odd until you think about all of the use cases in enterprise that might work really well in cloud, but aren’t allowed to be hosted in the cloud. It’s my opinion that MinIO has filled that gap (and exceeded expectations) when it comes to lightweight, easy-to-deploy object storage. I’m looking forward to seeing what’s next for them, particularly as other vendors start to leverage the solution. For another perspective on MinIO’s growth, check out Ray’s article here.

Nasuni Puts Your Data Where You Need It

Disclaimer: I recently attended Storage Field Day 21.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Nasuni recently presented at Storage Field Day 21. You can see videos of the presentation here, and download my rough notes from here.


Nasuni?

The functionality is in the product name. It’s NAS that offers a unified file system across clouds. The key feature is that it’s cloud-native, rather than built on any particular infrastructure solution.

[image courtesy of Nasuni]

The platform comprises five key components.

UniFS

  • Consolidates files and metadata in cloud storage – “Gold Copy”
  • Ensures durability by storing files as immutable, read-only objects
  • Stores an unlimited version history of every file

Virtual Edge Appliances

  • Caches active files with a 99% hit rate (see the latency sketch after this component rundown)
  • 98% smaller footprint vs traditional file server / NAS
  • Scales across all sites, including VDI
  • Supports standard file sharing protocols
  • Built-in web server enables remote file access via web browser (HTTP)

Management Console

  • Administers appliances, volumes, shares and file recovery
  • Automated through central GUI and REST API
  • Provides centralised monitoring, reporting, and alerting

Orchestration Center

  • Multi-site file sync keeps track of versions
  • Advanced version control with Nasuni Global File Lock
  • Multi-region cloud support to ensure performance

Analytics Connector

  • Translates file data into native object storage format
  • Leverage any public cloud services (AI, data analytics, search)
  • Multi-cloud support so you can run any cloud service against your data
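A quick aside on that cache hit rate claim from the Virtual Edge Appliances: the value of edge caching falls out of simple averaging arithmetic. Here’s a minimal sketch, with latency figures that are my own assumptions rather than Nasuni’s numbers, showing what a high hit rate does to effective access time.

```python
# Effective access latency for a caching edge appliance:
#   effective = hit_rate * local_latency + (1 - hit_rate) * cloud_latency
# The latency figures are illustrative assumptions, not Nasuni numbers.
LOCAL_MS, CLOUD_MS = 1.0, 80.0

for hit_rate in (0.90, 0.99):
    effective = hit_rate * LOCAL_MS + (1 - hit_rate) * CLOUD_MS
    print(f"{hit_rate:.0%} hit rate -> ~{effective:.1f} ms average access")
```

Going from a 90% to a 99% hit rate cuts the average access time by roughly a factor of five in this example, which is why the hit rate is the headline number for this sort of architecture.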


Thoughts and Further Reading

I’m the first to admit I’ve had a bit of a blind spot for Nasuni for a little while now. Not because I think the company doesn’t do cool stuff – it really does. Rather, my former employer was an investor in the tech and was keen to see us use the platform at every opportunity, even when the opportunity wasn’t appropriate.

Distributed storage for file sharing has been a pain in the rear for enterprises ever since enterprises have been a thing. The real challenge has been managing data across multiple locations in a cogent fashion. As local becomes global, this becomes even more of an issue, particularly when folks all across the world need to work on the same data. Email isn’t really great for this, and some of those sync and share solutions don’t cope well with the scale that is sometimes required. In the end, file serving is still a solution that can solve a problem for a lot of enterprise use cases.

The advent of public cloud has been great in terms of demonstrating that workloads can be distributed, and you don’t need to have a bunch of tin sitting in the office to get value from infrastructure. Nasuni recognised this over ten years ago, and it has put together a platform that seeks to solve that problem by taking advantage of the distributed nature of cloud, whilst acknowledging that virtualised resources can make for a useful local presence when it comes to having the right data in the right place. One of my favourite things about the solution is that you can also do stuff via the Analytics Connector to derive further value from your unstructured data. This is not a unique feature, but it’s certainly something that gives the impression that Nasuni isn’t just here to serve up your data.

The elegance of the Nasuni solution is in the fact that the complexity is well hidden from the end user. It’s a normal file access experience, but it’s hosted in the cloud. When you contrast that with what you get from the sync solutions of the world or the clumsy web-based document management systems so prevalent in the enterprise, this kind of simplicity is invaluable. It’s my opinion that there is very much a place for this kind of solution in the marketplace. The world is becoming increasingly global, but we still need solutions that can provide data where we need it. We also need those solutions to accommodate the performance and resilience needs of the enterprise.

If you’re after a great discussion on storage options for the distributed enterprise, check out Enrico’s article over at GigaOm.

Sydney VMUG – April 2021


*Update* This one’s been postponed for a few weeks. I’ll provide an update as soon as I get the new date.

The next Sydney VMUG meeting is coming up in a few weeks, and it should be great. Details below, and you can register here.

Providing a platform for modern IT services with a VMware SDDC    

Abstract: Today’s business requirements are driving the evolution of IT at an extremely rapid pace. Never before have IT administrators introduced so many new services and solutions to their customers. Although ensuring availability, performance, and recoverability of these solutions is key, it cannot come at the cost of business or developer agility. In this three-part roadshow Frank, Cormac, and Duncan will discuss how VMware (and products like vSphere, vSAN, Tanzu, etc) can help to transform your infrastructure to facilitate this new wave of applications.

Agenda

  • Opening by VMUG Leaders
  • Frank Denneman:  Creating a developer self-service platform with vSphere 7 while maintaining governance
  • Q&A and Break
  • Cormac Hogan:  vSphere 7.0 U1 and the Kubernetes Admin
  • Q&A and Break
  • Duncan Epping: vSAN 7.0 U1 and the vSphere Admin
  • Q&A and wrap up by VMUG leaders

Pliops – Can We Take Fast And Make It Faster?

Disclaimer: I recently attended Storage Field Day 21.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Pliops recently presented at Storage Field Day 21. You can see videos of the presentation here, and download my rough notes from here.


The Problem

You might have heard of solid-state drives (SSDs). You might have one in your computer. You might even have a data centre full of them. They’re quiet and they’re fast. It’s generally accepted that SSDs perform way better than HDDs. The problem, however, is that CPUs haven’t kept up with that performance increase. The folks at Pliops have also pointed out that resiliency technologies such as RAID generally suck, particularly when you’re using SSDs. In essence, you’re wasting a good chunk of your Flash.
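To put some rough numbers on the RAID gripe, the classic small-write penalty maths still applies to flash. Here’s a back-of-the-envelope sketch with figures I’ve invented for illustration (they’re not Pliops’ numbers).

```python
# Back-of-the-envelope RAID small-write penalty arithmetic, using
# invented figures. A RAID 5 small random write costs four I/Os
# (read data, read parity, write data, write parity); RAID 10 costs two.
SSD_WRITE_IOPS = 100_000   # assumed per-drive random write IOPS
DRIVES = 8

def effective_write_iops(penalty: int) -> int:
    return SSD_WRITE_IOPS * DRIVES // penalty

print("Raw aggregate writes:", SSD_WRITE_IOPS * DRIVES)   # 800,000
print("RAID 10 (penalty 2): ", effective_write_iops(2))   # 400,000
print("RAID 5 (penalty 4):  ", effective_write_iops(4))   # 200,000
```

Half to three quarters of the raw write performance disappearing into protection overhead is exactly the kind of wasted flash Pliops is talking about.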


The Solution?

The solution, according to Pliops, is the Pliops Storage Processor. This is “a hardware-based storage accelerator that enables cloud and enterprise customers to offload and accelerate data-intensive workloads using just a fraction of the computational load and power”. Capabilities include increased performance, capacity expansion, improved endurance, and data protection capabilities.

System Integration

From an integration perspective, the card is a half-height, half-length PCIe device that fits in any standard rackmount server. There’s a Pliops Agent that’s installed, and it supports a variety of different deployment options. There’s even a cloud / as-a-service option available.

[image courtesy of Pliops]

Use Cases

The Pliops SP is targeted primarily at RDBMS, NoSQL and Analytics workloads, due in large part to its ability to reduce read and write amplification on SSDs – something that’s been a problem since Flash became more prevalent in the data centre.

[image courtesy of Pliops]
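Since read and write amplification is the headline problem here, it’s worth a quick worked example of what the write side does to endurance. The figures below are mine, made up for illustration.

```python
# Illustrative write-amplification arithmetic (invented figures).
# WAF is the ratio of bytes physically written to flash versus bytes
# written by the host, so host-visible endurance is roughly the
# drive's rated TBW divided by WAF.
RATED_TBW = 3_500          # assumed drive endurance, terabytes written
HOST_WRITES_PER_DAY = 5    # assumed workload, TB written per day

for waf in (1.0, 2.0, 4.0):
    host_tbw = RATED_TBW / waf
    years = host_tbw / HOST_WRITES_PER_DAY / 365
    print(f"WAF {waf:.1f}: ~{host_tbw:,.0f} TB of host writes, ~{years:.1f} years of life")
```

If an accelerator can pull the write amplification of a key-value style workload back down towards 1, the same drives last years longer, which is a big part of the endurance pitch.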


Thoughts and Further Reading

On the surface, the Pliops Storage Processor seems to be solving a fairly specific problem. It’s not a problem that gets a lot of airplay, but that doesn’t mean it’s not an important one to solve. There are scads of solutions in the market that have been developed to address the problem of legacy systems design. For example, the way we addressed resilience previously (i.e. RAID) doesn’t work that well as drive capacities continue to increase. We’ve also fundamentally changed the media we’re working with, but haven’t necessarily developed new ways of taking advantage of that media.

Whenever I see add-in devices like this I worry that they would be a pain to manage at any sort of scale. But then I remember that literally everything related to hardware is a pain to manage at any kind of scale. The Pliops folks tell us that it’s not actually too bad, and any disadvantages related to having super specialised add-in cards deployed in servers are more than made up for by the improved outcomes achieved with those cards.

Ultimately, the value of a solution like the Pliops Storage Processor is absolutely tied to whether you’ve had a problem with this in the past. If you have, you’ll understand that this kind of solution is a reasonably elegant way of addressing the problem. It has the added bonus of taking fast media and eking out even more performance from that media.

Pliops has only been around since 2017, but it recently announced a decent funding round and the product is being readied for mass deployment. I’ll happily admit that I’ve done a fairly poor job of explaining the Pliops Storage Processor and what it does, so I recommend you check out the solution brief on the Pliops website. If you’d like another perspective, be sure to head over to TECHunplugged to read Max’s thoughts on Pliops.

Random Short Take #51

Welcome to Random Short Take #51. A few players have worn 51 in the NBA including Lawrence Funderburke (I remember the Ohio State team wearing grey Nikes on TV and thinking that was a really cool sneaker colour – something I haven’t been able to shake more than 25 years later). My pick is Boban Marjanović though. Let’s get random.

  • Folks don’t seem to spend much time making sure the fundamentals are sound, particularly when it comes to security. This article from Jess provides a handy list of things you should be thinking about, and doing, when it comes to securing your information systems. As she points out, it’s just a starting point, but I think it should be seen as a bare minimum / entry level set of requirements that you could wrap around most environments out in the wild.
  • Could there be a new version of AIX on the horizon? Do I care? Not really. But I do sometimes yearn for the “simpler” times I spent working on a myriad of proprietary open systems, particularly when it came to storage array support.
  • StorCentric recently announced Nexsan Assureon Cloud Edition. You can read the press release here.
  • Speaking of press releases, Zerto continues to grow its portfolio of cloud protection technology. You can read more on that here.
  • Spectro Cloud has been busy recently, and announced support for the management of existing Kubernetes deployments. The news on that can be found here.
  • Are you a data hoarder? I am. This article won’t help you quit data, but it will help you understand some of the things you can do to protect your data.
  • So you’ve found yourself with a publicly facing vCenter? Check out this VMware security advisory, and get patching ASAP. vCenter isn’t the only thing you need to be patching either, but hopefully you knew that already.
  • John Birmingham is one of my favourite writers. Not just for his novels with lots of things going bang, but also for his blog posts about food. And things of that nature.

NetApp Keystone – How Do You Want It?

Disclaimer: I recently attended Storage Field Day 21.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

NetApp recently presented at Storage Field Day 21. You can see videos of the presentation here, and download my rough notes from here. This post is focussed on the Keystone presentation, but I recommend you check out the Oracle performance analysis session as well – I found it extremely informative.


Keystone? What Is It?

According to the website, “Keystone provides a portfolio of payment solutions and storage-as-a-service offerings for hybrid cloud environments to deliver greater agility, financial flexibility, and reduced financial risk that helps you meet your business outcomes”. In short, it gives you a flexible way to consume the broader portfolio of NetApp solutions as a service on-premises (and off-premises).


How Much XaaS Is Too Much?

According to NetApp’s research, no amount of XaaS is too much. The market is apparently hungry for everything as a service to be a thing. It seems we’re no longer just looking to do boring old Infrastructure or Software as a Service. We really seem to want everything as a Service.

[image courtesy of NetApp]

Why?

There are some compelling reasons to consume things as a service via operating expenditure rather than upfront capital expenditure. In the olden days, when I needed some storage for my company, I usually had a line item in the budget for some kind of storage array. What invariably happened was that the budget would only be made available once every 3 – 5 years. It doesn’t make any sense necessarily, but I’m sure there are accounting reasons behind it. So I would try to estimate how much storage the company would need for the next 5 years (and usually miss the estimate by a country mile). I’d then buy as much storage as I could and watch it fill up at an alarming rate.

The other problem with this approach was that we were paying for spindles that weren’t necessarily in use for the entirety of the asset’s lifecycle. There was also the issue that some storage vendors would offer special discounting if you bought everything up front. When you went to add additional storage later, however, you’d be slugged with pricing that was nowhere near as good as if you’d bought everything up front. The appeal of solutions like storage as a service is that you can start with a smallish footprint and grow it as required, spending what you need as you go. It’s also nice for the vendors, as the sales engagement is a little more regular, and it becomes possible to sell other services into the environment that may not have been identified previously.
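As a crude illustration of the difference between the two models, here’s a sketch comparing buying the full five-year forecast on day one with paying monthly for the capacity actually in use. Every price and growth figure here is invented for the example; it’s the shape of the comparison that matters.

```python
# Crude capex-versus-consumption comparison (all figures invented).
# Up-front: pay on day one for the full 5-year capacity forecast.
# Consumption: pay each month only for the capacity actually in use.
FORECAST_TB = 500               # the 5-year guess we'd have bought
UPFRONT_PER_TB = 400            # $ per TB, purchased up front
MONTHLY_PER_TB = 8              # $ per TB per month, as-a-service
START_TB, GROWTH_TB = 100, 6.7  # actual usage: starting point + monthly growth

upfront_cost = FORECAST_TB * UPFRONT_PER_TB
consumption_cost = sum(
    (START_TB + GROWTH_TB * month) * MONTHLY_PER_TB for month in range(60)
)
print(f"Up-front purchase: ${upfront_cost:,.0f}")
print(f"Pay-as-you-grow:   ${consumption_cost:,.0f}")
```

The consumption number comes out lower here because you’re only ever paying for capacity that’s actually in use, rather than for the five-year guess. Change the assumptions and the answer changes too, which is rather the point of doing the sums before signing anything.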

No, But Really Why?

If you’ve watched the NetApp Keystone presentation, and maybe checked out the supporting documentation, you’re going to wonder why folks aren’t just moving everything to public cloud, and skipping the disk slinging middle man. As anyone who’s worked with or consulted for enterprise IT organisations will be able to tell you though, it’s rarely that simple. There may indeed be appetite to leverage public cloud storage services, for example, but there may also be a raft of reasons why this can’t be done, including latency requirements, legacy application support, data sovereignty concerns, and so forth.

[image courtesy of NetApp]

Sometimes the best thing that can happen is that there’s a compromise to be had between the desire for the business to move to a different operating model and the ability for the IT organisation to service that need.


Thoughts and Further Reading

The growth of XaaS over the last decade has been fascinating to see. There really is an expectation that you can do pretty much anything as a service, and folks are queuing up for the privilege. As I mentioned earlier, I think there are reasons why it’s popular on both sides, and I certainly don’t want to come across as some weird on-premises storage hugger who doesn’t believe the future of infrastructure is heavily entwined with as a service offerings. Heck, my day job is at a company that is entirely built on this model. What I do wonder at times is whether folks in organisations looking to transform their services are really ready to relinquish control of the infrastructure part of the equation in exchange for a per-GB, per-month option. Offerings like Keystone aren’t just fancy financial models to make getting kit on the floor easier; they’re changing the way that vendors and IT organisations interact at a fairly fundamental level. In much the same way that public cloud has changed the role of the IT bod in the organisation, so too does XaaS change that value proposition.

I think the folks at NetApp have done quite a good job with Keystone, particularly recognising that there is still a place for on-premises infrastructure, but acknowledging that the market wants both a “cloud-like” experience, and a new way of consuming these services. I’ll be interested to see how Keystone develops over the next 12 – 24 months now that it’s been released to the market at large. We all talk about as a service being the future, so I’m keen to see if folks are really buying it.

Intel Optane – Challenges and Triumphs

Disclaimer: I recently attended Storage Field Day 21.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Intel recently presented at Storage Field Day 21. You can see videos of the presentation here, and download my rough notes from here.


Alive and Kicking

Kristie Mann, Sr. Director Products, Intel Optane Group, kicked off the session by telling us that “Intel Optane is alive and well”. I don’t think anyone thought it was really going away, particularly given the effort that folks inside Intel have put in to get this product to market. But from a consumer perspective, it’s potentially been a little confusing.

[image courtesy of Intel]

In terms of data centre penetration, it’s been a different story, and taking Optane from idea to reality has been quite a journey. It was also noted that the “strong uptake of PMem in HPC was unexpected”, but no doubt welcome.


Learnings

Some of the other learnings that were covered as part of the session were as follows.

Software Really Matters

It’s one thing to come out with some super cool hardware that is absolutely amazing, but it’s quite another to get software support for that hardware. Unfortunately, the hardware doesn’t give you much without the software, no matter how well it performs. This was something of a challenge for Optane until recently, but there’s definitely been more noise from the big ISVs about enhanced Optane support.

IaaS Adoption

Adoption in IaaS has not been great, mainly due to some uneven performance. This will only improve as the software support matures. But the IaaS market can be tough for a bunch of reasons. IaaS vendors are trying to do things at a certain price point. That doesn’t mean that they’re still making people run VMs on spinning disk (hopefully), but rolling out All-Flash support for platforms is something that’s only going to be done when the $/GB makes sense for the providers. You also might have seen in the field that IaaS providers are pretty particular about performance and quality of service. It makes sense when you’re trying to host a whole bunch of different workloads at large scale. So it stands to reason that they’d be somewhat cautious about launching new media types on their platforms without running through a whole bunch of performance and integration testing. I’m not saying they’re not going to get there, they just may not be the first cabs off the rank.

Can you spell OEM?

OEM qualifications have been slow to date with Optane. This is key to getting the product out there. Enterprise folks don’t like to buy things until their favourite Tier 1 vendors are offering them as a default option in their server / storage array / fabric switch. If Dell has the Optane Inside sticker (not a real thing, but you know what I mean), the infrastructure architects inside large government entities are more likely to get on board.

Battling The Status Quo

Status quo thinking makes it hard to understand that this isn’t just memory or storage. This has been something of a problem for Intel since Optane became a thing. I’m still having conversations with people and running up against significant confusion about the difference between PMem and Optane SSDs (in short, PMem sits on the memory bus in DIMM form and is byte-addressable, while Optane SSDs are NVMe block devices). I think that’s going to improve as time goes on, but it can make things difficult when it comes to broad market penetration.

Thoughts and Further Reading

I don’t want people reading this to think that I’m down on Intel and what it’s done with Optane. If anything, I’m really into it. I enjoyed the presentation at Storage Field Day 21 tremendously, and not just because my friend Howard was on the panel repping VAST Data. It’s unusual that a vendor as big as Intel would be so frank about some of the challenges that it’s faced with getting new media to market. But I think it’s the willingness to share some of this information that demonstrates how committed Intel is to Optane moving forward. I was lucky enough to speak to Intel Senior Fellow Al Fazio about the Optane journey, and it was clear that there’s a whole lot of innovation and sweat that goes into making a product like this work.

Some folks think that these panel presentations are marketing disguised as a presentation. Invariably, the reference customers are friendly with the company, and you’ll only ever hear good stories. But I think those stories from those customers are still extremely powerful. After all, having a customer jump on a session to tell the world about how good your product has been means you’ve done something right. As a consumer of these products, I find these kinds of testimonials invaluable. Ultimately, products are successful in the market when they serve the market’s needs. From what I can see, Intel Optane is on its way to meeting those needs, and it has a bright future.