NetApp, Workloads, and Pizza

Disclaimer: I recently attended VMworld 2019 – US.  My flights and accommodation were paid for by Digital Sense, and VMware provided me with a free pass to the conference and various bits of swag. There is no requirement for me to blog about any of the content presented and I am not compensated by VMware for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.


As part of my attendance at VMworld US 2019 I had the opportunity to attend Tech Field Day Extra sessions. You can view the videos from the NetApp session here, and download my rough notes from here.


Enhanced DC Workloads

In The Beginning There Were Workloads

Andy Banta started his presentation by talking about the evolution of the data centre (DC). First-generation DCs were resource-constrained – as long as something (disk, CPU, memory) was in short supply, things didn’t get done. The later first-generation DCs consisted of standalone hosts, each with its own application. Andy described second-generation DCs as hosts that could run multiple workloads. The evolution of these second-generation DCs was virtualisation – now you could run multiple applications and operating systems on one host.

The DC though, is still all about compute, memory, throughput, and capacity. As Andy described it, “the DC is full of boxes”.

[image courtesy of NetApp]


But There’s Cool Stuff Happening

Things are changing in the DC though, primarily thanks to a few shifts in key technologies that have developed in recent times.

Persistent Memory

Persistent memory has become more mainstream, and application vendors are developing solutions that can leverage this technology effectively. There’s also technology out there that will let you slice this stuff up and share it around, just like you would a pizza. And it’s resilient too, so if you drop your pizza, there’ll still be some left on your plate (or someone else’s plate). Okay, I’ll stop with the tortured analogy.
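
To put the “slicing it up” idea in slightly more concrete terms, here’s a minimal C sketch (purely illustrative, not anything NetApp presented) of the basic programming model: a persistent-memory device exposed as a DAX-mounted file gets memory-mapped and is then read and written like ordinary RAM, with the data surviving a restart. The mount point, file name, and sizes are assumptions for the example, and a real application would normally lean on a library such as PMDK for cache flushing and failure atomicity.

    /* Hypothetical example: using persistent memory via a DAX-mounted file.
       Assumes /mnt/pmem is a filesystem mounted with -o dax on a pmem device. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define REGION_SIZE (64UL * 1024 * 1024)   /* a 64 MiB "slice" of the device */

    int main(void)
    {
        int fd = open("/mnt/pmem/app-state", O_CREAT | O_RDWR, 0600);
        if (fd < 0) { perror("open"); return 1; }
        if (ftruncate(fd, REGION_SIZE) != 0) { perror("ftruncate"); return 1; }

        /* Map the region: loads and stores now go to the persistent media. */
        char *pmem = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
                          MAP_SHARED, fd, 0);
        if (pmem == MAP_FAILED) { perror("mmap"); return 1; }

        /* Ordinary memory operations - no read()/write() calls in the data path. */
        strcpy(pmem, "state that should survive a reboot");

        /* Make sure the update is durable before exiting. */
        if (msync(pmem, REGION_SIZE, MS_SYNC) != 0) { perror("msync"); }

        munmap(pmem, REGION_SIZE);
        close(fd);
        return 0;
    }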

Microvisors

Microvisors are being deployed more commonly in the DC (and particularly at the edge). What’s a microvisor? “Imagine a Hypervisor stripped down to only what you need to run modern Linux based containers”. The advent of the microvisor is leading to different types of workloads (and hardware) popping up in racks where they may not have previously been found.

Specialised Cores on Demand

You can now also access specialised cores on demand from most service providers. Need some GPUs to get a particular piece of work done? No problem. There are a bunch of different ways you can slice this stuff up, and everyone’s hip to the possibility that you might only need them for a short time, so you pay a consumption fee for however long that turns out to be.

HPC

Even High Performance Computing (HPC) is doing interesting things with new technology (in this case NVMe over Fabrics, or NVMe-oF). What kinds of workloads?

  • Banking – low-latency transactions
  • Fluid dynamics – lots of data being processed quickly in a parallel stream
  • Medical and nuclear research


Thoughts

My favourite quote from Andy was “NVMe is grafting flesh back on to the skeleton of fibre channel”. He (like most of us in the room) is of the belief that FC (in its current incarnation at least) is dead. Andy went on to say that “[i]t’s out there for high margin vendors” and “[t]he more you can run on commodity hardware, the better off you are”.

The DC is changing, and not just in the sense that a lot of organisations aren’t running their own DCs any more, but also in the sense that the types of workloads in the DC (and their form factor) are a lot different to those we’re used to running in first-generation DC deployments.

Where does NetApp fit in all of this? The nice thing about having someone like Andy speak on their behalf is that you’re not going to get a product pitch. Andy has been around for a long time, and has seen a lot of different stuff. What he can tell you, though, is that NetApp have started developing (or selling) technology that can accommodate these newer workloads and newer DC deployments. NetApp will be happy to sell you storage that runs over IP, but they can also help you out with compute workloads (in the core and edge), and show you how to run Kubernetes across your estate.

The DC isn’t just full of apps running on hosts accessing storage any more – there’s a lot more to it than that. Workload diversity is becoming more and more common, and it’s going to be really interesting to see where things sit ten years from now.

Comments

  1. I think the death of Fibre Channel, as Mark Twain would say, has been greatly exaggerated. In fact in conversations with FCIA, I’m told that FC sales are increasing (albeit only slightly). FC is as dead as tape is. It’s a technology that will become legacy in the sense of filling a requirement without being current or “cool”. Incidentally, Andy’s “banter” (sic) was fun but he fell foul when questioned on some of the detail, so I’d take the FC comments with a pinch of salt…

  2. That is the beauty of working at a vendor – you get to say everything that you don’t sell is dead.

