Cisco Introduces HyperFlex 4.5

Disclaimer: I recently attended Storage Field Day 20.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Cisco presented a sneak preview of HyperFlex 4.5 at Storage Field Day 20 a little while ago. You can see videos of the presentation here, and download my rough notes from here. Note that this preview was given some time before the product was officially announced, so a few of the things previewed may have changed, or not made it into the final product release.

 

Announcing HyperFlex 4.5

4.5: Meat and Potatoes

So what are the main components of the 4.5 announcement?

  • iSCSI Block storage
  • N:1 Edge data replication
  • New edge platforms / SD-WAN
  • HX Application Platform (KVM)
  • Intersight K8s Service
  • Intersight Workload Optimizer

Other Cool Stuff

  • HX Boost Mode – a virtual CPU configuration change in the HX controller VM; the boost is persistent (scale up).
  • ESXi & vCenter 7.0 support with a native vCenter plugin; 6.0 is EoS. (The HX native HTML5 vCenter plugin has been available since HX 4.0.)
  • Secure Boot – protects the hypervisor against bootloader attacks, with secure boot anchored in the Cisco hardware root of trust.
  • Hardened SDS Controller – reduces the attack surface and mitigates the risk of compromised admin credentials.

The HX240 Short Depth nodes have been available since HX 4.0, but there’s now a new Edge Option – the HX240 Edge. This is a new 2RU form factor option for HX Edge (2N / 3N / 4N), available in all-flash and hybrid configurations, with 1 or 2 sockets, up to 3TB RAM and 175TB capacity, and PCIe slots for dense GPUs.

 

iSCSI in HX 4.5(1a)

[image courtesy of Cisco]

iSCSI Topologies

[image courtesy of Cisco]
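
The iSCSI support means an HX cluster can now present block storage to hosts outside the cluster. As a rough, hedged illustration of what the host side of that looks like (this is generic open-iscsi usage rather than anything HyperFlex-specific, and the portal address and target IQN below are made up), here’s a minimal Python sketch that drives the standard iscsiadm tooling on a Linux initiator:

    import subprocess

    # Hypothetical iSCSI portal and target IQN - substitute the values your cluster presents.
    PORTAL = "192.168.10.50"
    TARGET_IQN = "iqn.2020-08.com.example:hx-target-01"

    def run(cmd):
        """Echo a command, then run it."""
        print("$", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Discover the targets presented by the portal.
    run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL])

    # Log in to the discovered target; its LUNs then appear as local block devices.
    run(["iscsiadm", "-m", "node", "-T", TARGET_IQN, "-p", PORTAL, "--login"])

CHAP authentication and multipathing are deliberately left out of the sketch, but they’re the sort of thing you’d want sorted before putting anything important on those LUNs.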

 

Thoughts and Further Reading

Some of the drama traditionally associated with HCI marketing seems to have died down now, and people have mostly stopped debating what it is or isn’t, and started focusing on what they can get from the architecture over more traditional infrastructure deployments. Hyperconverged has always had a good story when it comes to compute and storage, but the networking piece has proven problematic in the field. Sure, there have been attempts at making software-defined networking more effective, but some of these efforts have run into trouble when they’ve hit the northbound switches.

When I think of Cisco HyperFlex I think of it as the little HCI solution that could. It doesn’t dominate the industry conversation like some of the other vendors, but it’s certainly had an impact, in much the same way UCS has. I’ve been a big fan of Springpath for some time, and HyperFlex has taken a solid foundation and turned it into something even more versatile and fully featured. I think the key thing to remember with HyperFlex is that it’s a networking company selling this stuff – a networking company that knows what’s up when it comes to connecting all kinds of infrastructure together.

The addition of iSCSI keeps the block storage crowd happy, and the new edge form-factor will have appeal for customers trying to squeeze these boxes into places they probably shouldn’t be going. I’m looking forward to seeing more HyperFlex from Cisco over the next 12 months, as I think it finally has a really good story to tell, particularly when it comes to integration with other Cisco bits and pieces.

Storage Field Day 20 – Wrap-up and Link-o-rama

Disclaimer: I recently attended Storage Field Day 20.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

This is a quick post to say thanks once again to Stephen and Ben, and the presenters at Storage Field Day 20. I had a super fun and educational time. For easy reference, here’s a list of the posts I did covering the events (they may not match the order of the presentations).

Storage Field Day 20 – I’ll Be At Storage Field Day 20

Storage Field Day 20 – (Fairly) Full Disclosure

Cisco MDS, NVMe, and Flexibility

Qumulo – Storage Your Way

Pure Storage Announces Second Generation FlashArray//C with QLC

Nebulon – It’s Server Storage Jim, But Not As We Know It

VAST Data – The Best Is Yet To Come

Intel Optane And The DAOS Storage Engine

Cisco Introduces HyperFlex 4.5

 

Also, here’s a number of links to posts by my fellow delegates (in no particular order). They’re all very smart people, and you should check out their stuff, particularly if you haven’t before. I’ll attempt to keep this updated as more posts are published. But if it gets stale, the Storage Field Day 20 landing page will have updated links.

 

Jason Benedicic (@JABenedicic)

Nebulon Shadow Storage

 

David Chapa (@DavidChapa)

“High Optane” Fuel For Performance

 

Becky Elliott (@BeckyLElliott)

Guess Who’s Attending Storage Field Day 20?

 

Ray Lucchesi (@RayLucchesi)

Storage that provides 100% performance at 99% full

106: Greybeards talk Intel’s new HPC file system with Kelsey Prantis, Senior Software Eng. Manager, Intel

 

Vuong Pham (@Digital_KungFu)

Storage Field Day 20.. oh yeah!!

 

Keiran Shelden (@Keiran_Shelden)

Let’s Zoom to SFD20

 

Enrico Signoretti (@esignoretti)

An Intriguing Approach to Modern Data Center Infrastructure

Is Scale-Out File Storage the New Black?

 

Paul Stringfellow (@TechStringy)

Storage Field Day and The Direction of Travel

 

Keith Townsend (@CTOAdvisor)

Will the DPU kill the Storage Array?

 

[image courtesy of Stephen Foskett]

Intel Optane And The DAOS Storage Engine

Disclaimer: I recently attended Storage Field Day 20.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Intel recently presented at Storage Field Day 20. You can see videos of the presentation here, and download my rough notes from here.

 

Intel Optane Persistent Memory

If you’re a diskslinger, you’ve very likely heard of Intel Optane. You may have even heard of Intel Optane Persistent Memory. It’s a little different to Optane SSD, and Intel describes it as “memory technology that delivers a unique combination of affordable large capacity and support for data persistence”. It looks a lot like DRAM, but the capacity is greater, and there’s data persistence across power losses. This all sounds pretty cool, but isn’t it just another form factor for fast storage? Sort of, but the application of the engineering behind the product is where I think it starts to get really interesting.

 

Enter DAOS

Distributed Asynchronous Object Storage (DAOS) is described by Intel as “an open source software-defined scale-out object store that provides high bandwidth, low latency, and high I/O operations per second (IOPS) storage containers to HPC applications”. It’s essentially a software stack built from the ground up to take advantage of the crazy speeds you can achieve with Optane, and at scale. There’s a handy overview of the architecture available on Intel’s website. Traditional object (and other) storage systems haven’t really been built to take advantage of Optane in quite the same way DAOS has.

[image courtesy of Intel]

There are some cool features built into DAOS, including:

  • Ultra-fine grained, low-latency, and true zero-copy I/O
  • Advanced data placement to account for fault domains
  • Software-managed redundancy supporting both replication and erasure code with online rebuild
  • End-to-end (E2E) data integrity
  • Scalable distributed transactions with guaranteed data consistency and automated recovery
  • Dataset snapshot capability
  • Security framework to manage access control to storage pools
  • Software-defined storage management to provision, configure, modify, and monitor storage pools

Exciting? Sure is. There’s also integration with Lustre. The best thing about this is that you can grab it from GitHub under the Apache 2.0 license.
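
To give a rough sense of the workflow (this is my own sketch rather than anything Intel demonstrated, the sizes are arbitrary, and the dmg / daos command flags have a habit of changing between releases, so treat the admin guide as the source of truth), standing up a pool, creating a POSIX container, and mounting it on a client looks something like this:

    import subprocess

    def run(cmd):
        """Echo and execute a DAOS management command."""
        print("$", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Create a pool across the DAOS storage targets (sizes here are arbitrary examples);
    # dmg prints the UUID of the new pool on success.
    run(["dmg", "pool", "create", "--scm-size=32G", "--nvme-size=1T"])

    # Create a POSIX container in that pool, then mount it on a client via dfuse.
    # Substitute the real UUIDs returned by the commands above.
    run(["daos", "container", "create", "--pool=<pool_uuid>", "--type=POSIX"])
    run(["dfuse", "--pool=<pool_uuid>", "--container=<container_uuid>", "--mountpoint=/mnt/daos"])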

 

Thoughts And Further Reading

Object storage is in its relative infancy when compared to some of the storage architectures out there. It was designed to be highly scalable and generally does a good job of cheap and deep storage at “web scale”. It’s my opinion that object storage becomes even more interesting as a storage solution when you put a whole bunch of really fast storage media behind it. I’ve seen some media companies do this with great success, and there are a few of the bigger vendors out there starting to push the All-Flash object story. Even then, though, many of the more popular object storage systems aren’t necessarily optimised for products like Intel Optane PMEM. This is what makes DAOS so interesting – the ability for the storage to fundamentally do what it needs to do at massive scale, and have it go as fast as the media will let it go. You don’t need to worry as much about the storage architecture being optimised for the storage it will sit on, because the folks developing it have access to the team that developed the hardware.

The other thing I really like about this project is that it’s open source. This tells me that Intel are both focused on Optane being successful, and also focused on the industry making the most of the hardware it’s putting out there. It’s a smart move – come up with some super fast media, and then give the market as much help as possible to squeeze the most out of it.

You can grab the admin guide from here, and check out the roadmap here. Intel has plans to release a new version every 6 months, and I’m really looking forward to seeing this thing gain traction. For another perspective on DAOS and Intel Optane, check out David Chapa’s article here.

 

 

VAST Data – The Best Is Yet To Come

Disclaimer: I recently attended Storage Field Day 20.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

VAST Data recently presented at Storage Field Day 20. You can see videos of their presentation here, and download my rough notes from here.

 

Feature Progress

VAST Data has come up with a pretty cool solution, and it continues to evolve as time passes (funny how that works). You can see that a whole stack of features has been added to the platform since the 1.0 release in November 2018.

[image courtesy of VAST Data]

The Similarity

One feature that caught my eye was the similarity-based data reduction capability (introduced in 1.2), and the reduction numbers VAST Data has observed with it. In the picture below you’ll see a lot of 3:1 and 2:1. That doesn’t seem like a great ratio, but the data being worked on here is pre-compressed. My experience with applying data reduction techniques to pre-compressed and / or pre-deduplicated data is that it’s usually tough to get anything decent out of it, so I think this is pretty neat.

[image courtesy of VAST Data]

Snap to S3

Another cool feature (added in 3.0) is snap to cloud / S3. This is one of those features where you think, ha, I hadn’t been looking for that specifically, but it does look kind of cool.

[image courtesy of VAST Data]

  • Replicate snaps to object store – independent schedule and retention
  • Presented as a .remote folder – self-service restore (<30 days via .snapshots, >30 days via .remote)
  • Large objects – data and metadata, compressed
  • ReaderVM – presents a read-only .remote; available as .ovf or AMI; parallel for bandwidth

 

Thoughts and Further Reading

You’ll notice I haven’t written a lot in this article. This isn’t because I don’t think VAST Data is intriguing, or that I don’t like what it can do. Rather, I think you’d be better served checking out the Storage Field Day presentations yourself (I recommend the presentations from both Storage Field Day 18 and Storage Field Day 20). You can also read my summary of the tech from Storage Field Day 18 here, but obviously things have progressed significantly since then.

As Howard Marks pointed out in his presentation, this is not the first rodeo for many of the folks involved in VAST Data’s rapid development. You can see from the number of features being added in a short time that they’re keen on making more progress and meeting the needs of the market. But it still takes time. SMB failover is cool, but some people might be more interested in seeing vSphere support sooner rather than later. I have no insight into the roadmap, but based on what I’ve seen over the last 20ish months, there’s been some great stuff delivered, and there are definitely more cool things to come. Couple that with the fact that this thing relies heavily on QLC and you’ve got a compelling platform, at a potentially super interesting price point, upon which you can do a bunch of interesting things, storage-wise. I’m looking forward to seeing what’s in store over the next 20 months.

Nebulon – It’s Server Storage Jim, But Not As We Know It

Disclaimer: I recently attended Storage Field Day 20.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Nebulon recently presented at Storage Field Day 20. You can see videos of the presentation here, and download my rough notes from here. I’d also like to thank Siamak Nazari (CEO), Craig Nunes (COO), and the team for taking the time to do a follow-up briefing with me after the event. I think I took a lot more in when it was done during waking hours.

 

What Is It?

Nebulon defines its offering as “cloud-defined storage (CDS)”. It’s basically an add-in card that delivers “on-premises, server-based enterprise-class storage that consumes no server CPU / memory resources and is defined and managed through the cloud”. This is achieved via a combination of nebulon ON (the cloud management plane) and the Nebulon Services Processing Unit (SPU).

 

The SPU

The SPU is the gateway to the solution, and:

  • Runs in any 2RU / 24 drive server;
  • Connects to server SSDs like any RAID card; and
  • Presents local or shared volumes to the application.

A group of SPU-equipped servers is called an “nPod”.

The SPU is built in such a fashion as to deliver some features you may not traditionally associate with host-side storage, including:

  • All-flash performance via a high performance 8-core 3GHz CPU and 32GB NVRAM for tier-1 performance with all-flash latencies; and
  • Zero-trust security using hardware-accelerated encryption engines, a token-based “security triangle”, and crypto-authentication chip.

The Nebulon solution is designed to scale out, with support for up to 32 heterogeneous, SPU-enabled servers per nPod connected by 2x 25Gb Ethernet ports. Note that you can scale out your compute independently of your storage needs.

 

[image courtesy of Nebulon]

Key Features

Offloads the full storage software stack from the server.

Enterprise data services

  • Data efficiency: deduplication, compression, thin provisioning
  • Data protection: encryption, erasure coding, snapshots, replication

No software dependencies

  • In-box drivers for all hypervisors and operating systems without managing multi-pathing or firmware dependencies

1.3x more performance from each server

  • Application / VM density advantage vs “restrictive” SDS

Isolated fault domain

  • Storage and data services are not impacted by operating system or hypervisor reboots

 

It’s Not On, nebulon ON

Always Up-to-Date Software

The cool thing is the nebulon ON cloud control plane is always being updated and delivered as a service. You can leverage new features instantly, and there’s scope for some AI stuff to be done there too. The SPU runs a lightweight storage OS: nebOS. Nebulon says it’s fast, with infrequent updates and no disruption to service, and scheduling updates is also apparently easy.

 

Thoughts and Further Reading

I tried to come up with a witty title for this post, because the name Nebulon makes me think of Star Trek. But I’ll admit my knowledge of Star Trek runs to “Star Trekkin’” by The Firm, so I can’t really say whether that’s a valid thing. In any case, I didn’t immediately get the value that Nebulon offered, and it wasn’t until the team took me through the offering for a second time (and it wasn’t the middle of the night for me) that I think I started to get the value proposition. Perhaps it was because I’m still working with “traditional” storage vendors on a more regular basis – the exact solution that Nebulon is looking to remove from environments.

“Server storage” is an interesting thing. There are a lot of reasons why it’s a good thing, and well suited to a number of workloads. When coupled with a robust management plane and “enterprise” resilience features, server storage can have a lot of appeal, particularly if you need to scale up or down quickly. One thing that makes the Nebulon solution super interesting is the fact that the management is done primarily from the cloud offering. I confirmed with the team that nothing went bad with the storage itself when the management plane was unavailable for a period of time. I also confirmed that the typical objection handling they were seeing in the field regarding security came down to the need to do a workshop and run through the solution with the security folks to get it over the line.

This solution has a lot of appeal, depending on how you’re consuming your storage resources today. If you’re already down the track of server storage, this may not be as exciting, because you might have already done a lot of the heavy lifting and gotten things working just so. But if you’re still using a traditional storage solution and looking to change things up, Nebulon could have some appeal, particularly as it provides some cloud-based management and some level of grunt on the SPUs. The ability to couple this with your preferred server vendor will also have appeal to the bigger shops looking to leverage their buying power with the bigger server slingers. Enrico covered Nebulon here, and you can read more on cloud-defined storage here.

Pure Storage Announces Second Generation FlashArray//C with QLC

Disclaimer: I recently attended Storage Field Day 20.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Pure Storage recently announced its second generation FlashArray//C – an all-QLC offering with scads of capacity in a dense form factor. Pure Storage presented on this topic at Storage Field Day 20. You can see videos of the presentation here, and download my rough notes from here.

 

It’s A Box!

FlashArray//C burst onto the scene last year as an all-flash, capacity-optimised storage option for customers looking for storage that didn’t need to go quite as fast as the FlashArray//X, but that wasn’t built on spinning disk. Available capacities range from 1.3PB to 5.2PB (effective).

[image courtesy of Pure Storage]

There are a number of models available, with a variety of capacities and densities.

  • //C60-366 – up to 1.3PB effective capacity; 366TB raw capacity; 3U; 1000–1240 watts (nominal–peak); 97.7 lbs (44.3 kg) fully loaded; 5.12” x 18.94” x 29.72” chassis
  • //C60-494 – up to 1.9PB effective capacity; 494TB raw capacity; 3U; 1000–1240 watts (nominal–peak); 97.7 lbs (44.3 kg) fully loaded; 5.12” x 18.94” x 29.72” chassis
  • //C60-840 – up to 3.2PB effective capacity; 840TB raw capacity; 6U; 1480–1760 watts (nominal–peak); 177.0 lbs (80.3 kg) fully loaded; 10.2” x 18.94” x 29.72” chassis
  • //C60-1186 – up to 4.6PB effective capacity; 1.2PB raw capacity; 6U; 1480–1760 watts (nominal–peak); 185.4 lbs (84.1 kg) fully loaded; 15.35” x 18.94” x 29.72” chassis
  • //C60-1390 – up to 5.2PB effective capacity; 1.4PB raw capacity; 9U; 1960–2280 watts (nominal–peak); 273.2 lbs (123.9 kg) fully loaded; 15.35” x 18.94” x 29.72” chassis

Workloads

There are reasons why the FlashArray//C could be a really compelling option for workload consolidation. More and more workloads are “business critical” in terms of both performance and availability. There’s a requirement to do more with less, while battling complexity, and a strong desire to manage everything via a single pane of glass.

There are some other cool things you could use the //C for as well, including:

  • Automated policy-based VM tiering between //X and //C arrays;
  • DR using the //X at production and //C at your secondary site;
  • Consolidating multiple //X array workloads on a single //C array for test and dev; and
  • Consolidating multiple //X array snapshots to a single //C array for long-term retention (see the sketch below).
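
On that last use case, the heavy lifting is typically done with protection groups and replication schedules configured between the //X and //C arrays, which is beyond the scope of this post. But as a small, hedged example of scripting against a FlashArray, here’s a sketch using the purestorage Python REST client – the management address, API token, and volume name are placeholders, and this just takes a local snapshot rather than configuring the replication itself:

    import purestorage

    # Placeholder management address and API token for the source FlashArray//X.
    array = purestorage.FlashArray("flasharray-x.example.com", api_token="REPLACE-ME")

    # Take a snapshot of a volume with a recognisable suffix.
    snap = array.create_snapshot("prod-vol01", suffix="retention")
    print("Created snapshot:", snap["name"])

    # Have a look at what else might be worth protecting.
    for vol in array.list_volumes():
        print(vol["name"], vol["size"])

    # Close the session.
    array.invalidate_cookie()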

 

It’s a QLC World, Sort Of

The second generation FlashArray//C means you can potentially now have flash all through the data centre.

  • Apps and VMs – provision your high performance workloads to //X, lower performance / high capacity workloads to //C
  • Modern Data Protection & Disaster Recovery – on-premises production applications on //X efficiently replicated or backed up to //C at DR site
  • User File Shares – User file access with Purity 6.0 via SMB, NFS

QLC nonetheless presents significant engineering challenges with traditionally high write latency and low endurance (when compared to SLC, MLC, and TLC). Pure Storage’s answer to that problem has been to engineer the crap out of DirectFlash to get the required results. I’d do a bad job of explaining it, so instead I recommend you check out Pete Kirkpatrick’s explanation.

 

Thoughts And Further Reading

I covered the initial FlashArray//C announcement here and many of the reasons why this type of offering is appealing remain the same. The knock on Pure Storage in the last few years has been that, while FlashArray//X is nice and fast and a snap to use, it couldn’t provide the right kind of capacity (i.e. cheap and deep) that a number of price-sensitive punters wanted.  Sure, they could go and buy the FlashArray//X and then look to another vendor for a dense storage option, but the motivation to run with a number of storage vendors in smaller enterprise shops is normally fairly low. The folks in charge of technology in these environments are invariably stretched in terms of bodies on the floor to run the environments, and cash in the bank to procure those solutions. A single vendor solution normally makes sense for them (as opposed to some of the larger shops, or specialist organisations that really have very specific requirements that can only be serviced by particular solutions).

So now Pure Storage has the FlashArray//C, and you can get it with some decent density, some useful features (thanks in part to some new features in Purity 6), and integration with the things you know and like about Pure Storage, such as Pure1 and Evergreen storage. It seems like Pure Storage has done an awful lot of work to squeeze performance out of QLC whilst ensuring that the modules don’t need replacing every other week. There’s a lot to like about the evolving Pure Storage story, and I’m interested to see how they tie it all together as the portfolio continues to expand. You can read the press release here, access the data sheet here, and read Mellor’s take on the news here.

Qumulo – Storage Your Way

Disclaimer: I recently attended Storage Field Day 20.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Qumulo recently presented at Storage Field Day 20. You can see videos of the presentation here, and download my rough notes from here.

 

Solving Problems The Qumulo Way

Extreme, Efficient Scalability

Legacy problem

  • Inefficient file system architectures only use 60-80% of purchased capacity
  • Limited capacity and file count scalability
  • Adding capacity and performance was complex and often required downtime

Qumulo Solution

  • Get 100% out of your investment
  • Scale to hundreds of billions of files
  • Expand capacity and performance seamlessly

No More Tiers

Legacy problem

  • Data not available in fastest storage when users and applications request it
  • Tiering jobs were slow to execute and often didn’t complete, leaving a risk of filling the performance tier

Qumulo Solution

  • Single-tier solution with predictive caching ensures performance SLAs are met
  • Simplified administration and full confidence in access to data

Visibility

Legacy problem

  • Troubleshooting complexity, unforeseen growth expenditures, and usage bottlenecks plague IT
  • Impossible to track who is using data and for which projects

Qumulo Solution

  • Gain graphical, real-time visibility into user performance, usage trends and performance bottlenecks
  • Proactively plan for future requirements

 

Flash! Ah-ah

The development of an all-NVMe solution also means Qumulo can do even more to address legacy storage problems.

Intelligent Caching

  • Enables fast random reads from limited and expensive media (SSD)
  • Identifies read I/O patterns and promotes data from disk to SSD
  • Cache eviction policy leverages a heat-based strategy (see the sketch below)
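
To make the heat-based idea a little more concrete, here’s a purely illustrative Python sketch (my own toy model, not Qumulo’s actual implementation) of promoting blocks to an SSD cache based on access frequency and evicting the coldest resident block first:

    from collections import defaultdict

    class HeatCache:
        """Toy heat-based cache: hot blocks get promoted, the coldest get evicted."""

        def __init__(self, capacity):
            self.capacity = capacity          # number of blocks the SSD tier can hold
            self.heat = defaultdict(int)      # access counts per block
            self.cached = set()               # blocks currently resident on SSD

        def read(self, block):
            self.heat[block] += 1
            if block in self.cached:
                return "ssd"                  # cache hit
            self._maybe_promote(block)
            return "hdd"                      # served from disk this time around

        def _maybe_promote(self, block):
            if len(self.cached) < self.capacity:
                self.cached.add(block)
                return
            coldest = min(self.cached, key=lambda b: self.heat[b])
            # Only displace the coldest resident block if the newcomer is hotter.
            if self.heat[block] > self.heat[coldest]:
                self.cached.remove(coldest)
                self.cached.add(block)

    # Example: repeated reads of block 7 eventually land it in the SSD tier.
    cache = HeatCache(capacity=2)
    for b in [1, 2, 7, 7, 7, 3]:
        print(b, cache.read(b))

The real thing obviously has to worry about block sizes, write handling, metadata, and doing all of this at scale without burning CPU, but the basic promote-the-hot, evict-the-cold idea is the same.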

Predictive Prefetch

  • Enables very fast streaming reads
  • Proactively moves data to RAM by anticipating files that will likely be read in large, parallel batches or within a file
  • Constantly and automatically adjusts, at per-client granularity, if the prefetched data isn’t being used

All Writes Go To Flash

  • No special hardware components required
  • Data de-staged to optimise for HDD performance
  • Low latency NVMe makes writes much faster

 

Thoughts and Further Reading

We’ve been talking about the end of disk-based storage systems for some time now. But there still seems to be an awful lot of spinning drives being used around the globe to power a variety of different workloads. Hybrid storage still has a place in the storage world, particularly when you need “good enough” performance in price-sensitive environments. What All-Flash does, however, is provide the ability to deliver some very, very good performance for those applications that need it. Doing high resolution video editing, or needing to render those files at high speed? An all-NVMe solution is likely what you’re after. But if you just need a bunch of capacity to store video surveillance files, or archives, then a hybrid solution will quite likely meet your requirements. The key to the Qumulo solution is that it can do both of those things whilst using a bunch of software smarts to help you get your unstructured data under control. It’s not just about throwing a new storage protocol at the problem and hoping things will run better / faster though. It’s also important to understand how the solution can scale out, and what kind of visibility you get with said solution. These are two critical aspects of storage solutions used in media production environments, particularly when being able to squeeze every last bit of performance out of the system is a key requirement, and you might be in a position where you need to throw a bunch of workload at the system in a hurry.

Qumulo strikes me as being a super popular solution for video editing, broadcast, production, and so forth. This is one of my favourite market segments, if only because the variety of workloads and solutions that cater to those workloads is pretty insane. That said, when you dig into other market segments, such as Artificial Intelligence and analytics workloads, you’ll also notice that unstructured data access is a common requirement. The delivery of an all-NVMe solution helps Qumulo provide the resources required to satisfy those high-performance applications. But the cool thing isn’t just the performance, or even the ability to scale. It’s the visibility you can get into the platform to work out what’s going on. Managing petabytes of unstructured data is a daunting task at the best of times, so it’s nice to see a company paying attention to making both the end user and the storage administrator happy. I’m the first to admit I haven’t been paying as much attention to Qumulo as I should have, but I will be doing so from now on. For another perspective, check out Ray Lucchesi’s article on Qumulo here.

Cisco MDS, NVMe, and Flexibility

Disclaimer: I recently attended Storage Field Day 20.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Cisco recently presented at Storage Field Day 20. You can see videos of the presentation here, and download my rough notes from here.

 

NVMe, Yeah You Know Me

Non-Volatile Memory Express, known more commonly as NVMe, is a protocol designed for high performance SSD storage access.  In the olden days, we used to associate fibre channel and iSCSI networking options with high performance block storage. Okay, maybe not the 1Gbps iSCSI stuff, but you know what I mean. Time has passed, and the storage networking landscape has changed significantly with the introduction of All-Flash and NVMe. But NVMe’s adoption hasn’t been all smooth sailing. There have been plenty of vendors willing to put drives in storage arrays that support NVMe while doing some translation on the backend that negated the real benefits of NVMe. And, like many new technologies, it’s been a gradual process to get end-to-end NVMe in place, because enterprises, and the vendors that sell to them, only move so fast. Some vendors support NVMe, but only over FC. Others have adopted the protocol to run over RoCEv2. There’s also NVMe-TCP, in case you weren’t confused enough about what you could use. I’m doing a poor job of explaining this, so you should really just head over to Dr J Metz’s article on NVMe for beginners at SNIA.

 

Cisco Are Ready For Anything

As you’ve hopefully started to realise, you’ll see a whole bunch of NVMe implementations available in storage fabrics, along with a large number of enterprises continuing to have conversations about, and deploy, new storage equipment that uses traditional block fabrics, such as iSCSI or FC or, perish the thought, FCoE. The cool thing about Cisco MDS is that it supports all this crazy and more. If you’re running the latest and greatest end-to-end NVMe implementation and have some old block-only 8Gbps FC box sitting in the corner, they can likely help you with connectivity. The diagram below hopefully demonstrates that point.

[image courtesy of Cisco]

 

Thoughts and Further Reading

Very early in my storage career, I attended a session on MDS at Cisco Networkers Live (when they still ran those types of events in Brisbane). Being fairly new to storage, and running a smallish network of one FC4700 and 8 Unix hosts, I’d tended to focus more on the storage part of the equation rather than the network part of the SAN. Cisco was still relatively new to the storage world at that stage, and it felt a lot like it had adopted a very network-centric view of the storage world. I was a little confused as to why all the talk was about backplanes and port density, as I was more interested in the optimal RAID configuration for mail server volumes and how I should protect the data being stored on this somewhat sensitive piece of storage. As time went on, I was invariably exposed to larger and larger environments where decisions around core and edge storage networking devices started to become more and more critical to getting optimal performance out of the environment. A lot of the information I was exposed to in that early MDS session started to make more sense (particularly as I was tasked with deploying larger and larger MDS-based fabrics).

Things have obviously changed quite a bit since those heady days of a network upstart making waves in the storage world. We’ve seen increases in network speeds become more and more common in the data centre, and we’re no longer struggling to get as many IOPS as we can out of 5400 RPM PATA drives with an interposer and some slightly weird firmware. What has become apparent, I think, is the importance of the fabric when it comes to getting access to storage resources in a timely fashion, and with the required performance. As enterprises scale up and out, and more and more hosts and applications connect to centralised storage resources, it doesn’t matter how fast those storage resources are if there’s latency in the fabric.

The SAN still has a place in the enterprise, despite what the DAS huggers will tell you, and you can get some great performance out of your SAN if you architect it appropriately. Cisco certainly seems to have an option for pretty much everything when it comes to storage (and network) fabrics. It also has a great story when it comes to fabric visibility, and the scale and performance at the top end of its MDS range is pretty impressive. In my mind, though, the key really is the variety of options available when building a storage network. It’s something that shouldn’t be underestimated given the plethora of options available in the market.

Storage Field Day 20 – (Fairly) Full Disclosure

Disclaimer: I recently attended Storage Field Day 20.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Here are my notes on gifts, etc, that I received as a conference attendee at Storage Field Day 20. This is by no stretch an interesting post from a technical perspective, but it’s a way for me to track and publicly disclose what I get and how it looks when I write about various things. With all of … this stuff … happening, it’s not going to be as lengthy as normal, but I did receive a couple of boxes of stuff in the mail, so I wanted to disclose it.

The Tech Field Day team sent:

  • Keyboard cloth – a really useful thing to save the monitor on my laptop from being bashed against the keyboard.
  • Tote bag
  • Patch
  • Badge
  • Stickers

VAST Data contributed:

  • h2go 17 oz Stainless Steel bottle
  • Trucker cap
  • Pen
  • Notepad
  • Sunglasses
  • Astronaut figurine
  • Stickers

Nebulon chipped in with a:

  • Hip flask
  • Shoelaces
  • Socks
  • Stickers

Qumulo dropped in some socks and a sticker, while Pure Storage sent over a Pure Storage branded Timbuk2 Backpack and a wooden puzzle.

The Intel Optane team kindly included:

  • Intel Optane Nike golf shirt
  • Intel Optane travel tumbler
  • Intel Optane USB phone charging cable
  • Stickers
  • Tote bag
  • Notepad
  • Flashing Badge
  • Pin
  • Socks

My Secret Santa gift was also in one of the boxes, and I was lucky enough to receive a:

It wasn’t fancy food and limos this time around. But it was nonetheless an enjoyable event. Thanks again to Stephen and the team for having me back.

Storage Field Day 20 – I’ll Be At Storage Field Day 20

Here’s some news that will get you excited. I’ll be virtually heading to the US this week for another Storage Field Day event. If you haven’t heard of the very excellent Tech Field Day events, you should check them out. I’m looking forward to time travel and spending time with some really smart people for a few days. It’s also worth checking back on the Storage Field Day 20 website during the event (August 5 – 7) as there’ll be video streaming and updated links to additional content. You can also see the list of delegates and event-related articles that have been published.

I think it’s a great line-up of both delegates and presenting companies this time around. I know most of them, but there may also still be a few companies added to the line-up. I’ll update this if and when they’re announced.

I’d like to publicly thank in advance the nice folks from Tech Field Day who’ve seen fit to have me back, as well as my employer for letting me take time off to attend these events. Also big thanks to the companies presenting. It’s going to be a lot of fun. And a little weird to be doing this virtually, rather than in person. But I’m really looking forward to this, even if it means doing the night shift for a few days. If you’d like to follow along at home, here’s the current schedule (all times are in US/Pacific).

Wednesday, Aug 5 8:00-10:00 Pensando Presents at Storage Field Day 20
Wednesday, Aug 5 11:00-13:00 Cisco Presents at Storage Field Day 20
Thursday, Aug 6 8:00-9:00 Qumulo Presents at Storage Field Day 20
Thursday, Aug 6 10:00-12:00 Nebulon Presents at Storage Field Day 20
Thursday, Aug 6 13:00-14:00 Intel Presents at Storage Field Day 20
Friday, Aug 7 8:00-9:30 VAST Data Presents at Storage Field Day 20
Friday, Aug 7 11:00-13:00 Pure Storage Presents at Storage Field Day 20