Random Short Take #35

Welcome to Random Short Take #35. Some really good players have worn 35 in the NBA, including The Big Dog Antoine Carr, and Reggie Lewis. This one, though, goes out to one of my favourite players from the modern era, Kevin Durant. If it feels like it’s only been a week since the last post, that’s because it has. I bet you wish that I was producing some content that’s more useful than a bunch of links. So do I.

  • I don’t often get excited about funding rounds, but I have a friend who works there, so here’s an article covering VAST Data’s latest (Series C) funding round.
  • Datadobi continue to share good news in these challenging times, and have published a success story based on some work they’ve done with Payspan.
  • Speaking of challenging times, the nice folks at Retrospect are offering a free 90-day license subscription for Retrospect Backup. You don’t need a credit card to sign up, and “[a]ll backups can be restored, even if the subscription is cancelled”.
  • I loved this post from Russ discussing a recent article on Facebook and learning from network failures at scale. I particularly like the idea that you can’t automate your way out of misconfiguration. We’ve been talking a lot about this in my day job lately. Automation can be a really exciting concept, but it’s not magic. And as scale increases, so too does the time it takes to troubleshoot issues. It all seems like a straightforward concept, but you’d be surprised how many people are surprised by these ideas.
  • Software continues to dominate the headlines, but hardware still has a role to play in the world. Alastair talks more about that idea here.
  • Paul Stringfellow recently jumped on the Storage Unpacked podcast to talk storage myths versus reality. Worth listening to.
  • It’s not all good news though. Sometimes people make mistakes, and pull out the wrong cables. This is a story I’ll be sharing with my team about resiliency.
  • SMR drives and consumer NAS devices aren’t necessarily the best combo. So this isn’t the best news either. I’m patiently waiting for consumer Flash drive prices to come down. It’s going to take a while though.

 

Western Digital, Composable Infrastructure, Hyperscalers, And You

Disclaimer: I recently attended Storage Field Day 19.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Western Digital recently presented at Storage Field Day 19. You can see videos of the presentation here, and download my rough notes from here.

 

Composability and Scale

NVMe-oF

Scott Hamilton (Senior Director of Product Management and Marketing, Data Center Platforms BU) kicked off his presentation by describing composable infrastructure as “AirBnB for storage”. The requirements for data centre storage at scale are increasing exponentially every day. There are a number of challenges when you get to Zettabyte scale:

  • Shared-nothing model strands resources;
  • Lack of agility results in SKU explosion; and
  • New use cases move GPUs to the data.

NVMe over Fabrics (NVMe-oF) provides the solution to these issues. It became a standard in 2016, and provides:

  • Low latency – delivers latencies on par with NVMe SSDs inside x86 servers;
  • High-performance sharing – NVMe-oF attached SSDs can be shared among hundreds of application servers resulting in higher utilisation and lower TCO; and
  • Data access and mobility – Fabric-attached data enables cloud-like dynamic access and workload mobility.
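The high-performance sharing point is easy to see with some toy arithmetic. This is a deliberately simplified sketch, and every figure in it is made up for illustration (nothing here came from Western Digital’s presentation):

```python
# Toy model: when every server owns its own SSDs, unused capacity is stranded
# inside each box; a shared fabric-attached pool only needs to cover the
# aggregate demand. All numbers are illustrative.

def stranded_capacity(per_server_tb, used_tb):
    """Idle capacity when each server owns its own drives."""
    return sum(per_server_tb - used for used in used_tb)

def pooled_idle_capacity(pool_tb, used_tb):
    """Idle capacity when all servers draw from one shared pool."""
    return pool_tb - sum(used_tb)

demand = [2.0, 9.5, 1.0, 4.0, 8.0, 3.0, 0.5, 6.0, 2.5, 3.5]  # TB used per server
local_idle = stranded_capacity(10.0, demand)      # ten 10TB servers: 60TB sits idle
shared_idle = pooled_idle_capacity(60.0, demand)  # a 60TB shared pool leaves 20TB headroom
```

The same aggregate demand gets covered with a lot less raw flash, which is where the “higher utilisation and lower TCO” claim comes from.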

These are the sort of characteristics that enable composable infrastructure to really shine. Hamilton also said that “[b]y 2023, 50% of SSA shipments deployed to support primary storage workloads will be based on end-to-end NVMe technology. Up from less than 2% in 2019”. It seems clear that end-to-end NVMe is what a lot of the kids will be getting into.

Composability and Momentum

Western Digital’s composable infrastructure line has been slowly gaining momentum. OpenFlex is now shipping, and the Open Composable API is available on Open Compute platforms. Western Digital also recently acquired Kazan Networks to accelerate its NVMe ambitions. And the Open Composable Interoperability Lab was recently announced.

 

Thoughts and Further Reading

We tell most folks in the enterprise that they’re not hyperscalers – so they shouldn’t try to behave like a hyperscaler. But a lot of the key infrastructure vendors are heavily focused on servicing the hyperscaler market. It’s a big market, and there’s a lot of money to be had selling to hyperscalers. Why is this important? It strikes me that the concept of composable infrastructure meets a lot of the needs of companies doing compute, storage, and networking at massive scale. Does that mean that Joe Enterprise doesn’t need to worry about how composable infrastructure can help him? Not at all. It just means that some of the early implementations of the technology may not make sense if he’s not operating at a particular scale. The good news is that this architecture will continue to be refined by the likes of Western Digital, and in time we’ll see it adapted to the needs of the enterprise market.

Western Digital presented on a wide range of topics at Storage Field Day 19. There was talk of how the gaming industry was impacting the storage industry, how the Internet of Things was driving development of edge-based infrastructure, and how all of these activities were continuing to push the limits of traditional storage designs. Is composable infrastructure the AirBnB of storage? Maybe, maybe not. Some of this will ultimately depend on the uptake of the architecture in the enterprise and commercial sectors. It’s certainly a super neat concept though, and I think it does a good job of meeting some of the more modern workload needs of infrastructure shops that operate at scale.

I’m looking forward to the day when this kind of technology becomes broadly accessible. The idea of being able to optimise my data centre based on the types of workloads I need to run is extremely appealing. Not every workload is the same, and some things need to run at the edge, some in the cloud, and some in the core. Adopting an architecture in the DC that can adapt to those kind of fluid requirements seems like a great idea. I don’t know that we’re there just yet, but some of that is as much about the maturity of most infrastructure shops as it is about the technology they’re using to serve up workloads to the business. For another view on Western Digital, check out Keiran’s post here, and Chin-Fah posted some interesting thoughts here and here.

Western Digital Are Keeping Composed

Disclaimer: I recently attended Storage Field Day 18.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Western Digital recently presented at Storage Field Day 18. You can see videos of their presentation here, and download my rough notes from here.

 

Getting Composed

Scott Hamilton (Senior Director, Product Management) spoke to the delegates about Western Digital’s vision for composable infrastructure. I’m the first to admit that I haven’t really paid enough attention to composability in the recent past, although I do know that it messes with my computer’s spell check mechanism – so it must be new and disruptive.

There’s Work To Be Done

Hamilton spoke a little about the increasingly dynamic workloads in the DC, with a recent study showing that:

  • only 45% of compute hours and storage capacity are utilised; and
  • 70% of respondents report inefficiencies in the time required to provision compute and storage resources.

There are clearly greater demands on:

  • Scalability
  • Efficiency
  • Agility
  • Performance

Path to Composability

I remember a few years ago when I was presenting to customers about hyper-converged solutions. I’d talk about the path to HCI, with build-it-yourself being the first step, followed by converged, and then hyper-converged. The path to composable is similar, with converged and hyper-converged being the precursor architectures in the modern DC.

Converged

  • Preconfigured hardware / software for a specific application and workload (think EMC Vblock or NetApp FlexPod)

Hyper-Converged

  • Software-defined with deeper levels of abstraction and automation (think Nutanix or EMC’s VxRail)

Composable

  • Disaggregated compute and storage resources
  • Shared pool of resources that can be composed and made available on demand

[image courtesy of Western Digital]

The idea is that you have a bunch of disaggregated resources that can be used as a pool for various applications or hosts. In this architecture, there are:

  • No physical systems – only composed systems;
  • No established hierarchy – CPU doesn’t own the GPU or the memory; and
  • All elements are peers on the network and they communicate with each other.
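As a thought experiment, the “no hierarchy, everything is a peer in a pool” model can be sketched in a few lines. This is purely illustrative Python, not Western Digital’s Open Composable API; all of the names here are hypothetical:

```python
# A composed "system" is just a temporary binding of free peers from shared
# pools; releasing it hands everything back for reuse. Hypothetical sketch.

class Pool:
    def __init__(self, resources):
        self.resources = resources  # e.g. {"cpu": ["cpu0"], "gpu": ["gpu0"]}

    def compose(self, **wanted):
        """Carve a logical system out of the free pools; nothing moves physically."""
        system = {}
        for kind, count in wanted.items():
            free = self.resources[kind]
            if len(free) < count:
                raise RuntimeError(f"not enough free {kind} resources")
            system[kind] = [free.pop() for _ in range(count)]
        return system

    def release(self, system):
        """Return a composed system's resources to the pools."""
        for kind, members in system.items():
            self.resources[kind].extend(members)

pool = Pool({"cpu": ["cpu0", "cpu1"], "gpu": ["gpu0"], "ssd": ["ssd0", "ssd1", "ssd2"]})
ml_box = pool.compose(cpu=1, gpu=1, ssd=2)  # composed on demand for one workload
pool.release(ml_box)                        # and returned when the job finishes
```

Note that the GPU never “belongs” to a CPU here; both are just pool members that get bound together for as long as a workload needs them.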

 

Can You See It?

Western Digital outlined their vision for composable infrastructure thusly:

Composable Infrastructure Vision

  • Open – open in both form factor and API for management and orchestration of composable resources
  • Scalable – independent performance and capacity scaling from rack-level to multi-rack
  • Disaggregated – true disaggregation of storage and compute for independent scaling to maximise efficiency and agility, and to reduce TCO
  • Extensible – flash, disk and future composable entities can be independently scaled, managed and shared over the same fabric

Western Digital’s Open Composability API is also designed for DC composability, with:

  • Logical composability of resources abstracted from the underlying physical hardware; and
  • Discovery, assembly, and composition of self-virtualised resources via peer-to-peer communication.

The idea is that it enables virtual system composition of existing HCI and next-generation SCI environments. It also:

  • Future proofs the transition from hyper-converged to disaggregated architectures
  • Complements existing Redfish / Swordfish usage

You can read more about OpenFlex here. There’s also an excellent technical brief from Western Digital that you can access here.

 

OpenFlex Composable Infrastructure

We’re talking about infrastructure to support an architecture though. In this instance, Western Digital offer the following:

  • OpenFlex F3000 – Fabric device and enclosure; and
  • OpenFlex D3000 – High capacity for big data

 

F3000 and E3000

The F3000 and E3000 (F is for Flash Fabric and E is for Enclosure) have the following specifications:

  • Dual-port, high-performance, low-latency, fabric-attached SSD
  • 3U enclosure with 10 dual-port slots offering up to 614TB
  • Self-virtualised device with up to 256 namespaces for dynamic provisioning
  • Multiple storage tiers over the same wire – Flash and Disk accessed via NVMe-oF
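The “up to 256 namespaces for dynamic provisioning” line is worth unpacking: each fabric-attached device can be sliced into independently sized namespaces that get handed out to hosts piecemeal. Here’s a rough sketch of the idea; the class and the capacities are invented for illustration, not a real driver API:

```python
# Sketch of a self-virtualised fabric SSD: one device, many namespaces,
# each provisioned dynamically. Illustrative only.

MAX_NAMESPACES = 256  # the per-device limit quoted in the specs

class FabricSSD:
    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.namespaces = {}  # namespace ID -> size in GB

    def free_gb(self):
        return self.capacity_gb - sum(self.namespaces.values())

    def create_namespace(self, size_gb):
        if len(self.namespaces) >= MAX_NAMESPACES:
            raise RuntimeError("namespace limit reached")
        if size_gb > self.free_gb():
            raise RuntimeError("not enough free capacity")
        nsid = max(self.namespaces, default=0) + 1
        self.namespaces[nsid] = size_gb
        return nsid

ssd = FabricSSD(capacity_gb=61_400)      # a notional 61.4TB dual-port device
ns_small = ssd.create_namespace(4_000)   # 4TB carved out for one host
ns_large = ssd.create_namespace(16_000)  # 16TB for another, over the same wire
```

The point is that capacity gets allocated in whatever increments hosts actually need, rather than in whole-drive units.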

D3000

The D3000 (D is for Disk / Dense) has the following specifications:

  • Dual-port fabric-attached high-capacity device to balance cost and capacity
  • 1U network addressable device offering up to 168TB
  • Self-virtualised device with up to 256 namespaces for dynamic provisioning
  • Multiple storage tiers over the same wire – Flash and Disk accessed via NVMe-oF

You can get a better look at them here.

 

Thoughts and Further Reading

Western Digital covered an awful lot of ground in their presentation at Storage Field Day 18. I like the story behind a lot of what they’re selling, particularly the storage part of it. I’m still playing wait and see when it comes to the composability story. I’m a massive fan of the concept. It’s my opinion that virtualisation gave us an inkling of what could be done in terms of DC resource consumption, but there’s still an awful lot of resources wasted in modern deployments. Technologies such as containers help a bit with that resource control issue, but I’m not sure the enterprise can effectively leverage them in their current iteration, primarily because the enterprise is very, well, enterprise-y.

Composability, on the other hand, might just be the kind of thing that can free the average enterprise IT shop from the shackles of resource management ineptitude that they’ve traditionally struggled with. Much like the public cloud has helped (and created consumption problems), so too could composable infrastructure. This is assuming that we don’t try and slap older style thinking on top of the infrastructure. I’ve seen environments where operations staff needed to submit change requests to perform vMotions of VMs from one host to another. So, like anything, some super cool technology isn’t going to magically fix your broken processes. But the idea is so cool, and if companies like Western Digital can continue to push the boundaries of what’s possible with the infrastructure, there’s at least a chance that things will improve.

If you’d like to read more about the storage-y part of Western Digital, check out Chin-Fah’s post here, Erik’s post here, and Jon’s post here. There was also some talk about dual actuator drives as well. Matt Leib wrote some thoughts on that. Look for more in this space, as I think it’s starting to really heat up.

Western Digital – The A Is For Active, The S Is For Scale

Disclaimer: I recently attended Storage Field Day 15.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

   

Western Digital recently presented at Storage Field Day 15. You might recall there are a few different brands under the WD umbrella, including Tegile and HGST, and folks from both presented during Storage Field Day 15. I’d like to talk about the ActiveScale session, however, mainly because I’m interested in object solutions. I’ve written about Tegile previously, although obviously a fair bit has changed for them too. You can see their videos from Storage Field Day 15 here, and download a PDF copy of my rough notes from here.

 

ActiveScale, Probably Not What You Thought It Was

ActiveScale isn’t some kind of weight measurement tool for exercise fanatics, but rather the brand of scalable object system that HGST sells. It comes in two flavours: the P100 and X100. Apparently the letters in product names sometimes do mean things, with the “P” standing for Petabyte, and the “X” for Exabyte (possibly in the same way that X stands for Excellent). From a speeds and feeds perspective, the typical specs are as follows:

  • P100 – starts as low as 720TB, goes to 18PB. 17x 9s data durability, 4.6KVA typical power consumption; and
  • X100 – 5.4PB in a rack, 840TB – 52PB, 17x 9s data durability, 6.5KVA typical power consumption.
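For a sense of what “17x 9s data durability” means in practice, some back-of-envelope arithmetic helps. The billion-object fleet below is an arbitrary assumption for illustration, not a figure from the presentation:

```python
# Seventeen nines of durability implies an annual loss probability of 1e-17
# per object. Even across a billion objects, expected annual losses are tiny.

annual_loss_probability = 10 ** -17     # durability of 0.99999999999999999
objects_stored = 1_000_000_000          # assumed fleet size
expected_losses_per_year = objects_stored * annual_loss_probability
# on the order of 1e-8 expected losses per year across the whole fleet
```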

You can scale out to 9 expansion racks, with 52PB of scale out object storage goodness per namespace. Some of the key capabilities of the ActiveScale platform include:

  • Archive and Backup;
  • Active Data for Analytics;
  • Data Forever Architecture;
  • Versioning;
  • Encryption;
  • Replication;
  • Single Pane Management;
  • S3 Compatible APIs;
  • Multi-Geo Availability Zones; and
  • Scale Up and Scale Out.

They use “BitSpread” for dynamic data placement and you can read a little about their erasure coding mechanism here. “BitDynamics” assures continuous data integrity, offering the following features:

  • Background – verification process always running
  • Performance – not impacted by verification or repair
  • Automatic – all repairs happen with no intervention

There’s also a feature called “GeoSpread” for geographical availability:

  • Single – Distributed erasure coded copy;
  • Available – Can sustain the loss of an entire site; and
  • Efficient – Better than 2 or 3 copy replication.
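The “better than 2 or 3 copy replication” claim comes down to erasure coding overhead. The k=10 data / m=6 parity split below is an assumed example for the sake of the arithmetic, not GeoSpread’s actual layout:

```python
# Raw capacity stored per usable byte: a single erasure coded copy spread
# across sites needs far less raw capacity than whole-object replication.

def ec_overhead(k, m):
    """Raw-to-usable ratio for k data shards plus m parity shards."""
    return (k + m) / k

def replication_overhead(copies):
    """Raw-to-usable ratio for whole-object replication."""
    return float(copies)

erasure = ec_overhead(10, 6)       # 1.6x raw, survives the loss of any 6 shards
triple = replication_overhead(3)   # 3.0x raw, survives the loss of 2 copies
```

Spread those 16 shards across sites and you can lose an entire site while still paying roughly half the capacity cost of keeping three full copies, which is the essence of the three bullets above.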

 

What Do I Use It For Again?

Like a number of other object storage systems in the market, ActiveScale is being positioned as a very suitable platform for:

  • Media & Entertainment
    • Media Archive
    • Tape replacement and augmentation
    • Transcoding
    • Playout
  • Life Sciences
    • Bio imaging
    • Genomic Sequencing
  • Analytics

 

Thoughts And Further Reading

Unlike a lot of people, I find technical sessions discussing object storage at extremely large scale to be really interesting. It’s weird, I know, but there’s something that I really like about the idea of petabytes of storage servicing media and entertainment workloads. Maybe it’s because I don’t frequently come across these types of platforms in my day job. If I’m lucky I get to talk to folks about using object as a scalable archive platform. Occasionally I’ll bump into someone doing life sciences work in a higher education setting, but they’ve invariably built something that’s a little more home-brew than HGST’s offering. Every now and then I’m lucky enough to spend some time with media types who regale me with tales of things that go terribly wrong when the wrong bit of storage infrastructure is put in the path of a particular editing workflow or transcode process. Oh how we laugh. I can certainly see these types of scalable platforms being a good fit for archive and tape replacement. I’m not entirely convinced they make for a great transcode or playout platform, but I’m relatively naive when it comes to those kinds of workloads. If there are folks reading this who are familiar with that kind of stuff, I’d love to have a chat.

But enough with my fascination with the media and entertainment industry’s infrastructure requirements. From what I’ve seen of ActiveScale, it looks to be a solid platform with a lot of very useful features. Coupled with the cloud management feature it seems like they’re worth a look. Western Digital aren’t just making hard drives for your NAS (and other devices), they’re doing a whole lot more, and a lot of it is really cool. You can read El Reg’s article on the X100 here.