Random Short Take #65

Welcome to Random Short Take #65. Last one for the year, I think.

  • First up, this handy article from Steve Onofaro on replacing certificates in VMware Cloud Director 10.3.1.
  • Speaking of cloud, I enjoyed this article from Chris M. Evans on the AWS “wobble” (as he puts it) in us-east-1 recently. Speaking of articles Chris has written recently, check out his coverage of the Pure Storage FlashArray//XL announcement.
  • Speaking of Pure Storage, my friend Jon wrote about his experience with ActiveCluster in the field recently. You can find that here. I always find these articles to be invaluable, if only because they demonstrate what’s happening out there in the real world.
  • Want some press releases? Here’s one from Datadobi announcing it has released new Starter Packs for DobiMigrate ranging from 1PB up to 7PB.
  • Data protection isn’t just something you do at the office – it’s a problem for home too. I’m always interested to hear how other people tackle the problem. This article from Jeff Geerling (and the associated documentation on GitHub) was great.
  • John Nicholson is a smart guy, so I think you should check out his articles on benchmarking (and what folks are getting wrong). At the moment this is a 2-part series, but I suspect that could be expanded. You can find Part 1 here and Part 2 here. He makes a great point that benchmarking can be valuable, but benchmarking like it’s 1999 may not be the best thing to do (I’m paraphrasing).
  • Speaking of smart people, Tom Andry put together a great article recently on dispelling myths around subwoofers. If you or a loved one are getting worked up about subwoofers, check out this article.
  • I had people ask me if I was doing a predictions post this year. I’m not crazy enough to do that, but Mellor is. You can read his article here.

In some personal news (and it’s not LinkedIn official yet) I recently quit my job and will be taking up a new role in the new year. I’m not shutting the blog down, but you might see a bit of a change in the content. I can’t see myself stopping these articles, but it’s likely there’ll be fewer of the data protection how-to articles being published. But we’ll see. In any case, wherever you are, stay safe, happy holidays, and see you on the line next year.

StorCentric Announces Nexsan Unity 7.0

Nexsan (a StorCentric company) recently announced version 7.0 of its Unity software platform. I had the opportunity to speak to StorCentric CTO Surya Varanasi about the announcement and thought I’d share a few of my thoughts here.


What’s New?

In short, there’s a fair bit that’s gone into this release, and I’ll cover the highlights below.

Protocol Enhancements

The Unity platform already supported FC, iSCSI, NFS, and SMB. It now supports S3 as well, making interoperability with data protection software that supports S3 as a target even simpler. It also means you can do stuff with Object Locking, and I’ll cover that below.


[image courtesy of Nexsan]

There have also been some enhancements to the speeds supported on the Unity hardware interfaces: FC now supports up to 32Gbps, and Ethernet connectivity is available at 1/10/25/40/100GbE.

Security, Compliance and Ransomware Protection

Unity now supports immutable volume and file system snapshots for data protection. This provides secure point-in-time copies of data for business continuity. As I mentioned before, there’s also support for object locking, enabling bucket or object-level protection for a specified retention period to create immutable copies of data. This allows enterprises to address compliance, regulatory and other data protection requirements. Finally, there’s now support for pool-scrubbing to detect and remediate bit rot to avoid data corruption.
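Unity's S3 implementation details weren't covered in the briefing, but object locking in the S3 protocol is exposed through a couple of standard request parameters. Here's a minimal sketch of building a compliance-mode lock request in the boto3 style; the bucket and key names are purely hypothetical:

```python
from datetime import datetime, timedelta, timezone

def object_lock_params(bucket: str, key: str, retention_days: int) -> dict:
    """Build the extra put_object parameters for an S3 compliance-mode lock.

    In COMPLIANCE mode the object version can't be overwritten or deleted
    by anyone until the retain-until date passes.
    """
    retain_until = datetime.now(timezone.utc) + timedelta(days=retention_days)
    return {
        "Bucket": bucket,
        "Key": key,
        "ObjectLockMode": "COMPLIANCE",
        "ObjectLockRetainUntilDate": retain_until,
    }

# Hypothetical usage with boto3 against an S3-compatible endpoint:
#   s3.put_object(Body=backup_bytes, **object_lock_params("backups", "nightly.tar", 30))
params = object_lock_params("backups", "nightly.tar", 30)
```

The lock must be requested at write time (or applied via a bucket default); that's what makes the copy immutable rather than merely access-controlled.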

Performance Improvements

There have been increases in total throughput capability, with Varanasi telling me that total throughput has increased up to 13GB/s on existing platforms. There’s also been a significant improvement in the Unity to Assureon ingestion rate. I’ve written a little about the Unbreakable Backup solution before, and there’s a lot to like about the architecture.

[image courtesy of Nexsan]


Thoughts

This is the first time that Nexsan has announced enhancements to its Unity platform without incorporating some kind of hardware refresh, so the company is testing the waters in some respects. I think it’s great when storage companies are able to upgrade their existing hardware platforms with software, offering improved performance and functionality. There’s a lot to like in this release, particularly when it comes to the improved security and data integrity capabilities. Sure, not everyone wants object storage available on their midrange storage array, but this makes it a lot more accessible, particularly if you only need a few hundred terabytes of object. The object lock capability, along with the immutable snapshotting for SMB and NFS users, really helps improve the overall integrity and resiliency of the platform as well.

StorCentric now has a pretty broad portfolio of storage and data protection products available, and you can see the integrations between the different lines are only going to increase as time goes on. The company has been positioning itself as a data-centric company for some time, and working hard to ensure that improved security is a big part of that solution. I think there’s a great story here for customers looking to leverage one vendor to deliver storage, data protection, and data security capabilities into the enterprise. The bad guys in hoodies are always looking for ways to make your day unpleasant, so when vendors are working to tighten up their integrations across a variety of products, it can only be a good thing in terms of improving the resilience and availability of your critical information assets. I’m looking forward to hearing what’s next with Nexsan and StorCentric.

Pure Storage – A Few Thoughts on Pure as-a-Service

I caught up with Matt Oostveen from Pure Storage in August to talk about Pure as-a-Service. It’s been a while since any announcements were made, but I’ve been meaning to write up a few notes on the offering and what I thought of it. So here we are.


What Is It?

Oostveen describes Pure Storage as a “software company that sells storage arrays”. The focus at Pure has always been on giving the customer an exceptional experience, which invariably means controlling the stack from end to end. To that end, Pure as-a-Service could be described more as a feat of financial, rather than technical, engineering. You’re “billed on actual consumption, with minimum commitments starting at 50 TiB”. Also of note is the burst capability, which gives you a level of comfort in understanding both the floor and the ceiling of your consumption. You can choose what kind of storage you want – block, file, or object – and you get access to orchestration tools to manage everything. You also get access to Evergreen Storage, so your hardware stays up to date, and it’s all available in four easy-to-understand tiers of storage.
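To make the floor-and-ceiling idea concrete, here's a toy consumption-billing calculation. The per-TiB rates are invented for illustration and bear no relation to Pure's actual pricing:

```python
def monthly_charge_tib(used_tib: float,
                       committed_tib: float = 50.0,
                       committed_rate: float = 10.0,
                       burst_rate: float = 12.0) -> float:
    """Consumption bill with a committed floor and an on-demand burst rate.

    You always pay for the committed floor (the 50 TiB minimum); anything
    consumed above it is billed at the (typically higher) burst rate.
    Rates here are made-up per-TiB numbers for illustration only.
    """
    burst_tib = max(0.0, used_tib - committed_tib)
    return committed_tib * committed_rate + burst_tib * burst_rate
```

The attraction of this model is predictability: the committed floor sets the minimum monthly spend, and the burst rate bounds what a spike will cost.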


Why Is It?

In this instance, I think the what isn’t as interesting as the why. Oostveen and I spoke about the need for a true utility model to enable companies to deliver on the promise of digital transformation. He noted that many of the big transactions that were occurring were CFO to CFO engagements, rather than the CTO deciding on the path forward for applications and infrastructure. In short, price is always a driver, and simplicity is also very important. Pure has worked to ensure that the offering delivers on both of those fronts.


Thoughts

IT is complicated nowadays. You’re dealing with cloud, SaaS, micro-SaaS, distributed, and personalised IT. You’re invariably trying to accommodate the role of data in your organisation, and you’re no doubt facing challenges with getting applications running not just in your core, but also in the cloud and at the edge. We talk a lot about how infrastructure can be used to solve a number of the challenges facing organisations, but I have no doubt that if most business leaders never had to deal with infrastructure and the associated challenges it presents they’d be over the moon. Offerings like Pure as-a-Service go some of the way to elevating that conversation from speeds and feeds to something more aligned with business outcomes. It strikes me that these kinds of offerings will have great appeal both to the folks in charge of finance inside big enterprises and potentially to the technical folk trying to keep the lights on whilst a budget decrease gets lobbed at them every year.

I’ve written about Pure enthusiastically in the past because I think the company has a great grasp of some of the challenges that many organisations are facing nowadays. I think that the expansion into other parts of the cloud ecosystem, combined with a willingness to offer flexible consumption models for solutions that were traditionally offered as lease or buy is great. But I don’t think this makes sense without everything that Pure has done previously as a company, from the focus on getting the most out of All-Flash hardware, to a relentless drive for customer satisfaction, to the willingness to take a chance on solutions that are a little outside the traditional purview of a storage array company.

As I’ve said many times before, IT can be hard. There are a lot of things that you need to consider when evaluating the most suitable platform for your applications. Pure Storage isn’t the only game in town, but in terms of storage vendors offering flexible and powerful storage solutions across a variety of topologies, it seems to be a pretty compelling one, and definitely worth checking out.

22dot6 Releases TASS Cloud Suite

22dot6 sprang from stealth in May 2021, and recently announced its TASS Cloud Suite. I had the opportunity to once again catch up with Diamond Lauffin about the announcement, and thought I’d share some thoughts here.


The Product

If you’re unfamiliar with the 22dot6 product, it’s basically a software or hardware-based storage offering that delivers:

  • File and storage management
  • Enterprise-class data services
  • Data and systems profiling and analytics
  • Performance and scalability
  • Virtual, physical, and cloud capabilities, with NFS, SMB, and S3 mixed protocol support

According to Lauffin, it’s built on a scale-out, parallel architecture, and can deliver great pricing and performance per GiB.

Components

It’s Linux-based, and can leverage any bare-metal machine or VM. Metadata services live on scale-out, redundant nodes (VSR nodes), and data services are handled via single, clustered, or redundant nodes (DSX nodes).

[image courtesy of 22dot6]

TASS

The key to this all making some kind of sense is TASS (the Transcendent Abstractive Storage System). 22dot6 describes this as a “purpose-built, objective based software integrating users, applications and data services with physical, virtual and cloud-based architectures globally”. Sounds impressive, doesn’t it? Valence is the software that drives everything, providing the ability to deliver NAS and object over physical and virtual storage, in on-premises, hybrid, or public cloud deployments. It’s multi-vendor capable, offering support for third-party storage systems, and does some really neat stuff with analytics to ensure your storage is performing the way you need it to.


The Announcement

22dot6 has announced the TASS Cloud Suite, an “expanded collection of cloud specific features to enhance its universal storage software Valence”. Aimed at solving many of the typical problems users face when using cloud storage, it addresses:

  • Private cloud, with a “point-and-click transcendent capability to easily create an elastic, scale-on-demand, any storage, anywhere, private cloud architecture”
  • Hybrid cloud, by combining local and cloud resources into one big pool of storage
  • Cloud migration and mobility, with a “zero stub, zero pointer” architecture
  • Cloud-based NAS / Block / S3 Object consolidation, with a “transparent, multi-protocol, cross-platform support for all security and permissions with a single point-and-click”

There’s also support for cloud-based data protection, WORM encoding of data, and a comprehensive suite of analytics and reporting.


Thoughts and Further Reading

I’ve had the pleasure of speaking to Lauffin about 22dot6 on two occasions now, and I’m convinced that he’s probably one of the most enthusiastic storage company founders / CEOs I’ve ever been briefed by. He’s certainly been around for a while, and has seen a whole bunch of stuff. In writing this post I’ve had a hard time articulating everything that Lauffin tells me 22dot6 can do, while staying focused on the cloud part of the announcement. Clearly I should have done an overview post in May and then I could just point you to that. In short, go have a look at the website and you’ll see that there’s quite a bit going on with this product.

The solution seeks to address a whole raft of issues that anyone familiar with modern storage systems will have come across at one stage or another. I remain continually intrigued by how various solutions work to address storage virtualisation challenges, while still making a system that works in a seamless manner. Then try and do that at scale, and in multiple geographical locations across the world. It’s not a terribly easy problem to solve, and if Lauffin and his team can actually pull it off, they’ll be well placed to dominate the storage market in the near future.

Spend any time with Lauffin and you realise that everything about 22dot6 speaks to many of the lessons learned over years of experience in the storage industry, and it’s refreshing to see a company trying to take on such a wide range of challenges and fix everything that’s wrong with modern storage systems. What I can’t say for sure, having never had any real stick time with the solution, is whether it works. In Lauffin’s defence, he has offered to get me in contact with some folks for a demo, and I’ll be taking him up on that offer. There’s a lot to like about what 22dot6 is trying to do here, with the TASS Cloud Suite being a small part of the bigger picture. I’m looking forward to seeing how this goes for 22dot6 over the next year or two, and will report back after I’ve had a demo.

StorONE Announces S1:Backup

StorONE recently announced details of its S1:Backup product. I had the opportunity to talk with Gal Naor and George Crump about the news and thought I’d share some brief thoughts here.


The Problem

Talk to people in the tech sector today, and you’ll possibly hear a fair bit about how ransomware is a real problem for them, and a scary one at that. Most of the data protection solution vendors are talking about how they can help customers quickly recover from ransomware events, and some are particularly excited about how they can let you know you’ve been hit in a timely fashion. Which is great. A good data protection solution is definitely important to an organisation’s ability to rapidly recover when things go pop. But what about those software-based solutions that have themselves become targets of the ransomware gangs? What do you do when someone goes after both your primary and secondary storage solution? It costs a lot of money to deliver immutable solutions that are resilient to the nastiness associated with ransomware. Unfortunately, most organisations continue to treat data protection as an overpriced insurance policy and are reluctant to spend more than the bare minimum to keep these types of solutions going. It’s alarming the number of times I’ve spoken to customers using software-based data protection solutions that are out of support with the vendor just to save a few thousand dollars a year in maintenance costs.


The StorONE Solution

So what do you get with S1:Backup? Quite a bit, as it happens.

[image courtesy of StorONE]

You get Flash-based data ingestion in an immutable format, with snapshots being taken every 30 seconds.

[image courtesy of StorONE]

You also get fast consolidation of multiple incremental backup jobs (think synthetic fulls, etc.), thanks to the high performance of the StorONE platform. Speaking of performance, you also get quick recovery capabilities, and the other benefits of the StorONE platform (namely high availability and high performance).

And if you’re looking for long-term retention that’s affordable, you can take advantage of StorONE’s ability to cope well with 90% capacity utilisation, rapid RAID rebuild times, and the ability to start small and grow.
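The synthetic full idea mentioned above can be sketched in a few lines. This models backups as simple dictionaries and is the general technique, not StorONE's implementation:

```python
def synthetic_full(last_full: dict, incrementals: list) -> dict:
    """Consolidate incremental change sets over the last full backup.

    Each backup is modelled as {path: content}. Replaying incrementals in
    order (newest last) yields a new "synthetic" full without re-reading
    the production source -- all the work lands on the backup storage,
    which is why the platform's ingest and merge performance matters.
    """
    full = dict(last_full)
    for inc in incrementals:
        full.update(inc)
    return full
```

Because the merge is pure backup-side I/O, a fast backup target turns what is traditionally a slow weekend job into something you can run far more often.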


Thoughts and Further Reading

Ransomware is a big problem, particularly when it hits you across both primary and secondary storage platforms. Storage immutability has become a super important piece of the puzzle that vendors are trying to solve. Like many things though, it does require some level of co-operation to make sure non-integrated systems are functioning across the stack in an integrated fashion. There are all kinds of ways to attack this issue, with some hardware vendors insisting that their particular interpretation of immutability is the only way to go, while some software vendors are quite keen on architecting air gaps into solutions to get around the problem. And I’m sure there’s a tape guy sitting up the back muttering about how tape is the ultimate air gap. Whichever way you want to look at it, I don’t think any one vendor has a solution that is 100% guaranteed to keep you safe from the folks in hoodies intent on trashing your data. So I’m pleased that StorONE is looking at this problem and wanting to work with the major vendors to develop a cost-effective solution to the issue. It may not be right for everyone, and that’s fine. But on the face of it, it certainly looks like a compelling solution when compared to rolling your own storage platforms and hoping that you don’t get hit.

Doing data protection well is hard, and made harder by virtue of the fact that many organisations treat it as a necessary evil. Sadly, it seems that CxOs only really start to listen after they’ve been rolled, not beforehand. Sometimes the best you can do is be prepared for when disaster strikes. If something like the StorONE solution is going to be the difference between losing the whole lot, or coming back from an attack quickly, it seems like it’s worth checking out. I can assure you that ignoring the problem will only end in tears. It’s also important to remember that a robust data protection solution is just another piece of the puzzle. You still need to look at your overall security posture, including securing your assets and teaching your staff good habits. Finally, if it seems like I’m taking aim at software-based solutions, I’m not. I’m the first to acknowledge that any system is susceptible if it isn’t architected and deployed in a secure fashion – regardless of whether it’s integrated or not. Anyway, if you’d like another take on the announcement, Mellor covered it here.

Random Short Take #63

Welcome to Random Short Take #63. It’s Friday morning, and the weekend is in sight.

  • I really enjoyed this article from Glenn K. Lockwood about how just looking for an IOPS figure can be a silly thing to do, particularly with HPC workloads. “If there’s one constant in HPC, it’s that everyone hates I/O.  And there’s a good reason: it’s a waste of time because every second you wait for I/O to complete is a second you aren’t doing the math that led you to use a supercomputer in the first place.”
  • Speaking of things that are a bit silly, it seems like someone thought getting on the front foot with some competitive marketing videos was a good idea. It rarely is though.
  • Switching gears a little, you may have been messing about with Tanzu Community Edition and asking yourself how you could SSH to a node. Ask no more, as Mark has your answer.
  • Speaking of storage companies that are pretty pleased with how things are going, Weka has put out this press release on its growth.
  • Still on press releases, Imply had some good news to share at Druid Summit recently.
  • Intrigued by Portworx and want to know more? Check out these two blog posts on configuring multi-cloud application portability (here and here) – they are excellent. Hat tip to my friend Mike at Pure Storage for the links.
  • I loved this article on project heroics from Chris Wahl. I’ve got a lot more to say about this and the impact this behaviour can have on staff but some of it is best not committed to print at this stage.
  • Finally, I replaced one of my receivers recently and cursed myself once again for not using banana plugs. They just make things a bit easier to deal with.

Datadobi, DobiProtect, and Forward Progress

I recently had the opportunity to speak with Carl D’Halluin from Datadobi about DobiProtect, and thought I’d share some thoughts here. I wrote about DobiProtect in the past, particularly in relation to disaster recovery and air gaps. Things have progressed since then, as they invariably do, and there’s a bit more to the DobiProtect story now.


Ransomware Bad, Data Protection Good

If you’re paying attention to any data protection solution vendors at the moment, you’re no doubt hearing about ransomware attacks. These are considered to be Very Bad Things™.

What Happens

  • Ransomware comes in through zero-day exploit or email attachments
  • Local drive content encrypted
  • Network shares encrypted – might be fast, might be slow
  • Encrypted file accessed and ransom message appears

How It Happens

Ransomware attacks are executed via many means, including social engineering, software exploits, and “malvertising” (my second favourite non-word next to performant). The timing of these attacks is important to note as well, as some ransomware will lie dormant and launch during a specific time period (a public holiday, for example). Sometimes ransomware will slowly and periodically encrypt content, but generally speaking it will begin encrypting files as quickly as possible. It might not encrypt everything either, but you can bet that it will be a pain regardless.
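Encryption leaves a statistical fingerprint that detection tools commonly look for. This is a generic sketch of entropy-based flagging, not any particular vendor's mechanism, and the threshold is an assumption:

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte: encrypted data sits near 8, text much lower."""
    if not data:
        return 0.0
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

# A naive detector might flag files whose entropy jumps above a threshold
# after a write -- one reason slow, periodic encryption is harder to spot.
looks_encrypted = shannon_entropy(os.urandom(65536)) > 7.5
plain_text_flagged = shannon_entropy(b"the quick brown fox " * 200) > 7.5
```

Note that compressed media also scores high on this test, which is why real products combine entropy with rename patterns, change rates, and other signals.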

Defense In Depth

Ransomware protection isn’t just about data protection though. There are many layers you need to consider (and protect), including:

  • Human – hard to control, not very good at doing what they’re told.
  • Physical – securing the locations where data is stored is important.
  • Endpoints – BYOD can be a pain to manage effectively, and keeping stuff up to date seems to be challenging for even the most mature organisations.
  • Networks – there’s a lot of work that needs to go into making sure workloads are both secure and accessible.
  • Application – sometimes they’re just slapped in there and we’re happy they run.
  • Data – It’s everything, but super exposed if you don’t get the rest of this right.


DobiProtect Then?

The folks at Datadobi tell me DobiProtect is the ideal solution for protecting the data layer as part of your defence in depth strategy as it is:

  • Software defined
  • Designed for the scale and complexity of file and / or object datasets
  • A solution that complements existing capabilities such as storage system snapshots
  • Easy to deploy and does not impact existing configurations
  • A solution that is cost effective and flexible


Where Does It Fit?

DobiProtect plays to the strength of Datadobi – file and object storage. As such, it’s not designed to handle your traditional VM and DB protection; that remains the domain of the usual suspects.

[image courtesy of Datadobi]

Simple Deployment

The software-only nature of the solution, and the flexibility of going between file and object, means that it’s pretty easy to deploy as well.

[image courtesy of Datadobi]

Architecture

From an architecture perspective, it’s pretty straightforward as well, with the Core handling the orchestration and monitoring, and software proxies used for data movement.

[image courtesy of Datadobi]


Thoughts

I’ve been involved in the data protection business in some form or another for over two decades now. As you can imagine, I’ve seen a whole bunch of different ways to solve problems. In my day job I generally promote modern approaches to solving the challenge of protecting data in an efficient and cost-effective fashion. It can be hard to do this well, at scale, across the variety of workloads that you find in the modern enterprise. It’s not just some home directories, a file server, and one database that you have to protect. Now there’s SaaS workloads, 5000 different database options, containers, endpoints, and all kinds of other crazy stuff. The thing linking that all together is data, and the requirement to protect that data in order for the business to do its business – whether that’s selling widgets or providing services to the general public.

Protecting file and object workloads can be a pain. But why not just use a vendor that can roughly do the job rather than a very specific solution like DobiProtect? I asked D’Halluin the same question, and his response was along the following lines. The kind of customers Datadobi is working with on a regular basis have petabytes of unstructured data they need to protect, and they absolutely need to be sure that it’s being protected properly. Not just from a quality of recovered data perspective, but also from a defensible compliance position. It’s not just about pointing out to the auditors that the data protection solution “should” be working. There’s a lot of legislation in place to ensure that it’s more than that. So it’s oftentimes worth investing in a solution that can reliably deliver against that compliance requirement.
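Datadobi's actual verification machinery isn't detailed here, but the general idea behind defensible verification is comparing content digests of the source and the protected copy, rather than trusting a job's exit status. A minimal sketch:

```python
import hashlib

def verify_protected_copy(source: bytes, protected: bytes) -> bool:
    """Compare content digests of source data and its protected copy.

    A stored digest also gives auditors evidence that recovered data
    matches what was protected, rather than "the job said it succeeded".
    """
    return hashlib.sha256(source).digest() == hashlib.sha256(protected).digest()
```

At petabyte scale the interesting engineering is doing this per-file across billions of objects without hammering the production system, which is where the specialist tooling earns its keep.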

Ransomware attacks can be the stuff of nightmares, particularly if you aren’t prepared. Any solution that is helping you to protect yourself (and, more importantly, recover) from attacks is a Very Good Thing™. Just be sure to check that the solution you’re looking at does what you think it will do. And then check again, because it’s not a matter of if, but when.

Pure Storage – Pure1 Makes Life Easy

Disclaimer: I recently attended Storage Field Day 22.  Some expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Pure Storage recently presented at Storage Field Day 22. You can see videos of the presentation here, and download my rough notes from here.


What You Need

If you’ve spent any time working with storage infrastructure, you’ll know that it can be a pain to manage and operate in an efficient manner. Pure1 has always been a great tool to manage your Pure Storage fleet. But Pure has taken that idea of collecting and analysing a whole bunch of telemetry data and taken it even further. So what is it you need?

Management and Observation

  • Setup needs to be easy to reduce risk and accelerate delivery
  • Alerting needs to be predictive to prevent downtime
  • Management has to be done anywhere to be responsive

Planning and Upgrades

  • Determining when to buy requires forecasting to manage costs
  • Workload optimisations should be intuitive to help keep users happy
  • Non-disruptive upgrades are critical to prevent outages

Purchasing and Scaling

  • Resources should be available as a service for on-demand scaling
  • Data service purchasing should be self-service for speed and simplicity
  • Hybrid cloud should be available from one vendor, in one place


Pure1 Has It

Sounds great, so how do you get that with Pure1? Pure breaks it down into three key areas:

  • Optimise
  • Recommend
  • Empower

Optimise

Reduce the time you spend on management and take the guesswork out of support. With aggregated fleet / group metrics, you get:

  • Capacity utilisation
  • Performance
  • Data reduction savings
  • Alerts and support cases

[image courtesy of Pure Storage]

Recommend

Every organisation wants to improve the speed and accuracy of resource planning while enhancing user experience. Pure1 provides the ability to use “What-If” modelling to stay ahead of demands.

  • Select application to be added
  • Provide sizing details
  • Get recommendations based on Pure best practices and AI analysis of the company’s telemetry databases

[image courtesy of Pure Storage]

The process is alarmingly simple:

  • Pick a Workload Type – Choose a preset application type from a list of the most deployed enterprise applications, including SAP HANA, Microsoft SQL, and more.
  • Set Application Parameters – Define the size of the deployment. Attributes are auto-populated based on Pure1 analytics across its global database. Adjust as needed for your environment.
  • Simulate Deployment – Identify where you want to deploy the application data. Pure1 analyses the impact on performance and capacity.
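The arithmetic behind this sort of what-if simulation can be sketched simply. The data reduction ratio below is an assumed stand-in for the estimate Pure1 would derive from its telemetry, and the function is illustrative rather than Pure's actual model:

```python
def simulate_deployment(capacity_tib: float, used_tib: float,
                        workload_tib: float, data_reduction: float = 3.0):
    """Project array utilisation after adding a workload.

    `data_reduction` stands in for the fleet-telemetry-derived estimate a
    real tool would supply; here it's just an assumed ratio.
    Returns (projected_used_tib, projected_utilisation).
    """
    footprint = workload_tib / data_reduction      # effective on-array footprint
    projected_used = used_tib + footprint
    return projected_used, projected_used / capacity_tib

used, util = simulate_deployment(capacity_tib=100.0, used_tib=60.0, workload_tib=30.0)
```

The value of the real tool is that the inputs aren't guesses: the sizing attributes come from observed deployments across the installed base.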

Empower

Build your hybrid-cloud infrastructure your way and on demand without the headaches of legacy purchasing. Pure has a great story to tell when it comes to Pure as-a-Service and OpEx acquisition models.


Thoughts and Further Reading

In a previous job, I was a Pure1 user and found the overall experience to be tremendous. Much has changed with Pure1 since I first installed it on my phone, and it’s my opinion that the integration and usefulness of the service have both increased exponentially. The folks at Pure have always understood that it’s not enough to deliver high-performance storage solutions built on All-Flash. This is considered table-stakes nowadays. Instead, Pure has done a great job of focussing on the management and operation of these high-performance storage solutions to ensure that users get what they need from the system. I sound like a broken record, I’m sure, but it’s this relentless focus on the customer experience that I think sets Pure apart from many of its competitors.

Most of the tier 1 storage vendors have had a chop at delivering management and operations systems that make extensive use of field telemetry data and support knowledge to deliver proactive support for customers. Everyone is talking about how they use advanced analytics, AI / ML, and so on to deliver a great support experience. But I think it’s the other parts of the equation that really brings it together nicely for Pure: the “evergreen” hardware lifecycle options, the consumption flexibility, and the focus on constantly improving the day 2 operations experience that’s required when managing storage at scale in the enterprise. Add to that the willingness to embrace hybrid cloud technologies, and the expanding product portfolio, and I’m looking forward to seeing what’s next for Pure. Finally, shout out to Stan Yanitskiy for jumping in at the last minute to present when his colleague had a comms issue – I think the video shows that he handled it like a real pro.

Intel – It’s About Getting The Right Kind Of Fast At The Edge

Disclaimer: I recently attended Storage Field Day 22.  Some expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Intel recently presented at Storage Field Day 22. You can see videos of the presentation here, and download my rough notes from here.


The Problem

A lot of countries have used lockdowns as a way to combat the community transmission of COVID-19. Apparently, this has led to an uptick in the consumption of streaming media services. If you’re somewhat familiar with streaming media services, you’ll understand that your favourite episode of Hogan’s Heroes isn’t being delivered from a giant storage device sitting in the bowels of your streaming media provider’s data centre. Instead, it’s invariably being delivered to your device from a content delivery network (CDN) device.


Content Delivery What?

CDNs are not a new concept. The idea is that you have a bunch of web servers geographically distributed, delivering content to users who are also geographically distributed. Think of it as a way to cache things closer to your end users. There are many reasons why this can be a good idea. Your content will load faster for users if it resides on servers in roughly the same area as them. Your bandwidth costs are generally a bit cheaper, as you’re not transmitting as much data from your core all the way out to the end user. Instead, those end users are getting the content from something close to them. You can also potentially deliver more versions of content (in terms of resolution) easily. It can be beneficial in terms of resiliency and availability too – an outage on one part of your network, say in Palo Alto, doesn’t necessarily need to impact end users living in Sydney. Cloudflare does a fair bit with CDNs, and there’s a great overview of the technology here.
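To make the “closer to your end users” idea a bit more concrete, here’s a minimal sketch of edge selection. The node names, coordinates, and distance-based steering are all my own illustration – real CDNs typically steer traffic with anycast or DNS rather than a literal distance lookup – but it shows both the locality win and the resiliency win (a dead Palo Alto node just drops out of the candidate set).

```python
import math

# Hypothetical edge locations (name -> (lat, lon)); not any real CDN's PoP list.
EDGE_NODES = {
    "palo-alto": (37.44, -122.14),
    "sydney": (-33.87, 151.21),
    "london": (51.51, -0.13),
}

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat / 2) ** 2 + \
        math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371 * math.asin(math.sqrt(h))

def nearest_edge(user_loc, healthy=None):
    """Pick the closest healthy edge node. Unhealthy nodes are simply
    excluded, so an outage in one region fails over to the next closest."""
    healthy = healthy if healthy is not None else set(EDGE_NODES)
    return min(healthy, key=lambda n: haversine_km(user_loc, EDGE_NODES[n]))

sydney_user = (-33.9, 151.2)
print(nearest_edge(sydney_user))                           # -> sydney
print(nearest_edge(sydney_user, {"palo-alto", "london"}))  # -> palo-alto
```

With the Sydney node marked unhealthy, the Sydney user still gets served – just from further away, which is the graceful-degradation behaviour you want from a CDN.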


Isn’t All Content Delivery The Same?

Not really. As Intel covered in its Storage Field Day presentation, there are some differences between the performance requirements of video on demand (VoD) and live-linear streaming CDN solutions.

Live-Linear Edge Cache

Live-linear video streaming is similar to the broadcast model used in television. It’s basically programming content streamed 24/7, rather than stuff that the user has to search for. Several minutes of content are typically cached to accommodate out-of-sync users and pause / rewind activities. You can read a good explanation of live-linear streaming here.

[image courtesy of Intel]

In the example above, Intel Optane PMem was used to address the needs of live-linear streaming.

  • Live-linear workloads consume a lot of memory capacity to maintain a short-lived video buffer.
  • Intel Optane PMem is less expensive than DRAM.
  • Intel Optane PMem has extremely high endurance, to handle frequent overwrite.
  • Flexible deployment options – Memory Mode or App-Direct, consuming zero drive slots.

With this solution, Intel was able to achieve better channel and stream density per server than with DRAM-based solutions.
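To see why live-linear caching is such a memory-hungry, overwrite-heavy workload (and hence why high-endurance PMem is a good fit), here’s a toy model of that short-lived video buffer. The window size and segment duration are illustrative numbers of my own, not figures from Intel’s presentation.

```python
from collections import deque

class LiveEdgeCache:
    """Toy live-linear edge cache: keep only the last few minutes of
    short video segments, constantly overwriting the oldest ones as the
    live stream rolls forward."""

    def __init__(self, window_seconds=300, segment_seconds=2):
        self.capacity = window_seconds // segment_seconds
        # deque with maxlen evicts the oldest entry on every append once
        # full -- i.e. the cache is rewritten continuously, forever.
        self.segments = deque(maxlen=self.capacity)

    def ingest(self, seq, payload):
        """Append the newest live segment, overwriting the oldest slot."""
        self.segments.append((seq, payload))

    def serve(self, seq):
        """Serve a viewer who paused, rewound, or is a bit behind live.
        Returns None once the segment has aged out of the window."""
        for s, payload in self.segments:
            if s == seq:
                return payload
        return None

cache = LiveEdgeCache(window_seconds=300, segment_seconds=2)  # 150 slots
for i in range(200):          # 400 seconds of live content streamed in
    cache.ingest(i, f"seg-{i}")
print(cache.serve(199))       # -> seg-199 (at the live edge)
print(cache.serve(10))        # -> None (aged out of the 5-minute window)
```

Every slot in that buffer gets rewritten roughly every five minutes, around the clock – which is exactly the frequent-overwrite pattern the endurance bullet point above is talking about.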

Video on Demand (VoD)

VoD providers typically offer a large library of content allowing users to view it at any time (e.g. Netflix and Disney+). VoD servers are a little different to live-linear streaming CDNs. They:

  • Typically require large capacity and drive fanout for performance / failure domains; and
  • Have a read-intensive workload, with typically large IOs.

[image courtesy of Intel]
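As a rough illustration of the “drive fanout for performance / failure domains” point, here’s a toy placement function that spreads a large VoD library across many drives. The drive count and the hashing scheme are my own assumptions, not anything from Intel’s presentation.

```python
import zlib
from collections import Counter

N_DRIVES = 24  # illustrative fanout per server

def drive_for(title: str) -> int:
    """Map a title to a drive with a stable hash (crc32, rather than
    Python's salted hash()), so placement is identical on every server
    and a single failed drive only takes out its slice of the library."""
    return zlib.crc32(title.encode()) % N_DRIVES

library = [f"title-{i}" for i in range(10_000)]
spread = Counter(drive_for(t) for t in library)
print(len(spread))  # -> 24: every drive gets a share of the read load
```

Reads for different titles then land on different drives in parallel, which suits the large, read-intensive IOs described above, and losing one drive invalidates only about 1/24th of the cached library.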


Thoughts and Further Reading

I first encountered the magic of CDNs years ago when working in a data centre that hosted some Akamai infrastructure. Windows Server updates were super zippy, and it actually saved me from having to spend a lot of time standing in the cold aisle. Fast forward about 15 years, and CDNs are being used for all kinds of content delivery on the web. With whatever the heck this is in terms of the new normal, folks are putting more and more strain on those CDNs by streaming high-quality, high-bandwidth TV and movie titles into their homes (except in backwards places like Australia). As a result, content providers are constantly searching for ways to tweak the throughput of these CDNs to serve more and more customers, and deliver more bandwidth to those users.

I’ve barely skimmed the surface of how CDNs help providers deliver content more effectively to end users. What I did find interesting about this presentation was that it reinforced the idea that different workloads require different infrastructure solutions to deliver the right outcomes. It sounds simple when I say it like this, but I guess I’ve thought about streaming video CDNs as being roughly the same all over the place. Clearly they aren’t, and it’s not just a matter of jamming some SSDs into 1RU servers and hoping that your content will be delivered faster to punters. It’s important to understand that Intel Optane PMem and Intel 3D NAND SSDs can give you different results depending on what you’re trying to do, with PMem arguably giving you better value for money (per GB) than DRAM. There are some great papers on this topic available on the Intel website. You can read more here and here.

Random Short Take #60

Welcome to Random Short take #60.

  • VMware Cloud Director 10.3 went GA recently, and this post will point you in the right direction when it comes to planning the upgrade process.
  • Speaking of VMware products hitting GA, VMware Cloud Foundation 4.3 became available about a week ago. You can read more about that here.
  • My friend Tony knows a bit about NSX-T, and certificates, so when he bumped into an issue with NSX-T and certificates in his lab, it was no big deal to come up with the fix.
  • Here’s everything you wanted to know about creating an external bootable disk for use with macOS 11 and 12 but were too afraid to ask.
  • I haven’t talked to the good folks at StarWind in a while (I miss you Max!), but this article on the new All-NVMe StarWind Backup Appliance by Paolo made for some interesting reading.
  • I loved this article from Chin-Fah on storage fear, uncertainty, and doubt (FUD). I’ve seen a fair bit of it slung about having been a customer and partner of some big storage vendors over the years.
  • This whitepaper from Preston on some of the challenges with data protection and long-term retention is brilliant and well worth the read.
  • Finally, I don’t know how I came across this article on hacking PlayStation 2 machines, but here you go. Worth a read if only for the labels on some of the discs.