Random Short Take #57

Welcome to Random Short Take #57. Only one player has worn 57 in the NBA. So it looks like this particular bit is done. Let’s get random.

  • In the early part of my career I spent a lot of time tuning up old UNIX workstations. I remember that lifting those SGI CRTs from desk to desk was never a whole lot of fun. This article about a Sun Ultra 1 project brought back a hint of nostalgia for those days (but not enough to really get into it again). Hat tip to Scott Lowe for the link.
  • As you get older, you realise that people talk a whole lot of rubbish most of the time. This article calling out audiophiles for the practice was great.
  • This article on the Backblaze blog about one company’s approach to building its streaming media capability on B2 made for interesting reading.
  • DH2i recently announced the general availability of DxEnterprise (DxE) for Containers, enabling cloud-native Microsoft SQL Server container Availability Groups outside and inside Kubernetes.
  • Speaking of press releases, Zerto has made a few promotions recently. You can keep up with that news here.
  • I’m terrible when it comes to information security, but if you’re looking to get started in the field, this article provides some excellent guidance on what you should be focussing on.
  • We all generally acknowledge that NTP is important, and most of us likely assume that it’s working. But have you been checking? This article from Tony does a good job of outlining some of the reasons you should be paying some more attention to NTP. If you want to run a quick spot check yourself, there’s a small sketch after this list.
  • This is likely the most succinct article from John you’ll ever read, and it’s right on the money too.
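
On the NTP point, a spot check doesn’t require much. Here’s a minimal sketch using the third-party ntplib package (the pool server and drift threshold are arbitrary choices, not recommendations):

```python
# Quick NTP sanity check using the third-party ntplib package
# (pip install ntplib). Queries a public pool server and reports
# the offset between the local clock and NTP time.
import ntplib
from datetime import datetime, timezone

def check_ntp_offset(server="pool.ntp.org", max_offset=0.5):
    client = ntplib.NTPClient()
    response = client.request(server, version=3, timeout=5)
    offset = response.offset  # seconds; positive means local clock is behind
    ntp_time = datetime.fromtimestamp(response.tx_time, tz=timezone.utc)
    print(f"Server time: {ntp_time.isoformat()}  offset: {offset:+.3f}s")
    if abs(offset) > max_offset:
        print(f"WARNING: clock drift exceeds {max_offset}s")
    return offset

if __name__ == "__main__":
    check_ntp_offset()
```

Run something like this from cron on a few machines and you’ll quickly find out whether the assumption that NTP is “just working” actually holds.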

Retrospect Announces Retrospect Backup 18 and Retrospect Virtual 2021

Retrospect recently announced new versions of its Backup (18) and Virtual (2021) products. I had the opportunity to speak to JG Heithcock (GM, Retrospect) about the announcement and thought I’d share some thoughts here.

 

What’s New?

New Management Console & Workflow 

  • Simplified workflows
  • Comprehensive reporting through an updated management console

The Retrospect Management Console now supports geo tracking with a worldwide map of all users, Retrospect Backup servers, and remote clients, down to the city.

[image courtesy of Retrospect]

Cloud Native

  • Deploy directly in the cloud
  • Protect application data

Note that cloud native means that you can deploy agents on cloud-based hypervisor workloads and protect them. It doesn’t mean support for things like Kubernetes.

Anti-Ransomware Protection

This enables users to set immutable retention periods and policies within Amazon S3, Wasabi, and Backblaze B2, and supports bucket-level object lock in Google Cloud Storage and Microsoft Azure.
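
If you haven’t come across object lock before, the gist is that the object store itself refuses deletes and overwrites until a retention date passes, regardless of who asks. Here’s a rough sketch of the underlying cloud-side mechanism using boto3 against S3 (bucket and key names are made up, and this illustrates the cloud capability rather than how Retrospect drives it):

```python
# Sketch of S3 Object Lock retention using boto3 (pip install boto3).
# Bucket and key names are made up for illustration. Note that Object
# Lock must be enabled when the bucket is created.
from datetime import datetime, timedelta, timezone
import boto3

s3 = boto3.client("s3")

# Default retention rule for the whole bucket: nothing can be deleted
# or overwritten for 30 days, even by an administrator.
s3.put_object_lock_configuration(
    Bucket="my-backup-bucket",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
    },
)

# Or pin a specific object until an explicit date.
s3.put_object_retention(
    Bucket="my-backup-bucket",
    Key="retrospect/backup-set-001.dat",
    Retention={
        "Mode": "COMPLIANCE",
        "RetainUntilDate": datetime.now(timezone.utc) + timedelta(days=90),
    },
)
```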

Pricing

There’s a variety of pricing options available. When you buy a perpetual license, you have access to any new minor or major version upgrades for 12 months. With the monthly subscription model you have access to the latest version of the product for as long as you keep the subscription active.

[image courtesy of Retrospect]

 

Thoughts And Further Reading

I’ve mentioned in my previous coverage of Retrospect that I’m a fan of the product, if only for the fact that the consumer and SME space is screaming out for simple-to-use data protection solutions. Any solution that can help users develop some kind of immunity to ransomware has to be a good thing, and it’s nice to see Retrospect getting there in terms of cloud support. This isn’t as fully featured a product as some of the enterprise solutions out there, but for the price it doesn’t need to be.

Ultimately, the success of software like this is a balance between usability, cost, and reliability. The Retrospect folks seem cognisant of this, and have gone some way to fill the gaps where they could, and are working on others. I’ll be taking this version for a spin in the lab in the very near future, and hope to report back with how it all went.

StorONE and Seagate Team Up

This news came out a little while ago, but I thought I’d cover it here nonetheless. Seagate and StorONE recently announced that the Seagate Exos AP 5U84 Application Platform would support StorONE’s S1:Enterprise Storage Platform.

 

It’s A Box!

[image courtesy of StorONE]

The Exos 5U84 Dual Node supports:

  • 2x 1.8 GHz CPU (E5-2648L v4)
  • 2x 256GB RAM
  • Storage capacities between 250TB and 1.3PB

 

It’s Software!

Hardware is fun, but it’s the software that really helps here, with support for:

  • Full High Availability
  • Automated Tiering
  • No Write Cache
  • Rapid RAID Rebuilds
  • Unlimited Snapshots
  • Cascading Replication
  • Self Encrypting Drives

It offers support for multiple access protocols, including iSCSI, NFS, SMB, and S3. Note that there is no FC support with this unit.
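
As an aside, one of the nice things about S3-compatible front ends on arrays like this is that standard tooling just works once you point it at the right endpoint. A minimal sketch with boto3 – the endpoint URL and credentials below are placeholders, not real StorONE values:

```python
# Talking to an S3-compatible endpoint with standard tooling. The
# endpoint URL and credentials are placeholders for illustration.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s1.example.internal:9000",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])
```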

 

Thoughts and Further Reading

I’ve had positive things to say about StorONE in the past, particularly when it comes to transparent pricing and the ability to run this storage solution on commodity hardware. I’ve been on the fence about whether hybrid storage solutions are really on the way out. It felt like they were, for a while, and then folks kept coming up with tweaks to software that meant you could get even more bang for your buck (per GB). Much like tape, I think it would be premature to say that hybrid storage using spinning disk is dead just yet.

Obviously, the folks at StorONE have skin in this particular game, so they’re going to talk about how hybrid isn’t going anywhere. It’s much the same as Michael Dell telling me that the on-premises server market is hotting up. When a vendor is selling something, it’s in their interest to convince you that a market exists for that thing and that it is hot. That said, some of the numbers Crump and the team at StorONE have shown me are indeed compelling. When you couple those numbers with the cost of the solution (which you can work out for yourself here), it becomes difficult to dismiss out of hand.

When I look at storage solutions I like to look at the numbers, and the hardware, and how it’s supported. But what’s really important is whether the solution is up to the task of the workload I need to throw at it. I also want to know that someone can fix my problem when the magic smoke escapes said storage solution. After a while in the industry, you start to realise that, regardless of what the brochures look like, there are a few different ways that these kinds of things get put together. Invariably, unless the solution is known for being reckless with data integrity, or super slow, there’s going to be a point at which the technical advantages become less of a point of differentiation. It’s at that point where the economics really come into play.

The world is software-defined in a lot of ways, but this doesn’t mean you can run your favourite storage code on any old box and expect a great outcome. It does, however, mean that you no longer have to pay a premium to get good performance, good capacity, and a reliable outcome for your workload. You also get the opportunity to enjoy performance improvements as the code improves, without necessarily needing to update your hardware. Which is kind of neat, particularly if you’ve ever paid a pretty penny for golden screwdriver upgrades from big brand disk slingers in the past. This solution might not be for everyone, particularly if you already have a big arrangement with some of the bigger vendors. But if you’re looking to do something, and can’t stretch the economics to an All-Flash solution, this is worth a look.

Cisco Introduces HyperFlex 4.5

Disclaimer: I recently attended Storage Field Day 20.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Cisco presented a sneak preview of HyperFlex 4.5 at Storage Field Day 20 a little while ago. You can see videos of the presentation here, and download my rough notes from here. Note that this preview was done some time before the product was officially announced, so there may be a few things that did or didn’t make it into the final product release.

 

Announcing HyperFlex 4.5

4.5: Meat and Potatoes

So what are the main components of the 4.5 announcement?

  • iSCSI Block storage
  • N:1 Edge data replication
  • New edge platforms / SD-WAN
  • HX Application Platform (KVM)
  • Intersight K8s Service
  • Intersight Workload Optimizer

Other Cool Stuff

  • HX Boost Mode – virtual CPU configuration change in HX controller VM, the boost is persistent (scale up).
  • ESXi & VC 7.0, Native VC Plugin, 6.0 is EoS, HX Native HTML5 vCenter Plugin (this has been available since HX 4.0)
  • Secure Boot – protect the hypervisor against bootloader attacks with secure boot anchored in Cisco hardware root of trust
  • Hardened SDS Controller – reduce the attack surface and mitigate against compromised admin credentials

The HX240 Short Depth nodes have been available since HX 4.0, but there’s now a new Edge Option – the HX240 Edge. This is a new 2RU form factor option for HX Edge (2N / 3N / 4N), A-F and hybrid, 1 or 2 sockets, up to 3TB RAM and 175TB capacity, and PCIe slots for dense GPUs.

 

iSCSI in HX 4.5(1a)

[image courtesy of Cisco]

iSCSI Topologies

[image courtesy of Cisco]

 

Thoughts and Further Reading

Some of the drama traditionally associated with HCI marketing seems to have died down now, and people have mostly stopped debating what it is or isn’t, and started focusing on what they can get from the architecture over more traditional infrastructure deployments. Hyperconverged has always had a good story when it comes to compute and storage, but the networking piece has proven problematic in the field. Sure, there have been attempts at making software-defined networking more effective, but some of these efforts have run into trouble when they’ve hit the northbound switches.

When I think of Cisco HyperFlex I think of it as the little HCI solution that could. It doesn’t dominate the industry conversation like some of the other vendors, but it’s certainly had an impact, in much the same way UCS has. I’ve been a big fan of Springpath for some time, and HyperFlex has taken a solid foundation and turned it into something even more versatile and fully featured. I think the key thing to remember with HyperFlex is that it’s a networking company selling this stuff – a networking company that knows what’s up when it comes to connecting all kinds of infrastructure together.

The addition of iSCSI keeps the block storage crowd happy, and the new edge form-factor will have appeal for customers trying to squeeze these boxes into places they probably shouldn’t be going. I’m looking forward to seeing more HyperFlex from Cisco over the next 12 months, as I think it finally has a really good story to tell, particularly when it comes to integration with other Cisco bits and pieces.

StorCentric Announces Data Mobility Suite

StorCentric recently announced its Data Mobility Suite (DMS). I had the opportunity to talk to Surya Varanasi (StorCentric CTO) about the news, and thought I’d share some of my notes here.

 

What Is It?

DMS is being positioned as a suite of “data cloud services” by StorCentric, with a focus on:

  • Data migration;
  • Data consistency; and
  • Data operation.

It has the ability to operate across heterogeneous storage, clouds, and protocols. It’s a software solution based on subscription licensing and uses a policy-driven engine to manage data in the enterprise. It can run on bare-metal or as a VM appliance. Object storage platform / cloud support is fairly robust, with AWS, Backblaze B2, and Wasabi, amongst others, all being supported.

[image courtesy of StorCentric]

Use Cases

There are a number of scenarios where a solution like DMS makes sense. You might have a bunch of NFS storage on-premises, for example, and want to move it to a cloud storage target using S3. Another use case cited involved collaboration across multiple sites, with the example being a media company creating content in three places, working in different time zones, and wanting to move the data back to a centralised location.
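
To make that first use case a bit more concrete, the operation is conceptually a recursive copy from a POSIX namespace to an object namespace. Here’s a toy sketch of the idea in Python – the paths and bucket are made up, and a product like DMS wraps this in policies, scheduling, retries, and verification rather than a one-shot loop:

```python
# Toy sketch of the NFS-to-S3 use case: walk a mounted NFS export and
# push files to an object store. Paths and bucket are placeholders; a
# real tool adds policies, scheduling, retries, and verification.
from pathlib import Path
import boto3

SOURCE = Path("/mnt/nfs/projects")  # mounted NFS export (placeholder)
BUCKET = "archive-bucket"           # placeholder target bucket

s3 = boto3.client("s3")
for path in SOURCE.rglob("*"):
    if path.is_file():
        key = f"projects/{path.relative_to(SOURCE)}"
        s3.upload_file(str(path), BUCKET, key)
        print(f"copied {path} -> s3://{BUCKET}/{key}")
```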

Big Ideas

Speaking to StorCentric about the announcement, it was clear that there’s a lot more on the DMS roadmap. Block storage is something the team wants to tackle, and they’re also looking to deliver analytics and ransomware alerting. There’s also a strong desire to provide governance as well. For example, if I want to copy some data somewhere and keep it for 10 years, I’ll configure DMS to take care of that for me.
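
StorCentric hasn’t published a policy syntax as part of this announcement, so the following structure is entirely hypothetical, but it illustrates the shape of the policy-driven idea: a source, a destination, and a retention directive that the engine enforces on your behalf.

```python
# Entirely hypothetical policy structure, purely to illustrate the
# policy-driven idea; this is not DMS's actual configuration format.
policy = {
    "name": "finance-archive",
    "source": {"protocol": "nfs", "path": "/exports/finance"},
    "destination": {"protocol": "s3", "bucket": "finance-archive"},
    "schedule": "daily",
    "retention_years": 10,  # the governance piece: keep it for a decade
}
```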

 

Thoughts and Further Reading

Data management means a lot of things to a lot of people. Storage companies often focus on moving blocks and files from one spot to another, but don’t always do a solid job of capturing why data needs to be stored where it does. Or how, for that matter. There’s a lot more to data management than keeping ones and zeroes in a safe place. But it’s not just about being able to move data from one spot to another. It’s about understanding the value of your data, and understanding where it needs to be to deliver the most value to your organisation. Whilst it seems like DMS is focused primarily on moving data from one spot to another, there’s plenty of potential here to develop a broader story in terms of data governance and mobility. There’s built-in security, and the ability to apply levels of data governance to data in various locations. The greater appeal here is also the ability to automate the movement of data to different places based on policy. This policy-driven approach becomes really interesting when you start to look at complicated collaboration scenarios, or need to do something smart with replication or data migration.

Ultimately, there are a bunch of different ways to get data from one point to another, and a bunch of different reasons why you might need to do that. The value in something like DMS is the support for heterogeneous storage platforms, as well as the simple to use GUI support. Plenty of data migration tools come with extremely versatile command line interfaces and API support, but the trick is delivering an interface that is both intuitive and simple to navigate. It’s also nice to have a few different use cases met with one tool, rather than having to reach into the bag a few different times to solve very similar problems. StorCentric has a lot of plans for DMS moving forward, and if those plans come to fruition it’s going to form a very compelling part of the typical enterprise’s data management toolkit. You can read the press release here.

Zerto Announces 8.5 and Zerto Data Protection

Zerto recently announced 8.5 of its product, along with a new offering, Zerto Data Protection (ZDP). I had the good fortune to catch up with Caroline Seymour (VP, Product Marketing) about the news and thought I’d share some thoughts here.

 

ZDP, Yeah You Know Me

Global Pandemic for $200 Please, Alex

In “these uncertain times”, organisations are facing new challenges:

  • No downtime, no data loss, 24/7 availability
  • Influx of remote work
  • Data growth and sprawl
  • Security threats
  • Acceleration of cloud

Many of these things were already a problem, and the global pandemic has done a great job highlighting them.

“Legacy Architecture”

Zerto paints a bleak picture of the “legacy architecture” adopted by many of the traditional data protection solutions, positing that many IT shops need to use a variety of tools to get to a point where operations staff can sleep better at night. Disaster recovery, for example, is frequently handled via replication for mission-critical applications, with backup being performed via periodic snapshots for all other applications. ZDP aims to bring all this together under one banner of continuous data protection, delivering:

  • Local continuous backup and long-term retention (LTR) to public cloud; and
  • Pricing optimised for backup.

[image courtesy of Zerto]

Features

[image courtesy of Zerto]

So what do you get with ZDP? Some neat features, including:

  • Continuous backup with journal
  • Instant restore from local journal
  • Application consistent recovery
  • Short-term SLA policy settings
  • Intelligent index and search
  • LTR to disk, object or Cloud (Azure, AWS)
  • LTR policies: daily incrementals with weekly, monthly, or yearly fulls (see the sketch after this list)
  • Data protection workflows
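
That LTR policy item describes a classic grandfather-father-son style rotation. As a generic illustration (nothing Zerto-specific), here’s how a scheduler might classify a given day’s backup:

```python
# Generic grandfather-father-son style classification (not Zerto's
# actual implementation): yearly, monthly, and weekly fulls with
# daily incrementals in between.
from datetime import date

def backup_type(d: date) -> str:
    if d.month == 1 and d.day == 1:
        return "yearly full"
    if d.day == 1:
        return "monthly full"
    if d.weekday() == 6:  # Sunday
        return "weekly full"
    return "daily incremental"

for d in [date(2021, 1, 1), date(2021, 2, 1), date(2021, 2, 7), date(2021, 2, 8)]:
    print(d, "->", backup_type(d))
```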

 

New Licensing

It wouldn’t be a new software product without some mention of new licensing. If you want to use ZDP, you get:

  • Backup for short-term retention and LTR;
  • On-premises or backup to cloud;
  • Analytics; and
  • Orchestration and automation for backup functions.

If you’re sticking with (the existing) Zerto Cloud Edition, you get:

  • Everything in ZDP;
  • Disaster Recovery for on-premises and cloud;
  • Multi-cloud support; and
  • Orchestration and automation.

 

Zerto 8.5

A big focus of Zerto’s recently has been VMware on public cloud support, including the various flavours of VMware on Azure, AWS, and Oracle Cloud. There are a bunch of reasons why this approach has proven popular with existing VMware customers looking to migrate from on-premises to public cloud, including:

  • Native VMware support – run existing VMware workloads natively on IaaS;
  • Policies and configuration don’t need to change;
  • Minimal changes – no need to refactor applications; and
  • IaaS benefits – reliability, scale, and operational model.

[image courtesy of Zerto]

New in 8.5

With 8.5, you can now back up directly to Microsoft Azure and AWS. You also get instant file and folder restores to production. There’s now support for VMware on public cloud disaster recovery and data protection for Microsoft Azure VMware Solution, Google Cloud VMware Engine, and the Oracle Cloud VMware Solution. You also get platform automation and lifecycle management features, including:

  • Auto-evacuate for recovery hosts;
  • Auto-populate for recovery hosts; and
  • Encryption capabilities.

And finally, a Zerto PowerShell Cmdlets Module has also been released.

 

Thoughts and Further Reading

The writing’s been on the wall for some time that Zerto might need to expand its solution offering to incorporate backup and recovery. Continuous data protection is a great feature and my experience with Zerto has been that it does what it says on the tin. The market, however, is looking for ways to consolidate solution offerings in order to save a few more dollarydoos and keep the finance department happy. I haven’t seen the street pricing for ZDP, but Seymour seemed confident that it stacks up well against the more traditional data protection options on the market, particularly when compared against offerings that incorporate components that deal with CDP and periodic data protection with different tools. There’s a new TCO calculator on the Zerto website, and there’s also the opportunity to talk to a Zerto account representative about your particular needs.

I’ve always treated regular backup and recovery and disaster recovery as very different things, mainly because they are. Companies frequently make the mistake of trying to cobble together some kind of DR solution using traditional backup and recovery tools. I’m interested to see how Zerto goes with this approach. It’s not the first company to converge elements that fit in the data protection space together, and it will be interesting to see how much of the initial uptake of ZDP is with existing customers or net new logos. The broadening of support for the VMware on X public cloud workloads is good news for enterprises too (putting aside my thoughts on whether or not that’s a great long term strategy for said enterprises). There’s some interesting stuff happening, and I’m looking forward to seeing how the story unfolds over the next 6 – 12 months.

Quobyte Announces 3.0

Quobyte recently announced Release 3.0 of its software. I had the opportunity to speak to Björn Kolbeck (Co-Founder and CEO) about the release, and thought I’d share some thoughts here.

 

About Quobyte

If you haven’t heard of Quobyte before, it was founded in 2013 by some ex-Googlers and HPC experts. The folks at Quobyte were heavily influenced by Google’s scale-out software model and wanted to bring that to the enterprise. Quobyte has had software in production since 2016 and has customers across a range of industry verticals, including financial services and media streaming. It’s not really object storage, more a parallel file system or, at a stretch, scale-out NAS.

 

The Tech

Kolbeck describes Quobyte as “storage for Generation Scale-Out”, and the company is focussed on “getting storage out of the ugly corner of specialised appliances”.

Unlimited Performance

  • Linear scaling delivers unlimited performance
  • No bottlenecks – scale from small to 1000s of servers
  • No more NFS – it’s part of the problem

Deploy Anywhere

  • True software storage runs anywhere – bare metal, containers, cloud
  • Almost any x86 server – no appliances

Unconditional Simplicity

  • Anyone can do storage, it’s just another Linux application
  • All in user space, installs in minutes

 

The Announcement

Free Edition

The first part of the announcement is that there’s now a free edition (previously there was a 45-day trial on offer). It’s limited in terms of capacity, support, and file system clients, but could be useful in labs and smaller environments.

[image courtesy of Quobyte]

3.0 Release

The 3.0 release is also a big part of Quobyte’s news, with the new version delivering a bunch of new features, most of which are outlined below.

360 Security

  • Holistic data protection
  • End to end AES encryption (in transit / at rest / untrusted storage nodes)
  • Selective TLS support
  • Access keys for the file system
  • X.509 certificates
  • Event stream (metadata, file access)

Policy Engine

Powerful Policy Engine

  • For: Tenant, volume, file, client
  • Control: Layout, tiering, QoS, recoding, caching
  • Dynamic: Runtime re-configurable

Automated

  • Auto file layout: replication + EC and Flash + HDD
  • Auto selection of replication factor, EC schema
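
The auto-selection piece is the interesting bit: replication is simple and fast for small files but burns capacity, while erasure coding is capacity-efficient but better suited to larger files. Here’s a rough sketch of that kind of decision logic – the thresholds and schemas are invented for illustration, and this is not Quobyte’s actual algorithm:

```python
# Rough sketch of auto-layout selection: replicate small files,
# erasure-code large ones. Thresholds and schemas are invented for
# illustration; this is not Quobyte's actual algorithm.
def choose_layout(file_size_bytes: int, small_file_threshold=4 * 1024 * 1024):
    if file_size_bytes < small_file_threshold:
        # Replication: simple, fast for small I/O, 3x capacity overhead.
        return {"scheme": "replication", "copies": 3}
    # Erasure coding: 8 data + 3 parity = ~1.4x overhead, survives 3 failures.
    return {"scheme": "erasure_coding", "data": 8, "parity": 3}

print(choose_layout(64 * 1024))      # small file -> replication
print(choose_layout(2 * 1024 ** 3))  # 2GiB file -> erasure coding
```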

Self-Service

Quobyte is looking to deliver a “cloud-like experience” with its self-service capabilities.

Login for users

  • Manage access keys
  • Check resource consumption

Authenticate using access keys

  • S3
  • File system driver
  • K8s / CSI
  • User-space drivers: HDFS, TF, MPI-IO

Multi-Cluster

Data Mover

  • Bi-directional sync (eventual consistency)
  • Policy-based data tiering between clusters
  • Recoding

TLS between clusters

More Native Drivers

HDFS

MPI-IO

Benefit of kernel bypass

  • Lower latency
  • Less memory bandwidth

 

Thoughts and Further Reading

One of the challenges with software-defined storage is invariably the constraint that poor hardware choices can put on performance. Kolbeck acknowledged that Quobyte is “as fast as your hardware”. I asked him whether Quobyte provided guidance on hardware choices that worked well with the platform. There is a bunch of recommended (and tested) hardware listed on this page. He did mention that whichever way you decided to go, it was recommended to stick with either Mellanox or Broadcom NICs due to issues observed with other vendors’ Linux drivers. There are also recommendations on the site for public cloud instance sizing covering AWS, GCP, and Oracle.

Quobyte is being deployed to support scale-out workloads in the enterprise across a number of sectors including financial services, life sciences, media and entertainment, and manufacturing in Europe and Asia. Kolbeck noted that one of the interesting things about the advent of smart everything is that “car manufacturers are suddenly in the machine learning field” and looking for new ways to support their businesses.

There are a lot of reasons to like software-defined storage offerings. You can generally run them on anything, and performance enhancements can frequently be had via code upgrades. That’s not to say that you don’t get that with the big box slingers, but the flexibility of hardware choice has tremendous appeal, particularly in the enterprise market where it can feel like the margin on commodity hardware can be exorbitant. Quobyte hasn’t been around forever, but the folks over there seem to have a pretty solid heritage in software-defined and scale-out storage solutions – a good sign if you’re in the market for a software-defined, scale-out storage solution. Some folks are going to rue the lack of NFS support, but I’m sure Kolbeck and the team would be happy to sit down and discuss with them why that’s no great loss. There’s some pretty cool stuff in this release, and the free edition is definitely worth taking for a spin. I’m looking forward to hearing more from Quobyte over the next little while.

StorONE Q3-2020 Update

StorONE recently announced details of its Q3-2020 software release. I had the opportunity to talk about the announcement with George Crump and thought I’d share some brief thoughts here.

 

Release Highlights

Performance Improvements

One of the key highlights of this release is significant performance improvements for the platform based purely on code optimisations. Crump tells me that customers with Intel Optane and NVMe SSDs will be extremely happy with what they see. What’s also notable is that customers using higher-latency media such as hard disk drives will still see a performance improvement of 15 – 20%.

Data Protection

StorONE has worked hard on introducing some improved resilience for the platform as well, with two key features being made available:

  • vRack; and
  • vReplicate.

vRack provides the ability to split S1 storage across more than one rack (or row, for that matter) to mitigate any failures impacting the rack hosting the controllers and disk enclosures. You can now also set tolerance for faults at an enclosure level, not just a drive level.

[image courtesy of StorONE]

vReplicate extends S1:Replicate’s capabilities to provide cascading replication. You can now synchronously replicate between data centres or campus sites and then asynchronously send that data to another site, hundreds of kilometres away if necessary. Primary systems can be an All-Flash Array.next, traditional All-Flash Array, or a Hybrid Array, and the replication target can be an inexpensive hard disk only S1 system.

[image courtesy of StorONE]

There’s now full support for Volume Shadow Copy Service (VSS) for S1:Snap users.

 

Other Enhancements

Some of the other enhancements included with this release are:

  • Improved support for NVMe-oF (including the ability to simultaneously support iSCSI and FC along with NVMe);
  • Improved NAS capability, with support for quotas and NIS / LDAP; and
  • Downloadable stats for increased insights.

 

Thoughts

Some of these features might seem like incremental improvements, but this is an incremental release. I like the idea of supporting legacy connections while supporting the ability to add newer tech to the platform, and providing a way forward in terms of hardware migration. The vRack resiliency concept is also great, and a salient reminder that the ability to run this code on commodity hardware makes some of these types of features a little more accessible. I also like the idea of being able to download analytics data and do things with it to gain greater insights into what the system is doing. Sure, it’s an incremental improvement, but an important one nonetheless.

I’ve been a fan of the StorONE story for some time now (and not just because the team slings a few dollars my way to support the site every now and then). I think the key to much of StorONE’s success has been that it hasn’t gotten caught up trying to be a storage appliance vendor, and has instead focussed on delivering reliable code on commodity systems that results in a performance-oriented storage platform that continues to improve from a software perspective without being tied to a particular hardware platform. The good news, though, is that when new hardware becomes available (such as Optane), it’s not a massive problem to incorporate it into the solution.

StorONE has always talked a big game in terms of raw performance numbers, but I think it’s the addition of features such as vRack and improvements to the replication capability that really makes it a solution worth investigating. It doesn’t hurt that you can check the pricing calculator out for yourself before you decide to go down the path of talking to StorONE’s sales team. I’m looking forward to seeing what StorONE has in store in the next little while, as I get the impression it’s going to be pretty cool. You can read details of the update here.

Pure Storage Acquires Portworx

Pure Storage announced its intention to acquire Portworx in mid-September. Around that time I had the opportunity to talk about the news with Goutham Rao (Portworx CTO) and Matt Kixmoeller (Pure Storage VP, Strategy) and thought I’d share some brief thoughts here.

 

The News

Pure and Portworx have entered an agreement that will see Pure pay approximately $370M US in cash. Portworx will form a new Cloud Native Business Unit inside Pure to be led by Portworx CEO Murli Thirumale. All Portworx founders are joining Pure, with Pure investing significantly to grow the new business unit. According to Pure, “Portworx software to continue as-is, supporting deployments in any cloud and on-premises, and on any bare metal, VM, or array-based storage”. It was also noted that “Portworx solutions to be integrated with Pure yet maintain a commitment to an open ecosystem”.

About Portworx

Described as the “leading Kubernetes data services platform”, Portworx was founded in 2014 in Los Altos, CA. It runs a 100% software, subscription, and cloud business model with development and support sites in California, India, and Eastern Europe. The product has been GA since 2017, and is used by some of the largest enterprise and Cloud / SaaS companies globally.

 

What’s A Portworx?

The idea behind Portworx is that it gives you data services for any application, on any Kubernetes distribution, running on any cloud, any infrastructure, and at any stage of the application lifecycle. To that end, it’s broken up into a bunch of different components, and runs in the K8s control plane adjacent to the applications.

PX-Store

  • Software-defined storage layer that automates container storage for developers and admins
  • Consistent storage APIs: cloud, bare metal, or arrays

PX-Migrate

  • Easily move applications between clusters
  • Enables hybrid cloud and multi-cloud mobility

PX-Backup

  • Application-consistent backup for cloud native apps with all k8s artefacts and state
  • Backup to any cloud or on-premises object storage

PX-Secure

  • Implement consistent encryption and security policies across clouds
  • Enable multi-tenancy with access controls

PX-DR

  • Sync and async replication between Availability Zones and regions
  • Zero RPO active / active for high resiliency

PX-Autopilot

  • GitOps-driven automation provides an easier platform for non-storage experts to deploy stateful applications; it monitors everything about an application, reacts, and prevents problems from happening
  • Auto-scale storage as your app grows to reduce costs

 

How It Fits Together

When you bring Portworx into the Pure Storage fold, you start to see how well it fits with the existing portfolio. In the picture below you’ll also see support for the standard container storage interface (CSI) to work with other vendors.

[image courtesy of Pure Storage]

Also worth noting is that PX-Essentials remains free forever for workloads under 5TB and 5 nodes.
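
If you haven’t played with this stuff before, the developer-facing experience is deliberately boring: you ask for a volume with a standard PersistentVolumeClaim and the provisioner (Portworx, or any other CSI driver) does the rest. A minimal sketch using the official Kubernetes Python client – the StorageClass name is a placeholder, not a Portworx default:

```python
# Minimal sketch of requesting container storage via a standard PVC,
# using the official Kubernetes Python client (pip install kubernetes).
# The StorageClass name is a placeholder, not a Portworx default.
from kubernetes import client, config

config.load_kube_config()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="demo-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="px-demo-sc",  # placeholder StorageClass
        resources=client.V1ResourceRequirements(requests={"storage": "5Gi"}),
    ),
)
client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```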

 

Thoughts and Further Reading

I think this is a great move by Pure, mainly because it lends them a whole lot more credibility with the DevOps folks. Pure was starting to make inroads with Pure Storage Orchestrator, and I think this move will strengthen that story. Giving Portworx access to Pure’s salesforce globally is also going to broaden its visibility in the market and open up doors to markets that may have been difficult to get into previously.

Persistent storage for containers is heating up. As Rao pointed out in our discussion, “as container adoption grows, storage becomes a problem”. Portworx already had a good story to tell in this space, and Pure is no slouch when it comes to delivering advanced storage capabilities across a variety of platforms. I like that the messaging has been firmly based in maintaining the openness of the platform and I’m interested to see what other integrations happen as the two companies start working more closely together. If you’d like another perspective on the news, check out Chris Evans’s article here.

Rancher Labs Announces 2.5

Rancher Labs recently announced version 2.5 of its platform. I had the opportunity to catch up with co-founder and CEO Sheng Liang about the release and other things that Rancher has been up to and thought I’d share some of my notes here.

 

Introducing Rancher Labs 2.5

Liang described Rancher as a way for organisations to “[f]ocus on enriching their own apps, rather than trying to be a day 1, day 2 K8s outfit”. With that thinking in mind, the new features in 2.5 are as follows:

  1. Rancher now installs everywhere – on EKS, OpenShift, whatever – and they’ve removed a bunch of dependencies. Rancher 2.5 can now be installed on any CNCF-certified Kubernetes cluster, eliminating the need to set up a separate Kubernetes cluster before installing Rancher. The new lightweight installation experience is useful for users who already have access to a cloud-managed Kubernetes service like EKS.
  2. Enhanced management for EKS. Rancher Labs was a launch partner for EKS and used to treat it like a dumb distribution. The management architecture has been revamped with improved lifecycle management for EKS. It now uses the native EKS way of doing various things and only adds value where it’s not already present.
  3. Managing edge clusters. Liang described K3s as “almost the goto distribution for edge computing (5G, IoT, ATMs, etc)”. When you get into some of these scenarios, the scale of operations becomes pretty big, and you need to re-think multi-cluster management. To address this, Rancher has created its own GitOps framework, built to accommodate the required scale – “GitOps at scale”.
  4. K8s has plenty of traction in government and high security environments, hence the development of RKE Government Edition.

 

Other Notes

Liang mentioned that uptake of Longhorn (made generally available in May 2020) has been great, with over 10,000 active deployments (not just downloads) in the wild now. He noted that persistent storage with K8s has been hard to do, and Longhorn has gone some way to improving that experience. K3s is now a CNCF Sandbox project, not just a Rancher project, and this has certainly helped with its popularity as well. He also mentioned that the acquisition by SUSE was continuing to progress, and he expected it would close in Q4 2020.

 

Thoughts and Further Reading

Longtime readers of this blog will know that my background is fairly well entrenched in infrastructure as opposed to cloud-native technologies. Liang understands this, and always does a pretty good job of translating some of the concepts he talks about with me back into infrastructure terms. The world continues to change, though, and the popularity of Kubernetes and solutions like Rancher Labs highlights that it’s no longer a simple conversation about LUNs, CPUs, network throughput and which server I’ll use to host my application. Organisations are looking for effective ways to get the most out of their technology investment, and Kubernetes can provide an extremely effective way of deploying and running containerised applications in an agile and efficient fashion. That said, the bar for entry into the cloud-native world can still be considered pretty high, particularly when you need to do things at large scale. This is where I think platforms like the one from Rancher Labs make so much sense. I may have described some elements of cloud-native architecture as a bin fire previously, but I think the progress that Rancher is making demonstrates just how far we’ve come. I know that VMware and Kubernetes have little in common, but it strikes me that we’re seeing the same development progress that we saw 15 years ago with VMware (and ESX in particular). I remember at the time that VMware seemed like a whole bunch of weird to many infrastructure folks, and it wasn’t until much later that these same people were happily using VMware in every part of the data centre. I suspect that the adoption of Kubernetes (and useful management frameworks for it) will be a bit quicker than that, but it’s going to be heavily reliant on solutions like this to broaden the appeal of what’s a very useful (but nonetheless challenging) container deployment and management ecosystem.

If you’re in the APAC region, Rancher is hosting a webinar in a friendly timezone later this month. You can get more details on that here. And if you’re on US Eastern time, there’s the “Computing on the Edge with Kubernetes” one day event that’s worth checking out.