Pure Storage Acquires Portworx

Pure Storage announced its intention to acquire Portworx in mid-September. Around that time I had the opportunity to talk about the news with Goutham Rao (Portworx CTO) and Matt Kixmoeller (Pure Storage VP, Strategy) and thought I’d share some brief thoughts here.


The News

Pure and Portworx have entered an agreement that will see Pure pay approximately $370M US in cash. Portworx will form a new Cloud Native Business Unit inside Pure to be led by Portworx CEO Murli Thirumale. All Portworx founders are joining Pure, with Pure investing significantly to grow the new business unit. According to Pure, “Portworx software to continue as-is, supporting deployments in any cloud and on-premises, and on any bare metal, VM, or array-based storage”. It was also noted that “Portworx solutions to be integrated with Pure yet maintain a commitment to an open ecosystem”.

About Portworx

Described as the “leading Kubernetes data services platform”, Portworx was founded in 2014 in Los Altos, CA. It runs a 100% software business on a subscription and cloud-based model, with development and support sites in California, India, and Eastern Europe. The product has been GA since 2017, and is used by some of the largest enterprise and cloud / SaaS companies globally.


What’s A Portworx?

The idea behind Portworx is that it gives you data services for any application, on any Kubernetes distribution, running on any cloud, any infrastructure, and at any stage of the application lifecycle. To that end, it’s broken up into a number of components that run in the K8s control plane adjacent to the applications. I’ve included a rough sketch of what consuming it looks like after the component rundown below.

PX-Store

  • Software-defined storage layer that automates container storage for developers and admins
  • Consistent storage APIs: cloud, bare metal, or arrays

PX-Migrate

  • Easily move applications between clusters
  • Enables hybrid cloud and multi-cloud mobility

PX-Backup

  • Application-consistent backup for cloud native apps with all K8s artefacts and state
  • Backup to any cloud or on-premises object storage

PX-Secure

  • Implement consistent encryption and security policies across clouds
  • Enable multi-tenancy with access controls

PX-DR

  • Sync and async replication between Availability Zones and regions
  • Zero RPO active / active for high resiliency

PX-Autopilot

  • GitOps-driven automation that makes it easier for non-storage experts to deploy stateful applications; monitors the application stack, reacts to issues, and prevents problems before they occur
  • Auto-scale storage as your app grows to reduce costs
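
To make that a bit more concrete, here’s a rough sketch of what consuming Portworx-backed storage looks like from the Kubernetes side, using the official Kubernetes Python client. The provisioner name and the “repl” parameter are my assumptions from memory rather than anything Pure or Portworx provided for this post, so verify them against the Portworx documentation before relying on them.

# Hedged sketch: create a replicated Portworx StorageClass and a PVC that uses it.
# The provisioner name ("pxd.portworx.com") and the "repl" parameter are assumptions
# from memory of the Portworx docs -- verify before relying on them.
from kubernetes import client, config

config.load_kube_config()  # use the current kubeconfig context

storage_class = client.V1StorageClass(
    api_version="storage.k8s.io/v1",
    kind="StorageClass",
    metadata=client.V1ObjectMeta(name="px-replicated"),
    provisioner="pxd.portworx.com",   # assumed Portworx CSI provisioner name
    parameters={"repl": "3"},         # keep three replicas of each volume
    allow_volume_expansion=True,
)
client.StorageV1Api().create_storage_class(storage_class)

pvc = client.V1PersistentVolumeClaim(
    api_version="v1",
    kind="PersistentVolumeClaim",
    metadata=client.V1ObjectMeta(name="px-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="px-replicated",
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)
client.CoreV1Api().create_namespaced_persistent_volume_claim("default", pvc)

From there a stateful workload just mounts the claim as usual, and PX-Store looks after replica placement underneath it.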


How It Fits Together

When you bring Portworx into the Pure Storage fold, you can see how well it complements the existing Pure portfolio. In the image below you’ll also see support for the standard Container Storage Interface (CSI), allowing it to work with other vendors’ storage.

[image courtesy of Pure Storage]

Also worth noting is that PX-Essentials remains free forever for workloads under 5TB and 5 nodes.


Thoughts and Further Reading

I think this is a great move by Pure, mainly because it lends them a whole lot more credibility with the DevOps folks. Pure was starting to make inroads with Pure Service Orchestrator, and I think this move will strengthen that story. Giving Portworx access to Pure’s global sales force is also going to broaden its visibility in the market and open up doors to markets that may have been difficult to get into previously.

Persistent storage for containers is heating up. As Rao pointed out in our discussion, “as container adoption grows, storage becomes a problem”. Portworx already had a good story to tell in this space, and Pure is no slouch when it comes to delivering advanced storage capabilities across a variety of platforms. I like that the messaging has been firmly grounded in maintaining the openness of the platform, and I’m interested to see what other integrations happen as the two companies start working more closely together. If you’d like another perspective on the news, check out Chris Evans’s article here.

Rancher Labs Announces 2.5

Rancher Labs recently announced version 2.5 of its platform. I had the opportunity to catch up with co-founder and CEO Sheng Liang about the release and other things that Rancher has been up to and thought I’d share some of my notes here.


Introducing Rancher 2.5

Liang described Rancher as a way for organisations to “[f]ocus on enriching their own apps, rather than trying to be a day 1, day 2 K8s outfit”. With that thinking in mind, the new features in 2.5 are as follows:

  1. Install anywhere. Rancher 2.5 can now be installed on any CNCF-certified Kubernetes cluster (EKS, OpenShift, whatever), removing a bunch of dependencies and eliminating the need to stand up a separate Kubernetes cluster just to run Rancher. The new lightweight installation experience is particularly useful for users who already have access to a cloud-managed Kubernetes service like EKS.
  2. Enhanced management for EKS. Rancher Labs was a launch partner for EKS, but previously treated it like a dumb distribution. The management architecture has been revamped with improved lifecycle management for EKS: Rancher now uses the native EKS way of doing things and only adds value where it’s not already present.
  3. Managing edge clusters. Liang described K3s as “almost the goto distribution for edge computing (5G, IoT, ATMs, etc)”. In some of these scenarios the scale of operations gets pretty big, and multi-cluster management needs to be re-thought accordingly. To deliver “GitOps at scale”, Rancher has built its own GitOps framework to accommodate that scale; there’s a rough sketch of what this looks like after this list.
  4. K8s has plenty of traction in government and high-security environments, hence the development of RKE Government Edition.
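
On the GitOps piece, the framework in question is Fleet (if I’ve got the name right), and the workflow boils down to pointing it at a Git repository and telling it which downstream clusters to target. The sketch below uses the Kubernetes Python client to create a GitRepo custom resource; the fleet.cattle.io group/version, the “fleet-default” namespace, and the spec fields are my assumptions from memory rather than anything Rancher provided, so check the Fleet documentation before using it.

# Hedged sketch: register a Git repository with Fleet so its manifests get rolled
# out to every downstream cluster labelled env=edge. Group/version, namespace, and
# spec fields are assumptions from memory -- verify against the Fleet docs.
from kubernetes import client, config

config.load_kube_config()

git_repo = {
    "apiVersion": "fleet.cattle.io/v1alpha1",  # assumed group/version
    "kind": "GitRepo",
    "metadata": {"name": "edge-apps", "namespace": "fleet-default"},
    "spec": {
        "repo": "https://github.com/example/edge-apps",  # hypothetical repository
        "branch": "main",
        "paths": ["manifests"],
        "targets": [{"clusterSelector": {"matchLabels": {"env": "edge"}}}],
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="fleet.cattle.io",
    version="v1alpha1",
    namespace="fleet-default",
    plural="gitrepos",
    body=git_repo,
)

The appeal at edge scale is that the Git repository, rather than an operator clicking through a UI, becomes the source of truth for thousands of K3s clusters.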


Other Notes

Liang mentioned that uptake of Longhorn (made generally available in May 2020) has been great, with over 10,000 active deployments (not just downloads) in the wild. He noted that persistent storage with K8s has been hard to do, and Longhorn has gone some way to improving that experience. K3s is now a CNCF Sandbox project, not just a Rancher project, and this has certainly helped with its popularity as well. He also mentioned that the acquisition by SUSE was continuing to progress, and was expected to close in Q4 2020.


Thoughts and Further Reading

Longtime readers of this blog will know that my background is fairly well entrenched in infrastructure as opposed to cloud-native technologies. Liang understands this, and always does a pretty good job of translating some of the concepts he talks about with me back into infrastructure terms. The world continues to change, though, and the popularity of Kubernetes and solutions from the likes of Rancher Labs highlights that it’s no longer a simple conversation about LUNs, CPUs, network throughput and which server I’ll use to host my application. Organisations are looking for effective ways to get the most out of their technology investment, and Kubernetes can provide an extremely effective way of deploying and running containerised applications in an agile and efficient fashion. That said, the bar for entry into the cloud-native world can still be considered pretty high, particularly when you need to do things at large scale. This is where I think platforms like the one from Rancher Labs make so much sense.

I may have described some elements of cloud-native architecture as a bin fire previously, but I think the progress that Rancher is making demonstrates just how far we’ve come. I know that VMware and Kubernetes have little in common, but it strikes me that we’re seeing the same development progress that we saw 15 years ago with VMware (and ESX in particular). I remember at the time that VMware seemed like a whole bunch of weird to many infrastructure folks, and it wasn’t until much later that these same people were happily using VMware in every part of the data centre. I suspect that the adoption of Kubernetes (and useful management frameworks for it) will be a bit quicker than that, but it’s going to be heavily reliant on solutions like this to broaden the appeal of what’s a very useful (but nonetheless challenging) container deployment and management ecosystem.

If you’re in the APAC region, Rancher is hosting a webinar in a friendly timezone later this month. You can get more details on that here. And if you’re on US Eastern time, there’s the “Computing on the Edge with Kubernetes” one-day event that’s worth checking out.

Spectro Cloud – Profile-Based Kubernetes Management For The Enterprise


Spectro Cloud launched in March. I recently had the opportunity to speak to Tenry Fu (CEO) and Tina Nolte (VP, Products) about the launch and what Spectro Cloud is all about, and thought I’d share some notes here.


The Problem?

I was going to start this article by saying that Kubernetes in the enterprise is a bin fire, but that’s too harsh (and entirely unfair on the folks who are doing it well). There is, however, a frequent compromise being made between ease of use, control, and visibility.

[image courtesy of Spectro Cloud]

According to Fu, the way that enterprises consume Kubernetes shouldn’t just be on the left or the right side of the diagram. There is a way to do both.


The Solution?

According to the team, Spectro Cloud is “a SaaS platform that gives Enterprises control over Kubernetes infrastructure stack integrations, consistently and at scale”. What does that mean though? Well, you get access to the “table stakes” SaaS management, including:

  • Managed Kubernetes experience;
  • Multi-cluster and environment management; and
  • Enterprise features.

Profile-Based Management

You also get some cool stuff that leverages profile-based management heavily, including infrastructure stack modelling and lifecycle management driven by integration policies. In short, you build cluster profiles and then apply them to your infrastructure. A cluster profile usually describes the OS flavour and version, Kubernetes version, storage configuration, networking drivers, and so on. The Pallet orchestrator then uses these profiles to maintain the desired cluster state. There are also security-hardened profiles available out of the box.
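
To illustrate the idea (and this is purely my own mock-up, not Spectro Cloud’s actual API or schema), a cluster profile is essentially a declarative stack definition that gets applied to clusters and then enforced. All of the names below are hypothetical.

# Hypothetical sketch of the kind of information a cluster profile captures --
# not Spectro Cloud's real schema, just an illustration of profile-based management.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class LayerSpec:
    name: str                  # e.g. an OS, Kubernetes, CNI, or CSI layer
    version: str               # pinned version for the layer
    values: Dict[str, object] = field(default_factory=dict)  # layer-specific settings

@dataclass
class ClusterProfile:
    name: str
    layers: List[LayerSpec]

# Declare the stack once, then apply the same profile to any number of clusters.
hardened = ClusterProfile(
    name="prod-hardened",
    layers=[
        LayerSpec("ubuntu", "20.04"),
        LayerSpec("kubernetes", "1.19.4", {"cis_hardening": True}),
        LayerSpec("calico", "3.16"),
        LayerSpec("vsphere-csi", "2.0", {"datastore": "vsanDatastore"}),
    ],
)

The orchestrator’s job is then reconciliation: if a cluster drifts from its profile (or the profile is updated), it works out what needs to change and applies it, which is what maintaining the desired cluster state means in practice.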

If you’re a VMware-based cloud user, the appliance (deployed from an OVA file) sits in your on-premises VMware cloud environment and communicates with the Spectro Cloud SaaS offering over TLS, and the cloud properties are dynamically propagated.

Licensing

The solution is licensed on the number of worker node cores under management. Licensing is tiered based on the number of cores, and it follows a simple model: more cores and a longer commitment mean a bigger discount.


The Differentiator?

Current Kubernetes deployment options vary in their complexity and maturity. You can take the DIY path, but you might find that this option is difficult to maintain at scale. There are packaged options available, such as VMware Tanzu, but you might find that multi-cluster management is not always a focus. Managed Kubernetes offerings (such as those from Google and AWS) have their appeal to the enterprise crowd, but they are normally quite restricted in terms of technology offerings and available versions.

Why does Spectro Cloud have appeal as a solution then? Because you get control over the integrations you might want to use with your infrastructure, but also get the warm and fuzzy feeling of leveraging a managed service experience.


Thoughts

I’m no great fan of complexity for complexity’s sake, particularly when it comes to enterprise IT deployments. That said, there are always reasons why things get complicated in the enterprise. Requirements come from all parts of the business, legacy applications need to be fed and watered, and rules and regulations seem to be in place simply to make things difficult. Enterprise application owners crave solutions like Kubernetes because there’s some hope that they, too, can deliver modern applications if only they adopt modern application deployment and management constructs. Unfortunately, Kubernetes can be a real pain in the rear to get right, particularly at scale. And if enterprise IT has taught us anything, it’s that most enterprise shops are struggling to do the basics well, let alone the needlessly complicated stuff.

Solutions like the one from Spectro Cloud aren’t a silver bullet for enterprise organisations looking to modernise the way applications are deployed, scaled, and managed. But a platform like this certainly has great appeal given the inherent difficulties you’re likely to experience if you’re coming at this from a standing start. Sure, if you’re a mature Kubernetes shop, chances are slim that you really need something like this. But if you’re still new to it, or are finding that the managed offerings don’t give you the flexibility you need, then Spectro Cloud could be just what you’re looking for.