Cohesity DataProtect Delivered As A Service – A Few Notes

As part of a recent vExpert giveaway the folks at Cohesity gave me a 30-day trial of the Cohesity DataProtect Delivered as a Service offering. This is a component of Cohesity’s Data Management as a Service (DMaaS) offering and, despite the slightly unwieldy name, it’s a pretty neat solution. I want to be clear that it’s been a little while since I had any real stick time with Cohesity’s DataProtect offering, and I’m looking at this in a friend’s home lab, so I’m making no comments or assertions regarding the performance of the service. I’d also like to be clear that I’m not making any recommendation one way or another with regards to the suitability of this service for your organisation. Every organisation has its own requirements and it’s up to you to determine whether this is the right thing for you.

 

Overview

I’ve added a longer article here that explains the setup process in more depth, but here’s the upshot of what you need to do to get up and running. In short, you sign up, select the region you want to back up workloads to, configure your SaaS Connectors for the particular workloads you’d like to protect, and then go nuts. It’s really pretty simple.

Workloads

In terms of supported workloads, the following environments are currently supported:

  • Hypervisors (VMware and Hyper-V);
  • NAS (generic SMB and NFS, Isilon, and NetApp);
  • Microsoft SQL Server;
  • Oracle;
  • Microsoft 365;
  • Amazon AWS; and
  • Physical hosts.

This list will obviously grow as support for particular workloads in DataProtect and Helios improves over time.

Regions

The service is currently available in seven AWS Regions:

  • US East (Ohio)
  • US East (N. Virginia)
  • US West (Oregon)
  • US West (N. California)
  • Canada (Central)
  • Asia Pacific (Sydney)
  • Europe (Frankfurt)

You’ve got some flexibility in terms of where you store your data, but it’s my understanding that the telemetry data (i.e. Helios) goes to one of the US East Regions. It’s also important to note that once you’ve put data in a particular Region, you can’t then move that data to another Region.

Encryption

Data is encrypted in-flight and at rest, and you have a choice of KMS solutions (Cohesity-managed or DIY AWS KMS). Note that once you choose a KMS, you cannot change your mind. Well, you can, but you can’t do anything about it.

 

Thoughts

Data protection as a service offerings are proving increasingly popular with customers, data protection vendors, and service providers. The appeal for the punters is that they can apply some of the same thinking to protecting their investment in their cloud as they did to standing it up in the first place. The appeal for the vendors and SPs is that they can deliver service across a range of platforms without shipping tin anywhere, and build up annuity business as well.

With regards to this particular solution, it still has some rough edges, but it’s great to see just how much can already be achieved. As I mentioned, it’s been a while since I had some time with DataProtect, and the usability and functionality of both it and Helios have really come along in leaps and bounds. And the beauty of this being a vendor-delivered as-a-Service offering is that features can be rolled out frequently, rather than waiting for quarterly improvements to arrive via regularly scheduled software maintenance releases. Once you get your head around the workflow, things tend to work as expected, and it was fairly simple to get everything set up and working in a short period of time.

This isn’t for everyone, obviously. If you’re not a fan of doing things in AWS, then you’re really not going to like how this works. And if you don’t operate near one of the currently supported Regions, then the tyranny of bandwidth (i.e. physics) may prevent reasonable recovery times from being achievable for you. It might seem a bit silly, but these are nonetheless things you need to consider when looking at adopting a service like this. It’s also important to think about the security posture of these kinds of services. Sure, things are encrypted, and you can use MFA with Helios, but folks outside the US sometimes don’t really dig the idea of any of their telemetry data living in the US. Sure, it’s a little bit tinfoil hat, but you’d be surprised how much it comes up. And it should be noted that this is the same for on-premises Cohesity solutions using Helios. Then again, Cohesity is by no means alone in sending telemetry data back for support and analysis purposes. It’s fairly common, and something your infosec team will likely already be across.

If you’re fine with that (and you probably should be), and looking to move away from protecting your data with on-premises solutions, or looking for something that gives you some flexible deployment and management options, this could be of interest. As I mentioned, the beauty of SaaS-based solutions is that they’re more frequently updated by the vendor with fixes and features. Plus you don’t need to do a lot of the heavy lifting in terms of care and feeding of the environment. You’ll also notice that this is the DataProtect component, and I imagine that Cohesity has plans to fill out the Data Management part of the solution more thoroughly in the future. If you’d like to try it for yourself, I believe there’s a trial you can sign up for. Finally, thanks to the Cohesity TAG folks for the vExpert giveaway and making this available to people like me.

Random Short Take #53

Welcome to Random Short Take #53. A few players have worn 53 in the NBA including Mark Eaton, James Edwards, and Artis Gilmore. My favourite though was Chocolate Thunder, Darryl Dawkins. Let’s get random.

  • I love Preston’s series of articles covering the basics of backup and recovery, and this one on backup lifecycle is no exception.
  • Speaking of data protection, Druva has secured another round of funding. You can read Mellor’s thoughts here, and the press release is here.
  • More data protection press releases? I’ve got you covered. Zerto released one recently about cloud data protection. Turns out folks like cloud when it comes to data protection. But I don’t know that everyone has realised that there’s some work still to do in that space.
  • In other press release news, Cloud Propeller and Violin Systems have teamed up. Things seem to have changed a bit at Violin Systems since StorCentric’s acquisition, and I’m interested to see how things progress.
  • This article on some of the peculiarities associated with mainframe deployments in the old days by Anthony Vanderwerdt was the most entertaining thing I’ve read in a while.
  • Alastair has been pumping out a series of articles around AWS principles, and this one on understanding your single points of failure is spot on.
  • Get excited! VMware Cloud Director 10.2.2 is out now. Read more about that here.
  • A lot of people seem to think it’s no big thing to stretch Layer 2 networks. I don’t like it, and this article from Ethan Banks covers a good number of reasons why you should think again if you’re that way inclined.

Random Short Take #52

Welcome to Random Short Take #52. A few players have worn 52 in the NBA including Victor Alexander (I thought he was getting dunked on by Shawn Kemp but it was Chris Gatling). My pick is Greg Oden though. If only his legs were the same length. Let’s get random.

  • Penguin Computing and Seagate have been doing some cool stuff with the Exos E 5U84 platform. You can read more about that here. I think it’s slightly different to the AP version that StorONE uses, but I’ve been wrong before.
  • I still love Fibre Channel (FC), as unhealthy as that seems. I never really felt the same way about FCoE though, and it does seem to be deader than tape.
  • VMware vSAN 7.0 U2 is out now, and Cormac dives into what’s new here. If you’re in the ANZ timezone, don’t forget that Cormac, Duncan and Frank will be presenting (virtually) at the Sydney VMUG *soon*.
  • This article on data mobility from my preferred Chris Evans was great. We talk a lot about data mobility in this industry, but I don’t know that we’ve all taken the time to understand what it really means.
  • I’m a big fan of Tech Field Day, and it’s nice to see presenting companies take on feedback from delegates and putting out interesting articles. Kit’s a smart fellow, and this article on using VMware Cloud for application modernisation is well worth reading.
  • Preston wrote about some experiences he had recently with almost failing drives in his home environment, and raised some excellent points about resilience, failure, and caution.
  • Speaking of people I worked with briefly, I’ve enjoyed Siobhán’s series of articles on home automation. I would never have the patience to do this, but I’m awfully glad that someone did.
  • Datadobi appears to be enjoying some success, and have appointed Paul Repice to VP of Sales for the Americas. As the clock runs down on the quarter, I’m going two for one, and also letting you know that Zerto has done some work to enhance its channel program.

Random Short Take #47

Welcome to Random Short Take #47. Not a great many players have worn 47 in the NBA, but Andrei “AK-47” Kirilenko did. So let’s get random.

  • I’ve been doing some stuff with Runecast in my day job, so this post over at Gestalt IT really resonated.
  • I enjoyed this article from Alastair on AWS Design, and the mention of “handcrafted perfection” in particular has put an abrupt end to any yearning I’d been doing to head back into the enterprise fray.
  • Speaking of AWS, you can now hire Mac mini instances. Frederic did a great job of documenting the process here.
  • Liking VMware Cloud Foundation but wondering if you can get it via your favourite public cloud provider? Wonder no more with this handy reference from Simon Long.
  • Ransomware. Seems like everyone’s doing it. This was a great article on the benefits of the air gap approach to data protection. Remember, it’s not a matter of if, but when.
  • Speaking of data protection and security, BackupAssist Classic v11 launched recently. You can read the press release here.
  • Using draw.io but want to use some VVD stencils? Christian has the scoop here.
  • Speaking of VMware Cloud Director, Steve O has a handy guide on upgrading to 10.2 that you can read here.

Druva Update – Q3 2020

I caught up with my friend W. Curtis Preston from Druva a little while ago to talk about what the company has been up to. It seems like quite a bit, so I thought I’d share some notes here.

 

DXP and Company Update

Firstly, Druva’s first conference, DXP, is coming up shortly. There’s an interesting range of topics and speakers, and it looks to be jam-packed with useful info. You can find out more and register for that here. The company seems to be going from strength to strength, enjoying 50% year-on-year growth, and 70% for Phoenix in particular (its DC product).

If you’re into Gartner Peer Insights – Druva has taken out the top award in 3 categories – file analysis, DRaaS, and data centre backup. Preston also tells me Druva is handling around 5 million backups a day, for what it’s worth. Finally, if you’re into super fluffy customer satisfaction metrics, Druva is reporting an “industry-leading NPS score of 88” that has been third-party verified.

 

Product News

It’s Fun To Read The CCPA

If you’re unfamiliar, California has released its version of the GDPR, known as the California Consumer Privacy Act. Druva has created a template for data types that shouldn’t be stored in plain text and can flag them as they’re backed up. It can also do the same thing in email, and you can now do a federated search against both of these things. If anything turns up that shouldn’t be there, you can go and remove problematic files.
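To make the idea concrete, here’s a minimal sketch of that kind of pattern-based flagging. The pattern names and regexes are my own illustrative assumptions, not Druva’s actual template, which will be far richer.

```python
import re

# Hypothetical patterns for data types that shouldn't be sitting around in
# plain text. These are illustrative only, not Druva's actual template.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of any sensitive data types found in plain text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]
```

A federated search over backed-up files and email could then surface anything `flag_sensitive` matches, so problematic files can be removed.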

ServiceNow Automation

Druva now has support for automated SNOW ticket creation. It’s based on some advanced logic, too. For example, if a backup fails 3 times, a ticket will be created and can be routed to the people who should be caring about such things.
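As a rough sketch of the kind of logic described above: only raise a ServiceNow ticket once a job has failed a few times in a row. The threshold handling and the `create_ticket` callback are my own illustrative assumptions, not Druva’s or ServiceNow’s actual API.

```python
# Only raise a ticket after repeated consecutive failures; a single blip
# shouldn't wake anyone up. The threshold of 3 matches the example above.
FAILURE_THRESHOLD = 3

def process_result(job: dict, succeeded: bool, create_ticket) -> None:
    """Track consecutive failures for a backup job; open a ticket on the third."""
    if succeeded:
        job["consecutive_failures"] = 0
        return
    job["consecutive_failures"] = job.get("consecutive_failures", 0) + 1
    if job["consecutive_failures"] == FAILURE_THRESHOLD:
        # Route to the people who should be caring about such things.
        create_ticket(f"Backup for {job['name']} has failed {FAILURE_THRESHOLD} times")
```

Note the ticket fires exactly once, on the third consecutive failure, rather than on every subsequent failure.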

More APIs

There’s been a lot of work done to deliver more APIs, and a more robust RBAC implementation.

DRaaS

DRaaS is currently only for VMware, VMC, and AWS-based workloads. Preston tells me that users are getting an RTO of 15-20 minutes, and an RPO of 1 hour. Druva added failback support a little while ago (one VM at a time). That feature has now been enhanced, and you can failback as many workloads as you want. You can also add a prefix or suffix to a VM name, and Druva has added a failover prerequisite check as well.

 

Other Notes

In other news, Druva is now certified on VMC on Dell. It’s also added support for Microsoft Teams and Slack – both useful if you’ve stopped storing your critical data in email and started storing it in collaboration apps instead.

Storage Insights and Recommendations

There’s also a storage insights feature that is particularly good for unstructured data. If, say, 30% of your backups are media files, you might not want to back them up (unless you’re in the media streaming business, I guess). You can delete bad files from backups, and automatically create an exclusion for those file types.
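The underlying idea is simple enough to sketch: measure what share of the backup set a file category occupies, and build an exclusion when it dominates. The extension list and 30% threshold here are illustrative assumptions, not Druva’s implementation.

```python
from collections import Counter
from pathlib import PurePath

# Illustrative list of media extensions; a real product would be configurable.
MEDIA_EXTENSIONS = {".mp4", ".mov", ".mp3", ".avi"}

def media_share(paths: list[str]) -> float:
    """Fraction of backed-up files that are media files."""
    exts = Counter(PurePath(p).suffix.lower() for p in paths)
    media = sum(count for ext, count in exts.items() if ext in MEDIA_EXTENSIONS)
    return media / len(paths) if paths else 0.0

def build_exclusions(paths: list[str], threshold: float = 0.3) -> set[str]:
    """If media files dominate the backup set, exclude those extensions."""
    return set(MEDIA_EXTENSIONS) if media_share(paths) >= threshold else set()
```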

Support for K8s

Support for everyone’s favourite container orchestration system has been announced, but not yet released. Read about that here. You can now do a full backup of an entire K8s environment (AWS only in v1). This includes Docker containers, mounted volumes, and DBs referenced in those containers.

NAS Backup

Druva has enhanced its NAS backup in two ways, the first of which is performance. Preston tells me the current product is at least 10X faster than it was one year ago. Also, for customers already using a native recovery mechanism like snapshots, Druva has added the option to back up directly to Glacier, which cuts your cost in half.

Oracle Support

For Oracle, Druva has what Preston describes as “two solid options”. Right now there’s an OVA that provides a ready-to-go, appliance-like experience and uses the image copy format (supporting block-level incrementals and incremental merge). The other option will be announced next week at DXP.

 

Thoughts and Further Reading

Some of these features seem like incremental improvements, but when you put it all together, it makes for some impressive reading. Druva has done a really impressive job, in my opinion, of sticking with the built in the cloud, for the cloud mantra that dominates much of its product design. The big news is the support for K8s, but things like multi-VM failback with the DRaaS solution are nothing to sneeze at. There’s more news coming shortly, and I look forward to covering that. In the meantime, if you have the time, be sure to check out DXP – I think it will be quite an informative event.

 

 

Random Short Take #46

Welcome to Random Short Take #46. Not a great many players have worn 46 in the NBA, but one player who has is one of my favourite Aussie players: Aron “Bangers” Baynes. So let’s get random.

  • Enrico recently attended Cloud Field Day 9, and had some thoughts on NetApp’s identity in the new cloud world. You can read his insights here.
  • This article from Chris Wahl on multi-cloud design patterns was fantastic, and well worth reading.
  • I really enjoyed this piece from Russ on technical debt, and some considerations when thinking about how we can “future-proof” our solutions.
  • The Raspberry Pi 400 was announced recently. My first computer was an Amstrad CPC 464, so I have a real soft spot for jamming computers inside keyboards.
  • I enjoyed this piece from Chris M. Evans on hybrid storage, and what it really means nowadays.
  • Working from home a bit this year? Me too. Tom wrote a great article on some of the security challenges associated with the new normal.
  • Everyone has a quadrant nowadays, and Zerto has found itself in another one recently. You can read more about that here.
  • Working with VMware Cloud Director and wanting to build a custom theme? Check out this article.

ANZ VMUG Virtual Event – November 2020


The November edition of the Brisbane VMUG meeting is a special one – we’re doing a joint session with a number of the other VMUG chapters in Australia and New Zealand. It will be held on Tuesday 17th November on Zoom from 3pm – 5pm AEST. It’s sponsored by Google Cloud for VMware and promises to be a great afternoon.

Here’s the agenda:

  • VMUG Intro
  • VMware Presentation: VMware SASE
  • Google Presentation: Google Cloud VMware Engine Overview
  • Q&A

Google Cloud has gone to great lengths to make sure this will be a fun and informative session and I’m really looking forward to hearing about Google Cloud VMware Engine. You can find out more information and register for the event here. I hope to see you there. Also, if you’re interested in sponsoring one of these events, please get in touch with me and I can help make it happen.

Zerto Announces 8.5 and Zerto Data Protection

Zerto recently announced 8.5 of its product, along with a new offering, Zerto Data Protection (ZDP). I had the good fortune to catch up with Caroline Seymour (VP, Product Marketing) about the news and thought I’d share some thoughts here.

 

ZDP, Yeah You Know Me

Global Pandemic for $200 Please, Alex

In “these uncertain times”, organisations are facing new challenges:

  • No downtime, no data loss, 24/7 availability
  • Influx of remote work
  • Data growth and sprawl
  • Security threats
  • Acceleration of cloud

Many of these things were already a problem, and the global pandemic has done a great job highlighting them.

“Legacy Architecture”

Zerto paints a bleak picture of the “legacy architecture” adopted by many of the traditional data protection solutions, positing that many IT shops need to use a variety of tools to get to a point where operations staff can sleep better at night. Disaster recovery, for example, is frequently handled via replication for mission-critical applications, with backup being performed via periodic snapshots for all other applications. ZDP aims to bring all this together under one banner of continuous data protection, delivering:

  • Local continuous backup and long-term retention (LTR) to public cloud; and
  • Pricing optimised for backup.

[image courtesy of Zerto]

Features

[image courtesy of Zerto]

So what do you get with ZDP? Some neat features, including:

  • Continuous backup with journal
  • Instant restore from local journal
  • Application consistent recovery
  • Short-term SLA policy settings
  • Intelligent index and search
  • LTR to disk, object or Cloud (Azure, AWS)
  • LTR policies, daily incremental with weekly, monthly or yearly fulls
  • Data protection workflows
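The LTR policy above (daily incrementals with weekly, monthly or yearly fulls) can be sketched as a simple scheduling rule. The specific choices below – fulls on the first of the month, the first of the year, and Sundays – are my own assumptions for illustration; Zerto’s actual policy settings are configurable.

```python
from datetime import date

def backup_type(day: date) -> str:
    """Classify a day's LTR job: daily incremental, with periodic fulls.
    Which day triggers each full is an assumption here, not Zerto's default."""
    if day.month == 1 and day.day == 1:
        return "yearly full"
    if day.day == 1:
        return "monthly full"
    if day.weekday() == 6:  # Sunday
        return "weekly full"
    return "daily incremental"
```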

 

New Licensing

It wouldn’t be a new software product without some mention of new licensing. If you want to use ZDP, you get:

  • Backup for short-term retention and LTR;
  • On-premises or backup to cloud;
  • Analytics; and
  • Orchestration and automation for backup functions.

If you’re sticking with (the existing) Zerto Cloud Edition, you get:

  • Everything in ZDP;
  • Disaster Recovery for on-premises and cloud;
  • Multi-cloud support; and
  • Orchestration and automation.

 

Zerto 8.5

A big focus of Zerto’s recently has been VMware on public cloud support, including the various flavours of VMware on Azure, AWS, and Oracle Cloud. There are a bunch of reasons why this approach has proven popular with existing VMware customers looking to migrate from on-premises to public cloud, including:

  • Native VMware support – run existing VMware workloads natively on IaaS;
  • Policies and configuration don’t need to change;
  • Minimal changes – no need to refactor applications; and
  • IaaS benefits – reliability, scale, and operational model.

[image courtesy of Zerto]

New in 8.5

With 8.5, you can now backup directly to Microsoft Azure and AWS. You also get instant file and folder restores to production. There’s now support for VMware on public cloud disaster recovery and data protection for Microsoft Azure VMware Solution, Google Cloud VMware Engine, and the Oracle Cloud VMware Solution. You also get platform automation and lifecycle management features, including:

  • Auto-evacuate for recovery hosts;
  • Auto-populate for recovery hosts; and
  • Encryption capabilities.

And finally, a Zerto PowerShell Cmdlets Module has also been released.

 

Thoughts and Further Reading

The writing’s been on the wall for some time that Zerto might need to expand its solution offering to incorporate backup and recovery. Continuous data protection is a great feature and my experience with Zerto has been that it does what it says on the tin. The market, however, is looking for ways to consolidate solution offerings in order to save a few more dollarydoos and keep the finance department happy. I haven’t seen the street pricing for ZDP, but Seymour seemed confident that it stacks up well against the more traditional data protection options on the market, particularly when compared against offerings that incorporate components that deal with CDP and periodic data protection with different tools. There’s a new TCO calculator on the Zerto website, and there’s also the opportunity to talk to a Zerto account representative about your particular needs.

I’ve always treated regular backup and recovery and disaster recovery as very different things, mainly because they are. Companies frequently make the mistake of trying to cobble together some kind of DR solution using traditional backup and recovery tools. I’m interested to see how Zerto goes with this approach. It’s not the first company to converge elements that fit in the data protection space together, and it will be interesting to see how much of the initial uptake of ZDP is with existing customers or net new logos. The broadening of support for the VMware on X public cloud workloads is good news for enterprises too (putting aside my thoughts on whether or not that’s a great long-term strategy for said enterprises). There’s some interesting stuff happening, and I’m looking forward to seeing how the story unfolds over the next 6 – 12 months.

Pure Storage Acquires Portworx

Pure Storage announced its intention to acquire Portworx in mid-September. Around that time I had the opportunity to talk about the news with Goutham Rao (Portworx CTO) and Matt Kixmoeller (Pure Storage VP, Strategy) and thought I’d share some brief thoughts here.

 

The News

Pure and Portworx have entered an agreement that will see Pure pay approximately $370M US in cash. Portworx will form a new Cloud Native Business Unit inside Pure to be led by Portworx CEO Murli Thirumale. All Portworx founders are joining Pure, with Pure investing significantly to grow the new business unit. According to Pure, “Portworx software to continue as-is, supporting deployments in any cloud and on-premises, and on any bare metal, VM, or array-based storage”. It was also noted that “Portworx solutions to be integrated with Pure yet maintain a commitment to an open ecosystem”.

About Portworx

Described as the “leading Kubernetes data services platform”, Portworx was founded in 2014 in Los Altos, CA. It runs a 100% software, subscription, and cloud business model with development and support sites in California, India, and Eastern Europe. The product has been GA since 2017, and is used by some of the largest enterprise and Cloud / SaaS companies globally.

 

What’s A Portworx?

The idea behind Portworx is that it gives you data services for any application, on any Kubernetes distribution, running on any cloud, any infrastructure, and at any stage of the application lifecycle. To that end, it’s broken up into a bunch of different components, and runs in the K8s control plane adjacent to the applications.

PX-Store

  • Software-defined storage layer that automates container storage for developers and admins
  • Consistent storage APIs: cloud, bare metal, or arrays

PX-Migrate

  • Easily move applications between clusters
  • Enables hybrid cloud and multi-cloud mobility

PX-Backup

  • Application-consistent backup for cloud native apps with all k8s artefacts and state
  • Backup to any cloud or on-premises object storage

PX-Secure

  • Implement consistent encryption and security policies across clouds
  • Enable multi-tenancy with access controls

PX-DR

  • Sync and async replication between Availability Zones and regions
  • Zero RPO active / active for high resiliency

PX-Autopilot

  • GitOps-driven automation provides an easier platform for non-storage experts to deploy stateful applications; it monitors everything about an application, and reacts to prevent problems from happening
  • Auto-scale storage as your app grows to reduce costs
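The auto-scaling idea is easy to illustrate: watch volume usage and grow capacity before the application runs out of space. The high-water mark and growth factor below are illustrative assumptions of my own, not Portworx defaults.

```python
def next_capacity_gb(used_gb: float, capacity_gb: float,
                     high_water: float = 0.8, growth_factor: float = 1.5) -> float:
    """Return the new volume size, expanding once usage crosses the
    high-water mark; otherwise leave the capacity as-is."""
    if used_gb / capacity_gb >= high_water:
        return capacity_gb * growth_factor
    return capacity_gb
```

Growing ahead of demand like this is what lets you start volumes small and reduce costs, rather than over-provisioning up front.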

 

How It Fits Together

When you bring Portworx into the Pure Storage picture, you start to see how well it fits with Pure’s existing portfolio. In the image below you’ll also see support for the standard container storage interface (CSI) to work with other vendors.

[image courtesy of Pure Storage]

Also worth noting is that PX-Essentials remains free forever for workloads under 5TB and 5 nodes.

 

Thoughts and Further Reading

I think this is a great move by Pure, mainly because it lends them a whole lot more credibility with the DevOps folks. Pure was starting to make inroads with Pure Storage Orchestrator, and I think this move will strengthen that story. Giving Portworx access to Pure’s salesforce globally is also going to broaden its visibility in the market and open up doors to markets that may have been difficult to get into previously.

Persistent storage for containers is heating up. As Rao pointed out in our discussion, “as container adoption grows, storage becomes a problem”. Portworx already had a good story to tell in this space, and Pure is no slouch when it comes to delivering advanced storage capabilities across a variety of platforms. I like that the messaging has been firmly based in maintaining the openness of the platform and I’m interested to see what other integrations happen as the two companies start working more closely together. If you’d like another perspective on the news, check out Chris Evans’s article here.

Rancher Labs Announces 2.5

Rancher Labs recently announced version 2.5 of its platform. I had the opportunity to catch up with co-founder and CEO Sheng Liang about the release and other things that Rancher has been up to and thought I’d share some of my notes here.

 

Introducing Rancher Labs 2.5

Liang described Rancher as a way for organisations to “[f]ocus on enriching their own apps, rather than trying to be a day 1, day 2 K8s outfit”. With that thinking in mind, the new features in 2.5 are as follows:

  1. Rancher now installs everywhere – on EKS, OpenShift, whatever – and they’ve removed a bunch of dependencies. Rancher 2.5 can now be installed on any CNCF-certified Kubernetes cluster, eliminating the need to set up a separate Kubernetes cluster before installing Rancher. The new lightweight installation experience is useful for users who already have access to a cloud-managed Kubernetes service like EKS.
  2. Enhanced management for EKS. Rancher Labs was a launch partner for EKS and used to treat it like a dumb distribution. The management architecture has been revamped with improved lifecycle management for EKS. It now uses the native EKS way of doing various things and only adds value where it’s not already present.
  3. Managing edge clusters. Liang described K3s as “almost the go-to distribution for edge computing (5G, IoT, ATMs, etc.)”. When you get into some of these scenarios, the scale of operations becomes pretty big, and you need to re-think multi-cluster management accordingly. To accommodate that scale, Rancher has created its own GitOps framework – “GitOps at scale”.
  4. K8s has plenty of traction in government and high security environments, hence the development of RKE Government Edition.

 

Other Notes

Liang mentioned that Longhorn uptake has been great since it was made generally available in May 2020, with over 10,000 active deployments (not just downloads) in the wild now. He noted that persistent storage with K8s has been hard to do, and Longhorn has gone some way to improving that experience. K3s is now a CNCF Sandbox project, not just a Rancher project, and this has certainly helped with its popularity as well. He also mentioned the acquisition by SUSE was continuing to progress, and expected it would be closed in Q4, 2020.

 

Thoughts and Further Reading

Longtime readers of this blog will know that my background is fairly well entrenched in infrastructure as opposed to cloud-native technologies. Liang understands this, and always does a pretty good job of translating some of the concepts he talks about with me back into infrastructure terms. The world continues to change, though, and the popularity of Kubernetes and solutions like Rancher Labs highlights that it’s no longer a simple conversation about LUNs, CPUs, network throughput and which server I’ll use to host my application. Organisations are looking for effective ways to get the most out of their technology investment, and Kubernetes can provide an extremely effective way of deploying and running containerised applications in an agile and efficient fashion. That said, the bar for entry into the cloud-native world can still be considered pretty high, particularly when you need to do things at large scale. This is where I think platforms like the one from Rancher Labs make so much sense. I may have described some elements of cloud-native architecture as a bin fire previously, but I think the progress that Rancher is making demonstrates just how far we’ve come. I know that VMware and Kubernetes have little in common, but it strikes me that we’re seeing the same development progress that we saw 15 years ago with VMware (and ESX in particular). I remember at the time that VMware seemed like a whole bunch of weird to many infrastructure folks, and it wasn’t until much later that these same people were happily using VMware in every part of the data centre. I suspect that the adoption of Kubernetes (and useful management frameworks for it) will be a bit quicker than that, but it’s going to be heavily reliant on solutions like this to broaden the appeal of what’s a very useful (but nonetheless challenging) container deployment and management ecosystem.

If you’re in the APAC region, Rancher is hosting a webinar in a friendly timezone later this month. You can get more details on that here. And if you’re on US Eastern time, there’s the “Computing on the Edge with Kubernetes” one day event that’s worth checking out.