- I’ve been doing some stuff with Runecast in my day job, so this post over at Gestalt IT really resonated.
- I enjoyed this article from Alastair on AWS Design, and the mention of “handcrafted perfection” in particular has put an abrupt end to any yearning I’d had to head back into the enterprise fray.
- Speaking of AWS, you can now hire Mac mini instances. Frederic did a great job of documenting the process here.
- Liking VMware Cloud Foundation but wondering if you can get it via your favourite public cloud provider? Wonder no more with this handy reference from Simon Long.
- Ransomware. Seems like everyone’s doing it. This was a great article on the benefits of the air gap approach to data protection. Remember, it’s not a matter of if, but when.
- Speaking of data protection and security, BackupAssist Classic v11 launched recently. You can read the press release here.
- Using draw.io but want to use some VVD stencils? Christian has the scoop here.
- Speaking of VMware, Steve O has a handy guide on upgrading VMware Cloud Director to 10.2 that you can read here.
DXP and Company Update
First up, Druva’s inaugural conference, DXP, is coming up shortly. There’s an interesting range of topics and speakers, and it looks to be jam-packed with useful info. You can find out more and register for that here. The company seems to be going from strength to strength, enjoying 50% year-on-year growth, and 70% for Phoenix (its data centre product) in particular.
If you’re into Gartner Peer Insights – Druva has taken out the top award in 3 categories – file analysis, DRaaS, and data centre backup. Preston also tells me Druva is handling around 5 million backups a day, for what it’s worth. Finally, if you’re into super fluffy customer satisfaction metrics, Druva is reporting an “industry-leading NPS score of 88” that has been third-party verified.
It’s Fun To Read The CCPA
If you’re unfamiliar, California has enacted its version of the GDPR, known as the California Consumer Privacy Act (CCPA). Druva has created a template for data types that shouldn’t be stored in plain text and can flag them as they’re backed up. It can also do the same thing in email, and you can now run a federated search against both of these things. If anything turns up that shouldn’t be there, you can go in and remove the problematic files.
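For the curious, that kind of plain-text flagging boils down to pattern matching against known-sensitive data types. Here’s a rough Python sketch of the idea – the patterns and function names are my own invention for illustration, not Druva’s actual templates:

```python
import re

# Hypothetical patterns for data types that shouldn't be stored in plain text.
# A real product would ship far more robust, validated templates than these.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def flag_sensitive(text):
    """Return the names of any sensitive data types found in `text`."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

print(flag_sensitive("Customer SSN: 123-45-6789"))  # -> ['ssn']
```

The point is that once something like this runs over data as it’s backed up, a federated search for flagged items across backups and email becomes straightforward.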
Druva now has support for automated ServiceNow (SNOW) ticket creation. It’s based on some advanced logic, too. For example, if a backup fails 3 times, a ticket will be created and routed to the people who should be caring about such things.
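The “fail 3 times, raise a ticket” logic could look something like the sketch below. This is purely my own illustration – the function names and the way tickets get created are assumptions, not Druva’s implementation:

```python
from collections import defaultdict

# Hypothetical sketch of "create a ticket after 3 consecutive failures".
FAILURE_THRESHOLD = 3
failure_counts = defaultdict(int)

def record_backup_result(job_id, succeeded, create_ticket):
    """Track consecutive failures per job; raise a ticket at the threshold."""
    if succeeded:
        failure_counts[job_id] = 0  # a success resets the streak
        return
    failure_counts[job_id] += 1
    if failure_counts[job_id] == FAILURE_THRESHOLD:
        create_ticket(f"Backup job {job_id} failed {FAILURE_THRESHOLD} times")

tickets = []
for result in [False, False, False]:
    record_backup_result("vm-42", result, tickets.append)
print(tickets)  # -> ['Backup job vm-42 failed 3 times']
```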
There’s been a lot of work done to deliver more APIs, and a more robust RBAC implementation.
DRaaS is currently only for VMware, VMC, and AWS-based workloads. Preston tells me that users are getting an RTO of 15-20 minutes, and an RPO of 1 hour. Druva added failback support a little while ago (one VM at a time). That feature has now been enhanced, and you can fail back as many workloads as you want. You can also add a prefix or suffix to a VM name, and Druva has added a failover prerequisite check as well.
In other news, Druva is now certified on VMC on Dell. It’s added support for Microsoft Teams and support for Slack. Both useful if you’ve stopped storing your critical data in email and started storing it in collaboration apps instead.
Storage Insights and Recommendations
There’s also a storage insights feature that is particularly good for unstructured data. Say, for example, that 30% of your backups are media files; you might not want to back those up (unless you’re in the media streaming business, I guess). You can delete bad files from backups and automatically create an exclusion for those file types.
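Conceptually, that insight is just working out what fraction of a backup set is a given file type and generating an exclusion when it crosses a threshold. A hedged sketch, with made-up file lists and limits:

```python
# Illustrative only: estimate the media share of a backup set and build an
# exclusion list if it crosses a threshold. Not Druva's actual logic.
MEDIA_EXTENSIONS = {".mp4", ".mov", ".mp3", ".avi"}

def media_share(files):
    """files: list of (name, size_bytes). Return media fraction of total size."""
    total = sum(size for _, size in files)
    media = sum(size for name, size in files
                if any(name.lower().endswith(ext) for ext in MEDIA_EXTENSIONS))
    return media / total if total else 0.0

files = [("report.docx", 10), ("holiday.mp4", 60), ("song.mp3", 30)]
share = media_share(files)
exclusions = sorted(MEDIA_EXTENSIONS) if share >= 0.3 else []
print(round(share, 2))  # -> 0.9
```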
Support for K8s
Support for everyone’s favourite container orchestration system has been announced, though not yet released. Read about that here. You’ll be able to do a full backup of an entire K8s environment (AWS only in v1). This includes Docker containers, mounted volumes, and DBs referenced in those containers.
Druva has enhanced its NAS backup in two ways, the first of which is performance. Preston tells me the current product is at least 10X faster than it was a year ago. Also, for customers already using a native recovery mechanism like snapshots, Druva has added the option to back up directly to Glacier, which cuts your cost in half.
For Oracle, Druva has what Preston describes as “two solid options”. Right now there’s an OVA that provides a ready-to-go, appliance-like experience and uses the image copy format (supporting block-level incremental and incremental merge). The other option will be announced next week at DXP.
Thoughts and Further Reading
Some of these features seem like incremental improvements, but when you put it all together, it makes for some impressive reading. Druva has done a really impressive job, in my opinion, of sticking with the “built in the cloud, for the cloud” mantra that dominates much of its product design. The big news is the support for K8s, but features like multi-VM failback with the DRaaS solution are nothing to sneeze at. There’s more news coming shortly, and I look forward to covering that. In the meantime, if you have the time, be sure to check out DXP – I think it will be quite an informative event.
- Enrico recently attended Cloud Field Day 9, and had some thoughts on NetApp’s identity in the new cloud world. You can read his insights here.
- This article from Chris Wahl on multi-cloud design patterns was fantastic, and well worth reading.
- I really enjoyed this piece from Russ on technical debt, and some considerations when thinking about how we can “future-proof” our solutions.
- The Raspberry Pi 400 was announced recently. My first computer was an Amstrad CPC 464, so I have a real soft spot for jamming computers inside keyboards.
- I enjoyed this piece from Chris M. Evans on hybrid storage, and what it really means nowadays.
- Working from home a bit this year? Me too. Tom wrote a great article on some of the security challenges associated with the new normal.
- Everyone has a quadrant nowadays, and Zerto has found itself in another one recently. You can read more about that here.
- Working with VMware Cloud Director and wanting to build a custom theme? Check out this article.
The November edition of the Brisbane VMUG meeting is a special one – we’re doing a joint session with a number of the other VMUG chapters in Australia and New Zealand. It will be held on Tuesday 17th November on Zoom from 3pm – 5pm AEST. It’s sponsored by Google Cloud for VMware and promises to be a great afternoon.
Here’s the agenda:
- VMUG Intro
- VMware Presentation: VMware SASE
- Google Presentation: Google Cloud VMware Engine Overview
Google Cloud has gone to great lengths to make sure this will be a fun and informative session and I’m really looking forward to hearing about Google Cloud VMware Engine. You can find out more information and register for the event here. I hope to see you there. Also, if you’re interested in sponsoring one of these events, please get in touch with me and I can help make it happen.
Zerto recently announced 8.5 of its product, along with a new offering, Zerto Data Protection (ZDP). I had the good fortune to catch up with Caroline Seymour (VP, Product Marketing) about the news and thought I’d share some thoughts here.
ZDP, Yeah You Know Me
Global Pandemic for $200 Please, Alex
In “these uncertain times”, organisations are facing new challenges:
- No downtime, no data loss, 24/7 availability
- Influx of remote work
- Data growth and sprawl
- Security threats
- Acceleration of cloud
Many of these things were already a problem, and the global pandemic has done a great job highlighting them.
Zerto paints a bleak picture of the “legacy architecture” adopted by many of the traditional data protection solutions, positing that many IT shops need to use a variety of tools to get to a point where operations staff can sleep better at night. Disaster recovery, for example, is frequently handled via replication for mission-critical applications, with backup being performed via periodic snapshots for all other applications. ZDP aims to bring all this together under one banner of continuous data protection, delivering:
- Local continuous backup and long-term retention (LTR) to public cloud; and
- Pricing optimised for backup.
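If you’re wondering what continuous data protection actually buys you over periodic snapshots, here’s a toy Python sketch: every change is recorded in a journal with a timestamp, so any point in time can be rebuilt, rather than only the points where a snapshot happened to land. This is purely illustrative and nothing like Zerto’s real journal implementation:

```python
# Minimal continuous-data-protection journal sketch (illustrative only).
class Journal:
    def __init__(self):
        self.entries = []  # (timestamp, key, value), appended in time order

    def record(self, ts, key, value):
        self.entries.append((ts, key, value))

    def restore(self, ts):
        """Replay all changes up to `ts` to rebuild that point-in-time state."""
        state = {}
        for entry_ts, key, value in self.entries:
            if entry_ts > ts:
                break
            state[key] = value
        return state

j = Journal()
j.record(1, "blockA", "v1")
j.record(2, "blockA", "v2")
j.record(3, "blockB", "v1")
print(j.restore(2))  # -> {'blockA': 'v2'}
```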
[image courtesy of Zerto]
So what do you get with ZDP? Some neat features, including:
- Continuous backup with journal
- Instant restore from local journal
- Application consistent recovery
- Short-term SLA policy settings
- Intelligent index and search
- LTR to disk, object or Cloud (Azure, AWS)
- LTR policies, daily incremental with weekly, monthly or yearly fulls
- Data protection workflows
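The LTR policy of daily incrementals with weekly, monthly or yearly fulls can be sketched as a simple date-driven decision. The precedence rules below are my own assumptions for illustration, not Zerto’s actual scheduler:

```python
from datetime import date

# Sketch of "daily incrementals with weekly/monthly/yearly fulls".
# Precedence (yearly > monthly > weekly) is an assumption for illustration.
def backup_type(d: date) -> str:
    if d.month == 1 and d.day == 1:
        return "yearly full"
    if d.day == 1:
        return "monthly full"
    if d.weekday() == 6:  # Sunday
        return "weekly full"
    return "daily incremental"

print(backup_type(date(2020, 1, 1)))   # -> yearly full
print(backup_type(date(2020, 11, 1)))  # -> monthly full
print(backup_type(date(2020, 11, 3)))  # -> daily incremental
```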
It wouldn’t be a new software product without some mention of new licensing. If you want to use ZDP, you get:
- Backup for short-term retention and LTR;
- On-premises or backup to cloud;
- Analytics; and
- Orchestration and automation for backup functions.
If you’re sticking with (the existing) Zerto Cloud Edition, you get:
- Everything in ZDP;
- Disaster Recovery for on-premises and cloud;
- Multi-cloud support; and
- Orchestration and automation.
A big focus of Zerto’s recently has been VMware on public cloud support, including the various flavours of VMware on Azure, AWS, and Oracle Cloud. There are a bunch of reasons why this approach has proven popular with existing VMware customers looking to migrate from on-premises to public cloud, including:
- Native VMware support – run existing VMware workloads natively on IaaS;
- Policies and configuration don’t need to change;
- Minimal changes – no need to refactor applications; and
- IaaS benefits – reliability, scale, and operational model.
[image courtesy of Zerto]
New in 8.5
With 8.5, you can now back up directly to Microsoft Azure and AWS. You also get instant file and folder restores to production. There’s now support for VMware on public cloud disaster recovery and data protection for Microsoft Azure VMware Solution, Google Cloud VMware Engine, and Oracle Cloud VMware Solution. You also get platform automation and lifecycle management features, including:
- Auto-evacuate for recovery hosts;
- Auto-populate for recovery hosts; and
- Encryption capabilities.
And finally, a Zerto PowerShell Cmdlets Module has also been released.
Thoughts and Further Reading
The writing’s been on the wall for some time that Zerto might need to expand its solution offering to incorporate backup and recovery. Continuous data protection is a great feature and my experience with Zerto has been that it does what it says on the tin. The market, however, is looking for ways to consolidate solution offerings in order to save a few more dollarydoos and keep the finance department happy. I haven’t seen the street pricing for ZDP, but Seymour seemed confident that it stacks up well against the more traditional data protection options on the market, particularly when compared against offerings that incorporate components that deal with CDP and periodic data protection with different tools. There’s a new TCO calculator on the Zerto website, and there’s also the opportunity to talk to a Zerto account representative about your particular needs.
I’ve always treated regular backup and recovery and disaster recovery as very different things, mainly because they are. Companies frequently make the mistake of trying to cobble together some kind of DR solution using traditional backup and recovery tools. I’m interested to see how Zerto goes with this approach. It’s not the first company to converge elements that fit in the data protection space together, and it will be interesting to see how much of the initial uptake of ZDP is with existing customers or net new logos. The broadening of support for the VMware on X public cloud workloads is good news for enterprises too (putting aside my thoughts on whether or not that’s a great long-term strategy for said enterprises). There’s some interesting stuff happening, and I’m looking forward to seeing how the story unfolds over the next 6 – 12 months.
Pure Storage announced its intention to acquire Portworx in mid-September. Around that time I had the opportunity to talk about the news with Goutham Rao (Portworx CTO) and Matt Kixmoeller (Pure Storage VP, Strategy) and thought I’d share some brief thoughts here.
Pure and Portworx have entered an agreement that will see Pure pay approximately $370M US in cash. Portworx will form a new Cloud Native Business Unit inside Pure to be led by Portworx CEO Murli Thirumale. All Portworx founders are joining Pure, with Pure investing significantly to grow the new business unit. According to Pure, “Portworx software to continue as-is, supporting deployments in any cloud and on-premises, and on any bare metal, VM, or array-based storage”. It was also noted that “Portworx solutions to be integrated with Pure yet maintain a commitment to an open ecosystem”.
Described as the “leading Kubernetes data services platform”, Portworx was founded in 2014 in Los Altos, CA. It runs a 100% software, subscription, and cloud business model with development and support sites in California, India, and Eastern Europe. The product has been GA since 2017, and is used by some of the largest enterprise and Cloud / SaaS companies globally.
What’s A Portworx?
The idea behind Portworx is that it gives you data services for any application, on any Kubernetes distribution, running on any cloud, any infrastructure, and at any stage of the application lifecycle. To that end, it’s broken up into a bunch of different components, and runs in the K8s control plane adjacent to the applications.
- Software-defined storage layer that automates container storage for developers and admins
- Consistent storage APIs: cloud, bare metal, or arrays
- Easily move applications between clusters
- Enables hybrid cloud and multi-cloud mobility
- Application-consistent backup for cloud native apps with all k8s artefacts and state
- Backup to any cloud or on-premises object storage
- Implement consistent encryption and security policies across clouds
- Enable multi-tenancy with access controls
- Sync and async replication between Availability Zones and regions
- Zero RPO active / active for high resiliency
- GitOps-driven automation provides an easier platform for non-storage experts to deploy stateful applications, monitors everything about an application, and reacts to prevent problems from happening
- Auto-scale storage as your app grows to reduce costs
How It Fits Together
When you bring Portworx into the picture, you start to see how well it fits with Pure Storage’s existing portfolio. In the diagram below you’ll also see support for the standard Container Storage Interface (CSI) to work with other vendors.
[image courtesy of Pure Storage]
Also worth noting is that PX-Essentials remains free forever for workloads under 5TB and 5 nodes.
Thoughts and Further Reading
I think this is a great move by Pure, mainly because it lends the company a whole lot more credibility with the DevOps folks. Pure was starting to make inroads with Pure Storage Orchestrator, and I think this move will strengthen that story. Giving Portworx access to Pure’s global sales force is also going to broaden its visibility in the market and open up doors to markets that may have been difficult to get into previously.
Persistent storage for containers is heating up. As Rao pointed out in our discussion, “as container adoption grows, storage becomes a problem”. Portworx already had a good story to tell in this space, and Pure is no slouch when it comes to delivering advanced storage capabilities across a variety of platforms. I like that the messaging has been firmly based in maintaining the openness of the platform and I’m interested to see what other integrations happen as the two companies start working more closely together. If you’d like another perspective on the news, check out Chris Evans’s article here.
Rancher Labs recently announced version 2.5 of its platform. I had the opportunity to catch up with co-founder and CEO Sheng Liang about the release and other things that Rancher has been up to and thought I’d share some of my notes here.
Introducing Rancher Labs 2.5
Liang described Rancher as a way for organisations to “[f]ocus on enriching their own apps, rather than trying to be a day 1, day 2 K8s outfit”. With that thinking in mind, the new features in 2.5 are as follows:
- Rancher now installs everywhere – on EKS, OpenShift, whatever – and they’ve removed a bunch of dependencies. Rancher 2.5 can now be installed on any CNCF-certified Kubernetes cluster, eliminating the need to set up a separate Kubernetes cluster before installing Rancher. The new lightweight installation experience is useful for users who already have access to a cloud-managed Kubernetes service like EKS.
- Enhanced management for EKS. Rancher Labs was a launch partner for EKS and used to treat it like a dumb distribution. The management architecture has been revamped with improved lifecycle management for EKS. It now uses the native EKS way of doing various things and only adds value where it’s not already present.
- Managing edge clusters. Liang described K3s as “almost the go-to distribution for edge computing (5G, IoT, ATMs, etc.)”. When you get into some of these scenarios, the scale of operations becomes pretty big, and you need to re-think multi-cluster management. To accommodate that scale, Rancher has created its own GitOps framework – “GitOps at scale”.
- K8s has plenty of traction in government and high security environments, hence the development of RKE Government Edition.
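For readers coming from infrastructure land, the GitOps idea is worth a quick illustration: the desired state for every cluster lives in a (version-controlled) repo, and a controller continuously converges each cluster toward it. The toy loop below is my own sketch of that pattern, and bears no resemblance to Rancher’s actual framework (Fleet), which operates at vastly larger scale:

```python
# Toy reconcile loop illustrating the GitOps pattern: converge each cluster's
# actual state toward a declared desired state. Illustrative only.
def reconcile(desired, clusters):
    """Bring every cluster's app versions in line with the desired state."""
    actions = []
    for name, actual in clusters.items():
        for app, version in desired.items():
            if actual.get(app) != version:
                actions.append(f"{name}: upgrade {app} to {version}")
                actual[app] = version
    return actions

desired = {"agent": "v2.5"}
clusters = {"edge-01": {"agent": "v2.4"}, "edge-02": {"agent": "v2.5"}}
print(reconcile(desired, clusters))  # -> ['edge-01: upgrade agent to v2.5']
```

A second pass returns no actions, which is the point: the loop is idempotent, so you can run it against thousands of edge clusters on a schedule.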
Liang mentioned that uptake of Longhorn (which was made generally available in May 2020) has been great, with over 10,000 active deployments (not just downloads) in the wild now. He noted that persistent storage with K8s has been hard to do, and Longhorn has gone some way to improving that experience. K3s is now a CNCF Sandbox project, not just a Rancher project, and this has certainly helped with its popularity as well. He also mentioned that the acquisition by SUSE was continuing to progress, and was expected to close in Q4 2020.
Thoughts and Further Reading
Longtime readers of this blog will know that my background is fairly well entrenched in infrastructure as opposed to cloud-native technologies. Liang understands this, and always does a pretty good job of translating some of the concepts he talks about with me back into infrastructure terms. The world continues to change, though, and the popularity of Kubernetes and solutions like Rancher Labs highlights that it’s no longer a simple conversation about LUNs, CPUs, network throughput and which server I’ll use to host my application. Organisations are looking for effective ways to get the most out of their technology investment, and Kubernetes can provide an extremely effective way of deploying and running containerised applications in an agile and efficient fashion. That said, the bar for entry into the cloud-native world can still be considered pretty high, particularly when you need to do things at large scale. This is where I think platforms like the one from Rancher Labs make so much sense. I may have described some elements of cloud-native architecture as a bin fire previously, but I think the progress that Rancher is making demonstrates just how far we’ve come. I know that VMware and Kubernetes have little in common, but it strikes me that we’re seeing the same development progress that we saw 15 years ago with VMware (and ESX in particular). I remember at the time that VMware seemed like a whole bunch of weird to many infrastructure folks, and it wasn’t until much later that these same people were happily using VMware in every part of the data centre. I suspect that the adoption of Kubernetes (and useful management frameworks for it) will be a bit quicker than that, but it’s going to be heavily reliant on solutions like this to broaden the appeal of what’s a very useful (but nonetheless challenging) container deployment and management ecosystem.
If you’re in the APAC region, Rancher is hosting a webinar in a friendly timezone later this month. You can get more details on that here. And if you’re on US Eastern time, there’s the “Computing on the Edge with Kubernetes” one day event that’s worth checking out.
Welcome to Random Short Take #44. A few players have worn 44 in the NBA, including Danny Ainge and Pistol Pete, but my favourite from this list is Keith Van Horn. A nice shooting touch and strong long sock game. Let’s get random.
- ATMs are just computers built to give you money. And it’s scary to think of the platforms that are used to deliver that functionality. El Reg pointed out a recent problem with one spotted in the wild in Ngunnawal.
- Speaking of computing at the edge, I found this piece from Keith interesting. As much as things change they stay the same. I think he’s spot on when he says “[m]anufacturers and technology companies must come together with modular solutions that enable software upgrades for these assets’ lives”. We need to be demanding more from the industry when it comes to some of this stuff.
- Heard about Project Monterey at VMworld and wanted to know more? Pensando has you covered.
- I enjoyed this article from Preston about the difference between bunkers and vaults – worth checking out even if you’re not a Dell EMC customer.
- Cloud – it can be tough to know which way to go. And a whole bunch of people have an interest in you using their particular solution. This article from Chris Evans was particularly insightful.
- DH2i has launched DxOdyssey for IoT – you can read more about that here.
- Speaking of news, Retrospect recently announced Backup 17.5 too. There are some cloud improvements, and support for macOS Big Sur beta.
- It’s the 30th anniversary of Vanilla Ice’s “Ice Ice Baby”, and like me you were probably looking for a comprehensive retrospective on Vanilla Ice’s career. Look no further than this article over at The Ringer.
This happened a little while ago, and the news cycle around Rancher Labs has since shifted to SUSE’s announcement regarding its intent to acquire Rancher Labs. Nonetheless, I had a chance to speak to Sheng Liang (Co-founder and CEO) about Longhorn’s general availability, and thought I’d share some thoughts here.
What Is It?
Described by Rancher Labs as “an enterprise-grade, cloud-native container storage solution”, Longhorn has been in development for around 6 years, in beta for a year, and is now generally available. It comprises around 40K lines of Golang code, and each volume is a set of independent micro-services orchestrated by Kubernetes.
Liang described this to me as “enterprise-grade distributed block storage for K8S”, and the features certainly seem to line up with those expectations. There’s support for:
- Thin-provisioning, snapshots, backup, and restore
- Non-disruptive volume expansion
- Cross-cluster disaster recovery volume with defined RTO and RPO
- Live upgrade of Longhorn software without impacting running volumes
- Full-featured Kubernetes CLI integration and standalone UI
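The “defined RTO and RPO” point in that list is worth unpacking: operationally, an RPO target just means the newest replicated copy must never lag the present by more than the target. A minimal sketch of that check, with made-up numbers and function names (not Longhorn’s API):

```python
# Sketch of what a "defined RPO" means operationally: the newest replicated
# snapshot must never lag the present by more than the RPO target.
def rpo_compliant(now, last_replicated_snapshot, rpo_seconds):
    """All times in seconds; True if the replica lag is within the RPO."""
    return (now - last_replicated_snapshot) <= rpo_seconds

# RPO target of 300s: a snapshot replicated 120s ago is fine, 600s ago is not.
print(rpo_compliant(now=1000, last_replicated_snapshot=880, rpo_seconds=300))  # -> True
print(rpo_compliant(now=1000, last_replicated_snapshot=400, rpo_seconds=300))  # -> False
```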
From a licensing perspective, Longhorn is free to download and use, and customers looking for support can purchase a premium support model with the same SLAs provided through Rancher Support Services. There are no licensing fees, and node-based subscription pricing keeps costs to a minimum.
Why would you use it?
- Bare metal workloads
- Edge persistent storage
- Geo-replicated storage for Amazon EKS
- Application backup and disaster recovery
One of the barriers to entry when moving from traditional infrastructure to cloud-native is that concepts seem slightly different to the comfortable slippers you may have been used to in enterprise infrastructure land. The neat thing about Longhorn is that it leverages a lot of the same concepts you’ll see in traditional storage deployments to deliver resilient and scalable persistent storage for Kubernetes.
This doesn’t mean that Rancher Labs is trying to compete with traditional storage vendors like Pure Storage and NetApp when it comes to delivering persistent storage for cloud workloads. Liang acknowledges that those shops can offer more storage features than Longhorn can. Nonetheless, there seems to be a requirement for this kind of accessible and robust solution. Plus, it’s 100% open source.
Rancher Labs already has a good story to tell when it comes to making Kubernetes management a whole lot simpler. The addition of Longhorn simply improves that story further. If you’re feeling curious about Longhorn and would like to know more, this website has a lot of useful information.
Komprise recently made some announcements around extending its product to cloud. I had the opportunity to speak to Krishna Subramanian (President and COO) about the news and I thought I’d share some of my thoughts here.
Komprise has traditionally focused on unstructured data stored on-premises. It has now extended the capabilities of Komprise Intelligent Data Management to include cloud data. There’s currently support for Amazon S3 and Wasabi, with Google Cloud, Microsoft Azure, and IBM support coming soon.
So what do you get with this capability?
Analyse data usage across cloud accounts and buckets easily
- Single view across cloud accounts, buckets, and storage classes
- Analyse AWS usage by various metrics accurately based on access times
- Explore different data archival, replication, and deletion strategies with instant cost projections
Optimise AWS costs with analytics-driven archiving
- Continuously move objects by policy across Cloud Network Attached Storage (NAS), Amazon S3, Amazon S3 Standard-IA, Amazon S3 Glacier, and Amazon S3 Glacier Deep Archive
- Minimise costs and penalties by moving data at the right time based on access patterns
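The “move data at the right time based on access patterns” idea boils down to comparing last-access age against per-tier age thresholds. The sketch below is my own illustration – the thresholds and the simple age-to-storage-class mapping are assumptions, not Komprise’s actual policy engine:

```python
# Illustrative tiering policy based on last-access age. Thresholds are made up.
TIERS = [  # (min age in days, target tier), checked from coldest to warmest
    (365, "S3 Glacier Deep Archive"),
    (90, "S3 Glacier"),
    (30, "S3 Standard-IA"),
]

def target_tier(days_since_access):
    """Map how long ago an object was last accessed to a storage class."""
    for min_age, tier in TIERS:
        if days_since_access >= min_age:
            return tier
    return "S3 Standard"

print(target_tier(400))  # -> S3 Glacier Deep Archive
print(target_tier(45))   # -> S3 Standard-IA
print(target_tier(5))    # -> S3 Standard
```

The value of doing this analytically (rather than with fixed lifecycle rules) is avoiding the retrieval-cost surprises that come from archiving data that’s still warm.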
Bridge to Big Data/Artificial Intelligence (AI) projects
- Create virtual data lakes for Big Data, AI – search for exactly what you need across cloud accounts and buckets
- Native access to moved data on each storage class with full data fidelity
Create Cyber Resiliency with AWS
- Protect against ransomware by keeping an air-gapped copy of S3 data on AWS
[image courtesy of Komprise]
Why Is This Good?
The move to cloud storage hasn’t been all beer and skittles for enterprise. Storing large amounts of data in public cloud presents enterprises with a number of challenges, including:
- Poor visibility – “Bucket sprawl”
- Insufficient data – Cloud does not easily track last access / data use
- Cost complexity – Manual data movement can lead to unexpected retrieval cost surprises
- Labour – Manually moving data is error-prone and time-consuming
Sample Use Cases
Some other reasons you might want to have Komprise manage your data include:
- Finding ex-employee data stored in buckets.
- Data migration – you might want to take a copy of your data from Wasabi to AWS.
There’s support for all unstructured data (file and object), so the benefits of Komprise can be enjoyed regardless of how you’re storing your unstructured data. It’s also important to note that there’s no change to the existing licensing model; you’re just now able to use the product on public cloud storage.
Effective data management remains a big challenge for enterprises. It’s no secret that public cloud storage is really just storage that lives in another company’s data centre. Sure, it might be object storage rather than file-based, but it’s still just a bunch of unstructured data sitting in another company’s data centre. The way you consume that data may have changed, and certainly the way you pay for it has changed, but fundamentally it’s still your unstructured data sitting on a share or a filesystem. The problems you had on-premises, though, still manifest in public cloud environments (i.e. data sprawl, capacity issues, etc.). That’s why the Komprise solution seems so compelling when it comes to managing your on-premises storage consumption, and extending that capability to cloud storage is a no-brainer. When it comes to storing unstructured data, it’s frequently a bin fire of some sort or another. The reason for this is that it doesn’t scale well. I don’t mean the storage doesn’t scale – you can store petabytes all over the place if you like. But if you’re still handcrafting your shares and manually moving data around, you’ll notice that it becomes more and more time-consuming as time goes on (and your data storage needs grow).
One way to address this challenge is to introduce a level of automation, which is something that Komprise does quite well. If you’ve got many terabytes of data stored on-premises and in AWS buckets (or you’re looking to move some old data from on-premises to the cloud) and you’re not quite sure what it’s all for or how best to go about it, Komprise can certainly help you out.