Random Short Take #43

Welcome to Random Short Take #43. A few players have worn 43 in the NBA, including Frank Brickowski, but my favourite from this list is Red Kerr (more for his commentary chops than his game, I think). Let’s get random.

  • Mike Wilson has published Part 2 of his VMware VCP 2020 Study Guide and it’s a ripper. Check it out here. I try to duck and weave when it comes to certification exams nowadays, but these kinds of resources are invaluable.
  • It’s been a while since I had stick time with Data Domain OS, but Preston’s article on password hardening was very useful.
  • Mr Foskett bought a cloud, of sorts. Read more about that here. Anyone who knows Stephen knows that he’s all about talking about what’s happening in the industry, but I do enjoy reading about these home projects as well.
  • Speaking of clouds, Rancher was named “A Leader” in multi-cloud container development platforms by an independent research firm. You can read the press release here.
  • Datadobi had a good story to share about what it did with UMass Memorial Health Care. You can read the story here.
  • Steve O has done way too much work understanding how to change the default theme in Veeam Enterprise Manager 10 and documenting the process so you don’t need to work it out. Read about the process here.
  • Speaking of data protection, Zerto has noticed Azure adoption increasing at quite a pace, amongst other things.
  • This was a great article on open source storage from Chin-Fah.

Random Short Take #41

Welcome to Random Short Take #41. A few players have worn 41 in the NBA, but it’s hard to go past Dirk Nowitzki for a quality big man with a sweet, sweet jumpshot. So let’s get random.

  • There have been a lot of articles written by folks about various home office setups since COVID-19 became a thing, but this one by Jason Benedicic deserves a special mention. I bought a new desk and decluttered a fair bit of my setup, but it wasn’t on this level.
  • Speaking of COVID-19, there’s a hunger for new TV content as people across the world find themselves confined to their homes. The Ringer published an interesting article on the challenges of diving into the archives to dig up and broadcast some television gold.
  • Backblaze made the news a while ago when they announced S3 compatibility, and this blog post covers how you can move from AWS S3 to Backblaze. And check out the offer to cover your data transfer costs too.
  • Zerto has had a bigger cloud presence with 7.5 and 8.0, and Oracle Public Cloud is now a partner too.
  • Speaking of cloud, Leaseweb Global recently announced the launch of its Leaseweb Cloud Connect product offering. You can read the press release here.
  • One of my favourite bands is The Mark Of Cain. It’s the 25th anniversary of the Ill At Ease album (the ultimate gym or breakup album – you choose), and the band has started publishing articles detailing the background info on the recording process. It’s fascinating stuff, and you can read Part 1 here and Part 2 here.
  • The nice folks over at Scale Computing have been doing some stuff with various healthcare organisations lately. You can read more about that here. I’m hoping to check in with Scale Computing in the near future when I’ve got a moment. I’m looking forward to hearing about what else they’ve been up to.
  • Ray recently attended Cloud Field Day 8, and the presentation from Igneous prompted this article.

Random Short Take #40

Welcome to Random Short Take #40. Quite a few players have worn 40 in the NBA, including the flat-top king Frank Brickowski. But my favourite player to wear number 40 was the Reign Man – Shawn Kemp. So let’s get random.

  • Dell EMC PowerProtect Data Manager 19.5 was released in early July and Preston covered it pretty comprehensively here.
  • Speaking of data protection software releases and enhancements, we’ve barely recovered from the excitement of Veeam v10 being released and Anthony is already talking about v11. More on that here.
  • Speaking of Veeam, Rhys posted a very detailed article on setting up a Veeam backup repository on NFS using a Pure Storage FlashBlade environment.
  • Sticking with the data protection theme, I penned a piece over at Gestalt IT for Druva talking about OneDrive protection and why it’s important.
  • OpenDrives has some new gear available – you can read more about that here.
  • The nice folks at Spectro Cloud recently announced that its first product is generally available. You can read the press release here.
  • William Lam put out a great article on passing through the integrated GPU on Apple Mac minis with ESXi 7.
  • Time passes on, and Christian recently celebrated 10 years on his blog, which I think is a worthy achievement.

Happy Friday!

Rancher Labs Announces Longhorn General Availability

This happened a little while ago, and the news about Rancher Labs has since shifted to SUSE’s announcement regarding its intent to acquire Rancher Labs. Nonetheless, I had a chance to speak to Sheng Liang (Co-founder and CEO) about Longhorn’s general availability, and thought I’d share some thoughts here.

What Is It?

Described by Rancher Labs as “an enterprise-grade, cloud-native container storage solution”, Longhorn has been in development for around 6 years, in beta for a year, and is now generally available. It comprises around 40k lines of Golang code, and each volume is a set of independent micro-services, orchestrated by Kubernetes.

Liang described this to me as “enterprise-grade distributed block storage for K8S”, and the features certainly seem to line up with those expectations. There’s support for the following (with a quick consumption sketch after the list):

  • Thin-provisioning, snapshots, backup, and restore
  • Non-disruptive volume expansion
  • Cross-cluster disaster recovery volume with defined RTO and RPO
  • Live upgrade of Longhorn software without impacting running volumes
  • Full-featured Kubernetes CLI integration and standalone UI
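
If you want a feel for how that’s consumed, here’s a minimal sketch using the official Kubernetes Python client. It assumes Longhorn is installed with its default “longhorn” StorageClass; the claim name and size below are made up for illustration.

    from kubernetes import client, config

    config.load_kube_config()  # or load_incluster_config() when running in a pod

    # Ask Kubernetes for a volume; Longhorn provisions the replicated block
    # device behind the scenes via its "longhorn" StorageClass.
    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="demo-volume"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            storage_class_name="longhorn",
            resources=client.V1ResourceRequirements(requests={"storage": "2Gi"}),
        ),
    )

    client.CoreV1Api().create_namespaced_persistent_volume_claim(
        namespace="default", body=pvc
    )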

From a licensing perspective, Longhorn is free to download and use, and customers looking for support can purchase a premium support model with the same SLAs provided through Rancher Support Services. There are no licensing fees, and node-based subscription pricing keeps costs to a minimum.

Use Cases

Why would you use it?

  • Bare metal workloads
  • Persistent storage at the edge
  • Geo-replicated storage for Amazon EKS
  • Application backup and disaster recovery

Thoughts

One of the barriers to entry when moving from traditional infrastructure to cloud-native is that concepts seem slightly different to the comfortable slippers you may have been used to in enterprise infrastructure land. The neat thing about Longhorn is that it leverages a lot of the same concepts you’ll see in traditional storage deployments to deliver resilient and scalable persistent storage for Kubernetes.

This doesn’t mean that Rancher Labs is trying to compete with traditional storage vendors like Pure Storage and NetApp when it comes to delivering persistent storage for cloud workloads. Liang acknowledges that those shops can offer more storage features than Longhorn can. Nonetheless, there seems to be a requirement for this kind of accessible and robust solution. Plus, it’s 100% open source.

Rancher Labs already has a good story to tell when it comes to making Kubernetes management a whole lot simpler. The addition of Longhorn simply improves that story further. If you’re feeling curious about Longhorn and would like to know more, this website has a lot of useful information.

Komprise Announces Cloud Capability

Komprise recently made some announcements around extending its product to cloud. I had the opportunity to speak to Krishna Subramanian (President and COO) about the news and I thought I’d share some of my thoughts here.

The Announcement

Komprise has traditionally focused on unstructured data stored on-premises. It has now extended the capabilities of Komprise Intelligent Data Management to include cloud data. There’s currently support for Amazon S3 and Wasabi, with Google Cloud, Microsoft Azure, and IBM support coming soon.

Benefits

So what do you get with this capability?

Analyse data usage across cloud accounts and buckets easily

  • Single view across cloud accounts, buckets, and storage classes
  • Analyse AWS usage by various metrics accurately based on access times
  • Explore different data archival, replication, and deletion strategies with instant cost projections

Optimise AWS costs with analytics-driven archiving

  • Continuously move objects by policy across Cloud Network Attached Storage (NAS), Amazon S3, Amazon S3 Standard-IA, Amazon S3 Glacier, and Amazon S3 Glacier Deep Archive
  • Minimise costs and penalties by moving data at the right time based on access patterns (there’s a rough sketch of one such transition below)
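
As a rough illustration of what a single transition looks like at the S3 API level – this is plain boto3, not Komprise’s internals, and the bucket name and age threshold are invented – a policy pass might do something like:

    from datetime import datetime, timedelta, timezone
    import boto3

    s3 = boto3.client("s3")
    BUCKET = "example-bucket"  # hypothetical

    # S3 doesn't readily expose last-access times (part of the visibility
    # problem discussed below), so this toy policy falls back to LastModified.
    cutoff = datetime.now(timezone.utc) - timedelta(days=180)

    for page in s3.get_paginator("list_objects_v2").paginate(Bucket=BUCKET):
        for obj in page.get("Contents", []):
            if obj["LastModified"] < cutoff:
                # Rewrite the object in place under a colder storage class.
                s3.copy_object(
                    Bucket=BUCKET,
                    Key=obj["Key"],
                    CopySource={"Bucket": BUCKET, "Key": obj["Key"]},
                    StorageClass="GLACIER",
                    MetadataDirective="COPY",
                )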

Bridge to Big Data/Artificial Intelligence (AI) projects

  • Create virtual data lakes for Big Data, AI – search for exactly what you need across cloud accounts and buckets
  • Native access to moved data on each storage class with full data fidelity

Create Cyber Resiliency with AWS

  • Copy S3 data to AWS to protect from ransomware with an air-gapped copy

[image courtesy of Komprise]

Why Is This Good?

The move to cloud storage hasn’t been all beer and skittles for enterprise. Storing large amounts of data in public cloud presents enterprises with a number of challenges, including:

  • Poor visibility – “Bucket sprawl”
  • Insufficient data – Cloud does not easily track last access / data use
  • Cost complexity – Manual data movement can lead to unexpected retrieval cost surprises
  • Labour – Manually moving data is error-prone and time-consuming

Sample Use Cases

Some other reasons you might want to have Komprise manage your data include:

  • Finding ex-employee data stored in buckets.
  • Data migration – you might want to take a copy of your data from Wasabi to AWS.

There’s support for all unstructured data (file and object), so the benefits of Komprise can be enjoyed regardless of how you’re storing your unstructured data. It’s also important to note that there’s no change to the existing licensing model; you’re just now able to use the product on public cloud storage.

Thoughts

Effective data management remains a big challenge for enterprises. It’s no secret that public cloud storage is really just storage that lives in another company’s data centre. Sure, it might be object storage, rather than file based, but it’s still just a bunch of unstructured data sitting in another company’s data centre. The way you consume that data may have changed, and certainly the way you pay for it has changed, but fundamentally it’s still your unstructured data sitting on a share or a filesystem. The problems you had on-premises, though, still manifest in public cloud environments (i.e. data sprawl, capacity issues, etc.). That’s why the Komprise solution seems so compelling when it comes to managing your on-premises storage consumption, and extending that capability to cloud storage is a no-brainer.

When it comes to storing unstructured data, it’s frequently a bin fire of some sort or another. The reason is that it doesn’t scale well. I don’t mean the storage doesn’t scale – you can store petabytes all over the place if you like. But if you’re still hand-crafting your shares and manually moving data around, you’ll notice that it becomes more and more time consuming as time goes on (and your data storage needs grow).

One way to address this challenge is to introduce a level of automation, which is something that Komprise does quite well. If you’ve got many terabytes of data stored on-premises and in AWS buckets (or you’re looking to move some old data from on-premises to the cloud) and you’re not quite sure what it’s all for or how best to go about it, Komprise can certainly help you out.

Random Short Take #38

Welcome to Random Short Take #38. Not a huge number of players have worn 38 in the NBA, and I’m not going to pretend I was ever a Kwame Brown fan. Although it did seem like he had a tough time of it. Anyway, let’s get random.

  • Ransomware is the new hotness. Or, rather, protecting storage systems from ransomware is the new hotness. My man Chin-Fah had a writeup on that here. It’s not a matter of if, but rather when you’ll run into a problem. It’s been interesting to see the various approaches being taken by the storage vendors and the data protection companies.
  • Applications for the vExpert program intake for the second half of 2020 are open, but closing soon. It’s a fantastic program to be a part of, so if you think you’ve got the goods, you can apply here. I also recommend this article from Christopher on his experiences.
  • This was a great article from Alastair on some of the differences between networking with AWS and VMC on AWS. As someone who works for a VMware Cloud Provider, I can confirm that NSX (T or V, I don’t care) has a whole slew of capabilities and a whole slew of integration challenges.
  • Are you Zoomed out? I am. Even when you think the problem can’t be the network, it might just be the network (I hope my friends in networking appreciate that it’s not always the storage). John Nicholson posted a typically comprehensive overview of how your bandwidth might be one of the things keeping you from demonstrating excellent radio voice on those seemingly endless meetings you’re doing at the moment. It could also be that you’re using crap audio devices, but I think John’s going to cover that in the future.
  • Scale Computing has a good story to tell about what it’s been doing with a large school district in the U.S. Read more about that here.
  • This is one of those promotions aimed at my friends in North America more than folks based where I am, but I’m always happy to talk about deals on data protection. StorCentric has launched its “Retrospect Dads & Grads Promotion” offering a free 90-Day subscription license for every Retrospect Backup product. You can read more about that here.
  • Pure//Accelerate Online was this week, and Max did a nice write-up on Pure Storage File Services over at Gestalt IT.
  • Rancher Labs recently announced the general availability of Longhorn (a cloud-native container storage solution). I’m looking forward to digging in to this a bit more over the next little while.

Datadobi Announces S3 Migration Capability

Datadobi recently announced S3 migration capabilities as part of DobiMigrate 5.9. I had the opportunity to speak to Carl D’Halluin and Michael Jack about the announcement and thought I’d share some thoughts on it here.

What Is It?

In short, you can now use DobiMigrate to perform S3 to S3 object storage migrations. It’s flexible too, offering the ability to migrate data from a variety of on-premises object systems up to public cloud object storage, between on-premises systems, or back to on-premises from public cloud storage. There’s support for a variety of S3 systems.

In the future Datadobi is looking to add support for AWS Glacier, object locks, object tags, and non-current object versions.

Why Would You?

There are quite a few reasons why you might want to move S3 data around. You could be seeing high egress charges from AWS because you’re accessing more data in S3 than you’d initially anticipated. You might be looking to move to the cloud and have a significant on-premises footprint that needs to go. Or you might be looking to replace your on-premises solution with a solution from another vendor.

How Would You?

The process used to migrate objects is fairly straightforward, and follows a pattern that will be familiar if you’ve done anything with any kind of storage migration tool before. In short, you set up a migration pair (source and destination), run a scan and first copy, then do some incremental copies. Once you’ve got a maintenance window, there’s a cutover where the final scan and copy is done. And then you’re good to go. Basically.
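
DobiMigrate does this at scale with proper verification; the sketch below is just a toy version of that scan-and-copy pattern in Python with boto3 – not Datadobi’s implementation – and the bucket names and target endpoint are hypothetical.

    from botocore.exceptions import ClientError
    import boto3

    src = boto3.client("s3")
    dst = boto3.client("s3", endpoint_url="https://object.example-target.com")

    SRC_BUCKET, DST_BUCKET = "source-bucket", "destination-bucket"

    def changed_keys():
        # Scan: yield keys that are missing from, or stale on, the destination.
        # (ETag comparison is naive here; multipart uploads complicate this.)
        for page in src.get_paginator("list_objects_v2").paginate(Bucket=SRC_BUCKET):
            for obj in page.get("Contents", []):
                try:
                    head = dst.head_object(Bucket=DST_BUCKET, Key=obj["Key"])
                    if head["ETag"] == obj["ETag"]:
                        continue  # already copied, unchanged since last pass
                except ClientError:
                    pass  # not on the destination yet
                yield obj["Key"]

    def copy_pass():
        # The first copy and the incrementals are the same operation, re-run.
        for key in changed_keys():
            body = src.get_object(Bucket=SRC_BUCKET, Key=key)["Body"]
            dst.upload_fileobj(body, DST_BUCKET, key)

    copy_pass()  # repeat until the cutover window, then run one final pass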

[image courtesy of Datadobi]

Final Thoughts

Why am I so interested in these types of offerings? Part of it is that it reminds me of all of the time I burnt through earlier in my career migrating data from various storage platforms to other storage platforms. One of the funny things about storage is that there’s rarely enough to service demand, and it rarely delivers the performance you need after it’s been in use for a few years. As such, there’s always some requirement to move data from one spot to another, and to keep that data intact in terms of its permissions and metadata.

Amazon’s S3 offering has been amazing in terms of bringing object storage to the front of mind of many storage consumers who had previously only used block or file storage. Some of those users are now discovering that, while S3 is great, it can be expensive if you haven’t accounted for egress costs, or you’ve started using a whole lot more of it than initially anticipated. Some companies simply have to take their lumps, as everything is done in public cloud. But for those organisations with some on-premises footprint, the idea of being able to do performance oriented object storage in their own data centre holds a great deal of appeal. But how do you get it back on-premises in a reliable fashion? I believe that’s where Datadobi’s solution really shines.

I’m a fan of software that makes life easier for storage folk. Platform migrations can be a real pain to deal with, and are often riddled with risky propositions and daunting timeframes. Datadobi can’t necessarily change the laws of physics in a way that will keep your project manager happy, but it can do some stuff that means you won’t be quite as broken after a storage migration as you might have been previously. They already had a good story when it came to file storage migration, and the object to object story enhances it. Worth checking out.

Spectro Cloud – Profile-Based Kubernetes Management For The Enterprise

Spectro Cloud launched in March. I recently had the opportunity to speak to Tenry Fu (CEO) and Tina Nolte (VP, Products) about the launch, and what Spectro Cloud is, and thought I’d share some notes here.

The Problem?

I was going to start this article by saying that Kubernetes in the enterprise is a bin fire, but that’s too harsh (and entirely unfair on the folks who are doing it well). There is, however, a frequent compromise being made between ease of use, control, and visibility.

[image courtesy of Spectro Cloud]

According to Fu, the way that enterprises consume Kubernetes shouldn’t just be on the left or the right side of the diagram. There is a way to do both.

The Solution?

According to the team, Spectro Cloud is “a SaaS platform that gives Enterprises control over Kubernetes infrastructure stack integrations, consistently and at scale”. What does that mean though? Well, you get access to the “table stakes” SaaS management, including:

  • Managed Kubernetes experience;
  • Multi-cluster and environment management; and
  • Enterprise features.

Profile-Based Management

You also get some cool stuff that heavily leverages profile-based management, including infrastructure stack modelling and lifecycle management that can be done based on integration policies. In short, you build cluster profiles and then apply them to your infrastructure. The cluster profile usually describes the OS flavour and version, Kubernetes version, storage configuration, networking drivers, and so on. The Pallet orchestrator then ensures these profiles are used to maintain the desired cluster state. There are also security-hardened profiles available out of the box.
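
To make the profile idea concrete, here’s a hypothetical sketch of the kind of desired state a cluster profile captures. The field names and values below are my own illustration, not Spectro Cloud’s actual API or schema.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ClusterProfile:
        # Illustrative fields only, not Spectro Cloud's real schema.
        name: str
        os_flavour: str           # e.g. an Ubuntu LTS image
        kubernetes_version: str
        storage: str              # storage integration (CSI driver, etc.)
        networking: str           # networking integration (CNI driver, etc.)
        addons: List[str] = field(default_factory=list)

    # Define the desired state once...
    baseline = ClusterProfile(
        name="prod-baseline",
        os_flavour="ubuntu-20.04",
        kubernetes_version="1.18.8",
        storage="csi-vsphere",
        networking="calico",
        addons=["security-hardened"],
    )

    # ...then apply it to many clusters; the orchestrator's job is to
    # converge each cluster onto its assigned profile and keep it there.
    clusters = {"cluster-a": baseline, "cluster-b": baseline}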

If you’re a VMware-based cloud user, the appliance (deployed from an OVA file) sits in your on-premises VMware cloud environment and communicates with the Spectro Cloud SaaS offering over TLS, and the cloud properties are dynamically propagated.

Licensing

The solution is licensed on the number of worker node cores under management. This is tiered based on the number of cores and it follows a simple model: More cores and a longer commitment equals a bigger discount.

The Differentiator?

Current Kubernetes deployment options vary in their complexity and maturity. You can take the DIY path, but you might find that this option is difficult to maintain at scale. There are packaged options available, such as VMware Tanzu, but you might find that multi-cluster management is not always a focus. The managed Kubernetes option (such as those offered by Google and AWS) has its appeal to the enterprise crowd, but those offerings are normally quite restricted in terms of technology offerings and available versions.

Why does Spectro Cloud have appeal as a solution then? Because you get control over the integrations you might want to use with your infrastructure, but also get the warm and fuzzy feeling of leveraging a managed service experience.

Thoughts

I’m no great fan of complexity for complexity’s sake, particularly when it comes to enterprise IT deployments. That said, there are always reasons why things get complicated in the enterprise. Requirements come from all parts of the business, legacy applications need to be fed and watered, rules and regulations seem to be in place simply to make things difficult. Enterprise application owners crave solutions like Kubernetes because there’s some hope that they, too, can deliver modern applications if only they used some modern application deployment and management constructs. Unfortunately, Kubernetes can be a real pain in the rear to get right, particularly at scale. And if enterprise has taught us anything, it’s that most enterprise shops are struggling to do the basics well, let alone the needlessly complicated stuff.

Solutions like the one from Spectro Cloud aren’t a silver bullet for enterprise organisations looking to modernise the way applications are deployed, scaled, and managed. But something like Spectro Cloud certainly has great appeal given the inherent difficulties you’re likely to experience if you’re coming at this from a standing start. Sure, if you’re a mature Kubernetes shop, chances are slim that you really need something like this. But if you’re still new to it, or are finding that the managed offerings don’t give you the flexibility you might need, then something like Spectro Cloud could be just what you’re looking for.

Backblaze B2 And A Happy Customer

Backblaze recently published a case study with AK Productions. I had the opportunity to speak to Aiden Korotkin and thought I’d share some of my notes here.

The Problem

Korotkin’s problem was a fairly common one – he had lots of data from previous projects that had built up over the years. He’d been using a bunch of external drives to store this data, and had had a couple of them fail, including the backup drives. Google’s cloud storage option “seemed like a more redundant and safer investment financially to go into the cloud space”. He was already using G Suite, so he migrated his old projects off hard drives and into the cloud. He had a credit with Google to use its cloud platform for a year, but it became pretty expensive after that – not really feasible. Korotkin also stated that calculating the expected costs was difficult, and that he felt he needed to find something more private / secure.

The Solution

So how did he come by Backblaze? He did a bunch of research. Backblaze B2 consistently showed up in the top 15 results when online magazines were publishing their guides to cloud storage. He’d heard of it before, and had possibly seen a demo. The technology seemed very streamlined – exactly what he needed for his business. A bonus was that there were no extra steps needed to back up his QNAP NAS as well. This seemed like the best option.

Current Workflow

I asked Korotkin to walk me through his current workflow. B2 is being used as a backup target for the moment. Physics being what it is, it’s still “[h]ard to do video editing direct on the cloud”. The QNAP NAS houses current projects, with data mirrored to B2. Archives are uploaded to a different area of B2. Over time, data is completely archived to the cloud.
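
Since B2 also speaks an S3-compatible API (as mentioned in an earlier Random Short Take), the archive leg of a workflow like this is easy to script. Here’s a minimal boto3 sketch; the endpoint, bucket, key IDs, and file path below are all placeholders you’d swap for your own values.

    import boto3

    # B2's S3-compatible endpoint is region-specific; this one, the bucket,
    # and the key pair are placeholders for your own account's values.
    b2 = boto3.client(
        "s3",
        endpoint_url="https://s3.us-west-002.backblazeb2.com",
        aws_access_key_id="YOUR_B2_KEY_ID",
        aws_secret_access_key="YOUR_B2_APPLICATION_KEY",
    )

    # Push a finished project off the NAS into the archive area of B2.
    b2.upload_file(
        "/share/archive/project-2019.tar",
        "example-archive-bucket",
        "archives/project-2019.tar",
    )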

How About Ingest?

Korotkin needed to move 12TB from Google to Backblaze. He used Flexify.IO to transfer from one cloud to the next. They walked him through how to do it. The good news is that they were able to do it in 12 hours.

It’s About Support

Korotkin noted that between Backblaze and Flexify.IO “the tech support experience was incredible”. He said that he “[f]elt like I was very much taken care of”. He got the strong impression that the support staff enjoyed helping him, and were with him every step of the way. The most frustrating part of the migration, according to Korotkin, was dealing with Google generally. The offloading of the data from Google cost more money than he’s paid to date with Backblaze. “As a small business owner I don’t have $1500 just to throw away”.

Thoughts

I’ve been a fan of Backblaze for some time. I’m a happy customer when it comes to the consumer backup product, and I’ve always enjoyed the transparency it’s displayed as a company with regards to its pod designs and the process required to get to where it is today. I remain fascinated by the workflows required to do multimedia content creation successfully, and I think this story is a great tribute to the support culture of Backblaze. It’s nice to see that smaller shops, such as Korotkin’s, are afforded the same kind of care and support experience as some of the bigger customers. This is a noticeable point of distinction when compared to working with the hyperscalers. It’s not that those folks aren’t happy to help; they’re just operating at a different level.

Korotkin’s approach was not unreasonable, or unusual, particularly for content creators. Keeping data safe is a challenge for small business, and solutions that make storing and protecting data easier are going to be popular. Korotkin’s story is a good one, and I’m always happy to hear these kinds of stories. If you find yourself shuffling external drives, or need a lot of capacity but don’t want to invest too heavily in on-premises storage, Backblaze has a good story in terms of both cloud storage and data protection.

Random Short Take #34

Welcome to Random Short Take #34. Some really good players have worn 34 in the NBA, including Ray Allen and Sir Charles. This one, though, goes out to my favourite enforcer, Charles Oakley. If it feels like it’s only been a week since the last post, that’s because it has.

  • I spoke to the folks at Rancher Labs a little while ago, and they’re doing some stuff around what they call “Edge Scalability” and have also announced Series D funding.
  • April Fool’s is always a bit of a trying time, what with a lot of the world being a few timezones removed from where I live. Invariably I stop checking news sites for a few days to be sure. Backblaze recognised that these are strange times, and decided to have some fun with their releases, rather than trying to fool people outright. I found the post on Catblaze Cloud Backup inspiring.
  • Hal Yaman announced the availability of version 2.6 of his Office 365 Backup sizing tool. Speaking of Veeam and handy utilities, the Veeam Extract utility is now available as a standalone tool. Cade talks about that here.
  • VMware vSphere 7 recently went GA. Here’s a handy article covering what it means for VMware cloud providers.
  • Speaking of VMware things, John Nicholson wrote a great article on SMB and vSAN (I can’t bring myself to write CIFS, even when I know why it’s being referred to that way).
  • Scale is infinite, until it isn’t. Azure had some minor issues recently, and Keith Townsend shared some thoughts on the situation.
  • StorMagic recently announced that it has acquired KeyNexus. It also announced the availability of SvKMS, a key management system for edge, DC, and cloud solutions.
  • Joey D’Antoni, in collaboration with DH2i, is delivering a webinar titled “Overcoming the HA/DR and Networking Challenges of SQL Server on Linux”. It’s being held on Wednesday 15th April at 11am Pacific Time. If that timezone works for you, you can find out more and register here.