Rancher Labs Announces Longhorn General Availability

This happened a little while ago, and the news about Rancher Labs has since shifted to SUSE’s announcement of its intent to acquire the company. Nonetheless, I had a chance to speak to Sheng Liang (Co-founder and CEO) about Longhorn’s general availability, and thought I’d share some thoughts here.

 

What Is It?

Described by Rancher Labs as “an enterprise-grade, cloud-native container storage solution”, Longhorn has been in development for around six years, in beta for a year, and is now generally available. It comprises around 40K lines of Go code, and each volume is a set of independent micro-services, orchestrated by Kubernetes.

Liang described this to me as “enterprise-grade distributed block storage for K8S”, and the features certainly seem to line up with those expectations. There’s support for:

  • Thin-provisioning, snapshots, backup, and restore
  • Non-disruptive volume expansion
  • Cross-cluster disaster recovery volume with defined RTO and RPO
  • Live upgrade of Longhorn software without impacting running volumes
  • Full-featured Kubernetes CLI integration and standalone UI
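
To give a sense of what consuming Longhorn typically looks like, here’s a minimal sketch of requesting a Longhorn-backed volume using the official Kubernetes Python client. The `longhorn` StorageClass name is the project’s documented default, but the claim name, namespace, and size here are invented for illustration.

```python
# Minimal sketch: requesting a Longhorn-backed PersistentVolumeClaim via
# the official Kubernetes Python client. The "longhorn" StorageClass is
# the project default; the claim name, namespace, and size are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # assumes a working kubeconfig for the cluster
core = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="demo-volume"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="longhorn",  # hands provisioning to Longhorn
        resources=client.V1ResourceRequirements(requests={"storage": "2Gi"}),
    ),
)
core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```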

From a licensing perspective, Longhorn is free to download and use, and customers looking for support can purchase a premium support model with the same SLAs provided through Rancher Support Services. There are no licensing fees, and node-based subscription pricing keeps costs to a minimum.

Use Cases

Why would you use it?

  • Bare metal workloads
  • Edge persistent storage
  • Geo-replicated storage for Amazon EKS
  • Application backup and disaster recovery

 

Thoughts

One of the barriers to entry when moving from traditional infrastructure to cloud-native is that concepts seem slightly different to the comfortable slippers you may have been used to in enterprise infrastructure land. The neat thing about Longhorn is that it leverages a lot of the same concepts you’ll see in traditional storage deployments to deliver resilient and scalable persistent storage for Kubernetes.

This doesn’t mean that Rancher Labs is trying to compete with traditional storage vendors like Pure Storage and NetApp when it comes to delivering persistent storage for cloud workloads. Liang acknowledges that these shops can offer more storage features than Longhorn can. Nonetheless, there seems to be a requirement for this kind of accessible and robust solution. Plus, it’s 100% open source.

Rancher Labs already has a good story to tell when it comes to making Kubernetes management a whole lot simpler. The addition of Longhorn simply improves that story further. If you’re feeling curious about Longhorn and would like to know more, this website has a lot of useful information.

StorONE Announces AFA.next

StorONE recently announced the All-Flash Array.next (AFAn). I had the opportunity to speak to George Crump (StorONE Chief Marketing Officer) about the news, and thought I’d share some brief thoughts here.

 

What Is It? 

It’s a box! (Sorry, I’ve been re-watching Silicon Valley with my daughter recently.)

[image courtesy of StorONE]

More accurately, it’s an Intel server with Intel Optane and Intel QLC storage, powered by StorONE’s software.

S1:Tier

S1:Tier is StorONE’s tiering solution. It operates within the parameters of high and low watermarks. Once the Optane tier fills up, the data is written out, sequentially, to QLC. The neat thing is that when you need to recall the data on QLC, you don’t necessarily need to move it all back to the Optane tier; read requests can be served directly from QLC. StorONE calls this a multi-tier capability, because you can then move data to cloud storage for long-term retention if required.

[image courtesy of StorONE]
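
To illustrate the general watermark pattern being described (and only the pattern – this is a toy, not StorONE’s code), here’s a hedged Python sketch with invented thresholds and extent bookkeeping:

```python
# Toy sketch of high/low watermark tiering. Thresholds and data
# structures are invented for illustration; StorONE's implementation
# will differ.
from collections import OrderedDict

HIGH_WATERMARK = 0.80  # assumed: start flushing when Optane is 80% full
LOW_WATERMARK = 0.60   # assumed: stop once usage drops back to 60%

class Tier:
    def __init__(self, capacity_gb):
        self.capacity = capacity_gb
        self.extents = OrderedDict()  # insertion order approximates coldest-first

    @property
    def used(self):
        return sum(self.extents.values())

def demote_if_needed(optane, qlc):
    """Once the Optane tier passes the high watermark, write the coldest
    extents out to QLC sequentially until the low watermark is reached."""
    if optane.used / optane.capacity < HIGH_WATERMARK:
        return  # below the high watermark, nothing to do
    while optane.extents and optane.used / optane.capacity > LOW_WATERMARK:
        extent_id, size_gb = optane.extents.popitem(last=False)  # coldest extent
        qlc.extents[extent_id] = size_gb  # sequential append suits QLC media

def tier_for_read(extent_id, optane, qlc):
    """Reads are served from whichever tier holds the extent; nothing is
    promoted back to Optane just to satisfy a read."""
    return "optane" if extent_id in optane.extents else "qlc"
```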

S1:HA

Crump noted that the Optane drives are single-ported, leading some customers to look for highly available configurations. These are catered for with a variation of S1:HA, where the HA solution is now a synchronous mirror between two stacks.
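
As a hedged illustration of what a synchronous mirror implies for the write path (class and method names are invented, and this is the general pattern rather than StorONE’s code), the key point is that a write is only acknowledged once both stacks have committed it:

```python
# Toy sketch of a synchronous mirror between two stacks. Names are
# invented; the point is simply that the write is acknowledged only
# after both sides have committed it.
class Stack:
    def __init__(self, name):
        self.name = name
        self.blocks = {}

    def persist(self, lba, data):
        self.blocks[lba] = data  # stand-in for a durable media write
        return True

def mirrored_write(stack_a, stack_b, lba, data):
    """Acknowledge only when both stacks hold the block, so either stack
    can keep serving I/O if the other's single-ported Optane drives
    become unreachable."""
    if not (stack_a.persist(lba, data) and stack_b.persist(lba, data)):
        raise IOError("mirror write failed; volume would go degraded")
    return "ack"
```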

 

Thoughts and Further Reading

I’m not just a fan of StorONE because the company occasionally throws me a few dollarydoos to keep the site running. I’m a fan because the folks over there do an awful lot of storage type stuff on what is essentially commodity hardware, and they’re getting results that are worth writing home about, with a minimum of fuss. The AFAn uses Optane as a storage tier, not just read cache, so you get all of the benefit of Optane write performance (many, many IOPS). It has the resilience and data protection features you see in many midrange and enterprise arrays today (namely vRAID, replication, and snapshots). Finally, it has varying support for all three use cases (block, file, and object), so there’s a good chance your workload will fit on the box.

More and more vendors are coming to market with Optane-based storage solutions. It still seems that only a small number of them are taking full advantage of Optane as a write medium, instead focusing on its benefit as a read tier. As I mentioned before, Crump and the team at StorONE have demonstrated some pretty decent numbers coming out of the AFAn. I think the best thing is that it’s now available as a configuration item on the StorONE TRUprice site as well, so you can see for yourself how much the solution costs. If you’re after a whole lot of performance in a small box, this might be just the thing. You can read more about the solution and check out the lab report here. My friend Max wrote a great article on the solution that you can read here.

Komprise Announces Cloud Capability

Komprise recently made some announcements around extending its product to cloud. I had the opportunity to speak to Krishna Subramanian (President and COO) about the news and I thought I’d share some of my thoughts here.

 

The Announcement

Komprise has traditionally focused on unstructured data stored on-premises. It has now extended the capabilities of Komprise Intelligent Data Management to include cloud data. There’s currently support for Amazon S3 and Wasabi, with Google Cloud, Microsoft Azure, and IBM support coming soon.

 

Benefits

So what do you get with this capability?

Analyse data usage across cloud accounts and buckets easily

  • Single view across cloud accounts, buckets, and storage classes
  • Analyse AWS usage by various metrics accurately based on access times
  • Explore different data archival, replication, and deletion strategies with instant cost projections

Optimise AWS costs with analytics-driven archiving

  • Continuously move objects by policy across Cloud Network Attached Storage (NAS), Amazon S3, Amazon S3 Standard-IA, Amazon S3 Glacier, and Amazon S3 Glacier Deep Archive
  • Minimise costs and penalties by moving data at the right time based on access patterns (see the sketch just below)
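
To make that concrete, here’s a hedged boto3 sketch of what a single, naive demotion pass might look like – copying objects untouched for 90 days into a colder storage class in place. The bucket name and 90-day policy are invented, S3 only exposes last-modified (not last-accessed) times – which is precisely the gap Komprise’s analytics address – and this shows the general pattern, not how Komprise implements it.

```python
# Hedged sketch: naive policy-driven demotion of cold S3 objects with
# boto3. Bucket and policy values are invented; Komprise does this
# continuously, by policy, with real access analytics.
import boto3
from datetime import datetime, timedelta, timezone

s3 = boto3.client("s3")
BUCKET = "example-bucket"  # hypothetical
CUTOFF = datetime.now(timezone.utc) - timedelta(days=90)  # assumed policy

for page in s3.get_paginator("list_objects_v2").paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        # Note: S3 tracks modification time, not access time
        if obj["LastModified"] < CUTOFF:
            s3.copy_object(
                Bucket=BUCKET,
                Key=obj["Key"],
                CopySource={"Bucket": BUCKET, "Key": obj["Key"]},
                StorageClass="GLACIER",  # in-place storage class change
                MetadataDirective="COPY",
            )
```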

Bridge to Big Data/Artificial Intelligence (AI) projects

  • Create virtual data lakes for Big Data, AI – search for exactly what you need across cloud accounts and buckets
  • Native access to moved data on each storage class with full data fidelity

Create Cyber Resiliency with AWS

  • Copy S3 data to AWS to protect from ransomware with an air-gapped copy

[image courtesy of Komprise]

 

Why Is This Good?

The move to cloud storage hasn’t been all beer and skittles for enterprise. Storing large amounts of data in public cloud presents enterprises with a number of challenges, including:

  • Poor visibility – “Bucket sprawl”
  • Insufficient data – Cloud does not easily track last access / data use
  • Cost complexity – Manual data movement can lead to unexpected retrieval cost surprises
  • Labour – Manually moving data is error-prone and time-consuming

Sample Use Cases

Some other reasons you might want to have Komprise manage your data include:

  • Finding ex-employee data stored in buckets.
  • Data migration – you might want to take a copy of your data from Wasabi to AWS.

There’s support for all unstructured data (file and object), so the benefits of Komprise can be enjoyed regardless of how you’re storing your unstructured data. It’s also important to note that there’s no change to the existing licensing model; you’re just now able to use the product on public cloud storage.

 

Thoughts

Effective data management remains a big challenge for enterprises. It’s no secret that public cloud storage is really just storage that lives in another company’s data centre. Sure, it might be object storage rather than file-based, but it’s still just a bunch of unstructured data sitting in another company’s data centre. The way you consume that data may have changed, and certainly the way you pay for it has changed, but fundamentally it’s still your unstructured data sitting on a share or a filesystem. The problems you had on-premises, though, still manifest in public cloud environments (i.e. data sprawl, capacity issues, etc.). That’s why the Komprise solution seems so compelling when it comes to managing your on-premises storage consumption, and extending that capability to cloud storage is a no-brainer. When it comes to storing unstructured data, it’s frequently a bin fire of some sort or another, because it doesn’t scale well. I don’t mean the storage doesn’t scale – you can store petabytes all over the place if you like. But if you’re still hand-crafting your shares and manually moving data around, you’ll notice that it becomes more and more time-consuming as time goes on (and your data storage needs grow).

One way to address this challenge is to introduce a level of automation, which is something that Komprise does quite well. If you’ve got many terabytes of data stored on-premises and in AWS buckets (or you’re looking to move some old data from on-premises to the cloud) and you’re not quite sure what it’s all for or how best to go about it, Komprise can certainly help you out.

Random Short Take #39

Welcome to Random Short Take #39. Not a huge number of players have worn 39 in the NBA, and I’m not going to pretend I’m any real fan of The Dwightmare. But things are tough all around, so let’s remain optimistic and push through to number 40. Anyway, let’s get random.

  • VeeamON 2020 was online this week, and Anthony Spiteri has done a great job of summarising the major technical session announcements here.
  • I’ve known Howard Marks for a while now, and always relish the opportunity to speak with him when I can. This post is pretty hilarious, and I’m looking forward to reading the followup posts.
  • This is a great article from Alastair Cooke on COVID-19 and what En-Zed has done effectively to stop the spread. It was interesting to hear his thoughts on returning to the US, and I do agree that it’s going to be some time until I make the trip across the Pacific again.
  • Sometimes people get crazy ideas about how they might repurpose some old bits of technology. It’s even better when they write about their experiences in doing so. This article on automating an iPod Hi-Fi’s volume control over at Six Colors was fantastic.
  • Chris M. Evans put out a typically thought-provoking piece on data migration challenges recently that I think is worth checking out. I’ve been talking a lot to customers that are facing these challenges on a daily basis, and it’s interesting to see how, regardless of the industry vertical they operate in, it’s sometimes just a matter of the depth varying, so to speak.
  • I frequently bump into Ray Lucchesi at conferences, and he knows a fair bit about what does and doesn’t work. This article on his experiences recently with a number of virtual and online conferences is the epitome of constructive criticism.
  • Speaking of online conferences, the Australian VMUG UserCon will be virtual this year and will be held on the 30th July. You can find out more and register here.
  • Finally, if you’ve spent any time with me socially, you’ll know I’m a basketball nut. And invariably I’ll tell you that Deftones is my favouritest band ever. So it was great to come across this article about White Pony on one of my favourite sports (and popular culture) websites. If you’re a fan of Deftones, this is one to check out.

 

Random Short Take #38

Welcome to Random Short Take #38. Not a huge number of players have worn 38 in the NBA, and I’m not going to pretend I was ever a Kwame Brown fan. Although it did seem like he had a tough time of it. Anyway, let’s get random.

  • Ransomware is the new hotness. Or, rather, protecting storage systems from ransomware is the new hotness. My man Chin-Fah had a writeup on that here. It’s not a matter of if, but rather when you’ll run into a problem. It’s been interesting to see the various approaches being taken by the storage vendors and the data protection companies.
  • Applications for the vExpert program intake for the second half of 2020 are open, but closing soon. It’s a fantastic program to be a part of, so if you think you’ve got the goods, you can apply here. I also recommend this article from Christopher on his experiences.
  • This was a great article from Alastair on some of the differences between networking with AWS and VMC on AWS. As someone who works for a VMware Cloud Provider, I can confirm that NSX (T or V, I don’t care) has a whole slew of capabilities and a whole slew of integration challenges.
  • Are you Zoomed out? I am. Even when you think the problem can’t be the network, it might just be the network (I hope my friends in networking appreciate that it’s not always the storage). John Nicholson posted a typically comprehensive overview of how your bandwidth might be one of the things keeping you from demonstrating excellent radio voice on those seemingly endless meetings you’re doing at the moment. It could also be that you’re using crap audio devices, but I think John’s going to cover that in the future.
  • Scale Computing has a good story to tell about what it’s been doing with a large school district in the U.S. Read more about that here.
  • This is one of those promotions aimed at my friends in North America more than folks based where I am, but I’m always happy to talk about deals on data protection. StorCentric has launched its “Retrospect Dads & Grads Promotion”, offering a free 90-day subscription license for every Retrospect Backup product. You can read more about that here.
  • Pure//Accelerate Online was this week, and Max did a nice write-up on Pure Storage File Services over at Gestalt IT.
  • Rancher Labs recently announced the general availability of Longhorn (a cloud-native container storage solution). I’m looking forward to digging in to this a bit more over the next little while.


Datadobi Announces S3 Migration Capability

Datadobi recently announced S3 migration capabilities as part of DobiMigrate 5.9. I had the opportunity to speak to Carl D’Halluin and Michael Jack about the announcement and thought I’d share some thoughts on it here.

 

What Is It?

In short, you can now use DobiMigrate to perform S3-to-S3 object storage migrations. It’s flexible too, offering the ability to migrate data from a variety of on-premises object systems up to public cloud object storage, between on-premises systems, or back to on-premises from public cloud storage. A variety of S3-compatible systems are supported.

In the future Datadobi is looking to add support for AWS Glacier, object locks, object tags, and non-current object versions.

 

Why Would You?

There are quite a few reasons why you might want to move S3 data around. You could be seeing high egress charges from AWS because you’re accessing more data in S3 than you’d initially anticipated. You might be looking to move to the cloud and have a significant on-premises footprint that needs to go. Or you might be looking to replace your on-premises solution with a solution from another vendor.

 

How Would You?

The process used to migrate objects is fairly straightforward, and follows a pattern that will be familiar if you’ve done anything with any kind of storage migration tool before. In short, you set up a migration pair (source and destination), run a scan and first copy, then do some incremental copies. Once you’ve got a maintenance window, there’s a cutover where the final scan and copy is done. And then you’re good to go. Basically.

[image courtesy of Datadobi]
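
For a feel of that scan / copy / incremental pattern (and only the pattern – this toy is emphatically not DobiMigrate), here’s a hedged boto3 sketch of repeated copy passes between two S3 endpoints. The endpoint and bucket names are hypothetical, and the ETag comparison is a simplification that ignores multipart upload differences.

```python
# Toy sketch of the scan / first copy / incremental pass pattern for an
# S3-to-S3 migration. Endpoints and buckets are hypothetical, and ETag
# comparison is a simplification; DobiMigrate's logic is far richer.
import boto3

src = boto3.client("s3")
dst = boto3.client("s3", endpoint_url="https://objects.onprem.example.com")  # hypothetical

def scan(client, bucket):
    """Return a {key: etag} inventory of a bucket."""
    inventory = {}
    for page in client.get_paginator("list_objects_v2").paginate(Bucket=bucket):
        for obj in page.get("Contents", []):
            inventory[obj["Key"]] = obj["ETag"]
    return inventory

def copy_pass(src_bucket, dst_bucket):
    """One scan-and-copy pass. Run once for the first copy, rerun for
    incrementals, then once more at cutover inside the maintenance window."""
    source, target = scan(src, src_bucket), scan(dst, dst_bucket)
    for key, etag in source.items():
        if target.get(key) != etag:  # new or changed object
            body = src.get_object(Bucket=src_bucket, Key=key)["Body"].read()
            dst.put_object(Bucket=dst_bucket, Key=key, Body=body)
```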

 

Final Thoughts

Why am I so interested in these types of offerings? Part of it is that it reminds me of all the time I burnt through earlier in my career migrating data from various storage platforms to other storage platforms. One of the funny things about storage is that there’s rarely enough to service demand, and it rarely delivers the performance you need after it’s been in use for a few years. As such, there’s always some requirement to move data from one spot to another, and to keep that data intact in terms of its permissions and metadata.

Amazon’s S3 offering has been amazing in terms of bringing object storage to the front of mind for many storage consumers who had previously only used block or file storage. Some of those users are now discovering that, while S3 is great, it can be expensive if you haven’t accounted for egress costs, or you’ve started using a whole lot more of it than initially anticipated. Some companies simply have to take their lumps, as everything is done in public cloud. But for those organisations with some on-premises footprint, the idea of being able to do performance-oriented object storage in their own data centre holds a great deal of appeal. But how do you get the data back on-premises in a reliable fashion? I believe that’s where Datadobi’s solution really shines.

I’m a fan of software that makes life easier for storage folk. Platform migrations can be a real pain to deal with, and are often riddled with risky propositions and daunting timeframes. Datadobi can’t necessarily change the laws of physics in a way that will keep your project manager happy, but it can do some stuff that means you won’t be quite as broken after a storage migration as you might have been previously. They already had a good story when it came to file storage migration, and the object to object story enhances it. Worth checking out.

Random Short Take #37

Welcome to Random Short Take #37. Not a huge number of players have worn 37 in the NBA, but Metta World Peace did a few times. When he wasn’t wearing 15, and other odd numbers. But I digress. Let’s get random.

  • Pavilion Data recently added S3 capability to its platform. It’s based on a variant of MinIO, and adds an interesting dimension to what Pavilion Data has traditionally offered. Mellor provided some good coverage here.
  • Speaking of object storage, Dell EMC recently announced ECS 3.5. You can read more on that here. The architectural white paper has been updated to reflect the new version as well.
  • Speaking of Dell EMC, Preston posted a handy article on Data Domain Retention Lock and NetWorker. Have you pre-ordered Preston’s book yet? I’ll keep asking until you do.
  • Online events are all the rage at the moment, and two noteworthy events are coming up shortly: Pure//Accelerate and VeeamON 2020. Speaking of online events, we’re running a virtual BNEVMUG next week. Details on that here. ZertoCON Virtual is also a thing.
  • Speaking of Pure Storage, this article from Cody Hosterman on NVMe and vSphere 7 is lengthy, but definitely worth the read.
  • I can’t recall whether I mentioned that this white paper covering VCD on VCF 3.9 is available now, and I can’t be bothered checking. So here it is.
  • I’m not just a fan of Backblaze because of its cool consumer backup solution and object storage platform, I’m also a big fan because of its blog. Articles like this one are a great example of companies doing corporate culture right (at least from what I can see).
  • I have the impression that Datadobi has been doing some cool stuff recently, and this story certainly seems to back it up.

StorONE Announces S1:TRUprice

StorONE recently announced S1:TRUprice. I had the opportunity to talk about the announcement with George Crump, and thought I’d share some of my notes here.

 

What Is It?

A website that anyone can access, providing a transparent view of StorONE’s pricing. There are three things you’ll want to know when doing a sample configuration:

  • Capacity
  • Use case (All-Flash, Hybrid, or All-HDD); and
  • Preferred server hardware (Dell EMC, HPE, Supermicro)

There’s also an option to do a software-only configuration if you’d rather roll your own. In the following example, I’ve configured HPE hardware in a highly available fashion with 92TB of capacity. This costs US $97,243.14. Simple as that. Once you’re happy with the configuration, you can have a formal quote sent to you, or choose to get on a call with someone.
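
As a back-of-the-envelope check (my arithmetic, not StorONE’s marketing): $97,243.14 across 92TB is 92,000 GB if you assume decimal terabytes, which works out to roughly US $1.06 per GB.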

 

Thoughts and Further Reading

Astute readers will notice that there’s a StorONE banner on my website, and the company has provided funds that help me pay the costs of running my blog. This announcement is newsworthy regardless of my relationship with StorONE, though. If you’ve ever been an enterprise storage customer, you’ll know that getting pricing is frequently a complicated endeavour. There’s rarely a page hosted on the vendor’s website that provides the total cost of whatever array / capacity you’re looking to consume. Instead, there’ll be an exercise involving a pre-sales engineer, possibly some sizing and analysis, and a bunch of data being put into a spreadsheet. This then magically determines the appropriate bit of gear. The specification is sent to a pricing team, some discounts to the recommended retail price are usually applied, and it’s sent to you to consider. If it’s a deal that’s competitive, there might be some more discount. If it’s the end of quarter and the sales person is “motivated”, you might find it’s a good time to buy. There are a whole slew of reasons why the price is never the price. But the problem with this is you can never know the price without talking to someone working for the vendor. Want to budget for some new capacity? Or another site deployment? Talk to the vendor. This makes a lot of sense for the vendor. It gives the sales team insight into what’s happening in the account. There’s “engagement” and “partnership”. Which is all well and good, but does withholding pricing need to be the cost of this engagement?

The Cloud Made Me Do It

The public availability of cloud pricing is changing the conversation when it comes to traditional enterprise storage consumption. Not just in terms of pricing transparency, but also equipment availability, customer enablement, and time to value. Years ago we were all beholden to our storage vendor of choice to deliver storage to us under the terms of the vendor, and when the vendor was able to do it. Nowadays, even enterprise consumers can go and grab the cloud storage they want or need with only a small modicum of fuss. This has changed the behaviours of the traditional storage vendors in a way that I don’t think was foreseen. Sure, cloud still isn’t the answer to every problem, and if you’re selling big tin into big banks, you might have a bit of runway before you need to show your customers too much of what’s happening behind the curtain. But this move by StorONE demonstrates that there’s a demand for pricing transparency in the market, and customers are looking to vendors to show some innovation when it comes to the fairly boring business of enterprise storage. I’m very curious to see which other vendors decide to follow suit.

We won’t automatically see the end of some of the practices surrounding enterprise storage pricing, but initiatives like this certainly put some pressure back on the vendors to justify the price per GB they’re slinging gear for. It’s a bit easier to keep prices elevated when your customers have to do a lot of work to go to a competitor and find out what it charges for a similar solution. There are reasons for everything (including high prices), and I’m not suggesting that the major storage vendors have been colluding on price by any means. But something like S1:TRUprice is another nail in the coffin of the old way of doing things, and I’m happy about that. For another perspective on this news, check out Chris M. Evans’ article here.

Random Short Take #36

Welcome to Random Short Take #36. Not a huge number of players have worn 36 in the NBA, but Shaq did (at the end of his career), and Marcus Smart does. This one, though, goes out to one of my favourite players from the modern era, Rasheed Wallace. It seems like Boston is the common thread here. Might have something to do with those hall of fame players wearing numbers in the low 30s. Or it might be entirely unrelated.

  • Scale Computing recently announced its all-NVMe HC3250DF as a new appliance targeting core data centre and edge computing use cases. It offers higher performance storage, networking and processing. You can read the press release here.
  • Dell EMC PowerStore has been announced. Chris Mellor covered the announcement here. I haven’t had time to dig into this yet, but I’m keen to learn more. Chris Evans also wrote about it here.
  • Rubrik Andes 5.2 was recently announced. You can read a wrap-up from Mellor here.
  • StorCentric’s Nexsan recently announced the E-Series 32F Storage Platform. You can read the press release here.
  • In what can only be considered excellent news, Preston de Guise has announced the availability of the second edition of his book, “Data Protection: Ensuring Data Availability”. It will be available in a variety of formats, with the ebook format already being out. I bought the first edition a few times to give as a gift, and I’m looking forward to giving away a few copies of this one too.
  • Backblaze B2 has been huge for the company, and Backblaze B2 with S3-compatible API access is even huger. Read more about that here. Speaking of Backblaze, it just released its hard drive stats for Q1, 2020. You can read more on that here.
  • Hal recently upgraded his NUC-based home lab to vSphere 7. You can read more about the process here.
  • Jon recently posted an article on a new upgrade command available in OneFS. If you’re into Isilon, you might just be into this.

Random Short Take #35

Welcome to Random Short Take #35. Some really good players have worn 35 in the NBA, including The Big Dog Antoine Carr, and Reggie Lewis. This one, though, goes out to one of my favourite players from the modern era, Kevin Durant. If it feels like it’s only been a week since the last post, that’s because it has. I bet you wish that I was producing some content that’s more useful than a bunch of links. So do I.

  • I don’t often get excited about funding rounds, but I have a friend who works there, so here’s an article covering the latest round (C) of funding for VAST Data.
  • Datadobi continues to share good news in these challenging times, and has published a success story based on some work it’s done with Payspan.
  • Speaking of challenging times, the nice folks at Retrospect are offering a free 90-day license subscription for Retrospect Backup. You don’t need a credit card to sign up, and “[a]ll backups can be restored, even if the subscription is cancelled”.
  • I loved this post from Russ discussing a recent article on Facebook and learning from network failures at scale. I’m in love with the idea that you can’t automate your way out of misconfiguration. We’ve been talking a lot about this in my day job lately. Automation can be a really exciting concept, but it’s not magic. And as scale increases, so too does the time it takes to troubleshoot issues. It all seems like a straightforward concept, but you’d be surprised how many people are surprised by these ideas.
  • Software continues to dominate the headlines, but hardware still has a role to play in the world. Alastair talks more about that idea here.
  • Paul Stringfellow recently jumped on the Storage Unpacked podcast to talk storage myths versus reality. Worth listening to.
  • It’s not all good news though. Sometimes people make mistakes, and pull out the wrong cables. This is a story about resiliency that I’ll be sharing with my team.
  • SMR drives and consumer NAS devices aren’t necessarily the best combo. So this isn’t the best news either. I’m patiently waiting for consumer Flash drive prices to come down. It’s going to take a while though.