Rancher Labs Announces Longhorn General Availability

This happened a little while ago, and the news about Rancher Labs has since shifted to SUSE’s announcement of its intent to acquire the company. Nonetheless, I had a chance to speak to Sheng Liang (Co-founder and CEO) about Longhorn’s general availability, and thought I’d share some thoughts here.

 

What Is It?

Described by Rancher Labs as “an enterprise-grade, cloud-native container storage solution”, Longhorn has been in development for around 6 years, in beta for a year, and is now generally available. It comprises around 40k lines of Golang code, and each volume is a set of independent micro-services, orchestrated by Kubernetes.

Liang described this to me as “enterprise-grade distributed block storage for K8S”, and the features certainly seem to line up with those expectations. There’s support for:

  • Thin-provisioning, snapshots, backup, and restore
  • Non-disruptive volume expansion
  • Cross-cluster disaster recovery volume with defined RTO and RPO
  • Live upgrade of Longhorn software without impacting running volumes
  • Full-featured Kubernetes CLI integration and standalone UI

From a licensing perspective, Longhorn is free to download and use, and customers looking for support can purchase a premium support model with the same SLAs provided through Rancher Support Services. There are no licensing fees, and node-based subscription pricing keeps costs to a minimum.

Use Cases

Why would you use it?

  • Bare metal workloads
  • Edge persistent storage
  • Geo-replicated storage for Amazon EKS
  • Application backup and disaster recovery

 

Thoughts

One of the barriers to entry when moving from traditional infrastructure to cloud-native is that concepts seem slightly different to the comfortable slippers you may have been used to in enterprise infrastructure land. The neat thing about Longhorn is that it leverages a lot of the same concepts you’ll see in traditional storage deployments to deliver resilient and scalable persistent storage for Kubernetes.

This doesn’t mean that Rancher Labs is trying to compete with traditional storage vendors like Pure Storage and NetApp when it comes to delivering persistent storage for cloud workloads. Liang acknowledges that those shops can offer more storage features than Longhorn can. Nonetheless, there seems to be a requirement for this kind of accessible and robust solution. Plus it’s 100% open source.

Rancher Labs already has a good story to tell when it comes to making Kubernetes management a whole lot simpler. The addition of Longhorn simply improves that story further. If you’re feeling curious about Longhorn and would like to know more, this website has a lot of useful information.

StorONE Announces AFA.next

StorONE recently announced the All-Flash Array.next (AFAn). I had the opportunity to speak to George Crump (StorONE Chief Marketing Officer) about the news, and thought I’d share some brief thoughts here.

 

What Is It? 

It’s a box! (Sorry I’ve been re-watching Silicon Valley with my daughter recently).

[image courtesy of StorONE]

More accurately, it’s an Intel Server with Intel Optane and Intel QLC storage, powered by StorONE’s software.

S1:Tier

S1:Tier is StorONE’s tiering solution. It operates between a high and a low watermark: once the Optane tier fills to the high watermark, data is written out, sequentially, to QLC. The neat thing is that when you need to read the data on QLC, you don’t necessarily need to move it all back to the Optane tier. Rather, read requests can be served directly from QLC. StorONE calls this a multi-tier capability, because you can then move data to cloud storage for long-term retention if required.
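The watermark behaviour described above can be sketched roughly like this. To be clear, this is an illustrative model only – the class, thresholds, and oldest-first eviction order are my assumptions, not StorONE’s implementation:

```python
# Illustrative sketch of high/low watermark tiering (not StorONE's code).
# Writes land on the fast (Optane) tier; once it passes the high watermark,
# the oldest data is demoted sequentially to the capacity (QLC) tier until
# usage drops below the low watermark. Reads are served in place - data on
# QLC is read directly rather than promoted back to the fast tier.
from collections import OrderedDict

class TieredStore:
    def __init__(self, fast_capacity, high_pct=0.9, low_pct=0.7):
        self.high = fast_capacity * high_pct   # flush trigger
        self.low = fast_capacity * low_pct     # flush stops here
        self.fast = OrderedDict()              # insertion order ~ age
        self.slow = {}
        self.fast_used = 0

    def write(self, key, data):
        self.fast[key] = data
        self.fast.move_to_end(key)
        self.fast_used += len(data)
        if self.fast_used > self.high:
            self._flush()

    def _flush(self):
        # Demote oldest entries until we're back under the low watermark.
        while self.fast_used > self.low and self.fast:
            key, data = self.fast.popitem(last=False)
            self.slow[key] = data
            self.fast_used -= len(data)

    def read(self, key):
        # Served directly from whichever tier holds it - no promotion.
        return self.fast.get(key) or self.slow.get(key)
```

The point of the low watermark is that each flush frees a chunk of headroom in one sequential burst, rather than evicting a little on every write.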

[image courtesy of StorONE]

S1:HA

Crump noted that the Optane drives are single-ported, leading some customers to look for highly available configurations. These are catered for with a variation of S1:HA, where the HA solution is a synchronous mirror between two stacks.

 

Thoughts and Further Reading

I’m not just a fan of StorONE because the company occasionally throws me a few dollarydoos to keep the site running. I’m a fan because the folks over there do an awful lot of storage type stuff on what is essentially commodity hardware, and they’re getting results that are worth writing home about, with a minimum of fuss. The AFAn uses Optane as a storage tier, not just read cache, so you get all of the benefit of Optane write performance (many, many IOPS). It has the resilience and data protection features you see in many midrange and enterprise arrays today (namely vRAID, replication, and snapshots). Finally, it has varying support for all three use cases (block, file, and object), so there’s a good chance your workload will fit on the box.

More and more vendors are coming to market with Optane-based storage solutions. It still seems that only a small number of them are taking full advantage of Optane as a write medium, instead focusing on its benefit as a read tier. As I mentioned before, Crump and the team at StorONE have demonstrated some pretty decent numbers coming out of the AFAn. I think the best thing is that it’s now available as a configuration item on the StorONE TRUprice site as well, so you can see for yourself how much the solution costs. If you’re after a whole lot of performance in a small box, this might be just the thing. You can read more about the solution and check out the lab report here. My friend Max wrote a great article on the solution that you can read here.

Komprise Announces Cloud Capability

Komprise recently made some announcements around extending its product to cloud. I had the opportunity to speak to Krishna Subramanian (President and COO) about the news and I thought I’d share some of my thoughts here.

 

The Announcement

Komprise has traditionally focused on unstructured data stored on-premises. It has now extended the capabilities of Komprise Intelligent Data Management to include cloud data. There’s currently support for Amazon S3 and Wasabi, with Google Cloud, Microsoft Azure, and IBM support coming soon.

 

Benefits

So what do you get with this capability?

Analyse data usage across cloud accounts and buckets easily

  • Single view across cloud accounts, buckets, and storage classes
  • Analyse AWS usage by various metrics accurately based on access times
  • Explore different data archival, replication, and deletion strategies with instant cost projections

Optimise AWS costs with analytics-driven archiving

  • Continuously move objects by policy across Cloud Network Attached Storage (NAS), Amazon S3, Amazon S3 Standard-IA, Amazon S3 Glacier, and Amazon S3 Glacier Deep Archive
  • Minimise costs and penalties by moving data at the right time based on access patterns
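The policy-driven movement in the bullets above can be sketched as a simple decision function: given per-object access times (which Komprise tracks; S3 alone doesn’t expose them easily), pick the objects whose idle time crosses a threshold. The function name, thresholds, and class transitions here are a hypothetical illustration of analytics-driven archiving, not Komprise’s actual policy engine:

```python
# Hypothetical sketch of analytics-driven archiving decisions.
# objects: list of (key, last_access: datetime, storage_class) tuples
# supplied by an analytics layer that tracks access times.
from datetime import datetime, timedelta

def archive_candidates(objects, now, cold_after_days=90, freeze_after_days=365):
    """Return [(key, target_class)] moves suggested by the policy:
    STANDARD -> STANDARD_IA after cold_after_days idle,
    STANDARD_IA -> GLACIER after freeze_after_days idle."""
    moves = []
    for key, last_access, storage_class in objects:
        idle = now - last_access
        if storage_class == "STANDARD" and idle > timedelta(days=cold_after_days):
            moves.append((key, "STANDARD_IA"))
        elif storage_class == "STANDARD_IA" and idle > timedelta(days=freeze_after_days):
            moves.append((key, "GLACIER"))
    return moves
```

The “right time” point in the second bullet is exactly what the thresholds encode: move too early and you pay retrieval penalties; move too late and you pay Standard-class rates for cold data.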

Bridge to Big Data/Artificial Intelligence (AI) projects

  • Create virtual data lakes for Big Data, AI – search for exactly what you need across cloud accounts and buckets
  • Native access to moved data on each storage class with full data fidelity

Create Cyber Resiliency with AWS

  • Copy S3 data to AWS to protect from ransomware with an air-gapped copy

[image courtesy of Komprise]

 

Why Is This Good?

The move to cloud storage hasn’t been all beer and skittles for enterprise. Storing large amounts of data in public cloud presents enterprises with a number of challenges, including:

  • Poor visibility – “Bucket sprawl”
  • Insufficient data – Cloud does not easily track last access / data use
  • Cost complexity – Manual data movement can lead to unexpected retrieval cost surprises
  • Labour – Manually moving data is error-prone and time-consuming

Sample Use Cases

Some other reasons you might want to have Komprise manage your data include:

  • Finding ex-employee data stored in buckets.
  • Data migration – you might want to take a copy of your data from Wasabi to AWS.

There’s support for all unstructured data (file and object), so the benefits of Komprise can be enjoyed regardless of how you’re storing your unstructured data. It’s also important to note that there’s no change to the existing licensing model; you’re just now able to use the product on public cloud storage.

 

Thoughts

Effective data management remains a big challenge for enterprises. It’s no secret that public cloud storage is really just storage that lives in another company’s data centre. Sure, it might be object storage, rather than file based, but it’s still just a bunch of unstructured data sitting in another company’s data centre. The way you consume that data may have changed, and certainly the way you pay for it has changed, but fundamentally it’s still your unstructured data sitting on a share or a filesystem. The problems you had on-premises, though, still manifest in public cloud environments (i.e. data sprawl, capacity issues, etc.). That’s why the Komprise solution seems so compelling when it comes to managing your on-premises storage consumption, and extending that capability to cloud storage is a no-brainer. When it comes to storing unstructured data, it’s frequently a bin fire of some sort or another. The reason for this is that it doesn’t scale well. I don’t mean the storage doesn’t scale – you can store petabytes all over the place if you like. But if you’re still hand-crafting your shares and manually moving data around, you’ll notice that it becomes more and more time consuming as time goes on (and your data storage needs grow).

One way to address this challenge is to introduce a level of automation, which is something that Komprise does quite well. If you’ve got many terabytes of data stored on-premises and in AWS buckets (or you’re looking to move some old data from on-premises to the cloud) and you’re not quite sure what it’s all for or how best to go about it, Komprise can certainly help you out.

Random Short Take #39

Welcome to Random Short Take #39. Not a huge number of players have worn 39 in the NBA, and I’m not going to pretend I’m any real fan of The Dwightmare. But things are tough all around, so let’s remain optimistic and push through to number 40. Anyway let’s get random.

  • VeeamON 2020 was online this week, and Anthony Spiteri has done a great job of summarising the major technical session announcements here.
  • I’ve known Howard Marks for a while now, and always relish the opportunity to speak with him when I can. This post is pretty hilarious, and I’m looking forward to reading the followup posts.
  • This is a great article from Alastair Cooke on COVID-19 and what En-Zed has done effectively to stop the spread. It was interesting to hear his thoughts on returning to the US, and I do agree that it’s going to be some time until I make the trip across the Pacific again.
  • Sometimes people get crazy ideas about how they might repurpose some old bits of technology. It’s even better when they write about their experiences in doing so. This article on automating an iPod Hi-Fi’s volume control over at Six Colors was fantastic.
  • Chris M. Evans put out a typically thought-provoking piece on data migration challenges recently that I think is worth checking out. I’ve been talking a lot to customers that are facing these challenges on a daily basis, and it’s interesting to see how, regardless of the industry vertical they operate in, it’s sometimes just a matter of the depth varying, so to speak.
  • I frequently bump into Ray Lucchesi at conferences, and he knows a fair bit about what does and doesn’t work. This article on his experiences recently with a number of virtual and online conferences is the epitome of constructive criticism.
  • Speaking of online conferences, the Australian VMUG UserCon will be virtual this year and will be held on the 30th July. You can find out more and register here.
  • Finally, if you’ve spent any time with me socially, you’ll know I’m a basketball nut. And invariably I’ll tell you that Deftones is my favouritest band ever. So it was great to come across this article about White Pony on one of my favourite sports (and popular culture) websites. If you’re a fan of Deftones, this is one to check out.

 

Random Short Take #38

Welcome to Random Short Take #38. Not a huge number of players have worn 38 in the NBA, and I’m not going to pretend I was ever a Kwame Brown fan. Although it did seem like he had a tough time of it. Anyway let’s get random.

  • Ransomware is the new hotness. Or, rather, protecting storage systems from ransomware is the new hotness. My man Chin-Fah had a writeup on that here. It’s not a matter of if, but rather when you’ll run into a problem. It’s been interesting to see the various approaches being taken by the storage vendors and the data protection companies.
  • Applications for the vExpert program intake for the second half of 2020 are open, but closing soon. It’s a fantastic program to be a part of, so if you think you’ve got the goods, you can apply here. I also recommend this article from Christopher on his experiences.
  • This was a great article from Alastair on some of the differences between networking with AWS and VMC on AWS. As someone who works for a VMware Cloud Provider, I can confirm that NSX (T or V, I don’t care) has a whole slew of capabilities and a whole slew of integration challenges.
  • Are you Zoomed out? I am. Even when you think the problem can’t be the network, it might just be the network (I hope my friends in networking appreciate that it’s not always the storage). John Nicholson posted a typically comprehensive overview of how your bandwidth might be one of the things keeping you from demonstrating excellent radio voice on those seemingly endless meetings you’re doing at the moment. It could also be that you’re using crap audio devices too, but I think John’s going to cover that in the future.
  • Scale Computing has a good story to tell about what it’s been doing with a large school district in the U.S. Read more about that here.
  • This is one of those promotions aimed at my friends in Northern America more than folks based where I am, but I’m always happy to talk about deals on data protection. StorCentric has launched its “Retrospect Dads & Grads Promotion” offering a free 90-Day subscription license for every Retrospect Backup product. You can read more about that here.
  • Pure//Accelerate Online was this week, and Max did a nice write-up on Pure Storage File Services over at Gestalt IT.
  • Rancher Labs recently announced the general availability of Longhorn (a cloud-native container storage solution). I’m looking forward to digging in to this a bit more over the next little while.

 

 

Datadobi Announces S3 Migration Capability

Datadobi recently announced S3 migration capabilities as part of DobiMigrate 5.9. I had the opportunity to speak to Carl D’Halluin and Michael Jack about the announcement and thought I’d share some thoughts on it here.

 

What Is It?

In short, you can now use DobiMigrate to perform S3 to S3 object storage migrations. It’s flexible too, offering the ability to migrate data from a variety of on-premises object systems up to public cloud object storage, between on-premises systems, or back to on-premises from public cloud storage. There’s support for a variety of S3-compatible systems.

In the future Datadobi is looking to add support for AWS Glacier, object locks, object tags, and non-current object versions.

 

Why Would You?

There are quite a few reasons why you might want to move S3 data around. You could be seeing high egress charges from AWS because you’re accessing more data in S3 than you’d initially anticipated. You might be looking to move to the cloud and have a significant on-premises footprint that needs to go. Or you might be looking to replace your on-premises solution with a solution from another vendor.

 

How Would You?

The process used to migrate objects is fairly straightforward, and follows a pattern that will be familiar if you’ve done anything with any kind of storage migration tool before. In short, you set up a migration pair (source and destination), run a scan and first copy, then do some incremental copies. Once you’ve got a maintenance window, there’s a cutover where the final scan and copy is done. And then you’re good to go. Basically.
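The scan / first copy / incremental / cutover pattern can be sketched generically. Plain dictionaries stand in for the source and destination object stores here – this is just the shape of the workflow, not DobiMigrate’s implementation:

```python
# Generic sketch of the scan/copy/incremental/cutover migration pattern.
# source and destination are dict-like stand-ins for object stores.
def migrate(source, destination, incremental_passes=3):
    # Initial scan and full first copy.
    manifest = {key: source[key] for key in source}
    destination.update(manifest)

    # Incremental passes: re-scan and copy only what changed since last pass.
    # Each pass shrinks the delta, so the final cutover window is short.
    for _ in range(incremental_passes):
        changed = {k: v for k, v in source.items()
                   if destination.get(k) != v}
        destination.update(changed)

    # Cutover (inside a maintenance window): final scan-and-copy, after
    # which clients are pointed at the destination.
    final = {k: v for k, v in source.items() if destination.get(k) != v}
    destination.update(final)
    # Drop anything that was deleted at the source along the way.
    for k in list(destination):
        if k not in source:
            del destination[k]
    return destination
```

The reason for the incremental passes is the same as in any storage migration: each pass only moves the delta, so by the time you take the outage for cutover there’s very little left to copy.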

[image courtesy of Datadobi]

 

Final Thoughts

Why am I so interested in these types of offerings? Part of it is that it reminds me of all the time I burnt through earlier in my career migrating data from various storage platforms to other storage platforms. One of the funny things about storage is that there’s rarely enough to service demand, and it rarely delivers the performance you need after it’s been in use for a few years. As such, there’s always some requirement to move data from one spot to another, and to keep that data intact in terms of its permissions and metadata.

Amazon’s S3 offering has been amazing in terms of bringing object storage to the front of mind of many storage consumers who had previously only used block or file storage. Some of those users are now discovering that, while S3 is great, it can be expensive if you haven’t accounted for egress costs, or you’ve started using a whole lot more of it than initially anticipated. Some companies simply have to take their lumps, as everything is done in public cloud. But for those organisations with some on-premises footprint, the idea of being able to do performance-oriented object storage in their own data centre holds a great deal of appeal. But how do you get it back on-premises in a reliable fashion? I believe that’s where Datadobi’s solution really shines.

I’m a fan of software that makes life easier for storage folk. Platform migrations can be a real pain to deal with, and are often riddled with risky propositions and daunting timeframes. Datadobi can’t necessarily change the laws of physics in a way that will keep your project manager happy, but it can do some stuff that means you won’t be quite as broken after a storage migration as you might have been previously. They already had a good story when it came to file storage migration, and the object to object story enhances it. Worth checking out.

StorONE Announces S1:TRUprice

StorONE recently announced S1:TRUprice. I had the opportunity to talk about the announcement with George Crump, and thought I’d share some of my notes here.

 

What Is It?

It’s a website that anyone can access, providing a transparent view of StorONE’s pricing. There are three things you’ll want to know when doing a sample configuration:

  • Capacity
  • Use case (All-Flash, Hybrid, or All-HDD); and
  • Preferred server hardware (Dell EMC, HPE, Supermicro)

There’s also an option to do a software-only configuration if you’d rather roll your own. In the following example, I’ve configured HPE hardware in a highly available fashion with 92TB of capacity. This costs US $97,243.14. Simple as that. Once you’re happy with the configuration, you can have a formal quote sent to you, or choose to get on a call with someone.

 

Thoughts and Further Reading

Astute readers will notice that there’s a StorONE banner on my website, and the company has provided funds that help me pay the costs of running my blog. This announcement is newsworthy regardless of my relationship with StorONE though. If you’ve ever been an enterprise storage customer, you’ll know that getting pricing is frequently a complicated endeavour. There’s rarely a page hosted on the vendor’s website that provides the total cost of whatever array / capacity you’re looking to consume. Instead, there’ll be an exercise involving a pre-sales engineer, possibly some sizing and analysis, and a bunch of data is put into a spreadsheet. This then magically determines the appropriate bit of gear. This specification is sent to a pricing team, some discounts to the recommended retail price are usually applied, and it’s sent to you to consider. If it’s a deal that’s competitive, there might be some more discount. If it’s the end of quarter and the sales person is “motivated”, you might find it’s a good time to buy. There are a whole slew of reasons why the price is never the price. But the problem with this is you can never know the price without talking to someone working for the vendor. Want to budget for some new capacity? Or another site deployment? Talk to the vendor. This makes a lot of sense for the vendor. It gives the sales team insight into what’s happening in the account. There’s “engagement” and “partnership”. Which is all well and good, but does withholding pricing need to be the cost of this engagement?

The Cloud Made Me Do It

The public availability of cloud pricing is changing the conversation when it comes to traditional enterprise storage consumption. Not just in terms of pricing transparency, but also equipment availability, customer enablement, and time to value. Years ago we were all beholden to our storage vendor of choice to deliver storage to us under the terms of the vendor, and when the vendor was able to do it. Nowadays, even enterprise consumers can go and grab the cloud storage they want or need with only a small modicum of fuss. This has changed the behaviours of the traditional storage vendors in a way that I don’t think was foreseen. Sure, cloud still isn’t the answer to every problem, and if you’re selling big tin into big banks, you might have a bit of runway before you need to show your customers too much of what’s happening behind the curtain. But this move by StorONE demonstrates that there’s a demand for pricing transparency in the market, and customers are looking to vendors to show some innovation when it comes to the fairly boring business of enterprise storage. I’m very curious to see what other vendors decide to follow suit.

We won’t automatically see the end of some of the practices surrounding enterprise storage pricing, but initiatives like this certainly put some pressure back on the vendors to justify the price per GB they’re slinging gear for. It’s a bit easier to keep prices elevated when your customers have to do a lot of work to go to a competitor and find out what it charges for a similar solution. There are reasons for everything (including high prices), and I’m not suggesting that the major storage vendors have been colluding on price by any means. But something like S1:TRUprice is another nail in the coffin of the old way of doing things, and I’m happy about that. For another perspective on this news, check out Chris M. Evans’ article here.

Backblaze B2 And A Happy Customer

Backblaze recently published a case study with AK Productions. I had the opportunity to speak to Aiden Korotkin and thought I’d share some of my notes here.

 

The Problem

Korotkin’s problem was a fairly common one – he had lots of data from previous projects that had built up over the years. He’d been using a bunch of external drives to store this data, and had had a couple of external drives fail, including the backup drives. Google’s cloud storage option “seemed like a more redundant and safer investment financially to go into the cloud space”. He was already using G Suite. And so he migrated his old projects off hard drives and into the cloud. He had a credit with Google for a year to use its cloud platform. After that it became pretty expensive, and not really feasible. Korotkin also stated that calculating the expected costs was difficult. He also felt that he needed to find something more private / secure.

 

The Solution

So how did he come by Backblaze? He did a bunch of research. Backblaze B2 consistently showed up in the top 15 results when online magazines were publishing their guides to cloud storage. He’d heard of it before, possibly seen a demo. The technology seemed very streamlined, exactly what he needed for his business. A bonus was that there were no extra steps to backup his QNAP NAS as well. This seemed like the best option.

Current Workflow

I asked Korotkin to walk me through his current workflow. B2 is being used as a backup target for the moment. Physics being what it is, it’s still “[h]ard to do video editing direct on the cloud”. The QNAP NAS houses current projects, with data mirrored to B2. Archives are uploaded to a different area of B2. Over time, data is completely archived to the cloud.

How About Ingest?

Korotkin needed to move 12TB from Google to Backblaze. He used Flexify.IO to transfer from one cloud to the next. They walked him through how to do it. The good news is that they were able to do it in 12 hours.

It’s About Support

Korotkin noted that between Backblaze and Flexify.IO “the tech support experience was incredible”. He said that he “[f]elt like I was very much taken care of”. He got the strong impression that the support staff enjoyed helping him, and were with him through every step of the way. The most frustrating part of the migration, according to Korotkin, was dealing with Google generally. The offloading of the data from Google cost more money than he’s paid to date with Backblaze. “As a small business owner I don’t have $1500 just to throw away”.

 

Thoughts

I’ve been a fan of Backblaze for some time. I’m a happy customer when it comes to the consumer backup product, and I’ve always enjoyed the transparency it’s displayed as a company with regards to its pod designs and the process required to get to where it is today. I remain fascinated by the workflows required to do multimedia content creation successfully, and I think this story is a great tribute to the support culture of Backblaze. It’s nice to see that smaller shops, such as Korotkin’s, are afforded the same kind of care and support experience as some of the bigger customers might be. This is a noticeable point of distinction when compared to working with the hyperscalers. It’s not that those folks aren’t happy to help, they’re just operating at a different level.

Korotkin’s approach was not unreasonable, or unusual, particularly for content creators. Keeping data safe is a challenge for small business, and solutions that make storing and protecting data easier are going to be popular. Korotkin’s story is a good one, and I’m always happy to hear these kinds of stories. If you find yourself shuffling external drives, or need a lot of capacity but don’t want to invest too heavily in on-premises storage, Backblaze has a good story in terms of both cloud storage and data protection.

Random Short Take #35

Welcome to Random Short Take #35. Some really good players have worn 35 in the NBA, including The Big Dog Antoine Carr, and Reggie Lewis. This one, though, goes out to one of my favourite players from the modern era, Kevin Durant. If it feels like it’s only been a week since the last post, that’s because it has. I bet you wish that I was producing some content that’s more useful than a bunch of links. So do I.

  • I don’t often get excited about funding rounds, but I have a friend who works there, so here’s an article covering the latest round (C) of funding for VAST Data.
  • Datadobi continues to share good news in these challenging times, and has published a success story based on some work it’s done with Payspan.
  • Speaking of challenging times, the nice folks at Retrospect are offering a free 90-day license subscription for Retrospect Backup. You don’t need a credit card to sign up, and “[a]ll backups can be restored, even if the subscription is cancelled”.
  • I loved this post from Russ discussing a recent article on Facebook and learning from network failures at scale. I’m in love with the idea that you can’t automate your way out of misconfiguration. We’ve been talking a lot about this in my day job lately. Automation can be a really exciting concept, but it’s not magic. And as scale increases, so too does the time it takes to troubleshoot issues. It all seems like a straightforward concept, but you’d be surprised how many people are surprised by these ideas.
  • Software continues to dominate the headlines, but hardware still has a role to play in the world. Alastair talks more about that idea here.
  • Paul Stringfellow recently jumped on the Storage Unpacked podcast to talk storage myths versus reality. Worth listening to.
  • It’s not all good news though. Sometimes people make mistakes, and pull out the wrong cables. This is a story I’ll be sharing with my team about resiliency.
  • SMR drives and consumer NAS devices aren’t necessarily the best combo. So this isn’t the best news either. I’m patiently waiting for consumer Flash drive prices to come down. It’s going to take a while though.

 

Komprise Announces Elastic Data Migration

Komprise recently announced the availability of its Elastic Data Migration solution. I was lucky enough to speak with Krishna Subramanian about the announcement and thought I’d share some of my notes here.

 

Migration Evolution

Komprise?

I’ve written about Komprise before. A few times, as it happens. Subramanian describes it as “analytics driven data management software”, capable of operating with NFS, SMB, and S3 storage. The data migration capability was added last year (at no additional charge), but it was initially focused on LAN-based migration.

Enter Elastic Data Migration

Elastic Data Migration isn’t just for LAN-based migrations though; it’s for customers who want to migrate to the cloud, or perhaps to another data centre. Invariably they’ll be looking to do this over a WAN, rather than a LAN. Given that WAN connections typically suffer from lower speeds and higher latencies, how does Komprise deal with this? I’m glad you asked. The solution addresses latency as follows:

  • Increased parallelism inside the software (based on Komprise VMs, and the nature of the data sets);
  • Reducing round trips over the network; and
  • It’s been optimised to reduce the chatter of the protocol (e.g. NFS being chatty).

Sounds simple enough, but Komprise is seeing some great results when compared to traditional tools such as rsync.
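To see why the parallelism point matters over a high-latency link, consider a toy model where each object copy costs one WAN round trip. With a single stream, those round trips add up serially; with a pool of workers, they overlap. This is a generic sketch of the technique, not Komprise’s code, and the latency figure is purely illustrative:

```python
# Toy model of hiding WAN latency with parallel transfer streams.
from concurrent.futures import ThreadPoolExecutor
import time

def copy_object(key, latency=0.05):
    time.sleep(latency)          # stand-in for one WAN round trip
    return key

def copy_serial(keys):
    # One stream: total time ~ len(keys) * latency.
    return [copy_object(k) for k in keys]

def copy_parallel(keys, workers=16):
    # N streams: round trips overlap, total time ~ ceil(len(keys)/N) * latency.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(copy_object, keys))
```

The same intuition applies to the other two bullets: fewer round trips per object, and less protocol chatter, both shrink the per-object latency cost that parallelism is hiding.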

It’s Graphical

There are some other benefits over the more traditional tools, including GUI access that allows you to run hundreds of migrations simultaneously.

[image courtesy of Komprise]

Of course, if you’re not into doing things with GUIs (and it doesn’t always make sense where a level of automation is required), you can do this programmatically via API access.

 

Thoughts and Further Reading

Depending on what part of the IT industry you’re most involved in, the idea of data migrations may seem like something that’s a little old fashioned. Moving a bunch of unstructured data around using tools from way back when? Why aren’t people just using the various public cloud options to store their data? Well, I guess it’s partly because things take time to evolve and, based on the sorts of conversations I’m still regularly having, simple to use data migration solutions for large volumes of data are still required, and hard to come across.

Komprise has made its name making sense of vast chunks of unstructured data living under various rocks in enterprises. It also has a good story when it comes to archiving that data. It makes a lot of sense that it would turn its attention to improving the experience and performance of migrating a large number of terabytes of unstructured data from one source to another. There’s already a good story here in terms of extensive multi-protocol support and visibility into data sources. I like that Komprise has worked hard on the performance piece as well, and has removed some of the challenges traditionally associated with migrating unstructured data over WAN connections. Data migrations are still a relatively complex undertaking, but they don’t need to be painful.

One of the few things I’m sure of nowadays is that the amount of data we are storing is not shrinking. Komprise is working hard to make sense of what all that data is being used for. Once it knows what that data is for, it’s making it easy to put it in the place that you’ll get the most value from it. Whether that’s on a different NAS on your LAN, or sitting in another data centre somewhere. Komprise has published a whitepaper with the test results I referred to earlier, and you can grab it from here (registration required). Enrico Signoretti also had Subramanian on his podcast recently – you can listen to that here.