Random Short Take #39

Welcome to Random Short Take #39. Not a huge number of players have worn 39 in the NBA, and I’m not going to pretend I’m any real fan of The Dwightmare. But things are tough all around, so let’s remain optimistic and push through to number 40. Anyway, let’s get random.

  • VeeamON 2020 was online this week, and Anthony Spiteri has done a great job of summarising the major technical session announcements here.
  • I’ve known Howard Marks for a while now, and always relish the opportunity to speak with him when I can. This post is pretty hilarious, and I’m looking forward to reading the followup posts.
  • This is a great article from Alastair Cooke on COVID-19 and what En-Zed has done effectively to stop the spread. It was interesting to hear his thoughts on returning to the US, and I do agree that it’s going to be some time until I make the trip across the Pacific again.
  • Sometimes people get crazy ideas about how they might repurpose some old bits of technology. It’s even better when they write about their experiences in doing so. This article on automating an iPod Hi-Fi’s volume control over at Six Colors was fantastic.
  • Chris M. Evans put out a typically thought-provoking piece on data migration challenges recently that I think is worth checking out. I’ve been talking a lot to customers that are facing these challenges on a daily basis, and it’s interesting to see how, regardless of the industry vertical they operate in, it’s sometimes just a matter of the depth varying, so to speak.
  • I frequently bump into Ray Lucchesi at conferences, and he knows a fair bit about what does and doesn’t work. This article on his experiences recently with a number of virtual and online conferences is the epitome of constructive criticism.
  • Speaking of online conferences, the Australian VMUG UserCon will be virtual this year and will be held on the 30th of July. You can find out more and register here.
  • Finally, if you’ve spent any time with me socially, you’ll know I’m a basketball nut. And invariably I’ll tell you that Deftones is my favouritest band ever. So it was great to come across this article about White Pony on one of my favourite sports (and popular culture) websites. If you’re a fan of Deftones, this is one to check out.


Komprise Announces Elastic Data Migration

Komprise recently announced the availability of its Elastic Data Migration solution. I was lucky enough to speak with Krishna Subramanian about the announcement and thought I’d share some of my notes here.


Migration Evolution

Komprise?

I’ve written about Komprise before. A few times, as it happens. Subramanian describes it as “analytics driven data management software”, capable of operating with NFS, SMB, and S3 storage. The data migration capability was added last year (at no additional charge), but it was initially focused on LAN-based migration.

Enter Elastic Data Migration

Elastic Data Migration isn’t just for LAN-based migrations though; it’s for customers who want to migrate to the cloud, or perhaps to another data centre. Invariably they’ll be looking to do this over a WAN, rather than a LAN. Given that WAN connections typically suffer from lower speeds and higher latencies, how does Komprise deal with this? I’m glad you asked. The solution addresses latency in a few ways (there’s a rough sketch of the parallelism idea after the list):

  • Increasing parallelism inside the software (based on the Komprise VMs and the nature of the data sets);
  • Reducing round trips over the network; and
  • Optimising chatty protocols (e.g. NFS) to cut down on protocol overhead.
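
As a rough illustration of the parallelism point (and only that — Komprise’s actual implementation works at the protocol level and isn’t public), here’s a minimal Python sketch of why keeping many transfers in flight helps on a high-latency link. The mount points are hypothetical:

import shutil
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def copy_tree_parallel(src: Path, dst: Path, workers: int = 16) -> None:
    """Copy every file under src to dst using a pool of workers.

    A serial tool spends most of its time on a high-latency WAN link
    waiting on round trips; running many transfers concurrently keeps
    the pipe full instead.
    """
    files = [p for p in src.rglob("*") if p.is_file()]

    def copy_one(path: Path) -> None:
        target = dst / path.relative_to(src)
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(path, target)  # preserves timestamps and mode bits

    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(copy_one, files))  # list() surfaces worker exceptions

if __name__ == "__main__":
    # Hypothetical source and destination shares.
    copy_tree_parallel(Path("/mnt/source_share"), Path("/mnt/target_share"))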

Sounds simple enough, but Komprise is seeing some great results when compared to traditional tools such as rsync.

It’s Graphical

There are some other benefits over the more traditional tools, including GUI access that allows you to run hundreds of migrations simultaneously.

[image courtesy of Komprise]

Of course, if you’re not into doing things with GUIs (and a GUI doesn’t always make sense where a level of automation is required), you can do all of this programmatically via the API.
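
I haven’t seen the API documentation, so the endpoint and payload below are entirely made up — it’s just a sketch of what kicking off a migration programmatically might look like:

import requests

# Placeholder URL and schema: the real Komprise API will differ.
KOMPRISE_API = "https://komprise.example.com/api/v1"

def start_migration(token: str, source: str, target: str) -> str:
    """Kick off a migration job and return its job ID (illustrative only)."""
    resp = requests.post(
        f"{KOMPRISE_API}/migrations",
        headers={"Authorization": f"Bearer {token}"},
        json={"source": source, "target": target},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["jobId"]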


Thoughts and Further Reading

Depending on what part of the IT industry you’re most involved in, the idea of data migrations may seem like something that’s a little old fashioned. Moving a bunch of unstructured data around using tools from way back when? Why aren’t people just using the various public cloud options to store their data? Well, I guess it’s partly because things take time to evolve and, based on the sorts of conversations I’m still regularly having, simple to use data migration solutions for large volumes of data are still required, and hard to come across.

Komprise has made its name making sense of vast chunks of unstructured data living under various rocks in enterprises. It also has a good story when it comes to archiving that data. It makes a lot of sense that it would turn its attention to improving the experience and performance of migrating a large number of terabytes of unstructured data from one source to another. There’s already a good story here in terms of extensive multi-protocol support and visibility into data sources. I like that Komprise has worked hard on the performance piece as well, and has removed some of the challenges traditionally associated with migrating unstructured data over WAN connections. Data migrations are still a relatively complex undertaking, but they don’t need to be painful.

One of the few things I’m sure of nowadays is that the amount of data we are storing is not shrinking. Komprise is working hard to make sense of what all that data is being used for. Once it knows what that data is for, it makes it easy to put it in the place where you’ll get the most value from it, whether that’s a different NAS on your LAN or another data centre somewhere. Komprise has published a whitepaper with the test results I referred to earlier, and you can grab it from here (registration required). Enrico Signoretti also had Subramanian on his podcast recently – you can listen to that here.

Datadobi Announces DobiMigrate 5.8 – Introduces Chain of Custody

Datadobi recently announced version 5.8 of its DobiMigrate software and introduced a “Chain of Custody” feature. I had the opportunity to speak to Carl D’Halluin and Michael Jack about the announcement and thought I’d share some thoughts on it here.


Don’t They Do File Migration?

If you’re unfamiliar with Datadobi, it’s a company that specialises in NAS migration software. It tends to get used a lot by the major NAS vendors as a rock-solid method of moving data off a competitor’s box and onto theirs. Datadobi has been around for quite a while, and a lot of the founders have heritage with EMC Centera.

Chain of Custody?

So what exactly does the Chain of Custody feature offer?

  • Tracking of files and objects throughout an entire migration
  • A full photo-finish of the source and destination systems at cutover time
  • Forensic input which can serve as future evidence of tampering
  • Availability for all migrations, with:
    • No performance hit; and
    • No enlarged maintenance window.

[image courtesy of Datadobi]

Why Is This Important?

Organisations the world over are subject to a variety of legislative requirements to ensure that data presented as evidence in courts of law hasn’t been tampered with. Some of them spend an inordinate amount of money ensuring that their document management systems (and the hardware those systems reside on) offer all kinds of compliance and governance features, so that you can reliably get up in front of a judge and say that nothing has been messed with. Or you can reliably say that it has been messed with. Either way though, it’s reliable. Unfortunately, nothing lasts forever (not even those Centera cubes we put in years ago).

So what do you do when you have to migrate your data from one platform to another? If you’ve just used rsync or robocopy to get the data from one share to another, how can you reliably prove that you’ve done so without corrupting or otherwise tampering with the data? Logs are just files, after all, so what’s to stop someone “losing” some data along the way?
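
To make that concrete, here’s a toy version of the kind of integrity evidence a migration tool can capture: hash every file on the source before cutover, hash the destination afterwards, and keep both manifests. The paths are hypothetical, and DobiMigrate’s actual Chain of Custody mechanism is its own:

import hashlib
from pathlib import Path

def manifest(root: Path) -> dict:
    """Map each file's relative path to its SHA-256 digest."""
    digests = {}
    for path in sorted(root.rglob("*")):
        if path.is_file():
            # read_bytes() is fine for a sketch; chunked reads suit big files
            digests[str(path.relative_to(root))] = hashlib.sha256(
                path.read_bytes()
            ).hexdigest()
    return digests

source_manifest = manifest(Path("/mnt/old_nas"))  # taken before cutover
dest_manifest = manifest(Path("/mnt/new_nas"))    # taken after migration
assert source_manifest == dest_manifest, "files differ or went missing"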

It turns out that a lot of folks in the legal profession have been aware of this problem for a while, but they’ve looked the other way. I am no lawyer, but as it was explained to me, if you introduce some doubt into the reliability of the migration process, it’s easy enough for the other side to counter that your stuff may not have been so reliable either, and the whole thing becomes something of a shambles. Of course, there’s likely a more coherent way to explain this, but this is a tech blog and I’m being lazy.


Thoughts

I’ve done all kinds of data migrations over the years. I think I’ve been fortunate that I’ve never specifically had to deal with a system that was being relied on seriously for legislative reasons, because I’m sure that some of those migrations were done more by the seat of my pants than anything else. Usually the last thing on the organisation’s mind (?) was whether the migration activity was compliant or not. Instead, the focus of the project manager was normally to get the data from the old box to the new box as quickly as possible and with as little drama / downtime as possible.

If you’re working on this stuff in a large financial institution though, you’ll likely have a different focus. And I’m sure the last thing your corporate counsel want to hear is that you’ve been playing a little fast and loose with data over the years. I anticipate this announcement will be greeted with some happiness by people who’ve been saddled with these kinds of daunting tasks in the past. As we move to a more and more digital world, we need to carry some of the concepts from the physical world across. It strikes me that Datadobi has every reason to be excited about this announcement. You can read the press release here.


EMC – naviseccli getlun -capacity

I needed to run this command recently to get the block count of a pool LUN that I wanted to migrate to a traditional FLARE LUN. I’ll go into the reasons for the migration another time, but basically a pool LUN doesn’t show you the number of blocks consumed when viewed through Unisphere.

So I used naviseccli to report the block count accurately so I could create another LUN of exactly the same size.

I:\>naviseccli -address 256.256.256.256 getlun 432 -capacity
LUN Capacity(Megabytes):    1048576
LUN Capacity(Blocks):       2147483648

It’s also important to note that you cannot migrate a LUN using the LUN Migration tool to a LUN that is larger than the source. Test it for yourself if you don’t believe me. If you want to migrate a LUN to a larger destination you need to use SAN Copy. This also became an issue recently when I needed to migrate some Pool LUNs to traditional MetaLUNs and used components that were a block or two too large. Fortunately when you create a MetaLUN you can specify the correct block count / MB / GB / size.
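
As a quick sanity check on those two numbers: these arrays use 512-byte blocks, so the megabyte figure should always convert to the block figure like this:

megabytes = 1048576
blocks = megabytes * 1024 * 1024 // 512
print(blocks)  # 2147483648, matching the naviseccli output above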

naviseccli – don’t hate it because it’s beautiful.