Random Short Take #31

Welcome to Random Short Take #31. A lot of good players have worn 31 in the NBA. You’d think I’d call this the Reggie edition (and I appreciate him more after watching Winning Time), but this one belongs to Brent Barry. This may be related to some recency bias I have, based on the fact that Brent is a commentator in NBA 2K19, but I digress …

  • Late last year I wrote about Scale Computing’s big bet on a small form factor. Scale Computing recently announced that Jerry’s Foods is using the HE150 solution for in-store computing.
  • I find Plex to be a pretty rock solid application experience, and most of the problems I’ve had with it have been client-related. I recently had a problem with a server update that borked my installation though, and had to roll back. Here’s the quick and dirty way to do that on macOS.
  • Here are 7 contentious thoughts on data protection from Preston. I think there are some great ideas here, and I recommend taking the time to read this article.
  • I recently had the chance to speak with Michael Jack from Datadobi about the company’s newly announced DIY Starter Pack for NAS migrations. Whilst it seems that the professional services market for NAS migrations has diminished over the last few years, there’s still plenty of data out there that needs to be moved from one box to another. Robocopy and rsync aren’t always the best option when you need to move this much data around.
  • There are a bunch of things that people need to learn to do operations well. A lot of them are learnt the hard way. This is a great list from Jan Schaumann.
  • Analyst firms are sometimes misunderstood. My friend Enrico Signoretti has been working at GigaOm for a little while now, and I really enjoyed this article on the thinking behind the GigaOm Radar.
  • Nexsan recently announced some enhancements to its “BEAST” storage platforms. You can read more on that here.
  • Alastair isn’t just a great writer and moustache aficionado, he’s also a trainer across a number of IT disciplines, including AWS. He recently posted this useful article on what AWS newcomers can expect when it comes to managing EC2 instances.

Random Short Take #30

Welcome to Random Short Take #30. You’d think 30 would be an easy choice, given how much I like Wardell Curry II, but for this one I’m giving a shout out to Rasheed Wallace instead. I’m a big fan of ‘Sheed. I hope you all enjoy these little trips down NBA memory lane. Here we go.

  • Veeam 10’s release is imminent. Anthony has been doing a bang-up job covering some of the enhancements in the product. This article was particularly interesting because I work in a company selling Veeam and using vCloud Director.
  • Sticking with data protection, Curtis wrote an insightful article on backups and frequency.
  • If you’re in Europe or parts of the US (or can get there easily), like writing about technology, and you’re into cars and stuff, this offer from Cohesity could be right up your alley.
  • I was lucky enough to have a chat with Sheng Liang from Rancher Labs a few weeks ago about how it’s going in the market. I’m relatively Kubernetes illiterate, but it sounds like there’s a bit going on.
  • For something completely different, this article from Christian on Raspberry Pi, volumio and HiFiBerry was great. Thanks for the tip!
  • Spinning disk may be as dead as tape, if these numbers are anything to go by.
  • This was a great article from Matt Crape on home lab planning.
  • Speaking of home labs, Shanks posted an interesting article on what he has running. The custom-built rack is inspired.

Infrascale Protects Your Infrastructure At Scale

Disclaimer: I recently attended Storage Field Day 19.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Infrascale recently presented at Storage Field Day 19. You can see videos of the presentation here, and download my rough notes from here.

 

Infrascale?

Russ Reeder (CEO) introduced the delegates to Infrascale. If you’ve not heard of Infrascale before, it’s a service provider and vendor focused primarily on backup and disaster recovery services. It has around 150 employees and operates in 10 cities across 5 countries. Infrascale currently services around 60,000 customers / 250,000 VMs and endpoints. Reeder said Infrascale as a company is “[p]assionate about its customers’ happiness and success”.

 

Product Portfolio

There are four different products in the Infrascale portfolio.

Infrascale Cloud Backup (ICB)

  • Back up directly to the cloud
  • Recover data in seconds
  • Optimised for endpoints and branch office servers
  • Ransomware detection & remediation

Infrascale Cloud Application Backup (ICAB)

  • Defy cloud applications’ limited retention policies
  • Back up O365, SharePoint and OneDrive, G-Suite, Salesforce.com, box.com, and more
  • Recover individual mail items or mailboxes

Infrascale Disaster Recovery – Local (IDR-LOCAL)

  • Back up systems to an on-premises appliance
  • Run system replicas (locally) in minutes
  • Restore from on-premises appliance or the cloud
  • Archive / DR data to disk

Infrascale Disaster Recovery – Cloud (IDR-CLOUD)

  • Back up systems to an on-premises appliance and to a bootable cloud appliance
  • Run system replicas in minutes (locally or boot in the cloud)
  • Optimised for mission-critical physical and virtual servers

Support for Almost Everything

Infrascale offers support for almost everything, including VMware, Hyper-V, bare metal, endpoints, and public cloud workloads.

Other Features

Speedy DR locally or to the Cloud

  • IDR is very fast – boot ready in minutes
  • IDR enables recovery locally or in the cloud

Backup Target Optionality; Vigilant Data Security

  • ICB allows for backup targets “anywhere”
  • ICB detects ransomware and mitigates impact

Single View

The Infrascale dashboard does a pretty decent job of providing all of the information you might need about the service in a single view.

[image courtesy of Infrascale]

Appliances

There are a variety of appliance options available, as well as virtual editions of the appliance that you can use.

[image courtesy of Infrascale]

 

Thoughts and Further Reading

Regular readers of this blog would know that I’m pretty interested in data protection as a topic. I’m sad to say that I hadn’t heard of Infrascale prior to this presentation, but I’m glad I have now. There are a lot of service providers out there offering some level of data protection and disaster recovery as a service. These services offer varying levels of protection, features, and commercial benefits. Infrascale distinguishes itself by offering its own hardware platform as a core part of the solution, rather than building on top of one of the major data protection vendors.

In my day job I work a lot with product development for these types of solutions and, to be honest, the idea of developing a hardware data protection appliance is not something that appeals. As a lot of failed hardware vendors will tell you, it’s one thing to have a great idea, and quite another to execute successfully on that idea. But Infrascale has done the hard work on engineering the solution, and it seems to offer all of the features the average punter looks for in a DPaaS and DRaaS offering. I’m also a big fan of the fact that it offers support for endpoint protection, as I think this is a segment that is historically under-represented in the data protection space. It has a good number of customers, primarily in the SME range, and is continuing to add services to its product portfolio.

Disaster recovery and data protection are things that aren’t always done very well by small to medium enterprises. Unfortunately, these types of businesses tend to have the most to lose when something goes wrong with their critical business data (whether via operator error, ransomware, or actual disaster). Something like Infrascale’s offering is a great way to take away a lot of the complexity traditionally associated with protecting that important data. I’m looking forward to hearing more about Infrascale in the future.

Random Short Take #29

Welcome to Random Short Take #29. You’d think 29 would be a hard number to line up with basketball players, but it turns out that Marcus Camby wore it one year when he played for Houston. It was at the tail-end of his career, but still. Anyhoo …

  • I love a good story about rage-quitting projects, and this one is right up there. I’ve often wondered what it must be like to work on open source projects and deal with the craziness that is the community.
  • I haven’t worked on a Scalar library in over a decade, but Quantum is still developing them. There’s an interesting story here in terms of protecting your protection data using air gaps. I feel like this is already being handled a different way by the next-generation data protection companies, but when all you have is a hammer. And the cost per GB is still pretty good with tape.
  • I always enjoy Keith’s ability to take common problems and look at them with a fresh perspective. I’m interested to see just how far he goes down the rabbit hole with this DC project.
  • Backblaze frequently comes up with useful articles for both enterprise punters and home users alike. This article on downloading your social media presence is no exception. The processes are pretty straightforward to follow, and I think it’s a handy exercise to undertake every now and then.
  • The home office is the new home lab. Or, perhaps, as we work anywhere now, it’s important to consider setting up a space in your home that actually functions as a workspace. This article from Andrew Miller covers some of the key considerations.
  • This article from John Troyer about writing was fantastic. Just read it.
  • Scale Computing was really busy last year. How busy? Busy enough to pump out a press release that you can check out here. The company also has a snazzy new website and logo worth a look.
  • Veeam v10 is coming “very soon”. You can register here to find out more. I’m keen to put this through its paces.

Random Short Take #27

Welcome to my semi-regular, random news post in a short format. This is #27. You’d think it would be hard to keep naming them after basketball players, and it is. None of my favourite players ever wore 27, but Marvin Barnes did surface as a really interesting story, particularly when it comes to effective communication with colleagues. Happy holidays too, as I’m pretty sure this will be the last one of these posts I do this year. I’ll try and keep it short, as you’ve probably got stuff to do.

  • This story of serious failure on El Reg had me in stitches.
  • I really enjoyed this article by Raj Dutt (over at Cohesity’s blog) on recovery predictability. As an industry we talk an awful lot about speeds and feeds and supportability, but sometimes I think we forget about keeping it simple and making sure we can get our stuff back as we expect.
  • Speaking of data protection, I wrote some articles for Druva about, well, data protection and things of that nature. You can read them here.
  • There have been some pretty important CBT-related patches released by VMware recently. Anthony has provided a handy summary here.
  • Everything’s an opinion until people actually do it, but I thought this research on cloud adoption from Leaseweb USA was interesting. I didn’t expect to see everyone putting their hands up and saying they’re all in on public cloud, but I was also hopeful that we, as an industry, hadn’t made things as unclear as they seem to be. Yay, hybrid!
  • Site sponsor StorONE has partnered with Tech Data Global Computing Components to offer an All-Flash Array as a Service solution.
  • Backblaze has done a nice job of talking about data protection and cloud storage through the lens of Star Wars.
  • This tip on removing particular formatting in Microsoft Word documents really helped me out recently. Yes I know Word is awful.
  • Someone was nice enough to give me an acknowledgement for helping review a non-fiction book once. Now I’ve managed to get a character named after me in one of John Birmingham’s epics. You can read it out of context here. And if you’re into supporting good authors on Patreon – then check out JB’s page here. He’s a good egg, and his literary contributions to the world have been fantastic over the years. I don’t say this just because we live in the same city either.

Random Short Take #26

Welcome to my semi-regular, random news post in a short format. This is #26. I was going to start naming them after my favourite basketball players. This one could be the Korver edition, for example. I don’t think that’ll last though. We’ll see. I’ll stop rambling now.

Random Short Take #25

Want some news? In a shorter format? And a little bit random? Here’s a short take you might be able to get behind. Welcome to #25. This one seems to be dominated by things related to Veeam.

  • Adam recently posted a great article on protecting VMConAWS workloads using Veeam. You can read about it here.
  • Speaking of Veeam, Hal has released v2 of his MS Office 365 Backup Analysis Tool. You can use it to work out how much capacity you’ll need to protect your O365 workloads. And you can figure out what your licensing costs will be, as well as a bunch of other cool stuff.
  • And in more Veeam news, the VeeamON Virtual event is coming up soon. It will be run across multiple timezones and should be really interesting. You can find out more about that here.
  • This article by Russ on copyright and what happens when bots go wild made for some fascinating reading.
  • Tech Field Day turns 10 years old this year, and Stephen has been running a series of posts covering some of the history of the event. Sadly I won’t be able to make it to the celebration at Tech Field Day 20, but if you’re in the right timezone it’s worthwhile checking it out.
  • Need to connect to an SMB share on your iPad or iPhone? Check out this article (assuming you’re running iOS 13 or iPadOS 13.1).
  • It grinds my gears when this kind of thing happens. But if the mighty corporations have launched a line of products without thinking it through, we shouldn’t expect them to maintain that line of products. Right?
  • Storage and Hollywood can be a real challenge. This episode of Curtis’s podcast really got into some of the details with Jeff Rochlin.

 

Veeam Basics – Cloud Tier And v10

Disclaimer: I recently attended Veeam Vanguard Summit 2019.  My flights, accommodation, and some meals were paid for by Veeam. There is no requirement for me to blog about any of the content presented and I am not compensated by Veeam for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Overview

Depending on how familiar you are with Veeam, you may already have heard of the Cloud Tier feature. This was new in Veeam Availability Suite 9.5 Update 4, and “is the built-in automatic tiering feature of Scale-out Backup Repository that offloads older backup files to more affordable storage, such as cloud or on-premises object storage”. The idea is you can use the cloud (or cloud-like on-premises storage resources) to make more effective (read: economical) use of your primary storage repositories. You can read more about Veeam’s object storage capabilities here.

 

v10 Enhancements

Move, Copy, Move and Copy

In 9.5 U4 the Move mode was introduced:

  • Policy allows chunks of data to be stripped out of backup files
  • Metadata remains locally on the performance tier
  • Data moved and offloaded into capacity tier
  • Capacity Tier backed by an object storage repository

The idea was that your performance tier provided the landing zone for backup data, and the capacity tier was an object storage repository that data was moved to. Rhys does a nice job of covering Cloud Tier here.

Copy + Move

In v10, you’ll be able to do both copy and move activities on older backup data. Here are some things to note about copy mode:

  • Still uses the same mechanics as Move
  • Data is chunked and offloaded to the Capacity Tier
  • Unlike Move, it doesn’t dehydrate VBK / VIB / VRB files
  • Like Move, this ensures that all restore functionality is retained
  • Still makes use of the Archive Index, similar to Move
  • Will not duplicate blocks being offloaded from the Performance Tier
  • Running both Copy and Move together is fully supported
  • Copy + Move will share block data between them

[image courtesy of Veeam]

With Copy and Move the Capacity Tier will contain a copy of every backup file that has been created as well as offloaded data from the Performance Tier. Anthony does a great job of covering off the Cloud Tier Copy feature in more depth here.
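
To make the distinction a little more concrete, here’s a conceptual sketch in Python. To be clear, this is just my illustration of the mechanics described above, not Veeam’s code; the dictionaries standing in for the performance and capacity tiers, the block size, and the function name are all made up.

import hashlib

capacity_tier = {}     # stand-in for the object storage repository (block hash -> block data)
performance_tier = {}  # stand-in for backup files sitting on the performance tier

def offload(name, data, mode, block_size=4096):
    """Chunk a backup file and push its unique blocks to the capacity tier.
    mode='move' dehydrates the local copy down to metadata only;
    mode='copy' offloads the same blocks but leaves the local file intact."""
    block_refs = []
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in capacity_tier:  # blocks already offloaded are not duplicated
            capacity_tier[digest] = block
        block_refs.append(digest)
    if mode == "move":
        performance_tier[name] = {"dehydrated": True, "blocks": block_refs}
    else:  # copy
        performance_tier[name] = {"dehydrated": False, "blocks": block_refs, "data": data}
    return block_refs

# e.g. offload("vm01.vbk", open("vm01.vbk", "rb").read(), mode="copy")  # hypothetical file name

The shared capacity_tier dictionary is the point of the last couple of bullets above: a block that has already been offloaded by Copy doesn’t get pushed again when Move later dehydrates the same backup file, and vice versa.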

Immutability

One of the features I’m really excited about (because I’m into some weird stuff) is the Cloud Tier Immutability feature. I’ve included a rough sketch of the underlying object lock mechanics after the notes below.

  • Guarantees additional protection for data stored in Object storage
  • Protects against malicious users and accidental deletion (ITP Theory)
  • Applies to data offloaded to capacity tier for Move or Copy
  • Protects the most recent (more important) backup points
  • Beware of increased storage consumption and S3 costs
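
As I understand it, this sort of immutability leans on the WORM capabilities of the object storage itself (S3 Object Lock, in Amazon’s case) rather than anything enforced purely on the backup server side. Purely as an illustration of that underlying mechanism, and definitely not Veeam’s code, here’s a minimal boto3 sketch that creates a bucket with Object Lock enabled and a default compliance-mode retention. The bucket name and the 30-day retention are made-up examples, and region handling is omitted.

import boto3

s3 = boto3.client("s3")

# Object Lock can only be switched on when the bucket is created
s3.create_bucket(
    Bucket="example-capacity-tier-bucket",  # hypothetical bucket name
    ObjectLockEnabledForBucket=True,
)

# Compliance mode means nobody, not even the account root user, can delete
# or overwrite a locked object version until the retention period expires
s3.put_object_lock_configuration(
    Bucket="example-capacity-tier-bucket",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
    },
)

Object Lock also requires versioning on the bucket, which is part of the reason the storage consumption and S3 cost warning above is worth taking seriously.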

 

Thoughts and Further Reading

The idea of moving protection data to a cheaper storage repository isn’t a new one. Fifteen years ago we were excited to be enjoying backup to disk as a new way of doing data protection. Sure, it wasn’t (still isn’t) as cheap as tape, but it was a lot more flexible and performance-oriented. Unfortunately, the problem with disk-based backup systems is that you need a lot of disk to keep up with the protection requirements of primary storage systems. And then you probably want to keep many, many copies of this data for a long time. Deduplication and compression help with this problem, but they’re not magic. Hence the requirement to move protection data to lower tiers of storage.

Veeam may have been a little late to market with this feature, but their implementation in 9.5 U4 is rock solid. It’s the kind of thing we’ve come to expect from them. With v10 the addition of the Copy mode, and the Immutability feature in Cloud Tier, should give people cause to be excited. Immutability is a really handy feature, and provides the kind of security that people should be focused on when looking to pump data into the cloud.

I still have some issues with people using protection data as an “archive” – that’s not what it is. Rather, this is a copy of protection data that’s being kept for a long time. It keeps auditors happy. And fits nicely with people’s idea of what archives are. Putting my weird ideas about archives versus protection data aside, the main reason you’d want to move or copy data to a cheaper tier of disk is to save money. And that’s not a bad thing, particularly if you’re working with enterprise protection policies that don’t necessarily make sense (e.g. keeping all backup data for seven years). I’m looking forward to v10 coming soon, and taking these features for a spin.

Cohesity – NAS Data Migration Overview

Data Migration

Cohesity NAS Data Migration, part of SmartFiles, was recently announced as a generally available feature within the Cohesity DataPlatform 6.4 release (after being mentioned in the 6.3 release blog post). The idea behind it is that you can use the feature to migrate NAS data from a primary source to the Cohesity DataPlatform. It is supported for NAS storage registered as SMB or NFS (so it doesn’t necessarily need to be a NAS appliance as such; it can also be a file share hosted somewhere).

 

What To Think About

There are a few things to think about when you configure your migration policy, including:

  • The last time the file was accessed;
  • The last time the file was modified; and
  • The size of the file.

You also need to think about how frequently you want to run the job. Finally, it’s worth considering which View you want the archived data to reside on.
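
None of this is Cohesity’s code, but to make those policy knobs a little more concrete, here’s a rough Python sketch of the kind of selection logic involved: files that haven’t been accessed or modified within a cutoff, and that are over a certain size, become migration candidates. The thresholds and the share path are made-up examples.

import os
import time

def select_candidates(root, max_age_days=180, min_size_bytes=1024 * 1024):
    """Walk a share and return files that look cold enough to migrate:
    not accessed or modified within max_age_days, and at least min_size_bytes."""
    cutoff = time.time() - max_age_days * 86400
    candidates = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for filename in filenames:
            path = os.path.join(dirpath, filename)
            try:
                st = os.stat(path)
            except OSError:
                continue  # file vanished or is inaccessible, so skip it
            if st.st_atime < cutoff and st.st_mtime < cutoff and st.st_size >= min_size_bytes:
                candidates.append(path)
    return candidates

# Example: report cold files on a mounted share (the path is hypothetical)
for path in select_candidates(r"\\nas01\projects"):
    print(path)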

 

What Happens?

When the data is migrated, an SMB2 symbolic link with the same name as the original file is left in its place, and the original data is moved to the Cohesity View. Note that on Windows boxes, remote-to-remote symbolic links are disabled by default, so you need to run these commands:

C:\Windows\system32>fsutil behavior set SymlinkEvaluation R2R:1
C:\Windows\system32>fsutil behavior query SymlinkEvaluation

Once the data is migrated to the Cohesity cluster, subsequent read and write operations are performed on the Cohesity host. You can move data back to the environment by mounting the Cohesity target View on a Windows client, and copying it back to the NAS.
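
As a small illustration of that behaviour (again, a sketch of my own rather than anything Cohesity ships), you could walk a share and report which entries are now symbolic-link stubs and which are still ordinary files. Whether a client actually sees the stub as a link depends on the symlink evaluation settings mentioned above, and the paths here are hypothetical.

import os

def classify(path):
    """Report whether a path is a migration stub (a symbolic link) or a regular file."""
    if os.path.islink(path):
        return "stub -> " + os.readlink(path)
    return "regular file"

# Hypothetical paths on a share that has been partially migrated
for p in [r"\\nas01\projects\old-report.docx", r"\\nas01\projects\current.xlsx"]:
    print(p, ":", classify(p))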

 

Configuration Steps

To get started, select File Services, and click on Data Migration.

Click on Migrate Data to configure a migration job.

You’ll need to give it a name.

 

The next step is to select the Source. If you already have a NAS source configured, you’ll see it here. Otherwise you can register a Source.

Click on the arrow to expand the registered NAS mount points.

Select the mount point you’d like to use.

Once you’ve selected the mount point, click on Add.

You then need to select the Storage Domain (formerly known as a ViewBox) to store the archived data on.

You’ll need to provide a name, and configure schedule options.

You can also configure advanced settings, including QoS and exclusions. Once you’re happy, click on Migrate and the job will be created.

You can then run the job immediately, or wait for the schedule to kick in.

 

Other Things To Consider

You’ll need to think about your anti-virus options as well. You can register external anti-virus software or install the anti-virus app from the Cohesity Marketplace.

 

Thoughts And Further Reading

Cohesity have long positioned their secondary storage solution as something more than just a backup and recovery solution. There’s some debate about the difference between storage management and data management, but Cohesity seem to have done a good job of introducing yet another feature that can help users easily move data from their primary storage to their secondary storage environment. Plenty of backup solutions have positioned themselves as archive solutions, but many have been focused on moving protection data, rather than primary data from the source. You’ll need to do some careful planning around sizing your environment, as there’s always a chance that an end user will turn up and start accessing files that you thought were stale. And I can’t say with 100% certainty that this solution will transparently work with every line of business application in your environment. But considering it’s aimed at SMB and NFS shares, it looks like it does what it says on the tin, and moves data from one spot to another.

You can read more about the new features in Cohesity DataPlatform 6.4 (Pegasus) on the Cohesity site, and Blocks & Files covered the feature here. Alastair also shared some thoughts on the feature here.

Random Short Take #24

Want some news? In a shorter format? And a little bit random? This listicle might be for you. Welcome to #24 – The Kobe Edition (not a lot of passing, but still entertaining). 8 articles too. Which one was your favourite Kobe? 8 or 24?

  • I wrote an article about how architecture matters years ago. It’s nothing to do with this one from Preston, but he makes some great points about the importance of architecture when looking to protect your public cloud workloads.
  • Commvault GO 2019 was held recently, and Chin-Fah had some thoughts on where Commvault’s at. You can read all about that here. Speaking of Commvault, Keith had some thoughts as well, and you can check them out here.
  • Still on data protection, Alastair posted this article a little while ago about using the Cohesity API for reporting.
  • Cade just posted a great article on using the right transport mode in Veeam Backup & Replication. Goes to show he’s not just a pretty face.
  • VMware vFORUM is coming up in November. I’ll be making the trip down to Sydney to help out with some VMUG stuff. You can find out more here, and register here.
  • Speaking of VMUG, Angelo put together a great 7-part series on VMUG chapter leadership and tips for running successful meetings. You can read part 7 here.
  • This is a great article from Frederic Lhoest on managing Rubrik users from the CLI.
  • Are you into Splunk? And Pure Storage? Vaughn has you covered with an overview of Splunk SmartStore on Pure Storage here.