Random Short Take #16

Here are a few links to some random news items and other content that I recently found interesting. You might find them interesting too. Episode 16 – please enjoy these semi-irregular updates.

  • Scale Computing has been doing a bit in the healthcare sector lately – you can read news about that here.
  • This was a nice roundup of the news from Apple’s recent WWDC from Six Colors. Hat tip to Stephen Foskett for the link. Speaking of WWDC news, you may have been wondering what happened to all of your purchased content with the imminent demise of iTunes on macOS. It’s still a little fuzzy, but this article attempts to shed some light on things. Spoiler: you should be okay (for the moment).
  • There’s a great post on the Dropbox Tech Blog from James Cowling discussing the mission versus the system.
  • The more things change, the more they remain the same. For years I had a Windows PC running Media Center and recording TV, using IceTV as the XMLTV-based program guide provider. I then started to mess about with some HDHomeRun devices; when that PC died, I went back to a traditional DVR arrangement. Plex now has DVR capabilities and has been doing a reasonable job with guide data (and recording in general), but they’ve decided it’s all a bit too hard to curate guides themselves and want users (at least in Australia) to use XMLTV-based guides instead. So I’m back to using IceTV, this time with Plex. They’re offering a free trial at the moment for Plex users, and setup instructions are here. No, I don’t get paid if you click on the links.
  • The Cohesity team in Queensland is organising a social axe-throwing event on Friday 21st June from 2 – 4 pm at Maniax Axe Throwing in Newstead. You can get in contact with Casey if you’d like to register.
  • VeeamON Forum Australia is coming up soon. It will be held at the Hyatt Regency Hotel in Sydney on July 24th and should be a great event. You can find out more information and register for it here. The Vanguards are also planning something cool, so hopefully we’ll see you there.
  • Speaking of Veeam, Anthony Spiteri recently published his longest title in the Virtualization is Life! catalogue – Orchestration Of NSX By Terraform For Cloud Connect Replication With vCloud Director. It’s a great article, and worth checking out.
  • There’s a lot of talk and slideware devoted to digital transformation, and a lot of it is rubbish. But I found this article from Chin-Fah to be particularly insightful.

Cohesity Basics – Excluding VMs Using Tags – Real World Example

I’ve written before about using VM tags with Cohesity to exclude VMs from a backup. I wanted to write up a quick article using a real world example in the test lab. In this instance, we had someone deploying 200 VMs over a weekend to test a vendor’s storage array with a particular workload. The catch was that I had Cohesity set to automatically protect any new VMs deployed in the lab. This wasn’t a problem from a scalability perspective; rather, we were backing up a bunch of test data that didn’t dedupe well and didn’t need to be protected by what are, ultimately, finite resources.

As I pointed out in the other article, creating tags for VMs and using them as a way to exclude workloads from Cohesity is not a new concept, and is fairly easy to do. You can also apply the tags in bulk using the vSphere Web Client if you need to. But a quicker way to do it (and something that can be done post-deployment) is to use PowerCLI to search for VMs with a particular naming convention and apply the tags to those.

Firstly, you’ll need to log in to your vCenter.

PowerCLI C:\> Connect-VIServer vCenter

In this example, the test VMs are deployed with the prefix “PSV”, so this makes it easy enough to search for them.

PowerCLI C:\> Get-VM | Where-Object {$_.Name -like "PSV*"} | New-TagAssignment -Tag "COH-NoBackup"

This assumes that the tag already exists on the vCenter side of things, and you have sufficient permissions to apply tags to VMs. You can check your work with the following command.

PowerCLI C:\> Get-VM | Where-Object {$_.Name -like "PSV*"} | Get-TagAssignment
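As noted above, this all assumes the tag is already there. If it isn’t, you can create it (and a category to hold it) with PowerCLI as well. A quick sketch — the “Backup” category name here is just an example, so use whatever fits your environment:

PowerCLI C:\> New-TagCategory -Name "Backup" -Cardinality Single -EntityType VirtualMachine
PowerCLI C:\> New-Tag -Name "COH-NoBackup" -Category "Backup"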

One thing to note: if you’ve updated the tags of a bunch of VMs in your vCenter environment, you may notice that the objects aren’t immediately excluded from the Protection Job on the Cohesity side of things. The reason for this is that, by default, Cohesity only refreshes vCenter source data every 4 hours. One way to force the update is to manually refresh the source vCenter in Cohesity. To do this, go to Protection -> Sources. Click on the ellipsis on the right-hand side of the vCenter source you’d like to refresh, and select Refresh.
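If you’d rather script that refresh than click through the UI, Cohesity also exposes it via REST. The sketch below is assumption-heavy — I’m going from the v1 public API here (the accessTokens and protectionSources/refresh endpoints), and the cluster name, credentials, and source ID 42 are all placeholders — so verify against your cluster’s API documentation before relying on it.

# Get a bearer token from the cluster (placeholder cluster name and credentials)
$body = @{ username = "admin"; password = "password"; domain = "LOCAL" } | ConvertTo-Json
$auth = Invoke-RestMethod -Method Post -Uri "https://cohesity01/irisservices/api/v1/public/accessTokens" -Body $body -ContentType "application/json"

# Refresh the registered vCenter source; 42 is a placeholder source ID
# (on PowerShell 6 and later you may need -SkipCertificateCheck for self-signed certificates)
$headers = @{ Authorization = "Bearer $($auth.accessToken)" }
Invoke-RestMethod -Method Post -Uri "https://cohesity01/irisservices/api/v1/public/protectionSources/refresh/42" -Headers $headers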

You’ll then see that the tagged VMs are excluded in the Protection Job. Hat tip to my colleague Mike for his help with PowerCLI. And hat tip to my other colleague Mike for causing the problem in the first place.

Zerto – News From ZertoCON 2019

Zerto recently held their annual user conference (ZertoCON) in Nashville, TN. I had the opportunity to talk to Rob Strechay about some of the key announcements coming out of the event and thought I’d cover them here.


Key Announcements

Licensing

You can now acquire Zerto either as a perpetual license or via a subscription. There’s previously been a form of subscription pricing with Zerto, with customers effectively renting licenses via managed service providers, but this is the first time subscriptions are being offered directly to customers. Strechay noted that Zerto is “[n]ot trying to move to a subscription-only model”, but they are keen to give customers further flexibility in how they consume the product. Note that the subscription pricing also includes maintenance and support.

7.5 Is Just Around The Corner

If it feels like 7.0 was only just delivered, that’s because it was (in April). But 7.5 is already just around the corner. They’re looking to add a bunch of features, including:

  • Deeper integration with HPE StoreOnce using a Catalyst-based API, leveraging source-side deduplication
  • Qualification of Azure’s Data Box
  • Cloud mobility – in 7.0 they started down the path with Azure. Zerto Cloud Appliances now autoscale within Azure.

Azure Integration

There’s a lot more focus on Azure in 7.5, and Zerto are working on:

  • Managed failback / managed disks in Azure
  • Integration with Azure Active Directory
  • Adding encryption at rest in AWS, and doing some IAM integration
  • Automated driver injection on the fly as you recover into AWS (with Red Hat)

Resource Planner

Building on their previous analytics work, you’ll also (shortly) be able to download Zerto Virtual Manager. It talks to vCenter, gathers data, and helps customers plan their VMware to VMware (or VMware to Azure / AWS) migrations.

VAIO

Zerto has now completed the initial certification to use VMware’s vSphere APIs for I/O Filtering (VAIO) and they’ll be leveraging these in 7.5. Strechay said they’ll probably have both versions in the product for a little while.


Thoughts And Further Reading

I’d spoken with Strechay previously about Zerto’s plans to compete against the “traditional” data protection vendors, and asked him what the customer response has been to Zerto’s ambitions (and execution). His view was that, since customers are already off-siting data with Zerto (as part of the 3-2-1 data protection philosophy), taking it to the next level isn’t a big leap. He said a number of customers were very motivated to use long term retention, and wanted to move on from their existing backup vendors. I’ve waxed lyrical in the past about what I thought some of the key differences were between periodic data protection, disaster recovery, and disaster avoidance. That doesn’t mean that companies like Zerto aren’t doing a pretty decent job of blurring the lines between the types of solution they offer, particularly with the data mobility capabilities built into their offerings. I think there’s a lot of scope for Zerto to move into spaces they’ve previously only been peripherally involved in. It makes sense that they’d focus on data mobility and off-site data protection capabilities. There’s a good story developing with their cloud integration, and it seems like they’ll just continue to add features and capabilities to the product. I really like that they’re not afraid to make promises on upcoming releases, and have (thus far) been able to deliver on them.

The news about VAIO certification is pretty big, and it might remove some of the pressure that potential customers have faced previously about adopting protection solutions that weren’t entirely blessed by VMware.

I’m looking forward to seeing what Zerto ends up delivering with 7.5, and I’m really enjoying the progress they’re making with both their on-premises and public cloud focused solutions. You can read Zerto’s press release here, and Andrea Mauro published a comprehensive overview here.

Pure Storage – ObjectEngine and Commvault Integration

I’ve been working with Pure Storage’s ObjectEngine in our lab recently, and wanted to share a few screenshots from the Commvault configuration bit, as it had me stumped for a little while. This is a quick one, but hopefully it will help those of you out there who are trying to get it working. I’m assuming you’ve already created your bucket and user in the ObjectEngine environment, and you have the details of your OE environment at hand.

The first step is to add a Cloud Storage Library to your Libraries configuration.

You’ll need to provide a name, and select the type as Amazon S3. You’ll see in this example that I’m using the fully qualified domain name as the Service Host.

At this point you should be able to click on Detect to detect the bucket you’ll use to store data in. For some reason though, I kept getting an error when I did this.

The trick is to put http:// in front of the FQDN. Note that this doesn’t work with https://.
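If you want to rule out basic connectivity while you’re at it, a quick probe from the Commvault server will confirm the endpoint is actually listening on plain HTTP. The FQDN below is a made-up example:

# Confirm the ObjectEngine endpoint answers on port 80 (placeholder FQDN)
Test-NetConnection -ComputerName "oe.lab.local" -Port 80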

Now when you click on Detect, you’ll see the Bucket that you’ve configured on the OE environment (assuming you haven’t fat-fingered your credentials).

And that’s it. You can then go on and configure your storage policies and SubClient policies as required.

Random Short Take #15

Here are a few links to some random news items and other content that I recently found interesting. You might find them interesting too. Episode 15 – it could become a regular thing. Maybe every other week? Fortnightly even.

Veeam Basics – Configuring A Scale-Out Backup Repository

I’ve been doing some integration testing with Pure Storage and Veeam in the lab recently, and thought I’d write an article on configuring a scale-out backup repository (SOBR). To learn more about SOBR configurations, you can read the Veeam documentation here. This post from Rick Vanover also covers the what and the why of SOBR. In this example, I’m using a couple of FlashBlade-based NFS repositories that I’ve configured as per these instructions. Each NFS repository is mounted on a separate Linux virtual machine. I’m using a Windows-based Veeam Backup & Replication server running version 9.5 Update 4.


Process

Start by going to Backup Infrastructure -> Scale-out Repositories and click on Add Scale-out Repository.

Give it a name, maybe something snappy like “Scale-out Backup Repository 1”?

Click on Add to add the backup repositories.

When you click on Add, you’ll have the option to select the backup repositories you want to use. You can select them all, but for the purpose of this exercise, we won’t.

In this example, Backup Repository 1 and 2 are the NFS locations I configured previously. Select those two and click on OK.

You’ll now see the repositories listed as Extents.

Click on Advanced to check the advanced settings are what you expect them to be. Click on OK.

Click Next to continue. You’ll see the following message.

You then choose the placement policy. It’s strongly recommended that you stick with Data locality as the placement policy.

You can also pick object storage to use as a Capacity Tier.

You’ll also have an option to configure the age of the files to be moved, and when they can be moved. And you might want to encrypt the data uploaded to your object storage environment, depending on where that object storage lives.

Once you’re happy, click on Apply. You’ll be presented with a summary of the configuration (and hopefully there won’t be any errors).
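If you’d rather script the whole thing, the Veeam PowerShell snap-in can do it too. Here’s a minimal sketch using the repository names from this example — I’ve left the capacity tier out, as it has its own set of parameters (check the cmdlet reference for those):

# Load the Veeam Backup & Replication 9.5 snap-in
Add-PSSnapin VeeamPSSnapIn

# Grab the two NFS-backed repositories configured earlier
$extents = Get-VBRBackupRepository -Name "Backup Repository 1", "Backup Repository 2"

# Create the SOBR with the recommended Data locality placement policy
Add-VBRScaleOutBackupRepository -Name "Scale-out Backup Repository 1" -Extent $extents -PolicyType DataLocality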


Thoughts

The SOBR feature, in my opinion, is pretty cool. I particularly like the ability to put extents in maintenance mode. And the option to use object storage as a capacity tier is a very useful feature. You get some granular control in terms of where you put your backup data, and what kind of performance you can throw at the environment. And as you can see, it’s not overly difficult to configure the environment. There are a few things to keep in mind though. Make sure your extents are stored on resilient hardware. If you keep your backup sets together with the data locality option, you’ll be a sad panda if that extent goes bye-bye. And the same goes for the performance option. You’ll also need the Enterprise or Enterprise Plus edition of Veeam Backup & Replication for this feature to work. And you can’t use this feature for these types of jobs:

  • Configuration backup job;
  • Replication jobs (including replica seeding);
  • VM copy jobs; and
  • Veeam Agent backup jobs created by Veeam Agent for Microsoft Windows 1.5 or earlier and Veeam Agent for Linux 1.0 Update 1 or earlier.

There are any number of reasons why a scale-out backup repository can be a handy feature to use in your data protection environment. I’ve had the misfortune in the past of working with products that were difficult to manage from a data mobility perspective. Too many times I’ve been stuck going through all kinds of mental gymnastics working out how to migrate data sets from one storage platform to the next. With this it’s a simple matter of a few clicks and you’re on your way with a new bucket. The tiering to object feature is also useful, particularly if you need to keep backup sets around for compliance reasons. There’s no need to spend money on these living on performance disk if you can comfortably have them sitting on capacity storage after a period of time. And if you can control this movement through a policy-driven approach, then that’s even better. If you’re new to Veeam, it’s worth checking out a feature like this, particularly if you’re struggling with media migration challenges in your current environment. And if you’re an existing Enterprise or Enterprise Plus customer, this might be something you can take advantage of.

Using A Pure Storage FlashBlade As A Veeam Repository

I’ve been doing some testing in the lab recently. The focus of this testing has been primarily on Pure Storage’s ObjectEngine and its associated infrastructure. As part of that, I’ve been doing various things with Veeam Backup & Replication 9.5 Update 4, including setting up a FlashBlade NFS repository. I’ve documented the process here. One thing I thought worth noting separately was the firewall requirements. For my Linux Mount Server, I used a CentOS 7 VM, configured with 8 vCPUs and 16GB of RAM. I know, I normally use Debian, but for some reason (that I didn’t have time to investigate) it kept dying every time I kicked off a backup job.

In any case, I set everything up as per Pure’s instructions, but kept getting timeout errors on the job. The error I got was “5/17/2019 10:03:47 AM :: Processing HOST-01 Error: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond NFSMOUNTHOST:2500”. It felt like it was probably a firewall issue of some sort. I tried to make an exception on the Windows VM hosting the Veeam Backup server, but that didn’t help. The problem was with the Linux VM’s firewall. I used the instructions I found here to add in some custom rules. According to the Veeam documentation, Backup Repository access uses TCP ports 2500 – 5000. Your SecOps people will no doubt have a conniption, but here’s how to open those ports on CentOS.
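Before digging into firewalld, you can confirm the symptom from the Windows side. A quick PowerShell check from the Veeam server against one of the ports in the error message will show whether anything is answering:

# From the Veeam backup server: does the Linux mount server answer on the data mover port?
Test-NetConnection -ComputerName "NFSMOUNTHOST" -Port 2500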

Firstly, is the firewall running?

[danf@nfsmounthost ~]$ sudo firewall-cmd --state
[sudo] password for danf:
running

Yes it is. So let’s stop it to see if this line of troubleshooting is worth pursuing.

[danf@nfsmounthost ~]$ sudo systemctl stop firewalld

The backup job worked after that. Okay, so let’s start it up again and open up some ports to test.

[danf@nfsmounthost ~]$ sudo systemctl start firewalld
[danf@nfsmounthost ~]$ sudo firewall-cmd --add-port=2500-5000/tcp
success

That worked, so I wanted to make it a more permanent arrangement.

[danf@nfsmounthost ~]$ sudo firewall-cmd --permanent --add-port=2500-5000/tcp
success
[danf@nfsmounthost ~]$ sudo firewall-cmd --permanent --list-ports
2500-5000/tcp

Remember, it’s never the storage. It’s always the firewall. Also, keep in mind that this article is about the how. I’m not offering my opinion about whether it’s really a good idea to configure your host-based firewalls with more holes than Swiss cheese. Or whatever things have lots of holes in them.

Random Short Take #14

Here are a few links to some random news items and other content that I found interesting. You might find them interesting too. Episode 14 – giddy-up!

Brisbane VMUG – May 2019


The May 2019 edition of the Brisbane VMUG meeting will be held on Tuesday 28th May at Fishburners from 4pm – 6pm. It’s sponsored by Cohesity and promises to be a great afternoon.

Here’s the agenda:

  • VMUG Intro
  • Cohesity Presentation: Changing Data Protection from Nightmares to Sweet Dreams
  • vCommunity Presentation – Introduction to Hyper-converged Infrastructure
  • Q&A
  • Light refreshments.

Cohesity have gone to great lengths to make sure this will be a fun and informative session and I’m really looking forward to hearing about how they can make recovery simple. You can find out more information and register for the event here. I hope to see you there. Also, if you’re interested in sponsoring one of these events, please get in touch with me and I can help make it happen.

Dell EMC Announces PowerProtect Software (And Hardware)

Disclaimer: I recently attended Dell Technologies World 2019.  My flights, accommodation and conference pass were paid for by Dell Technologies via the Media, Analysts and Influencers program. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Last week at Dell Technologies World there were a number of announcements made regarding Data Protection. I thought I’d cover them here briefly. Hopefully I’ll have the chance to dive a little deeper into the technology in the next few weeks.


PowerProtect Software

The new PowerProtect software is billed as Dell EMC’s “Next Generation Data Management software platform” and provides “data protection, replication and reuse, as well as SaaS-based management and self-service capabilities that give individual data owners the autonomy to control backup and recovery operations”. It currently offers support for:

  • Oracle;
  • Microsoft SQL;
  • VMware;
  • Windows Filesystems; and
  • Linux Filesystems.

More workload support is planned to arrive in the next little while. There are some nice features included, such as automated discovery and on-boarding of databases, VMs and Data Domain protection storage. There’s also support for tiering protection data to public cloud environments, and support for SaaS-based management is a nice feature too. You can view the data sheet here.


PowerProtect X400

The PowerProtect X400 is being positioned by Dell EMC as a “multi-dimensional” appliance, with support for both scale out and scale up expansion.

There are three “bits” to the X400 story. There’s the X400 cube, which is the brains of the operation. You then scale it out using either X400F (All-Flash) or X400H (Hybrid) cubes. The All-Flash version can be configured from 64 – 448TB of capacity, delivering up to 22.4PB of logical capacity. The Hybrid version runs from 64 – 384TB of capacity, and can deliver up to 19.2PB of logical capacity. The logical capacity calculation is based on a “10x – 50x deduplication ratio”, so the headline figures assume the top of that range (448TB × 50 = 22.4PB, and 384TB × 50 = 19.2PB). You can access the spec sheet here, and the data sheet can be found here.

Scale Up and Out?

So what do Dell EMC mean by “multi-dimensional” then? It’s a neat marketing term that means you can scale up and out as required.

  • Scale-up with grow-in-place capacity expansion (16TB); and
  • Scale-out compute and capacity with additional X400F or X400H cubes (starting at 64TB each).

This way you can “[b]enefit from the linear scale-out of performance, compute, network and capacity”.


IDPA

Dell EMC also announced that the Integrated Data Protection Appliance (IDPA) was being made available in an 8-24TB version, providing a lower capacity option to service smaller environments.


Thoughts and Further Reading

Everyone I spoke to at Dell Technologies World was excited about the PowerProtect announcement. Sure, it’s their job to be excited about this stuff, but there’s a lot here to be excited about, particularly if you’re an existing Dell EMC data protection customer. The other “next-generation” data protection vendors seem to have given the 800 pound gorilla the wakeup call it needed, and the PowerProtect offering is a step in the right direction. The scalability approach used with the X400 appliance is potentially a bit different to what’s available in the market today, but it seems to make sense in terms of reducing the footprint of the hardware to a manageable amount. There were some high numbers being touted in terms of performance, but I won’t be repeating any of those until I’ve seen them for myself in the wild. The all-flash option seems a little strange at first, as flash isn’t normally associated with data protection, but I think it’s a competitive nod to some of the other vendors offering top of rack, all-flash data protection.

So what if you’re an existing Data Domain / NetWorker / Avamar customer? There’s no need to panic. You’ll see continued development of these products for some time to come. I imagine it’s not a simple thing for an established company such as Dell EMC to introduce a new product that competes in places with something it already sells to customers. But I think it’s the right thing for them to do, as there’s been significant pressure from other vendors when it comes to telling a tale of simplified data protection leveraging software-defined solutions. Data protection requirements have seen significant change over the last few years, and this new architecture is a solid response to those changes.

The supported workloads are basic for the moment, but a cursory glance through most enterprise environments would be enough to reassure you that they have the most common stuff covered. I understand that existing DPS customers will also get access to PowerProtect to take it for a spin. There’s no word yet on what the migration path for existing customers looks like, but I have no doubt that people have already thought long and hard about what that would look like and are working to make sure the process is field ready (and hopefully straightforward). Dell EMC PowerProtect Software platform and PowerProtect X400 appliance will be generally available in July 2019.

For another perspective on the announcement, check out Preston’s post here.