Random Short Take #16

Here are a few links to some random news items and other content that I recently found interesting. You might find them interesting too. Episode 16 – please enjoy these semi-irregular updates.

  • Scale Computing has been doing a bit in the healthcare sector lately – you can read news about that here.
  • This was a nice roundup of the news from Apple’s recent WWDC from Six Colors. Hat tip to Stephen Foskett for the link. Speaking of WWDC news, you may have been wondering what happened to all of your purchased content with the imminent demise of iTunes on macOS. It’s still a little fuzzy, but this article attempts to shed some light on things. Spoiler: you should be okay (for the moment).
  • There’s a great post on the Dropbox Tech Blog from James Cowling discussing the mission versus the system.
  • The more things change, the more they remain the same. For years I had a Windows PC running Media Center and recording TV, using IceTV as the XMLTV-based program guide provider. I then started to mess about with some HDHomeRun devices, and when the PC died I went back to a traditional DVR arrangement. Plex now has DVR capabilities and has been doing a reasonable job with guide data (and recording in general), but they’ve decided it’s all a bit too hard to curate guides and want users (at least in Australia) to use XMLTV-based guides instead. So I’m back to using IceTV with Plex. They’re offering a free trial at the moment for Plex users, and setup instructions are here. No, I don’t get paid if you click on the links.
  • Speaking of axe-throwing, the Cohesity team in Queensland is organising a social event for Friday 21st June from 2 – 4 pm at Maniax Axe Throwing in Newstead. You can get in contact with Casey if you’d like to register.
  • VeeamON Forum Australia is coming up soon. It will be held at the Hyatt Regency Hotel in Sydney on July 24th and should be a great event. You can find out more information and register for it here. The Vanguards are also planning something cool, so hopefully we’ll see you there.
  • Speaking of Veeam, Anthony Spiteri recently published his longest title in the Virtualization is Life! catalogue – Orchestration Of NSX By Terraform For Cloud Connect Replication With vCloud Director. It’s a great article, and worth checking out.
  • There’s a lot of talk and slideware devoted to digital transformation, and a lot of it is rubbish. But I found this article from Chin-Fah to be particularly insightful.

Random Short Take #15

Here are a few links to some random news items and other content that I recently found interesting. You might find them interesting too. Episode 15 – it could become a regular thing. Maybe every other week? Fortnightly even.

Veeam Basics – Configuring A Scale-Out Backup Repository

I’ve been doing some integration testing with Pure Storage and Veeam in the lab recently, and thought I’d write an article on configuring a scale-out backup repository (SOBR). To learn more about SOBR configurations, you can read the Veeam documentation here. This post from Rick Vanover also covers the what and the why of SOBR. In this example, I’m using a couple of FlashBlade-based NFS repositories that I’ve configured as per these instructions. Each NFS repository is mounted on a separate Linux virtual machine. I’m using a Windows-based Veeam Backup & Replication server running version 9.5 Update 4.
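As a rough sketch of the mount server side of this setup, mounting a FlashBlade NFS export on each Linux VM looks something like the following. Note the data VIP, export name, and mount point here are placeholder values for illustration, not details from my lab:

```shell
# Create a mount point for the FlashBlade NFS export (path is an example)
sudo mkdir -p /mnt/veeamrepo01

# Mount the export from the FlashBlade data VIP (10.0.0.50 and
# /veeamrepo01 are placeholders - substitute your own VIP and export name)
sudo mount -t nfs 10.0.0.50:/veeamrepo01 /mnt/veeamrepo01

# Make the mount persistent across reboots
echo "10.0.0.50:/veeamrepo01 /mnt/veeamrepo01 nfs defaults 0 0" | sudo tee -a /etc/fstab
```

The mounted path is then what you point Veeam at when you add the Linux server as a backup repository.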

 

Process

Start by going to Backup Infrastructure -> Scale-out Repositories and click on Add Scale-out Repository.

Give it a name, maybe something snappy like “Scale-out Backup Repository 1”?

Click on Add to add the backup repositories.

When you click on Add, you’ll have the option to select the backup repositories you want to use. You can select them all, but for the purpose of this exercise, we won’t.

In this example, Backup Repository 1 and 2 are the NFS locations I configured previously. Select those two and click on OK.

You’ll now see the repositories listed as Extents.

Click on Advanced to check the advanced settings are what you expect them to be. Click on OK.

Click Next to continue. You’ll see the following message.

You then choose the placement policy. It’s strongly recommended that you stick with Data locality as the placement policy.

You can also pick object storage to use as a Capacity Tier.

You’ll also have an option to configure the age of the files to be moved, and when they can be moved. And you might want to encrypt the data uploaded to your object storage environment, depending on where that object storage lives.

Once you’re happy, click on Apply. You’ll be presented with a summary of the configuration (and hopefully there won’t be any errors).

 

Thoughts

The SOBR feature, in my opinion, is pretty cool. I particularly like the ability to put extents in maintenance mode. And the option to use object storage as a capacity tier is a very useful feature. You get some granular control in terms of where you put your backup data, and what kind of performance you can throw at the environment. And as you can see, it’s not overly difficult to configure the environment. There are a few things to keep in mind though. Make sure your extents are stored on resilient hardware. If you keep your backup sets together with the data locality option, you’ll be a sad panda if that extent goes bye bye. And the same goes for the performance option. You’ll also need Enterprise or Enterprise Plus editions of Veeam Backup & Replication for this feature to work. And you can’t use this feature for these types of jobs:

  • Configuration backup job;
  • Replication jobs (including replica seeding);
  • VM copy jobs; and
  • Veeam Agent backup jobs created by Veeam Agent for Microsoft Windows 1.5 or earlier and Veeam Agent for Linux 1.0 Update 1 or earlier.

There are any number of reasons why a scale-out backup repository can be a handy feature to use in your data protection environment. I’ve had the misfortune in the past of working with products that were difficult to manage from a data mobility perspective. Too many times I’ve been stuck going through all kinds of mental gymnastics working out how to migrate data sets from one storage platform to the next. With this, it’s a simple matter of a few clicks and you’re on your way with a new bucket. The tiering to object feature is also useful, particularly if you need to keep backup sets around for compliance reasons. There’s no need to spend money keeping these on performance disk if you can comfortably have them sitting on capacity storage after a period of time. And if you can control this movement through a policy-driven approach, then that’s even better. If you’re new to Veeam, it’s worth checking out a feature like this, particularly if you’re struggling with media migration challenges in your current environment. And if you’re an existing Enterprise or Enterprise Plus customer, this might be something you can take advantage of.

Using A Pure Storage FlashBlade As A Veeam Repository

I’ve been doing some testing in the lab recently. The focus of this testing has been primarily on Pure Storage’s ObjectEngine and its associated infrastructure. As part of that, I’ve been doing various things with Veeam Backup & Replication 9.5 Update 4, including setting up a FlashBlade NFS repository. I’ve documented the process here. One thing that I thought worthy of noting separately was the firewall requirements. For my Linux Mount Server, I used a CentOS 7 VM, configured with 8 vCPUs and 16GB of RAM. I know, I normally use Debian, but for some reason (that I didn’t have time to investigate) it kept dying every time I kicked off a backup job.

In any case, I set everything up as per Pure’s instructions, but kept getting timeout errors on the job. The error I got was “5/17/2019 10:03:47 AM :: Processing HOST-01 Error: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond NFSMOUNTHOST:2500”. It felt like it was probably a firewall issue of some sort. I tried to make an exception on the Windows VM hosting the Veeam Backup server, but that didn’t help. The problem was with the Linux VM’s firewall. I used the instructions I found here to add in some custom rules. According to the Veeam documentation, Backup Repository access uses TCP ports 2500 – 5000. Your SecOps people will no doubt have a conniption, but here’s how to open those ports on CentOS.

Firstly, is the firewall running?

[danf@nfsmounthost ~]$ sudo firewall-cmd --state
[sudo] password for danf:
running

Yes it is. So let’s stop it to see if this line of troubleshooting is worth pursuing.

[danf@nfsmounthost ~]$ sudo systemctl stop firewalld

The backup job worked after that. Okay, so let’s start it up again and open up some ports to test.

[danf@nfsmounthost ~]$ sudo systemctl start firewalld
[danf@nfsmounthost ~]$ sudo firewall-cmd --add-port=2500-5000/tcp
success

That worked, so I wanted to make it a more permanent arrangement.

[danf@nfsmounthost ~]$ sudo firewall-cmd --permanent --add-port=2500-5000/tcp
success
[danf@nfsmounthost ~]$ sudo firewall-cmd --permanent --list-ports
2500-5000/tcp
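With the permanent rule in place, you can sanity-check reachability before re-running the backup job. This is a rough sketch rather than a definitive procedure; it assumes nc (from the nmap-ncat package on CentOS 7) is available on the host you’re testing from:

```shell
# Reload firewalld so the permanent rules are applied to the running config
sudo firewall-cmd --reload

# From another host, check that a port in the 2500-5000 range is reachable
# on the mount server. Note that Veeam only listens on these ports while a
# job is running, so "connection refused" (rather than a timeout) is enough
# to show the firewall is no longer silently dropping packets.
nc -zv nfsmounthost 2500
```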

Remember, it’s never the storage. It’s always the firewall. Also, keep in mind this article is about the how. I’m not offering my opinion about whether it’s really a good idea to configure your host-based firewalls with more holes than Swiss cheese. Or whatever things have lots of holes in them.

Random Short Take #14

Here are a few links to some random news items and other content that I found interesting. You might find them interesting too. Episode 14 – giddy-up!

Random Short Take #13

Here are a few links to some random news items and other content that I found interesting. You might find them interesting too. Let’s dive into lucky number 13.

Veeam Vanguard 2019

I was very pleased to get an email from Rick Vanover yesterday letting me know I was accepted as part of the Veeam Vanguard Program for 2019. This is my first time as part of this program, but I’m really looking forward to participating in it. Big shout out to Dilupa Ranatunga and Anthony Spiteri for nominating me in the first place, and for Rick and the team for having me as part of the program. Also, (and I’m getting a bit parochial here) special mention of the three other Queenslanders in the program (Rhys Hammond, Nathan Oldfield, and Chris Gecks). There’s going to be a lot of cool stuff happening with Veeam and in data protection generally this year and I can’t wait to get started. More soon.

Random Short Take #8

Here are a few links to some news items and other content that might be useful. Maybe.

Hyper-Veeam

Disclaimer: I recently attended VeeamON Forum Sydney 2018. My flights and accommodation were paid for by Veeam. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event. Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

I recently had the opportunity to attend VeeamON Forum in Sydney courtesy of Veeam. I was lucky enough to see Dave Russell‘s keynote speech, and also fortunate to spend some time chatting with him in the afternoon. Dave was great to talk to and I thought I’d share some of the key points here.

 

Hyper All of the Things

If you scroll down Veeam’s website you’ll see mention of a number of different “hyper” things, including hyper-availability. Veeam are keen to position themselves as an availability company, with their core focus on making the data you need recoverable, at the time you need it.

Hyper-critical

Russell mentioned that data has become “hyper-critical” to business, with the likes of:

  • GDPR compliance;
  • PII data retention;
  • PCI compliance requirements;
  • Customer data; and
  • Financial records, etc.

Hyper-growth

Russell also spoke about the hyper-growth of data, with all kinds of data (including structured, unstructured, application, and Internet of Things data) growing at a rapid clip.

Hyper-sprawl

This explosive growth of data has also led to the “hyper-sprawl” of data, with your data now potentially living in any or all of the following locations:

  • SaaS-based solutions
  • Private cloud
  • Public cloud

 

Five Stages of Intelligent Data Management

Russell broke down Intelligent Data Management (IDM) into five stages.

Backup

A key part of any data management strategy is the ability to backup all workloads and ensure they are always recoverable in the event of outages, attack, loss or theft.

Aggregation

The ability to cope with data sprawl, as well as growth, means you need to ensure protection and access to data across multiple clouds to drive digital services and ensure continuous business operations.

Visibility

It’s not just about protecting vast chunks of data in multiple places though. You also need to look at the requirement to “improve management of data across multi-clouds with clear, unified visibility and control into usage, performance issues and operations”.

Orchestration

Orchestration, ideally, can then be used to “[s]eamlessly move data to the best location across multi-clouds to ensure business continuity, compliance, security and optimal use of resources for business operations”.

Automation

The final piece of the puzzle is automation. According to Veeam, you can get to a point where the “[d]ata becomes self-managing by learning to backup, migrate to ideal locations based on business needs, secure itself during anomalous activity and recover instantaneously”.

 

Thoughts

Data growth is not a new phenomenon by any stretch, and Veeam obviously aren’t the first to notice that protecting all this stuff can be hard. Sprawl is also becoming a real problem in all types of environments. It’s not just about knowing you have some unstructured data that can impact workflows in a key application. It’s about knowing which cloud platform that data might reside in. If you don’t know where it is, it makes it a lot harder to protect, and your risk profile increases as a result. It’s not just the vendors banging on about data growth through IoT either; it’s a very real phenomenon that is creating all kinds of headaches for CxOs and their operations teams. Much like the push into public cloud by “shadow IT” teams, IoT solutions are popping up in all kinds of unexpected places in the enterprise and making it harder to understand exactly where the important data is being kept and how it’s protected.

Veeam are talking a very good game around intelligent data management. I remember a similar approach being adopted by a three-letter storage company about a decade ago. They lost their way a little under the weight of acquisitions, but the foundation principles seem to still hold water today. Dave Russell obviously saw quite a bit at Gartner in his time there prior to Veeam, so it’s no real surprise that he’s pushing them in this direction.

Backup is just the beginning of the data management problem. There’s a lot else that needs to be done in order to get to the “intelligent” part of the equation. My opinion remains that a lot of enterprises are still some ways away from being there. I also really like Veeam’s focus on moving from policy-based through to a behaviour-based approach to data management.

I’ve been aware of Veeam for a number of years now, and have enjoyed watching them grow as a company. They’re working hard to make their way in the enterprise now, but still have a lot to offer the smaller environments. They tell me they’re committed to remaining a software-only solution, which gives them a certain amount of flexibility in terms of where they focus their R & D efforts. There’s a great cloud story there, and the bread and butter capabilities continue to evolve. I’m looking forward to seeing what they have coming over the next 12 months. It’s a relatively crowded market now, and it’s only going to get more competitive. I’ll be doing a few more articles in the next month or two focusing on some of Veeam’s key products, so stay tuned.

Random Short Take #6

Welcome to the sixth edition of the Random Short Take. Here are a few links to a few things that I think might be useful, to someone.