Random Short Take #93

Welcome to Random Short Take #93. If it’s old news, can it still be called news? Maybe olds. Let’s get random.

Disaster Recovery – Do It Yourself or As-a-Service?

I don’t normally do articles off the back of someone else’s podcast episode, but I was listening to W. Curtis Preston and Prasanna Malaiyandi discuss “To DRaaS or Not to DRaaS” on The Backup Wrap-up a little while ago, and thought it was worth diving into a bit on the old weblog. In my day job I talk to many organisations about their Disaster Recovery (DR) requirements. I’ve been doing this a while, and I’ve seen the conversation change somewhat over the last few years. This article will barely skim the surface on what you need to think about, so I recommend you listen to Curtis and Prasanna’s podcast, and while you’re at it, buy Curtis’s book. And, you know, get to know your environment. And what your environment needs.

 

The Basics

As Curtis says in the podcast, “[b]ackup is hard, recovery is harder, and disaster recovery is an advanced form of recovery”. DR is rarely just a matter of having another copy of your data safely stored away from wherever your primary workloads are hosted. Sure, that’s a good start, but it’s not just about getting that data back, or the servers – it’s the connectivity as well. So you have a bunch of servers running somewhere, and you’ve managed to get some / all / most of your data back. Now what? How do your users connect to that data? How do they authenticate? How long will it take to reconfigure your routing? What if your gateway doesn’t exist any more? A lot of customers think of DR in much the same way they treat their backup and recovery approach. Sure, it’s super important that you understand how you can meet your recovery point objectives and your recovery time objectives, but there are some other things you’ll need to worry about too.
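
If you want to make the RPO / RTO conversation a little more concrete, here’s a rough back-of-the-envelope sketch in Python. All of the numbers (replication interval, dataset size, restore throughput, reconfiguration overhead) are hypothetical – plug in your own and see whether the targets you’ve agreed to are actually achievable.

# Rough, hypothetical sanity check: can a given replication schedule and
# restore pipeline actually meet the RPO / RTO targets the business has asked for?
# All numbers here are made up for illustration.

def worst_case_rpo_hours(replication_interval_hours: float) -> float:
    # If replicas are taken every N hours, the worst-case data loss window
    # is roughly the full interval (the disaster hits just before the next run).
    return replication_interval_hours

def estimated_rto_hours(dataset_tb: float, restore_throughput_tb_per_hour: float,
                        reconfig_overhead_hours: float) -> float:
    # RTO isn't just the data copy: add time for networking, DNS, authentication,
    # and application validation before users can actually connect again.
    return dataset_tb / restore_throughput_tb_per_hour + reconfig_overhead_hours

if __name__ == "__main__":
    rpo_target, rto_target = 4.0, 8.0   # hours, agreed with the business
    rpo = worst_case_rpo_hours(replication_interval_hours=6.0)
    rto = estimated_rto_hours(dataset_tb=40.0, restore_throughput_tb_per_hour=2.0,
                              reconfig_overhead_hours=3.0)
    print(f"Worst-case RPO: {rpo:.1f}h (target {rpo_target}h) -> {'OK' if rpo <= rpo_target else 'MISS'}")
    print(f"Estimated RTO:  {rto:.1f}h (target {rto_target}h) -> {'OK' if rto <= rto_target else 'MISS'}")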

 

What’s A Disaster?

Natural

What kind of disaster are you looking to recover from? In the olden days (let’s say about 15 years ago), I was mainly talking to clients about natural disasters. What happens when the Brisbane River has a one-in-a-hundred-year flood for the second time in five years? Are your data centres above river level? What about your generators? What if there’s a fire? What if you live somewhere near the ocean and there’s a big old tsunami heading your way? My friends across the ditch know all about how not to build data centres on fault lines.

Accidental

Things have evolved to cover operational considerations too. I like to joke with my customers about the backhoe operator cutting through your data centre’s fibre connection, but this is usually the reason why data centres have multiple providers. And there’s always human error to contend with. As data gets more concentrated, and organisations look to do things more efficiently (i.e. some of them are going cheap on infrastructure investments), messing up a single component can have a bigger impact than it would have previously. You can architect around some of this, for sure, but a lot of businesses are not investing in those areas and are just leaving it up to luck.

Bad Hacker, Stop It

More recently, however, the conversation I’ve been having with folks across a variety of industries has been more about some kind of human malfeasance. Whether it’s bad actors (in hoodies and darkened rooms, no doubt) looking to infect your environment with malware, or someone not paying attention and unleashing ransomware, we’re seeing more and more of this kind of activity having a real impact on organisations of all shapes and sizes. So not only do you need to be able to get your data back, you need to know that it’s clean and not going to re-infect your production environment.

A Real Disaster

And how pragmatic are you going to be when considering your recovery scenarios? When I first started in the industry, the company I worked for talked about their data centres being outside the blast radius (assuming someone wanted to drop a bomb in the middle of the Brisbane CBD). That’s great as far as it goes, but are all of your operational staff going to be around to start on the recovery activities? Are any of them going to be interested in trying to recover your SQL environment if they have family and friends who’ve been impacted by some kind of earth-shattering disaster? Probably not so much. In Australia many organisations have started looking at having some workloads in Sydney, and recovery in Melbourne, or Brisbane for that matter. All cities on the eastern seaboard. Is that enough? What if Oz gets wiped out? Should you have a copy of the data stored in Singapore? Is data sovereignty a problem for you? Will anyone care if things are going that badly for the country?

 

Can You Do It Better?

It’s not just about being able to recover your workloads either, it’s about how you can reduce the complexity and cost of that recovery. This is key to both the success of the recovery, and the likelihood of it not getting hit with relentless budget cuts. How fast do you want to be able to recover? How much data do you want to recover? To what level? I often talk to people about shutting down their test and development environments during a DR event. Unless your whole business is built on software development, it doesn’t always make sense to have your user acceptance testing environment up and running in your DR environment.

The cool thing about public cloud is that there’s a lot more flexibility when it comes to making decisions about how much you want to run and where you want to run it. The cloud isn’t everything though. As Prasanna mentions, if the cloud can’t run all of your workloads (think mainframe and non-x86, for example), it’s pointless to try and use it as a recovery platform. But to misquote Jason Statham’s character Turkish in Snatch – “What do I know about DR?” And if you don’t really know about DR, you should think about getting someone who does involved.

 

What If You Can’t Do It?

And what if you want to DRaaS your SaaS? Chances are you can’t do much more than rely on your existing backup and recovery processes for your SaaS platforms. There’s not much chance that you can take that Salesforce data and put it somewhere else when Salesforce has a bad day. What you can do, however, is be sure that you back that Salesforce data up, so that when someone accidentally lets the bad guys in you’ve got data you can recover from.

 

Thoughts

The optimal way to approach DR is an interesting problem to try and solve, and like most things in technology, there’s no real one-size-fits-all approach that you can take. I haven’t even really touched on the pros and cons of as-a-Service offerings versus rolling your own either. Despite my love of DIY (thanks mainly to punk rock, not home improvement), I’m generally more of a fan of getting the experts to do it. They’re usually going to have more experience, and the scale, to do what needs to be done efficiently (if not always economically). That said, it’s never as simple as just offloading the responsibility to a third party. You might have a service provider delivering a service for you, but ultimately you’ll always be accountable for your DR outcomes, even if you’re not directly responsible for the mechanics of it.

VMware Cloud on AWS – TMCHAM – Part 13 – Delete the SDDC

Following on from my article on host removal, in this edition of Things My Customers Have Asked Me (TMCHAM), I’m going to cover SDDC removal on the VMware-managed VMware Cloud on AWS platform. Don’t worry, I haven’t lost my mind in a post-acquisition world. Rather, this is some of the info you’ll find useful if you’ve been running a trial or a proof of concept (as opposed to a pilot) deployment of VMware Cloud Disaster Recovery (VCDR) and / or VMware Cloud on AWS and want to clean some stuff up when you’re all done.

 

Process

Firstly, if you’re using VCDR and want to deactivate the deployment, the steps to perform are outlined here, and I’ve copied the main bits from that page below.

  1. Remove all DRaaS Connectors from all protected sites. See Remove a DRaaS Connector from a Protected Site.
  2. Delete all recovery SDDCs. See Delete a Recovery SDDC.
  3. Deactivate the recovery region from the Global DR Console. (Do this step last.) See Deactivate a Recovery Region. Usage charges for VMware Cloud DR are not stopped until this step is completed.

Funnily enough, as I was writing this, someone zapped our lab for reasons. So this is what a Region deactivation looks like in the VCDR UI.

Note that it’s important you perform these steps in that order, or you’ll have more cleanup work to do to get everything looking nice and tidy. I have witnessed firsthand someone doing it the other way and it’s not pretty. Note also that if your Recovery SDDC had services such as HCX connected, you should hold off deleting the Recovery SDDC until you’ve cleaned that bit up.

Secondly, if you have other workloads deployed in a VMware Cloud on AWS SDDC and want to remove a PoC SDDC, there are a few steps that you will need to follow.

If you’ve been using HCX to test migrations or network extension, you’ll need to follow these steps to remove it. Note that this should be initiated from the source side, and your HCX deployment should be in good order before you start (site pairings functioning, etc). You might also wish to remove a vCenter Cloud Gateway, and you can find information on that process here.
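
With HCX and the vCenter Cloud Gateway out of the way, the SDDC itself can be removed from the Cloud Services Console, or, if you prefer to script it, via the VMware Cloud on AWS REST API. Here’s a rough Python sketch of the API approach – the org ID, SDDC ID, and API token are placeholders, and you should double-check the endpoints against the current API reference before relying on it.

# Rough sketch: delete a (PoC) SDDC via the VMware Cloud on AWS REST API.
# ORG_ID, SDDC_ID and the CSP API token below are placeholders.
import requests

CSP_AUTH_URL = "https://console.cloud.vmware.com/csp/gateway/am/api/auth/api-tokens/authorize"
VMC_API_URL = "https://vmc.vmware.com/vmc/api"

def get_access_token(refresh_token: str) -> str:
    # Exchange a CSP API (refresh) token for a short-lived access token.
    resp = requests.post(CSP_AUTH_URL, params={"refresh_token": refresh_token})
    resp.raise_for_status()
    return resp.json()["access_token"]

def delete_sddc(org_id: str, sddc_id: str, access_token: str) -> str:
    # DELETE /orgs/{org}/sddcs/{sddc} kicks off an asynchronous delete task.
    resp = requests.delete(
        f"{VMC_API_URL}/orgs/{org_id}/sddcs/{sddc_id}",
        headers={"csp-auth-token": access_token},
    )
    resp.raise_for_status()
    return resp.json()["id"]  # task ID you can poll for completion

if __name__ == "__main__":
    token = get_access_token("YOUR-CSP-API-TOKEN")
    task_id = delete_sddc("YOUR-ORG-ID", "YOUR-SDDC-ID", token)
    print(f"SDDC delete task started: {task_id}")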

Finally, there are some AWS activities that you might want to undertake to clean everything up. These include:

  • Removing VIFs attached to your AWS VPC.
  • Deleting the VPC (this will likely be required if your organisation has a policy about how PoC deployments are managed).
  • Tidying up any on-premises routing and firewall rules that may have been put in place for the PoC activity.
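
If you’d like to script the AWS side of the cleanup, something like the following boto3 sketch can help you see what’s left before you start deleting things. The VPC ID is a placeholder, and the delete calls are commented out so you can look before you leap.

# Rough boto3 sketch: check for leftover VIFs and then remove the PoC VPC.
# The VPC ID is a placeholder; adjust region/profile to suit your account.
import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-2")
dx = boto3.client("directconnect", region_name="ap-southeast-2")

# List any Direct Connect virtual interfaces that are still hanging around.
for vif in dx.describe_virtual_interfaces()["virtualInterfaces"]:
    print(f"VIF {vif['virtualInterfaceId']} state={vif['virtualInterfaceState']}")
    # Uncomment once you're sure nothing else depends on it:
    # dx.delete_virtual_interface(virtualInterfaceId=vif["virtualInterfaceId"])

# Delete the connected VPC once subnets, endpoints and ENIs have been cleaned up.
poc_vpc_id = "vpc-0123456789abcdef0"  # placeholder
# ec2.delete_vpc(VpcId=poc_vpc_id)
print(f"Would delete VPC {poc_vpc_id} once its dependencies are removed")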

And that’s it. There’s not a lot to it, but tidying everything up after a PoC will ensure that you avoid any unexpected costs popping up in the future.

Random Short Take #90

Welcome to Random Short Take #90. I remain somewhat preoccupied with the day job and acquisitions. It’s definitely Summer here now. Let’s get random.

  • You do something for long enough, and invariably you assume that everyone else knows how to do that thing too. That’s why this article from Danny on data protection basics is so useful.
  • Speaking of data protection, Preston has a book on recovery for busy people coming soon. Read more about it here.
  • Still using a PDP-11 at home? Here’s a simple stack buffer overflow attack you can try.
  • I hate it when the machines shout at me, and so do a lot of other people it seems. JB has a nice write-up on the failure of self-service in the modern retail environment. The sooner we throw those things in the sea, the better.
  • In press release news, Hammerspace picked up an award at SC2023. One to keep an eye on.
  • In news from the day job, VMware Cloud on AWS SDDC Version 1.24 was just made generally available. You can read more about some of the new features (like Express Storage Architecture support – yay!) here. I hope to cover off some of that in more detail soon.
  • You like newsletters? Sign up for Justin’s weekly newsletter here. He does thinky stuff, and funny stuff too. It’s Justin, why would you not?
  • Speaking of newsletters, Anthony’s looking to get more subscribers to his daily newsletter, The Sizzle. To that end, he’s running a “Sizzlethon”. I know, it’s a pretty cool name. If you sign up using this link you also get a 90-day free trial. And the price of an annual subscription is very reasonable. There’s only a few days left, so get amongst it and let’s help content creators to keep creating content.

VMware Cloud on AWS – Check TRIM/UNMAP

This is a really quick follow-up to one of my TMCHAM articles on TRIM/UNMAP on VMware Cloud on AWS. In short, a customer wanted to know whether TRIM/UNMAP had been enabled on one of their clusters, as they’d requested. The good news is it’s easy enough to find out. On your cluster, go to Configure. Under vSAN, you’ll see Services. Expand the Advanced Options section and you’ll see whether TRIM/UNMAP has been enabled for the cluster or not.
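
If you’d rather check it programmatically than click through the UI, the vSAN Management SDK for Python exposes the cluster configuration too. The sketch below is untested and makes a few assumptions – that the vsanapiutils helper from the SDK samples is available, and that the setting is surfaced via an unmapConfig field – so treat it as a starting point rather than gospel.

# Rough, untested sketch: query a cluster's TRIM/UNMAP setting via the
# vSAN Management SDK for Python. Assumes pyVmomi plus the vsanapiutils
# helper shipped with the vSAN SDK samples; the field name (unmapConfig.enable)
# is my assumption and may differ in your SDK version.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim
import vsanapiutils  # from the vSAN Management SDK sample bundle

context = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="user", pwd="pass", sslContext=context)

# Find the cluster object by name (placeholder name).
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "Cluster-1")

# Get the vSAN cluster config managed object and read the unmap setting.
vc_mos = vsanapiutils.GetVsanVcMos(si._stub, context=context)
vccs = vc_mos["vsan-cluster-config-system"]
config = vccs.VsanClusterGetConfig(cluster=cluster)
print("TRIM/UNMAP enabled:", getattr(config.unmapConfig, "enable", None))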

VMware Cloud Disaster Recovery – Ransomware Recovery Activation

One of the cool features of VMware Cloud Disaster Recovery (VCDR) is the Enhanced Ransomware Recovery capability. This is a quick post to talk through how to turn it on in your VCDR environment, and things you need to consider.

 

Organization Settings

The first step is to enable the ransomware services integration in your VCDR dashboard. You’ll need to be an Organisation owner to do this. Go to Settings, and click on Ransomware Recovery Services.

You’ll then have the option to select where the data analysis is performed.

You’ll also need to tick some boxes to ensure that you understand that an appliance will be deployed in each of your Recovery SDDCs, Windows VMs will get a sensor installed, and some preinstalled sensors may clash with Carbon Black.

Click on Activate and it will take a few moments. If it takes much longer than that, you’ll need to talk to someone in support.

Once the analysis integration is activated, you can then activate NSX Advanced Firewall. Page 245 of the PDF documentation covers this better than I can, but note that NSX Advanced Firewall is a chargeable service (if you don’t already have a subscription attached to your Recovery SDDC). There’s some great documentation here on what you do and don’t have access to if you allow the activation of NSX Advanced Firewall.

Like your favourite TV chef would say, here’s one I’ve prepared earlier.

Recovery Plan Configuration

Once the services integration is done, you can configure Ransomware Recovery on a per Recovery Plan basis.

Start by selecting Activate ransomware recovery. You’ll then need to acknowledge that this is a chargeable feature.

You can also choose whether you want to use integrated analysis (i.e. Carbon Black Cloud), and whether you want to manually remove other security sensors when you recover. You can also choose to use your own tools if you need to.

And that’s it from a configuration perspective. The actual recovery bit? A story for another time.

VMware Cloud Disaster Recovery – Firewall Ports

I published an article a while ago on getting started with VMware Cloud Disaster Recovery (VCDR). One thing I didn’t cover in any real depth was the connectivity requirements between on-premises and the VCDR service. VMware has worked pretty hard to ensure this is streamlined for users, but it’s still something you need to pay attention to. I was helping a client work through this process for a proof of concept recently and thought I’d cover it off more clearly here. The diagram below highlights the main components you need to look at, being:

  • The Cloud File System (frequently referred to as the SCFS)
  • The VMware Cloud DR SaaS Orchestrator (the Orchestrator); and
  • VMware Cloud DR Auto-support.

It’s important to note that the first two services are assigned IP addresses when you enable the service in the Cloud Services Console, and the Auto-support service has three public IP addresses that you need to be able to communicate with. All of this happens outbound over TCP 443. The Auto-support service is not required, but it is strongly recommended, as it makes troubleshooting issues with the service much easier, and provides VMware with an opportunity to proactively resolve cases. Network connectivity requirements are documented here.

[image courtesy of VMware]

So how do I know my firewall rules are working? The first sign that there might be a problem is that the DRaaS Connector deployment will fail to communicate with the Orchestrator at some point (usually towards the end), and you’ll see a message similar to the following: “ERROR! VMware Cloud DR authentication is not configured. Contact support.”

How can you troubleshoot the issue? Fortunately, we have a tool called the DRaaS Connector Connectivity Check CLI that you can run to check what’s not working. In this instance, we suspected an issue with outbound communication, and ran the following command on the console of the DRaaS Connector to check:

drc network test --scope cloud

This returned a status of “reachable” for the Orchestrator and Auto-support services, but the SCFS was unreachable. Some negotiations with the firewall team, and we were up and running.
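
If you can’t get at the DRaaS Connector console, or just want to pre-validate the firewall rules from the same network segment, a quick outbound TCP 443 check against the allocated endpoints does much the same job. The addresses below are placeholders – use the Orchestrator and SCFS addresses assigned to your deployment and the documented Auto-support addresses.

# Quick-and-dirty outbound TCP 443 check for the VCDR endpoints.
# The addresses below are placeholders; substitute the Orchestrator, SCFS and
# Auto-support IPs allocated / documented for your deployment.
import socket

ENDPOINTS = {
    "orchestrator": "203.0.113.10",    # placeholder
    "scfs": "203.0.113.20",            # placeholder
    "auto-support-1": "203.0.113.30",  # placeholder
}

def reachable(ip: str, port: int = 443, timeout: float = 5.0) -> bool:
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

for name, ip in ENDPOINTS.items():
    status = "reachable" if reachable(ip) else "UNREACHABLE"
    print(f"{name:15s} {ip:15s} tcp/443 {status}")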

Note, also, that VMware supports the use of proxy servers for communicating with Auto-support services, but I don’t believe we support the use of a proxy for Orchestrator and SCFS communications. If you’re worried about VCDR using up all your bandwidth, you can throttle it. Details on how to do that can be found here. We recommend a minimum of 100Mbps, but you can go as low as 20Mbps if required.

Random Short Take #88

Welcome to Random Short Take #88. This one’s been sitting in my drafts folder for a while. Let’s get random.

VMware Cloud on AWS – Melbourne Region Added

VMware recently announced that VMware Cloud on AWS is now available in the AWS Asia-Pacific (Melbourne) Region. I thought I’d share some brief thoughts here along with a video I did with my colleague Satya.

 

What?

VMware Cloud on AWS is now available to consume in three Availability Zones (apse4-az1, apse4-az2, apse4-az3) in the Melbourne Region. From a host type perspective, you have the option to deploy either I3en.metal or I4i.metal hosts. There is also support for stretched clusters and PCI-DSS compliance if required. The full list of VMware Cloud on AWS Regions and Availability Zones is here.

 

Why Is This Interesting?

Since the launch of VMware Cloud on AWS, customers have only had one choice when it comes to a Region – Sydney. This announcement gives organisations the ability to deploy architectures that can benefit from both increased availability and resiliency by leveraging multi-regional capabilities.

Availability

VMware Cloud on AWS already offers platform availability at a number of levels, including a choice of Availability Zones, Partition Placement groups, and support for stretched clusters across two Availability Zones. There’s also support for VMware High Availability, as well as support for automatically remediating failed hosts.

Resilience

In addition to the availability options customers can take advantage of, VMware Cloud on AWS also provides support for a number of resilience solutions, including VMware Cloud Disaster Recovery (VCDR) and VMware Site Recovery. Previously, customers in Australia and New Zealand were able to leverage these VMware (or third-party) solutions and deploy them across multiple Availability Zones. Invariably, it would look like the below diagram, with workloads hosted in one Availability Zone, and a second Availability Zone being used as the recovery location for those production workloads.

With the introduction of a second Region in A/NZ, customers can now look to deploy resilience solutions that are more like this diagram:

In this example, they can choose to run production workloads in the Melbourne Region and recover workloads into the Sydney Region if something goes pear-shaped. Note that VCDR is not currently available to deploy in the Melbourne Region, although it’s expected to be made available before the end of 2023.

 

Why Else Should I Care?

Data Sovereignty 

There are a variety of legal, regulatory, and administrative obligations governing the access, use, security, and preservation of information within various government and commercial organisations in Victoria. These regulations are both national and state-based, and the Melbourne Region gives organisations in Victoria the opportunity to store data in VMware Cloud on AWS in ways that may not otherwise have been possible.

Data Locality

Not all applications and data reside in the same location. Many organisations have a mix of workloads residing on-premises and in the cloud. Some of these applications are latency-sensitive, and the launch of the Melbourne Region provides organisations with the ability to host applications closer to that data, as well as to access native AWS services with improved responsiveness compared to applications hosted in the Sydney Region.

 

How?

If you’re an existing VMware Cloud on AWS customer, head over to https://cloud.vmware.com. Log in to the Cloud Services Console. Click on the VMware Cloud on AWS tile. Click on Inventory. Then click on Create SDDC.
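
If you’d rather script it than click through the console, SDDC creation is also exposed via the VMware Cloud on AWS REST API. The Python sketch below is illustrative only – the org ID and token are placeholders, the connected VPC settings are omitted, and the exact region and host type identifiers for Melbourne are my assumptions, so check them against the current API reference.

# Rough, illustrative sketch: create an SDDC in the Melbourne Region via the
# VMware Cloud on AWS REST API. Org ID, token, and the exact enum values for
# region and host type are assumptions -- check the current API reference.
import requests

CSP_AUTH_URL = "https://console.cloud.vmware.com/csp/gateway/am/api/auth/api-tokens/authorize"
VMC_API_URL = "https://vmc.vmware.com/vmc/api"

def get_access_token(refresh_token: str) -> str:
    # Exchange a CSP API (refresh) token for a short-lived access token.
    resp = requests.post(CSP_AUTH_URL, params={"refresh_token": refresh_token})
    resp.raise_for_status()
    return resp.json()["access_token"]

def create_sddc(org_id: str, access_token: str) -> str:
    sddc_config = {
        "name": "melbourne-sddc-01",
        "provider": "AWS",
        "region": "AP_SOUTHEAST_4",         # assumed identifier for Melbourne
        "num_hosts": 2,
        "host_instance_type": "I4I_METAL",  # assumed enum value
        # Connected VPC / subnet settings omitted for brevity.
    }
    resp = requests.post(
        f"{VMC_API_URL}/orgs/{org_id}/sddcs",
        headers={"csp-auth-token": access_token},
        json=sddc_config,
    )
    resp.raise_for_status()
    return resp.json()["id"]  # task ID to poll

if __name__ == "__main__":
    token = get_access_token("YOUR-CSP-API-TOKEN")
    print("SDDC create task:", create_sddc("YOUR-ORG-ID", token))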

 

Thoughts

Some of the folks in the US and Europe are probably wondering why on earth this is such a big deal for the Australian and New Zealand market. And plenty of folks in this part of the world are probably not that interested either. Not every organisation is going to benefit from or look to take advantage of the Melbourne Region. Many of them will continue to deploy workloads into one or two of the Sydney-based Availability Zones, with DR in another Availability Zone, and not need to do any more. But for those organisations looking for resiliency across geographical regions, this is a great opportunity to really do some interesting stuff from a disaster recovery perspective. And while it seems delightfully antiquated to think that, in this global world we live in, some information can’t cross state lines, there are plenty of organisations in Victoria facing just that issue, and looking at ways to store that data in a sensible fashion close to home. Finally, we talk a lot about data having gravity, and this provides many organisations in Victoria with the ability to run workloads closer to that centre of data gravity.

If you’d like to hear me talking about this with my learned colleague Satya, you can check out the video here. Thanks to Satya for prompting me to do the recording, and for putting it all together. We’re aiming to do this more regularly on a variety of VMware-related topics, so keep an eye out.

Random Short Take #87

Welcome to Random Short Take #87. Happy Fête Nationale du 14 juillet to those who celebrate. Let’s get random.

  • I always enjoy it when tech vendors give you a little peek behind the curtain, and Dropbox excels at this. Here is a great article on how Dropbox selects data centre sites. Not every company is operating at the scale that Dropbox is, but these kinds of articles provide useful insights nonetheless. Even if you just skip to the end and follow this process when making technology choices:
    1. Identify what you need early.
    2. Understand what’s being offered.
    3. Validate the technical details.
    4. Physically verify each proposal.
    5. Negotiate.
  • I haven’t used NetWorker for a while, but if you do, this article from Preston on what’s new in NetWorker 19.9 should be of use to you.
  • In VMware Cloud on AWS news, vCenter Federation for VMware Cloud on AWS is now live. You can read all about it here.
  • Familiar with Write Once, Read Many (WORM) storage? This article from the good folks at Datadobi on WORM retention made for some interesting reading. In short, keeping everything for ever is really a data management strategy, and it could cost you.
  • Speaking of data management, check out this article from Chin-Fah on data management and ransomware – it’s an alternative view very much worth considering.
  • Mellor wrote an article on Pixar and VAST Data’s collaboration. And he did one on DreamWorks and NetApp for good measure. I’m fascinated by media creation in general, and it’s always interesting to see what the big shops are using as part of their infrastructure toolkit.
  • JB put out a short piece highlighting some AI-related content shenanigans over at Gizmodo. The best part was the quoted reactions from staff – “16 thumbs down emoji, 11 wastebasket emoji, six clown emoji, two face palm emoji and two poop emoji.”
  • Finally, the recent Royal Commission into the “Robodebt” program concluded and released a report outlining just how bad it really was. You can read Simon’s coverage over at El Reg. It’s these kinds of things that make you want to shake people when they come up with ideas that are destined to cause pain.