VMware Cloud Disaster Recovery – Firewall Ports

I published an article a while ago on getting started with VMware Cloud Disaster Recovery (VCDR). One thing I didn’t cover in any real depth was the connectivity requirements between on-premises and the VCDR service. VMware has worked pretty hard to ensure this is streamlined for users, but it’s still something you need to pay attention to. I was helping a client work through this process for a proof of concept recently and thought I’d cover it off more clearly here. The diagram below highlights the main components you need to look at, being:

  • The Cloud File System (frequently referred to as the SCFS);
  • The VMware Cloud DR SaaS Orchestrator (the Orchestrator); and
  • VMware Cloud DR Auto-support.

It’s important to note that the first two services are assigned IP addresses when you enable the service in the Cloud Services Console, while the Auto-support service uses three fixed public IP addresses that you need to be able to reach. All of this communication happens outbound over TCP 443. The Auto-support service is not required, but it is strongly recommended, as it makes troubleshooting issues with the service much easier and gives VMware an opportunity to proactively resolve cases. Network connectivity requirements are documented here.

[image courtesy of VMware]

So how do I know my firewall rules are working? The first sign that there might be a problem is that the DRaaS Connector deployment will fail to communicate with the Orchestrator at some point (usually towards the end), and you’ll see a message similar to the following: “ERROR! VMware Cloud DR authentication is not configured. Contact support.”

How can you troubleshoot the issue? Fortunately, we have a tool called the DRaaS Connector Connectivity Check CLI that you can run to check what’s not working. In this instance, we suspected an issue with outbound communication, and ran the following command on the console of the DRaaS Connector to check:

drc network test --scope cloud

This returned a status of “reachable” for the Orchestrator and Auto-support services, but the SCFS was unreachable. Some negotiations with the firewall team, and we were up and running.
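
If you want to run a similar check from another machine on the same network segment (before or after the connector is deployed), a quick Python sketch like the one below can at least confirm that outbound TCP 443 is open to the relevant addresses. The IP addresses are placeholders only – substitute the Orchestrator, SCFS, and Auto-support addresses shown in your Cloud Services Console.

```python
import socket

# Placeholder addresses only - substitute the Orchestrator, SCFS, and
# Auto-support IPs shown for your own VCDR deployment.
VCDR_ENDPOINTS = {
    "Orchestrator": "203.0.113.10",
    "Cloud File System (SCFS)": "203.0.113.20",
    "Auto-support": "203.0.113.30",
}

def check_outbound_https(name, address, timeout=5.0):
    """Attempt an outbound TCP 443 connection and report the result."""
    try:
        with socket.create_connection((address, 443), timeout=timeout):
            print(f"{name}: reachable")
    except OSError as err:
        print(f"{name}: unreachable ({err})")

for name, address in VCDR_ENDPOINTS.items():
    check_outbound_https(name, address)
```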

Note, also, that VMware supports the use of proxy servers for communicating with Auto-support services, but I don’t believe we support the use of a proxy for Orchestrator and SCFS communications. If you’re worried about VCDR using up all your bandwidth, you can throttle it. Details on how to do that can be found here. We recommend a minimum of 100Mbps, but you can go as low as 20Mbps if required.

Brisbane VMUG – October 2023

 

Event Overview

This month I’ll be presenting a recap of VMware Explore for all those who couldn’t make it overseas, exploring what’s new and the latest innovations. The agenda covers:

  • Cloud & Edge infrastructure – modernize infrastructure, operating models and applications
  • Networking & Security – automating app experiences with a comprehensive and secure network
  • Modern Applications & Cloud Management – develop, operate and optimize apps at scale on any cloud
  • Hybrid Workforce – enable work anywhere with secure and frictionless experiences

We’ll also have an introduction to VyOS by Shah Anupam, covering how VyOS networking can be leveraged within the VMware ecosystem.

 

Primary Venue

Brisbane VMware Office

Level 8, 324 Queen St, Brisbane QLD 4000, AU

Brisbane VMware Office – Goondiwindi Room

Register here. Hope to see you there. [Edit] I should mention it’s happening on Wednesday October 18th, 2023 from 12:00 – 1:30pm.

Random Short Take #88

Welcome to Random Short Take #88. This one’s been sitting in my drafts folder for a while. Let’s get random.

Brisbane VMUG – Lunch And Learn – September 2023

The Brisbane VMUG team are running a lunch and learn with the local VMware team on September 6th, 2023. You can find out more about it below, and register for the event here.

 

Event Overview

In this Lunch & Learn session, attendees will embark on a journey through VMware Aria Operations, exploring its capabilities and innovations. The agenda is designed to provide an understanding of Aria Operations, covering:

  • General Overview: An introduction to the platform, highlighting its evolution.
  • Deployment and Enhancements: A look into new feature enhancements, plus SaaS and on-prem deployment options.
  • Integration with Clouds: Insight into seamless integration with VMware Cloud and native cloud platforms.
  • Core Capabilities: Exploration of essential features like troubleshooting, automation, and cost management.
  • Compliance Engine: Discover how the compliance engine helps ensure adherence to standards.
  • Extended Capabilities: A focus on extended monitoring capabilities for applications and operating systems.
  • Live Demo & Q&A: An interactive segment with a live demonstration of selected features, followed by an open Q&A session.

The session aims to unlock the potential of VMware Aria Operations, guiding attendees through its multifaceted functionalities and demonstrating how they can leverage these features in their own environments. Whether new to Aria Operations or looking to explore its latest updates, this session offers valuable insights and practical knowledge.

 

Noil Oomman, Senior Solutions Architect, VMware.

Noil Oomman is a Senior Solutions Architect in the Multi-Cloud Management team, based in Melbourne. Noil is experienced in working with customers and is a 3x VMware Certified Professional – Cloud Operations and Automation. Noil enjoys helping customers understand the benefits of VMware’s Cloud Operating Model and how applying the associated multi-cloud management portfolio can support them with their unique digital transformations.

 

Primary Venue

Brisbane VMware Office

Level 8, 324 Queen St, Brisbane QLD 4000, AU

Brisbane VMware Office – Goondiwindi Room

VMware Cloud on AWS – Melbourne Region Added

VMware recently announced that VMware Cloud on AWS is now available in the AWS Asia-Pacific (Melbourne) Region. I thought I’d share some brief thoughts here along with a video I did with my colleague Satya.

 

What?

VMware Cloud on AWS is now available to consume in three Availability Zones (apse4-az1, apse4-az2, apse4-az3) in the Melbourne Region. From a host type perspective, you have the option to deploy either I3en.metal or I4i.metal hosts. There is also support for stretched clusters and PCI-DSS compliance if required. The full list of VMware Cloud on AWS Regions and Availability Zones is here.

 

Why Is This Interesting?

Since the launch of VMware Cloud on AWS, customers have only had one choice when it comes to a Region – Sydney. This announcement gives organisations the ability to deploy architectures that can benefit from both increased availability and resiliency by leveraging multi-regional capabilities.

Availability

VMware Cloud on AWS already offers platform availability at a number of levels, including a choice of Availability Zones, Partition Placement groups, and support for stretched clusters across two Availability Zones. There’s also support for VMware High Availability, as well as support for automatically remediating failed hosts.

Resilience

In addition to the availability options customers can take advantage of, VMware Cloud on AWS also provides support for a number of resilience solutions, including VMware Cloud Disaster Recovery (VCDR) and VMware Site Recovery. Previously, customers in Australia and New Zealand were able to leverage these VMware (or third-party) solutions and deploy them across multiple Availability Zones. Invariably, it would look like the below diagram, with workloads hosted in one Availability Zone, and a second Availability Zone being used as the recovery location for those production workloads.

With the introduction of a second Region in A/NZ, customers can now look to deploy resilience solutions that are more like this diagram:

In this example, they can choose to run production workloads in the Melbourne Region and recover workloads into the Sydney Region if something goes pear-shaped. Note that VCDR is not currently available to deploy in the Melbourne Region, although it’s expected to be made available before the end of 2023.

 

Why Else Should I Care?

Data Sovereignty 

There are a variety of legal, regulatory, and administrative obligations governing the access, use, security, and preservation of information within various government and commercial organisations in Victoria. These regulations are both national and state-based, and the availability of the Melbourne Region gives organisations in Victoria the opportunity to store data in VMware Cloud on AWS in a way that may not otherwise have been possible.

Data Locality

Not all applications and data reside in the same location. Many organisations have a mix of workloads residing on-premises and in the cloud. Some of these applications are latency-sensitive, and the launch of the Melbourne Region gives organisations the ability to host applications closer to the data they depend on, as well as access native AWS services with improved responsiveness compared to applications hosted in the Sydney Region.

 

How?

If you’re an existing VMware Cloud on AWS customer, head over to https://cloud.vmware.com and log in to the Cloud Services Console. Click on the VMware Cloud on AWS tile, click on Inventory, and then click on Create SDDC.
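
If you’d rather drive this via the API than the console, the VMware Cloud on AWS public API can be used to provision SDDCs as well. The sketch below is a rough illustration only – the CSP token exchange is the standard one, but treat the request body fields, the host type string, and the Melbourne region identifier as assumptions to confirm against the current VMC API reference before relying on it.

```python
import requests

CSP_TOKEN_URL = ("https://console.cloud.vmware.com/csp/gateway/am/api/"
                 "auth/api-tokens/authorize")
VMC_API = "https://vmc.vmware.com/vmc/api"

API_TOKEN = "your-csp-api-token"  # generated in the Cloud Services Console
ORG_ID = "your-org-id"

# Exchange the long-lived API token for a short-lived access token.
auth = requests.post(CSP_TOKEN_URL, params={"refresh_token": API_TOKEN})
auth.raise_for_status()
headers = {"csp-auth-token": auth.json()["access_token"]}

# Request body is illustrative only - field names and the Melbourne region
# identifier are assumptions to confirm against the current VMC API reference.
sddc_request = {
    "name": "melbourne-sddc-01",
    "provider": "AWS",
    "region": "ap-southeast-4",       # assumed identifier for the Melbourne Region
    "num_hosts": 3,
    "host_instance_type": "i4i.metal",
    "deployment_type": "SingleAZ",
}

resp = requests.post(f"{VMC_API}/orgs/{ORG_ID}/sddcs",
                     json=sddc_request, headers=headers)
resp.raise_for_status()
print("SDDC provisioning task submitted:", resp.json().get("id"))
```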

 

Thoughts

Some of the folks in the US and Europe are probably wondering why on earth this is such a big deal for the Australian and New Zealand market. And plenty of folks in this part of the world are probably not that interested either. Not every organisation is going to benefit from or look to take advantage of the Melbourne Region. Many of them will continue to deploy workloads into one or two of the Sydney-based Availability Zones, with DR in another Availability Zone, and not need to do any more. But for those organisations looking for resiliency across geographical regions, this is a great opportunity to really do some interesting stuff from a disaster recovery perspective. And while it seems delightfully antiquated to think that, in this global world we live in, some information can’t cross state lines, there are plenty of organisations in Victoria facing just that issue, and looking at ways to store that data in a sensible fashion close to home. Finally, we talk a lot about data having gravity, and this provides many organisations in Victoria with the ability to run workloads closer to that centre of data gravity.

If you’d like to hear me talking about this with my learned colleague Satya, you can check out the video here. Thanks to Satya for prompting me to do the recording, and for putting it all together. We’re aiming to do this more regularly on a variety of VMware-related topics, so keep an eye out.

Random Short Take #87

Welcome to Random Short Take #87. Happy Fête Nationale du 14 juillet to those who celebrate. Let’s get random.

  • I always enjoy it when tech vendors give you a little peek behind the curtain, and Dropbox excels at this. Here is a great article on how Dropbox selects data centre sites. Not every company is operating at the scale that Dropbox is, but these kinds of articles provide useful insights nonetheless. Even if you just skip to the end and follow this process when making technology choices:
    1. Identify what you need early.
    2. Understand what’s being offered.
    3. Validate the technical details.
    4. Physically verify each proposal.
    5. Negotiate.
  • I haven’t used NetWorker for a while, but if you do, this article from Preston on what’s new in NetWorker 19.9 should be of use to you.
  • In VMware Cloud on AWS news, vCenter Federation for VMware Cloud on AWS is now live. You can read all about it here.
  • Familiar with Write Once, Read Many (WORM) storage? This article from the good folks at Datadobi on WORM retention made for some interesting reading. In short, keeping everything for ever is really a data management strategy, and it could cost you.
  • Speaking of data management, check out this article from Chin-Fah on data management and ransomware – it’s an alternative view very much worth considering.
  • Mellor wrote an article on Pixar and VAST Data’s collaboration. And he did one on DreamWorks and NetApp for good measure. I’m fascinated by media creation in general, and it’s always interesting to see what the big shops are using as part of their infrastructure toolkit.
  • JB put out a short piece highlighting some AI-related content shenanigans over at Gizmodo. The best part was the quoted reactions from staff – “16 thumbs down emoji, 11 wastebasket emoji, six clown emoji, two face palm emoji and two poop emoji.”
  • Finally, the recent Royal Commission into the “Robodebt” scheme has concluded and released a report outlining just how bad it really was. You can read Simon’s coverage over at El Reg. It’s these kinds of things that make you want to shake people when they come up with ideas that are destined to cause pain.

VMware Cloud on AWS – TMCHAM – Part 11 – Storage Policies

In this edition of Things My Customers Have Asked Me (TMCHAM), I’m going to cover Managed Storage Policy Profiles (MSPPs) on the VMware-managed VMware Cloud on AWS platform.

 

Background

VMware Cloud on AWS has MSPPs deployed on clusters to ensure that customers have sufficient resilience built into the cluster to withstand disk or node failures. By default, clusters are configured with RAID 1, Failures to Tolerate (FTT):1 for 2 – 5 nodes, and RAID 6, FTT:2 for clusters with 6 or more nodes. Note that single-node clusters have no Service Level Agreement (SLA) attached to them, as you generally only run those on a trial basis, and if the node fails, there’s nowhere for the data to go. You can read more about vSAN Storage Policies and MSPPs here, and there’s a great Tech Zone article here. The point of these policies is that they are designed to ensure your cluster(s) remain in compliance with the SLAs for the platform. You can view the policies in your environment by going to Policies and Profiles in vCenter and selecting VM Storage Policies.
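
As a quick illustration of those defaults (a sketch only – the authoritative values are whatever the MSPP on your cluster actually shows in vCenter), the mapping looks something like this:

```python
def default_vmc_storage_policy(node_count: int) -> str:
    """Default VMware Cloud on AWS managed storage policy for a given
    cluster size, per the defaults described above (sketch only)."""
    if node_count == 1:
        return "No SLA (single-node / trial clusters)"
    if 2 <= node_count <= 5:
        return "RAID 1, FTT:1"
    return "RAID 6, FTT:2"  # six or more nodes

for size in (1, 3, 6, 12):
    print(f"{size}-node cluster -> {default_vmc_storage_policy(size)}")
```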

 

Can I Change Them?

The MSPPs are maintained by VMware, so it’s not a great idea to edit the default policy on your cluster, as the system will change it back at some stage. So why would you want to run a different policy at all? Well, you might decide that a 4 or 5-node cluster could actually run better (from a capacity perspective) using RAID 5 rather than RAID 1. This is a reasonable thing to want to do, and as the SLA talks about FTT numbers, not RAID types, you can change the RAID type and remain in compliance. And the capacity difference can be material in some cases, particularly if you’re struggling to fit your workloads onto a smaller node count.
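
To make the capacity argument a little more concrete, here’s a rough back-of-the-envelope sketch. The per-node raw capacity is a placeholder (substitute the figure for your host type), and it deliberately ignores vSAN slack space and management workload overheads, but it shows why RAID 5 (1.33x overhead) looks attractive on a small cluster compared to RAID 1 (2x overhead).

```python
def usable_capacity_tib(nodes: int, raw_per_node_tib: float, raid: str) -> float:
    """Rough usable vSAN capacity, ignoring slack space and management overheads."""
    overhead = {"RAID1": 2.0, "RAID5": 4 / 3, "RAID6": 1.5}[raid]
    return nodes * raw_per_node_tib / overhead

RAW_PER_NODE_TIB = 20.0  # placeholder - substitute the figure for your host type

for raid in ("RAID1", "RAID5"):
    cap = usable_capacity_tib(4, RAW_PER_NODE_TIB, raid)
    print(f"4 nodes, {raid}: ~{cap:.1f} TiB usable")
```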

 

So How Do I Do It Then?

Clone The Policy

There are a few ways to approach this, but the simplest is to clone an existing policy. In this example, I’ll clone the vSAN Default Storage Policy. In VMware Cloud on AWS, there is an MSPP assigned to each cluster with the name “VMC Workload Storage Policy – ClusterName”. Select the policy you want to clone and then click on Clone.

The first step is to give the VM Storage Policy a name. Something cool with your initials should do the trick.

You can edit the policy structure at this point, or just click Next.

Here you can configure your Availability options. You can also do other things, like configure Tags and Advanced Policy Rules.

Once this is configured, the system will check that your vSAN datastore is compatible with your policy.

And then you’re ready to go. Click Finish, make yourself a beverage, bask in the glory of it all.

Apply The Policy

So you have a fresh new policy, now what? You can choose to apply it to your workload datastore, or apply it to specific Virtual Machines. To apply it to your datastore, select the datastore you want to modify, click on General, then click on Edit next to the Default Storage Policy option. The process to apply the policy to VMs is outlined here. Note that if you create a non-compliant policy and apply it to your datastore, you’ll get hassled about it and you should likely consider changing your approach.

 

Thoughts

The thing about managed platforms is that the service provider is on the hook for architecture decisions that reduce the resilience of the platform. And the provider is trying to keep the platform running within the parameters of the SLA. This is why you’ll come across configuration items in VMware Cloud on AWS that you either can’t change, or have some default options that seem conservative. Many of these decisions have been made with the SLAs and the various use cases in mind for the platform. That said, it doesn’t mean there’s no flexibility here. If you need a little more capacity, particularly in smaller environments, there are still options available that won’t reduce the platform’s resilience, while still providing additional capacity options.

Brisbane VMUG – Lunch and Learn – July 2023

The Brisbane VMUG team are running a lunch and learn with the local VMware team on July 11th, 2023. You can find out more about it below, and register for the event here.

 

Event Overview

We live in a world where applications dominate the way we get things done and help us be productive. Whether that be on a mobile device, a desktop or perhaps on a device that isn’t even ours.

Many organisations utilise VDI, DaaS, and published app environments to securely host and deliver virtual apps, including legacy apps tied to the data center. While VDI, DaaS, and published app deployments have aided the way IT admins can securely deliver apps to end users at scale, there are common challenges in published app environments.

For example, IT typically deploys groups of servers into farms, silos, or delivery groups to host their published apps for users. These servers are architected to scale to the peak usage of the apps being hosted (servers sit idle and are wasted resources when apps are not used). IT admins must manage the copies of apps on each farm and independently manage OS updates on each host, leading to inefficiencies as app and server counts increase. Running antiquated and inefficient app environments can lead to inflated costs, app-lifecycle management challenges, and incompatibility of apps.

This session will help you understand how VMware can help you accelerate the delivery of applications on demand, whilst simplifying the lifecycle and helping to drive down management overheads.

VMware has a long history of helping clients get more from their existing investments through augmentation, and we can show you how to use those existing investments to deliver better outcomes for your workers.

 

Primary Venue

VMware Brisbane Office (Goondiwindi Room)

Level 8, 324 Queen St, 4000 Brisbane, QLD

Random Short Take #86

Welcome to Random Short Take #86. It’s been a while, and I’ve been travelling a bit for work. So let’s get random.

  • Let’s get started with three things / people I like: Gestalt IT, Justin Warren, and Pure Storage. This article by Justin digs into some of the innovation we’re seeing from Pure Storage. Speaking of Justin, if you don’t subscribe to his newsletter “The Crux”, you should. I do. Subscribe here.
  • And speaking of Pure Storage, a survey was conducted and results were had. You can read more on that here.
  • Switching gears slightly (but still with a storage focus), check out the latest Backblaze drive stats report here.
  • Oh you like that storage stuff? What about this article on file synchronisation and security from Chin-Fah?
  • More storage? What about this review of the vSAN Objects Viewer from Victor?
  • I’ve dabbled in product management previously, but this article from Frances does a much better job of describing what it’s really like.
  • Edge means different things to different people, and I found this article from Ben Young to be an excellent intro to the topic.
  • You know I hate Netflix but love its tech blog. Check out this article on migrating critical traffic at scale.

Bonus round. I’m in the Bay Area briefly next week. If you’re around, let me know! Maybe we can watch one of the NBA Finals games.

VMware Cloud on AWS – TMCHAM – Part 10 – Cluster Conversion

In this edition of Things My Customers Have Asked Me (TMCHAM), I’m going to delve into the topic of cluster conversions on the VMware-managed VMware Cloud on AWS platform.

 

Background

With the end-of-sale announcement for the I3.metal node type in VMware Cloud on AWS, I’ve had a few customers ask about how the cluster conversion process works. We’ve previously offered the ability to convert nodes from I3.metal to I3en.metal, and we’ve taken that process and made it available for the I4i.metal node type as well. The process is outlined in some detail here. From a technical perspective, you’ll need to be on SDDC version 1.18v8 or 1.20v2 at a minimum. From a commercial perspective, to use your existing subscriptions they’ll need to be flexible, or you can choose to add new subscriptions. Your account team can help with that.

 

Sounds Easy, What’s the Catch?

I’ve had a few customers run through this process now in my part of the world, and more and more folks are converting across to I4i.metal every week. One of the key considerations when planning the conversion, particularly with smaller environments, is sizing and storage policies. When the team converts your cluster, they will do some sizing estimates prior to the activity, and the results of this sizing might be higher than you’d expect. For example, we talk about the I4i.metal being something in the order of 1.6 – 2 times as powerful as the I3.metal node. But this really depends on a variety of factors, including the vSAN RAID policy in use, the types of workloads running on the cluster, and so forth. I’ve seen scenarios where a customer has wanted to convert a 6-node I3.metal cluster to 4 I4i.metal nodes. From a calculated capacity perspective, this should be a no-brainer. But what you’ll find, when working with the conversion team, is that they will likely come back to you saying that 6 nodes will be the target. The reason for this is that they’re assuming your cluster is running RAID 6.

How do you solve this problem? Think about the vSAN policy you want to run moving forward. If you’re happy to drop to RAID 5, for example, you have a way forward. Once the cluster conversion is complete, jump on and change the default policy to RAID 5 / FTT:1. This will cause vSAN to modify the policy for all of the VMs on the cluster. This is a background process, and won’t interfere with normal operations. Once you’ve done that, you can then remove the additional nodes. It’s a little fiddly, and will require some amount of coordination with the conversion team and your account team, but it’s a fairly simple task, and will get you running on new shiny boxes without having to muck about with setting up another cluster (or SDDC) and manually migrating workloads across.

You’ll want to ensure that changing your RAID policy won’t leave you short of available storage. Every workload is different, but at a high level, you can use the public sizer to work through some of these numbers. A 16-node I3.metal cluster with RAID 6 configured will give you roughly 165.89 TiB of usable capacity (ignoring management workload overheads and vSAN slack space), and a similar storage footprint can be had with an 8 or 9-node cluster of I4i.metal nodes. You’ll also want to be sure your organisation is comfortable with the vSAN policy you’re moving to. If you’re moving from 16 nodes to 8 or 9 nodes, for example, this isn’t really a problem, as you’ll likely be sticking with RAID 6 for clusters that large. But if you’re going from 6 nodes to 3 nodes, you’re going from RAID 6 to RAID 1.
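
If you want a very rough feel for the arithmetic behind those numbers, the sketch below does the same kind of calculation with deliberately round placeholder capacities per node (the public sizer remains the source of truth): it works out the usable footprint of a 16-node I3.metal cluster at RAID 6, and then how many I4i.metal nodes would deliver a similar footprint at the same policy.

```python
import math

def usable_tib(nodes: int, raw_per_node_tib: float, raid_overhead: float) -> float:
    """Rough usable capacity, ignoring slack space and management overheads."""
    return nodes * raw_per_node_tib / raid_overhead

# Placeholder raw capacities per node - use the public sizer for real figures.
I3_RAW_TIB = 10.0
I4I_RAW_TIB = 20.0
RAID6 = 1.5  # RAID 6 (4+2) capacity overhead

target = usable_tib(16, I3_RAW_TIB, RAID6)
needed = math.ceil(target * RAID6 / I4I_RAW_TIB)
print(f"16 x I3.metal @ RAID 6: ~{target:.1f} TiB usable")
print(f"I4i.metal nodes needed at RAID 6 for a similar footprint: {needed}")
```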

 

Thoughts and Further Reading

The neat thing about the VMware Cloud on AWS offering is that it’s a managed service from VMware, and we do a good job of managing boring stuff like this for you, reducing the impact of software and hardware changes by leveraging core VMware technologies that aren’t otherwise available on native cloud platforms. If you’d like to read more about the I4i.metal node, check out our FAQ here.