VMware Cloud on AWS – TMCHAM – Part 13 – Delete the SDDC

Following on from my article on host removal, in this edition of Things My Customers Have Asked Me (TMCHAM), I’m going to cover SDDC removal on the VMware-managed VMware Cloud on AWS platform. Don’t worry, I haven’t lost my mind in a post-acquisition world. Rather, this is some of the info you’ll find useful if you’ve been running a trial or a proof of concept (as opposed to a pilot) deployment of VMware Cloud Disaster Recovery (VCDR) and / or VMware Cloud on AWS and want to clean some stuff up when you’re all done.

 

Process

Firstly, if you’re using VCDR and want to deactivate the deployment, the steps to perform are outlined here, and I’ve copied the main bits from that page below.

  1. Remove all DRaaS Connectors from all protected sites. See Remove a DRaaS Connector from a Protected Site.
  2. Delete all recovery SDDCs. See Delete a Recovery SDDC.
  3. Deactivate the recovery region from the Global DR Console. (Do this step last.) See Deactivate a Recovery Region. Usage charges for VMware Cloud DR are not stopped until this step is completed.

Funnily enough, as I was writing this, someone zapped our lab for reasons. So this is what a Region deactivation looks like in the VCDR UI.

Note that it’s important you perform these steps in that order, or you’ll have more cleanup work to do to get everything looking nice and tidy. I have witnessed firsthand someone doing it the other way and it’s not pretty. Note also that if your Recovery SDDC had services such as HCX connected, you should hold off deleting the Recovery SDDC until you’ve cleaned that bit up.

Secondly, if you have other workloads deployed in a VMware Cloud on AWS SDDC and want to remove a PoC SDDC, there are a few steps that you will need to follow.

If you’ve been using HCX to test migrations or network extension, you’ll need to follow these steps to remove it. Note that this should be initiated from the source side, and your HCX deployment should be in good order before you start (site pairings functioning, etc). You might also wish to remove a vCenter Cloud Gateway, and you can find information on that process here.

Finally, there are some AWS activities that you might want to undertake to clean everything up. These include:

  • Removing VIFs attached to your AWS VPC.
  • Deleting the VPC (this will likely be required if your organisation has a policy about how PoC deployments are managed).
  • Tidying up any on-premises routing and firewall rules that may have been put in place for the PoC activity.

And that’s it. There’s not a lot to it, but tidying everything up after a PoC will ensure that you avoid any unexpected costs popping up in the future.

VMware Cloud on AWS – TMCHAM – Part 12 – Host Removal

In this edition of Things My Customers Have Asked Me (TMCHAM), I’m going to cover host removal on the VMware-managed VMware Cloud on AWS platform. This is a fairly brief post, as there’s not a lot to say about the process, but I’ve had enough questions about it that I thought it was worth covering.

 

Background

I’ve written about Elastic DRS (EDRS) in VMware Cloud on AWS previously. It’s something you can’t turn off, and the Baseline policy does a good job of making sure you don’t get into hot water from a storage perspective. That said, there might be occasions where you want to remove a host or two to scale in your cluster manually. This might happen after a cluster conversion, or after a rapid scale-out event where you’ve since removed the workloads that caused the scale-out to occur.

 

Process

The process to remove a host is documented here. Note that it is a sequential process, with one host being removed at a time. Depending on the number of hosts in your cluster, you may need to adjust your storage and fault tolerance policies as well. To start the process, go to your cloud console and select the SDDC you want to remove the hosts from. If there’s only one cluster, you can click on Remove Hosts under Actions. If there are multiple clusters in the SDDC, you’ll need to select the cluster you want to remove the host from.

You’ll then be advised that you need to understand what you’re doing (and acknowledge that), and you may be advised to change your default storage policies as well. More info on those policies is here.

Once you kick off the process, the cluster will be evaluated to ensure that removing hosts will not violate the applicable EDRS policies. VMs will be migrated off the host when it’s put into maintenance mode, and billing will be stopped for that host.

And that’s it. Pretty straightforward.

VMware Cloud on AWS – Check TRIM/UNMAP

This is a really quick follow-up to one of my TMCHAM articles on TRIM/UNMAP on VMware Cloud on AWS. In short, a customer wanted to know whether TRIM/UNMAP had been enabled on one of their clusters, as they’d requested. The good news is it’s easy enough to find out. On your cluster, go to Configure. Under vSAN, you’ll see Services. Expand the Advanced Options section and you’ll see whether TRIM/UNMAP has been enabled for the cluster or not.

VMware Cloud on AWS – TMCHAM – Part 11 – Storage Policies

In this edition of Things My Customers Have Asked Me (TMCHAM), I’m going to cover Managed Storage Policy Profiles (MSPPs) on the VMware-managed VMware Cloud on AWS platform.

 

Background

VMware Cloud on AWS has MSPPs deployed on clusters to ensure that customers have sufficient resilience built into the cluster to withstand disk or node failures. By default, clusters are configured with RAID 1, Failures to Tolerate (FTT):1 for 2 – 5 nodes, and RAID 6, FTT:2 for clusters with 6 or more nodes. Note that single-node clusters have no Service Level Agreement (SLA) attached to them, as you generally only run those on a trial basis, and if the node fails, there’s nowhere for the data to go. You can read more about vSAN Storage Policies and MSPPs here, and there’s a great Tech Zone article here. These policies are designed to ensure your cluster(s) remain in compliance with the SLAs for the platform. You can view the policies in your environment by going to Policies and Profiles in vCenter and selecting VM Storage Policies.

 

Can I Change Them?

The MSPPs are maintained by VMware, so it’s not a great idea to change the default policies on your cluster, as the system will change them back at some stage. So why would you want different policies on your cluster at all? Well, you might decide that a 4- or 5-node cluster could make better use of its capacity running RAID 5 rather than RAID 1. This is a reasonable thing to want to do, and as the SLA talks about FTT numbers, not RAID types, you can change the RAID type and remain in compliance. And the capacity difference can be material in some cases, particularly if you’re struggling to fit your workloads onto a smaller node count.
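To put some rough numbers around that, here’s a minimal sketch of the capacity difference. The overhead factors are the standard vSAN ones (RAID 1 FTT:1 writes two copies, RAID 5 FTT:1 carries roughly 1.33x overhead, RAID 6 FTT:2 carries 1.5x), while the per-node raw capacity is just a placeholder you’d swap for your node type’s actual figure; slack space and management overheads are ignored.

```python
# Rough vSAN usable-capacity comparison across storage policies.
# Overhead factors: RAID 1 FTT:1 = 2.0x, RAID 5 FTT:1 ~= 1.33x, RAID 6 FTT:2 = 1.5x.
# raw_tib_per_node is a placeholder value, not a real node specification.

RAID_OVERHEAD = {
    "RAID 1 / FTT:1": 2.0,
    "RAID 5 / FTT:1": 1.33,
    "RAID 6 / FTT:2": 1.5,
}

def usable_tib(nodes: int, raw_tib_per_node: float, policy: str) -> float:
    """Approximate usable capacity, ignoring slack space and management overheads."""
    return nodes * raw_tib_per_node / RAID_OVERHEAD[policy]

if __name__ == "__main__":
    raw_per_node = 15.0  # TiB - placeholder, use your node type's raw vSAN capacity
    for policy in RAID_OVERHEAD:
        for nodes in (4, 5):
            print(f"{nodes} nodes, {policy}: ~{usable_tib(nodes, raw_per_node, policy):.1f} TiB usable")
```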

 

So How Do I Do It Then?

Clone The Policy

There are a few ways to approach this, but the simplest is by cloning an existing policy. In this example, I’ll clone the vSAN Default Storage Policy. In VMware Cloud on AWS, there is an MSPP assigned to each cluster with the name “VMC Workload Storage Policy – ClusterName”. Select the policy you want to clone and then click on Clone.

The first step is to give the VM Storage Policy a name. Something cool with your initials should do the trick.

You can edit the policy structure at this point, or just click Next.

Here you can configure your Availability options. You can also do other things, like configure Tags and Advanced Policy Rules.

Once this is configured, the system will check that your vSAN datastore is compatible with your policy.

And then you’re ready to go. Click Finish, make yourself a beverage, bask in the glory of it all.

Apply The Policy

So you have a fresh new policy, now what? You can choose to apply it to your workload datastore, or apply it to specific Virtual Machines. To apply it to your datastore, select the datastore you want to modify, click on General, then click on Edit next to the Default Storage Policy option. The process to apply the policy to VMs is outlined here. Note that if you create a non-compliant policy and apply it to your datastore, you’ll get hassled about it and you should likely consider changing your approach.

 

Thoughts

The thing about managed platforms is that the service provider is on the hook for architecture decisions that reduce the resilience of the platform. And the provider is trying to keep the platform running within the parameters of the SLA. This is why you’ll come across configuration items in VMware Cloud on AWS that you either can’t change, or have some default options that seem conservative. Many of these decisions have been made with the SLAs and the various use cases in mind for the platform. That said, it doesn’t mean there’s no flexibility here. If you need a little more capacity, particularly in smaller environments, there are still options available that won’t reduce the platform’s resilience, while still providing additional capacity options.

VMware Cloud on AWS – TMCHAM – Part 10 – Cluster Conversion

In this edition of Things My Customers Have Asked Me (TMCHAM), I’m going to delve into the topic of cluster conversions on the VMware-managed VMware Cloud on AWS platform.

 

Background

With the end of sale announcement of the I3.metal node type in VMware Cloud on AWS, I’ve had a few customers ask about how the cluster conversion process works. We’ve previously offered the ability to convert nodes from I3.metal to I3en.metal, and we’ve taken that process and made it possible for the I4i.metal node type as well. The process is outlined in some detail here. From a technical perspective, you’ll need to be on SDDC version 1.18v8 or 1.20v2 at a minimum. From a commercial perspective, to use your existing subscriptions, they’ll need to be flexible, or you can choose to add new subscriptions. Your account team can help with that.

 

Sounds Easy, What’s the Catch?

I’ve had a few customers run through this process now in my part of the world, and more and more folks are converting across to I4i.metal every week. One of the key considerations when planning the conversion, particularly with smaller environments, is sizing and storage policies. When the team converts your cluster, they will do some sizing estimates prior to the activity, and the results of this sizing might be higher than you’d expect. For example, we talk about the I4i.metal being something in the order of 1.6 – 2 times as powerful as the I3.metal node. But this really depends on a variety of factors, including the vSAN RAID policy in use, the types of workloads running on the cluster, and so forth. I’ve seen scenarios where a customer has wanted to convert a 6-node I3.metal cluster to 4 I4i.metal nodes. From a calculated capacity perspective, this should be a no-brainer. But what you’ll find, when working with the conversion team, is that they will likely come back to you saying that 6 nodes will be the target. The reason for this is that they’re assuming your cluster is running RAID 6.

How do you solve this problem? Think about the vSAN policy you want to run moving forward. If you’re happy to drop to RAID 5, for example, you have a way forward. Once the cluster conversion is complete, jump on and change the default policy to RAID 5 / FTT:1. This will cause vSAN to modify the policy for all of the VMs on the cluster. This is a background process, and won’t interfere with normal operations. Once you’ve done that, you can then remove the additional nodes. It’s a little fiddly, and will require some amount of coordination with the conversion team and your account team, but it’s a fairly simple task, and will get you running on new shiny boxes without having to muck about with setting up another cluster (or SDDC) and manually migrating workloads across.

You’ll want to ensure that changing your RAID policy won’t have an impact on your available storage. Every workload is different, but at a high level, you can use the public sizer to work through some of these numbers. A 16-node I3.metal cluster with RAID 6 configured will give you roughly 165.89 TiB of usable capacity (ignoring management workload overheads and vSAN slack space), and a similar storage footprint can be had with an 8- or 9-node cluster of I4i.metal nodes. You’ll also want to be sure your organisation is comfortable with the vSAN policy you’re moving to. If you’re moving from 16 nodes to 8 or 9 nodes, for example, this isn’t really a problem, as you’ll likely be sticking with RAID 6 for clusters that large. But if you’re going from 6 nodes to 3 nodes, you’re going from RAID 6 to RAID 1.

 

Thoughts and Further Reading

The neat thing about the VMware Cloud on AWS offering is that it’s a managed service from VMware, and we do a good job of managing boring stuff like this for you, reducing the impact of software and hardware changes by leveraging core VMware technologies that aren’t otherwise available on native cloud platforms. If you’d like to read more about the I4i.metal node – check out our FAQ here.

VMware Cloud on AWS – TMCHAM – Part 9 – Elastic DRS Policy Changes

In this edition of Things My Customers Have Asked Me (TMCHAM), I’m going to delve into some questions around recent(ish) changes to Elastic DRS policies and capacity on the VMware-managed VMware Cloud on AWS platform.

I’ve had a few customers ask about changes VMware has made to Elastic DRS policies on VMware Cloud on AWS. I’ve talked a little about EDRS previously, and the release notes cover the changes here (go to March 27th, 2023). In short, the changes are as follows:

  • The Elastic DRS Optimize for Rapid Scaling policy now supports rapid scale-in, enabling faster scaling use cases like VDI, disaster recovery, or other business needs.
  • The Elastic DRS Cost Policy improvement allows automated scale-in of a cluster if the storage utilization falls below 40%, instead of the previous 20% limit.

What does it mean from a practical perspective? Not a lot for customers using the default baseline policy. But if you’re using “Optimize for Lowest Cost” or “Rapid Scaling”, it might be worth looking into.

 

Huh?

Optimize for Lowest Cost

The documentation does a great job of describing how this works: “When scaling in, this policy removes hosts quickly to maintain baseline performance while keeping host counts to a practical minimum. It removes hosts only if it anticipates that storage utilization would not result in a scale out in the near term after host removal”. It has the following thresholds:

              Old High    Old Low    New High    New Low
CPU              90%        60%         90%        60%
Memory           80%        60%         80%        60%
Storage          70%        20%         80%        40%

You’ll see that the new low threshold for storage is now 40% (I’ve also included the earlier change of the high threshold from 70% to 80%, but that was done a while ago). Generally speaking, the algorithm is designed not to do silly things, but this change enables customers to scale in sooner, helping to reduce the cost of scaling events.
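To illustrate how those thresholds hang together, here’s a small sketch. To be clear, this is not VMware’s actual EDRS algorithm; it’s just an illustration of the idea that a scale-out is recommended when any resource crosses its high threshold, while a scale-in is only considered once everything sits below the low thresholds.

```python
# Illustrative only - not VMware's actual EDRS implementation. It simply
# evaluates utilisation against the "Optimize for Lowest Cost" thresholds
# from the table above.

HIGH = {"cpu": 0.90, "memory": 0.80, "storage": 0.80}
LOW = {"cpu": 0.60, "memory": 0.60, "storage": 0.40}  # storage low was previously 20%

def recommendation(utilisation: dict) -> str:
    if any(utilisation[r] >= HIGH[r] for r in HIGH):
        return "scale out"
    if all(utilisation[r] <= LOW[r] for r in LOW):
        return "consider scale in"
    return "no change"

print(recommendation({"cpu": 0.55, "memory": 0.50, "storage": 0.35}))  # consider scale in
print(recommendation({"cpu": 0.55, "memory": 0.50, "storage": 0.45}))  # no change
```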

Rapid Scaling

From the documentation: “[t]his policy adds multiple hosts at a time when needed for memory or CPU, and adds hosts incrementally when needed for storage. By default, hosts are added four at a time. You can specify a larger scale-out increment (8 or 12) if you need faster scaling for disaster recovery, Virtual Desktop Infrastructure (VDI), and similar use cases. As with any EDRS policy, scale-out time increases with increment size. When the increment is large (12 hosts), it can take up to 40 minutes to complete in some configurations.

When scaling in, this policy removes hosts rapidly, maintaining baseline performance while keeping host count to a practical minimum. It does not remove hosts if it anticipates that doing so would degrade performance and force a near-term scale-out. Scale-in stops when the cluster reaches the minimum host count or the number of hosts in the scale-out increment has been removed”. This policy has the following thresholds:

              Old High    Old Low    New High    New Low
CPU              80%         0%         80%        50%
Memory           80%         0%         80%        50%
Storage          70%         0%         80%        40%

What does that mean? We’ve added in some guardrails for rapid scale-in to ensure that things don’t get too hectic too quickly. And on the flip side, it means that you’ll scale out your environment faster as well. Again, this is useful for bursty workloads such as VDI or, potentially, rapid DR.

 

Thoughts

Elastic DRS is one of the cooler features of VMware Cloud on AWS. You can do some really interesting things from a scaling perspective, particularly if you’re operating with some volatile / bursty workloads. That said, if you only use the default baseline policy you’ll also likely be in a good spot, as the thing that can really hurt in these kinds of environments is when your hosts run short of storage.

VMware Cloud on AWS – TMCHAM – Part 8 – TRIM/UNMAP

In this edition of Things My Customers Have Asked Me (TMCHAM), I’m going to delve into some questions around TRIM/UNMAP and capacity reclamation on the VMware-managed VMware Cloud on AWS platform.

 

Why TRIM/UNMAP?

TRIM/UNMAP, in short, is the capability for operating systems to reclaim no-longer-used space on thin-provisioned volumes. Why is this important? Imagine you have a thin-provisioned volume that has 100GB of capacity allocated to it. It consumes maybe 1GB when it’s first deployed. You then add 50GB of data to it, and later delete that 50GB from the volume. You’ll still see 51GB of capacity being consumed on the underlying storage. This is because the guest filesystem simply marks the blocks as free; nothing tells the storage underneath that the space can be reclaimed. Modern operating systems can issue TRIM/UNMAP commands to deal with this, but the hypervisor needs to understand the commands being sent to it. You can read more on that here.
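The example above in numbers, as a trivial sketch: the datastore’s consumed figure stays at its high-water mark until UNMAP commands actually reach the underlying storage.

```python
# The thin-provisioning example from the paragraph above, in numbers.
provisioned = 100             # GB allocated to the thin-provisioned volume
consumed = 1                  # GB written at initial deployment
consumed += 50                # add 50GB of data -> 51GB consumed on the datastore
guest_in_use = consumed - 50  # guest deletes the 50GB again -> 1GB in use inside the guest

print(f"In use inside the guest:            {guest_in_use} GB")
print(f"Still consumed without TRIM/UNMAP:  {consumed} GB")
```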

How Do I Do This For VMware Cloud on AWS?

You can contact your account team, and we raise a ticket to get the feature enabled. We had some minor issues recently that meant we weren’t enabling the feature, but if you’re running M16v12 or M18v5 (or above) on your SDDCs, you should be good to go. Note that this feature is enabled on a per-cluster basis, and you need to reboot the VMs in the cluster for it to take effect.

What About Migrating With HCX?

Do the VMs come across thin? Do you need to reclaim space first? If you’re using HCX to go from thick to thin, you should be fine. If you’re migrating thin to thin, it’s worth checking whether you’ve got any space reclamation in place on your source side. I’ve had customers report back that some environments have migrated across with higher than expected storage usage due to a lack of space reclamation happening on the source storage environment. You can use something like Live Optics to report on your capacity consumed vs allocated, and how much capacity can be reclaimed.

Why Isn’t This Enabled By Default?

I don’t know for sure, but I imagine it has something to do with the fact that TRIM/UNMAP has the potential to have a performance impact from a latency perspective, depending on the workloads running in the environment, and the amount of capacity being reclaimed at any given time. We recommend that you “schedule large space reclamation jobs during off-peak hours to reduce any potential impact”. Given that VMware Cloud on AWS is a fully-managed service, I imagine we want to control as many of the performance variables as possible to ensure our customers enjoy a reliable and stable platform. That said, TRIM/UNMAP is a really useful feature, and you should look at getting it enabled if you’re concerned about the potential for wasted capacity in your SDDC.

VMware Cloud on AWS – TMCHAM – Part 7 – Elastic DRS and Host Failure Remediation

In this edition of Things My Customers Have Asked Me (TMCHAM), I’m going to delve into some questions around managing host additions and failures on the VMware-managed VMware Cloud on AWS platform.

Elastic DRS

One of the questions I frequently get asked by customers is what happens when you reach a certain capacity in your VMware Cloud on AWS cluster? The good news is we have a feature called Elastic DRS that can take care of that for you. Elastic DRS is a little different to what you might know as the vSphere Distributed Resource Scheduler (DRS). Elastic DRS operates at a host level and takes care of capacity constraints in your VMC environment. The idea is that, when your cluster reaches a certain resource threshold (be it storage, vCPU, or RAM), Elastic DRS takes care of adding in additional host resources as required. 

The algorithm runs every 5 minutes and uses the following parameters:

  • Minimum and maximum number of hosts the algorithm should scale up or down to.
  • Thresholds for CPU, memory and storage utilisation such that host allocation is optimized for cost or performance.

Note also that your cluster may scale back in, assuming the resources stay consistently below the threshold for a number of iterations.
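As a purely illustrative sketch (and not VMware’s actual implementation), that scale-in behaviour can be thought of as a simple hysteresis check: utilisation has to stay below the low threshold for several consecutive runs of the five-minute evaluation before a scale-in is recommended. The iteration count and threshold used here are made-up values for illustration.

```python
# Illustrative only - not VMware's actual Elastic DRS implementation.
# Scale-in is only suggested once utilisation has stayed below the low
# threshold for a number of consecutive five-minute iterations.
REQUIRED_ITERATIONS = 3   # hypothetical value, for illustration
STORAGE_LOW = 0.40        # example low threshold, also hypothetical

def should_scale_in(storage_samples: list) -> bool:
    """True if the most recent samples are all below the low threshold."""
    recent = storage_samples[-REQUIRED_ITERATIONS:]
    return len(recent) == REQUIRED_ITERATIONS and all(s < STORAGE_LOW for s in recent)

print(should_scale_in([0.55, 0.38, 0.39, 0.37]))  # True  - consistently below threshold
print(should_scale_in([0.38, 0.39, 0.45]))        # False - utilisation crept back up
```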

Settings

There are a few different options for Elastic DRS, with the default being the “Elastic DRS Baseline Policy”. With this policy, a host is automatically added when there’s less than 20% free vSAN storage. Note that this doesn’t apply to single-node SDDC configurations, and only the baseline policy is available with 2-node configurations. Beyond those limitations, though, there are a number of other configurations available and these are outlined here. The neat thing is that there’s some amount of flexibility in how you have your SDDC automatically managed, with options for best performance, lowest cost, or rapid scale-out also available.

Can I Turn It Off?

No, but you can fiddle with the settings from your VMC cloud console.

Other Questions

What happens if I’m adding a host manually? The Elastic DRS recommendations are ignored. Same goes with planned maintenance or SDDC maintenance, where the support team may be adding in an additional host. But what if you’ve lost a host? The auto-remediation process kicks in and the Elastic DRS recommendations are ignored while the failed host is being replaced. You can read more about that process here.

 

Thoughts

One of the things I like about the VMware Cloud on AWS approach is that VMware has looked into a number of common scenarios that occur in the wild (hosts running out of capacity, for example) and built some automation on top of an already streamlined SDDC stack. Elastic DRS and the Auto-Scaler features seem like minor things, but when you’re managing an SDDC of any significant scale, it’s nice to have the little things taken care of.

VMware Cloud on AWS – TMCHAM – Part 6 – Sizing

In this edition of Things My Customers Have Asked Me (TMCHAM), I’m going to touch briefly on some things you might come across when sizing workloads for the VMware Cloud on AWS platform using the VMware Cloud on AWS Sizer.

VMware Cloud on AWS Sizer

One of the neat things about VMware Cloud on AWS is that you can jump on the publicly available sizing tool and input some numbers (or import RVTools or LiveOptics files) and it will spit out the number of nodes that you’ll (likely) need to support your workloads. Of course, if that’s all there was to it, you wouldn’t need folks like me to help you with sizing. That said, VMware has worked hard to ensure that the sizing part of your VMware Cloud on AWS planning is fairly straightforward. There are a few things to look out for though.

Why Do I See A Weird Number Of Cores In The Sizer?

If you put a workload into the sizer, you might see some odd core counts in the output. For example, the below screenshot shows 4x i3en nodes with 240 cores, but clearly it should be 192 cores (4x 48).

Yet when the same workload is changed to the i3 instance type, the correct number of cores (5x 36 = 180) is displayed.

The reason for this is that the i3en instance types support Hyper-Threading, and the Sizer applies a weighting to calculations. This can be changed via the Global Settings in the Advanced section of the Sizer. If you’re not into HT, set it to 0%. If you’re a believer, set it to 100%. By default it’s set to 25%, hence the 240 cores number in the previous example (48 x 1.25 x 4 nodes).
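The arithmetic behind those numbers is straightforward; here’s a quick sketch reproducing the figures from the examples above. The helper name is hypothetical, while the 25% default weighting and the core counts come straight from the post.

```python
# Reproduces the Sizer core counts discussed above: physical cores are
# multiplied by (1 + HT weighting) on node types where Hyper-Threading
# is taken into account. The default weighting is 25%.

def sizer_cores(nodes: int, cores_per_node: int, ht_weighting: float = 0.25) -> int:
    return int(nodes * cores_per_node * (1 + ht_weighting))

print(sizer_cores(4, 48))        # i3en, default 25% weighting -> 240
print(sizer_cores(4, 48, 0.0))   # weighting set to 0%          -> 192
print(sizer_cores(5, 36, 0.0))   # i3, no HT weighting applied  -> 180
```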

Why Do I Need This Many Nodes?

You might need to satisfy Host Admission Control requirements. The current logic of Host Admission Control (as it’s applied in the VMC Sizer) is as follows, with a quick worked example after the list:

  • A 2-host cluster reserves 50.00 percent of CPU and memory capacity for HA Admission Control (HAC).
  • A 3-host cluster reserves 33.33 percent for HAC.

And so on until you get to

  • A 16-host cluster reserving 6.25 percent of resources for HAC.
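Those percentages are simply one host’s worth of capacity expressed as a fraction of the cluster, so the worked example is a one-liner:

```python
# HA Admission Control reservation as applied in the Sizer: with n hosts,
# the equivalent of one host's CPU and memory is reserved, i.e. 100/n percent.
for hosts in (2, 3, 4, 8, 16):
    print(f"{hosts:2d}-host cluster reserves {100 / hosts:.2f}% for HA Admission Control")
# 2 hosts -> 50.00%, 3 -> 33.33%, ... 16 -> 6.25%
```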

It’s also important to note that a 2-host cluster can accommodate a maximum of 35 VMs. Anything above that will need an extra host. And if you’re planning to run a full HCX configuration on two nodes, you should review this Knowledge Base article. Speaking of running things at capacity, I’ll go into Elastic DRS in another post, but by default we add another host to your cluster when you hit 80% storage capacity.

What About My Storage Consumption?

By default there are some storage policies applied to your vSAN configurations too. A standard cluster with 5 hosts or fewer is set to 1 Failure / RAID-1, whilst a standard cluster with 6 hosts or more is set to tolerate 2 Failures / RAID-6. You can read more about that here.

Conclusion

There’s a bunch of stuff I haven’t covered here, including the choices you have to make between using RVTools and LiveOptics, and whether you should size with a high vCPU-to-core ratio or keep it one-to-one like the old timers do. But hopefully this post has been of some use explaining some of the quirky things that pop up in the Sizer from time to time.

VMware Cloud on AWS – TMCHAM – Part 4 – VM Resource Management

In this episode of Things My Customers Have Asked Me (TMCHAM), I’m going to delve into some questions around resource management for VMs running on the VMware-managed VMware Cloud on AWS platform, and what customers need to know to make it work for them.

Distributed Resource Scheduler

If you’ve used VMware vSphere before, it’s likely that you’ve come across the Distributed Resource Scheduler (DRS) capability. DRS is a way to keep workloads evenly distributed across nodes in a cluster, and moves VMs around based on various performance considerations. The cool thing about this is that you don’t need to manually move workloads around when a particular guest or host goes a little nuts from a CPU or Memory usage perspective. There are cases, however, when you might not want your VMs to be moving around too much. In this instance, you’ll want to create what is called a “Disable DRS vMotion Policy”. You configure this via Compute Policies in vCenter, and you can read more about the process here.

If you don’t like reading documentation though, I’ve got some pictures you can look at instead. Log in to your vSphere Client and click on Policies and Profiles.

Then click on Compute Policies and click Add.

Under Policy type, there’s a dropdown box where you can select Disable DRS vMotion.

You’ll then give the policy a Name and Description. You then need to select the tag category you want to use.

Once you’ve selected the tag category you want to use, you can select the tags you want to apply to the policy.

Click on Create to create the Compute Policy, and you’re good to go.
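If you’d rather script this than click through the UI, the same policy can also be created via the vSphere Automation REST API. The sketch below is an assumption-laden illustration rather than a definitive recipe: the endpoint path, the capability identifier, and the tag ID format reflect my reading of the public API reference, and the hostname, credentials, policy name, and tag are all placeholders, so check the API documentation for your SDDC version before relying on anything like this.

```python
# Hypothetical sketch: create a "Disable DRS vMotion" compute policy via the
# vSphere Automation REST API. The endpoint path and capability string are
# assumptions based on the public API reference - verify them against the
# documentation for your vCenter/SDDC version before use.
import requests

VCENTER = "vcenter.sddc-x-x-x-x.vmwarevmc.com"  # placeholder SDDC vCenter FQDN

# Authenticate and grab a session token (placeholder credentials).
session = requests.post(
    f"https://{VCENTER}/api/session",
    auth=("cloudadmin@vmc.local", "your-password"),
)
session.raise_for_status()
headers = {"vmware-api-session-id": session.json()}

policy_spec = {
    "name": "no-vmotion-tagged-vms",  # placeholder policy name
    "description": "Keep tagged VMs pinned to their current host",
    # Capability identifier is an assumption - check the API reference.
    "capability": "com.vmware.vcenter.compute.policies.capabilities.disable_drs_vmotion",
    "vm_tag": "urn:vmomi:InventoryServiceTag:xxxx:GLOBAL",  # placeholder tag ID
}

resp = requests.post(
    f"https://{VCENTER}/api/vcenter/compute/policies",
    json=policy_spec,
    headers=headers,
)
resp.raise_for_status()
print("Created compute policy:", resp.json())
```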

Memory Overcommit Techniques

I’ve had a few customers ask me about how some of the traditional VMware resource management technologies translate to VMware Cloud on AWS. The good news is there’s quite a lot in common with what you’re used to with on-premises workload management, including memory overcommit techniques. As with anything, the effectiveness or otherwise of these technologies really depends on a number of different factors. If you’re interested in finding out more, I recommend checking out this article.

General Resource Management

Can I use the resource management mechanisms I know and love, such as Reservations, Shares, and Limits? You surely can, and you can read more about that capability here.

Conclusion

Just as you would with on-premises vSphere workloads, you do need to put some thought into your workload resource planning prior to moving your VMs onto the magic sky computers. The good news, however, is that there are quite a few smart technologies built into VMware Cloud on AWS that mean you’ve got a lot of flexibility when it comes to managing your workloads.