Random Short Take #80

Welcome to Random Short Take #80. Lots of press release news this week and some parochial book recommendations. Let’s get random.

Random Short Take #73

Welcome to Random Short Take #73. Let’s get random.

VMware Cloud on AWS – TMCHAM – Part 4 – VM Resource Management

In this episode of Things My Customers Have Asked Me (TMCHAM), I’m going to delve into some questions around resource management for VMs running on the VMware-managed VMware Cloud on AWS platform, and what customers need to know to make it work for them.

Distributed Resource Scheduler

If you’ve used VMware vSphere before, it’s likely that you’ve come across the Distributed Resource Scheduler (DRS) capability. DRS is a way to keep workloads evenly distributed across nodes in a cluster, and moves VMs around based on various performance considerations. The cool thing about this is that you don’t need to manually move workloads around when a particular guest or host goes a little nuts from a CPU or memory usage perspective. There are cases, however, when you might not want your VMs to be moving around too much. In that instance, you’ll want to create what’s called a “Disable DRS vMotion” policy. You configure this via Compute Policies in vCenter, and you can read more about the process here.

If you don’t like reading documentation though, I’ve got some pictures you can look at instead. Log in to your vSphere Client and click on Policies and Profiles.

Then click on Compute Policies and click Add.

Under Policy type, there’s a dropdown box where you can select Disable DRS vMotion.

You’ll then give the policy a Name and Description, and select the tag category you want to use.

Once you’ve selected the tag category, you can select the tags you want to apply to the policy.

Click on Create to create the Compute Policy, and you’re good to go.
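If you prefer the command line, the tag category and tag that the policy relies on can be created with PowerCLI as well. This is just a sketch – the category and tag names (“DRS-Policy” and “No-vMotion”) and the VM name are made up for the example, so substitute whatever makes sense in your environment.

PowerCLI C:\> New-TagCategory -Name "DRS-Policy" -Cardinality Single -EntityType VirtualMachine
PowerCLI C:\> New-Tag -Name "No-vMotion" -Category "DRS-Policy"
PowerCLI C:\> Get-VM -Name "CriticalVM01" | New-TagAssignment -Tag "No-vMotion"

Once the tag is assigned, the compute policy you created above will pick up any VMs carrying it.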

Memory Overcommit Techniques

I’ve had a few customers ask me about how some of the traditional VMware resource management technologies translate to VMware Cloud on AWS. The good news is there’s quite a lot in common with what you’re used to with on-premises workload management, including memory overcommit techniques. As with anything, the effectiveness or otherwise of these technologies really depends on a number of different factors. If you’re interested in finding out more, I recommend checking out this article.

General Resource Management

Can I use the resource management mechanisms I know and love, such as Reservations, Shares, and Limits? You surely can, and you can read more about that capability here.
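As a quick illustration, these settings can be applied with PowerCLI too. This is a sketch only – the VM name and values here are made up for the example, and parameter names can vary a little between PowerCLI versions.

PowerCLI C:\> Get-VM "App01" | Get-VMResourceConfiguration | Set-VMResourceConfiguration -MemReservationGB 4 -CpuSharesLevel High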

Conclusion

Just as you would with on-premises vSphere workloads, you do need to put some thought into your workload resource planning prior to moving your VMs onto the magic sky computers. The good news, however, is that there are quite a few smart technologies built into VMware Cloud on AWS that mean you’ve got a lot of flexibility when it comes to managing your workloads.

Random Short Take #32

Welcome to Random Short Take #32. Lots of good players have worn 32 in the NBA. I’m a big fan of Magic Johnson, but honourable mentions go to Jimmer Fredette and Blake Griffin. It’s a bit of a weird time around the world at the moment, but let’s get to it.

  • Veeam 10 was finally announced a little while ago and is now available for deployment. I work for a service provider, and we use Veeam, so this article from Anthony was just what I was after. There’s a What’s New article from Veeam you can view here too.
  • I like charts, and I like Apple laptops, so this chart was a real treat. The lack of ports is nice to look at, I guess, but carrying a bag of dongles around with me is a bit of a pain.
  • VMware recently made some big announcements around vSphere 7, amongst other things. Ather Beg did a great job of breaking down the important bits. If you like to watch videos, this series from VMware’s recent presentations at Tech Field Day 21 is extremely informative.
  • Speaking of VMware Cloud Foundation, Cormac Hogan recently wrote a great article on getting started with VCF 4.0. If you’re new to VCF – this is a great resource.
  • Leaseweb Global recently announced the availability of 2nd Generation AMD EPYC powered hosts as part of its offering. I had a chance to speak with Mathijs Heikamph about it a little while ago. One of the most interesting things he said, when I questioned him about the market appetite for dedicated servers, was “[t]here’s no beating a dedicated server when you know the workload”. You can read the press release here.
  • This article is just … ugh. I used to feel a little sorry for businesses being disrupted by new technologies. My sympathy is rapidly diminishing though.
  • There’s a whole bunch of misinformation on the Internet about COVID-19 at the moment, but sometimes a useful nugget pops up. This article from Kieren McCarthy over at El Reg delivers some great tips on working from home – something more and more of us (at least in the tech industry) are doing right now. It’s not all about having a great webcam or killer standup desk.
  • Speaking of things to do when you’re working at home, JB posted a handy note on what he’s doing when it comes to lifting weights and getting in some regular exercise. I’ve been using this opportunity to get back into garage weights, but apparently it’s important to lift stuff more than once a month.

Cohesity Basics – Excluding VMs Using Tags – Real World Example

I’ve written before about using VM tags with Cohesity to exclude VMs from a backup. I wanted to write up a quick article using a real world example in the test lab. In this instance, we had someone deploying 200 VMs over a weekend to test a vendor’s storage array with a particular workload. The problem was that I had Cohesity set to automatically protect any new VMs that are deployed in the lab. This wasn’t a problem from a scalability perspective. Rather, the problem was that we were backing up a bunch of test data that didn’t dedupe well and didn’t need to be protected by what are ultimately finite resources.

As I pointed out in the other article, creating tags for VMs and using them as a way to exclude workloads from Cohesity is not a new concept, and is fairly easy to do. You can also apply the tags in bulk using the vSphere Web Client if you need to. But a quicker way to do it (and something that can be done post-deployment) is to use PowerCLI to search for VMs with a particular naming convention and apply the tags to those.

Firstly, you’ll need to log in to your vCenter.

PowerCLI C:\> Connect-VIServer vCenter

In this example, the test VMs are deployed with the prefix “PSV”, so this makes it easy enough to search for them.

PowerCLI C:\> get-vm | where {$_.name -like "PSV*"} | New-TagAssignment -Tag "COH-NoBackup"

This assumes that the tag already exists on the vCenter side of things, and you have sufficient permissions to apply tags to VMs. You can check your work with the following command.

PowerCLI C:\> get-vm | where {$_.name -like "PSV*"} | Get-TagAssignment
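If the tag doesn’t exist yet, you can create it first with something like the following. Note that the category name “Backup” here is an assumption for the example – use whatever tag category suits your environment.

PowerCLI C:\> New-Tag -Name "COH-NoBackup" -Category "Backup"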

One thing to note: if you’ve updated the tags of a bunch of VMs in your vCenter environment, you may notice that the objects aren’t immediately excluded from the Protection Job on the Cohesity side of things. The reason for this is that, by default, Cohesity only refreshes vCenter source data every 4 hours. One way to force the update is to manually refresh the source vCenter in Cohesity. To do this, go to Protection -> Sources. Click on the ellipsis on the right-hand side of the vCenter source you’d like to refresh, and select Refresh.

You’ll then see that the tagged VMs are excluded in the Protection Job. Hat tip to my colleague Mike for his help with PowerCLI. And hat tip to my other colleague Mike for causing the problem in the first place.

VMware – Unmounting NFS Datastores From The CLI

This is a short article, but hopefully useful. I did a brief article a while ago linking to some useful articles about using NFS with VMware vSphere. I recently had to do some maintenance on one of the arrays in our lab and I was having trouble unmounting the datastores using the vSphere client. I used some of the commands in this KB article (although I don’t have SIOC enabled) to get the job done instead.

The first step was to identify if any of the volumes were still mounted on the individual host.

[root@esxihost:~] esxcli storage nfs list
Volume Name  Host            Share                 Accessible  Mounted  Read-Only   isPE  Hardware Acceleration
-----------  --------------  --------------------  ----------  -------  ---------  -----  ---------------------
Pav05        10.300.300.105  /nfs/GB000xxxxxbbf97        true     true      false  false  Not Supported
Pav06        10.300.300.106  /nfs/GB000xxxxxbbf93        true     true      false  false  Not Supported
Pav01        10.300.300.101  /nfs/GB000xxxxxbbf95        true     true      false  false  Not Supported

In this case there are three datastores that I haven’t been able to unmount.

[root@esxihost:~] esxcli storage nfs remove -v Pav05
[root@esxihost:~] esxcli storage nfs remove -v Pav06
[root@esxihost:~] esxcli storage nfs remove -v Pav01
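If you have more than a handful of datastores, a loop saves some typing. This is a rough sketch only – it pulls the volume names out of the list output (skipping the two header lines, and assuming no spaces in the names) and removes each one, so make sure nothing is still running on those datastores first.

[root@esxihost:~] for v in $(esxcli storage nfs list | awk 'NR>2 {print $1}'); do esxcli storage nfs remove -v "$v"; done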

Now there should be no volumes mounted on the host.

[root@esxihost:~] esxcli storage nfs list
[root@esxihost:~]

See, I told you it would be quick.

VMware vSphere and NFS – Some Links

Most of my experience with vSphere storage has revolved around various block storage technologies, such as DAS, FC and iSCSI. I recently began an evaluation of one of those fresh new storage startups running an NVMe-based system. We didn’t have the infrastructure to support NVMe-oF in our lab, so we’ve used NFS to connect the datastores to our vSphere environment. Obviously, at this point, it is less about maximum performance and more about basic functionality. In any case, I thought it might be useful to include a series of links regarding NFS and vSphere that I’ve been using to both get up and running, and troubleshoot some minor issues we had getting everything running. Note that most of these links cover vSphere 6.5, as our lab is currently running that version.

Basics

Create an NFS Datastore

How to add NFS export to VMware ESXi 6.5

NFS Protocols and ESXi

Best Practice

Best Practices for running VMware vSphere on Network Attached Storage

Troubleshooting

Maximum supported volumes reached (1020652)

Increasing the default value that defines the maximum number of NFS mounts on an ESXi/ESX host (2239)

Troubleshooting connectivity issues to an NFS datastore on ESX and ESXi hosts (1003967)
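As a side note, the maximum NFS mounts KB above boils down to bumping an advanced setting on each host. It can be done from the CLI with something like the following – 256 is just an example value, and the KB also covers the related TCP/IP heap settings that may need adjusting at the same time.

[root@esxihost:~] esxcli system settings advanced set -o /NFS/MaxVolumes -i 256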

VMware – VMware HealthAnalyzer vha.properties

I was using VMware’s HealthAnalyzer tool (version 5.2.0) recently to perform a vSphere health check for a customer and encountered the following error when using a read-only account.

“A service error occurred during collection” (you might also see “A runtime error occurred during collection” pop up).

In addition to the Read-Only permissions to the vCenter user account, you need to assign “Profile-driven storage > Profile-driven storage view” privileges to the user account in order to collect Storage Policy data. If, for some reason, you can’t do that (I was working with a third-party in this case), you need to edit the vha.properties file. This is located at:

<VHA_Instance>/usr/share/vha/tomcat/webapps/vha/WEB-INF/classes/vha.properties

You’ll need to use vi to set the following properties to false:

collection.storagepolicies.enabled
collection.iscsiport.enabled
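After the edit, the relevant lines in vha.properties should look like this:

collection.storagepolicies.enabled=false
collection.iscsiport.enabled=false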

Note that by doing so some things won’t be scanned and some recommendations won’t be made.

VMware – vSphere Basics – vCenter 6.5 Upgrade Scenarios

I did an article on the vSphere 6 Platform Services Controller a while ago. After attending a session on changes in vSphere 6.5 at vFORUM, I thought it would be a good idea to revisit this, and frame it in the context of vCenter 6.5 upgrades.


vSphere Components

In vCenter 6.5, the architecture is a bit different to 5.x. With the PSC, you get:

  • VMware vCenter Single Sign-On
  • License service
  • Lookup service
  • VMware Directory Services
  • VMware Certificate Authority

And the vCenter Server Service gives you:

  • vCenter Server
  • VMware vSphere Web Client
  • VMware vSphere Auto Deploy
  • VMware vSphere ESXi Dump Collector
  • vSphere Syslog Collector on Windows and vSphere Syslog Service for VMware vCenter Server Appliance
  • vSphere Update Manager


Architecture Choices

There are some basic configurations that you can go with, in which the PSC is either embedded in or external to the vCenter Server, though I generally don’t recommend these for anything outside of a lab or test environment. The choice here will be dependent on the sizing and feature requirements of your environment.

If you want to use Enhanced Linked Mode, an external PSC is recommended. If you want it highly available, you’ll still need to use a load balancer. This VMware KB article provides some handy insights and updates from 6.0.


vCenter Upgrade Scenarios

The upgrade architecture you’ll choose depends on where your vCenter services currently reside. If your vCenter server has SSO installed, it becomes a vCenter Server with an embedded PSC.

If, however, some of the vSphere components are installed on separate VMs then the Web Client and Inventory Service become part of the “Management Node” (your vCenter box) and the PSC (with SSO) is separate/external.

Note also that vSphere 6.5 still requires a load balancer if you want a highly available Platform Services Controller.


Final Thoughts

This is not something that’s necessarily going to come up each day. But if you’re working either directly with VMware, via an integrator or doing it yourself, your choice of vCenter architecture should be a key consideration in your planning activities. As with most upgrades to key infrastructure components, you should take the time to plan appropriately.

VMware vSphere Next Beta Applications Are Now Open

VMware recently announced that applications for the next VMware vSphere Beta Program are now open. People wishing to participate in the program can now indicate their interest by filling out this simple form. The vSphere team will grant access to the program to selected candidates in stages. This vSphere Beta Program leverages a private Beta community to download software and share information. There will be discussion forums, webinars, and service requests to enable you to share your feedback with VMware.

So what’s involved? Participants are expected to:

  • Accept the Master Software Beta Test Agreement prior to visiting the Private Beta Community;
  • Install beta software within 3 days of receiving access to the beta product;
  • Provide feedback within the first 4 weeks of the beta program;
  • Submit Support Requests for bugs, issues and feature requests;
  • Complete surveys and beta test assignments; and
  • Participate in the private beta discussion forum and conference calls.

All testing is free-form and you’re encouraged to use the software in ways that interest you. This will provide VMware with valuable insight into how you use vSphere in real-world conditions and with real-world test cases.

Why participate? Some of the many reasons to participate include:

  • Receiving early access to the vSphere Beta products;
  • Interacting with the vSphere Beta team consisting of Product Managers, Engineers, Technical Support, and Technical Writers;
  • Providing direct input on product functionality, configurability, usability, and performance;
  • Providing feedback influencing future products, training, documentation, and services; and
  • Collaborating with other participants, learning about their use cases, and sharing advice and learnings.

I’m a big fan of public beta testing. While we’re not all experts on how things should work, it’s a great opportunity to at least have your say on how you think vSphere should work. While the guys in vSphere product management may not be able to incorporate every idea you have, you’ll at least have an opportunity to contribute feedback and give VMware some insight into how their product is being used in the wild. In my opinion this is extremely valuable for both VMware and us, the consumers of their product. Plus, you’ll get a sneak peek into what’s coming up.

So, if you’re good with NDAs and have some time to devote to testing next-generation vSphere, this is the program for you. Head over to the website and check it out.