I passed my VCP410 exam yesterday with a score of 450. I’m pleased to have finally gotten it out of the way as, even though I had signed up for the second-shot voucher with VMware, there seem to be no free slots in the 4 testing centres in Brisbane this month. After the epic fail of my previous employer to stay afloat, I also had to pony up the AU$275 myself, so I felt a little bit more pressure than I normally would when taking one of these exams.
I found the following resources of particular use:
I also recommend you read through as many of the reference materials and admin guides as you can, and remember that what you’re taught in the course doesn’t always correlate with what you see in the exam. Good luck!
Yesterday a colleague of mine was having some issues performing sVMotions on guests sitting in a development ESX 3.5 cluster. He kept getting an error along the lines of:
“IP address change for 10.x.x.x to 10.x.x.y not handled, SSL certificate verification is not enabled.”
They had changed the Service Console IP address of the host manually to perform some “secure” guest migrations previously (don’t ask me why – there’s always my way or the hard way), and basically the IP address of the host hadn’t been updated in the vpxa.cfg file. VMware has a 2-3 step process to resolve the issue, which ultimately requires you to pull the host out of the cluster and re-add it to vCenter. It’s not a big deal, but it can be confusing when things seem to be working, but aren’t really. You can read more about it here.
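If you want to see the problem for yourself before re-adding the host, you can check what vpxa thinks the host’s IP is from the Service Console. A minimal sketch, assuming an ESX 3.x host with the usual file locations and service names – verify against the KB article before doing anything on a production box:

```shell
# Show the IP address the vpxa agent has recorded for this host
grep hostIp /etc/opt/vmware/vpxa/vpxa.cfg

# If it shows the old Service Console address, the supported fix is to
# remove the host from vCenter and re-add it, which regenerates
# vpxa.cfg. After any change, bounce the management agents:
service vmware-vpxa restart
service mgmt-vmware restart
```

The restart of mgmt-vmware (hostd) will briefly disconnect the host from vCenter, so do it in a maintenance window.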
I’ve created a new page, imaginatively titled “Articles”, that has a number of articles I’ve done recently covering various simple operational or implementation-focused tasks. You may or may not find them useful. I hope this doesn’t become my personal technical documentation graveyard, although I have a feeling that a number of the documents will probably stay at version 0.01 until such time as the underlying technology no longer exists. Enjoy!
While everyone is talking about new VMwares, I’d like to focus on the mundane stuff. Creating a VMFS datastore on an ESX host is a relatively trivial activity, and something that you’ve probably done a few times before. But I noticed, the other day, some behaviour that I can only describe as “really silly”.
I needed to create a datastore on a host that only had local SCSI disks attached in a single RAID-1 container. I wanted to do this post-installation for reasons that I’ll discuss at another time. Here’s a screenshot from the Add Storage Wizard.
Notice the problem with the first option? Yep, you can blow away your root filesystem. In Australia, we would describe this situation as “being rooted”, but probably not for the reasons you think.
What I haven’t had a chance to test yet, having had limited access to the lab lately, is whether the Wizard is actually “silly” enough to let you go through with it. I’ve seen running systems happily blow themselves away with a miscued “dd” command – so I’m going to assume yes. I hope to have a little time in the next few weeks to test this theory.
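In the meantime, it’s worth knowing which console device actually holds your root filesystem before you let the wizard loose. A quick sketch, assuming an ESX 3.x Service Console (device names will differ on your host):

```shell
# Which device is the Service Console root filesystem on?
df -h /                 # e.g. shows /dev/sda2 mounted on /

# Map existing VMFS volumes to their console devices (ESX 3.x)
esxcfg-vmhbadevs -m

# List all devices visible to the console, so you can tell the new
# RAID-1 container apart from the boot disk before creating a VMFS
esxcfg-vmhbadevs
```

If the device the wizard offers you matches the one `df` reports for `/`, step away from the mouse.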
I’ve been nuts deep in a SAN migration project recently and promptly missed the announcement that VMware VirtualCenter 2.5 Update 4 is now available for download. I haven’t had time to put it through its paces yet, but noticed in the release notes that some plugins have been updated, some more useful things have been added to Virtual Machine monitoring, and this little nugget with esxcfg-mpath (a command dear to my heart) still isn’t fixed. But, hey, it’s still better than Sun’s CAM.
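For reference, the command in question is run from the Service Console and lists each LUN with the state of its paths – a minimal example, assuming an ESX 3.5 host:

```shell
# List all LUNs and the state of each path to them (Service Console)
esxcfg-mpath -l
```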
A few weekends ago I did some failover testing for a client using 2 EMC CLARiiON CX4-120 arrays, MirrorView/Asynchronous over iSCSI and a 2-node ESX cluster at each site. The primary goal of the exercise was to ensure that we could promote mirrors at the DR site if need be and run Virtual Machines off the replicas. Keep in mind that the client, at this stage, isn’t running SRM, just MirrorView and ESX. I’ve read many, many articles about how MirrorView could be an awesome addition to the DR story, and in the past this has rung true for my clients running Windows hosts. But VMware ESX isn’t Windows, funnily enough, and since the client hadn’t put any production workloads on the clusters yet, we decided to run it through its paces to see how it worked outside of a lab environment.
One thing to consider when using layered applications like SnapView or MirrorView with the CLARiiON is that the LUNs generated by these applications are treated, rightly so, as replicas by the ESX hosts. This makes sense, of course, as the secondary image in a MirrorView relationship is a block-copy replica of the source LUN. As a result, there are rules in play for VMFS LUNs regarding which volumes can be presented to what, and how they’ll be treated by the host. There are variations on the LVM settings that can be configured on the ESX node. These are outlined here. Duncan of Yellow Bricks fame also discusses them here. Both articles are well written and clearly explain why you would take a particular approach and use particular LVM settings. However, what neither article addresses, at least clearly enough for my dumb arse, is what to do when what you see and what you expect to see are different things.
In short, we wanted to set the hosts to “State 3 – EnableResignature=0, DisallowSnapshotLUN=0”, because the hosts at the DR site had never seen the original LUNs before, nor did we want to go through and resignature the datastores at the failover site and have to put up with datastore volume labels that looked unsightly. Here’s some pretty screenshots of what your Advanced – LVM settings might look like after you’ve done this.
But we wanted it to look like this:
However, when I set the LVM settings accordingly, admin-fractured the LUN, promoted the secondary and presented it to the failover ESX host, I was able to rescan and see the LUN, but was unable to see any data on the VMFS datastore. Cool. So we set the LVM settings to “State 2 – EnableResignature=1, (DisallowSnapshotLUN is not relevant)”, and were able to resignature the LUNs and see the data, register a virtual machine and boot okay. Okay, so why doesn’t State 3 give me the desired result? I still don’t know. But I do know that a call to a friendly IE at the local EMC office tipped me off to using the VI Client connected directly to the failover ESX host, rather than VirtualCenter. Lo and behold, this worked fine, and we were able to present the promoted replica, see the data, and register and boot the VMs at the failover site. I’m speculating that it’s something very obvious that I’ve missed here, but I’m also of the opinion that this should be mentioned in some of those shiny whitepapers and knowledge books that EMC like to put out promoting their solution. If someone wants to correct me, feel free to wade in at any time.
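For those who prefer the Service Console to clicking through Advanced Settings, the same LVM options can be set with esxcfg-advcfg. A sketch for ESX 3.5 – the rescan adapter name is a placeholder, so substitute your own iSCSI vmhba:

```shell
# State 3: present the replica without resignaturing
esxcfg-advcfg -s 0 /LVM/EnableResignature
esxcfg-advcfg -s 0 /LVM/DisallowSnapshotLUN

# State 2: resignature the snapshot/replica LUNs instead
esxcfg-advcfg -s 1 /LVM/EnableResignature

# Confirm the current values
esxcfg-advcfg -g /LVM/EnableResignature
esxcfg-advcfg -g /LVM/DisallowSnapshotLUN

# Rescan so the promoted replica shows up (adjust the adapter name)
esxcfg-rescan vmhba32
```

Remember these are host-wide settings, so set them back once the failover exercise is done.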
A colleague pointed out to me recently that VirtualCenter no longer has the annoying habit of creating vmdk files with the same labels across multiple datastores. For example, on VC 2.5 Build 104215, when creating a VM with disks across multiple storage locations, they are labelled identically:
The good news is this seems fixed in the latest builds. I haven’t had time to confirm whether this is a VirtualCenter or ESX function, but it makes VCB deployments a little simpler …
I spent some time last week deploying VirtualCenter 2.5 Update 3. As I mentioned previously, some things have improved. And some things haven’t. I don’t know how long Windows 2008 has been out, although I’m sure the Exchange Guy could help me out there. In any case, it’s been more than a few weeks. So I had to build a few Windows 2008 templates last week. But, hey, guess what? When you deploy from template you get this gem:
Cool. Very helpful. If you were using RIS or WDS or whatever Microsofties are calling it today, this wouldn’t be a problem. But we weren’t. There is a workaround that has been documented here. It basically involves telling VirtualCenter the guest is Vista rather than Windows 2008. It then runs through and lets you customise the guest before deploying. I can hear MCSEs crying in anguish already … but it _does_ work. But it’s messy. And this was after I’d shown the customer how “streamlined” the VC installation process was. So, maybe there’s a bit more to do before it’s pretty enough.
With all of the hullabaloo around the licensing issue in 3.5 Update 2, some people might have missed the fact that there have been some really neat improvements in the product.
One of my pet irritations is that a lot of environments have dysfunctional DNS. In the past, this has caused HA to break, unless we put entries in the hosts files of the ESX hosts and the VirtualCenter host. Starting with ESX/ESXi 3.5 Update 2, DNS resolution or /etc/hosts file entries are no longer required to configure VMware HA. It’s now down to VirtualCenter to do that for you. This is no excuse to not have functioning DNS in place, but it does make situations where the ESX environment is the first thing going in a little more palatable.
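For anyone still on pre-Update 2 builds, the old workaround was hosts-file entries on every ESX host (and the VirtualCenter server), along these lines – the names and addresses below are made up for illustration:

```
# /etc/hosts on each ESX host; mirror the entries on the VC server
10.0.0.10   vc01.example.local    vc01
10.0.0.11   esx01.example.local   esx01
10.0.0.12   esx02.example.local   esx02
```

Every host in the HA cluster needs to be able to resolve every other host, which is exactly why broken DNS used to hurt so much.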
The Remote Command Line Interface (Remote CLI), which was previously supported only on ESXi (with support for ESX only in conjunction with Storage VMotion), is now fully supported for both ESXi 3.5 Update 2 and ESX 3.5 Update 2.
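The Storage VMotion piece of the Remote CLI is the svmotion command, which is the easiest way to try it out. A sketch assuming the RCLI 3.5 package on a management workstation, with a made-up vCenter name and account:

```shell
# Interactive mode prompts for datacenter, VM and target datastore,
# which is the gentlest way to drive a Storage VMotion from the RCLI.
# Server and username here are placeholders.
svmotion.pl --interactive --server vc01.example.local --username administrator
```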
As of ESX/ESXi 3.5 Update 2, VMware now supports up to 192 logical (virtual) CPUs per host, provided the host has no more than 170 VMs, and there are no more than three virtual floppy devices or virtual CDROM devices configured on the host at any given time. This is cool, although unfortunately it’s not something I’ve had time to test in the lab.
Hot virtual disk extension support has been added to this release, which makes the CLI-averse a little more comfortable with virtual disk expansion. Hot extend is supported for monolithic disks in VMFS that do not have a VM snapshot.
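For the record, the CLI equivalent is vmkfstools. A hedged example for ESX 3.5 Update 2 – the datastore path and sizes are hypothetical, and the disk must be a monolithic VMFS disk with no snapshots:

```shell
# Grow a running VM's virtual disk to 40 GB in place (hot extend).
# -X takes the NEW total size, not the amount to add.
vmkfstools -X 40G /vmfs/volumes/datastore1/myvm/myvm.vmdk
```

Note that this only grows the vmdk; you still have to extend the partition and filesystem inside the guest (diskpart, or your guest OS tool of choice) afterwards.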
So, there are some cool things, besides the obvious boneheaded error mentioned previously. But don’t get too excited, as I noticed that VirtualCenter 2.5 Update 3 has just been released. Get it here and have a look through the release notes here. This release seems to focus on resolving issues, of which there are many in VirtualCenter, and this is no bad thing. There is a FLEX license server upgrade included, but (and I imagine they’ve poached coders from Sun for this functionality) “the license server will not be automatically upgraded when using the VMware Infrastructure Management Installer to upgrade an existing installation”. No, that would be too easy. There’s a standalone installer instead. Hopefully this will decrease the number of license server calls I get, but it’s still running FLEX, isn’t it?
I’ve been doing a few implementations lately, and ran across an old issue that I’d not seen since the early releases of VirtualCenter 2. If you’re silly enough, like I was, to install VC from a zip file, and then have the audacity to use an iso image to perform an upgrade, you’re going to have trouble. The details of this particular little quirk are here, and things haven’t changed much, as I had this problem recently upgrading from VC 2.5 U1 to U2. I’ve dealt with some insane software before (ranking Sun’s Common Array Manager as some of the kludgiest code to install and use that I’ve come across), but I think this particular error is just really silly. I’m not sure whether it’s even a problem with VMware’s product, or just the ineptness of MSI. So, in short, do your VC installations and upgrades from one source consistently – don’t mix the zip and the iso.
And while I’m at it, if you’re deploying VMware Update Manager in the field, which I think is a commendable thing to do, don’t forget that it’s not delivering VC updates, but is quite happy to provide ESX updates, including point releases. So when VMware recently released ESX 3.5 Update 2, VUM went out and retrieved it. But how many of you thought about upgrading your version of VirtualCenter to Update 2 before remediating the ESX hosts? Well, now you know what we’ve been told since the old days of ESX 2.x / VC 1.x – don’t be trying to run new versions of ESX with old versions of VC.