VMware – Changed Block Tracking

This is just a brief post for my own reference. A friend of mine had some backup problems on a VM recently. Seems they were using VMware's Changed Block Tracking (CBT) and had since added a physical-mode RDM to the VM. CBT doesn't play nicely with physical-mode RDMs, and the backups promptly stopped working. The quick solution was to clone the VM and delete the stale data. I'm sure there's a more thorough solution, but I rarely do more than scratch the surface. CBT has been around a while, and is a pretty nifty feature. The VMguy has a good post on CBT and how it works. Duncan has a bit more info. And VMware themselves have some useful KB articles here and here.
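For my own future reference, the more thorough fix (as I understand it – treat this as a hedged sketch, not gospel) is to reset CBT rather than clone: power the VM off, set the tracking parameters below in the VM's advanced configuration, delete the leftover *-ctk.vmdk files from the VM's folder, then power it back on and let your backup product re-enable tracking from scratch. The scsi0:0 entry is just an example – there's one per tracked disk.

ctkEnabled = "FALSE"
scsi0:0.ctkEnabled = "FALSE"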

File system Alignment redux

So I wrote a post a little while ago about filesystem alignment, and why I think it’s important. You can read it here. Obviously, the issue of what to do with guest OS file systems comes up from time to time too. When I asked a colleague to build some VMs for me in our lab environment with the system disks aligned he dismissed the request out of hand and called it an unnecessary overhead. I’m kind of at that point in my life where the only people who dismiss my ideas so quickly are my kids, so I called him on it. He promptly reached for a tattered copy of EMC’s Techbook entitled “Using EMC CLARiiON Storage with VMware vSphere and VMware Infrastructure” (EMC P/N h2197.5 – get it on Powerlink). He then pointed me to this nugget from the book.

I couldn’t let it go, so I reached for my copy (version 4 versus his version 3.1), and found this:

We both thought this wasn't terribly convincing one way or another, so we decided to test it out. The testing wasn't super scientific, nor was it particularly rigorous, but I think we got the results that we needed to move forward. We used Passmark's PerformanceTest 7.0 to perform some basic disk benchmarks on 2 VMs – one aligned and one not. These are the settings we used for Passmark:

As you can see it's a fairly simple setup that we're running with. Now here are the results of the unaligned VM benchmark.

And here are the results for the aligned VM.

We ran the tests a few more times and got similar results. So, yeah, there's a marginal difference in performance. And you may not find it worthwhile pursuing. But I would think, in a large environment like ours where we have 800+ VMs in Production, surely any opportunity to reduce the workload on the array should be taken? Of course, this all changes with Windows Server 2008, which aligns new partitions to a 1MB boundary by default. So maybe you should just sit tight until then?

File system Alignment – All the kids are doing it

Ever since I was a boy, or, at least, ever since I started working with CLARiiON arrays (when R11 was, er, popular), I've been aware of the need to align file systems that lived on the array. I didn't come to this conclusion myself, but instead found it written in some performance-focused whitepapers on Powerlink. I used to use diskpar.exe with Windows 2000, and fdisk for Linux hosts. As time moved on Microsoft introduced diskpart.exe, which did a bunch of other partition things as well. So it sometimes surprises me that people still debate the issue, at least from a CLARiiON perspective. I'm not actually going to go into why you should do it, but I am going to include a number of links that I think are useful when it comes to this issue.

It pains me to say this, but Microsoft have probably the best publicly available article on the issue here. The succinctly titled "Disk performance may be slower than expected when you use multiple disks in Windows Server 2003, in Windows XP, and in Windows 2000" is a pretty thorough examination of why you may or may not see dodgy performance from that expensive array you just bought.
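As an aside, a quick way to check where an existing partition starts is wmic – the default 63-sector (31.5KB) offset on Windows 2003 and older is the one you're trying to avoid:

wmic partition get BlockSize, StartingOffset, Name, Index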

Of course, it doesn't mean that the average CLARiiON owner gets any less cranky with the situation. I can only assume that the sales guy has given them such a great spiel about how awesome their new array is that they couldn't possibly need to do anything further to improve its performance. If you have access to the EMC Community forums, have a look at this and this.

If you have access to Powerlink you should really read the latest performance whitepaper relating to FLARE 29. It has a bunch of great stuff in it that goes well beyond file system alignment. And if you have access to the knowledge base, look for emc143897 – Do disk partitions created by Windows 2003 64-bit servers require file system alignment? – Hells yes they do.

emc151782 – Navisphere Analyzer reports disk crossings even after aligning disk partitions using the DISKPAR tool. – Disk crossings are bad. Stripe crossings are not.

emc135197 – How to align the file system on an ESX volume presented to a Windows Virtual Machine (VM). Basic stuff, but important to know if you’ve not had to do it before.
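If you haven't had to do it before, here's roughly what it looks like with diskpart.exe inside a Windows 2003 guest – a sketch only, with disk 1 and the 64KB offset as examples (check the performance whitepapers for the offset that suits your stripe element size, and note that plenty of people just use 1MB):

diskpart
DISKPART> select disk 1
DISKPART> create partition primary align=64
DISKPART> assign letter=E
DISKPART> exit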

Finally, Duncan Epping’s post on VM disk alignment has some great information, in an easy to understand diagram. I also recommend you look at the comments section, because that’s where the fun starts.

Kids, if someone says that file system alignment isn’t important, punch them in the face. In a Windows environment, get used to using diskpart.exe. In an ESX environment, create your VMFS using the vSphere client, and then make sure you’re aligning the file systems of the guests as well. Next week I’ll try and get some information together about why stripe crossings on a CLARiiON aren’t the end of the world, but disk crossings are the first sign of the apocalypse. That is all.

VMware Lab Manager, ssmove.exe and why I don’t care

Sounds like a depressing topic, but really it's not all bad. As I'd mentioned previously, I've spent a good chunk of the past 4 months commissioning a CLARiiON CX4-960 array and migrating data from our production CX3-40f and CX700. All told, there's about 112TB in use, and I've moved about 90TB so far. I've had to use a number of different methods, including Incremental SAN Copy, sVMotion, vmkfstools, and, finally, ssmove. For those of you who pay attention to more knowledgeable people's blogs, Scott Lowe had a succinct but useful summary of how to use the ssmove utility here. So I had to move what amounted to about 3TB of SATA-II configs in a Lab Manager 3.0.1 environment. You can read the VMware KB article for the instructions, but ultimately it's a very simple process. Except when it doesn't work. By "doesn't work" I mean waiting 25 hours and seeing no progress. So I got to spend about 6 hours on the phone with the Live queue, and the SR took a long time to resolve. The utility really doesn't provide much in the way of logging, nor does it give you much to go on when it has quietly timed out rather than actually doing anything. It's always the last 400GB that we get stuck on with data migrations, isn't it?

The solution involved manually migrating the vmdk files and then updating the database. There's an internal-only KB article that refers to the process, but VMware don't really want to tell you about it, because it's a bit hairy. Hairier still was the fact that we only had a block replica of the environment, and rolling back would have meant losing all the changes that I'd done over the weekend. The fortunate thing is that this particular version of ssmove does a copy, not a move, so we were able to cancel the failed ssmove process and still use the original, problematic configuration. If you find yourself needing to migrate LM datastores and ssmove isn't working for you, let me know and I can send you the KB reference for the process to do it manually.
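I won't reproduce the KB process here (the database surgery is the bit you really want VMware on the phone for), but for what it's worth the vmdk-copy half of the job is just garden-variety vmkfstools from the service console – the paths below are made up for illustration, and bear in mind Lab Manager's linked clone chains make it messier than a single command suggests:

vmkfstools -i /vmfs/volumes/old_datastore/config01/config01.vmdk /vmfs/volumes/new_datastore/config01/config01.vmdk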

So to celebrate the end of my involvement in the project, I thought I’d draw a graph. Preston is a lot better at graphs than I am, but I thought this one summed up quite nicely my feelings about this project.

Locked vmdk files

Somehow, a colleague of mine put an ESX host in a cluster into maintenance mode while VMs were still running. Or maybe it just happened to crash when she was about to do this. I don't know how, and I'm not sure I still believe it, but I saw some really weird stuff last week. The end result was that VMs powered off ungracefully, the host became unresponsive, and things were generally bad. We started adding VMs back to other hosts, but one VM had locked files. Check out this entry at Gabe's Virtual World on how to address this, but basically you want to ps, grep and kill -9 some stuff.

ps -elf | grep vmname

kill -9 PID

And you’ll find that it’s probably the vmdk files that are locked, not necessarily the vmx file.
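If you want to work out which file is actually locked (and which host owns the lock) before you go killing things, vmkfstools can dump the lock information – on classic ESX the details, including the owner's MAC address, land in /var/log/vmkernel. The datastore and file names here are examples only:

vmkfstools -D /vmfs/volumes/datastore1/vmname/vmname-flat.vmdk
tail /var/log/vmkernel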

2009 and penguinpunk.net

It was a busy year, and I don't normally do this type of post, but I thought I'd try to do a year-in-review type thing so I can look back at the end of 2010 and see what kind of promises I've broken. Also, the Exchange Guy will no doubt enjoy the size comparison. You can see what I mean by that here.

In any case, here're some broad stats on the site. In 2008 the site had 14966 unique visitors according to Advanced Web Statistics 6.5 (build 1.857). But in 2009, it had 15856 unique visitors – according to Advanced Web Statistics 6.5 (build 1.857). That's an increase of some 890 unique visitors, also known as year-on-year growth of approximately 5.9%. I think. My maths are pretty bad at the best of times, but I normally work with storage arrays, not web statistics. In any case, most of the traffic is no doubt down to me spending time editing posts and uploading articles, but it's nice to think that it's been relatively consistent, if a little lower than I'd hoped. This year (2010 for those of you playing at home) will be the site's first full year using Google analytics, so assuming I don't stuff things up too badly, I'll have some prettier graphs to present this time next year. That said, MYOB / smartyhost are updating the web backend shortly so I can't make any promises that I'll have solid stats for this year, or even a website :)

What were the top posts? Couldn’t tell you. I do, however, have some blogging-type goals for the year:

1. Blog with more focus and frequency – although this doesn’t mean I won’t throw in random youtube clips at times.

2. Work more on the promotion of the site. Not that there’s a lot of point promoting something if it lacks content.

3. Revisit the articles section and revise where necessary. Add more articles to the articles page.

On the work front, I'm architecting the move of my current employer from a single data centre to a 2+1 active / active architecture (from a storage and virtualisation perspective). There's more blades, more CLARiiON, more MV/S, some vSphere and SRM stuff, and that blasted Cisco MDS fabric stuff is involved too. Plus a bunch of stuff I've probably forgotten. So I think it will be a lot of fun, and a great achievement if we actually get anything done by June this year. I expect there'll be some moments of sheer boredom as I work my way through 100s of incremental SAN Copies and sVMotions. But I also expect there will be moments of great excitement when we flick the switch on various things and watch a bunch of Visio illustrations turn into something meaningful.

Or I might just pursue my dream of blogging about the various media streaming devices on the market. Not sure yet. In any case, thanks for reading, keep on reading, tell your friends, and click on the damn Google ads.

sVMotion with snapshot bad

You know when it says in the release notes, and pretty much every forum on the internet, that doing sVMotion migrations with snapshots attached to a vmdk is bad? Turns out they were right, and you might just end up munting your vmdk file in the process. So you might just need this link to recreate the vmdk. You may find yourself in need of this process to commit the snapshot as well. Or, if you’re really lucky, you’ll find yourself with a vmsn file that references a missing vmdk file. Wow, how rad! To work around this, I renamed the vmsn to .old, ran another snapshot, and then committed the snapshots. I reiterate that I think snapshots are good when you’re in a tight spot, in the same way that having a baseball bat can be good when you’re attacked in your home. But if you just go around swinging randomly, something’s going to get broken. Bad analogy? Maybe, but I think you get what I’m saying here.
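For what it's worth, here's roughly what that workaround looked like from the service console – the VM name, datastore and snapshot file names are made up for illustration, and you could just as easily take and commit the new snapshot from the VI client:

cd /vmfs/volumes/VMFS_01/host01
mv host01-Snapshot1.vmsn host01-Snapshot1.vmsn.old
vmware-cmd /vmfs/volumes/VMFS_01/host01/host01.vmx createsnapshot cleanup "temporary snapshot" 0 0
vmware-cmd /vmfs/volumes/VMFS_01/host01/host01.vmx removesnapshots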

To recap, when using svmotion.pl with VIMA, here’s the syntax:

svmotion.pl --datacenter=network.internal --url=https://virtualcenter.network.internal/sdk --username=vcadmin --vm="[VMFS_02] host01/host01.vmx:VMFS_01"
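In case the --vm argument isn't obvious, the format is "[current datastore] path to the vmx file:destination datastore" – so in this (made-up) example, host01 moves from VMFS_02 to VMFS_01.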

Of course, my preferred method is here:

svmotion --interactive

Enjoy!

Creating a VMFS datastore

While everyone is talking about new VMwares, I’d like to focus on the mundane stuff. Creating a VMFS datastore on an ESX host is a relatively trivial activity, and something that you’ve probably done a few times before. But I noticed, the other day, some behaviour that I can only describe as “really silly”.

I needed to create a datastore on a host that only had local SCSI disks attached in a single RAID-1 container. I wanted to do this post-installation for reasons that I’ll discuss at another time. Here’s a screenshot from the Add Storage Wizard.


Notice the problem with the first option? Yep, you can blow away your root filesystem. In Australia, we would describe this situation as "being rooted", but probably not for the reasons you think.

What I haven’t had a chance to test yet, having had limited access to the lab lately, is whether the Wizard is actually “silly” enough to let you go through with it. I’ve seen running systems happily blow themselves away with a miscued “dd” command – so I’m going to assume yes. I hope to have a little time in the next few weeks to test this theory.
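In the meantime, if the wizard makes you nervous, the old-school alternative is to do the job from the service console, where at least you have to type the device name yourself. This is a rough sketch for classic ESX (3.x-era) hosts, and the device names and label are examples only:

esxcfg-vmhbadevs
fdisk /dev/sdb
vmkfstools -C vmfs3 -S Local_VMFS vmhba0:0:0:1

esxcfg-vmhbadevs maps the vmhba paths to service console devices so you pick the right disk, fdisk creates a primary partition of type fb (expert mode's 'b' command lets you set the starting sector to 128 while you're in there), and vmkfstools then builds the VMFS3 file system on that partition. Of course, fdisk gives you just as much rope to hang yourself with, so type carefully.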

vmdk labelling

A colleague pointed out to me recently that VirtualCenter no longer has the annoying habit of creating vmdk files with the same labels across multiple datastores. For example, on VC 2.5 Build 104215, when creating a VM with disks across multiple storage locations, they are labelled identically:

The good news is this seems fixed in the latest builds. I haven’t had time to confirm whether this is a VirtualCenter or ESX function, but it makes VCB deployments a little simpler …

VMware Converter 3.0.3

As I mentioned previously, I had the opportunity last week to use VMware Converter 3.0.3. It worked a charm on a slightly odd P2V. Odd in the sense that the customer ripped 2 hard disks out of an HP Blade BL20P and put them in a pizzabox server, booted it up, logged me in and said go for it. As the release notes promised, you can now "convert individual volumes on a single physical disk from the source physical machine to separate and independent virtual disks across different datastores". This would have made things a lot easier at quite a few sites where the physicals used mirrored disks with volumes that needed to go to different datastores. And it seems quicker too. The whole P2V took less than 2 hours, including mucking about in Device Manager and testing connectivity.