Axellio Announces Azure Stack HCI Support

Microsoft recently announced their Azure Stack HCI program, and I had the opportunity to speak to the team from Axellio (including Bill Miller, Barry Martin, and Kara Smith) about their support for it.

Azure Stack Versus Azure Stack HCI

So what’s the difference between Azure Stack and Azure Stack HCI? You can think of Azure Stack as an extension of Azure – designed for cloud-native applications. Azure Stack HCI, on the other hand, is aimed at your traditional VM-based applications – the kind that haven’t been refactored for public cloud (or can’t be).

[image courtesy of Microsoft]

The Azure Stack HCI program launched with fifteen vendor partners, of which Axellio is one.

Axellio’s Take

Miller describes the Axellio solution as “[n]ot your father’s HCI infrastructure”, and Axellio tell me it “has developed the new FabricXpress All-NVMe HCI edge-computing platform built from the ground up for high-performance computing and fast storage for intense workload environments. It delivers 72 NVMe SSDs per server, and packs 2 servers into one 2U chassis”. Cluster sizes start at 4 nodes and run up to 16. Note that the form factor measurement in the table below includes any required switching for the solution. You can grab the data sheet from here.

[image courtesy of Axellio]

It uses the same Hyper-V-based software-defined compute, storage and networking as Azure Stack, and integrates on-premises workloads with Microsoft hybrid data services, including Azure Site Recovery, Azure Backup, Cloud Witness and Azure Monitor.

Thoughts and Further Reading

When Microsoft first announced plans for a public cloud presence, some pundits suggested it didn’t have the chops to really make it. It seems that Microsoft has managed to perform well in that space despite what some of the analysts were saying. What Microsoft has had working in its favour is that it understands the enterprise pretty well, and it has made a good push to tap that market and get traditionally slower-moving organisations to look seriously at public cloud.

Azure Stack HCI fits nicely in between Azure and Azure Stack, giving enterprises the opportunity to host the workloads they want to keep in VMs on a platform that integrates well with the public cloud services they may also wish to leverage. Despite what we might like to think, not every enterprise application can be easily refactored to work in a cloud-native fashion. Nor is every enterprise ready to commit that level of investment to refactoring those applications, preferring instead to host them as-is for a few more years before introducing replacement application architectures.

It’s no secret that I’m a fan of Axellio’s capabilities when it comes to edge compute and storage solutions. In speaking to the Axellio team, what stands out to me is that they really seem to understand how to put forward a performance-oriented solution that can leverage the best pieces of the Microsoft stack to deliver an on-premises hosting capability that ticks a lot of boxes. The ability to move workloads (in a staged fashion) so easily between public and private infrastructure should also have a great deal of appeal for enterprises that have traditionally struggled with workload mobility.

Enterprise operations can be a pain in the backside at the best of times. Throw in the requirement to host some workloads in public cloud environments like Azure, and your operations staff might be a little grumpy. Fans of HCI have long stated that the management of the platform, and the convergence of compute and storage, helps significantly in easing the pain of infrastructure operations. If you then take that management platform and integrate it successfully with your public cloud platform, you’re going to have a lot of fans. This isn’t Axellio’s only solution, but I think it fits in well with their ability to deliver performance solutions at both the core and the edge.

Thomas Maurer wrote up a handy article covering some of the differences between Azure Stack and Azure Stack HCI. The official Microsoft blog post on Azure Stack HCI is here. You can read the Axellio press release here.

File system Alignment redux

So I wrote a post a little while ago about filesystem alignment, and why I think it’s important. You can read it here. Obviously, the issue of what to do with guest OS file systems comes up from time to time too. When I asked a colleague to build some VMs for me in our lab environment with the system disks aligned, he dismissed the request out of hand and called it an unnecessary overhead. I’m kind of at that point in my life where the only people who dismiss my ideas so quickly are my kids, so I called him on it. He promptly reached for a tattered copy of EMC’s Techbook entitled “Using EMC CLARiiON Storage with VMware vSphere and VMware Infrastructure” (EMC P/N h2197.5 – get it on Powerlink). He then pointed me to this nugget from the book.

I couldn’t let it go, so I reached for my copy (version 4 versus his version 3.1), and found this:

We both thought this wasn’t terribly convincing one way or another, so we decided to test it out. The testing wasn’t super scientific, nor was it particularly rigorous, but I think we got the results that we needed to move forward. We used Passmark’s PerformanceTest 7.0 to perform some basic disk benchmarks on 2 VMs – one aligned and one not. These are the settings we used for Passmark:

As you can see, it’s a fairly simple setup that we’re running with. Now here are the results of the unaligned VM benchmark.

And here are the results of the aligned VM.

We ran the tests a few more times and got similar results. So, yeah, there’s a marginal difference in performance. And you may not find it worthwhile pursuing. But I would think, in a large environment like ours, where we have 800+ VMs in production, surely any opportunity to reduce the workload on the array should be taken? Of course, this all changes with Windows 2008, which aligns new partitions at a 1MB offset by default. So maybe you should just sit tight until then?
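
For what it’s worth, checking whether an existing Windows guest is aligned is straightforward – the partition starting offsets are available via WMI. This is just a quick sketch: 32,256 bytes (63 sectors) is the classic unaligned default, while anything evenly divisible by 65,536 sits on a 64KB boundary.

    rem Run inside the guest - lists each partition's starting offset in bytes.
    rem 32256 (63 x 512-byte sectors) = the old unaligned default.
    rem An offset evenly divisible by 65536 = aligned to 64KB.
    wmic partition get BlockSize, StartingOffset, Name, Index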

File system Alignment – All the kids are doing it

Ever since I was a boy, or, at least, ever since I started working with CLARiiON arrays (when R11 was, er, popular), I’ve been aware of the need to align file systems that lived on the array. I didn’t come to this conclusion myself, but instead found it written in some performance-focused whitepapers on Powerlink. I used to use diskpar.exe with Windows 2000, and fdisk for Linux hosts. As time moved on, Microsoft introduced diskpart.exe, which did a bunch of other partition things as well. So it sometimes surprises me that people still debate the issue, at least from a CLARiiON perspective. I’m not actually going to go into why you should do it, but I am going to include a number of links that I think are useful when it comes to this issue.

It pains me to say this, but Microsoft has probably the best publicly available article on the issue here. The succinctly titled “Disk performance may be slower than expected when you use multiple disks in Windows Server 2003, in Windows XP, and in Windows 2000” is a pretty thorough examination of why (or why not) you’ll see dodgy performance from that expensive array you just bought.

Of course, that doesn’t mean the average CLARiiON owner gets any less cranky about the situation. I can only assume the sales guy gave them such a great spiel about how awesome their new array is that they figure they couldn’t possibly need to do anything further to improve its performance. If you have access to the EMC Community forums, have a look at this and this.

If you have access to Powerlink, you should really read the latest performance whitepaper relating to FLARE 29. It has a bunch of great stuff in it that goes well beyond file system alignment. And if you have access to the knowledge base, look for emc143897 – Do disk partitions created by Windows 2003 64-bit servers require file system alignment? – Hells yes they do.

emc151782 – Navisphere Analyzer reports disk crossings even after aligning disk partitions using the DISKPAR tool. – Disk crossings are bad. Stripe crossings are not.

emc135197 – How to align the file system on an ESX volume presented to a Windows Virtual Machine (VM). Basic stuff, but important to know if you’ve not had to do it before.

Finally, Duncan Epping’s post on VM disk alignment has some great information, including an easy-to-understand diagram. I also recommend you look at the comments section, because that’s where the fun starts.

Kids, if someone says that file system alignment isn’t important, punch them in the face. In a Windows environment, get used to using diskpart.exe. In an ESX environment, create your VMFS using the vSphere client, and then make sure you’re aligning the file systems of the guests as well. Next week I’ll try and get some information together about why stripe crossings on a CLARiiON aren’t the end of the world, but disk crossings are the first sign of the apocalypse. That is all.
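
Since I’m telling you to get used to diskpart.exe, here’s a minimal sketch of what that looks like. The disk number and drive letter are placeholders, and 64KB is the usual CLARiiON recommendation (matching the stripe element size), so adjust to suit your array:

    rem Contents of align.txt - run with: diskpart /s align.txt
    rem Disk 1 and letter E are placeholders only.
    rem align=64 sets a 64KB starting offset (needs Windows 2003 SP1 or later).
    select disk 1
    create partition primary align=64
    assign letter=E

Windows 2008 and Vista do this for you out of the box, with a 1MB default offset.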

The Basics AKA Death by Screenshot

I’ve created a new page, imaginatively titled “Articles”, that has a number of articles I’ve done recently covering various simple operational or implementation-focused tasks. You may or may not find them useful. I hope this doesn’t become my personal technical documentation graveyard, although I have a feeling that a number of the documents will probably stay at version 0.01 until such time as the underlying technology no longer exists. Enjoy!

Netlimiter is limiting …

This happened a little while ago, but I’ve been repressing the memory and ignoring the fact that I won’t get eight hours of my life back. Netlimiter is an application that, the website claims, is the “Ultimate Bandwidth Shaper”. And from what I’ve heard from friends, it certainly does a good job of shaping bandwidth. In my case, however, it meant that the little Exchange host I was attempting to connect to a Dell | EqualLogic PS5000XV array was unable to communicate effectively via the Microsoft iSCSI initiator (2.07). The only error I was getting was “Target Portal Error” and some meaningless Event IDs. Google made a few suggestions, but the solutions ranged from “I disabled the VSS service” (!) to “HP upgraded their array software”. Yeah, right. Or maybe, when I asked the client if they had a firewall between the host and the array / target, I should have asked the next question: “Do you have a bandwidth-shaping application in between the host and the target?”. Alas, I didn’t go with my gut and immediately uninstall said software. Instead, I disabled the service and killed any running instances. This made no difference. I changed the IP range and subnet for the hosts and target. No dice. I read all of Google. Nothing. Useful. At. All. On the second day, after MS Netmon showed some activity between the host and target, but not enough to actually log in, I uninstalled the software. So next time you need to ask a client a question, think carefully about what you’re asking, and what answer you’ll get. Otherwise, you’ll find yourself wasting time chasing pointless error codes. Meh.
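
One thing I will pass on: when the Microsoft iSCSI initiator throws a “Target Portal Error”, a basic reachability check against the iSCSI port will often tell you whether something in the path (a firewall, or, ahem, a bandwidth shaper) is getting in the way. A quick sketch, with a placeholder address:

    rem 10.0.0.10 is a placeholder for the array's iSCSI portal address.
    rem 3260 is the standard iSCSI port. If this won't connect, something
    rem in the path is blocking TCP 3260.
    telnet 10.0.0.10 3260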

I made a joke …

I had the good fortune to do some SAN work at the local Microsoft office recently. They know I do a bit of VMware around town, so there was a little banter back and forth. I told them that Vista convinced me to finally buy a Mac, and so on. So there I was, cabling up the rack, and the dialogue went something like this:

Me: “I have a new joke – want to hear it?”

Patient MS guy: “Okay.”

Me: “This rack has more cable ties in it than you have virtualisation marketshare.”

MS guy: “You’ve been thinking about that one all morning haven’t you?”

Me: “Yep.”

MS guy: “In 5 years, we’ll own it.”

Me: “Embrace and extend – I’m cool with that.”

They’re nice guys, doing interesting things. It’s not often I get to try out Microsoft gags on people who work for the company.