Disclaimer: Scale Computing have provided me with the use of a refurbished HC3 system comprising 3 HC1000 nodes, along with an HP 2920-24G Gigabit switch. They are keen for me to post about my experiences using the platform and I am too, so this arrangement works out well for both parties. I'm not a professional equipment reviewer, and everyone's situation is different, so if you're interested in this product I recommend getting in touch with Scale Computing.
This is the first of what is going to be a few posts covering my experiences with the Scale Computing HC3 platform. By way of introduction, I recently wrote a post on some of the new features available with HC3. Trevor Pott provides an excellent overview of the platform here, and Taneja Group provides an interesting head to head comparison with VSAN here.
In this post I’d like to cover off my installation experience and make a brief mention of the firmware update process.
I’d heard of Scale Computing around the traps, but hadn’t really taken the time to get to know them. For whatever reason I was given a run-through of their gear by Alan Conboy. We agreed that it would be good for me to get hands on with some kit, and by the next week I had 3 boxes of HC1000 nodes lob up at my front door. Scale support staff contacted me to arrange installation as soon as they knew the delivery had occurred. I had to travel, however, so I asked them to send me through the instructions and I’d get to it when I got home. Long story short, I was back a week before I got these things out of the box and started looking into getting stuff set up. By the way, the screwdriver included with every node was a nice touch.
The other issue I had is that I really haven’t had a functional lab environment at home for some time, so I had no switching or basic infrastructure services to speak of. And my internet connection is ADSL 1, so uploading big files can be a pain. And while I have DNS in the house it’s really not enterprise grade. In some ways, my generally shonky home “lab” environment is similar to a lot of small business environments I’ve come across during my career. Perhaps this is why Scale support staff are never too stressed about the key elements being missing.
As I mentioned in the disclaimer, Scale also kindly provided me with an HP 2920-24G gigabit switch. I set this up per the instructions here. In the real world, you’d be running off two switches. But as anyone who’s been to my house can attest, my home office is a long way from the real world.
I do have a rack at home, but it’s full of games consoles for the moment. I’m currently working on finding a more appropriate home for the HC3 cluster.
So, I unpacked and cabled up the three nodes as per the instructions in the HC3 Getting Started Guide. I initialised each node, and then started the cluster initialisation process on the first node. I couldn’t, however, get the nodes to join the cluster or talk to each other. I’d spent about an hour unpacking everything and then another hour futzing my way about the nodes trying to get them to talk to each other (or me). Checked cables, checked switch configuration, and so forth. No dice. It was Saturday afternoon, so I figured I’d move on to some LEGO. I sent off a message to Scale support to provide an update on my progress and figured I’d hear back Tuesday AM my time (the beauty of living +10 GMT). To my surprise I got a response back from Tom Roeder five minutes after I sent the e-mail. I chipped him about working too late but he claims he was already working on another case.
It turns out that the nodes I was sent had both the 10Gbps and 1Gbps cards installed in them, and by default the 10Gbps cards were being used. The simplest fix for this (given that I wouldn’t be using 10Gbps in the short term) was to remove the cards from each node.
Once this was done, I had to log into each node as the root user and run the following command:
I then rebooted and reinitialised each node. At this point I was able to get them to join the cluster, which took about twenty minutes. Happy days.
So I thought everything was fine, but I started getting messages about the second node being unable to update from the configured time source. I messaged Tom and he got me to open up a support tunnel on one of the working nodes (this has been a bloody awesome feature of the support process). While the cluster looked in good shape, he wasn’t able to ping external DNS servers (e.g. 18.104.22.168) from node 2, nor could he get it to synchronise with the NTP pool I’d nominated. I checked and re-checked the new Ethernet cables I’d used. I triple-checked the switch config. I rebooted the Airport Express (!) that everything was hanging off in my office. Due to the connectivity weirdness I was also unable to update the firmware of the cluster. I grumbled and moaned a lot.
Tom then had another poke around and noticed that, for some reason, no gateway was configured on node 2. He added one in and voilà, the node started merrily chatting to its peers and the outside world. Tom has the patience of a saint. And I was pretty excited that it was working.
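If you ever hit the same symptom, a quick way to spot it yourself is to check for a default route before blaming cables or switches. This is a minimal sketch using standard Linux `iproute2` commands, not anything Scale-specific; the gateway address 192.168.1.1 is a made-up example for a typical home network, so substitute your own.

```shell
# Look for a default route - its absence was the problem on node 2.
if ip route show | grep -q '^default'; then
    echo "default gateway present"
else
    echo "no default gateway configured"
    # To add one (as root), something like the following would apply.
    # The gateway IP here is an example only - use your router's address.
    # ip route add default via 192.168.1.1
fi
```

With the gateway in place, external pings and NTP synchronisation should start working again, as they did for me.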
Tom has been super supportive in, well, supporting me during this installation. He’s been responsive, knowledgeable and has made the whole installation experience a breeze, minor glitches notwithstanding.
I thought I’d also quickly run through the firmware update process, as it’s extremely simple and I like posts with screenshots. I think it took 10 minutes, tops, which is a little unusual and probably down to a few factors: there were no VMs running on the nodes (the day job has been a bit busy), and it was a fairly minor update. Scale generally suggest 20 minutes per node for updates.
Here’s the process. Firstly, if there’s new firmware available to install, you’ll see it in the top-right corner of the HC3 GUI.
Click on “Update Available” for more information. You can also access the release notes from here.
If you click on “Apply Update” you’ll be asked to confirm your decision.
You’ll then see a bunch of activity.
And once it’s done, the version will change. In this case I went from 6.4.1 to 6.4.2 (a reasonably minor update).
The whole thing took about 10 minutes, according to the cluster logs.
Getting up and running with the HC3 platform has been a snap so far, even though there were some minor issues getting started. Support staff were super responsive, instructions were easy to follow and the general experience has been top notch. Coupled with the fact that the interface is really easy to use, I think Scale are onto a winner here, particularly given the market they’re aiming at and the price point. I’m looking forward to putting together some more articles on actually using the kit.