Scale Computing and Leostream – Better Than Bert And Ernie

Scale Computing recently announced some news about a VDI solution they delivered for Illinois-based Paris Community Hospital. I had the opportunity to speak with Alan Conboy about it and thought I’d share some coverage here.

 

VDI and HCI – A Pretty Famous Pairing

When I started to write this article, I was trying to think of a dynamic duo that I could compare VDI and HCI to. Batman and Robin? Bert and Ernie? MJ and Scottie? In any case, hyper-converged infrastructure and virtual desktop infrastructure have gone well together since the advent of HCI. It’s my opinion that HCI found its way into a number of enterprises in the first place because a VDI requirement arose. Once HCI is introduced into those enterprise environments, folks start to realise it’s useful for other stuff too.

Operational Savings

So it makes sense that Scale Computing’s HC3 solution would be used to deliver VDI solutions at some stage. And Leostream can provide the lifecycle manager / connection broker / gateway part of the story without breaking a sweat. According to Conboy, Paris Community Hospital has drastically reduced its operating costs, to the point where a single part-time operations staff member now manages the environment. They’re apparently saving around $1 million (US) over the next five years, meaning they can now afford an extra doctor and additional nursing staff.

HCI – It’s All In The Box

If you’re familiar with HCI, you’ll know that most of the required infrastructure comes with the solution – compute, storage, and hypervisor. You also get the ability to do cool stuff in terms of snapshots and disaster recovery via replication.

 

Thoughts

VDI solutions have proven popular in healthcare environments for a number of reasons. They generally help the organisation control the applications that are run in the (usually) security-sensitive environment, particularly at the edge. VDI also simplifies endpoint maintenance, and removes the requirement to deploy high-end client devices in clinical environments. It also provides a centralised mechanism to ensure that critical application updates are performed in a timely fashion.

You won’t necessarily save money deploying VDI on HCI in terms of software licensing or infrastructure investment. But you will potentially save money in terms of the operational resources required for endpoint and application support. If you can then spend those savings on medical staff, that has to be a win for the average healthcare organisation.

I’m the first to admit that I don’t get overly excited about VDI solutions. I can see the potential for value in some organisations, but I tend to lose focus rapidly when people start to talk to me about this stuff. That said, I do get enthusiastic about HCI solutions that make sense, and deliver value back to the business. It strikes me that this Scale Computing and Leostream combo has worked out pretty well for Paris Community Hospital. And that’s pretty cool. For more insight, Scale Computing has published a Customer Case Study that you can read here.

Random Short Take #11

Here are a few links to some random news items and other content that I found interesting. You might find it interesting too. Maybe. Happy New Year too. I hope everyone’s feeling fresh and ready to tackle 2019.

  • I’m catching up with the good folks from Scale Computing in the next little while, but in the meantime, here’s what they got up to last year.
  • I’m a fan of the fruit company nowadays, but if I had to build a PC, this would be it (hat tip to Stephen Foskett for the link).
  • QNAP announced the TR-004 over the weekend and I had one delivered on Tuesday. It’s unusual that I have cutting edge consumer hardware in my house, so I’ll be interested to see how it goes.
  • It’s not too late to register for Cohesity’s upcoming Helios webinar. I’m looking forward to running through some demos with Jon Hildebrand and talking about how Helios helps me manage my Cohesity environment on a daily basis.
  • Chris Evans has published NVMe in the Data Centre 2.0 and I recommend checking it out.
  • I went through a basketball card phase in my teens. This article sums up my somewhat confused feelings about the card market (or lack thereof).
  • Elastifile Cloud File System is now available on the AWS Marketplace – you can read more about that here.
  • WekaIO have posted some impressive numbers over at spec.org if you’re into that kind of thing.
  • Applications are still open for vExpert 2019. If you haven’t already applied, I recommend it. The program is invaluable in terms of vendor and community engagement.

 

 

Random Short Take #10

Here are a few links to some random news items and other content that I found interesting. You might find it interesting too. Maybe. This will be the last one for this year. I hope you and yours have a safe and merry Christmas / holiday break.

  • Scale Computing have finally entered the Aussie market in partnership with Amnesium. You can read more about that here.
  • Alastair is back in the classroom, teaching folks about AWS. He published a bunch of very useful notes from a recent class here.
  • The folks at Backblaze are running a “Refer-A-Friend” promotion. If you’re looking to become a new Backblaze customer and sign up with my referral code, you’ll get some free time on your account. And I will too! Hooray! I’ve waxed lyrical about Backblaze before, and I recommend it. The offer runs out on January 6th 2019, so get a move on.
  • Howard did a nice article on VVols that I recommend checking out.
  • GDPR has been a challenge (within and outside the EU), but I enjoyed Mark Browne’s take on Cohesity’s GDPR compliance.
  • I’m quite a fan of the Netflix Tech Blog, and this article on the Netflix Media Database was a ripper.
  • From time to time I like to poke fun at my friends in the US for what seems like an excessive amount of shenanigans happening in that country, but there’s plenty of boneheaded stuff happening in Australia too. Read Preston’s article on the recently passed anti-encryption laws to get a feel for the heady heights of stupidity that we’ve been able to reach recently.

 

Scale Computing Have Been Busy

I recently had the opportunity to get on a call with Alan Conboy to talk about what’s been happening with Scale Computing lately. It was an interesting chat, as always, and I thought I’d share some of the news here.

 

Detroit Rock City

It’s odd how sometimes I forget that pretty much every type of business in existence uses some form of IT. Arts and performance organisations, such as the Detroit Symphony Orchestra, are no exception. They are also now very happy Scale customers. There’s a YouTube video detailing their experiences that you can check out here.

 

Lenovo Partnership

Scale and Lenovo recently announced a strategic partnership, focussed primarily on edge workloads, with particular emphasis on retail and industrial environments. You can download a solution brief here. This doesn’t mean that Lenovo are giving up on some of their other HCI partnerships, but it does give them a competent partner to attack the edge infrastructure market.

 

GCG, Yeah You Know Me

Grupo Colón Gerena is a Puerto Rico-based “restaurant management company that owns franchises of brands including Wendy’s, Applebee’s, Famous Dave’s, Sizzler’s, Longhorn Steakhouse, Olive Garden and Red Lobster throughout the island”. You may recall Puerto Rico suffered through some pretty devastating weather in 2017 thanks to Hurricane Maria. GCG have been running the bulk of their workload in Google Cloud since just before the event, and are still deciding whether they really want to move it back to an on-premises solution. There’s definitely a good story with Scale delivering workloads from the edge to the core and through to Google Cloud. You can read the full case study here.

 

Thoughts

It’s no big secret that I’m a fan of Scale Computing. And not just because I have an old HC1000 in my office that I fire up every now and then (Collier, I’m still waiting on those SSDs you promised me a few years ago). They are relentlessly focussed on delivering easy-to-use solutions that work well and deliver great resiliency and performance, particularly in smaller environments. Their DRaaS play, and partnership with Google, has opened up some doors to customers that may not have considered Scale previously. The Lenovo partnership, and success with customers like GCG and DSO, is proof that Scale are doing a lot of good stuff in the HCI space.

Anyone who’s had the good fortune to deal with Scale, from their executives and founders through to their support staff, will tell you that they’re super easy to deal with and pretty good at what they do. It’s great to see them enjoying some success. It strikes me that they go about their business without a lot of the chest beating and carry on associated with some other vendors in the industry. This is a good thing, and I’m looking forward to seeing what comes next for them.

Scale Computing Announces Partnership With APC by Schneider Electric For DCIAB

(I’m really hoping the snappy title will bring in a few more readers). I recently had a chance to speak with Doug Howell, Senior Director Global Alliances at Scale Computing about their Data Centre In A Box (DCIAB) offering in collaboration with APC by Schneider Electric and thought I’d share some thoughts.

 

It’s A Box

Well, a biggish box. The solution is built on APC’s Micro Data Centre solution, combined with 3 Scale HC3 1150 nodes. The idea is that you have 1 SKU to deal with, which includes the Scale HC3 nodes, UPS, PDUs, and rack. You can then wheel it in, plug it into the wall and network, and it’s ready to go. Howell mentioned that they have a customer that is in the process of deploying a significant number of these things in the wild.

Note that this is slightly different to the EMEA campaign with Lenovo from earlier in the year and is focused, at this stage, on the North American market. You can grab the solution brief from here.

 

Thoughts

The “distributed enterprise” has presented challenges to IT organisations for years now. Not everyone works in a location that is nicely co-located with headquarters. And these folks need compute and storage too. You’ve no doubt heard about how the “edge” is the new hotness in IT, and I frequently hear pitches from vendors talking about how they handle storage or compute requirements at the edge in some kind of earth-shattering way. It’s been a hard problem to solve, because locality (either for storage or compute or both) is generally a big part of the success of these solutions, particularly from the end user’s perspective. This is oftentimes at odds with traditional enterprise deployments, where all of the key compute and storage components are centrally located for ease of access, management and protection. Improvements in WAN technologies and distributed application availability are changing that story to an extent though, hence the requirement for these kinds of edge solutions. Sometimes, you just need to have stuff close to where your main business activity is occurring.

So what makes the Scale and APC offering any different? Nothing really, except that Scale have built their reputation on being able to deliver simple-to-operate hyper-converged infrastructure to small and medium enterprises with a minimum of fuss and at a reasonable price point. The cool thing here is that you’re combining APC’s ability to deliver robust micro DC infrastructure with a Scale offering that fits in well with their other solutions, such as DRaaS.

Not every solution from every vendor needs to be unique for it to stand out from the crowd. Scale have historically demonstrated a relentless focus on quality products, excellent after-sales support and market focus. This collaboration will no doubt open up some more doors for them with APC customers who were previously unaware of the Scale story (and vice versa). This can only be a good thing in my opinion.

2018 AKA The Year After 2017

I said last year that I don’t do future prediction type posts, and then I did one anyway. This year I said the same thing and then I did one around some Primary Data commentary. Clearly I don’t know what I’m doing, so here we are again. This time around, my good buddy Jason Collier (Founder at Scale Computing) had some stuff to say about hybrid cloud, and I thought I’d wade in and, ostensibly, nod my head in vigorous agreement for the most part. Firstly, though, here’s Jason’s quote:

“Throughout 2017 we have seen many organizations focus on implementing a 100% cloud focused model and there has been a push for complete adoption of the cloud. There has been a debate around on-premises and cloud, especially when it comes to security, performance and availability, with arguments both for and against. But the reality is that the pendulum stops somewhere in the middle. In 2018 and beyond, the future is all about simplifying hybrid IT. The reality is it’s not on-premises versus the cloud. It’s on-premises and the cloud. Using hyperconverged solutions to support remote and branch locations and making the edge more intelligent, in conjunction with a hybrid cloud model, organizations will be able to support highly changing application environments”.

 

The Cloud

I talk to people every day in my day job about what their cloud strategy is, and most people in enterprise environments are telling me that there are plans afoot to go all in on public cloud. No one wants to run their own data centres anymore. No one wants to own and operate their own infrastructure. I’ve been hearing this for the last five years too, and have possibly penned a few strategy documents in my time that said something similar. Whether it’s with AWS, Azure, Google or one of the smaller players, public cloud as a consumption model has a lot going for it.

Unfortunately, it can be hard to get stuff working up there reliably. Why? Because no one wants to spend time “re-factoring” their applications. As a result, a lot of people want to lift and shift their workloads to public cloud. This is fine in theory, but a lot of those applications are running crusty versions of Microsoft’s flagship RDBMS, or they’re using applications that are designed for low-latency, on-premises data centres, rather than being addressable over the Internet. And why is this? Because we all spent a lot of the business’s money in the late nineties and early noughties building these systems to a level of performance and resilience that we thought people wanted. Except we didn’t explain ourselves terribly well, and now the business is tired of spending all of this money on IT. And they’re tired of having to go through extensive testing cycles every time they need to do a minor upgrade. So they stop doing those upgrades, and after some time passes, you find that a bunch of key business applications are suddenly approaching end of life and in need of some serious TLC.

As a result, those same enterprises looking to go cloud first also find themselves struggling mightily to get there. This doesn’t mean public cloud isn’t the answer, it just means that people need to think things through a bit.

 

The Edge

Another reason enterprises aren’t necessarily lifting and shifting every single workload to the cloud is the concept of data gravity. Sometimes, your applications and your data need to be close to each other. And sometimes that closeness needs to occur closest to the place you generate the data (or run the applications). Whilst I think we’re seeing a shift in the deployment of corporate workloads to off-premises data centres, there are still some applications that need everything close by. I generally see this with enterprises working with extremely large datasets (think geo-spatial stuff or perhaps media and entertainment companies) that struggle to move large amounts of the data around in a fashion that is cost-effective and efficient from a time and resource perspective. There are some neat solutions to some of these requirements, such as Scale Computing’s single-node deployment option for edge workloads, and X-IO Technologies’ neat approach to moving data from the edge to the core. But physics is still physics.

 

The Bit In Between

So back to Jason’s comment on hybrid cloud being the way it’s really all going. I agree that it’s very much a question of public cloud and on-premises, rather than one or the other. I think the missing piece for a lot of organisations, however, doesn’t necessarily lie in any one technology or application architecture. Rather, I think the key to a successful hybrid strategy sits squarely with the capability of the organisation to provide consistent governance throughout the stack. In my opinion, it’s more about people understanding the value of what their company does, and the best way to help it achieve that value, than it is about whether HCI is a better fit than traditional rackmount servers connected to fibre channel fabrics. Those considerations are important, of course, but I don’t think they have the same impact on a company’s potential success as the people and politics do. You can have some super awesome bits of technology powering your company, but if you don’t understand how you’re helping the company do business, you’ll find the technology is not as useful as you hoped it would be. You can talk all you want about hybrid (and you should, it’s a solid strategy) but if you don’t understand why you’re doing what you do, it’s not going to be as effective.

Scale Computing and WinMagic Announce Partnership, Refuse to Sit Still

Scale Computing and WinMagic recently announced a partnership aimed at improving the security of Scale’s HC3 solution. I had the opportunity to be briefed by the good folks at Scale and WinMagic and thought I’d provide a brief overview of the announcement here.

 

But Firstly, Some Background

Scale Computing announced their HC3 Cloud Unity offering in late September this year. Cloud Unity, in a nutshell, lets you run embedded HC3 instances in Google Cloud. Coupled with some SD-WAN smarts, you can move workloads easily between on-premises infrastructure and GCP. It enables companies to perform lift and shift migrations, if required, with relative ease, and removes a lot of the complexity traditionally associated with deploying hybrid-friendly workloads in the data centre.

 

So the WinMagic Thing?

WinMagic have been around for quite some time, and offer a range of security products aimed at various sizes of organisation. This partnership with Scale delivers SecureDoc CloudVM as a mechanism for encryption and key management. You can download a copy of the brochure from here. The point of the solution is to provide a secure mechanism for hosting your VMs either on-premises or in the cloud. Key management can be a pain in the rear, and WinMagic provides a fully-featured solution for this that’s easy to use and simple to manage. There’s broad support for a variety of operating environments and clients. Authentication and authorised key distribution take place before workloads are deployed, to ensure that the right person is accessing data from an expected place and device, and there’s support for password-only or multi-factor authentication.

 

Thoughts

Scale Computing have been doing some really cool stuff in the hyperconverged arena for some time now. The new partnership with Google Cloud, and the addition of the WinMagic solution, demonstrates their focus on improving an already impressive offering with some pretty neat features. It’s one thing to enable customers to get to the cloud with relative ease, but it’s a whole other thing to be able to help them secure their assets when they make that move to the cloud.

It’s my opinion that Scale Computing have been the quiet achievers in the HCI marketplace, with reportedly fantastic customer satisfaction and a solid range of products on offer at a very reasonable RRP. Couple this with an intelligent hypervisor platform and the ability to securely host assets in the public cloud, and it’s clear that Scale Computing aren’t interested in standing still. I’m really looking forward to seeing what’s next for them. If you’re after an HCI solution where you can start really (really) small and grow as required, it would be worthwhile having a chat to them.

Also, if you’re into that kind of thing, Scale and WinMagic are hosting a joint webinar on November 28 at 10:30am EST. Registration for the webinar “Simplifying Security across your Universal I.T. Infrastructure: Top 5 Considerations for Securing Your Virtual and Cloud IT Environments, Without Introducing Unneeded Complexity” can be found here.

 

 

Scale Computing Announces Cloud Unity – Clouds For Everyone

 

The Announcement

Scale Computing recently announced the availability of a new offering: Cloud Unity. I had the opportunity to speak with the Scale Computing team at VMworld US this year to run through some of the finer points of the announcement and thought I’d cover it off here.

 

Cloud What?

So what exactly is Cloud Unity? If you’ve been keeping an eye on the IT market in the last few years, you’ll have noticed that everything has cloud of some type in its product name. In this case, Cloud Unity is a mechanism by which you can run Scale Computing’s HC3 hypervisor nested in Google Cloud Platform (GCP). The point of the solution, ostensibly, is to provide a business with disaster recovery capability on a public cloud platform. You’re basically running an HC3 cluster on GCP, with the added benefit that you can create an encrypted VXLAN connection between your on-premises HC3 cluster and the GCP cluster. The neat thing here is that everything runs as a small instance to handle replication from on-premises and only scales up when you actually need to run the VMs in anger. The service is bought through Scale Computing, and starts from as little as US$1,000 per month (for 5TB). There are other options available as well and the solution is expected to be Generally Available in Q4 this year.
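If you haven’t bumped into VXLAN before, here’s a rough idea of what a bare-bones (and unencrypted) VXLAN tunnel looks like when you stand one up by hand between two Linux hosts. This is purely an illustration of the concept rather than anything Scale-specific; Cloud Unity builds and encrypts the tunnel for you, and the interface names and addresses below are made up for the example.

ip link add vxlan100 type vxlan id 100 dev eth0 remote 203.0.113.10 dstport 4789   # overlay tunnel pointing at the remote host
ip addr add 10.99.0.1/24 dev vxlan100   # give the overlay interface an address
ip link set vxlan100 up

The remote end does the same thing in reverse, pointing back at this host, and workloads on either side can then talk over the 10.99.0.0/24 overlay as if they were on the same L2 segment.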

 

Conclusion and Further Reading

This isn’t the first time nested virtualisation has been released as a product, with AWS, Azure and Ravello all doing similar things. The cool thing here is that it’s aimed at Scale Computing’s traditional customers, namely small to medium businesses. These are the people who’ve bought into the simplicity of the Scale Computing model and don’t necessarily have time to re-write their line of business applications to work as cloud-native applications (as much as it would be nice if that were the case). Whilst application lift and shift isn’t the ideal outcome, the other benefit of this approach is that companies who may not have previously invested in DR capability can now leverage this product to solve the technical part of the puzzle fairly simply.

DR should be a simple thing to have in place. Everyone has horror stories of data centres going offline because of natural disasters or, more commonly, human error. The price of good DR, however, has traditionally been quite high. And it’s been pretty hard to achieve. The beauty of this solution is that it provides businesses with solid technical capabilities for a moderate price, and allows them to focus on people and processes, which are arguably the key parts of DR that are commonly overlooked. Disasters are bad, which is why they’re called disasters. If you run a small to medium business and want to protect yourself from bad things happening, this is the kind of solution that should be of interest to you.

A few years ago, Scale Computing sent me a refurbished HC1000 cluster to play with, and I’ve had first-hand exposure to the excellent support staff and experience that Scale Computing tell people about. The stories are true – these people are very good at what they do and this goes a long way in providing consumers with confidence in the solution. This confidence is fairly critical to the success of technical DR solutions – you want to leverage something that’s robust in times of duress. You don’t want to be worrying about whether it will work or not when your on-premises DC is slowly becoming submerged in water because building maintenance made a boo boo. You want to be able to focus on your processes to ensure that applications and data are available when and where they’re required to keep doing what you do.

If you’d like to read what other people have written, Justin Warren posted a handy article at Forbes, and Chris Evans provided a typically insightful overview of the announcement and the challenges it’s trying to solve that you can read here. Scott D. Lowe also provided a very useful write-up here. Scale Computing recently presented at Tech Field Day 15, and you can watch their videos here.

Scale Computing and single-node configurations for the discerning punter


I had the opportunity to speak to Jason Collier and Alan Conboy from Scale Computing about a month ago after they announced their “single-node configuration of its HC3 virtualization platform”. This is “designed to deliver affordable, flexible hyperconverged infrastructure for distributed enterprise and Disaster Recovery (DR) use cases”. I’m a big fan of Scale’s HC3 platform (and not just because they gave me an HC1000 demo cluster to trial). It’s simple, it’s cheap, and it works well for its target market. Traditionally (!), HCI and other converged offerings have called for 3 nodes (and sometimes 2) to be considered resilient at a cluster level. When Scale sell you their HC3 solution you start at 3 nodes and work up from there. So how do they do the single-node thing?

 

Use Cases

In the market I operate in, the use of a single node at the edge makes quite a bit of sense. Customers with edge sites generally don’t have access to IT resources or even suitable facilities, but they sometimes have a reasonable link back to the core or decent WAN optimisation in place. So why not use the core to provide the resiliency you would have normally configured locally? Scale call this the “distributed enterprise” model, with the single-node HC3 solution replicating workload back to the core. If something goes wrong at the edge you can point users to your workload in the core and keep on going [do you like how I made that sound really simple?].

You can also use the single-node configuration as a DR target for an HC3 cluster. The built-in replication can be configured quickly and without extra software to a single HC3 node located locally or remotely. This, again, is a cheap and cheerful solution to what can sometimes be seen as an expensive insurance policy against infrastructure failure.

 

Thoughts and Further Reading

Risk versus cost is a funny thing. Some people are very good at identifying risk but not very good at understanding what it will cost them if something goes wrong. Some people like to talk a lot about risk but then their only mitigation is to note it in a “risk register”. In any case, I think Scale have come up with a smart solution because they’re selling to people in a cost-sensitive market. This solution helps to alleviate some of the expense related to providing DR and distributed compute (and thus provides some reduced risk for SMBs). That said, everyone’s situation is different, and you need to assess the risk related to running workload at the edge on a single node. If that node fails, you can failover your workload to the core. Assuming you have a process in place for this. And applications that support it. And sufficient bandwidth from the core to the edge. In any case, if you’re in the market for low-cost HCI, Scale should be on your radar. And if you’re looking at distributed HCI you should definitely be looking at the single-node solution. You can read some more in Scale’s press release here. The solution brief can be found here. El Reg also has something to say about it here. Some generally very useful technical resources from Scale can be found here.

Scale Computing – If Only Everything Else Were This Simple

Disclaimer: Scale Computing have provided me with the use of a refurbished HC3 system comprised of 3 HC1000 nodes, along with an HP 2920-24G Gigabit switch. They are keen for me to post about my experiences using the platform and I am too, so this arrangement works out well for both parties. I’m not a professional equipment reviewer, and everyone’s situation is different, so if you’re interested in this product I recommend getting in touch with Scale Computing.


Introduction

This is the first of what is going to be a few posts covering my experiences with the Scale Computing HC3 platform. By way of introduction, I recently wrote a post on some of the new features available with HC3. Trevor Pott provides an excellent overview of the platform here, and Taneja Group provides an interesting head to head comparison with VSAN here.

In this post I’d like to cover off my installation experience and make a brief mention of the firmware update process.

 

Background

I’d heard of Scale Computing around the traps, but hadn’t really taken the time to get to know them. For whatever reason I was given a run-through of their gear by Alan Conboy. We agreed that it would be good for me to get hands on with some kit, and by the next week I had 3 boxes of HC1000 nodes lob up at my front door. Scale support staff contacted me to arrange installation as soon as they knew the delivery had occurred. I had to travel, however, so I asked them to send me through the instructions and I’d get to it when I got home. Long story short, I was back a week before I got these things out of the box and started looking into getting stuff set up. By the way, the screwdriver included with every node was a nice touch.

The other issue I had was that I really haven’t had a functional lab environment at home for some time, so I had no switching or basic infrastructure services to speak of. And my internet connection is ADSL 1, so uploading big files can be a pain. And while I have DNS in the house, it’s really not enterprise grade. In some ways, my generally shonky home “lab” environment is similar to a lot of small business environments I’ve come across during my career. Perhaps this is why Scale support staff are never too stressed about the key elements being missing.

As I mentioned in the disclaimer, Scale also kindly provided me with an HP 2920-24G Gigabit switch. I set this up per the instructions here. In the real world, you’d be running off two switches. But as anyone who’s been to my house can attest, my home office is a long way from the real world.

HC1000_1

I do have a rack at home, but it’s full of games consoles for the moment. I’m currently working on finding a more appropriate home for the HC3 cluster.

 

First Problem

So, I unpacked and cabled up the three nodes as per the instructions in the HC3 Getting Started Guide. I initialised each node, and then started the cluster initialisation process on the first node. I couldn’t, however, get the nodes to join the cluster or talk to each other. I’d spent about an hour unpacking everything and then another hour futzing about with the nodes trying to get them to talk to each other (or me). Checked cables, checked switch configuration, and so forth. No dice. It was Saturday afternoon, so I figured I’d move on to some LEGO. I sent off a message to Scale support to provide an update on my progress and figured I’d hear back Tuesday AM my time (the beauty of living +10 GMT). To my surprise I got a response back from Tom Roeder five minutes after I sent the e-mail. I chipped him about working too late, but he claimed he was already working on another case.

It turns out that the nodes I was sent had both the 10Gbps and 1Gbps cards installed in them, and by default the 10Gbps cards were being used. The simplest fix for this (given that I wouldn’t be using 10Gbps in the short term) was to remove the cards from each node.

HC1000_2

Once this was done, I had to log into each node as the root user and run the following command:

/opt/scale/libexec/40-scale-persistent-net-generate

I then rebooted and reinitialised each node. At this point I was able to get them to join the cluster. This took about twenty minutes. Happy days.
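For anyone who hits the same snag, the whole fix looked roughly like this on each node (run as root, with the 10Gbps cards already pulled out; I’ve left the console output out of the sketch):

/opt/scale/libexec/40-scale-persistent-net-generate   # regenerate the persistent network config, per Scale support
reboot
# then re-initialise the node, and re-run cluster initialisation from the first node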

 

Second Problem

So I thought everything was fine, but I started getting messages about the second node being unable to update from the configured time source. I messaged Tom and he got me to open up a support tunnel on one of the working nodes (this has been a bloody awesome feature of the support process). While the cluster looked in good shape, he wasn’t able to ping external DNS servers (e.g. 8.8.8.8) from node 2, nor could he get it to synchronise with the NTP pool I’d nominated. I checked and re-checked the new Ethernet cables I’d used. I triple-checked the switch config. I rebooted the Airport Express (!) that everything was hanging off in my office. Due to the connectivity weirdness I was also unable to update the firmware on the cluster. I grumbled and moaned a lot.

Tom then had another poke around and noticed that, for some reason, no gateway was configured on node 2. He added one in and voilà, the node started merrily chatting to its peers and the outside world. Tom has the patience of a saint. And I was pretty excited that it was working.

SC_Tom_tweet

Tom has been super supportive in, well, supporting me during this installation. He’s been responsive, knowledgeable and has made the whole installation experience a breeze, minor glitches notwithstanding.
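If you ever find yourself chasing a similar gateway or NTP issue, the checks below are the generic Linux equivalents of what we worked through over the support tunnel. None of this is Scale-specific, and the gateway address is just an example for the sketch:

ip route show   # is there a default route at all?
ip route add default via 192.168.1.1   # add one if it’s missing (substitute your own router’s address)
ping -c 3 8.8.8.8   # basic connectivity to the outside world
ntpdate -q 0.pool.ntp.org   # query the NTP pool without actually setting the clock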

 

Firmware Update

I thought I’d also quickly run through the firmware update process, as it’s extremely simple and I like posts with screenshots. I think it took 10 minutes, tops, which is a little quicker than usual. That’s probably down to a few factors: there were no VMs running on the nodes (the day job has been a bit busy), and it was a fairly minor update. Scale generally suggest 20 minutes per node for updates.

Here’s the process. Firstly, if there’s new firmware available to install, you’ll see it in the top-right corner of the HC3 GUI.

sc01

Click on “Update Available” for more information. You can also access the release notes from here.

sc02

If you click on “Apply Update” you’ll be asked to confirm your decision.

sc03

You’ll then see a bunch of activity.

sc04

 

sc05

 

sc06

And once it’s done, the version will change. In this case I went from 6.4.1 to 6.4.2 (a reasonably minor update).

sc08

The whole thing took about 10 minutes, according to the cluster logs.

sc09

 

Conclusion

Getting up and running with the HC3 platform has been a snap so far, even though there were some minor issues getting started. Support staff were super responsive, instructions were easy to follow and the general experience has been top notch. Coupled with the fact that the interface is really easy to use, I think Scale are onto a winner here, particularly given the market they’re aiming at and the price point. I’m looking forward to putting together some more articles on actually using the kit.