Scale Computing Have Been Busy

I recently had the opportunity to get on a call with Alan Conboy to talk about what’s been happening with Scale Computing lately. It was an interesting chat, as always, and I thought I’d share some of the news here.


Detroit Rock City

It’s odd how sometimes I forget that pretty much every type of business in existence uses some form of IT. Arts and performance organisations, such as the Detroit Symphony Orchestra, are no exception. They are also now very happy Scale customers. There’s a YouTube video detailing their experiences that you can check out here.


Lenovo Partnership

Scale and Lenovo recently announced a strategic partnership, focussed primarily on edge workloads, with particular emphasis on retail and industrial environments. You can download a solution brief here. This doesn’t mean that Lenovo are giving up on any of their other HCI partnerships, but it does give them a competent partner with which to attack the edge infrastructure market.


GCG, Yeah You Know Me

Grupo Colón Gerena is a Puerto Rico-based “restaurant management company that owns franchises of brands including Wendy’s, Applebee’s, Famous Dave’s, Sizzler’s, Longhorn Steakhouse, Olive Garden and Red Lobster throughout the island”. You may recall Puerto Rico suffered through some pretty devastating weather in 2017 thanks to Hurricane Maria. GCG have been running the bulk of their workload in Google Cloud since just before the event, and are still deciding whether they really want to move it back to an on-premises solution. There’s definitely a good story with Scale delivering workloads from the edge to the core and through to Google Cloud. You can read the full case study here.


Thoughts

It’s no big secret that I’m a fan of Scale Computing. And not just because I have an old HC1000 in my office that I fire up every now and then (Collier, I’m still waiting on those SSDs you promised me a few years ago). They are relentlessly focussed on delivering easy-to-use solutions that work well and deliver great resiliency and performance, particularly in smaller environments. Their DRaaS play, and partnership with Google, has opened up some doors to customers that may not have considered Scale previously. The Lenovo partnership, and success with customers like GCG and DSO, is proof that Scale are doing a lot of good stuff in the HCI space.

Anyone who’s had the good fortune to deal with Scale, from their executives and founders through to their support staff, will tell you that they’re super easy to deal with and pretty good at what they do. It’s great to see them enjoying some success. It strikes me that they go about their business without a lot of the chest-beating and carry-on associated with some other vendors in the industry. This is a good thing, and I’m looking forward to seeing what comes next for them.

Scale Computing Announces Partnership With APC by Schneider Electric For DCIAB

(I’m really hoping the snappy title will bring in a few more readers.) I recently had a chance to speak with Doug Howell, Senior Director of Global Alliances at Scale Computing, about their Data Centre In A Box (DCIAB) offering in collaboration with APC by Schneider Electric, and thought I’d share some thoughts.


It’s A Box

Well, a biggish box. The solution is built on APC’s Micro Data Centre solution, combined with 3 Scale HC3 1150 nodes. The idea is that you have one SKU to deal with, which includes the Scale HC3 nodes, UPS, PDUs, and rack. You can then wheel it in, plug it into the wall and network, and it’s ready to go. Howell mentioned that they have a customer in the process of deploying a significant number of these in the wild.

Note that this is slightly different to the EMEA campaign with Lenovo from earlier in the year and is focused, at this stage, on the North American market. You can grab the solution brief from here.


Thoughts

The “distributed enterprise” has presented challenges to IT organisations for years now. Not everyone works in a location that is nicely co-located with headquarters. And these folks need compute and storage too. You’ve no doubt heard about how the “edge” is the new hotness in IT, and I frequently hear pitches from vendors talking about how they handle storage or compute requirements at the edge in some kind of earth-shattering way. It’s been a hard problem to solve, because locality (of storage, compute, or both) is generally a big part of the success of these solutions, particularly from the end user’s perspective. This is oftentimes at odds with traditional enterprise deployments, where all of the key compute and storage components are centrally located for ease of access, management and protection. Improvements in WAN technologies and distributed application availability are changing that story to an extent though, hence the requirement for these kinds of edge solutions. Sometimes, you just need to have stuff close to where your main business activity is occurring.

So what makes the Scale and APC offering any different? Nothing really, except that Scale have built their reputation on being able to deliver simple-to-operate hyper-converged infrastructure to small and medium enterprises with a minimum of fuss and at a reasonable price point. The cool thing here is that you’re also leveraging APC’s ability to deliver robust micro DC services alongside Scale’s offering, which can fit in well with their other solutions, such as DRaaS.

Not every solution from every vendor needs to be unique for it to stand out from the crowd. Scale have historically demonstrated a relentless focus on quality products, excellent after-sales support and market focus. This collaboration will no doubt open up some more doors for them with APC customers who were previously unaware of the Scale story (and vice versa). This can only be a good thing in my opinion.

Scale Computing and WinMagic Announce Partnership, Refuse to Sit Still

Scale Computing and WinMagic recently announced a partnership improving the security of Scale’s HC3 solution. I had the opportunity to be briefed by the good folks at Scale and WinMagic and thought I’d provide a brief overview of the announcement here.


But Firstly, Some Background

Scale Computing announced their HC3 Cloud Unity offering in late September this year. Cloud Unity, in a nutshell, lets you run embedded HC3 instances in Google Cloud. Coupled with some SD-WAN smarts, you can move workloads easily between on-premises infrastructure and GCP. It enables companies to perform lift and shift migrations, if required, with relative ease, and removes a lot of the complexity traditionally associated with deploying hybrid-friendly workloads in the data centre.


So the WinMagic Thing?

WinMagic have been around for quite some time, and offer a range of security products aimed at organisations of various sizes. This partnership with Scale delivers SecureDoc CloudVM as a mechanism for encryption and key management. You can download a copy of the brochure from here. The point of the solution is to provide a secure mechanism for hosting your VMs either on-premises or in the cloud. Key management can be a pain in the rear, and WinMagic provides a fully-featured solution for this that’s easy to use and simple to manage. There’s broad support for a variety of operating environments and clients. Authentication and authorised key distribution take place before workloads are deployed, to ensure that the right person is accessing data from an expected place and device, and there’s support for password-only or multi-factor authentication.
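
To make the key-management idea more concrete, here’s a minimal sketch of the general envelope-encryption pattern that products in this space implement. It’s purely illustrative Python using the cryptography library – it is not WinMagic’s actual API (which I haven’t seen), and the class and checks below are my own inventions.

# Toy illustration of envelope encryption with gated key release.
# Not WinMagic's implementation - just the general pattern.
from cryptography.fernet import Fernet

class ToyKeyManager:
    """Holds per-VM data encryption keys; releases them only after auth."""
    def __init__(self):
        self._keys = {}  # vm_id -> data encryption key

    def create_key(self, vm_id):
        key = Fernet.generate_key()
        self._keys[vm_id] = key
        return key

    def release_key(self, vm_id, authenticated):
        # Real products verify user, device and location before release.
        if not authenticated:
            raise PermissionError("authenticate before key release")
        return self._keys[vm_id]

kms = ToyKeyManager()
dek = kms.create_key("vm-001")
ciphertext = Fernet(dek).encrypt(b"sensitive VM disk contents")

# At boot time, the key is only handed over once authentication succeeds.
plaintext = Fernet(kms.release_key("vm-001", authenticated=True)).decrypt(ciphertext)
assert plaintext == b"sensitive VM disk contents"

The useful property of the pattern is that the ciphertext can live anywhere – on-premises or in the cloud – while access to it is governed centrally by the key manager.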


Thoughts

Scale Computing have been doing some really cool stuff in the hyperconverged arena for some time now. The new partnership with Google Cloud, and the addition of the WinMagic solution, demonstrates their focus on improving an already impressive offering with some pretty neat features. It’s one thing to enable customers to get to the cloud with relative ease, but it’s a whole other thing to be able to help them secure their assets when they make that move to the cloud.

It’s my opinion that Scale Computing have been the quiet achievers in the HCI marketplace, with reportedly fantastic customer satisfaction and a solid range of products on offer at a very reasonable RRP. Couple this with an intelligent hypervisor platform and the ability to securely host assets in the public cloud, and it’s clear that Scale Computing aren’t interested in standing still. I’m really looking forward to seeing what’s next for them. If you’re after an HCI solution where you can start really (really) small and grow as required, it would be worthwhile having a chat to them.

Also, if you’re into that kind of thing, Scale and WinMagic are hosting a joint webinar on November 28 at 10:30am EST. Registration for the webinar “Simplifying Security across your Universal I.T. Infrastructure: Top 5 Considerations for Securing Your Virtual and Cloud IT Environments, Without Introducing Unneeded Complexity” can be found here.

Scale Computing Announces Cloud Unity – Clouds For Everyone


The Announcement

Scale Computing recently announced the availability of a new offering: Cloud Unity. I had the opportunity to speak with the Scale Computing team at VMworld US this year to run through some of the finer points of the announcement and thought I’d cover it off here.


Cloud What?

So what exactly is Cloud Unity? If you’ve been keeping an eye on the IT market in the last few years, you’ll have noticed that everything has cloud of some type in its product name. In this case, Cloud Unity is a mechanism by which you can run Scale Computing’s HC3 hypervisor nested in Google Cloud Platform (GCP). The point of the solution, ostensibly, is to provide a business with disaster recovery capability on a public cloud platform. You’re basically running an HC3 cluster on GCP, with the added benefit that you can create an encrypted VXLAN connection between your on-premises HC3 cluster and the GCP cluster. The neat thing here is that everything runs as a small instance to handle replication from on-premises, and only scales up when you actually need to run the VMs in anger. The service is bought through Scale Computing, and starts from as little as US$1000 per month (for 5TB). There are other options available as well, and the solution is expected to be Generally Available in Q4 this year.
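
As a quick back-of-the-envelope on that entry price (assuming, and this is my assumption rather than anything Scale told me, that the entry tier is all you need):

# Back-of-envelope on the published entry price: US$1000/month for 5TB.
entry_price_per_month = 1000  # US dollars
entry_capacity_tb = 5

per_tb_month = entry_price_per_month / entry_capacity_tb
per_year = entry_price_per_month * 12
print(f"${per_tb_month:.0f}/TB/month, ${per_year}/year")  # $200/TB/month, $12000/year

For a small business that has never been able to justify a second site, that’s a pretty approachable number for a DR capability.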


Conclusion and Further Reading

This isn’t the first time nested virtualisation has been released as a product, with AWS, Azure and Ravello all doing similar things. The cool thing here is that it’s aimed at Scale Computing’s traditional customers, namely small to medium businesses. These are the people who’ve bought into the simplicity of the Scale Computing model and don’t necessarily have the time to re-write their line-of-business applications as cloud-native applications (as much as it would be nice if that were the case). Whilst application lift and shift isn’t the ideal outcome, the other benefit of this approach is that companies that may not have previously invested in DR capability can now leverage this product to solve the technical part of the puzzle fairly simply.

DR should be a simple thing to have in place. Everyone has horror stories of data centres going offline because of natural disasters or, more commonly, human error. The price of good DR, however, has traditionally been quite high. And it’s been pretty hard to achieve. The beauty of this solution is that it provides businesses with solid technical capabilities for a moderate price, and allows them to focus on people and processes, which are arguably the key parts of DR that are commonly overlooked. Disasters are bad, which is why they’re called disasters. If you run a small to medium business and want to protect yourself from bad things happening, this is the kind of solution that should be of interest to you.

A few years ago, Scale Computing sent me a refurbished HC1000 cluster to play with, and I’ve had first-hand exposure to the excellent support staff and experience that Scale Computing tell people about. The stories are true – these people are very good at what they do and this goes a long way in providing consumers with confidence in the solution. This confidence is fairly critical to the success of technical DR solutions – you want to leverage something that’s robust in times of duress. You don’t want to be worrying about whether it will work or not when your on-premises DC is slowly becoming submerged in water because building maintenance made a boo boo. You want to be able to focus on your processes to ensure that applications and data are available when and where they’re required to keep doing what you do.

If you’d like to read what other people have written, Justin Warren posted a handy article at Forbes, and Chris Evans provided a typically insightful overview of the announcement and the challenges it’s trying to solve that you can read here. Scott D. Lowe also provided a very useful write-up here. Scale Computing recently presented at Tech Field Day 15, and you can watch their videos here.

Scale Computing and single-node configurations for the discerning punter


I had the opportunity to speak to Jason Collier and Alan Conboy from Scale Computing about a month ago, after they announced their “single-node configuration of its HC3 virtualization platform”. This is “designed to deliver affordable, flexible hyperconverged infrastructure for distributed enterprise and Disaster Recovery (DR) use cases”. I’m a big fan of Scale’s HC3 platform (and not just because they gave me an HC1000 demo cluster to trial). It’s simple, it’s cheap, and it works well for its target market. Traditionally (!), HCI and other converged offerings have called for 3 nodes (and sometimes 2) to be considered resilient at a cluster level, and when Scale sell you their HC3 solution you start at 3 nodes and work up from there. So how do they do the single-node thing?


Use Cases

In the market I operate in, the use of a single node at the edge makes quite a bit of sense. Customers with edge sites generally don’t have access to IT resources or even suitable facilities, but they sometimes have a reasonable link back to the core or decent WAN optimisation in place. So why not use the core to provide the resiliency you would normally have configured locally? Scale call this the “distributed enterprise” model, with the single-node HC3 solution replicating workload back to the core. If something goes wrong at the edge you can point users to your workload in the core and keep on going [do you like how I made that sound really simple?].

You can also use the single-node configuration as a DR target for an HC3 cluster. The built-in replication can be configured quickly and without extra software to a single HC3 node located locally or remotely. This, again, is a cheap and cheerful solution to what can sometimes be seen as an expensive insurance policy against infrastructure failure.
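
Whichever way you use it, it’s worth doing a back-of-the-envelope check that your link can actually carry the replication traffic before you commit. Here’s a rough sketch in Python – the change rate, window and efficiency figures are placeholders of mine rather than Scale numbers, so substitute your own:

# Rough replication-bandwidth estimate for a remote DR target.
# All inputs are illustrative placeholders; measure your own change rate.
def required_mbps(daily_change_gb, replication_window_hours, efficiency=0.7):
    """Link speed needed to ship a day's changed data in the given window.

    efficiency roughly accounts for protocol overhead and link contention.
    """
    megabits = daily_change_gb * 8 * 1000  # GB -> megabits
    seconds = replication_window_hours * 3600
    return megabits / seconds / efficiency

# e.g. 50GB of changed data per day, replicated over a 12-hour window
print(f"{required_mbps(50, 12):.1f} Mbps")  # ~13.2 Mbps

If the number that falls out is bigger than the link back to the core, the cheap and cheerful story gets a lot less cheerful.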


Thoughts and Further Reading

Risk versus cost is a funny thing. Some people are very good at identifying risk, but not very good at understanding what it will cost them if something goes wrong. Some people like to talk a lot about risk, but then their only mitigation is to note it in a “risk register”. In any case, I think Scale have come up with a smart solution, because they’re selling to people in a cost-sensitive market. This solution helps to alleviate some of the expense related to providing DR and distributed compute (and thus reduces some of the risk for SMBs). That said, everyone’s situation is different, and you need to assess the risk of running workload at the edge on a single node. If that node fails, you can fail over your workload to the core. Assuming you have a process in place for this. And applications that support it. And sufficient bandwidth from the core to the edge. In any case, if you’re in the market for low-cost HCI, Scale should be on your radar. And if you’re looking at distributed HCI you should definitely be looking at the single-node solution. You can read some more in Scale’s press release here. The solution brief can be found here. El Reg also has something to say about it here. Some generally very useful technical resources from Scale can be found here.

Scale Computing – If Only Everything Else Were This Simple

Disclaimer: Scale Computing have provided me with the use of a refurbished HC3 system comprised of 3 HC1000 nodes, along with an HP 2920-24G Gigabit switch. They are keen for me to post about my experiences using the platform and I am too, so this arrangement works out well for both parties. I’m not a professional equipment reviewer, and everyone’s situation is different, so if you’re interested in this product I recommend getting in touch with Scale Computing.


Introduction

This is the first of what is going to be a few posts covering my experiences with the Scale Computing HC3 platform. By way of introduction, I recently wrote a post on some of the new features available with HC3. Trevor Pott provides an excellent overview of the platform here, and Taneja Group provides an interesting head to head comparison with VSAN here.

In this post I’d like to cover off my installation experience and make a brief mention of the firmware update process.


Background

I’d heard of Scale Computing around the traps, but hadn’t really taken the time to get to know them. For whatever reason I was given a run through of their gear by Alan Conboy. We agreed that it would be good for me to get hands-on with some kit, and by the next week I had 3 boxes of HC1000 nodes lob up at my front door. Scale support staff contacted me to arrange installation as soon as they knew the delivery had occurred. I had to travel, however, so I asked them to send me through the instructions and I’d get to it when I got home. Long story short, I’d been back a week before I got these things out of the box and started looking into getting stuff set up. By the way, the screwdriver included with every node was a nice touch.

The other issue I had was that I really haven’t had a functional lab environment at home for some time, so I had no switching or basic infrastructure services to speak of. And my internet connection is ADSL 1, so uploading big files can be a pain. And while I have DNS in the house, it’s really not enterprise grade. In some ways, my generally shonky home “lab” environment is similar to a lot of small business environments I’ve come across during my career. Perhaps this is why Scale support staff are never too stressed about the key elements being missing.

As I mentioned in the disclaimer, Scale also kindly provided me with an HP 2920-24G gigabit switch. I set this up per the instructions here. In the real world, you’d be running off two switches. But as anyone who’s been to my house can attest, my home office is a long way from the real world.


I do have a rack at home, but it’s full of games consoles for the moment. I’m currently working on finding a more appropriate home for the HC3 cluster.


First Problem

So, I unpacked and cabled up the three nodes as per the instructions in the HC3 Getting Started Guide. I initialised each node, and then started the cluster initialisation process on the first node. I couldn’t, however, get the nodes to join the cluster or talk to each other. I’d spent about an hour unpacking everything and then another hour futzing about with the nodes trying to get them to talk to each other (or me). Checked cables, checked switch configuration, and so forth. No dice. It was Saturday afternoon, so I figured I’d move on to some LEGO. I sent off a message to Scale support to provide an update on my progress and figured I’d hear back Tuesday AM my time (the beauty of living at GMT+10). To my surprise I got a response back from Tom Roeder five minutes after I sent the e-mail. I chipped him about working too late, but he claims he was already working on another case.

It turns out that the nodes I was sent had both the 10Gbps and 1Gbps cards installed in them, and by default the 10Gbps cards were being used. The simplest fix for this (given that I wouldn’t be using 10Gbps in the short term) was to remove the cards from each node.


Once this was done, I had to log into each node as the root user and run the following command:

/opt/scale/libexec/40-scale-persistent-net-generate
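# (as I understand it, this regenerates the persistent network-interface naming so that the remaining 1GbE NICs are enumerated correctly once the 10Gbps cards are out)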

I then rebooted and reinitialised each node. At this point I was able to get them joining the cluster. This took about twenty minutes. Happy days.


Second Problem

So I thought everything was fine, but then I started getting messages about the second node being unable to update from the configured time source. I messaged Tom and he got me to open up a support tunnel on one of the working nodes (this has been a bloody awesome feature of the support process). While the cluster looked in good shape, he wasn’t able to ping external DNS servers (e.g. 8.8.8.8) from node 2, nor could he get it to synchronise with the NTP pool I’d nominated. I checked and re-checked the new Ethernet cables I’d used. I triple-checked the switch config. I rebooted the Airport Express (!) that everything was hanging off in my office. Due to the connectivity weirdness I was also unable to update the firmware on the cluster. I grumbled and moaned a lot.

Tom then had another poke around and noticed that, for some reason, no gateway was configured on node 2. He added one in and voilà, the node started merrily chatting to its peers and the outside world. Tom has the patience of a saint. And I was pretty excited that it was working.
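
For what it’s worth, this sort of triage is easy to script for next time. Here’s a rough Python sketch of the basic checks involved (default route, external ping, DNS, NTP) – my own quick hack, not a Scale tool, and it assumes a Linux host with ip and ping available:

# Quick connectivity triage for a node: default route, external ping,
# DNS resolution, and NTP reachability. My own sketch, not a Scale tool.
import socket
import subprocess

def has_default_route():
    # Linux-specific: look for a default route in the routing table.
    out = subprocess.run(["ip", "route"], capture_output=True, text=True)
    return any(line.startswith("default") for line in out.stdout.splitlines())

def can_ping(host="8.8.8.8"):
    return subprocess.run(["ping", "-c", "2", "-W", "2", host],
                          capture_output=True).returncode == 0

def can_resolve(name="pool.ntp.org"):
    try:
        socket.gethostbyname(name)
        return True
    except socket.gaierror:
        return False

def ntp_reachable(server="pool.ntp.org"):
    # Minimal SNTP client request: 48 bytes, version 3, client mode.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(3)
    try:
        s.sendto(b"\x1b" + 47 * b"\0", (server, 123))
        return len(s.recv(48)) >= 48
    except OSError:
        return False
    finally:
        s.close()

for check in (has_default_route, can_ping, can_resolve, ntp_reachable):
    print(f"{check.__name__}: {'OK' if check() else 'FAIL'}")

A missing default route shows up immediately with checks like these, rather than after an hour of cable-swapping.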


Tom has been super supportive in, well, supporting me during this installation. He’s been responsive, knowledgeable and has made the whole installation experience a breeze, minor glitches notwithstanding.


Firmware Update

I thought I’d also quickly run through the firmware update process, as it’s extremely simple and I like posts with screenshots. I think it took 10 minutes, tops. That’s a little unusual, and probably down to a few factors, including the lack of VMs running on the nodes (the day job has been a bit busy) and the fact that it was a fairly minor update. Scale generally suggest 20 minutes per node for updates.

Here’s the process. Firstly, if there’s new firmware available to install, you’ll see it in the top-right corner of the HC3 GUI.

[Screenshot: the “Update Available” notification in the HC3 GUI]

Click on “Update Available” for more information. You can also access the release notes from here.

[Screenshot: the update details and release notes]

If you click on “Apply Update” you’ll be asked to confirm your decision.

[Screenshot: the update confirmation prompt]

You’ll then see a bunch of activity.

[Screenshots: update activity in progress]

And once it’s done, the version will change. In this case I went from 6.4.1 to 6.4.2 (a reasonably minor update).

[Screenshot: the new firmware version number]

The whole thing took about 10 minutes, according to the cluster logs.

[Screenshot: the cluster log entries for the update]


Conclusion

Getting up and running with the HC3 platform has been a snap so far, even though there were some minor issues getting started. Support staff were super responsive, instructions were easy to follow and the general experience has been top notch. Coupled with the fact that the interface is really easy to use, I think Scale are onto a winner here, particularly given the market they’re aiming at and the price point. I’m looking forward to putting together some more articles on actually using the kit.


Scale Computing Announces Support For Hybrid Storage and Other Good Things


If you’re unfamiliar with Scale Computing, they’re a hyperconverged infrastructure (HCI) vendor out of Indianapolis delivering a solution aimed squarely at the small to mid-size market. They’ve been around since 2008, and launched their HC3 platform in 2012. They have around 1600 customers, and about 6000 units deployed in the field. Justin Warren provides a nice overview here as part of his research for Storage Field Day 5, while Trevor Pott wrote a comprehensive review for El Reg that you can read here. I was fortunate enough to get a briefing from Alan Conboy from Scale Computing and thought it worthy of putting pen to paper, so to speak.


So What is a Scale Computing?

Scale describes the HC3 as a scale-out system. It has the following features:

  • 3 or more nodes in a fully automated Active/Active architecture;
  • Clustered virtualization compute platform with no virtualization licensing (KVM-based, not VMware);
  • Protocol-less pooled storage resources that eliminate external storage requirements entirely, with no SAN or VSA;
  • 60%+ efficiency gains built into the IO path – Scale made much of this in my briefing, and it certainly looks good on paper;
  • A self-healing, self-load-balancing cluster – the nodes talk directly to each other; and
  • Scale’s State Machine technology, which makes the cluster self-aware with no need for external management servers. When you’ve done as many vSphere deployments as I have, this becomes very appealing.

You can read a bit more about how it all hangs together here. Here’s a simple diagram of how it looks from a networking perspective. Each node has 4 NICs, with two going to the back-end and two ports for the front-end. You can read up on recommended network switches here.

[Diagram: HC3 networking overview]

Each node contains:

  • 8 to 40 vCores;
  • 32 to 512GB of VM memory;
  • Quad network interface ports in 1GbE or 10GbE; and
  • 4 or 8 spindles at 7.2k, 10k, or 15k RPM, with SSD as a tier.

Here’s an overview of the different models, along with list prices in $US. You can check out the specification sheet here.

[Table: HC3 node models and US list prices]


So What’s New?

Flash. Scale tell me “it’s not being used as a simple cache, but as a proper, fluid tier of storage to meet the needs of a growing and changing SMB to SME market”. There are some neat features built into the interface, which I was able to test during the briefing. In a nutshell, there’s a level of granularity that the IT generalist should be pleased with. You can:

  • Set different priorities for VMs on a per-virtual-disk basis;
  • Change them on the fly as needed;
  • Make use of SLC SSD as a storage tier, not just a cache; and
  • Keep unnecessary workloads off the SSD tier completely (see the sketch below).
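
To illustrate the per-disk priority idea, here’s a toy model in Python. To be clear, this is my own invention for illustration, not Scale’s HEAT algorithm (whose internals I haven’t seen): each virtual disk carries an administrator-assigned priority that weights its observed heat, and a priority of zero pins a disk off the flash tier entirely.

# Toy model of priority-weighted flash tiering. Not Scale's HEAT
# implementation - just an illustration of per-virtual-disk priority.
from dataclasses import dataclass

@dataclass
class VirtualDisk:
    name: str
    iops: float   # observed access rate
    priority: int # 0 = keep off SSD entirely; higher = more flash-worthy

def ssd_candidates(disks, ssd_slots):
    """Pick which virtual disks get SSD residency this interval."""
    eligible = [d for d in disks if d.priority > 0]
    # Weight observed heat by the administrator-assigned priority.
    ranked = sorted(eligible, key=lambda d: d.iops * d.priority, reverse=True)
    return [d.name for d in ranked[:ssd_slots]]

disks = [
    VirtualDisk("sql-data", iops=900, priority=4),
    VirtualDisk("file-share", iops=1200, priority=1),
    VirtualDisk("backup-staging", iops=2000, priority=0),  # pinned off SSD
]
print(ssd_candidates(disks, ssd_slots=2))  # ['sql-data', 'file-share']

The administrator sets the knob once per virtual disk, and the tiering logic does the rest as access patterns change.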

Scale is deploying its new HyperCore Enhanced Automated Tiering (HEAT) technology across the HC3 product line and is introducing a flash storage tier as part of its HC2150 and HC4150 appliances. Scale tell me that they are “[a]vailable in 4- or 8-drive units”, and “Scale’s latest offerings include one 400 or 800GB SSD with three NL-SAS HDD in 1-6TB capacities and memory up to 256GB, or two 400 or 800GB SSD with 6 NL-SAS HDD in 1-2TB capacities and up to 512 GB memory respectively. Network connectivity for either system is achieved through two 10GbE SFP+ ports per node”.

It’s also worth noting that the new products can be used to form new clusters, or they can be added to existing HC3 clusters. Existing workloads on those clusters will automatically utilize the new storage tier when the new nodes are added. You can read more on what’s new here.


Further Reading and Feelings

As someone who deals with reasonably complex infrastructure builds as part of my day job, it was refreshing to get a briefing from a company whose focus is on simplicity for a certain market segment, rather than on trying to be the HCI vendor everyone goes to. I was really impressed with the intuitive nature of the interface, the simplicity with which tasks could be achieved, and the thought that’s gone into the architecture. The price, for what it offers, is very competitive as well, particularly in the face of more traditional compute + storage stacks aimed at SMEs. I’m working with Scale to get myself some more stick time in the near future and am looking forward to reporting back with the results.