Scale Computing and Leostream – Is It Finally VDI’s Year?

Scale Computing announced a partnership with Leostream a little while ago. With the global pandemic drastically changing the way a large number of organisations are working, it seemed like a good time to talk to Alan Conboy about how this all works from a Scale Computing and Leostream perspective.

 

Easy As 1, 2

Getting started with Leostream is surprisingly simple. To start with, you’ll need to deploy a Gateway and a Broker VM. These are CentOS machines (if you’re a Scale Computing customer, you can likely get some minimally configured, pre-packaged qcow appliances from Alan). You’ll need to punch a hole through your firewall for SSL traffic, and run a couple of simple commands on the VMs, but that’s it.
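If you want to sanity-check that firewall hole before pointing users at it, something like the minimal Python sketch below will do the trick. It simply attempts a TLS handshake against the Gateway’s public endpoint – the hostname is a hypothetical placeholder, and this is a generic connectivity test, not anything Leostream-specific.

```python
import socket
import ssl

# Hypothetical public hostname for the Leostream Gateway VM.
GATEWAY_HOST = "gateway.example.org"
GATEWAY_PORT = 443  # the SSL port opened on the firewall


def gateway_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TLS handshake with the gateway succeeds."""
    # The default context verifies certificates, so a self-signed
    # cert on the gateway will raise ssl.SSLError here.
    context = ssl.create_default_context()
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                print(f"Connected to {host}:{port} using {tls.version()}")
                return True
    except (OSError, ssl.SSLError) as err:
        print(f"Gateway not reachable: {err}")
        return False


if __name__ == "__main__":
    gateway_reachable(GATEWAY_HOST, GATEWAY_PORT)
```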

But I’m getting ahead of myself. The way it works is that Leostream has a small agent that you can deploy across the PCs in your fleet. When users hit the Gateway, they can be directed to their own (physical) desktops inside the organisation. They can then access those desktops remotely (using RDP, SSH, or VNC) via any browser that supports SSL and HTML5. So, rather than having to go out and grab a bunch of laptops, set up a VPN (or scale it out), and have a desktop image ready to go (along with the requisite VDI resources hosted somewhere), you can have your remote workforce working remotely from day one. There are Windows, Java, and Linux agents, so if you have users running macOS or Linux, they can still come to the party.
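To make the broker part of that flow a little more concrete, here’s a toy sketch of the kind of mapping a connection broker maintains – authenticated user to assigned (physical) desktop and display protocol. The names and structure are purely illustrative; this is not Leostream’s actual data model or API.

```python
from dataclasses import dataclass


@dataclass
class Desktop:
    hostname: str  # the user's physical PC inside the organisation
    protocol: str  # "RDP", "SSH", or "VNC"


# Toy assignment table; a real broker would resolve this against
# a directory service rather than a hard-coded dictionary.
ASSIGNMENTS = {
    "alice": Desktop("pc-alice.corp.example.org", "RDP"),
    "bob": Desktop("ws-bob.corp.example.org", "VNC"),
}


def broker_connect(username: str) -> str:
    """Return the connection target for a user, broker-style."""
    desktop = ASSIGNMENTS.get(username)
    if desktop is None:
        raise LookupError(f"No desktop assigned to {username!r}")
    return f"{desktop.protocol.lower()}://{desktop.hostname}"


print(broker_connect("alice"))  # rdp://pc-alice.corp.example.org
```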

I know I’ve done a bad job of describing the solution, so I recommend you check out this blog post instead.

 

Thoughts

I’m not at all passionate about VDI and End User Computing in the same way some people I know are. I always thought it was a neat solution that was frequently poorly executed and oftentimes cost a lot of money. But it’s a weird time for the world and, sadly, it might be something like a global pandemic that finally means that VDI gets its due as a useful solution for remote workers. I’d also like to point out that this is just a part of what Leostream can do. If you’re after something outside of the Scale Computing alliance – they can probably help you out.

I’ve spoken to Alan and the Scale Computing team about Leostream a few times now, and I really do like the idea of being able to bring users back into the network, rather than extending the network out to your users. You don’t have to go crazy acquiring a bunch of laptops or mobile devices for traditionally desk-bound users, and then re-imaging those devices for them. You don’t need to spend a tonne of cash on extra VPN connectivity or compute to support a bunch of new “desktop” VMs. Instead, in a fairly short amount of time, you can get users working the way they always have, with a minimum of fuss. This is exactly the kind of approach I’ve come to expect from Scale Computing – keep it simple, easy to deploy, cost-conscious, and functional.

As I said before – VDI solutions don’t really excite me. But I do appreciate the flexibility they can offer in terms of the ability to access corporate workloads from non-traditional locales. This solution takes it a step further, and does a great job of delivering what could be a complicated solution in a simple and functional fashion. This is the kind of thing we need more of at the moment.

Datrium Enhances DRaaS – Makes A Cool Thing Cooler

Datrium recently made a few announcements to the market. I had the opportunity to speak with Brian Biles (Chief Product Officer, Co-Founder), Sazzala Reddy (Chief Technology Officer and Co-Founder), and Kristin Brennan (VP of Marketing) about the news and thought I’d cover it here.

 

Datrium DRaaS with VMware Cloud

Before we talk about the new features, let’s quickly revisit the DRaaS for VMware Cloud offering, announced by Datrium in August this year.

[image courtesy of Datrium]

The cool thing about this offering was that, according to Datrium, it “gives customers complete, one-click failover and failback between their on-premises data center and an on-demand SDDC on VMware Cloud on AWS”. There are some real benefits to be had for Datrium customers, including:

  • Highly optimised and more efficient than some competing solutions;
  • Consistent management for both on-premises and cloud workloads;
  • Eliminates the headaches as enterprises scale;
  • Single-click resilience;
  • Simple recovery from current snapshots or old backup data;
  • Cost-effective failback from the public cloud; and
  • Purely software-defined DRaaS on hyperscale public clouds for reduced deployment risk long term.

But what if you want a little flexibility in terms of where those workloads are recovered? Read on.

Instant RTO

So you’re protecting your workloads in AWS, but what happens when you need to stand up stuff fast in VMC on AWS? This is where Instant RTO can really help. There’s no rehydration or backup “recovery” delay. Datrium tells me you can perform massively parallel VM restarts (hundreds at a time) and you’re ready to go in no time at all. The full RTO varies by run-book plan, but by booting VMs from a live NFS datastore, you know it won’t take long. Failback uses VADP (VMware’s vStorage APIs for Data Protection).
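To get a feel for why massively parallel restarts matter for RTO, here’s a rough Python sketch comparing serial and parallel behaviour. The restart_vm function is a stand-in of my own invention – Datrium’s actual approach boots VMs straight off the live NFS datastore rather than making per-VM calls like this.

```python
import time
from concurrent.futures import ThreadPoolExecutor, as_completed


def restart_vm(name: str) -> str:
    """Stand-in for a per-VM restart; sleeps to simulate the work."""
    time.sleep(0.1)  # pretend each restart takes ~100ms
    return name


vms = [f"vm-{i:03d}" for i in range(300)]  # "hundreds at a time"

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=100) as pool:
    futures = [pool.submit(restart_vm, vm) for vm in vms]
    done = [f.result() for f in as_completed(futures)]
elapsed = time.perf_counter() - start

# Serially this would take ~30s (300 x 0.1s); 100-wide parallelism
# brings it down to roughly 0.3s.
print(f"Restarted {len(done)} VMs in {elapsed:.1f}s")
```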

[image courtesy of Datrium]

The only cost during normal business operations (when not testing or deploying DR) is the cost of storing ongoing backups. And these are automatically deduplicated, compressed, and encrypted. In the event of a disaster, Datrium DRaaS provisions an on-demand SDDC in VMware Cloud on AWS for recovery. All the snapshots in S3 are instantly made executable on a live, cloud-native NFS datastore mounted by ESX hosts in that SDDC, with caching on NVMe flash. Instant RTO is available from Datrium today.

DRaaS Connect

DRaaS Connect extends the benefits of Instant RTO DR to any vSphere environment. DRaaS Connect is available for two different vSphere deployment models:

  • DRaaS Connect for VMware Cloud offers instant RTO disaster recovery from an SDDC in one AWS Availability Zone (AZ) to another;
  • DRaaS Connect for vSphere On Prem integrates with any vSphere physical infrastructure on-premises.

[image courtesy of Datrium]

DRaaS Connect for vSphere On Prem extends Datrium DRaaS to any vSphere on-premises infrastructure. It will be managed by a cloud-based DRaaS control plane, used to define VM protection groups and their frequency, replication, and retention policies. On failback, DRaaS returns only changed blocks to vSphere and the local on-premises infrastructure through DRaaS Connect.
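As a rough illustration of what a protection group definition captures, here’s a hedged sketch. The field names are my own invention for the purposes of the example, not Datrium’s actual control plane schema.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class ProtectionGroup:
    """Illustrative protection group: which VMs, how often, kept how long."""
    name: str
    vm_patterns: List[str]          # VMs matched by name pattern
    snapshot_interval_minutes: int  # protection frequency
    replicate_to: str               # replication target
    retention_days: int             # how long snapshots are retained


tier1 = ProtectionGroup(
    name="tier1-databases",
    vm_patterns=["sql-*", "oracle-*"],
    snapshot_interval_minutes=15,
    replicate_to="s3://dr-bucket-us-east-1",
    retention_days=30,
)

print(f"{tier1.name}: snapshot every {tier1.snapshot_interval_minutes}m, "
      f"keep {tier1.retention_days}d at {tier1.replicate_to}")
```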

The other cool things to note about DRaaS Connect are that:

  • There’s no Datrium DHCI system required;
  • It’s a downloadable VM; and
  • You can start protecting workloads in minutes.

DRaaS Connect will be available in Q1 2020.

 

Thoughts and Further Reading

Datrium announced some research around disaster recovery and ransomware in enterprise data centres in concert with the product announcements. Some of it wasn’t particularly astonishing, with folks keen to leverage pay-as-you-go models for DR, and wanting easier mechanisms for data mobility. What was striking is that one of the main causes of disasters is people, not nature. Years ago I remember we used to plan for disasters that invariably involved some kind of flood, fire, or famine. Nowadays, we need to plan for some script kiddie pumping nasty code onto our boxes and trashing critical data.

I’m a fan of companies that focus on disaster recovery, particularly if they make it easy for consumers to access their services. Disasters happen frequently. It’s not a matter of if, just a matter of when. Datrium has acknowledged that not everyone is using their infrastructure, but that doesn’t mean it can’t offer value to customers using VMC on AWS. I’m not 100% sold on Datrium’s vision for “disaggregated HCI” (despite Hugo’s efforts to educate me), but I am a fan of vendors focused on making things easier to consume and operate for customers. Instant RTO and DRaaS Connect are both features that round out the DRaaS for VMware Cloud on AWS offering quite nicely.

I haven’t dived as deep into this as I’d like, but Andre from Datrium has written a comprehensive technical overview that you can read here. Datrium’s product overview is available here, and the product brief is here.

Random Short Take #20

Here are some links to some random news items and other content that I recently found interesting. You might find them interesting too. Episode 20 – feels like it’s becoming a thing.

  • Scale Computing seems to be having a fair bit of success with their VDI solutions. Here’s a press release about what they did with Harlingen WaterWorks System.
  • I don’t read Corey Quinn’s articles enough, but I am glad I read this one. Regardless of what you think about the enforceability of non-compete agreements (and regardless of where you’re employed), these things have no place in the modern workforce.
  • If you’re getting along to VMworld US this year, I imagine there’s plenty in your schedule already. If you have the time – I recommend getting around to seeing what Cody and Pure Storage are up to. I find Cody to be a great presenter, and Pure have been doing some neat stuff lately.
  • Speaking of VMworld, this article from Tom about packing the little things for conferences in preparation for any eventuality was useful. And if you’re heading to VMworld, be sure to swing past the VMUG booth. There’s a bunch of VMUG stuff happening at VMworld – you can read more about that here.
  • I promise this is pretty much the last bit of news I’ll share regarding VMworld. Anthony from Veeam put up a post about their competition to win a pass to VMworld. If you’re on the fence about going, check it out now (as the competition closes on the 19th August).
  • It wouldn’t be a random short take without some mention of data protection. This article about tiering protection data from George Crump was bang on the money.
  • Backblaze published their quarterly roundup of hard drive stats – you can read more here.
  • This article from Paul on freelancing and side gigs was comprehensive and enlightening. If you’re thinking of taking on some extra work in the hopes of making it your full-time job, or just wanting to earn a little more pin money, it’s worthwhile reading this post.

Random Short Take #18

Here are some links to some random news items and other content that I recently found interesting. You might find them interesting too. Episode 18 – buckle up kids! It’s all happening.

  • Cohesity added support for Active Directory protection with version 6.3 of the DataPlatform. Matt covered it pretty comprehensively here.
  • Speaking of Cohesity, Alastair wrote this article on getting started with the Cohesity PowerShell Module.
  • In keeping with the data protection theme (hey, it’s what I’m into), here’s a great article from W. Curtis Preston on SaaS data protection, and what you need to consider to not become another cautionary tale on the Internet. Curtis has written a lot about data protection over the years, and you could do a lot worse than reading what he has to say. And that’s not just because he signed a book for me.
  • Did you ever stop and think just how insecure some of the things that you put your money into are? It’s a little scary. Shell are doing some stuff with Cybera to improve things. Read more about that here.
  • I used to work with Vincent, and he’s a super smart guy. I’ve been at him for years to start blogging, and he’s started to put out some articles. He’s very good at taking complex topics and distilling them down to something that’s easy to understand. Here’s his summary of VMware vRealize Automation configuration.
  • Tom’s take on some recent CloudFlare outages makes for good reading.
  • Google Cloud has announced it’s acquiring Elastifile. That part of the business doesn’t seem to be as brutal as the broader Alphabet group when it comes to acquiring and discarding companies, and I’m hoping that the good folks at Elastifile are looked after. You can read more on that here.
  • A lot of people are getting upset with terms like “disaggregated HCI”. Chris Mellor does a bang up job explaining the differences between the various architectures here. It’s my belief that there’s a place for all of this, and assuming that one architecture will suit every situation is a little naive. But what do I know?

Scale Computing Announces HE500 Range

Scale Computing recently announced its “HC3 Edge Platform”. I had a chance to talk to Alan Conboy about it, and thought I’d share some of my thoughts here.

 

The Announcement

The HE500 series has been introduced to provide smaller customers and edge infrastructure environments with components that better meet their sizing and pricing requirements. There are a few different flavours of nodes, with every node offering Intel Xeon E-2100 CPUs, 32 – 64GB RAM, and dual power supplies. There are a couple of minor differences with regard to other configuration options.

  • HE500 – 4x 1, 2, 4, or 8TB HDD, 4x 1GbE, 4x 10GbE
  • HE550 – 1x 480GB or 960GB SSD, 3x 1, 2, or 4TB HDD, 4x 1GbE, 4x 10GbE
  • HE550F – 4x 240GB, 480GB, or 960GB SSD, 4x 1GbE, 4x 10GbE
  • HE500T – 4x 1, 2, 4, or 8TB HDD, or 8x 4TB or 8TB HDD, 2x 1GbE
  • HE550TF – 4x 240GB, 480GB, or 960GB SSD, 2x 1GbE

The “T” versions come in a tower form factor, and offer 1GbE connectivity only. Everything runs on Scale’s HC3 platform, and offers all of the features and support you expect with that platform. In terms of scalability, you can run up to 8 nodes in a cluster.

 

Thoughts And Further Reading

In the past I’ve made mention of Scale Computing and Lenovo’s partnership, and the edge infrastructure approach is also something that lends itself well to this arrangement. If you don’t necessarily want to buy Scale-badged gear, you’ll see that the models on offer look a lot like the SR250 and ST250 models from Lenovo. In my opinion, the appeal of Scale’s hyper-converged infrastructure story has always been the software platform that sits on the hardware, rather than the specifications of the nodes they sell. That said, these kinds of offerings play an important role in the market, as they give potential customers simple options to deliver solutions at a very competitive price point. Scale tell me that an entry-level 3-node cluster comes in at about US $16K, with additional nodes costing approximately $5K. Conboy described it as “[l]owering the barrier to entry, reducing the form factor, but getting access to the entire stack”.
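Using those quoted figures, a back-of-the-envelope costing looks like the sketch below. The function is my own toy arithmetic, not Scale’s price list – actual quotes will obviously vary.

```python
def cluster_cost_usd(nodes: int) -> int:
    """Rough HE500 cluster cost from the quoted figures."""
    if not 3 <= nodes <= 8:
        raise ValueError("HC3 Edge clusters run from 3 to 8 nodes")
    base, per_extra = 16_000, 5_000  # entry 3-node price, per extra node
    return base + (nodes - 3) * per_extra


for n in (3, 5, 8):
    print(f"{n} nodes: ${cluster_cost_usd(n):,}")
# 3 nodes: $16,000 / 5 nodes: $26,000 / 8 nodes: $41,000
```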

Combine some of these smaller solutions with various reference architectures and you’ve got a pretty powerful offering that can be deployed in edge sites for a small initial outlay. People often deploy compute at the edge because they have to, not because they necessarily want to. Anything that can be done to make operations and support simpler is a good thing. Scale Computing are focused on delivering an integrated stack that meets those requirements in a lightweight form factor. I’ll be interested to see how the market reacts to this announcement. For more information on the HC3 Edge offering, you can grab a copy of the data sheet here, and the press release is available here. There’s a joint Lenovo – Scale Computing case study that can be found here.

Random Short Take #16

Here are a few links to some random news items and other content that I recently found interesting. You might find them interesting too. Episode 16 – please enjoy these semi-irregular updates.

  • Scale Computing has been doing a bit in the healthcare sector lately – you can read news about that here.
  • This was a nice roundup of the news from Apple’s recent WWDC from Six Colors. Hat tip to Stephen Foskett for the link. Speaking of WWDC news, you may have been wondering what happened to all of your purchased content with the imminent demise of iTunes on macOS. It’s still a little fuzzy, but this article attempts to shed some light on things. Spoiler: you should be okay (for the moment).
  • There’s a great post on the Dropbox Tech Blog from James Cowling discussing the mission versus the system.
  • The more things change, the more they remain the same. For years I had a Windows PC running Media Center and recording TV. I used IceTV as the XMLTV-based program guide provider. I then started to mess about with some HDHomeRun devices and the PC died and I went back to a traditional DVR arrangement. Plex now has DVR capabilities and it has been doing a reasonable job with guide data (and recording in general), but they’ve decided it’s all a bit too hard to curate guides and want users (at least in Australia) to use XMLTV-based guides instead. So I’m back to using IceTV with Plex. They’re offering a free trial at the moment for Plex users, and setup instructions are here. No, I don’t get paid if you click on the links.
  • In axe-throwing news, the Cohesity team in Queensland is organising a social event for Friday 21st June from 2 – 4 pm at Maniax Axe Throwing in Newstead. You can get in contact with Casey if you’d like to register.
  • VeeamON Forum Australia is coming up soon. It will be held at the Hyatt Regency Hotel in Sydney on July 24th and should be a great event. You can find out more information and register for it here. The Vanguards are also planning something cool, so hopefully we’ll see you there.
  • Speaking of Veeam, Anthony Spiteri recently published his longest title in the Virtualization is Life! catalogue – Orchestration Of NSX By Terraform For Cloud Connect Replication With vCloud Director. It’s a great article, and worth checking out.
  • There’s a lot of talk and slideware devoted to digital transformation, and a lot of it is rubbish. But I found this article from Chin-Fah to be particularly insightful.

Axellio Announces Azure Stack HCI Support

Microsoft recently announced their Azure Stack HCI program, and I had the opportunity to speak to the team from Axellio (including Bill Miller, Barry Martin, and Kara Smith) about their support for it.

 

Azure Stack Versus Azure Stack HCI

So what’s the difference between Azure Stack and Azure Stack HCI? You can think of Azure Stack as an extension of Azure – designed for cloud-native applications. Azure Stack HCI, on the other hand, is for your traditional VM-based applications – the kind that haven’t been refactored (or can’t be) for public cloud.

[image courtesy of Microsoft]

The Azure Stack HCI program has fifteen vendor partners on launch day, of which Axellio is one.

 

Axellio’s Take

Miller describes the Axellio solution as “[n]ot your father’s HCI infrastructure”, and Axellio tell me it “has developed the new FabricXpress All-NVMe HCI edge-computing platform built from the ground up for high-performance computing and fast storage for intense workload environments. It delivers 72 NVMe SSDs per server, and packs 2 servers into one 2U chassis”. Cluster sizes start at 4 nodes and run up to 16. Note that the form factor measurement in the table below includes any required switching for the solution. You can grab the data sheet from here.

[image courtesy of Axellio]

It uses the same Hyper-V based software-defined compute, storage, and networking as Azure Stack, and integrates on-premises workloads with Microsoft hybrid data services, including Azure Site Recovery, Azure Backup, Cloud Witness, and Azure Monitor.

 

Thoughts and Further Reading

When Microsoft first announced plans for a public cloud presence, some pundits suggested they didn’t have the chops to really make it. It seems that Microsoft has managed to perform well in that space despite what some of the analysts were saying. What Microsoft has had working in its favour is that it understands the enterprise pretty well, and has made a good push to tap that market and help get the traditionally slower moving organisations to look seriously at public cloud.

Azure Stack HCI fits nicely in between Azure and Azure Stack, giving enterprises the opportunity to host workloads that they want to keep in VMs hosted on a platform that integrates well with public cloud services that they may also wish to leverage. Despite what we want to think, not every enterprise application can be easily refactored to work in a cloud-native fashion. Nor is every enterprise ready to commit that level of investment into doing that with those applications, preferring instead to host the applications for a few more years before introducing replacement application architectures.

It’s no secret that I’m a fan of Axellio’s capabilities when it comes to edge compute and storage solutions. In speaking to the Axellio team, what stands out to me is that they really seem to understand how to put forward a performance-oriented solution that can leverage the best pieces of the Microsoft stack to deliver an on-premises hosting capability that ticks a lot of boxes. The ability to move workloads (in a staged fashion) so easily between public and private infrastructure should also have a great deal of appeal for enterprises that have traditionally struggled with workload mobility.

Enterprise operations can be a pain in the backside at the best of times. Throw in the requirement to host some workloads in public cloud environments like Azure, and your operations staff might be a little grumpy. Fans of HCI have long stated that the management of the platform, and the convergence of compute and storage, helps significantly in easing the pain of infrastructure operations. If you then take that management platform and integrate it successfully with your public cloud platform, you’re going to have a lot of fans. This isn’t Axellio’s only solution, but I think it does fit in well with their ability to deliver performance solutions at both the core and the edge.

Thomas Maurer wrote up a handy article covering some of the differences between Azure Stack and Azure Stack HCI. The official Microsoft blog post on Azure Stack HCI is here. You can read the Axellio press release here.

Scale Computing and Leostream – Better Than Bert And Ernie

Scale Computing recently announced some news about a VDI solution they delivered for Illinois-based Paris Community Hospital. I had the opportunity to speak with Alan Conboy about it and thought I’d share some coverage here.

 

VDI and HCI – A Pretty Famous Pairing

When I started to write this article, I was trying to think of a dynamic duo that I could compare VDI and HCI to. Batman and Robin? Bert and Ernie? MJ and Scottie? In any case, hyper-converged infrastructure and virtual desktop infrastructure have gone well together since the advent of HCI. It’s my opinion that HCI found its way into a number of enterprises by virtue of the fact that a VDI requirement arose. Once HCI is introduced into those enterprise environments, folks start to realise it’s useful for other stuff too.

Operational Savings

So it makes sense that Scale Computing’s HC3 solution would be used to deliver VDI solutions at some stage. And Leostream can provide the lifecycle manager / connection broker / gateway part of the story without breaking a sweat. According to Conboy, Paris Community Hospital has managed to drastically reduce its operating costs, to the point that it’s reduced its resource investment to a single part-time operations staff member to manage the environment. They’re apparently saving around $1 million (US) over the next five years, meaning they can now afford an extra doctor and additional nursing staff.

HCI – It’s All In The Box

If you’re familiar with HCI, you’ll know that most of the required infrastructure comes with the solution – compute, storage, and hypervisor. You also get the ability to do cool stuff in terms of snapshots and disaster recovery via replication.

 

Thoughts

VDI solutions have proven popular in healthcare environments for a number of reasons. They generally help the organisation control the applications that are run in the (usually) security-sensitive environment, particularly at the edge. They’re also useful in terms of endpoint maintenance, and remove the requirement to deploy high-end client devices in clinical environments. They also provide a centralised mechanism to ensure that critical application updates are performed in a timely fashion.

You won’t necessarily save money deploying VDI on HCI in terms of software licensing or infrastructure investment. But you will potentially save money in terms of the operational resources required for endpoint and application support. If you can then spend those savings on medical staff, that has to be a win for the average healthcare organisation.

I’m the first to admit that I don’t get overly excited about VDI solutions. I can see the potential for value in some organisations, but I tend to lose focus rapidly when people start to talk to me about this stuff. That said, I do get enthusiastic about HCI solutions that make sense, and deliver value back to the business. It strikes me that this Scale Computing and Leostream combo has worked out pretty well for Paris Community Hospital. And that’s pretty cool. For more insight, Scale Computing has published a Customer Case Study that you can read here.

Random Short Take #9

Here are a few links to some random news items and other content that I found interesting. You might find it interesting too. Maybe.

 

 

Scale Computing Have Been Busy

I recently had the opportunity to get on a call with Alan Conboy to talk about what’s been happening with Scale Computing lately. It was an interesting chat, as always, and I thought I’d share some of the news here.

 

Detroit Rock City

It’s odd how sometimes I forget that pretty much every type of business in existence uses some form of IT. Arts and performance organisations, such as the Detroit Symphony Orchestra, are no exception. They are also now very happy Scale customers. There’s a YouTube video detailing their experiences that you can check out here.

 

Lenovo Partnership

Scale and Lenovo recently announced a strategic partnership, focussed primarily on edge workloads, with particular emphasis on retail and industrial environments. You can download a solution brief here. This doesn’t mean that Lenovo are giving up on some of their other HCI partnerships, but it does give them a competent partner to attack the edge infrastructure market.

 

GCG, Yeah You Know Me

Grupo Colón Gerena is a Puerto Rico-based “restaurant management company that owns franchises of brands including Wendy’s, Applebee’s, Famous Dave’s, Sizzler’s, Longhorn Steakhouse, Olive Garden and Red Lobster throughout the island”. You may recall Puerto Rico suffered through some pretty devastating weather in 2017 thanks to Hurricane Maria. GCG have been running the bulk of their workload in Google Cloud since just before the event, and are still deciding whether they really want to move it back to an on-premises solution. There’s definitely a good story with Scale delivering workloads from the edge to the core and through to Google Cloud. You can read the full case study here.

 

Thoughts

It’s no big secret that I’m a fan of Scale Computing. And not just because I have an old HC1000 in my office that I fire up every now and then (Collier, I’m still waiting on those SSDs you promised me a few years ago). They are relentlessly focussed on delivering easy-to-use solutions that work well and deliver great resiliency and performance, particularly in smaller environments. Their DRaaS play and partnership with Google have opened up some doors to customers that may not have considered Scale previously. The Lenovo partnership, and success with customers like GCG and DSO, is proof that Scale are doing a lot of good stuff in the HCI space.

Anyone who’s had the good fortune to deal with Scale, from their executives and founders through to their support staff, will tell you that they’re super easy to deal with and pretty good at what they do. It’s great to see them enjoying some success. It strikes me that they go about their business without a lot of the chest beating and carry on associated with some other vendors in the industry. This is a good thing, and I’m looking forward to seeing what comes next for them.