Random Short Take #33

Welcome to Random Short Take #33. Some terrific players have worn 33 in the NBA, including Keith Closs and Stephon Marbury. This one, though, goes out to the “hick from French Lick” Larry Joe Bird. You might see the frequency of these posts ramp up a bit over the next little while. Because everything feels a little random at the moment.

  • I recently wrote about what Scale Computing has been up to with Leostream. It’s also done a bit with Acronis in the past, and it recently announced it’s now offering Acronis Cloud Storage. You can read more on that here.
  • The good folks at Druva are offering 6 months of free subscription for Office 365 and Endpoint protection (up to 300 seats) to help businesses adjust to these modern ways of working. You can find out more about that here.
  • Speaking of cloud backup, Backblaze recently surpassed the exabyte mark in terms of stored customer data.
  • I’ve been wanting to write about Panzura for a while, and I’ve been terribly slack. It’s enjoying some amount of momentum at the moment though, and is reporting revenue growth that looks the goods. Speaking of Panzura, if you haven’t heard of its Vizion.AI offshoot – it’s well worth checking out.
  • Zerto recently announced Zerto 8. Lots of cool features have been made available, including support for VMware on Google Cloud, and improved VMware integration.
  • There’s a metric shedload of “how best to work from home” posts doing the rounds at the moment. I found this one from Russ White to be both comprehensive and readable. That’s not as frequent a combination as you might expect.
  • World Backup Day was yesterday. I’ll be writing more on that this week, but in the meantime this article from Anthony Spiteri on data displacement was pretty interesting.
  • Speaking of backup and Veeam things, this article on installing Veeam PN from Andre Atkinson was very useful.

And that’s it for now. Stay safe folks.


Scale Computing and Leostream – Is It Finally VDI’s Year?

Scale Computing announced a partnership with Leostream a little while ago. With the global pandemic drastically changing the way a large number of organisations are working, it seemed like a good time to talk to Alan Conboy about how this all worked from a Scale Computing and Leostream perspective.


Easy As 1, 2

Getting started with Leostream is surprisingly simple. To start with, you’ll need to deploy a Gateway and a Broker VM. These are CentOS machines (if you’re a Scale Computing customer you can likely get some minimally configured, pre-packaged qcow appliances from Alan). You’ll need to punch a hole through your firewall for SSL traffic, and run a couple of simple commands on the VMs, but that’s it.
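If you want a quick way to confirm that the firewall rule is doing its job before you point users at the environment, something like the sketch below will do. It’s a minimal example only: the hostnames are hypothetical placeholders, and all it proves is that a TLS handshake completes on port 443 – not that the Leostream services themselves are healthy.

```python
import socket
import ssl

# Hypothetical appliance hostnames - substitute the names of your own
# Gateway and Broker VMs.
HOSTS = ["leostream-gateway.example.com", "leostream-broker.example.com"]
PORT = 443  # the SSL/TLS port you punched through the firewall


def check_tls(host: str, port: int = PORT, timeout: float = 5.0) -> bool:
    """Return True if a TLS handshake with host:port completes."""
    ctx = ssl.create_default_context()
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                print(f"{host}:{port} OK - negotiated {tls.version()}")
                return True
    except (OSError, ssl.SSLError) as err:
        print(f"{host}:{port} FAILED - {err}")
        return False


if __name__ == "__main__":
    results = [check_tls(host) for host in HOSTS]
    raise SystemExit(0 if all(results) else 1)
```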

But I’m getting ahead of myself. The way it works is that Leostream has a small agent that you can deploy across the PCs in your fleet. When users hit the gateway, they can be directed to their own (physical) desktop inside the organisation. They can then access their desktops remotely (using RDP, SSH, or VNC) over any browser that supports SSL and HTML5. So, rather than having to go out and grab a bunch of laptops, set up a VPN (or scale it out), and have a desktop image ready to go (along with the prerequisite VDI resources hosted somewhere), you can have your remote workforce working remotely from day 1. It comes with a Windows, Java, and Linux agent, so if you have users running macOS or Linux, they can still come to the party.
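To make the brokering idea a little more concrete, here’s a tiny conceptual sketch of “send each user to their own registered desktop over a supported protocol”. To be clear, this isn’t Leostream’s code or API – it’s just the shape of the problem the Gateway and Broker solve for you.

```python
# Conceptual sketch of a connection broker's job: map an authenticated user
# to the physical desktop their agent registered, and pick a display protocol.
# Illustrative only - not Leostream's implementation or API.

SUPPORTED_PROTOCOLS = ("rdp", "ssh", "vnc")

# Desktops the (hypothetical) agents have registered, keyed by owner.
REGISTERED_DESKTOPS = {
    "alice": {"host": "10.20.30.41", "protocol": "rdp"},  # Windows desktop
    "bob": {"host": "10.20.30.55", "protocol": "vnc"},    # Linux workstation
}


def resolve_session(username: str) -> dict:
    """Return the connection details the gateway should proxy for this user."""
    desktop = REGISTERED_DESKTOPS.get(username)
    if desktop is None:
        raise LookupError(f"No registered desktop for {username}")
    if desktop["protocol"] not in SUPPORTED_PROTOCOLS:
        raise ValueError(f"Unsupported protocol: {desktop['protocol']}")
    return desktop


print(resolve_session("alice"))  # -> {'host': '10.20.30.41', 'protocol': 'rdp'}
```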

I know I’ve done a bad job of describing the solution, so I recommend you check out this blog post instead.


Thoughts

I’m not at all passionate about VDI and End User Computing in the same way some people I know are. I always thought it was a neat solution that was frequently poorly executed and oftentimes cost a lot of money. But it’s a weird time for the world and, sadly, it might take something like a global pandemic for VDI to finally get its due as a useful solution for remote workers. I’d also like to point out that this is just a part of what Leostream can do. If you’re after something outside of the Scale Computing alliance – they can probably help you out.

I’ve spoken to Alan and the Scale Computing team about Leostream a few times now, and I really do like the idea of being able to bring users back into the network, rather than extending the network out to your users. You don’t have to go crazy acquiring a bunch of laptops or mobile devices for traditionally desk-bound users and re-imaging said laptops. You don’t need to spend a tonne of cash on extra VPN connectivity or compute to support a bunch of new “desktop” VMs. Instead, in a fairly short amount of time, you can get users working the way they always have, with a minimum of fuss. This is exactly the kind of approach that I’ve come to expect from Scale Computing – keep it simple, easy to deploy, cost-conscious, and functional.

As I said before – VDI solutions don’t really excite me. But I do appreciate the flexibility they can offer in terms of the ability to access corporate workloads from non-traditional locales. This solution takes it a step further, and does a great job of delivering what could be a complicated solution in a simple and functional fashion. This is the kind of thing we need more of at the moment.

Random Short Take #31

Welcome to Random Short Take #31. Lots of good players have worn 31 in the NBA. You’d think I’d call this the Reggie edition (and I appreciate him more after watching Winning Time), but this one belongs to Brent Barry. This may be related to some recency bias I have, based on the fact that Brent is a commentator in NBA 2K19, but I digress …

  • Late last year I wrote about Scale Computing’s big bet on a small form factor. Scale Computing recently announced that Jerry’s Foods is using the HE150 solution for in-store computing.
  • I find Plex to be a pretty rock solid application experience, and most of the problems I’ve had with it have been client-related. I recently had a problem with a server update that borked my installation though, and had to roll back. Here’s the quick and dirty way to do that on macOS.
  • Here are 7 contentious thoughts on data protection from Preston. I think there are some great ideas here and I recommend taking the time to read this article.
  • I recently had the chance to speak with Michael Jack from Datadobi about the company’s announcement of its new DIY Starter Pack for NAS migrations. Whilst it seems that the professional services market for NAS migrations has diminished over the last few years, there’s still plenty of data out there that needs to be moved from one box to another. Robocopy and rsync aren’t always the best option when you need to move this much data around.
  • There are a bunch of things that people need to learn to do operations well. A lot of them are learnt the hard way. This is a great list from Jan Schaumann.
  • Analyst firms are sometimes misunderstood. My friend Enrico Signoretti has been working at GigaOm for a little while now, and I really enjoyed this article on the thinking behind the GigaOm Radar.
  • Nexsan recently announced some enhancements to its “BEAST” storage platforms. You can read more on that here.
  • Alastair isn’t just a great writer and moustache aficionado, he’s also a trainer across a number of IT disciplines, including AWS. He recently posted this useful article on what AWS newcomers can expect when it comes to managing EC2 instances.

Random Short Take #29

Welcome to Random Short Take #29. You’d think 29 would be a hard number to line up with basketball players, but it turns out that Marcus Camby wore it one year when he played for Houston. It was at the tail-end of his career, but still. Anyhoo …

  • I love a good story about rage-quitting projects, and this one is right up there. I’ve often wondered what it must be like to work on open source projects and deal with the craziness that is the community.
  • I haven’t worked on a Scalar library in over a decade, but Quantum is still developing them. There’s an interesting story here in terms of protecting your protection data using air gaps. I feel like this is already being handled a different way by the next-generation data protection companies, but when all you have is a hammer. And the cost per GB is still pretty good with tape.
  • I always enjoy Keith’s ability to take common problems and look at them with a fresh perspective. I’m interested to see just how far he goes down the rabbit hole with this DC project.
  • Backblaze frequently comes up with useful articles for both enterprise punters and home users alike. This article on downloading your social media presence is no exception. The processes are pretty straightforward to follow, and I think it’s a handy exercise to undertake every now and then.
  • The home office is the new home lab. Or, perhaps, as we work anywhere now, it’s important to consider setting up a space in your home that actually functions as a workspace. This article from Andrew Miller covers some of the key considerations.
  • This article from John Troyer about writing was fantastic. Just read it.
  • Scale Computing was really busy last year. How busy? Busy enough to pump out a press release that you can check out here. The company also has a snazzy new website and logo that you should check out.
  • Veeam v10 is coming “very soon”. You can register here to find out more. I’m keen to put this through its paces.

Scale Computing Makes Big Announcement About Small HE150

Scale Computing recently announced the HE150 series of small edge servers. I had the chance to chat with Alan Conboy about the announcement, and thought I’d share some thoughts here.


Edge, But Smaller

I’ve written in the past about additions to the HC3 Edge Platform. But those things had a rack-mount form factor. The newly announced HE150 runs on Intel NUC devices. Wait, what? That’s right, hyper-converged infrastructure on really small PCs. But don’t you need a bunch of NICs to do HC3 properly? There’s no need for a backplane switch, as the nodes use some software-defined networking to tunnel the backplane network across the NIC. The HC3 platform uses less than 1GB RAM per node, and each node has 2 cores. The storage sits on an NVMe drive and you can get hold of this stuff at a retail price of around $5K US for 3 nodes.

[image courtesy of Scale Computing]

Scale at Scale?

How do you deploy these kinds of things at scale then? Conboy tells me there’s full Ansible integration, RESTful API deployment capabilities, and they come equipped with Intel AMT. In short, these things can turn up at the remote site, be plugged in, and be ready to go.
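To give a feel for what that zero-touch story could look like in practice, here’s a rough sketch of registering a freshly plugged-in node against a management endpoint over a REST API. The URL, payload, and credentials below are hypothetical placeholders rather than Scale Computing’s actual API – check the HC3 documentation (or the Ansible modules) for the real calls.

```python
import requests

# Hypothetical endpoint and credentials - the real API paths, payloads, and
# auth come from Scale Computing's HC3 documentation, not from this sketch.
API_BASE = "https://hc3-mgmt.example.com/rest/v1"
AUTH = ("svc_provisioning", "not-a-real-password")


def register_edge_node(serial: str, site: str) -> dict:
    """Register a newly powered-on edge node with a (hypothetical) management API."""
    resp = requests.post(
        f"{API_BASE}/nodes",
        json={"serial": serial, "site": site, "role": "edge"},
        auth=AUTH,
        timeout=30,
        verify=True,  # leave TLS verification on for anything beyond a lab
    )
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    node = register_edge_node(serial="NUC-0042", site="store-117")
    print(f"Registered node {node.get('id', 'unknown')} for store-117")
```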

Where Would You Use It?

The HE150 solution is 100% specific to multi-site edge implementations. It’s not trying to go after workloads that would normally be serviced by the HE500 or HE1000. Where it can work, though, is with:

  • Oil and Gas exploration – with one in each ship (they need 4-5 VMs to handle sensor data to make command decisions)
  • Grocery and retail chains
  • Manufacturing platforms
  • Telcos – pole-side boxes

In short, think of environments that require some amount of compute and don’t have IT people to support it.


Thoughts

I’ve been a fan of what Scale Computing has been doing with HCI for some time now. Scale’s take on making things simple across the enterprise has been refreshing. While this solution might surprise some folks, it strikes me that there’s an appetite for this kind of thing in the marketplace. The edge is often a place where less is more, and there’s often not a lot of resources available to do basic stuff, like deploy a traditional, rackmounted compute environment. But a small, 3-node HCI cluster that can be stacked away in a stationery cupboard? That might just work. Particularly if you only need a few virtual machines to meet those compute requirements. As Conboy pointed out to me, Scale isn’t looking to use this as a replacement for the higher-performing options it has available. Rather, this solution is perfect for highly distributed retail environments where they need to do one or two things and it would be useful if they didn’t do those things in a data centre located hundreds of kilometres away.

If you’re not that excited about Intel NUCs though, you might be happy to hear that solutions from Lenovo will be forthcoming shortly.

The edge presents a number of challenges to enterprises, in terms of both its definition and how to deal with it effectively. Ultimately, the success of solutions like this will hinge on ease of use, reliability, and whether it really is fit for purpose. The good folks at Scale don’t like to go off half-cocked, so you can be sure some thought went into this product – it’s not just a science project. I’m keen to see what the uptake is like, because I think this kind of solution has a place in the market. The HE150 is available for purchase from Scale Computing now. It’s also worth checking out the Scale Computing presentations at Tech Field Day 20.

Random Short Take #21

Here’s a semi-regular listicle of random news items that might be of some interest.

  • This is a great article covering QoS enhancements in Purity 5.3. Speaking of Pure Storage I’m looking forward to attending Pure//Accelerate in Austin in the next few weeks. I’ll be participating in a Storage Field Day Exclusive event as well – you can find more details on that here.
  • My friends at Scale Computing have entered into an OEM agreement with Acronis to add more data protection and DR capabilities to the HC3 platform. You can read more about that here.
  • Commvault just acquired Hedvig for a pretty penny. It will be interesting to see how they bring them into the fold. This article from Max made for interesting reading.
  • DH2i are presenting a webinar on September 10th at 11am Pacific, “On the Road Again – How to Secure Your Network for Remote User Access”. I’ve spoken to the people at DH2i in the past and they’re doing some really interesting stuff. If your timezone lines up with this, check it out.
  • This was some typically insightful coverage of VMworld US from Justin Warren over at Forbes.
  • I caught up with Zerto while I was at VMworld US last week, and they talked to me about their VAIO announcement. Justin Paul did a good job of summarising it here.
  • Speaking of VMworld, William has posted links to the session videos – check it out here.
  • Project Pacific was big news at VMworld, and I really enjoyed this article from Joep.

Random Short Take #20

Here are some links to some random news items and other content that I recently found interesting. You might find them interesting too. Episode 20 – feels like it’s becoming a thing.

  • Scale Computing seems to be having a fair bit of success with their VDI solutions. Here’s a press release about what they did with Harlingen WaterWorks System.
  • I don’t read Corey Quinn’s articles enough, but I am glad I read this one. Regardless of what you think about the enforceability of non-compete agreements (and regardless of where you’re employed), these things have no place in the modern workforce.
  • If you’re getting along to VMworld US this year, I imagine there’s plenty in your schedule already. If you have the time – I recommend getting around to seeing what Cody and Pure Storage are up to. I find Cody to be a great presenter, and Pure have been doing some neat stuff lately.
  • Speaking of VMworld, this article from Tom about packing the little things for conferences in preparation for any eventuality was useful. And if you’re heading to VMworld, be sure to swing past the VMUG booth. There’s a bunch of VMUG stuff happening at VMworld – you can read more about that here.
  • I promise this is pretty much the last bit of news I’ll share regarding VMworld. Anthony from Veeam put up a post about their competition to win a pass to VMworld. If you’re on the fence about going, check it out now (as the competition closes on the 19th August).
  • It wouldn’t be a random short take without some mention of data protection. This article about tiering protection data from George Crump was bang on the money.
  • Backblaze published their quarterly roundup of hard drive stats – you can read more here.
  • This article from Paul on freelancing and side gigs was comprehensive and enlightening. If you’re thinking of taking on some extra work in the hopes of making it your full-time job, or just wanting to earn a little more pin money, it’s worthwhile reading this post.

Scale Computing Announces HE500 Range

Scale Computing recently announced its “HC3 Edge Platform”. I had a chance to talk to Alan Conboy about it, and thought I’d share some of my thoughts here.


The Announcement

The HE500 series has been introduced to provide smaller customers and edge infrastructure environments with components that better meet the sizing and pricing requirements of those environments. There are a few different flavours of nodes, with every node offering Intel E-2100 CPUs, 32 – 64GB RAM, and dual power supplies. There are a couple of minor differences with regard to other configuration options.

  • HE500 – 4x 1, 2, 4, or 8TB HDD, 4x 1GbE, 4x 10GbE
  • HE550 – 1x 480GB or 960GB SSD, 3x 1, 2, or 4TB HDD, 4x 1GbE, 4x 10GbE
  • HE550F – 4x 240GB, 480GB, or 960GB SSD, 4x 1GbE, 4x 10GbE
  • HE500T – 4x 1, 2, 4, or 8TB HDD, 8x 4TB or 8TB HDD, 2x 1GbE
  • HE550TF – 4x 240GB, 480GB, or 960GB SSD, 2x 1GbE

The “T” version comes in a tower form factor, and offers 1GbE connectivity. Everything runs on Scale’s HC3 platform, and offers all of the features and support you expect with that platform. In terms of scalability, you can run up to 8 nodes in a cluster.
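As a rough sizing aid, the sketch below works out raw capacity per node and per 8-node cluster from the drive configurations listed above. It’s deliberately naive: raw figures only, ignoring HC3’s replication, metadata, and formatting overheads.

```python
# Back-of-envelope raw capacity for a few of the HE500-series configurations
# listed above. Raw figures only - HC3 replication and overheads are ignored.

CONFIGS = {
    "HE500 (4x 8TB HDD)": (4, 8.0),
    "HE550 (3x 4TB HDD + 1x SSD)": (3, 4.0),  # HDD tier only
    "HE550F (4x 960GB SSD)": (4, 0.96),
}

MAX_NODES = 8  # HC3 Edge clusters scale up to 8 nodes


def raw_capacity_tb(drives: int, size_tb: float, nodes: int) -> float:
    return drives * size_tb * nodes


for name, (drives, size_tb) in CONFIGS.items():
    per_node = raw_capacity_tb(drives, size_tb, nodes=1)
    per_cluster = raw_capacity_tb(drives, size_tb, nodes=MAX_NODES)
    print(f"{name}: {per_node:g}TB raw per node, "
          f"{per_cluster:g}TB raw across {MAX_NODES} nodes")
```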


Thoughts And Further Reading

In the past I’ve made mention of Scale Computing and Lenovo’s partnership, and the edge infrastructure approach is also something that lends itself well to this arrangement. If you don’t necessarily want to buy Scale-badged gear, you’ll see that the models on offer look a lot like the SR250 and ST250 models from Lenovo. In my opinion, the appeal of Scale’s hyper-converged infrastructure story has always been the software platform that sits on the hardware, rather than the specifications of the nodes they sell. That said, these kinds of offerings play an important role in the market, as they give potential customers simple options to deliver solutions at a very competitive price point. Scale tell me that an entry-level 3-node cluster comes in at about US $16K, with additional nodes costing approximately $5K. Conboy described it as “[l]owering the barrier to entry, reducing the form factor, but getting access to the entire stack”.
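And on the pricing Scale shared – roughly US $16K for an entry-level 3-node cluster and about $5K per additional node – the back-of-envelope maths for larger clusters is straightforward. Treat the sketch below as illustrative only; actual pricing will depend on configuration and your reseller.

```python
# Back-of-envelope HC3 Edge cluster pricing, using the figures quoted above:
# ~US $16K for an entry-level 3-node cluster, ~$5K per additional node.
BASE_CLUSTER_USD = 16_000
BASE_CLUSTER_NODES = 3
EXTRA_NODE_USD = 5_000


def estimated_list_price(nodes: int) -> int:
    if nodes < BASE_CLUSTER_NODES:
        raise ValueError("pricing starts at a 3-node cluster")
    return BASE_CLUSTER_USD + (nodes - BASE_CLUSTER_NODES) * EXTRA_NODE_USD


for n in (3, 5, 8):
    print(f"{n}-node cluster: ~US ${estimated_list_price(n):,}")
```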

Combine some of these smaller solutions with various reference architectures and you’ve got a pretty powerful offering that can be deployed in edge sites for a small initial outlay. People often deploy compute at the edge because they have to, not because they necessarily want to. Anything that can be done to make operations and support simpler is a good thing. Scale Computing are focused on delivering an integrated stack that meets those requirements in a lightweight form factor. I’ll be interested to see how the market reacts to this announcement. For more information on the HC3 Edge offering, you can grab a copy of the data sheet here, and the press release is available here. There’s a joint Lenovo – Scale Computing case study that can be found here.

Random Short Take #16

Here are a few links to some random news items and other content that I recently found interesting. You might find them interesting too. Episode 16 – please enjoy these semi-irregular updates.

  • Scale Computing has been doing a bit in the healthcare sector lately – you can read news about that here.
  • This was a nice roundup of the news from Apple’s recent WWDC from Six Colors. Hat tip to Stephen Foskett for the link. Speaking of WWDC news, you may have been wondering what happened to all of your purchased content with the imminent demise of iTunes on macOS. It’s still a little fuzzy, but this article attempts to shed some light on things. Spoiler: you should be okay (for the moment).
  • There’s a great post on the Dropbox Tech Blog from James Cowling discussing the mission versus the system.
  • The more things change, the more they remain the same. For years I had a Windows PC running Media Center and recording TV. I used IceTV as the XMLTV-based program guide provider. I then started to mess about with some HDHomeRun devices and the PC died and I went back to a traditional DVR arrangement. Plex now has DVR capabilities and it has been doing a reasonable job with guide data (and recording in general), but they’ve decided it’s all a bit too hard to curate guides and want users (at least in Australia) to use XMLTV-based guides instead. So I’m back to using IceTV with Plex. They’re offering a free trial at the moment for Plex users, and setup instructions are here. No, I don’t get paid if you click on the links.
  • Speaking of axe-throwing, the Cohesity team in Queensland is organising a social event for Friday 21st June from 2 – 4 pm at Maniax Axe Throwing in Newstead. You can get in contact with Casey if you’d like to register.
  • VeeamON Forum Australia is coming up soon. It will be held at the Hyatt Regency Hotel in Sydney on July 24th and should be a great event. You can find out more information and register for it here. The Vanguards are also planning something cool, so hopefully we’ll see you there.
  • Speaking of Veeam, Anthony Spiteri recently published his longest title in the Virtualization is Life! catalogue – Orchestration Of NSX By Terraform For Cloud Connect Replication With vCloud Director. It’s a great article, and worth checking out.
  • There’s a lot of talk and slideware devoted to digital transformation, and a lot of it is rubbish. But I found this article from Chin-Fah to be particularly insightful.

Scale Computing and Leostream – Better Than Bert And Ernie

Scale Computing recently announced some news about a VDI solution they delivered for Illinois-based Paris Community Hospital. I had the opportunity to speak with Alan Conboy about it and thought I’d share some coverage here.


VDI and HCI – A Pretty Famous Pairing

When I started to write this article, I was trying to think of a dynamic duo that I could compare VDI and HCI to. Batman and Robin? Bert and Ernie? MJ and Scottie? In any case, hyper-converged infrastructure and virtual desktop infrastructure have gone well together since the advent of HCI. It’s my opinion that HCI found its way into a number of enterprises by virtue of the fact that a VDI requirement arose. Once HCI is introduced into those enterprise environments, folks start to realise it’s useful for other stuff too.

Operational Savings

So it makes sense that Scale Computing’s HC3 solution would be used to deliver VDI solutions at some stage. And Leostream can provide the lifecycle manager / connection broker / gateway part of the story without breaking a sweat. According to Conboy, Paris Community Hospital has drastically reduced its operating costs, to the point where a part-time operations staff member is all that’s required to manage the environment. They’re apparently saving around $1 million (US) over the next five years, meaning they can now afford an extra doctor and additional nursing staff.

HCI – It’s All In The Box

If you’re familiar with HCI, you’ll know that most of the required infrastructure comes with the solution – compute, storage, and hypervisor. You also get the ability to do cool stuff in terms of snapshots and disaster recovery via replication.


Thoughts

VDI solutions have proven popular in healthcare environments for a number of reasons. They generally help the organisation control the applications that are run in the (usually) security-sensitive environment, particularly at the edge. VDI is also useful in terms of endpoint maintenance, and removes the requirement to deploy high-end client devices in clinical environments. It also provides a centralised mechanism to ensure that critical application updates are performed in a timely fashion.

You won’t necessarily save money deploying VDI on HCI in terms of software licensing or infrastructure investment. But you will potentially save money in terms of the operational resources required for endpoint and application support. If you can then spend those savings on medical staff, that has to be a win for the average healthcare organisation.

I’m the first to admit that I don’t get overly excited about VDI solutions. I can see the potential for value in some organisations, but I tend to lose focus rapidly when people start to talk to me about this stuff. That said, I do get enthusiastic about HCI solutions that make sense, and deliver value back to the business. It strikes me that this Scale Computing and Leostream combo has worked out pretty well for Paris Community Hospital. And that’s pretty cool. For more insight, Scale Computing has published a Customer Case Study that you can read here.