Random Short Take #38

Welcome to Random Short Take #38. Not a huge number of players have worn 38 in the NBA, and I’m not going to pretend I was ever a Kwame Brown fan. Although it did seem like he had a tough time of it. Anyway, let’s get random.

  • Ransomware is the new hotness. Or, rather, protecting storage systems from ransomware is the new hotness. My man Chin-Fah had a writeup on that here. It’s not a matter of if, but rather when you’ll run into a problem. It’s been interesting to see the various approaches being taken by the storage vendors and the data protection companies.
  • Applications for the vExpert program intake for the second half of 2020 are open, but closing soon. It’s a fantastic program to be a part of, so if you think you’ve got the goods, you can apply here. I also recommend this article from Christopher on his experiences.
  • This was a great article from Alastair on some of the differences between networking with AWS and VMC on AWS. As someone who works for a VMware Cloud Provider, I can confirm that NSX (T or V, I don’t care) has a whole slew of capabilities and a whole slew of integration challenges.
  • Are you Zoomed out? I am. Even when you think the problem can’t be the network, it might just be the network (I hope my friends in networking appreciate that it’s not always the storage). John Nicholson posted a typically comprehensive overview of how your bandwidth might be one of the things keeping you from demonstrating excellent radio voice on those seemingly endless meetings you’re doing at the moment. It could also be that you’re using crap audio devices, but I think John’s going to cover that in the future.
  • Scale Computing has a good story to tell about what it’s been doing with a large school district in the U.S. Read more about that here.
  • This is one of those promotions aimed at my friends in North America more than folks based where I am, but I’m always happy to talk about deals on data protection. StorCentric has launched its “Retrospect Dads & Grads Promotion” offering a free 90-day subscription license for every Retrospect Backup product. You can read more about that here.
  • Pure//Accelerate Online was this week, and Max did a nice write-up on Pure Storage File Services over at Gestalt IT.
  • Rancher Labs recently announced the general availability of Longhorn (a cloud-native container storage solution). I’m looking forward to digging into this a bit more over the next little while.


Random Short Take #36

Welcome to Random Short Take #36. Not a huge number of players have worn 36 in the NBA, but Shaq did (at the end of his career), and Marcus Smart does. This one, though, goes out to one of my favourite players from the modern era, Rasheed Wallace. It seems like Boston is the common thread here. Might have something to do with those Hall of Fame players wearing numbers in the low 30s. Or it might be entirely unrelated.

  • Scale Computing recently announced its all-NVMe HC3250DF as a new appliance targeting core data centre and edge computing use cases. It offers higher-performance storage, networking, and processing. You can read the press release here.
  • Dell EMC PowerStore has been announced. Chris Mellor covered the announcement here. I haven’t had time to dig into this yet, but I’m keen to learn more. Chris Evans also wrote about it here.
  • Rubrik Andes 5.2 was recently announced. You can read a wrap-up from Mellor here.
  • StorCentric’s Nexsan recently announced the E-Series 32F Storage Platform. You can read the press release here.
  • In what can only be considered excellent news, Preston de Guise has announced the availability of the second edition of his book, “Data Protection: Ensuring Data Availability”. It will be available in a variety of formats, with the ebook format already out. I bought the first edition a few times to give as a gift, and I’m looking forward to giving away a few copies of this one too.
  • Backblaze B2 has been huge for the company, and Backblaze B2 with S3-compatible API access is even huger. Read more about that here. Speaking of Backblaze, it just released its hard drive stats for Q1, 2020. You can read more on that here.
  • Hal recently upgraded his NUC-based home lab to vSphere 7. You can read more about the process here.
  • Jon recently posted an article on a new upgrade command available in OneFS. If you’re into Isilon, you might just be into this.

Random Short Take #33

Welcome to Random Short Take #33. Some terrific players have worn 33 in the NBA, including Keith Closs and Stephon Marbury. This one, though, goes out to the “hick from French Lick” Larry Joe Bird. You might see the frequency of these posts ramp up a bit over the next little while. Because everything feels a little random at the moment.

  • I recently wrote about what Scale Computing has been up to with Leostream. It’s also done a bit with Acronis in the past, and it recently announced it’s now offering Acronis Cloud Storage. You can read more on that here.
  • The good folks at Druva are offering 6 months of free subscription for Office 365 and Endpoint protection (up to 300 seats) to help businesses adjust to these modern ways of working. You can find out more about that here.
  • Speaking of cloud backup, Backblaze recently surpassed the exabyte mark in terms of stored customer data.
  • I’ve been wanting to write about Panzura for a while, and I’ve been terribly slack. It’s enjoying some amount of momentum at the moment though, and is reporting revenue growth that looks the goods. Speaking of Panzura, if you haven’t heard of its Vizion.AI offshoot – it’s well worth checking out.
  • Zerto recently announced Zerto 8. Lots of cool features have been made available, including support for VMware on Google Cloud, and improved VMware integration.
  • There’s a metric shedload of “how best to work from home” posts doing the rounds at the moment. I found this one from Russ White to be both comprehensive and readable. That’s not as frequent a combination as you might expect.
  • World Backup Day was yesterday. I’ll be writing more on that this week, but in the meantime this article from Anthony Spiteri on data displacement was pretty interesting.
  • Speaking of backup and Veeam things, this article on installing Veeam PN from Andre Atkinson was very useful.

And that’s it for now. Stay safe folks.


Scale Computing and Leostream – Is It Finally VDI’s Year?

Scale Computing announced a partnership with Leostream a little while ago. With the global pandemic drastically changing the way a large number of organisations are working, it seemed like a good time to talk to Alan Conboy about how this all worked from a Scale Computing and Leostream perspective.


Easy As 1, 2

Getting started with Leostream is surprisingly simple. To start with, you’ll need to deploy a Gateway and a Broker VM. These are CentOS machines (if you’re a Scale Computing customer you can likely get some minimally configured, pre-packaged qcow appliances from Alan). You’ll need to punch a hole through your firewall for SSL traffic, and run a couple of simple commands on the VMs, but that’s it.
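If you want a quick sanity check that the SSL hole you’ve punched through the firewall is actually working before you point users at it, a few lines of Python will do the job. The hostname below is just a placeholder for wherever your Gateway lives.

import socket
import ssl

GATEWAY = "leostream-gateway.example.com"  # placeholder - use your Gateway's public name
PORT = 443

# Open a TCP connection and complete a TLS handshake against the Gateway.
context = ssl.create_default_context()
with socket.create_connection((GATEWAY, PORT), timeout=5) as sock:
    with context.wrap_socket(sock, server_hostname=GATEWAY) as tls:
        print(f"Reached {GATEWAY}:{PORT} over {tls.version()}")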

But I’m getting ahead of myself. The way it works is that Leostream has a small agent that you can deploy across the PCs in your fleet. When users hit the gateway they can be directed to their own (physical) desktop inside the organisation. They can then access their desktops remotely (using RDP, SSH, or VNC) over any browser that supports SSL and HTML5. So, rather than having to go out and grab a bunch of laptops, set up a VPN (or scale it out), and have a desktop image ready to go (along with the prerequisite VDI resources hosted somewhere), you can have your remote workforce working remotely from day 1. It comes with Windows, Java, and Linux agents, so if you have users running macOS or Linux they can still come to the party.

I know I’ve done a bad job of describing the solution, so I recommend you check out this blog post instead.


Thoughts

I’m not at all passionate about VDI and End User Computing in the same way some people I know are. I always thought it was a neat solution that was frequently poorly executed and oftentimes cost a lot of money. But it’s a weird time for the world and, sadly, it might be something like a global pandemic that finally means that VDI gets its due as a useful solution for remote workers. I’d also like to point out that this is just a part of what Leostream can do. If you’re after something outside of the Scale Computing alliance – they can probably help you out.

I’ve spoken to Alan and the Scale Computing team about Leostream a few times now, and I really do like the idea of being able to bring users back into the network, rather than extending the network out to your users. You don’t have to go crazy acquiring a bunch of laptops or mobile devices for traditionally desk-bound users and re-imaging said laptops for those users. You don’t need to spend a tonne of cash on extra VPN connectivity or compute to support a bunch of new “desktop” VMs. Instead, in a fairly short amount of time, you can get users working the way they always have, with a minimum of fuss. This is exactly the kind of approach that I’ve come to expect from Scale Computing – keep it simple, easy to deploy, cost-conscious, and functional.

As I said before – VDI solutions don’t really excite me. But I do appreciate the flexibility they can offer in terms of the ability to access corporate workloads from non-traditional locales. This solution takes it a step further, and does a great job of delivering what could be a complicated solution in a simple and functional fashion. This is the kind of thing we need more of at the moment.

Random Short Take #31

Welcome to Random Short Take #31. Lots of good players have worn 31 in the NBA. You’d think I’d call this the Reggie edition (and I appreciate him more after watching Winning Time), but this one belongs to Brent Barry. This may be related to some recency bias I have, based on the fact that Brent is a commentator in NBA 2K19, but I digress …

  • Late last year I wrote about Scale Computing’s big bet on a small form factor. Scale Computing recently announced that Jerry’s Foods is using the HE150 solution for in-store computing.
  • I find Plex to be a pretty rock solid application experience, and most of the problems I’ve had with it have been client-related. I recently had a problem with a server update that borked my installation though, and had to roll back. Here’s the quick and dirty way to do that on macOS.
  • Here are 7 contentious thoughts on data protection from Preston. I think there are some great ideas here and I recommend taking the time to read this article.
  • I recently had the chance to speak with Michael Jack from Datadobi about the company’s announcement of its new DIY Starter Pack for NAS migrations. Whilst it seems that the professional services market for NAS migrations has diminished over the last few years, there’s still plenty of data out there that needs to be moved from one box to another. Robocopy and rsync aren’t always the best option when you need to move this much data around.
  • There are a bunch of things that people need to learn to do operations well. A lot of them are learnt the hard way. This is a great list from Jan Schaumann.
  • Analyst firms are sometimes misunderstood. My friend Enrico Signoretti has been working at GigaOm for a little while now, and I really enjoyed this article on the thinking behind the GigaOm Radar.
  • Nexsan recently announced some enhancements to its “BEAST” storage platforms. You can read more on that here.
  • Alastair isn’t just a great writer and moustache aficionado, he’s also a trainer across a number of IT disciplines, including AWS. He recently posted this useful article on what AWS newcomers can expect when it comes to managing EC2 instances.

Dell EMC PowerOne – Not V(x)block 2.0

Disclaimer: I recently attended Storage Field Day 19.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Dell EMC recently presented at Storage Field Day 19. You can see videos of the presentation here, and download my rough notes from here.


Not VxBlock 2.0?

Dell EMC describes PowerOne as “all-in-one autonomous infrastructure”. It’s converged infrastructure, meaning your storage, compute, and networking are all built into the rack. It’s a transportation-tested package and fully assembled when it ships. When it arrives, you can plug it in, fire up the API, and be up and going “within a few hours”.

Trey Layton is no stranger to Vblock / VxBlock, and he was very clear with the delegates that PowerOne is not replacing VxBlock. After all, VxBlock lets them sell Dell EMC external storage into Cisco UCS customers.


So What Is It Then?

It’s a rack or racks full of gear. All of which is now Dell EMC gear. And it’s highly automated and has some proper management around it too.

[image courtesy of Dell EMC]

So what’s in those racks?

  • PowerMax Storage – World’s “fastest” storage array
  • PowerEdge MX – Industry-leading compute
  • PowerSwitch – Declarative system fabric
  • PowerOne Controller – API-powered automation engine

PowerMax Storage

  • Zero-touch SAN config
  • Discovery / inventory of storage resources
  • Dynamically create storage volumes for clusters
  • Intelligent load balancing

PowerEdge MX Compute

  • Dynamically provision compute resources into clusters
  • Automated chassis expansion
  • Telemetry aggregation
  • Kinetic infrastructure

System Fabrics

  • Switches are 32Gbps
  • 98% reduction in network configuration steps
  • System fabric visibility and lifecycle management
  • Intent-based automated deployment and provision
  • PowerSwitch open networking

PowerOne Controller

  • Automates thousands of tasks
  • Powered by Kubernetes and Ansible
  • Delivers next-gen autonomous outcomes via robust API capabilities

From a scalability perspective, you can go to 275 nodes in a pod, and you can look after up to 32 pods (I think). The technical specifications are here.
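I haven’t had hands on the PowerOne controller API documentation, so the endpoint, payload, and token below are entirely hypothetical. But as a rough sketch, driving provisioning through the controller’s REST API from Python might look something like this:

import requests

# Hypothetical controller endpoint and token - not the documented PowerOne API.
BASE_URL = "https://powerone-controller.example.com/api/v1"
TOKEN = "replace-with-a-real-token"

session = requests.Session()
session.headers.update({"Authorization": f"Bearer {TOKEN}"})

# Ask the controller to carve out a new cluster resource group (illustrative payload).
payload = {
    "name": "prod-vsphere-cluster-01",
    "compute_nodes": 8,
    "storage_policy": "gold",
}
response = session.post(f"{BASE_URL}/resource-groups", json=payload, timeout=30)
response.raise_for_status()
print(response.json())

The point isn’t the specific calls, it’s that the whole stack is meant to be driven declaratively from one place rather than from a pile of element managers and spreadsheets.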


Thoughts and Further Reading

Converged infrastructure has always been an interesting architectural choice for the enterprise. When VCE first came into being 10+ years ago via Acadia, delivering consistent infrastructure experiences in the average enterprise was a time-consuming endeavour and not a lot of fun. It was also hard to do well. VCE changed a lot of that with Vblock, but you paid a premium. The reason you paid that premium was that VCE did a pretty decent job of putting together an architecture that was reliable and, more importantly, supportable by the vendor. It wasn’t just the IP behind this that made it successful though, it was the effort put into logistics and testing. And yes, a lot of that was built on the strength of spreadsheets and the blood, sweat and tears of the deployment engineers out in the field.

PowerOne feels like a very different beast in this regard. Dell EMC took us through a demo of the “unboxing” experience, and talked extensively about the lifecycle of the product. They also demonstrated many of the automation features included in the solution that weren’t always there with Vblock. I’ve been responsible for Vblock environments over the years, and a lot of the lifecycle management activities were very thoroughly documented, and extremely manual. PowerOne, on the other hand, doesn’t look like it relies extensively on documentation and spreadsheets to be managed effectively. But maybe that’s just because Trey and the team were able to demonstrate things so effectively.

So why would the average enterprise get tangled up in converged infrastructure nowadays? What with all the kids and their HCI solutions, and the public cloud, and the plethora of easy to consume infrastructure solutions available via competitive consumption models? Well, some enterprises don’t like relying on people within the organisation to deliver solutions for mission critical applications. These enterprises would rather leave that type of outcome in the hands of one trusted vendor. But they might still want that outcome to be hosted on-premises. Think of big financial institutions, and various government agencies looking after very important things. These are the kinds of customers that PowerOne is well suited to.

That doesn’t mean that what Dell EMC is doing with PowerOne isn’t innovative. In fact I think what they’ve managed to do with converged infrastructure is very innovative, within the confines of converged infrastructure. This type of approach isn’t for everyone though. There’ll always be organisations that can do it faster and cheaper themselves, but they may or may not have as much at stake as some of the other guys. I’m curious to see how much uptake this particular solution gets in the market, particularly in environments where HCI and public cloud adoption is on the rise. It strikes me that Dell EMC has turned a corner in terms of system integration too, as the out of the box experience looks really well thought out compared to some of its previous attempts at integration.

Random Short Take #29

Welcome to Random Short Take #29. You’d think 29 would be a hard number to line up with basketball players, but it turns out that Marcus Camby wore it one year when he played for Houston. It was at the tail-end of his career, but still. Anyhoo …

  • I love a good story about rage-quitting projects, and this one is right up there. I’ve often wondered what it must be like to work on open source projects and deal with the craziness that is the community.
  • I haven’t worked on a Scalar library in over a decade, but Quantum is still developing them. There’s an interesting story here in terms of protecting your protection data using air gaps. I feel like this is already being handled a different way by the next-generation data protection companies, but when all you have is a hammer. And the cost per GB is still pretty good with tape.
  • I always enjoy Keith’s ability to take common problems and look at them with a fresh perspective. I’m interested to see just how far he goes down the rabbit hole with this DC project.
  • Backblaze frequently comes up with useful articles for both enterprise punters and home users alike. This article on downloading your social media presence is no exception. The processes are pretty straightforward to follow, and I think it’s a handy exercise to undertake every now and then.
  • The home office is the new home lab. Or, perhaps, as we work anywhere now, it’s important to consider setting up a space in your home that actually functions as a workspace. This article from Andrew Miller covers some of the key considerations.
  • This article from John Troyer about writing was fantastic. Just read it.
  • Scale Computing was really busy last year. How busy? Busy enough to pump out a press release that you can check out here. The company also has a snazzy new website and logo that you should check out.
  • Veeam v10 is coming “very soon”. You can register here to find out more. I’m keen to put this through its paces.

Scale Computing Makes Big Announcement About Small HE150

Scale Computing recently announced the HE150 series of small edge servers. I had the chance to chat with Alan Conboy about the announcement, and thought I’d share some thoughts here.


Edge, But Smaller

I’ve written in the past about additions to the HC3 Edge Platform. But those things had a rack-mount form factor. The newly announced HE150 runs on Intel NUC devices. Wait, what? That’s right, hyper-converged infrastructure on really small PCs. But don’t you need a bunch of NICs to do HC3 properly? There’s no backplane switch requirement, as Scale uses some software-defined networking to tunnel the backplane network across the NIC. The HC3 platform uses less than 1GB RAM per node, and each node has 2 cores. The storage sits on an NVMe drive and you can get hold of this stuff at a retail price of around $5K US for 3 nodes.

[image courtesy of Scale Computing]

Scale at Scale?

How do you deploy these kinds of things at scale then? Conboy tells me there’s full Ansible integration, RESTful API deployment capabilities, and they come equipped with Intel AMT. In short, these things can turn up at the remote site, be plugged in, and be ready to go.
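As a purely illustrative example of what that zero-touch pattern might look like, here’s a sketch that loops over a list of remote sites and registers each new node via a REST call. The endpoint and payload are hypothetical, not the actual HC3 API.

import requests

# Hypothetical fleet management endpoint - not the documented HC3 REST API.
FLEET_API = "https://hc3-fleet-manager.example.com/api/v1/nodes"
TOKEN = "replace-with-a-real-token"

sites = [
    {"site": "store-001", "node_ip": "10.1.1.10"},
    {"site": "store-002", "node_ip": "10.1.2.10"},
]

session = requests.Session()
session.headers.update({"Authorization": f"Bearer {TOKEN}"})

# Register each edge node so it can be picked up and configured centrally,
# rather than sending someone out to every site with a crash cart.
for node in sites:
    response = session.post(FLEET_API, json=node, timeout=30)
    response.raise_for_status()
    print(f"Registered {node['site']} ({node['node_ip']})")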

Where Would You Use It?

The HE150 solution is 100% specific to multi-site edge implementations. It’s not trying to go after workloads that would normally be serviced by the HE500 or HE1000. Where it can work though, is with:

  • Oil and Gas exploration – with one in each ship (they need 4-5 VMs to handle sensor data to make command decisions)
  • Grocery and retail chains
  • Manufacturing platforms
  • Telcos – pole-side boxes

In short, think of environments that require some amount of compute and don’t have IT people to support it.


Thoughts

I’ve been a fan of what Scale Computing has been doing with HCI for some time now. Scale’s take on making things simple across the enterprise has been refreshing. While this solution might surprise some folks, it strikes me that there’s an appetite for this kind of thing in the marketplace. The edge is often a place where less is more, and there’s often not a lot of resources available to do basic stuff, like deploy a traditional, rackmounted compute environment. But a small, 3-node HCI cluster that can be stacked away in a stationery cupboard? That might just work. Particularly if you only need a few virtual machines to meet those compute requirements. As Conboy pointed out to me, Scale isn’t looking to use this as a replacement for the higher-performing options it has available. Rather, this solution is perfect for highly distributed retail environments where they need to do one or two things and it would be useful if they didn’t do those things in a data centre located hundreds of kilometres away.

If you’re not that excited about Intel NUCs though, you might be happy to hear that solutions from Lenovo will be forthcoming shortly.

The edge presents a number of challenges to enterprises, in terms of both its definition and how to deal with it effectively. Ultimately, the success of solutions like this will hinge on ease of use, reliability, and whether it really is fit for purpose. The good folks at Scale don’t like to go off half-cocked, so you can be sure some thought went into this product – it’s not just a science project. I’m keen to see what the uptake is like, because I think this kind of solution has a place in the market. The HE150 is available for purchase from Scale Computing now. It’s also worth checking out the Scale Computing presentations at Tech Field Day 20.

Datrium Enhances DRaaS – Makes A Cool Thing Cooler

Datrium recently made a few announcements to the market. I had the opportunity to speak with Brian Biles (Chief Product Officer, Co-Founder), Sazzala Reddy (Chief Technology Officer and Co-Founder), and Kristin Brennan (VP of Marketing) about the news and thought I’d cover it here.


Datrium DRaaS with VMware Cloud

Before we talk about the new features, let’s quickly revisit the DRaaS for VMware Cloud offering, announced by Datrium in August this year.

[image courtesy of Datrium]

The cool thing about this offering was that, according to Datrium, it “gives customers complete, one-click failover and failback between their on-premises data center and an on-demand SDDC on VMware Cloud on AWS”. There are some real benefits to be had for Datrium customers, including:

  • Highly optimised, and more efficient than some competing solutions;
  • Consistent management for both on-premises and cloud workloads;
  • Eliminates the headaches as enterprises scale;
  • Single-click resilience;
  • Simple recovery from current snapshots or old backup data;
  • Cost-effective failback from the public cloud; and
  • Purely software-defined DRaaS on hyperscale public clouds for reduced deployment risk long term.

But what if you want a little flexibility in terms of where those workloads are recovered? Read on.

Instant RTO

So you’re protecting your workloads in AWS, but what happens when you need to stand up stuff fast in VMC on AWS? This is where Instant RTO can really help. There’s no rehydration or backup “recovery” delay. Datrium tells me you can perform massively parallel VM restarts (hundreds at a time) and you’re ready to go in no time at all. The full RTO varies by run-book plan, but by booting VMs from a live NFS datastore, you know it won’t take long. Failback uses VADP.
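To be clear, this isn’t Datrium’s orchestration code – it’s just a rough sketch of what that parallel restart pattern looks like if you drive it yourself with pyVmomi against a vCenter endpoint. The hostname and credentials below are placeholders.

import ssl
from concurrent.futures import ThreadPoolExecutor

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details - point these at your own vCenter.
context = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=context)

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
powered_off = [vm for vm in view.view
               if vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOff]

# Kick off power-on tasks for many VMs in parallel rather than one at a time.
with ThreadPoolExecutor(max_workers=50) as pool:
    tasks = list(pool.map(lambda vm: vm.PowerOnVM_Task(), powered_off))

print(f"Started {len(tasks)} power-on tasks")
Disconnect(si)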

[image courtesy of Datrium]

The only cost during normal business operations (when not testing or deploying DR) is the cost of storing ongoing backups. And these are automatically deduplicated, compressed, and encrypted. In the event of a disaster, Datrium DRaaS provisions an on-demand SDDC in VMware Cloud on AWS for recovery. All the snapshots in S3 are instantly made executable on a live, cloud-native NFS datastore mounted by ESX hosts in that SDDC, with caching on NVMe flash. Instant RTO is available from Datrium today.

DRaaS Connect

DRaaS Connect extends the benefits of Instant RTO DR to any vSphere environment. DRaaS Connect is available for two different vSphere deployment models:

  • DRaaS Connect for VMware Cloud offers instant RTO disaster recovery from an SDDC in one AWS Availability Zone (AZ) to another;
  • DRaaS Connect for vSphere On Prem integrates with any vSphere physical infrastructure on-premises.

[image courtesy of Datrium]

DRaaS Connect for vSphere On Prem extends Datrium DRaaS to any vSphere on-premises infrastructure. It will be managed by a DRaaS cloud-based control plane to define VM protection groups and their frequency, replication and retention policies. On failback, DRaaS sends only changed blocks back to vSphere and the local on-premises infrastructure through DRaaS Connect.
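That “only changed blocks” idea is worth a moment. This isn’t Datrium’s replication engine, just a minimal Python sketch of the concept: hash fixed-size blocks of a disk image before and after the failover event, and only the blocks whose hashes differ need to travel back. The file names and block size are placeholders.

import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # 4MB blocks, purely illustrative

def block_hashes(path):
    """Return a list of SHA-256 digests, one per fixed-size block."""
    hashes = []
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            hashes.append(hashlib.sha256(block).hexdigest())
    return hashes

# Compare the protected copy with the recovered copy and note which blocks differ.
baseline = block_hashes("vmdk-before-failover.img")
current = block_hashes("vmdk-after-failover.img")
changed = [i for i, (a, b) in enumerate(zip(baseline, current)) if a != b]
print(f"{len(changed)} of {len(baseline)} blocks need to move back on failback")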

The other cool things to note about DRaaS Connect are that:

  • There’s no Datrium DHCI system required
  • It’s a downloadable VM
  • You can start protecting workloads in minutes

DRaaS Connect will be available in Q1 2020.


Thoughts and Further Reading

Datrium announced some research around disaster recovery and ransomware in enterprise data centres in concert with the product announcements. Some of it wasn’t particularly astonishing, with folks keen to leverage pay as you go models for DR, and wanting easier mechanisms for data mobility. What was striking is that one of the main causes of disasters is people, not nature. Years ago I remember we used to plan for disasters that invariably involved some kind of flood, fire, or famine. Nowadays, we need to plan for some script kid pumping some nasty code onto our boxes and trashing critical data.

I’m a fan of companies that focus on disaster recovery, particularly if they make it easy for consumers to access their services. Disasters happen frequently. It’s not a matter of if, just a matter of when. Datrium has acknowledged that not everyone is using their infrastructure, but that doesn’t mean it can’t offer value to customers using VMC on AWS. I’m not 100% sold on Datrium’s vision for “disaggregated HCI” (despite Hugo’s efforts to educate me), but I am a fan of vendors focused on making things easier to consume and operate for customers. Instant RTO and DRaaS Connect are both features that round out the DRaaS for VMware Cloud on AWS quite nicely.

I haven’t dived as deep into this as I’d like, but Andre from Datrium has written a comprehensive technical overview that you can read here. Datrium’s product overview is available here, and the product brief is here.

Random Short Take #21

Here’s a semi-regular listicle of random news items that might be of some interest.

  • This is a great article covering QoS enhancements in Purity 5.3. Speaking of Pure Storage, I’m looking forward to attending Pure//Accelerate in Austin in the next few weeks. I’ll be participating in a Storage Field Day Exclusive event as well – you can find more details on that here.
  • My friends at Scale Computing have entered into an OEM agreement with Acronis to add more data protection and DR capabilities to the HC3 platform. You can read more about that here.
  • Commvault just acquired Hedvig for a pretty penny. It will be interesting to see how they bring them into the fold. This article from Max made for interesting reading.
  • DH2i are presenting a webinar on September 10th at 11am Pacific, “On the Road Again – How to Secure Your Network for Remote User Access”. I’ve spoken to the people at DH2i in the past and they’re doing some really interesting stuff. If your timezone lines up with this, check it out.
  • This was some typically insightful coverage of VMworld US from Justin Warren over at Forbes.
  • I caught up with Zerto while I was at VMworld US last week, and they talked to me about their VAIO announcement. Justin Paul did a good job of summarising it here.
  • Speaking of VMworld, William has posted links to the session videos – check it out here.
  • Project Pacific was big news at VMworld, and I really enjoyed this article from Joep.