Pure Storage Announces Second Generation FlashArray//C with QLC

Disclaimer: I recently attended Storage Field Day 20.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Pure Storage recently announced its second-generation FlashArray//C – an all-QLC offering delivering scads of capacity in a dense form factor. Pure Storage presented on this topic at Storage Field Day 20. You can see videos of the presentation here, and download my rough notes from here.

 

It’s A Box!

FlashArray//C burst onto the scene last year as an all-flash, capacity-optimised storage option for customers looking for storage that didn’t need to go quite as fast as the FlashArray//X, but that wasn’t built on spinning disk. Available capacities range from 1.3PB to 5.2PB (effective).

[image courtesy of Pure Storage]

There are a number of models available, with a variety of capacities and densities.

  • //C60-366: Up to 1.3PB effective capacity**; 366TB raw capacity**; 3U; 1000–1240 watts (nominal–peak); 97.7 lbs (44.3 kg) fully loaded; 5.12” x 18.94” x 29.72” chassis
  • //C60-494: Up to 1.9PB effective capacity**; 494TB raw capacity**; 3U; 1000–1240 watts (nominal–peak); 97.7 lbs (44.3 kg) fully loaded; 5.12” x 18.94” x 29.72” chassis
  • //C60-840: Up to 3.2PB effective capacity**; 840TB raw capacity**; 6U; 1480–1760 watts (nominal–peak); 177.0 lbs (80.3 kg) fully loaded; 10.2” x 18.94” x 29.72” chassis
  • //C60-1186: Up to 4.6PB effective capacity**; 1.2PB raw capacity**; 6U; 1480–1760 watts (nominal–peak); 185.4 lbs (84.1 kg) fully loaded; 15.35” x 18.94” x 29.72” chassis
  • //C60-1390: Up to 5.2PB effective capacity**; 1.4PB raw capacity**; 9U; 1960–2280 watts (nominal–peak); 273.2 lbs (123.9 kg) fully loaded; 15.35” x 18.94” x 29.72” chassis
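As an aside, the effective figures bake in an assumed data reduction ratio. Here’s a quick back-of-envelope check of what that ratio works out to per model (my arithmetic, using raw terabytes from the model numbers; not an official Pure Storage figure):

```python
# Implied data reduction ratio (effective / raw) for each //C60 model,
# using the capacities quoted for the range. Raw TB comes from the model
# number; this is my arithmetic, not an official Pure Storage figure.

MODELS = {
    "//C60-366": (366, 1300),     # (raw TB, effective TB)
    "//C60-494": (494, 1900),
    "//C60-840": (840, 3200),
    "//C60-1186": (1186, 4600),
    "//C60-1390": (1390, 5200),
}

def implied_reduction(raw_tb: float, effective_tb: float) -> float:
    """Data reduction ratio implied by the raw and effective capacities."""
    return round(effective_tb / raw_tb, 2)

ratios = {model: implied_reduction(raw, eff)
          for model, (raw, eff) in MODELS.items()}
```

Every model lands somewhere between roughly 3.5:1 and 3.9:1, which suggests the same reduction assumptions are being applied across the range.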

Workloads

There are reasons why the FlashArray//C could be a really compelling option for workload consolidation. More and more workloads are “business critical” in terms of both performance and availability. There’s a requirement to do more with less, while battling complexity, and a strong desire to manage everything via a single pane of glass.

There are some other cool things you could use the //C for as well, including:

  • Automated policy-based VM tiering between //X and //C arrays;
  • DR using the //X at production and //C at your secondary site;
  • Consolidating multiple //X array workloads on a single //C array for test and dev; and
  • Consolidating multiple //X array snapshots to a single //C array for long-term retention.
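The automated tiering use case ultimately comes down to a placement decision per workload. A minimal sketch of what such a policy could look like (the thresholds and attributes here are illustrative assumptions of mine, not Pure’s actual policy engine):

```python
# Minimal sketch of a policy deciding whether a VM belongs on the
# performance tier (//X) or the capacity tier (//C). The threshold and
# VM attributes are illustrative assumptions, not Pure's policy engine.

def place_vm(iops_required: int, latency_sensitive: bool,
             iops_threshold: int = 10_000) -> str:
    """Return the target array tier for a VM based on simple rules."""
    if latency_sensitive or iops_required > iops_threshold:
        return "//X"   # high-performance tier
    return "//C"       # capacity-optimised tier

# Example: a busy, latency-sensitive database lands on //X, while a
# quiet test/dev clone lands on //C.
tier = place_vm(iops_required=500, latency_sensitive=False)
```

A real implementation would, of course, be driven by observed workload telemetry rather than static attributes.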

 

It’s a QLC World, Sort Of

The second-generation FlashArray//C means you can potentially now have flash all through the data centre.

  • Apps and VMs – provision your high performance workloads to //X, lower performance / high capacity workloads to //C
  • Modern Data Protection & Disaster Recovery – on-premises production applications on //X efficiently replicated or backed up to //C at DR site
  • User File Shares – User file access with Purity 6.0 via SMB, NFS

QLC nonetheless presents significant engineering challenges, with traditionally high write latency and low endurance (when compared to SLC, MLC, and TLC). Pure Storage’s answer to that problem has been to engineer the crap out of DirectFlash to get the required results. I’d do a bad job of explaining it, so instead I recommend you check out Pete Kirkpatrick’s explanation.
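To get a feel for why endurance engineering matters so much with QLC, some back-of-envelope arithmetic (the P/E cycle counts and write amplification figures below are illustrative assumptions of mine, not Pure’s numbers):

```python
# Back-of-envelope QLC endurance arithmetic. P/E cycle counts and
# write amplification values are illustrative assumptions; real figures
# depend on the NAND and on how well the controller (or, in Pure's
# case, DirectFlash) shapes the write stream.

def drive_lifetime_years(capacity_tb: float, pe_cycles: int,
                         writes_per_day_tb: float,
                         write_amplification: float) -> float:
    """Years until rated P/E cycles are exhausted at a steady write rate."""
    total_writes_tb = capacity_tb * pe_cycles            # rated endurance
    effective_daily = writes_per_day_tb * write_amplification
    return round(total_writes_tb / effective_daily / 365, 1)

# A hypothetical 24TB QLC module rated at 1,000 P/E cycles, absorbing
# 2TB/day of host writes: halving the write amplification roughly
# doubles the module's life, which is why the controller work matters.
```

The point of the sketch is the last comment: reducing write amplification is where a system like DirectFlash earns its keep.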

 

Thoughts And Further Reading

I covered the initial FlashArray//C announcement here and many of the reasons why this type of offering is appealing remain the same. The knock on Pure Storage in the last few years has been that, while FlashArray//X is nice and fast and a snap to use, it couldn’t provide the right kind of capacity (i.e. cheap and deep) that a number of price-sensitive punters wanted.  Sure, they could go and buy the FlashArray//X and then look to another vendor for a dense storage option, but the motivation to run with a number of storage vendors in smaller enterprise shops is normally fairly low. The folks in charge of technology in these environments are invariably stretched in terms of bodies on the floor to run the environments, and cash in the bank to procure those solutions. A single vendor solution normally makes sense for them (as opposed to some of the larger shops, or specialist organisations that really have very specific requirements that can only be serviced by particular solutions).

So now Pure Storage has the FlashArray//C, and you can get it with some decent density, some useful features (thanks in part to some new features in Purity 6), and integration with the things you know and like about Pure Storage, such as Pure1 and Evergreen storage. It seems like Pure Storage has done an awful lot of work to squeeze performance out of QLC whilst ensuring that the modules don’t need replacing every other week. There’s a lot to like about the evolving Pure Storage story, and I’m interested to see how they tie it all together as the portfolio continues to expand. You can read the press release here, access the data sheet here, and read Mellor’s take on the news here.

Datadobi Announces DobiProtect

Datadobi recently announced DobiProtect. I had the opportunity to speak with Michael Jack and Carl D’Halluin about the announcement, and thought I’d share some thoughts here.

 

The Problem

Disaster Recovery

Modern disaster recovery solutions tend more towards business continuity than DR. The challenge with data replication solutions is that it’s a trivial thing to replicate corruption from your primary storage to your DR storage. Backup systems are vulnerable too, and in most instances you need to make some extra effort to ensure you’ve got a replicated catalogue, and that your backup data is not isolated. Invariably, you’ll be looking to restore to like hardware in order to reduce the recovery time. Tape is still a pain to deal with, and you’re also at the mercy of people and processes going wrong.

What Do Customers Need?

To get what you need out of a robust DR system, there are a few criteria that need to be met, including:

  • An easy way to select business-critical data;
  • A simple way to make a golden copy in native format;
  • A bunker site in a DC or cloud;
  • A manual air-gap procedure;
  • A way to restore to anything; and
  • A way to failover if required.

 

Enter DobiProtect

What Does It Do?

The idea is that you have two sites with a manual air-gap between them, usually controlled by a firewall of some type. The first site is where you run your production workload, and there’ll likely be a subset of data that is really quite important to your business. You can use DobiProtect to get that data from your production site to DR (it might even be in a bunker!). In order to get the data from Production to DR, DobiProtect scans the data before it’s pulled across to DR. Note that the data is pulled, not pushed. This is important, as it means that there’s no obvious trace of the bunker’s existence in production.

[image courtesy of Datadobi]
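The pull model is worth dwelling on: the bunker initiates every transfer, so production never holds credentials for, or configuration pointing at, the bunker. A conceptual sketch of the idea (my illustration, not DobiProtect’s actual implementation):

```python
# Conceptual sketch of pull-based replication: the bunker side reads the
# production listing and pulls only what changed. Production is never
# configured to talk to the bunker, so there's no trace of it there.
# This illustrates the pull model, not DobiProtect's implementation.

def pull_changes(production: dict, bunker: dict) -> dict:
    """Bunker-initiated sync: copy new or modified entries from production.

    Both arguments map path -> content. `production` is treated as
    read-only; the bunker's golden copy is updated in place.
    """
    for path, content in production.items():
        if bunker.get(path) != content:
            bunker[path] = content       # pull the new or changed file
    return bunker

golden = pull_changes({"a.txt": "v2", "b.txt": "v1"}, {"a.txt": "v1"})
```

Contrast this with push replication, where production must know about, and be able to reach, the secondary site.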

If things go bang, you can recover to any NAS or object storage platform.

  • Browse golden copy
  • Select by directory structure, folder, or object patterns
  • Mounts and shares
  • Specific versions

Bonus Use Case

One of the more popular use cases that Datadobi spoke to me about was heterogeneous edge-to-core protection. Data on the edge is usually more vulnerable, and not every organisation has the funding to put robust protection mechanisms in place at every edge site to protect critical data. With the advent of COVID-19, many organisations have been pushing more data to the edge in order for remote workers to have better access to data. The challenge then becomes keeping that data protected in a reliable fashion. DobiProtect can be used to pull data from the core once data has been pulled back from the edge. Because it’s a software only product, your edge storage can be anything that supports object, SMB, or NFS, and the core could be anything else. This provides a lot of flexibility in terms of the expense traditionally associated with DR at edge sites.

[image courtesy of Datadobi]

 

Thoughts and Further Reading

The idea of an air-gapped site in a bunker somewhere is the sort of thing you might associate with a James Bond story. In Australia these aren’t exactly a common thing (bunkers, not James Bond stories), but Europe and the US are riddled with them. As Jack pointed out in our call, “[t]he first rule of bunker club – you don’t talk about the bunker”. Datadobi couldn’t give me a list of customers using this type of solution because none of them want people to know that they’re doing things this way. It seems a bit like security via obscurity, but there’s no point painting a big target on your back or giving clues out for would-be crackers to get into your environment and wreak havoc.

The idea that your RPO is a day, rather than minutes, is also confronting for some folks. But the idea of this solution is that you’ll use it for your absolutely mission critical can’t live without it data, not necessarily your virtual machines that you may be able to recover normally if you’re attacked or the magic black smoke escapes from one of your hosts. If you’ve gone to the trouble of looking into acquiring some rack space in a bunker, limited the people in the know to a handful, and can be bothered messing about with a manual air-gap process, the data you’re looking to protect is clearly pretty important.

Datadobi has a rich heritage in data migration for both file and object storage systems. It makes sense that customer demand would eventually drive them down this route to deliver a migration tool that ostensibly runs all the time as a sort of data protection tool. This isn’t designed to protect everything in your environment, but for the stuff that will ruin your business if it goes away, it’s very likely worth the effort and expense. There are some folks out there actively looking for ways to put you over a barrel, so it’s important to think about what it’s worth to your organisation to avoid that if possible.

BackupAssist Announces BackupAssist ER

BackupAssist recently announced BackupAssist ER. I recently had the opportunity to speak with Linus Chang (CEO), Craig Ryan, and Madeleine Tan about the announcement.

 

BackupAssist

Founded in 2001, BackupAssist is focussed primarily on the small to medium enterprise (under 500 seats). They sell the product via a variety of mechanisms, including:

  • Direct
  • Partners
  • Distribution channels

 

Challenges Are Everywhere

Some of the challenges faced by the average SME when it comes to data protection include the following:

  • Malware
  • COVID-19
  • Compliance

So what does the average SME need when it comes to selecting a data protection solution?

  • Make it affordable
  • Automatic offsite backups with history and retention
  • Most recoveries are local – make them fast!
  • The option to recover in the cloud if needed (the fallback to the fallback)

 

What Is It?

So what exactly is BackupAssist ER? It’s backup and recovery software.

[image courtesy of BackupAssist]

It’s deployed on Windows servers, and has support for disk to disk to cloud as a protection topology.

CryptoSafeGuard

Another cool feature is CryptoSafeGuard, providing the following features:

  • Shield from unauthorised access
  • Detect – Alert – Preserve

Disaster Recovery

  • VM Instant boot (converting into a Hyper-V guest)
  • BMR (catering for dissimilar hardware)
  • Download cloud backup anywhere

Data Recovery

The product supports the granular recovery of files, Exchange, and applications.

Data Handling and Control

A key feature of the solution is the approach to data handling, offering:

  • Accessibility
  • Portability
  • Retention

It uses the VHDX file format to store protection data. It can also backup to Blob storage. Chang also advised that they’re working on introducing S3 compatibility at some stage.

Retention

The product supports a couple of different retention schemes, including:

  • Local – Keep N copies (GFS is coming)
  • Cloud – Keep X copies
  • Archival – Keep a backup on a HDD, and retain for years
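The keep-N schemes above are simple to sketch (this covers only the keep-N part, with made-up dates; the GFS scheme is noted as coming):

```python
# Sketch of the "keep N copies" retention scheme described above.
# Dates are made up for illustration; GFS (grandfather-father-son)
# retention, which BackupAssist says is coming, is not modelled here.
from datetime import date

def prune_keep_n(backups: list[date], n: int) -> list[date]:
    """Return the backups to retain: the n most recent, newest first."""
    return sorted(backups, reverse=True)[:n]

kept = prune_keep_n(
    [date(2020, 8, d) for d in (1, 5, 9, 13, 17)], n=3
)   # retains the 17th, 13th, and 9th; the older two can be deleted
```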

Pricing

BackupAssist ER is licensed in a variety of ways. Costs are as follows:

  • Per physical machine – $399 US annually;
  • Per virtual guest machine – $199 US annually; and
  • Per virtual host machine – $699 US annually.

There are discounts available for multi-year subscriptions, as well as discounts to be had if you’re looking to purchase licensing for more than 5 machines.
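Using the list prices above, a quick cost sketch (the multi-year and 5+ machine discounts aren’t quantified in the announcement, so they’re omitted here):

```python
# Annual list cost at the per-machine prices quoted above. Multi-year
# and volume (5+ machine) discounts exist but aren't quantified in the
# announcement, so this deliberately ignores them.
PRICES_USD = {"physical": 399, "virtual_guest": 199, "virtual_host": 699}

def annual_cost(machines: dict) -> int:
    """machines maps licence type -> count; returns annual list price (USD)."""
    return sum(PRICES_USD[kind] * count for kind, count in machines.items())

# e.g. two physical servers plus one virtual host:
cost = annual_cost({"physical": 2, "virtual_host": 1})   # 1497
```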

 

Thoughts and Further Reading

Chang noted that BackupAssist is “[n]ot trying to be the best, but the best fit”. You’ll see that a lot of the capability is Microsoft-centric, with support for Windows and Hyper-V. This makes sense when you look at what the SME market is doing in terms of leveraging Microsoft platforms to deliver their IT requirements. Building a protection product that covers every platform is time-consuming and expensive in terms of engineering effort. What Chang and the team have been focussed on is delivering data protection products to customers at a particular price point while delivering the right amount of technology.

The SME market is notorious for wanting to consume quality product at a particular price point. Every interaction I’ve had with customers in the SME segment has given me a crystal clear understanding of “Champagne tastes on a beer budget”. But in much the same way that some big enterprise shops will never stop doing things at a glacial pace, so too will many SME shops continue to look for high value at a low cost. Ultimately, compromises need to be made to meet that price point, hence the lack of support for features such as VMware. That doesn’t mean that BackupAssist can’t meet your requirements, particularly if you’re running your business’s IT on a couple of Windows machines. For this it’s well suited, and the flexibility on offer in terms of disk targets, retention, and recovery should be motivation to investigate further. It’s a bit of a nasty world out there, so anything you can do to ensure your business data is a little safer should be worthy of further consideration. You can read the press release here.

Retrospect Announces Backup 17 And Virtual 2020

Retrospect recently announced new versions of its Backup (17) and Virtual (2020) products. I had the opportunity to speak to JG Heithcock (GM, Retrospect) about the announcement and thought I’d share some thoughts here.

 

What’s New?

Retrospect Backup 17 has the following new features:

  • Automatic Onboarding: Simplified and automated deployment and discovery;
  • Nexsan E-Series / Unity Certification;
  • 10x Faster ProactiveAI; and
  • Restore Preflight for restores from cold storage.

Retrospect Virtual 2020 has the following enhancements:

  • Automatic Onboarding: Physical and Virtual monitoring from a single website;
  • 50% Faster;
  • Wasabi Cloud Support;
  • Backblaze B2 Cloud Support; and
  • Flexible licensing between VMware and Hyper-V.

Automatic Onboarding?

So what exactly is automatic onboarding? You can onboard new servers and endpoints for faster deployment and automatic discovery.

  • Share one link with your team. No agent password required.
  • Retrospect Backup finds and protects new clients with ProactiveAI.
  • Add servers, desktops, and laptops to Retrospect Backup.
  • Single pane of glass for entire backup infrastructure with Retrospect Management Console.
  • Available for Windows, Mac, and Linux.

You can also onboard a new Retrospect Backup server for faster, simplified deployment.

  • Protect group or site.
  • Customised installer with license built-in.
  • Seamless Management Console integration.
  • Available for Windows and Mac.

You can also onboard a new Retrospect Virtual server for complete physical and virtual monitoring.

  • Customised installer
  • Seamless Management Console integration.
  • Monitor Physical + Virtual

Pricing

There’s a variety of pricing available. When you buy a perpetual license, you have access to any new minor or major version upgrades for 12 months. With the monthly subscription model you have access to the latest version of the product for as long as you keep the subscription active.

[image courtesy of Retrospect]

 

Thoughts And Further Reading

Retrospect was acquired by StorCentric in June 2019 after bouncing around a few different owners over the years. It’s been around for a long time, and has a rich history of delivering data protection solutions for small business and “prosumer” markets. I have reasonably fond memories of Retrospect from the time when it was shipped with Maxtor OneTouch external hard drives. Platform support is robust, with protection options available across Windows, macOS and some Linux, and the pricing is competitive. Retrospect is also benefitting from joining the StorCentric family, and I’m looking forward to hearing about more product integrations as time goes on.

Why would I cover a data protection product that isn’t squarely targeted at the enterprise or cloud market? Because I’m interested in data protection solutions across all areas of IT. I think the small business and home market is particularly under-represented when it comes to easy to deploy and run solutions. There is a growing market for cloud-based solutions, but simple local protection options still seem to be pretty rare. The number of people I talk to who are just manually copying data from one spot to another is pretty crazy. Why is it so hard to get good backup and recovery happening on endpoints? It shouldn’t be. You could argue that, with the advent of SaaS services and cloud-based storage solutions, the requirement to protect endpoints the way we used to has changed. But local protection options still make it a whole lot quicker and easier to recover.

If you’re in the market for a solution that is relatively simple to operate, has solid support for endpoint operating systems and workloads, and is competitively priced, then I think Retrospect is worth evaluating. You can read the announcement here.

StorCentric Announces QLC E-Series 18F

Nexsan recently announced the release of its new E-Series 18F (E18F) storage platform. I had the chance to chat with Surya Varanasi, CTO of StorCentric, about the announcement and thought I’d share some thoughts here.

 

Less Disk, More Flash

[image courtesy of Nexsan]

The E18F is designed and optimised for quad-level cell (QLC) NAND technology. If you’re familiar with the Nexsan E-Series range, you’d be aware of the E18P that preceded this model. This is the QLC Flash version of that.

Use Cases

We spoke about a couple of use cases for the E18F. The first of these was data lake environments. These are the sort of storage environments with 20 to 30PB installations that are subjected to random workload pressures. The idea of using QLC is to increase the performance without significantly increasing the cost. That doesn’t mean that you can do a like-for-like swap of HDDs for QLC Flash. Varanasi did, however, suggest that Nexsan had observed a 15x improvement over a hard drive installation for around 3-4 times the cost, and he’s expecting that to go down to 2-3 times in the future. There is also the option to use just a bit of QLC Flash with a lot of HDDs to get some performance improvement.
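Those multiples translate directly into price/performance. My arithmetic on the quoted figures:

```python
# Price/performance arithmetic from the figures Varanasi quoted: roughly
# 15x the performance of an HDD installation at 3-4x the cost today,
# trending towards 2-3x the cost in the future.

def perf_per_dollar_gain(perf_multiple: float, cost_multiple: float) -> float:
    """How much more performance per dollar vs. the HDD baseline."""
    return round(perf_multiple / cost_multiple, 1)

# 15x performance at 4x cost is 3.8x better performance per dollar;
# if the cost premium drops to 2x, that becomes 7.5x.
```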

The other use case discussed was the use of QLC in test and dev environments. Users are quite keen, obviously, on getting Flash in their environments at the price of HDDs. This isn’t yet a realistic goal, but it’s more achievable with QLC than it is with something like TLC.

 

QLC And The Future

We spoke briefly about more widespread adoption of QLC across the range of StorCentric storage products. Varanasi said the use “will eventually expand across the portfolio”, and they were looking at how it might be adopted with the larger E-Series models, as well as with the Assureon and Vexata range. They were treating Unity more cautiously, as the workloads traditionally hosted on that platform were a little more demanding.

 

Thoughts and Further Reading

The kind of workloads we’re throwing at what were once viewed as “cheap and deep” platforms is slowly changing. Where once it was perhaps acceptable to wait a few days for reporting runs to finish, there’s no room for that kind of performance gap now. So it makes sense that we look to Flash as a way of increasing the performance of the tools we’re using. The problem, however, is that when you work on data sets in the petabyte range, you need a lot of capacity to accommodate that. Flash is getting cheaper, but it’s still not there when compared to traditional spinning disks. QLC is a nice compromise between performance and capacity. There’s a definite performance boost to be had, and the increase in cost isn’t eye-watering.

I’m interested to see how this solution performs in the real world, and whether QLC has the expected durability to cope with the workloads that enterprises will throw at it. I’m also looking forward to seeing where else Nexsan decides to use QLC in its portfolio. There’s a good story here in terms of density, performance, and energy consumption – one that I’m sure other vendors will also be keen to leverage. For another take on this, check out Mellor’s article here.

InfiniteIO And Your Data – Making Meta Better

InfiniteIO recently announced its new Application Accelerator. I had the opportunity to speak about the news with Liem Nguyen (VP of Marketing) and Kris Meier (VP of Product Management) from InfiniteIO and thought I’d share some thoughts here.

 

Metadata Is Good, And Bad

When you think about file metadata you might think about photos and the information they store that tells you about where the photo was taken, when it was taken, and the kind of camera used. Or you might think of an audio file and the metadata that it contains, such as the artist name, year of release, track number, and so on. Metadata is a really useful thing that tells us an awful lot about data we’re storing. But things like simple file read operations make use of a lot of metadata just to open the file:

  • During the typical file read, 7 out of 8 operations are metadata requests, which significantly increases latency; and
  • Up to 90% of all requests going to NAS systems are for metadata.

[image courtesy of InfiniteIO]

 

Fire Up The Metadata Engine

Imagine how much faster storage would be if it only had to service 10% of the requests it does today. The Application Accelerator helps with this by:

  • Separating metadata request processing from file I/O
  • Responding directly to metadata requests at the speed of DRAM – much faster than a file system

[image courtesy of InfiniteIO]

The cool thing is that it’s a simple deployment – installed like a network switch, requiring no changes to workflows.
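An Amdahl-style estimate shows why offloading metadata moves the needle so much. The metadata fractions below are the ones InfiniteIO quotes (7 of 8 operations in a file read; up to 90% of NAS requests); the DRAM-versus-filesystem latency ratio is an illustrative assumption of mine:

```python
# Amdahl-style estimate of the overall improvement from accelerating
# only the metadata path. The 100x DRAM-vs-filesystem latency ratio is
# an illustrative assumption; the metadata fractions are the figures
# InfiniteIO quotes (7/8 of file-read ops, up to 90% of NAS requests).

def effective_speedup(metadata_fraction: float,
                      metadata_speedup: float) -> float:
    """Overall request-latency speedup when only metadata is accelerated."""
    remaining = (1 - metadata_fraction) + metadata_fraction / metadata_speedup
    return round(1 / remaining, 2)

# If 90% of requests are metadata and DRAM answers them 100x faster,
# average latency improves by roughly 9x even though the file I/O path
# is completely untouched.
```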

 

Thoughts and Further Reading

Metadata is a key part of information management. It provides data with a lot of extra information that makes that data more useful to applications that consume it and to the end users of those applications. But this metadata has a cost associated with it. You don’t think about the amount of activity that happens with simple file operations, but there is a lot going on. It gets worse when you look at activities like AI training and software build operations. The point of a solution like the Application Accelerator is that, according to InfiniteIO, your primary storage devices could be performing at another level if another device was doing the heavy lifting when it came to metadata operations.

Sure, it’s another box in the data centre, but the key to the Application Accelerator’s success is the software that sits on the platform. When I saw the name my initial reaction was that filesystem activities aren’t applications. But they really are, and more and more applications are leveraging data on those filesystems. If you could reduce the load on those filesystems to the extent that InfiniteIO suggest then the Application Accelerator becomes a critical piece of the puzzle.

You might not care about increasing the performance of your applications when accessing filesystem data. And that’s perfectly fine. But if you’re using a lot of applications that need high performance access to data, or your primary devices are struggling under the weight of your workload, then something like the Application Accelerator might be just what you need. For another view, Chris Mellor provided some typically comprehensive coverage here.

Datrium Enhances DRaaS – Makes A Cool Thing Cooler

Datrium recently made a few announcements to the market. I had the opportunity to speak with Brian Biles (Chief Product Officer, Co-Founder), Sazzala Reddy (Chief Technology Officer and Co-Founder), and Kristin Brennan (VP of Marketing) about the news and thought I’d cover it here.

 

Datrium DRaaS with VMware Cloud

Before we talk about the new features, let’s quickly revisit the DRaaS for VMware Cloud offering, announced by Datrium in August this year.

[image courtesy of Datrium]

The cool thing about this offering was that, according to Datrium, it “gives customers complete, one-click failover and failback between their on-premises data center and an on-demand SDDC on VMware Cloud on AWS”. There are some real benefits to be had for Datrium customers, including:

  • Highly optimised, and more efficient than some competing solutions;
  • Consistent management for both on-premises and cloud workloads;
  • Eliminates the headaches as enterprises scale;
  • Single-click resilience;
  • Simple recovery from current snapshots or old backup data;
  • Cost-effective failback from the public cloud; and
  • Purely software-defined DRaaS on hyperscale public clouds for reduced deployment risk long term.

But what if you want a little flexibility in terms of where those workloads are recovered? Read on.

Instant RTO

So you’re protecting your workloads in AWS, but what happens when you need to stand up stuff fast in VMC on AWS? This is where Instant RTO can really help. There’s no rehydration or backup “recovery” delay. Datrium tells me you can perform massively parallel VM restarts (hundreds at a time) and you’re ready to go in no time at all. The full RTO varies by run-book plan, but by booting VMs from a live NFS datastore, you know it won’t take long. Failback uses VADP.

[image courtesy of Datrium]

The only cost during normal business operations (when not testing or deploying DR) is the cost of storing ongoing backups. And these are automatically deduplicated, compressed and encrypted. In the event of a disaster, Datrium DRaaS provisions an on-demand SDDC in VMware Cloud on AWS for recovery. All the snapshots in S3 are instantly made executable on a live, cloud-native NFS datastore mounted by ESX hosts in that SDDC, with caching on NVMe flash. Instant RTO is available from Datrium today.
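The massively parallel restart idea is easy to sketch with a thread pool. The restart itself is a stub here; Datrium’s run-book orchestration (boot order, dependencies, and so on) is obviously more involved:

```python
# Conceptual sketch of massively parallel VM restarts. The restart is a
# stub standing in for booting a VM from the live NFS datastore; real
# run-book plans handle boot order and dependencies as well.
from concurrent.futures import ThreadPoolExecutor

def restart_vm(name: str) -> str:
    """Stub: boot one VM from the live NFS datastore."""
    return f"{name}: running"

def parallel_restart(vm_names: list[str], workers: int = 100) -> list[str]:
    """Restart hundreds of VMs concurrently rather than one at a time."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(restart_vm, vm_names))

statuses = parallel_restart([f"vm-{i:03d}" for i in range(200)])
```

The point is the shape of the operation: because snapshots are executable in place, the restart fan-out is bounded by orchestration, not by data rehydration.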

DRaaS Connect

DRaaS Connect extends the benefits of Instant RTO DR to any vSphere environment. DRaaS Connect is available for two different vSphere deployment models:

  • DRaaS Connect for VMware Cloud offers instant RTO disaster recovery from an SDDC in one AWS Availability Zone (AZ) to another;
  • DRaaS Connect for vSphere On Prem integrates with any vSphere physical infrastructure on-premises.

[image courtesy of Datrium]

DRaaS Connect for vSphere On Prem extends Datrium DRaaS to any vSphere on-premises infrastructure. It will be managed by a DRaaS cloud-based control plane to define VM protection groups and their frequency, replication and retention policies. On failback, DRaaS will return only changed blocks back to vSphere and the local on-premises infrastructure through DRaaS Connect.
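Returning only changed blocks on failback can be illustrated by comparing per-block hashes between the cloud copy and the on-premises copy. This is a conceptual sketch of the idea, not Datrium’s actual wire protocol:

```python
# Sketch of changed-block failback: hash each fixed-size block of the
# cloud and on-premises images, and ship back only the blocks that
# differ. Illustrates the idea; not Datrium's actual wire protocol.
import hashlib

def block_hashes(data: bytes, block_size: int = 4096) -> list[str]:
    """Hash each fixed-size block of an image."""
    return [hashlib.sha256(data[i:i + block_size]).hexdigest()
            for i in range(0, len(data), block_size)]

def changed_blocks(cloud: bytes, on_prem: bytes,
                   block_size: int = 4096) -> list[int]:
    """Indices of blocks that differ and must be sent back on failback."""
    c, p = block_hashes(cloud, block_size), block_hashes(on_prem, block_size)
    return [i for i, (a, b) in enumerate(zip(c, p)) if a != b]
```

For large images where only a small fraction of blocks changed during the DR event, this is the difference between shipping terabytes and shipping gigabytes back on-premises.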

The other cool things to note about DRaaS Connect are that:

  • There’s no Datrium DHCI system required
  • It’s a downloadable VM
  • You can start protecting workloads in minutes

DRaaS Connect will be available in Q1 2020.

 

Thoughts and Further Reading

Datrium announced some research around disaster recovery and ransomware in enterprise data centres in concert with the product announcements. Some of it wasn’t particularly astonishing, with folks keen to leverage pay-as-you-go models for DR, and wanting easier mechanisms for data mobility. What was striking is that one of the main causes of disasters is people, not nature. Years ago I remember we used to plan for disasters that invariably involved some kind of flood, fire, or famine. Nowadays, we need to plan for some script kiddie pumping nasty code onto our boxes and trashing critical data.

I’m a fan of companies that focus on disaster recovery, particularly if they make it easy for consumers to access their services. Disasters happen frequently. It’s not a matter of if, just a matter of when. Datrium has acknowledged that not everyone is using their infrastructure, but that doesn’t mean it can’t offer value to customers using VMC on AWS. I’m not 100% sold on Datrium’s vision for “disaggregated HCI” (despite Hugo’s efforts to educate me), but I am a fan of vendors focused on making things easier to consume and operate for customers. Instant RTO and DRaaS Connect are both features that round out the DRaaS for VMware Cloud on AWS offering quite nicely.

I haven’t dived as deep into this as I’d like, but Andre from Datrium has written a comprehensive technical overview that you can read here. Datrium’s product overview is available here, and the product brief is here.

Axellio Announces Azure Stack HCI Support

Microsoft recently announced their Azure Stack HCI program, and I had the opportunity to speak to the team from Axellio (including Bill Miller, Barry Martin, and Kara Smith) about their support for it.

 

Azure Stack Versus Azure Stack HCI

So what’s the difference between Azure Stack and Azure Stack HCI? You can think of Azure Stack as an extension of Azure – designed for cloud-native applications. Azure Stack HCI is more for your traditional VM-based applications – the kind that haven’t been refactored (or can’t be) for public cloud.

[image courtesy of Microsoft]

The Azure Stack HCI program has fifteen vendor partners on launch day, of which Axellio is one.

 

Axellio’s Take

Miller describes the Axellio solution as “[n]ot your father’s HCI infrastructure”, and Axellio tells me it “has developed the new FabricXpress All-NVMe HCI edge-computing platform built from the ground up for high-performance computing and fast storage for intense workload environments. It delivers 72 NVMe SSDs per server, and packs 2 servers into one 2U chassis”. Cluster sizes start at 4 nodes and run up to 16. Note that the form factor measurement in the table below includes any required switching for the solution. You can grab the data sheet from here.

[image courtesy of Axellio]

It uses the same Hyper-V based software-defined compute, storage and networking as Azure Stack and integrates on-premises workloads with Microsoft hybrid data services including Azure Site Recovery and Azure Backup, Cloud Witness and Azure Monitor.

 

Thoughts and Further Reading

When Microsoft first announced plans for a public cloud presence, some pundits suggested they didn’t have the chops to really make it. It seems that Microsoft has managed to perform well in that space despite what some of the analysts were saying. What Microsoft has had working in its favour is that it understands the enterprise pretty well, and has made a good push to tap that market and help get the traditionally slower moving organisations to look seriously at public cloud.

Azure Stack HCI fits nicely in between Azure and Azure Stack, giving enterprises the opportunity to host workloads that they want to keep in VMs hosted on a platform that integrates well with public cloud services that they may also wish to leverage. Despite what we want to think, not every enterprise application can be easily refactored to work in a cloud-native fashion. Nor is every enterprise ready to commit that level of investment into doing that with those applications, preferring instead to host the applications for a few more years before introducing replacement application architectures.

It’s no secret that I’m a fan of Axellio’s capabilities when it comes to edge compute and storage solutions. In speaking to the Axellio team, what stands out to me is that they really seem to understand how to put forward a performance-oriented solution that can leverage the best pieces of the Microsoft stack to deliver an on-premises hosting capability that ticks a lot of boxes. The ability to move workloads (in a staged fashion) so easily between public and private infrastructure should also have a great deal of appeal for enterprises that have traditionally struggled with workload mobility.

Enterprise operations can be a pain in the backside at the best of times. Throw in the requirement to host some workloads in public cloud environments like Azure, and your operations staff might be a little grumpy. Fans of HCI have long stated that the management of the platform, and the convergence of compute and storage, helps significantly in easing the pain of infrastructure operations. If you then take that management platform and integrate it successfully with your public cloud platform, you’re going to have a lot of fans. This isn’t Axellio’s only solution, but I think it does fit in well with their ability to deliver performance solutions in both the core and edge.

Thomas Maurer wrote up a handy article covering some of the differences between Azure Stack and Azure Stack HCI. The official Microsoft blog post on Azure Stack HCI is here. You can read the Axellio press release here.

Elastifile Announces Cloud File Service

Elastifile recently announced a partnership with Google to deliver a fully-managed file service delivered via the Google Cloud Platform. I had the opportunity to speak with Jerome McFarland and Dr Allon Cohen about the announcement and thought I’d share some thoughts here.

 

What Is It?

Elastifile Cloud File Service delivers a self-service SaaS experience, providing the ability to consume scalable file storage that’s deeply integrated with Google infrastructure. You could think of it as similar to Amazon’s EFS.

[image courtesy of Elastifile]

 

Benefits

Easy to Use

Why would you want to use this service? It:

  • Eliminates manual infrastructure management;
  • Provisions turnkey file storage capacity in minutes; and
  • Can be delivered in any zone, and any region.

 

Elastic

It’s also cloudy in a lot of the right ways, including:

  • Pay-as-you-go, consumption-based pricing;
  • Flexible pricing tiers to match workflow requirements; and
  • The ability to start small and scale out or in as needed and on-demand.

 

Google Native

One of the real benefits of this kind of solution, though, is the deep integration with Google’s Cloud Platform.

  • The UI, deployment, monitoring, and billing are fully integrated;
  • You get a single bill from Google; and
  • The solution has been co-engineered to be GCP-native.

[image courtesy of Elastifile]

 

What About Cloud Filestore?

With Google’s recently announced Cloud Filestore, you get:

  • A single storage tier selection (Standard or SSD);
  • In-cloud availability only; and
  • The ability to grow capacity or performance up to the limits of the selected tier.

With Elastifile’s Cloud File Service, you get access to the following features:

  • Aggregates performance & capacity of many VMs
  • Elastically scale-out or -in; on-demand
  • Multiple service tiers for cost flexibility
  • Hybrid cloud, multi-zone / region and cross-cloud support

You can also use ClearTier to perform tiering between file and object without any application modification.
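Elastifile haven’t shared ClearTier’s internals with me, but the general idea of transparent file-to-object tiering can be illustrated with a toy policy. The function and thresholds below are entirely hypothetical, not Elastifile’s API; the point is that placement decisions happen below the filesystem, so applications keep seeing one namespace:

```python
from dataclasses import dataclass

@dataclass
class FileMeta:
    path: str
    size_bytes: int
    days_since_access: int

def choose_tier(f: FileMeta, cold_after_days: int = 30) -> str:
    """Toy tiering policy: keep recently accessed files on the file
    tier, and push cold files down to cheaper object storage. The
    application never sees which tier a file lives on."""
    return "object" if f.days_since_access >= cold_after_days else "file"
```

A real implementation would obviously weigh more than access age (size, IO pattern, pinning rules), but this is the shape of the "no application modification" claim: the policy sits in the storage layer, not in your code.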

 

Thoughts

I’ve been a fan of Elastifile for a little while now, and I thought their 3.0 release had a fair bit going for it. As you can see from the list of features above, Elastifile are really quite good at leveraging all of the cool things about cloud – it’s software only (someone else’s infrastructure), reasonably priced, flexible, and scalable. It’s a nice change from some vendors who have focussed on being in the cloud without necessarily delivering the flexibility that cloud solutions have promised for so long. Couple that with a robust managed service and some preferential treatment from Google and you’ve got a compelling solution.

Not everyone will want or need a managed service to go with their file storage requirements, but if you’re an existing GCP and / or Elastifile customer, this will make some sense from a technical assurance perspective. The ability to take advantage of features such as ClearTier, combined with the simplicity of keeping it all under the Google umbrella, has a lot of appeal. Elastifile are in the box seat now as far as these kinds of offerings are concerned, and I’m keen to see how the market responds to the solution. If you’re interested in this kind of thing, the Early Access Program opens December 11th with general availability in Q1 2019. In the meantime, if you’d like to try out ECFS on GCP – you can sign up here.

Big Switch Announces AWS Public Cloud Monitoring

Big Switch Networks recently announced Big Mon for AWS. I had the opportunity to speak with Prashant Gandhi (Chief Product Officer) about the announcement and thought I’d share some thoughts here.

The Announcement

Big Switch describe Big Monitoring Fabric Public Cloud (its real product name) as “a seamless deep packet monitoring solution that enables workload monitoring within customer specified Virtual Private Clouds (VPCs). All components of the solution are virtual, with elastic scale-out capability based on traffic volumes.”

[image courtesy of Big Switch]

There are some real benefits to be had, including:

  • Complete AWS Visibility;
  • Multi-VPC support;
  • Elastic scaling; and
  • Consistent with the On-Prem offering.

Capabilities

  • Centralised packet and flow-based monitoring of all VPCs of a user account
  • Visibility-related traffic is kept local for security purposes and cost savings
  • Monitoring and security tools are centralised and tagged within the dedicated VPC for ease of configuration
  • Role-based access control enables multiple teams to operate Big Mon 
  • Supports centralised AWS VPC tool farm to reduce monitoring cost
  • Integrated with Big Switch’s Multi-Cloud Director for centralised hybrid cloud management
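Big Switch don’t publish the scaling algorithm, but “elastic scale-out capability based on traffic volumes” implies back-of-envelope sizing along these lines. The function name and per-node throughput figure here are hypothetical, purely to illustrate the idea:

```python
import math

def service_nodes_needed(traffic_gbps: float, per_node_gbps: float = 10.0) -> int:
    """Hypothetical elastic sizing: scale the number of virtual
    monitoring nodes with mirrored traffic volume, never dropping
    below a single node."""
    if traffic_gbps < 0:
        raise ValueError("traffic volume cannot be negative")
    return max(1, math.ceil(traffic_gbps / per_node_gbps))
```

The appeal of doing this in AWS is that scaling out (and, importantly, back in) is just an API call, rather than racking another appliance in your monitoring fabric.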

Thoughts and Further Reading

It might seem a little odd that I’m covering news from a network platform vendor on this blog, given the heavy focus I’ve had over the years on storage and virtualisation technologies. But the world is changing. I work for a Telco now and cloud is dominating every infrastructure and technology conversation I’m having. Whether it’s private or public or hybrid, cloud is everywhere, and networks are a big part of that cloud conversation (much as they have been in the data centre), as is visibility into those networks.

Big Switch have been around for under 10 years, but they’ve already made some decent headway with their switching platform and east-west monitoring tools. They understand cloud networking, and particularly the challenges facing organisations leveraging complicated cloud networking topologies. 

I’m the first guy to admit that my network chops aren’t as sharp as they could be (if you watched me set up some Google WiFi devices over the weekend, you’d understand). But I also appreciate that visibility is key to having control over what can sometimes be an overly elastic / dynamic infrastructure. It’s been hard to see traffic between availability zones, between instances, and contained within VPCs. I also like that they’ve focussed on a consistent experience between the on-premises offering and the public cloud offering.

If you’re interested in learning more about Big Switch Networks, I also recommend checking out their labs.