Rubrik Cloud Data Management 4.2 Announced – “Purpose Built for the Hybrid Cloud”

Rubrik recently announced version 4.2 of their Cloud Data Management platform and I was fortunate enough to sit in on a sneak preview from Chris Wahl, Kenneth Hui, and Rebecca Fitzhugh. Billed as “Purpose Built for the Hybrid Cloud”, this release includes a whole bunch of new features. I’ve included a summary table below, and will dig into some of the more interesting ones.

| Expanding the Ecosystem | Core Features & Services | General Enhancements |
| --- | --- | --- |
| AWS Native Protection (EC2 Instances) | Rubrik Envoy | SQL Server FILESTREAM |
| VMware vCloud Director Integration | Rubrik Edge on Hyper-V | SQL Server Log Shipping |
| Windows Full Volume Protection | Network Throttling | NAS Native API Integration |
| AIX & Solaris Support | VLAN Tagging (GUI) | NAS SMB Scan Enhancements |
| | SNMP | AHV VSS snapshot |
| | Multi-File restore | Proxy per Archival Location |
| | Reader-Writer Archival Locations | |

 

AWS Native Protection (EC2 Instances)

One of the key parts of this announcement is cloud-native protection, delivered specifically with AWS EBS Snapshots. The cool thing is you can have Rubrik running on-premises or sitting in the cloud.

Use cases?

  • Automate manual processes – use policy engine to automate lifecycle management of snapshots, including scheduling and retention
  • Rapid recovery from failure – eliminate manual steps for instance and file recovery
  • Replicate instances in other availability zones and regions – launch instances in other AZs and Regions when needed using snapshots
  • Consolidate data management – one solution to manage data across on-premises DCs and public clouds

Managing EBS snapshots has traditionally been a manual process. Now there’s no need to mess with crontab or various AWS tools to get the snaps done. It also aligns with Rubrik’s vision of having a single tool to manage both cloud and on-premises workloads. The good news is that files in snapshots are indexed and searchable, so individual file recovery is also pretty simple.
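To put the “no more crontab” point in context, below is a rough sketch of the sort of do-it-yourself snapshot lifecycle script that a policy engine replaces. It’s a minimal example using boto3; the region, tag key and seven-day retention are arbitrary choices for illustration and have nothing to do with how Rubrik implements its SLA domains.

```python
import datetime
import boto3

# Region, tag key, and retention period are arbitrary values for illustration.
ec2 = boto3.client("ec2", region_name="us-east-1")
RETENTION_DAYS = 7

def snapshot_instance_volumes(instance_id):
    """Snapshot every EBS volume attached to the given instance."""
    volumes = ec2.describe_volumes(
        Filters=[{"Name": "attachment.instance-id", "Values": [instance_id]}]
    )["Volumes"]
    for vol in volumes:
        ec2.create_snapshot(
            VolumeId=vol["VolumeId"],
            Description=f"scheduled snapshot of {instance_id}",
            TagSpecifications=[{
                "ResourceType": "snapshot",
                "Tags": [{"Key": "managed-by", "Value": "cron-script"}],
            }],
        )

def expire_old_snapshots():
    """Delete our tagged snapshots once they exceed the retention period."""
    cutoff = datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(days=RETENTION_DAYS)
    snapshots = ec2.describe_snapshots(
        OwnerIds=["self"],
        Filters=[{"Name": "tag:managed-by", "Values": ["cron-script"]}],
    )["Snapshots"]
    for snap in snapshots:
        if snap["StartTime"] < cutoff:
            ec2.delete_snapshot(SnapshotId=snap["SnapshotId"])
```

Copying snapshots to other regions for the AZ / Region recovery use case would be a similar exercise with ec2.copy_snapshot(), which is exactly the kind of plumbing you’d rather have a policy engine take care of.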

 

VMware vCloud Director Integration

It may or may not be a surprise to learn that VMware vCloud Director is still in heavy use with service providers, so news of Rubrik integration with vCD shouldn’t be too shocking. Rubrik spent a little time talking about some of the “Foundational Services” they offer, including:

  • Backup – Hosted or Managed
  • ROBO Protection
  • DR – Mirrored Site service
  • Archival – Hosted or Managed

The value they add, though, is in the additional services, or what they term “Next Generation premium services”. These include:

  • Dev / Test
  • Cloud Archival
  • DR in Cloud
  • Near-zero availability
  • Cloud migration
  • Cloud app protection

Self-service is the key

To be able to deliver a number of these services, particularly in the service provider space, there’s been a big focus on multi-tenancy.

  • Operate multi-customer configuration through a single cluster
  • Logically partition cluster into tenants as “Organisations”
  • Offer self-service management for each organisation
  • Central control, monitoring and reporting with aggregated data

Support for vCD (version 8.10 and later) is as follows:

  • Auto discovery of vCD hierarchy
  • SLA-based auto protection at different levels of the vCD hierarchy:
    • vCD Instance
    • vCD Organization
    • Org VDC
    • vApp
  • Recovery workflows:
    • Export and Instant recovery
    • Network settings
    • File restore
  • Self-service using multi-tenancy
  • Reports for vCD organization

 

Windows Full Volume Protection

Rubrik have always had fileset-based protection, and they’re now offering the ability to protect Windows hosts a volume at a time, e.g. the C:\ volume. These protection jobs incorporate additional information such as partition type, volume size, and permissions.

[image courtesy of Rubrik]

Rubrik also provides a package for creating bootable Microsoft Windows Preinstallation Environment (WinPE) media to restore the OS as well as provide disk partition information. There are multiple options for customers to recover entire volumes in addition to system state, including Master Boot Record (MBR) and GUID Partition Table (GPT) information, and the OS itself.

Why would you? There are a few use cases, including:

  • P2V – remember those?
  • Physical RDM mapping compatibility – you might still have those about, because, well, reasons
  • Physical Exchange servers and log truncation
  • Cloud mobility (AWS to Azure or vice versa)

So now you can select volumes or filesets, and you can store the volumes in a Volume Group.

[image courtesy of Rubrik]

 

AIX and Solaris Support

Wahl was reluctant to refer to AIX and Solaris as “traditional” DC applications, because it all makes us feel that little bit older. In any case, AIX support was already available in the 4.1.1 release, and 4.2 adds Oracle Solaris support. There are a few restore scenarios that come to mind, particularly when it comes to things like migration. These include:

  • Restore (in place) – Restores to the original AIX server at the original path or a different path.
  • Export (out of place) – Allows exporting to another AIX or Linux host that has the Rubrik Backup Service (RBS) running.
  • Download Only – Ability to download files to the machine from which the administrator is running the Rubrik web interface.
  • Migration – Any AIX application data can be restored or exported to a Linux host, or vice versa from Linux to an AIX host. In some cases, customers have leveraged this capability for OS migrations, removing the need for other tools.

 

Rubrik Envoy

Rubrik Envoy is a trusted ambassador (its certificate is issued by the Rubrik cluster) that represents the service provider’s Rubrik cluster in an isolated tenant network.

[image courtesy of Rubrik]

 

The idea is that service providers are able to offer backup-as-a-service (BaaS) to co-hosted tenants, enabling self-service SLA management with on-demand backup and recovery. The cool thing is you don’t have to deploy the Virtual Edition into the tenant network to get the connectivity you need. Here’s how it comes together:

  1. Once a tenant subscribes to BaaS from the SP, an Envoy virtual appliance is deployed on the tenant’s network.
  2. The tenant may log into Envoy, which will route the Rubrik UI to the MSP’s Rubrik cluster.
  3. Envoy will only allow access to objects that belong to the tenant.
  4. The Rubrik cluster works with the tenant VMs, via Envoy, for all application quiescence, file restore, point-in-time recovery, etc.
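Step 3 is the interesting bit from a multi-tenancy perspective. Purely as an illustration (this is not Rubrik’s implementation, and the object model below is made up), the tenant-scoping behaviour looks conceptually like this:

```python
from dataclasses import dataclass

@dataclass
class ProtectedObject:
    object_id: str
    tenant_id: str
    name: str

# Hypothetical inventory held by the service provider's cluster.
CLUSTER_INVENTORY = [
    ProtectedObject("vm-001", "tenant-a", "tenant-a-web01"),
    ProtectedObject("vm-002", "tenant-b", "tenant-b-db01"),
]

def list_objects(authenticated_tenant):
    """The proxy only ever returns objects owned by the requesting tenant."""
    return [o for o in CLUSTER_INVENTORY if o.tenant_id == authenticated_tenant]

def fetch_object(authenticated_tenant, object_id):
    """Requests for another tenant's objects are refused rather than routed."""
    for obj in CLUSTER_INVENTORY:
        if obj.object_id == object_id:
            if obj.tenant_id != authenticated_tenant:
                raise PermissionError("object does not belong to this tenant")
            return obj
    raise KeyError(object_id)

print([o.name for o in list_objects("tenant-a")])  # only tenant-a's VMs are visible
```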

 

Network Throttling

Network throttling is something that a lot of customers were interested in. There’s not an awful lot to say about it, but the options are No, Default and Scheduled. You can use it to configure the amount of bandwidth used by archival and replication traffic, for example.

 

Core Feature Improvements

There are a few other nice things that have been added to the platform as well.

  • Rubrik Edge is now available on Hyper-V
  • VLAN tagging was supported in 4.1 via the CLI; GUI configuration is now available
  • SNMPv2c support (I loves me some SNMP)
  • GUI support for multi-file recovery

 

General Enhancements

A few other enhancements have been added, including:

  • SQL Server FILESTREAM fully supported now (I’m not shouting, it’s just how they like to write it);
  • SQL Server Log Shipping; and
  • Per-Archive Proxy Support.

Rubrik were also pretty happy to announce NAS Vendor Native API Integration with NetApp and Isilon.

  • Network Attached Storage (NAS) vendor-native API integration.
    • NetApp ONTAP (ONTAP API v8.2 and later) supporting cluster-mode for NetApp filers.
    • Dell EMC Isilon OneFS (v8.x and later) + ChangeList (v7.1.1 and later)
  • NAS vendor-native API integration further enhances Rubrik’s existing capability to take volume-based snapshots.
  • This feature also improves overall fileset backup performance.

NAS SMB Scan Enhancements have also been included, providing a 10x performance improvement (according to Rubrik).
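For a sense of why vendor-native change tracking matters, here’s a conceptual sketch. It doesn’t use the actual ONTAP or Isilon APIs (the change-list format below is made up); it just contrasts walking an entire share with consuming a filer-supplied list of changed paths:

```python
import os

def full_scan(root):
    """Walk the entire share looking for changes - slow on a big NAS."""
    results = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            results.append((path, os.path.getmtime(path)))
    return results

def incremental_from_changelist(changelist):
    """With a vendor-supplied change list, only the reported paths get touched."""
    return [entry["path"] for entry in changelist
            if entry["change"] in ("created", "modified")]

# Made-up change-list entries, loosely shaped like a snapshot-diff result.
example_changelist = [
    {"path": "/share/projects/report.docx", "change": "modified"},
    {"path": "/share/projects/old.tmp", "change": "deleted"},
]
print(incremental_from_changelist(example_changelist))
```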

 

Thoughts

Point releases aren’t meant to be massive undertakings, but companies like Rubrik are moving at a fair pace and adding support for products to try and meet the requirements of their customers. There’s a fair bit going on in this one, and the support for AWS snapshots is kind of a big deal. I really like Rubrik’s focus on multi-tenancy, and they’re slowly opening doors to some enterprises still using the likes of AIX and Solaris. This has previously been the domain of the more traditional vendors, so it’s nice to see progress has been made. Not all of the world runs on containers or in vSphere VMs, so delivering this capability will only help Rubrik gain traction in some of the more conservative shops around town.

Rubrik are working hard to address some of the “enterprise-y” shortcomings or gaps that may have been present in earlier iterations of their product. It’s great to see this progress over such a short period of time, and I’m looking forward to hearing about what else they have up their sleeve.

Druva Announces CloudRanger Acquisition

Announcement

Druva recently announced that they’ve acquired CloudRanger. I had the opportunity to catch up with W. Curtis Preston about the news recently and thought I’d cover it briefly here.

 

What’s A CloudRanger?

Here’s the high-level view of the company:

  • Founded in 2016
  • Headquartered in Donegal, Ireland
  • 300+ Global Customers
  • 3x Growth in last 6 months
  • 100% Cloud native ‘as-a-Service’
  • Pay as you go pricing model
  • Biggest client creating 4,000 snapshots per day

 

Why CloudRanger?

Agentless Service

  • API Account IAM access ensures greater customer account security
  • Leverages AWS Quiescing capabilities
  • No account proxies (No additional costs, increased security)
  • No software needed to be updated

Broadest service coverage

  • Amazon EC2, EBS, RDS & RedShift
  • Automated Disaster Recovery (ADR)
  • Server scheduling for Amazon EC2 & RDS
  • SaaS based solution, compared to CPM server based approach
  • Easy to use platform for managing multiple AWS accounts
  • Featured SaaS product in AWS Marketplace available via SaaS contracts

Consumption Based Pricing Model

  • Pay as you go with full insight into data usage for cost predictability

 

A Good Fit

So where does CloudRanger fit in the broader Druva story? You’ll notice in the below picture that Apollo is missing. The main reason for the acquisition, as best I can tell, is that CloudRanger gives Druva the capability they were after with Apollo but in a much shorter timeframe.

[image courtesy of Druva]

 

Thoughts

A lot of customers want a lot of different things from their software vendors, particularly when it comes to data protection. A lot of companies have particular needs, and infrastructure protection is a complicated beast at the best of times. Sometimes it makes sense to try and develop these features for your customers. And sometimes it makes sense to go out and acquire those features. In this case, Druva has realised that CloudRanger gets them to a point in their product development far quicker than they would have gotten there under their own steam. The point of this acquisition isn’t that the good folks at Druva don’t have the chops to deliver what CloudRanger does; it’s that they can now move on to other platform enhancements instead. This does assume that the acquisition will go smoothly, but given that this doesn’t appear to be a hostile takeover, I’m assuming that part will go well.

Druva have done a lot of cool stuff recently, and I do like their approach to data protection (management?) that has differentiated itself from some of the more traditional approaches in the marketplace. CloudRanger gives them solid capability with AWS workloads, and I imagine Azure will be on the radar as well. I’m looking forward to seeing how this plays out, and what impact it has on some of their competitors in the space.

Cloudtenna Announces DirectSearch

 

I had the opportunity to speak to Aaron Ganek about Cloudtenna and their DirectSearch product recently and thought I’d share some thoughts here. Cloudtenna recently announced $4M in seed funding, have Citrix as a key strategic partner, and are shipping a beta product today. Their goal is “[b]ringing order to file chaos!”.

 

The Problem

Ganek told me that there are three major issues with file management and the plethora of collaboration tools used in the modern enterprise:

  • Search is too much effort
  • Security tends to fall through the cracks
  • Enterprise IT is dangerously non-compliant

Search

Most of these collaboration tools are geared up for search, because people don’t tend to remember where they put files, or what they’ve called them. So you might have some files in your corporate Box account, and some in Dropbox, and then some sitting in Confluence. The problem with trying to find something is that you need to search each application individually. According to Cloudtenna, this:

  • Wastes time;
  • Leads to frustration; and
  • Often yields poor results.

Security

Security also becomes a problem when you have multiple storage repositories for corporate files.

  • There are too many apps to manage
  • It’s difficult to track users across applications
  • There’s no consolidated audit trail

Exposure

As a result of this, enterprises find themselves facing exposure to litigation, primarily because they can’t answer these questions:

  • Who accessed what?
  • When and from where?
  • What changed?

As some of my friends like to say “people die from exposure”.

 

Cloudtenna – The DirectSearch Solution

Enter DirectSearch. At its core it’s a SaaS offering that:

  • Catalogues file activity across disparate data silos; and
  • Delivers machine learning services to mitigate the “chaos”.

Basically you point it at all of your data repositories and you can then search across all of those from one screen. The cool thing about the catalogue is not just that it tracks metadata and leverages full-text indexing; it also tracks user activity. It supports a variety of on-premises, cloud and SaaS applications (6 at the moment, 16 by September). You only need to log in once and there’s full ACL support – so users can only see what they’re meant to see.

According to Ganek, it also delivers some pretty fast search results, in the order of 400 – 600ms.

[image courtesy of Cloudtenna]
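Conceptually, the federated search piece looks something like the sketch below. This is not Cloudtenna’s API; the connector functions and ACL model are placeholders I’ve made up to illustrate the fan-out, merge and permission-trimming idea:

```python
from concurrent.futures import ThreadPoolExecutor

# Placeholder per-silo search functions; each would call that service's own API.
def search_box(query): return []
def search_dropbox(query): return []
def search_confluence(query): return []

CONNECTORS = [search_box, search_dropbox, search_confluence]

def direct_search(query, user, acl_lookup):
    """Fan the query out to every silo, merge the hits, keep what the user may see."""
    with ThreadPoolExecutor() as pool:
        result_sets = pool.map(lambda connector: connector(query), CONNECTORS)
    merged = [hit for results in result_sets for hit in results]
    permitted = [hit for hit in merged if user in acl_lookup(hit["id"])]
    return sorted(permitted, key=lambda hit: hit.get("score", 0), reverse=True)
```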

I was interested to know a little more about how the machine learning could identify files that were being worked on by people in the same workgroup. Ganek said they didn’t rely on Active Directory group membership, as these were often outdated. Instead, they tracked file activity to create a “Shadow IT organisational chart” that could be used to identify who was collaborating on what, and tailor the search results accordingly.
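That activity-based approach can be illustrated with a toy example too. Again, this is just a sketch of the general idea (the event format is invented), not how Cloudtenna actually builds its “shadow org chart”:

```python
from collections import Counter
from itertools import combinations

# Invented activity events: (user, file) pairs from the activity catalogue.
events = [
    ("alice", "budget.xlsx"), ("bob", "budget.xlsx"),
    ("alice", "roadmap.doc"), ("carol", "roadmap.doc"),
    ("bob", "budget.xlsx"), ("dave", "hr-policy.pdf"),
]

def collaboration_edges(events):
    """Count how often two users touch the same file - a rough collaboration graph."""
    users_per_file = {}
    for user, file_id in events:
        users_per_file.setdefault(file_id, set()).add(user)
    edges = Counter()
    for users in users_per_file.values():
        for pair in combinations(sorted(users), 2):
            edges[pair] += 1
    return edges

print(collaboration_edges(events).most_common())
# [(('alice', 'bob'), 1), (('alice', 'carol'), 1)]
```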

 

Thoughts and Further Reading

I’ve spent a good part of my career in the data centre providing storage solutions for enterprises to host their critical data on. I talk a lot about data and how important it is to the business. I’ve worked at some established companies where thousands of files are created every day and terabytes of data are moved around. Almost without fail, file management has been a pain in the rear. Whether I’ve been using Box to collaborate, or sending links to files with Dropbox, or been stuck using Microsoft Teams (great for collaboration but hopeless from a management perspective), invariably files get misplaced or I find myself firing up a search window to try and track down this file or that one. It’s a mess because we don’t just work from a single desktop and carefully curated filesystem any more. We’re creating files on mobile devices, emailing them about, and gathering data from systems that don’t necessarily play well on some platforms. It’s a mess, but we need access to the data to get our jobs done. That’s why something like Cloudtenna has my attention. I’m looking forward to seeing them progress with the beta of DirectSearch, and I have a feeling they’re on to something pretty cool with their product. You can also read Rich’s thoughts on Cloudtenna over at the Gestalt IT website.

What’s New With Zerto?

Zerto held their annual conference, ZertoCON, in Boston last week. I didn’t attend, but I did have time to catch up with Rob Strechay prior to Zerto making some announcements around the company and future direction. I thought I’d cover those here.

 

IT Resilience Platform

The first announcement revolved around the “IT Resilience Platform”. The idea behind the strategy is to converge backup, disaster recovery and cloud mobility solutions into a single, simple, scalable platform. Strechay says that “this strategy combines continuous availability, workload mobility, and multi-cloud agility to ensure you can withstand any disruption, leverage new technology seamlessly, and move forward with confidence”. They’ve found that Zerto is being used both for unplanned and planned disruptions, and they’ve also been seeing a lot more activity resolving ransomware and security incidents. On the planned side, DC consolidation has been a big part of that activity.

What’s driving this direction? According to Strechay, companies are looking for fewer point solutions. They’re also seeing backup and DR activities converging. Cloud is driving this technology convergence and is changing the way data protection is being delivered.

  • Cloud for backup
  • Cloud for DR
  • Application mobility

“It’s good if it’s done properly”. Zerto tell me they haven’t rushed into this and are not taking the approach lightly. They see IT Resilience as a combination of  Backup, DR Replication, and Hybrid Cloud. Strechay told me that Zerto are going to stay software only and will partner on the hardware side where required. So what does it look like conceptually?

[image courtesy of Zerto]

Think of this as a mode of transport. The analytics and control layer is like the navigation system, the orchestration and automation layer is the steering wheel, and continuous data protection is the car.

 

Vision for the Future of Backup

Strechay also shared with me Zerto’s vision for the future of backup. In short, “it needs to change”. They really want to move away from the concept of periodic protection to continuous, journal-based protection delivering seconds of RPO at scale to meet customer expectations. How are they going to do this? The key differentiation will be continuous data protection (CDP) combined with best-of-breed replication.
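To make the periodic versus continuous distinction concrete, here’s a toy sketch of journal-based recovery. It’s not Zerto’s implementation, just an illustration of why a write journal lets you dial recovery to an arbitrary second rather than the last scheduled backup:

```python
class WriteJournal:
    """Every write is recorded with a timestamp, so recovery can replay to any second."""
    def __init__(self):
        self.entries = []  # (timestamp, block_address, data), appended in time order

    def record(self, timestamp, block, data):
        self.entries.append((timestamp, block, data))

    def recover(self, point_in_time):
        """Rebuild the block map as it looked at the requested point in time."""
        image = {}
        for timestamp, block, data in self.entries:
            if timestamp > point_in_time:
                break
            image[block] = data
        return image

journal = WriteJournal()
journal.record(100, "blk-7", "v1")
journal.record(160, "blk-7", "v2")
print(journal.recover(120))  # {'blk-7': 'v1'} - any point in time, not just the last backup
```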

 

Zerto 7 Preview

Strechay also shared some high level details of Zerto 7, with key features including:

  • Intelligent index and search
  • Elastic journal
  • Data protection workflows
  • Architecture enhanced
  • LTR targets

There’ll be a new and enhanced user experience – they’re busy revisiting workflows and enhancing a number of them (e.g. reducing clicks, enhanced APIs, etc). They’ll also be looking at features such as prescriptive analytics (what if I added more VMs to this journal?). They’re aiming for a release in Q1 2019.

 

Thoughts

The way we protect data is changing. Companies like Zerto, Rubrik and Cohesity are bringing a new way of thinking to an age old problem. They’re coming at it from slightly different angles as well. This can only be a good thing for the industry. A lot of the technical limitations that we faced previously have been removed in terms of bandwidth and processing power. This provides the opportunity to approach the problem from the business perspective. Rather than saying “we can’t do that”, we have the opportunity to say “we can do that”. That doesn’t mean that scale is a simple thing to manage, but it seems like there are more ways to solve this problem than there have been previously.

I’ve been a fan of Zerto’s approach for some time. I like the idea that a company has shared their new vision for data protection some months out from actually delivering the product. It makes a nice change from companies merely regurgitating highlights from their product release notes (not that that isn’t useful at times). Zerto have a rich history of delivering CDP solutions for virtualised environments, and they’ve made some great inroads with cloud workload protection as well. The idea of moving away from periodic data protection to something continuous is certainly interesting, and obviously fits in well with Zerto’s strengths. It’s possibly not a strategy that will work well in every situation, particularly with smaller environments. But if you’re leveraging replication technologies already, it’s worth looking at how Zerto might be able to deliver a more complete solution for your data protection requirements.

Burlywood Tech Announces TrueFlash

Burlywood Tech came out of stealth late last year and recently announced their TrueFlash product. I had the opportunity to speak with Mike Tomky about what they’ve been up to since emerging from stealth and thought I’d cover the announcement here.

 

Burlywood TrueFlash

So what is TrueFlash? It’s a “modular controller architecture that accelerates time-to-market of new flash adoption”. The idea is that Burlywood can deliver a software-defined solution that will sit on top of commodity Flash. They say that one size doesn’t fit all, particularly with Flash, and this solution gives customers the opportunity to tailor the hardware to better meet their requirements.

It offers the following features:

  • Multiple interfaces (SATA, SAS, NVMe)
  • FTL Translation (Full SSD to None)
  • Capacity up to 100TB
  • Traffic optimisation
  • Multiple Protocols (Block (NVMe, NVMe/F), File, Object, Direct Memory)

[image courtesy of Burlywood Tech]
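The “FTL Translation” line item is worth a quick aside. A flash translation layer maps logical writes onto physical flash pages, and the sketch below is a toy illustration of that idea only; it says nothing about how Burlywood’s controller actually behaves:

```python
class ToyFTL:
    """Toy page-level flash translation layer: rewrites go to fresh physical pages
    and stale copies are marked invalid for later garbage collection."""
    def __init__(self, total_pages):
        self.free_pages = list(range(total_pages))
        self.l2p = {}          # logical page -> physical page
        self.invalid = set()   # physical pages awaiting garbage collection

    def write(self, logical_page):
        if logical_page in self.l2p:
            self.invalid.add(self.l2p[logical_page])   # old copy becomes stale
        physical = self.free_pages.pop(0)
        self.l2p[logical_page] = physical
        return physical

    def read(self, logical_page):
        return self.l2p[logical_page]

ftl = ToyFTL(total_pages=8)
ftl.write(0)                      # first write lands on physical page 0
ftl.write(0)                      # rewrite is redirected to physical page 1
print(ftl.read(0), ftl.invalid)   # 1 {0}
```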

 

Who’s Buying?

This isn’t really an enterprise play – those aren’t the types of companies that would buy Flash at the scale where this would make sense. This is really aimed at the hyperscalers, cloud providers, and AFA / HCI vendors. They sell the software, controller and SSD Reference Design to the hyperscalers, but treat the cloud providers and AFA vendors a little differently, generally delivering a completed SSD for them. All of their customers benefit from:

  • A dedicated support team (in-house drive team);
  • Manufacturing assembly & test;
  • Technical & strategic support in all phases; and
  • Collaborative roadmap planning.

The key selling point for Burlywood is that they claim to be able to reduce costs by 10 – 20% through better capacity utilisation, improved supply chain and faster product qualification times.

 

Thoughts

You know you’re doing things at a pretty big scale if you’re thinking it’s a good idea to be building your own SSDs to match particular workloads in your environment. But there are reasons to do this, and from what I can see, it makes sense for a few companies. It’s obviously not for everyone, and I don’t think you’ll be seeing this in the enterprise anytime soon. Which is the funny thing, when you think about it. I remember when Google first started becoming a serious search engine and they talked about some of their earliest efforts with DIY servers and battles with doing things at the scale they needed. Everyone else was talking about using appliances or pre-built solutions “optimised” by the vendors to provide the best value for money or best performance or whatever. As the likes of Dropbox, Facebook and LinkedIn have shown, there is value in going the DIY route, assuming the right amount of scale is there.

I’ve said it before, very few companies really qualify for the “hyper” in hyperscalers. So a company like Burlywood Tech isn’t necessarily going to benefit them directly. That said, these kinds of companies, if they’re successful in helping the hyperscalers drive the cost of Flash in a downwards direction, will indirectly help enterprises by forcing the major Flash vendors to look at how they can do things more economically. And sometimes it’s just nice to peek behind the curtain to see how this stuff comes about. I’m oftentimes more interested in how networks put together their streaming media services than a lot of the content they actually deliver on those platforms. I think Burlywood Tech falls in that category as well. I don’t care for some of the services that the hyperscalers deliver, but I’m interested in how they do it nonetheless.

Storbyte Come Out Of Stealth Swinging

I had the opportunity to speak to Storbyte‘s Chief Evangelist and Design Architect Diamond Lauffin recently and thought I’d share some information on their recent announcement.

 

Architecture

ECO-FLASH

Storbyte have announced ECO-FLASH, positioning it as “a new architecture and flash management system for non-volatile memory”. Its integrated circuit, ASIC-based architecture abstracts independent SSD memory modules within the flash drive and presents the unified architecture as a single flash storage device.

 

Hydra

Each ECO-FLASH drive is made up of 16 mSATA modules running in RAID 0. Each Hydra manages 4 modules, with 4 “sub-master” Hydras in turn managed by a master Hydra. This makes up one drive that supports RAID 0, 5, 6 and N, so if you were only running a single-drive solution (think out at the edge), you can configure the modules to run in RAID 5 or 6.

 

[image courtesy of Storbyte]
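For anyone unfamiliar with how parity lets you survive a module failure, here’s a toy illustration of the RAID 5 style XOR maths. It’s a conceptual sketch only, not a representation of how Hydra actually manages the mSATA modules:

```python
from functools import reduce

def xor_parity(chunks):
    """RAID 5 style parity: byte-wise XOR of all data chunks in a stripe."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*chunks))

def rebuild_missing(surviving_chunks, parity):
    """Any single lost chunk is the XOR of the parity and the survivors."""
    return xor_parity(surviving_chunks + [parity])

stripe = [b"\x01\x02", b"\x0f\x0f", b"\xaa\x55"]   # data spread across three modules
parity = xor_parity(stripe)
# lose the second module, then rebuild its contents from the rest plus parity
print(rebuild_missing([stripe[0], stripe[2]], parity) == stripe[1])  # True
```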

 

Show Me The Product

[image courtesy of Storbyte]

 

The ECO-FLASH drives come in 4, 8, 16 and 32TB configurations, and these fit into a variety of arrays. Storbyte is offering three ECO-FLASH array models:

  • 131TB raw capacity in 1U (using 4 drives);
  • 262TB raw capacity in 2U (using 16 drives); and
  • 786TB raw capacity in 4U (using 48 drives).

Storbyte’s ECO-FLASH supports a blend of Ethernet, iSCSI, NAS and InfiniBand primary connectivity simultaneously. You can also add Storbyte’s 4U 1.18PB spinning disk JBOD expansion units to deliver a hybrid solution.

 

Thoughts

The idea behind Storbyte came about because some people were working in forensic security environments that had a very heavy write workload, and they needed to find a better way to add resilience to the high performance storage solutions they were using. Storbyte are offering a 10 year warranty on their product, so they’re clearly convinced that they’ve worked through a lot of the problems previously associated with the SSD Write Cliff (read more about that here, here, and here). They tell me that Hydra is the primary reason that they’re able to mitigate a number of the effects of the write cliff and can provide performance for a longer period of time.

Storbyte’s is not a standard approach by any stretch. They’re talking some big numbers out of the gate and have a pretty reasonable story to tell around capacity, performance, and resilience as well. I’ve scheduled another session with Storbyte to talk some more about how it all works and I’ll be watching these folks with some interest as they enter the market and start to get some units running workload on the floor. There’s certainly interesting heritage there, and the write cliff has been an annoying problem to solve. Couple that with some aggressive economics and support for a number of connectivity options and I can see this solution going into a lot of DCs and being used for some cool stuff. If you’d like to read another perspective, check out what Rich over at Gestalt IT wrote about them and you can read the full press release here.

What’s The Buzz About StorageOS?

I wrote about StorageOS almost twelve months ago, and recently had the opportunity to catch up with Chris Brandon about what StorageOS have been up to. They’ve been up to a fair bit as it happens, so I thought I’d share some of the details here.

 

The Announcement

What’s StorageOS? According to Brandon it’s “[a] software-defined, scale-out/up storage platform for running enterprise containerized applications in production”. The “buzz” is that StorageOS is now generally available for purchase and they’ve secured some more funding.

 

Cloud Native Storage, Eh?

StorageOS have come up with some thinking around the key tenets of cloud native storage. To wit, it needs to be:

  • Application Centric;
  • Application Platform Agnostic;
  • Declarative and Composable;
  • API Driven and Self-Managed;
  • Agile;
  • Natively Secure;
  • Performant; and
  • Consistently Available.

 

What Can StorageOS Do For Me?

According to Brandon, StorageOS offers a number of benefits:

  • It’s Enterprise Class – so you can keep your data safe and available;
  • Policy Management allows you to enforce policies and rules while still enabling storage self-service by developers and DevOps teams;
  • Deploy It Anywhere – cloud, VM or server – you decide;
  • Data Services – Replication for HA, data reduction, storage pooling and agility to scale up or scale out based on application requirements;
  • Performance – Optimised to give you the best performance from your platform;
  • Cost-Effective Pricing – Only pay for the storage you use. Lower OpEx and CapEx;
  • Integrated Storage – Integrated into your favorite platforms with extensible plugins and APIs; and
  • Made Easy – Automated configuration and simple management.

 

Architecture

There is a container installed on each node and this runs both the data plane and control plane.

Data Plane

  • Manages data access requests
  • Pools aggregated storage for presentation
  • Runs as a container

Control Plane

  • Manages config, health, scheduling, policy, provisioning and recovery
  • API is accessed by plugins, CLI, GUI
  • Runs as a container

Containers are also used to create a highly available storage pool.

[Image courtesy of StorageOS]
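As a rough illustration of what declarative, API-driven consumption looks like from the application side, here’s a generic Kubernetes persistent volume claim created via the Python client. The storage class name “fast” is a placeholder I’ve made up; this isn’t StorageOS’s API or its actual class naming:

```python
from kubernetes import client, config

config.load_kube_config()  # assumes a working kubeconfig for the target cluster
core = client.CoreV1Api()

# Declarative claim: the application asks for 5Gi from a storage class
# ("fast" is a placeholder name) and the platform's provisioner satisfies it.
claim = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="demo-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="fast",
        resources=client.V1ResourceRequirements(requests={"storage": "5Gi"}),
    ),
)
core.create_namespaced_persistent_volume_claim(namespace="default", body=claim)
```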

 

Thoughts And Further Reading

StorageOS secured some funding recently and have moved their headquarters from London to New York. They’re launching at KubeCon, Red Hat Summit and dockercon. They have a number of retail and media customers and are working closely with strategic partners. They’ll initially be shipping the Enterprise version, and there is a Professional version on the way. They are also committed to always having a free version available for developers to try it out (this is capacity limited to 100GB right now).

We’ve come some way from the one application per host approach of the early 2000s. The problem, however, is that “legacy” storage hasn’t been a good fit for containers. And containers have had some problems with storage in general. StorageOS are working hard to fix some of those issues and are looking to deliver a product that neatly sidesteps some of the issues inherent in container storage while delivering some features that have been previously unavailable in container deployments.

The team behind the company have some great heritage with cloud-native applications, and I like that they’re working hard to make sure this really is a cloud-native solution, not just a LUN being pointed at an operating environment. Ease of consumption is a popular reason for shifting to the cloud, and StorageOS are ensuring that people can leverage their product with a simple to understand subscription model. They’re not your stereotypical cloud neckbeards though (that’s my prejudice, not yours). The financial services background comes through in the product architecture, with a focus on availability and performance being key to the platform. I also like the policy-based approach to data placement and the heavy focus on orchestration and automation. You can read more about some of the product features here.

Things have really progressed since I first spoke to StorageOS last year, and I’m looking forward to seeing what they come up with in the next 12 months.

Dell Technologies World 2018 – Dell EMC (H)CI Updates

Disclaimer: I recently attended Dell Technologies World 2018.  My flights, accommodation and conference pass were paid for by Dell Technologies via the Press, Analysts and Influencers program. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Announcement

Dell EMC today announced enhancements to their (hyper)converged infrastructure offerings: VxRail and VxRack SDDC.

VxRail

  • VMware Validated Designs for SDDC to plan, operate, & deploy on-prem cloud
  • Future-proof performance w/NVMe, 2x more memory (up to 3TB per node), 2x graphics acceleration, and 25Gbps networking support
  • New STIG Compliance Guide and automated scripts accelerate deployment of secure infrastructure

VxRack SDDC

  • Exclusive automation & serviceability extensions with VMware Cloud Foundation (VCF)
  • Now leverages more powerful 14th Gen PowerEdge 
  • End-to-end cloud infrastructure security

 

Gil Shneorson on HCI

During the week I also had the chance to speak with Gil Shneorson and I thought it would be worthwhile sharing some of his insights here.

What do you think about HCI in the context of an organisation’s journey to cloud? Is it a stop-gap? “HCI is simply a new way to consume infrastructure (compute and SDS) – and you get some stuff that wasn’t available before. Your environments are evergreen – you take less risk, you don’t have to plan ahead, don’t tend to buy too much or too little”.

Am I going to go traditional or HCI? “Most are going HCI. Where is the role of traditional storage? It’s become more specialised – bare metal, extreme performance, certain DR scenarios. HCI comes partially with everything – lots of storage, lots of CPU. Customers are using it in manufacturing, finance, health care, retail – all in production. There’s no more delineation. Economics are there. Picked up over 3000 customers in 9 quarters”.

Shneorson went on to say that HCI provides “[g]ood building blocks for cloud-like environments – IaaS. It’s the software on top, not the HCI itself. The world is dividing into specific stacks – VMware, Microsoft, Nutanix. Dell EMC are about VMware’s multi-cloud approach. If you do need on-premises, HCI is a good option, and won’t be going away. The Edge is growing like crazy too. Analytics, decision making. Not just point of sale for stores. You need a lot more just in time scale for storage, compute, network”.

How about networking? “More is being done. Moving away from storage networks has been a challenge. Not just technically, but organisationally. Finding people who know a bit about everything isn’t easy. Sometimes they stick with the old because of the people. You need a lot of planning to put your IO on the customers’ network. Then you need to automate. We’re still trying to make HCI as robust as traditional architectures”.

And data protection? “Data protection still taking bit of a backseat”.

Were existing VCE customers upset about the move away from Cisco? “Generally, if they were moving away from converged solutions, it was more because they’d gained more confidence in HCI, rather than the changing tech or relationships associated with Dell EMC’s CI offering”.

 

Thoughts

This week’s announcements around VxRail and VxRack SDDC weren’t earth-shattering by any stretch, but the thing that sticks in my mind is that Dell EMC continue to iteratively improve the platform and are certainly focused on driving VxRail to be number one in the space. There’s a heck of a lot of competition out there from their good friends at Nutanix, so I’m curious to see how this plays out. When it comes down to it, it doesn’t matter what platform you use to deliver outcomes, the key is that you deliver those outcomes. In the market, it seems the focus is moving more towards how the applications can deliver value, rather than what infrastructure is hosting those applications. This is a great move, but just like serverless needs servers, you still need to think about where your value-adding applications are being hosted. Ideally, you want the data close to the processing, and, depending on the applications, your users need to be close to that data and processing too. Hyper-converged infrastructure can be a really nice solution to leverage when you want to move beyond the traditional storage / compute / network paradigm. You can start small and scale (to a point) as required. Dell EMC’s VxRail and VxRack story is getting better as time goes on.

Dell Technologies World 2018 – Dell EMC Announces XtremIO Enhancement and PowerEdge Updates

Disclaimer: I recently attended Dell Technologies World 2018.  My flights, accommodation and conference pass were paid for by Dell Technologies via the Press, Analysts and Influencers program. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Dell EMC today made some announcements around the XtremIO X2 platform and their PowerEdge server line. I thought it would be worthwhile covering the highlights here.

 

What’s New with XtremIO X2?

The XIOS 6.1 operating system delivers one very important enhancement: native replication. (It does a lot of other stuff, but this is the big one really.)

“Metadata-Aware” or Native Replication has the following features:

  • Only sends unique data to minimize WAN bandwidth requirements;
  • Minimal to no performance impact with XtremIO architecture; and
  • Simple Protection Wizard is built-in to XtremIO HTML5 UI.

This is available from May 3, 2018.
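Conceptually, “only sends unique data” works something like the sketch below. This isn’t the XIOS implementation, just an illustration of fingerprint-based replication where blocks the target already holds travel as metadata only:

```python
import hashlib

def fingerprint(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

def replicate(source_blocks, target_fingerprints):
    """Ship only blocks whose fingerprints the target has never seen."""
    wire = []
    for block in source_blocks:
        fp = fingerprint(block)
        if fp in target_fingerprints:
            wire.append((fp, None))       # metadata only - no payload crosses the WAN
        else:
            wire.append((fp, block))      # new data, send it and remember it
            target_fingerprints.add(fp)
    return wire

target_state = {fingerprint(b"existing block")}
wire = replicate([b"existing block", b"new block"], target_state)
print(sum(len(b) for _, b in wire if b is not None))  # only the new block's bytes
```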

 

PowerEdge News

Dell EMC also announced the availability of the new PowerEdge R840 and R940xa, both available from Q2 2018. I feel bad posting server news without some kind of box shot. Hopefully I can find one and update this post in the future.

PowerEdge R840

Dell EMC tell me the PowerEdge R840 is “[d]esigned to turbocharge data analytics”.

It offers great density, with

  • A lot of performance in a dense 2U form factor; and
  • Speedy response times with up to 24 direct-attached NVMe drives.

It also offers “Integrated Security”, which Dell EMC tell me is based on a “[c]yber resilient architecture, [where] security is integrated into full server lifecycle – from design to retirement”.

You can also scale performance and capacity, with

  • Up to 2 GPUs or up to 2 FPGAs; and
  • Up to 26 SSDs/HDDs.

There’s also “Intelligent Automation” with

  • OpenManage RESTful API & IDRAC9 for DevOps integration

 

PowerEdge R940xa

Dell EMC are positioning the R940xa for use with “[e]xtreme GPU Database Acceleration”.

There’s a 1:1 CPU to GPU ratio, so you can:

  • Deliver faster response times with 4-socket performance; and
  • Drive insights with up to 4 GPUs or up to 8 FPGAs.

Integrated Security is present in this appliance as well (see above).

Scale on-premises capacity by mixing and matching capacity and performance options with up to 32 drives.

Intelligent Automation is present in this appliance as well (see above).

 

Thoughts

People have been looking for native replication in the XtremIO product since it started shipping. It was hoped that the X2 would deliver on that, but instead RecoverPoint seemed to be a capable, if sometimes disappointing, solution. “Native” replication is what people really want to be able to leverage though, as these kinds of protection activities can get overly complicated when multiple solutions are bolted together. I had the great displeasure of deploying an XtremIO backed by VPLEX once. I’m not saying it didn’t work; indeed it worked rather well. But the additional configuration and operating overhead seemed excessive. To be fair, they also wanted the VPLEX so they could tier data to their VNX if required, but I always felt that was just a table stakes exercise. In any case, in my opinion the best option for data replication resides with the application. But sometimes you’re just not in a position to use that. In that instance, infrastructure (or storage)-level replication is the next best thing. It needs to be simple though, so it’s nice to see Dell EMC delivering on that.

I don’t cover servers as much as I probably should. These two new models from Dell EMC are certainly pitched at particular workloads. There was obviously a lot more announced last year in terms of new compute, but that was generational. A lot of people are doing some pretty cool stuff with GPUs, and they’ve frequently had to come up with their own solution to get the job done, so it’s nice to see some focus from Dell EMC on that.

You can read a blog post on XtremIO here, and the PowerEdge press release is here. There’s also a white paper on XtremIO replication you can read here.

Dell Technologies World 2018 – Dell EMC Announces PowerMax

Disclaimer: I recently attended Dell Technologies World 2018.  My flights, accommodation and conference pass were paid for by Dell Technologies via the Press, Analysts and Influencers program. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Announcement

[image courtesy of Dell EMC]

 

Dell EMC today announced PowerMax. Described as the next generation of VMAX, it’s been designed from the ground up to support NVMe. It’s being pitched as suitable for both traditional applications, such as:

  • Virtual Machines;
  • EMR;
  • Relational Databases; and
  • ERP.

And “next generation applications”, such as:

  • Real time analytics;
  • Genomics;
  • AI;
  • IoT; and
  • Mobile Applications.

From a performance perspective, Dell EMC tell me this thing can do 10M IOPS. It’s also been benchmarked delivering 25% better response time using NVMe Flash (compared to a VMAX AF using SAS Flash) and 50% better response time using NVMe SCM (Storage Class Memory). They also say it can deliver 150GB/s out of a single system.

 

Overview

  • Multi-controller
  • End to End NVMe
    • NVMe over Fabric Ready (soon)
    • NVMe based drives (dual ported) – Flash and SCM (*soon)
    • NVMe-based Disk Array Enclosure
  • Industry standard technology

[Image courtesy of Dell EMC]

 

Scalability and Density

Starts small, and scales up and out.

  • Capacity starts at 13TB (effective)
  • As small as 10U
  • Scales from 1 Brick
  • Scales to 8 Bricks
  • 4PB (effective) per system

[Image courtesy of Dell EMC]

 

Efficiency

From a storage efficiency perspective, a number of features you’d hope for are there:

  • Inline dedupe and compression – 5:1 data reduction across the PowerMax
  • No performance impact
  • Works with all data services enabled
  • Can be turned on or off by application

 

Configurations

There are two different models: the PowerMax 2000 and PowerMax 8000.

PowerMax 2000

  • 1.7M IOPS (RRH-8K)
  • 1PB effective Capacity
  • 1 to 2 PowerBricks

PowerMax 8000

  • 10M IOPS (RRH-8K)
  • 4PB effective Capacity
  • 1 to 8 PowerBricks

 

Software

PowerMax Software comes in two editions:

Essentials

  • SnapVX
  • Compression
  • Non-disruptive Migration
  • QoS
  • Deduplication
  • iCDM Basic

Pro

The Pro edition gives you all of the above and

  • SRDF
  • D@RE
  • eNAS
  • iCDM Advanced
  • PowerPath
  • SRM

The PowerMax is available from May 7, 2018.

 

Thoughts

Dell EMC tell me the VMAX 250 and 950 series aren’t going away any time soon, but there will be tools made available to migrate from those platforms if you decide to put some PowerMax on the floor. PowerMax is an interesting platform with a lot of potential, hype around the quoted performance numbers notwithstanding. It seems like it takes a lot of floor tiles compared to some other NVMe-based alternatives, although this may be down to the scale of the platform. It stands to reason that the kind of folks interested in this offering are the same ones that were interested in VMAX All Flash. I’d be curious to see what the compatibility matrices look like for the existing VMAX tools when compared to the PowerMax, although I do imagine that they’d be a bit more careful about this than they have been with the midrange products.

You can read the press release from Dell EMC here, and there’s a blog post on PowerMax here.