SwiftStack Announce 1space

SwiftStack recently announced 1space, and I was lucky enough to snaffle some time with Joe Arnold to talk more about what it all means. I thought it would be useful to do a brief post, as I really do like SwiftStack, and I feel like I don’t spend enough time talking about them.

 

The Announcement

So what exactly is 1space? It’s basically SwiftStack delivering access to their storage across both on-premises and public cloud. But what does that mean? Well, you get some cool features as a result, including:

  • Integrated multi-cloud access
  • Scale-out & high-throughput data movement
  • Highly reliable & available policy execution
  • Policies for lifecycle, data protection & migration
  • Optional, scale-out containers with AWS S3 support
  • Native access in public cloud (direct to S3, GCS, etc.)
  • Data created in public cloud accessible on-premises
  • Native format enabling cloud-native services

[image courtesy of SwiftStack]

According to Arnold, one of the really cool things about this is that it “provides universal access over both file protocols and object APIs to a single storage namespace, and it is increasingly used for distributed workflows across multiple geographic regions and multiple clouds”.
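To make the object side of that concrete, here's a minimal sketch of what S3-style access to a shared namespace looks like from Python. The endpoint URL, bucket name and credentials are placeholders I've made up for illustration, not SwiftStack-provided values; dropping the custom endpoint and hitting AWS directly is what "native access in public cloud" implies.

```python
# Minimal sketch: listing objects in a shared namespace via the S3 API.
# Endpoint URL, bucket name and credentials are placeholders, not
# SwiftStack-provided values.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://swift.example.internal",  # hypothetical on-premises endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# The same bucket could be reached natively in AWS by dropping endpoint_url,
# which is the point of a single namespace spanning on-premises and cloud.
response = s3.list_objects_v2(Bucket="shared-namespace", Prefix="renders/")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])
```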

 

Metadata Search

But wait …

One of the really nice things that SwiftStack has done is add integrated metadata search via a desktop client for Windows, macOS, and Linux. It’s called MetaSync.

 

Thoughts

This has been a somewhat brief post, but something I did want to focus on was the fact that this product has been open-sourced. SwiftStack have been pretty keen on open source as a concept, and I think that comes through when you have a look at some of their contributions to the community. These contributions shouldn’t be underestimated, and I think it’s important that we call out when vendors are contributing to the open source community. Let’s face it, a whole lot of startups are taking advantage of code generated by the open source community, and a number of them have the good sense to know that it’s most certainly a two-way street, and they can’t relentlessly pillage the community without it eventually falling apart.

But this announcement isn’t just me celebrating the contributions of neckbeards from within the vendor community and elsewhere. SwiftStack have delivered something that is really quite cool. In much the same way that storage types won’t shut up about NVMe over Fabrics, cloud folks are really quite enthusiastic about the concept of multi-cloud connectivity. There are a bunch of different use cases where it makes sense to leverage a universal namespace for your applications. If you’d like to see SwiftStack in action, check out this YouTube channel (there’s a good video about 1space here) and if you’d like to take SwiftStack for a spin, you can do that here.

Brisbane VMUG – BNEVMUG at The Movies – August 2018

The August 2018 edition of the Brisbane VMUG meeting is a purely social event, with a members-only screening of Mission: Impossible – Fallout. It will be held on Friday 3rd August at the Elizabeth Picture Theatre in Elizabeth Street. This special social event is sponsored by VMware. There’ll be pizza and refreshments from 5:00 pm with the screening starting at 6:00 pm. Parking at the Wintergarden will be validated by the cinema. Be sure to reserve your seat now as seats are limited and available on a first come, first served basis. You can find out more information and register for the event here. I hope to see you there. Also, if you’re interested in sponsoring one of these events, please get in touch with me and I can help make it happen.

Pavilion Data Systems Overview

I recently had the opportunity to hear about Pavilion Data Systems from VR Satish, CTO, and Jeff Sosa, VP of Products. I thought I’d put together a brief overview of their offering, as NVMe-based systems are currently the new hotness in the storage world.

 

It’s a Box!

And a pretty cool looking one at that. Here’s what it looks like from the front.

[image courtesy of Pavilion Data]

The storage platform is built from standard components, including x86 processors and U.2 NVMe SSDs. A big selling point, in Pavilion’s opinion, is that there are no custom ASICs and no FPGAs in the box. There are three different models available (the datasheet is here), with different connectivity and capacity options.

From a capacity perspective, you can start at 14TB and get all the way to 1PB in 4RU. The box can start at 18 NVMe drives and (growing by increments of 18) goes to 72 drives. It runs RAID 6 and presents the drives as virtual volumes to the hosts. Here’s a look at the box from a top-down perspective.

[image courtesy of Pavilion Data]

There’s a list of supported NVMe SSDs that you can use with the box, if you want to source those elsewhere. On the right hand side (the back of the box) are the IO controllers. You can start at 4 and go up to 20 in a box. There are also 2 management modules and 4 power supplies for resiliency.

[image courtesy of Pavilion Data]

You can see in the above diagram that connectivity is also a big part of the story, with each pair of controllers offering 4x 100GbE ports.

 

Software? 

Sure. It’s a box but it needs something to run it. Each controller runs a customised flavour of Linux and delivers a number of the features you’d expect from a storage array, including:

  • Active-active controller support
  • Space-efficient snapshots and clones
  • Thin provisioning

There are also plans afoot for encryption support in the near future. Pavilion have also focused on making operations simple, providing support for RESTful API orchestration, OpenStack Cinder, Kubernetes, DMTF Redfish and SNIA Swordfish. They’ve also gone to some lengths to ensure that standard NVMe-oF drivers will work for host connectivity.
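As an aside on what Redfish support buys you in practice: any Redfish-compliant endpoint exposes a well-known service root at /redfish/v1/, so basic inventory can be scripted with plain HTTPS calls. The sketch below is generic Redfish, not Pavilion-specific code, and the management address and credentials are placeholders.

```python
# Minimal Redfish walk: fetch the service root and list the systems it exposes.
# The array address and credentials are placeholders; /redfish/v1/ is the
# standard service root for any Redfish-compliant endpoint.
import requests

BASE = "https://array.example.internal"  # hypothetical management address
AUTH = ("admin", "password")

root = requests.get(f"{BASE}/redfish/v1/", auth=AUTH, verify=False).json()
systems_uri = root["Systems"]["@odata.id"]

systems = requests.get(f"{BASE}{systems_uri}", auth=AUTH, verify=False).json()
for member in systems["Members"]:
    system = requests.get(f"{BASE}{member['@odata.id']}", auth=AUTH, verify=False).json()
    print(system.get("Name"), system.get("PowerState"))
```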

 

Thoughts and Further Reading

Pavilion Data has been around since 2014 and the leadership group has some great heritage in the storage and networking industry. They tell me they wanted to move away from the traditional approach to storage arrays (the dual controller, server-based platform) to something that delivered great performance at scale. The design has more in common with high performance networking devices than with traditional storage arrays, and this is deliberate. They tell me they really wanted to deliver a solution that wasn’t the bottleneck when it came to realising the performance capabilities of the NVMe architecture. The numbers being punted around are certainly impressive. And I’m a big fan of the approach, in terms of both throughput and footprint.

The webscale folks running apps like MySQL and Cassandra and MongoDB (and other products with similarly awful names) are doing a few things differently to the enterprise bods. Firstly, they’re more likely to wear jeans and sneakers to the office (something that drives me nuts) and they’re leveraging DAS heavily because it gives them high performance storage options for latency-sensitive situations. The advent of NVMe and NVMe over Fabrics takes away the requirement for DAS (although I’m not sure they’ll start to wear proper office attire any time soon) by delivering storage at the scale and performance they need. As a result of this, you can buy 1RU servers with compute instead of 2RU servers full of fast disk. There’s an added benefit as organisations tend to assign longer lifecycles to their storage systems, so systems like the one from Pavilion are going to have a place in the DC for five years, not 2.5 – 3 years. Suddenly lifecycling your hosts becomes simpler as well. This is good news for the jeans and t-shirt set and the beancounters alike.

NVMe (and NVMe over Fabrics) has been a hot topic for a little while now, and you’re only going to hear more about it. Those bright minds at Gartner are calling it “Shared Accelerated Storage” and you know if they’re talking about it then the enterprise folks will cotton on in a few years and suddenly it will be everywhere. In the meantime, check out Chris M. Evans’ article on NVMe over Fabrics and Chris Mellor also did an interesting piece at El Reg. The market is becoming more crowded each month and I’m interested to see how Pavilion fare.

Rubrik Cloud Data Management 4.2 Announced – “Purpose Built for the Hybrid Cloud”

Rubrik recently announced version 4.2 of their Cloud Data Management platform and I was fortunate enough to sit in on a sneak preview from Chris Wahl, Kenneth Hui, and Rebecca Fitzhugh. Billed as “Purpose Built for the Hybrid Cloud”, this release includes a whole bunch of new features. I’ve included a summary below, and will dig in to some of the more interesting ones.

Expanding the Ecosystem

  • AWS Native Protection (EC2 Instances)
  • VMware vCloud Director Integration
  • Windows Full Volume Protection
  • AIX & Solaris Support

Core Features & Services

  • Rubrik Envoy
  • Rubrik Edge on Hyper-V
  • Network Throttling
  • VLAN Tagging (GUI)
  • SNMP
  • Multi-File restore
  • Reader-Writer Archival Locations

General Enhancements

  • SQL Server FILESTREAM
  • SQL Server Log Shipping
  • NAS Native API Integration
  • NAS SMB Scan Enhancements
  • AHV VSS snapshot
  • Proxy per Archival Location

 

AWS Native Protection (EC2 Instances)

One of the key parts of this announcement is cloud-native protection, delivered specifically with AWS EBS Snapshots. The cool thing is you can have Rubrik running on-premises or sitting in the cloud.

Use cases?

  • Automate manual processes – use policy engine to automate lifecycle management of snapshots, including scheduling and retention
  • Rapid recovery from failure – eliminate manual steps for instance and file recovery
  • Replicate instances in other availability zones and regions – launch instances in other AZs and Regions when needed using snapshots
  • Consolidate data management – one solution to manage data across on-premises DCs and public clouds

EBS snapshots have traditionally been a manual process to deal with. Now there’s no need to mess with crontab or various AWS tools to get the snaps done, and it aligns with Rubrik’s vision of having a single tool to manage both cloud and on-premises workloads. The good news is that files in snapshots are indexed and searchable, so individual file recovery is also pretty simple.
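For a sense of what that policy engine replaces, the DIY approach is usually a cron-driven script along these lines: snapshot tagged volumes, then expire anything older than the retention window. This is a rough sketch using boto3; the tag, retention period and blanket delete are arbitrary examples of the manual lifecycle, not anything Rubrik prescribes.

```python
# Rough sketch of the manual EBS snapshot lifecycle a policy engine replaces:
# run from cron, snapshot tagged volumes, delete snapshots older than RETENTION_DAYS.
# Assumes every snapshot in this account is managed by this script.
import datetime
import boto3

RETENTION_DAYS = 7  # arbitrary example retention
ec2 = boto3.client("ec2")

# Snapshot every volume carrying an example "backup=true" tag.
volumes = ec2.describe_volumes(
    Filters=[{"Name": "tag:backup", "Values": ["true"]}]
)["Volumes"]
for vol in volumes:
    ec2.create_snapshot(
        VolumeId=vol["VolumeId"],
        Description=f"cron snapshot of {vol['VolumeId']}",
    )

# Expire snapshots this account owns that have aged out.
cutoff = datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(days=RETENTION_DAYS)
for snap in ec2.describe_snapshots(OwnerIds=["self"])["Snapshots"]:
    if snap["StartTime"] < cutoff:
        ec2.delete_snapshot(SnapshotId=snap["SnapshotId"])
```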

 

VMware vCloud Director Integration

It may or may not be a surprise to learn that VMware vCloud Director is still in heavy use with service providers, so news of Rubrik integration with vCD shouldn’t be too shocking. Rubrik spent a little time talking about some of the “Foundational Services” they offer, including:

  • Backup – Hosted or Managed
  • ROBO Protection
  • DR – Mirrored Site service
  • Archival – Hosted or Managed

The value they add, though, is in the additional services, or what they term “Next Generation premium services”. These include:

  • Dev / Test
  • Cloud Archival
  • DR in Cloud
  • Near-zero availability
  • Cloud migration
  • Cloud app protection

Self-service is the key

To be able to deliver a number of these services, particularly in the service provider space, there’s been a big focus on multi-tenancy.

  • Operate multi-customer configuration through a single cluster
  • Logically partition cluster into tenants as “Organisations”
  • Offer self-service management for each organisation
  • Central control, monitoring and reporting with aggregated data

Support for vCD (version 8.10 and later) is as follows:

  • Auto discovery of vCD hierarchy
  • SLA-based auto protect at different levels of the vCD hierarchy:
    • vCD Instance
    • vCD Organization
    • Org VDC
    • vApp
  • Recovery workflows:
    • Export and Instant recovery
    • Network settings
    • File restore
  • Self-service using multi-tenancy
  • Reports for vCD organization

 

Windows Full Volume Protection

Rubrik have always had fileset-based protection, and they’re now offering the ability to protect Windows hosts a volume at a time, e.g. the C:\ volume. These protection jobs incorporate additional information such as partition type, volume size, and permissions.

[image courtesy of Rubrik]

There’s also a Rubrik-provided package for creating bootable Microsoft Windows Preinstallation Environment (WinPE) media to restore the OS and provide disk partition information. There are multiple options for customers to recover entire volumes in addition to system state, including Master Boot Record (MBR), GUID Partition Table (GPT) information, and the OS.

Why would you? There are a few use cases, including:

  • P2V – remember those?
  • Physical RDM mapping compatibility – you might still have those about, because, well, reasons
  • Physical Exchange servers and log truncation
  • Cloud mobility (AWS to Azure or vice versa)

So now you can select volumes or filesets, and you can store the volumes in a Volume Group.

[image courtesy of Rubrik]

 

AIX and Solaris Support

Wahl was reluctant to refer to AIX and Solaris as “traditional” DC applications, because it all makes us feel that little bit older. In any case, AIX support was already available in the 4.1.1 release, and 4.2 adds Oracle Solaris support. There are a few restore scenarios that come to mind, particularly when it comes to things like migration. These include:

  • Restore (in place) – Restores the original AIX server at the original path or a different path.
  • Export (out of place) – Allows exporting to another AIX or Linux host that has the Rubrik Backup Service (RBS) running.
  • Download Only – Ability to download files to the machine from which the administrator is running the Rubrik web interface.
  • Migration – Any AIX application data can be restored or exported to a Linux host, or vice versa from Linux to an AIX host. In some cases, customers have leveraged this capability for OS migrations, removing the need for other tools.

 

Rubrik Envoy

Rubrik Envoy is a trusted ambassador (its certificate is issued by the Rubrik cluster) that represents the service provider’s Rubrik cluster in an isolated tenant network.

[image courtesy of Rubrik]

 

The idea is that service providers are able to offer backup-as-a-service (BaaS) to co-hosted tenants, enabling self-service SLA management with on-demand backup and recovery. The cool thing is you don’t have to deploy the Virtual Edition into the tenant network to get the connectivity you need. Here’s how it comes together:

  1. Once a tenant subscribes to BaaS from the SP, an Envoy virtual appliance is deployed on the tenant’s network.
  2. The tenant can log in to Envoy, which routes the Rubrik UI through to the service provider’s Rubrik cluster.
  3. Envoy will only allow access to objects that belong to the tenant.
  4. The Rubrik cluster works with the tenant VMs, via Envoy, for all application quiescence, file restore, point-in-time recovery, etc.

 

Network Throttling

Network throttling is something that a lot of customers were interested in. There’s not an awful lot to say about it, but the options are No, Default and Scheduled. You can use it to configure the amount of bandwidth used by archival and replication traffic, for example.

 

Core Feature Improvements

There are a few other nice things that have been added to the platform as well.

  • Rubrik Edge is now available on Hyper-V
  • VLAN tagging was supported in 4.1 via the CLI, GUI configuration is now available
  • SNMPv2c support (I loves me some SNMP)
  • GUI support for multi-file recovery

 

General Enhancements

A few other enhancements have been added, including:

  • SQL Server FILESTREAM fully supported now (I’m not shouting, it’s just how they like to write it);
  • SQL Server Log Shipping; and
  • Per-Archive Proxy Support.

Rubrik were also pretty happy to announce NAS Vendor Native API Integration with NetApp and Isilon.

  • Network Attached Storage (NAS) vendor-native API integration.
    • NetApp ONTAP (ONTAP API v8.2 and later) supporting cluster-mode for NetApp filers.
    • Dell EMC Isilon OneFS (v8.x and later) + ChangeList (v7.1.1 and later)
  • NAS vendor-native API integration further enhances Rubrik’s existing capability to take volume-based snapshots.
  • This feature also enhances overall fileset backup performance.

NAS SMB Scan Enhancements have also been included, providing a 10x performance improvement (according to Rubrik).

 

Thoughts

Point releases aren’t meant to be massive undertakings, but companies like Rubrik are moving at a fair pace and adding support for products to try and meet the requirements of their customers. There’s a fair bit going on in this one, and the support for AWS snapshots is kind of a big deal. I really like Rubrik’s focus on multi-tenancy, and they’re slowly opening up doors to some enterprises still using the likes of AIX and Solaris. This has previously been the domain of the more traditional vendors, so it’s nice to see progress has been made. Not all of the world runs on containers or in vSphere VMs, so delivering this capability will only help Rubrik gain traction in some of the more conservative shops around town.

Rubrik are working hard to address some of the “enterprise-y” shortcomings or gaps that may have been present in earlier iterations of their product. It’s great to see this progress over such a short period of time, and I’m looking forward to hearing about what else they have up their sleeve.

Druva Announces CloudRanger Acquisition

Announcement

Druva recently announced that they’ve acquired CloudRanger. I had the opportunity to catch up with W. Curtis Preston about the news recently and thought I’d cover it briefly here.

 

What’s A CloudRanger?

Here’s the high-level view of the company:

  • Founded in 2016
  • Headquartered in Donegal, Ireland
  • 300+ Global Customers
  • 3x Growth in last 6 months
  • 100% Cloud native ‘as-a-Service’
  • Pay as you go pricing model
  • Biggest client creating 4,000 snapshots per day

 

Why CloudRanger?

Agentless Service

  • API Account IAM access ensures greater customer account security
  • Leverages AWS Quiescing capabilities
  • No account proxies (No additional costs, increased security)
  • No software needed to be updated

Broadest service coverage

  • Amazon EC2, EBS, RDS & Redshift
  • Automated Disaster Recovery (ADR)
  • Server scheduling for Amazon EC2 & RDS
  • SaaS based solution, compared to CPM server based approach
  • Easy to use platform for managing multiple AWS accounts
  • Featured SaaS product in AWS Marketplace available via SaaS contracts

Consumption Based Pricing Model

  • Pay as you go with full insight into data usage for cost predictability

 

A Good Fit

So where does CloudRanger fit in the broader Druva story? You’ll notice in the below picture that Apollo is missing. The main reason for the acquisition, as best I can tell, is that CloudRanger gives Druva the capability they were after with Apollo but in a much shorter timeframe.

[image courtesy of Druva]

 

Thoughts

A lot of customers want a lot of different things from their software vendors, particularly when it comes to data protection. A lot of companies have particular needs, and infrastructure protection is a complicated beast at the best of times. Sometimes it makes sense to try and develop these features for your customers. And sometimes it makes sense to go out and acquire those features. In this case, Druva has realised that CloudRanger gets them to a point in their product development far quicker than they may have gotten to under their own steam. The point of this acquisition isn’t that the good folks at Druva don’t have the chops to deliver what CloudRanger does already, but now they can move on to other platform enhancements. This does assume that the acquisition will go smoothly, but given that this doesn’t appear to be a hostile takeover, I’m assuming that part will go well.

Druva have done a lot of cool stuff recently, and I do like their approach to data protection (management?) that has differentiated itself from some of the more traditional approaches in the marketplace. CloudRanger gives them solid capability with AWS workloads, and I imagine Azure will be on the radar as well. I’m looking forward to seeing how this plays out, and what impact it has on some of their competitors in the space.

OT – New Site Sponsor – Vembu

Please welcome Vembu Technologies as a sponsor of PenguinPunk.net. They are a data protection company that has been around for some time now with a comprehensive suite of products aimed at small to medium enterprises. You can read more about them here. I’m looking forward to taking their stuff for a spin in the lab in the next little while to see what they can really do.

The idea that I’m accepting sponsorship money for this blog doesn’t sit well with some folks. But I’ve been maintaining this site for over ten years now, and sponsorship is one way I can keep getting to the big tech conferences and events that are so critical (I think) to understanding what’s happening in the industry. It doesn’t mean you’ll now be bombarded with advertorials from the companies that sponsor me. Any paid-for content carries a disclaimer up front so we’re all clear about who’s paying for it and what it is. But running a blog as a hobby still costs money, and I’ve been reaching into my own pocket a lot for some of this stuff. And while I’m shilling for the site, my rates are reasonable and the delivery model is simple. Feel free to get in contact via email / Twitter / whatever if it’s something you might like to do.

Cloudtenna Announces DirectSearch

 

I had the opportunity to speak to Aaron Ganek about Cloudtenna and their DirectSearch product recently and thought I’d share some thoughts here. Cloudtenna recently announced $4M in seed funding, have Citrix as a key strategic partner, and are shipping a beta product today. Their goal is “[b]ringing order to file chaos!”.

 

The Problem

Ganek told me that there are three major issues with file management and the plethora of collaboration tools used in the modern enterprise:

  • Search is too much effort
  • Security tends to fall through the cracks
  • Enterprise IT is dangerously non-compliant

Search

Most of these collaboration tools are geared up for search, because people don’t tend to remember where they put files, or what they’ve called them. So you might have some files in your corporate Box account, and some in Dropbox, and then some sitting in Confluence. The problem with trying to find something is that you need to search each application individually. According to Cloudtenna, this:

  • Wastes time;
  • Leads to frustration; and
  • Often yields poor results.

Security

Security also becomes a problem when you have multiple storage repositories for corporate files.

  • There are too many apps to manage
  • It’s difficult to track users across applications
  • There’s no consolidated audit trail

Exposure

As a result of this, enterprises find themselves facing exposure to litigation, primarily because they can’t answer these questions:

  • Who accessed what?
  • When and from where?
  • What changed?

As some of my friends like to say, “people die from exposure”.

 

Cloudtenna – The DirectSearch Solution

Enter DirectSearch. At its core, it’s a SaaS offering that:

  • Catalogues file activity across disparate data silos; and
  • Delivers machine learning services to mitigate the “chaos”.

Basically you point it at all of your data repositories and you can then search across all of them from one screen. The cool thing about the catalogue is that it not only tracks metadata and leverages full-text indexing, it also tracks user activity. It supports a variety of on-premises, cloud and SaaS applications (6 at the moment, 16 by September). You only need to log in once and there’s full ACL support – so users can only see what they’re meant to see.

According to Ganek, it also delivers some pretty fast search results, in the order of 400 – 600ms.

[image courtesy of Cloudtenna]

I was interested to know a little more about how the machine learning could identify files that were being worked on by people in the same workgroup. Ganek said they didn’t rely on Active Directory group membership, as these were often outdated. Instead, they tracked file activity to create a “Shadow IT organisational chart” that could be used to identify who was collaborating on what, and tailor the search results accordingly.
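The idea is easy to picture: infer who actually works together from file activity rather than from directory group membership. The toy sketch below shows that kind of inference in its simplest form, by counting how often pairs of users touch the same files. It’s purely illustrative and has nothing to do with Cloudtenna’s actual implementation; the event data is made up.

```python
# Toy illustration of inferring collaborators from file activity instead of
# relying on directory group membership. Event data here is made up.
from collections import defaultdict
from itertools import combinations

# (user, file) activity events, e.g. pulled from per-application audit logs.
events = [
    ("alice", "q3-budget.xlsx"), ("bob", "q3-budget.xlsx"),
    ("alice", "roadmap.docx"), ("carol", "roadmap.docx"),
    ("bob", "q3-budget.xlsx"), ("carol", "q3-budget.xlsx"),
]

touched = defaultdict(set)
for user, filename in events:
    touched[filename].add(user)

# Count how often each pair of users worked on the same file.
affinity = defaultdict(int)
for users in touched.values():
    for pair in combinations(sorted(users), 2):
        affinity[pair] += 1

# Stronger pairs could then be used to boost search results for a user's
# likely collaborators.
for pair, weight in sorted(affinity.items(), key=lambda kv: -kv[1]):
    print(pair, weight)
```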

 

Thoughts and Further Reading

I’ve spent a good part of my career in the data centre providing storage solutions for enterprises to host their critical data on. I talk a lot about data and how important it is to the business. I’ve worked at some established companies where thousands of files are created every day and terabytes of data are moved around. Almost without fail, file management has been a pain in the rear. Whether I’ve been using Box to collaborate, or sending links to files with Dropbox, or been stuck using Microsoft Teams (great for collaboration but hopeless from a management perspective), invariably files get misplaced or I find myself firing up a search window to try and track down this file or that one. It’s a mess because we don’t just work from a single desktop and carefully curated filesystem any more. We’re creating files on mobile devices, emailing them about, and gathering data from systems that don’t necessarily play well on some platforms. It’s a mess, but we need access to the data to get our jobs done. That’s why something like Cloudtenna has my attention. I’m looking forward to seeing them progress with the beta of DirectSearch, and I have a feeling they’re on to something pretty cool with their product. You can also read Rich’s thoughts on Cloudtenna over at the Gestalt IT website.

Cohesity – Cloud Edition for Azure – A Few Notes

I deployed Cohesity Cloud Edition in Microsoft Azure recently and took a few notes. I’m the first to admit that I’m completely hopeless when it comes to fumbling my way about Azure, so this probably won’t seem as convoluted a process to you as it did to me. If you have access to the documentation section of the Cohesity support site, there’s a PDF you can download that explains everything. I won’t go into too much detail but there are a few things to consider. There’s also a handy solution brief on the Cohesity website that sheds a bit more light on the solution.

 

Process

The installation requires a Linux VM to be set up in Azure (a small one – DS1_V2 Standard). Just like in the physical world, you need to think about how many nodes you want to deploy in Azure (this will be determined largely by how much you’re trying to protect). As part of the setup you edit a Cohesity-provided JSON file with a whole bunch of cool stuff like Application IDs, Keys, and Tenant IDs.

  • Subscription ID – the subscription used to store the resources of the Cohesity Cluster. WARNING: the subscription account must have owner permissions for the specified subscription.
  • Application ID – the Application ID assigned by Azure during the service principal creation process.
  • Application Key – the Application key generated by Azure during the service principal creation process, used for authentication.
  • Tenant ID – the unique Tenant ID assigned by Azure.
The Linux VM then goes off and builds the cluster in the location you specify with the details you’ve specified. If you haven’t done so already, you’ll need to create a Service Principal as well. Microsoft has some useful documentation on that here.
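If you’re wondering what that JSON boils down to, it’s essentially the four Azure identifiers above plus some cluster parameters. The sketch below just assembles and writes such a file; the key names, region and node count are illustrative placeholders rather than the exact keys used in Cohesity’s template.

```python
# Illustrative only: gather the Azure identifiers the Cohesity setup asks for
# and write them to a JSON file. Key names are placeholders, not necessarily
# the exact keys used in the Cohesity-provided template.
import json

config = {
    "subscription_id": "00000000-0000-0000-0000-000000000000",  # owner permissions required
    "application_id": "11111111-1111-1111-1111-111111111111",   # from service principal creation
    "application_key": "<application-key>",                     # generated during SP creation
    "tenant_id": "22222222-2222-2222-2222-222222222222",        # Azure AD tenant
    "location": "australiaeast",                                 # example region
    "node_count": 3,                                             # example cluster size
}

with open("cohesity-cloud-edition.json", "w") as handle:
    json.dump(config, handle, indent=2)
```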

 

Limitations

One thing to keep in mind is that, at this stage, “Cohesity does not support the native backup of Microsoft Azure VMs. To back up a cloud VM (such as a Microsoft Azure VM), install the Cohesity agent on the cloud VM and create a Physical Server Protection Job that backs up the VM”. So you’ll see that, even if you add Azure as a source, you won’t be able to perform VM backups in the same way you would with vSphere workloads, as “Cloud Edition only supports registering a Microsoft Azure Cloud for converting and cloning VMware VMs. The registered Microsoft Azure Cloud is where the VMs are cloned to”. This is the same across most public cloud platforms, as Microsoft, Amazon and friends aren’t terribly interested in giving out that kind of access to the likes of Cohesity or Rubrik. Still, if you’ve got the right networking configuration in place, you can back up your Azure VMs either to the Cloud Edition or to an on-premises instance (if that works better for you).

 

Thoughts

I’m on the fence about “Cloud Editions” of data protection products, but I do understand why they’ve come to be a thing. Enterprises have insisted on a lift and shift approach to moving workloads to public cloud providers and have then panicked about being able to protect them, because the applications they’re running aren’t cloud-native and don’t necessarily work well across multiple geos. And that’s fine, but there’s obviously an overhead associated with running cloud editions of data protection solutions. And it feels like you’re just putting off the inevitable requirement to re-do the whole solution. I’m all for leveraging public cloud – it can be a great resource to get things done effectively without necessarily investing a bunch of money in your own infrastructure. But you need to re-factor your apps for it to really make sense. Otherwise you find yourself deploying point solutions in the cloud in order to avoid doing the not so cool stuff.

I’m not saying that this type of solution doesn’t have a place. I just wish it didn’t need to be like this sometimes …

What’s New With Zerto?

Zerto held their annual conference (ZertoCON) in Boston last week. I didn’t attend, but I did have time to catch up with Rob Strechay prior to Zerto making some announcements around the company and future direction. I thought I’d cover those here.

 

IT Resilience Platform

The first announcement revolved around the “IT Resilience Platform”. The idea behind the strategy is to combine backup, disaster recovery and cloud mobility solutions into a single, simple, scalable platform. Strechay says that “this strategy combines continuous availability, workload mobility, and multi-cloud agility to ensure you can withstand any disruption, leverage new technology seamlessly, and move forward with confidence”. They’ve found that Zerto is being used for both unplanned and planned disruptions, and they’ve also been seeing a lot more activity resolving ransomware and security incidents. From a planned outage perspective, DC consolidation has been a big part of the activity as well.

What’s driving this direction? According to Strechay, companies are looking for fewer point solutions. They’re also seeing backup and DR activities converging. Cloud is driving this technology convergence and is changing the way data protection is being delivered.

  • Cloud for backup
  • Cloud for DR
  • Application mobility

“It’s good if it’s done properly”. Zerto tell me they haven’t rushed into this and are not taking the approach lightly. They see IT Resilience as a combination of Backup, DR Replication, and Hybrid Cloud. Strechay told me that Zerto are going to stay software only and will partner on the hardware side where required. So what does it look like conceptually?

[image courtesy of Zerto]

Think of this as a mode of transport. The analytics and control layer is the navigation system, the orchestration and automation layer is the steering wheel, and continuous data protection is the car.

 

Vision for the Future of Backup

Strechay also shared with me Zerto’s vision for the future of backup. In short, “it needs to change”. They really want to move away from the concept of periodic protection to continuous, journal-based protection delivering seconds of RPO at scale to meet customer expectations. How are they going to do this? The key differentiator will be CDP combined with best-of-breed replication.

 

Zerto 7 Preview

Strechay also shared some high level details of Zerto 7, with key features including:

  • Intelligent index and search
  • Elastic journal
  • Data protection workflows
  • Architecture enhanced
  • LTR targets

There’ll be a new and enhanced user experience – they’re busy revisiting workflows and enhancing a number of them (e.g. reducing clicks, enhanced APIs, etc). They’ll also be looking at features such as prescriptive analytics (what if I added more VMs to this journal?). They’re aiming for a release in Q1 2019.

 

Thoughts

The way we protect data is changing. Companies like Zerto, Rubrik and Cohesity are bringing a new way of thinking to an age old problem. They’re coming at it from slightly different angles as well. This can only be a good thing for the industry. A lot of the technical limitations that we faced previously have been removed in terms of bandwidth and processing power. This provides the opportunity to approach the problem from the business perspective. Rather than saying “we can’t do that”, we have the opportunity to say “we can do that”. That doesn’t mean that scale is a simple thing to manage, but it seems like there are more ways to solve this problem than there have been previously.

I’ve been a fan of Zerto’s approach for some time. I like the idea that a company has shared their new vision for data protection some months out from actually delivering the product. It makes a nice change from companies merely regurgitating highlights from their product release notes (not that that isn’t useful at times). Zerto have a rich history of delivering CDP solutions for virtualised environments, and they’ve made some great inroads with cloud workload protection as well. The idea of moving away from periodic data protection to something continuous is certainly interesting, and obviously fits in well with Zerto’s strengths. It’s possibly not a strategy that will work well in every situation, particularly with smaller environments. But if you’re leveraging replication technologies already, it’s worth looking at how Zerto might be able to deliver a more complete solution for your data protection requirements.

Pure//Accelerate 2018 – (Fairly) Full Disclosure

Disclaimer: I recently attended Pure//Accelerate 2018.  My flights, accommodation and conference pass were paid for by Pure Storage via the Analysts and Influencers program. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Here are my notes on gifts, etc, that I received as a conference attendee at Pure//Accelerate 2018. This is by no stretch an interesting post from a technical perspective, but it’s a way for me to track and publicly disclose what I get and how it looks when I write about various things. I’m going to do this in chronological order, as that was the easiest way for me to take notes during the week. While everyone’s situation is different, I took 5 days of unpaid leave to attend this conference.

 

Saturday

My wife dropped me at the BNE domestic airport and I had some ham and cheese and a few coffees in the Qantas Club. I flew Qantas economy class to SFO via SYD. The flights were paid for by Pure Storage. Plane food was consumed on the flight. It was a generally good experience, and I got myself caught up with Season 3 of Mr. Robot. Pure paid for a car to pick me up at the airport. My driver was the new head coach of the San Francisco City Cats ABA team, so we talked basketball most of the trip. I stayed at a friend’s place until late Monday and then checked in to the Marriott Marquis in downtown San Francisco. The hotel costs were also covered by Pure.

 

Tuesday

When I picked up my conference badge I was given a Pure Storage and Rubrik co-branded backpack. On Tuesday afternoon we kicked off the Analyst and Influencer Experience with a welcome reception at the California Academy of Sciences. I helped myself to a Calicraft Coast Kolsch and 4 Aliciella Bitters. I also availed myself of the charcuterie selection, cheese balls and some fried shrimp. The most enjoyable part of these events is catching up with good folks I haven’t seen in a while, like Vaughn and Craig.

As we left we were each given a shot glass from the Academy of Sciences that was shaped like a small beaker. Pure also had a small box of Sweet 55 chocolate delivered to our hotel rooms. That’s some seriously good stuff. Sorry it didn’t make it home kids.

After the reception I went to dinner with Alastair Cooke, Chris Evans and Matt Leib at M.Y. China in downtown SF. I had the sweet and sour pork and rice and 2 Tsingtao beers. The food was okay. We split the bill 4 ways.

 

Wednesday

We were shuttled to the event venue early in the morning. I had a sausage and egg breakfast biscuit, fruit and coffee in the Analysts and Influencers area for breakfast. I need to remind myself that “biscuits” in their American form are just not really my thing. We were all given an Ember temperature control ceramic mug. I also grabbed 2 Pure-flavoured notepads and pens and a Pure Code t-shirt. Lunch in the A&I room consisted of chicken roulade, salmon, bread roll, pasta and Perrier sparkling spring water. I also grabbed a coffee in between sessions.

Christopher went down to the Solutions Expo and came back with a Quantum sticker (I am protecting data from the dark side) and Veeam 1800mAh keychain USB charger for me. I also grabbed some stickers from Justin Warren and some coffee during another break. No matter how hard I tried I couldn’t trick myself into believing the coffee was good.

There was an A&I function at International Smoke and I helped myself to cheese, charcuterie, shrimp cocktail, ribs, various other finger foods and 3 gin and tonics. I then skipped the conference entertainment (The Goo Goo Dolls) to go with Stephen Foskett and see Terra Lightfoot and The Posies play at The Independent. The car to and from the venue and the tickets were very kindly covered by Stephen. I had two 805 beers while I was there. It was a great gig. 5 stars.

 

Thursday

For breakfast I had fruit, a chocolate croissant and some coffee. Scott Lowe kindly gave me a printed copy of ActualTech’s latest Gorilla Guide to Converged Infrastructure. I also did a whip around the Solutions Expo and grabbed:

  • A Commvault glasses cleaner;
  • 2 plastic Zerto water bottles;
  • A pair of Rubrik socks;
  • A Cisco smart wallet and pen;
  • Veeam webcam cover, retractable charging cable and $5 Starbucks card; and
  • A Catalogic pen.

Lunch was boxed. I had the Carne Asada, consisting of Mexican style rice, flat iron steak, black beans, avocado, crispy tortilla and cilantro. We were all given 1GB USB drives with copies of the presentations from the A&I Experience on them as well. That was the end of the conference.

I had dinner at ThirstBear Brewing Co with Alastair, Matt Leib and Justin. I had the Thirstyburger, consisting of Richards Ranch grass-fed beef, mahón cheese, chorizo-andalouse sauce, arugula, housemade pickles, panorama bun, and hand-cut fried kennebec patatas. This was washed down with two glasses of The Admiral’s Blend.

 

Friday

As we didn’t fly out until Friday evening, Alastair and I spent some time visiting the Museum of Modern Art. vBrownBag covered my entry to the museum, and the Magritte exhibition was terrific. We then lunched in Chinatown at a place (Maggie’s Cafe) that reminded me a lot of the Chinese places in Brisbane. Before I went to the airport I had a few beers in the hotel bar. This was kindly paid for by Justin Warren. On Friday evening Pure paid for a car to take Justin and me to SFO for our flight back to Australia. Justin gets extra thanks for having me as his plus one in the fancier lounges that I normally don’t have access to.

Big thanks to Pure Storage for having me over for the week, and big thanks to everyone who spent time with me at the event (and after hours) – it’s a big part of why I keep coming back to these types of events.