Updated Articles Page

I recently had the opportunity to deploy a Cohesity C2500 4-node appliance and thought I’d run through the basics of the installation. There’s a new document outlining the process on the articles page.

Zerto Announces ZVR 6.0

Zerto recently announced version 6.0 of their Zerto Virtual Replication (ZVR) product, and I had the opportunity to speak with Rob Strechay (Senior VP, Product) about the announcement.

 

Announcement

Multi-cloud Mobility

Multi-cloud workload mobility is probably the biggest bit of news from the 6.0 release. It provides “inter-cloud and intra-cloud workload mobility and protection between Azure, IBM Cloud, AWS and more than 350 cloud service providers (CSPs)”. This is the culmination of a lot of work by Zerto over the past few years, with support for AWS delivered in 2014 and Azure in 2016; now you have the ability to move workloads between clouds as well. The cool thing about this is that you can do some interesting stuff with workload migration, moving to and from Azure, and also within Azure itself (i.e. region to region).

GCP is on their roadmap; however, according to Strechay, demand for that functionality has not been as great.

 

Enhanced Analytics Visibility

Zerto’s analytics capability (first announced in ZVR 5.5) has been enhanced as well. Customers now have access to expanded dashboards with:

  • Live network analysis reports for troubleshooting and optimisation;
  • Insights into network throughput and performance;
  • The ability to monitor site-to-site and outbound traffic; and
  • 30 days of network history metrics for any site.

 

Cloud Portal for CSPs

CSPs are still a huge piece of what makes Zerto successful. The new CSP Management Portal will give CSPs the ability to “remotely upgrade customer sites to provide them with continuous availability and latest software releases”. This is a SaaS-delivered service, and will eventually be supported for Enterprise customers as well.

 

Thoughts and Further Reading

If you’ve ever been to VMworld (or similar events), you’ll have seen that Zerto make a big effort to get in front of current (and potential) customers and spread the good word about disaster recovery and disaster avoidance. Not only do they make pretty good t-shirts, they also have a nifty product (and excellent CSP ecosystem) that keeps improving as the years go by. They now support over 6000 customers in over 70 countries and have done quite a bit of work to make disaster recovery for virtual environments a relatively simple undertaking. This simplicity, coupled with some great improvements in cloud workload mobility, makes it worth a second look.

Disaster recovery (and disaster avoidance), like most data protection activities, isn’t sexy. It’s boring. And you hope you’ll never have to use it. But if you’ve ever had to, you’ll know how kludgy some solutions can be. Zerto has worked hard to not be one of those solutions, instead offering a simple mechanism for workload protection and mobility. If you’re into that kind of thing (and you probably should be), they’re worth checking out.

Zerto Analytics – Seeing Is Understanding

I attended VMworld US in August and had hoped to catch up with Zerto regarding their latest product update (the snappily titled Zerto Virtual Replication 5.5). Unfortunately there were some scheduling issues and we were unable to meet up. I was, however, briefed by them a few weeks later on some of the new features, particularly around the Zerto Analytics capability. This is a short post that focuses primarily on that part of the announcement.

 

Incremental But Important Announcement

If you’re unfamiliar with Zerto, they provide cloud and hypervisor-based workload replication for disaster recovery. They’ve been around since 2010, and the product certainly has its share of fans. The idea behind Zerto Analytics, according to Zerto, is that it “provides real-time and historical analytics on the status and health of multi-site, multi-cloud environments”.

It is deployed on Zerto’s new SaaS platform, is accessible to all Zerto VR customers, and, according to Zerto, “you will be able to quickly visualize your entire infrastructure from a single pane of glass”.

 

The Value

DR is a vital function that a whole bunch of companies don’t understand terribly well. Zerto provide a reasonably comprehensive solution for companies looking to protect their hypervisor-based workloads in multiple locations while leveraging a simple-to-use interface for recovery, because when it all goes wrong you want it to be easy to come back. The cool thing about Zerto Analytics is that it gives you more than the standard-issue status reporting you’ve previously enjoyed. Instead, you can go through historical data to get a better understanding of the replication requirements of your workloads, and the hot and cold times for those workloads. I think this is super useful when it comes to (potentially) understanding when planned maintenance needs to occur, and when is a good time to schedule your test recoveries or data migration activities.

There’s never a good time for a disaster. That’s why they call them disasters. But the more information you have available at the time of a disaster, the better the chances are of you coming out the other end in good shape. The motto at my daughters’ school is “Scientia est Potestas”. This doesn’t actually mean “Science is Potatoes” but is Latin for “Knowledge is Power”. As with most things in IT (and life), a little bit of extra knowledge (in the form of insight and data) can go a long way. Zerto are keen, with this release, to improve the amount of visibility you have into your environment from a DR perspective. This can only be a good thing, particularly when you can consume it across a decent range of platforms.

DR isn’t just about the technology by any stretch. You need an extensive understanding of what’s happening in your environment, and you need to understand what happens to people when things go bang. But one of the building blocks for success, in my opinion, is providing a solid platform for recovery in the event that something goes pear-shaped. Zerto isn’t for everyone, but I get the impression anecdotally that they’re doing some pretty good stuff around making what can be a bad thing into a more positive experience.

 

Read More

Technical documentation on Zerto Virtual Replication 5.5 can be found here. There’s also a great demo on YouTube that you can see here.

Aparavi Comes Out Of Stealth. Dazzles.

Santa Monica-based (I love that place) SaaS data protection outfit, Aparavi, recently came out of stealth, and I thought it was worthwhile covering their initial offering.

 

So Latin Then?

What’s an Aparavi? It’s apparently Latin and means “[t]o prepare, make ready, and equip”. The way we consume infrastructure has changed, but a lot of data protection products haven’t changed to accommodate this. Aparavi are keen to change that, and tell me that their product is “designed to work seamlessly alongside your business continuity plan to ease the burden of compliance and data protection for mid market companies”. Sounds pretty neat, so how does it work?

 

Architecture

Aparavi uses a three-tiered architecture written in Node.js and C++. It consists of:

  • The Aparavi hosted platform;
  • An on-premises software appliance; and
  • A source client.

[image courtesy of Aparavi]

The platform is available as a separate module if required, otherwise it’s hosted on Aparavi’s infrastructure. The software appliance is the relationship manager in the solution. It performs in-line deduplication and compression. The source client can be used as a temporary recovery location if required. AES-256 encryption is done at the source, and the metadata is also encrypted. Key storage is all handled via keyring-style encryption mechanisms. There is communication between the web platform and the appliance, but the appliance can operate when the platform is off-line if required.
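To make the source-side encryption part of that a bit more concrete, here’s a minimal Python sketch of encrypting a block with AES-256-GCM on the client before it leaves the source. The key handling, block format, and choice of library (the cryptography package) are my assumptions for illustration; this isn’t a description of Aparavi’s actual implementation or keyring mechanism.

```python
# Minimal sketch: source-side AES-256 encryption of a data block before upload.
# Key management is deliberately simplified - a real client would fetch the key
# from a keyring/KMS rather than generating it inline.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # placeholder for a keyring-managed key
aesgcm = AESGCM(key)

def encrypt_block(plaintext: bytes) -> bytes:
    nonce = os.urandom(12)                   # unique nonce per block
    return nonce + aesgcm.encrypt(nonce, plaintext, None)

ciphertext = encrypt_block(b"contents of a file block")
print(len(ciphertext), "bytes ready to send")
```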

 

Cool Features

There are a number of cool features of the Aparavi solution, including:

  • Patented point-in-time recovery – you can recover data from any combination of local and cloud storage (you don’t need the backup set to live in one place);
  • Cloud active data pruning – automatically removes files, and portions of files, that are no longer needed from cloud locations;
  • Multi-cloud agile retention (this is my favourite) – you can use multiple cloud locations without the need to move data from one to the other;
  • Open data format – open source published, with Aparavi providing a reader so data can be read by any tool; and
  • Multi-tier, multi-tenancy – Aparavi are very focused on delivering a multi-tier and multi-tenant environment for service providers and folks who like to scale.

 

Retention Simplified

  • Policy Engine – uses file exclusion and inclusion lists
  • Comprehensive Search – search by user name and appliance name as well as file name
  • Storage Analytics – how much you’re saving by pruning, data growth / shrinkage over time, % change monitor
  • Auditing and Reporting Tools
  • RESTful API – anything in the UI can be automated
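Since anything in the UI can be driven through the RESTful API, automation tends to boil down to a handful of HTTP calls. The sketch below shows the general shape of that in Python with the requests library; the base URL, endpoints, parameters and auth header are hypothetical placeholders rather than Aparavi’s actual API, so check their API documentation for the real calls.

```python
# Hypothetical sketch of automating UI actions (search, storage analytics) via a REST API.
# Endpoints and auth below are placeholders for illustration only.
import requests

BASE_URL = "https://aparavi.example.com/api/v1"        # placeholder
HEADERS = {"Authorization": "Bearer <api-token>"}       # placeholder auth

# Search protected data by file name and user, as you would in the UI.
search = requests.get(f"{BASE_URL}/search",
                      params={"filename": "*.xlsx", "user": "dan"},
                      headers=HEADERS).json()

# Pull the storage analytics the UI exposes (pruning savings, growth over time).
analytics = requests.get(f"{BASE_URL}/analytics/storage", headers=HEADERS).json()

print(len(search.get("items", [])), "results;", analytics.get("savings"), "saved by pruning")
```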

 

What Does It Run On?

Aparavi runs on all Microsoft-supported Windows platforms as well as most major Linux distributions (including Ubuntu and Red Hat). They use the Amazon S3 API, support GCP, and are working on OpenStack and Azure support. They’ve also got some good working relationships with Cloudian and Scality, amongst others.

[image courtesy of Aparavi]

 

Availability?

Aparavi are having a “soft launch” on October 25th. The product is licensed on the amount of source data protected. From a pricing perspective, the first TB is always free. Expect to pay US $999/year for 3TB.

 

Conclusion

Aparavi are looking to focus on the mid-market to begin with, and stressed to me that it isn’t really intended to replace your day-to-day business continuity tooling. That said, they recognize that customers may end up using the tool in ways that they hadn’t anticipated.

Aparavi’s founding team of Adrian Knapp, Rod Christensen, Jonathan Calmes and Jay Hill have a whole lot of experience with data protection engines and a bunch of use cases. Speaking to Jonathan, it feels like they’ve certainly thought about a lot of the issues facing folks leveraging cloud for data protection. I like the open approach to storing the data, and the multi-cloud friendliness takes the story well beyond the hybrid slideware I’m accustomed to seeing from some companies.

Cloud has opened up a lot of possibilities for companies that were traditionally constrained by their own ability to deliver functional, scalable and efficient infrastructure internally. It’s since come to people’s attention that, much like the days of internal-only deployments, a whole lot of people who should know better still don’t understand what they’re doing with data protection, and there’s crap scattered everywhere. Products like Aparavi are a positive step towards taking control of data protection in fluid environments, potentially helping companies to get it together in an effective manner. I’m looking forward to diving further into the solution, and am interested to see how the industry reacts to Aparavi over the coming months.

Rubrik Cloud Data Management 4.1 Released – “More Than You Might Expect”

Rubrik recently announced Version 4.1 of its Cloud Data Management product, and I thought it would be worthwhile running through some of the highlights.

 

Ecosystem

Azure CloudOn

This feature enables customers to power on an archived snapshot of a VM in the cloud.

  • Instance type recommendation based on VM config file (.vmx) – see the sketch after this list
  • 2-click deployment with orchestration
  • UI Integration to launch, power off or de-provision an instance
  • On-demand or constant conversion
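As flagged above, the instance type recommendation is driven off the VM’s .vmx file. Here’s a hypothetical sketch of how that sort of mapping could work: parse the vCPU and memory settings out of the .vmx and pick the smallest Azure size that fits. The size table and selection logic are illustrative assumptions, not Rubrik’s actual algorithm.

```python
# Hypothetical sketch: map a VM's .vmx settings to an Azure instance size.
AZURE_SIZES = [                          # (name, vCPUs, memory in MB) - illustrative subset
    ("Standard_D2s_v3", 2, 8192),
    ("Standard_D4s_v3", 4, 16384),
    ("Standard_D8s_v3", 8, 32768),
]

def parse_vmx(path):
    """Read key = "value" pairs from a .vmx file into a dict."""
    settings = {}
    with open(path) as f:
        for line in f:
            if "=" in line:
                key, value = line.split("=", 1)
                settings[key.strip()] = value.strip().strip('"')
    return settings

def recommend_instance(vmx_path):
    vmx = parse_vmx(vmx_path)
    vcpus = int(vmx.get("numvcpus", 1))
    mem_mb = int(vmx.get("memsize", 1024))
    for name, size_cpus, size_mem in AZURE_SIZES:
        if size_cpus >= vcpus and size_mem >= mem_mb:
            return name
    return AZURE_SIZES[-1][0]            # fall back to the largest size in the table

print(recommend_instance("app-server-01.vmx"))
```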

What are the use cases?

  • Spin up a cloud sandbox for dev/test use
  • Disaster Recovery
  • On-premises to cloud migration

 

[image courtesy of Rubrik]

 

There are some limitations to note:

  • The OS must be supported by Azure
  • A 1TB Max Disk Size

 

Other Enhancements

There are a few other enhancements, including:

  • AWS Glacier and Google CloudOut
  • Hyper-V SCVMM

I won’t cover them here but the Glacier and GCP Archive features seem pretty cool.

 

Core Features

SQL AAG

Alta introduced a lot of Oracle support, and this version introduces support for SQL Server AlwaysOn Availability Groups (AAGs). Rubrik auto-detects settings / configurations within SQL Server:

  • Availability Groups (“AGs”) – collections of SQL server replicas
  • AlwaysOn settings – includes replica failover order

Rubrik dynamically backs up the appropriate AG node based on the AG’s backup preferences. The AGs and selected AlwaysOn settings are displayed in the Rubrik UI.

There’s support for AlwaysOn manual and automatic failover transitions:

  • Target secondary replica specified by AlwaysOn settings
  • Previously had to manually swap DBs within Rubrik
  • Automatic failover for synchronous commit replicas only
  • Rubrik continues to back up DBs during AlwaysOn failover

There are some limitations to note:

  • Cannot restore / create DB within an Availability Group via Rubrik. This must be done within the SQL Server product;
  • SQL Server only supports automatic failover for synchronous replicas; and
  • The feature is not supported in versions of SQL Server from before 2012 (no Availability Groups then).

These limitations are common to AlwaysOn as a technology and are not Rubrik specific.

 

Multi-tenancy

Logically divide Rubrik Clusters into multiple management units (organizations). There are three roles that can be leveraged: Global Admin, Org Admin and End User.

Global Admin

  • Comprehensive privileges across all resources
  • Define Organization: subset of all resources
  • Assign Org Admin and define privileges

Org Admin

  • Privilege subset scoped to Organization resources
  • Assign End User and define privileges

End User

  • May be scoped to Organization
  • Can browse snapshots, recover files, and live mount on select resources

Organizations can be used to fully partition ALL objects associated with your Rubrik cluster by customer (MSP) or department (enterprise):

  • Protected objects
  • Archival targets
  • Replication targets
  • SLA domains
  • Service credentials
  • Users

Groups of logical objects (SLA Domains, Archival Targets, Protected Objects, Users) can be independently managed as an organization. There’s also integration with an existing directory service (AD).

 

VLAN Tagging

All the kids are into VLAN tagging nowadays, and Rubrik’s implementation provides the ability to segment traffic within physical networks via IEEE 802.1Q. This is configurable at bootstrap or later via the CLI, and supports up to 25 VLANs per cluster. If you choose not to create any VLAN configuration during initial cluster setup, all traffic will be untagged. Additionally, traffic that does not belong to a directly attached VLAN will be placed on the management interface/VLAN and routed through the default gateway.
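That last routing behaviour is a fairly standard decision, and it’s easy to illustrate. The Python sketch below (standard library only) shows the logic with made-up VLAN IDs and subnets; it isn’t Rubrik code, just the idea of “directly attached VLAN gets tagged, everything else goes untagged via the management interface”.

```python
# Sketch of the egress decision described above, with illustrative VLANs and subnets.
import ipaddress

DIRECTLY_ATTACHED_VLANS = {
    100: ipaddress.ip_network("10.10.100.0/24"),
    200: ipaddress.ip_network("10.10.200.0/24"),
}

def egress_for(destination: str) -> str:
    dst = ipaddress.ip_address(destination)
    for vlan_id, subnet in DIRECTLY_ATTACHED_VLANS.items():
        if dst in subnet:
            return f"tagged on VLAN {vlan_id}"
    return "untagged via the management interface and default gateway"

print(egress_for("10.10.100.25"))    # tagged on VLAN 100
print(egress_for("192.168.1.50"))    # untagged via the management interface and default gateway
```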

 

General Enhancements

New Envision Report Customisations 

  • Two new Default reports (Capacity Over Time and Global Protection Summary)

Oracle Enhancements

  • Ability to resize Managed Volumes while still mapping across the underlying cluster resources in a scale-out fashion

Archive Cascading

  • Allows customers to replicate from a Rubrik cluster at Site A to a Rubrik Cluster at Site B with the data then archived from the Site B Rubrik cluster

[image courtesy of Rubrik]

 

Conclusion

I’ve been a fan of Rubrik for some time now. I don’t cover these announcements just because they put me on a #vAllStars baseball card or because they send me swag from time to time. I genuinely think they’re doing some cool stuff and it’s been great to see the evolution of the product over the last few years. Version 4.0 (Alta) was a pretty big release for them (there’s a webinar series you can access on-demand here) and this one adds some new features that a lot of people (particularly enterprise folks) have been asking for.

Druva Is Useful, And Modern

Disclaimer: I recently attended VMworld 2017 – US.  My flights were paid for by ActualTech Media, VMware provided me with a free pass to the conference and various bits of swag, and Tech Field Day picked up my hotel costs. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

You can view the video of Druva‘s presentation here, and you can download a PDF copy of my rough notes from here.

 

DMaaS

Druva have been around for a while, and I recently had the opportunity to hear from them at a Tech Field Day Extra event. They have combined their Phoenix and inSync products into a single platform, yielding Druva Cloud Platform. This is being positioned as a “Data Management-as-a-Service” offering.

 

Data Management-as-a-Service

Conceptually, it looks a little like this.

[image via Druva]

According to Druva, the solution takes into account all the good stuff, such as:

  • Protection;
  • Governance; and
  • Intelligence.

It works with both:

  • Local data sources (end points, branch offices, and DCs); and
  • Cloud data sources (such as IaaS, Cloud Applications, and PaaS).

The Druva cloud is powered by AWS, and provides, amongst other things:

  • Auto-tiering in the cloud (S3/S3IA/Glacier) – illustrated conceptually below; and
  • Easy recovery to any location (servers or the cloud).
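Druva manages the tiering noted above for you, but if you want a feel for what S3 to S3-IA to Glacier tiering looks like under the covers, an S3 lifecycle rule does much the same job. The boto3 sketch below is purely illustrative: the bucket name, prefix and day thresholds are assumptions, and it says nothing about how Druva actually implements it.

```python
# Conceptual sketch of auto-tiering backup objects: S3 -> S3-IA after 30 days,
# -> Glacier after 90 days. Bucket, prefix and thresholds are illustrative only.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="example-backup-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-old-restore-points",
                "Filter": {"Prefix": "restorepoints/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
            }
        ]
    },
)
```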

 

Just Because You Can Put A Cat …

With everything there’s a right way and a wrong way to do it. Sometimes you might do something and think that you’re doing it right, but you’re not. Wesley Snipes’s line in White Men Can’t Jump may not be appropriate for this post, but Druva came up with one that is: “A VCR in the cloud doesn’t give you Netflix”. When you’re looking at cloud-based data protection solutions, you need to think carefully about just what’s on offer. Druva have worked through a lot of these requirements and claim their solution:

  • Is fully managed (no need to deploy, manage, or support software);
  • Offers predictable, lower costs;
  • Delivers linear and infinite (!) scalability;
  • Provides automatic upgrades and patching; and
  • Offers seamless data services.

I’m a fan of the idea that cloud services can offer a somewhat predictable cost model to customers. One of the biggest concerns faced by the C-level folk I talk to is the variability of cost when it comes to consuming off-premises services. The platform also offers source-side global deduplication, with:

  • Application-aware block-level deduplication;
  • Only unique blocks being sent; and
  • Forever incremental and efficient backups.

The advantage of this approach is that, as Druva charge based on “post-globally deduped storage consumed”, chances are you can keep your costs under control.
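To make the block-level deduplication idea a little more concrete, here’s a minimal Python sketch: split a file into fixed-size blocks, hash each one, and only “send” blocks the backup service hasn’t seen before. Real implementations (Druva’s included) are application-aware and use variable-length chunking and a proper index rather than an in-memory set, so treat this as the core idea only.

```python
# Minimal sketch of source-side, block-level deduplication.
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024            # 4 MiB fixed blocks, for illustration
known_blocks = set()                     # hashes the backup target already holds

def backup_file(path):
    sent = skipped = 0
    with open(path, "rb") as f:
        while block := f.read(BLOCK_SIZE):
            digest = hashlib.sha256(block).hexdigest()
            if digest in known_blocks:
                skipped += 1             # duplicate: store a reference only
            else:
                known_blocks.add(digest)
                sent += 1                # unique: compress, encrypt and upload this block
    print(f"{path}: {sent} unique blocks sent, {skipped} duplicates skipped")

backup_file("/data/vm-disk-flat.vmdk")
```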

 

It Feels Proper Cloudy

I know a lot of people who are in the midst of the great cloud migration. A lot of them are only now (!) starting to think about how exactly they’re going to protect all of this data in the cloud. Some of them are taking their existing on-premises solutions and adapting them to deal with hybrid or public cloud workloads. Others are dabbling with various services that are primarily cloud-based. Worse still are the ones assuming that the SaaS provider is somehow magically taking care of their data protection needs. Architecting your apps for multiple geos is a step in the right direction towards availability, but you still need to think about data protection in terms of integrity, not just availability. The impression I got from Druva is that they’ve taken some of the best elements of their on-premises and cloud offerings, sprinkled some decent security in the mix, and come up with a solution that could prove remarkably effective.

VMware – VMworld 2017 – STO3194BU – Protecting Virtual Machines in VMware Cloud on AWS

Disclaimer: I recently attended VMworld 2017 – US.  My flights were paid for by ActualTech Media, VMware provided me with a free pass to the conference and various bits of swag, and Tech Field Day picked up my hotel costs. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Here are my rough notes from “STO3194BU – Protecting Virtual Machines in VMware Cloud on AWS”, presented by Brian Young and Anita Thomas. You can grab a PDF copy of my notes from here.

VMware on AWS Backup Overview

VMware Cloud on AWS

  • VMware is enabling the VADP backup partner ecosystem on VMC
  • Access to native AWS storage for backup target
  • Leverages high performance network between Virtual Private Clouds

You can read more about VMware Cloud on AWS here.

 

Backup Partner Strategy

VMware Certified – VMware provides highest level of product endorsement

  • Product certification with VMware Compatibility Guide Listing
  • Predictable Life Cycle Management
  • VMware maintains continuous testing of VADP APIs on VMC releases

Customer Deployed – Same solution components for both on-premises and VMC deployments

  • Operational Consistency
  • Choice of backup methods – image-level, in-guest
  • Choice of backup targets – S3, EBS, EFS

Partner Supported – Partner provides primary support

  • Same support model as on-premises

 

VADP / ENI / Storage Targets

VADP

  • New VDDK supports both on-premises and VMC
  • VMware backup partners are updating existing products to use new VDDK to enable backup of VMC based VMs

Elastic Network Interface (ENI)

  • Provide access to high speed, low latency network between VMC and AWS Virtual Private Clouds
  • No ingress or egress charges within the same availability zone

Backup Storage Targets

  • EC2 based backup appliance – EBS and S3 storage
  • Direct to S3

 

Example Backup Topology

  • Some partners will support in-guest and image level backups direct to S3
  • Deduplicates, compresses and encrypts on EC2 backup appliance
  • Store or cache backups on EBS
  • Some partners will support vaulting older backups to S3

 

Summary

  • VADP based backup products for VMC are available now
  • Elastic Network Interface connection to native AWS services is available now
  • Dell EMC Data Protection Suite is the first VADP data protection product available on VMC
  • Additional VADP backup solutions will be available in the coming months

 

Dell EMC Data Protection for VMware Cloud on AWS

Data Protection Continuum – Where you need it, how you want it

Dell EMC Data Protection is a Launch Partner for VMware Cloud on AWS. Data Protection Suite protects VMs and enterprise workloads, whether on-premises or in VMware Cloud:

  • Same data protection policies
  • Leveraging best-in-class Data Domain Virtual Edition
  • AWS S3 integration for cost efficient data protection

 

Dell EMC Data Domain and DP Suite

Data Protection Suite

  • Protects across the continuum – replication, snapshot, backup and archive
  • Covers all consumption models
  • Broadest application and platform support
  • Tightest integration with Data Domain

Data Domain Virtual Edition

  • Deduplication ratios up to 55x
  • Supports on-premises and cloud
  • Data encryption at rest
  • Data Invulnerability Architecture – best-in-class reliability
  • Includes DD Boost, DD Replicator

 

Dell EMC Solution Highlights

Unified

  • Single solution for enterprise applications and virtual machines
  • Works across on-premises and cloud deployments

Efficient

  • Direct application backup to S3
  • Minimal compute costs in cloud
  • Storage-efficient: deduplication up to 55x to DD/VE

Scalable

  • Highly scalable solution using lightweight stateless proxies
  • Virtual synthetic full backups – lightning fast daily backups, faster restores
  • Uses CBT for faster VM-image backup and restore
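CBT is a vSphere feature rather than anything Dell EMC specific, but it’s worth seeing what “uses CBT” looks like in practice: you snapshot the VM and then ask vSphere for only the disk areas that changed since the last backup. The pyVmomi sketch below assumes CBT is already enabled on the VM; the hostname, credentials and device key are placeholders, and task-waiting and error handling are omitted for brevity.

```python
# Sketch: use vSphere Changed Block Tracking to find only the blocks changed since
# the last backup. Assumes pyVmomi is installed and changeTrackingEnabled is set on the VM.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="backup@vsphere.local",
                  pwd="secret", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Find the VM by name (a container view is fine for a sketch).
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "app-server-01")

# Take a snapshot to get a stable point-in-time image.
vm.CreateSnapshot_Task(name="backup", memory=False, quiesce=True)
# ... wait for the task to complete (omitted), then grab the snapshot reference ...
snap = vm.snapshot.currentSnapshot

# changeId="*" returns all allocated areas (a full); pass the changeId saved from the
# previous backup to get just the blocks written since then (an incremental).
changes = vm.QueryChangedDiskAreas(snapshot=snap, deviceKey=2000,
                                   startOffset=0, changeId="*")
for extent in changes.changedArea:
    print(f"read {extent.length} bytes at offset {extent.start}")

Disconnect(si)
```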

 

Solution Detail

Backup of VMs and applications in VMC to a DD/VE or AWS S3. The solution supports

  • VM image backup and restore
  • In-guest backup and restore of applications using agents for consistency
  • Application direct to S3

 

ESG InstaGraphic

  • ESG Lab has confirmed that the efficiency of the Dell EMC architecture can be used to reduce monthly in-cloud data protection costs by 50% or more
  • ESG Research has confirmed that public cloud adoption is on the rise. More than 75% of IT organisations report they are using the public cloud and 41% are using it for production applications
  • There is a common misconception that an application, server, or data moved to the cloud is automatically backed up the same way it was on-premises
  • Architecture matters when choosing a public cloud data protection solution

Source – ESG White Paper – Cost-efficient Data Protection for Your Cloud – to be published.

 

Manage Backups Using a Familiar Interface

  • Consistent user experience in cloud and on-premises
  • Manage backups using familiar data protection UI
  • Extend data protection policies to cloud
  • Detailed reporting and monitoring

 

Software Defined Data Protection Policies

Dynamic Policies – Keeping up with VM data growth and smart policies

Supported Attributes

  • DS Clusters
  • Data Center
  • Tags
  • VMname
  • Data Store
  • VMfolder
  • VM resource group
  • vApp

 

Technology Preview

The Vision we are building towards (screenshot demos).

 

Further Reading

You can read more in Chad’s post on the solution. Dell EMC put out a press release that you can see here. There’s a blog post from Dell EMC that also provides some useful information. I found this to be a pretty useful overview of what’s available and what’s coming in the future. 4 stars.

VMware – VMworld 2017 – STO3331BUS – Cohesity Hyperconverged Secondary Storage: Simple Data Protection for VMware and vSAN

Disclaimer: I recently attended VMworld 2017 – US.  My flights were paid for by ActualTech Media, VMware provided me with a free pass to the conference and various bits of swag, and Tech Field Day picked up my hotel costs. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Here are my rough notes on “STO3331BUS – Cohesity Hyperconverged Secondary Storage: Simple Data Protection for VMware and vSAN” presented by Gaetan Castelein of Cohesity and Shawn Long, CEO of viLogics. You can grab a PDF of my notes from here.

 

Secondary Storage Problem

SDS has changed for the better.

 

Primary storage has improved dramatically

Moving from:

  • High CapEx costs
  • Device-centric silos
  • Complex processes

To:

  • Policy-based management
  • Cost-efficient performance
  • Modern storage architectures

 

But secondary storage is still problematic

Rapidly growing data

  • 6ZB in 2016
  • 93ZB in 2025
  • 80% unstructured

Too many copies

  • 45% – 60% of capacity for copy data
  • 10 – 12 copies on average
  • $50B problem

Legacy storage can’t keep up

  • Doesn’t scale
  • Fragmented silos
  • Inefficient

 

Cohesity Hyperconverged Secondary Storage

You can use this for a number of different applications, including:

  • File shares
  • Archiving
  • Test / Dev
  • Analytics
  • Backups

It also offers native integration with the public cloud and Cohesity have been clear that you shouldn’t consider it to be just another backup appliance.

 

Consolidate Secondary Storage Silos at Web-Scale

  • Data Protection with Cohesity DataProtect;
  • Third-party backup DB copies with CommVault, Oracle RMAN, Veritas, IBM and Veeam;
  • Files; and
  • Objects.

 

Deliver Data Instantly

Want to make the data useful (via SnapTree)?

 

Software defined from Edge to Cloud

You can read more about Cohesity’s cloud integration here.

Use Cases

  • Simple Data Protection
  • Distributed File Services
  • Object Services
  • Multicloud Mobility
  • Test / Dev Copies
  • Analytics

You can use Cohesity with existing backup products if required or you can use Cohesity DataProtect.

 

Always-Ready Snapshots for Instant Restores

  • Sub-5 minute RPOs
  • Fully hydrated images (linked clones)
  • Catalogue of always-ready images
  • Instant recoveries (near-zero RTOs)
  • Integration with Pure Storage

 

Tight Integration with VMware

  • vCenter Integration
  • VADP for snap-based CBT backups
  • vRA plugin for self-service, policy-based management

 

CloudArchive

  • Policy-based archival
  • Dedupe, compression, encryption
  • Everything is indexed before it goes to the cloud – search files and VMs
  • Individual file recovery
  • Recover to a different Cohesity cluster

 

CloudReplicate   

  • Replicate backup data to cloud

Deploy Cohesity to the cloud (available on Azure currently, other platforms soon).

 

Reduce TCO

You can move from “Legacy backup”, where you’re paying maintenance on backup software and deduplication appliances, to paying just for Cohesity.

 

Testimonial

Shawn Long from viLogics then took the stage to talk about their experiences with Cohesity.

  • People want to consume IT
  • “Product’s only as good as the support behind it”

 

Conclusion

This was a useful session. I do enjoy the sponsored sessions at VMworld. They’re a good way for vendors to get their message across in a way that still ties back to VMware. There’s often a bit of a sales pitch, but there’s usually also enough information in them to get you looking further into the solution. I’ve been keeping an eye on Cohesity since I first encountered them a few years ago at Storage Field Day, and their story has improved in clarity and coherence since then. If you’re looking at secondary storage solutions, it’s worth checking them out. You’ll find some handy resources here. 3.5 stars.

VMware – VMworld 2017 – MGT3342BUS – Architecting Data Protection with Rubrik

Disclaimer: I recently attended VMworld 2017 – US.  My flights were paid for by ActualTech Media, VMware provided me with a free pass to the conference and various bits of swag, and Tech Field Day picked up my hotel costs. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Here are my rough notes from “MGT3342BUS – Architecting Data Protection with Rubrik” presented by Rebecca Fitzhugh and Andrew Miller at VMworld US 2017. You can download my rough notes from here. Here’s a proof of life shot of Rebecca and Andrew.

 

Why bother with Data Protection?

There’s one big reason. Your stuff is important. However, the business expectations of a company’s DR / data protection frequently != the IT capabilities for DR / data protection.

 

What are you really protecting yourself against?

  • Lost or postponed sales and income
  • Regulatory fines
  • Delay of new business plans
  • Loss of contractual bonuses
  • Customer dissatisfaction
  • Timing and duration of disruption
  • Increased expenses such as overtime labor and outsourcing
  • Employee burnout

Disaster – what does that really look like?

  • Natural – tornadoes, earthquakes, etc; and
  • Man-made – power loss, human error.

 

Where do we begin? How do we deal with this?

What is a Business Impact Analysis (BIA)? Something you need to do if you haven’t done it already.

A process to understand:

  • What is the monetary impact of a disaster or failure?
  • What are the most time-critical and information-critical business processes?
  • How does the business REALLY rely upon IT service and application availability?
  • What availability or recoverability capabilities are justifiable based on these requirements, potential impact and costs?

Composed of two components

  • Technical discovery – data gathering
  • Human conversation – talk to people!

Example output – recovery priority tiers.

 

What is an SLA?

A contract between an external service provider and its customers or between an IT department and internal business units it services

Downtime

  • Two 9s – 99% = 3.65 days of downtime per year (easy to achieve, less expensive)
  • Three 9s – 99.9% = 8.76 hours of downtime per year
  • Four 9s – 99.99% = 52.6 minutes of downtime per year
  • Five 9s – 99.999% = 5.26 minutes of downtime per year (difficult to achieve, expensive!)
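Those downtime figures fall straight out of the arithmetic, and it’s a handy sanity check to run yourself:

```python
# Downtime per year implied by an availability SLA.
MINUTES_PER_YEAR = 365 * 24 * 60

for availability in (0.99, 0.999, 0.9999, 0.99999):
    downtime_min = MINUTES_PER_YEAR * (1 - availability)
    print(f"{availability:.3%} availability -> {downtime_min:,.1f} minutes "
          f"({downtime_min / 60:.2f} hours) of downtime per year")
```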

DR – key measures

  • RPO: how much data can I lose?
  • RTO: Targeted amount of time to restart a business service after a disaster event

The smaller your RTOs and RPOs, the more money you’ll spend.

 

BC vs DR vs OR – Say What?

Business Continuity

  • All goes on as normal despite an incident
  • Could lose a site and have no impact on business operations (active/active sites)

Disaster Recovery

  • To cope with and recover from an IT crisis that moves work to an alternative system in a non-routine way
  • A real “disaster” is large in scope and impact
  • DR typically implies failure of the primary data centre and recovery to an alternate site

Operational Recovery

  • Addresses more “routine” types of failure (server, network, storage, etc)
  • Events are smaller in scope and impact than a full disaster
  • Typically implies recovering to alternate equipment within the primary DC

Each should have its own clearly defined objectives – at minimum you should know the difference.

 

Where Rubrik Helps

Complexity is the enemy. Whatever you do. Whatever you buy. Simplify your architecture & expect more.

 

Key Evaluation Criteria

What Rubrik have seen that makes a difference:

1. Reliability of data recovery

  • Simplicity of setup and day 2 operations – SLA policies!
  • Immutability – is your data there when you need it?

2. Speed of data recovery

  • Search and Live Mount
  • API usage / automation to enhance restore capabilities
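On that last point, API-driven restores usually boil down to a few REST calls. The sketch below is illustrative only: the endpoint paths, payloads and token handling are assumptions for the example (Rubrik ships its own API documentation and explorer, which is where the real interface lives).

```python
# Illustrative sketch of driving a restore workflow via a REST API instead of the UI.
# Endpoints, payloads and auth below are placeholders - consult Rubrik's API docs.
import requests

CLUSTER = "https://rubrik.example.com"                   # placeholder cluster address
session = requests.Session()
session.headers["Authorization"] = "Bearer <api-token>"  # placeholder token

# Find the VM, list its snapshots, and request a Live Mount of the most recent one.
vm = session.get(f"{CLUSTER}/api/v1/vmware/vm", params={"name": "app-server-01"}).json()["data"][0]
snaps = session.get(f"{CLUSTER}/api/v1/vmware/vm/{vm['id']}/snapshot").json()["data"]
resp = session.post(f"{CLUSTER}/api/v1/vmware/vm/snapshot/{snaps[0]['id']}/mount",
                    json={"powerOn": True})
print(resp.status_code, resp.json())
```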

Not a lot has changed in data management since the 1990s. Last decade we introduced disk-based backup and deduplication. The problem is we added capabilities to existing architectures. This didn’t necessarily make things simpler.

 

Rubrik Cloud Data Management

Software fabric for orchestrating apps and data across clouds. No forklift upgrades.

 

How it Works

  • Quick start – Rack and go. Auto discovery.
  • Rapid Ingest – Flash-optimized, parallel ingest accelerates snapshots and eliminates stun. Content-aware dedupe. One global namespace.
  • Automate – Intelligent SLA policy engine for effortless management.
  • Instant Recovery – Live mount VMs and SQL. Instant search and file restore.
  • Secure – end-to-end encryption. Immutability to fight ransomware.
  • Cloud – “CloudOut” instantly accessible with global search. Launch apps with “CloudOn” for DR or test/dev. Run apps in cloud.

 

Data Management in the Cloud

SLAs are important, and you’ll likely need to consider the following aspects.

  • RPO
  • Availability Duration (Retention)
  • When to archive (RTO)
  • Replication Schedule (DR)

Demo Time

Under the hood – Interface, Logic, Core.

“Simple is hard”

Use an API-first platform to create powerful automation workflows

“Don’t Backup. Go Forward”

 

Conclusion

It should be no secret that I’m quite a fan of the Rubrik architecture and approach to data protection. I’ve written about them before on this blog. I like it when data protection firms talk to me about what’s important to the business and the kinds of scenarios they protect against. I also like the focus on BIA and SLAs. Rubrik have made some great strides in the marketplace and are delivering new features at a rapid clip. If you haven’t had time to look at them and you’re looking for a new approach to data protection, I recommend you look into their solution.

Moving From CrashPlan Back To BackBlaze

The Problem

I recently received an email from the CrashPlan for Home Team and have included some of the text below:

“Thank you for being a CrashPlan® for Home customer. We’re honored that you’ve trusted us to protect your data.

It’s because of this trust that we want you to know that we have shifted our business strategy to focus on the enterprise and small business segments. This means that over the next 14 months we will be exiting the consumer market and you must choose another option for data backup before your subscription expires. We are committed to providing you with an easy and efficient transition.”

You may or may not recall (or care) that I moved from Mozy to BackBlaze when Mozy changed their pricing scheme. I then moved to CrashPlan when a local (to Australia) contact offered me an evaluation of their seed service. Since then I’ve been pretty happy with CrashPlan, and had set up some peer-to-peer stuff with Mat as well.

 

Now What?

CrashPlan are offering existing customers a very smooth transition to their business plans. While the price is a little higher than before, it’s still very reasonable. And there’s a big discount on offer for the first twelve months, and a bunch of other options available. Plus, I wouldn’t have to re-seed my data, and I can access local support and resources.

 

The Siren’s Call

There are a whole lot of different cloud backup solutions you can access. They’re listed in this handy table here. Some of them are sync-only services, and some of them are fully-fledged offerings. I’ve been a fan of BackBlaze’s offering and technical transparency for a long time, and noticed they were pretty quick to put up a post showing off their wares. Their pricing is very reasonable, I’ve never had too many problems with the software, and they offer USB restores of data if required. The issue is that I have about 1TB of data to seed, and on an ADSL connection it’s going to take ages. BackBlaze don’t offer the ability to seed data in a similar fashion to CrashPlan, so I’ll be sucking it up and trickling the data up to BackBlaze while maintaining my account with CrashPlan for Home. I’ll get back to you in a few years and let you know how that’s gone. In the meantime, the kind folks at BackBlaze did send me this link to their FAQ on moving from CrashPlan to BackBlaze, which may be useful.

 

Feelings

A few people on the Internet were a bit cranky about the news of this mild pivot / change of strategic focus from CrashPlan. I think that’s testament to CrashPlan’s quality product and competitive pricing. They’re still giving users a lot of notice about what’s happening, and offering a very accessible migration path. The business plan is still very affordable, and offers a lot of useful functionality. As Mozy discovered a few years ago, consumers are notoriously cheap, and it’s sometimes hard to pay the bills when the market is demanding ridiculously low prices for what are actually pretty important services. I have no insight into CrashPlan’s financials, and won’t pretend to understand the drive behind this. I could choose to move my account to their business plan and not have to re-seed my data again, but I’ve always had a soft spot for BackBlaze, so I’ll be moving back to them.

If you’re not backing up your data (at least locally, and ideally to more than one destination) then you should start doing that. There’s nothing worse than trying to put back the pieces of your digital life from scraps of social media scattered across the Internet. If you’ve already got things in hand – good for you. Talk to your friends about the problem too. It’s a problem that can impact anyone, at any time, and it’s something that not enough people are talking about openly. BackBlaze haven’t paid me any money to write this post; I just thought it was something people might be interested in, given the experiences I’ve had with various vendors over time.