Rubrik Basics – SLA Domains

I’ve been doing some work with Rubrik in our lab and thought it worth covering some of the basic features that I think are pretty neat. In this edition of Rubrik Basics, I thought I’d quickly cover off Service Level Agreements (SLA) Domains – one of the key tenets of the Rubrik architecture.

 

The Defaults

Rubrik CDM has three default local SLA Domains. Of course, they’re named after precious metals. There’s something about Gold that people seem to understand better than calling things Tier 0, 1 and 2. The defaults are Gold, Silver, and Bronze. The problem, of course, is that people start to ask for Platinum because they’re very important. The good news is you can create SLA Domains and call them whatever you want. I created one called Adamantium. Snick snick.

Note that these policies have the archival policy and the replication policy disabled, don’t have a Snapshot Window configured, and do not set a Take First Full Snapshot time. I recommend you leave the defaults as they are and create some new SLA Domains that align with what you want to deliver in your enterprise.

 

Service Level Agreement

There are two components to the SLA Domain. The first is the Service Level Agreement, which defines a number of things, including the frequency of snapshot creation and their retention. Note that you can’t go below an hour for your snapshot frequency (unless I’ve done something wrong here). You can go berserk with retention though. Keep those “kitchen duty roster.xls” files for 25 years if you like. Modern office life can be gruelling at times.

A nice feature is the ability to configure a Snapshot Window. The idea is that you can enforce time periods where you don’t perform operations on the systems being protected by the SLA Domain. This is handy if you’ve got systems that run batch processing or just need a little time to themselves every day to reflect on their place in the world. Every system needs a little time every now and then.

If you have a number of settings in the SLA, the Rubrik cluster creates snapshots to satisfy the smallest frequency that is specified. If the Hourly rule has the smallest frequency, it works to that. If the Daily rule has the smallest frequency, it works to that, and so on. Snapshot expiration is determined by the rules you put in place combined with their frequency.
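To make that scheduling behaviour concrete, here’s a rough Python sketch of how an SLA Domain’s rules might resolve into an effective snapshot interval, with the Snapshot Window treated as the allowed period for operations. The rule names, window semantics, and data structures are mine for illustration, not Rubrik’s API.

```python
from datetime import datetime, timedelta

# Hypothetical SLA rules: a frequency and retention per rule. The cluster
# snapshots at the smallest frequency defined across all rules.
rules = {
    "hourly": {"frequency": timedelta(hours=4), "retention": timedelta(days=3)},
    "daily":  {"frequency": timedelta(days=1),  "retention": timedelta(days=30)},
}

def snapshot_interval(rules):
    """The effective snapshot interval is the smallest frequency of any rule."""
    return min(r["frequency"] for r in rules.values())

def in_snapshot_window(now, start_hour, end_hour):
    """True if 'now' falls inside the allowed snapshot window."""
    return start_hour <= now.hour < end_hour

def snapshot_due(now, last_snapshot, rules, window=(0, 24)):
    """Snapshot when the interval has elapsed and we're inside the window."""
    due = now - last_snapshot >= snapshot_interval(rules)
    return due and in_snapshot_window(now, *window)
```

With the rules above, the cluster would work to the four-hourly rule, and a configured window would simply suppress snapshots outside the allowed hours.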

 

Remote Settings

The second page of the Create SLA Domain window is where you can configure the remote settings. I wrote an article on setting up Archival Locations previously – this is where you can take advantage of that. One of the cool things about Rubrik’s retention policy is that you can choose to send a bunch of stuff to an off-site location and keep, say, 30 days of data on Brik. The idea is that you don’t then have to invest in a tonne of Briks, so to speak, to satisfy your organisation’s data protection retention policy.
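As a rough illustration of that split, here’s a toy Python function deciding whether a given snapshot copy lives on the Brik or only in the archival location. The 30-day threshold and function names are illustrative, not Rubrik defaults.

```python
from datetime import timedelta

LOCAL_RETENTION = timedelta(days=30)  # illustrative threshold, not a Rubrik default

def placement(snapshot_age, local_retention=LOCAL_RETENTION):
    """Recent snapshots stay on the Brik; older ones live only in the archive."""
    return "local" if snapshot_age <= local_retention else "archive"

def tier_report(snapshot_ages):
    """Count how many copies land on each tier for a set of snapshot ages."""
    report = {"local": 0, "archive": 0}
    for age in snapshot_ages:
        report[placement(age)] += 1
    return report
```

The point being that long retention doesn’t have to mean long rows of appliances: only the recent copies consume local capacity.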

 

Thoughts

If you’ve had the opportunity to test-drive Rubrik’s offering, you’ll know that everything about it is pretty simple. From deployment to ongoing operation, there aren’t a whole lot of nerd knobs to play with. It nonetheless does the job of protecting the workloads you point it at. A lot of the complexity normally associated with data protection is masked by a fairly simple model that will hopefully make data protection a little more appealing for the average Joe or Josie responsible for infrastructure operations.

Rubrik, and a number of other solution vendors, are talking a lot about service levels and policy-driven data protection. The idea is that you can protect your data based on a service catalogue type offering rather than the old style of periodic protection that was offered with little flexibility (“We back up daily, we keep it 90 days, and sometimes we keep the monthly tape for longer”). This strikes me as an intuitive way to deliver data protection capabilities, provided that your business knows what they want (or need) from the solution. That’s always the key to success – understanding what the business actually needs to stay in business. You can do a lot with modern data protection offerings. Call it SLA-based, talk about service level objectives, make t-shirts with “policy-driven” on them and hand them out to your executives. But unless you understand what’s important for your business to stay in business when there’s a problem, then it won’t really matter which solution you’ve chosen.

Chris Wahl wrote some neat posts (a little while ago) on SLAs and their challenges on the Rubrik blog that you can read here and here.

Dell EMC Announces IDPA DP4400

Dell EMC announced the Integrated Data Protection Appliance (IDPA) at Dell EMC World in May 2017. They recently announced a new addition to the lineup, the IDPA DP4400. I had the opportunity to speak with Steve Reichwein about it and thought I’d share some of my thoughts here.

 

The Announcement

Overview

One of the key differences between this offering and previous IDPA products is the form factor. The DP4400 is a 2RU appliance (based on a PowerEdge server) with the following features:

  • Capacity starts at 24TB, growing in increments of 12TB, up to 96TB useable. The capacity increase is done via licensing, so there’s no additional hardware required (who doesn’t love the golden screwdriver?)
  • Search and reporting is built in to the appliance
  • There are Cloud Tier (ECS, AWS, Azure, Virtustream, etc) and Cloud DR options (S3 at this stage, but that will change in the future)
  • There’s the IDPA System Manager (Data Protection Central), along with Data Domain DD/VE (3.1) and Avamar (7.5.1)

[image courtesy of Dell EMC]

It’s hosted on vSphere 6.5, and the whole stack is referred to as IDPA 2.2. Note that you can’t upgrade the components individually.

 

Hardware Details

Storage Configuration

  • 18x 12TB 3.5″ SAS Drives (12 front, 2 rear, 4 mid-plane)
    • 12TB RAID1 (1+1) – VM Storage
    • 72TB RAID6 (6+2) – DDVE File System Spindle-group 1
    • 72TB RAID6 (6+2) – DDVE File System Spindle-group 2
  • 240GB BOSS Card
    • 240GB RAID1 (1+1 M.2) – ESXi 6.5 Boot Drive
  • 1.6TB NVMe Card
    • 960GB SSD – DDVE cache-tier

System Performance

  • 2x Intel Xeon Silver 4114 10-core 2.2GHz
  • Up to 40 vCPU system capacity
  • Memory of 256GB (8x 32GB RDIMMs, 2667MT/s)

Networking-wise, the appliance has 8x 10GbE ports using either SFP+ or Twinax. There’s a management port for initial configuration, along with an iDRAC port that’s disabled by default, but can be configured if required. If you’re using Avamar NDMP accelerator nodes in your environment, you can integrate an existing node with the DP4400. Note that it supports one accelerator node per appliance.

 

Put On Your Pointy Hat

One of the nice things about the appliance (particularly if you’ve ever had to build a data protection environment based on Data Domain and Avamar) is that you can set up everything you need to get started via a simple-to-use installation wizard.

[image courtesy of Dell EMC]

 

Thoughts and Further Reading

I talked to Steve about what he thought the key differentiators were for the DP4400. He talked about:

  • Ecosystem breadth;
  • Network bandwidth; and
  • Guaranteed dedupe ratio (55:1 vs 5:1?)

He also mentioned the capability of a product like Data Protection Central to manage an extremely large ROBO environment. He said these were some of the opportunities where he felt Dell EMC had an edge over the competition.

I can certainly attest to the breadth of ecosystem support being a big advantage for Dell EMC over some of its competitors. Avamar and DD/VE have also demonstrated some pretty decent chops when it comes to bandwidth-constrained environments in need of data protection. I think it’s great that Dell EMC are delivering these kinds of solutions to market. For every shop willing to go with relative newcomers like Cohesity or Rubrik, there are plenty who still want to buy data protection from Dell EMC, IBM or Commvault. Dell EMC are being fairly upfront about what they think this type of appliance will support in terms of workload, and they’ve clearly been keeping an eye on the competition with regards to usability and integration. People who’ve used Avamar in real life have been generally happy with the performance and feature set, and this is going to be a big selling point for people who aren’t fans of NetWorker.

I’m not going to tell you that one vendor is offering a better solution than the others. You shouldn’t be making strategic decisions based on technical specs and marketing brochures in any case. Some environments are going to like this solution because it fits well with their broader strategy of buying from Dell EMC. Some people will like it because it might be a change from their current approach of building their own solutions. And some people might like to buy it because they think Dell EMC’s post-sales support is great. These are all good reasons to look into the DP4400.

Preston did a write-up on the DP4400 that you can read here. The IDPA DP4400 landing page can be found here. There’s also a Wikibon CrowdChat on next generation data protection being held on August 15th (2am on the 16th in Australian time) that will be worth checking out.

Disaster Recovery vs Disaster Avoidance vs Data Protection

This is another one of those rambling posts that I like to write when I’m sitting in an airport lounge somewhere and I’ve got a bit of time to kill. The versus in the title is a bit misleading too, because DR and DA are both forms of data protection. And periodic data protection (PDP) is important too. But what I wanted to write about was some of the differences between DR and DA, in particular.

TL;DR – DR is not DA, and neither of them is PDP. But you need to think about all of them at some point.

 

Terminology

I want to be clear about what I mean when I say these terms, because it seems like they can mean a lot of things to different folks.

  • Recovery Point Objective – The Recovery Point Objective (RPO) is the maximum amount of data loss, measured in time, that the business can tolerate during an incident. You want this to be in minutes and hours, not days or weeks (ideally). RPO 0 is the idea that no data is lost when there’s a failure. A lot of vendors will talk about “Near Zero” RPOs.
  • Recovery Time Objective – The Recovery Time Objective (RTO) is the amount of time the business can be without the service, without incurring significant risks or significant losses. This is, ostensibly, how long it takes you to get back up and running after an event. You don’t really want this to be in days and weeks either.
  • Disaster Recovery – Disaster Recovery is the ability to recover applications after a major event (think flood, fire, DC is now a hole in the ground). This normally involves a failover of workloads from one DC to another in an orchestrated fashion.
  • Disaster Avoidance – Disaster avoidance “is an anticipatory strategy that is in place in order to prevent any such instance of data breach or losses. It is a defensive, proactive approach to keeping data safe” (I’m quoting this from a great blog post on the topic here)
  • Periodic Data Protection – This is the kind of data protection activity we normally associate with “backups”. It is usually a daily activity (or perhaps as frequent as hourly) and the data is normally used for ad-hoc data file recovery requests. Some people use their backup data as an archive. They’re bad people and shouldn’t be trusted. PDP is normally separate to DA or DR solutions.
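The first two definitions above boil down to simple arithmetic, and it’s often more useful to measure what actually happened during an incident than to quote the objectives. These helpers are just illustrative; the variable names are mine.

```python
from datetime import datetime, timedelta

def measured_rpo(incident_time, last_good_copy_time):
    """Actual data loss window: time between the last recoverable copy and
    the incident. RPO 0 means the last good copy is the moment of failure."""
    return incident_time - last_good_copy_time

def measured_rto(incident_time, service_restored_time):
    """Actual outage duration: from the incident until service is restored."""
    return service_restored_time - incident_time
```

Comparing these measured values against the agreed objectives after a test failover is a quick way to find out whether the paperwork matches reality.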

 

DR Isn’t The Full Answer

I’ve had some great conversations with customers recently about adding resilience to their on-premises infrastructure. It seems like an old-fashioned concept, but a number of organisations are only now seeing the benefits of adding infrastructure-level resilience to their platforms. The first conversation usually goes something like this:

Me: So what’s your key application, and what’s your resiliency requirement?

Customer: Oh, it’s definitely Application X (usually built on Oracle or using SAP or similar). It absolutely can’t go down. Ever. We need to have RPO 0 and RTO 0 for this one. Our whole business depends on it.

Me: Okay, it sounds like it’s pretty important. So what about your file server and email?

Customer: Oh, that’s not so important. We can recover those from overnight backups.

Me: But aren’t they used to store data for Application X? Don’t you have workflows that rely on email?

Customer: Oh, yeah, I guess so. But it will be too expensive to protect all of this. Can we change the RPO a bit? I don’t think the CFO will support us doing RPO 0 everywhere.

These requirements tend to change whenever we move from technical discussions to commercial discussions. In an ideal world, Martha in Accounting will have her home directory protected in a highly available fashion such that it can withstand the failure of one or more storage arrays (or data centres). The problem with this is that, if there are 1000 Marthas in the organisation, the cost of protecting that kind of data at scale becomes prohibitive, relative to the perceived value of the data. This is one of the ways I’ve seen “DR” capability added to an environment in the past. Take some older servers and put them in a site removed from the primary site, set up some scripts to copy critical data to that site, and hope nothing ever goes too wrong with the primary site.

There are obviously better ways of doing this, and common solutions may or may not involve block-level storage replication, orchestrated failover tools, and like for like compute at the secondary site (or perhaps you’ve decided to shut down test and development while you’re fixing the problem at the production site).

But what are you trying to protect against? The failure of some compute? Some storage? The network layer? A key application? All of these answers will determine the path you’ll need to go down. Keep in mind also that DR isn’t the only answer. You also need to have business continuity processes in place. A failover of workloads to a secondary site is pointless if operations staff don’t have access to a building to continue doing their work, or if people can’t work when the swipe card access machine is off-line, or if your Internet feed only terminates in one DC, etc.

 

I’m Avoiding The Problem

Disaster Avoidance is what I like to call the really sexy resilience solution. You can have things go terribly wrong with your production workload and potentially still have it functioning like there was no problem. This is where hardware solutions like Pure Storage ActiveCluster or Dell EMC VPLEX can really shine, assuming you’ve partnered them with applications that have the smarts built in to leverage what they have to offer. Because that’s the real key to a successful disaster avoidance design. It’s great to have synchronous replication and cache-consistency across DCs, but if your applications don’t know what to do when a leg goes missing, they’ll fall over. And if you don’t have other protection mechanisms in place, such as periodic data protection, then your synchronous block replication solution will merrily synchronise malware or corrupted data from one site to another in the blink of an eye.
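A toy model makes the point about synchronous replication: every write, good or bad, lands on both sites immediately, while the periodic backup still holds last night’s copy. The site and file names here are made up for illustration.

```python
from copy import deepcopy

# Two "sites" modelled as dictionaries, plus last night's periodic backup.
site_a = {"invoice.db": "good data"}
site_b = deepcopy(site_a)     # cache-consistent synchronous mirror
backups = [deepcopy(site_a)]  # point-in-time copy taken before the incident

def replicated_write(key, value):
    """Synchronous replication: the write hits both sites at once,
    whether it's legitimate data or corruption."""
    site_a[key] = value
    site_b[key] = value

# Malware corrupts the file; the mirror faithfully replicates it.
replicated_write("invoice.db", "ransomware garbage")
```

Both sites now hold the garbage, and only the periodic backup can get the good data back, which is why block replication alone isn’t a data protection strategy.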

It’s important to understand the failure scenarios you’re protecting against too. If you’ve deployed vSphere Metro Storage Cluster, you’ll be able to run VMs even when your whole array has gone off-line (assuming you’ve set it up properly). But this won’t necessarily prevent an outage if you lose your vSphere cluster, or the whole DC. Your data will still be protected, and you’ll be in good shape in terms of recovering quickly, but there will be an outage. This is where application-level resilience can help with availability. Remember that, even if you’ve got ultra-resilient workload protection across DCs, if your staff only have one connection into the environment, they may be left twiddling their thumbs in the event of a problem.

There’s a level of resiliency associated with this approach, and your infrastructure will certainly be able to survive the failure of a compute node, or even a bunch of disk and some compute (everything will reboot in another location). But you need to be careful not to let people think that this is something it’s not.

 

PDP, Yeah You Know Me

I mentioned problems with malware and data corruption earlier on. This is where periodic data protection solutions (such as those sold by Dell EMC, CommVault, Rubrik, Cohesity, Veeam, etc) can really get you out of a spot of bother. And if you don’t need to recover the whole VM when there’s a problem, these solutions can be a lot quicker at getting data back. The good news is that you can integrate a lot of these products with storage protection solutions and orchestration tools for a belt and braces solution to protection, and it’s not the shitshow of scripts and kludges that it was ten years ago. Hooray!

 

Final Thoughts

There’s a lot more to data protection than I’ve covered here. People like Preston have written books about the topic. And a lot of the decision making is potentially going to be out of your hands in terms of what your organisation can afford to spend (until they lose a lot of data, money, or both; then they’ll maybe change their focus). But if you do have the opportunity to work on some of these types of solutions, at least try to make sure that everyone understands exactly what they can achieve with the technologies at hand. There’s nothing worse than being hauled over the coals because some director thought they could do something amazing with infrastructure-level availability and resiliency only to have the whole thing fall over due to lack of budget. It can be a difficult conversation to have, particularly if your executives are the types of people who like to trust the folks with the fancy logos on their documents. All you can do in that case is try and be clear about what’s possible, and clear about what it will cost in time and money.

In the near future I’ll try to put together a post on various infrastructure failure scenarios and what works and what doesn’t. RPO 0 seems to be what everyone is asking for, but it may not necessarily be what everyone needs. Now please enjoy this Unfinished Business stock image.

Rubrik Basics – Cluster Upgrade Process

I’ve been doing some work with Rubrik in our lab and thought it worth covering some of the basic features that I think are pretty neat. In this edition of Rubrik Basics, I thought I’d quickly cover off software upgrades. There are two ways to upgrade the Rubrik software on your Brik – via USB or SFTP. Either way, you’ll need access to the Downloads section of the support site. If you’re a customer, you’ll have this already. If this all sounds too hard, you can raise a ticket with the support team and they’ll tunnel in and do the upgrade for you (assuming you’ve allowed remote tunnel capability).

 

USB

The good thing about using a USB drive is that you can still keep appliances in “dark” sites up to date. Before you begin you’ll need to do two things:

  • Download the compressed upgrade archive and the matching signature file from the customer portal.
  • Format a removable drive with the FAT32 file system.

You’ll need to copy the upgrade file and matching signature file to the removable drive. Plug that into any node in the cluster. Log in to that node as the admin user. Mount the USB drive by typing the following command:

mount --usb_device

Type the following command to begin the upgrade:

upgrade start

The upgrade system scans the file system for upgrade archives. If multiple archives are available, it displays a list of choices. Once you’ve finished, you can unmount the device.

umount --usb_device

 

SFTP

You can also run the upgrade via SFTP. I found the instructions on how to do that here. It’s not too dissimilar to the USB method. You’ll want to use your favourite SFTP client to upload the files to the /upgrade directory. Once you’ve done that, ssh on to the node and you can run a pre-flight check. If everything comes up Milhouse you’ll be good to go for the next step.

Using username "admin".

admin@10.xxx.yyy.131's password:

=======================

Welcome to Rubrik CLI

=======================

Type 'help' or '?' to list commands

RVM165Sxxxx55 >> upgrade start --mode prechecks_only
Do you want to use --share rubrik-4.1.2-2366.tar.gz [y/N] [N]: y
Upgrade status: Started pre-checks successfully
RVM165Sxxxx55 >> upgrade status
Current upgrade mode: prechecks_only
Current upgrade pre-checks node: RVM165Sxxxx55
Current upgrade pre-checks tarball name: --share rubrik-4.1.2-2366.tar.gz
Current upgrade pre-checks status: In progress
Current run started at: 2018-07-19 00:48:04.437000 UTC+0000

Current state (3/6): VERIFYING
Current task: Verify authenticity of new software
Current state progress: 0.0%

Finished states (2/6): ACQUIRING, COPYING
Pending states (3/6): UNTARING, DEPLOYING, PRECHECKING

Time taken so far: 18.38 seconds
Overall upgrade progress: 6.0%

To check on progress, run “upgrade status” to, erm, check on the status of the upgrade.

RVM165Sxxxx55 >> upgrade status
Last upgrade mode: prechecks_only
Last upgrade pre-checks node: RVM165Sxxxx55
Last upgrade pre-checks tarball name: --share rubrik-4.1.2-2366.tar.gz
Last upgrade pre-checks status: Completed successfully
Last run ended at: 2018-07-19 00:51:03.129000 UTC+0000
Current state: IDLE

Now you’re ready to do it for real. Run “upgrade start” to start.

RVM165Sxxxx55 >> upgrade start
Do you want to use --share rubrik-4.1.2-2366.tar.gz [y/N] [N]: y
Upgrade status: Started upgrade successfully
RVM165Sxxxx55 >> upgrade status
Current upgrade mode: normal
Current upgrade node: RVM165Sxxxx55
Current upgrade tarball name: --share rubrik-4.1.2-2366.tar.gz
Current upgrade status: In progress
Current run started at: 2018-07-19 00:52:56.882000 UTC+0000

Current state (4/9): UNTARING
Current task: Extract new software
Current state progress: 0.0%

Finished states (3/9): ACQUIRING, COPYING, VERIFYING
Pending states (5/9): DEPLOYING, PRECHECKING, PREPARING, UPGRADING, RESTARTING

Time taken so far: 22.52 seconds
Overall upgrade progress: 3.5%

It’s a pretty quick process, and eventually you’ll see this message.

RVM165Sxxxx55 >> upgrade status
Last upgrade mode: normal
Last upgrade node: RVM165Sxxxx55
Last upgrade tarball name: --share rubrik-4.1.2-2366.tar.gz
Last upgrade status: Completed successfully
Last run ended at: 2018-07-19 01:19:09.719000 UTC+0000

Current state: IDLE
RVM165Sxxxx55 >>

And you’re all done. Note that you only have to upload the data and run the process on one node in the cluster.
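If you wanted to poll that status output from a script rather than eyeballing it, a small parser over the text shown above would do the trick. This is a sketch against the sample output in this post, not a supported Rubrik interface (the REST API would be the proper way to automate this).

```python
import re

# Sample 'upgrade status' output, trimmed from the transcript above.
SAMPLE = """\
Current upgrade mode: normal
Current state (4/9): UNTARING
Current task: Extract new software
Overall upgrade progress: 3.5%
"""

def parse_status(text):
    """Pull the state machine position and overall progress out of the text."""
    status = {}
    m = re.search(r"Current state \((\d+)/(\d+)\): (\w+)", text)
    if m:
        status["step"] = int(m.group(1))
        status["steps"] = int(m.group(2))
        status["state"] = m.group(3)
    m = re.search(r"Overall upgrade progress: ([\d.]+)%", text)
    if m:
        status["progress"] = float(m.group(1))
    return status
```

You could wrap this in a loop over ssh output and alert when the state returns to IDLE, which saves running “upgrade status” by hand every few minutes.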

Random Short Take #6

Welcome to the sixth edition of the Random Short Take. Here are a few links to a few things that I think might be useful, to someone.

Rubrik CDM 4.1.1 – A Few Notes

Here are a few random notes on things in Rubrik’s Cloud Data Management (CDM) 4.1.1-p4-2319 that I’ve come across in my recent testing in the lab. There’s not enough in each item to warrant a full post, hence the “few notes” format. Note that some of these things have been around for a while, I just wanted to note the specific version of Rubrik CDM I’m working with.

 

Guest OS Credentials

Rubrik uses Guest OS credentials for access to a VM’s operating system. When you add VM workload to your Rubrik environment, you may see the following message in the logs.

Note that it’s a warning, not an error. You can still back up the VM, just not to the level you might have hoped for. If you want to do a direct restore on a Linux guest, you’ll need an account with write access. For Windows, you’ll need something with administrative access. You could achieve this with either local or domain administrator accounts. This isn’t recommended though, and Rubrik suggests “a credential for a domain level account that has a small privilege set that includes administrator access to the relevant guests”. You could use a number of credentials across multiple groups of machines to reduce (to a small extent) the level of exposure, but there are plenty of CISOs and Windows administrators who are not going to like this approach.

So what happens if you don’t provide the credentials? My understanding is that you can still do file system consistent snapshots (provided you have a current version of VMware Tools installed), you just won’t be able to do application-consistent backups. For your reference, here’s the table from Rubrik discussing the various levels of available consistency.

Inconsistent
A backup that consists of copying each file to the backup target without quiescence. File operations are not stopped. The result is inconsistent time stamps across the backup and, potentially, corrupted files.
Rubrik usage: Not provided.

Crash consistent
A point-in-time snapshot but without quiescence:

  • Time stamps are consistent
  • Pending updates for open files are not saved
  • In-flight I/O operations are not completed

The snapshot can be used to restore the virtual machine to the same state that a hard reset would produce.
Rubrik usage: Provided only when:

  • The guest OS does not have VMware Tools
  • The guest OS has an out-of-date version of VMware Tools
  • The VM’s Application Consistency was manually set to Crash Consistent in the Rubrik UI

File system consistent
A point-in-time snapshot with quiescence:

  • Time stamps are consistent
  • Pending updates for open files are saved
  • In-flight I/O operations are completed
  • Application-specific operations may not be completed

Rubrik usage: Provided when the guest OS has an up-to-date version of VMware Tools and application consistency is not supported for the guest OS.

Application consistent
A point-in-time snapshot with quiescence and application-awareness:

  • Time stamps are consistent
  • Pending updates for open files are saved
  • In-flight I/O operations are completed
  • Application-specific operations are completed

Rubrik usage: Provided when the guest OS has an up-to-date version of VMware Tools and application consistency is supported for the guest OS.
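My reading of that consistency table boils down to a small decision function: the state of VMware Tools (and whether you’ve forced Crash Consistent in the UI) determines the best level available. This is a sketch of my interpretation, not Rubrik’s actual logic, and the status strings are my own.

```python
def consistency_level(tools_status, app_consistency_supported, forced_crash=False):
    """Best available consistency level, per my reading of the table above.
    tools_status is one of 'current', 'out-of-date', or 'absent'."""
    if forced_crash or tools_status in ("absent", "out-of-date"):
        return "crash consistent"
    if app_consistency_supported:
        return "application consistent"
    return "file system consistent"
```

The takeaway is that keeping VMware Tools current is the single biggest lever you have over the consistency of your backups.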

 

open-vm-tools

If you’re running something like Debian in your vSphere environment you may have chosen to use open-vm-tools rather than VMware’s package. There’s nothing wrong with this (it’s a VMware-supported configuration), but you’ll see that Rubrik currently has a bit of an issue with it.

It will still back up the VM, just not at the consistency level you may be hoping for. It’s on Rubrik’s list of things to fix. And VMware Tools is still a valid (and arguably preferred) option for supported Linux distributions. The point of open-vm-tools is that appliance vendors can distribute the tools with their VMs without violating licensing agreements.

 

Download Logs

It seems like a simple thing, but I really like the ability to download logs related to a particular error. In this example, I’ve got some issues with a SQL cluster I’m backing up. I can click on “Download Logs” and grab the info I need related to the SLA Activity. It’s a small thing, but it makes wading through logs to identify issues a little less painful.

Rubrik Basics – Multi-tenancy

I’ve been doing some work with Rubrik in our lab and thought it worth covering some of the basic features that I think are pretty neat. In this edition of Rubrik Basics, I thought I’d quickly cover off how to get started with the multi-tenancy feature. You can read a little about it here. And yes, I know, some of the Rubrik documentation doesn’t hyphenate the word. But this is the hill I’m dying on apparently.

 

Multi-tenancy and Role-based Access

Multi-tenancy means a lot of different things to a lot of different people. In the case of Rubrik, multi-tenancy is an extension of the RBAC scheme that enables a central organisation to delegate administrative capabilities to multiple tenant organisations. That is, you’ll likely have one global administrator (probably the managed service provider) looking after the Rubrik environment and carving it up for use by a number of different client organisations (tenants).

Each tenant organisation has a subset of administrative privileges defined by the global organisation. A tenant’s administrative privileges are also specified on a per-organisation basis. The administrators of the tenant can then go and do their thing independently of the cluster administrator. Because Rubrik supports multiple Active Directory domains, you can still use AD authentication on a per-tenant basis.

 

A Rubrik cluster can have one central organisation and any number of tenant organisations. An organisation is a collection of the following elements:

  • Protected objects
  • Replication and archival targets
  • SLA Domains
  • Local users
  • Active Directory users and groups
  • Service credentials
  • Reports

 

The Impact

SLA Domains are the mechanism used to protect objects in the Rubrik environment. In the case of multi-tenancy, SLA Domains are impacted by virtue of which organisation creates them. If the SLA Domain is created outside of a tenant organisation (and assigned to that organisation), it cannot be altered by the users or AD groups of the tenant organisation. Those that are created within a tenant can be modified by that tenant.
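That ownership rule can be expressed as a one-line check: a tenant can only modify SLA Domains it created. The dictionary fields here are my own shorthand, not Rubrik’s data model.

```python
def can_modify(sla_domain, organisation):
    """An SLA Domain is only modifiable by the organisation that created it;
    domains assigned from the global org are read-only for the tenant."""
    return sla_domain["created_by"] == organisation

# Hypothetical examples: one domain assigned from the global org, one created
# by the tenant itself.
assigned = {"name": "Gold-TenantA", "created_by": "global", "assigned_to": "tenant-a"}
local = {"name": "Local-TenantA", "created_by": "tenant-a", "assigned_to": "tenant-a"}
```

So the tenant’s administrators can use the Gold domain assigned to them, but only the global administrator can change it.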

Note also that a Tenant Organisation does not inherit Guest OS Credentials from the Global Organisation. If you want to use the Guest OS Credentials of the global org you’ll need to assign those on a per-tenant basis.

 

Other Thoughts

When it comes to offering products as a service, there’s a bit more to multi-tenancy in terms of network connectivity, reporting, QoS, and other things like that. But the foundation, in my opinion, is the ability to create tenant organisations on the platform and have those remain independent of each other. The key to this is tying multi-tenancy in to your RBAC scheme to ensure that the rules of the tenancy are being observed. Once you have that working correctly, it becomes a relatively simple exercise to start to add features to the platform that can take advantage of those rules.

Rubrik introduced multi-tenancy into Rubrik CDM with 4.1, and it seems to be a pretty well thought out implementation. It’s not a feature that enterprise bods are interested in, but it’s certainly something that service providers require to be able to satisfy their customers that the right people will be touching the right stuff. I’m looking forward to testing out some more of these features in the near future.

Rubrik Basics – Role-based Access Control

I’ve been doing some work with Rubrik in our lab and thought it worth covering some of the basic features that I think are pretty neat. In this edition of Rubrik Basics, I thought I’d quickly cover off how to get started with the Role Based Access Control (RBAC) feature.

 

Roles

The concept of RBAC is not a new one. It is, however, one of the first things that companies with more than one staff member ask for when they have to manage infrastructure. Rubrik uses the concept of Roles to deliver particular access to their environment. The available roles are as follows:

  • Administrator role – Full access to all Rubrik operations on all objects;
  • End User role – For assigned objects: browse snapshots, recover files and Live Mount; and
  • No Access role – Cannot log in to the Rubrik UI and cannot make REST API calls.

The End User role has a set of privileges that align with the requirements of a backup operator role.

The privilege types and their scope are as follows:

  • Download data from backups – Data download only from assigned object types:
    • vSphere virtual machines
    • Hyper-V virtual machines
    • AHV virtual machines
    • Linux & Unix hosts
    • Windows hosts
    • NAS hosts
    • SQL Server databases
    • Managed volumes
  • Live Mount or Export virtual machine snapshot – Live Mount or Export a snapshot only from specified virtual machines and only to specified target locations.
  • Export data from backups – Export data only from specified source objects.
  • Restore data over source – Write data from backups to the source location, overwriting existing data, only for assigned objects, and only when ‘Allow overwrite of original’ is enabled for the user account or group account.
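The way these roles scope access to objects can be sketched as a small access check. To be clear, this is a toy model of my own making: the role names come from the Rubrik UI, but the privilege names, dictionary structure, and `is_allowed()` helper are invented for illustration and are not Rubrik’s API.

```python
# Toy model of Rubrik's three roles and object-scoped privileges.
# Role names come from the UI; everything else is illustrative.
ROLE_PRIVILEGES = {
    "administrator": {"*"},  # full access to all operations on all objects
    "end_user": {"browse_snapshots", "recover_files", "live_mount"},
    "no_access": set(),      # cannot log in or make REST API calls
}

def is_allowed(role, privilege, obj, assigned_objects):
    """Check whether a role may perform a privilege on a given object."""
    privileges = ROLE_PRIVILEGES.get(role, set())
    if "*" in privileges:
        return True  # Administrators are not object-scoped
    # End Users only get their privileges on explicitly assigned objects
    return privilege in privileges and obj in assigned_objects
```

So an End User assigned `vm-01` can Live Mount it, but gets knocked back on `vm-02`, while an Administrator can do anything to anything.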

The good news is that Rubrik supports local authentication as well as Active Directory. You can then tie these roles to particular groups within your organisation. You can have more than one domain that you use for authentication, but I’ll cover that in a future post on multi-tenancy.

I don’t believe that the ability to create custom roles is present (at least in the UI). I’m happy for people from Rubrik to correct me if I’ve gotten that wrong.

 

Configuration

Configuring access to the Rubrik environment for users is fairly straightforward. In this example I’ll be giving my domain account access to the Brik as an administrator. To get started, click on the Gear icon in the UI and select Users (under Access Management).

I don’t know who Grant Authorization is in real life, but he’s the guy who can help you out here (my dad jokes are both woeful and plentiful – just ask my children).

In this example I’m granting access to a domain user.

This example also assumes that you’ve added the domain to the appliance in the first place (and note that you can add multiple domains). In the dropdown box, select the domain the user resides in.

You can then search for a name. In this example, the user I’m searching for is danf. Makes sense, if you think about it.

Select the user account and click on Continue.

By default users are assigned No Access. If you have one of these accounts, the UI will let you enter a username and password and then kick you back to the login screen.

If I assign the user the End User role, I can assign access to various objects in the environment. Note that I can also provide access to overwrite original files if required. This is disabled by default.

In this example, however, I’m providing my domain account with full access via the Administrator role. Click on Assign to continue.

I can now log in to the Rubrik UI with my domain user account and do things.

And that’s it. In a future post I’ll be looking into multi-tenancy and the fun things you can do with organisations and multiple access levels.

Rubrik Basics – Archival Locations

I’ve been doing some work with Rubrik in our lab and thought it worth covering some of the basic features that I think are pretty neat. In this edition of Rubrik Basics, I thought I’d quickly cover off how to get started with the Archival Locations feature. You can read the datasheet here.

 

Rubrik and Archiving Policies

So what can you do with Archival Locations? Well, the idea is that you can copy data to another location for safe-keeping. Normally this data will live in that location for a longer period than it will in the on-premises Brik you’re using. You might, for example, keep data on your appliance for 30 days, and have archive data living in a cloud location for another 2 years.
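The 30-day / 2-year split described above can be sketched as a simple tiering check. This is purely illustrative: the function and parameter names are mine, not Rubrik’s, and it assumes the archival copy exists from day one (as it would with Instant Archive, covered below).

```python
def snapshot_locations(age_days, local_retention_days, archive_retention_days):
    """Return which storage tiers still hold a snapshot of a given age."""
    tiers = []
    if age_days <= local_retention_days:
        tiers.append("brik")      # still within local (on-appliance) retention
    if age_days <= archive_retention_days:
        tiers.append("archive")   # archival copy is retained for longer
    return tiers
```

With 30 days on the Brik and roughly two years (760 days) in the archive, a 10-day-old snapshot lives in both tiers, a 200-day-old snapshot only in the archive, and an 800-day-old snapshot is gone entirely.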

 

Archival Location Support

Rubrik supports a variety of Archival Locations, including:

  • Public Cloud: Amazon Web Services S3, S3-IA, S3-RRS and Glacier; Microsoft Azure Blob Storage LRS, ZRS and GRS; Google Cloud Platform Nearline, Coldline, Multi-Regional and Regional; (also includes support for Government Cloud Options in AWS and Azure);
  • Private Cloud (S3 Object Store): Basho Riak, Cleversafe, Cloudian, EMC ECS, Hitachi Content Platform, IIJ GIO, Red Hat Ceph, Scality;
  • NFS: Any NFS v3 Compliant Target; and
  • Tape: All Major Tape Vendors via QStar.

What’s cool is that multiple, active archival locations can be configured for a Rubrik cluster. You can then select an archival location when an SLA policy is created or edited. This is particularly useful when you have a number of different tenants hosted on the same Brik.

 

Setup

To set up an Archival Location, click on the “Gear” icon in the Rubrik interface (in this example I’m using Rubrik CDM 4.1) and select “Archival Locations”.

Click on the + sign.

You can then choose the archival type, selecting from Amazon S3 (or Glacier), Azure, Google Cloud Platform, NFS or Tape (via QStar). In this example I’m setting up an Amazon S3 bucket.

You then need to select the Region and Storage Class, and provide your AWS Access Key, Secret Key and S3 Bucket.

You also need to choose the encryption type. I’m not using an external KMS in our lab, so I’ve used OpenSSL to generate a key using the following command.
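Something along these lines works; note that the key size and file name here are my choices, so check Rubrik’s documentation for the exact key requirements of your CDM version.

```shell
# Generate a 2048-bit RSA private key and write it out in PEM format.
# Key size and file name are illustrative choices, not Rubrik requirements.
openssl genrsa -out rubrik_archive_key.pem 2048

# Print the key so its contents can be pasted into the Rubrik UI.
cat rubrik_archive_key.pem
```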

Once you run that command, paste the contents of the PEM file.

Once you’ve added the location, you’ll see it listed, along with some high level statistics.

Once you have an Archival Location configured, you can add it to existing SLA Domains, or use it when you create a new SLA Domain.

Instant Archive

The Instant Archive feature can also be used to immediately queue a task to copy a new snapshot to a specified archival location. Note that the Instant Archive feature does not change the amount of time that a snapshot is retained locally on the Rubrik cluster. The Retention On Brik setting determines how long a snapshot is kept on the Rubrik cluster.

 

Thoughts

Rubrik’s Data Archival is flexible as well as simple to use. It’s easy to set up and works as promised. There’s a bunch of stuff happening within the Rubrik environment that means you can access protection data across multiple locations as well, so you might find that a combination of a Rubrik Brik and some cheap and deep NFS storage is a good option for storing backup data for an extended period of time. You might also think about using this feature as a way to do data mobility or disaster recovery, depending on the type of disaster you’re trying to recover from.

Rubrik Cloud Data Management 4.2 Announced – “Purpose Built for the Hybrid Cloud”

Rubrik recently announced version 4.2 of their Cloud Data Management platform and I was fortunate enough to sit in on a sneak preview from Chris Wahl, Kenneth Hui, and Rebecca Fitzhugh. Billed as “Purpose Built for the Hybrid Cloud”, this release includes a whole bunch of new features. I’ve included a summary below, and will dig into some of the more interesting ones.

Expanding the Ecosystem:

  • AWS Native Protection (EC2 Instances)
  • VMware vCloud Director Integration
  • Windows Full Volume Protection
  • AIX & Solaris Support

Core Features & Services:

  • Rubrik Envoy
  • Rubrik Edge on Hyper-V
  • Network Throttling
  • VLAN Tagging (GUI)
  • SNMP
  • Multi-File restore
  • Reader-Writer Archival Locations

General Enhancements:

  • SQL Server FILESTREAM
  • SQL Server Log Shipping
  • NAS Native API Integration
  • NAS SMB Scan Enhancements
  • AHV VSS snapshot
  • Proxy per Archival Location

 

AWS Native Protection (EC2 Instances)

One of the key parts of this announcement is cloud-native protection, delivered specifically with AWS EBS Snapshots. The cool thing is you can have Rubrik running on-premises or sitting in the cloud.

Use cases?

  • Automate manual processes – use policy engine to automate lifecycle management of snapshots, including scheduling and retention
  • Rapid recovery from failure – eliminate manual steps for instance and file recovery
  • Replicate instances in other availability zones and regions – launch instances in other AZs and Regions when needed using snapshots
  • Consolidate data management – one solution to manage data across on-premises DCs and public clouds

EBS snapshots have traditionally been a manual process to deal with. Now there’s no need to mess with crontab or various AWS tools to get the snaps done. It also aligns with Rubrik’s vision of having a single tool to manage both cloud and on-premises workloads. The good news is that files in snapshots are indexed and searchable, so individual file recovery is also pretty simple.
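To give a sense of what a policy engine takes off your hands, here’s the sort of cron-driven pruning logic people have historically bolted together themselves. This is a minimal sketch of retention enforcement in general, not Rubrik’s implementation; all the names are mine.

```python
from datetime import datetime, timedelta

def snapshots_to_delete(snapshots, retention_days, now=None):
    """Return IDs of snapshots that have aged past the retention period.

    `snapshots` maps snapshot ID -> creation time. A scheduler would run
    this periodically and delete whatever it returns.
    """
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=retention_days)
    return [sid for sid, created in sorted(snapshots.items()) if created < cutoff]
```

With Rubrik’s SLA Domains, this scheduling and retention bookkeeping is handled by the policy engine rather than by a pile of scripts.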

 

VMware vCloud Director Integration

It may or may not be a surprise to learn that VMware vCloud Director is still in heavy use with service providers, so news of Rubrik integration with vCD shouldn’t be too shocking. Rubrik spent a little time talking about some of the “Foundational Services” they offer, including:

  • Backup – Hosted or Managed
  • ROBO Protection
  • DR – Mirrored Site service
  • Archival – Hosted or Managed

The value they add, though, is in the additional services, or what they term “Next Generation premium services”. These include:

  • Dev / Test
  • Cloud Archival
  • DR in Cloud
  • Near-zero availability
  • Cloud migration
  • Cloud app protection

Self-service is the key

To be able to deliver a number of these services, particularly in the service provider space, there’s been a big focus on multi-tenancy.

  • Operate multi-customer configuration through a single cluster
  • Logically partition cluster into tenants as “Organisations”
  • Offer self-service management for each organisation
  • Centrally control, monitor, and report with aggregated data

Support for vCD (version 8.10 and later) is as follows:

  • Auto discovery of vCD hierarchy
  • SLA based auto protect at different levels of vCD hierarchy:
    • vCD Instance
    • vCD Organization
    • Org VDC
    • vApp
  • Recovery workflows:
    • Export and Instant recovery
    • Network settings
    • File restore
  • Self-service using multi-tenancy
  • Reports for vCD organization

 

Windows Full Volume Protection

Rubrik have always had fileset-based protection, and with Windows hosts they’re now offering the ability to protect a volume at a time, e.g. the C:\ volume. These protection jobs incorporate additional information such as partition type, volume size, and permissions.

[image courtesy of Rubrik]

There’s also a Rubrik-created package for building bootable Microsoft Windows Preinstallation Environment (WinPE) media to restore the OS as well as provide disk partition information. There are multiple options for customers to recover entire volumes in addition to system state, including Master Boot Record (MBR) and GUID Partition Table (GPT) information, and the OS.

Why would you? There are a few use cases, including:

  • P2V – remember those?
  • Physical RDM mapping compatibility – you might still have those about, because, well, reasons
  • Physical Exchange servers and log truncation
  • Cloud mobility (AWS to Azure or vice versa)

So now you can select volumes or filesets, and you can store the volumes in a Volume Group.

[image courtesy of Rubrik]

 

AIX and Solaris Support

Wahl was reluctant to refer to AIX and Solaris as “traditional” DC applications, because it all makes us feel that little bit older. In any case, AIX support was already available in the 4.1.1 release, and 4.2 adds Oracle Solaris support. There are a few restore scenarios that come to mind, particularly when it comes to things like migration. These include:

  • Restore (in place) – Restores the original AIX server at the original path or a different path.
  • Export (out of place) – Allows exporting to another AIX or Linux host that has the Rubrik Backup Service (RBS) running.
  • Download Only – Ability to download files to the machine from which the administrator is running the Rubrik web interface.
  • Migration – Any AIX application data can be restored or exported to a Linux host, or vice versa from Linux to an AIX host. In some cases, customers have leveraged this capability for OS migrations, removing the need for other tools.

 

Rubrik Envoy

Rubrik Envoy is a trusted ambassador (its certificate is issued by the Rubrik cluster) that represents the service provider’s Rubrik cluster in an isolated tenant network.

[image courtesy of Rubrik]

 

The idea is that service providers are able to offer backup-as-a-service (BaaS) to co-hosted tenants, enabling self-service SLA management with on-demand backup and recovery. The cool thing is you don’t have to deploy the Virtual Edition into the tenant network to get the connectivity you need. Here’s how it comes together:

  1. Once a tenant subscribes to BaaS from the SP, an Envoy virtual appliance is deployed on the tenant’s network.
  2. The tenant may log into Envoy, which will route the Rubrik UI to the MSP’s Rubrik cluster.
  3. Envoy will only allow access to objects that belong to the tenant.
  4. The Rubrik cluster works with the tenant VMs, via Envoy, for all application quiescence, file restore, point-in-time recovery, etc.
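Step 3 above — Envoy only exposing the tenant’s own objects — boils down to a tenant-scoped filter in front of the cluster’s inventory. This is a toy illustration of the concept; the dict shape and field names are mine, not anything from Rubrik’s API.

```python
def visible_objects(tenant_id, objects):
    """Filter a cluster's object list down to a single tenant's view.

    Envoy-style scoping: anything owned by another tenant simply
    never appears in the response.
    """
    return [obj for obj in objects if obj.get("tenant") == tenant_id]
```

A tenant logging in via their Envoy appliance would only ever see their own slice of the shared cluster, which is what makes co-hosted BaaS palatable.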

 

Network Throttling

Network throttling is something that a lot of customers were interested in. There’s not an awful lot to say about it, but the options are No, Default and Scheduled. You can use it to configure the amount of bandwidth used by archival and replication traffic, for example.
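The precedence between those three options can be sketched as follows — a scheduled window wins, otherwise the default cap applies, otherwise traffic is unthrottled. The function, parameter names, and units are all illustrative assumptions, not Rubrik’s configuration schema.

```python
def effective_limit(hour, scheduled=None, default_mbps=None):
    """Return the bandwidth cap (Mbit/s) in effect for a given hour.

    `scheduled` maps (start_hour, end_hour) windows to caps; None
    means no throttling at all.
    """
    for (start, end), mbps in (scheduled or {}).items():
        if start <= hour < end:
            return mbps  # a scheduled window takes precedence
    return default_mbps   # fall back to the default cap, or unthrottled
```

So with a 100 Mbit/s business-hours window and a 500 Mbit/s default, archival traffic at 10am is capped at 100, and at 8pm it gets 500.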

 

Core Feature Improvements

There are a few other nice things that have been added to the platform as well.

  • Rubrik Edge is now available on Hyper-V
  • VLAN tagging was supported in 4.1 via the CLI; GUI configuration is now available
  • SNMPv2c support (I loves me some SNMP)
  • GUI support for multi-file recovery

 

General Enhancements

A few other enhancements have been added, including:

  • SQL Server FILESTREAM fully supported now (I’m not shouting, it’s just how they like to write it);
  • SQL Server Log Shipping; and
  • Per-Archive Proxy Support.

Rubrik were also pretty happy to announce NAS Vendor Native API Integration with NetApp and Isilon.

  • Network Attached Storage (NAS) vendor-native API integration.
    • NetApp ONTAP (ONTAP API v8.2 and later), supporting cluster-mode for NetApp filers.
    • Dell EMC Isilon OneFS (v8.x and later) + ChangeList (v7.1.1 and later).
  • NAS vendor-native API integration further enhances Rubrik’s existing capability to take volume-based snapshots.
  • This feature also enhances overall fileset backup performance.

NAS SMB Scan Enhancements have also been included, providing a 10x performance improvement (according to Rubrik).

 

Thoughts

Point releases aren’t meant to be massive undertakings, but companies like Rubrik are moving at a fair pace and adding support for products to try and meet the requirements of their customers. There’s a fair bit going on in this one, and the support for AWS snapshots is kind of a big deal. I really like Rubrik’s focus on multi-tenancy, and they’re slowly opening doors to some of the enterprises still using the likes of AIX and Solaris. This has previously been the domain of the more traditional vendors, so it’s nice to see progress has been made. Not all of the world runs on containers or in vSphere VMs, so delivering this capability will only help Rubrik gain traction in some of the more conservative shops around town.

Rubrik are working hard to address some of the “enterprise-y” shortcomings or gaps that may have been present in earlier iterations of their product. It’s great to see this progress over such a short period of time, and I’m looking forward to hearing about what else they have up their sleeve.