Rubrik Basics – Multi-tenancy – Create An Organization

I covered multi-tenancy with Rubrik some time ago, but things have certainly advanced since then. One of the useful features of Rubrik CDM (and something that’s really required for Envoy to make sense) is the Organizations feature. This is the way in which you can use a combination of LDAP sources, roles, and tenant workloads to deliver a packaged multi-tenancy feature to organisations either within or external to your company. In this article I’ll run through the basics of setting up an Organization. If you’d like to see how it can be applied in a practical sense, it’s worth checking out my post on deploying Rubrik Envoy.

It starts, as these things often do, by clicking on the gear in the Rubrik CDM UI. Select Organizations (located under Access Management).

Click on Create Organization.

You’ll want to give it a name, and think about whether you want to give your tenant the ability to do per-tenant access control.

You’ll need to define what the Org Admin Role is able to do, and you might like to get fancy and add in some additional roles with more granular capabilities.

At this point you’ll get to select which users you want in your Organization.

Hopefully you’ve added the tenant’s LDAP source to your environment already.

And it’s worth thinking about what users and / or groups you’ll be using from that LDAP source to populate your Organization’s user list.

You’ll also need to consider which role will be assigned to these users (rather than relying on Global Admins to do things for tenants).

You can then assign particular resources, including VMs, vApps, and so forth.

You can also select what SLA Domains the Organization has access to, as well as Archival locations, and replication targets and sources. This becomes important in a multi-tenanted environment as you don’t want folks putting data where they shouldn’t.

At this point you can download the Rubrik Envoy OVA, deploy it, and connect it to your Organization.

And then you’re done. Well, normally you would be, but I didn’t select a whole lot of objects in this example. Click Finish and you’re on your way.

Assuming you’ve assigned your roles correctly, when your tenant logs in, he or she will only be able to see and control resources that belong to that particular Organization.
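If you’d rather script this than drive the wizard, CDM exposes Organizations via its REST API as well. The endpoint below comes from the internal API, so treat the path as an assumption that may shift between CDM versions (the cluster name and credentials are obviously placeholders):

curl -k -u admin -X GET "https://rubrik.lab.local/api/internal/organization"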

 

Rubrik Basics – Add LDAP

I thought I’d run through the basics of adding LDAP support to a Rubrik Edge cluster. I’ve written previously about multi-tenancy considerations with Rubrik, and thought it might be useful to start down that path in the lab to demonstrate some of the process. It’s not a terribly difficult task, but I did find a little trial and error was required. I suspect that’s because of some environmental issues on my side, rather than the Rubrik side of things. Anyway, let’s get started. Click on the Gear / Settings icon in the Web UI. Then select Users under Access Management.

Click on the LDAP Servers tab and click on “Add LDAP Server”.

You’ll be presented with the Add LDAP Server workflow window.

I messed this up a few times in my environment, but this is what worked for me.

Domain name: domainname.com.au

Base DN: dc=domainname,dc=com,dc=au

Bind DN or Username: dan.frith@domainname.com.au

Password: *******

Click Next to continue.

I pointed to one of the Active Directory servers in the environment. This went better once I’d added the domain to the cluster’s DNS search domains. The port I used was 389, though you’ll see variations on that (636 for LDAPS, for example) in various articles across the Internet.
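As an aside, if you want to sanity-check the Base DN and bind credentials before punching them into the UI, the OpenLDAP client tools are handy for that. A rough sketch (the domain controller name here is made up):

ldapsearch -H ldap://dc01.domainname.com.au:389 \
  -D "dan.frith@domainname.com.au" -W \
  -b "dc=domainname,dc=com,dc=au" "(sAMAccountName=dan.frith)" cn

If that query returns your account, the same details should work in the Rubrik workflow.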

If the LDAP connection is successful, you then have the option to enable MFA integration.

Toggling the button will give you the option to add two-step verification. There are some articles on the Internet that provide further guidance on that, and this video is quite useful too.

Once you’ve added your directory source, it’s time to assign roles to a user.

Click on Assign Roles, then drop down the directory you’d like to search in.

In this example, there’s the local user directory, and the domain source that I added previously.

If I search for people called Dan in this directory, it’s not too hard to find my username.

I can then assign a role to my directory username. By default, the configured roles are Administrator and ReadOnlyAdmin.

Now my AD account is listed under the users and I can log in to CDM using my domain credentials.

And that’s it. If you want to read more about Rubrik and AD integration, including some neat automation, check out this article from Frederic Lhoest.

Rubrik Basics – Cluster Shutdown

It’s been a little while since I’ve done any hands-on work with Rubrik, but I recently had to jump on a cluster and power it down so it could be relocated. The process is simple (particularly if you have the correct credentials), but I’m noting it here more for my own reference than anything else. It’s important to note that if you’re running a version of CDM pre-5.1 and have the cluster shut down for longer than 24 hours, it will be sad when it powers back up and you’ll need support’s help to get it healthy again. Note also that 5.1 introduced a new command line structure (support site registration required), so the command is slightly different. This page also has a bunch of useful, publicly visible information.

If you’re not in the DC with the cluster, ssh to one of the nodes to run the commands. For pre-5.1 environments, run

poweroff_cluster

For 5.1 and newer environments, run

cluster poweroff_cluster

Type yes to continue and you should be good to go.
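Putting it all together, a pre-5.1 shutdown from a remote shell looks something like this (the node name is hypothetical):

ssh admin@rubrik-node-01.lab.local
poweroff_cluster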

Here’s a picture of one I prepared earlier.

Exciting? Not really. But useful to know when people are threatening to power off equipment regardless of the state it’s in.

Cohesity Basics – Configuring An External Target For Cloud Archive

I’ve been working in the lab with Pure Storage’s ObjectEngine and thought it might be nice to document the process to set it up as an external target for use with Cohesity’s Cloud Archive capability. I’ve written in the past about Cloud Tier and Cloud Archive, but in that article I focused more on the Cloud Tier capability. I don’t want to sound too pretentious, but I’ll quote myself from the other article: “With Cloud Archive you can send copies of snapshots up to the cloud to keep as a copy separate to the backup data you might have replicated to a secondary appliance. This is useful if you have some requirement to keep a monthly or six-monthly copy somewhere for compliance reasons.”

I would like to be clear that this process hasn’t been blessed or vetted by Pure Storage or Cohesity. I imagine they are working on delivering a validated solution at some stage, as they have with Veeam and Commvault. So don’t go out and jam this in production and complain to me when Pure or Cohesity tell you it’s wrong.

There are a couple of ways you can configure an external target via the Cohesity UI. In this example, I’ll do it from the dashboard, rather than during the protection job configuration. Click on Protection and select External Target.

You’ll then be presented with the New Target configuration dialogue.

In this example, I’m calling my external target PureOE, and setting its purpose as Archival (as opposed to Tiering).

The Type of target is “S3 Compatible”.

Once you select that, you’ll be asked for a bunch of S3-type information, including Bucket Name and Access Key ID. This assumes you’ve already created the bucket and configured appropriate security on the ObjectEngine side of things.

Enter the required information. I’ve de-selected compression and source side deduplication, as I want the data reduction to be done by the ObjectEngine. I’ve also disabled encryption, as I’m guessing encrypted data would have an impact on the ObjectEngine’s data reduction as well. I need to confirm that with my friends at Pure. I’m also using the fully qualified domain name of the ObjectEngine as the endpoint.
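Before you click Register, you can optionally verify the bucket and credentials independently of Cohesity using the AWS CLI, which happily talks to any S3-compatible endpoint. A quick sketch, with a hypothetical bucket name and endpoint:

AWS_ACCESS_KEY_ID=<access key> AWS_SECRET_ACCESS_KEY=<secret key> \
  aws s3 ls s3://cohesity-archive --endpoint-url https://objectengine.lab.local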

Once you click on Register, you’ll be presented with a summary of the configuration.

You’re then right to use this as an external target for the Archival parts of protection jobs within your Cohesity environment. Once you’ve run a few protection jobs, you should start to see files within the test bucket on the ObjectEngine. Don’t forget that, as far as I’m aware, it’s still very difficult (impossible?) to remove external targets from the Cohesity Data Platform, so don’t get too carried away with configuring a bunch of different test targets thinking that you can remove them later.

*Update – 2020.09.07* Thanks to Rhys Hammond for letting me know there is now a way to remove external targets. If you have access to the Cohesity Support portal, look for article 000003358. Spoiler alert – there’s some gflag stuff to do via iris_cli, and it seems to work with Cohesity 6.3.1e and 6.4.0.

Cohesity Basics – Excluding VMs Using Tags – Real World Example

I’ve written before about using VM tags with Cohesity to exclude VMs from a backup. I wanted to write up a quick article using a real world example in the test lab. In this instance, we had someone deploying 200 VMs over a weekend to test a vendor’s storage array with a particular workload. The problem was that I had Cohesity set to automatically protect any new VMs that are deployed in the lab. This wasn’t a problem from a scalability perspective. Rather, the problem was that we were backing up a bunch of test data that didn’t dedupe well and didn’t need to be protected by what are ultimately finite resources.

As I pointed out in the other article, creating tags for VMs and using them as a way to exclude workloads from Cohesity is not a new concept, and is fairly easy to do. You can also apply the tags in bulk using the vSphere Web Client if you need to. But a quicker way to do it (and something that can be done post-deployment) is to use PowerCLI to search for VMs with a particular naming convention and apply the tags to those.

Firstly, you’ll need to log in to your vCenter.

PowerCLI C:\> Connect-VIServer vCenter

In this example, the test VMs are deployed with the prefix “PSV”, so this makes it easy enough to search for them.

PowerCLI C:\> get-vm | where {$_.name -like "PSV*"} | New-TagAssignment -Tag "COH-NoBackup"

This assumes that the tag already exists on the vCenter side of things, and you have sufficient permissions to apply tags to VMs. You can check your work with the following command.

PowerCLI C:\> get-vm | where {$_.name -like "PSV*"} | Get-TagAssignment
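If the tag doesn’t exist yet, you can create it (and a category to keep it in) from PowerCLI too. Something like this, assuming you want a category called “Backup”:

PowerCLI C:\> New-TagCategory -Name "Backup" -Cardinality Multiple -EntityType VirtualMachine
PowerCLI C:\> New-Tag -Name "COH-NoBackup" -Category "Backup"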

One thing to note. If you’ve updated the tags of a bunch of VMs in your vCenter environment, you may notice that the objects aren’t immediately excluded from the Protection Job on the Cohesity side of things. The reason for this is that, by default, Cohesity only refreshes vCenter source data every 4 hours. One way to force the update is to manually refresh the source vCenter in Cohesity. To do this, go to Protection -> Sources. Click on the ellipsis on the right-hand side of your vCenter source you’d like to refresh, and select Refresh.

You’ll then see that the tagged VMs are excluded in the Protection Job. Hat tip to my colleague Mike for his help with PowerCLI. And hat tip to my other colleague Mike for causing the problem in the first place.

Cohesity Basics – Excluding VMs Using Tags

I’ve been doing some work with Cohesity in our lab and thought it worth covering some of the basic features that I think are pretty neat. In this edition of Cohesity Basics, I thought I’d quickly cover off how to exclude VMs from protection jobs based on assigned tags. In this example I’m using version 6.0.1b_release-20181014_14074e50 (a “feature release”).

 

Process

The first step is to find the VM in vCenter that you want to exclude from a protection job. Right-click on the VM and select Tags & Custom Attributes. Click on Assign Tag.

In the Assign Tag window, click on the New Tag icon.

Assign a name to the new tag, and add a description if that’s what you’re into.

In this example, I’ve created a tag called “COH-Test”, and put it in the “Backup” category.

Now go to the protection job you’d like to edit.

Click on the Tag icon on the right-hand side. You can then select the tag you created in vCenter. Note that you may need to refresh your vCenter source for this new tag to be reflected.

When you select the tag, you can choose to Auto Protect or Exclude the VM based on the applied tags.

If you drill in to the objects in the protection job, you can see that the VM I wanted to exclude from this job has been excluded based on the assigned tag.

 

Thoughts

I’ve written enthusiastically about Cohesity’s Auto Protect feature previously. Sometimes, though, you need to exclude VMs from protection jobs. Using tags is a quick and easy way to do this, and it’s something that your virtualisation admin team will be happy to use too.

Getting Started With The Pure Storage CLI

I used to write a lot about how to manage CLARiiON and VNX storage environments with EMC’s naviseccli tool. I’ve been doing some stuff with Pure Storage FlashArrays in our lab and thought it might be worth covering off some of the basics of their CLI. This will obviously be no replacement for the official administration guide, but I thought it might come in useful as a starting point.

 

Basics

Unlike EMC’s CLI, there’s no executable to install – it’s all on the controllers. If you’re using Windows, PuTTY is still a good choice as an ssh client. Otherwise the macOS ssh client does a reasonable job too. When you first set up your FlashArray, a virtual IP (VIP) was configured. It’s easiest to connect to the VIP, and Purity then directs your session to whichever controller is currently primary. Note that you can also connect via the physical IP address if that’s how you want to do things.

The first step is to login to the array as pureuser, with the password that you’ve definitely changed from the default one.

login as: pureuser
pureuser@10.xxx.xxx.30's password:
Last login: Fri Aug 10 09:36:05 2018 from 10.xxx.xxx.xxx

Mon Aug 13 10:01:52 2018
Welcome pureuser. This is Purity Version 4.10.4 on FlashArray purearray
http://www.purestorage.com/

“purehelp” is the command to run to list available commands.

pureuser@purearray> purehelp
Available commands:
-------------------
pureadmin
purealert
pureapp
purearray
purecert
pureconfig
puredns
puredrive
pureds
purehelp
purehgroup
purehost
purehw
purelog
pureman
puremessage
purenetwork
purepgroup
pureplugin
pureport
puresmis
puresnmp
puresubnet
puresw
purevol
exit
logout

If you want to get some additional help with a command, you can run "command -h" (or "command --help").

pureuser@purearray> purevol -h
usage: purevol [-h]
               {add,connect,copy,create,destroy,disconnect,eradicate,list,listobj,monitor,recover,remove,rename,setattr,snap,truncate}
               ...

positional arguments:
  {add,connect,copy,create,destroy,disconnect,eradicate,list,listobj,monitor,recover,remove,rename,setattr,snap,truncate}
    add                 add volumes to protection groups
    connect             connect one or more volumes to a host
    copy                copy a volume or snapshot to one or more volumes
    create              create one or more volumes
    destroy             destroy one or more volumes or snapshots
    disconnect          disconnect one or more volumes from a host
    eradicate           eradicate one or more volumes or snapshots
    list                display information about volumes or snapshots
    listobj             list objects associated with one or more volumes
    monitor             display I/O performance information
    recover             recover one or more destroyed volumes or snapshots
    remove              remove volumes from protection groups
    rename              rename a volume or snapshot
    setattr             set volume attributes (increase size)
    snap                take snapshots of one or more volumes
    truncate            truncate one or more volumes (reduce size)

optional arguments:
  -h, --help            show this help message and exit

There’s also a facility to access the man page for commands. Just run “pureman command” to access it.
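For example, to read up on the volume commands:

pureman purevol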

Want to see how much capacity there is on the array? Run "purearray list --space".

pureuser@purearray> purearray list --space
Name        Capacity  Parity  Thin Provisioning  Data Reduction  Total Reduction  Volumes  Snapshots  Shared Space  System  Total
purearray  12.45T    100%    86%                2.4 to 1        17.3 to 1        350.66M  3.42G      3.01T         0.00    3.01T

Need to check the software version or general status of the controllers? Run "purearray list --controller".

pureuser@purearray> purearray list --controller
Name  Mode       Model   Version  Status
CT0   secondary  FA-450  4.10.4   ready
CT1   primary    FA-450  4.10.4   ready

 

Connecting A Host

To connect a host to an array (assuming you’ve already zoned it to the array), you’d use the following commands.

purehost create hostname
purehost create --wwnlist WWNs hostname
purehost list
purevol connect --host [host] [volume]
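A worked example (with made-up WWNs and names) for an ESXi host might look like this:

purehost create --wwnlist 21:00:00:24:ff:4c:8a:00,21:00:00:24:ff:4c:8a:01 esxi-01
purevol connect --host esxi-01 esxi-01-ds01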

 

Host Groups

You might need to create a Host Group if you’re running ESXi and want to have multiple hosts accessing the same volumes. Here are the commands you’ll need. Firstly, create the Host Group.

purehgroup create [hostgroup]

Add the hosts to the Host Group (these hosts should already exist on the array)

purehgroup setattr --hostlist host1,host2,host3 [hostgroup]

You can then assign volumes to the Host Group

purehgroup connect --vol [volume] [hostgroup]
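As a worked example (again, with hypothetical names), a three-host ESXi cluster sharing one datastore volume would look like this:

purehgroup create esx-cluster-01
purehgroup setattr --hostlist esxi-01,esxi-02,esxi-03 esx-cluster-01
purehgroup connect --vol esx-ds01 esx-cluster-01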

 

Other Volume Operations

Some other neat (and sometimes destructive) things you can do with volumes are listed below.

To resize a volume, use the following commands.

purevol setattr --size 500G [volume]
purevol truncate --size 20GB [volume]

Note that when you truncate a volume, Purity keeps a snapshot for 24 hours so you can roll back if required. This is good if you’ve shrunk a volume to be smaller than the data on it and have consequently munted the filesystem.

When you destroy a volume it immediately becomes unavailable to hosts, but remains on the array for 24 hours. Note that you’ll need to disconnect the volume from any connected hosts first.

purevol disconnect [volume] --host [hostname]
purevol destroy [volume]

If you’re running short of capacity, or are just curious about when a deleted volume will disappear, use the following command.

purevol list --pending

If you need the capacity back immediately, the deleted volume can be eradicated with the following command.

purevol eradicate [volume]

 

Further Reading

The Pure CLI is obviously not a new thing, and plenty of bright folks have already done a few articles about how you can use it as part of a provisioning workflow. This one from Chadd Kenney is a little old now but still demonstrates how you can bring it all together to do something pretty useful. You can obviously extend that to do some pretty interesting stuff, and there’s solid parity between the GUI and CLI in the Purity environment.

It seems like a small thing, but the fact that there’s no need to install an executable is a big thing in my book. Array vendors (and infrastructure vendors in general) insisting on installing some shell extension or command environment is a pain in the arse, and should be seen as an act of hostility akin to requiring Java to complete simple administration tasks. The sooner we get everyone working with either HTML5 or simple ssh access the better. In any case, I hope this was a useful introduction to the Purity CLI. Check out the Administration Guide for more information.

Rubrik Basics – Role-based Access Control

I’ve been doing some work with Rubrik in our lab and thought it worth covering some of the basic features that I think are pretty neat. In this edition of Rubrik Basics, I thought I’d quickly cover off how to get started with the Role Based Access Control (RBAC) feature.

 

Roles

The concept of RBAC is not a new one. It is, however, one of the first things that companies with more than one staff member ask for when they have to manage infrastructure. Rubrik uses the concept of Roles to deliver particular access to their environment. The available roles are as follows:

  • Administrator role – Full access to all Rubrik operations on all objects;
  • End User role – For assigned objects: browse snapshots, recover files and Live Mount; and
  • No Access role – Cannot log in to the Rubrik UI and cannot make REST API calls.

The End User role has a set of privileges that align with the requirements of a backup operator role:

  • Download data from backups – data download only from assigned object types: vSphere, Hyper-V, and AHV virtual machines; Linux & Unix hosts; Windows hosts; NAS hosts; SQL Server databases; and Managed Volumes.
  • Live Mount or Export virtual machine snapshot – Live Mount or Export a snapshot only from specified virtual machines, and only to specified target locations.
  • Export data from backups – export data only from specified source objects.
  • Restore data over source – write data from backups to the source location, overwriting existing data, only for assigned objects, and only when ‘Allow overwrite of original’ is enabled for the user account or group account.

The good news is that Rubrik supports local authentication as well as Active Directory. You can then tie these roles to particular groups within your organisation. You can have more than one domain that you use for authentication, but I’ll cover that in a future post on multi-tenancy.

I don’t believe that the ability to create custom roles is present (at least in the UI). I’m happy for people from Rubrik to correct me if I’ve gotten that wrong.

 

Configuration

Configuring access to the Rubrik environment for users is fairly straightforward. In this example I’ll be giving my domain account access to the Brik as an administrator. To get started, click on the Gear icon in the UI and select Users (under Access Management).

I don’t know who Grant Authorization is in real life, but he’s the guy who can help you out here (my dad jokes are both woeful and plentiful – just ask my children).

In this example I’m granting access to a domain user.

This example also assumes that you’ve added the domain to the appliance in the first place (and note that you can add multiple domains). In the dropdown box, select the domain the user resides in.

You can then search for a name. In this example, the user I’m searching for is danf. Makes sense, if you think about it.

Select the user account and click on Continue.

By default users are assigned No Access. If you have one of these accounts, the UI will let you enter a username and password and then kick you back to the login screen.

If I assign the user the End User role, I can assign access to various objects in the environment. Note that I can also provide access to overwrite original files if required. This is disabled by default.

In this example, however, I’m providing my domain account with full access via the Administrator role. Click on Assign to continue.

I can now log in to the Rubrik UI with my domain user account and do things.

And that’s it. In a future post I’ll be looking into multi-tenancy and the fun things you can do with organisations and multiple access levels.

Cohesity Basics – Cloud Tier

I’ve been doing some work with Cohesity in our lab and thought it worth covering some of the basic features that I think are pretty neat. In this edition of Cohesity Basics, I thought I’d quickly cover off how to get started with the “Cloud Tier” feature. You can read about Cohesity’s cloud integration approach here. El Reg did a nice write-up on the capability when it was first introduced as well.

 

What Is It?

Cohesity have a number of different technologies that integrate with the cloud, including Cloud Archive and Cloud Tier. With Cloud Archive you can send copies of snapshots up to the cloud to keep as a copy separate to the backup data you might have replicated to a secondary appliance. This is useful if you have some requirement to keep a monthly or six-monthly copy somewhere for compliance reasons. Cloud Tier is an overflow technology that allows you to have cold data migrated to a cloud target when the capacity of your environment exceeds 80%. Note that “coldness” is defined in this instance as older than 60 days. That is, you can’t just pump a lot of data into your appliance to see how this works (trust me on that). The coldness level is configurable, but I recommend you engage with Cohesity support before you go down that track. It’s also important to note that once you turn on Cloud Tier for a View Box, you can’t turn it off again.

 

How Do I?

Here’s how to get started in 10 steps or less. Apologies if the quality of some of these screenshots is not great. The first thing to do is register an External Target on your appliance. In this example I’m running version 5.0.1 of the platform on a Cohesity Virtual Edition VM. Click on Protection – External Target.

Under External Targets you’ll see any External Targets you’ve already configured. Select Register External Target.

You’ll need to give it a name and choose whether you’re using it for Archival or Cloud Tier. This choice also impacts some of the types of available targets. You can’t, for example, configure a NAS or QStar target for use with Cloud Tier.

Selecting Cloud Tier will provide you with more cloudy targets, such as Google, AWS and Azure.

 

In this example, I’ve selected S3 (having already created the bucket I wanted to test with). You need to know the Bucket name, Region, Access Key ID and your Secret Access Key.
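If you haven’t created the bucket yet, the AWS CLI makes short work of it (the bucket name and region here are hypothetical):

aws s3 mb s3://cohesity-cloudtier-test --region ap-southeast-2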

If you have it all correct, you can click on Register and it will work. If you’ve provided the wrong credentials, it won’t work. You then need to enable Cloud Tier on the View Box. Go to Platform – Cluster.

Click on View Boxes and then click on the three dots on the right to edit the View Box configuration.

You can then toggle Cloud Tier and select the External Target you want to use.

Once everything is configured (and assuming you have some cold data to move to the cloud and your appliance is over 80% full) you can click on the cluster dashboard and you’ll see an overview of Cloud Tier storage in the Storage part of the overview.

 

 

Thoughts?

All the kids are getting into cloud nowadays, and Cohesity is no exception. I like this feature because it can help with managing capacity on your on-premises appliance, particularly if you’ve had a sudden influx of data into the environment, or you have a lot of old data that you likely won’t be accessing. You still need to think about your egress charges (if you need to get those cold blocks back) and you need to think about what the cost of that S3 bucket (or whatever you’re using) really is. I don’t see the default coldness level being a problem, as you’d hope that you sized your appliance well enough to cope with a certain amount of growth.

Features like this demonstrate both a willingness on Cohesity’s part to embrace cloud technologies, and a focus on ease of use when it comes to reasonably complicated activities like moving protection data to an alternative location. My thinking is that you wouldn’t necessarily want to find yourself in the position of having to suddenly shunt a bunch of cold data to a cloud location if you can help it (although I haven’t done the maths on which option works out cheaper), but it’s nice to know that the option is there and easy enough to set up.

Cohesity Basics – Auto Protect

I’ve been doing some work with Cohesity in our lab and thought it worth covering some of the basic features that I think are pretty neat. In this edition of Cohesity Basics, I thought I’d quickly cover off the “Auto Protect” feature. If you read their white paper on data protection, you’ll find the following line: “As new virtual machines are added, they are auto discovered and included in the protection policy that meets the desired SLAs”. It seems like a pretty cool feature, and was introduced in version 4.0. I wanted to find out a bit more about how it works.

 

What Is It?

Auto Protect will “protect new VMs that are added to a selected parent Object (such as a Datacenter, Folder, Cluster or Host)”. The idea behind this is that you can add a source and have Cohesity automatically protect all of the VMs in a folder, cluster, etc. The cool thing is that it will also protect any new VMs added to that source.

When you’re adding Objects to a Protection Job, you can select what to auto protect. In the screenshot below you can see that the Datacenter in my vCenter has Auto Protect turned off.

The good news is that you can explicitly exclude Objects as well. Here’s what the various icons mean.

[Image courtesy of Cohesity]

 

What Happens?

When you create a Protection Job in Cohesity you add Objects to the job. If you select to Auto Protect this Object, anything under that Object will automatically be protected. Every time the Protection Job runs, if the Object hierarchy has been refreshed on the Cohesity Cluster, new VMs are also backed up even though the new VM has not been manually included in the Protection Job. There are two ways that the Object hierarchy gets refreshed. It is automatically done every 4 hours by the cluster. If you’re in a hurry though, you can do it manually. Go to Protection -> Sources and click on the Source you’d like to refresh. There’s a refresh button to click on and you’ll see your new Objects showing up.
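If you’re scripting things, the same refresh can be triggered via Cohesity’s REST API. I haven’t validated this endpoint against every release, so treat the path as an assumption (the cluster name, token, and source ID are placeholders):

curl -k -H "Authorization: Bearer <token>" -X POST \
  "https://cohesity.lab.local/irisservices/api/v1/public/protectionSources/refresh/<source id>"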

 

Why Wouldn’t You?

As part of my testing, I’ve been creating “catchall” Protection Jobs and adding all the VMs in the environment into the jobs. But we have some VMware NSX Controller VMs in our lab, and VMware “only supports backing up the NSX Edge and controller through the NSX Manager”. Not only that, but it simply won’t work.

In any case, you can use FTP to back up your NSX VMs if you really feel like that’s something you want to do. More info on that is here. You also want to be careful that you’re not backing up stuff you don’t need to, such as clones and odds and sods. Should I try protecting the Cohesity Virtual Edition appliance VM? I don’t know about that …

 

Thoughts

I generally prefer data protection configurations that “protect everything and exclude as required”. While Auto Protect is turned off by default, it’s simple enough to turn on when you get started. And it’s a great feature, particularly in dynamic environments where there’s no automation of data protection when new workloads are provisioned (a problem for another time). Hat tip to my Cohesity SE Pete Marfatia for pointing this feature out to me.