Rubrik CDM 4.1.1 – A Few Notes

Here are a few random notes on things in Rubrik’s Cloud Data Management (CDM) 4.1.1-p4-2319 that I’ve come across in my recent testing in the lab. There’s not enough in each item to warrant a full post, hence the “few notes” format. Note that some of these things have been around for a while; I just wanted to record the specific version of Rubrik CDM I’m working with.


Guest OS Credentials

Rubrik uses Guest OS credentials for access to a VM’s operating system. When you add a VM workload to your Rubrik environment without supplying them, you may see a message in the logs complaining about the missing Guest OS credentials.

Note that it’s a warning, not an error. You can still back up the VM, just not at the consistency level you might have hoped for. If you want to do a direct restore on a Linux guest, you’ll need an account with write access. For Windows, you’ll need something with administrative access. You could achieve this with either local or domain administrator accounts, but that isn’t recommended; Rubrik instead suggests “a credential for a domain level account that has a small privilege set that includes administrator access to the relevant guests”. You could use a number of credentials across multiple groups of machines to reduce (to a small extent) the level of exposure, but there are plenty of CISOs and Windows administrators who are not going to like this approach.
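
If you do go down the domain account route, the mechanics of granting such an account local administrator rights on a Windows guest are simple enough. As a sketch (EXAMPLE\svc-rubrik is a hypothetical account name, not anything Rubrik mandates):

C:\> net localgroup Administrators "EXAMPLE\svc-rubrik" /add

In practice you’d push this out via Group Policy (Restricted Groups) rather than running it by hand on every guest, and scope different service accounts to different groups of machines as mentioned above.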

So what happens if you don’t provide the credentials? My understanding is that you can still do file system consistent snapshots (provided you have a current version of VMware Tools installed), you just won’t be able to do application-consistent backups. For your reference, here’s the table from Rubrik discussing the various levels of available consistency.

Consistency level: Inconsistent
Description: A backup that consists of copying each file to the backup target without quiescence. File operations are not stopped. The result is inconsistent time stamps across the backup and, potentially, corrupted files.
Rubrik usage: Not provided.

Consistency level: Crash consistent
Description: A point-in-time snapshot but without quiescence.
• Time stamps are consistent
• Pending updates for open files are not saved
• In-flight I/O operations are not completed
The snapshot can be used to restore the virtual machine to the same state that a hard reset would produce.
Rubrik usage: Provided only when:
• The Guest OS does not have VMware Tools
• The Guest OS has an out-of-date version of VMware Tools
• The VM’s Application Consistency was manually set to Crash Consistent in the Rubrik UI

Consistency level: File system consistent
Description: A point-in-time snapshot with quiescence.
• Time stamps are consistent
• Pending updates for open files are saved
• In-flight I/O operations are completed
• Application-specific operations may not be completed
Rubrik usage: Provided when the guest OS has an up-to-date version of VMware Tools and application consistency is not supported for the guest OS.

Consistency level: Application consistent
Description: A point-in-time snapshot with quiescence and application awareness.
• Time stamps are consistent
• Pending updates for open files are saved
• In-flight I/O operations are completed
• Application-specific operations are completed
Rubrik usage: Provided when the guest OS has an up-to-date version of VMware Tools and application consistency is supported for the guest OS.
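
Given how much hinges on the state of VMware Tools, it’s worth checking what a guest is actually running before wondering why you’re only getting crash consistent snapshots. Inside a Linux guest, for example (the guest prompt is hypothetical; vmware-toolbox-cmd ships with both VMware Tools and open-vm-tools):

dan@linuxguest:~$ vmware-toolbox-cmd -v

This prints the version of the installed tools, which you can then compare against what your ESXi build ships with.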


open-vm-tools

If you’re running something like Debian in your vSphere environment you may have chosen to use open-vm-tools rather than VMware’s package. There’s nothing wrong with this (it’s a VMware-supported configuration), but you’ll see that Rubrik currently has a bit of an issue with it.

It will still back up the VM, just not at the consistency level you may be hoping for. It’s on Rubrik’s list of things to fix. VMware Tools is still a valid (and arguably preferred) option for supported Linux distributions; the point of open-vm-tools is that appliance vendors can distribute the tools with their VMs without violating licensing agreements.
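
If you’re not sure which package a Debian guest is running, dpkg will tell you (debian1 is a hypothetical guest):

dan@debian1:~$ dpkg -l open-vm-tools
dan@debian1:~$ sudo apt-get install open-vm-tools

The first command shows whether open-vm-tools is installed (and at what version); the second installs it from the standard repositories if it isn’t.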


Download Logs

It seems like a simple thing, but I really like the ability to download logs related to a particular error. In this example, I’ve got some issues with a SQL cluster I’m backing up. I can click on “Download Logs” and grab the info I need related to the SLA Activity. It’s a small thing, but it makes wading through logs to identify issues a little less painful.

FreeNAS – Using one SSD for ZIL and L2ARC

Following on from my previous musings on FreeNAS, I thought I’d do a quick howto post on using one SSD for both ZIL and L2ARC. This has been covered here and here, but I found myself missing a few steps, so I thought I’d cover it here for my own benefit if nothing else. Before I start though, you should really consider using two drives, particularly with ZIL. And before you start, you’ve obviously already gone through the exercise of understanding your workload, and you’ve thought about the ramifications of what you’re doing, right? Because using one drive isn’t necessarily recommended …
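
On the workload point: FreeBSD exposes the ARC counters via sysctl, so you can get a rough feel for whether an L2ARC will even help before you sacrifice the SSD. For example, on the FreeNAS host itself:

dan@freenas1:~ % sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses

If the hit counter dwarfs the miss counter, your working set already fits in RAM and an L2ARC won’t buy you much.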

So, let’s get started by connecting to your NAS via SSH or some other console mechanism.

Last login: Sun Dec 13 10:25:29 on console
dans-MacBook-Pro:~ dan$ ssh dan@freenas1

dan@freenas1's password: 
Last login: Wed Dec 30 06:57:25 2015 from 192.168.0.100
FreeBSD 9.3-RELEASE-p28 (FREENAS.amd64) #0 r288272+f229c79: Sat Dec 12 11:58:01 PST 2015

FreeNAS (c) 2009-2015, The FreeNAS Development Team
All rights reserved.
FreeNAS is released under the modified BSD license.

For more information, documentation, help or support, go here:
  http://freenas.org

Welcome to FreeNAS

You then need to find your SSD.

dan@freenas1:~ % sudo camcontrol devlist
Password:
<INTEL SSDSC2CT060A3 300i>         at scbus0 target 0 lun 0 (ada0,pass0)
<ST32000644NS 130C>                at scbus1 target 0 lun 0 (ada1,pass1)
<ST32000644NS 130C>                at scbus2 target 0 lun 0 (ada2,pass2)
<ST32000644NS 130C>                at scbus3 target 0 lun 0 (ada3,pass3)
<ST32000644NS 130C>                at scbus4 target 0 lun 0 (ada4,pass4)
<ST32000644NS 130C>                at scbus5 target 0 lun 0 (ada5,pass5)
<SanDisk Cruzer Force 1.26>        at scbus7 target 0 lun 0 (pass6,da0)

I then wanted to look at ada0, this being the SSD.

dan@freenas1:~ % sudo gpart show ada0
gpart: No such geom: ada0.

No dice. But it’s there, isn’t it?

dan@freenas1:~ % zpool list -v
NAME                                     SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
freenas-boot                            14.9G   533M  14.4G         -      -     3%  1.00x  ONLINE  -
  gptid/57344c8a-ae4d-11e5-b98c-bc5ff42c6cb2  14.9G   533M  14.4G         -      -     3%
volume0                                 9.06T  2.99M  9.06T         -     0%     0%  1.00x  ONLINE  /mnt
  raidz1                                9.06T  2.99M  9.06T         -     0%     0%
    gptid/6b604f11-aeb5-11e5-99ac-bc5ff42c6cb2      -      -      -         -      -      -
    gptid/6c25f30c-aeb5-11e5-99ac-bc5ff42c6cb2      -      -      -         -      -      -
    gptid/6cf26f5b-aeb5-11e5-99ac-bc5ff42c6cb2      -      -      -         -      -      -
    gptid/6dc0508f-aeb5-11e5-99ac-bc5ff42c6cb2      -      -      -         -      -      -
    gptid/6e88fc7a-aeb5-11e5-99ac-bc5ff42c6cb2      -      -      -         -      -      -
dan@freenas1:~ % geom disk list
Geom name: da0
Providers:
1. Name: da0
   Mediasize: 16008609792 (14G)
   Sectorsize: 512
   Mode: r1w1e3
   descr: SanDisk Cruzer Force
   lunname: SanDisk Cruzer Force    4C532000050815116541
   lunid: SanDisk Cruzer Force    4C532000050815116541
   ident: 4C532000050815116541
   fwsectors: 63
   fwheads: 255
Geom name: ada0
Providers:
1. Name: ada0
   Mediasize: 60022480896 (55G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   descr: INTEL SSDSC2CT060A3
   lunid: 5001517bb28ade6a
   ident: CVMP213600AQ060AGN
   fwsectors: 63
   fwheads: 16
Geom name: ada1
Providers:
1. Name: ada1
   Mediasize: 2000398934016 (1.8T)
   Sectorsize: 512
   Mode: r2w2e5
   descr: ST32000644NS
   lunid: 5000c5004027e7b5
   ident: 9WM88536
   fwsectors: 63
   fwheads: 16
Geom name: ada2
Providers:
1. Name: ada2
   Mediasize: 2000398934016 (1.8T)
   Sectorsize: 512
   Mode: r2w2e5
   descr: ST32000644NS
   lunid: 5000c50040276a46
   ident: 9WM88DV9
   fwsectors: 63
   fwheads: 16

[snip]

Ok, so I can see it, but I can’t – the disk is there, it just doesn’t have a partition table yet, so there’s no geom for gpart to show. I mucked about a bit, and came up with this approach.

dan@freenas1:~ % sudo gpart create -s gpt ada0
ada0 created
dan@freenas1:~ % sudo gpart add -a 4k -b 128 -t freebsd-zfs -s 10G ada0
ada0p1 added
dan@freenas1:~ % sudo gpart add -a 4k -t freebsd-zfs ada0
ada0p2 added
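
For reference on those flags: -s gpt on the create tells gpart to use a GPT partitioning scheme, -a 4k aligns the partitions to 4k boundaries (sensible on an SSD), -b 128 starts the first partition at sector 128, -t freebsd-zfs sets the partition type, and -s 10G caps the first (ZIL) partition at 10GB, leaving the rest of the drive for the second (L2ARC) partition. If you make a mess and want to start over, you can wipe the partition table and begin again (destructive, obviously):

dan@freenas1:~ % sudo gpart destroy -F ada0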

Both commands completed without complaint. Now I need to get the UUIDs.

dan@freenas1:~ % sudo gpart list
Geom name: da0
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 31266782
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: da0p1
   Mediasize: 524288 (512k)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 17408
   Mode: r0w0e0
   rawuuid: 572c8954-ae4d-11e5-b98c-bc5ff42c6cb2
   rawtype: 21686148-6449-6e6f-744e-656564454649
   label: 1
   length: 524288
   offset: 17408
   type: bios-boot
   index: 1
   end: 1057
   start: 34
2. Name: da0p2
   Mediasize: 16008044544 (14G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 544768
   Mode: r1w1e2
   rawuuid: 57344c8a-ae4d-11e5-b98c-bc5ff42c6cb2
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: 1
   length: 16008044544
   offset: 544768
   type: freebsd-zfs
   index: 2
   end: 31266775
   start: 1064
Consumers:
1. Name: da0
   Mediasize: 16008609792 (14G)
   Sectorsize: 512
   Mode: r1w1e3

[snip]

Geom name: ada0
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 117231374
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: ada0p1
   Mediasize: 10737418240 (10G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   rawuuid: 94a4bd28-aeb7-11e5-99ac-bc5ff42c6cb2
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: 1
   length: 10737418240
   offset: 65536
   type: freebsd-zfs
   index: 1
   end: 20971647
   start: 128
2. Name: ada0p2
   Mediasize: 49284976640 (45G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   rawuuid: 9a79622f-aeb7-11e5-99ac-bc5ff42c6cb2
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: 1
   length: 49284976640
   offset: 10737483776
   type: freebsd-zfs
   index: 2
   end: 117231367
   start: 20971648
Consumers:
1. Name: ada0
   Mediasize: 60022480896 (55G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0

Let’s double-check.

dan@freenas1:~ % sudo gpart show ada0
=>       34  117231341  ada0  GPT  (55G)
         34         94        - free -  (47k)
        128   20971520     1  freebsd-zfs  (10G)
   20971648   96259720     2  freebsd-zfs  (45G)
  117231368          7        - free -  (3.5k)

Everything looks copacetic. Let’s add them to the zpool: the ZIL goes in as a log device, and the L2ARC as a cache device. My pool is volume0, and I’m using the rawuuid values from the gpart list output above.

dan@freenas1:~ % sudo zpool add volume0 log gptid/94a4bd28-aeb7-11e5-99ac-bc5ff42c6cb2
dan@freenas1:~ % sudo zpool add volume0 cache gptid/9a79622f-aeb7-11e5-99ac-bc5ff42c6cb2

And here it is, all done.

dan@freenas1:~ % zpool list -v
NAME                                     SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
freenas-boot                            14.9G   533M  14.4G         -      -     3%  1.00x  ONLINE  -
  gptid/57344c8a-ae4d-11e5-b98c-bc5ff42c6cb2  14.9G   533M  14.4G         -      -     3%
volume0                                 9.06T  3.02M  9.06T         -     0%     0%  1.00x  ONLINE  /mnt
  raidz1                                9.06T  3.02M  9.06T         -     0%     0%
    gptid/6b604f11-aeb5-11e5-99ac-bc5ff42c6cb2      -      -      -         -      -      -
    gptid/6c25f30c-aeb5-11e5-99ac-bc5ff42c6cb2      -      -      -         -      -      -
    gptid/6cf26f5b-aeb5-11e5-99ac-bc5ff42c6cb2      -      -      -         -      -      -
    gptid/6dc0508f-aeb5-11e5-99ac-bc5ff42c6cb2      -      -      -         -      -      -
    gptid/6e88fc7a-aeb5-11e5-99ac-bc5ff42c6cb2      -      -      -         -      -      -
log                                         -      -      -      -      -      -
  gptid/94a4bd28-aeb7-11e5-99ac-bc5ff42c6cb2  9.94G   132K  9.94G         -     0%     0%
cache                                       -      -      -      -      -      -
  gptid/9a79622f-aeb7-11e5-99ac-bc5ff42c6cb2  45.9G   190K  45.9G         -     0%     0%
dan@freenas1:~ %
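
If you want to confirm the new devices are actually doing something once there’s real I/O hitting the pool, zpool iostat breaks activity out per device, including the log and cache devices (the trailing 5 refreshes the numbers every five seconds):

dan@freenas1:~ % zpool iostat -v volume0 5

You should see writes land on the log device during synchronous write bursts, and the cache device gradually fill as reads warm it up.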