EMC – Creating SnapView Clones with naviseccli

EMC SnapView has been around for some time on the CLARiiON and VNX. I don’t want to go into the how and what of using SnapView in your environment, but I thought this quick rundown of creating a SnapView Clone using naviseccli might be useful. You can also do all of this using the Wizard in Unisphere – if you’re into pointy hats and capes.

You’d be best served to test these commands in your own environment before trying them on data you care about. And I take no responsibility if you pooch it because you’ve followed my guide without thinking through the ramifications of your actions. Also, be careful of the line breaks here – if you don’t get the whole line copied you’ll run into issues. So let’s get to it.

I’m assuming that you’ve never done clones before, so we’re starting from scratch. Firstly, bind two LUNs for use as Clone Private LUNs. These can be any size you like, but anything beyond 1GB each won’t buy you much.

naviseccli -h 192.168.0.90 bind r5 901 -rg 1 -sp a -cap 1024 -sq mb
naviseccli -h 192.168.0.90 bind r5 902 -rg 1 -sp b -cap 1024 -sq mb

Note that I already had a RAID Group set aside for the LUNs. You then need to bind a LUN to use as the clone. This LUN must be the same size as the source, though it can be owned by a different SP. Let’s assume the source LUN is 1TB.

naviseccli -h 192.168.0.90 bind r5 910 -rg 7 -sp b -cap 1024 -sq gb
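
Incidentally, if you want to confirm the source LUN’s size before binding the clone, getlun will report it. I’m assuming the source is LUN 3 here – the LUN we add to the clone group below:

naviseccli -h 192.168.0.90 getlun 3 -capacity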

Now it’s time to allocate the two 1GB LUNs as Clone Private LUNs (CPLs).

naviseccli -h 192.168.0.90 snapview -allocatecpl -spA 901 -spB 902 -o

With SnapView, AllowProtectedRestore is disabled by default. You can change this at a global level with the following command, where 1 enables the feature and 0 disables it. Note that -o prevents naviseccli from asking me to confirm my actions. If you’re unsure about the commands you want to use, you can leave -o off.

naviseccli -h 192.168.0.90 snapview -changeclonefeature -AllowProtectedRestore 1 -o
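
If memory serves, you can then check the clone feature settings – including the allocated CPLs and the protected restore setting – with -listclonefeature:

naviseccli -h 192.168.0.90 snapview -listclonefeature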

Create a Clone Group to store the Source and Clone in. At this point we nominate the LUN(s) that we want to be cloned.

naviseccli -h 192.168.0.90 snapview -createclonegroup -name VMwareMgmtSnapClone -luns 3 -description "Clone Group for vSphere Upgrade Testing" -o

Now add the clone to the clone group.

naviseccli -h 192.168.0.90 snapview -addclone -Name VMwareMgmtSnapClone -Luns 910

The -listclonegroup command is used to get info on the array’s SnapView clone status. Note that synchronization kicks off automatically as soon as the clone is added to the group.

naviseccli -h 192.168.0.90 snapview -listclonegroup
Name: VMwareMgmtSnapClone
CloneGroupUid: 50:06:01:60:C6:E0:43:CF:01:00:00:00:00:00:00:00
InSync: Yes
Description: Clone Group for vSphere Upgrade Testing
QuiesceThreshold: 60
SourceMediaFailure: No
IsControllingSP: No
SourceLUNSize: 2147483648
CloneCount: 1
Sources: 3
Clones:
CloneID: 0100000000000000
CloneState: Synchronizing
CloneCondition: Synchronizing
AvailableForIO: No
CloneMediaFailure: No
IsDirty: No
PercentSynced: 0
RecoveryPolicy: Auto
SyncRate: Medium
CloneLUNs: 910
UseProtectedRestore: No
IsFractured: No

Using -changeclone you can change the sync rate and protected restore options for the clone. In this example I’ve changed the rate to High (from Medium) and set it to use Protected Restore.

naviseccli -h 192.168.0.90 snapview -changeclone -name VMwareMgmtSnapClone -cloneid 0100000000000000 -SyncRate high -UseProtectedRestore 1 -o

Depending on how much data needs to be synchronized, it might take a little time before the output looks like this:

naviseccli -h 192.168.0.90 snapview -listclonegroup
Name: VMwareMgmtSnapClone
CloneGroupUid: 50:06:01:60:C6:E0:43:CF:01:00:00:00:00:00:00:00
InSync: Yes
Description: Clone Group for vSphere Upgrade Testing
QuiesceThreshold: 60
SourceMediaFailure: No
IsControllingSP: No
SourceLUNSize: 2147483648
CloneCount: 1
Sources: 3
Clones:
CloneID: 0100000000000000
CloneState: Synchronized
CloneCondition: Normal
AvailableForIO: No
CloneMediaFailure: No
IsDirty: No
PercentSynced: 100
RecoveryPolicy: Auto
SyncRate: High
CloneLUNs: 910
UseProtectedRestore: Yes
IsFractured: No

After synchronization, use the Windows-based admsnap tool on the production host to flush buffered I/O to the source LUN (mounted here as E:).

admsnap flush -o E:

Wait for the clone to transition back to a Synchronized state after the flush. Once this is complete, you can fracture the clone.

naviseccli -h 192.168.0.90 snapview -fractureclone -Name VMwareMgmtSnapClone -CloneId 0100000000000000 -o
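
As an aside, if you’re cloning multiple source LUNs and need them all fractured at the same point in time, SnapView also offers a consistent fracture operation. If memory serves, it takes clone group name and clone ID pairs, so something like the following (SecondCloneGroup is a made-up name for illustration – check the SnapView CLI reference for the exact syntax):

naviseccli -h 192.168.0.90 snapview -consistentfractureclones -CloneGroupNameCloneId VMwareMgmtSnapClone 0100000000000000 SecondCloneGroup 0100000000000000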

Verify that it’s fractured and consistent:

naviseccli -h 192.168.0.90 snapview -listclonegroup
Name: VMwareMgmtSnapClone
CloneGroupUid: 50:06:01:60:C6:E0:43:CF:01:00:00:00:00:00:00:00
InSync: Yes
Description: Clone Group for vSphere Upgrade Testing
QuiesceThreshold: 60
SourceMediaFailure: No
IsControllingSP: No
SourceLUNSize: 2147483648
CloneCount: 1
Sources: 3
Clones:
CloneID: 0100000000000000
CloneState: Consistent
CloneCondition: Administratively Fractured
AvailableForIO: Yes
CloneMediaFailure: No
IsDirty: No
PercentSynced: N/A
RecoveryPolicy: Auto
SyncRate: High
CloneLUNs: 910
UseProtectedRestore: Yes
IsFractured: Yes

If you wanted to use this clone to run a backup via a secondary host, you would then add it to the storage group of that host.

naviseccli -h 192.168.0.90 storagegroup -addhlu -gname SGNAME -hlu HOSTLUNID -alu ARRAYLUNID
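
For example, assuming the backup host’s storage group is named BackupHost_SG (a made-up name) and you want clone LUN 910 presented as host LUN 0:

naviseccli -h 192.168.0.90 storagegroup -addhlu -gname BackupHost_SG -hlu 0 -alu 910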

Once you’ve added it to the storage group, you can use admsnap to scan for the clone on the secondary host.

admsnap clone_activate

If for some reason you want to use the clone to restore the data on the source LUN, you’d use the Reverse Synchronize process.

naviseccli -h 192.168.0.90 snapview -reversesyncclone -name NAME -cloneid CLONEID -UseProtectedRestore 0|1 -o
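
Using the names from this walkthrough, with protected restore enabled, that would look like this:

naviseccli -h 192.168.0.90 snapview -reversesyncclone -name VMwareMgmtSnapClone -cloneid 0100000000000000 -UseProtectedRestore 1 -o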

If you want to keep the clone to use again in the future, fracture it from the source again.

naviseccli -h 192.168.0.90 snapview -fractureclone -name NAME -cloneid CLONEID -o

And that’s SnapView Clones with naviseccli in a nutshell.

EMC – Configure the Reserved LUN Pool with naviseccli

I’ve been rebuilding our lab CLARiiONs recently, and wanted to configure the Reserved LUN Pool (RLP) for use with SnapView and MirrorView/Asynchronous. Having recently spent what felt like eight days a week in Unisphere doing storage provisioning, I’ve made it a goal of mine to never, ever have to log in to Unisphere to do anything again. While this may be unattainable, you can get an awful lot done with a combination of Microsoft Excel, Notepad and naviseccli.

So I needed to configure a Reserved LUN Pool for use with MV/A, SnapView Incremental SAN Copy, and so forth. I won’t go into the reasons for what I’ve created, but let’s just say I needed to create about 50 LUNs and give them each a label. Here’s what I did:

Firstly, I created a RAID Group with an ID of 1 using disks 5 – 9 in the first enclosure.

C:\>naviseccli -h 256.256.256.256 createrg 1 0_0_5 0_0_6 0_0_7 0_0_8 0_0_9
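
Before binding anything to it, you can sanity-check the new RAID Group with getrg:

C:\>naviseccli -h 256.256.256.256 getrg 1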

It was then necessary to bind a series of 20GB LUNs, 25 for each SP. If you’re handy with Excel, you can generate the fifty variations of the following command with little fuss (there’s also a loop alternative below).

C:\>naviseccli -h 256.256.256.256 bind r5 50 -rg 1 -aa 0 -cap 20 -sp a -sq gb

Here I’ve specified the RAID type (r5), the LUN ID (50), the RAID Group (-rg 1), -aa 0 (disabling auto-assign), -cap (the capacity), -sp (a or b), and -sq (the size qualifier, which can be mb|gb|tb|sc|bc). Note that if you don’t specify the LUN ID, it will automatically use the next available ID.
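
If you’d rather skip Excel entirely, a simple loop at the Windows command prompt will generate the binds for you. This is just a sketch, assuming LUN IDs 50 – 74 go to SP A and 75 – 99 to SP B:

C:\>for /L %i in (50,1,74) do naviseccli -h 256.256.256.256 bind r5 %i -rg 1 -aa 0 -cap 20 -sp a -sq gb
C:\>for /L %i in (75,1,99) do naviseccli -h 256.256.256.256 bind r5 %i -rg 1 -aa 0 -cap 20 -sp b -sq gb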

So now I’ve bound the LUNs, I can use another command to give them a label that corresponds with our naming standard (using our old friend chglun):

C:\>naviseccli -h 256.256.256.256 chglun -l 50 -name TESTLAB1_RLP01_0050

Once you’ve created the LUNs you require, you can then add them to the Reserved LUN Pool with the reserved command.

C:\>naviseccli -h 256.256.256.256 reserved -lunpool -addlun 99
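
Rather than running that fifty times by hand, the same loop trick works here too (again assuming LUN IDs 50 – 99 from the binds above):

C:\>for /L %i in (50,1,99) do naviseccli -h 256.256.256.256 reserved -lunpool -addlun %i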

To check that everything’s in order, use the -list switch to get an output of the current RLP configuration.

C:\>naviseccli -h 256.256.256.256 reserved -lunpool -list
Name of the SP:  GLOBAL
Total Number of LUNs in Pool:  50
Number of Unallocated LUNs in Pool:  50
Unallocated LUNs:  53, 63, 98, 78, 71, 56, 88, 69, 92, 54, 99, 79, 72, 58, 81, 57, 85, 93, 61, 96, 67, 76, 86, 64, 50, 66, 52, 62, 68, 77, 89, 70, 55, 65, 91, 80, 73, 59, 82, 90, 94, 84, 97, 74, 60, 83, 95, 75, 87, 51
Total size in GB:  999.975586
Unallocated size in GB:  999.975586
Used LUN Pool in GB:  0
% Used of LUN Pool:  0
Chunk size in disk blocks:  128
No LUN in LUN Pool associated with target LUN.
C:\>

If, for some reason, you want to remove a LUN from the RLP, and it isn’t currently in use by one of the layered applications, you can use the -rmlun switch.

C:\>naviseccli -h 256.256.256.256 reserved -lunpool -rmlun 99 -o

If you omit the override [-o] option, the CLI prompts for confirmation before removing the LUN from the reserved LUN pool. You could argue that, with the ability to create multiple LUNs at once in Unisphere, it might be simpler not to bother with naviseccli, but I think it’s a very efficient way to get things done quickly, particularly if you’re working in a Unisphere domain with a large number of CLARiiONs, or on a workstation that has some internet browser “issues”.

New Article – Adding capacity to the Reserved LUN Pool

Another simple one that I thought was worth documenting for the cosmetic changes in Unisphere. You can find it here. Check out the rest of my equally exciting guides here.