EMC – VNX Pool LUN Allocation vs Default Owner

I had a question about this come up this week and thought I’d already posted something about it. Seems I was lazy and didn’t. If you have access, have a look at Primus article emc311319 on EMC’s support site. If you don’t, here’s the rough guide to what it’s all about.

When a Storage Pool is created, a large number of private LUNs are bound across all of the Pool drives, and these are divided up between SP A and SP B. When a Pool LUN is created, its allocation owner determines which SP's private LUNs are used to store the Pool LUN slices. If the default and current owner are not the same as the allocation owner, I/O has to pass over the CMI bus between the SPs to reach the Pool's private FLARE LUNs. This is a bad thing, and can lead to higher response times and general I/O bottlenecks.

OMG, I might have this issue, what should I do? You can change the default owner of a LUN by accessing the LUN properties in Unisphere. You can also change it from the command line thusly:

naviseccli -h <SP A or B IP> chglun -l <LUN ID> -d <0|1>

where

-d 0 = SP A
-d 1 = SP B

But what if you have too many LUNs whose allocation owner sits on one SP? And when did I start writing blog posts in the form of a series of questions? I don't know the answer to the latter question. But for the former, the simplest remedy is to create a new LUN that's owned by the alternate SP and use EMC's LUN Migration tool to move the data across. Finally, to match the current owner of a LUN to its default owner, simply trespass the LUN back to the default owner SP.
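
If you're driving that from the command line, the syntax (from memory, so check it against your FLARE revision) is along these lines, with placeholder LUN IDs and a fairly conservative migration rate:

naviseccli -h <SP IP> migrate -start -source <source LUN ID> -dest <destination LUN ID> -rate medium
naviseccli -h <SP IP> trespass lun <LUN ID>

Once the migration completes, the destination LUN takes on the identity of the source LUN, so hosts shouldn't notice the shuffle.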

Note that this is a problem from CX4 arrays through to VNX2 arrays. It does not apply to traditional RAID Group FLARE LUNs though, only Pool LUNs.

EMC – VNX / CX4 LUN Allocation Owner and Default Owner

Mat's been doing some useful scripting again. This time it's a small Perl script that identifies the allocation owner and default owner of a pool LUN on a CX4 or VNX and lets you know whether the LUN is "non-optimal" or not. For those of you playing along at home, I found the following information on this (but can't remember where I found it): "Allocation owner of a pool LUN is the SP that owns and maintains the metadata for that LUN. It is not advised to trespass the LUNs to an SP that is not the allocation owner. This introduces lag. The allocation owner is the SP that provides the best performance for the pool LUN. The allocation owner SP is set by the system to match the default SP owner when you create the LUN. You cannot change the allocation owner after the LUN is created. If you change the default owner for the LUN, the software will display a warning that a performance penalty will occur if you continue."
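
If you just want to eyeball a single LUN rather than run the script, something like the following should show the relevant fields. I'm assuming here that your FLARE / VNX OE revision reports the allocation owner in the LUN properties output, which is what the script compares against the default owner:

naviseccli -h <SP IP> lun -list -l <LUN ID>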

There’s a useful article by Jithin Nadukandathil on the ECN site, as well as a most excellent writeup by fellow EMC Elect member Jon Klaus here. In short, if you identify NonOptimal LUN ownership, your best option is to create a new LUN and migrate the data to that LUN via the LUN Migration tool. You can download a copy of the script here. Feel free to look at the other scripts that are on offer as well. Here’s what the output looks like.

[output1: screenshot of the script's output]

EMC – Maximum Pool LUN Size

Mat has been trying to create a 42TB LUN to use temporarily for Centera backups. I don’t want to go into why we’re doing Centera backups, but let’s just say we need the space. He created a Storage Pool on one of the CX4-960s, using 28 2TB spindles and 6+1 private RAID Groups. However, when he tried to bind the LUN, he got the following error.

[err1: screenshot of the bind error]

Weird. So what if we set the size to 44000GB?

[err2: screenshot of the second bind error]

No, that doesn't work either. Turns out I should really read some of the stuff that I post here, like my article entitled "EMC CLARiiON VNX7500 Configuration guidelines – Part 1", where I mention that the maximum size of a Pool LUN is 16TB. I was wrong in any case, as it looks more like it's 14TB. Seems like we'll be using RAID Groups and MetaLUNs to get over the line on this one.
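
As an aside, rather than relying on my dodgy memory, you can ask the array what it thinks the limit is. Assuming your naviseccli revision supports the storagepool -feature -info switches covered elsewhere on this blog, something like this should report it:

naviseccli -h <SP IP> storagepool -feature -info -maxPoolLUNSize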

EMC – CX4 Configuration – a few things I’d forgotten

I've been commissioning some new CX4-960s recently (it's a long story), and came across a few things that I'd forgotten about for some reason. If you're running older disks and they get replaced by EMC, there's a good chance the replacement drives will be a higher capacity than the originals. In our case I was creating a storage pool with 45 300GB FC disks and kept getting the following error.

This error was driving me nuts for a while, until I realised that one of the 300GB disks had, at some point, been replaced with a 450GB drive. Hence the error.

The other thing I came across was the restriction that Private LUNs (Write Intent Log, Reserved LUN Pool, MetaLUN Components) have to reside on traditional RAID Groups and can't live in storage pools. Not a big issue, but I hadn't really planned to use RAID Groups on these arrays. If you search for emc254739 you'll find a handy KB article on WIL performance considerations, including this nugget: "Virtual Provisioning LUNs are not supported for the WIL; RAID group-based LUNs or metaLUNs should be used". Which clarifies why I was unable to allocate the two WIL LUNs I'd configured in the pool.

*Edit* I re-read the KB article and realised it doesn’t address the problem I saw. I had created thick LUNs on a storage pool, but these weren’t able to be allocated as WIL LUNs. Even though the article states “[The WIL LUNs] can either be RAID-group based LUNs, metaLUNs or Thick Pool LUNs”. So I don’t really know. Maybe it’s a VNX vs CX4 thing. Maybe not.

EMC – Listing pool disks with naviseccli

Mat's been doing some work on the DIY Heatmaps script, and came across an interesting bug in naviseccli. It seems that using the "-list -disks" option displays the drives for only one of the pools (the rest come back with an empty Disks field), whereas "-list -all" displays the disks for every pool. This seems to have popped up in the latest version of naviseccli. It's a bit weird. We're running these commands against CX4 arrays running the latest R30 FLARE. Whether this behaves differently on the VNX or not, I'm not sure. Notice also that Mat has started using ******** as his password too ;)

C:\scripts\heatmap>naviseccli -help
@(#)Navisphere naviseccli Revision 7.31.30.0.90 on Sun Dec 4 20:34:42 2011
Copyright (C) 1997-2011, EMC Corporation

C:\scripts\heatmap>naviseccli -user mat -password ******** -scope 0 -h SPB storagepool -list -disks
Pool Name: SP_SIL_10_EXCH_LOGS_1
Pool ID: 2
Disks:

Pool Name: SP_GOL_5
Pool ID: 0
Disks:

Pool Name: SP_SIL02_5
Pool ID: 6
Disks:

Pool Name: SP_SIL03_5
Pool ID: 7
Disks:

Pool Name: SP_ICMS_10
Pool ID: 4
Disks:

Pool Name: SP_SIL05_5_SQL
Pool ID: 9
Disks:

Pool Name: SP_SIL_10
Pool ID: 5
Disks:
Bus 4 Enclosure 7 Disk 5
Bus 0 Enclosure 2 Disk 11
Bus 6 Enclosure 6 Disk 7
Bus 6 Enclosure 6 Disk 9
Bus 3 Enclosure 2 Disk 11
Bus 0 Enclosure 2 Disk 12
Bus 2 Enclosure 7 Disk 3
Bus 2 Enclosure 7 Disk 5
Bus 4 Enclosure 7 Disk 10
Bus 4 Enclosure 7 Disk 12
Bus 1 Enclosure 7 Disk 3
Bus 1 Enclosure 7 Disk 5
Bus 1 Enclosure 7 Disk 7
Bus 7 Enclosure 0 Disk 6
Bus 4 Enclosure 7 Disk 7
Bus 4 Enclosure 7 Disk 9
Bus 7 Enclosure 0 Disk 11
Bus 2 Enclosure 7 Disk 7
Bus 2 Enclosure 7 Disk 9
Bus 7 Enclosure 0 Disk 8
Bus 6 Enclosure 6 Disk 11
Bus 4 Enclosure 7 Disk 6
Bus 0 Enclosure 2 Disk 6
Bus 0 Enclosure 2 Disk 8
Bus 2 Enclosure 7 Disk 2
Bus 2 Enclosure 7 Disk 4
Bus 2 Enclosure 7 Disk 6
Bus 4 Enclosure 7 Disk 11
Bus 1 Enclosure 7 Disk 2
Bus 1 Enclosure 7 Disk 4
Bus 1 Enclosure 7 Disk 6
Bus 7 Enclosure 0 Disk 5
Bus 7 Enclosure 0 Disk 7
[snip]

Bus 5 Enclosure 2 Disk 6
Bus 5 Enclosure 2 Disk 8
Bus 6 Enclosure 6 Disk 5
Bus 2 Enclosure 7 Disk 8
Bus 3 Enclosure 2 Disk 6
Bus 3 Enclosure 2 Disk 8
Bus 5 Enclosure 2 Disk 11
Bus 7 Enclosure 0 Disk 9
Bus 6 Enclosure 6 Disk 10
Bus 6 Enclosure 6 Disk 12

Pool Name: SP_SIL04_5
Pool ID: 8
Disks:

Pool Name: SP_SIL_5_EXCH_DATA_1
Pool ID: 1
Disks:

Pool Name: SP_SIL01_5
Pool ID: 3
Disks:

C:\scripts\heatmap>naviseccli -user mat -password ******** -scope 0 -h SPB storagepool -list -all
Pool Name: SP_SIL_10_EXCH_LOGS_1
Pool ID: 2
Raid Type: r_10
Percent Full Threshold: 70
Description:
Disk Type: Fibre Channel
State: Ready
Status: OK(0x0)
Current Operation: None
Current Operation State: N/A
Current Operation Status: N/A
Current Operation Percent Completed: 0
Raw Capacity (Blocks): 54036889600
Raw Capacity (GBs): 25766.797
User Capacity (Blocks): 26845184000
User Capacity (GBs): 12800.781
Consumed Capacity (Blocks): 21862046720
Consumed Capacity (GBs): 10424.636
Available Capacity (Blocks): 4983137280
Available Capacity (GBs): 2376.145

Percent Full: 81.438
Total Subscribed Capacity (Blocks): 21862046720
Total Subscribed Capacity (GBs): 10424.636
Percent Subscribed: 81.438
Oversubscribed by (Blocks): 0
Oversubscribed by (GBs): 0.000
Auto-Tiering: Manual
Tier Name: FC
Raid Type: r_10
User Capacity (GBs): 12800.78
Consumed Capacity (GBs): 6311.39
Available Capacity (GBs): 6489.40
Percent Subscribed: 49.30%
Data Targeted for Higher Tier (GBs): 0.00
Data Targeted for Lower Tier (GBs): 0.00
Disks (Type):
Bus 3 Enclosure 5 Disk 5 (Fibre Channel)
Bus 3 Enclosure 5 Disk 8 (Fibre Channel)
Bus 6 Enclosure 2 Disk 11 (Fibre Channel)
Bus 7 Enclosure 4 Disk 6 (Fibre Channel)
Bus 7 Enclosure 4 Disk 8 (Fibre Channel)
Bus 0 Enclosure 3 Disk 5 (Fibre Channel)
Bus 0 Enclosure 3 Disk 7 (Fibre Channel)
Bus 0 Enclosure 3 Disk 9 (Fibre Channel)
Bus 5 Enclosure 4 Disk 11 (Fibre Channel)
Bus 6 Enclosure 2 Disk 12 (Fibre Channel)
Bus 4 Enclosure 2 Disk 11 (Fibre Channel)
Bus 1 Enclosure 5 Disk 5 (Fibre Channel)
Bus 1 Enclosure 5 Disk 8 (Fibre Channel)
Bus 5 Enclosure 4 Disk 6 (Fibre Channel)
Bus 5 Enclosure 4 Disk 8 (Fibre Channel)
Bus 4 Enclosure 2 Disk 6 (Fibre Channel)
Bus 4 Enclosure 2 Disk 8 (Fibre Channel)
Bus 2 Enclosure 3 Disk 11 (Fibre Channel)
Bus 3 Enclosure 5 Disk 10 (Fibre Channel)
[snip]
Bus 6 Enclosure 2 Disk 10 (Fibre Channel)
Bus 7 Enclosure 4 Disk 7 (Fibre Channel)
Bus 7 Enclosure 4 Disk 9 (Fibre Channel)
Bus 1 Enclosure 5 Disk 11 (Fibre Channel)
Bus 1 Enclosure 5 Disk 6 (Fibre Channel)
Bus 6 Enclosure 2 Disk 5 (Fibre Channel)
Bus 6 Enclosure 2 Disk 7 (Fibre Channel)
Bus 6 Enclosure 2 Disk 9 (Fibre Channel)
Bus 7 Enclosure 4 Disk 12 (Fibre Channel)
Bus 5 Enclosure 4 Disk 7 (Fibre Channel)
Bus 5 Enclosure 4 Disk 9 (Fibre Channel)
Bus 4 Enclosure 2 Disk 7 (Fibre Channel)
Bus 4 Enclosure 2 Disk 9 (Fibre Channel)
Bus 4 Enclosure 2 Disk 12 (Fibre Channel)
Bus 7 Enclosure 4 Disk 11 (Fibre Channel)
Bus 2 Enclosure 3 Disk 5 (Fibre Channel)
Bus 2 Enclosure 3 Disk 7 (Fibre Channel)
Bus 2 Enclosure 3 Disk 9 (Fibre Channel)

Disks:
LUNs: 350, 316, 393, 354, 397, 318, 358, 391, 392, 353, 396, 360, 319, 356, 317, 357, 394, 355, 399, 390, 398, 352, 395, 315, 351, 359
FAST Cache: Disabled

Pool Name: SP_GOL_5
Pool ID: 0
Raid Type: r_5
Percent Full Threshold: 70
Description:
Disk Type: Fibre Channel
State: Ready
Status: OK(0x0)
Current Operation: None
Current Operation State: N/A
Current Operation Status: N/A
Current Operation Percent Completed: 0
Raw Capacity (Blocks): 25329792000
Raw Capacity (GBs): 12078.186
User Capacity (Blocks): 20259724800
User Capacity (GBs): 9660.590
Consumed Capacity (Blocks): 17973689600
Consumed Capacity (GBs): 8570.523
Available Capacity (Blocks): 2286035200
Available Capacity (GBs): 1090.067
Percent Full: 88.716
Total Subscribed Capacity (Blocks): 17973689600
Total Subscribed Capacity (GBs): 8570.523
Percent Subscribed: 88.716
Oversubscribed by (Blocks): 0
Oversubscribed by (GBs): 0.000
Auto-Tiering: Scheduled

Tier Name: FC
Raid Type: r_5
User Capacity (GBs): 9660.59
Consumed Capacity (GBs): 3417.21
Available Capacity (GBs): 6243.38
Percent Subscribed: 35.37%
Data Targeted for Higher Tier (GBs): 0.00
Data Targeted for Lower Tier (GBs): 0.00
Disks (Type):
Bus 1 Enclosure 2 Disk 8 (Fibre Channel)
Bus 1 Enclosure 2 Disk 6 (Fibre Channel)
Bus 0 Enclosure 5 Disk 9 (Fibre Channel)
Bus 0 Enclosure 5 Disk 7 (Fibre Channel)
Bus 0 Enclosure 5 Disk 5 (Fibre Channel)
Bus 4 Enclosure 0 Disk 11 (Fibre Channel)
Bus 4 Enclosure 0 Disk 13 (Fibre Channel)
Bus 2 Enclosure 0 Disk 8 (Fibre Channel)
Bus 2 Enclosure 0 Disk 6 (Fibre Channel)
Bus 1 Enclosure 2 Disk 5 (Fibre Channel)
Bus 6 Enclosure 0 Disk 11 (Fibre Channel)
Bus 6 Enclosure 0 Disk 13 (Fibre Channel)
Bus 3 Enclosure 0 Disk 11 (Fibre Channel)
Bus 3 Enclosure 0 Disk 13 (Fibre Channel)
Bus 2 Enclosure 0 Disk 9 (Fibre Channel)
Bus 2 Enclosure 0 Disk 7 (Fibre Channel)
Bus 2 Enclosure 0 Disk 5 (Fibre Channel)
Bus 1 Enclosure 2 Disk 9 (Fibre Channel)
Bus 1 Enclosure 2 Disk 7 (Fibre Channel)
Bus 0 Enclosure 5 Disk 8 (Fibre Channel)
Bus 0 Enclosure 5 Disk 6 (Fibre Channel)
Bus 6 Enclosure 0 Disk 10 (Fibre Channel)
Bus 6 Enclosure 0 Disk 12 (Fibre Channel)
Bus 6 Enclosure 0 Disk 14 (Fibre Channel)
Bus 3 Enclosure 0 Disk 10 (Fibre Channel)
Bus 3 Enclosure 0 Disk 12 (Fibre Channel)
Bus 3 Enclosure 0 Disk 14 (Fibre Channel)
Bus 4 Enclosure 0 Disk 10 (Fibre Channel)
Bus 4 Enclosure 0 Disk 12 (Fibre Channel)
Bus 4 Enclosure 0 Disk 14 (Fibre Channel)

Disks:
LUNs: 1102, 902, 997, 1987, 900, 1988, 986, 1983, 984, 901, 988, 1986, 1984, 985, 983, 700, 1101, 998, 987, 1985
FAST Cache: Enabled

Pool Name: SP_SIL02_5
Pool ID: 6
Raid Type: r_5
Percent Full Threshold: 70
Description:
Disk Type: Fibre Channel
State: Ready
Status: OK(0x0)
Current Operation: None
Current Operation State: N/A
Current Operation Status: N/A
Current Operation Percent Completed: 0
Raw Capacity (Blocks): 50659584000
Raw Capacity (GBs): 24156.372
User Capacity (Blocks): 40519449600
User Capacity (GBs): 19321.179
Consumed Capacity (Blocks): 38644481280
Consumed Capacity (GBs): 18427.125
Available Capacity (Blocks): 1874968320
Available Capacity (GBs): 894.055
Percent Full: 95.373
Total Subscribed Capacity (Blocks): 38644481280
Total Subscribed Capacity (GBs): 18427.125
Percent Subscribed: 95.373
Oversubscribed by (Blocks): 0
Oversubscribed by (GBs): 0.000
Auto-Tiering: Scheduled

Tier Name: FC
Raid Type: r_5
User Capacity (GBs): 19321.18
Consumed Capacity (GBs): 8620.53
Available Capacity (GBs): 10700.65
Percent Subscribed: 44.62%
Data Targeted for Higher Tier (GBs): 0.00
Data Targeted for Lower Tier (GBs): 0.00
Disks (Type):
Bus 6 Enclosure 7 Disk 14 (Fibre Channel)
Bus 6 Enclosure 7 Disk 12 (Fibre Channel)
Bus 6 Enclosure 7 Disk 10 (Fibre Channel)
Bus 2 Enclosure 0 Disk 14 (Fibre Channel)
Bus 2 Enclosure 0 Disk 12 (Fibre Channel)
Bus 2 Enclosure 0 Disk 10 (Fibre Channel)
Bus 0 Enclosure 5 Disk 14 (Fibre Channel)
Bus 0 Enclosure 5 Disk 12 (Fibre Channel)
Bus 0 Enclosure 5 Disk 10 (Fibre Channel)
Bus 5 Enclosure 0 Disk 9 (Fibre Channel)
Bus 5 Enclosure 0 Disk 1 (Fibre Channel)
Bus 5 Enclosure 5 Disk 5 (Fibre Channel)
Bus 6 Enclosure 7 Disk 6 (Fibre Channel)
Bus 5 Enclosure 0 Disk 4 (Fibre Channel)
Bus 5 Enclosure 0 Disk 2 (Fibre Channel)
Bus 5 Enclosure 0 Disk 6 (Fibre Channel)
Bus 5 Enclosure 5 Disk 7 (Fibre Channel)
Bus 3 Enclosure 3 Disk 8 (Fibre Channel)
Bus 3 Enclosure 3 Disk 6 (Fibre Channel)
Bus 1 Enclosure 2 Disk 14 (Fibre Channel)
Bus 1 Enclosure 2 Disk 12 (Fibre Channel)
Bus 1 Enclosure 2 Disk 10 (Fibre Channel)
Bus 7 Enclosure 5 Disk 5 (Fibre Channel)
Bus 5 Enclosure 0 Disk 11 (Fibre Channel)
Bus 5 Enclosure 0 Disk 13 (Fibre Channel)
Bus 2 Enclosure 0 Disk 13 (Fibre Channel)
Bus 2 Enclosure 0 Disk 11 (Fibre Channel)
Bus 0 Enclosure 5 Disk 13 (Fibre Channel)
Bus 0 Enclosure 5 Disk 11 (Fibre Channel)
Bus 7 Enclosure 5 Disk 8 (Fibre Channel)
Bus 6 Enclosure 7 Disk 9 (Fibre Channel)
Bus 6 Enclosure 7 Disk 7 (Fibre Channel)
Bus 5 Enclosure 0 Disk 8 (Fibre Channel)
Bus 3 Enclosure 3 Disk 14 (Fibre Channel)
Bus 3 Enclosure 3 Disk 12 (Fibre Channel)
Bus 3 Enclosure 3 Disk 10 (Fibre Channel)
Bus 5 Enclosure 5 Disk 6 (Fibre Channel)
Bus 5 Enclosure 0 Disk 3 (Fibre Channel)
Bus 5 Enclosure 0 Disk 7 (Fibre Channel)
Bus 5 Enclosure 0 Disk 5 (Fibre Channel)
Bus 3 Enclosure 3 Disk 9 (Fibre Channel)
Bus 3 Enclosure 3 Disk 7 (Fibre Channel)
Bus 3 Enclosure 3 Disk 5 (Fibre Channel)
Bus 6 Enclosure 7 Disk 13 (Fibre Channel)
Bus 6 Enclosure 7 Disk 11 (Fibre Channel)
Bus 5 Enclosure 0 Disk 10 (Fibre Channel)
Bus 5 Enclosure 0 Disk 12 (Fibre Channel)
Bus 5 Enclosure 0 Disk 14 (Fibre Channel)
Bus 5 Enclosure 5 Disk 14 (Fibre Channel)
Bus 7 Enclosure 5 Disk 7 (Fibre Channel)
Bus 7 Enclosure 5 Disk 9 (Fibre Channel)
Bus 6 Enclosure 7 Disk 8 (Fibre Channel)
Bus 5 Enclosure 0 Disk 0 (Fibre Channel)
Bus 3 Enclosure 3 Disk 13 (Fibre Channel)
Bus 3 Enclosure 3 Disk 11 (Fibre Channel)
Bus 6 Enclosure 7 Disk 5 (Fibre Channel)
Bus 5 Enclosure 5 Disk 8 (Fibre Channel)
Bus 1 Enclosure 2 Disk 13 (Fibre Channel)
Bus 1 Enclosure 2 Disk 11 (Fibre Channel)
Bus 7 Enclosure 5 Disk 6 (Fibre Channel)

Disks:
LUNs: 7214, 501, 993, 999, 701, 509, 991, 990, 936, 500, 996, 994, 992, 1100
FAST Cache: Enabled

EMC – Naviseccli, disks and Virtual Pools

EMC seem to be calling Storage Pools Virtual Pools now. Or maybe they always called them that. I'm not sure. Whatever you want to call them, you need to be aware that some of the commands you traditionally ran against RAID Groups and LUNs don't necessarily yield the same results on Pools. For example, if I want some information on a disk and any LUNs bound on it, I can run the following command, with the disk referenced in B_E_D (Bus_Enclosure_Disk) format.

naviseccli -h sp-ip-address getdisk 0_2_5

Bus 0 Enclosure 2  Disk 5
Vendor Id:             SEAGATE
Product Id:            ST345085 CLAR450
Product Revision:        HC08
Lun:                     Unbound
Type:                    N/A
State:                   Enabled
Hot Spare:               NO
Prct Rebuilt:            Unbound
Prct Bound:              Unbound
Serial Number:           3QQ1WJW9
Sectors:                 N/A
Capacity:                412268
Private:                 Unbound
Bind Signature:          0x514, 2, 5
Hard Read Errors:        0
Hard Write Errors:       0
Soft Read Errors:        0
Soft Write Errors:       0
Read Retries:     N/A
Write Retries:    N/A
Remapped Sectors:        N/A
Number of Reads:         132578985
Number of Writes:        55935766
Number of Luns:          0
Raid Group ID:           N/A
Clariion Part Number:    DG118032601
Request Service Time:    N/A
Read Requests:           132578985
Write Requests:          55935766
Kbytes Read:             956277952
Kbytes Written:          805876939
Stripe Boundary Crossing: None
Drive Type:              Fibre Channel
Clariion TLA Part Number:005048849
User Capacity:           0
Idle Ticks:              166532928
Busy Ticks:              7964279
Current Speed: 4Gbps
Maximum Speed: 4Gbps

Note that it reports the disk as unbound. It's not really unbound though, it's part of a Storage Pool. So to get information about the LUNs in a Pool, you'll need to run a command that specifically addresses Storage Pools.

naviseccli -h sp-ip-address storagepool -list [-id poolID|-name poolName] [-availableCap] [-consumedCap] [-currentOp] [-description] [-disks] [-diskType] [-luns] [-opState] [-opStatus] [-prcntOp] [-rawCap] [-rtype] [-prcntFullThreshold] [-state] [-status] [-subscribedCap] [-userCap] [-prcntFull]

You can’t use -id and -name together. To see the overhead of using pool storage, -consumedCap will give you the total. Unfortunately -diskType is not as useful as I’d hoped, because if you’re running different types it comes back with “Mixed”.
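
For example, to pull just the disks and the consumed capacity for a single pool (pool ID 5 is just an illustration), something like this should do the trick:

naviseccli -h sp-ip-address storagepool -list -id 5 -disks -consumedCap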

If you want to modify a pool’s configuration, you can use -modify to, er, modify the pool.

naviseccli -h sp-ip-address storagepool -modify -id poolID | -name poolName [-newName newName] [-description description] [-fastcache on|off] [-prcntFullThreshold threshold] [-autotiering scheduled|manual] [-o]

The cool thing about this is that you can turn FAST Cache on and off, and use it to modify FAST auto-tiering as well. This could be very useful where you want certain workloads to use FAST Cache during the day, but want it turned off during backup windows.
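
For example, a pair of scheduled tasks along these lines would turn FAST Cache off for the backup window and back on again afterwards. The pool name is made up, and I'd test how the -o override behaves on your array before handing this to a scheduler:

naviseccli -h sp-ip-address storagepool -modify -name "Pool_1" -fastcache off -o
naviseccli -h sp-ip-address storagepool -modify -name "Pool_1" -fastcache on -o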

Finally, my favourite switch is -feature -info, which lists a pool’s configuration information.

naviseccli -h sp-ip-address storagepool -feature -info [-isVirtualProvisioningSupported] [-maxPools] [-maxDiskDrivesPerPool] [-maxDiskDrivesAllPools] [-maxDiskDrivesPerOp] [-maxPoolLUNs] [-minPoolLUNSize] [-maxPoolLUNSize] [-numPools] [-numPoolLUNs] [-numThinLUNs] [-numDiskDrivesAllPools] [-availableDisks]

The main benefit of this command is that you don't need to remember the maximum number of disks you can put in a pool. In an environment where you may have a number of different models of CLARiiON and / or VNX, this will save some time digging through various PDF files from EMC.
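
For example, to check the disk limits and what's actually available to play with on a given array, something like this should do it:

naviseccli -h sp-ip-address storagepool -feature -info -maxDiskDrivesPerPool -maxDiskDrivesAllPools -availableDisks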

EMC – Sometimes RAID 6 can be a PITA

This is really a quick post to discuss how RAID 6 can be a bit of a pain to work with when you're trying to combine traditional CLARiiON / VNX DAEs and Storage Pool best practices. It's no secret that EMC strongly recommend using RAID 6 when you're using SATA-II / NL-SAS drives that are 1TB or greater. Which is a fine and reasonable thing to recommend. However, as you're no doubt aware, the current implementation of FAST VP uses Storage Pools that require homogeneous RAID types. So you need multiple Pools if you want to run both RAID 1/0 and RAID 6. If you want a single pool that can leverage FAST to move slices between EFD, SAS, and NL-SAS, it all needs to be RAID 6. There are a couple of issues with this. Firstly, given the price of EFDs, a RAID 6 (6+2) of EFDs is going to feel like a lot of money down the drain. Secondly, if you stick with the default RAID 6 implementation for Storage Pools, you'll be using 6+2 in the private RAID groups. And then you'll find yourself putting private RAID groups across backend ports. This isn't as big an issue as it was with the CX4, but it still smells a bit ugly.

What I have found, however, is that you can get the CLARiiON to create non-standard sized RAID 6 private RAID groups. If you create a pool with 10 spindles in RAID 6, it will create a private RAID group in an 8+2 configuration. This seems to be the magic number at the moment. If you add 12 disks to the pool it will create two 4+2 private RAID groups, and if you use 14 disks it will do a 6+2 and a 4+2 RAID group. Now, the cool thing about 10 spindles in a private RAID group is that you could, theoretically (I'm extrapolating from the VNX Best Practices document here), split the 8+2 across two DAEs in a 5+5. In this fashion, you can improve rebuild times slightly in the event of a disk failure, and you can also draw some sensible designs that fit well in a traditional DAE4P. Of course, creating your pools in increments of 10 disks is going to be a pain, particularly for larger Storage Pools, and particularly as there is no re-striping of data done after a pool expansion. But I'm sure EMC are focussing on this issue, as a lot of customers have had a problem with the initial approach. The downside to all this, of course, is that you're going to suffer a capacity and, to a lesser extent, performance penalty by using RAID 6 across the board. In this instance you need to consider whether FAST VP is going to give you the edge over split RAID pools or traditional RAID Groups.
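
For what it's worth, creating one of these 10-spindle RAID 6 pools from the CLI looks something like the following. I'm going from memory on the exact storagepool -create syntax (so double-check it against your FLARE revision), the pool name is made up, and the disk IDs are placeholders chosen to split 5+5 across two DAEs on different buses:

naviseccli -h <sp-ip> storagepool -create -rtype r_6 -name "R6_Pool_01" -disks 0_1_0 0_1_1 0_1_2 0_1_3 0_1_4 1_1_0 1_1_1 1_1_2 1_1_3 1_1_4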

I personally like the idea of Storage Pools, and I’m glad EMC have gotten on-board with them in their midrange stuff. I’m also reasonably optimistic that they’re working on addressing a lot of issues that have come up in the field. I just don’t know when that will be.

EMC CLARiiON VNX7500 Configuration guidelines – Part 2

In this episode of EMC CLARiiON VNX7500 Configuration Guidelines, I thought it would be useful to discuss Storage Pools, RAID Groups and Thin things (specifically Thin LUNs). But first you should go away and read Vijay’s blog post on Storage Pool design considerations. While you’re there, go and check out the rest of his posts, because he’s a switched-on dude. So, now you’ve done some reading, here’s a bit more knowledge.

By default, RAID groups should be provisioned in a single DAE. You can theoretically provision across buses for increased performance, but oftentimes you’ll just end up with crap everywhere. Storage Pools obviously change this, but you still don’t want to bind the Private RAID Groups across DAEs. But if you did, for example, want to bind a RAID 1/0 RAID Group across two buses – for performance and resiliency – you could do it thusly:

naviseccli -h <sp-ip> createrg 77 0_1_0 1_1_0 0_1_1 1_1_1

Where the numbers refer to the standard format Bus_Enclosure_Disk.

The maximum number of Storage Pools you can configure is 60. It is recommended that a pool should contain a minimum of 4 private RAID groups. While it is tempting to just make the whole thing one big pool, you will find that segregating LUNs into different pools may still be useful for FAST Cache performance, availability, etc. Remember kids, look at the I/O profile of the projected workload, not just the capacity requirements. Mixing drives with different performance characteristics in a homogeneous pool is also contraindicated. When you create a Storage Pool, the following Private RAID Group configurations are considered optimal (depending on the RAID type of the Pool):

  • RAID 5 – 4+1
  • RAID 1/0 – 4+4
  • RAID 6 – 6+2

Pay attention to this, because you should always ensure that a Pool's private RAID groups align with traditional RAID Group best practices, which means sticking to these numbers. So don't design a 48 spindle RAID 5 Pool (it doesn't divide evenly into 4+1 private RAID Groups). That will be, er, non-optimal.

EMC recommend that if you're going to blow a wad of cash on SSDs / EFDs, you should do it on FAST cache before making use of the EFD Tier.

With current revisions of FLARE 30 and 31, data is not re-striped when a pool is expanded. It's also important to understand that preference is given to the new capacity rather than the original storage until all drives in the Pool are at roughly the same level of used capacity. So if you have data on a 30-spindle Pool, and then add another 15 spindles to the Pool, new data goes to the new spindles first to even up the capacity. It's crap, but deal with it, and plan your Pool configurations before you deploy them. For RAID 1/0, avoid private RAID Groups of 2 drives.

A Storage Pool on the VNX7500 can be created with, or expanded by, up to 180 drives at a time, and you should keep the expansion increments the same as the initial creation. If you are considering the use of drives greater than 1TB, use RAID 6. When FAST VP is working with Pools, remember that you're limited to one RAID type per pool. So if you want to get fancy with different RAID types and tiers, you'll need to consider using additional Pools to accommodate this. It is, however, possible to mix thick and thin LUNs in the same Pool. It's also important to remember that the consumed capacity for a Pool LUN = (User Consumed Capacity * 1.02) + 3GB. This can have an impact as capacity requirements increase.
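
To put a number on that overhead: a 2TB (2048GB) thick Pool LUN, for example, ends up consuming roughly (2048 * 1.02) + 3 = 2092GB of Pool capacity, and the gap only grows as the LUNs get bigger.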

 

A LUN's tiering policy can be changed after the initial allocation of the LUN. FAST VP has the following data placement options: Lowest, Highest, Auto and No Movement. This can present some problems if you want to create a 3-tier Pool, because there's no way to pin a LUN to the middle tier. The only workaround I could come up with was to create the Pool with 2 tiers and place LUNs at Highest and Lowest. Then add the third tier, keep the LUNs destined for the new highest tier set to Highest, and change the LUNs that should stay on what is now the middle tier to No Movement. A better solution would be to create the Pool with all the tiers you want, put all of your LUNs on Auto placement, and let FAST VP sort it out for you. But if you have a lot of LUNs, this can take time.

For thin NTFS LUNs – use Microsoft's sdelete to zero free space. When using LUN Compression, note that Private LUNs (Meta Components, Snapshots, RLP) cannot be compressed. EMC recommends that compression only be used for archival data that is infrequently accessed. Finally, you can't defragment RAID 6 RAID Groups – so pay attention when you're putting LUNs in those RAID Groups.
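
If you haven't used sdelete before, zeroing the free space on a thin NTFS volume is a one-liner. Note that the switch varies between sdelete versions (newer releases use -z to zero free space, older ones used -c), so check the usage output of the copy you have; E: is obviously just an example drive letter:

sdelete -z E: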

EMC – Configuring Storage Pools on a CLARiiON (minor update)

I was going to re-do the “Configuring Storage Pools on a CLARiiON” document with some updated screenshots to demonstrate how you could create storage pools with different disk types (in preparation for FAST) but thought that wouldn’t be so exciting. So I’ve taken a screenshot and put it here instead.

In this example Unisphere has grabbed all of the EFDs in the array, including the 8 that we’d purchased to use for FAST Cache (not yet configured). As some of my colleagues have been learning the hard way, you need to pay attention when Unisphere makes suggestions for disk selections, because it’s not always the right suggestion.

New Article – Configuring Storage Pools on a CLARiiON

I’ve added another 16 page pdf that states the obvious, in a visual way, about how to configure storage pools on the CLARiiON. Check out my other articles while you’re here.