EMC – new FLARE 30 available

Just a quick note that FLARE 30 for the CX4 has received an update to version .524. I haven’t gotten hold of the release notes yet, so I can’t say what’s been fixed, etc. As always, go to Powerlink for more info, and talk to your local EMC people about whether it’s appropriate to upgrade your CX4(s).
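
If you want to check which FLARE revision an array is currently running before you have that conversation, naviseccli will tell you without a trip to Unisphere. A quick sketch (the address is a placeholder, and the Revision field in the getagent output is the bit you’re after):

C:\>naviseccli -h 256.256.256.256 getagent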

EMC – Broken Vault drive munts FAST Cache

Mat sent me an e-mail this morning, asking “Why would FAST Cache be degraded after losing B0 E0 D2 in one of the CX4-960s?”. For those of you playing at home, 0_0_2 (Bus 0, Enclosure 0, Disk 2) is one of the Vault disks in the CX4 and VNX. Here’s a picture of the error:

Check out the 0x7576 that pops up shortly after the array says there’s a faulted disk. Here’s a closeup of the error:

Weird, huh?  So here’s the output of the naviseccli command that will give you the same information, but with a text-only feel.

"c:/Program Files/EMC/Navisphere CLI/NaviSECCli.exe"  -user Ebert -scope 0 -password xx -h 255.255.255.255  cache -fast -info -disks -status
Disks:
Bus 0 Enclosure 7 Disk 0
Bus 2 Enclosure 7 Disk 0
Bus 0 Enclosure 7 Disk 1
Bus 2 Enclosure 7 Disk 1
Bus 1 Enclosure 7 Disk 1
Bus 1 Enclosure 7 Disk 0
Bus 3 Enclosure 7 Disk 1
Bus 3 Enclosure 7 Disk 0
Mode:  Read/Write
Raid Type:  r_1
Size (GB):  366
State:  Enabled_Degraded
Current Operation:  N/A
Current Operation Status:  N/A
Current Operation Percent Completed:  N/A

So what’s with the degraded cache? The reason is that FAST Cache stores a small database on the first 3 drives (0_0_0, 0_0_1 and 0_0_2). If any of these disks fails, FAST Cache flushes to disk and goes into a degraded state. It shouldn’t, though, because that database is triple-mirrored. And what does degraded mean exactly? It means your FAST Cache is not processing writes at the moment. Which is considered “bad darts”.

This is a bug. Have a look on Powerlink for emc267579. Hopefully this will be fixed in R32 for the VNX; I couldn’t see details for the CX4 though. I strongly recommend that if you’re a CX4 user and you hit this issue, you raise a service request with your local EMC support mechanisms as soon as possible. The only way they get to know the severity of a problem is if people in the field feed issues back.

EMC – DIY Heatmaps – Updated Version

Mat has put together an updated version of the heatmaps script for the CLARiiON, now with LUN info and other good things like that. You can download it here. Updated release notes can be found here. A sample of the output is here. Enjoy, and feel free to send requests for enhancements.

New Article – VNX5700 Configuration Guidelines

I’ve added a new article to the articles section of the blog. This one is basically a rehash of the recent posts I did on the VNX7500, but focussed on the VNX5700 instead. As always, your feedback is welcome.

EMC CLARiiON VNX7500 Configuration Guidelines – Part 3

One thing I didn’t really touch on in the first two parts of this series is the topic of RAID Groups and binding between disks on the DPE / DAE-OS and other DAEs. It’s a minor point, but something people tend to forget when looking at disk layouts. Ever since the days of Data General, the CLARiiON has used Vault drives in the first shelf. For reasons that are probably already evident, these drives, and the storage processors, are normally protected by a Standby Power Supply (SPS) or two. The SPS provides enough battery power in a power failure scenario such that cache can be copied to the Vault disks and data won’t be lost. This is a good thing.

The thing to keep in mind with this, however, is that the other DAEs in the array aren’t protected by this SPS. Instead, you plug them into UPS-protected power in your data centre. So when power to those is lost, they go down. This can cause “major dramas” with Background Verify operations when the array is rebooted. This is a sub-optimal situation to be in. The point of all this is that, as EMC have said for some time, you should bind RAID Groups across disks that either sit entirely within that first enclosure, or entirely outside it.

Now, if you really must split a RAID Group across the DPE and other DAEs, there are some additional recommendations (with an example after the list):

  • Don’t split RAID 1 groups between the DPE and another DAE;
  • For RAID 5, ensure that at least 2 drives are outside the DPE;
  • For RAID 6, ensure that at least 3 drives are outside the DPE;
  • For RAID 1/0 – don’t do it, you’ll go blind.
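
To make the RAID 5 rule concrete, here’s what a hypothetical split group might look like: a RAID 5 (4+1) with three drives in the DPE and two in the first DAE on Bus 1, so at least two drives sit outside the DPE. The RAID Group ID and disk positions are made up for the example:

C:\>naviseccli -h 256.256.256.256 createrg 20 0_0_10 0_0_11 0_0_12 1_0_0 1_0_1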

It’s a minor design consideration, but something I’ve witnessed in the field when people have either a) tried to be tricky on smaller systems, or b) have been undersold on their requirements and have needed to be creative. As an aside, it is also recommended that you don’t include drives from the DPE / DAE-OS in Storage Pools. This may or may not have an impact on your Pool design.

EMC – Configure the Reserved LUN Pool with naviseccli

I’ve been rebuilding our lab CLARiiONs recently, and wanted to configure the Reserved LUN Pool (RLP) for use with SnapView and MirrorView/Asynchronous. Having spent approximately 8 days per week in Unisphere lately performing storage provisioning, I’ve made it a goal of mine to never, ever have to log in to Unisphere to do anything again. While this may be unattainable, you can get an awful lot done with a combination of Microsoft Excel, Notepad and naviseccli.

So I needed to configure a Reserved LUN Pool for use with MV/A, SnapView Incremental SAN Copy, and so forth. I won’t go into the reasons for what I’ve created, but let’s just say I needed to create about 50 LUNs and give them each a label. Here’s what I did:

Firstly, I created a RAID Group with an ID of 1 using disks 5 – 9 in the first enclosure.

C:\>naviseccli -h 256.256.256.256 createrg 1 0_0_5 0_0_6 0_0_7 0_0_8 0_0_9

I then needed to bind a series of 20GB LUNs, 25 for each SP. If you’re handy with Excel you can generate the following command for each LUN with little fuss.

C:\>naviseccli -h 256.256.256.256  bind r5 50 -rg 1 -aa 0 -cap 20 -sp a -sq gb  

Here I’ve specified the RAID type (r5), the LUN ID (50), the RAID Group (1), -aa 0 (disabling auto-assign), -cap (the capacity), -sp (a or b), and -sq (the size qualifier, which can be mb|gb|tb|sc|bc). Note that if you don’t specify the LUN ID, the next available ID will be used automatically.
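
If Excel isn’t your thing, a simple cmd for loop will churn the binds out for you. This is a rough sketch only, based on the command above; it assumes LUN IDs 50 – 74 live on SP A and 75 – 99 on SP B, which may not be how you want to balance things:

@echo off
rem Hypothetical sketch - LUN IDs, RAID Group, capacity and address are assumptions.
rem 25 x 20GB LUNs owned by SP A (IDs 50-74)
for /l %%i in (50,1,74) do (
    naviseccli -h 256.256.256.256 bind r5 %%i -rg 1 -aa 0 -cap 20 -sp a -sq gb
)
rem 25 x 20GB LUNs owned by SP B (IDs 75-99)
for /l %%i in (75,1,99) do (
    naviseccli -h 256.256.256.256 bind r5 %%i -rg 1 -aa 0 -cap 20 -sp b -sq gb
)

Save it as a .cmd file and run it from a host with Navisphere CLI installed; use %i instead of %%i if you’re pasting the loops straight into a command prompt.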

Now that the LUNs are bound, I can use another command to give each one a label that corresponds with our naming standard (using our old friend chglun):

C:\>naviseccli -h 256.256.256.256 chglun -l 50 -name TESTLAB1_RLP01_0050

Once you’ve created the LUNs you require, you can then add them to the Reserved LUN Pool with the reserved command.

C:\>naviseccli -h 256.256.256.256 reserved -lunpool -addlun 99
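
The labelling and the RLP additions can be scripted in the same way, rather than typing 50 of each. The sketch below assumes the LUN IDs (50 – 99) and the TESTLAB1_RLP naming convention from the examples above; the zero-padding is only there so the names line up:

@echo off
setlocal enabledelayedexpansion
set /a rlp=1
for /l %%i in (50,1,99) do (
    rem Pad the RLP number to two digits so the names look like TESTLAB1_RLP01_0050.
    set padded=0!rlp!
    set padded=!padded:~-2!
    naviseccli -h 256.256.256.256 chglun -l %%i -name TESTLAB1_RLP!padded!_00%%i
    rem Add the -o override here if your FLARE release prompts for confirmation.
    naviseccli -h 256.256.256.256 reserved -lunpool -addlun %%i
    set /a rlp+=1
)
endlocal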

To check that everything’s in order, use the -list switch to get an output of the current RLP configuration.

C:\>naviseccli -h 256.256.256.256 reserved -lunpool -list
Name of the SP:  GLOBAL
Total Number of LUNs in Pool:  50
Number of Unallocated LUNs in Pool:  50
Unallocated LUNs:  53, 63, 98, 78, 71, 56, 88, 69, 92, 54, 99, 79, 72, 58, 81, 57, 85, 93, 61, 96, 67, 76, 86, 64, 50, 66, 52, 62, 68, 77, 89, 70, 55, 65, 91, 80, 73, 59, 82, 90, 94, 84, 97, 74, 60, 83, 95, 75, 87, 51
Total size in GB:  999.975586
Unallocated size in GB:  999.975586
Used LUN Pool in GB:  0
% Used of LUN Pool:  0
Chunk size in disk blocks:  128
No LUN in LUN Pool associated with target LUN.
C:\>

If, for some reason, you want to remove a LUN from the RLP, and it isn’t currently in use by one of the layered applications, you can use the -rmlun switch.

C:\>naviseccli -h 256.256.256.256 reserved -lunpool -rmlun 99 -o

If you omit the override [-o] option, the CLI prompts for confirmation before removing the LUN from the reserved LUN pool. You could argue that, with the ability to create multiple LUNs from Unisphere, it might be simpler not to bother with naviseccli, but I think it’s a very efficient way to get things done quickly, particularly if you’re working in a Unisphere domain with a large number of CLARiiONs, or on a workstation that has some internet browser “issues”.

EMC – Silly things you can do with stress testing – Part 2

I’ve got a bunch of graphs that indicate you can do some bad things to EFDs when you run certain SQLIO stress tests against them and compare the results to FC disks. But EMC is pushing back on the results I’ve gotten for a number of reasons. So in the interests of keeping things civil I’m not going to publish them – because I’m not convinced the results are necessarily valid and I’ve run out of time and patience to continue testing. Which might be what EMC hoped for – or I might just be feeling a tad cynical.

What I have learnt, though, is that it’s very easy to generate QFULL errors on a CX4 if you follow the EMC best practice configs for Qlogic HBAs and set the execution throttle to 256. In fact, you might even be better off leaving it at 16, unless you have a real requirement to set it higher. I’m happy for someone to tell me why EMC suggests it be set to 256, because I’ve not found a good reason for it yet. Of course, this is dependent on a number of environmental factors, but the 256 figure still has me scratching my head.

Another thing we uncovered during stress testing related to the queue depth of LUNs. For our initial testing, we had a Storage Pool created with 30 * 200GB EFDs, 70 * 450GB FC spindles, and 15 * 1TB SATA-II spindles with FAST-VP enabled. The LUNs on the EFDs were set to no data movement, so everything sat on the EFDs. We were getting fairly underwhelming performance stats out of this config, and the main culprit seemed to be the LUN queue depth. In a traditional RAID Group setup, the queue depth of a LUN is (14 * (the number of data drives in the LUN) + 32). So for a RAID 5 (4+1) LUN, the queue depth is 88. If, for some reason, you want to drive a LUN harder, you can increase this by using MetaLUNs, with the sum of the components providing the LUN’s queue depth – a MetaLUN built from four RAID 5 (4+1) components, for example, would get 4 * 88 = 352. What we observed on the Pool LUN, however, was that its queue depth seemed to stay fixed at 88, regardless of the number of internal RAID Groups servicing it. This seems like a bad thing, but it’s probably why EMC quietly suggest you stick to traditional MetaLUNs and RAID Groups if you need particular performance characteristics.

So what’s the point I’m trying to get at? Storage Pools and FAST-VP are awesome for the majority of workloads, but sometimes you need to use more traditional methods to get what you want. Which is why I spent last weekend using the LUN Migration tool to move 100TB of blocks around the array to get back to the traditional RAID Group / MetaLUN model. Feel free to tell me if you think I’ve gotten this arse-backwards too, because I really want to believe that I have.

EMC – getting the status of MirrorView operations with naviseccli

Tired of sifting through a Consistency Group to see the status of MirrorView synchronizations? Tire no more with the mirror -sync -listsyncprogress command!

c:\>naviseccli -address 256.256.256.256 mirror -sync -listsyncprogress

MirrorView Name:  MIRROR_FC_R5_DAT02_0030_SRM
Has Secondary Images:  YES
Image UID:  50:06:01:60:BB:20:36:C1
Image State:  Synchronizing
Synchronizing Progress(%):  97

MirrorView Name:  MIRROR_FC_R5_MNT01_0042_SRM
Has Secondary Images:  YES
Image UID:  50:06:01:60:BB:20:36:C1
Image State:  Synchronized
Synchronizing Progress(%):  100

MirrorView Name:  MIRROR_FC_R5_DAT04_0046_SRM
Has Secondary Images:  YES
Image UID:  50:06:01:60:BB:20:36:C1
Image State:  Synchronizing
Synchronizing Progress(%):  72

MirrorView Name:  MIRROR_EFD_R5_MSMQ01_0055_SRM
Has Secondary Images:  YES
Image UID:  50:06:01:60:BB:20:36:C1
Image State:  Synchronized
Synchronizing Progress(%):  100

EMC – Silly things you can do with stress testing – Part 1

I have a whole swag of things I want to talk about with regard to EMC CLARiiONs and stress testing with SQLIO, but the posts are still forming and I want to be sure that what I put on the internet is accurate (a novel concept, I know) before I publish them. What I can show you, though, is the performance of our 4Gbps FC ports when running a particular read test on EFDs. In this instance you can see how, conceivably, an 8Gbps FC fabric becomes useful. At least for benchmarking.

EMC – naviseccli getlun -capacity

I needed to run this command recently to get the block count of a pool LUN that I wanted to migrate to a traditional FLARE LUN. I’ll go into the reasons for the migration another time, but basically a pool LUN doesn’t show you the number of blocks consumed when viewed through Unisphere.

So I used naviseccli to report the block count accurately so I could create another LUN of exactly the same size.

I:\>naviseccli -address 256.256.256.256 getlun 432 -capacity
LUN Capacity(Megabytes):    1048576
LUN Capacity(Blocks):       2147483648

It’s also important to note that you cannot migrate a LUN using the LUN Migration tool to a LUN that is larger than the source. Test it for yourself if you don’t believe me. If you want to migrate a LUN to a larger destination you need to use SAN Copy. This also became an issue recently when I needed to migrate some Pool LUNs to traditional MetaLUNs and used components that were a block or two too large. Fortunately when you create a MetaLUN you can specify the correct block count / MB / GB / size.
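
If you’re binding a traditional destination LUN and want it to be exactly the same size as the source, the bc (block count) size qualifier mentioned earlier does the trick. A hypothetical example using the block count reported above (the LUN ID and RAID Group are made up for illustration):

C:\>naviseccli -h 256.256.256.256 bind r5 433 -rg 2 -cap 2147483648 -sq bc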

naviseccli – don’t hate it because it’s beautiful.