EMC – Configure FAST Cache disks with naviseccli

I’m sorry I couldn’t think of a fancy title for this post, but did you know you can configure FAST Cache with naviseccli? I can’t remember whether I’ve talked about this before or not. So just go with it. This one’s quick and dirty, by the way. I won’t be talking about where you should be putting your EFDs in the array. That really depends on the model of array you have and the number of EFDs at your disposal. But don’t just go and slap them in any old way. Please, think of the children.

To use FAST Cache, you’ll need:

  • The FAST Cache enabler installed (there’s a quick way to check this and the next point after the list);
  • EFD disks that are not in a RAID group or Storage Pool;
  • To have configured the FAST Cache (duh);
  • The correct number of disks for the model of CLARiiON or VNX you’re configuring; and
  • To have enabled FAST Cache for the RAID group LUNs and/or the pools with LUNs that will use FAST Cache.
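
If you want to sanity check the first couple of those from the CLI, something like this should do it – I’m going from memory on ndu -list and getdisk -state, and sp-ip-address is obviously a placeholder for one of your SPs. ndu -list shows the installed enablers, and getdisk -state shows which disks are bound and which aren’t.

naviseccli -h sp-ip-address ndu -list

naviseccli -h sp-ip-address getdisk -state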

Basically, you can run the following switches after the standard naviseccli -h sp-ip-address:

cache -fast -create – this creates FAST Cache.

cache -fast -destroy – this destroys FAST Cache.

cache -fast -info – this displays FAST Cache information.
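
So a complete command ends up looking something like this – here I’m tearing the FAST Cache down again, with sp-ip-address being a placeholder for one of your SPs:

naviseccli -h sp-ip-address cache -fast -destroy

I believe you can also tack -o on the end to skip the confirmation prompt, the same as with -create.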

When you create FAST Cache, you have the following options:

cache -fast -create -disks disksList [-rtype raidtype] [-mode ro|rw] [-o]

Here is what the options mean:

-disks disksList – You need to specify what disks you’re adding, or it no worky. Also, pay close attention to the order in which you bind the disks.

-mode ro|rw – ro is read-only mode and rw is read/write mode.

-rtype raidtype – I don’t know why this is in here, but valid RAID types are disk and r_1.

-o – Just do it and stop asking questions! (In other words, it overrides the confirmation prompt.)

naviseccli -h sp-ip-address cache -fast -create -disks 0_1_6 1_1_6 -mode rw -rtype r_1

In this example I’ve used disks on Bus 0, Enclosure 1, Disk 6 and Bus 1, Enclosure 1, Disk 6.

Need info about what’s going on? Use the following command:

cache -fast -info [-disks] [-status] [-perfData]

I think -perfData is one of the more interesting options here.
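
For what it’s worth, a complete info command looks something like this (again, sp-ip-address is a placeholder):

naviseccli -h sp-ip-address cache -fast -info -disks -status -perfData

Running it with just -status should also be handy while the cache is being created, as it tells you where the operation is at.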

EMC – FAST and FAST Cache on the CX4-960

Apologies for the lack of posts over the last few months – I have been nuts deep in work and holidays. I’m working on some literature around Storage Pools and FAST in general, but in the meantime I thought I’d share this nugget with you. We finally got approval to install the FAST and FAST Cache enablers on our production CX4-960s a few nights ago. We couldn’t install them on one of the arrays because we had a dead disk that prevented the NDU from going ahead. Fair enough. Two awesome things happened when we installed it on the other array, both of which could have been avoided if I’d had my shit together.

Firstly, when I got into the office the next morning at 8am, we noticed that the Read Cache on the array was disabled. For those of you playing at home, we had the cache on the 960 set at 1000MB for Read and 9760MB for Write – I think I read that in a whitepaper somewhere. But after FAST went on, we still had 9760MB allocated to Write and 0MB available for Read. Awesome? Not so much. It seems we lost 1000MB of available cache, presumably because we added another layered application. Funnily enough, we didn’t observe this behaviour on our lab CX4-120s, although you could argue that they really have sweet FA of cache in the first place. So now we have 8760MB for Write and 1000MB for Read, and I’m about to configure a few hundred GB of FAST Cache on the EFDs in any case. We’ll see how that goes.

The other slightly boneheaded thing we did was forget to trespass the LUNs that belong on SP A back from SP B. For those unfamiliar with the process, an NDU applies code to SP B first, reboots that SP, checks it, and then loads code on the other SP. As part of this, LUN ownership is temporarily trespassed to the surviving SP (that’s the whole non-disruptive thing). Once the NDU is complete, you should go and check for trespassed LUNs and move them back to their default owners. Or not, and have everything run on one SP for a while, and wait for about 9000 Exchange users to complain when one of the Exchange clusters goes off-line. Happy days.
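
If you’d rather check and fix this from the CLI than click through the GUI, something along these lines should do it – treat it as a sketch, as I’m going from memory on getlun -trespass and trespass mine, and spa-ip-address / spb-ip-address are placeholders for your SPs:

naviseccli -h spa-ip-address getlun -trespass

naviseccli -h spa-ip-address trespass mine

naviseccli -h spb-ip-address trespass mine

The getlun output shows you which LUNs are currently trespassed, and running trespass mine against an SP tells it to take back every LUN it’s the default owner of – which, after an NDU, is exactly what you want.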