EMC CLARiiON VNX7500 Configuration guidelines – Part 1

I’ve been doing some internal design work and referencing the “EMC Unified Storage Best Practices for Performance and Availability Common Platform and Block Storage 31.0 – Applied Best Practices” document (rev 23/06/2011, h8268_VNX_Block_best_practices.pdf) fairly heavily. If you’re an EMC customer or partner you can get it from the Powerlink website. I thought it would be useful to put some of that information here, as much as anything as a personal reference. The first part of this two-part series focuses on configuration maximums for the VNX7500 – EMC’s flagship midrange array. The sequel will look at Storage Pools, RAID Groups and thin things. There may or may not be a third part covering some of the hardware configuration considerations. Note that the information here is based on the revision of the document referenced above, and some of these numbers will change with code updates.

Here are some useful numbers to know when considering a VNX7500 deployment:

  • Maximum RAID Groups – 1000;
  • Maximum drives per RAID Group – 16;
  • Minimum drives per RAID Group – R1/0 – 2, R5 – 3, R6 – 4;
  • Stripe Size R1/0 (4+4) and R5 (4+1) – 256KB, R6 (6+2) – 384KB;
  • Maximum LUNs (this includes private LUNs) – 8192;
  • Maximum LUNs per Pool (and across all Pools combined) – 2048;
  • Maximum LUNs per RAID Group – 256;
  • Maximum MetaLUNs per System – 2048;
  • Maximum Pool LUN size (thick or thin) – 16TB;
  • Maximum traditional LUN size – the capacity of the largest RAID Group built from the highest-capacity drives;
  • Maximum components per MetaLUN – 512;
  • EMC still recommends 1 Global Hot Spare per 30 drives (there’s a quick arithmetic sketch of this after the list).
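
As a quick sanity check when planning drive layouts, here’s a rough Python sketch that applies two of the numbers above – the minimum/maximum drives per RAID Group and the 1-hot-spare-per-30-drives guideline. It’s my own illustration rather than anything from the EMC document.

import math

# Minimum drives per RAID Group type and the 16-drive maximum, per the list above.
MIN_DRIVES = {"R1/0": 2, "R5": 3, "R6": 4}
MAX_DRIVES_PER_RG = 16

def validate_raid_group(raid_type, drives):
    """Check a proposed RAID Group against the documented drive-count limits."""
    if raid_type not in MIN_DRIVES:
        raise ValueError("Unknown RAID type: %s" % raid_type)
    if not MIN_DRIVES[raid_type] <= drives <= MAX_DRIVES_PER_RG:
        raise ValueError("%s needs %d to %d drives, got %d"
                         % (raid_type, MIN_DRIVES[raid_type], MAX_DRIVES_PER_RG, drives))

def recommended_hot_spares(total_drives):
    """EMC guideline: 1 Global Hot Spare per 30 drives, rounded up."""
    return math.ceil(total_drives / 30)

validate_raid_group("R6", 8)           # a 6+2 R6 group - fine
print(recommended_hot_spares(250))     # 250 drives -> 9 hot spares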

When you add drives to a Storage Pool or RAID Group they are zeroed out – this is a background process, but it can take some time. New drives shipped from EMC are pre-zeroed and won’t be “re-zeroed”; the drives you bought off eBay are not. To pre-zero drives before adding them to a Storage Pool or RAID Group, run the following commands with naviseccli:

naviseccli zerodisk -messner <disk-id> <disk-id> <disk-id> start
naviseccli zerodisk -messner <disk-id> <disk-id> <disk-id> status
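
(A couple of notes on those commands, from memory rather than the best practices doc: you’ll normally run them with your usual naviseccli connection options, e.g. -h <SP IP address> plus credentials, and the <disk-id> values are in Bus_Enclosure_Disk form, such as 0_0_5. The first command kicks off the zeroing, the second reports its progress.)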

Trespassing a Pool LUN will adversely affect its performance after the trespass. It is recommended that you avoid doing this, except for NDU or break-fix situations.

The LUN Migration tool provided by EMC has saved my bacon a number of times. If you need to know how long a LUN migration will take, you can use the following formula:

LUN Migration duration = (Source LUN Capacity (GB) * (1/Migration Rate)) + ((Destination LUN Capacity – Source LUN Capacity) * (1/Initialization Rate))

The Migration Rates are: Low = 1.4, Medium = 13, High = 44 and ASAP = 85 (all in MB/s), so remember to convert the capacities from GB to MB before plugging them in. Up to 2 ASAP migrations can be performed at the same time per Storage Processor. Keep in mind that ASAP migrations will belt the Storage Processors though, so, you know, be careful.
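
To save doing that arithmetic by hand, here’s a rough Python sketch of the calculation. It’s my own illustration of the formula above, and it assumes capacities in GB and rates in MB/s; the Initialization Rate isn’t listed here, so it’s just a parameter you’ll need to supply.

def lun_migration_hours(source_gb, dest_gb, migration_rate_mbps, init_rate_mbps):
    """Estimate LUN migration duration in hours using the formula above.

    Capacities are converted from GB to MB because the rates are in MB/s.
    """
    migrate_seconds = (source_gb * 1024) / migration_rate_mbps
    init_seconds = ((dest_gb - source_gb) * 1024) / init_rate_mbps
    return (migrate_seconds + init_seconds) / 3600

# Example: a 500GB LUN migrated to an equally sized destination at the ASAP rate (85 MB/s)
# comes out at roughly 1.7 hours.
print(round(lun_migration_hours(500, 500, 85, 85), 1))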