EMC CLARiiON VNX7500 Configuration Guidelines – Part 2

In this episode of EMC CLARiiON VNX7500 Configuration Guidelines, I thought it would be useful to discuss Storage Pools, RAID Groups and Thin things (specifically Thin LUNs). But first you should go away and read Vijay’s blog post on Storage Pool design considerations. While you’re there, go and check out the rest of his posts, because he’s a switched-on dude. So, now you’ve done some reading, here’s a bit more knowledge.

By default, RAID groups should be provisioned in a single DAE. You can theoretically provision across buses for increased performance, but oftentimes you’ll just end up with crap everywhere. Storage Pools obviously change this, but you still don’t want to bind the Private RAID Groups across DAEs. But if you did, for example, want to bind a RAID 1/0 RAID Group across two buses – for performance and resiliency – you could do it thusly:

naviseccli -h <sp-ip> createrg 77 0_1_0 1_1_0 0_1_1 1_1_1

Where 77 is the RAID Group ID and the remaining numbers refer to the disks in the standard Bus_Enclosure_Disk format.

The maximum number of Storage Pools you can configure is 60. It is recommended that a pool contain a minimum of 4 private RAID Groups. While it is tempting to just make the whole thing one big pool, you will find that segregating LUNs into different pools may still be useful for FAST Cache performance, availability, etc. Remember kids, look at the I/O profile of the projected workload, not just the capacity requirements. Mixing drives with different performance characteristics in a homogeneous pool is also contraindicated. When you create a Storage Pool, the following Private RAID Group configurations are considered optimal (depending on the RAID type of the Pool):

  • RAID 5 – 4+1
  • RAID 1/0 – 4+4
  • RAID 6 – 6+2

Pay attention to this, because you should always ensure that a Pool’s private RAID Groups align with traditional RAID Group best practices by sticking to these numbers. So don’t design a 48-spindle RAID 5 Pool; 48 doesn’t divide evenly into 4+1 private RAID Groups, so you’ll end up with, er, non-optimal group widths, whereas 45 or 50 spindles would carve up cleanly. If you’re doing this from the CLI, it looks something like the example below.
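Here’s a rough example of creating a small RAID 5 Pool from 10 disks, so it carves into two 4+1 private RAID Groups. The pool name and disk IDs are made up for illustration, and the switches can vary a little between FLARE revisions, so check the CLI reference for your release:

naviseccli -h <sp-ip> storagepool -create -disks 0_2_0 0_2_1 0_2_2 0_2_3 0_2_4 0_2_5 0_2_6 0_2_7 0_2_8 0_2_9 -rtype r_5 -name "Example_Pool_0"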

EMC recommend that if you’re going to blow a wad of cash on SSDs / EFDs, you should do it on FAST cache before making use of the EFD Tier.
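If you do go down the FAST Cache path, the CLI side of it is roughly as follows. The disk IDs are illustrative (FAST Cache wants EFDs in RAID 1 pairs), and you should verify the syntax against the CLI reference for your FLARE 30/31 release:

naviseccli -h <sp-ip> cache -fast -create -disks 0_0_4 0_0_5 -mode rw -rtype r_1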

With current revisions of FLARE 30 and 31, data is not re-striped when the pool is expanded. It’s also important to understand that preference is given to using the new capacity rather than the original storage until all drives in the Pool are at roughly the same level of utilisation. So if you have data on a 30-spindle Pool and then add another 15 spindles to the Pool, new allocations go to the new spindles first to even things up. It’s crap, but deal with it, and plan your Pool configurations before you deploy them. For RAID 1/0, avoid private RAID Groups of 2 drives.
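For reference, expanding a Pool from the CLI looks something like the following. The Pool ID and disk IDs are made up for illustration, and, as above, existing data won’t be re-striped across the new disks:

naviseccli -h <sp-ip> storagepool -expand -id 0 -disks 1_2_0 1_2_1 1_2_2 1_2_3 1_2_4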

A Storage Pool on the VNX7500 can be created with, or expanded by, up to 180 drives at a time, and you should keep the increments the same. If you are considering the use of drives larger than 1TB, use RAID 6. When FAST VP is working with Pools, remember that you’re limited to one type of RAID in a pool. So if you want to get fancy with different RAID types and tiers, you’ll need to consider using additional Pools to accommodate this. It is, however, possible to mix thick and thin LUNs in the same Pool. It’s also important to remember that the consumed capacity for Pool LUNs = (User Consumed Capacity * 1.02) + 3GB (so a 500GB LUN, for example, actually consumes around 513GB of the Pool). This can have an impact as capacity requirements increase.

A LUN’s tiering policy can be changed after the initial allocation of the LUN. FAST VP has the following data placement options: Highest Available, Lowest Available, Auto-Tier and No Movement. This can present some problems if you want to create a 3-tier Pool. The only workaround I could come up with was to create the Pool with 2 tiers and place LUNs at highest and lowest, then add the third tier, place the LUNs destined for the top tier on Highest Available, and change the middle-tier LUNs to No Movement. A better solution would be to create the Pool with all the tiers you want, put your LUNs on Auto-Tier placement, and let FAST VP sort it out for you. But if you have a lot of LUNs, this can take time.
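For what it’s worth, shuffling placement policies around from the CLI is along these lines. The LUN numbers are made up, and the policy keywords can differ slightly between releases, so check the lun -modify help output on your array first:

naviseccli -h <sp-ip> lun -modify -l 25 -tieringPolicy highestAvailable -o
naviseccli -h <sp-ip> lun -modify -l 26 -tieringPolicy noMovement -o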

For thin NTFS LUNs, use Microsoft’s sdelete to zero free space. When using LUN Compression, remember that Private LUNs (Meta Components, Snapshots, the Reserved LUN Pool) cannot be compressed. EMC recommends that compression only be used for archival data that is infrequently accessed. Finally, you can’t defragment RAID 6 RAID Groups, so pay attention when you’re putting LUNs in those RAID Groups.
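For completeness, zeroing free space on a thin NTFS volume and turning compression on for a LUN look roughly like this. The drive letter and LUN number are made up, the sdelete free-space switch has moved around between versions (check sdelete /? first), and the compression syntax should be verified against the CLI reference for your release:

sdelete -z e:
naviseccli -h <sp-ip> compression -on -l 42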

7 Comments

  1. Great series. Given the new SAS back-end on the VNX, bus layout is not as critical as it was with the legacy loop design. Each port on the VNX is 4x SAS lanes and each port can address up to 4 disks at a time. In addition, you can have up to 10 DAEs (15-disk DAE) on a bus, or a maximum of 250 disks (new 25-disk DAE). Still, given this information, I always follow legacy design best practices with the CX5s: RAID 1/0 across two buses, and FAST Cache across all available buses. One should also mention that the cache numbers and page cache watermarks should be adjusted to the best practices, which have changed for the VNX models.

  2. It’s always best practice to expand the pool by the same number of disks as are already in the pool. This will maintain performance. I realize this is not always possible.

  3. Hey Dave, thanks for the comments. When I was first doing the design, the VNX hardware overview gave me a strong impression that I could only have 4 buses per SP, but it seems like we can have eight, which I think is good for FAST Cache. I’m glad they’ve finally gone with SAS, because, as you say, the back-end doesn’t need to be as ugly now. I’ll try and do a post soon on some of the things to look for with SP Cache, particularly if you’re deploying FAST Cache.

  4. Yep, I should have been clear on that point too. Which can lead to some interesting numbers if you’re in the unusual position, like we are, of buying a single-purpose array. We don’t want to configure a pool with 950 drives (excluding GHS, etc), but we also need to find a number that will fit nicely into the layout, expand cleanly, and still not require a large number of Pools to be configured. Still, it’s a good problem to have.

  5. In the case of the VNX7500, it’s 4 buses standard, or a total of 8 with the expansion module. This is not true for the 5700 and down. Another great thing to mention is that you can mix any drive type in any DAE. The CX4 wouldn’t allow SATA with FC, or EFD with SATA. With the VNX it doesn’t matter: SAS, EFD or NL-SAS can all go in the same DAE. The initial release of Block OE doesn’t allow connectivity to another domain. I hope that eventually we will be able to mix RAID types within a heterogeneous FAST VP pool.

  6. Dave – good point about the mixed drive types in one DAE – I remember being excited that EMC had “caught up” with some of the other vendors (namely the Engenio OEMs). I hear a rumour that the latest Block OE now allows more than one array in the domain – but I haven’t confirmed this for myself. Thanks again for your commentary – I think it would be good to take these points and make them into a fourth post. One thing we’ve run into is problems with having 8 BE buses, 8 DMs and a sufficient number of FE ports. But I’m working with my local team to get some clarity around that.

  7. You heard correct. In fact, Block OE 05.31.000.5.502 GA’ed today. Lots of new features. This doesn’t surprise me with the recent release of vSphere 5. The new File OE has VAAI support for NFS. :)