Just a quick note to advise that FLARE 31 for the VNX has received an update to version .716. I had a quick glance at the release notes and noted a few fixes that take care of some SP panics. If you want a bit of a giggle, go and look at the KB article for emc291837 – it’s a scream. Or maybe I’m just being cynical. As always, go to Powerlink for more info, and talk to your local EMC people about whether it’s appropriate to upgrade your VNX’s code.
EMC CLARiiON VNX7500 Configuration guidelines – Part 4
In Part 4 of this 2-part series on VNX7500 configuration guidelines I’m going to simply paraphrase Loyal Reader Dave’s comments on my previous posts, because he raised a number of points that I’d forgotten, and the other two of you reading this may not have read the comments section.
While I was heavily focussed on a VNX7500 filled with 600GB 10K SAS disks, it is important to note that with the VNX you can *finally* mix any drive type in any DAE. So if you have workloads that don’t necessarily conform to the 15 or 25-disk DAE, this is no longer a problem. That’s right, with VNX it does not matter – SAS, EFD, or NL-SAS can all be in the same DAE. Whether you really want this will again depend on the workload you’re designing for. And before the other vendors jump in and say “we’ve had that for years”, I know you have. Dave also mentioned that the ability to mix RAID types within a heterogeneous FAST VP pool would be handy, and I agree there. I’m pretty sure that’s on a roadmap, but I’ve no idea when it’s slated for.
It is also important to expand a FAST VP pool by the same number of disks each time. So if you’ve started with a 15-disk pool, you should be expanding it by another 15 disks. This can get unwieldy if you have a 60-disk pool and then only need 5 more disks. I’m hearing rumours that the re-striping feature is coming, but so’s Christmas.
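If you want to sanity-check a proposed expansion before you commit to it, a few lines of Python along these lines do the job. This is just a rough sketch of the guideline above; the function name and the example disk counts are mine.

# Rough check that a FAST VP pool expansion matches the original increment.
# The function and the disk counts are illustrative only - use your own pool's numbers.

def check_expansion(initial_disks, added_disks):
    if added_disks % initial_disks != 0:
        return ("Warning: expanding a %d-disk pool by %d disks breaks the original "
                "increment - consider adding %d instead."
                % (initial_disks, added_disks, initial_disks))
    return "OK: %d disks is a multiple of the original %d-disk increment." % (
        added_disks, initial_disks)

print(check_expansion(15, 15))  # the clean case
print(check_expansion(60, 5))   # the unwieldy case mentioned above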
While you can have up to 10 DAEs (the 15-disk DAE) on a bus, or a maximum of 250 disks (with the new 25-disk DAE), Dave still follows legacy design best practices with the CX5s: RAID 1/0 across two buses, and FAST cache across all available buses. I agree with this, and I don’t think the VNX is quite there in terms of just throwing disks anywhere and letting FAST VP take care of it. Dave suggested that I should also mention that the cache numbers and page cache watermarks should be adjusted to the best practices (these have changed for the VNX models). I’m hoping to do a post on this in the near future.
While the initial release of the Block OE didn’t allow connectivity to another domain, the latest Block OE (05.31.000.5.502) has just gone GA, and I believe this has been fixed.
If I get some time this week I’ll put up a copy of the disk layout I proposed for this design, including the multiple variations that were used to try and make everything fit nicely. Thanks again to Dave for his insightful commentary, and for reminding me of the things I should have already covered.
EMC CLARiiON VNX7500 Configuration guidelines – Part 3
One thing I didn’t really touch on in the first two parts of this series is the topic of RAID Groups and binding between disks on the DPE / DAE-OS and other DAEs. It’s a minor point, but something people tend to forget when looking at disk layouts. Ever since the days of Data General, the CLARiiON has used Vault drives in the first shelf. For reasons that are probably already evident, these drives, and the storage processors, are normally protected by a Standby Power Supply (SPS) or two. The SPS provides enough battery power in a power failure scenario so that cache can be copied to the Vault disks and data won’t be lost. This is a good thing.
The thing to keep in mind, however, is that the other DAEs in the array aren’t protected by this SPS. Instead, you plug them into UPS-protected power in your data centre, so when you lose power, they go down. This can cause “major dramas” with Background Verify operations when the array is rebooted. This is a sub-optimal situation to be in. The point of all this is that, as EMC have said for some time, you should bind RAID groups across disks that are either contained entirely within that first DAE, or kept entirely outside it, rather than split across both.
Now, if you really must split a RAID group between the DPE and another DAE, there are some additional recommendations (there’s a rough sanity-check sketch after the list):
- Don’t split RAID 1 groups between the DPE and another DAE;
- For RAID 5, ensure that at least 2 drives are outside the DPE;
- For RAID 6, ensure that at least 3 drives are outside the DPE;
- For RAID 1/0 – don’t do it, you’ll go blind.
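Here’s that sanity check as a rough Python sketch. It’s purely illustrative: the rules are the ones in the list above, while the function name and structure are my own.

# Rough encoding of the DPE / DAE-OS split recommendations above. Illustrative only.

def check_dpe_split(raid_type, drives_outside_dpe):
    if raid_type in ("RAID 1", "RAID 1/0"):
        return "Don't split %s groups between the DPE and another DAE." % raid_type
    if raid_type == "RAID 5":
        return "OK" if drives_outside_dpe >= 2 else "Keep at least 2 drives outside the DPE."
    if raid_type == "RAID 6":
        return "OK" if drives_outside_dpe >= 3 else "Keep at least 3 drives outside the DPE."
    return "No guidance here for %s." % raid_type

print(check_dpe_split("RAID 5", 1))    # -> Keep at least 2 drives outside the DPE.
print(check_dpe_split("RAID 1/0", 4))  # -> Don't split RAID 1/0 groups ...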
It’s a minor design consideration, but something I’ve witnessed in the field when people have either a) tried to be tricky on smaller systems, or b) have been undersold on their requirements and have needed to be creative. As an aside, it is also recommended that you don’t include drives from the DPE / DAE-OS in Storage Pools. This may or may not have an impact on your Pool design.
EMC CLARiiON VNX7500 Configuration guidelines – Part 2
In this episode of EMC CLARiiON VNX7500 Configuration Guidelines, I thought it would be useful to discuss Storage Pools, RAID Groups and Thin things (specifically Thin LUNs). But first you should go away and read Vijay’s blog post on Storage Pool design considerations. While you’re there, go and check out the rest of his posts, because he’s a switched-on dude. So, now you’ve done some reading, here’s a bit more knowledge.
By default, RAID groups should be provisioned within a single DAE. You can theoretically provision across buses for increased performance, but oftentimes you’ll just end up with crap everywhere. Storage Pools obviously change this, but you still don’t want to bind the Private RAID Groups across DAEs. If you did, however, want to bind a RAID 1/0 RAID Group across two buses – for performance and resiliency – you could do it thusly:
naviseccli -h <sp-ip> createrg 77 0_1_0 1_1_0 0_1_1 1_1_1
Where the numbers refer to the standard format Bus_Enclosure_Disk.
The maximum number of Storage Pools you can configure is 60. It is recommended that a pool contain a minimum of 4 private RAID groups. While it is tempting to just make the whole thing one big pool, you will find that segregating LUNs into different pools may still be useful for FAST cache performance, availability, etc. Remember kids, look at the I/O profile of the projected workload, not just the capacity requirements. Mixing drives with different performance characteristics in a homogeneous pool is also contraindicated. When you create a Storage Pool, the following Private RAID Group configurations are considered optimal (depending on the RAID type of the Pool):
- RAID 5 – 4+1
- RAID 1/0 – 4+4
- RAID 6 – 6+2
Pay attention to this, because you should always ensure that a Pool’s private RAID groups align with traditional RAID Group best practices, while sticking to these numbers. So don’t design a 48-spindle RAID 5 Pool. That will be, er, non-optimal.
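To make that concrete, here’s a rough Python sketch that checks whether a proposed pool drive count divides evenly into the preferred private RAID group sizes above. The sizes come straight from the list; everything else is my own illustration.

# Does a pool's drive count line up with the preferred private RAID group sizes?
# Illustrative sketch only.

PREFERRED_RG_SIZE = {"RAID 5": 5, "RAID 1/0": 8, "RAID 6": 8}  # 4+1, 4+4, 6+2

def check_pool_drive_count(raid_type, drive_count):
    rg_size = PREFERRED_RG_SIZE[raid_type]
    if drive_count % rg_size == 0:
        return "%d drives = %d private RAID groups of %d - looks sensible." % (
            drive_count, drive_count // rg_size, rg_size)
    lower = (drive_count // rg_size) * rg_size
    return "%d drives doesn't divide evenly into %d-drive %s groups - try %d or %d." % (
        drive_count, rg_size, raid_type, lower, lower + rg_size)

print(check_pool_drive_count("RAID 5", 45))  # fine
print(check_pool_drive_count("RAID 5", 48))  # the 48-spindle example above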
EMC recommend that if you’re going to blow a wad of cash on SSDs / EFDs, you should do it on FAST cache before making use of the EFD Tier.
With current revisions of FLARE 30 and 31, data is not re-striped when the pool is expanded. It’s also important to understand that preference is given to using the new capacity rather than the original storage until all drives in the Pool are at the same level of capacity. So if you have data on a 30-spindle Pool, and then add another 15 spindles to the Pool, the data goes to the new spindles first to even up the capacity. It’s crap, but deal with it, and plan your Pool configurations before you deploy them. For RAID 1/0, avoid private RAID Groups of 2 drives.
A Storage Pool on the VNX7500 can be created with, or expanded by, 180 drives at a time, and you should keep the increments the same. If you are considering the use of drives larger than 1TB, use RAID 6. When FAST VP is working with Pools, remember that you’re limited to one type of RAID in a pool. So if you want to get fancy with different RAID types and tiers, you’ll need to consider using additional Pools to accommodate this. It is, however, possible to mix thick and thin LUNs in the same Pool. It’s also important to remember that the consumed capacity for Pool LUNs = (User Consumed Capacity * 1.02) + 3GB. This can have an impact as capacity requirements increase.
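That consumed capacity overhead is easy to sanity-check in a couple of lines. The formula is the one above; the helper itself is just my own sketch.

# Consumed capacity for a Pool LUN, per the formula above:
# consumed = (user capacity * 1.02) + 3GB. Illustrative sketch only.

def pool_lun_consumed_gb(user_capacity_gb):
    return user_capacity_gb * 1.02 + 3

for size_gb in (100, 500, 2000):
    print("%5d GB user capacity -> ~%.0f GB consumed" % (size_gb, pool_lun_consumed_gb(size_gb)))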
A LUN’s tiering policy can be changed after the initial allocation of the LUN. FAST VP has the following data placement options: Lowest, Highest, Auto, and No Movement. This can present some problems if you want to create a 3-tier Pool. The only workaround I could come up with was to create the Pool with 2 tiers and place LUNs at Highest and Lowest. Then add the third tier, place the highest-tier LUNs on the highest tier, and change the middle-tier LUNs to No Movement. A better solution would be to create the Pool with the tiers you want, put all of your LUNs on Auto placement, and let FAST VP sort it out for you. But if you have a lot of LUNs, this can take time.
For thin NTFS LUNs, use Microsoft’s sdelete to zero free space. When using LUN Compression, keep in mind that Private LUNs (MetaLUN components, Snapshots, the Reserved LUN Pool) cannot be compressed. EMC recommends that compression only be used for archival data that is infrequently accessed. Finally, you can’t defragment RAID 6 RAID Groups – so pay attention when you’re putting LUNs in those RAID Groups.
EMC CLARiiON VNX7500 Configuration guidelines – Part 1
I’ve been doing some internal design work and referencing the “EMC Unified Storage Best Practices for Performance and Availability Common Platform and Block Storage 31.0 – Applied Best Practices” – rev 23/06/2011 – h8268_VNX_Block_best_practices.pdf fairly heavily. If you’re an EMC customer or partner you can get it from the Powerlink website. I thought it would be a useful thing to put some of the information here, more as a personal reference. The first part of this two-part series will focus on configuration maximums for the VNX7500 – the flagship midrange array from EMC. The sequel will look at Storage Pools, RAID Groups and thin things. There may or may not be a third part on some of the hardware configuration considerations. Note that the information here is based on the revision of the document referenced at the start. Some of these numbers will change with code updates.
Here are some useful numbers to know when considering a VNX7500 deployment:
- Maximum RAID Groups – 1000;
- Maximum drives per RAID Group – 16;
- Minimum drives per RAID Group – R1/0 – 2, R5 – 3, R6 – 4;
- Stripe Size R1/0 (4+4) and R5 (4+1) – 256KB, R6 (6+2) – 384KB;
- Maximum LUNs (this includes private LUNs) – 8192;
- Maximum LUNs per Pool / all pools – 2048;
- Maximum LUNs per RAID Group – 256;
- Maximum MetaLUNs per System – 2048;
- Maximum Pool LUN size (thick or thin) – 16TB;
- Maximum traditional LUN size – the size of the largest, highest-capacity RAID Group;
- Maximum components per MetaLUN – 512;
- EMC still recommends 1 Global Hot Spare per 30 drives.
When you add drives to a Storage Pool or RAID Group they are zeroed out – this is a background process, but it can take some time. New drives shipped from EMC are pre-zeroed and won’t be “re-zeroed”. The drives you bought off eBay are not. To pre-zero drives prior to adding them to a Storage Pool or RAID Group, run the following commands with naviseccli:
naviseccli zerodisk -messner <disk-id> <disk-id> <disk-id> start
naviseccli zerodisk -messner <disk-id> <disk-id> <disk-id> status
Trespassing a Pool LUN will adversely affect its performance after the trespass. It is recommended that you avoid doing this, except for NDU or break-fix situations.
The LUN Migration tool provided by EMC has saved my bacon a number of times. If you need to know how long a LUN migration will take, you can use the following formula. LUN Migration duration = (Source LUN (GB) * (1/Migration Rate)) + ((Dest LUN Capacity – Source LUN Capacity) * (1/Initialization Rate)). The Migration rates are – Low = 1.4, Medium = 13, High = 44, ASAP = 85 (in MB/s). Up to 2 ASAP migrations can be performed at the same time per Storage Processor. Keep in mind that this will belt the Storage Processors though, so, you know, be careful.
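If you’d rather not do that arithmetic by hand, here’s a rough Python version of the formula. I’m assuming capacities in GB and rates in MB/s, so the result comes out in seconds and gets converted to hours; the initialization rate isn’t listed above, so it’s a parameter here. Treat it as a sketch, not gospel.

# Rough sketch of the LUN migration duration formula above.
# Capacities in GB, rates in MB/s; the initialization rate isn't listed above,
# so pass in whatever value you're working with.

MIGRATION_RATE_MBPS = {"low": 1.4, "medium": 13, "high": 44, "asap": 85}

def migration_duration_hours(source_gb, dest_gb, rate, init_rate_mbps):
    rate_mbps = MIGRATION_RATE_MBPS[rate]
    seconds = (source_gb * 1024) / rate_mbps + ((dest_gb - source_gb) * 1024) / init_rate_mbps
    return seconds / 3600.0

# e.g. a 500GB LUN migrating to a same-sized destination at ASAP
print("%.1f hours" % migration_duration_hours(500, 500, "asap", init_rate_mbps=85))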