EMC – New VNX OE available

I’m a bit behind on my news, but just a quick note to say that FLARE 33 for the next-generation VNX (Block) has received an update to version 05.33.000.5.038. The release notes are here. Notably, a fix related to ETA 175619 is included. This is a good thing. As always, go to EMC’s Support site for more info, and talk to your local EMC people about whether it’s appropriate to upgrade.

EMC – Next-Generation VNX – Deduplication

In my previous post on the Next-Generation VNX, I spoke about some of the highlights of the new platform. In this post I’d like to dive a little deeper into the deduplication feature, because I think this stuff is pretty neat. In the interests of transparency, I’m taking a lot of this information from briefings I’ve received from EMC. I haven’t yet had the chance to test this for myself, so, as always, your mileage might vary.

One of the key benefits of deduplication is reduced footprint. Here’s a marketing picture that expresses, via a simple graph, how deduplication can help you do more with less.

VNX_Efficiency

There are 3 basic steps to deduplication:

  1. Discover / Digest;
  2. Sort / Identify; and
  3. Map / Eliminate.

In the Discover / Digest phase, a unique digest (hash) is generated for each 8KB chunk. The digests are then sorted to identify candidate chunks for deduplication. Finally, duplicate chunks are mapped to a single stored copy and the redundant space is freed, with the digests acting as pointers to the unique data chunks.
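To make those three phases concrete, here’s a rough sketch in Python. It’s purely illustrative: EMC haven’t said which digest algorithm the VNX uses, so I’ve assumed SHA-256, and the in-memory structures are my own invention.

```python
import hashlib
from collections import defaultdict

CHUNK_SIZE = 8 * 1024  # fixed 8KB chunks, per the description above

def dedupe_pass(data: bytes):
    """One illustrative pass over a blob of data: digest, identify, map."""
    # 1. Discover / Digest: generate a digest for every 8KB chunk.
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    digests = [hashlib.sha256(c).digest() for c in chunks]

    # 2. Sort / Identify: group chunks by digest to find duplicate candidates.
    by_digest = defaultdict(list)
    for idx, digest in enumerate(digests):
        by_digest[digest].append(idx)

    # 3. Map / Eliminate: keep one copy per digest; every other chunk
    #    becomes a pointer (the digest) to that single stored copy.
    store = {}
    pointers = []
    for idx, digest in enumerate(digests):
        store.setdefault(digest, chunks[idx])
        pointers.append(digest)
    return store, pointers

# Two identical 8KB chunks collapse to one stored copy.
store, pointers = dedupe_pass(b"A" * CHUNK_SIZE * 2)
assert len(store) == 1 and len(pointers) == 2
```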

Deduplication can be turned on or off at the LUN level, and pools can contain a mix of deduplicated and “normal” LUNs. Note also that every LUN on a system can be deduplicated, up to the system’s maximum LUN count – there is no separate limit imposed by the deduplication technology.

The deduplication properties of a LUN are as follows:

  • Feature State – On or Paused
  • State – Off, Enabling, On, Disabling
  • Status – indicates any problems during enabling or disabling

The deduplication properties of a Pool are as follows:

  • State – Idle (no deduplicated LUNs), Pending (between passes), Running (currently running a dedupe pass) and Paused (pool is paused)
  • Deduplication Rate – High, Medium, Low (Medium is the default)

Note that deduplication can be paused at a system level for all pools.
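For those who like to see the moving parts, here’s a minimal sketch of how the LUN, pool and system-level states hang together. The class and method names are mine, not EMC’s.

```python
from enum import Enum

class LunDedupeState(Enum):
    OFF = "Off"
    ENABLING = "Enabling"
    ON = "On"
    DISABLING = "Disabling"

class PoolDedupeState(Enum):
    IDLE = "Idle"        # no deduplicated LUNs in the pool
    PENDING = "Pending"  # between dedupe passes
    RUNNING = "Running"  # a dedupe pass is in progress
    PAUSED = "Paused"

class Pool:
    def __init__(self):
        self.state = PoolDedupeState.IDLE

class VnxSystem:
    """Hypothetical container type, not EMC's actual object model."""
    def __init__(self, pools):
        self.pools = pools

    def pause_deduplication(self):
        # A system-level pause applies to every pool at once.
        for pool in self.pools:
            pool.state = PoolDedupeState.PAUSED
```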

When deduplication is turned off on a LUN, the LUN is migrated out of the deduplication container within the pool. Up to 8 migrations can run simultaneously per system, and each completed migration obviously reduces the consumed space in the deduplication container.
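That eight-way limit sounds like a fixed pool of migration slots. Here’s a speculative sketch of how a per-system cap like that might be enforced (the function names are hypothetical).

```python
import threading

MAX_CONCURRENT_MIGRATIONS = 8  # the per-system limit quoted above

migration_slots = threading.Semaphore(MAX_CONCURRENT_MIGRATIONS)

def migrate_lun_out_of_container(lun):
    # Placeholder for the actual data movement out of the dedupe container.
    print(f"migrating {lun} out of the deduplication container")

def request_migration(lun):
    # At most 8 migrations hold a slot at any one time; the rest queue here.
    with migration_slots:
        migrate_lun_out_of_container(lun)
```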

At a high level, the deduplication interoperability story is solid:

  • Works with FAST VP (dedupe LUNs behave as a single entity, when dedupe is turned off it goes back to per-LUN FAST VP settings)
  • Supports Snaps and Clones (VNX Snapshots are lost when enabling or disabling, Reserved LUNs for SnapView Snaps cannot be deduplicated)
  • Support for RecoverPoint (RP), MirrorView (MV) and SAN Copy
  • LUN migration works, although moving between pools means deduplication is lost as it’s pool-based
  • Compression is not supported.

And that’s fixed-block deduplication for the next-generation VNX in a nutshell. When I get my hands on one of these I’ll be running it through some more realistic testing and scenarios.

EMC – Next-Generation VNX – MCx

EMC have also breathlessly announced the introduction of “Multi-Core Everything” or MCx as an Operating Environment replacement for FLARE. I thought I’d spend a little time going through what that means, based on information I’ve been provided by EMC.

MCx is a redesign of a number of components, providing functionality and performance improvements for:

  • Multi-Core Cache (MCC);
  • Multi-Core RAID (MCR);
  • Multi-Core FAST Cache (MCF); and
  • Active / Active data access.

As I mentioned in my introductory post, the OE has been redesigned to spread the operations across each core – providing linear scaling across cores. Given that FLARE pre-dated multi-core x86 CPUs, it seems like this has been a reasonable thing to implement.

VNX_MCx_Core_utilisation

EMC are also suggesting that this has enabled them to scale out within the dual-node architecture, providing increased performance, at scale. This is a response to a number of competitors who have suggested that the way to scale out in the mid-range is to increase the number of nodes. How well this scales in real-world scenarios remains to be seen.


Multi-Core Cache

With MCC, the cache engine has been modularised to take advantage of all the cores available in the system. There is also no longer any requirement to manually carve out separate space for Read and Write Cache, which removes the management overhead of ensuring the cache is working in the most effective way regardless of the IO mix. Interestingly, cache is dynamically assigned, allocating space on-the-fly for reads, writes, thin metadata, etc. depending on the needs of the system at the time.

The larger overall working cache provides mutual benefit between reads and writes. Note also that data is not discarded from cache after a write destage, which greatly improves cache hits. The new caching model employs intelligent monitoring of the pace of disk writes to avoid forced flushing, and works on a standardised 8KB physical page size.
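Here’s a toy model of that behaviour: a single dynamically shared cache, an 8KB page size, and, most importantly, pages that stay resident after being destaged. This is my sketch of the concept, not EMC’s implementation.

```python
from collections import OrderedDict

PAGE_SIZE = 8 * 1024  # MCC standardises on an 8KB physical page

class UnifiedCache:
    """Toy model: one shared cache, no fixed read/write split."""
    def __init__(self, capacity_pages):
        self.capacity = capacity_pages
        self.pages = OrderedDict()  # address -> (data, dirty), LRU at the front

    def write(self, addr, data):
        # Writes and reads share the same pool of pages.
        self.pages[addr] = (data, True)
        self.pages.move_to_end(addr)
        self._evict()

    def destage(self, addr, backend):
        data, dirty = self.pages[addr]
        if dirty:
            backend.write(addr, data)
        # The key MCC behaviour: the page stays resident (now clean), so a
        # later read of the same address is still a cache hit.
        self.pages[addr] = (data, False)

    def read(self, addr, backend):
        if addr in self.pages:
            self.pages.move_to_end(addr)
            return self.pages[addr][0]  # hit, possibly on just-written data
        data = backend.read(addr)
        self.pages[addr] = (data, False)
        self._evict()
        return data

    def _evict(self):
        # Simplified LRU eviction of clean pages; a real array would pace
        # destages to avoid forced flushing of dirty pages.
        while len(self.pages) > self.capacity:
            addr, (data, dirty) = next(iter(self.pages.items()))
            if dirty:
                break
            self.pages.popitem(last=False)
```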


Multi-Core RAID

MCR introduces some much-needed changes to the historically clumsy way in which disks were managed in the CLARiiON / VNX. Of particular note is the handling of hot spares. Instead of having to define disks as permanent hot spares, the sparing function is now more flexible and any unassigned drive in the system can operate as a hot spare.

There are three policies for hot spares:

  • Recommended;
  • No hot spares; and
  • Custom.

The “Recommended” policy implements the same sparing model as today (1 spare per 30 drives). Note, however, that if there is an unused drive that can be used as a spare (even if you have the “No Hot Spare” policy set) and there is a fault in a RAID group, the system will use that drive. Permanent sparing also means that when a drive has been used in a sparing function, that drive is then a permanent part of the RAID Group or Pool so there is no need to re-balance and copy back the spare drive to a new drive. The cool thing about this is that it reduces the exposure and performance overhead of sparing operations. If the concept of storage pools that spread all over the place didn’t freak out the RAID Group huggers in your storage team, then the idea of spares becoming permanent might just do it. There will be a CLI command available to copy the spare back to the replaced drive, so don’t freak out too much.
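To put some numbers on the “Recommended” ratio and the fault-time behaviour, here’s an illustrative sketch. The object model is invented for the example, and the promised CLI copy-back command isn’t modelled here.

```python
import math

def recommended_spare_count(total_drives, ratio=30):
    """The 'Recommended' policy: roughly 1 spare per 30 drives."""
    return math.ceil(total_drives / ratio)

def handle_drive_fault(raid_group_members, unassigned_drives):
    # Even with the "No Hot Spares" policy set, an unused drive gets
    # pressed into service when a RAID group faults.
    if not unassigned_drives:
        raise RuntimeError("no candidate spare drive available")
    spare = unassigned_drives.pop(0)
    # Permanent sparing: the drive simply joins the RAID group or pool;
    # there is no automatic rebalance / copy-back to a replacement drive.
    raid_group_members.append(spare)
    return spare

print(recommended_spare_count(120))  # -> 4 spares for a 120-drive system
```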

What I like the look of, having been stuck “optimising” CLARiiONs previously, is the idea of “Portable drives”, where drives can be physically removed from a RAID group and relocated to another disk shelf. Please note that you can’t use this feature to migrate RAID Groups to other VNX systems; you’ll need to use more traditional migration methods to achieve that.


Multi-Core FAST Cache

According to EMC, MCF is all about increased efficiency and performance. They’ve done this by moving MCC above the FAST Cache driver – all DRAM cache hits are processed without needing to check whether a block resides in FAST Cache, saving that CPU cycle overhead on every IO. There’s also a faster initial warm-up for FAST Cache, resulting in better initial performance. Instead of requiring 3 hits before promotion, FAST Cache will perform more like a standard extension of cache while it is not yet full, loading data based on a single hit. Once the cache is 80% full it reverts to the default 3-hit promotion policy.
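The promotion logic reads like a simple threshold switch. Here’s a sketch using the thresholds from the briefing (the function itself is mine):

```python
FULL_THRESHOLD = 0.80  # above 80% full, revert to the standard policy
DEFAULT_HITS = 3       # the classic 3-hit promotion rule

def should_promote(block_hits, used_pages, capacity_pages):
    """Promotion rule as described: 1 hit while warming up, 3 hits once ~full."""
    if used_pages / capacity_pages < FULL_THRESHOLD:
        return block_hits >= 1  # faster initial warm-up
    return block_hits >= DEFAULT_HITS

assert should_promote(1, used_pages=100, capacity_pages=1000)      # warming up
assert not should_promote(1, used_pages=900, capacity_pages=1000)  # 90% full
```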

Symmetric Active / Active

Real symmetric? Not that busted-arse ALUA? Sounds good to me. With Symmetric Active/Active, EMC are delivering true concurrent access via both SPs (no trespassing required – hooray!). However, this is currently only supported for Classic LUNs. It does have benefits for Pool LUNs by removing the performance impact of trespassing the allocation owner. Basically, there’s a stripe locking service in play that provides access to the LUN for each SP, with the CMI being used for communication between the SPs. Here’s what LUN Parallel Access looks like.

VNX_LUN_Parallel_Access
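I haven’t seen the internals, but conceptually a stripe locking service looks something like this: each SP only locks the stripe it’s touching, so both SPs can drive IO to the same LUN concurrently.

```python
import threading

class StripeLockService:
    """Conceptual only: per-stripe locks so both SPs can access one LUN."""
    def __init__(self, num_stripes):
        self.locks = [threading.Lock() for _ in range(num_stripes)]

    def do_io(self, stripe_id, io_fn):
        # Only the stripe being touched is serialised, so SPA and SPB can
        # drive IO to different stripes of the same LUN at the same time.
        with self.locks[stripe_id]:
            io_fn()

lun = StripeLockService(num_stripes=64)
lun.do_io(3, lambda: print("SPA writing stripe 3"))
lun.do_io(9, lambda: print("SPB writing stripe 9"))
```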
And that’s about it. Make no mistake, MCx sounds like it should really improve performance for EMC’s mid-range. Remember though, given some of the fundamental changes outlined here, it’s unlikely that this will work on previous-generation VNX models.

EMC – Next-Generation VNX – The Speed to Lead

Otherwise titled “about time”. I heard about this a few months ago, and have had a number of technical briefings from people inside EMC since then. For a number of reasons EMC weren’t able to publicly announce it until now. Anyway, for the official scoop, head on over to EMC’s Speed to Lead site. In this post I thought I’d cover off on some of the high-level speeds and feeds. I hope to have some time in the near future to dive a little deeper into some of the more interesting architectural changes.

As far as the hardware goes, EMC have refreshed the VNX midrange line with the following models:

VNX_New_Models

The 5200 is positioned just above the 5100, the 5400 above the 5300, and so forth. The VNX8000 is the biggest yet, and, while initially shipping with 1000 drives, will eventually ship with a 1500-spindle capability. The SPs all use Sandy Bridge chips, with EMC heavily leveraging multi-core. The 8000 will sport dual-socket, 8-core SPs and 128GB of RAM. You’ll also find the sizing of these arrays is based on 2.5″ drives, and that a number of 2.5″ and 3.5″ drives will be available with these models (highlights being 4TB 3.5″ NL-SAS and 1.2TB 2.5″ SAS drives). Here’s a better picture with the max specs for each model.

VNX_models

Multi-core is a big part of the new VNX range, with MCx described as a project that redesigns the core Block OE stack to improve performance, reliability and longevity.

VNX_MCx_Optimisation

Instead of FLARE using one core per component, MCx is able to evenly distribute workloads across cores, giving improved utilisation across the SP.

VNX_MCx_Core_utilisation


The key benefit of this architecture is that you’ll get improved performance with the other tech in the array, namely FAST VP and Deduplication. The multi-core optimisations also extend to RAID management, Cache management and FAST Cache. I hope to be able to do some more detailed posts on these.

FAST VP has also received some attention, with the chunk size being reduced from 1GB down to 256MB. The big news, however, is the introduction of fixed-block deduplication, with data being segmented into 8KB chunks, deduped at the pool level (not across pools), and turned on or off at the LUN level. I’ll be doing a post shortly on how it all works.
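The slice-size change is worth a quick back-of-the-envelope calculation: at 256MB, FAST VP tracks four times as many slices per terabyte as it did at 1GB, so tiering decisions become correspondingly finer-grained.

```python
MB_PER_TB = 1024 * 1024

old_slice_mb = 1024  # the previous 1GB FAST VP slice
new_slice_mb = 256   # the new 256MB slice

print(MB_PER_TB // old_slice_mb)  # 1024 slices per TB before
print(MB_PER_TB // new_slice_mb)  # 4096 slices per TB now, 4x finer-grained
```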

Hopefully that’s given you a starting point from which to investigate this announcement further. As always, if you’re after more information, talk to your local EMC people.