Ever since the VNX2 was announced, customers have asked me about using deduplication with their configs. I did an article on it around the time of the product announcement, but have been meaning to talk a bit more about it for some time. Before I do, check out Joel Cason’s great post on this. Anyway, here’s a brief article listing some of the caveats and things to look out for with block deduplication. A few of my clients have used this feature in the field, and have learnt the hard way that if you don’t follow EMC’s guidelines, you may have a sub-optimal experience. Most of the information here has been taken from the “EMC VNX2 Deduplication and Compression” whitepaper, which can be downloaded here.
If you’re running a workload with more than 30% writes, compression and deduplication may be a problem. EMC state that, “[f]or applications requiring consistent and predictable performance, EMC recommends using Thick pool LUNs or Classic LUNs. If Thin LUN performance is not acceptable, then do not use Block Deduplication”. I can’t stress this enough – know your workload!
Block deduplication is done on a per pool LUN basis. EMC recommends that deduplication be enabled at the time of LUN creation. If you enable it on an existing LUN, the LUN is migrated into the deduplication container using a background process. The data must reside in the container before the deduplication process can run on the dataset.
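If you’re a CLI type, here’s a rough sketch of enabling it at creation time. Note that the -deduplication switch is my assumption based on the whitepaper’s description of the feature, so check naviseccli lun -create -help on your OE revision before relying on it (the pool name, size and LUN name below are made up):
naviseccli -h <SP> lun -create -type Thin -capacity 500 -sq gb -poolName "Pool 0" -name "Dedupe_LUN_01" -deduplication on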
There is only one deduplication container per storage pool. This is where your deduplicated data is stored. When a deduplication container is created, the SP owning the container needs to be determined. The container owner is matched to the Allocation Owner of the first deduplicated LUN within the pool. As a result of this process, EMC recommends that all LUNs with Block Deduplication enabled within the same pool should be owned by the same SP. This can be a big problem in smaller environments where you’ve only deployed one pool.
There’s a bit more to consider, particularly if you’re looking at leveraging compression as well. But if you can’t get past these first few considerations, it’s likely that the VNX2’s version of deduplication on primary storage is probably not for you. Read the whitepaper – it’s readily accessible and fairly clear about what can and can’t be achieved within the constraints of the product.
Many moons ago I wrote a brief article about accessing RemotelyAnywhere on the CX4. This was prompted by changes in Release 29 of FLARE that changed the access mechanism for remote console access on the SPs. I’ve been working on some VNX2s recently (or VNX with MCx – as EMC really would like them to be known), and I was curious as to whether the process was the same.
Pretty much, yep.
Overview
Nowadays there are a few ways to access RemotelyAnywhere on the VNX SP, using different ports on the array depending on your circumstances. In some environments, where you’re not allowed to touch the customer’s network with your own gear, the service port may be more appropriate. Here’s an image of the ports from EMC. The model of VNX you’re using will dictate the layout of the ports.
You can go via:
the SP’s management port: http://<SP IP address>:9519;
the SP’s service port: https://128.221.1.250:9519 (SP A) or https://128.221.1.251:9519 (SP B); and
the SP’s serial port: https://192.168.1.1:9519 (this assumes you’re connected via serial already – more on that below).
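Before you fire up a browser, it’s worth confirming you can actually reach the SP from your laptop. Using SP A’s service port address as an example (substitute whichever address applies to your connection method):
ping 128.221.1.250
Then point your browser at https://128.221.1.250:9519 – you’ll likely need to click through a certificate warning along the way.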
Management Port
This is fairly straightforward, and you’ll need to be on a network that has access to the management ports.
Service Port
So, you’re probably already aware that the best way to connect to the service port is to set your laptop TCP/IP settings as follows (there’s a netsh version after the list if you prefer the command line):
IP Address – 128.221.1.249 or 128.221.1.254
Subnet Mask – 255.255.255.248
Default Gateway – leave blank
DNS server entries – leave blank
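If you’d rather do that from an elevated command prompt, something like this should work – substitute your own interface name for “Local Area Connection”, and remember to set it back to DHCP when you’re done:
netsh interface ipv4 set address name="Local Area Connection" static 128.221.1.249 255.255.255.248
And to put it back afterwards:
netsh interface ipv4 set address name="Local Area Connection" source=dhcp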
Serial Cable
If you want to connect via the serial cable, you’ll need to set up a PPP connection on your laptop. The following steps assume that you’ve got a USB-to-serial adapter and you’re using a Windows 7 machine.
Device Manager
Right Click on your Computer icon and Select Manage
Click on Device Manager
Expand Ports (COM & LPT)
Look for the USB-to-Serial Comm Port (COM##)
The COM number is the one you’ll select during the configuration of your PPP connection.
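If you’d rather not click through Device Manager, a quick command-prompt alternative is below. Whether the adapter shows up here depends on its driver, so treat this as a convenience rather than gospel:
wmic path Win32_SerialPort get DeviceID,Caption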
Create the COM Port
Click Start -> Control Panel -> Phone and Modem.
Click the Modems tab and click Add.
On the Install new Modem Pane, select the Don’t detect my modem box, then click Next.
Select Communications cable between two computers, then click Next.
Select the COM port from the previous step, then click Next.
Click Finish.
Highlight the new modem and click Properties.
Select the Modem tab, adjust the Maximum Port Speed to 115200, then click OK.
Click OK again to exit the Phone and Modem screen.
In the Computer Management window, disable and then re-enable the USB Serial connection in Device Manager. Do this by right-clicking on it.
Setting Up the PPP Connection
Click Start -> Control Panel -> Network and Sharing Center, click Set up a new connection or network (at the bottom).
Click Next, select Set up a dial-up connection and click Next.
This screen should list the available modems; select the Communications cable between two computers modem created above.
On the next screen, put in a random phone number – at least one digit is required to complete this step, and you’ll remove it later. Then put in the username and password, give the connection a name, and click Connect. The connection will fail with error 777. Click Set up the connection anyway.
You will get: “The connection to the Internet is ready to use”. Select Close.
The above connection should now appear in Network Connections. Open the Network and Sharing Center and click Change adapter settings to get to it.
Right-click on your new Modem connection and select Cancel as Default Connection.
Right Click on your new Modem connection again and select Properties.
Modify the Settings
In the General tab, remove the phone number entry and leave it blank.
In the General tab, click Configure, set the maximum speed to 115200, and select Enable hardware flow control.
In the Options tab, click PPP settings and check that the top two boxes are selected (LCP extensions and SW compression).
In the Security tab, check that Data encryption is set to Optional encryption (connect even if no encryption).
In the Networking tab, check that Internet Protocol Version 4 is selected, then click Properties.
In the IPv4 Properties dialog, click the Advanced button and uncheck Use default gateway on remote network.
Click OK to close each of the open dialogs.
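Once that’s all done, you can dial the connection from Network Connections, or from a command prompt with rasdial – the connection name below is hypothetical, use whatever you called it earlier:
rasdial "VNX Serial PPP" <username> <password>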
Conclusion
Here’s what it looks like when you log in – enjoy.
In my current role, I don’t do a lot of array configuration from scratch anymore. I generally do the detailed design and hand it over to a colleague to go and make it so. Recently, however, I’ve had to step in and do some actual work myself because reasons. Anyway, I was deploying a new VNX5400 and having a heck of a time getting Unisphere to work. And by Unisphere I mean Java. I initially wanted to blame my work-issued Windows 8.1 laptop, but it was ultimately a Java issue. It turns out my Java version was high. Not Cypress Hill high, but still too high for Unisphere.
EMC’s snappily titled “EMC VNX Operating Environment for Block 05.33.006.5.096, EMC VNX Operating Environment for File 8.1.6.96, EMC Unisphere 1.3.6.1.0096 Release Notes” talks fairly explicitly about Java support on page 6, and I thought it was worth repeating here for schmucks like me who do this stuff part-time. You can find this document on the EMC support site.
“Java support
The following 32 bit Java Platforms are verified by EMC and compatible for use with Unisphere, the Unified Service Manager (USM), and the VNX Installation Assistant (VIA):
Oracle Standard Edition 1.7 up to Update 75
Oracle Standard Edition 1.8 up to Update 25
The 32-bit JRE is required – even on 64 bit systems. JRE Standard Edition 1.6 is not recommended because Oracle has stopped support for this edition”.
I think I was running 1.8 Update 31, and saw that, regardless of the browser, Unisphere just wouldn’t load. If you need to track down an older version of Java to work on stuff like this – Oracle has a site you can go to here. Incidentally, I can confirm that it is not necessary to install the Ask Toolbar in order for Unisphere to function correctly.
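If you’re not sure what you’re actually running, a quick check from a command prompt sorts it out. Remember it’s the 32-bit JRE that matters, which on a 64-bit box typically lives under C:\Program Files (x86)\Java – the folder name below is an assumption based on 8u25, so adjust for your install:
java -version
"C:\Program Files (x86)\Java\jre1.8.0_25\bin\java.exe" -version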
*Update (2016.05.20): Current link to 1.8 U25 is here.
*Update (2016.06.04): Jirah Cox (@vJirah) pointed out that http://filehippo.com keeps an extensive archive of versions too.
I had a question about this come up this week and thought I’d already posted something about it. Seems I was lazy and didn’t. If you have access, have a look at Primus article emc311319 on EMC’s support site. If you don’t, here’s the rough guide to what it’s all about.
When a Storage Pool is created, a large number of private LUNs are bound on all the Pool drives and these are divided up between SP A and SP B. When a Pool LUN is created, it uses the allocation owner to determine which SP’s private LUNs should be used to store the Pool LUN slices. If the default and current owner are not the same as the allocation owner, the I/O has to be passed over the CMI bus between the SPs to reach the Pool Private FLARE LUNs. This is a bad thing, and can lead to higher response times and general I/O bottlenecks.
OMG, I might have this issue, what should I do? You can change the default owner of a LUN by accessing the LUN properties in Unisphere. You can also change the default owner of a LUN thusly.
naviseccli -h <SP A or B> chglun -l <LUN number> -d <0|1>
where
-d 0 = SP A
-d 1 = SP B
But what if you have too many LUNs where the allocation owner sits on one SP? And when did I start writing blog posts in the form of a series of questions? I don’t know the answer to the latter question. But for the first, the simplest remedy is to create a LUN on the alternate SP and use EMC’s LUN migration tool to get the LUN to the other SP. Finally, to match the current owner of a LUN to the default owner, simply trespass the LUN to the default owner SP.
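As a rough naviseccli sketch (the LUN numbers are made up – 42 is the badly placed LUN, 142 is a new LUN you’ve already created in a pool with the alternate SP as its owner):
naviseccli -h <SP> lun -list -l 42
naviseccli -h <SP> migrate -start -source 42 -dest 142 -rate medium
naviseccli -h <SP> migrate -list
naviseccli -h <default owner SP> trespass lun 42
The first command shows you the current and default owners, the migrate commands kick off and monitor the move, and the trespass at the end (run against the default owner SP) brings the current owner back into line.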
Note that this is a problem from CX4 arrays through to VNX2 arrays. It does not apply to traditional RAID Group FLARE LUNs though, only Pool LUNs.
I’m a bit behind on my news, but just a quick note to say that FLARE 33 for the next-generation VNX (Block) has received an update to version 05.33.000.5.038. The release notes are here. Notably, a fix related to ETA 175619 is included. This is a good thing. As always, go to EMC’s Support site for more info, and talk to your local EMC people about whether it’s appropriate to upgrade.
In my previous post on the Next-Generation VNX, I spoke about some of the highlights of the new platform. In this post I’d like to dive a little deeper into the deduplication feature, because I think this stuff is pretty neat. In the interests of transparency, I’m taking a lot of this information from briefings I’ve received from EMC. I haven’t yet had the chance to test this for myself, so, as always, your mileage might vary.
One of the key benefits of deduplication is reduced footprint. Here’s a marketing picture that expresses, via a simple graph, how deduplication can help you do more, with less.
There are 3 basic steps to deduplication:
Discover / Digest;
Sort / Identify; and
Map / Eliminate.
The Discovery phase generates a unique digest (hash) for each 8KB block. The digests are then sorted to identify chunk candidates for deduplication. The duplicates are then mapped and the space is freed up, with the digests used as pointers for the unique data chunks.
Deduplication can be turned on or off at the LUN level. Pools can contain both deduplicated and “normal” LUNs. Also note that every LUN on a system can be deduplicated if you wish – there is no separate limit applied to the deduplication technology.
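I haven’t verified the exact CLI syntax yet – the -deduplication switch below is an assumption, so check naviseccli lun -modify -help on your revision – but toggling it on an existing LUN should look something like this:
naviseccli -h <SP> lun -modify -l 42 -deduplication on
naviseccli -h <SP> lun -modify -l 42 -deduplication off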
The deduplication properties of a LUN are as follows:
Feature State – On or Paused
State – Off, Enabling, On, Disabling
Status – indicates any problems during enabling or disabling
The deduplication properties of a Pool are as follows:
State – Idle (no deduplicated LUNs), Pending (between passes), Running (currently running a dedupe pass) and Paused (pool is paused)
Deduplication Rate – High, Medium, Low (Medium is current default)
Note that deduplication can be paused at a system level for all pools.
When deduplication is turned off on a LUN, it is migrated out of the deduplication container within the pool. Up to 8 simultaneous migrations can occur per system, and it obviously reduces the consumed space in the deduplication container.
At a high level, deduplication interoperability is there:
Works with FAST VP (dedupe LUNs behave as a single entity, when dedupe is turned off it goes back to per-LUN FAST VP settings)
Supports Snaps and Clones (VNX Snapshots are lost when enabling or disabling, Reserved LUNs for SnapView Snaps cannot be deduplicated)
Support for RP, MV and SAN Copy
LUN migration works, although moving between pools means deduplication is lost as it’s pool-based
Compression is not supported.
And that’s fixed-block deduplication for the next-generation VNX in a nutshell. When I get my hands on one of these I’ll be running it through some more realistic testing and scenarios.
EMC have also breathlessly announced the introduction of “Multi-Core Everything” or MCx as an Operating Environment replacement for FLARE. I thought I’d spend a little time going through what that means, based on information I’ve been provided by EMC.
MCx is a redesign of a number of components, providing functionality and performance improvements for:
Multi-Core Cache (MCC);
Multi-Core RAID (MCR);
Multi-Core FAST Cache (MCF); and
Active / Active data access.
As I mentioned in my introductory post, the OE has been redesigned to spread the operations across each core – providing linear scaling across cores. Given that FLARE pre-dated multi-core x86 CPUs, it seems like this has been a reasonable thing to implement.
EMC are also suggesting that this has enabled them to scale out within the dual-node architecture, providing increased performance, at scale. This is a response to a number of competitors who have suggested that the way to scale out in the mid-range is to increase the number of nodes. How well this scales in real-world scenarios remains to be seen.
Multi-Core Cache
With MCC, the cache engine has been modularized to take advantage of all the cores available in the system. There is also no longer a requirement to manually separate space for Read and Write Cache, meaning no management overhead in ensuring the cache is working in the most effective way regardless of the IO mix. Interestingly, cache is dynamically assigned, allocating space on-the-fly for reads, writes, thin metadata, etc., depending on the needs of the system at the time.
The larger overall working cache provides mutual benefit between reads and writes. Note also that data is not discarded from cache after a write destage (this greatly improves cache hits). The new caching model employs intelligent monitoring of the pace of disk writes to avoid forced flushing, and works on a standardized 8KB physical page size.
Multi-Core RAID
MCR introduces some much-needed changes to the historically clumsy way in which disks were managed in the CLARiiON / VNX. Of particular note is the handling of hot spares. Instead of having to define disks as permanent hot spares, the sparing function is now more flexible and any unassigned drive in the system can operate as a hot spare.
There are three policies for hot spares:
Recommended;
No hot spares; and
Custom.
The “Recommended” policy implements the same sparing model as today (1 spare per 30 drives). Note, however, that if there is an unused drive that can be used as a spare (even if you have the “No Hot Spare” policy set) and there is a fault in a RAID group, the system will use that drive. Permanent sparing also means that when a drive has been used in a sparing function, that drive is then a permanent part of the RAID Group or Pool so there is no need to re-balance and copy back the spare drive to a new drive. The cool thing about this is that it reduces the exposure and performance overhead of sparing operations. If the concept of storage pools that spread all over the place didn’t freak out the RAID Group huggers in your storage team, then the idea of spares becoming permanent might just do it. There will be a CLI command available to copy the spare back to the replaced drive, so don’t freak out too much.
What I like the look of, having been stuck “optimising” CLARiiONs previously, is the idea of “Portable drives”, where drives can be physically removed from a RAID group and relocated to another disk shelf. Please note that you can’t use this feature to migrate RAID Groups to other VNX systems; you’ll need to use more traditional migration methods to achieve this.
Multi-Core FAST Cache
According to EMC, MCF is all about increased efficiency and performance. They’ve done this by moving MCC above the FAST Cache driver – all DRAM cache hits are processed without the need to check whether a block resides in FAST Cache, saving this CPU cycle overhead on all IOs. There’s also a faster initial warm-up for FAST Cache, resulting in better initial performance. Finally, instead of requiring 3 hits, if the FAST Cache is not full, it will perform more like a standard extension of cache and load data based on a single hit. Once the Cache is 80% full it reverts to the default 3-hit promotion policy.
Symmetric Active / Active
Real symmetric? Not that busted-arse ALUA? Sounds good to me. With Symmetric Active/Active, EMC are delivering true concurrent access via both SPs (no trespassing required – hooray!). However, this is currently only supported for Classic LUNs. It does have benefits for Pool LUNs by removing the performance impacts of trespassing the allocation owner. Basically, there’s a stripe locking service in play that provides access to the LUN from each SP, with the CMI being used for communication between the SPs. Here’s what LUN Parallel Access looks like.
And that’s about it. Make no mistake, MCx sounds like it should really improve performance for EMC’s mid-range. Remember though, given some of the fundamental changes outlined here, it’s unlikely that this will work on previous-generation VNX models.
Otherwise titled, “about time”. I heard about this a few months ago, and have had a number of technical briefings from people inside EMC since then. For a number of reasons EMC weren’t able to publicly announce it until now. Anyway, for the official scoop, head on over to EMC’s Speed to Lead site. In this post I thought I’d cover off on some of the high-level speeds and feeds. I hope to have some time in the near future to dive in a little deeper on some of the more interesting architectural changes.
As far as the hardware goes, EMC have refreshed the VNX midrange line with new models: the VNX5200, VNX5400, VNX5600, VNX5800, VNX7600 and VNX8000.
The 5200 is positioned just above the 5100, the 5400 above the 5300, and so forth. The VNX8000 is the biggest yet, and, while initially shipping with 1000 drives, will eventually ship with a 1500-spindle capability. The SPs all use Sandy Bridge chips, with EMC heavily leveraging multi-core. The 8000 will sport dual-socket, 8-core SPs and 128GB of RAM. You’ll also find the sizing of these arrays is based on 2.5″ drives, and that a number of 2.5″ and 3.5″ drives will be available with these models (highlights being 4TB 3.5″ NL-SAS and 1.2TB 2.5″ SAS drives). Here’s a better picture with the max specs for each model.
Multi-core is a big part of the new VNX range, with MCx described as a project that redesigns the core Block OE stack to improve performance, reliability and longevity.
Instead of FLARE using one core per component, MCx is able to evenly distribute workloads across cores, giving improved utilisation across the SP.
The key benefit of this architecture is that you’ll get improved performance with the other tech in the array, namely FAST VP and Deduplication. The multi-core optimisations also extend to RAID management, Cache management and FAST Cache. I hope to be able to do some more detailed posts on these.
FAST-VP has also received some attention, with the chunk size being reduced from 1GB down to 256MB. The big news, however, is the introduction of fixed-block deduplication, with data being segmented in 8KB chunks, deduped at the pool level (not across pools), and turned on or off at the LUN level. I’ll be doing a post shortly on how it all works.
Hopefully that’s given you a starting point from which to investigate this announcement further. As always, if you’re after more information, talk to your local EMC people.