EMC – VNX2, Unisphere and Java Support

In my current role, I don’t do a lot of array configuration from scratch anymore. I generally do the detailed design and hand it over to a colleague to go and make it so. Recently, however, I’ve had to step in and do some actual work myself because reasons. Anyway, I was deploying a new VNX5400 and having a heck of a time getting Unisphere to work. And by Unisphere I mean Java. I initially wanted to blame my work-issued Windows 8.1 laptop, but it was ultimately a Java issue. It turns out my Java version was high. Not Cypress Hill high, but still too high for Unisphere.

EMC’s snappily titled “EMC VNX Operating Environment for Block 05.33.006.5.096, EMC VNX Operating Environment for File 8.1.6.96, EMC Unisphere 1.3.6.1.0096 Release Notes” talks fairly explicitly about Java support on page 6, and I thought it was worth repeating here for schmucks like me who do this stuff part-time. You can find this document on the EMC support site.

“Java support

The following 32 bit Java Platforms are verified by EMC and compatible for use with Unisphere, the Unified Service Manager (USM), and the VNX Installation Assistant (VIA):

  • Oracle Standard Edition 1.7 up to Update 75
  • Oracle Standard Edition 1.8 up to Update 25

The 32-bit JRE is required – even on 64 bit systems. JRE Standard Edition 1.6 is not recommended because Oracle has stopped support for this edition”.

I think I was running 1.8 Update 31, and saw that, regardless of the browser, Unisphere just wouldn’t load. If you need to track down an older version of Java to work on stuff like this – Oracle has a site you can go to here. Incidentally, I can confirm that it is not necessary to install the Ask Toolbar in order for Unisphere to function correctly.
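
If you're not sure what you're running, a quick check from the command line will tell you (this is just standard Java tooling, nothing Unisphere-specific):

C:\>java -version
java version "1.8.0_31"

If that first line reports something newer than the versions listed above, that's likely your problem.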

*Update (2016.05.20): Current link to 1.8 U25 is here.

*Update (2016.06.04): Jirah Cox (@vJirah) pointed out that http://filehippo.com keeps an extensive archive of versions too.

EMC – Go home Unisphere, you’re drunk

Mat forwarded this one through to me this morning. Seems one of his CX4s is feeling a bit odd :)

[Image: unisphere]

EMC – VNX: Splash Screen weirdness or How I spent my recent public holiday in a DC

Here’s a picture you don’t see every day.

[Image: vnx_splash_cropped]

I spent a recent public holiday helping a client move some VNX5500s between racks in their data centre. Not terribly exciting, but it helped them out of a spot. Part of the shift involved changing the SP IP addresses. There’s an article on support.emc.com that refers to Primus ID emc274335 and some Java misconfiguration. I’m not entirely convinced though, because we observed that Unisphere would (eventually) come good without us making any changes to the client settings. In any case, I thought it was interesting. The arrays were running VNX OE R31 and R32.

EMC – VNX: USM reports an error “Assistance needed for upgrade”

If you’re trying to do an OE upgrade on a VNX you might get the following error after you’ve run through the “Prepare for Installation” phase.

[Image: USM_error]

Turns out you just need to upgrade USM to the latest version. You can do this manually or via USM itself. Further information on this error can be found on support.emc.com by searching for the following Primus ID: emc321171.
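
As an aside, if you want to confirm which OE revision you’re actually running before you kick things off, naviseccli’s getagent command will report it. From memory the relevant field is Revision; it looks something like this (output trimmed):

C:\>naviseccli -h 1.1.1.1 getagent
...
Revision:            05.33.000.5.038
...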

Incidentally, I’d just like to congratulate EMC on how much simpler it is to upgrade FLARE / VNX OE nowadays than it was when I first started on FC and CX arrays. Sooo much nicer …

EMC – VNX / CX4 LUN Allocation Owner and Default Owner

Mat’s been doing some useful scripting again. This time it’s a small Perl script that identifies the allocation owner and default owner of a pool LUN on a CX4 or VNX and lets you know whether the LUN is “non-optimal” or not. For those of you playing along at home, I found the following information on this (but can’t remember where I found it). “The allocation owner of a pool LUN is the SP that owns and maintains the metadata for that LUN. It is not advised to trespass the LUNs to an SP that is not the allocation owner, as this introduces lag. The allocation owner SP provides the best performance for the pool LUN. The allocation owner SP is set by the system to match the default SP owner when you create the LUN. You cannot change the allocation owner after the LUN is created. If you change the default owner for the LUN, the software will display a warning that a performance penalty will occur if you continue.”
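
I haven’t pulled Mat’s script apart, but conceptually the check is straightforward: pull the LUN properties via naviseccli lun -list, compare the Default Owner and Allocation Owner fields, and flag any mismatch. Here’s a rough Perl sketch of the idea (the SP address and exact field names are assumptions based on my arrays; adjust to suit):

#!/usr/bin/perl
use strict;
use warnings;

# Rough sketch only: flag pool LUNs whose default owner differs
# from their allocation owner (i.e. "NonOptimal" LUNs).
# Assumes "naviseccli lun -list" output contains Name,
# Default Owner and Allocation Owner fields.
my $sp = '1.1.1.1';    # SP address - an assumption, use your own
my ($name, $default) = ('', '');

for my $line (`naviseccli -h $sp lun -list`) {
    $name    = $1 if $line =~ /^Name:\s+(.+?)\s*$/;
    $default = $1 if $line =~ /^Default Owner:\s+(SP\s?\w)/;
    if ($line =~ /^Allocation Owner:\s+(SP\s?\w)/) {
        my $alloc = $1;
        printf "%-24s default: %-4s allocation: %-4s %s\n",
            $name, $default, $alloc,
            ($default eq $alloc) ? 'OK' : 'NonOptimal';
    }
}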

There’s a useful article by Jithin Nadukandathil on the ECN site, as well as a most excellent writeup by fellow EMC Elect member Jon Klaus here. In short, if you identify NonOptimal LUN ownership, your best option is to create a new LUN and migrate the data to that LUN via the LUN Migration tool. You can download a copy of the script here. Feel free to look at the other scripts that are on offer as well. Here’s what the output looks like.

[Image: output1]

EMC – Using naviseccli to report on a LUN’s FAST status

Ever wondered what tier a LUN was sitting on in a FAST VP pool? Wonder no more, naviseccli is here to help.

C:\>naviseccli -user username -scope 0 -password password -h 1.1.1.1 lun -list -l 24 -userCap -consumedCap -tieringPolicy -initialTier -tiers
LOGICAL UNIT NUMBER 24
Name: LUN_0024
User Capacity (Blocks): 4294967296
User Capacity (GBs): 2048.000
Consumed Capacity (Blocks): 4379120640
Consumed Capacity (GBs): 2088.127
Tiering Policy: Auto Tier
Initial Tier: Optimize Pool
Tier Distribution:
FC: 93.87%
SATA: 6.13%

Also, you can see this information via Unisphere. Those of you who are challenged at the thought of typing something won’t be left out.

[Image: lun1]

[Image: lun2]

EMC – Using naviseccli to expand a pool

I haven’t banged on about how much I like naviseccli in a little while. I was reading a white paper on FAST VP in the new VNX series recently and came across the storagepool -expand command. The command itself isn’t so exciting, but the -skipRules option was intriguing. It seems you would use this if you didn’t want to follow all of the rules associated with a normal pool expansion. I inferred from the white paper that, by default, a FAST VP pool will automatically redistribute its LUNs across the new disks. This may be non-optimal if you’re in the middle of a busy period on the array, and if you don’t want it to happen, you should use the -skipRules option. Note that this is for Release 5.33. If I’ve misunderstood this I’m happy to be corrected.

In any case, here’s an example of how to expand a pool using naviseccli.

storagepool -expand {-id poolID | -name poolName} -disks disksList [-rtype raidType [-rdrivecount drivecount]] [-initialverify yes|no] [-skipRules] [-o]

The RAID types you can select are r_5, r_6 and r_10. This matters if you already have disks of a particular tier in the pool: the capacity tier (NL-SAS drives) uses RAID 6, the performance tier (SAS drives) uses RAID 5, and the extreme performance tier (Flash drives) uses RAID 1/0.

naviseccli -h SP_IPaddress storagepool -expand -id 10 -rtype r_6 -disks 0_2_0 0_2_1 0_2_2 0_2_3 0_2_4 0_2_5 0_2_6 0_2_7 -o
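
And, assuming my reading of the white paper is right, if you wanted to expand without triggering the automatic redistribution, you’d just add -skipRules to the same command:

naviseccli -h SP_IPaddress storagepool -expand -id 10 -rtype r_6 -disks 0_2_0 0_2_1 0_2_2 0_2_3 0_2_4 0_2_5 0_2_6 0_2_7 -skipRules -o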

EMC – New VNX OE available

I’m a bit behind on my news, but just a quick note to say that FLARE 33 for the next-generation VNX (Block) has received an update to version 05.33.000.5.038. The release notes are here. Notably, a fix related to ETA 175619 is included. This is a good thing. As always, go to EMC’s Support site for more info, and talk to your local EMC people about whether it’s appropriate to upgrade.

EMC – DIY Heatmaps – Updated Version

I’ve patched the DIY Heatmaps script, fixing a problem with the table names generated in the database files. You can download it from the Utilities page.

Thanks

Mat.

EMC – Next-Generation VNX – Deduplication

In my previous post on the Next-Generation VNX, I spoke about some of the highlights of the new platform. In this post I’d like to dive a little deeper into the deduplication feature, because I think this stuff is pretty neat. In the interests of transparency, I’m taking a lot of this information from briefings I’ve received from EMC. I haven’t yet had the chance to test this for myself, so, as always, your mileage might vary.

One of the key benefits of deduplication is reduced footprint. Here’s a marketing picture that expresses, via a simple graph, how deduplication can help you do more with less.

[Image: VNX_Efficiency]

There are 3 basic steps to deduplication:

  1. Discover / Digest;
  2. Sort / Identify; and
  3. Map / Eliminate.

In the Discover phase, the system generates a unique digest (hash) for each 8KB chunk. The digests are then sorted to identify candidate chunks for deduplication. Duplicates are then mapped to a single copy and the redundant space is freed up, with the digests acting as pointers to the unique data chunks.
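
To make the three steps a bit more concrete, here’s a trivial Perl illustration of hash-based deduplication over 8KB chunks of a file. To be clear, this is my own toy illustration of the general technique (with SHA-1 as the digest), not a representation of EMC’s actual implementation:

#!/usr/bin/perl
use strict;
use warnings;
use Digest::SHA qw(sha1_hex);

# Illustrative only: hash-based dedupe over 8KB chunks of a file.
my $chunk_size = 8192;
my $file = shift or die "Usage: $0 <file>\n";
open my $fh, '<:raw', $file or die "Can't open $file: $!\n";

my %seen;    # digest -> first occurrence (the "unique chunk" map)
my ($total, $dupes) = (0, 0);

while (read($fh, my $chunk, $chunk_size)) {
    my $digest = sha1_hex($chunk);    # Discover / Digest
    $total++;
    if (exists $seen{$digest}) {      # Sort / Identify
        $dupes++;                     # Map / Eliminate: point at existing chunk
    } else {
        $seen{$digest} = $total;
    }
}
close $fh;

printf "%d chunks, %d duplicates (%.1f%% reducible)\n",
    $total, $dupes, $total ? 100 * $dupes / $total : 0;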

Deduplication can be turned on or off at the LUN level. Pools can contain both deduplicated and “normal” LUNs. Also, note that the total number of LUNs on a system can be deduplicated – there is no separate limit applied to the deduplication technology.

The deduplication properties of a LUN are as follows:

  • Feature State – On or Paused
  • State – Off, Enabling, On, Disabling
  • Status – indicates any problems during enabling or disabling

The deduplication properties of a Pool are as follows:

  • State – Idle (no deduplicated LUNs), Pending (between passes), Running (currently running a dedupe pass) and Paused (pool is paused)
  • Deduplication Rate – High, Medium, Low (Medium is current default)

Note that deduplication can be paused at a system level for all pools.

When deduplication is turned off on a LUN, the LUN is migrated out of the deduplication container within the pool. Up to 8 of these migrations can run simultaneously per system, and each one obviously reduces the consumed space in the deduplication container.

At a high level, deduplication interoperability is there:

  • Works with FAST VP (deduplicated LUNs behave as a single entity; when dedupe is turned off, the LUN reverts to its per-LUN FAST VP settings)
  • Supports Snaps and Clones (VNX Snapshots are lost when enabling or disabling, Reserved LUNs for SnapView Snaps cannot be deduplicated)
  • Support for RP, MV and SAN Copy
  • LUN migration works, although moving between pools means deduplication is lost as it’s pool-based
  • Compression is not supported.

And that’s fixed-block deduplication for the next-generation VNX in a nutshell. When I get my hands on one of these I’ll be running it through some more realistic testing and scenarios.