EMC announces new VNXe

EMC World is just around the corner and, as is their wont, EMC are kicking off early with a few cheeky product announcements. I don’t have a lot to say about the VNXe, as I don’t do much in that space, but a lot of people might find this recent announcement of interest. If press releases aren’t your thing, here is a marketing slide you might enjoy instead.

[Image: VNXe3200 marketing slide]

The cool thing about this is that the baby is getting the features of the bigger model, namely the FAST Suite, thin provisioning, file dedupe and MCx. Additionally, a processor speed improvement will help with the overall performance of the device. There’s a demo simulator you can check out here.

EMC also announced a new feature for the VNX called D@RE, or Data at Rest Encryption. This should be available as a non-disruptive upgrade (NDU) in Q3 2014. I hope to have more info on that in the future.

Finally, Project Liberty was announced. This is basically EMC’s virtualised VNX, and I’ll have more on that in the near future.

And if half-arsed blog posts aren’t your thing, I urge you to check out Jason Gaudreau’s post covering the same announcement. It’s a lot more coherent and useful.

EMC – Some new scripts

Mat has come up with a few new scripts – FlipFASTTiering and ReplicationCapacity. They’re Perl scripts that you can use to list and modify FAST Tiering schedules and to report on MirrorView replication data respectively. Hopefully you’ll find them of some use. Further information can be found on the Utilities page.

EMC – Using naviseccli to report on a LUN’s FAST status

Ever wondered what tier a LUN was sitting on in a FAST VP pool? Wonder no more, naviseccli is here to help.

C:\>naviseccli -user username -scope 0 -password password -h 1.1.1.1 lun -list -l 24 -userCap -consumedCap -tieringPolicy -initialTier -tiers
LOGICAL UNIT NUMBER 24
Name: LUN_0024
User Capacity (Blocks): 4294967296
User Capacity (GBs): 2048.000
Consumed Capacity (Blocks): 4379120640
Consumed Capacity (GBs): 2088.127
Tiering Policy: Auto Tier
Initial Tier: Optimize Pool
Tier Distribution:
FC: 93.87%
SATA: 6.13%
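
If you want the same detail for every LUN in one hit, dropping the -l switch should do the trick (a minimal sketch; exact switch support varies a little between FLARE releases):

C:\>naviseccli -user username -scope 0 -password password -h 1.1.1.1 lun -list -tiers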

Also, you can see this information via Unisphere. Those of you who are challenged at the thought of typing something won’t be left out.

[Screenshots: the LUN’s tiering details as shown in Unisphere]

EMC – Maximum Pool LUN Size

Mat has been trying to create a 42TB LUN to use temporarily for Centera backups. I don’t want to go into why we’re doing Centera backups, but let’s just say we need the space. He created a Storage Pool on one of the CX4-960s, using 28 * 2TB spindles configured as 6+1 private RAID Groups. However, when he tried to bind the LUN, he got the following error.

[Screenshot: error binding the 42TB LUN]

Weird. So what if we set the size to 44000GB?

[Screenshot: the same error at 44000GB]

No, that doesn’t work either. Turns out I should really read some of the stuff I post here, like my article entitled “EMC CLARiiON VNX7500 Configuration guidelines – Part 1”, where I mention that the maximum size of a Pool LUN is 16TB. Even that was wrong in this case, as it looks more like it’s 14TB. Seems like we’ll be using RAID Groups and MetaLUNs to get over the line on this one.
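
For what it’s worth, the RAID Group route looks something like the following. This is a rough sketch from memory, with hypothetical LUN and RAID Group numbers and sizes, so check the CLI reference for your FLARE release before running anything like it. The idea is to bind a big LUN on each RAID Group and then concatenate them into a single metaLUN.

naviseccli -h 1.1.1.1 bind r5 50 -rg 10 -cap 10000 -sq gb
naviseccli -h 1.1.1.1 bind r5 51 -rg 11 -cap 10000 -sq gb
naviseccli -h 1.1.1.1 metalun -expand -base 50 -lus 51

Repeat the bind for as many RAID Groups as you need to get to 42TB, adding each new LUN to the metaLUN as you go.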

EMC – CX4 FAST Cache cosmetic issues and using /debug

I noticed one of our CX4s exhibiting some odd behaviour the other day: in the System Information window, FAST Cache seemed broken.

Going to the FAST Cache tab on System Properties yielded the same result, as did the output of naviseccli (using naviseccli -h IPaddress cache -fast -info). Interestingly, though, it was still showing up with dirty pages.
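
For reference, on a healthy array that command returns something along these lines (illustrative output only, with made up sizes; the exact fields vary by FLARE release):

naviseccli -h 1.1.1.1 cache -fast -info
Mode: Read/Write
Raid Type: r_1
Size (GB): 366
State: Enabled
Current Operation: N/A
Percentage Dirty SPA: 4
Percentage Dirty SPB: 6

In our case the configuration details had gone missing while the dirty page counters were still ticking over, which pointed to a cosmetic rather than functional problem.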

We tried recreating it, but the 8 * 100GB EFDs we were using for FAST Cache weren’t available. So we logged a call, and after a bit of back and forth with support, worked out how to fix it. A few things to note first though. If support tell you that FAST Cache can’t be used because you’re using EFDs, not SSDs, ask to have the call escalated. Secondly, the solution I’m showing here fixes the specific problem we had. If you frig around with the tool you may end up causing yourself more pain than it’s worth.

So, to fix the problem we had, we needed to log in to the /debug page on the CX4. To do this, go to http://<yourSPaddress>/debug.

You’ll need your Navisphere or LDAP credentials to gain access. Once you’ve logged in, pay particular attention to the warning at the top of the page.

Now scroll down until you get to “Force A Full Poll”. Click on that and wait a little while.

Once this is done, you can log back into Unisphere and FAST Cache should look normal again.

Hooray!

EMC – DIY Heatmaps – Updated Version

Mat has updated the DIY Heatmaps for EMC CLARiiON and VNX arrays to version 3.021. You can get it from the Utilities page here. Any and all feedback welcome. Changes below:

New command line options:

--min_color, --mid_color, --max_color

These allow the user to select different color schemes for their heatmap graphs. The available colors to choose from are red, green, blue, yellow, cyan, magenta, purple, orange, black and white.

--steps

This changes the granularity of the heatmap steps. For example, on an attribute like % Utilization, if steps is set to 20 there will be separate color bands for 0-4%, 5-9%, 10-14%, and so on. The default is 10, so color bands fall at 0-9%, 10-19%, 20-29%, and so on.

--detail_data

This option displays a detailed heat graph for an object over time when it has been selected. For example, selecting the SP-B heatmap object produces a heat graph for that object over the duration of the NAR file. Thanks to Ian for the idea and code behind this.

There have been some other script improvements too:

Add exit code checking after running naviseccli

Browser compatibility fixes, mainly with Chrome, but this should improve display consistency across different browser platforms
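
By way of illustration, an invocation using the new options might look like the following. Note that heatmap.pl is a stand-in for whatever the script file is called in your copy, and you’ll still need to supply its usual input arguments; the option names themselves come from the changelog above.

perl heatmap.pl --min_color blue --mid_color yellow --max_color red --steps 20 --detail_data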

EMC – Why FAST VP Best Practice is Best Practice

Those of you fortunate enough to have worked with me in a professional capacity will know that I’m highly opinionated. I generally try not to be opinionated on this blog, preferring instead to provide guidance on tangible technical things. On this occasion, however, I’d like to offer my opinion. I overheard someone in the office recently saying that best practices are just best practices, you don’t have to follow them. Generally speaking, they’re right. You don’t have to do what the vendor tells you, particularly if it doesn’t suit your environment, circumstances, whatever. What annoys me, though, is the idea that’s been adopted by a few in my industry that they can just ignore documents that cover best practices because there’s no way the vendor would know what’s appropriate for their environment. At this point I call BS. These types of documents are put out there because the vendor wants you to use their product in the way it was meant to be used. And – get this – they want you to get value from using their product. The idea being that you’ll be happy with the product, and buy from the vendor again.

BP Guides aren’t just for overpaid consultants to wave at know-nothing customers. They’re actually really useful guidelines around which you can base your designs. Crazy notion, right?

So, to my point. EMC recommend that, when you’re using FAST VP on the CLARiiON / VNX, you leave 10% free space in your tiers. The reason they recommend this is that FAST VP needs sufficient space to move slices between tiers. Otherwise you’ll get errors like this: “712d841a Could not complete operation Relocate 0xB00031ED4 allocate slice failed because 0xe12d8709”. And you’ll get lots of them. Which means that FAST is unable to move slices around the pool. In which case, why did you buy FAST in the first place? For more information on these errors, check out emc274840 and emc286486 on Powerlink.

If you want an easy way to query a pool’s capacity, use the following naviseccli command:

naviseccli -h ipaddress storagepool -list -tiers
Pool Name: SP_DATA_1
Pool ID: 3

Tier Name: FC
Raid Type: r_5
User Capacity (GBs): 33812.06
Consumed Capacity (GBs): 15861.97
Available Capacity (GBs): 17950.10
Percent Subscribed: 46.91%
Data Targeted for Higher Tier (GBs): 0.00
Data Targeted for Lower Tier (GBs): 0.00
Disks (Type):

Bus 6 Enclosure 7 Disk 14 (Fibre Channel)
Bus 6 Enclosure 7 Disk 12 (Fibre Channel)
Bus 6 Enclosure 7 Disk 10 (Fibre Channel)
Bus 3 Enclosure 5 Disk 3 (Fibre Channel)
Bus 3 Enclosure 5 Disk 1 (Fibre Channel)
Bus 4 Enclosure 5 Disk 2 (Fibre Channel)
Bus 4 Enclosure 5 Disk 0 (Fibre Channel)
[snip]
Bus 2 Enclosure 6 Disk 14 (Fibre Channel)
Bus 2 Enclosure 6 Disk 12 (Fibre Channel)
Bus 2 Enclosure 6 Disk 10 (Fibre Channel)
Bus 0 Enclosure 2 Disk 0 (Fibre Channel)
Bus 5 Enclosure 6 Disk 8 (Fibre Channel)
Bus 3 Enclosure 2 Disk 4 (Fibre Channel)
Bus 7 Enclosure 5 Disk 6 (Fibre Channel)

Pool Name: SP_TEST_10
Pool ID: 2
Tier Name: FC
Raid Type: r_10
User Capacity (GBs): 1600.10
Consumed Capacity (GBs): 312.02
Available Capacity (GBs): 1288.08
Percent Subscribed: 19.50%
Data Targeted for Higher Tier (GBs): 0.00
Data Targeted for Lower Tier (GBs): 0.00
Disks (Type):
Bus 1 Enclosure 7 Disk 3 (Fibre Channel)
Bus 1 Enclosure 7 Disk 5 (Fibre Channel)
Bus 1 Enclosure 7 Disk 7 (Fibre Channel)
Bus 1 Enclosure 7 Disk 2 (Fibre Channel)
Bus 1 Enclosure 7 Disk 4 (Fibre Channel)
Bus 1 Enclosure 7 Disk 6 (Fibre Channel)
Bus 1 Enclosure 7 Disk 9 (Fibre Channel)
Bus 1 Enclosure 7 Disk 8 (Fibre Channel)
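
If you just want the free percentage per tier without doing the arithmetic yourself, something like this works (a quick sketch, assuming you’re running naviseccli from a Unix host with awk available):

naviseccli -h ipaddress storagepool -list -tiers | awk '/User Capacity/ {u=$NF} /Available Capacity/ {printf "Free: %.1f%%\n", ($NF/u)*100}'

Anything showing less than 10% free is worth a closer look.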

And if you want to get the status of FAST VP operations on your pools, use the following command:

naviseccli -h ipaddress autotiering -info -opstatus
Storage Pool Name: SP_DATA_1
Storage Pool ID: 3
Relocation Start Time: N/A
Relocation Stop Time: N/A
Relocation Status: Inactive
Relocation Type: N/A
Relocation Rate: N/A
Data to Move Up (GBs): 0.00
Data to Move Down (GBs): 0.00
Data Movement Completed (GBs): N/A
Estimated Time to Complete: N/A
Schedule Duration Remaining: N/A

Storage Pool Name: SP_TEST_10
Storage Pool ID: 2
Relocation Start Time: N/A
Relocation Stop Time: N/A
Relocation Status: Inactive
Relocation Type: N/A
Relocation Rate: N/A
Data to Move Up (GBs): 0.00
Data to Move Down (GBs): 0.00
Data Movement Completed (GBs): N/A
Estimated Time to Complete: N/A
Schedule Duration Remaining: N/A
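
If relocation is inactive and you’d rather kick it off manually than wait for the schedule, there’s an autotiering -relocation -start command for that. I’m writing the following from memory and the switches vary between releases, so treat it as a sketch and check the CLI reference for your version first:

naviseccli -h ipaddress autotiering -relocation -start -poolName SP_DATA_1 -rate medium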

And next time you’re looking at a pool with tiers that are full, think about what you can do to alleviate the issue, and think about why you’ve automatically ignored the best practices guide.

EMC – DIY Heatmaps – Updated Version

Mat has updated the DIY Heatmaps for EMC CLARiiON and VNX arrays to version 3.019. You can get it from the Utilities page here. Any and all feedback welcome.

Latest fixes:

- Search the PATH environment variable for naviseccli

- Search common install locations for naviseccli

- Improve cross-browser support (tested on IE, Chrome and Firefox)

- Improve debug details (add module version reporting)

- Fix divide by zero bug in rendering routine

EMC – DIY Heatmaps – Updated Version

Mat has updated the DIY Heatmaps for EMC CLARiiON and VNX arrays to version 3.018. You can get it from the Utilities page here. Any and all feedback welcome.

Latest fixes:

## 0.3.016 Add options to add array name and SP name to output file

## Fix --display_drive_type so that it displays empty drive slots as white, Removed / Failed drives as gray and unknown as green

## Add attributes to display total array and bus IOPS and Bandwidth

## Add --display_actual option to view actual IO stats

## Add read and write attributes for SP IOPS and bandwidth metrics

## Add --time_zone option

## Add the time zone to the heatmap output

## Add LUN bandwidth total, read & write and LUN IOPS total, read & write attributes

## Fix display problem when all trays have their last disks configured as hotspares or not in use

## Add 2TB drive size

## 0.3.017 Change display options to allow control of how many Disk, LUN and SP heatmaps appear per column

## Add --disk_maps, --lun_maps and --sp_maps

## 0.3.018 Add --debug option to print detailed debug information

EMC – DIY Heatmaps – Updated Version

Mat has updated the DIY Heatmaps for EMC CLARiiON and VNX arrays to version 3.015. You can get it from the Utilities page here. Any and all feedback welcome.