EMC Announces Unity – Part 2

I covered EMC’s announcement of their new Unity platform in an earlier post, and thought it would be worthwhile following up on a few key points around data protection and protocol support.

 

Data Protection with Unity

You can do a bunch of the high level local and remote protection management through Unisphere, including:

  • Scheduling snapshots
  • Viewing system defined schedules
  • Modifying protection options
  • Customizing schedules based on your SLAs
  • Configuring replication
  • Managing replication operations such as session failover and failback
  • Viewing replication session states and statuses

Unified Snapshots provide:

  • Point-in-time snapshot copies of data;
  • Snapshots for both block and file resources (finally!); and
  • The foundation for native asynchronous replication on Unity.

The following table provides information on the limits with snapshots on the Unity platform.

Unity014_Snapshot_Limits

You can asynchronously replicate file and block data from Unity to another Unity, UnityVSA, VNXe, or vVNX. How do I get my VNX data onto the Unity array? EMC say that RecoverPoint is your best bet for array replication activities from the VNX1 or VNX2 to the Unity platform. If you’re looking at data migration options, the following table may help.

Unity009_Migration

 

Protocols and Filesystems

There’s a fair bit of support for more “modern” iterations of SMB and NFS. These are outlined below:

SMB share options

  • Continuous Availability
  • Protocol Encryption
  • Access Based Enumeration (ABE)
  • Distributed File System (DFS)
  • Branch Cache
  • Offline Availability
  • Umask

Supported Features

  • Dynamic access control
  • Hyper-V shared VHDX
  • Antivirus

NFS v4.0 & v4.1

Unity introduces support for NFS v4.0 and v4.1:

  • Functionality described in RFC 3530 & RFC 5661
  • Includes NFS ACLs
  • Stateful protocol, unlike earlier NFS versions

Note, however, the following exceptions:

  • No pNFS
  • No directory delegation

FTP/SFTP

Unity supports accessing NAS Servers via FTP and SFTP (there’s a quick SFTP access sketch at the end of this section):

  • This can be enabled and disabled independently
  • Accessible by Windows, Unix, and anonymous users

Access control lists:

  • Enable or disable access for users, groups, and hosts

FTP/SFTP auditing can be configured on the NAS Server:

  • Client IP, time of connection, uploaded/downloaded files
  • Log directory and maximum log size are configurable
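
As an aside, once SFTP is enabled on a NAS Server, getting at it from a script is just standard SFTP. Here’s a minimal sketch using paramiko – the NAS Server address, credentials and paths are placeholders rather than anything Unity-specific.

```python
# Minimal sketch: move files to/from a Unity NAS Server over SFTP using paramiko.
# The server address, credentials, and file paths below are placeholders.
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # lab use only; verify host keys properly in production
ssh.connect("nas01.example.com", username="ftpuser", password="MyPassword123!")

sftp = ssh.open_sftp()
try:
    sftp.get("/shared/reports/daily.csv", "daily.csv")       # download a file from the share
    sftp.put("upload.txt", "/shared/incoming/upload.txt")    # push a file back up
finally:
    sftp.close()
    ssh.close()
```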

EMC have also delivered a new, scalable 64-bit filesystem that provides a range of file services, including:

  • Scalability to 64TB;
  • Space-efficient snapshots;
  • The ability to shrink a file system and reclaim that space;
  • Support for up to 256 VMDK clones;
  • Fast failover;
  • In-Memory Log Replay, which improves the file system’s ability to quickly recover its state in the event of an ungraceful shutdown, resulting in a faster failover time; and
  • Improved quota management.

The following table provides some more information on the supported configuration maximums for filesystems across the Unity platform.

Unity013_FS_NAS_Limits

 

FAST Cache

The following options are available for FAST Cache configuration on the new Unity arrays.

Unity007_FASTCache

Note also the following improvements (both of which I think are pretty neat from an operational perspective):

  • FAST Cache supports online expansion – up to the system maximum; and
  • FAST Cache supports online shrink – you now have the ability to remove all but one FAST Cache pair.

 

Maintenance Options

EMC have been paying attention to the likes of Pure and Nimble, with their long-life maintenance programs designed to be a little kinder to customers wanting to keep their systems for more than five minutes. As such, EMC customers can now “Xpect More” for all-flash systems, with Unity (all-flash) customers being guaranteed:

  • Lifetime maintenance pricing for their Unity all-flash;
  • Investment protection on flash drives that need to be replaced or repaired; and
  • Lifetime flash endurance protection.

Obviously I recommend reading the fine print about this program, but on the face of it, it certainly warrants further investigation.

 

CLI

You’re probably asking if there is a CLI available for Unity, like naviseccli (Navisphere Secure CLI). After all, naviseccli is pretty awesome, and you’ve no doubt spent hours getting a bunch of stuff automated with just naviseccli and a dream. The good news is that yes, you can run UEMCLI commands from your workstation or via SSH on the system. The bad news is that previous custom scripts using naviseccli will not work using Unity UEMCLI.
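
If you’re keen to start porting those scripts, here’s a rough sketch of the kind of wrapper you could use to drive UEMCLI from Python. The general invocation pattern (-d for the target system, -u and -p for credentials, then an object path and an action) follows EMC’s Unisphere CLI documentation, but the object path I’ve used here (/stor/config/pool) and the array address and credentials are just illustrative – check the UEMCLI reference for your OE version before building anything on it.

```python
# Rough sketch of a UEMCLI wrapper; assumes the uemcli client is installed and on the PATH.
# The array address, credentials, and object path below are placeholders.
import subprocess

def uemcli(array, user, password, *args):
    """Run a single UEMCLI command against a Unity system and return its output."""
    cmd = ["uemcli", "-d", array, "-u", user, "-p", password, *args]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout

if __name__ == "__main__":
    # e.g. list the storage pools on a (hypothetical) array
    print(uemcli("unity01.example.com", "admin", "MyPassword123!", "/stor/config/pool", "show"))
```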

 

Other Notes

Here are a few other points that I found interesting:

  • Inline compression is due before the end of the calendar year, and a deduplication option is yet to be made available for the platform.
  • There is a limit of 10 DAEs and 250 drives per bus (same as the VNX2).
  • Unity doesn’t have 60- or 120-drive DAEs, but there is a plan under consideration to support a higher number of drives.
  • Data At Rest Encryption (D@RE) is optional software that is only offered at the point of sale and cannot be enabled after the system is purchased. EMC don’t offer D@RE in certain restricted countries, including China and Russia.

 

Further Reading and Conclusion

[Update] There are a few nice articles that I didn’t see at the time of publication that I think are worth looking at. Dave Henry has a comprehensive write-up on Unity here, Rob Koper has some good coverage here, and Chris Evans has a typically thought-provoking article here that I recommend reading. Finally, Chad Sakac has a comprehensive write-up here that is well worth your time.

If you’ve had to use local protection tools on a unified VNX, you’ll be pleased to see the improvements that EMC have made with regards to coherent features and toolsets across file and block. Likewise if you’ve struggled with the lack of modern protocol support on previous unified offerings, then Unity will be a refreshing change. It’s a bummer that the CLI has changed, but this might be an opportunity to re-evaluate a number of the scripts you’ve been using to get things done previously. If nothing else, it should give me fodder for a few more blog posts along the lines of “I used to do x with naviseccli, now I do y with UEMCLI”. I’m looking forward to digging in further.

EMC Announces Unity – Part 1

EMC recently announced their new midrange array, “Unity”. The message from EMC that I’ve heard during various briefings has been that it “eclipses” the VNX and VNXe. What they mean by that is this: there is no VNX3 platform planned – Unity is EMC’s new midrange storage platform. Of interest, though, is that there are currently no VNX2 and VNXe EOL dates. EMC are positioning the Unity arrays between the VNXe1600 and VNXe3200 and the VNX7600 and VNX8000 hybrids. This will make a bit more sense as you read on. And while I’m at it, here’s a box shot, only because it wouldn’t be a product announcement without one of those.

Unity007_Box

 

Major Highlights

So what are the exciting parts of the announcement? Well, there are a few good bits that I’ll cover in depth further on.

  • HTML5 GUI – This is big. Java can finally go die in a fire. Or at least get updated on my laptop to something sensible;
  • Native block, file and VVOLS;
  • A new filesystem that goes to 64TB;
  • Unified block and file snapshots and replication;
  • Everything is now in 2RU of rack space – there are no more Control Stations, no more Data Movers.

Also of note is that within 90 days of GA, VCE will be delivering these solutions as well.

 

New Models

There are four new models, with every model having an all-flash and hybrid option (all-flash being denoted by the F).

Unity001

All models feature:

  • Proactive support
  • Self-service portal
  • System monitoring
  • CloudIQ dashboard and management platform.

EMC talked a bit about the density improvements as well, using the change from a base VNX5800 to the Unity 600F. In this example:

  • The footprint goes from 7RU to 2RU;
  • Cabling goes from 30 cables down to 6;
  • Power consumption is reduced from 1495W to 703W;
  • Rack installation time goes from 60 minutes to 2 minutes; and
  • The hero number increases as well, with a benchmark as follows: 101K to 295K IOPS (thin LUN, small-block random workloads).

I haven’t put one of these things in a rack yet, nor have I had a chance to do my own testing, so I can only report what EMC are telling me. As always, your mileage might vary.

 

Architecture

Are we finally rid of Windows-based FLARE running on SPs? EMC tell me we are. If you’ve been following Chad’s blog, you’d have a feel for some of the background architecture that’s gone into Unity. In short, it’s a SUSE-based operating platform with everything (block, VVOLS and file) in a common pool. In my opinion this is kind of what we were hoping to see with VNX2, and it’s good to see it’s finally here.

Unity005_Architecture

Some of the features of the new architecture include:

  • A 64-bit, 64TB filesystem (wheee!);
  • Support for IP multi-tenancy;
  • Unified snapshots and replication (it was previously a bit of a mess of different tools);
  • Integrated data copy management (I need to read up on this);
  • Improved Quality of Service (QoS) and quota management;
  • Encryption and anti-virus services; and
  • “Modern” data protection choices.

 

Storage Pools

Storage Pools have been around since Release 30 of FLARE, but these ones are a bit more capable than their predecessors. All storage resources are built from storage pools. A few of the features include:

  • Pool operations include create, expand, modify, and delete (still no shrink, as best I can tell); and
  • Users can monitor and configure storage pools (good for shops with odd requirements).

Users can also view:

  • Current and historical capacity usage;
  • FAST VP relocation and data distribution across storage pool tiers; and
  • Snapshot storage consumption thresholds and deletion policies.

Here’s a handy table listing the maximum capacities for Storage Pools on each Unity model.

Unity012_StoragePools_Limits

Note that the file components live inside what EMC calls “NAS Servers”, which are like virtualised data movers. I’ll be looking into these in more depth in the near future.

Unity006_StoragePools

 

Speeds and Feeds

Here’s a table covering the configurations for the various models (excluding the UnityVSA, which I’ll cover later). Note that the Unity 500 (F) supports 350 drives initially, with 500 supported in 2H 2016, and that the Unity 600 (F) supports 500 drives initially, with 1000 supported in 2H 2016.

Unity002_HybridMaximums

 

A DPE has two Storage Processors (SPs), each with:

  • A single-socket Intel Haswell CPU with 6-12 cores
  • DDR4 DIMM slots
  • Embedded ports:
    • 2x 1GbE RJ45 ports (management and service)
    • 2x 10GbE RJ45 ports (front-end)
    • 2x CNA ports (front-end; configured during OE install for either FC or Ethernet)
    • 2x mini-HD SAS ports (12Gb SAS DAE connectivity)
    • 1x USB port
  • Front-end connectivity is IP/iSCSI & Fibre Channel
  • Back-end connectivity to drives is 12Gb SAS

All Unity Hybrid models support the 2U drive enclosure, which holds up to twenty-five 2.5” drives, and/or the 3U drive enclosure, which holds fifteen 3.5” drives. Note that the All-Flash models support only the 2U drive enclosure; there is no need for the 3U enclosure to be supported, as it is intended for SAS and NL-SAS drives.

Here’s a table providing an overview of the (pretty reasonable) range of drives supported.

Unity003_DriveSupport

 

UnityVSA

You’ve already heard about vVNX. I even wrote about it. The UnityVSA takes that same concept and applies it to Unity, which is pretty cool. The following table provides information on the basic configuration you’ll need in place to get it up and running.

Unity010_VSA_Reqs

There are a few different editions as well, with the 10TB and greater versions being made available on a yearly subscription basis with EMC Enhanced support. Pricing and capacity are as follows (note that these are US list prices):

  • 4TB – Free, Community supported
  • 10TB – $2995, EMC supported
  • 25TB – $3995, EMC supported
  • 50TB – $4995, EMC supported

Feature parity is there as much as it can be for a virtual system.

Unity011_VSA_features

 

Unity Unisphere

I mentioned at the start of this post that Unisphere no longer uses Java. This is seriously good news in my opinion. As well as this, Unity’s new user interface has the following benefits:

  • Eliminates security concerns associated with browser plugins (that’s right, no one likes you, Java);
  • A sleek and clean look and feel; and
  • A flat UI, allowing all functions to be accomplished on the first screen in a category (be it file, block or VMware VVOLS).

As a result of the move to HTML5, a wide range of browsers are now supported, including:

  • Google Chrome v33 or later;
  • Internet Explorer v10 or later;
  • Mozilla Firefox v28 or later; and
  • Apple Safari v6 or later.

Here’s a screenshot of the new UI, and you can see that it’s a lot different to Navisphere and Unisphere.

Unity004_Unisphere

 

Conclusion

I’ve worked with EMC midrange gear for a long time now, and it forms the bread and butter of a number of the solutions I sell and work on day to day. While the VNX2 has at times looked a little long in the tooth, the Unity platform has (based on what I’ve been told so far) shaken off the rust and delivers a midrange array that feels a whole lot more modern than previous iterations of the EMC midrange. I’ll be interested to see how these things go in the field and am looking forward to putting them through their paces from a technical perspective. If you’re in the market for a new mid-range solution it wouldn’t hurt to talk to EMC about the Unity platform.

 

EMC – vVNX – A Brief Introduction

A few people have been asking me about EMC’s vVNX product, so I thought I’d share a few thoughts, feelings and facts. This isn’t comprehensive by any stretch, and the suitability of this product for use in your environment will depend on a whole shedload of factors, most of which I won’t be going into here. I do recommend you check out the “Introduction to the vVNX Community Edition” white paper as a starting point. Chad, as always, has a great post on the subject here.

 

Links

Firstly, here are some links that you will probably find useful:

When it comes time to license the product, you’ll need to visit this page.

vVNX_license

 

Hardware Requirements

A large number of “software-defined” products have hardware requirements, and the vVNX is no different. You’ll need to be running VMware vSphere 5.5 or later to get this running too. I haven’t tried this with Fusion yet.

Element                           Requirement
Hardware Processor                Xeon E5 Series Quad/Dual Core CPU, 64-bit x86 Intel, 2 GHz (or greater)
Hardware Memory                   16GB (minimum)
Hardware Network                  2×1 GbE or 2×10 GbE
Hardware RAID (for Server DAS)    Xeon E5 Series Quad/Dual Core CPU 64-bit x86 Intel 2 GHz (or greater)
Virtual Processor Cores           2 (2GHz+)
Virtual System Memory             12GB
Virtual Network Adapters          5 (2 ports for I/O, 1 for Unisphere, 1 for SSH, 1 for CMI)

There are a few things to note with the disk configuration. Obviously, the appliance sits on a disk subsystem attached to the ESXi host and comprises a number of VMDK files. EMC recommend provisioning those disks as “Thick Provision Eager Zeroed”. You also need to manually select the tier when you add disks to the pool, as the vVNX just sees a number of VMDKs. The available tiers will be familiar to VNX users – extreme performance, performance and capacity. These correspond to SSD, SAS and NL-SAS respectively.
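
As a rough illustration of that recommendation, here’s a sketch of adding an eager-zeroed thick VMDK to the appliance with pyVmomi. This isn’t anything from EMC’s documentation – the vCenter address, credentials and the VM name “vVNX-01” are placeholders, and you’d normally just do this through the vSphere client – but it shows the eagerlyScrub/thinProvisioned combination that corresponds to “Thick Provision Eager Zeroed”.

```python
# Rough sketch only: add an eager-zeroed thick disk to the vVNX VM via pyVmomi.
# The vCenter host, credentials, and the VM name "vVNX-01" are placeholders.
import atexit
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim


def find_vm(content, name):
    """Return the first VM whose name matches, or None."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    try:
        return next((vm for vm in view.view if vm.name == name), None)
    finally:
        view.Destroy()


def add_eager_zeroed_disk(vm, size_gb):
    """Attach a new thick provisioned, eager-zeroed VMDK to the VM's existing SCSI controller."""
    controller = next(d for d in vm.config.hardware.device
                      if isinstance(d, vim.vm.device.VirtualSCSIController))
    # Pick the next free unit number on that controller (unit 7 is reserved).
    used = {d.unitNumber for d in vm.config.hardware.device
            if getattr(d, "controllerKey", None) == controller.key}
    unit = next(u for u in range(16) if u != 7 and u not in used)

    backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo(
        diskMode="persistent",
        thinProvisioned=False,
        eagerlyScrub=True)          # this pair equates to "Thick Provision Eager Zeroed"
    disk = vim.vm.device.VirtualDisk(
        backing=backing,
        controllerKey=controller.key,
        unitNumber=unit,
        capacityInKB=size_gb * 1024 * 1024)
    spec = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
        fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
        device=disk)
    return vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[spec]))


si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ssl._create_unverified_context())  # lab use only
atexit.register(Disconnect, si)

vvnx = find_vm(si.RetrieveContent(), "vVNX-01")
task = add_eager_zeroed_disk(vvnx, 100)   # add a 100GB eager-zeroed VMDK
```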

 

Connectivity

The vVNX offers block connectivity via iSCSI, and file connectivity via Multiprotocol / SMB / NFS. No, there is no “passthrough FC” option as such. Let it go already.

 

Features

What’s pretty cool, in my opinion, is that the vVNX supports native asynchronous block replication with other vVNX systems as well as the VNXe3200. As well as this, vVNX systems have integrated deduplication and compression support for file-based storage (file systems and VMware NFS Datastores). Note that this is file-based, so it operates on whole files stored in a file system. The filesystem is scanned for files that have not been accessed in 15 days. Files can be excluded from deduplication and compression operations on either a file extension or path basis.

 

Big Brother

The VNXe3200 is ostensibly the vVNX’s big brother. EMC use the VNXe3200 as a comparison model when discussing vVNX capabilities but, as they point out in their introductory white paper, there are still a few differences.

                               VNXe3200                          vVNX
Maximum Drives                 150 (Dual SP)                     16 vDisks (Single SP)
Total System Memory            48 GB                             12 GB
Supported Drive Type           3.5”/2.5” SAS, NL-SAS, Flash      vDisk
Supported Protocols            SMB, NFS, iSCSI & FC              SMB, NFS, iSCSI
Embedded IO Ports per SP       4 x 10GbE                         2 x 1GbE or 2 x 10GbE
Backend Connectivity per SP    1 x 6 Gb/s x4 SAS                 vDisk
Max. Drive/vDisk Size          4TB                               2TB
Max. Total Capacity            500TB                             4TB
Max. Pool LUN Size             16TB                              4TB
Max. Pool LUNs Per System      500                               64
Max. Pools Per System          20                                10
Max. NAS Servers               32                                4
Max. File Systems              500                               32
Max. Snapshots Per System      1000                              128
Max. Replication Sessions      16                                256

There are a few other key differences as well, before you get too carried away with replacing all of your VNXe3200s (not that I think people will get too carried away with this). The following points are taken from the “Introduction to the vVNX Community Edition” white paper:

  • MCx – Multicore Cache on the vVNX is for read cache only. Multicore FAST Cache is not supported by the vVNX and Multicore RAID is not applicable as redundancy is provided via the backend storage.
  • FAST Suite – The FAST Suite is not available with the vVNX.
  • Replication – RecoverPoint integration is not supported by the vVNX.
  • Unisphere CLI – Some commands, such as those related to disks and storage pools, will be different in syntax for the vVNX than the VNXe3200. Features that are not available on the vVNX will not be accessible via Unisphere CLI.
  • High Availability – Because the vVNX is a single instance implementation, it does not have the high availability features seen on the VNXe3200.
  • Software Upgrades – System upgrades on a vVNX will force a reboot, taking the system offline in order to complete the upgrade.
  • Fibre Channel Protocol Support – The vVNX does not support Fibre Channel.

 

Conclusion

I get excited whenever a vendor offers up a virtualised version of their product, either as a glorified simulator, a lab tool, or a test bed. It has no doubt taken a lot of work from people inside EMC to convince those in charge to release this thing into the wild. I’m looking forward to doing some more testing with it and publishing some articles that cover what it can and can’t do.

EMC announces new VNXe

EMC World is just around the corner and, as is their wont, EMC are kicking off early with a few cheeky product announcements. I don’t have a lot to say about the VNXe, as I don’t do much in that space, but a lot of people might find this recent announcement of interest. If press releases aren’t your thing, here is a marketing slide you might enjoy instead.

vnxe3200_1

The cool thing about this is that the baby is getting the features of the bigger model, namely the FAST Suite, thin provisioning, file dedupe and MCx. Additionally, a processor speed improvement will help with the overall performance of the device. There’s a demo simulator you can check out here.

EMC also announced a new feature for VNX called D@RE, or Data-At-Rest-Encryption. This should be available as an NDU in Q3 2014. I hope to have more info on that in the future.

Finally, Project Liberty was announced. This is basically EMC’s virtualised VNX, and I’ll have more on that in the near future.

And if half-arsed blog posts aren’t your thing, I urge you to check out Jason Gaudreau’s post covering the same announcement. It’s a lot more coherent and useful.