EMC – vVNX – A Brief Introduction

A few people have been asking me about EMC’s vVNX product, so I thought I’d share a few thoughts, feelings and facts. This isn’t comprehensive by any stretch, and the suitability of this product for use in your environment will depend on a whole shedload of factors, most of which I won’t be going into here. I do recommend you check out the “Introduction to the vVNX Community Edition” white paper as a starting point. Chad, as always, has a great post on the subject here.


Links

Firstly, here are some links that you will probably find useful:

When it comes time to license the product, you’ll need to visit this page.


Hardware Requirements

A large number of “software-defined” products have hardware requirements, and the vVNX is no different. You’ll also need to be running VMware vSphere 5.5 or later. I haven’t tried this with Fusion yet.

Element | Requirement
Hardware Processor | Xeon E5 Series Quad/Dual Core CPU, 64-bit x86 Intel, 2 GHz (or greater)
Hardware Memory | 16GB (minimum)
Hardware Network | 2×1 GbE or 2×10 GbE
Hardware RAID (for Server DAS) | Xeon E5 Series Quad/Dual Core CPU, 64-bit x86 Intel, 2 GHz (or greater)
Virtual Processor Cores | 2 (2GHz+)
Virtual System Memory | 12GB
Virtual Network Adapters | 5 (2 ports for I/O, 1 for Unisphere, 1 for SSH, 1 for CMI)

There are a few things to note with the disk configuration. Obviously, the appliance sits on a disk subsystem attached to the ESXi host and is made up of a number of VMDK files. EMC recommends “Thick Provision Eager Zeroed” for the disk provisioning. You also need to manually select the tier when you add disks to the pool, as the vVNX just sees a number of VMDKs. The available tiers will be familiar to VNX users: extreme performance, performance and capacity, corresponding to SSD, SAS and NL-SAS respectively.
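
If you’re scripting the build of the appliance, a pyVmomi sketch along the following lines could add an eager zeroed thick vDisk to the vVNX VM before you present it to a pool. The vCenter address, credentials, VM name (“vVNX01”) and disk size are all placeholders for your environment; treat it as a rough starting point rather than a supported procedure.

```python
# Rough sketch only: adds a 100 GB eager zeroed thick disk to the vVNX appliance VM.
# vCenter details, VM name and disk size below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Find the appliance VM by name
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "vVNX01")

# Use the appliance's existing SCSI controller and pick a free unit number (7 is reserved)
controller = next(d for d in vm.config.hardware.device
                  if isinstance(d, vim.vm.device.VirtualSCSIController))
used = {d.unitNumber for d in vm.config.hardware.device
        if getattr(d, "controllerKey", None) == controller.key}
unit = next(u for u in range(16) if u != 7 and u not in used)

# Build the new disk: thick provisioned and eagerly zeroed, per EMC's recommendation
disk = vim.vm.device.VirtualDisk()
disk.backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
disk.backing.diskMode = "persistent"
disk.backing.thinProvisioned = False
disk.backing.eagerlyScrub = True
disk.capacityInKB = 100 * 1024 * 1024
disk.controllerKey = controller.key
disk.unitNumber = unit
disk.key = -101

disk_spec = vim.vm.device.VirtualDeviceSpec()
disk_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
disk_spec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
disk_spec.device = disk

task = vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[disk_spec]))
print("Reconfigure task started:", task.info.key)

Disconnect(si)
```

Once the new vDisks show up in Unisphere, you can add them to a pool and tag them with the appropriate tier.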


Connectivity

The vVNX offers block connectivity via iSCSI, and file connectivity via Multiprotocol / SMB / NFS. No, there is no “passthrough FC” option as such. Let it go already.
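
As a rough illustration of consuming the block side from vSphere, something like the following pyVmomi sketch would point an ESXi host’s software iSCSI adapter at the vVNX’s iSCSI interface and rescan. The host name, adapter name (vmhba33) and target address are assumptions for your environment.

```python
# Rough sketch only: adds the vVNX iSCSI interface as a dynamic discovery target
# on an ESXi host's software iSCSI adapter. Names and addresses are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Find the ESXi host that will consume the vVNX block storage
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi01.example.com")

# Add the vVNX iSCSI interface as a send target, then rescan so any presented LUNs show up
storage = host.configManager.storageSystem
target = vim.host.InternetScsiHba.SendTarget(address="192.168.50.10", port=3260)
storage.AddInternetScsiSendTargets(iScsiHbaDevice="vmhba33", targets=[target])
storage.RescanAllHba()

Disconnect(si)
```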


Features

What’s pretty cool, in my opinion, is that the vVNX supports native asynchronous block replication to other vVNX systems as well as to the VNXe3200. On top of this, vVNX systems have integrated deduplication and compression support for file-based storage (file systems and VMware NFS datastores). Note that this is file-based, so it operates on whole files stored in a file system: the file system is scanned for files that have not been accessed in 15 days, and those become candidates. Files can be excluded from deduplication and compression operations on either a file extension or path basis.
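
To make that selection policy a bit more concrete, here’s a toy Python sketch of the kind of logic described: anything not accessed in 15 days becomes a candidate, unless its extension or path is on an exclusion list. This is purely illustrative; it isn’t how the vVNX implements it internally, and the exclusion lists are made-up examples.

```python
# Toy illustration of the candidate-selection policy described above.
# The exclusion lists and mount point are hypothetical examples.
import os
import time

EXCLUDED_EXTENSIONS = {".iso", ".vswp"}
EXCLUDED_PATHS = {"/fs01/scratch"}
AGE_THRESHOLD = 15 * 24 * 60 * 60  # 15 days, in seconds

def dedupe_candidates(mount_point):
    """Yield files not accessed in 15 days that aren't excluded by extension or path."""
    now = time.time()
    for root, _dirs, files in os.walk(mount_point):
        if any(root.startswith(p) for p in EXCLUDED_PATHS):
            continue
        for name in files:
            path = os.path.join(root, name)
            if os.path.splitext(name)[1].lower() in EXCLUDED_EXTENSIONS:
                continue
            try:
                if now - os.stat(path).st_atime > AGE_THRESHOLD:
                    yield path
            except OSError:
                continue  # file vanished or is unreadable; skip it

if __name__ == "__main__":
    for candidate in dedupe_candidates("/fs01"):
        print(candidate)
```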


Big Brother

The VNXe3200 is ostensibly the vVNX’s big brother. EMC use the VNXe3200 as a comparison model when discussing vVNX capabilities, but, as they point out in the introductory white paper, there are still a few differences.

Item | VNXe3200 | vVNX
Maximum Drives | 150 (Dual SP) | 16 vDisks (Single SP)
Total System Memory | 48 GB | 12 GB
Supported Drive Type | 3.5”/2.5” SAS, NL-SAS, Flash | vDisk
Supported Protocols | SMB, NFS, iSCSI & FC | SMB, NFS, iSCSI
Embedded IO Ports per SP | 4 x 10GbE | 2 x 1GbE or 2 x 10GbE
Backend Connectivity per SP | 1 x 6 Gb/s x4 SAS | vDisk
Max. Drive/vDisk Size | 4TB | 2TB
Max. Total Capacity | 500TB | 4TB
Max. Pool LUN Size | 16TB | 4TB
Max. Pool LUNs Per System | 500 | 64
Max. Pools Per System | 20 | 10
Max. NAS Servers | 32 | 4
Max. File Systems | 500 | 32
Max. Snapshots Per System | 1000 | 128
Max. Replication Sessions | 256 | 16

Before you get too carried away with replacing all of your VNXe3200s (not that I think people will get too carried away with this), there are a few other key differences as well. The following points are taken from the “Introduction to the vVNX Community Edition” white paper:

  • MCx – Multicore Cache on the vVNX is for read cache only. Multicore FAST Cache is not supported by the vVNX and Multicore RAID is not applicable as redundancy is provided via the backend storage.
  • FAST Suite – The FAST Suite is not available with the vVNX.
  • Replication – RecoverPoint integration is not supported by the vVNX.
  • Unisphere CLI – Some commands, such as those related to disks and storage pools, will be different in syntax for the vVNX than the VNXe3200. Features that are not available on the vVNX will not be accessible via Unisphere CLI.
  • High Availability – Because the vVNX is a single instance implementation, it does not have the high availability features seen on the VNXe3200.
  • Software Upgrades – System upgrades on a vVNX will force a reboot, taking the system offline in order to complete the upgrade.
  • Fibre Channel Protocol Support – The vVNX does not support Fibre Channel.


Conclusion

I get excited whenever a vendor offers up a virtualised version of their product, whether as a glorified simulator, a lab tool, or a test bed. No doubt it took a lot of work from people inside EMC to convince those in charge to release this thing into the wild. I’m looking forward to doing some more testing with it and publishing some articles that cover what it can and can’t do.