Disclaimer: I recently attended VMworld 2014 – SF. I paid for my own flights and accommodation; however, VMware provided me with a free pass to the conference and various bits of swag. There is no requirement for me to blog about any of the content presented, and I am not compensated in any way for my time at the event. Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.
STO3161 – What can virtual volumes do for you?

STO3161 was presented by:
- Matt Cowger (@mcowger), EMC
- Suzy Visvanathan, VMware – Product Manager VVOLs
There were two different tracks they wanted to cover:
- How will this be of benefit from a business perspective?
- What’s going on at the 201 technical level?
Suzy starts with the SDDC overview, with the goal of VVOLs being to transform storage by aligning it with application demands.

Today
- Create fixed-size, uniform LUNs
- Lack of granular control
- Complex provisioning cycles
- LUN-centric storage configurations
Today’s problems
- Extensive manual bookkeeping to match VMs to LUNs
- LUN-granularity hinders per-VM SLAs
- Overprovisioning
- Wasted resources, time, high costs
- Frequent data migrations
It’s not about VSAN or VVOLs; it’s about making the external array more feature-rich and giving it more control. Regardless of the storage you use, they want VMware to be the platform. Here’s a picture.

Suzy finishes by saying they’ve virtualised storage, but it’s not “slick” yet.
Now Matt explains the concept of Virtual Volumes.
“How many of you think LUNs suck? Only half? Are the rest of you using NFS?”

At a high-level:
- There’s no filesystem
- Managed through VASA APIs
- Arrays are partitioned into containers (storage containers)
- VM disks, called virtual volumes, are stored natively on the storage containers
- IO from ESX reaches the array through an access point called a Protocol Endpoint (PE) – this is like a Gatekeeper on a VMAX; it just processes commands. There’s one PE configured per array
- Data services are offloaded to the array
- All managed through storage policy-based management framework
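To make the moving parts above concrete, here's a minimal conceptual sketch of how they relate. All of the class names, capability keys, and the `place()` method are my own illustrative inventions for this post – they are not the actual VASA APIs or any vendor's implementation – but they capture the idea: containers advertise capabilities, per-VM disks are VVols, the PE is the single IO access point, and policy-based management matches a VM's policy against container capabilities.

```python
# Conceptual sketch only: class names and capability keys are illustrative,
# not the real VASA 2.0 API surface.

class StorageContainer:
    """A logical partition of the array that advertises a capability profile."""
    def __init__(self, name, capabilities):
        self.name = name
        self.capabilities = capabilities  # e.g. {"snapshots": True, "tier": "gold"}
        self.vvols = []

class VirtualVolume:
    """A single VM disk, stored natively on the array (no VMFS filesystem)."""
    def __init__(self, vm_name, disk_id):
        self.vm_name = vm_name
        self.disk_id = disk_id

class ProtocolEndpoint:
    """The per-array access point: routes commands to the right VVol."""
    def __init__(self, containers):
        self.containers = containers

    def place(self, vvol, policy):
        # Storage policy-based management in miniature: find a container
        # whose advertised capabilities satisfy the VM's policy.
        for c in self.containers:
            if all(c.capabilities.get(k) == v for k, v in policy.items()):
                c.vvols.append(vvol)
                return c.name
        raise RuntimeError("no compliant container for policy %s" % policy)

gold = StorageContainer("gold", {"snapshots": True, "tier": "gold"})
silver = StorageContainer("silver", {"snapshots": False, "tier": "silver"})
pe = ProtocolEndpoint([gold, silver])

placed = pe.place(VirtualVolume("web01", "disk-0"), {"tier": "gold"})
```

The point of the model: nothing here mentions NFS, FC, or iSCSI – placement is driven entirely by policy against capabilities, which is why the protocol becomes a minor implementation detail.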
VNXe 3200 is the first place you’re going to see this.

“NFS vs FC vs iSCSI is a minor implementation detail now”
Storage pools host VVOL containers, and you can look at the capability profiles of the various containers.
Because the array is completely aware of the VM, you can do cool stuff like offloading snapshots. Array-managed cloning is better than VAAI XCOPY.
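A rough sketch of why an array-managed clone can beat XCOPY. The function names and internals here are invented purely to illustrate the distinction: XCOPY offloads the copy from the host, but the array still moves every block, whereas a native VVol clone can be a metadata/pointer operation with no data movement at all.

```python
# Illustrative only: invented functions to contrast the two clone paths.
import time

def xcopy_clone(blocks):
    """VAAI XCOPY: the host is out of the data path, but the array
    still copies block by block."""
    copied = []
    for b in blocks:
        copied.append(b)  # every block still gets moved inside the array
    return copied

def native_clone(vvol_id):
    """Array-managed VVol clone: conceptually a metadata operation
    referencing the parent, with no block copying."""
    return {"parent": vvol_id, "created": time.time()}
```

Same end result – a clone – but the native path does work proportional to the metadata, not the data.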
What we really want to do is manage applications by service level objectives via policy-based automation. This is what VMAX3 is all about.
So where does ViPR fit? Isn’t that what you just showed us?
There are array-specific details (e.g. Gold on VMAX vs Gold on VNX), and these can differ on each array. That’s not ideal, though. ViPR provides a single point for the storage to talk to, a single point for vSphere / VASA to talk to, and a consistent view.
VNXe 3200 Virtual Volumes Beta starts in Q4 2014 (e-mail [email protected] for more information).
Note that there’s no support for SRM in the first version. They are working on per-VM replication. The arrays can replicate VVOLs though. RecoverPoint support for per-VM is coming too.
By making VVOLs work the same across all of the protocols, you get to be interested in what the storage arrays can do, not the protocols.
Hope that helps some. Matt and Suzy did a great job presenting. 4.5 stars.