EMC Announces VxRail

Yes, yes, I know it was a little while ago now. I’ve been occupied by other things and wanted to let the dust settle on the announcement before I covered it off here. And it was really a VCE announcement. But anyway. I’ve been doing work internally around all things hyperconverged and, as I work for a big EMC partner, people have been asking me about VxRail. So I thought I’d cover some of the more interesting bits.

So, let’s start with the reasonably useful summary links:

  • The VxRail datasheet (PDF) is here;
  • The VCE landing page for VxRail is here;
  • Chad’s take (worth the read!) can be found here; and
  • Simon from El Reg did a write-up here.


So what is it?

Well it’s a re-envisioning of VMware’s EVO:RAIL hyperconverged infrastructure in a way. But it’s a bit better than that, a bit more flexible, and potentially more cost effective. Here’s a box shot, because it’s what you want to see.

VxRail_002

Basically it’s a 2RU appliance housing 4 nodes. You can scale these nodes out in increments as required. There’s a range of hybrid configurations available.

VxRail_006

As well as some all flash versions.

VxRail_007

By default the initial configuration must be fully populated with 4 nodes, with the ability to scale up to 64 nodes (with qualification from VCE). Here are a few other notes on clusters:

  • You can’t mix All Flash and Hybrid nodes in the same cluster (this messes up performance);
  • All nodes within the cluster must have the same license type (Full License or BYO/ELA); and
  • First generation VSPEX BLUE appliances can be used in the same cluster with second generation appliances but EVC must be set to align with the G1 appliances for the whole cluster.
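
None of those rules are complicated, but to make them concrete, here’s a toy sketch of the kind of pre-expansion checks you might script before adding nodes. To be clear, the Node attributes and the helper function below are made up for illustration; they’re not part of any VxRail or vSphere API.

```python
# Illustrative only: a hand-rolled sketch of the cluster rules listed above,
# not a real VxRail or vSphere API. All attribute names here are invented.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    media: str        # "all-flash" or "hybrid"
    license: str      # "full" or "byo-ela"
    generation: int   # 1 = VSPEX BLUE (G1), 2 = VxRail (G2)

def validate_cluster(nodes):
    """Return a list of human-readable violations of the rules described above."""
    problems = []
    if len(nodes) < 4:
        problems.append("initial configuration must be fully populated with 4 nodes")
    if len(nodes) > 64:
        problems.append("64 nodes is the ceiling, and getting there needs VCE qualification")
    if len({n.media for n in nodes}) > 1:
        problems.append("can't mix All Flash and Hybrid nodes in the same cluster")
    if len({n.license for n in nodes}) > 1:
        problems.append("all nodes must have the same license type")
    if {n.generation for n in nodes} == {1, 2}:
        problems.append("mixed G1/G2 cluster: set EVC to match the G1 appliances")
    return problems

cluster = [Node(f"node{i}", "hybrid", "full", 2) for i in range(4)]
print(validate_cluster(cluster) or "cluster layout looks OK")
```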


On VMware Virtual SAN

I haven’t used VSAN/Virtual SAN enough in production to have really firm opinions on it, but I’ve always enjoyed tracking its progress in the marketplace. VMware claim that the use of Virtual SAN over other approaches has the following advantages:

  • No need to install Virtual Storage Appliances (VSA);
  • CPU utilization <10%;
  • No reserved memory required;
  • Provides the shortest path for I/O; and
  • Seamlessly handles VM migrations.

If that sounds a bit like some marketing stuff, it sort of is. But that doesn’t mean they’re necessarily wrong either. VMware state that the placement of Virtual SAN directly in the hypervisor kernel allows it to “be fast, highly efficient, and be able to scale with flash and modern CPU architectures”.

While I can’t comment on this one way or another, I’d like to point out that this appliance is really a VMware play. The focus here is on the benefit of using an established hypervisor (vSphere), an established management solution (vCenter), and a (soon-to-be) established software-defined storage solution (Virtual SAN). If you’re looking for the flexibility of multiple hypervisors, or want to incorporate other storage solutions, this really isn’t for you.


Further Reading and Final Thoughts

Enrico has a good write-up on El Reg about Virtual SAN 6.2 that I think is worth a look. You might also be keen to try something that’s NSX-ready. This is as close as you’ll get to that (although I can’t comment on the reality of one of those configurations). You’ve probably noticed there have been a tonne of pissing matches on the Twitters recently between VMware and Nutanix about their HCI offerings and the relative merits (or lack thereof) of their respective architectures. I’m not telling you to go one way or another. The HCI market is reasonably young, and I think there’s still plenty of change to come before the market has determined whether this really is the future of data centre infrastructure. In the meantime though, if you’re already slow-dancing with EMC or VCE and get all fluttery when people mention VMware, then the VxRail is worth a look if you’re HCI-curious but looking to stay with your current partner. It may not be for the adventurous amongst you, but you already know where to get your kicks. In any case, have a look at the datasheet and talk to your local EMC and VCE folk to see if this is the right choice for you.

EMC Announces VSPEX BLUE

EMC today announced their VSPEX BLUE offering and I thought I’d share some pictures and words from the briefing I received recently. PowerPoint presentations always look worse when I distil them down to a long series of dot points, so I’ll try and add some colour commentary along the way. Please note that I’m only going off EMC’s presentation, and haven’t had an opportunity to try the solution for myself. Nor do I know what the pricing is like. Get in touch with your local EMC representative or partner if you want to know more about that kind of thing.

EMC describe VSPEX BLUE as “an all-inclusive, Hyper-Converged Infrastructure Appliance, powered by VMware EVO:RAIL software”. Which seems like a nice thing to have in the DC. With VSPEX BLUE, the key EMC message is simplicity:

  • Simple to order – purchase with a single SKU
  • Simple to configure – through an automated, wizard driven interface
  • Simple to manage – with the new VSPEX BLUE Manager
  • Simple to scale – with automatic scale-out, where new appliances are automatically discovered and easily added to a cluster with a few mouse clicks
  • Simple to support – with EMC 24 x 7 Global support offering a single point of accountability for all hardware and software, including all VMware software

It also “eliminates the need for advanced infrastructure planning” by letting you “start with one 2U/4-node appliance and scale up to four”.

Awwww, this guy seems sad. Maybe he doesn’t believe that the hyperconverged unicorn warriors of the data centre are here to save us all from ourselves.

BLUE_man

I imagine the marketing line would be something like “IT is hard, but you don’t need to be blue with VSPEX BLUE”.

Foundation

VSPEX BLUE Software

  • VSPEX BLUE Manager extends hardware monitoring, and integrates with EMC Connect Home and online support facilities.
  • VSPEX BLUE Market offers value-add EMC software products included with VSPEX BLUE.

VMware EVO:RAIL Engine

  • Automates cluster deployment and configuration, as well as scale-out and non-disruptive updates
  • Simple design with a clean interface, pre-sized VM templates and single-click policies

Resilient Cluster Architecture

  • VSAN distributed datastore provides consistent and resilient fault tolerance
  • vMotion provides system availability during maintenance, and DRS load balances workloads

Software-defined data center (SDDC) building block

  • Combines compute, storage, network and management resources into a single virtualized software stack with vSphere and VSAN

Hardware

While we live in a software-defined world, the hardware is still somewhat important. EMC is offering 2 basic configurations to keep ordering and buying simple. You getting it yet? It’s all very simple.

  • VSPEX BLUE Standard, which comes with 128GB of memory per node; or
  • VSPEX BLUE Performance, which comes with 192GB of memory per node.

Each configuration has a choice of a 1GbE copper or 10GbE fibre network interface. Here’re some pretty pictures of what the chassis looks like, sans EMC bezel. Note the similarities with EMC’s ECS offering.

BLUE_front

BLUE_rear

Processors (per node)

  • Intel Ivy Bridge (up to 130W)
  • Dual processor

Memory/processors (per node)

  • Four channels of Native DDR3 (1333)
  • Up to eight DDR3 ECC R-DIMMS per server node

Inputs/outputs (I/Os) (per node)

  • Dual GbE ports
  • Optional IB QDR/FDR or 10GbE integrated
  • 1 x8 PCIe Gen3 I/O mezzanine option (Quad GbE or Dual 10GbE)
  • 1 x16 PCIe Gen3 HBA slot
  • Integrated BMC with RMM4 support

Chassis

  • 2U chassis supporting four hot-swap nodes with half-width motherboards
  • 2 x 1200W (80+ & CS Platinum) redundant hot-swap power supplies
  • Dedicated cooling per node (no SPoF) – 3 x 40mm dual-rotor fans
  • Front panel with separate power control per node
  • 17.24” x 30.35” x 3.46”

Disk

  • Integrated 4-Port SATA/SAS controller (SW RAID)
  • Up to 16 (four per node) 2.5” HDD

BLUE12

The VSPEX BLUE Standard configuration consists of four independent nodes, each with the following:

  • 2 x Intel Ivy Bridge E5-2620 v2 (12 cores, 2.1GHz)
  • 8 x 16GB (128GB) 1666MHz DIMMs
  • 3 x 1.2TB 2.5” 10K RPM SAS HDDs
  • 1 x 400GB 2.5” SAS SSD (VSAN cache)
  • 1 x 32GB SLC SATADOM (ESXi boot image)
  • 2 x 10GbE Base-T or SFP+

The Performance configuration differs from the Standard only in memory, going from 128GB to 192GB per node, which makes it better suited to memory-hungry workloads such as VDI.
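
For a rough sense of how capacity scales as you add Standard appliances, here’s a back-of-the-envelope calculation. The FTT=1 mirroring overhead is my assumption (it was the default VSAN storage policy), and the sketch ignores the cache SSDs, slack space and filesystem overheads, so treat the “usable” figure as indicative only.

```python
# Back-of-the-envelope VSAN capacity for VSPEX BLUE Standard appliances.
# Assumptions (mine, not EMC's): FTT=1 with RAID-1 mirroring (two copies of
# every object), and no allowance for slack space or metadata overheads.
HDD_PER_NODE = 3           # 3 x 1.2TB 10K SAS drives per node (capacity tier)
HDD_SIZE_TB = 1.2
NODES_PER_APPLIANCE = 4
MIRROR_COPIES = 2          # FTT=1 keeps two copies of each object

for appliances in range(1, 5):   # "start with one 2U/4-node appliance and scale up to four"
    raw_tb = appliances * NODES_PER_APPLIANCE * HDD_PER_NODE * HDD_SIZE_TB
    usable_tb = raw_tb / MIRROR_COPIES
    print(f"{appliances} appliance(s): ~{raw_tb:.1f}TB raw, ~{usable_tb:.1f}TB usable at FTT=1")
```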

VSPEX BLUE Manager

EMC had a number of design goals for the VSPEX BLUE Manager product, including:

  • A simplified support experience
  • Embedded ESRS/VE
  • Seamless integration with EVO:RAIL and its facilities
  • A management framework that allows EMC value-add software to be delivered as services
  • Extended management orchestration for other use cases
  • Enablement of the VSPEX partner ecosystem

Software Inventory Management

  • Displays installed software versions
  • Discovers and downloads software updates
  • Automates non-disruptive software upgrades

VB_market

Hardware Awareness

In my mind, this is the key bit of value-add that EMC offer with VSPEX BLUE – seeing what else is going on outside of EVO:RAIL.

  • Provides information not available in EVO:RAIL
  • Maps alerts to graphical representation of hardware configuration
  • Displays detailed configuration of hardware parts for field services
  • Aggregates health monitoring from vCenter and hardware BMC IPMI
  • Integrates with ESRS Connect Home for proactive notification and problem resolution
  • Integrates with eServices online support resources
  • Automatically collects diagnostic logs and integrates with vRealize Log Insight

RecoverPoint for VMs

I’m a bit of a fan of RecoverPoint for VMs. The VSPEX BLUE appliance includes an EMC RecoverPoint for VMs license covering 15 VMs, with support, at no additional cost. The version shipping with this solution also no longer requires storage external to VMware VSAN for replica and journal volumes.

  • Protect VMs at VM-level granularity
  • Asynchronous and synchronous replication
  • Consistency group for application-consistent recovery
  • vCenter plug-in integration
  • Discovery, provisioning, and orchestration of DR workflow management
  • WAN compression and deduplication to optimize bandwidth utilization

Conclusion

One final thing to note – VMware ELAs are not supported. VSPEX BLUE is an all-inclusive SKU, so you can’t modify support options, licensing and so on. Then again, EVO:RAIL was never really a good option for people who want to tinker with configurations.

Based on the briefing I received, the on-paper specs, and the general thought that seems to have gone into the overall delivery of this product, it all looks pretty solid. I’ll be interested to see if any of my customers will be deploying this in the wild. If you’re hyperconverged-curious and want to look into this kind of thing then the EMC VSPEX BLUE may well be just the thing for you.

EMC announces new VMAX range

VMAX3

Powerful, trusted, agile. That’s how EMC is positioning the refreshed range of VMAX arrays. Note that they used to be powerful, trusted and smart. Agile is the new smart. Or maybe agile isn’t smart? In any case, I’m thinking of it more as bigger, better, more. But I guess we’re getting to the same point. I sat in on a pre-announcement briefing recently and, while opinionalysis isn’t my strong point, I thought I’d cover off on some speeds and feeds and general highlights, and leave the rest to those who are good at that kind of thing. As always, if you want to know more about these announcements, the best place to start would be your local EMC account team.

There are three models: the 100K, 200K and 400K. The 100K supports

  • 1 – 2 engines;
  • 1440 2.5″ drives;
  • 2.4PB of storage; and
  • 64 ports.

The 200K supports

  • 1 – 4 engines;
  • 2880 2.5″ drives;
  • 4.8PB of storage; and
  • 128 ports.

Finally, the 400K supports

  • 1 – 8 engines;
  • 5760 2.5″ drives;
  • 9.6PB of storage; and
  • 256 ports.

*Note that the capacity figures and drive counts are based on code updates that are scheduled for release in 2015.
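
One thing worth noticing is that the published maximums scale linearly per engine, which makes them easy to remember. Here’s a quick sanity check using nothing but the figures above:

```python
# Sanity check: the published maximums for the three VMAX3 models all reduce
# to the same per-engine building block (figures taken from the lists above).
models = {
    "100K": {"engines": 2, "drives": 1440, "capacity_pb": 2.4, "ports": 64},
    "200K": {"engines": 4, "drives": 2880, "capacity_pb": 4.8, "ports": 128},
    "400K": {"engines": 8, "drives": 5760, "capacity_pb": 9.6, "ports": 256},
}

for name, m in models.items():
    e = m["engines"]
    print(f"{name}: {m['drives'] // e} drives, "
          f"{m['capacity_pb'] / e:.1f}PB and {m['ports'] // e} ports per engine")
```

Each model works out at 720 drives, 1.2PB and 32 ports per engine; the three really differ in how many engines you can bolt together.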

Hypermax Operating System is a significant enhancement to Enginuity, and is built to run not just data services inside the box, but services coming in from outside the box as well. This includes an embedded data storage hypervisor allowing you to run services that were traditionally run outside the frame, such as management consoles, file gateways, cloud gateways and data mobility services.

Dynamic Virtual Matrix is being introduced to leverage the higher number of cores in the new hardware models. In the largest 400K, there’ll be 384 CPU cores available to use. These can be dynamically allocated to front-end, back-end or data services. Core / CPU isolation is also an available capability.

While the new arrays might look like an ultra-dense 10K, they’re not: you can now have two engines and their drives in a single cabinet. All models support all-flash configurations. If money’s no object, you could scale to 4PB of flash in one frame.

The Virtual Matrix interconnect is now InfiniBand, while the back end is now SAS.

EMC claims base support for 6 * 9s of availability, and 7 * 9s availability with VPLEX (that’s 5 seconds per year of downtime).
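
If, like me, you prefer to translate “N nines” claims into something tangible, the conversion to downtime per year is simple arithmetic. The snippet below is just that general calculation, not anything EMC-specific.

```python
# Convert an availability figure ("N nines") into downtime per year.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def downtime_seconds_per_year(nines):
    """Seconds of allowable downtime per year for an availability of N nines."""
    availability = 1 - 10 ** (-nines)
    return SECONDS_PER_YEAR * (1 - availability)

for nines in (6, 7):
    print(f"{nines} nines: ~{downtime_seconds_per_year(nines):.1f} seconds of downtime per year")
# 6 nines: ~31.6 seconds; 7 nines: ~3.2 seconds. The quoted 5 seconds is
# presumably a generously rounded version of the seven-nines figure.
```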

Snapshotting has been refreshed, with SnapVX supporting up to 1024 copies per source; it doesn’t impact I/O, and doesn’t require target configuration.

Finally, read up on EMC ProtectPoint, it’ll be worth your time.

EMC announces Isilon enhancements

I sat in on a recent EMC briefing regarding some Isilon enhancements and I thought my three loyal readers might like to read through my notes. As I’ve stated before, I am literally one of the worst tech journalists on the internet, so if you’re after insight and deep analysis, you’re probably better off looking elsewhere. Let’s focus on skimming the surface instead, yeah? As always, if you want to know more about these announcements, the best place to start would be your local EMC account team.

Firstly, EMC have improved what I like to call the “Protocol Spider”, with support for the following new protocols:

  • SMB 3.0
  • HDFS 2.3*
  • OpenStack SWIFT*

* Note that this will be available by the end of the year.

Here’s a picture that says pretty much the same thing as the words above.

isilon_protocols

In addition to the OneFS updates, two new hardware models have also been announced.

S210

S210


  • Up to 13.8TB globally coherent cache in a single cluster (96GB RAM per node);
  • Dual Quad-Core Intel 2.4GHz Westmere Processors;
  • 24 * 2.5” 300GB or 600GB 10Krpm Serial Attached SCSI (SAS) 6Gb/s Drives; and
  • 10GbE (Copper & Fiber) Front-end Networking Interface.


Out with the old and in with the new.

S200vsS210_cropped

X410

X410


  • Up to 6.9TB globally coherent cache in a single cluster (48GB RAM per node);
  • Quad-Core Intel Nehalem E5504 Processor;
  • 12 * 3.5” 500GB, 1TB, 2TB, 3TB 7.2Krpm Serial ATA (SATA) Drives; and
  • 10GbE (Copper & Fiber) Front-end Networking Interface.
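
Incidentally, both of those “globally coherent cache in a single cluster” numbers are just the per-node RAM multiplied out to a maximum-size cluster. The 144-node ceiling below is my assumption based on the OneFS cluster limits of the day, not a figure from the briefing:

```python
# Where the "globally coherent cache" figures come from: per-node RAM scaled
# out to a maximum-size cluster. The 144-node limit is my assumption.
MAX_NODES = 144

for model, ram_gb in (("S210", 96), ("X410", 48)):
    total_tb = MAX_NODES * ram_gb / 1000
    print(f"{model}: {ram_gb}GB/node x {MAX_NODES} nodes ~ {total_tb:.1f}TB of cluster-wide cache")
# S210: ~13.8TB, X410: ~6.9TB, matching the headline figures above.
```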

Some of the key features include:

  • 50% more DRAM in baseline configuration than current 2U X-series platform;
  • Configurable memory (6GB to 48GB) per node to suit specific application & workflow needs;
  • 3x increase in density per RU, thus lowering power, cooling and footprint expenses;
  • Enterprise SSD support for latency sensitive namespace acceleration or file storage apps; and
  • Redesigned chassis that delivers superior cooling and vibration control.


Here’s a picture that does a mighty job of comparing the new model to the old one.

X400vsX410_cropped


Isilon SmartFlash

EMC also announced SmartFlash for Isilon, which uses SSDs in addition to DRAM to provide a much larger flash-based cache. The upshot is that you can have 1PB of flash versus 37TB of DRAM. It’s also globally coherent, unlike some of my tweets.

Here’s a picture.

Isilon_SmartFlash

EMC – BRS Announcements – Q3 2013

Disclaimer: As part of my participation in EMC Elect 2013, EMC sometimes provides me with access to product briefings before new product announcements are made. I don’t want to turn this blog into another avenue for EMC marketing, and EMC are not interested in that either. Nonetheless, I’ve had the opportunity via various channels to actually try some of this stuff and I thought it was worth putting up here. I’ll reiterate though, I haven’t had the chance to verify everything for myself. This is more a prompt for you to go and have a look for yourself.

So, EMC made a few announcements around its BRS line today and I thought some of the Data Domain stuff was noteworthy. Four new models were released; here’s a table of speeds and feeds. Keep in mind that these are numbers published by EMC, not verified by me. As always, your mileage might vary.

DD1

In any case, the DD2500 is the replacement for the DD640, the DD4200 replaces the DD670, the DD4500 replaces the DD860 and the DD7200 replaces the DD890. One of the cooler parts of this announcement, in my opinion, is the improved archive support. This is something we’ve been investigating internally as part of our “take the Centera out the back and shoot it” project. Here’s a screenshot of a marketing slide that includes a number of logos.

DD2

Other aspects of the announcement include EMC Avamar 7 and NetWorker 8.1. The Avamar NDMP Accelerator now supports backup for Isilon, in addition to VNX, VNXe, Celerra and NetApp systems. Being a tape user, I’m also mildly excited about DD Boost over Fibre Channel support in NetWorker 8.1, although I’ve not had the chance to try it in our lab yet, so I’ll restrain my enthusiasm until I’ve had time to test it out.

In any case, have a chat to your local EMC BRS team about this stuff if you think it might work for you. You can also read more about it on EMC Pulse and the Reflections blog. When I’ve had a chance to test DD Boost over FC I’ll post it up here.