VMware Cloud on AWS – Melbourne Region Added

VMware recently announced that VMware Cloud on AWS is now available in the AWS Asia-Pacific (Melbourne) Region. I thought I’d share some brief thoughts here along with a video I did with my colleague Satya.

 

What?

VMware Cloud on AWS is now available to consume in three Availability Zones (apse4-az1, apse4-az2, apse4-az3) in the Melbourne Region. From a host type perspective, you have the option to deploy either i3en.metal or i4i.metal hosts. There is also support for stretched clusters and PCI-DSS compliance if required. The full list of VMware Cloud on AWS Regions and Availability Zones is here.

 

Why Is This Interesting?

Since the launch of VMware Cloud on AWS, customers in Australia and New Zealand have had only one choice of Region – Sydney. This announcement gives organisations the ability to deploy architectures that benefit from both increased availability and resiliency by leveraging multi-regional capabilities.

Availability

VMware Cloud on AWS already offers platform availability at a number of levels, including a choice of Availability Zones, partition placement groups, and support for stretched clusters across two Availability Zones. There’s also support for VMware High Availability, as well as automatic remediation of failed hosts.

Resilience

In addition to the availability options customers can take advantage of, VMware Cloud on AWS also provides support for a number of resilience solutions, including VMware Cloud Disaster Recovery (VCDR) and VMware Site Recovery. Previously, customers in Australia and New Zealand were able to leverage these VMware (or third-party) solutions and deploy them across multiple Availability Zones. Typically, it would look like the diagram below, with workloads hosted in one Availability Zone, and a second Availability Zone being used as the recovery location for those production workloads.

With the introduction of a second Region in A/NZ, customers can now look to deploy resilience solutions that are more like this diagram:

In this example, they can choose to run production workloads in the Melbourne Region and recover workloads into the Sydney Region if something goes pear-shaped. Note that VCDR is not currently available to deploy in the Melbourne Region, although it’s expected to be made available before the end of 2023.

 

Why Else Should I Care?

Data Sovereignty 

There are a variety of legal, regulatory, and administrative obligations governing the access, use, security, and preservation of information within various government and commercial organisations in Victoria. These obligations are both national and state-based, and the Melbourne Region gives organisations in Victoria the opportunity to store data in VMware Cloud on AWS where that may not previously have been possible.

Data Locality

Not all applications and data reside in the same location. Many organisations have a mix of workloads residing on-premises and in the cloud. Some of these applications are latency-sensitive, and the launch of the Melbourne Region gives organisations the ability to host applications closer to that data, as well as access native AWS services with better responsiveness than they could from the Sydney Region.

 

How?

If you’re an existing VMware Cloud on AWS customer, head over to https://cloud.vmware.com and log in to the Cloud Services Console. Click on the VMware Cloud on AWS tile, click on Inventory, then click on Create SDDC.
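If you’d rather script the SDDC creation than click through the console, a rough PowerShell sketch against the VMware Cloud on AWS REST API is below. Treat it as illustrative only – the region and host type identifiers, payload field names, and values shown are assumptions, so check the current VMC API documentation before relying on any of it.

```powershell
# Illustrative sketch only - the identifiers and field names below are
# assumptions, not verified values; consult the current VMC API documentation.
$refreshToken = "<your CSP API token>"
$orgId        = "<your organisation ID>"

# Exchange the CSP refresh token for an access token
$auth = Invoke-RestMethod -Method Post `
    -Uri "https://console.cloud.vmware.com/csp/gateway/am/api/auth/api-tokens/authorize?refresh_token=$refreshToken"

$headers = @{ "csp-auth-token" = $auth.access_token }

# Request a three-host SDDC in the Melbourne Region
# (region and host type strings are assumed)
$body = @{
    name               = "melbourne-sddc-01"
    provider           = "AWS"
    region             = "AP_SOUTHEAST_4"
    num_hosts          = 3
    host_instance_type = "I4I_METAL"
} | ConvertTo-Json

Invoke-RestMethod -Method Post `
    -Uri "https://vmc.vmware.com/vmc/api/orgs/$orgId/sddcs" `
    -Headers $headers -ContentType "application/json" -Body $body
```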

 

Thoughts

Some of the folks in the US and Europe are probably wondering why on earth this is such a big deal for the Australian and New Zealand market. And plenty of folks in this part of the world are probably not that interested either. Not every organisation is going to benefit from or look to take advantage of the Melbourne Region. Many of them will continue to deploy workloads into one or two of the Sydney-based Availability Zones, with DR in another Availability Zone, and won’t need to do anything more. But for those organisations looking for resiliency across geographical regions, this is a great opportunity to do some really interesting stuff from a disaster recovery perspective. And while it seems delightfully antiquated to think that, in this global world we live in, some information can’t cross state lines, there are plenty of organisations in Victoria facing just that issue, and looking at ways to store that data in a sensible fashion close to home. Finally, we talk a lot about data having gravity, and this provides many organisations in Victoria with the ability to run workloads closer to that centre of data gravity.

If you’d like to hear me talking about this with my learned colleague Satya, you can check out the video here. Thanks to Satya for prompting me to do the recording, and for putting it all together. We’re aiming to do this more regularly on a variety of VMware-related topics, so keep an eye out.

VMware – Deploying vSphere Replication 5.8

As part of a recent vSphere 5.5 deployment, I installed a small vSphere Replication 5.8 proof-of-concept for the customer to trial site-to-site replication and get their minds around how they can do some simple DR activities. The appliance is fairly simple to deploy, so I thought I’d just provide a few links to articles that I found useful. Firstly, esxi-guy has a very useful soup-to-nuts post on the steps required to deploy a replication environment, and the steps to recover a VM. You can check it out here. Secondly, here’s a link to the official vSphere Replication documentation in PDF and eBook formats – just the sort of thing you’ll want to read while on the treadmill or sitting on the bus on the way home from the salt mines. Finally, if you’re working in an environment that has a number of firewalls in play, this list of ports you need to open is pretty handy.

One problem we did have was that we’d forgotten what the password was on the appliance we’d deployed at each site. I’m not the greatest cracker in any case, and so we agreed that re-deploying the appliance would be the simplest course of action. So I deleted the VM at each site and went through the “Deploy from OVF” thing again. The only thing of note was that it warned me I had previously deployed a vSphere Replication instance with that name and IP address, and that I should get rid of the stale version. I did that at each site, joined them together again, and was good to go. I’m now trying to convince the customer that SRM might be of some use to them too. But baby steps, right?

Note also that, if you want to deploy additional vSphere Replication VMs to assist with load-balancing in your environment, you need to use the vSphere_Replication_AddOn_OVF10.ovf file for the additional appliances.
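If you end up deploying a few of these, it might be easier to push the add-on appliance out with OVF Tool rather than clicking through the wizard each time. The sketch below is indicative only – the vCenter inventory path, datastore and network names are made up, and any OVF properties your environment requires will still need to be supplied.

```powershell
# Indicative sketch only - the vCenter inventory path, datastore and network
# names below are made up; adjust them (and add any required OVF properties)
# to suit your environment.
$ovftool = "C:\Program Files\VMware\VMware OVF Tool\ovftool.exe"

& $ovftool --acceptAllEulas --powerOn `
    --name="vr-appliance-02" `
    --datastore="DS-PROD-01" `
    --network="VM Network" `
    --diskMode=thin `
    "C:\Temp\vSphere_Replication_AddOn_OVF10.ovf" `
    "vi://administrator%40vsphere.local@vcenter01.example.local/DC01/host/Cluster01"
```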

VMware – SRM 5.8 – You had one job!

The Problem

A colleague of mine has been doing some data centre failover testing for a customer recently and ran into an issue with VMware’s Site Recovery Manager (SRM) 5.8 running on vSphere 5.5 U2. If you attempt to perform a recovery while running Linked Mode and the protected site is offline, the recovery may fail. The upshot of this is “The user is unable to perform a recovery at the recovery site, in the event of a DR scenario”. Here’s what it looks like.

SRM1

 

The Reason and Resolution

You can read more about the problem in this VMware KB article: Performing a Recovery using the Web Client in VMware vCenter Site Recovery Manager 5.8 reports the error: Failed to connect Site Recovery Manager Server(s). In short, there’s a PowerShell script you can run to make the recovery happen.

SRM0
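I won’t reproduce the script from the KB article here, but if you’re curious, kicking off a recovery plan through the SRM public API with PowerCLI looks roughly like the sketch below. It’s illustrative only – the server and plan names are made up, and you’d want to be very sure about the recovery mode you pass to Start() before running it in anger.

```powershell
# Rough sketch only - server and plan names are made up; this is not the
# script from the KB article.
Import-Module VMware.VimAutomation.Srm

# Connect to the recovery site vCenter and its SRM server
Connect-VIServer -Server "vcenter-dr.example.local" -Credential (Get-Credential)
$srm = Connect-SrmServer

# Find the recovery plan by name and start it (Test mode shown here;
# use Failover when you actually mean it)
$srmApi = $srm.ExtensionData
$plan   = $srmApi.Recovery.ListPlans() |
    Where-Object { $_.GetInfo().Name -eq "RP-Production" }

$plan.Start([VMware.VimAutomation.Srm.Views.SrmRecoveryPlanRecoveryMode]::Test)
```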

 

Conclusion

I don’t know what to say about this. I’d like to put the boot into whoever at VMware is responsible for this SNAFU, but I’m guessing that they’ve already had a hard time of it. At least, I guess, there’s a workaround, if not a fix. But you’d be a bit upset if this happened for the first time during a real failover. Then again, that’s why we test before we hand over. And what is it with everything going pear-shaped when Linked Mode is in use?

 

*Update – 29/10/2015*

Marcel van den Berg recently pointed out that updating to SRM 5.8.1 resolves this issue. Further detail can be found here.

EMC Announces VSPEX BLUE

EMC today announced their VSPEX BLUE offering and I thought I’d share some pictures and words from the briefing I received recently. PowerPoint presentations always look worse when I distil them down to a long series of dot points, so I’ll try and add some colour commentary along the way. Please note that I’m only going off EMC’s presentation, and haven’t had an opportunity to try the solution for myself. Nor do I know what the pricing is like. Get in touch with your local EMC representative or partner if you want to know more about that kind of thing.

EMC describe VSPEX BLUE as “an all-inclusive, Hyper-Converged Infrastructure Appliance, powered by VMware EVO:RAIL software”. Which seems like a nice thing to have in the DC. With VSPEX BLUE, the key EMC message is simplicity:

  • Simple to order – purchase with a single SKU
  • Simple to configure – through an automated, wizard driven interface
  • Simple to manage – with the new VSPEX BLUE Manager
  • Simple to scale – with automatic scale-out, where new appliances are automatically discovered and easily added to a cluster with a few mouse clicks
  • Simple to support – with EMC 24 x 7 Global support offering a single point of accountability for all hardware and software, including all VMware software

It also “eliminates the need for advanced infrastructure planning” by letting you “start with one 2U/4-node appliance and scale up to four”.

Awwww, this guy seems sad. Maybe he doesn’t believe that the hyperconverged unicorn warriors of the data centre are here to save us all from ourselves.

BLUE_man

I imagine the marketing line would be something like “IT is hard, but you don’t need to be blue with VSPEX BLUE”.

Foundation

VSPEX BLUE Software

  • VSPEX BLUE Manager extends hardware monitoring, and integrates with EMC Connect Home and online support facilities.
  • VSPEX BLUE Market offers value-add EMC software products included with VSPEX BLUE.

VMware EVO:RAIL Engine

  • Automates cluster deployment and configuration, as well as scale-out and non-disruptive updates
  • Simple design with a clean interface, pre-sized VM templates and single-click policies

Resilient Cluster Architecture

  • VSAN distributed datastore provides consistent and resilient fault tolerance
  • vMotion provides system availability during maintenance, and DRS load-balances workloads

Software-defined data center (SDDC) building block

  • Combines compute, storage, network and management resources into a single virtualized software stack with vSphere and VSAN

Hardware

While we live in a software-defined world, the hardware is still somewhat important. EMC is offering two basic configurations to keep ordering and buying simple. You getting it yet? It’s all very simple.

  • VSPEX BLUE Standard, which comes with 128GB of memory per node; or
  • VSPEX BLUE Performance, which comes with 192GB of memory per node.

Each configuration has a choice of a 1GbE copper or 10GbE fibre network interface. Here are some pretty pictures of what the chassis looks like, sans EMC bezel. Note the similarities with EMC’s ECS offering.

BLUE_front

BLUE_rear

Processors (per node)

  • Intel Ivy Bridge (up to 130W)
  • Dual processor

Memory (per node)

  • Four channels of Native DDR3 (1333)
  • Up to eight DDR3 ECC R-DIMMs per server node

Inputs/outputs (I/Os) (per node)

  • Dual GbE ports
  • Optional IB QDR/FDR or 10GbE integrated
  • 1 x8 PCIe Gen3 I/O mezzanine option (quad GbE or dual 10GbE)
  • 1 x16 PCIe Gen3 HBA slot
  • Integrated BMC with RMM4 support

Chassis

  • 2U chassis supporting four hot-swap nodes with half-width motherboards
  • 2 x 1200W (80+ & CS Platinum) redundant hot-swap PSUs
  • Dedicated cooling per node (no SPoF) – 3 x 40mm dual-rotor fans
  • Front panel with separate power control per node
  • 17.24” x 30.35” x 3.46”

Disk

  • Integrated 4-Port SATA/SAS controller (SW RAID)
  • Up to 16 (four per node) 2.5” HDDs

BLUE12

The VSPEX BLUE Standard configuration consists of four independent nodes, each with the following:

  • 2 x Intel Ivy Bridge E5-2620 v2 (12 cores, 2.1GHz)
  • 8 x 16GB (128GB) 1666MHz DIMMs
  • 3 x 1.2TB 2.5” 10K RPM SAS HDDs
  • 1 x 400GB 2.5” SAS SSD (VSAN cache)
  • 1 x 32GB SLC SATADOM (ESXi boot image)
  • 2 x 10GbE Base-T or SFP+

The Performance configuration differs from the Standard only in the amount of memory it contains – 192GB rather than 128GB – making it better suited to applications such as VDI.

VSPEX BLUE Manager

EMC had a number of design goals for the VSPEX BLUE Manager product, including:

  • A simplified support experience
  • Embedded ESRS/VE
  • Seamless integration with EVO:RAIL and its facilities
  • A management framework that allows EMC value-add software to be delivered as services
  • Extended management orchestration for other use cases
  • Enablement of the VSPEX partner ecosystem

Software Inventory Management

  • Displays installed software versions
  • Discovers and downloads software updates
  • Automates non-disruptive software upgrades

VB_market

Hardware Awareness

In my mind, this is the key bit of value-add that EMC offer with VSPEX BLUE – seeing what else is going on outside of EVO:RAIL.

  • Provides information not available in EVO:RAIL
  • Maps alerts to graphical representation of hardware configuration
  • Displays detailed configuration of hardware parts for field services
  • Aggregates health monitoring from vCenter and hardware BMC IPMI
  • Integrates with ESRS Connect Home for proactive notification and problem resolution
  • Integrates with eServices online support resources
  • Automatically collects diagnostic logs and integrates with vRealize Log Insight

RecoverPoint for VMs

I’m a bit of a fan of RecoverPoint for VMs. The VSPEX BLUE appliance includes an EMC RecoverPoint for VMs license covering 15 VMs, with support, at no additional cost. The version shipping with this solution also no longer requires storage external to VMware VSAN to store replica and journal volumes.

  • Protect VMs at VM-level granularity
  • Asynchronous and synchronous replication
  • Consistency group for application-consistent recovery
  • vCenter plug-in integration
  • Discovery, provisioning, and orchestration of DR workflow management
  • WAN compression and deduplication to optimize bandwidth utilization

Conclusion

One final thing to note – VMware ELAs are not supported. VSPEX BLUE is an all-inclusive SKU, so you can’t modify support options, licensing, etc. But the EVO:RAIL thing was never really a good option for people who want that kind of ability to tinker with configurations.

Based on the briefing I received, the on-paper specs, and the general thought that seems to have gone into the overall delivery of this product, it all looks pretty solid. I’ll be interested to see if any of my customers will be deploying this in the wild. If you’re hyperconverged-curious and want to look into this kind of thing then the EMC VSPEX BLUE may well be just the thing for you.

VMware – SRM advanced setting for snap prefix

We haven’t been doing this in our production configurations, but if you want to change the behaviour of SRM with regard to the “snap-xxx” prefix on replica datastores, you need to modify an advanced setting in SRM. So, in the vSphere client, go to SRM, right-click on Site Recovery and select Advanced Settings. Under SanProvider, there’s an option called “SanProvider.fixRecoveredDatastoreNames” with a little checkbox that needs to be ticked if you don’t want the recovered datastores to keep the unsightly prefix.

You can also do this when manually mounting snapshots or mirrors with the help of the esxcfg-volume command – but that’s a story for another time.