Scale Computing Announces Support For Hybrid Storage and Other Good Things

[Image: Scale Computing logo]

If you’re unfamiliar with Scale Computing, they’re a hyperconverged infrastructure (HCI) vendor out of Indianapolis that delivers a solution aimed squarely at the small to mid-size market. They’ve been around since 2008, and launched their HC3 platform in 2012. They have around 1600 customers, and about 6000 units deployed in the field. Justin Warren provides a nice overview here as part of his research for Storage Field Day 5, while Trevor Pott wrote a comprehensive review for El Reg that you can read here. I was fortunate enough to get a briefing from Alan Conboy from Scale Computing and thought it worthy of putting pen to paper, so to speak.

 

So What is Scale Computing?

Scale describes the HC3 as a scale-out system. It has the following features:

  • 3 or more nodes – fully automated Active/Active architecture;
  • Clustered virtualization compute platform with no virtualization licensing (KVM-based, not VMware);
  • Protocol-less pooled storage resources that eliminate external storage requirements entirely – no SAN or VSA;
  • 60%+ efficiency gains built into the IO path – Scale made much of this in my briefing, and it certainly looks good on paper;
  • Cluster is self-healing and self-load-balancing – the nodes talk directly to each other; and
  • Scale’s State Machine technology makes the cluster self-aware with no need for external or out-of-band management servers. When you’ve done as many vSphere deployments as I have, this becomes very appealing.

You can read a bit more about how it all hangs together here. Here’s a simple diagram of how it looks from a networking perspective. Each node has four NICs, with two ports for the back-end and two for the front-end. You can read up on recommended network switches here.

[Image: HC3 networking overview]
Each node contains:

  • 8 to 40 vCores;
  • 32 to 512GB VM Memory;
  • Quad Network interface ports in 1GbE or 10GbE;
  • 4 or 8 spindles at 7.2K, 10K, or 15K RPM, with SSD as a tier.

Here’s an overview of the different models, along with list prices in $US. You can check out the specification sheet here.

[Image: HC3 node models and US list prices]

 

So What’s New?

Flash. Scale tell me “it’s not being used as a simple cache, but as a proper, fluid tier of storage to meet the needs of a growing and changing SMB to SME market”. There are some neat features that have been built into the interface, and I was able to test these during the briefing with Scale. In a nutshell, there’s a level of granularity that the IT generalist should be pleased with – you can do the following (there’s a rough sketch of the idea after the list):

  • Set different priorities for VMs on a per-virtual-disk basis;
  • Change these on the fly as needed;
  • Make use of SLC SSD as a storage tier, not just a cache; and
  • Keep unnecessary workloads off the SSD tier completely.
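To make the tiering idea a little more concrete, here’s a minimal sketch of how a per-virtual-disk priority might feed into a placement decision. To be clear, this is my own illustration and not Scale’s HEAT code – the 0-10 priority scale, the heat score and the thresholds are all assumptions.

```python
# Conceptual sketch only: not Scale's HEAT implementation, just an illustration
# of how a per-virtual-disk priority could influence tier placement.
from dataclasses import dataclass


@dataclass
class VirtualDisk:
    name: str
    priority: int      # hypothetical 0-10 value set by the admin; 0 = keep off SSD
    heat_score: float  # hypothetical 0.0-1.0 measure of recent IO activity


def place_on_ssd(vdisk: VirtualDisk, ssd_free_pct: float) -> bool:
    """Decide whether a hot region of this virtual disk should live on the SSD tier."""
    if vdisk.priority == 0:
        # The admin has excluded this workload from the flash tier entirely.
        return False
    # Weight observed IO heat by the admin-assigned priority, and demand more
    # heat before promoting data when the flash tier is getting full.
    weighted_heat = vdisk.heat_score * (vdisk.priority / 10)
    threshold = 0.5 if ssd_free_pct > 20 else 0.8
    return weighted_heat >= threshold


# A busy database disk gets promoted; a backup target never touches flash.
print(place_on_ssd(VirtualDisk("sql-data", priority=8, heat_score=0.7), ssd_free_pct=35))   # True
print(place_on_ssd(VirtualDisk("backup-01", priority=0, heat_score=0.9), ssd_free_pct=35))  # False
```

The point is simply that the admin-facing knob (a priority per virtual disk) and the observed IO activity both feed the decision, and a priority of zero keeps a workload off flash entirely.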

Scale is deploying its new HyperCore Enhanced Automated Tiering (HEAT) technology across the HC3 product line and is introducing a flash storage tier as part of its HC2150 and HC4150 appliances. Scale tell me the new appliances are “[a]vailable in 4- or 8-drive units”, and that “Scale’s latest offerings include one 400 or 800GB SSD with three NL-SAS HDD in 1-6TB capacities and memory up to 256GB, or two 400 or 800GB SSD with 6 NL-SAS HDD in 1-2TB capacities and up to 512 GB memory respectively. Network connectivity for either system is achieved through two 10GbE SFP+ ports per node”.

It’s also worth noting that the new products can be used to form new clusters, or they can be added to existing HC3 clusters. Existing workloads on those clusters will automatically utilize the new storage tier when the new nodes are added. You can read more on what’s new here.

 

Further Reading and Feelings

As someone who deals with reasonably complex infrastructure builds as part of my day job, it was refreshing to get a briefing from a company whose focus is on simplicity for a certain market segment, rather than trying to be the HCI vendor everyone goes to. I was really impressed with the intuitive nature of the interface, the simplicity with which tasks could be achieved, and the thought that’s gone into the architecture. The price, for what it offers, is very competitive as well, particularly in the face of more traditional compute + storage stacks aimed at SMEs. I’m working with Scale to get myself some more stick time in the near future and am looking forward to reporting back with the results.

EMC Announces VxRail

Yes, yes, I know it was a little while ago now. I’ve been occupied by other things and wanted to let the dust settle on the announcement before I covered it off here. And it was really a VCE announcement. But anyway. I’ve been doing work internally around all things hyperconverged and, as I work for a big EMC partner, people have been asking me about VxRail. So I thought I’d cover some of the more interesting bits.

So, let’s start with the reasonably useful summary links:

  • The VxRail datasheet (PDF) is here;
  • The VCE landing page for VxRail is here;
  • Chad’s take (worth the read!) can be found here; and
  • Simon from El Reg did a write-up here.

 

So what is it?

Well, it’s a re-envisioning of VMware’s EVO:RAIL hyperconverged infrastructure, in a way. But it’s a bit better than that, a bit more flexible, and potentially more cost-effective. Here’s a box shot, because it’s what you want to see.

[Image: VxRail appliance]

Basically it’s a 2RU appliance housing 4 nodes. You can scale these nodes out in increments as required. There’s a range of hybrid configurations available.

[Image: VxRail hybrid configurations]

As well as some all-flash versions.

[Image: VxRail all-flash configurations]

By default the initial configuration must be fully populated with 4 nodes, with the ability to scale up to 64 nodes (with qualification from VCE). Here are a few other notes on clusters (I’ve sketched these rules as code after the list):

  • You can’t mix All Flash and Hybrid nodes in the same cluster (this messes up performance);
  • All nodes within the cluster must have the same license type (Full License or BYO/ELA); and
  • First generation VSPEX BLUE appliances can be used in the same cluster as second generation appliances, but EVC must be set to align with the G1 appliances for the whole cluster.
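Because those rules are easy to trip over when adding nodes, here’s a hedged sketch of them expressed as a pre-flight check. It’s not a VCE or EMC tool – the Node fields and the validate_cluster function are entirely hypothetical – it just encodes the constraints listed above.

```python
# Hypothetical pre-flight check, not a VCE/EMC utility: it simply encodes the
# VxRail cluster rules described in the post. Field names are my own.
from dataclasses import dataclass
from typing import List


@dataclass
class Node:
    all_flash: bool      # True for All Flash, False for Hybrid
    license_type: str    # e.g. "full" or "byo_ela"
    generation: int      # 1 = VSPEX BLUE (G1), 2 = VxRail (G2)


def validate_cluster(nodes: List[Node]) -> List[str]:
    issues = []
    if len(nodes) < 4:
        issues.append("Initial configuration must be fully populated with 4 nodes.")
    if len(nodes) > 64:
        issues.append("64 nodes is the stated ceiling (and getting there needs VCE qualification).")
    if len({n.all_flash for n in nodes}) > 1:
        issues.append("All Flash and Hybrid nodes can't be mixed in the same cluster.")
    if len({n.license_type for n in nodes}) > 1:
        issues.append("All nodes must have the same license type (Full or BYO/ELA).")
    if {n.generation for n in nodes} == {1, 2}:
        issues.append("Mixed G1/G2 cluster: set EVC to align with the G1 appliances.")
    return issues


# Example: a four-node hybrid cluster that includes one first-generation appliance.
cluster = [Node(False, "full", 2)] * 3 + [Node(False, "full", 1)]
for issue in validate_cluster(cluster):
    print(issue)
```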

 

On VMware Virtual SAN

I haven’t used VSAN/Virtual SAN enough in production to have really firm opinions on it, but I’ve always enjoyed tracking its progress in the marketplace. VMware claim that the use of Virtual SAN over other approaches has the following advantages:

  • No need to install Virtual Storage Appliances (VSA);
  • CPU utilization <10%;
  • No reserved memory required;
  • Provides the shortest path for I/O; and
  • Seamlessly handles VM migrations.

If that sounds a bit like some marketing stuff, it sort of is. But that doesn’t mean they’re necessarily wrong either. VMware state that the placement of Virtual SAN directly in the hypervisor kernel allows it to “be fast, highly efficient, and be able to scale with flash and modern CPU architectures”.

While I can’t comment on this one way or another, I’d like to point out that this appliance is really a VMware play. The focus here is on the benefit of using an established hypervisor (vSphere), an established management solution (vCenter), and a (soon-to-be) established software-defined storage solution (Virtual SAN). If you’re looking for the flexibility of multiple hypervisors, or to incorporate other storage solutions, this really isn’t for you.

 

Further Reading and Final Thoughts

Enrico has a good write-up on El Reg about Virtual SAN 6.2 that I think is worth a look. You might also be keen to try something that’s NSX-ready. This is as close as you’ll get to that (although I can’t comment on the reality of one of those configurations). You’ve probably noticed there have been a tonne of pissing matches on the Twitters recently between VMware and Nutanix about their HCI offerings and the relative merits (or lack thereof) of their respective architectures. I’m not telling you to go one way or another. The HCI market is reasonably young, and I think there’s still plenty of change to come before the market has determined whether this really is the future of data centre infrastructure. In the meantime though, if you’re already slow-dancing with EMC or VCE and get all fluttery when people mention VMware, then the VxRail is worth a look if you’re HCI-curious but looking to stay with your current partner. It may not be for the adventurous amongst you, but you already know where to get your kicks. In any case, have a look at the datasheet and talk to your local EMC and VCE folk to see if this is the right choice for you.

‘Building a Modern Data Center’ Now Available

In my post on the Atlantis CX-4 announcement last week I mentioned that ActualTech Media would be releasing a new book in conjunction with Atlantis Computing – “Building a Modern Data Center: Principles and Strategies of Design”. The book is now available for download here and I highly recommend you check it out. If you have anything to do with data centres then this is an invaluable resource that covers a bunch of different aspects, not just the marketecture of hyperconvergence.  I’ve said on the record that it’s a ripping yarn, and there are a number of people who agree. A Kindle version is available here for US $2.99, with print copies (US $9.99) available from Amazon next month.  ActualTech Media are also running a webinar on February 2 that I’d recommend checking out if you have the time.

Atlantis Computing Announces HyperScale CX-4 and Dell Partnership

It’s been a little while since I talked about Atlantis Computing and things have developed a bit since then. Amongst other things, they’ve added a bunch of new features to USX.

I was recently lucky enough to have the opportunity to be briefed on their latest developments by Priyadarshi Prasad, Senior Director of Product Management at Atlantis Computing.

 

HyperScale CX-4

Atlantis Computing recently announced a new addition to their HyperScale range of products – the CX-4. If you’re familiar with the existing HyperScale line-up, you’ll realise that this is aimed at the smaller end of the market. Atlantis have stated that “[t]he CX-4 appliance is a two-node hyperconverged integrated system with compute, all-flash storage, networking and virtualisation designed for remote offices, branch offices (ROBO) and “micro” data centres”.

[Image: Atlantis HyperScale appliance (box shot)]

Atlantis Computing have previously leveraged Cisco, HP, Lenovo and SuperMicro for their hardware offerings and this has continued with the CX-4. The SuperMicro specs are as follows:

[Image: Atlantis HyperScale CX-4 (SuperMicro) specifications]

 

Dell FX2

Atlantis also let me know that “Dell is teaming with Atlantis to provide the entire line of Atlantis HyperScale all-flash hyperconverged appliances on their PowerEdge FX2 platform. Atlantis HyperScale CX-4, CX-12 and CX-24 appliances are now available on Dell servers through Dell distributors and channel partners in the U.S., Europe and Middle East, shipped directly to customers”. Here’s an artist’s interpretation of the FX2.

[Image: Dell PowerEdge FX2 chassis, front view]

As far as the CX-4 goes, the Dell differences are as follows:

  • Form factor – 2U 2N or 2U 4N
  • Memory per Node – 256GB – 768GB
  • Redundant Integrated 10GbE switch

 

Resiliency

Resiliency for the cluster comes by way of a mirror relationship between the two nodes in the CX-4 appliance. Atlantis also provides the ability to define an external tie-breaker virtual machine (VM). In keeping with the ROBO theme, this can be run at a central site, and multiple data centres / appliances can use the same tie-breaker VM. There is also high availability logic in the CX-4 system itself.

The tie-breaker is ostensibly there to keep in contact with the nodes and understand whether they’re up or not. In the event of a split-brain scenario, there is a fight for the tie-breaker (a single token). But what happens if the tie-breaker VM is unavailable (e.g. the WAN link is down)? For that case there’s also an internal tie-breaker operating between the nodes, handled by a service VM on each node. The general idea looks something like the sketch below.
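Here’s a very rough sketch of the two-node-plus-witness idea, based purely on the behaviour described above. It isn’t Atlantis code – the function and its inputs are my own simplification – but it shows why a single token matters when the nodes can’t see each other.

```python
# Generic illustration of two-node-plus-witness quorum; not Atlantis's actual logic.
def may_continue_serving(peer_reachable: bool,
                         external_tiebreaker_grants_token: bool,
                         internal_tiebreaker_grants_token: bool) -> bool:
    """Should this node keep serving IO from its copy of the mirror?"""
    if peer_reachable:
        # Both nodes can see each other: normal mirrored operation, no token needed.
        return True
    # The peer is unreachable, so this could be a split-brain. Only the node that
    # wins the (single) token - from the external witness VM, or from the internal
    # tie-breaker if the WAN is down - may keep writing, so at most one side survives.
    return external_tiebreaker_grants_token or internal_tiebreaker_grants_token


# Example: the WAN to the central witness is down, but this node won the internal token.
print(may_continue_serving(peer_reachable=False,
                           external_tiebreaker_grants_token=False,
                           internal_tiebreaker_grants_token=True))  # True
```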

[Image: Atlantis tie-breaker (witness) architecture]

 

Simplicity and Scale

One of the key focus areas for Atlantis has been simplicity, and they’ve gone to great lengths to build a solution and supporting framework that ensures the deployment, operation and support of these appliances is simple. There’s a single point of support (Atlantis), network connectivity is straightforward, you can have IP configuration done at the factory, and everything can be managed either centrally via USX Manager or individually if required.

The CX-4 can be used as a stepping stone to the CX-12 if you like, simply by adding another CX-4 (2 nodes), or you can continue to scale out, depending on your particular use case.

 

Further Reading and Final Thoughts

Atlantis also recently commissioned a survey that was conducted by Scott D. Lowe at ActualTech Media. You can read the results of “From the Field: Software Defined Storage and Hyperconverged Infrastructure in 2016” here. It provides an interesting insight into what is happening out there in the big, bad world at the moment, and is definitely worth a read. Scott, along with David M. Davis and James Green, has also written a book – “Building a Modern Data Center – Principles and Strategies of Design”. You can reserve your copy here. While I’m linking to articles of interest, this white paper from DeepStorage.net on the Atlantis USX solution is worth a look (registration required).

Firstly, I really like the focus by Atlantis on simplicity, particularly if you’re looking to deploy these things in a fairly remote location.

Secondly, the built-in resiliency of the solution allows for operational efficiencies (you don’t have to get someone straight out to the site in the event of a node failure). I also like the fact that you can use these as a starting point for a HCI deployment, without a significant up-front investment. Finally, the use of all-flash helps with power and cooling, which can be a real problem in remote sites that don’t have high quality data centre infrastructure options available.

I’ve been impressed with Atlantis in the discussions I’ve had with them, and I like the look of what they’ve done with the CX-4. It strikes me that they’ve thought about a number of different scenarios and use cases, and they’ve also thought about working with customers beyond the purchase of the first appliance. Given the street price of these things, it would be worthwhile investigating further if you’re in the market for a hyperconverged solution.

Atlantis Computing – Product Announcement

I haven’t previously talked a lot about Atlantis Computing but I have a friend who joined the company a while ago and he’s been quite enthusiastic about what they’re doing in the SDS / hyperwhat space, so I figured it was worth checking out.

 

Overview

You may have heard that Atlantis Computing recently announced the availability of Atlantis HyperScale appliances. In a nutshell, this is a software-defined storage solution on a choice of server hardware from HP, Cisco, Lenovo or SuperMicro using a hypervisor from VMware or Citrix. It sure does look pretty.

 

[Image: Atlantis HyperScale appliance]

Atlantis says it’s an all-flash, hyper-converged, turnkey hardware appliance. If that seems like a mouthful, it is. It’s also built on the Atlantis USX platform with end-to-end support provided by Atlantis. If you’re not familiar with USX, I encourage you to check out some details on it here. In short, it:

  • Is 100% software-defined storage;
  • Is an enterprise-class storage platform;
  • Pools SAN, NAS, DAS, and flash; and
  • Runs on any server, any storage (within some very specific limits).

 

HyperScale and Hardware-defined Software

Here’s a pretty snazzy shot of the features in HyperScale. There’s a nice overview of the available data services and REST capability.

[Image: Atlantis HyperScale features, data services and REST capability]

The cool thing is you can run this on a choice of hardware vendors as well, including Cisco, HP, Lenovo and SuperMicro.

  • Fixed configurations of 12TB and 24TB;
  • Turnkey 4-node appliances; and
  • Supported by Atlantis (24x7x365 with 4-hour response).

[Image: Atlantis HyperScale appliance configurations]

The CX-12 and CX-24 have the following specs (depending on the vendor you choose):

[Image: Atlantis HyperScale CX-12 and CX-24 specifications]

Some of the models of servers cited in the briefing included (everything is 4 nodes):

  • Cisco UCS C220 M4;
  • Lenovo x3550 M5;
  • SuperMicro TwinPro; and
  • HP DL360 Gen9.

This is not an exhaustive list, but gives you an idea of the type of appliance hardware in play.

 

Final Thoughts

Atlantis are very excited about some of the potential TCO benefits to be had in comparison with Nutanix, SimpliVity and VMware’s EVO:RAIL. I’m not going to go into numbers here, but I think it would be worth your while, if you’re in the market for a Hyper solution, to have a conversation with these guys.

Storage Field Day 7 – Day 3 – Maxta

Disclaimer: I recently attended Storage Field Day 7.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

For each of the presentations I attended at SFD7, there are a few things I want to include in the post. Firstly, you can see video footage of the Maxta presentation here. You can also download my raw notes from the presentation here. Finally, here’s a link to the Maxta website that covers some of what they presented.

 

Company Overview

Yoram Novick, CEO of Maxta, took us through a little of the company’s history and an overview of the products they offer.

Founded in 2009, Maxta “maximises the promise of hyper-convergence” through:

  • Choice;
  • Simplicity;
  • Scalability; and
  • Cost.

They currently offer a buzzword-compliant storage platform via their MxSP product, while producing hyper-converged appliances via the MaxDeploy platform. They’re funded by Andreessen Horowitz, Intel Capital, and Tenaya Capital amongst others and are seeking to “[a]lign the storage construct with the abstraction layer”. They do this through:

  • Dramatically simplified management;
  • “World class” VM-level data services;
  • Eliminating storage arrays and storage networking; and
  • Leveraging flash / disk and capacity optimization.

 

Solutions

MaxDeploy is Maxta’s Hyper-Converged Appliance, running on a combination of preconfigured servers and Maxta software. Maxta suggest you can go from zero to running VMs in 15 minutes. They offer peace of mind through:

  • Interoperability;
  • Ease of ordering and deployment; and
  • Predictability of performance.

MxSP is Maxta’s Software-Defined Storage product. Not surprisingly, it is software only, and offered via a perpetual license or via subscription. Like a number of SDS products, the benefits are as follows:

  • Flexibility
    • DIY – your choice in hardware
    • Works with existing infrastructure – no forklift upgrades
  • Full-featured
    • Enterprise class data services
    • Support latest and greatest technologies
  • Customised configuration for users
    • Major server vendors supported
    • Proposed configuration validated
    • Fulfilled by partners

 

Architecture

[Image: Maxta MaxDeploy architecture]

The Maxta Architecture is built around the following key features:

Data Services

  • Data integrity
  • Data protection / snapshots / clones
  • High availability
  • Capacity optimisation (thin / deduplication / compression)
  • Linear scalability

Simplicity

  • VM-centric
  • Tight integration with orchestration software / tools
  • Policy based management
  • Multi-hypervisor support (VMware, KVM, OpenStack integration)

What’s the value proposition?

  • Maximise choice – any server, hypervisor, storage, workload
  • Maximise IT simplicity – manage VMs, not storage
  • Maximise Cost Savings – standard components and capacity optimisation
  • Provide high levels of data resiliency, availability and protection

I get the impression that Maxta thought a bit about data layout, with the following points being critical to the story (a rough sketch of the placement idea follows the list):

  • Cluster-wide capacity balancing
  • Favours placement of new data on new / under-utilised disks / nodes
  • Periodic rebalancing across disks / nodes
  • Proactive data relocation
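As an illustration of the “favour under-utilised disks/nodes” point, here’s a toy placement function. It isn’t Maxta’s algorithm – the node names and the replica count are assumptions – it just shows the shape of capacity-aware placement, with periodic rebalancing doing the same kind of ranking on existing data.

```python
# Illustrative only: a toy version of "favour placement of new data on
# under-utilised nodes", not Maxta's actual placement or rebalancing logic.
def pick_nodes_for_new_data(capacity_used: dict, replicas: int = 2) -> list:
    """Return the least-utilised nodes to hold the replicas of a new chunk of data.

    capacity_used maps node name -> fraction of capacity in use (0.0 to 1.0).
    """
    ranked = sorted(capacity_used, key=capacity_used.get)
    return ranked[:replicas]


cluster = {"node-a": 0.72, "node-b": 0.31, "node-c": 0.55, "node-d": 0.18}
print(pick_nodes_for_new_data(cluster))  # ['node-d', 'node-b']
```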

 

Closing Thoughts and Further Reading

I like Maxta’s story. I like the two-pronged approach they’ve taken with their product set, and appreciate the level of thought they’ve put into their architecture. I have no idea how much this stuff costs, so I can’t say whether it represents good value or not, but on the basis of the presentation I saw I certainly think they’re worth looking at if you’re looking to get into either mega-converged appliances or buzzword-storage platforms. You should also check out Keith’s preview blog post on Maxta here, while Cormac did a great write-up late last year that is also well worth a read.

 

EMC Announces VSPEX BLUE

EMC today announced their VSPEX BLUE offering and I thought I’d share some pictures and words from the briefing I received recently. PowerPoint presentations always look worse when I distil them down to a long series of dot points, so I’ll try and add some colour commentary along the way. Please note that I’m only going off EMC’s presentation, and haven’t had an opportunity to try the solution for myself. Nor do I know what the pricing is like. Get in touch with your local EMC representative or partner if you want to know more about that kind of thing.

EMC describe VSPEX BLUE as “an all-inclusive, Hyper-Converged Infrastructure Appliance, powered by VMware EVO:RAIL software”. Which seems like a nice thing to have in the DC. With VSPEX BLUE, the key EMC message is simplicity:

  • Simple to order – purchase with a single SKU
  • Simple to configure – through an automated, wizard driven interface
  • Simple to manage – with the new VSPEX BLUE Manager
  • Simple to scale – with automatic scale-out, where new appliances are automatically discovered and easily added to a cluster with a few mouse clicks
  • Simple to support – with EMC 24x7 global support offering a single point of accountability for all hardware and software, including all VMware software

It also “eliminates the need for advanced infrastructure planning” by letting you “start with one 2U/4-node appliance and scale up to four”.

Awwww, this guy seems sad. Maybe he doesn’t believe that the hyperconverged unicorn warriors of the data centre are here to save us all from ourselves.

[Image: the sad-looking man from EMC’s VSPEX BLUE marketing]

I imagine the marketing line would be something like “IT is hard, but you don’t need to be blue with VSPEX BLUE”.

Foundation

VSPEX BLUE Software

  • VSPEX BLUE Manager extends hardware monitoring, and integrates with EMC Connect Home and online support facilities.
  • VSPEX BLUE Market offers value-add EMC software products included with VSPEX BLUE.

VMware EVO:RAIL Engine

  • Automates cluster deployment and configuration, as well as scale-out and non-disruptive updates
  • Simple design with a clean interface, pre-sized VM templates and single-click policies

Resilient Cluster Architecture

  • VSAN distributed datastore provides consistent and resilient fault tolerance
  • vMotion provides system availability during maintenance, and DRS load balances workloads

Software-defined data center (SDDC) building block

  • Combines compute, storage, network and management resources into a single virtualized software stack with vSphere and VSAN

Hardware

While we live in a software-defined world, the hardware is still somewhat important. EMC is offering 2 basic configurations to keep ordering and buying simple. You getting it yet? It’s all very simple.

  • VSPEX BLUE Standard, which comes with 128GB of memory per node; or
  • VSPEX BLUE Performance, which comes with 192GB of memory per node.

Each configuration has a choice of a 1GbE copper or 10GbE fibre network interface. Here are some pretty pictures of what the chassis looks like, sans EMC bezel. Note the similarities with EMC’s ECS offering.

[Image: VSPEX BLUE chassis, front]

[Image: VSPEX BLUE chassis, rear]

Processors (per node)

  • Intel Ivy Bridge (up to 130W)
  • Dual processor

Memory/processors (per node)

  • Four channels of Native DDR3 (1333)
  • Up to eight DDR3 ECC R-DIMMS per server node

Inputs/outputs (I/Os) (per node)

  • Dual GbE ports
  • Optional IB QDR/FDR or 10GbE integrated
  • 1 x8 PCIe Gen3 I/O mezzanine option (quad GbE or dual 10GbE)
  • 1 x16 PCIe Gen3 HBA slot
  • Integrated BMC with RMM4 support

Chassis

  • 2U chassis supporting four hot swap nodes with half-width MBs
  • 2 x 1200W (80+ & CS Platinum) redundant hot-swap power supplies
  • Dedicated cooling/node (no SPoF) – 3 x 40mm dual rotor fans
  • Front panel with separate power control per node
  • 17.24” x 30.35” x 3.46”

Disk

  • Integrated 4-Port SATA/SAS controller (SW RAID)
  • Up to 16 (four per node) 2.5” HDD

[Image: VSPEX BLUE node hardware]

The VSPEX BLUE Standard configuration consists of four independent nodes, each with the following:

  • 2 x Intel Ivy Bridge E5-2620 v2 (12 cores total, 2.1GHz)
  • 8 x 16GB (128GB) 1666MHz DIMMs
  • 3 x 1.2TB 2.5” 10K RPM SAS HDDs
  • 1 x 400GB 2.5” SAS SSD (VSAN Cache)
  • 1 x 32GB SLC SATA DOM (ESXi Boot Image)
  • 2 x 10GbE BaseT or SFP+

The Performance configuration differs from the Standard only in the amount of memory it contains (192GB rather than 128GB per node), making it better suited to applications such as VDI.

VSPEX BLUE Manager

EMC had a number of design goals for the VSPEX BLUE Manager product, including:

  • A simplified support experience
  • Embedded ESRS/VE
  • Seamless integration with EVO:RAIL and its facilities
  • A management framework that allows EMC value-add software to be delivered as services
  • Extended management orchestration for other use cases
  • Enablement of the VSPEX partner ecosystem

Software Inventory Management

  • Displays installed software versions
  • Discovers and downloads software updates
  • Automated, non-disruptive software upgrades

[Image: VSPEX BLUE Market]

Hardware Awareness

In my mind, this is the key bit of value-add that EMC offer with VSPEX BLUE – seeing what else is going on outside of EVO:RAIL.

  • Provides information not available in EVO:RAIL
  • Maps alerts to graphical representation of hardware configuration
  • Displays detailed configuration of hardware parts for field services
  • Aggregates health monitoring from vCenter and hardware BMC IPMI
  • Integrates with ESRS Connect Home for proactive notification and problem resolution
  • Integrates with eServices online support resources
  • Automatically collects diagnostic logs and integrates with vRealize Log Insight

RecoverPoint for VMs

I’m a bit of a fan of RecoverPoint for VMs. The VSPEX BLUE appliance includes an EMC RecoverPoint for VMs license covering 15 VMs, with support, at no extra cost. The version shipping with this solution also no longer requires storage external to VMware VSAN to store replica and journal volumes.

  • Protect VMs at VM-level granularity
  • Asynchronous and synchronous replication
  • Consistency group for application-consistent recovery
  • vCenter plug-in integration
  • Discovery, provisioning, and orchestration of DR workflow management
  • WAN compression and deduplication to optimize bandwidth utilization

Conclusion

One final thing to note – VMware ELAs are not supported. VSPEX BLUE is an all-inclusive SKU, so you can’t modify support options, licensing, etc. But the EVO:RAIL thing was never really a good option for people who want that kind of ability to tinker with configurations.

Based on the briefing I received, the on-paper specs, and the general thought that seems to have gone into the overall delivery of this product, it all looks pretty solid. I’ll be interested to see if any of my customers will be deploying this in the wild. If you’re hyperconverged-curious and want to look into this kind of thing then the EMC VSPEX BLUE may well be just the thing for you.

Brisbane VMUG – February 2015

The first Brisbane VMUG of the year will be held on Thursday 5th February at the Alliance Hotel in Spring Hill (The Leichardt Room) from 4 – 6 pm. It’s sponsored by SimpliVity.

Here’s the agenda: Cloud Economics with Enterprise Capability. Guest speaker: Chris Troiani, Solutions Architect APJ, SimpliVity.

  • Overview of hyperconverged market.
  • Consideration of Hyperconverged Architectures – Is your architecture a fit?
  • Sizing best practices for transitioning from Converged to Hyperconverged Infrastructure
  • REAL Case Studies: Analysis and Design Rationale
  • Live Demo – Admin / Operational overview
  • Q&A and networking, with food and beer tasting.

You can register here. Hope to see you there.