Datrium Announces CloudShift

I recently had the opportunity to speak to Datrium’s Brian Biles and Craig Nunes about their CloudShift announcement and thought it was worth covering some of the highlights here.

 

DVX Now

Datrium have had a scalable protection tier and a focus on performance since their inception.

[image courtesy of Datrium]

The “mobility tier”, in the form of Cloud DVX, has been around for a little while now. It’s simple to consume (via SaaS), yields decent deduplication results, and the Datrium team tells me it also delivers fast RTO. There’s also solid support for moving data between DCs with the DVX platform. This all sounds like the foundation for something happening in the hybrid space, right?

 

And Into The Future

Datrium pointed out that disaster recovery has traditionally been a good way of finding out where a lot of the problems exist in your data centre. There’s nothing like failing a failover to understand where the integration points in your on-premises infrastructure are lacking. Disaster recovery needs to be a seamless, integrated process, but data centres are still built on various silos of technology. People are still using clouds for a variety of reasons, and some clouds do some things better than others. It’s easy to pick and choose what you need to get things done. This has been one of the big advantages of public cloud and a large reason for its success. As a result of this, however, the silos are moving to the cloud, even as they remain fixed in the DC.

With this in mind, Datrium are looking to develop a solution that delivers on the following theme: “Run. Protect. Any Cloud”. The idea is simple: an orchestrated DR offering that makes failover and failback a painless undertaking. Datrium tell me they’ve been a big supporter of VMware’s SRM product, but have observed that there can be problems with VMware offering an orchestration-only layer: adapters have issues from time to time, and managing the solution can be complicated. With CloudShift, Datrium are taking a vertical stack approach, positioning CloudShift as an orchestrator for DR as a SaaS offering. Note that it only works with Datrium.

[image courtesy of Datrium]

The idea behind CloudShift is pretty neat. With Cloud DVX you can already back up VMs to AWS using S3 and EC2. The idea is that you can leverage data already in AWS to fire up VMs on AWS (using on-demand instances of VMware Cloud on AWS) to provide temporary disaster recovery capability. The good thing about this is that converting your VMware VMs to someone else’s cloud is no longer a problem you need to resolve. You’ll need to have a relationship with AWS in the first place – it won’t be as simple as entering your credit card details and firing up an instance. But it certainly seems a lot simpler than having an existing infrastructure in place, and dealing with the conversion problems inherent in going from vSphere to KVM and other virtualisation platforms.

[image courtesy of Datrium]

Failover and failback are fairly straightforward processes as well, with the following steps required to move workloads (I’ve sketched a rough version of how such a runbook might look after the list):

  1. Backup to Cloud DVX / S3 – This is ongoing and happens in the background;
  2. Failover required – the CloudShift runbook is initiated;
  3. Restart VM groups on VMC – VMs are rehydrated from data in S3; and
  4. Failback to on-premises – CloudShift reverses the process, sending only the deltas using changed block tracking.
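
To make the orchestration idea a little more concrete, here’s that sketch. To be clear, none of the classes or method names below are Datrium’s actual API; they’re stand-ins I’ve made up to mirror the four steps above.

```python
# Purely hypothetical sketch of an orchestrated DR runbook - these classes and
# methods are stand-ins for the four steps above, not Datrium's actual API.

class CloudCopyStore:
    """Stands in for Cloud DVX copies already deduplicated into S3 (step 1)."""
    def latest_copy(self, vm_group):
        return f"s3-copy-of-{vm_group}"
    def changed_blocks_since_failover(self, vm_group):
        return f"cbt-deltas-for-{vm_group}"

class ComputeTarget:
    """Stands in for on-demand VMC hosts or the on-premises cluster."""
    def __init__(self, name):
        self.name = name
    def restore_and_power_on(self, copy):
        print(f"[{self.name}] restoring {copy} and powering on")
    def apply_deltas(self, deltas):
        print(f"[{self.name}] applying {deltas} and powering on")

def failover(vm_groups, copies, vmc):
    """Step 3: restart VM groups on VMC, rehydrated from data already in S3."""
    for group in vm_groups:
        vmc.restore_and_power_on(copies.latest_copy(group))

def failback(vm_groups, copies, on_prem):
    """Step 4: reverse the process, shipping only the changed blocks home."""
    for group in vm_groups:
        on_prem.apply_deltas(copies.changed_blocks_since_failover(group))

if __name__ == "__main__":
    groups = ["web-tier", "db-tier"]
    copies = CloudCopyStore()                           # step 1 is assumed to be ongoing
    failover(groups, copies, ComputeTarget("VMC"))      # step 2: runbook initiated
    failback(groups, copies, ComputeTarget("on-prem"))
```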

It’s being pitched as a very simple way to run DR, an activity that has been notoriously stressful in the past.

 

Thoughts and Further Reading

CloudShift is targeted for release in the first half of 2019. The economic argument for DRaaS in the cloud is very strong. People love the idea that they can access the facility on demand, rather than having passive infrastructure sitting idle on the off chance that it will be required. There’s obviously some additional cost when you need to use on-demand rather than reserved resources, but this is still potentially cheaper than standing up and maintaining your own secondary DC presence.
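
To illustrate why the on-demand model stacks up, here’s a back-of-the-envelope comparison. The dollar figures are entirely made up for the sake of the example; the point is the shape of the argument, not the numbers.

```python
# Hypothetical numbers only - illustrating why on-demand DR can undercut a
# permanently maintained secondary DC.

# Always-on secondary DC: hardware amortisation, colo, power, people (per year).
secondary_dc_cost = 250_000

# DRaaS: a small always-on replication footprint plus on-demand compute charged
# only for the hours you actually fail over or run DR tests.
steady_state_cost = 30_000        # replication target + SaaS subscription
on_demand_rate = 40               # $/hour for recovery compute while it's running
dr_hours = 2 * 72 + 14 * 24       # two three-day tests plus one two-week real event

draas_cost = steady_state_cost + on_demand_rate * dr_hours
print(f"Secondary DC: ${secondary_dc_cost:,} per year")
print(f"DRaaS:        ${draas_cost:,} per year")
```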

Datrium are focused on keeping inherently complex activities like DR simple. I’ll be curious to see whether they’re successful with this approach. The great thing about a generic orchestration framework like VMware SRM is that you can use a number of different vendors in the data centre and not have a huge problem with interoperability. The downside to this approach is that the broader ecosystem can leave you exposed to problems with individual components in the solution. Datrium are taking a punt that their customers are going to see the advantages of having an integrated approach to leveraging on-demand services. I’m constantly astonished that people don’t get more excited about DRaaS offerings. It’s really cool that you can get this level of protection without having to invest a tonne in running your own passive infrastructure. If you’d like to read more about CloudShift, there’s a blog post that sheds some more light on the solution on Datrium’s site, and you can grab a white paper here too.

2018 AKA The Year After 2017

I said last year that I don’t do future prediction type posts, and then I did one anyway. This year I said the same thing and then I did one around some Primary Data commentary. Clearly I don’t know what I’m doing, so here we are again. This time around, my good buddy Jason Collier (Founder at Scale Computing) had some stuff to say about hybrid cloud, and I thought I’d wade in and, ostensibly, nod my head in vigorous agreement for the most part. Firstly, though, here’s Jason’s quote:

“Throughout 2017 we have seen many organizations focus on implementing a 100% cloud focused model and there has been a push for complete adoption of the cloud. There has been a debate around on-premises and cloud, especially when it comes to security, performance and availability, with arguments both for and against. But the reality is that the pendulum stops somewhere in the middle. In 2018 and beyond, the future is all about simplifying hybrid IT. The reality is it’s not on-premises versus the cloud. It’s on-premises and the cloud. Using hyperconverged solutions to support remote and branch locations and making the edge more intelligent, in conjunction with a hybrid cloud model, organizations will be able to support highly changing application environments”.

 

The Cloud

I talk to people every day in my day job about what their cloud strategy is, and most people in enterprise environments are telling me that there are plans afoot to go all in on public cloud. No one wants to run their own data centres anymore. No one wants to own and operate their own infrastructure. I’ve been hearing this for the last five years too, and have possibly penned a few strategy documents in my time that said something similar. Whether it’s with AWS, Azure, Google or one of the smaller players, public cloud as a consumption model has a lot going for it. Unfortunately, it can be hard to get stuff working up there reliably. Why? Because no one wants to spend time “re-factoring” their applications. As a result of this, a lot of people want to lift and shift their workloads to public cloud. This is fine in theory, but a lot of those applications are running crusty versions of Microsoft’s flagship RDBMS, or they’re designed for low-latency, on-premises data centres rather than being addressable over the Internet. And why is this? Because we all spent a lot of the business’s money in the late nineties and early noughties building these systems to a level of performance and resilience that we thought people wanted. Except we didn’t explain ourselves terribly well, and now the business is tired of spending all of this money on IT. And they’re tired of having to go through extensive testing cycles every time they need to do a minor upgrade. So they stop doing those upgrades, and after some time passes, you find that a bunch of key business applications are suddenly approaching end of life and in need of some serious TLC. As a result of this, those same enterprises looking to go cloud first also find themselves struggling mightily to get there. This doesn’t necessarily mean public cloud isn’t the answer; it just means that people need to think things through a bit.

 

The Edge

Another reason enterprises aren’t necessarily lifting and shifting every single workload to the cloud is the concept of data gravity. Sometimes, your applications and your data need to be close to each other. And sometimes that closeness needs to be at the place you generate the data (or run the applications). Whilst I think we’re seeing a shift in the deployment of corporate workloads to off-premises data centres, there are still some applications that need everything close by. I generally see this with enterprises working with extremely large datasets (think geo-spatial stuff or perhaps media and entertainment companies) that struggle to move large amounts of the data around in a fashion that is cost effective and efficient from a time and resource perspective. There are some neat solutions to some of these requirements, such as Scale Computing’s single-node deployment option for edge workloads, and X-IO Technologies’ neat approach to moving data from the edge to the core. But physics is still physics.
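
To put some rough numbers on “physics is still physics” (these are my own illustrative figures, not anything from Scale Computing or X-IO), moving a big dataset over even a dedicated link takes a long time:

```python
# Rough transfer-time arithmetic with illustrative numbers.

dataset_tb = 500          # e.g. a geospatial or media and entertainment archive
link_gbps = 1             # dedicated 1Gbps WAN link
efficiency = 0.7          # allow for protocol overhead and contention

dataset_bits = dataset_tb * 8 * 10**12
effective_bps = link_gbps * 10**9 * efficiency

days = dataset_bits / effective_bps / 86400
print(f"~{days:.0f} days to move {dataset_tb}TB over a {link_gbps}Gbps link")
# => roughly 66 days, which is why the data (and the compute) often stays at the edge
```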

 

The Bit In Between

So back to Jason’s comment on hybrid cloud being the way it’s really all going. I agree that it’s very much a question of public cloud and on-premises, rather than one or the other. I think the missing piece for a lot of organisations, however, doesn’t necessarily lie in any one technology or application architecture. Rather, I think the key to a successful hybrid strategy sits squarely with the capability of the organisation to provide consistent governance throughout the stack. In my opinion, it’s more about people understanding the value of what their company does, and the best way to help it achieve that value, than it is about whether HCI is a better fit than traditional rackmount servers connected to fibre channel fabrics. Those considerations are important, of course, but I don’t think they have the same impact on a company’s potential success as the people and politics do. You can have some super awesome bits of technology powering your company, but if you don’t understand how you’re helping the company do business, you’ll find the technology is not as useful as you hoped it would be. You can talk all you want about hybrid (and you should, it’s a solid strategy) but if you don’t understand why you’re doing what you do, it’s not going to be as effective.

Primary Data – Seeing the Future

It’s that time of year when public relations companies send out a heap of “What’s going to happen in 2018” type press releases for us blogger types to take advantage of. I’m normally reluctant to do these “futures” based posts, as I’m notoriously bad at seeing the future (as are most people). These types of articles also invariably push the narrative in a certain direction based on whatever the vendor being represented is selling. That said I have a bit of a soft spot for Lance Smith and the team at Primary Data, so I thought I’d entertain the suggestion that I at least look at what’s on his mind. Unfortunately, scheduling difficulties meant that we couldn’t talk in person about what he’d sent through, so this article is based entirely on the paragraphs I was sent, and Lance hasn’t had the opportunity to explain himself :)

 

SDS, What Else?

Here’s what Lance had to say about software-defined storage (SDS). “Few IT professionals admit to a love of buzzwords, and one of the biggest offenders in the last few years is the term, “software-defined storage.” With marketers borrowing from the successes of “software-defined-networking”, the use of “SDS” attempts all kinds of claims. Yet the term does little to help most of us to understand what a specific SDS product can do. Despite the well-earned dislike of the phrase, true software-defined storage solutions will continue to gain traction because they try to bridge the gap between legacy infrastructure and modern storage needs. In fact, even as hardware sales declines, IDC forecasts that the SDS market will grow at a rate of 13.5% from 2017 – 2021, growing to a $16.2B market by the end of the forecast period.”

I think Lance raises an interesting point here. There’re a lot of companies claiming to deliver software-defined storage solutions in the marketplace. Some of these, however, are still heavily tied to particular hardware solutions. This isn’t always because they need the hardware to deliver functionality, but rather because the company selling the solution also sells hardware. This is fine as far as it goes, but I find myself increasingly wary of SDS solutions that are tied to a particular vendor’s interpretation of what off the shelf hardware is.

The killer feature of SDS is the idea that you can do policy-based provisioning and management of data storage in a programmatic fashion, and do this independently of the underlying hardware. Arguably, with everything offering some kind of RESTful API capability, this is the case. But I think it’s the vendors who are thinking beyond simply dishing up NFS mount points or S3-compliant buckets that will ultimately come out on top. People want to be able to run this stuff anywhere – on crappy whitebox servers and in the public cloud – and feel comfortable knowing that they’ll be able to manage their storage based on a set of business-focused rules, not a series of constraints set out by a hardware vendor. I think we’re close to seeing that with a number of solutions, but I think there’s still some way to go.
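
As a concrete illustration of what I mean by business-focused rules, this is the sort of request you’d want to be able to make of any SDS control plane, regardless of what hardware sits underneath. The endpoint and payload here are invented for the example; they’re not Primary Data’s (or anyone else’s) actual API.

```python
# Generic illustration of policy-based provisioning over REST - the endpoint and
# payload shape are invented for this example, not any vendor's real API.
import requests

policy = {
    "name": "tier1-oltp",
    "objectives": {
        "max_latency_ms": 2,                           # business-facing targets...
        "min_iops": 20000,
        "protection": "replicas=2",
        "placement": ["on-premises", "public-cloud"],  # ...not device names
    },
}

volume_request = {"name": "erp-db-01", "size_gb": 2048, "policy": "tier1-oltp"}

base_url = "https://sds-controller.example.com/api/v1"
requests.post(f"{base_url}/policies", json=policy, timeout=30)
requests.post(f"{base_url}/volumes", json=volume_request, timeout=30)
```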

 

HCI As Silo. Discuss.

His thoughts on HCI were, in my opinion, a little more controversial. “Hyperconverged infrastructure (HCI) aims to meet data’s changing needs through automatic tiering and centralized management. HCI systems have plenty of appeal as a fast fix to pay as you grow, but in the long run, these systems represent just another larger silo for enterprises to manage. In addition, since hyperconverged systems frequently require proprietary or dedicated hardware, customer choice is limited when more compute or storage is needed. Most environments don’t require both compute and storage in equal measure, so their budget is wasted when only more CPU or more capacity is really what applications need. Most HCI architecture rely on layers of caches to ensure good storage performance.  Unfortunately, performance is not guaranteed when a set of applications running in a compute node overruns a caches capacity.  As IT begins to custom-tailor storage capabilities to real data needs with metadata management software, enterprises will begin to move away from bulk deployments of hyperconverged infrastructure and instead embrace a more strategic data management role that leverages precise storage capabilities on premises and into the cloud.”

There are a few nuggets in this one that I’d like to look at further. Firstly, the idea that HCI becomes just another silo to manage is an interesting one. It’s true that HCI as a technology is a bit different to the traditional compute / storage / network paradigm that we’ve been managing for the last few decades. I’m not convinced, however, that it introduces another silo of management. Or maybe, what I’m thinking is that you don’t need to let it become another silo to manage. Rather, I’ve been encouraging enterprises to look at their platform management at a higher level, focusing on the layer above the compute / storage / network to deliver automation, orchestration and management. If you build that capability into your environment, then whether you consume compute via rackmount servers, blades or HCI becomes less and less relevant. It’s easier said than done, of course, as it takes a lot of time and effort to get that layer working well. But the sweat investment is worth it.

Secondly, the notion that “[m]ost environments don’t require both compute and storage in equal measure, so their budget is wasted when only more CPU or more capacity is really what applications need” is accurate, but most HCI vendors now offer a way to expand storage or compute without necessarily growing the other components (think Nutanix with their storage-only nodes and NetApp’s approach to HCI). I’d posit that architectures from the HCI market leaders have changed to the point where this is no longer a real issue.

Finally, I’m not convinced that “performance is not guaranteed when a set of applications running in a compute node overruns a caches capacity” is as much of a problem as it was a few years ago. Modern hypervisors have a lot of smarts built into them in terms of service quality and the modelling for capacity and performance sizing has improved significantly.

 

Conclusion

I like Lance, and I like what Primary Data bring to the table with their policy-based SDS solution. I don’t necessarily agree with him on some of these points (particularly as I think HCI solutions have matured a bunch in the last few years) but I do enjoy the opportunity to think about some of these ideas when I otherwise wouldn’t. So what will 2018 bring in my opinion? No idea, but it’s going to be interesting, that’s for sure.

Dell EMC VxRail 4.5 – A Few Notes

VxRail 4.5 was announced in May by Dell EMC and I’ve been a bit slow in going through my enablement on the platform. The key benefit (beyond some interesting hardware permutations that were announced) is support for VMware vSphere 6.5 U1 and vSAN 6.6. I thought I’d cover a few of the more interesting aspects of the VxRail platform and core VMware enhancements.

Note that VxRail 4.5 does not support Generation 1 hardware, but it does support G2 and G3 Quanta models, and G3 Dell PowerEdge appliances.

 

VxRail Enhancements

Multi-node Additions

Prior to version 4.5, adding a node to an existing cluster was a bit of a pain. Only one node could be added at a time, which could take a while when you had a lot of nodes to add. Now, however:

  • Multiple nodes (up to 6) can be added simultaneously.
  • Each node expansion is a separate process; if one fails, the others will keep going.

There is now also a node removal procedure, used to decommission old generation VxRail products and migrate to new generation VxRail hardware. This is only supported for VxRail 4.0.301 and above and removal of only one node at a time is recommended.

 

Network Planning

Different VLANs are recommended for vSAN traffic and for management across multiple VxRail clusters.

 

VxRail network topologies use dual top-of-rack (ToR) switches to remove the switch as a single point of failure.

 

vSAN 6.6 Enhancements

Disk Format 5

As I mentioned earlier, VxRail 4.5 introduces support for vSAN 6.6 and disk format 5.

  • All nodes in the VxRail cluster must be running vSAN 6.6 due to the upgraded disk format.
  • The upgrade from disk format 3 to 5 is a metadata-only conversion and data evacuation is not required. Disk format 5 is required for datastore-level encryption (see below).
  • VxRail will automatically upgrade the disk format version to 5 when you upgrade to VxRail 4.5.

 

Unicast Support

Unicast is supported for vSAN communications starting with vSAN 6.6. The idea is to reduce network configuration complexity. There is apparently no performance impact associated with the use of Unicast. vSAN will switch to unicast mode once all hosts in the cluster have been upgraded to vSAN 6.6 and disk format 5. You won’t need to reconfigure the ToR switches to disable multicast features in vSAN.

 

vSAN Data-at-Rest Encryption

vSAN Data-at-Rest Encryption (D@RE) is enabled at the cluster level, supporting hybrid, all-flash, and stretched clusters. Note that it requires an external vCenter and does not support embedded vCenter. It:

  • Works with all vSAN features, including deduplication and compression.
  • Integrates with all KMIP-compliant key management technologies, including SafeNet, HyTrust, Thales, Vormetric, etc.

When enabling encryption, vSAN performs a rolling reformat of every disk group in the cluster. As such, it is recommended to enable encryption on the vSAN datastore after the initial VxRail deployment. Whilst it’s a matter of ticking a checkbox, it can take a lot of time to complete depending on how much data needs to be migrated about the place.

 

Other Reading

You can read more about vSAN D@RE here. Chad delivered a very useful overview of the VxRail and VxRack updates announced at Dell EMC World 2017 that you can read here.

Scale Computing Announces Cloud Unity – Clouds For Everyone

 

The Announcement

Scale Computing recently announced the availability of a new offering: Cloud Unity. I had the opportunity to speak with the Scale Computing team at VMworld US this year to run through some of the finer points of the announcement and thought I’d cover it off here.

 

Cloud What?

So what exactly is Cloud Unity? If you’ve been keeping an eye on the IT market in the last few years, you’ll have noticed that everything has cloud of some type in its product name. In this case, Cloud Unity is a mechanism by which you can run Scale Computing’s HC3 hypervisor nested in Google Cloud Platform (GCP). The point of the solution, ostensibly, is to provide a business with disaster recovery capability on a public cloud platform. You’re basically running an HC3 cluster on GCP, with the added benefit that you can create an encrypted VXLAN connection between your on-premises HC3 cluster and the GCP cluster. The neat thing here is that everything runs as a small instance to handle replication from on-premises and only scales up when you actually need to run the VMs in anger. The service is bought through Scale Computing, and starts from as little as US$1000 per month (for 5TB). There are other options available as well and the solution is expected to be Generally Available in Q4 this year.

 

Conclusion and Further Reading

This isn’t the first time nested virtualisation has been released as a product, with AWS, Azure and Ravello all doing similar things. The cool thing here is that it’s aimed at Scale Computing’s traditional customers, namely small to medium businesses. These are the people who’ve bought into the simplicity of the Scale Computing model and don’t necessarily have time to re-write their line-of-business applications to work as cloud native applications (as much as it would be nice if that were the case). Whilst application lift and shift isn’t the ideal outcome, the other benefit of this approach is that companies who may not have previously invested in DR capability can now leverage this product to solve the technical part of the puzzle fairly simply.

DR should be a simple thing to have in place. Everyone has horror stories of data centres going offline because of natural disasters or, more commonly, human error. The price of good DR, however, has traditionally been quite high. And it’s been pretty hard to achieve. The beauty of this solution is that it provides businesses with solid technical capabilities for a moderate price, and allows them to focus on people and processes, which are arguably the key parts of DR that are commonly overlooked. Disasters are bad, which is why they’re called disasters. If you run a small to medium business and want to protect yourself from bad things happening, this is the kind of solution that should be of interest to you.

A few years ago, Scale Computing sent me a refurbished HC1000 cluster to play with, and I’ve had first-hand exposure to the excellent support staff and experience that Scale Computing tell people about. The stories are true – these people are very good at what they do and this goes a long way in providing consumers with confidence in the solution. This confidence is fairly critical to the success of technical DR solutions – you want to leverage something that’s robust in times of duress. You don’t want to be worrying about whether it will work or not when your on-premises DC is slowly becoming submerged in water because building maintenance made a boo boo. You want to be able to focus on your processes to ensure that applications and data are available when and where they’re required to keep doing what you do.

If you’d like to read what other people have written, Justin Warren posted a handy article at Forbes, and Chris Evans provided a typically insightful overview of the announcement and the challenges it’s trying to solve that you can read here. Scott D. Lowe also provided a very useful write-up here. Scale Computing recently presented at Tech Field Day 15, and you can watch their videos here.

The Thing About NetApp HCI Is …

Disclaimer: I recently attended VMworld 2017 – US.  My flights were paid for by ActualTech Media, VMware provided me with a free pass to the conference and various bits of swag, and Tech Field Day picked up my hotel costs. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

You can view the video of NetApp’s presentation here, and download a copy of my rough notes from here.

 

What’s In A Name?

There’s been some amount of debate about whether NetApp’s HCI offering is really HCI or CI. I’m not going to pick sides in this argument. I appreciate that words mean things and definitions are important, but I’d like to focus more on what NetApp’s offering delivers, rather than whether someone in Tech Marketing made the right decision to call this HCI. Let’s just say they’re closer to HCI than WD is to cloud.

 

Ye Olde Architectures (The HCI Tax)

NetApp spent some time talking about the “HCI Tax” – the overhead of providing various data services with first generation HCI appliances. Gabe touched on the impact of running various iterations of controller VMs, along with the increased memory requirements for services such as deduplication, erasure coding, compression, and encryption. The model for first generation HCI is simple – grow your storage and compute in lockstep as your performance requirements increase. The great thing with this approach is that you can start small and grow your environment as required. The problem with this approach is that you may only need to grow your storage, or you may only need to grow your compute requirement, but not necessarily both. Granted, a number of HCI vendors now offer storage-only nodes to accommodate this requirement, but NetApp don’t think the approach is as polished as it could be. The requirement to add compute as you add storage can also have a financial impact in terms of the money you’ll spend in licensing for CPUs. Whilst one size fits all has its benefits for linear workloads, this approach still has some problems.

 

The New Style?

NetApp suggest that their solution offers the ability to “scale on your terms”. With this you can:

  • Optimise and protect existing investments;
  • Scale storage and compute together or independently; and
  • Eliminate the “HCI Tax”.

Note that only the storage nodes have disks, the compute nodes get blanks. The disks are on the front of the unit and the nodes are stateless. You can’t have different tiers of storage nodes as it’s all one cluster. It’s also BYO switch for connectivity, supporting 10/25Gbps. In terms of scalability, from a storage perspective you can scale as much as SolidFire can nowadays (around 100 nodes), and your compute nodes are limited by vSphere’s maximum configuration.

There are “T-shirt sizes” for implementation, and you can start small with as little as two blocks (2 compute nodes and 4 storage nodes). I don’t believe you mix t-shirt sizes in the same cluster. Makes sense if you think about it for more than a second.

 

Thoughts

Converged and hyper-converged are different things, and I think this post from Nick Howell (in the context of Cohesity as HCI) sums up the differences nicely. However, what was interesting for me during this presentation wasn’t whether or not this qualifies as HCI or not. Rather, it was about NetApp building on the strengths of SolidFire’s storage offering (guaranteed performance with QoS and good scale) coupled with storage / compute independence to provide customers with a solution that seems to tick a lot of boxes for the discerning punter.

Unless you’ve been living under a rock for the last few years, you’ll know that NetApp are quite a different beast to the company founded 25 years ago. The great thing about them (and the other major vendors) entering the already crowded HCI market is that they offer choices that extend beyond the HCI play. For the next few years at least, there are going to be workloads that just may not go so well with HCI. If you’re already a fan of NetApp, chances are they’ll have an alternative solution that will allow you to leverage their capability and still get the outcome you need. Gabe made the excellent point that “[y]ou can’t go from traditional to cloud overnight, you need to evaluate your apps to see where they fit”. This is exactly the same with HCI. I’m looking forward to seeing how they go against the more established HCI vendors in the marketplace, and whether the market responds positively to some of the approaches they’ve taken with the solution.

Atlantis – Not Your Father’s VDI

 

I’ve written about Atlantis Computing a few times before, and last week Bob Davis and Patrick Brennan were nice enough to run me through what they’ve been up to recently. What I’m about to cover isn’t breaking news, but I thought it worthwhile writing about nonetheless.

 

Citrix Workspace

Atlantis have been focusing an awful lot on Citrix workspaces lately, which I don’t think is a bad thing.

 

End-to-End Visibility

The beauty of a heavily integrated solution is that you get great insights all the way through the solution stack. Rather than having to look at multiple element managers to troubleshoot problems, you can get a view of everything from the one place. This is something I’ve had a number of customers asking for.

  • A single pane of glass for monitoring the entire virtual workspace infrastructure;
  • Proactive risk management for the workspace;
  • Troubleshoot and identify workspace issues faster; and
  • Save money on operational costs.

 

Reporting

People love good reporting. So does Citrix, so you’ve got that in spades here as well. Including:

  • Detailed historical information;
  • Proactive risk management;
  • Trending infrastructure requirements; and
  • Scaling with confidence.

 

On-demand Desktop Delivery

The whole solution can be integrated with the Citrix cloud offering, with:

  • Elastic dynamic provisioning on-premises or in the cloud with one management platform;
  • Rapid deployment of applications or desktops with simplified administration; and
  • Easy provision of Desktop as a Service.

 

HPE Intelligent Edge

It wouldn’t be product coverage without some kind of box shot. Software is nothing without hardware. Or so I like to say.

Here’s a link to the product landing page. It’s basically the HPE Edgeline EL4000 (4-Node) with m510 cartridges:

  • M510 Cartridge: Intel Xeon D “Broadwell-DE” 1.7GHz – 16 cores w/ 128GB RAM
  • Equipped with NVMe
  • 4TB Effective Storage Capacity

It runs the full Citrix stack (XenApp + XenDesktop + XenServer) and was announced at Citrix Summit 2017.

 

Thoughts and Further Reading

I have a lot of clients using all kinds of different combinations to get apps and desktops to their clients. It can be a real pain to manage, and upgrades can be difficult to deliver in a seamless fashion. If you’re into Citrix, and I know a lot of people are, then the Atlantis approach using “full-stack management” certainly has its appeal.  It takes the concept of hyperconverged and adds a really useful layer of integration with application and desktop delivery, doing what HCI has done for infrastructure already and ratcheting it up a notch. Is this mega-hyperconverged? Maybe not, but it seems to be a smarter way to do things, albeit for a specific use case.

If there’s one thing that HCI hasn’t managed to do well, it’s translate the application layer into something as simple as the infrastructure that it’s hosted on. Arguably this is up to the people selling the apps, but it’s nice to see Atlantis having a crack at it.

Atlantis aren’t quite the household name that SimpliVity or Nutanix are (yes I know households don’t really talk about these companies either). But they’ve done some interesting stuff in the HCI space, and the decision to focus heavily on VDI has been a good one. There’s a lot to be said for solutions that are super simple to deploy and easy to maintain, particularly if the hosted software platform is also easy to use. Coupled with some solid cloud integration I think the solution is pretty neat, at least on paper. You can read about the announcement here. Here’s a link to the solution brief. If you’d like to find out more, head to this site and fill out the form. I’m looking forward to hearing about how this plays out in the data centre.

 

Random Short Take #3

It’s only been 13 months since I did one of these, so clearly the frequency is increasing. Here’re a few things that I’ve noticed and thought may be worthy of your consideration:

  • This VMware KB article on Calculating Bandwidth Requirements for vSphere Replication is super helpful and goes into a lot of depth on planning for replication. There’s also an appliance available via a Fling, and a calculator you can use (the basic arithmetic is sketched after this list).
  • NetWorker 9.1 was recently released and Preston has a great write-up on some of the new features. “Things are hotting up”, as they say.
  • Rawlinson has now joined Cohesity, and I look forward to hearing more about that in the near future. In the meantime,  this article on automated deployment for HCI is very handy.
  • My hosting provider moved me to a new platform in September. By October I’d decided to move somewhere else based on the poor performance of the site and generally shoddy support experience. I’m now with SiteGround. They’re nice, fast and cheap enough for me. I’ve joined their affiliate program, so if you decide to sign up with them I can get some cash.
  • My blog got “hacked” yesterday. Someone put a redirect in place to a men’s performance pill site. Big thanks to Mike Yurick for pointing it out to me and to my colleague Josh for answering my pleas for help and stepping in and cleaning it up while I was on a plane inter-state. He used Wordfence to scan and clean up the site – check them out and make sure your backups are up to date. If it happens to you, and you don’t have a Josh, check out this guidance from WordPress.
  • The next Brisbane VMUG will be held on Tuesday February 21st. I’ll be putting up an article on it in the next few weeks. It will be sponsored by Veeam and should be great.
  • I always enjoy spending time with Stephen Foskett, and when I can’t be with him I like to read his articles (it’s not stalky at all). This one on containers was particularly good.
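
For what it’s worth, as I understand it the nub of the bandwidth exercise is being able to move each RPO window’s worth of changed data within the RPO, plus some headroom. Here’s a rough sketch with illustrative numbers; read the KB and use the calculator for anything real.

```python
# Rough sketch of the replication bandwidth arithmetic - illustrative numbers only.

dataset_gb = 2000            # protected VM footprint
daily_change_rate = 0.05     # 5% of the data changes per day
rpo_minutes = 60
overhead = 1.25              # headroom for bursts and protocol overhead

changed_gb_per_rpo = dataset_gb * daily_change_rate * rpo_minutes / (24 * 60)
required_mbps = changed_gb_per_rpo * 8 * 1000 / (rpo_minutes * 60) * overhead
print(f"~{required_mbps:.0f} Mbps to ship {changed_gb_per_rpo:.1f}GB inside a {rpo_minutes} minute RPO")
```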

That’s it for the moment. Hopefully you all have an enjoyable and safe holiday season. And if my site looks like this in the future – let me know.

Maxta Introduces “Freemium” Hyperconverged Solution


This is a really quick post while it’s still in my head. I stopped by the Maxta booth at VMworld recently. I haven’t spoken to Maxta since I saw them at SFD7, so I was curious to hear about what they were up to. They told me about their “freemium” MxSP software license they were releasing soon.

[image courtesy of Maxta]

As I understand it (and please someone correct me if I’m wrong), it’s limited to 3 nodes and 24TB of capacity, but you’ll be able to upgrade Free MxSP licenses non-disruptively to a premium license when you’re ready to go head on. You can read the press release here and sign up for the license here (note that there are some geographic restrictions on this).

EMC Announces VxRail

Yes, yes, I know it was a little while ago now. I’ve been occupied by other things and wanted to let the dust settle on the announcement before I covered it off here. And it was really a VCE announcement. But anyway. I’ve been doing work internally around all things hyperconverged and, as I work for a big EMC partner, people have been asking me about VxRail. So I thought I’d cover some of the more interesting bits.

So, let’s start with the reasonably useful summary links:

  • The VxRail datasheet (PDF) is here;
  • The VCE landing page for VxRail is here;
  • Chad’s take (worth the read!) can be found here; and
  • Simon from El Reg did a write-up here.

 

So what is it?

Well, it’s a re-envisioning of VMware’s EVO:RAIL hyperconverged infrastructure in a way. But it’s a bit better than that: a bit more flexible, and potentially more cost-effective. Here’s a box shot, because it’s what you want to see.

[image: VxRail appliance]

Basically it’s a 2RU appliance housing 4 nodes. You can scale these nodes out in increments as required. There’s a range of hybrid configurations available.

[image: VxRail hybrid configurations]

As well as some all flash versions.

[image: VxRail all-flash configurations]

By default the initial configuration must be fully populated with 4 nodes, with the ability to scale up to 64 nodes (with qualification from VCE). Here are a few other notes on clusters:

  • You can’t mix All Flash and Hybrid nodes in the same cluster (this messes up performance);
  • All nodes within the cluster must have the same license type (Full License or BYO/ELA); and
  • First generation VSPEX BLUE appliances can be used in the same cluster with second generation appliances but EVC must be set to align with the G1 appliances for the whole cluster.

 

On VMware Virtual SAN

I haven’t used VSAN/Virtual SAN enough in production to have really firm opinions on it, but I’ve always enjoyed tracking its progress in the marketplace. VMware claim that the use of Virtual SAN over other approaches has the following advantages:

  • No need to install Virtual Storage Appliances (VSA);
  • CPU utilization <10%;
  • No reserved memory required;
  • Provides the shortest path for I/O; and
  • Seamlessly handles VM migrations.

If that sounds a bit like some marketing stuff, it sort of is. But that doesn’t mean they’re necessarily wrong either. VMware state that the placement of Virtual SAN directly in the hypervisor kernel allows it to “be fast, highly efficient, and be able to scale with flash and modern CPU architectures”.

While I can’t comment on this one way or another, I’d like to point out that this appliance is really a VMware play. The focus here is on the benefit of using an established hypervisor (vSphere), an established management solution (vCenter), and a (soon-to-be) established software-defined storage solution (Virtual SAN). If you’re looking for the flexibility of multiple hypervisors or incorporating other storage solutions, this really isn’t for you.

 

Further Reading and Final Thoughts

Enrico has a good write-up on El Reg about Virtual SAN 6.2 that I think is worth a look. You might also be keen to try something that’s NSX-ready. This is as close as you’ll get to that (although I can’t comment on the reality of one of those configurations). You’ve probably noticed there have been a tonne of pissing matches on the Twitters recently between VMware and Nutanix about their HCI offerings and the relative merits (or lack thereof) of their respective architectures. I’m not telling you to go one way or another. The HCI market is reasonably young, and I think there’s still plenty of change to come before the market has determined whether this really is the future of data centre infrastructure. In the meantime though, if you’re already slow-dancing with EMC or VCE and get all fluttery when people mention VMware, then the VxRail is worth a look if you’re HCI-curious but looking to stay with your current partner. It may not be for the adventurous amongst you, but you already know where to get your kicks. In any case, have a look at the datasheet and talk to your local EMC and VCE folk to see if this is the right choice for you.