Maxta Announces MxIQ

Maxta recently announced MxIQ. I had the opportunity to speak to Barry Phillips (Chief Marketing Officer) and Kiran Sreenivasamurthy (VP, Product Management) and thought I’d share some information from the announcement here. It’s been a while since I’ve covered Maxta, and you can read my previous thoughts on them here.

 

Introducing MxIQ

MxIQ is Maxta’s support and analytics solution and it focuses on four key aspects:

  • Proactive support through data analytics;
  • A preemptive recommendation engine;
  • Capacity and performance trend forecasting; and
  • Resource planning assistance.

Historical data trends for capacity and performance are available, as well as metadata concerning cluster configuration, licensing information, VM inventory and logs.

Architecture

MxIQ is a client-server solution, and the server component is currently hosted by Maxta in AWS. This can be decoupled from AWS and hosted in a private DC environment if customers don’t want their data sitting in AWS. The downside of this is that Maxta won’t have visibility into the environment, and you’ll lose a lot of the advantages of aggregated support data and analytics.

[image courtesy of Maxta]

There is a client component that runs on every node in the cluster at the customer site. Note that only one agent in each cluster is active, with the other agents communicating with the active agent. From a security perspective, you only need to configure an outbound connection, as the server responds to client requests but doesn’t initiate communications with the client. This may change in the future as Maxta adds functionality to the solution.

From a heartbeat perspective, the agent talks to the server every minute or so. If, for some reason, it doesn’t check in, a support ticket is automatically opened.
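
To make the agent and server interaction a bit more concrete, here’s a minimal sketch of that heartbeat pattern as I understand it. The endpoint name, interval, and ticketing check are my own placeholders, not Maxta’s actual MxIQ implementation.

```python
# Minimal sketch of the heartbeat pattern described above. The endpoint,
# interval, and ticketing logic are hypothetical, not Maxta's actual MxIQ code.
import json
import time
import urllib.request

HEARTBEAT_URL = "https://mxiq.example.com/api/v1/heartbeat"  # hypothetical endpoint
CHECKIN_INTERVAL = 60  # the active agent checks in roughly every minute

def send_heartbeat(cluster_id: str) -> None:
    """Outbound-only: the agent always initiates the connection to the server."""
    payload = json.dumps({"cluster_id": cluster_id, "ts": time.time()}).encode()
    req = urllib.request.Request(
        HEARTBEAT_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req, timeout=10)

def missed_checkin(last_seen: float, now: float, grace: float = 3 * CHECKIN_INTERVAL) -> bool:
    """Server-side check: a missed check-in window means a support ticket gets opened."""
    return (now - last_seen) > grace
```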

[image courtesy of Maxta]

Privileges

There are three privilege levels available with the MxIQ solution.

  • Customer
  • Partner
  • Admin

Note that the Admin (Maxta support) needs to be approved by the customer.
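
As a toy illustration of that access model (the role names follow the list above; the approval flag and function are mine, purely for illustration):

```python
# Toy illustration of the three privilege levels, including the rule that
# Admin (Maxta support) access must first be approved by the customer.
from enum import Enum

class Role(Enum):
    CUSTOMER = "customer"
    PARTNER = "partner"
    ADMIN = "admin"  # Maxta support

def can_view_cluster(role: Role, customer_approved_admin: bool) -> bool:
    """Admin access is gated on customer approval; other roles see their own clusters."""
    if role is Role.ADMIN:
        return customer_approved_admin
    return True

print(can_view_cluster(Role.ADMIN, customer_approved_admin=False))     # False
print(can_view_cluster(Role.CUSTOMER, customer_approved_admin=False))  # True
```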

[image courtesy of Maxta]

The dashboard provides an easy-to-consume overview of what’s going on with managed Maxta clusters, and you can tell at a glance if there are any problems or areas of concern.

[image courtesy of Maxta]

 

Thoughts

I asked the Maxta team if they thought this kind of solution would result in more work for support staff as there’s potentially more information coming in and more support calls being generated. Their opinion was that, as more and more activities were automated, the workload would decrease. Additionally, logs are collected every four hours. This saves Maxta support staff time chasing environmental information after the first call is logged. I also asked whether the issue resolution was automated. Maxta said it wasn’t right now, as it’s still early days for the product, but that’s the direction it’s heading in.

The type of solution that Maxta are delivering here is nothing new in the marketplace, but that doesn’t mean it’s not valuable for Maxta and their customers. I’m a big fan of adding automated support and monitoring to infrastructure environments. It makes it easier for the vendor to gather information about how their product is being used, and it provides the ability for them to be proactive, and super responsive, to customer issues as they arise.

From what I can gather from my conversation with the Maxta team, it seems like there’s a lot of additional functionality they’ll be looking to add to the product as it matures. The real value of the solution will increase over time as customers contribute more and more telemetry and support data to the environment. This will obviously improve Maxta’s ability to respond quickly to support issues, and, potentially, give them enough information to avoid some of the more common problems in the first place. Finally, the capacity planning feature will no doubt prove invaluable as customers continue to struggle with growth in their infrastructure environments. I’m really looking forward to seeing how this product evolves over time.

Axellio Announces FX-WSSD

 

Axellio (a division of X-IO Technologies) recently announced their new FX-WSSD appliance based on Windows Server 2019. I had the opportunity to speak to Bill Miller (CEO) and Barry Martin (Product Manager for the HCI WSSD product) and thought I’d share some thoughts here.

 

What Is It?

Axellio recently announced the new FabricXpress Hyper-Converged Infrastructure (HCI) | Windows Server Software-Defined Datacenter (known as FX-WSSD to its friends). It’s built on the Axellio Edge FX-1000 platform, comes licensed with Windows Server 2019 Datacenter Edition, and runs Microsoft Storage Spaces Direct. You can manage it with Windows Admin Center and the (optional) 5nine management suite.

 

Density

A big part of the Axellio story here revolves around density. You get 4 nodes in 4 RU, and up to 36 NVMe drives per server. Axellio tell me you can pack up to 920TB of raw NVMe-based storage in these things (assuming you’re deploying 6.4TB NVMe drives). You can also run as few as 4 drives per server if your requirement is more compute-heavy than storage-heavy. There’s a full range of iWARP adapters from Chelsio Communications available, with support for 4x 10, 40, or 100GbE connections.
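
As a quick back-of-the-envelope check on that raw capacity figure (using the node count, drive count, and drive size quoted above):

```python
# Back-of-the-envelope check on the quoted raw NVMe capacity.
nodes = 4                # nodes per 4 RU FX-WSSD appliance
drives_per_node = 36     # maximum NVMe drives per server
drive_size_tb = 6.4      # TB per NVMe drive

raw_tb = nodes * drives_per_node * drive_size_tb
print(f"Raw NVMe capacity: {raw_tb:.1f} TB")  # 921.6 TB, i.e. roughly the 920TB quoted
```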

[image courtesy of Axellio]

You can start small and scale up (or out) if required. There’s support for up to 16 nodes in a cluster, and you can manage multiple clusters together if need be.

 

Not That Edge

When I think of edge computing I think of scientific folks doing funky things with big data and generally running Linux-type workloads. While this type of edge computing is still common (and well catered for by Axellio’s solutions), Axellio are going after what they refer to as the “enterprise edge” market, rather than those non-Windows workloads. The Windows Server Datacenter Edition licensing makes sense if you want to run Hyper-V and a number of Windows-based workloads, such as Active Directory domain controllers, file and print services, and small databases (basically the type of enterprise workloads traditionally found in remote offices).

 

Thoughts and Further Reading

I’m the first to admit that my working knowledge of current Windows technologies is nowhere near what it was 15 years ago. But I understand why choosing Windows as the foundation platform for the edge HCI appliance makes sense for Axellio. There’s a lot less investment they need to make in terms of raw product development, the Windows virtualisation platform continues to mature, there’s already a big install base of Windows in the enterprise, and operations folks will be fairly comfortable with the management interface.

I’ve written about Axellio’s Edge solution previously, and this new offering is a nice extension of that with some Windows chops and “HCI” sensibilities. I’m not interested in getting into a debate about whether this is really a hyper-converged offering or not, but there’s a bunch of compute, storage and networking stuck together with a hypervisor and management tier to help keep it running. Whatever you want to call it, I can see this being a useful (and flexible) solution for those shops who need to have certain workloads close to the edge, and are already leveraging the Windows operating platform to do it.

You can grab the Axellio Data Sheet from here, and a copy of the press release can be found here.

Scale Computing Announces Partnership With APC by Schneider Electric For DCIAB

(I’m really hoping the snappy title will bring in a few more readers). I recently had a chance to speak with Doug Howell, Senior Director Global Alliances at Scale Computing about their Data Centre In A Box (DCIAB) offering in collaboration with APC by Schneider Electric and thought I’d share some thoughts.

 

It’s A Box

Well, a biggish box. The solution is built on APC’s Micro Data Centre solution, combined with 3 Scale HC3 1150 nodes. The idea is that you have 1 SKU to deal with, which includes the Scale HC3 nodes, UPS, PDUs, and rack. You can then wheel it in, plug it into power and the network, and it’s ready to go. Howell mentioned that they have a customer in the process of deploying a significant number of these things in the wild.

Note that this is slightly different to the EMEA campaign with Lenovo from earlier in the year and is focused, at this stage, on the North American market. You can grab the solution brief from here.

 

Thoughts

The “distributed enterprise” has presented challenges to IT organisations for years now. Not everyone works in a location that is nicely co-located with headquarters. And these folks need compute and storage too. You’ve no doubt heard about how the “edge” is the new hotness in IT, and I frequently hear pitches from vendors talking about how they handle storage or compute requirements at the edge in some kind of earth-shattering way. It’s been a hard problem to solve, because locality (for storage, compute, or both) is generally a big part of the success of these solutions, particularly from the end user’s perspective. This is oftentimes at odds with traditional enterprise deployments, where all of the key compute and storage components are centrally located for ease of access, management and protection. Improvements in WAN technologies and distributed application availability are changing that story to an extent though, hence the requirement for these kinds of edge solutions. Sometimes, you just need to have stuff close to where your main business activity is occurring.

So what makes the Scale and APC offering any different? Nothing really, except that Scale have built their reputation on being able to deliver simple-to-operate hyper-converged infrastructure to small and medium enterprises with a minimum of fuss and at a reasonable price point. The cool thing here is that you’re combining APC’s ability to deliver robust micro DC infrastructure with Scale’s offering, which fits in well with their other solutions, such as DRaaS.

Not every solution from every vendor needs to be unique for it to stand out from the crowd. Scale have historically demonstrated a relentless focus on quality products, excellent after-sales support and market focus. This collaboration will no doubt open up some more doors for them with APC customers who were previously unaware of the Scale story (and vice versa). This can only be a good thing in my opinion.

Datrium Announces CloudShift

I recently had the opportunity to speak to Datrium‘s Brian Biles and Craig Nunes about their CloudShift announcement and thought it was worth covering some of the highlights here.

 

DVX Now

Datrium have had a scalable protection tier and focus on performance since their inception.

[image courtesy of Datrium]

The “mobility tier”, in the form of Cloud DVX, has been around for a little while now. It’s simple to consume (via SaaS), yields decent deduplication results, and the Datrium team tells me it also delivers fast RTO. There’s also solid support for moving data between DCs with the DVX platform. This all sounds like the foundation for something happening in the hybrid space, right?

 

And Into The Future

Datrium pointed out that disaster recovery has traditionally been a good way of finding out where a lot of the problems exist in your data centre. There’s nothing like failing a failover to understand where the integration points in your on-premises infrastructure are lacking. Disaster recovery needs to be a seamless, integrated process, but data centres are still built on various silos of technology. People are still using clouds for a variety of reasons, and some clouds do some things better than others. It’s easy to pick and choose what you need to get things done. This has been one of the big advantages of public cloud and a large reason for its success. As a result of this, however, the silos are moving to the cloud, even as they remain fixed in the DC.

As a result of this, Datrium are looking to develop a solution that delivers on the following theme: “Run. Protect. Any Cloud”. The idea is simple, offering up an orchestrated DR offering that makes failover and failback a painless undertaking. Datrium tell me they’ve been a big supporter of VMware’s SRM product, but have observed that there can be problems with VMware offering an orchestration-only layer, with adapters having issues from time to time, and managing the solution can be complicated. With CloudShift, Datrium are taking a vertical stack approach, positioning CloudShift as an orchestrator for DR as a SaaS offering. Note that it only works with Datrium.

[image courtesy of Datrium]

The idea behind CloudShift is pretty neat. With Cloud DVX you can already back up VMs to AWS using S3 and EC2. The idea is that you can leverage data already in AWS to fire up VMs on AWS (using on-demand instances of VMware Cloud on AWS) to provide temporary disaster recovery capability. The good thing about this is that converting your VMware VMs to someone else’s cloud is no longer a problem you need to resolve. You’ll need to have a relationship with AWS in the first place – it won’t be as simple as entering your credit card details and firing up an instance. But it certainly seems a lot simpler than having an existing infrastructure in place, and dealing with the conversion problems inherent in going from vSphere to KVM and other virtualisation platforms.

[image courtesy of Datrium]

Failover and failback are fairly straightforward processes as well, with the following steps required (sketched in code after the list):

  1. Backup to Cloud DVX / S3 – This is ongoing and happens in the background;
  2. Failover required – the CloudShift runbook is initiated;
  3. Restart VM groups on VMC – VMs are rehydrated from data in S3; and
  4. Failback to on-premises – CloudShift reverses the process with deltas using change block tracking.
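
Here’s a minimal sketch of how that runbook sequencing might look if you expressed it in code. All of the function names are hypothetical placeholders rather than Datrium’s actual CloudShift API.

```python
# Minimal sketch of the orchestrated failover/failback sequence described above.
# Function names are hypothetical placeholders, not Datrium's CloudShift API.

def continuous_backup(vm_group: str) -> None:
    """Step 1: ongoing background backup of the VM group to Cloud DVX / S3."""
    print(f"Backing up {vm_group} to Cloud DVX (S3)")

def failover(vm_group: str) -> None:
    """Steps 2-3: initiate the runbook and restart the VMs on VMware Cloud on AWS."""
    print(f"Initiating runbook for {vm_group}")
    print(f"Rehydrating {vm_group} from S3 onto on-demand VMC hosts")

def failback(vm_group: str) -> None:
    """Step 4: reverse the process, shipping only deltas via changed block tracking."""
    print(f"Replicating changed blocks for {vm_group} back on-premises")
    print(f"Restarting {vm_group} on the on-premises DVX cluster")

if __name__ == "__main__":
    group = "tier-1-app-servers"  # hypothetical protection group
    continuous_backup(group)
    failover(group)
    failback(group)
```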

It’s being pitched as a very simple way to run DR, something that has been notorious for being a stressful activity in the past.

 

Thoughts and Further Reading

CloudShift is targeted for release in the first half of 2019. The economic power of DRaaS in the cloud is very strong. People love the idea that they can access the facility on-demand, rather than having passive infrastructure doing nothing on the off chance that it will be required. There’s obviously some additional cost when you need to use on demand versus reserved resources, but this is still potentially cheaper than standing up and maintaining your own secondary DC presence.

Datrium are focused on keeping inherently complex activities like DR simple. I’ll be curious to see whether they’re successful with this approach. The great thing about a generic orchestration framework like VMware SRM is that you can use a number of different vendors in the data centre and not have a huge problem with interoperability. The downside to this approach is that the broader ecosystem can leave you exposed to problems with individual components in the solution. Datrium are taking a punt that their customers are going to see the advantages of having an integrated approach to leveraging on-demand services. I’m constantly astonished that people don’t get more excited about DRaaS offerings. It’s really cool that you can get this level of protection without having to invest a tonne in running your own passive infrastructure. If you’d like to read more about CloudShift, there’s a blog post that sheds some more light on the solution on Datrium’s site, and you can grab a white paper here too.

Dell Technologies World 2018 – Dell EMC (H)CI Updates

Disclaimer: I recently attended Dell Technologies World 2018.  My flights, accommodation and conference pass were paid for by Dell Technologies via the Press, Analysts and Influencers program. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Announcement

Dell EMC today announced enhancements to their (hyper)converged infrastructure offerings: VxRail and VxRack SDDC.

VxRail

  • VMware Validated Designs for SDDC to plan, operate, & deploy on-prem cloud
  • Future-proof performance w/NVMe, 2x more memory (up to 3TB per node), 2x graphics acceleration, and 25Gbps networking support
  • New STIG Compliance Guide and automated scripts accelerate deployment of secure infrastructure

VxRack SDDC

  • Exclusive automation & serviceability extensions with VMware Cloud Foundation (VCF)
  • Now leverages the more powerful 14th generation PowerEdge servers
  • End-to-end cloud infrastructure security

 

Gil Shneorson on HCI

During the week I also had the chance to speak with Gil Shneorson and I thought it would be worthwhile sharing some of his insights here.

What do you think about HCI in the context of an organisation’s journey to cloud? Is it a stop-gap? “HCI is simply a new way to consume infrastructure (compute and SDS) – and you get some stuff that wasn’t available before. Your environments are evergreen – you take less risk, you don’t have to plan ahead, don’t tend to buy too much or too little”.

Am I going to go traditional or HCI? “Most are going HCI. Where is the role of traditional storage? It’s become more specialised – bare metal, extreme performance, certain DR scenarios. HCI comes partially with everything – lots of storage, lots of CPU. Customers are using it in manufacturing, finance, health care, retail – all in production. There’s no more delineation. Economics are there. Picked up over 3000 customers in 9 quarters”.

Shneorson went on to say that HCI provides “[g]ood building blocks for cloud-like environments – IaaS. It’s the software on top, not the HCI itself. The world is dividing into specific stacks – VMware, Microsoft, Nutanix. Dell EMC are about VMware’s multi-cloud approach. If you do need on-premises, HCI is a good option, and won’t be going away. The Edge is growing like crazy too. Analytics, decision making. Not just point of sale for stores. You need a lot more just in time scale for storage, compute, network”.

How about networking? “More is being done. Moving away from storage networks has been a challenge. Not just technically, but organisationally. Finding people who know a bit about everything isn’t easy. Sometimes they stick with the old because of the people. You need a lot of planning to put your IO on the customers’ network. Then you need to automate. We’re still trying to make HCI as robust as traditional architectures”.

And data protection? “Data protection still taking bit of a backseat”.

Are existing VCE customers upset about some of the move away from Cisco? “Generally, if they were moving away from converged solutions, it was more because they’d gained more confidence in HCI, rather than the changing tech or relationships associated with Dell EMC’s CI offering”.

 

Thoughts

This week’s announcements around VxRail and VxRack SDDC weren’t earth-shattering by any stretch, but the thing that sticks in my mind is that Dell EMC continue to iteratively improve the platform and are certainly focused on driving VxRail to be number one in the space. There’s a heck of a lot of competition out there from their good friends at Nutanix, so I’m curious to see how this plays out. When it comes down to it, it doesn’t matter what platform you use to deliver outcomes, the key is that you deliver those outcomes. In the market, it seems the focus is moving more towards how the applications can deliver value, rather than what infrastructure is hosting those applications. This is a great move, but just like serverless needs servers, you still need to think about where your value-adding applications are being hosted. Ideally, you want the data close to the processing, and, depending on the applications, your users need to be close to that data and processing too. Hyper-converged infrastructure can be a really nice solution to leverage when you want to move beyond the traditional storage / compute / network paradigm. You can start small and scale (to a point) as required. Dell EMC’s VxRail and VxRack story is getting better as time goes on.

Datrium Cloud DVX – Not Your Father’s Cloud Data Protection Solution

Disclaimer: I recently attended Storage Field Day 15.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Datrium recently presented at Storage Field Day 15. You can see videos of their presentation here, and download my rough notes from here. Datrium presented on both DVX (Distributed Virtual x) and Cloud DVX. In this article I’m going to focus on Cloud DVX.

 

Cloud DVX

Cloud DVX is “a cloud-native instance of Datrium DVX that offers recovery services for VMs running in DVX on-premises“. They say that Cloud DVX is “Cloud Backup Done Right”. They say it’s super simple to use, highly efficient, and delivers really fast recovery capabilities. They say a lot, so what are some of the use cases?

Consolidated Backup

  • Consolidated backups.
  • Faster access vs. off-site tapes.
  • Long retention – no media mgmt.

 

Recover to On-premises

  • Off-site backups.
  • Cloud as second or third site.
  • Retrieve on-prem for recovery.

 

Recover to Cloud [Future feature]

  • Cloud as the DR site.
  • On-demand DR infrastructure.
  • SAAS-based DR orchestration.

 

Thoughts and Further Reading

Datrium are positioning Cloud DVX as a far more cost-effective solution for cloud backup than simply storing your data on S3. For existing Datrium customers this solution makes a lot of sense. There are great efficiencies to be had through Datrium’s global deduplication and, based on the demo I saw, it looks to be a simple solution to get up and running. Get yourself an AWS account, add your key to your on-premises DVX environment, and set your protection policy. Then let Datrium take care of the rest. Datrium are really keen to make this an “iPhone to iCloud-like” experience.
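
To make the “add your key and set your protection policy” step a little more concrete, here’s a purely hypothetical sketch of what such a policy could look like expressed as data. The field names are mine and don’t reflect Datrium’s actual configuration schema.

```python
# Purely illustrative sketch of a Cloud DVX-style protection policy as data.
# All field names are hypothetical, not Datrium's actual configuration schema.
protection_policy = {
    "aws_account": "<your-account-id>",   # the AWS account Cloud DVX backs up to
    "region": "us-west-2",
    "protected_vms": ["app-*", "db-*"],   # VM name patterns to protect
    "snapshot_interval_hours": 4,
    "cloud_retention_days": 365,          # long retention without media management
}

def apply_policy(policy: dict) -> None:
    """Placeholder: hand the policy to the on-premises DVX environment."""
    print(
        f"Protecting {policy['protected_vms']} to {policy['region']} "
        f"every {policy['snapshot_interval_hours']}h"
    )

apply_policy(protection_policy)
```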

I’ve been working with data protection solutions for some time now. I’m not a greybeard by any stretch but there’s certainly some silver in those sideburns. In my opinion, data protection solutions have been notoriously complicated to deploy in a cost-effective and reliable manner. It often seems like the simple act of protecting critical data (yes, I’m overstating things a bit) has to be made difficult in order for system administrators to feel a sense of accomplishment. The advent of cloud has moved the goalposts again, with a number of solutions being positioned as “cloud-ready” that really just add to the complexity of the solution rather than giving enterprises what they need: a simple and easy way to protect data, and recover it, in a cost-effective fashion. Data protection shouldn’t be the pain in the rear that it is today. And shoving traditional data protection solutions onto platforms such as AWS or Azure and calling them “cloud ready” is disingenuous at best and, at worst, really quite annoying. That’s why something like Cloud DVX, coupled with Datrium’s on-premises solution, strikes me as an elegant solution that could really change the way people protect their traditional applications.

Datrium have plans for the future that involve some on-demand disaster recovery services and other orchestration pieces that will be built around Cloud DVX. Their particular approach to “open” convergence has certainly made some waves in the market and generated a lot of interest. The architecture is not what we’ve come to see in “traditional” converged and hyper-converged systems (can we say traditional for this stuff yet?) and delivers a number of efficiencies in terms of cost and performance that makes for a compelling solution. The company was founded by a lot of pretty smart folks who know a thing or two about data efficiency on storage platforms (amongst other things), so they might just have a chance at making this whole “open convergence” thing work (I need to lay off the air quotes too).

You can grab the datasheet here, a copy of the DVX solution brief here, an article from El Reg here, and a post by Storage Review here. Glenn Dekhayser also did a great article on Datrium that you can read here.

2018 AKA The Year After 2017

I said last year that I don’t do future prediction type posts, and then I did one anyway. This year I said the same thing and then I did one around some Primary Data commentary. Clearly I don’t know what I’m doing, so here we are again. This time around, my good buddy Jason Collier (Founder at Scale Computing) had some stuff to say about hybrid cloud, and I thought I’d wade in and, ostensibly, nod my head in vigorous agreement for the most part. Firstly, though, here’s Jason’s quote:

“Throughout 2017 we have seen many organizations focus on implementing a 100% cloud focused model and there has been a push for complete adoption of the cloud. There has been a debate around on-premises and cloud, especially when it comes to security, performance and availability, with arguments both for and against. But the reality is that the pendulum stops somewhere in the middle. In 2018 and beyond, the future is all about simplifying hybrid IT. The reality is it’s not on-premises versus the cloud. It’s on-premises and the cloud. Using hyperconverged solutions to support remote and branch locations and making the edge more intelligent, in conjunction with a hybrid cloud model, organizations will be able to support highly changing application environments”.

 

The Cloud

I talk to people every day in my day job about what their cloud strategy is, and most people in enterprise environments are telling me that there are plans afoot to go all in on public cloud. No one wants to run their own data centres anymore. No one wants to own and operate their own infrastructure. I’ve been hearing this for the last five years too, and have possibly penned a few strategy documents in my time that said something similar. Whether it’s with AWS, Azure, Google or one of the smaller players, public cloud as a consumption model has a lot going for it. Unfortunately, it can be hard to get stuff working up there reliably. Why? Because no-one wants to spend time “re-factoring” their applications. As a result of this, a lot of people want to lift and shift their workloads to public cloud. This is fine in theory, but a lot of those applications are running crusty versions of Microsoft’s flagship RDBMS, or they’re using applications that are designed for low-latency, on-premises data centres, rather than being addressable over the Internet. And why is this? Because we all spent a lot of the business’s money in the late nineties and early noughties building these systems to a level of performance and resilience that we thought people wanted. Except we didn’t explain ourselves terribly well, and now the business is tired of spending all of this money on IT. And they’re tired of having to go through extensive testing cycles every time they need to do a minor upgrade. So they stop doing those upgrades, and after some time passes, you find that a bunch of key business applications are suddenly approaching end of life and in need of some serious TLC. As a result of this, those same enterprises looking to go cloud first also find themselves struggling mightily to get there. This doesn’t mean public cloud isn’t necessarily the answer, it just means that people need to think things through a bit.

 

The Edge

Another reason enterprises aren’t necessarily lifting and shifting every single workload to the cloud is the concept of data gravity. Sometimes, your applications and your data need to be close to each other. And sometimes that closeness needs to occur closest to the place you generate the data (or run the applications). Whilst I think we’re seeing a shift in the deployment of corporate workloads to off-premises data centres, there are still some applications that need everything close by. I generally see this with enterprises working with extremely large datasets (think geo-spatial stuff or perhaps media and entertainment companies) that struggle to move large amounts of data around in a fashion that is cost-effective and efficient from a time and resource perspective. There are some neat solutions to some of these requirements, such as Scale Computing’s single node deployment option for edge workloads, and X-IO Technologies’ neat approach to moving data from the edge to the core. But physics is still physics.

 

The Bit In Between

So back to Jason’s comment on hybrid cloud being the way it’s really all going. I agree that it’s very much a question of public cloud and on-premises, rather than one or the other. I think the missing piece for a lot of organisations, however, doesn’t necessarily lie in any one technology or application architecture. Rather, I think the key to a successful hybrid strategy sits squarely with the capability of the organisation to provide consistent governance throughout the stack. In my opinion, it’s more about people understanding the value of what their company does, and the best way to help it achieve that value, than it is about whether HCI is a better fit than traditional rackmount servers connected to Fibre Channel fabrics. Those considerations are important, of course, but I don’t think they have the same impact on a company’s potential success as the people and politics do. You can have some super awesome bits of technology powering your company, but if you don’t understand how you’re helping the company do business, you’ll find the technology is not as useful as you hoped it would be. You can talk all you want about hybrid (and you should, it’s a solid strategy) but if you don’t understand why you’re doing what you do, it’s not going to be as effective.

Primary Data – Seeing the Future

It’s that time of year when public relations companies send out a heap of “What’s going to happen in 2018” type press releases for us blogger types to take advantage of. I’m normally reluctant to do these “futures” based posts, as I’m notoriously bad at seeing the future (as are most people). These types of articles also invariably push the narrative in a certain direction based on whatever the vendor being represented is selling. That said I have a bit of a soft spot for Lance Smith and the team at Primary Data, so I thought I’d entertain the suggestion that I at least look at what’s on his mind. Unfortunately, scheduling difficulties meant that we couldn’t talk in person about what he’d sent through, so this article is based entirely on the paragraphs I was sent, and Lance hasn’t had the opportunity to explain himself :)

 

SDS, What Else?

Here’s what Lance had to say about software-defined storage (SDS). “Few IT professionals admit to a love of buzzwords, and one of the biggest offenders in the last few years is the term, “software-defined storage.” With marketers borrowing from the successes of “software-defined-networking”, the use of “SDS” attempts all kinds of claims. Yet the term does little to help most of us to understand what a specific SDS product can do. Despite the well-earned dislike of the phrase, true software-defined storage solutions will continue to gain traction because they try to bridge the gap between legacy infrastructure and modern storage needs. In fact, even as hardware sales declines, IDC forecasts that the SDS market will grow at a rate of 13.5% from 2017 – 2021, growing to a $16.2B market by the end of the forecast period.”

I think Lance raises an interesting point here. There are a lot of companies claiming to deliver software-defined storage solutions in the marketplace. Some of these, however, are still heavily tied to particular hardware solutions. This isn’t always because they need the hardware to deliver functionality, but rather because the company selling the solution also sells hardware. This is fine as far as it goes, but I find myself increasingly wary of SDS solutions that are tied to a particular vendor’s interpretation of what off-the-shelf hardware is.

The killer feature of SDS is the idea that you can do policy-based provisioning and management of data storage in a programmatic fashion, and do this independently of the underlying hardware. Arguably, with everything offering some kind of RESTful API capability, this is the case. But I think it’s the vendors who are thinking beyond simply dishing up NFS mount points or S3-compliant buckets that will ultimately come out on top. People want to be able to run this stuff anywhere – on crappy whitebox servers and in the public cloud – and feel comfortable knowing that they’ll be able to manage their storage based on a set of business-focused rules, not a series of constraints set out by a hardware vendor. I think we’re close to seeing that with a number of solutions, but I think there’s still some way to go.
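
As a generic illustration of what policy-based, programmatic provisioning looks like in practice, here’s a sketch against a hypothetical SDS REST endpoint. The URL and payload fields are invented for the example and don’t belong to any particular vendor’s API.

```python
# Generic illustration of policy-based storage provisioning over REST.
# The endpoint and payload schema are hypothetical, not a real vendor API.
import json
import urllib.request

API = "https://sds.example.com/api/v1/volumes"  # hypothetical SDS control plane

def provision_volume(name: str, size_gb: int, policy: str, token: str) -> dict:
    """Request a volume by business policy (e.g. 'gold') rather than by array/LUN."""
    payload = json.dumps({
        "name": name,
        "size_gb": size_gb,
        "policy": policy,   # performance/protection expressed as a named policy
    }).encode()
    req = urllib.request.Request(
        API,
        data=payload,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read())

# Example: provision_volume("erp-db-01", 500, policy="gold", token="...")
```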

 

HCI As Silo. Discuss.

His thoughts on HCI were, in my opinion, a little more controversial. “Hyperconverged infrastructure (HCI) aims to meet data’s changing needs through automatic tiering and centralized management. HCI systems have plenty of appeal as a fast fix to pay as you grow, but in the long run, these systems represent just another larger silo for enterprises to manage. In addition, since hyperconverged systems frequently require proprietary or dedicated hardware, customer choice is limited when more compute or storage is needed. Most environments don’t require both compute and storage in equal measure, so their budget is wasted when only more CPU or more capacity is really what applications need. Most HCI architecture rely on layers of caches to ensure good storage performance.  Unfortunately, performance is not guaranteed when a set of applications running in a compute node overruns a caches capacity.  As IT begins to custom-tailor storage capabilities to real data needs with metadata management software, enterprises will begin to move away from bulk deployments of hyperconverged infrastructure and instead embrace a more strategic data management role that leverages precise storage capabilities on premises and into the cloud.”

There are a few nuggets in this one that I’d like to look at further. Firstly, the idea that HCI becomes just another silo to manage is an interesting one. It’s true that HCI as a technology is a bit different to the traditional compute / storage / network paradigm that we’ve been managing for the last few decades. I’m not convinced, however, that it introduces another silo of management. Or maybe, what I’m thinking is that you don’t need to let it become another silo to manage. Rather, I’ve been encouraging enterprises to look at their platform management at a higher level, focusing on the layer above the compute / storage / network to deliver automation, orchestration and management. If you build that capability into your environment, then whether you consume compute via rackmount servers, blades or HCI becomes less and less relevant. It’s easier said than done, of course, as it takes a lot of time and effort to get that layer working well. But the sweat investment is worth it.

Secondly, the notion that “[m]ost environments don’t require both compute and storage in equal measure, so their budget is wasted when only more CPU or more capacity is really what applications need” is accurate, but most HCI vendors are offering a way to expand storage or compute now without necessarily growing the other components (think Nutanix with their storage-only nodes and NetApp’s approach to HCI). I’d posit that architectures have changed enough with the HCI market leaders to the point that this is no longer a real issue.

Finally, I’m not convinced that “performance is not guaranteed when a set of applications running in a compute node overruns a caches capacity” is as much of a problem as it was a few years ago. Modern hypervisors have a lot of smarts built into them in terms of service quality and the modelling for capacity and performance sizing has improved significantly.

 

Conclusion

I like Lance, and I like what Primary Data bring to the table with their policy-based SDS solution. I don’t necessarily agree with him on some of these points (particularly as I think HCI solutions have matured a bunch in the last few years) but I do enjoy the opportunity to think about some of these ideas when I otherwise wouldn’t. So what will 2018 bring in my opinion? No idea, but it’s going to be interesting, that’s for sure.

Scale Computing and WinMagic Announce Partnership, Refuse to Sit Still

Scale Computing and WinMagic recently announced a partnership improving the security of Scale’s HC3 solution. I had the opportunity to be briefed by the good folks at Scale and WinMagic and thought I’d provide a brief overview of the announcement here.

 

But Firstly, Some Background

Scale Computing announced their HC3 Cloud Unity offering in late September this year. Cloud Unity, in a nutshell, lets you run embedded HC3 instances in Google Cloud. Coupled with some SD-WAN smarts, you can move workloads easily between on-premises infrastructure and GCP. It enables companies to perform lift and shift migrations, if required, with relative ease, and removes a lot of the complexity traditionally associated with deploying hybrid-friendly workloads in the data centre.

 

So the WinMagic Thing?

WinMagic have been around for quite some time, and offer a range of security products aimed at various sizes of organisation. This partnership with Scale delivers SecureDoc CloudVM as a mechanism for encryption and key management. You can download a copy of the brochure from here. The point of the solution is to provide a secure mechanism for hosting your VMs either on-premises or in the cloud. Key management can be a pain in the rear, and WinMagic provides a fully-featured solution for this that’s easy to use and simple to manage. There’s broad support for a variety of operating environments and clients. Authentication and authorised key distribution take place prior to workloads being deployed, to ensure that the right person is accessing data from an expected place and device, and there’s support for password-only or multi-factor authentication.
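
To illustrate the “authenticate before the workload gets its keys” idea in the simplest possible terms, here’s a toy check of my own devising. It’s not WinMagic’s SecureDoc API, just a sketch of the decision being described.

```python
# Hypothetical sketch of a "release the key only after the right checks pass"
# flow, in the spirit described above. Not WinMagic's SecureDoc API.
def release_key(user_ok: bool, device_ok: bool, location_ok: bool,
                mfa_ok: bool, require_mfa: bool = True) -> bool:
    """Authorise key distribution before the workload is allowed to start."""
    if require_mfa and not mfa_ok:
        return False
    return user_ok and device_ok and location_ok

# Example: password-only policy (require_mfa=False) vs multi-factor policy.
print(release_key(True, True, True, mfa_ok=False, require_mfa=False))  # True
print(release_key(True, True, True, mfa_ok=False, require_mfa=True))   # False
```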

 

Thoughts

Scale Computing have been doing some really cool stuff in the hyperconverged arena for some time now. The new partnership with Google Cloud, and the addition of the WinMagic solution, demonstrates their focus on improving an already impressive offering with some pretty neat features. It’s one thing to enable customers to get to the cloud with relative ease, but it’s a whole other thing to be able to help them secure their assets when they make that move to the cloud.

It’s my opinion that Scale Computing have been the quiet achievers in the HCI marketplace, with reportedly fantastic customer satisfaction and a solid range of products on offer at a very reasonable RRP. Couple this with an intelligent hypervisor platform and the ability to securely host assets in the public cloud, and it’s clear that Scale Computing aren’t interested in standing still. I’m really looking forward to seeing what’s next for them. If you’re after an HCI solution where you can start really (really) small and grow as required, it would be worthwhile having a chat to them.

Also, if you’re into that kind of thing, Scale and WinMagic are hosting a joint webinar on November 28 at 10:30am EST. Registration for the webinar “Simplifying Security across your Universal I.T. Infrastructure: Top 5 Considerations for Securing Your Virtual and Cloud IT Environments, Without Introducing Unneeded Complexity” can be found here.

 

 

Dell EMC VxRail 4.5 – A Few Notes

VxRail 4.5 was announced in May by Dell EMC and I’ve been a bit slow in going through my enablement on the platform. The key benefit (beyond some interesting hardware permutations that were announced) is support for VMware vSphere 6.5 U1 and vSAN 6.6. I thought I’d cover a few of the more interesting aspects of the VxRail platform and core VMware enhancements.

Note that VxRail 4.5 does not support Generation 1 hardware, but it does support G2 and G3 Quanta models, and G3 Dell PowerEdge appliances.

 

VxRail Enhancements

Multi-node Additions

Prior to version 4.5, adding a node to an existing cluster was a bit of a pain: only one node could be added at a time, which could take a while when you had a lot of nodes to add. Now, however (see the sketch after this list):

  • Multiple nodes (up to 6) can be added simultaneously.
  • Each node expansion is a separate process. If one fails, the remaining five will keep going.
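
As an analogy for that “each expansion is its own process” behaviour, here’s a small, purely illustrative sketch using independent tasks, where one failure doesn’t stop the rest. This is my own illustration, not VxRail code.

```python
# Illustrative analogy for independent, simultaneous node expansions where a
# failure in one does not stop the others. This is not VxRail code.
from concurrent.futures import ThreadPoolExecutor, as_completed

def expand_node(node: str) -> str:
    """Placeholder for a single node-expansion workflow."""
    if node == "node-03":                      # simulate one expansion failing
        raise RuntimeError(f"{node}: expansion failed")
    return f"{node}: expansion complete"

nodes = [f"node-0{i}" for i in range(1, 7)]    # up to six nodes at once

with ThreadPoolExecutor(max_workers=6) as pool:
    futures = {pool.submit(expand_node, n): n for n in nodes}
    for fut in as_completed(futures):
        try:
            print(fut.result())
        except RuntimeError as err:            # the remaining expansions keep going
            print(err)
```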

There is now also a node removal procedure, used to decommission old-generation VxRail products and migrate to new-generation VxRail hardware. This is only supported for VxRail 4.0.301 and above, and removal of only one node at a time is recommended.

 

Network Planning

Different VLANs are recommended for vSAN traffic and for management across multiple VxRail clusters.

 

VxRail network topologies use dual top-of-rack (ToR) switches to remove the switch as a single point of failure.

 

vSAN 6.6 Enhancements

Disk Format 5

As I mentioned earlier, VxRail 4.5 introduces support for vSAN 6.6 and disk format 5.

  • All nodes in the VxRail cluster must be running vSAN 6.6 due to the upgraded disk format.
  • The upgrade from disk format 3 to 5 is a metadata-only conversion and data evacuation is not required. Disk format 5 is required for datastore-level encryption (see below).
  • VxRail will automatically upgrade the disk format version to 5 when you upgrade to VxRail 4.5.

 

Unicast Support

Unicast is supported for vSAN communications starting with vSAN 6.6. The idea is to reduce network configuration complexity. There is apparently no performance impact associated with the use of Unicast. vSAN will switch to unicast mode once all hosts in the cluster have been upgraded to vSAN 6.6 and disk format 5. You won’t need to reconfigure the ToR switches to disable multicast features in vSAN.

 

vSAN Data-at-Rest Encryption

vSAN Data-at-Rest Encryption (D@RE) is enabled at the cluster level, supporting hybrid, all-flash, and stretched clusters. Note that it requires an external vCenter and does not support the embedded vCenter. It:

  • Works with all vSAN features, including deduplication and compression.
  • Integrates with all KMIP-compliant key management technologies, including SafeNet, HyTrust, Thales, Vormetric, etc.

When enabling encryption, vSAN performs a rolling reformat of every disk group in the cluster. As such, it is recommended to enable encryption on the vSAN datastore after the initial VxRail deployment. Whilst it’s a matter of ticking a checkbox, it can take a lot of time to complete depending on how much data needs to be migrated about the place.

 

Other Reading

You can read more about vSAN D@RE here. Chad delivered a very useful overview of the VxRail and VxRack updates announced at Dell EMC World 2017 that you can read here.