Primary Data – Seeing the Future

It’s that time of year when public relations companies send out a heap of “What’s going to happen in 2018” type press releases for us blogger types to take advantage of. I’m normally reluctant to do these “futures”-based posts, as I’m notoriously bad at seeing the future (as are most people). These types of articles also invariably push the narrative in a certain direction based on whatever the vendor being represented is selling. That said, I have a bit of a soft spot for Lance Smith and the team at Primary Data, so I thought I’d entertain the suggestion that I at least look at what’s on his mind. Unfortunately, scheduling difficulties meant that we couldn’t talk in person about what he’d sent through, so this article is based entirely on the paragraphs I was sent, and Lance hasn’t had the opportunity to explain himself :)


SDS, What Else?

Here’s what Lance had to say about software-defined storage (SDS). “Few IT professionals admit to a love of buzzwords, and one of the biggest offenders in the last few years is the term, “software-defined storage.” With marketers borrowing from the successes of “software-defined-networking”, the use of “SDS” attempts all kinds of claims. Yet the term does little to help most of us to understand what a specific SDS product can do. Despite the well-earned dislike of the phrase, true software-defined storage solutions will continue to gain traction because they try to bridge the gap between legacy infrastructure and modern storage needs. In fact, even as hardware sales declines, IDC forecasts that the SDS market will grow at a rate of 13.5% from 2017 – 2021, growing to a $16.2B market by the end of the forecast period.”
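
As a quick back-of-the-envelope check on those figures (my own arithmetic, not IDC’s), a 13.5% compound annual growth rate ending at $16.2B in 2021 implies a 2017 starting point of roughly $9.8B:

```python
# Back-of-the-envelope check of the quoted IDC forecast (my arithmetic, not IDC's).
cagr = 0.135        # forecast compound annual growth rate
market_2021 = 16.2  # forecast market size in $B at end of period
years = 4           # 2017 -> 2021

implied_2017_base = market_2021 / (1 + cagr) ** years
print(f"Implied 2017 SDS market: ${implied_2017_base:.1f}B")  # ~ $9.8B
```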

I think Lance raises an interesting point here. There are a lot of companies in the marketplace claiming to deliver software-defined storage solutions. Some of these, however, are still heavily tied to particular hardware platforms. This isn’t always because they need the hardware to deliver functionality, but rather because the company selling the solution also sells hardware. This is fine as far as it goes, but I find myself increasingly wary of SDS solutions that are tied to a particular vendor’s interpretation of what off-the-shelf hardware is.

The killer feature of SDS is the idea that you can do policy-based provisioning and management of data storage in a programmatic fashion, and do this independently of the underlying hardware. Arguably, with everything offering some kind of RESTful API capability, this is the case. But I think it’s the vendors who are thinking beyond simply dishing up NFS mount points or S3-compliant buckets that will ultimately come out on top. People want to be able to run this stuff anywhere – on crappy whitebox servers and in the public cloud – and feel comfortable knowing that they’ll be able to manage their storage based on a set of business-focused rules, not a series of constraints set out by a hardware vendor. I think we’re close to seeing that with a number of solutions, but I think there’s still some way to go.
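
To make that concrete, here’s a minimal sketch of what hardware-independent, policy-based provisioning might look like against a hypothetical RESTful storage API. The endpoint, payload fields and policy names are all my own invention for illustration; they don’t represent Primary Data’s (or any other vendor’s) actual API:

```python
import requests

# Hypothetical SDS control plane endpoint - invented for illustration.
API = "https://sds.example.com/api/v1"

def provision_volume(name, size_gb, policy):
    """Ask the SDS control plane for storage that meets a business policy,
    without specifying which hardware (or cloud) should back it."""
    payload = {
        "name": name,
        "size_gb": size_gb,
        "policy": policy,  # business rules, not hardware constraints
    }
    resp = requests.post(f"{API}/volumes", json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()

# The caller expresses intent; the SDS layer decides placement.
vol = provision_volume(
    name="finance-db-01",
    size_gb=500,
    policy={"performance_tier": "gold", "replicas": 2, "encryption": True},
)
```

The point of the sketch is the shape of the request: the caller describes what the business needs, and the placement decision belongs entirely to the software layer underneath.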


HCI As Silo. Discuss.

His thoughts on HCI were, in my opinion, a little more controversial. “Hyperconverged infrastructure (HCI) aims to meet data’s changing needs through automatic tiering and centralized management. HCI systems have plenty of appeal as a fast fix to pay as you grow, but in the long run, these systems represent just another larger silo for enterprises to manage. In addition, since hyperconverged systems frequently require proprietary or dedicated hardware, customer choice is limited when more compute or storage is needed. Most environments don’t require both compute and storage in equal measure, so their budget is wasted when only more CPU or more capacity is really what applications need. Most HCI architecture[s] rely on layers of caches to ensure good storage performance. Unfortunately, performance is not guaranteed when a set of applications running in a compute node overruns a cache’s capacity. As IT begins to custom-tailor storage capabilities to real data needs with metadata management software, enterprises will begin to move away from bulk deployments of hyperconverged infrastructure and instead embrace a more strategic data management role that leverages precise storage capabilities on premises and into the cloud.”

There are a few nuggets in this one that I’d like to look at further. Firstly, the idea that HCI becomes just another silo to manage is an interesting one. It’s true that HCI as a technology is a bit different to the traditional compute / storage / network paradigm that we’ve been managing for the last few decades. I’m not convinced, however, that it introduces another silo of management. Or maybe what I’m thinking is that you don’t need to let it become another silo to manage. Rather, I’ve been encouraging enterprises to look at their platform management at a higher level, focusing on the layer above compute / storage / network to deliver automation, orchestration and management. If you build that capability into your environment, then whether you consume compute via rackmount servers, blades or HCI becomes less and less relevant. It’s easier said than done, of course, as it takes a lot of time and effort to get that layer working well. But the sweat investment is worth it.
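
Here’s a rough sketch of what I mean by that higher layer (the class and method names are mine, purely illustrative): if provisioning requests all flow through a common interface, the backend underneath becomes a swappable implementation detail rather than a silo.

```python
from abc import ABC, abstractmethod

# Illustrative only: a thin orchestration layer that hides whether the
# platform underneath is HCI, traditional rackmount, or public cloud.
class ComputeBackend(ABC):
    @abstractmethod
    def create_vm(self, name: str, cpus: int, memory_gb: int) -> str:
        """Provision a VM and return its identifier."""

class HCIBackend(ComputeBackend):
    def create_vm(self, name, cpus, memory_gb):
        # call the HCI vendor's management API here
        return f"hci-{name}"

class RackmountBackend(ComputeBackend):
    def create_vm(self, name, cpus, memory_gb):
        # call the hypervisor / bare-metal tooling here
        return f"rack-{name}"

def provision(backend: ComputeBackend, name: str) -> str:
    # Policy, approvals and automation live at this layer;
    # the backend is just an implementation detail.
    return backend.create_vm(name, cpus=4, memory_gb=16)

vm_id = provision(HCIBackend(), "web-01")  # swap backends without changing callers
```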

Secondly, the notion that “[m]ost environments don’t require both compute and storage in equal measure, so their budget is wasted when only more CPU or more capacity is really what applications need” is accurate, but most HCI vendors now offer a way to expand storage or compute without necessarily growing the other (think Nutanix with their storage-only nodes, or NetApp’s approach to HCI). I’d posit that architectures from the HCI market leaders have changed enough that this is no longer a real issue.

Finally, I’m not convinced that “performance is not guaranteed when a set of applications running in a compute node overruns a cache’s capacity” is as much of a problem as it was a few years ago. Modern hypervisors have a lot of smarts built into them in terms of quality of service, and the modelling for capacity and performance sizing has improved significantly.


Conclusion

I like Lance, and I like what Primary Data bring to the table with their policy-based SDS solution. I don’t necessarily agree with him on some of these points (particularly as I think HCI solutions have matured a bunch in the last few years) but I do enjoy the opportunity to think about some of these ideas when I otherwise wouldn’t. So what will 2018 bring in my opinion? No idea, but it’s going to be interesting, that’s for sure.