Disclaimer: I recently attended Storage Field Day 12. My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event. Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.
Here are some notes from Datera’s presentation at Storage Field Day 12. You can view the video here and download my rough notes here.
Hybrid is the New Black
Datera’s Mark Fleischmann spent some time talking to us about the direction Datera think the industry is heading. They’re seeing the adoption of public cloud operations and architecture as the “new IT blueprint”. Ultimately, a move to a “Unified Hybrid Cloud” seems to be the desired end-state for most enterprises, where we’re able to leverage a bunch of different solutions depending on requirements, etc. In my mind it’s not that dissimilar to the focus on “best of breed” that was popular when I first got into the technology industry. It’s a concept that looks great on a slide, but it’s a lot harder to effect than people realise.
According to Datera, the goal is to deliver self-tuning invisible data infrastructure. This provides:
- Policy-based automation;
- High performance;
- Low latency;
- Simple management; and
- Scalability.
For Datera, the key attribute is the policy-based one. I wrote a little about the focus on intent after I saw them at Storage Field Day 10. I still think this is a key part of Datera’s value proposition, but they’ve branched out a bit more and are now also particularly focused on high performance and low latency. Datera are indeed keen to “give people better than public cloud”, and are working on hybrid cloud data management to provide a fabric across public and private clouds.
What do we have now?
So where are we at right now in the enterprise? According to Datera, we have:
- Expensive silos – composed of legacy IT and open source building blocks – neither of which were designed to operate as-a-Service (aaS); and
- Data gravity – where data is restricted in purpose-built silos with the focus on captive data services.
What do we want?
That doesn’t sound optimal. Datera suggest that we’d prefer:
- Automation – with cloud-like data simplicity, scalability and agility, application-defined smart automation, “self-driving” infrastructure; and
- Choice – hybrid data choices of services across clouds, flexibility and options.
Which sounds like something I would prefer. Of course, Datera point out that “[d]ata is the foundation (and the hard part)”. What we really need is a level of simplicity in our infrastructure to match the ease of use we expect from our applications (except Word, that’s not easy to use).
What’s a Hybrid?
So what does this hybrid approach really look like? For Datera, there are a few different pieces to the puzzle.
Multi-cloud Data Fabric
Datera want you to be able to leverage on-premises clouds, but with “better than AWS” data services:
- True scale out with mixed media;
- Multiple tiers of service; and
- 100% operations offload.
You’re probably also interested in enterprise performance and capabilities, such as:
- 10x performance, 1/10 latency;
- Data sovereignty, security and SLOs; and
- Data services platform and ecosystem.
You’ll want all of this wrapped up in cloud operations too, including cloud simplicity and agility:
- Architected to operate as a service;
- Self-tuning, wide price/performance band; and
- Role-based multi-tenancy;
- Multi-customer IaaS operations portal; and
- Predictive data analysis and insights.
So Can Datera Hybrid?
They reckon they can, and I tend to agree. They offer a bunch of features that feel like all kinds of hybrid.
- Heterogeneous node configurations in single cluster (AFN + HFA);
- Deployed on industry standard x86 servers;
- Grow-as-you-grow (node add, replacement, decommission, reconfiguration);
- Single-click cluster-wide upgrade; and
- Online volume expansion, replica reconfiguration.
Policy-based Data Placement
- Multiple service levels – IOPS, latency, bandwidth, IO durability;
- Policy-based data and target port placement;
- All-flash, primary flash replica, or hybrid volumes;
- Application provisioning decoupled from infrastructure management;
- Template-based application deployment; and
- Automated to scale.
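To make the policy idea a bit more concrete, here is a rough sketch of what template-based, policy-driven provisioning could look like. To be clear, the policy names, fields and values below are ones I have made up for illustration; they are not Datera’s actual schema.

```python
# Hypothetical illustration of policy-based provisioning.
# Policy names, fields and values are invented, not Datera's schema.

GOLD = {
    "service_level": {"min_iops": 50000, "max_latency_ms": 1, "durability_replicas": 3},
    "placement": "all-flash",  # all-flash | primary-flash-replica | hybrid
}

BRONZE = {
    "service_level": {"min_iops": 5000, "max_latency_ms": 10, "durability_replicas": 2},
    "placement": "hybrid",
}

def provision_volume(name, size_gb, policy):
    """Return a volume spec derived from an application template and a policy.

    The application asks for a name, a size and a policy; everything else
    (media type, replica count, QoS targets) is derived from the policy.
    That derivation is the decoupling of application provisioning from
    infrastructure management described above.
    """
    return {
        "name": name,
        "size_gb": size_gb,
        "media": policy["placement"],
        "replicas": policy["service_level"]["durability_replicas"],
        "qos": {k: v for k, v in policy["service_level"].items()
                if k in ("min_iops", "max_latency_ms")},
    }

vol = provision_volume("app-db-01", 500, GOLD)
```

The point of the sketch is that changing a volume’s service level becomes a policy swap rather than a manual reconfiguration exercise.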
Native Layer-3 Support
- DC as the failure domain (target port (IP) can move anywhere);
- Scale beyond Layer-2 boundaries; and
- Scale racks without overlay networking.
- Automate around network/power failure domains or programmable availability zones (data/replica distribution, rack awareness); and
- Data services with compute affinity.
- Real-time target port and storage load rebalancing;
- Transparent IP address failover;
- Transparent node failure handling, network link handling; and
- Dynamic run-time load balancing based on workload / system / infrastructure changes.
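As a thought experiment on the transparent IP failover point, here is a simplified sketch of how target IPs might get reassigned when a node fails. The logic is my own invention for illustration, not a description of how Datera actually implement it.

```python
# Hypothetical sketch of transparent IP failover: when a node fails, its
# target IPs are reassigned to the least-loaded surviving nodes, so
# initiators keep talking to the same IP addresses.

def fail_over(assignments, failed_node):
    """assignments: dict of target_ip -> node. Returns new assignments."""
    surviving = [n for n in set(assignments.values()) if n != failed_node]
    # Current IP count per surviving node.
    load = {n: sum(1 for v in assignments.values() if v == n) for n in surviving}
    new = dict(assignments)
    for ip, node in assignments.items():
        if node == failed_node:
            target = min(load, key=load.get)  # least-loaded survivor
            new[ip] = target
            load[target] += 1
    return new

assignments = {
    "10.0.0.1": "node-a",
    "10.0.0.2": "node-b",
    "10.0.0.3": "node-a",
    "10.0.0.4": "node-c",
}
new = fail_over(assignments, "node-a")
```

After the failover, the two orphaned IPs end up spread across node-b and node-c, and the IPs that were already on surviving nodes don’t move.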
- Multi-tenancy for storage resources;
- Micro-segmentation for users/tenants/applications;
- Noisy neighbour isolation through QoS;
- IOPS and bandwidth controls (total, read, write); and
- IP pools, VLAN tagging for network isolation.
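To illustrate the noisy neighbour point, here is a toy admission check using per-tenant IOPS caps (total, read, write). The structure and numbers are invented for illustration; Datera’s actual QoS enforcement will be more sophisticated than this.

```python
# Hypothetical sketch of noisy-neighbour isolation via per-tenant IOPS
# caps, in the spirit of the total/read/write controls listed above.

class QosPolicy:
    def __init__(self, total_iops, read_iops, write_iops):
        self.limits = {"total": total_iops, "read": read_iops, "write": write_iops}

def admit(policy, current, op):
    """current: per-second counters for 'read' and 'write' ops this tenant
    has already issued. Returns True if one more op of type `op` fits
    under both the per-type cap and the total cap."""
    if current[op] + 1 > policy.limits[op]:
        return False
    if current["read"] + current["write"] + 1 > policy.limits["total"]:
        return False
    return True

tenant_policy = QosPolicy(total_iops=100, read_iops=80, write_iops=50)
```

A tenant hammering reads runs into its read cap (and then the total cap) before it can starve its neighbours, which is the isolation behaviour the bullet points describe.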
- API-first DevOps provisioning approach;
- RESTful API with self-describing schema;
- Interactive API browser; and
- Integration with wide eco-system.
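In the spirit of the API-first approach, here is a small sketch of what a provisioning call might look like. Note that the endpoint path, payload fields and tenant naming are all hypothetical; a real integration would work from the vendor’s self-describing REST schema rather than anything I’ve written here.

```python
import json

# Hypothetical sketch of an API-first provisioning call. The endpoint
# path, payload fields and header set are invented for illustration.

def build_create_volume_request(base_url, tenant, name, size_gb, policy_name):
    """Assemble the pieces of an HTTP request to create a volume."""
    return {
        "method": "POST",
        "url": f"{base_url}/v1/tenants/{tenant}/volumes",  # invented path
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"name": name, "size_gb": size_gb, "policy": policy_name}),
    }

req = build_create_volume_request(
    "https://storage.example.com", "acme", "db-01", 200, "gold"
)
```

The appeal of this model is that the same request works from a script, a CI pipeline or an orchestration tool, which is what “API-first DevOps provisioning” is really getting at.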
What Do I Do With This Information?
Cloud Operations & Analytics
Datera also get that you need good information to make good decisions around infrastructure, applications and data. To this end, they offer some quite useful features in terms of analytics and monitoring.
From a system telemetry perspective, you get continuous system monitoring and a multi-cluster view, along with insights into network, system and application performance. Coupled with capacity planning, trending and system inventory information, there’s a bunch of useful data available. Basic monitoring, in terms of failure handling and alerting, is also covered.
Conclusion and Further Reading
It’s not just Datera talking about hybrid solutions; a bunch of companies across a range of technologies are too. Not because it’s necessarily the best approach to infrastructure, but because it takes a lot of the things we like about (modern) cloud operations and applies them to the legacy enterprise infrastructure stack that many of us struggle with on a daily basis.
People like cloud because it’s arguably a better way of working in a lot of cases. People are also getting into the idea of renting a service rather than buying products outright. I don’t fully understand why this shift has taken hold in recent times, although I do understand there can be very good fiscal reasons for doing so. [I do remember being at an event last year where rent versus buy was discussed in broad terms. I will look into that further].
Datera understand this too, and they also understand that “legacy” infrastructure management can be a real pain for enterprises, and that the best answer, as it stands, is some kind of hybrid approach. Datera’s logo isn’t the only thing that’s changed in recent times, and they’ve come an awful long way since I first heard from them at Storage Field Day 10. I’m keen to see how their hybrid approach to infrastructure, data and applications develops in the next 6 – 12 months. At this stage, it seems they have a solid plan and are executing it. Arjan felt the same way, and you can read his article here.