Cloudistics, Choice and Private Cloud

I’ve had my eye on Cloudistics for a little while now. They recently published an interesting post on virtualisation and private cloud, and I thought I’d comment briefly, if for no other reason than to point you in the direction of the post so you can check it out.

TL;DR – I’m rambling a bit, but it’s not about X versus Y, it’s more about getting your people and processes right.

 

Cloud, Schmoud

There are a bunch of different reasons why you’d want to adopt a cloud operating model, be it public, private or hybrid. These include the ability to take advantage of:

  • On-demand service;
  • Broad network access;
  • Resource pooling;
  • Rapid elasticity; and
  • Measured service, or pay-per-use.

Some of these aspects of cloud can be more useful to enterprises than others, depending in large part on where they are in their journey (I hate calling it that). The thing to keep in mind is that cloud is really just a slightly different way of doing things, one that addresses deficiencies in areas that usually aren’t tied to any particular piece of technology. What I mean by that is that cloud is a way of dealing with some of the issues you’ve probably seen in your IT organisation. These include:

  • Poor planning;
  • Complicated network security models;
  • Lack of communication between IT and the business;
  • Applications that don’t scale; and
  • Lack of capacity planning.

Operating Expenditure

These are all difficult problems to solve, primarily because people running IT organisations need to think not just about technology problems, but about people and business problems too. And solving those problems takes resources, something that’s often in short supply. Couple that with the fact that many businesses feel like they’ve been handing too much money to their IT organisations for years, and you start to understand why many enterprises are struggling to adapt to new ways of doing things. One thing public cloud does give you is a way to consume resources via OpEx rather than CapEx. The benefit here is that you’re only paying for what you actually consume, rather than building the whole thing out on the off chance you’ll use it all over the five-year life of the infrastructure. Private cloud can still provide this kind of benefit to the business via “showback” mechanisms that highlight the cost of the infrastructure being consumed by internal business units. Everyone has complained at one time or another about the Finance group having 27 test environments; now IT can let the executives know what that actually costs.
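At its core, a showback report is just metered consumption multiplied by a unit rate per business unit. A minimal sketch of the idea (all rates, metrics and consumption figures below are made up for illustration, not real pricing):

```python
# Hypothetical showback calculation: metered consumption x unit rate, rolled
# up per business unit. Rates and usage figures are entirely illustrative.

RATES = {"vcpu_hours": 0.03, "ram_gb_hours": 0.004, "storage_gb_months": 0.10}

def showback(usage_by_unit):
    """Return the monthly cost attributed to each business unit."""
    report = {}
    for unit, usage in usage_by_unit.items():
        report[unit] = round(
            sum(RATES[metric] * qty for metric, qty in usage.items()), 2
        )
    return report

usage = {
    "Finance": {"vcpu_hours": 20000, "ram_gb_hours": 80000, "storage_gb_months": 5000},
    "HR": {"vcpu_hours": 1500, "ram_gb_hours": 6000, "storage_gb_months": 400},
}
print(showback(usage))  # Finance's 27 test environments suddenly have a price tag
```

The hard part in practice isn’t the arithmetic, it’s the metering: you need per-unit consumption data that everyone trusts before the numbers mean anything.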

Are You Really Cloud Native?

Another issue with moving to cloud is that a lot of enterprises are still leveraging Infrastructure-as-a-Service (IaaS) as an extension of their on-premises capabilities rather than using cloud-native technologies. If you’ve gone with lift and shift (or “move and improve”), you’ve potentially just jammed a bunch of the same problems you had on-premises into someone else’s data centre. The good thing about moving to a cloud operating model (even a private one) is that it gets people (hopefully) used to consuming services from a catalogue and taking responsibility for the footprint they occupy. But if your idea of transformation is running SQL Server 2005 on Windows Server 2003 deployed from VMware vRA, then I think you’ve got a bit of work to do.

 

Conclusion

As Cloudistics point out in their article, it isn’t really a conversation about virtualisation versus private cloud, as virtualisation (in my mind at least) is the platform that makes a lot of what we do nowadays with private cloud possible. What’s more interesting is the private versus public debate. But even that one is no longer as clear cut as vendors would like you to believe. If a number of influential analysts are right, most of the world has started to realise that it’s all about a hybrid approach to cloud. The key benefits of adopting a new way of doing things are more about fixing up the boring stuff, like process. If you think you can get your house in order simply by replacing the technology that underpins it, then you’re in for a tough time.

SwiftStack 6.0 – Universal Access And More

I haven’t covered SwiftStack in a little while, and they’ve been doing some pretty interesting stuff. They made some announcements a while back, but a number of scheduling “challenges” and some hectic day job commitments prevented me from speaking to them about it until just recently. In the end I was lucky enough to snaffle 30 minutes with Mario Blandini, and he kindly took me through the latest news.

 

6.0 Then, So What?

Universal Access

Universal Access is really very cool. Think of it as a way to write data in either file or object format, and then read it back in file or object format, depending on how you need to consume it.

[image courtesy of SwiftStack]

Key features include:

  • Gateway free – the data is stored in cloud-native format in a single namespace;
  • Accessible via file (SMB3 / NFS4) and / or object API (S3 / Swift). Note that this is not a replacement for NAS, but it will give you the ability to work with some of those applications that expect to see file-based access in places; and
  • Applications can write data one way, access the data another way, and vice versa.

The great thing is that, according to SwiftStack, “Universal Access enables applications to take advantage of all data under management, no matter how it was written or where it is stored, without the need to refactor applications”.
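To make the dual-access idea concrete, here’s a rough sketch of what a consumer might do: write data through the file interface (an SMB/NFS mount) and read it back through the S3 API, because both views map into the same namespace. The mount path, bucket name, endpoint URL and path-to-key mapping below are my assumptions for illustration, not SwiftStack’s actual layout:

```python
# Sketch of Universal Access from a consumer's point of view: a file written
# via SMB/NFS is readable as an object via S3, since both interfaces share a
# single namespace. Mount path, bucket, endpoint and key mapping are all
# hypothetical, invented for this example.

MOUNT = "/mnt/swiftstack/projects"   # hypothetical SMB/NFS mount point
BUCKET = "projects"                  # hypothetical bucket in the same namespace

def object_key_for(file_path):
    """Map a path under the file mount to its object key in the shared namespace."""
    if not file_path.startswith(MOUNT + "/"):
        raise ValueError("path is outside the shared namespace")
    return file_path[len(MOUNT) + 1:]

def read_back_as_object(file_path):
    """Read a file that was written via SMB/NFS back through the S3 API."""
    import boto3  # third-party SDK; only needed for the actual object read
    s3 = boto3.client("s3", endpoint_url="https://swiftstack.example.com")
    return s3.get_object(Bucket=BUCKET, Key=object_key_for(file_path))["Body"].read()

# e.g. a file written to /mnt/swiftstack/projects/2018/report.csv would be
# readable as object key "2018/report.csv" in the "projects" bucket.
```

The point being that the application doing the reading never needs to know the data arrived via a file share, which is exactly the “no refactoring” claim above.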

 

Universal Access Multi-Cloud

So what if you take two really neat features like, say, Cloud Sync and Universal Access, and combine them? You get access to a single, multi-cloud storage namespace.

[image courtesy of SwiftStack]

 

Thoughts

As Mario took me through the announcements he mentioned that SwiftStack are “not just an object storage thing based on Swift”, and I thought that was spot on. Universal Access (particularly with multi-cloud) is just the type of solution that enterprises looking to add mobility to their workloads are after. The problem for some time has been that data gets tied up in silos based on the protocol a controller speaks, rather than the value of the data to the business. Products like this go a long way towards relieving some of that pressure on enterprises by enabling simpler access to more data. Being able to spread it across on-premises and public cloud locations also makes for simpler consumption models and can help businesses leverage the data in a more useful way than was previously possible. Add in the usefulness of something like Cloud Sync in terms of archiving data to public cloud buckets and you’ll start to see that these guys are onto something. I recommend you head over to the SwiftStack site and request a demo. You can read the press release here.

Datera – Hybrid Is The New Black

Disclaimer: I recently attended Storage Field Day 12.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

 

Here are some notes from Datera‘s presentation at Storage Field Day 12. You can view the video here and download my rough notes here.

 

Hybrid is the New Black

Datera’s Mark Fleischmann spent some time talking to us about the direction Datera think the industry is heading. They’re seeing the adoption of public cloud operations and architecture as the “new IT blueprint”. Ultimately, a move to a “Unified Hybrid Cloud” seems to be the desired end-state for most enterprises, where we’re able to leverage a bunch of different solutions depending on requirements, etc. In my mind it’s not that dissimilar to the focus on “best of breed” that was popular when I first got into the technology industry. It’s a concept that looks great on a slide, but it’s a lot harder to effect than people realise.

According to Datera, the goal is to deliver self-tuning invisible data infrastructure. This provides:

  • Policy-based automation;
  • High performance;
  • Low latency;
  • Simple management;
  • Scalability; and
  • Agility.

For Datera, the key attribute is the policy-based one. I wrote a little about the focus on intent after I saw them at Storage Field Day 10. I still think this is a key part of Datera’s value proposition, but they’ve branched out a bit and are now also focused on high performance and low latency. Datera are keen to “give people better than public cloud”, and are working on hybrid cloud data management to provide a fabric across public and private clouds.

 

What do we have now?

So where are we at right now in the enterprise? According to Datera, we have:

  • Expensive silos – composed of legacy IT and open source building blocks – neither of which were designed to operate as-a-Service (aaS); and
  • Data gravity – where data is restricted in purpose-built silos with the focus on captive data services.

 

What do we want?

That doesn’t sound optimal. Datera suggest that we’d prefer:

  • Automation – with cloud-like data simplicity, scalability and agility, application-defined smart automation, “self-driving” infrastructure; and
  • Choice – hybrid data choices of services across clouds, flexibility and options.

Which sounds like something I would prefer. Of course, Datera point out that “[d]ata is the foundation (and the hard part)”. What we really need is a level of simplicity that can be applied to our infrastructure in much the same way as our applications are easy to use (except Word, that’s not easy to use).

 

What’s a Hybrid?

So what does this hybrid approach really look like? For Datera, there are a few different pieces to the puzzle.

Multi-cloud Data Fabric

Datera want you to be able to leverage on-premises clouds, but with “better than AWS” data services:

  • True scale out with mixed media
  • Multiple tiers of service
  • 100% operations offload

You’re probably also interested in enterprise performance and capabilities, such as:

  • 10x performance, 1/10 latency
  • Data sovereignty, security and SLOs
  • Data services platform and ecosystem

 

Cloud Operations

You’ll want all of this wrapped up in cloud operations too, including cloud simplicity and agility:

  • Architected to operate as a service;
  • Self-tuning, wide price/performance band; and
  • Role-based multi-tenancy.

Multi-cloud Optionality

  • Multi-customer IaaS operations portal; and
  • Predictive data analysis and insights.

 

So Can Datera Hybrid?

They reckon they can, and I tend to agree. They offer a bunch of features that feel like all kinds of hybrid.

Symmetric Scale-out

  • Heterogeneous node configurations in single cluster (AFN + HFA);
  • Deployed on industry standard x86 servers;
  • Grow-as-you-grow (node add, replacement, decommission, reconfiguration);
  • Single-click cluster-wide upgrade; and
  • Online volume expansion, replica reconfiguration.

 

Policy-based Data Placement

  • Multiple service levels – IOPS, latency, bandwidth, IO durability;
  • Policy-based data and target port placement;
  • All-flash, primary flash replica, or hybrid volumes;
  • Application provisioning decoupled from infrastructure management;
  • Template-based application deployment; and
  • Automated to scale.

 

Infrastructure Awareness

Native Layer-3 Support

  • DC as the failure domain (target port (IP) can move anywhere);
  • Scale beyond Layer-2 boundaries; and
  • Scale racks without overlay networking.

Fault Domains

  • Automate around network/power failure domains or programmable availability zones (data/replica distribution, rack awareness); and
  • Data services with compute affinity.

Self-adaptive System

  • Real-time target port and storage load rebalancing;
  • Transparent IP address failover;
  • Transparent node failure handling, network link handling; and
  • Dynamic run-time load balancing based on workload / system / infrastructure changes.

Multi-Tenancy

  • Multi-tenancy for storage resources;
  • Micro-segmentation for users/tenants/applications;
  • Noisy neighbour isolation through QoS;
  • IOPS and bandwidth controls (total, read, write); and
  • IP pools, VLAN tagging for network isolation.
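Noisy-neighbour isolation through QoS is commonly built on something like a token bucket per tenant: each tenant’s bucket refills at its IOPS limit, and an IO is only admitted if a token is available. A minimal sketch of the general technique (not Datera’s actual mechanism):

```python
# Minimal token-bucket sketch of per-tenant IOPS throttling: the bucket
# refills at the tenant's IOPS limit, and each IO consumes one token. This
# illustrates the general QoS technique, not Datera's implementation.

class IopsLimiter:
    def __init__(self, iops_limit):
        self.rate = iops_limit            # tokens added per second
        self.capacity = float(iops_limit) # bucket never holds more than 1s of budget
        self.tokens = float(iops_limit)   # start with a full bucket

    def refill(self, elapsed_seconds):
        self.tokens = min(self.capacity, self.tokens + self.rate * elapsed_seconds)

    def admit(self):
        """Admit one IO if the tenant still has budget, else defer it."""
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

finance = IopsLimiter(iops_limit=3)
admitted = [finance.admit() for _ in range(5)]  # only the first 3 IOs get through
```

A burst from one tenant drains only that tenant’s bucket, which is what keeps the neighbours quiet.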

API-driven Programmable

  • API-first DevOps provisioning approach;
  • RESTful API with self-describing schema;
  • Interactive API browser; and
  • Integration with wide eco-system.
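The API-first point is worth a quick illustration: the same volume definition an operator would click through gets expressed as a single REST call instead. The endpoint path, header name and payload fields below are hypothetical stand-ins, not Datera’s actual API schema:

```python
# Sketch of API-first provisioning: building a REST request to create a
# volume. The endpoint path, auth header and payload fields are hypothetical,
# invented for illustration of the pattern.
import json
from urllib.request import Request

def build_create_volume_request(base_url, token, name, size_gb, policy):
    """Construct (but don't send) a POST request describing the desired volume."""
    payload = {"name": name, "size_gb": size_gb, "policy": policy}
    return Request(
        url=f"{base_url}/v2/volumes",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Auth-Token": token, "Content-Type": "application/json"},
        method="POST",
    )

req = build_create_volume_request(
    "https://datera.example.com/api", "abc123", "oracle-01", 500, "gold"
)
# urllib.request.urlopen(req) would submit it; here we only build the request.
```

Because provisioning is just a request payload, the same call slots straight into whatever DevOps tooling you already run, which is the whole point of the API-first approach.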

 

What Do I Do With This Information?

Cloud Operations & Analytics

Datera also get that you need good information to make good decisions around infrastructure, applications and data. To this end, they offer some quite useful features in terms of analytics and monitoring.

From a system telemetry perspective, you get continuous system monitoring and a multi-cluster view. You also get insights into network performance and system / application performance. Coupled with capacity planning and trending information and system inventory information there’s a bunch of useful data available. The basic monitoring in terms of failure handling and alerting is also covered.

 

Conclusion and Further Reading

It’s not just Datera that are talking about hybrid solutions. A bunch of companies across a range of technologies are talking about it. Not because it’s necessarily the best approach to infrastructure, but rather because it takes a bunch of the nice things we like about (modern) cloud operations and manages to apply them to the legacy enterprise infrastructure stack that a lot of us struggle with on a daily basis.

People like cloud because it’s arguably a better way of working in a lot of cases. People are getting into the idea of renting a service versus buying products outright. I don’t fully understand why things have developed this way in recent times, although I do understand there can be very good fiscal reasons for doing so. [I do remember being at an event last year where rent versus buy was discussed in broad terms. I will look into that further].

Datera understand this too, and they also understand that “legacy” infrastructure management can be a real pain for enterprises, and that the best answer, as it stands, is some kind of hybrid approach. Datera’s logo isn’t the only thing that’s changed in recent times, and they’ve come an awfully long way since I first heard from them at Storage Field Day 10. I’m keen to see how their hybrid approach to infrastructure, data and applications develops over the next 6 – 12 months. At this stage, it seems they have a solid plan and are executing on it. Arjan felt the same way, and you can read his article here.

Brisbane VMUG – November 2016


The November edition of the Brisbane VMUG meeting will be held on Thursday 17th November at the EMC office from 2 – 4pm (that’s right, the same day as Ikea opens at North Lakes). It’s sponsored by VMware and promises to be a great afternoon.

Here’s the agenda:

  • VMUG Intro
  • VMware Presentation: My favourite VCDX Michael Francis will be presenting a deep dive on the design considerations and the realities of extending your data centre to VMware vCloud Air with VMware Hybrid Cloud Manager.
  • Q&A
  • Refreshments.

Michael will be presenting a tonne of technical content using real world use cases and it should be a great session. You can find out more information and register for the event here. I hope to see you there. This will be the last session for the year, with our next event hopefully being a social event in December.
