Datrium Announces CloudShift

I recently had the opportunity to speak to Datrium's Brian Biles and Craig Nunes about their CloudShift announcement and thought it was worth covering some of the highlights here.


DVX Now

Datrium have had a scalable protection tier and a focus on performance since their inception.

[image courtesy of Datrium]

The “mobility tier”, in the form of Cloud DVX, has been around for a little while now. It’s simple to consume (via SaaS), yields decent deduplication results, and the Datrium team tells me it also delivers fast RTO. There’s also solid support for moving data between DCs with the DVX platform. This all sounds like the foundation for something happening in the hybrid space, right?


And Into The Future

Datrium pointed out that disaster recovery has traditionally been a good way of finding out where a lot of the problems exist in your data centre. There's nothing like failing a failover to understand where the integration points in your on-premises infrastructure are lacking. Disaster recovery needs to be a seamless, integrated process, but data centres are still built on various silos of technology. People are still using clouds for a variety of reasons, and some clouds do some things better than others. It's easy to pick and choose what you need to get things done. This has been one of the big advantages of public cloud and a large reason for its success. One side effect, however, is that the silos are now moving to the cloud, even as they persist in the DC.

With this in mind, Datrium are looking to develop a solution that delivers on the theme of “Run. Protect. Any Cloud”. The idea is simple: an orchestrated DR offering that makes failover and failback a painless undertaking. Datrium tell me they've been big supporters of VMware's SRM product, but have observed that an orchestration-only layer can be problematic: adapters have issues from time to time, and managing the solution can be complicated. With CloudShift, Datrium are taking a vertical stack approach, positioning it as a DR orchestrator delivered as a SaaS offering. Note that it only works with Datrium.

[image courtesy of Datrium]

The idea behind CloudShift is pretty neat. With Cloud DVX you can already back up VMs to AWS using S3 and EC2. The idea is that you can leverage data already in AWS to fire up VMs on AWS (using on-demand instances of VMware Cloud on AWS) to provide temporary disaster recovery capability. The good thing about this is that converting your VMware VMs to someone else's cloud is no longer a problem you need to resolve. You'll need to have a relationship with AWS in the first place – it won't be as simple as entering your credit card details and firing up an instance. But it certainly seems a lot simpler than keeping an entire secondary infrastructure in place, and dealing with the conversion problems inherent in going from vSphere to KVM and other virtualisation platforms.

[image courtesy of Datrium]

Failover and failback are fairly straightforward as well, requiring the following steps (sketched in code below):

  1. Backup to Cloud DVX / S3 – This is ongoing and happens in the background;
  2. Failover required – the CloudShift runbook is initiated;
  3. Restart VM groups on VMC – VMs are rehydrated from data in S3; and
  4. Failback to on-premises – CloudShift reverses the process with deltas using change block tracking.
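
To make that flow concrete, here's a minimal sketch of what an orchestrated runbook along these lines might look like. The function names and objects are entirely hypothetical – this is my illustration of the sequence, not Datrium's actual implementation:

```python
# Hypothetical sketch of an orchestrated DR runbook - not Datrium's API.

def run_failover(vm_groups, s3_backups, vmc_cluster):
    """Fail workloads over to on-demand VMware Cloud on AWS capacity."""
    for group in vm_groups:
        backup = s3_backups.latest(group)      # data is already in S3 via Cloud DVX
        vms = vmc_cluster.rehydrate(backup)    # rebuild the VMs from S3 data
        vmc_cluster.power_on(vms, order=group.boot_order)

def run_failback(vm_groups, vmc_cluster, on_prem_dvx):
    """Reverse the process, shipping only changed blocks back on-premises."""
    for group in vm_groups:
        deltas = vmc_cluster.changed_blocks(group)  # change block tracking keeps this small
        on_prem_dvx.apply(deltas)
        on_prem_dvx.power_on(group)
        vmc_cluster.decommission(group)        # stop paying for on-demand instances
```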

It's being pitched as a very simple way to run DR, an activity that has been notoriously stressful in the past.


Thoughts and Further Reading

CloudShift is targeted for release in the first half of 2019. The economic argument for DRaaS in the cloud is a strong one. People love the idea that they can access the facility on-demand, rather than having passive infrastructure doing nothing on the off chance that it will be required. There's obviously some additional cost when you need to use on-demand rather than reserved resources, but this is still potentially cheaper than standing up and maintaining your own secondary DC presence.
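
To put some (entirely made-up) numbers around that trade-off, the back-of-envelope maths looks something like this – substitute your own quotes and run rates before drawing any conclusions:

```python
# All figures are hypothetical - plug in your own numbers.
secondary_dc_per_year = 500_000      # passive DR site: space, power, hardware, support

on_demand_hourly = 50                # burst compute while a DR event is actually running
dr_events_per_year = 1
dr_duration_hours = 72
backup_storage_per_year = 60_000     # object storage for the ongoing backups

draas_per_year = (on_demand_hourly * dr_duration_hours * dr_events_per_year
                  + backup_storage_per_year)

print(f"Passive DC: ${secondary_dc_per_year:,}/yr vs DRaaS: ${draas_per_year:,}/yr")
# Passive DC: $500,000/yr vs DRaaS: $63,600/yr
```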

Datrium are focused on keeping inherently complex activities like DR simple. I'll be curious to see whether they're successful with this approach. The great thing about a generic orchestration framework like VMware SRM is that you can use a number of different vendors in the data centre and not have a huge problem with interoperability. The downside is that this broader ecosystem can leave you exposed to problems with individual components in the solution. Datrium are taking a punt that their customers are going to see the advantages of having an integrated approach to leveraging on-demand services. I'm constantly astonished that people don't get more excited about DRaaS offerings. It's really cool that you can get this level of protection without having to invest a tonne in running your own passive infrastructure. If you'd like to read more about CloudShift, there's a blog post that sheds some more light on the solution on Datrium's site, and you can grab a white paper here too.

Nexsan Announces Assureon Cloud Transfer

Announcement

Nexsan announced Cloud Transfer for their Assureon product a little while ago. I recently had the chance to catch up with Gary Watson (Founder / CTO at Nexsan) and thought it would be worth covering the announcement here.


Assureon Refresher

Firstly, though, it might be helpful to look at what Assureon actually is. In short, it’s an on-premises storage archive that offers:

  • Long term archive storage for fixed content files;
  • Dependable file availability, with files being audited every 90 days;
  • Unparalleled file integrity; and
  • A “policy” system for protecting and stubbing files.

Notably, there is always a primary archive and a DR archive included in the price. No half-arsing it here – which is something that really appeals to me. Assureon also doesn't have a “delete” key as such – files are only removed based on defined Retention Rules. This is great, assuming you set up your policies sensibly in the first place.
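
As a toy illustration of the idea (my sketch, not Assureon's implementation), removal becomes a function of policy rather than of someone pressing a key:

```python
from datetime import datetime, timedelta

# Toy sketch of policy-driven retention - not Assureon's actual logic.
AUDIT_INTERVAL = timedelta(days=90)   # files are audited every 90 days

def can_remove(ingested_at, retention_days, now=None):
    """A file is only removable once its Retention Rule has expired."""
    now = now or datetime.utcnow()
    return now >= ingested_at + timedelta(days=retention_days)

def audit_due(last_audited_at, now=None):
    """Flag files whose periodic integrity audit has come around again."""
    now = now or datetime.utcnow()
    return now - last_audited_at >= AUDIT_INTERVAL
```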


Assureon Cloud Transfer

Cloud Transfer provides the ability to move data between on-premises and cloud instances. The idea is that it will:

  • Provide reliable and efficient cloud mobility of archived data between cloud server instances and between cloud vendors; and
  • Optimise cloud storage and backup costs by offloading cold data to an on-premises archive.

It’s being positioned as useful for clients who have a large unstructured data footprint on public cloud infrastructure and are looking to reduce their costs for storing data up there. There’s currently support for Amazon AWS and Microsoft Azure, with Google support coming in the near future.

[image courtesy of Nexsan]

There's stub support for those applications that support it. There's also an optional NFS / SMB interface that can be configured in the cloud as an Assureon archiving target that caches hot files and stubs cold files. This is useful for those non-Windows applications that have a lot of unstructured data that could be moved to an archive.
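
Conceptually, the hot/cold split looks something like the sketch below. The threshold and helper functions are mine for illustration – the real product will have its own policies:

```python
import os
import time

COLD_AFTER_DAYS = 30    # illustrative threshold only

def tier_file(path, archive):
    """Keep hot files cached locally; replace cold ones with a stub."""
    age_days = (time.time() - os.path.getatime(path)) / 86400
    if age_days > COLD_AFTER_DAYS:
        archive.upload(path)   # the full copy lands in the Assureon archive
        make_stub(path)        # the local file becomes a pointer to the archive
    # hot files stay in the cache untouched

def make_stub(path):
    # Hypothetical helper: a stub keeps the name and metadata but not the
    # data blocks; opening it triggers a recall from the archive.
    ...
```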


Thoughts and Further Reading

The concept of dedicated archiving hardware and software bundles, particularly ones that live on-premises, might seem a little odd to some folks who spend a lot of time failing fast in the cloud. There are plenty of enterprises, however, that would benefit from the level of rigour that Nexsan have wrapped around the Assureon product. It's my strong opinion that too many people still don't understand the difference between backup and recovery on the one hand, and archive data on the other. The idea that you need to take archive data and make it immutable (and available) for a long time has great appeal, particularly for organisations getting slammed with a whole lot of compliance legislation. Vendors have been talking about reducing primary storage use for years, but there seems to have been some pushback from companies not wanting to invest in these solutions. It's possible that this was also a result of some kludgy implementations that struggled to keep up with the demands of the users. I can't speak for the performance of the Assureon product, but I like the fact that it's sold as a pair, with a lot of the decision-making around protection taken away from the end user. As someone who worked in an organisation that liked to cut corners on this type of thing, it's nice to see that.

But why would you want to store stuff on-premises? Isn’t everyone moving everything to the cloud? No, they’re not. I don’t imagine that this type of product is being pitched at people running entirely in public cloud. It’s more likely that, if you’re looking at this type of solution, you’re probably running a hybrid setup, and still have a footprint in a colocation facility somewhere. The benefit of this is that you can retain control over where your archived data is placed. Some would say that’s a bit of a pain, and an unnecessary expense, but people familiar with compliance will understand that business is all about a whole lot of wasted expense in order to make people feel good. But I digress. Like most on-premises solutions, the Assureon offering compares well with a public cloud solution on a $/GB basis, assuming you’ve got a lot of sunk costs in place already with your data centre presence.

The immutability story is also a pretty good one when you start to think about organisations that have been hit by ransomware in the last few years. That stuff might roll through your organisation like a hot knife through butter, but it won’t be able to do anything with your archive data – that stuff isn’t going anywhere. Combine that with one of those fancy next generation data protection solutions and you’re in reasonable shape.

In any case, I like what the Assureon product offers, and am looking forward to seeing Nexsan move beyond its current Windows-only platform support. You can read the Nexsan Assureon Cloud Transfer press release here. David Marshall covered the announcement over at VMblog and ComputerWeekly.com did an article as well.

Cloudistics, Choice and Private Cloud

I've had my eye on Cloudistics for a little while now. They published an interesting post recently on virtualisation and private cloud. It makes for an interesting read, and I thought I'd comment briefly, if for no other reason than to point you at the post so you can check it out.

TL;DR – I’m rambling a bit, but it’s not about X versus Y, it’s more about getting your people and processes right.


Cloud, Schmoud

There are a bunch of different reasons why you’d want to adopt a cloud operating model, be it public, private or hybrid. These include the ability to take advantage of:

  • On-demand self-service;
  • Broad network access;
  • Resource pooling;
  • Rapid elasticity; and
  • Measured service, or pay-per-use.

Some of these aspects of cloud can be more useful to enterprises than others, depending in large part on where they are in their journey (I hate calling it that). The thing to keep in mind is that cloud is really just a way of doing things slightly differently in order to address deficiencies that are usually not tied to one particular piece of technology. What I mean by that is that cloud is a way of dealing with some of the issues that you've probably seen in your IT organisation. These include:

  • Poor planning;
  • Complicated network security models;
  • Lack of communication between IT and the business;
  • Applications that don’t scale; and
  • Lack of capacity planning.

Operating Expenditure

These are all difficult problems to solve, primarily because people running IT organisations need to be thinking not just about technology problems, but also people and business problems. And solving those problems takes resources, something that's often in short supply. Couple that with the fact that many businesses feel like they've been handing out too much money to their IT organisations for years, and you start to understand why many enterprises are struggling to adapt to new ways of doing things. One thing that public cloud does give you is a way to consume resources via OpEx rather than CapEx. The benefit here is that you're only consuming what you need, and not paying for the whole thing to be built out on the off chance you'll use it all over the five-year life of the infrastructure. Private cloud can still provide this kind of benefit to the business via “showback” mechanisms that can really highlight the cost of infrastructure being consumed by internal business units. Everyone has complained at one time or another about the Finance group having 27 test environments; now they can let the executives know just what that actually costs.
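
Showback doesn't have to be sophisticated to start that conversation. Even something as simple as the sketch below (with illustrative rates and records) puts a dollar figure against each business unit:

```python
from collections import defaultdict

RATE_PER_VCPU = 25.0    # $/month - illustrative rate only
RATE_PER_GB = 0.10      # $/month - illustrative rate only

usage = [
    {"bu": "Finance", "env": "test-01", "vcpus": 16, "storage_gb": 2000},
    {"bu": "Finance", "env": "test-02", "vcpus": 8,  "storage_gb": 500},
    {"bu": "HR",      "env": "prod",    "vcpus": 4,  "storage_gb": 250},
]

costs = defaultdict(float)
for record in usage:
    costs[record["bu"]] += (record["vcpus"] * RATE_PER_VCPU
                            + record["storage_gb"] * RATE_PER_GB)

for bu, cost in sorted(costs.items()):
    print(f"{bu}: ${cost:,.2f}/month")
```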

Are You Really Cloud Native?

Another issue with moving to cloud is that a lot of enterprises are still looking to leverage Infrastructure-as-a-Service (IaaS) as an extension of on-premises capabilities rather than using cloud-native technologies. If you've gone with lift and shift (or “move and improve“) you've potentially just jammed a bunch of the same problems you had on-premises into someone else's data centre. The good thing about moving to a cloud operating model (even if it's private) is that you'll (hopefully) get people used to consuming services from a catalogue, and taking responsibility for the size of the footprint they occupy. But if your idea of transformation is running SQL 2005 on Windows Server 2003 deployed from VMware vRA then I think you've got a bit of work to do.


Conclusion

As Cloudistics point out in their article, it isn't really a conversation about virtualisation versus private cloud, as virtualisation (in my mind at least) is the platform that makes a lot of what we do nowadays with private cloud possible. What is more interesting is the private versus public debate. But even that one is no longer as clear cut as vendors would like you to believe. If a number of influential analysts are right, most of the world has started to realise that it's all about a hybrid approach to cloud. The key benefits of adopting a new way of doing things are more about fixing up the boring stuff, like process. If you think you can get your house in order simply by replacing the technology that underpins it, then you're in for a tough time.

SwiftStack Announces 1space

SwiftStack recently announced 1space, and I was lucky enough to snaffle some time with Joe Arnold to talk more about what it all means. I thought it would be useful to do a brief post, as I really do like SwiftStack, and I feel like I don’t spend enough time talking about them.


The Announcement

So what exactly is 1space? It’s basically SwiftStack delivering access to their storage across both on-premises and public cloud. But what does that mean? Well, you get some cool features as a result, including:

  • Integrated multi-cloud access
  • Scale-out & high-throughput data movement
  • Highly reliable & available policy execution
  • Policies for lifecycle, data protection & migration
  • Optional, scale-out containers with AWS S3 support
  • Native access in public cloud (direct to S3, GCS, etc.)
  • Data created in public cloud accessible on-premises
  • Native format enabling cloud-native services

[image courtesy of SwiftStack]

According to Arnold, one of the really cool things about this is that it “provides universal access over both file protocols and object APIs to a single storage namespace; it is increasingly used for distributed workflows across multiple geographic regions and multiple clouds”.
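
Because the namespace is exposed via the S3 API, existing S3 tooling can point straight at it. Here's a rough boto3 example – the endpoint, credentials and bucket name are placeholders for your own deployment:

```python
import boto3

# Placeholder endpoint and credentials for a SwiftStack deployment.
s3 = boto3.client(
    "s3",
    endpoint_url="https://swiftstack.example.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Objects written on-premises or in the public cloud appear in the one namespace.
for obj in s3.list_objects_v2(Bucket="shared-namespace").get("Contents", []):
    print(obj["Key"], obj["Size"])
```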


Metadata Search

But wait, there's more …

One of the really nice things that SwiftStack has done is add integrated metadata search via a desktop client for Windows, macOS, and Linux. It’s called MetaSync.


Thoughts

This has been a somewhat brief post, but something I did want to focus on was the fact that this product has been open-sourced. SwiftStack have been pretty keen on open source as a concept, and I think that comes through when you have a look at some of their contributions to the community. These contributions shouldn’t be underestimated, and I think it’s important that we call out when vendors are contributing to the open source community. Let’s face it, a whole lot of startups are taking advantage of code generated by the open source community, and a number of them have the good sense to know that it’s most certainly a two-way street, and they can’t relentlessly pillage the community without it eventually falling apart.

But this announcement isn’t just me celebrating the contributions of neckbeards from within the vendor community and elsewhere. SwiftStack have delivered something that is really quite cool. In much the same way that storage types won’t shut up about NVMe over Fabrics, cloud folks are really quite enthusiastic about the concept of multi-cloud connectivity. There are a bunch of different use cases where it makes sense to leverage a universal namespace for your applications. If you’d like to see SwiftStack in action, check out this YouTube channel (there’s a good video about 1space here) and if you’d like to take SwiftStack for a spin, you can do that here.

Druva Announces CloudRanger Acquisition

Announcement

Druva recently announced that they’ve acquired CloudRanger. I had the opportunity to catch up with W. Curtis Preston about the news recently and thought I’d cover it briefly here.


What’s A CloudRanger?

Here’s the high-level view of the company:

  • Founded in 2016
  • Headquartered in Donegal, Ireland
  • 300+ Global Customers
  • 3x Growth in last 6 months
  • 100% Cloud native ‘as-a-Service’
  • Pay as you go pricing model
  • Biggest client creating 4,000 snapshots per day


Why CloudRanger?

Agentless Service

  • API account IAM access ensures greater customer account security (see the snippet below)
  • Leverages AWS Quiescing capabilities
  • No account proxies (No additional costs, increased security)
  • No software needed to be updated
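
“Agentless” here means CloudRanger drives protection through AWS's own APIs under an IAM role, rather than via software on the instance. The underlying primitive is the same one you can call yourself with boto3 – this is a generic AWS example, not CloudRanger's code:

```python
import boto3

# Generic AWS example - an EBS snapshot taken via API, no agent required.
ec2 = boto3.client("ec2", region_name="us-east-1")

snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",        # placeholder volume ID
    Description="Nightly policy snapshot",
)
print(snapshot["SnapshotId"], snapshot["State"])
```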

Broadest service coverage

  • Amazon EC2, EBS, RDS & RedShift
  • Automated Disaster Recovery (ADR)
  • Server scheduling for Amazon EC2 & RDS
  • SaaS-based solution, compared to a CPM server-based approach
  • Easy to use platform for managing multiple AWS accounts
  • Featured SaaS product in AWS Marketplace available via SaaS contracts

Consumption Based Pricing Model

  • Pay as you go with full insight into data usage for cost predictability


A Good Fit

So where does CloudRanger fit in the broader Druva story? You'll notice in the picture below that Apollo is missing. The main reason for the acquisition, as best I can tell, is that CloudRanger gives Druva the capability they were after with Apollo, but in a much shorter timeframe.

[image courtesy of Druva]


Thoughts

Customers want a lot of different things from their software vendors, particularly when it comes to data protection. Many companies have particular needs, and infrastructure protection is a complicated beast at the best of times. Sometimes it makes sense to try and develop these features for your customers. And sometimes it makes sense to go out and acquire those features. In this case, Druva have realised that CloudRanger gets them to a point in their product development far quicker than they may have gotten to under their own steam. The point of this acquisition isn't that the good folks at Druva don't have the chops to deliver what CloudRanger does; rather, it frees them up to move on to other platform enhancements. This does assume that the acquisition will go smoothly, but given that this doesn't appear to be a hostile takeover, I'm assuming that part will go well.

Druva have done a lot of cool stuff recently, and I do like their approach to data protection (management?) that has differentiated itself from some of the more traditional approaches in the marketplace. CloudRanger gives them solid capability with AWS workloads, and I imagine Azure will be on the radar as well. I’m looking forward to seeing how this plays out, and what impact it has on some of their competitors in the space.

Cohesity – Cloud Edition for Azure – A Few Notes

I deployed Cohesity Cloud Edition in Microsoft Azure recently and took a few notes. I’m the first to admit that I’m completely hopeless when it comes to fumbling my way about Azure, so this probably won’t seem as convoluted a process to you as it did to me. If you have access to the documentation section of the Cohesity support site, there’s a PDF you can download that explains everything. I won’t go into too much detail but there are a few things to consider. There’s also a handy solution brief on the Cohesity website that sheds a bit more light on the solution.


Process

The installation requires a Linux VM to be set up in Azure (a small one – DS1_V2 Standard). Just like in the physical world, you need to think about how many nodes you want to deploy in Azure (this will be determined largely by how much you're trying to protect). As part of the setup you edit a Cohesity-provided JSON file with a whole bunch of cool stuff like Application IDs, Keys and Tenant IDs (a sketch of the shape of this file follows the field descriptions below).

Subscription ID

Specify the subscription ID for the subscription used to store the resources of the Cohesity Cluster.

WARNING: The subscription account must have owner permissions for the specified subscription.

Application ID

Specify the Application ID assigned by Azure during the service principal creation process.

Application Key

Specify the Application key generated by Azure during the service principal creation process that is used for authentication.

Tenant ID

Specify the unique Tenant ID assigned by Azure.
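
Pulling those fields together, the relevant part of the JSON might look something like this. The field names here are illustrative, based on the descriptions above – check the Cohesity PDF for the actual schema:

```json
{
  "subscriptionId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
  "applicationId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
  "applicationKey": "<key generated during service principal creation>",
  "tenantId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
}
```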

The Linux VM then goes off and builds the cluster in the location you specify with the details you’ve specified. If you haven’t done so already, you’ll need to create a Service Principal as well. Microsoft has some useful documentation on that here.


Limitations

One thing to keep in mind is that, at this stage, “Cohesity does not support the native backup of Microsoft Azure VMs. To back up a cloud VM (such as a Microsoft Azure VM), install the Cohesity agent on the cloud VM and create a Physical Server Protection Job that backs up the VM”. So you'll see that, even if you add Azure as a source, you won't be able to perform VM backups in the same way you would with vSphere workloads, as “Cloud Edition only supports registering a Microsoft Azure Cloud for converting and cloning VMware VMs. The registered Microsoft Azure Cloud is where the VMs are cloned to”. This is the same across most public cloud platforms, as Microsoft, Amazon and friends aren't terribly interested in giving out that kind of access to the likes of Cohesity or Rubrik. Still, if you've got the right networking configuration in place, you can back up your Azure VMs either to the Cloud Edition or to an on-premises instance (if that works better for you).


Thoughts

I’m on the fence about “Cloud Editions” of data protection products, but I do understand why they’ve come to be a thing. Enterprises have insisted on a lift and shift approach to moving workloads to public cloud providers and have then panicked about being able to protect them, because the applications they’re running aren’t cloud-native and don’t necessarily work well across multiple geos. And that’s fine, but there’s obviously an overhead associated with running cloud editions of data protection solutions. And it feels like you’re just putting off the inevitable requirement to re-do the whole solution. I’m all for leveraging public cloud – it can be a great resource to get things done effectively without necessarily investing a bunch of money in your own infrastructure. But you need to re-factor your apps for it to really make sense. Otherwise you find yourself deploying point solutions in the cloud in order to avoid doing the not so cool stuff.

I’m not saying that this type of solution doesn’t have a place. I just wish it didn’t need to be like this sometimes …

What’s The Buzz About StorageOS?

I wrote about StorageOS almost twelve months ago, and recently had the opportunity to catch up with Chris Brandon about what StorageOS have been up to. They’ve been up to a fair bit as it happens, so I thought I’d share some of the details here.


The Announcement

What’s StorageOS? According to Brandon it’s “[a] software-defined, scale-out/up storage platform for running enterprise containerized applications in production”. The “buzz” is that StorageOS is now generally available for purchase and they’ve secured some more funding.


Cloud Native Storage, Eh?

StorageOS have come up with some thinking around the key tenets of cloud native storage. To wit, it needs to be:

  • Application Centric;
  • Application Platform Agnostic;
  • Declarative and Composable;
  • API Driven and Self-Managed;
  • Agile;
  • Natively Secure;
  • Performant; and
  • Consistently Available.


What Can StorageOS Do For Me?

According to Brandon, StorageOS offers a number of benefits:

  • It’s Enterprise Class – so you can keep your data safe and available;
  • Policy Management allows you to enforce policies and rules while still enabling storage self-service by developers and DevOps teams;
  • Deploy It Anywhere – cloud, VM or server – you decide;
  • Data Services – Replication for HA, data reduction, storage pooling and agility to scale up or scale out based on application requirements;
  • Performance – Optimised to give you the best performance from your platform;
  • Cost-Effective Pricing – Only pay for the storage you use. Lower OpEx and CapEx;
  • Integrated Storage – Integrated into your favorite platforms with extensible plugins and APIs; and
  • Made Easy – Automated configuration and simple management.


Architecture

There is a container installed on each node and this runs both the data plane and control plane.

Data Plane

  • Manages data access requests
  • Pools aggregated storage for presentation
  • Runs as a container

Control Plane

  • Manages config, health, scheduling, policy, provisioning and recovery
  • API is accessed by plugins, CLI, GUI
  • Runs as a container

Containers are also used to create a highly available storage pool.

[Image courtesy of StorageOS]


Thoughts And Further Reading

StorageOS secured some funding recently and have moved their headquarters from London to New York. They're launching at KubeCon, Red Hat Summit and DockerCon. They have a number of retail and media customers and are working closely with strategic partners. They'll initially be shipping the Enterprise version, and there is a Professional version on the way. They are also committed to always having a free version available for developers to try it out (this is capacity-limited to 100GB right now).

We've come some way from the one-application-per-host approach of the early 2000s. The problem, however, is that “legacy” storage hasn't been a good fit for containers. And containers have had some problems with storage in general. StorageOS are working hard to fix some of those problems, and are looking to deliver a product that neatly sidesteps the issues inherent in container storage while delivering some features that have previously been unavailable in container deployments.

The team behind the company have some great heritage with cloud-native applications, and I like that they’re working hard to make sure this really is a cloud-native solution, not just a LUN being pointed at an operating environment. Ease of consumption is a popular reason for shifting to the cloud, and StorageOS are ensuring that people can leverage their product with a simple to understand subscription model. They’re not your stereotypical cloud neckbeards though (that’s my prejudice, not yours). The financial services background comes through in the product architecture, with a focus on availability and performance being key to the platform. I also like the policy-based approach to data placement and the heavy focus on orchestration and automation. You can read more about some of the product features here.

Things have really progressed since I first spoke to StorageOS last year, and I’m looking forward to seeing what they come up with in the next 12 months.

Nexenta Announces NexentaCloud

I haven’t spoken to Nexenta in some time, but that doesn’t mean they haven’t been busy. They recently announced NexentaCloud in AWS, and I had the opportunity to speak to Michael Letschin about the announcement.


What Is It?

In short, it's a version of NexentaStor that you can run in the cloud. It's essentially an EC2 machine running in your virtual private cloud, using EBS for storage on the backend. It's:

  • Available in the AWS Marketplace;
  • Deployed on preconfigured Amazon Machine Images; and
  • Able to deliver unified file and block services (NFS, SMB, iSCSI).

According to Nexenta, the key benefits include:

  • Access to a fully-featured file (NFS and SMB) and block (iSCSI) storage array;
  • Improved cloud resource efficiency through
    • data reduction
    • thin provisioning
    • snapshots and clones
  • Seamless replication to/from NexentaStor and NexentaCloud;
  • Rapid deployment of NexentaCloud instances for test/dev operations;
  • Centralised management of NexentaStor and NexentaCloud;
  • Advanced Analytics across your entire Nexenta storage environment; and
  • The ability to migrate legacy applications to the cloud without re-architecting them.

There's an hourly or annual subscription model, and I believe there are also capacity-based licensing options available.


But Why?

Some of the young people reading this blog who wear jeans to work every day probably wonder why on earth you’d want to deploy a virtual storage array in your VPC in the first place. Why would your cloud-native applications care about iSCSI access? It’s very likely they don’t. But one of the key reasons why you might consider the NexentaCloud offering is because you’ve not got the time or resources to re-factor your applications and you’ve simply lifted and shifted a bunch of your enterprise applications into the cloud. These are likely applications that depend on infrastructure-level resiliency rather than delivering their own application-level resiliency. In this case, a product like NexentaCloud makes sense in that it provides some of the data services and resiliency that are otherwise lacking with those enterprise applications.


Thoughts

I’m intrigued by the NexentaCloud offering (and by Nexenta the company, for that matter). They have a solid history of delivering interesting software-defined storage solutions at a reasonable cost and with decent scale. If you’ve had the chance to play with NexentaStor (or deployed it in production), you’ll know it’s a fairly solid offering with a lot of the features you’d look for in a traditional storage platform. I’m curious to see how many enterprises take advantage of the NexentaCloud product, although I know there are plenty of NexentaStor users out in the wild, and I have no doubt their CxOs are placing a great amount of pressure on them to don the cape and get “to the cloud” post haste.

Druva Announces Cloud Platform Enhancements

Druva Cloud Platform

Data protection has been on my mind quite a bit lately. I’ve been talking to a number of vendors, partners and end users about data protection challenges and, sometimes, successes. With World Backup Day coming up I had the opportunity to get a briefing from W. Curtis Preston on Druva’s Cloud Platform and thought I’d share some of the details here.


What is it?

Druva Cloud Platform is Druva’s tool for tying together their as-a-Service data protection solution within a (sometimes maligned) single pane of glass. The idea behind it is you can protect your assets – from end points through to your cloud applications (and everything in between) – all from the one service, and all managed in the one place.

[image courtesy of Druva]


Druva Cloud Platform was discussed at Tech Field Day Extra at VMworld US 2017, and now fully supports Phoenix (the DC protection offering), inSync (end point & SaaS protection), and Apollo (native EC2 backup). There's also some nice Phoenix integration with VMware Cloud on AWS (VMC).

[image courtesy of Druva]


Druva’s Cloud Credentials

Druva provide a nice approach to as-a-Service data protection that’s a little different from a number of competing products:

  • You don’t need to see or manage backup server nodes;
  • Server infrastructure security is not your responsibility;
  • Server nodes are spawned / stopped based on load;
  • S3 is less expensive (and faster with parallelisation – see the example below);
  • There are no egress charges during restore; and
  • No on-premises component or CapEx is required (although you can deploy a cache node for quicker restore to on-premises).
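
On the parallelisation point: this is generic boto3 rather than anything Druva-specific, but it shows how cheaply S3 transfers can be parallelised – the transfer layer splits large files into parts and uploads them concurrently:

```python
import boto3
from boto3.s3.transfer import TransferConfig

# Generic boto3 example - concurrent multipart upload, not Druva's code.
config = TransferConfig(
    multipart_threshold=8 * 1024 * 1024,   # split files larger than 8 MB
    max_concurrency=10,                    # ten parts in flight at once
)

s3 = boto3.client("s3")
s3.upload_file("backup.chunk", "my-backup-bucket", "chunks/backup.chunk",
               Config=config)
```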


Thoughts

I first encountered Druva at Tech Field Day Extra VMworld US in 2017 and was impressed by both the breadth of their solution and the cloudiness of it all compared to some of the traditional vendor approaches to protecting cloud-native and traditional workloads via the cloud. They have great support for end point protection, SaaS and traditional, DC-flavoured workloads. I'm particularly a fan of their willingness to tackle end point protection. When I was first starting out in data protection, a lot of vendors were speaking about how they could protect business from data loss. Then it seemed like it all became a bit too hard, and maybe we just started to assume that the data was safe somewhere in the cloud or data centre (well, not really, but we're talking feelings, not facts, for the moment). End point protection is not an easy thing to get right, but it's a really important part of data protection, because ultimately you're protecting data from bad machines, bad events and bad people. Sometimes the people aren't bad at all, just a little bit silly.

Cloud is hard to do well. Lifting and shifting workloads from the DC to the public cloud has proven to be a challenge for a lot of enterprises. And taking a lift and shift approach to data protection in the cloud is also proving to be a bit of a challenge, not least because people struggle with the burstiness of cloud workloads and need protection solutions that can accommodate those requirements. I like Druva's approach to data protection, at least from the point of view of their “cloud-nativeness” and their focus on protecting a broad spectrum of workloads and scenarios. Not everything they do will necessarily fit in with the way you do things in your business, but there are some solid, modern foundations there to deliver a comprehensive service. And I think that's a nice thing to build on.

Druva are also presenting at Cloud Field Day 3 in early April. I recommend checking out their session. Justin also did a post in anticipation of the session that is well worth a read.

Cloudian Announces HyperStore 7 – Gets Super Cloudy

Cloudian recently announced HyperStore 7, and I was fortunate enough to grab a few minutes with John Toor to run through the announcement.

 
The Announcement

The key features of HyperStore 7 include:

  • Multi-cloud access via a common API: Manage all cloud and on-premises storage assets, including Amazon AWS, Google GCP, and Microsoft Azure
  • Merge Files and Objects: Combine file and object management in a single namespace, accessed via SMB (CIFS) / NFS protocols and the S3 API
  • Scale-out architecture: Multiple distributed controllers can manage a single namespace across on-premises and cloud environments for performance scaling, increased availability and simplified data access
  • Converged Data Access: Permits data stored as files to be retrieved as objects, or vice versa, providing full data interchangeability

I’ll run through these in a little more detail below.


Multi-cloud via Common API

The cool thing about HyperStore 7 is that it’s delivered as a single software image. This means you can manage your HyperStore environment from a common interface, regardless of whether it’s an appliance located on-premises, or a virtual image running in Azure, GCP or AWS.

[image courtesy of Cloudian]


The common image also means you can start out small and build up. You can deploy on-premises first, then work up to a hybrid cloud deployment, and then, if you're so inclined, you can deploy HyperStore 7 natively in the cloud. The best thing about this approach is that you don't need to undo the work you've already done on-premises; you can just build on it.


Files and Objects, Together

One of the most exciting features, in my opinion, is “Converged Data Access”. The recent introduction of HyperFile ramps up the file and object play considerably, with a single namespace spanning multiple environments, and files and objects being stored in that namespace. You can access data in object or file format interchangeably as well.

[image courtesy of Cloudian]


Note also that data is stored in its native cloud format. So if you're using Azure, for example, your data is stored in blob format, and is thus accessible to other applications that can leverage that format.
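
As a rough illustration of what that interchangeability means in practice (the endpoint, share and names below are placeholders, and the file-to-bucket mapping is my assumption, not Cloudian's documented behaviour):

```python
import boto3

# A file written to an SMB share backed by the namespace, for example
#   \\hyperfile\projects\report.pdf
# could then be retrieved as an object via HyperStore's S3-compatible API.
s3 = boto3.client(
    "s3",
    endpoint_url="https://hyperstore.example.com",   # placeholder endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)
s3.download_file("projects", "report.pdf", "/tmp/report.pdf")
```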


Other Notes

The basic edition of HyperFile is included with HyperStore at no charge. The hardware appliance remains the primary model for on-premises deployments, with Cloudian noting that a lot of customers are still most comfortable buying hardware from a vendor for their storage deployments.


Thoughts

With the introduction of HyperFile, Cloudian made some leaps ahead in terms of breadth of offering. In my opinion, the ability to deploy HyperStore 7 on your favourite public cloud platform, and have it running a shared data pool with your on-premises HyperStore storage, is simply icing on the cake. A lot of people are talking about how they are all in with multi-cloud solutions, but it seems that Cloudian have come up with a fairly simple answer to the problem. You'll need to do a little work to make sure your networking is set up to meet your requirements, but you'd need to do that if you were looking to do file or object in public cloud in any case. There are a bunch of use cases for this type of technology, and it's nice to see that it's not a bunch of different products glued together and called a solution.

It's no secret that I think Cloudian have been doing some pretty cool stuff in the object space for a while now. The addition of HyperFile capability last year, and this multi-cloud capability in HyperStore 7, gets me all kinds of excited to see what they've got in store for the future. If you're after a scalable object (and file) solution that works well on-premises and off-premises, you'd do well to check out what Cloudian has to offer.