Druva Announces CloudRanger Acquisition

Announcement

Druva recently announced that they’ve acquired CloudRanger. I had the opportunity to catch up with W. Curtis Preston about the news and thought I’d cover it briefly here.

 

What’s A CloudRanger?

Here’s the high-level view of the company:

  • Founded in 2016
  • Headquartered in Donegal, Ireland
  • 300+ Global Customers
  • 3x Growth in last 6 months
  • 100% Cloud native ‘as-a-Service’
  • Pay as you go pricing model
  • Biggest client creating 4,000 snapshots per day

 

Why CloudRanger?

Agentless Service

  • API access to customer accounts via IAM ensures greater account security (see the sketch after this list)
  • Leverages AWS Quiescing capabilities
  • No account proxies (No additional costs, increased security)
  • No software to update
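
To make the agentless model a little more concrete, here’s a minimal sketch of API-driven snapshotting via an assumed IAM role. This is my illustration of the general AWS pattern rather than CloudRanger’s actual code; the role ARN, region and volume ID are placeholders.

```python
import boto3

# Assume a customer-supplied IAM role instead of installing an agent
# or proxy in the customer's account. The ARN below is a placeholder.
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/BackupServiceRole",
    RoleSessionName="snapshot-run",
)["Credentials"]

# Use the temporary credentials to act in the customer's account.
ec2 = boto3.client(
    "ec2",
    region_name="us-east-1",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)

# Create an EBS snapshot via the API alone; no software on the instance.
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="Scheduled backup via assumed role",
)
print(snapshot["SnapshotId"])
```

Part of the security appeal of this pattern is that the temporary credentials expire on their own, and access can be revoked at any time by removing the role.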

Broadest service coverage

  • Amazon EC2, EBS, RDS & RedShift
  • Automated Disaster Recovery (ADR)
  • Server scheduling for Amazon EC2 & RDS
  • SaaS-based solution, compared to CPM’s server-based approach
  • Easy to use platform for managing multiple AWS accounts
  • Featured SaaS product in the AWS Marketplace, available via SaaS contracts

Consumption Based Pricing Model

  • Pay as you go with full insight into data usage for cost predictability

 

A Good Fit

So where does CloudRanger fit in the broader Druva story? You’ll notice that Apollo is missing from the picture below. The main reason for the acquisition, as best I can tell, is that CloudRanger gives Druva the capability they were after with Apollo, but in a much shorter timeframe.

[image courtesy of Druva]

 

Thoughts

Customers want a lot of different things from their software vendors, particularly when it comes to data protection. Every company has its own particular needs, and infrastructure protection is a complicated beast at the best of times. Sometimes it makes sense to try to develop these features for your customers. And sometimes it makes sense to go out and acquire them. In this case, Druva has realised that CloudRanger gets them to a point in their product development far more quickly than they would have under their own steam. The point of this acquisition isn’t that the good folks at Druva lack the chops to deliver what CloudRanger already does; it’s that they can now move on to other platform enhancements. This does assume that the acquisition will go smoothly, but given that this doesn’t appear to be a hostile takeover, I’m assuming that part will go well.

Druva have done a lot of cool stuff recently, and I do like their approach to data protection (management?), which differentiates them from some of the more traditional approaches in the marketplace. CloudRanger gives them solid capability with AWS workloads, and I imagine Azure will be on the radar as well. I’m looking forward to seeing how this plays out, and what impact it has on some of their competitors in the space.

Cohesity – Cloud Edition for Azure – A Few Notes

I deployed Cohesity Cloud Edition in Microsoft Azure recently and took a few notes. I’m the first to admit that I’m completely hopeless when it comes to fumbling my way about Azure, so this probably won’t seem as convoluted a process to you as it did to me. If you have access to the documentation section of the Cohesity support site, there’s a PDF you can download that explains everything. I won’t go into too much detail but there are a few things to consider. There’s also a handy solution brief on the Cohesity website that sheds a bit more light on the solution.

 

Process

The installation requires a Linux VM to be set up in Azure (a small one – DS1_V2 Standard). Just like in the physical world, you need to think about how many nodes you want to deploy in Azure (this will be determined largely by how much you’re trying to protect). As part of the setup you edit a Cohesity-provided JSON file with a whole bunch of cool stuff like Application IDs, Keys and Tenant IDs.

Subscription ID

Specify the subscription ID for the subscription used to store the resources of the Cohesity Cluster.

WARNING: The subscription account must have owner permissions for the specified subscription.

Application ID

Specify the Application ID assigned by Azure during the service principal creation process.

Application Key

Specify the Application key generated by Azure during the service principal creation process that is used for authentication.

Tenant ID

Specify the unique Tenant ID assigned by Azure.

The Linux VM then goes off and builds the cluster in the location you specify with the details you’ve specified. If you haven’t done so already, you’ll need to create a Service Principal as well. Microsoft has some useful documentation on that here.
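
To give you a feel for it, here’s a rough sketch of assembling that kind of JSON in Python. The field names are my illustrative approximations of the values described above, not Cohesity’s actual schema; the PDF on the support site has the real details.

```python
import json

# Illustrative only: these keys approximate the values described above,
# not Cohesity's actual schema.
config = {
    "subscriptionId": "<subscription-with-owner-permissions>",
    "applicationId": "<app-id-from-service-principal-creation>",
    "applicationKey": "<app-key-from-service-principal-creation>",
    "tenantId": "<azure-tenant-id>",
    "location": "australiaeast",  # where the cluster gets built
    "nodeCount": 3,               # sized to how much you're protecting
}

with open("cohesity-cluster.json", "w") as f:
    json.dump(config, f, indent=2)
```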

 

Limitations

One thing to keep in mind is that, at this stage, “Cohesity does not support the native backup of Microsoft Azure VMs. To back up a cloud VM (such as a Microsoft Azure VM), install the Cohesity agent on the cloud VM and create a Physical Server Protection Job that backs up the VM”. So you’ll see that, even if you add Azure as a source, you won’t be able to perform VM backups in the same way you would with vSphere workloads, as “Cloud Edition only supports registering a Microsoft Azure Cloud for converting and cloning VMware VMs. The registered Microsoft Azure Cloud is where the VMs are cloned to”. This is the same across most public cloud platforms, as Microsoft, Amazon and friends aren’t terribly interested in giving out that kind of access to the likes of Cohesity or Rubrik. Still, if you’ve got the right networking configuration in place, you can back up your Azure VMs either to the Cloud Edition or to an on-premises instance (if that works better for you).

 

Thoughts

I’m on the fence about “Cloud Editions” of data protection products, but I do understand why they’ve come to be a thing. Enterprises have insisted on a lift and shift approach to moving workloads to public cloud providers and have then panicked about being able to protect them, because the applications they’re running aren’t cloud-native and don’t necessarily work well across multiple geos. And that’s fine, but there’s obviously an overhead associated with running cloud editions of data protection solutions. And it feels like you’re just putting off the inevitable requirement to re-do the whole solution. I’m all for leveraging public cloud – it can be a great resource to get things done effectively without necessarily investing a bunch of money in your own infrastructure. But you need to re-factor your apps for it to really make sense. Otherwise you find yourself deploying point solutions in the cloud in order to avoid doing the not-so-cool stuff.

I’m not saying that this type of solution doesn’t have a place. I just wish it didn’t need to be like this sometimes …

What’s The Buzz About StorageOS?

I wrote about StorageOS almost twelve months ago, and recently had the opportunity to catch up with Chris Brandon about what StorageOS have been up to. They’ve been up to a fair bit as it happens, so I thought I’d share some of the details here.

 

The Announcement

What’s StorageOS? According to Brandon it’s “[a] software-defined, scale-out/up storage platform for running enterprise containerized applications in production”. The “buzz” is that StorageOS is now generally available for purchase and they’ve secured some more funding.

 

Cloud Native Storage, Eh?

StorageOS have come up with some thinking around the key tenets of cloud native storage. To wit, it needs to be:

  • Application Centric;
  • Application Platform Agnostic;
  • Declarative and Composable;
  • API Driven and Self-Managed;
  • Agile;
  • Natively Secure;
  • Performant; and
  • Consistently Available.

 

What Can StorageOS Do For Me?

According to Brandon, StorageOS offers a number of benefits:

  • It’s Enterprise Class – so you can keep your data safe and available;
  • Policy Management allows you to enforce policies and rules while still enabling storage self-service by developers and DevOps teams;
  • Deploy It Anywhere – cloud, VM or server – you decide;
  • Data Services – Replication for HA, data reduction, storage pooling and agility to scale up or scale out based on application requirements;
  • Performance – Optimised to give you the best performance from your platform;
  • Cost-Effective Pricing – Only pay for the storage you use. Lower OpEx and CapEx;
  • Integrated Storage – Integrated into your favorite platforms with extensible plugins and APIs; and
  • Made Easy – Automated configuration and simple management.

 

Architecture

There is a container installed on each node and this runs both the data plane and control plane.

Data Plane

  • Manages data access requests
  • Pools aggregated storage for presentation
  • Runs as a container

Control Plane

  • Manages config, health, scheduling, policy, provisioning and recovery
  • API is accessed by plugins, CLI, GUI
  • Runs as a container

Containers are also used to create a highly available storage pool.

[Image courtesy of StorageOS]
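
To give a flavour of how this gets consumed, here’s a rough sketch (using the Docker SDK for Python) of provisioning a volume through a StorageOS volume driver and handing it to a container. The driver options are my placeholders; check the StorageOS documentation for the real parameters.

```python
import docker

client = docker.from_env()

# Ask the StorageOS volume driver for a new volume. The option names
# below are illustrative placeholders, not the documented set.
volume = client.volumes.create(
    name="app-data",
    driver="storageos",
    driver_opts={"size": "5"},  # e.g. size in GB
)

# Attach it to a container like any other Docker volume.
container = client.containers.run(
    "nginx:alpine",
    detach=True,
    volumes={"app-data": {"bind": "/usr/share/nginx/html", "mode": "rw"}},
)
print(container.short_id)
```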

 

Thoughts And Further Reading

StorageOS secured some funding recently and have moved their headquarters from London to New York. They’re launching at KubeCon, Red Hat Summit and DockerCon. They have a number of retail and media customers and are working closely with strategic partners. They’ll initially be shipping the Enterprise version, with a Professional version on the way. They’re also committed to always having a free version available for developers to try out (this is capacity-limited to 100GB right now).

We’ve come some way from the one application per host approach of the early 2000s. The problem, however, is that “legacy” storage hasn’t been a good fit for containers. And containers have had some problems with storage in general. StorageOS are working hard to fix some of those issues and are looking to deliver a product that neatly sidesteps some of the issues inherent in container storage while delivering some features that have been previously unavailable in container deployments.

The team behind the company have some great heritage with cloud-native applications, and I like that they’re working hard to make sure this really is a cloud-native solution, not just a LUN being pointed at an operating environment. Ease of consumption is a popular reason for shifting to the cloud, and StorageOS are ensuring that people can leverage their product with a simple to understand subscription model. They’re not your stereotypical cloud neckbeards though (that’s my prejudice, not yours). The financial services background comes through in the product architecture, with a focus on availability and performance being key to the platform. I also like the policy-based approach to data placement and the heavy focus on orchestration and automation. You can read more about some of the product features here.

Things have really progressed since I first spoke to StorageOS last year, and I’m looking forward to seeing what they come up with in the next 12 months.

Nexenta Announces NexentaCloud

I haven’t spoken to Nexenta in some time, but that doesn’t mean they haven’t been busy. They recently announced NexentaCloud in AWS, and I had the opportunity to speak to Michael Letschin about the announcement.

 

What Is It?

In short, it’s a version of NexentaStor that you can run in the cloud. It’s essentially an EC2 instance running in your virtual private cloud, using EBS for storage on the backend. It’s:

  • Available in the AWS Marketplace;
  • Deployed on preconfigured Amazon Machine Images; and
  • Able to deliver unified file and block services (NFS, SMB, iSCSI).

According to Nexenta, the key benefits include:

  • Access to a fully-featured file (NFS and SMB) and block (iSCSI) storage array;
  • Improved cloud resource efficiency through
    • data reduction
    • thin provisioning
    • snapshots and clones
  • Seamless replication to/from NexentaStor and NexentaCloud;
  • Rapid deployment of NexentaCloud instances for test/dev operations;
  • Centralised management of NexentaStor and NexentaCloud;
  • Advanced Analytics across your entire Nexenta storage environment; and
  • Migration of legacy applications to the cloud without re-architecting them.

There’s an hourly or annual subscription model, and I believe there are also capacity-based licensing options available.
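
Because it’s deployed from preconfigured Amazon Machine Images, standing up a NexentaCloud instance is essentially a standard EC2 provisioning exercise. Here’s a hedged boto3 sketch; the AMI ID, instance type, subnet and volume sizing are all placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch an instance from a (placeholder) Marketplace AMI into your VPC,
# with an additional EBS volume to back the storage it will serve out.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # placeholder Marketplace AMI
    InstanceType="m5.xlarge",             # placeholder; size to suit
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",  # a subnet in your VPC
    BlockDeviceMappings=[
        {
            "DeviceName": "/dev/sdf",
            "Ebs": {"VolumeSize": 500, "VolumeType": "gp2"},
        },
    ],
)
print(response["Instances"][0]["InstanceId"])
```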

 

But Why?

Some of the young people reading this blog who wear jeans to work every day probably wonder why on earth you’d want to deploy a virtual storage array in your VPC in the first place. Why would your cloud-native applications care about iSCSI access? It’s very likely they don’t. But one of the key reasons why you might consider the NexentaCloud offering is because you’ve not got the time or resources to re-factor your applications and you’ve simply lifted and shifted a bunch of your enterprise applications into the cloud. These are likely applications that depend on infrastructure-level resiliency rather than delivering their own application-level resiliency. In this case, a product like NexentaCloud makes sense in that it provides some of the data services and resiliency that are otherwise lacking with those enterprise applications.

 

Thoughts

I’m intrigued by the NexentaCloud offering (and by Nexenta the company, for that matter). They have a solid history of delivering interesting software-defined storage solutions at a reasonable cost and with decent scale. If you’ve had the chance to play with NexentaStor (or deployed it in production), you’ll know it’s a fairly solid offering with a lot of the features you’d look for in a traditional storage platform. I’m curious to see how many enterprises take advantage of the NexentaCloud product, although I know there are plenty of NexentaStor users out in the wild, and I have no doubt their CxOs are placing a great amount of pressure on them to don the cape and get “to the cloud” post haste.

Druva Announces Cloud Platform Enhancements

Druva Cloud Platform

Data protection has been on my mind quite a bit lately. I’ve been talking to a number of vendors, partners and end users about data protection challenges and, sometimes, successes. With World Backup Day coming up I had the opportunity to get a briefing from W. Curtis Preston on Druva’s Cloud Platform and thought I’d share some of the details here.

 

What is it?

Druva Cloud Platform is Druva’s tool for tying together their as-a-Service data protection solution within a (sometimes maligned) single pane of glass. The idea behind it is you can protect your assets – from end points through to your cloud applications (and everything in between) – all from the one service, and all managed in the one place.

[image courtesy of Druva]

 

Druva Cloud Platform was discussed at Tech Field Day Extra at VMworld US 2017, and now fully supports Phoenix (the DC protection offering), inSync (end point & SaaS protection), and Apollo (native EC2 backup). There’s also some nice Phoenix integration with VMware Cloud on AWS (VMC).

[image courtesy of Druva]

 

Druva’s Cloud Credentials

Druva provide a nice approach to as-a-Service data protection that’s a little different from a number of competing products:

  • You don’t need to see or manage backup server nodes;
  • Server infrastructure security is not your responsibility;
  • Server nodes are spawned / stopped based on load;
  • S3 is less expensive (and faster with parallelisation – see the sketch after this list);
  • There are no egress charges during restore; and
  • No on-premises component or CapEx is required (although you can deploy a cache node for quicker restore to on-premises).
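
On that parallelisation point, the speed comes from splitting objects into parts and moving them concurrently. The sketch below is standard S3 transfer tuning rather than anything Druva-specific, and the bucket and key are placeholders, but it shows the idea:

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Split large objects into 16MB parts and move up to 20 parts at once.
config = TransferConfig(
    multipart_threshold=16 * 1024 * 1024,
    multipart_chunksize=16 * 1024 * 1024,
    max_concurrency=20,
)

# A restore is then just a (parallelised) download.
s3.download_file(
    "backup-bucket",
    "restores/file-server-01.tar",
    "/tmp/file-server-01.tar",
    Config=config,
)
```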

 

Thoughts

I first encountered Druva at Tech Field Day Extra VMworld US in 2017 and was impressed by both the breadth of their solution and the cloudiness of it all compared to some of the traditional vendor approaches to protecting cloud-native and traditional workloads via the cloud. They have great support for end point protection, SaaS and traditional, DC-flavoured workloads. I’m particularly a fan of their willingness to tackle end point protection. When I was first starting out in data protection, a lot of vendors were speaking about how they could protect businesses from data loss. Then it seemed like it all became a bit too hard and maybe we just started to assume that the data was safe somewhere in the cloud or data centre (well, not really, but we’re talking feelings, not facts, for the moment). End point protection is not an easy thing to get right, but it’s a really important part of data protection. Because ultimately you’re protecting data from bad machines, bad events and, yes, bad people. Sometimes the people aren’t bad at all, just a little bit silly.

Cloud is hard to do well. Lifting and shifting workloads from the DC to the public cloud has proven to be a challenge for a lot of enterprises. And taking a lift and shift approach to data protection in the cloud is also proving to be a bit of a challenge, not least because people struggle with the burstiness of cloud workloads and need protection solutions that can accommodate those requirements. I like Druva’s approach to data protection, at least from the point of view of their “cloud-nativeness” and their focus on protecting a broad spectrum of workloads and scenarios. Not everything they do will necessarily fit in with the way you do things in your business, but there are some solid, modern foundations there to deliver a comprehensive service. And I think that’s a nice thing to build on.

Druva are also presenting at Cloud Field Day 3 in early April. I recommend checking out their session. Justin also did a post in anticipation of the session that is well worth a read.

Cloudian Announces HyperStore 7 – Gets Super Cloudy

Cloudian recently announced HyperStore 7, and I was fortunate enough to grab a few minutes with John Toor to run through the announcement.

 

The Announcement

The key features of HyperStore 7 include:

  • Multi-cloud access via a common API: Manage all cloud and on-premises storage assets, including Amazon AWS, Google GCP, and Microsoft Azure via a common API
  • Merge Files and Objects: Combine file and object management in a single namespace, accessed via SMB (CIFS) / NFS protocols and the S3 API
  • Scale-out architecture: Multiple distributed controllers can manage a single namespace across on-premises and cloud environments for performance scaling, increased availability and simplified data access
  • Converged Data Access: Permits data stored as files to be retrieved as objects, or vice versa, providing full data interchangeability

I’ll run through these in a little more detail below.

 

Multi-cloud via Common API

The cool thing about HyperStore 7 is that it’s delivered as a single software image. This means you can manage your HyperStore environment from a common interface, regardless of whether it’s an appliance located on-premises, or a virtual image running in Azure, GCP or AWS.

[image courtesy of Cloudian]

 

The common image also means you can start out small and build up. You can deploy on-premises first, then work up to a hybrid cloud deployment, and then, if you’re so inclined, you can deploy HyperStore 7 natively in the cloud. The best thing about this feature is that you don’t need to undo the work you’ve already done on-premises, you can just build on it.

 

Files and Objects, Together

One of the most exciting features, in my opinion, is “Converged Data Access”. The recent introduction of HyperFile ramps up the file and object play considerably, with a single namespace across multiple environments, and files and objects both stored in that namespace. You can access data in object or file format interchangeably as well.

[image courtesy of Cloudian]

 

Note also that data is stored in its native cloud format. So if you’re using Azure, for example, your data is stored in blob format, and is thus accessible to other applications that can leverage that format.
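
Because HyperStore speaks the S3 API, any S3-capable application can talk to it simply by pointing at a HyperStore endpoint. Here’s a quick hedged sketch with boto3; the endpoint, keys and bucket are all placeholders:

```python
import boto3

# Point a standard S3 client at a (placeholder) HyperStore endpoint.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.hyperstore.example.com",
    aws_access_key_id="<hyperstore-access-key>",
    aws_secret_access_key="<hyperstore-secret-key>",
)

# Write an object; with Converged Data Access the same data should also
# be reachable as a file over SMB / NFS in the shared namespace.
s3.put_object(
    Bucket="shared-namespace",
    Key="reports/q1.csv",
    Body=b"region,revenue\napac,100\n",
)
obj = s3.get_object(Bucket="shared-namespace", Key="reports/q1.csv")
print(obj["Body"].read())
```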

 

Other Notes

The basic edition of HyperFile is included with HyperStore at no charge. The hardware appliance remains the primary model for on-premises deployments, with Cloudian noting that a lot of customers are still most comfortable buying hardware from a vendor for their storage deployments.

 

Thoughts

With the introduction of HyperFile, Cloudian made some leaps ahead in terms of breadth of offering. In my opinion, the ability to deploy HyperStore 7 on your favourite public cloud platform, and have it running a shared data pool with your on-premises HyperStore storage, is simply icing on the cake. A lot of people are talking about how they are all in with multi-cloud solutions, but it seems that Cloudian have come up with a fairly simple solution to the problem. You’ll need to do a little work to make sure your networking is set up in the way you need it to meet your requirements, but you’d need to do that if you were looking to do file or object in public cloud in any case. There are a bunch of use cases for this type of technology, and it’s nice to see that it’s not a bunch of different products glued together and called a solution.

It’s no secret that I think Cloudian have been doing some pretty cool stuff in the object space for a while now. The addition of HyperFile capability last year, and this multi-cloud capability in HyperStore 7, gets me all kinds of excited to see what they’ve got in store for the future. If you’re after a scalable object (and file) solution that works well on-premises and off-premises, you’d do well to check out what Cloudian has to offer.

SwiftStack 6.0 – Universal Access And More

I haven’t covered SwiftStack in a little while, and they’ve been doing some pretty interesting stuff. They made some announcements recently but a number of scheduling “challenges” and some hectic day job commitments prevented me from speaking to them until just recently. In the end I was lucky enough to snaffle 30 minutes with Mario Blandini and he kindly took me through the latest news.

 

6.0 Then, So What?

Universal Access

Universal Access is really very cool. Think of it as a way to write data in either file or object format, and then read it back in file or object format, depending on how you need to consume it.

[image courtesy of SwiftStack]

Key features include:

  • Gateway free – the data is stored in cloud-native format in a single namespace;
  • Accessible via file (SMB3 / NFS4) and / or object API (S3 / Swift). Note that this is not a replacement for NAS, but it will give you the ability to work with some of those applications that expect to see a file interface in places; and
  • Applications can write data one way, access the data another way, and vice versa.

The great thing is that, according to SwiftStack, “Universal Access enables applications to take advantage of all data under management, no matter how it was written or where it is stored, without the need to refactor applications”.
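
To illustrate the object half of that duality, here’s a hedged sketch using python-swiftclient against a made-up SwiftStack endpoint. The idea is that the same object would also be visible as a file over SMB3 / NFS4 in the same namespace, with no gateway in between.

```python
from swiftclient.client import Connection

# Connect to a (placeholder) Swift API endpoint on the cluster.
conn = Connection(
    authurl="https://swift.example.com/auth/v1.0",
    user="account:user",
    key="<api-key>",
)

# Write via the object API ...
conn.put_container("projects")
conn.put_object("projects", "renders/frame-0001.exr", contents=b"<data>")

# ... and read it back the same way. With Universal Access the object
# would also appear as a file (e.g. projects/renders/frame-0001.exr)
# on a file mount of the same namespace.
headers, body = conn.get_object("projects", "renders/frame-0001.exr")
print(len(body))
```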

 

Universal Access Multi-Cloud

So what if you take two really neat features like, say, Cloud Sync and Universal Access, and combine them? You get access to a single, multi-cloud storage namespace.

[image courtesy of SwiftStack]

 

Thoughts

As Mario took me through the announcements he mentioned that SwiftStack are “not just an object storage thing based on Swift” and I thought that was spot on. Universal Access (particularly with multi-cloud) is just the type of solution that enterprises looking to add mobility to workloads are looking for. The problem for some time has been that data gets tied up in silos based on the protocol that a controller speaks, rather than the value of the data to the business. Products like this go a long way towards relieving some of the pressure on enterprises by enabling simpler access to more data. Being able to spread it across on-premises and public cloud locations also makes for simpler consumption models and can help businesses leverage the data in a more useful way than was previously possible. Add in the usefulness of something like Cloud Sync in terms of archiving data to public cloud buckets and you’ll start to see that these guys are onto something. I recommend you head over to the SwiftStack site and request a demo. You can read the press release here.

WekaIO Have Been Busy – Really Busy

WekaIO recently announced Version 3.1 of their Matrix software, and I had the good fortune to catch up with David Hiatt. We’d spoken a little while ago when WekaIO came out of stealth and they’ve certainly been busy in the interim. In fact, they’ve been busy to the point that I thought it was worth putting together a brief overview of what’s new.

 

What Is WekaIO?

WekaIO have been around since 2013, gaining their first customers in 2016. They’ve had 17 patents filed, 45 identified, and 8 issued. Their focus has primarily been on delivering, in their words, the “highest performance file system targeted at compute intensive applications”. They deliver a fully POSIX-compliant file system that can run on bare metal, hypervisors, Docker, or in the public or private cloud.

[image courtesy of WekaIO]

Some of the key features of the architecture include the fact that it is distributed, resilient at scale, can perform fast rebuilds, and provides end-to-end protection. Right now, their key use cases include genomics, machine learning, media rendering, semiconductors, financial trading and analytics. The company has staff coming from XIV, NetApp, IBM, EMC, and Intel, amongst others.
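
One practical upshot of the POSIX compliance is that applications don’t need a special client SDK. As a quick illustration (the mount point below is one I’ve made up), standard file I/O just works:

```python
import os

# Ordinary POSIX I/O against a (hypothetical) Matrix mount point.
path = "/mnt/weka/genomics/sample-001.bam"
os.makedirs(os.path.dirname(path), exist_ok=True)

with open(path, "wb") as f:
    f.write(b"<alignment data>")
    f.flush()
    os.fsync(f.fileno())  # durability behaves as POSIX applications expect

print(os.stat(path).st_size)
```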

 

So What’s News?

Well, there’s been a bit going on:

 

Matrix Version 3.1 – Much Better Than Matrix Revolutions

Not that that’s too hard to do. But there have been a bunch of new features added to WekaIO’s Matrix software. Here’s a summary of what’s new:

  • Network Redundancy – binding network links and load balancing;
  • InfiniBand – native support for InfiniBand;
  • Multiple File Systems – logical partitioning allows more granular allocation of performance and capacity;
  • Cluster Scaling – dynamically shrink and grow clusters;
  • NVMe – native support for NVMe devices;
  • Snapshots and Clones – high performance with 4K granularity;
  • Snap to Object Store – saving snapshot metadata to an object store (OBS); and
  • Deployment in AWS – install and run Matrix on EC2 clusters.

David also took me through what look like some very, very good SPECsfs2014 Software Build results, particularly when compared with some competitive solutions. He also walked me through the Marketplace configurator. This is really cool stuff – flexible and easy to use. You can check out a demo of it here.

 

Conclusion

All the cool kids are doing stuff with AWS. And that’s fine. But I really like that WekaIO make stuff easy to run on-premises as well. And they also make it really fast. Because sometimes you just need to run stuff near you, and sometimes there needs to be an awful lot of it. WekaIO’s model is flexible, with the annual subscription approach and lack of maintenance contracts bound to appeal to a lot of people. The great thing is it’s easy to manage, easy to scale and supports all the file protocols you’d be interested in. There’s a bunch of (configurable) resiliency built in, and support for hybrid workloads if required.

With a Formula One slide including customer testimonials from the likes of DreamWorks and SDSC, I get the impression that WekaIO are up to something pretty cool. Plus, I really enjoy chatting to David about what’s going on in the world of highly scalable file systems, and am looking forward to our next call in a few months’ time to see what they’ve been up to. I get the impression there’s little chance they’ll be sitting still.

Scale Computing and WinMagic Announce Partnership, Refuse to Sit Still

Scale Computing and WinMagic recently announced a partnership improving the security of Scale’s HC3 solution. I had the opportunity to be briefed by the good folks at Scale and WinMagic and thought I’d provide a brief overview of the announcement here.

 

But Firstly, Some Background

Scale Computing announced their HC3 Cloud Unity offering in late September this year. Cloud Unity, in a nutshell, lets you run embedded HC3 instances in Google Cloud. Coupled with some SD-WAN smarts, you can move workloads easily between on-premises infrastructure and GCP. It enables companies to perform lift and shift migrations, if required, with relative ease, and removes a lot of the complexity traditionally associated with deploying hybrid-friendly workloads in the data centre.

 

So the WinMagic Thing?

WinMagic have been around for quite some time, and offer a range of security products aimed at various sizes of organization. This partnership with Scale delivers SecureDoc CloudVM as a mechanism for encryption and key management. You can download a copy of the brochure from here. The point of the solution is to provide a secure mechanism for hosting your VMs either on-premises or in the cloud. Key management can be a pain in the rear, and WinMagic provides a fully-featured solution for this that’s easy to use and simple to manage. There’s broad support for a variety of operating environments and clients. Authentication and authorized key distribution takes place prior to workloads being deployed to ensure that the right person is accessing data from an expected place and device and there’s support for password only or multi-factor authentication.

 

Thoughts

Scale Computing have been doing some really cool stuff in the hyperconverged arena for some time now. The new partnership with Google Cloud, and the addition of the WinMagic solution, demonstrates their focus on improving an already impressive offering with some pretty neat features. It’s one thing to enable customers to get to the cloud with relative ease, but it’s a whole other thing to be able to help them secure their assets when they make that move to the cloud.

It’s my opinion that Scale Computing have been the quiet achievers in the HCI marketplace, with reportedly fantastic customer satisfaction and a solid range of products on offer at a very reasonable RRP. Couple this with an intelligent hypervisor platform and the ability to securely host assets in the public cloud, and it’s clear that Scale Computing aren’t interested in standing still. I’m really looking forward to seeing what’s next for them. If you’re after an HCI solution where you can start really (really) small and grow as required, it would be worthwhile having a chat to them.

Also, if you’re into that kind of thing, Scale and WinMagic are hosting a joint webinar on November 28 at 10:30am EST. Registration for the webinar “Simplifying Security across your Universal I.T. Infrastructure: Top 5 Considerations for Securing Your Virtual and Cloud IT Environments, Without Introducing Unneeded Complexity” can be found here.

 

 

Aparavi Comes Out Of Stealth. Dazzles.

Santa Monica-based (I love that place) SaaS data protection outfit, Aparavi, recently came out of stealth, and I thought it was worthwhile covering their initial offering.

 

So Latin Then?

What’s an Aparavi? It’s apparently Latin and means “[t]o prepare, make ready, and equip”. The way we consume infrastructure has changed, but a lot of data protection products haven’t changed to accommodate this. Aparavi are keen to change that, and tell me that their product is “designed to work seamlessly alongside your business continuity plan to ease the burden of compliance and data protection for mid market companies”. Sounds pretty neat, so how does it work?

 

Architecture

Aparavi uses a three-tiered architecture written in Node.js and C++. It consists of:

  • The Aparavi hosted platform;
  • An on-premises software appliance; and
  • A source client.

[image courtesy of Aparavi]

The platform is available as a separate module if required, otherwise it’s hosted on Aparavi’s infrastructure. The software appliance is the relationship manager in the solution. It performs in-line deduplication and compression. The source client can be used as a temporary recovery location if required. AES-256 encryption is done at the source, and the metadata is also encrypted. Key storage is all handled via keyring-style encryption mechanisms. There is communication between the web platform and the appliance, but the appliance can operate when the platform is off-line if required.
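
As a generic illustration of what source-side encryption involves (this is not Aparavi’s actual code), here’s AES-256 in an authenticated mode using Python’s cryptography library. In a real product the key would come from the keyring-style key management mentioned above rather than being generated ad hoc.

```python
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# 256-bit key; in practice this comes from managed key storage.
key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

plaintext = b"file contents to protect"
nonce = os.urandom(12)  # 96-bit nonce, unique per encryption

# Encrypt at the source, so only ciphertext ever leaves the client.
ciphertext = aesgcm.encrypt(nonce, plaintext, None)

# Recovery reverses the process with the same key and nonce.
assert aesgcm.decrypt(nonce, ciphertext, None) == plaintext
```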

 

Cool Features

There are a number of cool features of the Aparavi solution, including:

  • Patented point-in-time recovery – you can recover data from any combination of local and cloud storage (you don’t need the backup set to live in one place);
  • Cloud active data pruning – will automatically remove files, and portions of files, that are no longer needed from cloud locations;
  • Multi-cloud agile retention (this is my favourite) – you can use multiple cloud locations without the need to move data from one to the other;
  • Open data format – open source published, with Aparavi providing a reader so data can be read by any tool; and
  • Multi-tier, multi-tenancy – Aparavi are very focused on delivering a multi-tier and multi-tenant environment for service providers and folks who like to scale.

 

Retention Simplified

  • Policy Engine – uses file exclusion and inclusion lists
  • Comprehensive Search – search by user name and appliance name as well as file name
  • Storage Analytics – how much you’re saving by pruning, data growth / shrinkage over time, % change monitor
  • Auditing and Reporting Tools
  • RESTful API – anything in the UI can be automated (see the sketch after this list)
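
The endpoint and field names in the sketch below are entirely hypothetical, but they show the shape of driving that kind of API from a script:

```python
import requests

BASE = "https://platform.aparavi.example.com/api"  # hypothetical endpoint
headers = {"Authorization": "Bearer <api-token>"}

# Hypothetical call: search protected data by user and file name,
# mirroring what the comprehensive search does in the UI.
results = requests.get(
    f"{BASE}/search",
    params={"user": "jdoe", "filename": "*.xlsx"},
    headers=headers,
    timeout=30,
).json()

for item in results.get("items", []):
    print(item.get("path"), item.get("appliance"))
```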

 

What Does It Run On?

Aparavi runs on all Microsoft-supported Windows platforms as well as most major Linux distributions (including Ubuntu and Red Hat). They use the Amazon S3 API, support GCP, and are working on OpenStack and Azure support. They’ve also got some good working relationships with Cloudian and Scality, amongst others.

[image courtesy of Aparavi]

 

Availability?

Aparavi are having a “soft launch” on October 25th. The product is licensed based on the amount of source data protected. From a pricing perspective, the first TB is always free. Expect to pay US $999/year for 3TB.

 

Conclusion

Aparavi are looking to focus on the mid-market to begin with, and stressed to me that this isn’t really a tool intended to replace your day-to-day business continuity solution. That said, they recognise that customers may end up using the tool in ways they hadn’t anticipated.

Aparavi’s founding team of Adrian Knapp, Rod Christensen, Jonathan Calmes and Jay Hill have a whole lot of experience with data protection engines and a bunch of use cases. Speaking to Jonathan, it feels like they’ve certainly thought about a lot of the issues facing folks leveraging cloud for data protection. I like the open approach to storing the data, and the multi-cloud friendliness takes the story well beyond the hybrid slideware I’m accustomed to seeing from some companies.

Cloud has opened up a lot of possibilities for companies that were traditionally constrained by their own ability to deliver functional, scalable and efficient infrastructure internally. It’s since come to people’s attention that, much like the days of internal-only deployments, a whole lot of people who should know better still don’t understand what they’re doing with data protection, and there’s crap scattered everywhere. Products like Aparavi are a positive step towards taking control of data protection in fluid environments, potentially helping companies to get it together in an effective manner. I’m looking forward to diving further into the solution, and am interested to see how the industry reacts to Aparavi over the coming months.