Rubrik Cloud Data Management 4.2 Announced – “Purpose Built for the Hybrid Cloud”

Rubrik recently announced version 4.2 of their Cloud Data Management platform, and I was fortunate enough to sit in on a sneak preview from Chris Wahl, Kenneth Hui, and Rebecca Fitzhugh. Touted as “Purpose Built for the Hybrid Cloud”, this release includes a whole bunch of new features. I’ve included a summary table below, and will dig into some of the more interesting ones.

Expanding the Ecosystem

  • AWS Native Protection (EC2 Instances)
  • VMware vCloud Director Integration
  • Windows Full Volume Protection
  • AIX & Solaris Support

Core Features & Services

  • Rubrik Envoy
  • Rubrik Edge on Hyper-V
  • Network Throttling
  • VLAN Tagging (GUI)
  • SNMP
  • Multi-File restore

General Enhancements

  • SQL Server FILESTREAM
  • SQL Server Log Shipping
  • NAS Native API Integration
  • NAS SMB Scan Enhancements
  • AHV VSS snapshot
  • Proxy per Archival Location
  • Reader-Writer Archival Locations

 

AWS Native Protection (EC2 Instances)

One of the key parts of this announcement is cloud-native protection, delivered specifically with AWS EBS Snapshots. The cool thing is you can have Rubrik running on-premises or sitting in the cloud.

Use cases?

  • Automate manual processes – use policy engine to automate lifecycle management of snapshots, including scheduling and retention
  • Rapid recovery from failure – eliminate manual steps for instance and file recovery
  • Replicate instances in other availability zones and regions – launch instances in other AZs and Regions when needed using snapshots
  • Consolidate data management – one solution to manage data across on-premises DCs and public clouds

Dealing with snapshots has traditionally been a manual process. Now there’s no need to mess with crontab or various AWS tools to get the snaps done. It also aligns with Rubrik’s vision of having a single tool to manage both cloud and on-premises workloads. The good news is that files in snapshots are indexed and searchable, so individual file recovery is also pretty simple.
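To give a sense of what that policy engine replaces, here’s a rough sketch (in Python, using boto3) of the kind of DIY snapshot-and-prune script people have been running out of crontab. It’s illustrative only – the tag filter, region and retention period are my own placeholders, and it’s obviously not Rubrik’s code.

```python
# A sketch of the DIY approach Rubrik's policy engine replaces: snapshot tagged
# EBS volumes, then prune snapshots older than a retention window.
# Assumes boto3 and standard AWS credentials; tag, region and retention are placeholders.
from datetime import datetime, timedelta, timezone
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
RETENTION_DAYS = 30

def snapshot_tagged_volumes():
    volumes = ec2.describe_volumes(
        Filters=[{"Name": "tag:Backup", "Values": ["true"]}]
    )["Volumes"]
    for vol in volumes:
        ec2.create_snapshot(
            VolumeId=vol["VolumeId"],
            Description=f"cron-backup {datetime.now(timezone.utc):%Y-%m-%d}",
        )

def prune_old_snapshots():
    cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)
    for snap in ec2.describe_snapshots(OwnerIds=["self"])["Snapshots"]:
        if snap["StartTime"] < cutoff:
            ec2.delete_snapshot(SnapshotId=snap["SnapshotId"])

if __name__ == "__main__":
    snapshot_tagged_volumes()
    prune_old_snapshots()
```

Multiply that by a few accounts and regions and you can see why having a policy engine handle scheduling and retention is attractive.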

 

VMware vCloud Director Integration

It may or may not be a surprise to learn that VMware vCloud Director is still in heavy use with service providers, so news of Rubrik integration with vCD shouldn’t be too shocking. Rubrik spent a little time talking about some of the “Foundational Services” they offer, including:

  • Backup – Hosted or Managed
  • ROBO Protection
  • DR – Mirrored Site service
  • Archival – Hosted or Managed

The value they add, though, is in the additional services, or what they term “Next Generation premium services”. These include:

  • Dev / Test
  • Cloud Archival
  • DR in Cloud
  • Near-zero availability
  • Cloud migration
  • Cloud app protection

Self-service is the key

To be able to deliver a number of these services, particularly in the service provider space, there’s been a big focus on multi-tenancy.

  • Operate multi-customer configuration through a single cluster
  • Logically partition cluster into tenants as “Organisations”
  • Offer self-service management for each organisation
  • Centrally control, monitor and report with aggregated data

Support for vCD (version 8.10 and later) is as follows:

  • Auto discovery of vCD hierarchy
  • SLA based auto protect at different levels of the vCD hierarchy
    • vCD Instance
    • vCD Organization
    • Org VDC
    • vApp
  • Recovery workflows
    • Export and Instant recovery
    • Network settings
    • File restore
  • Self-service using multi-tenancy
  • Reports for vCD organization

 

Windows Full Volume Protection

Rubrik have always had fileset-based protection, and they’re now offering the ability to protect Windows hosts a volume at a time, e.g. the C:\ volume. These protection jobs incorporate additional information such as partition type, volume size, and permissions.
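If you’re curious about the sort of metadata involved, here’s a minimal sketch (mine, not Rubrik’s agent) of enumerating volumes along with their mount point, filesystem and capacity using psutil. Partition type and permissions would need platform-specific calls, so they’re left out here.

```python
# Not Rubrik's agent - just a sketch of collecting per-volume metadata
# (mount point, filesystem type, capacity) of the kind a volume-level
# protection job would record. psutil is an assumption on my part.
import psutil

def collect_volume_metadata():
    volumes = []
    for part in psutil.disk_partitions(all=False):
        usage = psutil.disk_usage(part.mountpoint)
        volumes.append({
            "mountpoint": part.mountpoint,  # e.g. "C:\\" on a Windows host
            "filesystem": part.fstype,      # e.g. "NTFS"
            "size_bytes": usage.total,
            "used_bytes": usage.used,
        })
    return volumes

if __name__ == "__main__":
    for vol in collect_volume_metadata():
        print(vol)
```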

[image courtesy of Rubrik]

There’s also a Rubrik-created package to create bootable Microsoft Windows Preinstallation Environment (WinPE) media to restore the OS as well as provide disk partition information. There are multiple options for customers to recover entire volumes in addition to system state, including Master Boot Record (MBR), GUID Partition Table (GPT) information, and OS.

Why would you? There are a few use cases, including:

  • P2V – remember those?
  • Physical RDM mapping compatibility – you might still have those about, because, well, reasons
  • Physical Exchange servers and log truncation
  • Cloud mobility (AWS to Azure or vice versa)

So now you can select volumes or filesets, and you can store the volumes in a Volume Group.

[image courtesy of Rubrik]

 

AIX and Solaris Support

Wahl was reluctant to refer to AIX and Solaris as “traditional” DC applications, because it all makes us feel that little bit older. In any case, AIX support was already available in the 4.1.1 release, and 4.2 adds Oracle Solaris support. There are a few restore scenarios that come to mind, particularly when it comes to things like migration. These include:

  • Restore (in place) – Restores to the original AIX server at the original path or a different path.
  • Export (out of place) – Allows exporting to another AIX or Linux host that has the Rubrik Backup Service (RBS) running.
  • Download Only – Ability to download files to the machine from which the administrator is running the Rubrik web interface.
  • Migration – Any AIX application data can be restored or exported to a Linux host, or vice versa from Linux to an AIX host. In some cases, customers have leveraged this capability for OS migrations, removing the need for other tools.

 

Rubrik Envoy

Rubrik Envoy is a trusted ambassador (its certificate is issued by the Rubrik cluster) that represents the service provider’s Rubrik cluster in an isolated tenant network.

[image courtesy of Rubrik]

 

The idea is that service providers are able to offer backup-as-a-service (BaaS) to co-hosted tenants, enabling self-service SLA management with on-demand backup and recovery. The cool thing is you don’t have to deploy the Virtual Edition into the tenant network to get the connectivity you need. Here’s how it comes together:

  1. Once a tenant subscribes to BaaS from the SP, an Envoy virtual appliance is deployed on the tenant’s network.
  2. The tenant may log into Envoy, which will route the Rubrik UI to the MSP’s Rubrik cluster.
  3. Envoy will only allow access to objects that belong to the tenant.
  4. The Rubrik cluster works with the tenant VMs, via Envoy, for all application quiescence, file restore, point-in-time recovery, etc.
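To make step 3 a bit more concrete, here’s a purely conceptual sketch of tenant scoping – an ambassador only forwarding requests for objects that belong to the authenticated tenant. The tenants and object IDs are made up, and this is in no way Rubrik’s implementation.

```python
# Purely conceptual - not Rubrik's code. Envoy-style scoping: only forward
# requests for objects that belong to the requesting tenant.
TENANT_OBJECTS = {
    "tenant-a": {"vm-101", "vm-102"},
    "tenant-b": {"vm-201"},
}

def authorised(tenant_id: str, object_id: str) -> bool:
    """True only if the object belongs to the requesting tenant."""
    return object_id in TENANT_OBJECTS.get(tenant_id, set())

def handle_request(tenant_id: str, object_id: str, action: str) -> str:
    if not authorised(tenant_id, object_id):
        return "403 Forbidden"  # cross-tenant objects are simply not visible
    return f"forwarding '{action}' for {object_id} to the SP's Rubrik cluster"

print(handle_request("tenant-a", "vm-101", "file restore"))  # allowed
print(handle_request("tenant-a", "vm-201", "file restore"))  # denied
```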

 

Network Throttling

Network throttling is something that a lot of customers were interested in. There’s not an awful lot to say about it, but the options are No, Default and Scheduled. You can use it to configure the amount of bandwidth used by archival and replication traffic, for example.

 

Core Feature Improvements

There are a few other nice things that have been added to the platform as well.

  • Rubrik Edge is now available on Hyper-V
  • VLAN tagging was supported in 4.1 via the CLI; GUI configuration is now available
  • SNMPv2c support (I loves me some SNMP)
  • GUI support for multi-file recovery

 

General Enhancements

A few other enhancements have been added, including:

  • SQL Server FILESTREAM fully supported now (I’m not shouting, it’s just how they like to write it);
  • SQL Server Log Shipping; and
  • Per-Archive Proxy Support.

Rubrik were also pretty happy to announce NAS Vendor Native API Integration with NetApp and Isilon.

  • Network Attached Storage (NAS) vendor-native API integration.
    • NetApp ONTAP (ONTAP API v8.2 and later) supporting cluster-mode for NetApp filers.
    • Dell EMC Isilon OneFS (v8.x and later) + ChangeList (v7.1.1 and later)
  • NAS vendor-native API integration further enhances Rubrik’s existing capability to take volume-based snapshots.
  • This feature also enhances overall fileset backup performance.

NAS SMB Scan Enhancements have also been included, providing a 10x performance improvement (according to Rubrik).

 

Thoughts

Point releases aren’t meant to be massive undertakings, but companies like Rubrik are moving at a fair pace and adding support for products to try and meet the requirements of their customers. There’s a fair bit going on in this one, and the support for AWS snapshots is kind of a big deal. I really like Rubrik’s focus on multi-tenancy, and they’re slowly opening up doors to some enterprises still using the likes of AIX and Solaris. This has previously been the domain of the more traditional vendors, so it’s nice to see progress has been made. Not all of the world runs on containers or in vSphere VMs, so delivering this capability will only help Rubrik gain traction in some of the more conservative shops around town.

Rubrik are working hard to address some of the “enterprise-y” shortcomings or gaps that may have been present in earlier iterations of their product. It’s great to see this progress over such a short period of time, and I’m looking forward to hearing about what else they have up their sleeve.

Druva Announces CloudRanger Acquisition

Announcement

Druva recently announced that they’ve acquired CloudRanger. I had the opportunity to catch up with W. Curtis Preston about the news recently and thought I’d cover it briefly here.

 

What’s A CloudRanger?

Here’s the high-level view of the company:

  • Founded in 2016
  • Headquartered in Donegal, Ireland
  • 300+ Global Customers
  • 3x Growth in last 6 months
  • 100% Cloud native ‘as-a-Service’
  • Pay as you go pricing model
  • Biggest client creating 4,000 snapshots per day

 

Why CloudRanger?

Agentless Service

  • API Account IAM access ensures greater customer account security
  • Leverages AWS Quiescing capabilities
  • No account proxies (No additional costs, increased security)
  • No software needed to be updated

Broadest service coverage

  • Amazon EC2, EBS, RDS & Redshift
  • Automated Disaster Recovery (ADR)
  • Server scheduling for Amazon EC2 & RDS
  • SaaS based solution, compared to CPM server based approach
  • Easy to use platform for managing multiple AWS accounts
  • Featured SaaS product in AWS Marketplace available via SaaS contracts

Consumption Based Pricing Model

  • Pay as you go with full insight into data usage for cost predictability

 

A Good Fit

So where does CloudRanger fit in the broader Druva story? You’ll notice in the below picture that Apollo is missing. The main reason for the acquisition, as best I can tell, is that CloudRanger gives Druva the capability they were after with Apollo but in a much shorter timeframe.

[image courtesy of Druva]

 

Thoughts

A lot of customers want a lot of different things from their software vendors, particularly when it comes to data protection. A lot of companies have particular needs, and infrastructure protection is a complicated beast at the best of times. Sometimes it makes sense to try and develop these features for your customers. And sometimes it makes sense to go out and acquire those features. In this case, Druva has realised that CloudRanger gets them to a point in their product development far quicker than they may have gotten to under their own steam. The point of this acquisition isn’t that the good folks at Druva don’t have the chops to deliver what CloudRanger does already, but now they can move on to other platform enhancements. This does assume that the acquisition will go smoothly, but given that this doesn’t appear to be a hostile takeover, I’m assuming that part will go well.

Druva have done a lot of cool stuff recently, and I do like their approach to data protection (management?) that has differentiated itself from some of the more traditional approaches in the marketplace. CloudRanger gives them solid capability with AWS workloads, and I imagine Azure will be on the radar as well. I’m looking forward to seeing how this plays out, and what impact it has on some of their competitors in the space.

OT – New Site Sponsor – Vembu

Please welcome Vembu Technologies as a sponsor of PenguinPunk.net. They are a data protection company that has been around for some time now with a comprehensive suite of products aimed at small to medium enterprises. You can read more about them here. I’m looking forward to taking their stuff for a spin in the lab in the next little while to see what they can really do.

The idea that I’m accepting sponsorship money for this blog doesn’t sit well with some folks. But I’ve been maintaining this site for over ten years now, and sponsorship is one way I can keep getting to the big tech conferences and events that are so critical (I think) to understanding what’s happening in the industry. It doesn’t mean you’ll now be bombarded with advertorials from the companies that sponsor me. Any paid-for content carries a disclaimer up front so we’re all clear about who’s paying for it and what it is. But running a blog as a hobby still costs money, and I’ve been reaching into my own pocket a lot for some of this stuff. And while I’m shilling for the site, my rates are reasonable and the delivery model is simple. Feel free to get in contact via email / Twitter / whatever if it’s something you might like to do.

Cohesity – Cloud Edition for Azure – A Few Notes

I deployed Cohesity Cloud Edition in Microsoft Azure recently and took a few notes. I’m the first to admit that I’m completely hopeless when it comes to fumbling my way about Azure, so this probably won’t seem as convoluted a process to you as it did to me. If you have access to the documentation section of the Cohesity support site, there’s a PDF you can download that explains everything. I won’t go into too much detail but there are a few things to consider. There’s also a handy solution brief on the Cohesity website that sheds a bit more light on the solution.

 

Process

The installation requires a Linux VM to be set up in Azure (a small one – DS1_V2 Standard). Just like in the physical world, you need to think about how many nodes you want to deploy in Azure (this will be determined largely by how much you’re trying to protect). As part of the setup you edit a Cohesity-provided JSON file with a whole bunch of cool stuff like Application IDs and Keys and Tenant IDs.

Subscription ID

Specify the subscription ID for the subscription used to store the resources of the Cohesity Cluster.

WARNING: The subscription account must have owner permissions for the specified subscription.

Application ID

Specify the Application ID assigned by Azure during the service principal creation process.

Application Key

Specify the Application key generated by Azure during the service principal creation process that is used for authentication.

Tenant ID

Specify the unique Tenant ID assigned by Azure.
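For illustration, here’s roughly what populating those fields might look like. The key names and values below are placeholders of mine – the actual Cohesity-provided template will have its own layout, so treat this as a sketch only.

```python
# A sketch only - the Cohesity-provided JSON template will have its own layout.
# These are the four fields described above, with placeholder values.
import json

azure_config = {
    "subscriptionId": "00000000-0000-0000-0000-000000000000",  # needs owner permissions
    "applicationId": "11111111-1111-1111-1111-111111111111",   # from service principal creation
    "applicationKey": "<application-key>",                      # generated by Azure for authentication
    "tenantId": "22222222-2222-2222-2222-222222222222",
}

with open("cohesity_azure_config.json", "w") as f:
    json.dump(azure_config, f, indent=2)
```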

The Linux VM then goes off and builds the cluster in the location you specify with the details you’ve specified. If you haven’t done so already, you’ll need to create a Service Principal as well. Microsoft has some useful documentation on that here.

 

Limitations

One thing to keep in mind is that, at this stage, “Cohesity does not support the native backup of Microsoft Azure VMs. To back up a cloud VM (such as a Microsoft Azure VM), install the Cohesity agent on the cloud VM and create a Physical Server Protection Job that backs up the VM”. So you’ll see that, even if you add Azure as a source, you won’t be able to perform VM backups in the same way you would with vSphere workloads, as “Cloud Edition only supports registering a Microsoft Azure Cloud for converting and cloning VMware VMs. The registered Microsoft Azure Cloud is where the VMs are cloned to”. This is the same across most public cloud platforms, as Microsoft, Amazon and friends aren’t terribly interested in giving out that kind of access to the likes of Cohesity or Rubrik. Still, if you’ve got the right networking configuration in place, you can back up your Azure VMs either to the Cloud Edition or to an on-premises instance (if that works better for you).

 

Thoughts

I’m on the fence about “Cloud Editions” of data protection products, but I do understand why they’ve come to be a thing. Enterprises have insisted on a lift and shift approach to moving workloads to public cloud providers and have then panicked about being able to protect them, because the applications they’re running aren’t cloud-native and don’t necessarily work well across multiple geos. And that’s fine, but there’s obviously an overhead associated with running cloud editions of data protection solutions. And it feels like you’re just putting off the inevitable requirement to re-do the whole solution. I’m all for leveraging public cloud – it can be a great resource to get things done effectively without necessarily investing a bunch of money in your own infrastructure. But you need to re-factor your apps for it to really make sense. Otherwise you find yourself deploying point solutions in the cloud in order to avoid doing the not so cool stuff.

I’m not saying that this type of solution doesn’t have a place. I just wish it didn’t need to be like this sometimes …

What’s New With Zerto?

Zerto held their annual conference (ZertoCON) last week in Boston. I didn’t attend, but I did have time to catch up with Rob Strechay prior to Zerto making some announcements around the company and future direction. I thought I’d cover those here.

 

IT Resilience Platform

The first announcement revolved around the “IT Resilience Platform”. The idea behind the strategy is that backup, disaster recovery and cloud mobility solutions are combined into a single, simple, scalable platform. Strechay says that “this strategy combines continuous availability, workload mobility, and multi-cloud agility to ensure you can withstand any disruption, leverage new technology seamlessly, and move forward with confidence”. They’ve found that Zerto is being used both for unplanned and planned disruptions, and they’ve also been seeing a lot more activity resolving ransomware and security incidents. From a planned outage perspective, DC consolidation has been a big part of the planned disruption activity as well.

What’s driving this direction? According to Strechay, companies are looking for fewer point solutions. They’re also seeing backup and DR activities converging. Cloud is driving this technology convergence and is changing the way data protection is being delivered.

  • Cloud for backup
  • Cloud for DR
  • Application mobility

“It’s good if it’s done properly”. Zerto tell me they haven’t rushed into this and are not taking the approach lightly. They see IT Resilience as a combination of  Backup, DR Replication, and Hybrid Cloud. Strechay told me that Zerto are going to stay software only and will partner on the hardware side where required. So what does it look like conceptually?

[image courtesy of Zerto]

Think of this as a mode of transport. The analytics and control is like the navigation system, the orchestration and automation layer is the steering wheel, and continuous data protection is the car.

 

Vision for the Future of Backup

Strechay also shared with me Zerto’s vision for the future of backup. In short, “it needs to change”. They really want to move away from the concept of periodic protection to continuous, journal-based protection delivering seconds of RPO at scale to meet customer expectations. How are they going to do this? The key differentiation will be CDP combined with best of breed replication.
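If you’ve not come across journal-based protection before, here’s a toy sketch of the idea – every write is recorded with a timestamp, so you can reconstruct state at (almost) any moment rather than only at the last periodic backup. It’s a conceptual illustration of mine, not Zerto’s implementation.

```python
# Conceptual only - not Zerto's implementation. A journal records every write,
# so state can be reconstructed at (almost) any point in time.
from bisect import bisect_right

class WriteJournal:
    def __init__(self):
        self._entries = []  # (timestamp, block_id, data), appended in time order

    def record(self, ts: float, block_id: int, data: bytes) -> None:
        self._entries.append((ts, block_id, data))

    def state_at(self, ts: float) -> dict:
        """Replay the journal up to 'ts' to reconstruct block state at that instant."""
        idx = bisect_right([e[0] for e in self._entries], ts)
        state = {}
        for _, block_id, data in self._entries[:idx]:
            state[block_id] = data
        return state

journal = WriteJournal()
journal.record(1.0, 7, b"v1")
journal.record(2.5, 7, b"v2")
print(journal.state_at(2.0))  # {7: b'v1'} - a recovery point between the two writes
```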

 

Zerto 7 Preview

Strechay also shared some high level details of Zerto 7, with key features including:

  • Intelligent index and search
  • Elastic journal
  • Data protection workflows
  • Architecture enhanced
  • LTR targets

There’ll be a new and enhanced user experience – they’re busy revisiting workflows and enhancing a number of them (e.g. reducing clicks, enhanced APIs, etc). They’ll also be looking at features such as prescriptive analytics (what if I added more VMs to this journal?). They’re aiming for a release in Q1 2019.

 

Thoughts

The way we protect data is changing. Companies like Zerto, Rubrik and Cohesity are bringing a new way of thinking to an age old problem. They’re coming at it from slightly different angles as well. This can only be a good thing for the industry. A lot of the technical limitations that we faced previously have been removed in terms of bandwidth and processing power. This provides the opportunity to approach the problem from the business perspective. Rather than saying “we can’t do that”, we have the opportunity to say “we can do that”. That doesn’t mean that scale is a simple thing to manage, but it seems like there are more ways to solve this problem than there have been previously.

I’ve been a fan of Zerto’s approach for some time. I like the idea that a company has shared their new vision for data protection some months out from actually delivering the product. It makes a nice change from companies merely regurgitating highlights from their product release notes (not that that isn’t useful at times). Zerto have a rich history of delivering CDP solutions for virtualised environments, and they’ve made some great inroads with cloud workload protection as well. The idea of moving away from periodic data protection to something continuous is certainly interesting, and obviously fits in well with Zerto’s strengths. It’s possibly not a strategy that will work well in every situation, particularly with smaller environments. But if you’re leveraging replication technologies already, it’s worth looking at how Zerto might be able to deliver a more complete solution for your data protection requirements.

Cohesity Basics – Cloud Tier

I’ve been doing some work with Cohesity in our lab and thought it worth covering some of the basic features that I think are pretty neat. In this edition of Cohesity Basics, I thought I’d quickly cover off how to get started with the “Cloud Tier” feature. You can read about Cohesity’s cloud integration approach here. El Reg did a nice write-up on the capability when it was first introduced as well.

 

What Is It?

Cohesity have a number of different technologies that integrate with the cloud, including Cloud Archive and Cloud Tier. With Cloud Archive you can send copies of snapshots up to the cloud to keep as a copy separate to the backup data you might have replicated to a secondary appliance. This is useful if you have some requirement to keep a monthly or six-monthly copy somewhere for compliance reasons. Cloud Tier is an overflow technology that allows you to have cold data migrated to a cloud target when the capacity of your environment exceeds 80%. Note that “coldness” is defined in this instance as older than 60 days. That is, you can’t just pump a lot of data into your appliance to see how this works (trust me on that). The coldness level is configurable, but I recommend you engage with Cohesity support before you go down that track. It’s also important to note that once you turn on Cloud Tier for a View Box, you can’t turn it off again.
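As a way of thinking about it, the tiering rule as described boils down to something like the sketch below – data only becomes a candidate for the cloud target when the appliance is over the capacity threshold and the data is older than the coldness window. This is my conceptual illustration, not Cohesity’s code.

```python
# Conceptual sketch of the tiering rule as described - not Cohesity's code.
from datetime import datetime, timedelta, timezone

CAPACITY_THRESHOLD = 0.80       # tiering kicks in above 80% used capacity
COLDNESS = timedelta(days=60)   # default "cold" age; configurable via Cohesity support

def eligible_for_cloud_tier(used_fraction: float, last_accessed: datetime) -> bool:
    over_capacity = used_fraction > CAPACITY_THRESHOLD
    cold = (datetime.now(timezone.utc) - last_accessed) > COLDNESS
    return over_capacity and cold

# Example: an 85% full appliance with data untouched for ~90 days -> True
print(eligible_for_cloud_tier(0.85, datetime.now(timezone.utc) - timedelta(days=90)))
```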

 

How Do I?

Here’s how to get started in 10 steps or less. Apologies if the quality of some of these screenshots is not great. The first thing to do is register an External Target on your appliance. In this example I’m running version 5.0.1 of the platform on a Cohesity Virtual Edition VM. Click on Protection – External Target.

Under External Targets you’ll see any External Targets you’ve already configured. Select Register External Target.

You’ll need to give it a name and choose whether you’re using it for Archival or Cloud Tier. This choice also impacts some of the types of available targets. You can’t, for example, configure a NAS or QStar target for use with Cloud Tier.

Selecting Cloud Tier will provide you with more cloudy targets, such as Google, AWS and Azure.

 

In this example, I’ve selected S3 (having already created the bucket I wanted to test with). You need to know the Bucket name, Region, Access Key ID and your Secret Access Key.
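As an aside, if you want to sanity-check the bucket and credentials before typing them into the UI, something like the boto3 snippet below will do it. This isn’t part of Cohesity’s workflow – the bucket name, region and keys are placeholders.

```python
# Optional pre-check (not part of Cohesity's workflow): confirm the access key,
# secret key, region and bucket actually work before registering the target.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client(
    "s3",
    region_name="ap-southeast-2",          # placeholder region
    aws_access_key_id="AKIA...",            # Access Key ID (placeholder)
    aws_secret_access_key="<secret-key>",   # Secret Access Key (placeholder)
)

try:
    s3.head_bucket(Bucket="my-cohesity-cloudtier-bucket")
    print("Bucket reachable with these credentials - OK to register it as an External Target.")
except ClientError as err:
    print(f"Check the bucket name / credentials: {err}")
```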

If you have it all correct, you can click on Register and it will work. If you’ve provided the wrong credentials, it won’t work. You then need to enable Cloud Tier on the View Box. Go to Platform – Cluster.

Click on View Boxes and then click on the three dots on the right to Edit the View Box configuration.

You can then toggle Cloud Tier and select the External Target you want to use for Cloud Tier.

Once everything is configured (and assuming you have some cold data to move to the cloud and your appliance is over 80% full) you can click on the cluster dashboard and you’ll see an overview of Cloud Tier storage in the Storage part of the overview.

 

 

Thoughts?

All the kids are getting into cloud nowadays, and Cohesity is no exception. I like this feature because it can help with managing capacity on your on-premises appliance, particularly if you’ve had a sudden influx of data into the environment, or you have a lot of old data that you likely won’t be accessing. You still need to think about your egress charges (if you need to get those cold blocks back) and you need to think about what the cost of that S3 bucket (or whatever you’re using) really is. I don’t see the default coldness level being a problem, as you’d hope that you sized your appliance well enough to cope with a certain amount of growth.

Features like this demonstrate both a willingness on behalf of Cohesity to embrace cloud technologies, as well as a focus on ease of use when it comes to reasonably complicated activities like moving protection data to an alternative location. My thinking is that you wouldn’t necessarily want to find yourself in the position of having to suddenly shunt a bunch of cold data to a cloud location if you can help it (although I haven’t done the maths on which is a better option) but it’s nice to know that the option is available and easy enough to setup.

Random Short Take #5

So it’s been over six months since I did one of these, and it’s clear that I’m literally rubbish at doing them regularly.

Cohesity – SQL Log Backup Warning

This one falls into the category of “unlikely that it will happen to you but might be worth noting”. I’ve been working with some Cohesity gear in the lab recently and came across a warning, not an error, when I was doing a SQL backup.

But before I get to that, it’s important to share the context of the testing. With Cohesity, there’s some support for protecting Microsoft SQL workloads that live on Windows Failover Clusters (as well as AAGs – but that’s a story for another time). You configure these separately from your virtual sources, and you install an agent on each node in the cluster. In my test environment I’ve created a simple two-node Windows Failover Cluster based on Windows 2016. It has some shared disk and a heartbeat network (a tip of the hat to Windows clusters of yore). I’ve cheated, because it’s virtualised, but needs must and all that. I’m running SQL 2014 on top of this. It took me a little while to get that working properly, mainly because I’m a numpty with SQL. I finally had everything set up when I noticed the following warning after each SQL protection job ran.

I was a bit confused as I had set the databases to full recovery mode. Of course, the more it happened, the more I got frustrated. I fiddled about with permissions on the cluster, manual maintenance jobs, database roles and all manner of things I shouldn’t be touching. I even went for a short walk. The thing I didn’t do, though, was click the arrow on the left hand side of the job. That expands the job run details so you can read more about what happened. If I’d done that, I would have seen this error straight away. And the phrase “No databases available for log backup” would have made more sense.

And I would have realised that the reason I was getting the log backup warning was because it was skipping the system databases and, as I didn’t have any other databases deployed, it wasn’t doing any log backups. This is an entirely unlikely scenario in the real world, because you’ll be backing up SQL clusters that have data on them. If they don’t have data on them, they’re likely low value items and won’t get protected. The only situation where you might come across this is if you’re testing your infrastructure before deploying data to it. I resolved the issue by creating a small database. The log backups then went through without issue.
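For what it’s worth, clearing the warning was as simple as creating a small database in full recovery mode. A rough sketch of doing that programmatically is below – the server name, driver and database name are placeholders, and pyodbc is my choice rather than anything Cohesity requires.

```python
# Roughly how I cleared the warning: create a small user database in full
# recovery mode so the log backup has something to work on.
# Server, driver and database names are placeholders; pyodbc is an assumption.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sqlcluster01;DATABASE=master;Trusted_Connection=yes;",
    autocommit=True,  # CREATE/ALTER DATABASE can't run inside a transaction
)
cur = conn.cursor()
cur.execute("CREATE DATABASE CohesityLogTest;")
cur.execute("ALTER DATABASE CohesityLogTest SET RECOVERY FULL;")
conn.close()
```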

For reference, the DataPlatform version I’m using is 5.0.1.

Rubrik Announces Polaris GPS

Rubrik recently announced their GPS module for Polaris. The product name gives me shivers because it’s the name of a data centre I spent a lot of weekends in years ago. In any case, Polaris is a new platform being built in parallel with Rubrik’s core offering. Chris Wahl very kindly took us through what some of the platform capabilities are.

 

Polaris What?

Polaris is the SaaS platform itself, and Rubrik are going to build modules for it (as well as allowing 3rd parties to contribute). So let’s not focus too much on Polaris, and more on those modules. The idea is to provide a unified control plane with a single point of control. According to Rubrik, there is going to be a significant focus on a Great User Experience™.

“Rubrik Polaris is a consumable resource that you tap into, rather than a pile of infrastructure that you setup and manage”

 

I’m A Polaris

The first available module is “Rubrik Polaris GPS”. The idea is that you can:

  • Command and control all Rubrik CDM instances, globally;
  • Monitor for compliance and leverage alerts to dig into trouble spots;
  • Work with open and documented RESTful APIs with visibility into a global data footprint. Automate and orchestrate all of Rubrik from a single entry point.

The creation and enforcement of business SLA policies is based on flexible criteria: geography, installation, compliance needs, planned growth, data migrations, etc. You can start to apply various policies to data – some you might want to keep in a particular geographical zone, some you might need replicated, etc.

Another cool thing is that the APIs are open and documented, making third-party integration (or roll your own stuff) a real possibility.
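To illustrate what that opens up, here’s the sort of thing you could do from a single entry point. Note that the endpoint, fields and token below are entirely hypothetical placeholders of mine – refer to the actual documented API rather than this sketch.

```python
# Entirely hypothetical endpoint and fields - not the documented Polaris API.
# Just illustrates single-entry-point automation against an open REST API:
# pull compliance data for every managed cluster in one call.
import requests

POLARIS_URL = "https://example-polaris.example.com/api"  # placeholder
TOKEN = "<api-token>"                                    # placeholder

resp = requests.get(
    f"{POLARIS_URL}/clusters/compliance",
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
for cluster in resp.json():
    print(cluster.get("name"), cluster.get("sla_compliance"))
```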

From a security perspective, there’s no currently available on-premises version but that’s a possibility in the future (for dark sites). You also need to add clusters manually (i.e. securely) – clusters won’t just automatically join the platform. The idea is, according to Rubrik, to “show you enough data to make actionable decisions, but don’t show too much”. This seems like a solid approach.

 

Questions?

Is my backup source data available to Polaris?

– No. The backup source data is available only to the respective Clusters. Polaris has access only to activities and reports on Clusters that have been granted access to Polaris.

Is Polaris a separate CDM version?

– No. Polaris is a SaaS service.

What is the maximum number of Clusters that can be managed by Polaris?

– There is no hard limit to the number of Clusters that can be managed by Polaris.

How secure is Polaris GPS?

– Polaris uses multiple levels of security to protect customer data and service: authentication, secure connection, data security, data isolation, data residency, etc.

 

Thoughts

So what problem are they trying to solve? Well, what if you wanted to apply global protection policies to multiple appliances? GPS could be leveraged here. This first module isn’t going to be very useful for folks who are running a single deployment of Briks, but it’s going to be very interesting for folks who’ve got a large deployment that may or may not be geographically dispersed. The GPS module is going to be very handy, and shows the potential of the platform. I’m keen to see what else they come up with to leverage the offering. I’m also interested to see whether there’s much uptake from third-parties. These extensible platforms always seem like a great idea, but I often see limited support from third-parties with the vendor doing the bulk of the heavy lifting. That said, I’m more than happy to see that Rubrik have taken this open approach with the API, as it does allow for some potentially interesting integrations to happen.

If you’ve been keeping an eye on the secondary storage market, you’ll see that the companies offering solutions are well beyond simply delivering data protection storage with backup and recovery capabilities. There’s a whole lot more that can be done with this data, and Rubrik are focused on delivering more out of the platform than just basic copy data management. The idea of Polaris delivering a consolidated, SaaS-based view of infrastructure is likely the first step in a bigger play for them. I think this is a good way to get people using their infrastructure differently, and I like that these companies are working to make things simpler to use in order to deliver value back to the business. Read more about Polaris GPS here.

Druva Announces Cloud Platform Enhancements

Druva Cloud Platform

Data protection has been on my mind quite a bit lately. I’ve been talking to a number of vendors, partners and end users about data protection challenges and, sometimes, successes. With World Backup Day coming up I had the opportunity to get a briefing from W. Curtis Preston on Druva’s Cloud Platform and thought I’d share some of the details here.

 

What is it?

Druva Cloud Platform is Druva’s tool for tying together their as-a-Service data protection solution within a (sometimes maligned) single pane of glass. The idea behind it is you can protect your assets – from end points through to your cloud applications (and everything in between) – all from the one service, and all managed in the one place.

[image courtesy of Druva]

 

Druva Cloud Platform was discussed at Tech Field Day Extra at VMworld US 2017, and now fully supports Phoenix (the DC protection offering), inSync (end point & SaaS protection), and Apollo (native EC2 backup). There’s also some nice Phoenix integration with VMware Cloud on AWS (VMC).

[image courtesy of Druva]

 

Druva’s Cloud Credentials

Druva provide a nice approach to as-a-Service data protection that’s a little different from a number of competing products:

  • You don’t need to see or manage backup server nodes;
  • Server infrastructure security is not your responsibility;
  • Server nodes are spawned / stopped based on load;
  • S3 is less expensive (and faster with parallelisation);
  • There are no egress charges during restore; and
  • No on-premises component or CapEx is required (although you can deploy a cache node for quicker restore to on-premises).

 

Thoughts

I first encountered Druva at Tech Field Day Extra VMworld US in 2017 and was impressed by both the breadth of their solution and the cloudiness of it all compared to some of the traditional vendor approaches to protecting cloud-native and traditional workloads via the cloud. They have great support for end point protection, SaaS and traditional, DC-flavoured workloads. I’m particularly a fan of their willingness to tackle end point protection. When I was first starting out in data protection, a lot of vendors were talking about how they could protect businesses from data loss. Then it seemed like it all became a bit too hard, and maybe we just started to assume that the data was safe somewhere in the cloud or data centre (well, not really, but we’re talking feelings, not facts, for the moment). End point protection is not an easy thing to get right, but it’s a really important part of data protection. Because ultimately you’re protecting data from bad machines, bad events and bad people. Sometimes the people aren’t bad at all, just a little bit silly.

Cloud is hard to do well. Lifting and shifting workloads from the DC to the public cloud has proven to be a challenge for a lot of enterprises. And taking a lift and shift approach to data protection in the cloud is also proving to be a bit of a challenge, not least because people struggle with the burstiness of cloud workloads and need protection solutions that can accommodate those requirements. I like Druva’s approach to data protection, at least from the point of view of their “cloud-nativeness” and their focus on protecting a broad spectrum of workloads and scenarios. Not everything they do will necessarily fit in with the way you do things in your business, but there are some solid, modern foundations there to deliver a comprehensive service. And I think that’s a nice thing to build on.

Druva are also presenting at Cloud Field Day 3 in early April. I recommend checking out their session. Justin also did a post in anticipation of the session that is well worth a read.