Rubrik Basics – Add Cluster To Polaris

I wrote about Rubrik’s Polaris platform when it was first announced around 3 years ago. When you buy into the Rubrik solution, you get access to Polaris GPS by default (I think). Other modules, such as Sonar or Radar, are licensed separately. In any case, GPS is a handy tool if you have more than one Rubrik cluster under your purview. I thought it would be useful for folks out there to see how simple it is to add your Rubrik cluster to Polaris. I know that most of these basics articles seem like fairly rudimentary fare, but I find it helpful when evaluating new tech to understand how the pieces fit together in terms of an actual implementation. I’m hopeful that someone else will find this of use as well.

Note that you’ll need Internet connectivity of some sort between your environment and the Polaris SaaS environment. You also need to consider the security of your environment in terms of firewalling, multi-factor authentication, RBAC, and so on. It’s also important to note that removing a cluster from Polaris currently involves engaging Rubrik Support. I can only imagine this will change in the future.

When you get onboard with Rubrik, you’ll get set up with the Polaris portal. Access to this is usually via customer.my.rubrik.com. Log in with your credentials.

If you haven’t done anything in the environment previously, you’ll be prompted to add your first cluster every time you log in. Eventually it’ll wear you down and you’ll find yourself clicking on the + sign.

This will give you a single-use token to add to your cluster. Click on Copy to clipboard.

Now login to the Rubrik web UI of the cluster you want to add to Polaris. Click on the Gear icon, and then Cluster Settings (under System Configuration).

Paste the token in and click Save.

It might take a minute, but you should be able to see your cluster in the Polaris dashboard.
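
If you’d rather script this step than click through the UI, the same token can in principle be submitted to the cluster via its REST API. The sketch below is a hypothetical illustration only: the endpoint path and payload field are my assumptions, not verified Rubrik CDM API calls, so treat the UI workflow above as the supported path.

```python
import requests

CDM_HOST = "https://cluster.example.internal"  # your cluster's web UI address (placeholder)
API_TOKEN = "..."                              # a CDM API token for an admin account (placeholder)
POLARIS_TOKEN = "<single-use token copied from Polaris>"

# Assumed endpoint and payload field; check the CDM API docs for the real resource.
resp = requests.post(
    f"{CDM_HOST}/api/internal/cluster/me/register",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={"registrationToken": POLARIS_TOKEN},
    verify=False,  # lab only; use proper certificates in production
)
resp.raise_for_status()
print("Registration submitted:", resp.status_code)
```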

Rubrik Basics – Add LDAP

I thought I’d run through the basics of adding LDAP support to a Rubrik Edge cluster. I’ve written previously about multi-tenancy considerations with Rubrik, and thought it might be useful to start down that path in the lab to demonstrate some of the process. It’s not a terribly difficult task, but I did find a little trial and error was required. I suspect that’s because of some environmental issues on my side, rather than the Rubrik side of things. Anyway, let’s get started. Click on the Gear / Settings icon in the Web UI. Then select Users under Access Management.

Click on the LDAP Servers tab and click on “Add LDAP Server”.

You’ll be presented with the Add LDAP Server workflow window.

I messed this up a few times in my environment, but this is what worked for me.

Domain name: domainname.com.au

Base DN: dc=domainname,dc=com,dc=au

Bind DN or Username: [email protected]

Password: *******

Click Next to continue.

I pointed to one of the Active Directory servers in the environment. This went better once I’d added the domain name to the cluster’s DNS search configuration. The port I used was 389, the standard LDAP port, though you’ll see variations on that (such as 636 for LDAPS) in various articles across the Internet.
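
If you’re running into the same trial and error I did, it can help to verify the bind account, password, and Base DN outside of Rubrik first. Here’s a minimal sketch using the ldap3 Python library (pip install ldap3); the server name and bind account below are placeholders standing in for the values in your environment (the actual bind account above is redacted).

```python
from ldap3 import Server, Connection, ALL

# Placeholder host and credentials matching the example configuration above
server = Server("ad01.domainname.com.au", port=389, get_info=ALL)
conn = Connection(
    server,
    user="user@domainname.com.au",  # Bind DN or Username (placeholder)
    password="*******",
    auto_bind=True,  # raises an exception immediately if the bind fails
)

# Confirm the Base DN is searchable and a simple filter returns entries
conn.search(
    search_base="dc=domainname,dc=com,dc=au",
    search_filter="(sAMAccountName=dan*)",
    attributes=["cn", "sAMAccountName"],
)
for entry in conn.entries:
    print(entry.entry_dn)
conn.unbind()
```

If the bind or the search fails here, it’s going to fail in the Rubrik workflow too, and the ldap3 exceptions are a lot more descriptive.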

If that works, you then have the option to enable MFA integration.

Toggling the button will give you the option to add two-step verification. There are some articles on the Internet that provide further guidance on that, and this video is quite useful too.

Once you’ve added your directory source, it’s time to assign roles to a user.

Click on Assign Roles, then select the directory you’d like to search in from the drop-down.

In this example, there’s the local user directory, and the domain source that I added previously.

If I search for people called Dan in this directory, it’s not too hard to find my username.

I can then assign a role to my directory username. By default, the configured roles are Administrator and ReadOnlyAdmin.

Now my AD account is listed under the users and I can log in to CDM using my domain credentials.
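
If you need to do this for more than a handful of users, the Rubrik Python SDK (rubrik-cdm on PyPI) exposes generic HTTP helpers you can point at the relevant endpoints. A hedged sketch follows: the Connect() call and the generic get/post methods are real SDK features, but the user and authorization endpoints shown are my assumptions rather than verified API paths.

```python
import rubrik_cdm

# Connect() reads rubrik_cdm_node_ip, rubrik_cdm_username and
# rubrik_cdm_password from environment variables.
rubrik = rubrik_cdm.Connect()

# Assumed endpoints and payload, for illustration only; check the CDM API
# docs for the real user and authorization resources.
users = rubrik.get("internal", "/user?username=dan")
rubrik.post("internal", "/authorization/role/read_only_admin", {
    "principals": [users[0]["id"]],
})
```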

And that’s it. If you want to read more about Rubrik and AD integration, including some neat automation, check out this article from Frederic Lhoest.

Rubrik Basics – Edge Deployment

It’s been a while since I’ve posted any basic how-to articles, but I’ve recently been doing some work with Rubrik Edge in the lab and thought I’d run through the basics. There’s a new document outlining the process on the articles page.

If you’re unfamiliar with Rubrik Edge, it’s Rubrik’s remote office / branch office (ROBO) solution in the form of a virtual appliance that comes in 5 and 10TB versions. The product page is here, and the datasheet can be found here. It runs on VMware vSphere, Nutanix AHV, and Microsoft Hyper-V, so it’s pretty handy in terms of supported hypervisor deployment options. The cool thing about Edge is that you can deploy it on reasonably small amounts of hardware, which can be a fairly common scenario in edge deployments. You can also have 100 (I think) Edge appliances replicating data back to a Rubrik physical cluster. You can’t, however, use it as a target for replication.

In any case, I’ll be posting a few of these basics articles over the next few weeks to give readers a feel for how easy it is to get up and running with the platform.

Random Short Take #53

Welcome to Random Short Take #53. A few players have worn 53 in the NBA including Mark Eaton, James Edwards, and Artis Gilmore. My favourite though was Chocolate Thunder, Darryl Dawkins. Let’s get random.

  • I love Preston’s series of articles covering the basics of backup and recovery, and this one on backup lifecycle is no exception.
  • Speaking of data protection, Druva has secured another round of funding. You can read Mellor’s thoughts here, and the press release is here.
  • More data protection press releases? I’ve got you covered. Zerto released one recently about cloud data protection. Turns out folks like cloud when it comes to data protection. But I don’t know that everyone has realised that there’s some work still to do in that space.
  • In other press release news, Cloud Propeller and Violin Systems have teamed up. Things seem to have changed a bit at Violin Systems since StorCentric’s acquisition, and I’m interested to see how things progress.
  • This article on some of the peculiarities associated with mainframe deployments in the old days by Anthony Vanderwerdt was the most entertaining thing I’ve read in a while.
  • Alastair has been pumping out a series of articles around AWS principles, and this one on understanding your single points of failure is spot on.
  • Get excited! VMware Cloud Director 10.2.2 is out now. Read more about that here.
  • A lot of people seem to think it’s no big thing to stretch Layer 2 networks. I don’t like it, and this article from Ethan Banks covers a good number of reasons why you should think again if you’re that way inclined.

Random Short Take #52

Welcome to Random Short Take #52. A few players have worn 52 in the NBA including Victor Alexander (I thought he was getting dunked on by Shawn Kemp but it was Chris Gatling). My pick is Greg Oden though. If only his legs were the same length. Let’s get random.

  • Penguin Computing and Seagate have been doing some cool stuff with the Exos E 5U84 platform. You can read more about that here. I think it’s slightly different to the AP version that StorONE uses, but I’ve been wrong before.
  • I still love Fibre Channel (FC), as unhealthy as that seems. I never really felt the same way about FCoE though, and it does seem to be deader than tape.
  • VMware vSAN 7.0 U2 is out now, and Cormac dives into what’s new here. If you’re in the ANZ timezone, don’t forget that Cormac, Duncan and Frank will be presenting (virtually) at the Sydney VMUG *soon*.
  • This article on data mobility from my preferred Chris Evans was great. We talk a lot about data mobility in this industry, but I don’t know that we’ve all taken the time to understand what it really means.
  • I’m a big fan of Tech Field Day, and it’s nice to see presenting companies take on feedback from delegates and putting out interesting articles. Kit’s a smart fellow, and this article on using VMware Cloud for application modernisation is well worth reading.
  • Preston wrote about some experiences he had recently with almost failing drives in his home environment, and raised some excellent points about resilience, failure, and caution.
  • Speaking of people I worked with briefly, I’ve enjoyed Siobhán’s series of articles on home automation. I would never have the patience to do this, but I’m awfully glad that someone did.
  • Datadobi appears to be enjoying some success, and have appointed Paul Repice to VP of Sales for the Americas. As the clock runs down on the quarter, I’m going two for one, and also letting you know that Zerto has done some work to enhance its channel program.

Random Short Take #49

Happy new year and welcome to Random Short Take #49. Not a great many players have worn 49 in the NBA (2 as it happens). It gets better soon, I assure you. Let’s get random.

  • Frederic has written a bunch of useful articles around useful Rubrik things. This one on setting up authentication to use Active Directory came in handy recently. I’ll be digging in to some of Rubrik’s multi-tenancy capabilities in the near future, so keep an eye out for that.
  • In more things Rubrik-related, this article by Joshua Stenhouse on fully automating Rubrik EDGE / AIR deployments was great.
  • Speaking of data protection, Chris Colotti wrote this useful article on changing the Cloud Director database IP address. You can check it out here.
  • You want more data protection news? How about this press release from BackupAssist talking about its partnership with Wasabi?
  • Fine, one more data protection article. Six backup and cloud storage tips from Backblaze.
  • Speaking of press releases, WekaIO has enjoyed some serious growth in the last year. Read more about that here.
  • I loved this article from Andrew Dauncey about things that go wrong and learning from mistakes. We’ve all likely got a story about something that went so spectacularly wrong that you only made that mistake once. Or twice at most. It also reminds me of those early days of automated ESX 2.5 builds and building magical installation CDs that would happily zap LUN 0 on FC arrays connected to new hosts. Fun times.
  • Finally, I was lucky enough to talk to Intel Senior Fellow Al Fazio about what’s happening with Optane, how it got to this point, and where it’s heading. You can read the article and check out the video here.

Random Short Take #48

Welcome to Random Short Take #48. Not a great many players have worn 48 in the NBA (2 as it happens). It gets better soon, I assure you. Let’s get random.

  • I may or may not have a few bezels in my home office, so I enjoyed this article from Mellor on bezels.
  • Another great article from Preston reflecting on 2020 and data protection. And the reading and listening part is important too.
  • If your business is part of VCPP, this article on what’s new with pricing provides a good summary of what’s changed. If you’re not, it’s probably not going to make as much sense.
  • This is a great article on Apple’s OCSP and how things can go south pretty quickly.
  • Datadobi and Wasabi recently announced a technology alliance partnership – you can read more about that here.
  • The SolarWinds attack and some things you should know.

If you’ve read this far, thanks for reading. You may have noticed that I wrote fewer posts this year. Some of that is due to increased workload at the day job, some of that is related to non-blog writing projects, and some of that has been general mental fatigue. I also couldn’t really get into the big vendor virtual conferences in the way that I’d hoped to, and this had an impact on content output to an extent.

In any case, wherever you are, stay safe, happy holidays, and see you on the line next year.

Druva Update – Q3 2020

I caught up with my friend W. Curtis Preston from Druva a little while ago to talk about what the company has been up to. It seems like quite a bit, so I thought I’d share some notes here.


DXP and Company Update

Firstly, Druva’s first conference, DXP, is coming up shortly. There’s an interesting range of topics and speakers, and it looks to be jam packed with useful info. You can find out more and register for that here. The company seems to be going from strength to strength, enjoying 50% year-on-year growth, and 70% for Phoenix in particular (its DC product).

If you’re into Gartner Peer Insights – Druva has taken out the top award in 3 categories – file analysis, DRaaS, and data centre backup. Preston also tells me Druva is handling around 5 million backups a day, for what it’s worth. Finally, if you’re into super fluffy customer satisfaction metrics, Druva is reporting an “industry-leading NPS score of 88” that has been third-party verified.


Product News

It’s Fun To Read The CCPA

If you’re unfamiliar, California has released its version of the GDPR, known as the California Consumer Privacy Act. Druva has created a template for data types that shouldn’t be stored in plain text and can flag them as they’re backed up. It can also do the same thing in email, and you can now run a federated search against both of these things. If anything turns up that shouldn’t be there, you can go and remove the problematic files.
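
Druva hasn’t shared the exact patterns in its template, but conceptually this is pattern matching over content as it’s ingested. Here’s a toy sketch of the idea, with a couple of common US identifiers as stand-ins for whatever data types the template actually defines; a production implementation would be far more careful about false positives.

```python
import re

# Toy stand-ins for CCPA-relevant data types (assumed, not Druva's patterns)
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def flag_plaintext_pii(path):
    """Return the data types found in plain text in a file being backed up."""
    with open(path, errors="ignore") as f:
        text = f.read()
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

print(flag_plaintext_pii("/tmp/example.txt"))
```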

ServiceNow Automation

Druva now has support for automated SNOW ticket creation. It’s based on some advanced logic, too. For example, if a backup fails 3 times, a ticket will be created and can be routed to the people who should be caring about such things.
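
The ServiceNow side of this is a standard Table API call, and the “advanced logic” is presumably something like the threshold check below. A sketch, assuming a SNOW instance URL, basic auth credentials, and an assignment group as placeholders; the three-failure rule comes straight from the example above.

```python
import requests

SNOW_INSTANCE = "https://example.service-now.com"  # placeholder instance
AUTH = ("integration_user", "password")            # placeholder credentials

def maybe_raise_ticket(backup_job, consecutive_failures):
    """Open a ServiceNow incident once a backup job has failed three times."""
    if consecutive_failures < 3:
        return
    resp = requests.post(
        f"{SNOW_INSTANCE}/api/now/table/incident",  # standard SNOW Table API
        auth=AUTH,
        json={
            "short_description": f"Backup job {backup_job} failed "
                                 f"{consecutive_failures} times",
            "assignment_group": "backup-operations",  # placeholder routing
        },
    )
    resp.raise_for_status()
```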

More APIs

There’s been a lot of work done to deliver more APIs, and a more robust RBAC implementation.

DRaaS

DRaaS is currently only for VMware, VMC, and AWS-based workloads. Preston tells me that users are getting an RTO of 15-20 minutes, and an RPO of 1 hour. Druva added failback support a little while ago (one VM at a time). That feature has now been enhanced, and you can failback as many workloads as you want. You can also add a prefix or suffix to a VM name, and Druva has added a failover prerequisite check as well.


Other Notes

In other news, Druva is now certified on VMC on Dell. It’s added support for Microsoft Teams and support for Slack. Both useful if you’ve stopped storing your critical data in email and started storing it in collaboration apps instead.

Storage Insights and Recommendations

There’s also a storage insights feature that is particularly good for unstructured data. Say, for example, that 30% of your backups are media files; you might not want to back those up (unless you’re in the media streaming business, I guess). You can delete bad files from backups, and automatically create an exclusion for those file types.
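
As a back-of-the-envelope illustration of the insight itself: if you can walk a file share or a backup index, the media share is just a size ratio by extension. A quick sketch, with the extension list as an assumption:

```python
from pathlib import Path

MEDIA_EXTENSIONS = {".mp4", ".mov", ".mp3", ".avi", ".mkv"}  # assumed list

def media_share(root):
    """Fraction of bytes under root that belong to media files."""
    total = media = 0
    for p in Path(root).rglob("*"):
        if p.is_file():
            size = p.stat().st_size
            total += size
            if p.suffix.lower() in MEDIA_EXTENSIONS:
                media += size
    return media / total if total else 0.0

print(f"{media_share('/data/share'):.0%} of this share is media")
```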

Support for K8s

Support for everyone’s favourite container orchestration system has been announced, though not yet released. Read about that here. You’ll be able to do a full backup of an entire K8s environment (AWS only in v1). This includes Docker containers, mounted volumes, and DBs referenced in those containers.
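
Druva hasn’t detailed the mechanism, but the resource-level half of a K8s backup is conceptually an export of API objects. Here’s a sketch using the official kubernetes Python client to enumerate Deployments per namespace; volume snapshots and database quiescing are the genuinely hard parts and aren’t shown.

```python
from kubernetes import client, config

config.load_kube_config()  # uses your local kubeconfig
core = client.CoreV1Api()
apps = client.AppsV1Api()

# Walk deployments namespace by namespace; a real backup would also
# snapshot persistent volumes and quiesce any referenced databases.
for ns in core.list_namespace().items:
    name = ns.metadata.name
    for dep in apps.list_namespaced_deployment(name).items:
        print(f"{name}/{dep.metadata.name}: {dep.spec.replicas} replicas")
```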

NAS Backup

Druva has enhanced its NAS backup in two ways, the first of which is performance. Preston tells me the current product is at least 10X faster than it was a year ago. And for customers already using a native recovery mechanism like snapshots, Druva has added the option to back up directly to Glacier, which cuts your cost in half.
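
I don’t know the mechanics of Druva’s implementation, but on AWS “directly to Glacier” generally just means writing objects with a Glacier storage class rather than staging them in a warmer tier first. A minimal boto3 sketch, with the bucket and key names as placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Placeholder bucket and key; the GLACIER storage class trades retrieval
# latency for a much lower per-GB storage price.
with open("backup-chunk-0001.bin", "rb") as chunk:
    s3.put_object(
        Bucket="example-nas-backup-bucket",
        Key="shares/finance/backup-chunk-0001.bin",
        Body=chunk,
        StorageClass="GLACIER",
    )
```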

Oracle Support

For Oracle, Druva has what Preston describes as “two solid options”. Right now there’s an OVA that provides a ready-to-go, appliance-like experience and uses the image copy format (supporting block-level incrementals and incremental merge). The other option will be announced next week at DXP.


Thoughts and Further Reading

Some of these features seem like incremental improvements, but when you put it all together, it makes for some impressive reading. Druva has done a really good job, in my opinion, of sticking with the “built in the cloud, for the cloud” mantra that dominates much of its product design. The big news is the support for K8s, but things like multi-VM failback with the DRaaS solution are nothing to sneeze at. There’s more news coming shortly, and I look forward to covering that. In the meantime, if you have the time, be sure to check out DXP – I think it will be quite an informative event.



Zerto Announces 8.5 and Zerto Data Protection

Zerto recently announced version 8.5 of its product, along with a new offering, Zerto Data Protection (ZDP). I had the good fortune to catch up with Caroline Seymour (VP, Product Marketing) about the news and thought I’d share some thoughts here.


ZDP, Yeah You Know Me

Global Pandemic for $200 Please, Alex

In “these uncertain times”, organisations are facing new challenges:

  • No downtime, no data loss, 24/7 availability
  • Influx of remote work
  • Data growth and sprawl
  • Security threats
  • Acceleration of cloud

Many of these things were already a problem, and the global pandemic has done a great job highlighting them.

“Legacy Architecture”

Zerto paints a bleak picture of the “legacy architecture” adopted by many of the traditional data protection solutions, positing that many IT shops need to use a variety of tools to get to a point where operations staff can sleep better at night. Disaster recovery, for example, is frequently handled via replication for mission-critical applications, with backup being performed via periodic snapshots for all other applications. ZDP aims to bring all this together under one banner of continuous data protection, delivering:

  • Local continuous backup and long-term retention (LTR) to public cloud; and
  • Pricing optimised for backup.

[image courtesy of Zerto]

Features

[image courtesy of Zerto]

So what do you get with ZDP? Some neat features, including:

  • Continuous backup with journal
  • Instant restore from local journal
  • Application consistent recovery
  • Short-term SLA policy settings
  • Intelligent index and search
  • LTR to disk, object or Cloud (Azure, AWS)
  • LTR policies, daily incremental with weekly, monthly or yearly fulls (see the sketch after this list)
  • Data protection workflows

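To get a feel for how a “daily incremental with weekly, monthly or yearly fulls” policy shakes out on a calendar, here’s a small sketch that classifies a given day. The promotion rules (Sunday weeklies, first-of-month monthlies, 1 January yearlies) are my assumptions for illustration, not Zerto’s documented behaviour.

```python
from datetime import date

def backup_type(d):
    """Classify a calendar day under an assumed GFS-style LTR policy."""
    if d.month == 1 and d.day == 1:
        return "yearly full"
    if d.day == 1:
        return "monthly full"
    if d.weekday() == 6:  # Sunday
        return "weekly full"
    return "daily incremental"

# Example: classify the first week of March 2021
for day in range(1, 8):
    d = date(2021, 3, day)
    print(d, "->", backup_type(d))
```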

New Licensing

It wouldn’t be a new software product without some mention of new licensing. If you want to use ZDP, you get:

  • Backup for short-term retention and LTR;
  • On-premises or backup to cloud;
  • Analytics; and
  • Orchestration and automation for backup functions.

If you’re sticking with (the existing) Zerto Cloud Edition, you get:

  • Everything in ZDP;
  • Disaster Recovery for on-premises and cloud;
  • Multi-cloud support; and
  • Orchestration and automation.


Zerto 8.5

A big focus of Zerto’s recently has been VMware on public cloud support, including the various flavours of VMware on Azure, AWS, and Oracle Cloud. There are a bunch of reasons why this approach has proven popular with existing VMware customers looking to migrate from on-premises to public cloud, including:

  • Native VMware support – run existing VMware workloads natively on IaaS;
  • Policies and configuration don’t need to change;
  • Minimal changes – no need to refactor applications; and
  • IaaS benefits – reliability, scale, and operational model.

[image courtesy of Zerto]

New in 8.5

With 8.5, you can now back up directly to Microsoft Azure and AWS. You also get instant file and folder restores to production. There’s now support for VMware on public cloud disaster recovery and data protection across Microsoft Azure VMware Solution, Google Cloud VMware Engine, and the Oracle Cloud VMware Solution. You also get platform automation and lifecycle management features, including:

  • Auto-evacuate for recovery hosts;
  • Auto-populate for recovery hosts; and
  • Encryption capabilities.

And finally, a Zerto PowerShell Cmdlets Module has also been released.


Thoughts and Further Reading

The writing’s been on the wall for some time that Zerto might need to expand its solution offering to incorporate backup and recovery. Continuous data protection is a great feature and my experience with Zerto has been that it does what it says on the tin. The market, however, is looking for ways to consolidate solution offerings in order to save a few more dollarydoos and keep the finance department happy. I haven’t seen the street pricing for ZDP, but Seymour seemed confident that it stacks up well against the more traditional data protection options on the market, particularly against offerings that handle CDP and periodic data protection with separate tools. There’s a new TCO calculator on the Zerto website, and there’s also the opportunity to talk to a Zerto account representative about your particular needs.

I’ve always treated regular backup and recovery and disaster recovery as very different things, mainly because they are. Companies frequently make the mistake of trying to cobble together some kind of DR solution using traditional backup and recovery tools. I’m interested to see how Zerto goes with this approach. It’s not the first company to converge elements of the data protection space, and it will be interesting to see how much of the initial uptake of ZDP is with existing customers or net new logos. The broadening of support for the VMware on X public cloud workloads is good news for enterprises too (putting aside my thoughts on whether or not that’s a great long term strategy for said enterprises). There’s some interesting stuff happening, and I’m looking forward to seeing how the story unfolds over the next 6 – 12 months.

Datadobi Announces DobiProtect

Datadobi recently announced DobiProtect. I had the opportunity to speak with Michael Jack and Carl D’Halluin about the announcement, and thought I’d share some thoughts here.


The Problem

Disaster Recovery

Modern disaster recovery solutions tend more towards business continuity than DR. The challenge with data replication solutions is that it’s a trivial thing to replicate corruption from your primary storage to your DR storage. Backup systems are vulnerable too, and in most instances you need to make some extra effort to ensure you’ve got a replicated catalogue, and that your backup data is not isolated. Invariably, you’ll be looking to restore to like hardware in order to reduce the recovery time, tape is still a pain to deal with, and you’re also at the mercy of people and processes going wrong.

What Do Customers Need?

To get what you need out of a robust DR system, there are a few criteria that need to be met, including:

  • An easy way to select business-critical data;
  • A simple way to make a golden copy in native format;
  • A bunker site in a DC or cloud;
  • A manual air-gap procedure;
  • A way to restore to anything; and
  • A way to failover if required.


Enter DobiProtect

What Does It Do?

The idea is that you have two sites with a manual air-gap between them, usually controlled by a firewall of some type. The first site is where you run your production workload, and there’ll likely be a subset of data that is really quite important to your business. You can use DobiProtect to get that data from your production site to DR (it might even be in a bunker!). In order to get the data from Production to DR, DobiProtect scans the data before it’s pulled across to DR. Note that the data is pulled, not pushed. This is important as it means that there’s no obvious trace of the bunker’s existence in production.

[image courtesy of Datadobi]
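
The pull model is the important design point: the bunker initiates every connection, so nothing on the production side ever references the bunker. The sketch below shows that shape, with host names and paths as placeholders, and an rsync copy standing in for whatever DobiProtect actually does on the wire (which is certainly more sophisticated).

```python
import subprocess

# Runs on the BUNKER side. Production only needs to accept inbound
# connections from the bunker during the scheduled window; no production
# config points back at the bunker.
PROD_EXPORT = "backup-admin@prod-nas.example.internal:/exports/critical/"  # placeholder
GOLDEN_COPY = "/bunker/golden-copy/critical/"                              # placeholder

subprocess.run(
    ["rsync", "-a", "--delete", PROD_EXPORT, GOLDEN_COPY],
    check=True,
)
```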

If things go bang, you can recover to any NAS or object storage:

  • Browse golden copy
  • Select by directory structure, folder, or object patterns
  • Mounts and shares
  • Specific versions

Bonus Use Case

One of the more popular use cases that Datadobi spoke to me about was heterogeneous edge-to-core protection. Data on the edge is usually more vulnerable, and not every organisation has the funding to put robust protection mechanisms in place at every edge site to protect critical data. With the advent of COVID-19, many organisations have been pushing more data to the edge in order for remote workers to have better access to data. The challenge then becomes keeping that data protected in a reliable fashion. DobiProtect can be used to pull data from the core once data has been pulled back from the edge. Because it’s a software-only product, your edge storage can be anything that supports object, SMB, or NFS, and the core could be anything else. This provides a lot of flexibility in terms of the expense traditionally associated with DR at edge sites.

[image courtesy of Datadobi]


Thoughts and Further Reading

The idea of an air-gapped site in a bunker somewhere is the sort of thing you might associate with a James Bond story. In Australia these aren’t exactly a common thing (bunkers, not James Bond stories), but Europe and the US are riddled with them. As Jack pointed out in our call, “[t]he first rule of bunker club – you don’t talk about the bunker”. Datadobi couldn’t give me a list of customers using this type of solution because none of them want people to know that they’re doing things this way. It seems a bit like security via obscurity, but there’s no point painting a big target on your back or giving clues out for would-be crackers to get into your environment and wreak havoc.

The idea that your RPO is a day, rather than minutes, is also confronting for some folks. But the idea of this solution is that you’ll use it for your absolutely mission critical can’t live without it data, not necessarily your virtual machines that you may be able to recover normally if you’re attacked or the magic black smoke escapes from one of your hosts. If you’ve gone to the trouble of looking into acquiring some rack space in a bunker, limited the people in the know to a handful, and can be bothered messing about with a manual air-gap process, the data you’re looking to protect is clearly pretty important.

Datadobi has a rich heritage in data migration for both file and object storage systems. It makes sense that eventually customer demand would drive them down this route to deliver a migration tool that ostensibly runs all the time as a sort of data protection tool. This isn’t designed to protect everything in your environment, but for the stuff that will ruin your business if it goes away, it’s very likely worth the effort and expense. There are some folks out there actively looking for ways to put you over a barrel, so it’s important to think about what it’s worth to your organisation to avoid that if possible.