Aparavi Announces File Protect & Insight – Helps With Third Drawer Down

I recently had the opportunity to speak to Victoria Grey (CMO), Darryl Richardson (Chief Product Evangelist), and Jonathan Calmes (VP Business Development) from Aparavi regarding their File Protect and Insight solution. If you’re a regular reader, you may remember I’m quite a fan of Aparavi’s approach and have written about them a few times. I thought I’d share some of my thoughts on the announcement here.

 

FPI?

The title is a little messy, but think of your unstructured data in the same way you might look at the third drawer down in your kitchen. There’s a bunch of stuff in there and no-one knows what it all does, but you know it has some value. Aparavi describe File Protect and Insight (FPI) as “[f]ile by file data protection and archive for servers, endpoints and storage devices featuring data classification, content level search, and hybrid cloud retention and versioning”. It takes the data you’re not necessarily sure about, and makes it useful. Potentially.

It comes with a range of features out of the box, including:

  • Data Awareness
    • Data classification
    • Metadata aggregation
    • Policy driven workflows
  • Global Security
    • Role-based permissions
    • Encryption (in-flight and at rest)
    • File versioning
  • Data Search and Access
    • Anywhere / anytime file access
    • Seamless cloud integration
    • Full-content search

 

How Does It Work?

The solution is fairly simple to deploy. There’s a software appliance installed on-premises (this is known as the aggregator). There’s a web-accessible management console, and you configure your sources to be protected via network access.

[image courtesy of Aparavi]

You get the ability to mount backup data from any point in time, and you can provide a path that can be shared via the network to users to access that data. Regardless of where you end up storing the data, the index stays on-premises, and searches run against that index rather than the protected source, which keeps them fast and avoids putting load on the source systems. There’s also a good story to be had in terms of cloud provider compatibility. And if you’re looking to work with an on-premises / generic S3 provider, chances are high that the solution won’t have too many issues with that either.
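
To make the “generic S3 provider” point a bit more concrete, here’s a minimal Python sketch (using boto3, with a made-up endpoint, bucket, and credentials) of what talking to an S3-compatible target looks like. It’s purely illustrative of the kind of object store being described, not Aparavi’s configuration interface.

```python
import boto3

# Hypothetical endpoint and credentials for an on-premises S3-compatible object store.
s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.lab.local:9000",  # generic S3-compatible endpoint
    aws_access_key_id="EXAMPLE_KEY",
    aws_secret_access_key="EXAMPLE_SECRET",
)

# A simple round trip proves the endpoint speaks S3: create a bucket, write an object,
# and count what's in the bucket.
s3.create_bucket(Bucket="archive-target")
s3.put_object(Bucket="archive-target", Key="healthcheck.txt", Body=b"ok")
print(s3.list_objects_v2(Bucket="archive-target")["KeyCount"])
```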

 

Thoughts

Data protection is hard to do well at the best of times, and data management is even harder to get right. Enterprises are busy generating terabytes of data and are struggling to a) protect it successfully, and b) make use of that protected data in an intelligent fashion. It seems that it’s no longer enough to have a good story around periodic data protection – most of the vendors have proven themselves capable in this regard. What differentiates companies is the ability to make use of that protected data in new and innovative ways that can increase the value of that data to the business that’s generating it.

Companies like Aparavi are doing a pretty good job of taking the madness that is your third drawer down and providing you with some semblance of order in the chaos. This can be a real advantage in the enterprise, not only for day to day data protection activities, but also for extended retention and compliance challenges, as well as storage optimisation challenges that you may face. You still need to understand what the data is, but something like FPI can help you declutter it, making it easier to understand.

I also like some of the ransomware detection capabilities being built into the product. It’s relatively rudimentary for the moment, but keeping a close eye on the percentage of changed data is a good indicator of whether or not something is going badly wrong with the data sources you’re trying to protect. And if you find yourself the victim of a ransomware attack, the theory is that Aparavi has been storing a secondary, immutable copy of your data that you can recover from.
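
As a rough illustration of that change-rate idea (this is just a sketch of the concept, not Aparavi’s detection logic), flagging a backup run where far more data has changed than the source normally churns can be as simple as the following:

```python
def change_rate(previous: dict, current: dict) -> float:
    """Percentage of files whose content hash changed between two backup runs."""
    if not previous:
        return 0.0
    changed = sum(1 for path, digest in previous.items() if current.get(path) != digest)
    return 100.0 * changed / len(previous)

# Toy example: content hashes recorded at the last two backup runs.
yesterday = {"/data/a.docx": "aaa", "/data/b.xlsx": "bbb", "/data/c.pdf": "ccc"}
today = {"/data/a.docx": "zzz", "/data/b.xlsx": "yyy", "/data/c.pdf": "xxx"}

NORMAL_DAILY_CHANGE = 5.0  # assumed baseline churn for this source, in percent
rate = change_rate(yesterday, today)
if rate > 10 * NORMAL_DAILY_CHANGE:
    print(f"Warning: {rate:.1f}% of files changed - possible ransomware activity")
```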

People want a lot of different things from their data protection solutions, and sometimes it’s easy to expect more than is reasonable from these products without really considering some of the complexity that can arise from that increased level of expectation. That said, it’s not unreasonable that your data protection vendors should be talking to you about data management challenges and deriving extra value from your secondary data. There are a number of ways to do this, and not every one will be right for you. But if you’ve started noticing a data sprawl problem, or you’re looking to get a bit more from your data protection solution, particularly for unstructured data, Aparavi might be of some interest. You can read the announcement here.

Backblaze Announces Version 7.0 – Keep Your Stuff For Longer

Backblaze recently announced Version 7.0 of its cloud backup solution for consumers and businesses and I thought I’d run through the announcement here.

 

Extended Version History

30 Days? 1 Year? 

One of the key parts of this announcement is support for extended retention of backup data. All Backblaze computer backup accounts have 30-Day Version History included with their backup license. But you can now extend that to 1 year if you like. Note that this will cost an additional $2/month and is charged based on your license type (monthly, yearly, or 2-year). It’s also prorated to align with your existing subscription.

Forever

Want to have a more permanent relationship with your protection data? You can also elect to keep it forever, at the cost of an additional $2/month (aligned to your license plan type) plus $0.005/GB/Month for versions modified on your computer more than 1 year ago. There’s a handy FAQ that you can read here. Note that all pricing from Backblaze is in US dollars.
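
If you want a rough feel for the numbers, here’s a back-of-the-envelope sketch based on the pricing above. It ignores proration and the underlying backup license cost.

```python
def forever_version_history_estimate(gb_older_than_a_year: float) -> float:
    """Rough monthly add-on cost for Forever Version History: a flat $2/month plus
    $0.005 per GB per month for versions modified more than a year ago."""
    return 2.0 + 0.005 * gb_older_than_a_year

# e.g. 500GB of year-old versions works out to roughly $4.50/month on top of the license.
print(f"${forever_version_history_estimate(500):.2f}/month")
```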

[image courtesy of Backblaze]

 

Other Updates

Are you trying to back up really large files (like videos)? You might already know that Backblaze takes large files and chunks them into smaller ones before uploading them to the Internet. Upload performance has now been improved, with the maximum packet size being increased from 30MB to 100MB. This allows the Backblaze app to transmit data more efficiently by better leveraging threading. According to Backblaze, this also “smoothes out upload performance, reduces sensitivity to latency, and leads to smaller data structures”.
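
If you’re curious about the general idea (and this is just a sketch of the concept, not Backblaze’s client code), chunking a large file and keeping several uploads in flight with threads looks something like this:

```python
import concurrent.futures

CHUNK_SIZE = 100 * 1024 * 1024  # 100MB, the new maximum chunk size mentioned above

def read_chunks(path: str):
    """Yield (index, bytes) pairs for a large file, one chunk at a time."""
    with open(path, "rb") as f:
        index = 0
        while chunk := f.read(CHUNK_SIZE):
            yield index, chunk
            index += 1

def upload_chunk(index: int, data: bytes) -> int:
    # Placeholder for the real upload call; here we just report what we'd send.
    print(f"uploading chunk {index} ({len(data)} bytes)")
    return index

def upload_file(path: str, threads: int = 4) -> None:
    # Larger chunks mean fewer round trips, and keeping several uploads in flight
    # is what reduces the sensitivity to latency.
    with concurrent.futures.ThreadPoolExecutor(max_workers=threads) as pool:
        futures = [pool.submit(upload_chunk, i, c) for i, c in read_chunks(path)]
        concurrent.futures.wait(futures)
```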

Other highlights of this release include:

  • For the aesthetically minded amongst you, the installer now looks better on higher resolution displays;
  • For Windows users, an issue with OpenSSL and Intel’s Apollo Lake chipsets has now been resolved; and
  • For macOS users, support for Catalina is built in. (Note that this is also available with the latest version 6 binary).

Availability?

Version 7.0 will be rolled out to all users over the next few weeks. If you can’t wait, there are a couple of ways to get hold of the new version early (the Backblaze blog covers the details).

 

Thoughts and Further Reading

It seems weird that I’ve been covering Backblaze as much as I have, given their heritage in the consumer data protection space, and my focus on service providers and enterprise offerings. But Backblaze has done a great job of making data protection accessible and affordable for a lot of people, and they’ve done it in a fairly transparent fashion at the same time. Note also that this release covers both consumers and business users. The addition of extended retention capabilities to their offering, improved performance, and some improved compatibility is good news for Backblaze users. It’s really easy to set up and get started with the application, a good variety of configurations is supported, and you’ll sleep better knowing your data is safely protected (particularly if you accidentally fat-finger an important document and need to recover an older version). If you’re thinking about signing up, you can use this affiliate link I have and get yourself a free month (and I’ll get one too).

If you’d like to know more about the features of Version 7.0, there’s a webinar you can jump on with Yev. The webinar will be available on BrightTalk (registration is required) and you can sign up by visiting the Backblaze BrightTALK channel. You can also read more details on the Backblaze blog.

Random Short Take #23

Want some news? In a shorter format? And a little bit random? This listicle might be for you.

  • Remember Retrospect? They were acquired by StorCentric recently. I hadn’t thought about them in some time, but they’re still around, and celebrating their 30th anniversary. Read a little more about the history of the brand here.
  • Sometimes size does matter. This article around deduplication and block / segment size from Preston was particularly enlightening.
  • This article from Russ had some great insights into why it’s not wise to entirely rule out doing things the way service providers do just because you’re working in enterprise. I’ve had experience in both SPs and enterprise and I agree that there are things that can be learnt on both sides.
  • This is a great article from Chris Evans about the difficulties associated with managing legacy backup infrastructure.
  • The Pure Storage VM Analytics Collector is now available as an OVA.
  • If you’re thinking of updating your Mac’s operating environment, this is a fairly comprehensive review of what macOS Catalina has to offer, along with some caveats.
  • Anthony has been doing a bunch of cool stuff with Terraform recently, including using variable maps to deploy vSphere VMs. You can read more about that here.
  • Speaking of people who work at Veeam, Hal has put together a great article on orchestrating Veeam recovery activities to Azure.
  • Finally, the Brisbane VMUG meeting originally planned for Tuesday 8th has been moved to the 15th. Details here.

Random Short Take #19

Here are some links to some random news items and other content that I recently found interesting. You might find them interesting too. Episode 19 – let’s get tropical! It’s all happening.

  • I seem to link to Alastair’s blog a lot. That’s mainly because he’s writing about things that interest me, like this article on data governance and data protection. Plus he’s a good bloke.
  • Speaking of data protection, Chris M. Evans has been writing some interesting articles lately on things like backup as a service. Having worked in the service provider space for a piece of my career, I wholeheartedly agree that it can be a “leap of faith” on the part of the customer to adopt these kinds of services.
  • This post by Raffaello Poltronieri on VMware’s vRealize Operations session at Tech Field Day 19 makes for good reading.
  • This podcast episode from W. Curtis Preston was well worth the listen. I’m constantly fascinated by the challenges presented to infrastructure in media and entertainment environments, particularly when it comes to data protection.
  • I always enjoy reading Preston’s perspective on data protection challenges, and this article is no exception.
  • This article from Tom Hollingsworth was honest and probably cut too close to the bone with a lot of readers. There are a lot of bad habits that we develop in our jobs, whether we’re coding, running infrastructure, or flipping burgers. The key is to identify those behaviours and work to address them where possible.
  • Over at SimplyGeek.co.uk, Gavin has been posting a number of Ansible-related articles, including this one on automating vSphere VM and ova deployments. A number of folks in the industry talk a tough game when it comes to automation, and it’s nice to see Gavin putting it on wax and setting a great example.
  • The Mark Of Cain have announced a national tour to commemorate the 30th anniversary of their Battlesick album. Unfortunately I may not be in the country when they’re playing in my part of the woods, but if you’re in Australia you can find out more information here.

Veeam Basics – Configuring A Scale-Out Backup Repository

I’ve been doing some integration testing with Pure Storage and Veeam in the lab recently, and thought I’d write an article on configuring a scale-out backup repository (SOBR). To learn more about SOBR configurations, you can read the Veeam documentation here. This post from Rick Vanover also covers the what and the why of SOBR. In this example, I’m using a couple of FlashBlade-based NFS repositories that I’ve configured as per these instructions. Each NFS repository is mounted on a separate Linux virtual machine. I’m using a Windows-based Veeam Backup & Replication server running version 9.5 Update 4.

 

Process

Start by going to Backup Infrastructure -> Scale-out Repositories and click on Add Scale-out Repository.

Give it a name, maybe something snappy like “Scale-out Backup Repository 1”?

Click on Add to add the backup repositories.

When you click on Add, you’ll have the option to select the backup repositories you want to use. You can select them all, but for the purpose of this exercise, we won’t.

In this example, Backup Repository 1 and 2 are the NFS locations I configured previously. Select those two and click on OK.

You’ll now see the repositories listed as Extents.

Click on Advanced to check the advanced settings are what you expect them to be. Click on OK.

Click Next to continue. You’ll see the following message.

You then choose the placement policy. It’s strongly recommended that you stick with Data locality as the placement policy.
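
In short, Data locality keeps all of the files belonging to a backup chain on the same extent, while the Performance policy places full and incremental backup files on different extents. The following is a deliberately simplified sketch of that placement decision, included only to illustrate the difference between the two policies; it’s not how Veeam implements it.

```python
def choose_extent(backup_file: dict, extents: list, policy: str) -> str:
    """Simplified illustration of the two SOBR placement policies.
    backup_file: {"chain": "VM01", "kind": "full" or "incremental"}
    extents: [{"name": ..., "chains": set of chains already stored, "free_gb": ...}]
    """
    if policy == "data_locality":
        # Keep every file of a backup chain on the same extent where possible.
        for ext in extents:
            if backup_file["chain"] in ext["chains"]:
                return ext["name"]
    elif policy == "performance":
        # Prefer separating full and incremental files onto different extents.
        preferred = extents[0] if backup_file["kind"] == "full" else extents[-1]
        return preferred["name"]
    # Otherwise fall back to whichever extent has the most free space.
    return max(extents, key=lambda e: e["free_gb"])["name"]

extents = [
    {"name": "Backup Repository 1", "chains": {"VM01"}, "free_gb": 900},
    {"name": "Backup Repository 2", "chains": set(), "free_gb": 1200},
]
print(choose_extent({"chain": "VM01", "kind": "incremental"}, extents, "data_locality"))
```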

You can also pick object storage to use as a Capacity Tier.

You’ll also have an option to configure the age of the files to be moved, and when they can be moved. And you might want to encrypt the data uploaded to your object storage environment, depending on where that object storage lives.

Once you’re happy, click on Apply. You’ll be presented with a summary of the configuration (and hopefully there won’t be any errors).

 

Thoughts

The SOBR feature, in my opinion, is pretty cool. I particularly like the ability to put extents in maintenance mode. And the option to use object storage as a capacity tier is a very useful feature. You get some granular control in terms of where you put your backup data, and what kind of performance you can throw at the environment. And as you can see, it’s not overly difficult to configure the environment. There are a few things to keep in mind though. Make sure your extents are stored on resilient hardware. If you keep your backup sets together with the data locality option, you’ll be a sad panda if that extent goes bye bye. And the same goes for the performance option. You’ll also need Enterprise or Enterprise Plus editions of Veeam Backup & Replication for this feature to work. And you can’t use this feature for these types of jobs:

  • Configuration backup job;
  • Replication jobs (including replica seeding);
  • VM copy jobs; and
  • Veeam Agent backup jobs created by Veeam Agent for Microsoft Windows 1.5 or earlier and Veeam Agent for Linux 1.0 Update 1 or earlier.

There are any number of reasons why a scale-out backup repository can be a handy feature to use in your data protection environment. I’ve had the misfortune in the past of working with products that were difficult to manage from a data mobility perspective. Too many times I’ve been stuck going through all kinds of mental gymnastics working out how to migrate data sets from one storage platform to the next. With this it’s a simple matter of a few clicks and you’re on your way with a new bucket. The tiering to object feature is also useful, particularly if you need to keep backup sets around for compliance reasons. There’s no need to spend money on these living on performance disk if you can comfortably have them sitting on capacity storage after a period of time. And if you can control this movement through a policy-driven approach, then that’s even better. If you’re new to Veeam, it’s worth checking out a feature like this, particularly if you’re struggling with media migration challenges in your current environment. And if you’re an existing Enterprise or Enterprise Plus customer, this might be something you can take advantage of.

Dell EMC Announces PowerProtect Software (And Hardware)

Disclaimer: I recently attended Dell Technologies World 2019.  My flights, accommodation and conference pass were paid for by Dell Technologies via the Media, Analysts and Influencers program. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Last week at Dell Technologies World there were a number of announcements made regarding Data Protection. I thought I’d cover them here briefly. Hopefully I’ll have the chance to dive a little deeper into the technology in the next few weeks.

 

PowerProtect Software

The new PowerProtect software is billed as Dell EMC’s “Next Generation Data Management software platform” and provides “data protection, replication and reuse, as well as SaaS-based management and self-service capabilities that give individual data owners the autonomy to control backup and recovery operations”. It currently offers support for:

  • Oracle;
  • Microsoft SQL;
  • VMware;
  • Windows Filesystems; and
  • Linux Filesystems.

More workload support is planned to arrive in the next little while. There are some nice features included, such as automated discovery and on-boarding of databases, VMs and Data Domain protection storage. There’s also support for tiering protection data to public cloud environments, and support for SaaS-based management is a nice feature too. You can view the data sheet here.

 

PowerProtect X400

The PowerProtect X400 is being positioned by Dell EMC as a “multi-dimensional” appliance, with support for both scale out and scale up expansion.

There are three “bits” to the X400 story. There’s the X400 cube, which is the brains of the operation. You then scale it out using either X400F (All-Flash) or X400H (Hybrid) cubes. The All-Flash version can be configured from 64 – 448TB of capacity, delivering up to 22.4PB of logical capacity. The Hybrid version runs from 64 – 384TB of capacity, and can deliver up to 19.2PB of logical capacity. The logical capacity calculation is based on “10x – 50x deduplication ratio”. You can access the spec sheet here, and the data sheet can be found here.
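
Those logical capacity figures are simply usable capacity multiplied by the assumed deduplication ratio; here’s a quick sanity check against the quoted maximums using the 50x top end.

```python
def logical_capacity_pb(usable_tb: float, dedupe_ratio: float) -> float:
    """Logical capacity in PB, given usable TB and an assumed deduplication ratio."""
    return usable_tb * dedupe_ratio / 1000

print(logical_capacity_pb(448, 50))  # X400F all-flash maximum: 22.4 PB
print(logical_capacity_pb(384, 50))  # X400H hybrid maximum: 19.2 PB
```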

Scale Up and Out?

So what do Dell EMC mean by “multi-dimensional” then? It’s a neat marketing term that means you can scale up and out as required.

  • Scale-up with grow-in-place capacity expansion (16TB); and
  • Scale-out compute and capacity with additional X400F or X400H cubes (starting at 64TB each).

This way you can “[b]enefit from the linear scale-out of performance, compute, network and capacity”.

 

IDPA

Dell EMC also announced that the Integrated Data Protection Appliance (IDPA) was being made available in an 8-24TB version, providing a lower capacity option to service smaller environments.

 

Thoughts and Further Reading

Everyone I spoke to at Dell Technologies World was excited about the PowerProtect announcement. Sure, it’s their job to be excited about this stuff, but there’s a lot here to be excited about, particularly if you’re an existing Dell EMC data protection customer. The other “next-generation” data protection vendors seem to have given the 800 pound gorilla the wake-up call it needed, and the PowerProtect offering is a step in the right direction. The scalability approach used with the X400 appliance is potentially a bit different to what’s available in the market today, but it seems to make sense in terms of reducing the footprint of the hardware to a manageable amount. There were some high numbers being touted in terms of performance but I won’t be repeating any of those until I’ve seen this for myself in the wild. The all-flash option seems a little strange at first, as flash isn’t normally associated with data protection, but I think it’s a competitive nod to some of the other vendors offering top of rack, all-flash data protection.

So what if you’re an existing Data Domain / NetWorker / Avamar customer? There’s no need to panic. You’ll see continued development of these products for some time to come. I imagine it’s not a simple thing for an established company such as Dell EMC to introduce a new product that competes in places with something it already sells to customers. But I think it’s the right thing for them to do, as there’s been significant pressure from other vendors when it comes to telling a tale of simplified data protection leveraging software-defined solutions. Data protection requirements have seen significant change over the last few years, and this new architecture is a solid response to those changes.

The supported workloads are basic for the moment, but a cursory glance through most enterprise environments would be enough to reassure you that they have the most common stuff covered. I understand that existing DPS customers will also get access to PowerProtect to take it for a spin. There’s no word yet on what the migration path for existing customers looks like, but I have no doubt that people have already thought long and hard about what that would look like and are working to make sure the process is field ready (and hopefully straightforward). Dell EMC PowerProtect Software platform and PowerProtect X400 appliance will be generally available in July 2019.

For another perspective on the announcement, check out Preston’s post here.

Cohesity – Cohesity Cluster Virtual Edition ESXi – A Few Notes

I’ve covered the Cohesity appliance deployment in a howto article previously. I’ve also made use of the VMware-compatible Virtual Edition in our lab to test things like cluster to cluster replication and cloud tiering. The benefits of virtual appliances are numerous. They’re generally easy to deploy, don’t need dedicated hardware, can be re-deployed quickly when you break something, and can be a quick and easy way to validate a particular process or idea. They can also be a problem with regards to performance, and are at the mercy of the platform administrator to a point. But aren’t we all? With 6.1, Cohesity have made available a clustered virtual edition (the snappily titled Cohesity Cluster Virtual Edition ESXi). If you have access to the documentation section of the Cohesity support site, there’s a PDF you can download that explains everything. I won’t go into too much detail but there are a few things to consider before you get started.

 

Specifications

Base Appliance 

Just like the non-clustered virtual edition, there are small and large configurations you can choose from. The small configuration supports up to 8TB for the Data disk, while the large configuration supports up to 16TB for the Data disk. The small config supports 4 vCPUs and 16GB of memory, while the large configuration supports 8 vCPUs and 32GB of memory.

Disk Configuration

Once you’ve deployed the appliance, you’ll need to add the Metadata disk and Data disk to each VM. The Metadata disk should be between 512GB and 1TB. For the large configuration, you can also apparently configure 2x 512GB disks, but I haven’t tried this. The Data disk needs to be between 512GB and 8TB for the small configuration and up to 16TB for the large configuration (with support for 2x 8TB disks). Cohesity recommends that these are formatted as Thick Provision Lazy Zeroed and deployed in Independent – Persistent mode. Each disk should be attached to its own SCSI controller as well, so you’ll have the system disk on SCSI 0:0, the Metadata disk on SCSI 1:0, and so on.
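
If you want to sanity check a proposed per-node layout against those numbers before you deploy, something as simple as the following sketch will do it. The limits are just the ones quoted above, so check the official sizing documentation for your version.

```python
# Documented limits from the sizing notes above.
LIMITS = {
    "small": {"data_max_tb": 8, "vcpus": 4, "memory_gb": 16},
    "large": {"data_max_tb": 16, "vcpus": 8, "memory_gb": 32},
}

def check_disk_layout(config: str, metadata_gb: int, data_tb: float) -> list:
    """Return a list of problems with a proposed per-node disk layout."""
    problems = []
    if not 512 <= metadata_gb <= 1024:
        problems.append("Metadata disk should be between 512GB and 1TB")
    if not 0.5 <= data_tb <= LIMITS[config]["data_max_tb"]:
        problems.append(
            f"Data disk should be between 512GB and "
            f"{LIMITS[config]['data_max_tb']}TB for the {config} configuration"
        )
    return problems

print(check_disk_layout("small", metadata_gb=512, data_tb=3))   # no problems
print(check_disk_layout("small", metadata_gb=512, data_tb=12))  # Data disk too big
```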

I did discover a weird issue when deploying the appliance on a Pure Storage FA-450 array in the lab. In vSphere this particular array’s datastore type is identified by vCenter as “Flash”. For my testing I had a 512GB Metadata disk and 3TB Data disk configured on the same datastore, with the three nodes living on three different datastores on the FlashArray. This caused errors with the cluster configuration, with the configuration wizard complaining that my SSD volumes were too big.

I moved the Data disk (with storage vMotion) to an all flash Nimble array (that for some reason was identified by vSphere as “HDD”) and the problem disappeared. Interestingly I didn’t have this problem with the single node configuration of 6.0.1 deployed with the same configuration. I raised a ticket with Cohesity support and they got back to me stating that this was expected behaviour in 6.1.0a. They tell me, however, that they’ve modified the behaviour of the configuration routine in an upcoming version so fools like me can run virtualised secondary storage on primary storage.

Erasure Coding

You can configure the appliance for increased resiliency at the Storage Domain level as well. If you go to Platform – Cluster – Storage Domains you can modify the DefaultStorageDomain (and other ones that you may have created). Depending on the size of the cluster you’ve deployed, you can choose the number of failures to tolerate and whether or not you want erasure coding enabled.

You can also decide whether you want EC to be a post-process activity or something that happens inline.

 

Process

Once you’ve deployed (a minimum) 3 copies of the Clustered VE, you’ll need to manually add Metadata and Data disks to each VM. The specifications for these are listed above. Fire up the VMs and go to the IP of one of the nodes. You’ll need to log in as the admin user with the appropriate password and you can then start the cluster configuration.

This bit is pretty much the same as any Cohesity cluster deployment, and you’ll need to specify things like a hostname for the cluster partition. As always, it’s a good idea to ensure your DNS records are up to date. You can get away with using IP addresses but, frankly, people will talk about you behind your back if you do.

At this point you can also decide to enable encryption at the cluster level. If you decide not to enable it you can do this on a per Domain basis later.

Click on Create Cluster and you should see something like the following screen.

Once the cluster is created, you can hit the virtual IP you’ve configured, or any one of the attached nodes, to log in to the cluster. Once you log in, you’ll need to agree to the EULA and enter a license key.

 

Thoughts

The availability of virtual appliance versions for storage and data protection solutions isn’t a new idea, but it’s certainly one I’m a big fan of. These things give me an opportunity to test new code releases in a controlled environment before pushing updates into my production environment. It can help with validating different replication topologies quickly, and validating other configuration ideas before putting them into the wild (or in front of customers). Of course, the performance may not be up to scratch for some larger environments, but for smaller deployments and edge or remote office solutions, you’re only limited by the available host resources (which can be substantial in a lot of cases). The addition of a clustered version of the virtual edition for ESXi and Hyper-V is a welcome sight for those of us still deploying on-premises Cohesity solutions (I think the Azure version has been clustered for a few revisions now). It gets around the main issue of resiliency by having multiple copies running, and can also address some of the performance concerns associated with running virtual versions of the appliance. There are a number of reasons why it may not be the right solution for you, and you should work with your Cohesity team to size any solution to fit your environment. But if you’re running Cohesity in your environment already, talk to your account team about how you can leverage the virtual edition. It really is pretty neat. I’ll be looking into the resiliency of the solution in the near future and will hopefully be able to post my findings in the next few weeks.

Rubrik Announces Cloud Data Management 5.0 – Drops In A Shedload Of Enhancements

I recently had the opportunity to hear from Chris Wahl about Rubrik CDM 5.0 (codename Andes) and thought it worthwhile covering here.

 

Announcement Summary

  • Instant recovery for Oracle databases;
  • NAS Direct Archive to protect massive unstructured data sets;
  • Microsoft Office 365 support via Polaris SaaS Platform;
  • SAP-certified protection for SAP HANA;
  • Policy-driven protection for Epic EHR; and
  • Rubrik works with Rubrik Datos IO to protect NoSQL databases.

 

New Features and Enhancements

As you can see from the list above, there’s a bunch of new features and enhancements. I’ll try and break down a few of these in the section below.

Oracle Protection

Rubrik have had some level of capability with Oracle protection for a little while now, but things are starting to hot up with 5.0.

  • Simplified configuration (Oracle Auto Protection and Live Mount, Oracle Granular SLA Policy Assignments, and Oracle Automated Instance and Database Discovery)
  • Orchestration of operational and PiT recoveries
  • Increased control for DBAs

NAS Direct Archive

People have lots of data now. Like, a real lot. I don’t know how many Libraries of Congress exactly, but it can be a lot. Previously, you’d have to buy a bunch of Briks to store this data. Rubrik have recognised that this can be a bit of a problem in terms of footprint. With NAS Direct Archive, you can send the data to an “archive” target of your choice. So now you can protect a big chunk of data that goes through the Rubrik environment to an end target such as object storage, public cloud, or NFS. The idea is to reduce the number of Rubrik devices you need to buy. Which seems a bit weird, but their customers will be pretty happy to spend their money elsewhere.

[image courtesy of Rubrik]

It’s simple to get going, requiring just the tick of a box to configure. The metadata remains protected with the Rubrik cluster, and the good news is that nothing changes from the end user recovery experience.

Elastic App Service (EAS)

Rubrik now provides the ability to ingest DBs across a wider spectrum, allowing you to protect more of the DB-based applications you want, not just SQL and Oracle workloads.

SAP HANA Protection

I’m not really into SAP HANA, but plenty of organisations are. Rubrik now offer a SAP Certified Solution which, if you’ve had the misfortune of trying to protect SAP workloads before, is kind of a neat feature.

[image courtesy of Rubrik]

SQL Server Enhancements

There have been some nice enhancements with SQL Server protection, including:

  • A Change Block Tracking (CBT) filter driver to decrease backup windows; and
  • Support for group Volume Shadow Copy Service (VSS) snapshots.

So what about Group Backups? The nice thing about these is that you can protect many databases on the same SQL Server. Rather than process each VSS Snapshot individually, Rubrik will group the databases that belong to the same SLA Domain and process the snapshots as a batch group. There are a few benefits to this approach:

  • It reduces SQL Server overhead and decreases the amount of time a backup takes to complete; and
  • In turn, it allows customers to take more frequent backups of their databases, delivering a lower RPO to the business.

vSphere Enhancements

Rubrik have done vSphere things since forever, and this release includes a few nice enhancements, including:

  • Live Mount VMDKs from a Snapshot – providing the option to choose to mount specific VMDKs instead of an entire VM; and
  • After selecting the VMDKs, the user can select a specific compatible VM to attach the mounted VMDKs to.

Multi-Factor Authentication

The Rubrik Andes 5.0 integration with RSA SecurID will include RSA Authentication Manager 8.2 SP1+ and RSA SecurID Cloud Authentication Service. Note that CDM will not be supporting the older RADIUS protocol. Enabling this is a two-step process:

  • Add the RSA Authentication Manager or RSA Cloud Authentication Service in the Rubrik Dashboard; and
  • Enable RSA and associate a new or existing local Rubrik user or a new or existing LDAP server with the RSA Authentication Manager or RSA Cloud Authentication Service.

You also get the ability to generate API tokens. Note that if you want to interact with the Rubrik CDM CLI (and have MFA enabled) you’ll need these.
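
To give you an idea of what using one of those tokens looks like from a script, here’s a minimal sketch using Python’s requests library. The cluster address and token are placeholders, and the endpoint path is illustrative only, so check the Rubrik API documentation for your CDM version before relying on it.

```python
import requests

CLUSTER = "https://rubrik.lab.local"           # placeholder cluster address
API_TOKEN = "paste-your-generated-token-here"  # token generated in the Rubrik UI

# Standard bearer-token authentication against a REST endpoint.
response = requests.get(
    f"{CLUSTER}/api/v1/cluster/me",  # illustrative path; confirm against the API docs
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    verify=False,  # lab clusters often use self-signed certificates
    timeout=30,
)
response.raise_for_status()
print(response.json())
```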

Other Bits and Bobs

There are a few other enhancements included, including:

  • Windows Bare Metal Recovery;
  • SLA Policy Advanced Configuration;
  • Additional Reporting and Metrics; and
  • Snapshot Retention Enhancements.

 

Thoughts and Further Reading

Wahl introduced the 5.0 briefing by talking about digital transformation as being, at its core, an automation play. The availability of a bunch of SaaS services can lead to fragmentation in your environment, and legacy technology doesn’t deal well with that kind of fragmentation. Rubrik are positioning themselves as a modern company, well-placed to help you with the challenges of protecting what can quickly become a complex and hard to contain infrastructure. It’s easy to sit back and tell people how transformation can change their business for the better, but these kinds of conversations often gloss over the high levels of technical debt in the enterprise that the business is doing its best to ignore. I don’t really think that transformation is as simple as some vendors would have us believe, but I do support the idea that Rubrik are working hard to make complex concepts and tasks as simple as possible. They’ve dropped a shedload of features and enhancements in this release, and have managed to do so in a way that you won’t need to install a bunch of new applications to support these features, and you won’t need to do a lot to get up and running either. For me, this is the key advantage that the “next generation” data protection companies have over their more mature competitors. If you haven’t been around for decades, you very likely don’t offer support for every platform and application under the sun. You also likely don’t have customers that have been with you for 20 years that you need to support regardless of the official support status of their applications. This gives the likes of Rubrik the flexibility to deliver features as and when customers require them, while still focussing on keeping the user experience simple.

I particularly like the NAS Direct Archive feature, as it shows that Rubrik aren’t simply in this to push a bunch of tin onto their customers. A big part of transformation is about doing things smarter, not just faster. The folks at Rubrik understand that there are other solutions out there that can deliver large capacity solutions for protecting big chunks of data (i.e. NAS workloads), so they’ve focussed on leveraging other capabilities, rather than trying to force their customers to fill their data centres with Rubrik gear. This is the kind of thinking that potential customers should find comforting. I think it’s also the kind of approach that a few other vendors would do well to adopt.

*Update*

Here’re some links to other articles on Andes from other folks I read that you may find useful:

Vembu BDR Suite 4.0 Is Coming

Disclaimer

Vembu are a site sponsor of PenguinPunk.net. They’ve asked me to look at their product and write about it. I’m in the early stages of evaluating the BDR Suite in the lab, but thought I’d pass on some information about their upcoming 4.0 release. As always, if you’re interested in these kind of solutions, I’d encourage you to do your own evaluation and get in touch with the vendor, as everyone’s situation and requirements are different. I can say from experience that the Vembu sales and support staff are very helpful and responsive, and should be able to help you with any queries. I recently did a brief article on getting started with BDR Suite 3.9.1 that you can download from here.

 

New Features

So what’s coming in 4.0?

Hyper-V Cluster Backup

Vembu will support backing up VMs in a Hyper-V cluster and, even if VMs configured for backup are moved from one host to another, the incremental backup will continue to happen without any interruption.

Shared VHDx Backup

Vembu now supports backup of the shared VHDx of Hyper-V.

CheckSum-based Incrementals

Vembu uses Changed Block Tracking (CBT) for incremental backups, and for cases where CBT fails it will fall back to a checksum-based comparison so that incrementals can continue without any interruption.
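
The general idea behind a checksum-based fallback (sketched below; this is not Vembu’s actual implementation) is to rescan the source, hash it block by block, and compare the result against the checksums recorded at the last backup to work out which blocks need to be sent.

```python
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # 4MB blocks; an arbitrary size for this sketch

def block_checksums(path: str) -> list:
    """Checksum every fixed-size block of a disk image or file."""
    sums = []
    with open(path, "rb") as f:
        while block := f.read(BLOCK_SIZE):
            sums.append(hashlib.sha256(block).hexdigest())
    return sums

def changed_blocks(previous_sums: list, current_path: str) -> list:
    """Without CBT, rescan and compare checksums to find the blocks to back up."""
    changed = []
    for index, digest in enumerate(block_checksums(current_path)):
        if index >= len(previous_sums) or previous_sums[index] != digest:
            changed.append(index)
    return changed
```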

Credential Manager

No need to enter credentials every time: Vembu Credential Manager now allows you to manage the credentials of the host and the VMs running on it. This will be particularly handy if you’re doing a lot of application-aware backup job configuration.

 

Thoughts

I had a chance to speak with Vembu about the product’s functionality. There’s a lot to like in terms of breadth of features. I’m interested in seeing how 4.0 goes when it’s released and hope to do a few more articles on the product then. If you’re looking to evaluate the product, this evaluator’s guide is as good a place as any to start. As an aside, Vembu are also offering 10% off their suite this Halloween (until November 2nd) – see here for more details.

For a fuller view of what’s coming in 4.0, you can read Vladan’s coverage here.

Updated Articles Page

I recently had the opportunity to deploy a Vembu BDR 3.9.1 Update 1 appliance and thought I’d run through the basics of getting started. There’s a new document outlining the process on the articles page.