Ransomware? More Like Ransom Everywhere …

Stupid title, but ransomware has been in the news quite a bit recently. I’ve had tabs open in my browser for over twelve months with articles about ransomware that I found interesting, and I thought it was time to share them and get this post out there. This isn’t comprehensive by any stretch; rather, it’s a list of a few things to consider when looking into anti-ransomware solutions, particularly for NAS environments.


It Kicked Him Right In The NAS

The way I see it (and I’m really not the world’s strongest security person), there are (at least) three approaches to NAS and ransomware concerns.

The Endpoint

This seems to be where most companies operate – addressing ransomware as it enters the organisation via the end users. There are a bunch of solutions out there designed to protect humans from themselves. But this approach doesn’t always help with alternative attack vectors, and it’s only as good as the processes you have in place to keep those endpoints up to date. I’ve worked in a few shops where endpoint protection solutions were deployed and then inadvertently clobbered by system updates or users with too many privileges. The end result was that the systems didn’t do what they were meant to, and there was much angst.

The NAS Itself

There are things you can do with NetApp solutions, for example, that are kind of interesting. Something like Stealthbits looks neat, and Varonis also uses FPolicy (NetApp’s file access notification framework) to get a similar result. Your mileage will vary with some of these solutions, and, again, it comes down to the ability to effectively ensure that these systems are doing what they say they will, when they will.

Data Protection

A number of the data protection vendors are talking about their ability to recover quickly from ransomware attacks. The capabilities vary, as they always do, but most of them have a solid handle on quick recovery once an infection is discovered. They can even help you discover that infection by analysing patterns in your data protection activities. For example, if a whole bunch of data changes overnight, it’s likely that you have a bit of a problem. But some of the effectiveness of these solutions is limited by the frequency of data protection activity, and by whether anyone is actually reading the alerts. The challenge here is that it’s a reactive approach, rather than something preventative. That said, companies like Rubrik are working hard to enhance their anomaly detection capabilities (Radar, in Rubrik’s case) into something a whole lot more interesting.
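
To make that change-rate idea a bit more concrete, here’s a minimal sketch of the kind of heuristic these tools apply. The numbers and thresholds are made up for illustration – this isn’t any particular vendor’s detection algorithm.

```python
# Toy change-rate check: flag a backup run whose changed-data percentage
# jumps well above the recent baseline. Thresholds are illustrative only.
from statistics import mean, stdev

def flag_anomalies(daily_change_pct, sigmas=3.0, min_history=7):
    """daily_change_pct: percentage of data changed in each backup run."""
    alerts = []
    for i in range(min_history, len(daily_change_pct)):
        history = daily_change_pct[:i]
        baseline, spread = mean(history), stdev(history)
        today = daily_change_pct[i]
        if today > baseline + sigmas * max(spread, 1.0):
            alerts.append((i, today, baseline))
    return alerts

# A quiet environment that suddenly rewrites half its data overnight
print(flag_anomalies([2.1, 1.8, 2.5, 2.0, 1.9, 2.2, 2.4, 48.0]))
```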

Other Things

Other things that can help limit your exposure to ransomware include adopting generally robust security practices across the board, monitoring all of your systems, and talking to your users about not clicking on unknown links in emails. Some of these things are easier to do than others.


Thoughts

I don’t think any of these solutions provides everything you need in isolation, and the challenge is going to be coming up with something that is supportable and, potentially, affordable. It would be great if it works, too. Ransomware is a problem, and it’s becoming a bigger problem every day. I don’t want to sound like I’m selling you insurance, but it’s almost not a question of if, but when. Paying attention to some of the above points will help you on your way. Of course, sometimes Sod’s Law applies, and things will go badly for you no matter how well you think you’ve designed your systems. At that point, it’s going to be really important that you’ve set up your data protection systems correctly, otherwise you’re in for a tough time. Remember, it’s always worth thinking about what your data is worth to you when you’re evaluating the relative value of security and data protection solutions.

This article from Chin-Fah had some interesting insights into the problem. And this article from Cohesity outlined a comprehensive approach to holistic cyber security. This article from Andrew over at Pure Storage did a great job of outlining some of the challenges faced by organisations when rolling out these systems. This list of NIST ransomware resources from Melissa is great. And if you’re looking for a useful resource on ransomware from VMware’s perspective, check out this site.

ComplyTrust And The Right To Be Forgotten

I came across a solution from ComplyTrust a little while ago and thought it was worth mentioning here. I’m by no means any kind of authority on this kind of stuff, so this is very much a high-level view.


The Problem

Over the last little while (decades, even?), a number of countries and local authorities have tightened up privacy regulations in the hope that citizens would have some level of protection from big corporations mercilessly exploiting their personal information for commercial gain. A number of these regulations (the General Data Protection Regulation, the California Consumer Privacy Act, etc.) include the idea of “the right to be forgotten”. This gives citizens the right to request, in certain circumstances, that data about them not be kept by particular organisations. Why is this important? We have pretty good privacy protection in Australia, but I still get recruiters moving from one organisation to another and taking contacts with them.

How Does This Happen? 

Think of all the backups of data that organisations make. Now think of how long some of those get kept for. For every restore you do, you might have made 100 backups. Depending on what an organisation is doing for data protection, there are potentially thousands of copies of records relating to you stored on its infrastructure. And then, when a company gets acquired, all of that data gets passed on to the acquiring company. Suddenly it becomes that much more difficult to keep track of which company has your data on file.
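
To put some hypothetical numbers on that (all figures are assumptions, not anyone’s real environment):

```python
# Back-of-envelope: how many copies of a single customer record might
# an organisation be holding? All figures are illustrative assumptions.
backups_per_year = 365      # daily backups
retention_years = 7         # e.g. a compliance-driven retention period
copies_per_backup = 2       # e.g. a local copy plus a replicated copy

total_copies = backups_per_year * retention_years * copies_per_backup
print(f"~{total_copies} potential copies of one record")  # ~5110
```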

Not a week goes by where I don’t get an offer to buy contact details of VMware users or people interested in cloud products. There is a whole world of B2B marketing going on where your details are being sold for a very low price. Granted, some of this is illegitimate in the first place, so regulations aren’t really going to help you. But the right to be removed from various databases around the place is still important, and something that governments are starting to pay more attention to.

The challenge for these organisations is that they can’t exactly keep a database of people they’re meant to forget – it defeats the purpose of the exercise.


The Solution?

So what’s one possible solution? Forget-Me-Yes (FMY) is a “Software-as-a-Service API Platform [that] specifically manages both organizational and individual Right-to-be-Forgotten (RtbF) and Right-of-Erase (RoE) compliance of structured data for Brazil’s LGPD, Europe’s GDPR, California Consumer Privacy Act (CCPA), Virginia CDPA, Nevada SB220, and Washington Privacy Act (WPA)”.

It’s a SaaS offering going for US $39.99 per month. To get started, you authenticate the service with one or more databases you want to manage. In version 1 of the software, it only supports Salesforce. I understand that ComplyTrust is looking to expand support to get the solution working with Shopify, Marketo, and a generic SQL plugin. It stores just enough information to uniquely identify the person, and no more than that.
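
That “just enough information” problem is worth illustrating. Here’s a rough sketch of a forget list that stores a keyed hash of an identifier rather than the identifier itself – a generic illustration of the technique, not ComplyTrust’s actual implementation.

```python
# Minimal sketch of a "forget list" that doesn't store personal data:
# keep a keyed hash of the identifying details, then screen new records
# against it. Generic illustration only, not ComplyTrust's implementation.
import hashlib
import hmac

SECRET = b"per-tenant-secret"  # hypothetical; keep out of source control

def fingerprint(email: str) -> str:
    normalised = email.strip().lower().encode()
    return hmac.new(SECRET, normalised, hashlib.sha256).hexdigest()

forget_list = {fingerprint("jane.citizen@example.com")}

def must_forget(email: str) -> bool:
    return fingerprint(email) in forget_list

print(must_forget("Jane.Citizen@example.com "))  # True after normalising
print(must_forget("someone.else@example.com"))   # False
```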


Thoughts and Further Reading

Some of us want to be remembered forever, but most of us place more value on the choice not to be remembered forever. As I said at the start, I have very little real understanding of the depth and breadth of some of the privacy issues facing both citizens and corporations alike. That said, working closely with data protection offerings on a daily basis, and being focused on data retention for fun and profit, I can see how this is going to become something of a hot topic as the world gets back to spending time trying to understand the implications of keeping scads of data on folks without their consent. Clearly, a solution like this from ComplyTrust isn’t the final word in addressing the issue, but it’s nice to see that folks are taking this problem seriously. I’m looking forward to hearing more about this product as it evolves in the next little while.

MDP – Yeah You Know Me

Data protection is a funny thing. Much like insurance, most folks understand that it’s important, normally dread having to use it, and dislike the fact that it costs money “just in case something goes wrong”. But then they get hit by ransomware, or Judy in Accounting absolutely destroys a critical spreadsheet, and they realise it’s probably not such a bad thing to have this “data protection”.

Books are weird too. Not the idea that we’ll put a whole bunch of information in a file and make it accessible to people. Rather, that sometimes that information is given context and then printed out, sold, read, and stuck on a shelf somewhere for future reference. Indeed, I was a voracious consumer of technical books early in my career, particularly when many vendors were insisting that this was the way to share knowledge with end users. YouTube wasn’t a thing, and access to manuals and reference guides was limited to partners or the vendors themselves. The problem with technical books, however, is that if they cover a specific version of software (or hardware, or whatever), they very quickly become outdated in potentially significant ways. As enjoyable as some of those books about Windows NT 4.0 might have been for us all, they quickly became monitor stands when Windows 2000 was released. The more useful books were the ones that shared more of the how, what, when, and why of the topic, rather than digging into specific guidance on how to do an activity with a particular solution. Particularly when that solution was re-written by the vendor between major versions.

Early on in my career I got involved in my employer’s backup and recovery solution. At the time it was all about GFS backup schemes and DDS-2 drives and per-server protection schemes that mostly worked. It was viewed as an unnecessary expense and given to junior staff to look after. There was a feeling, at least with some of the Windows stuff, that if anything went wrong it would likely go wrong in a big way. I generally felt ill at ease when recovery requests would hit the service desk queue. As a result of this, my interest in being able to bring data back from human error, disaster, or other kinds of failure was piqued, and I went out and bought a copy of Unix Backup and Recovery. As a system administrator, it was a great book to have at hand. There was a nice combination of understandable examples and practical application of backup and recovery principles covered throughout that book. I used to joke that it even had a happy ending, and everyone got their data back. As I moved through my career, I maintained an interest in data protection (it seemed, at one stage, to go hand in hand with storage for whatever reason), and I’ve often wondered what people do when they aren’t given the appropriate guidance on how to best do data protection to meet their needs.

All of this is an extremely long-winded way of saying that my friend W. Curtis Preston has released his fourth book, the snappily titled “Modern Data Protection”, and it makes for some excellent reading. If you listen to him talk on his podcast about why he wrote another book, you’ll appreciate that this thing was over 10 years in the making, had an extensive outline developed for it, and really took a lot of effort to get done. As Curtis points out, he goes out of his way not to name vendors or solutions in the book (he works for Druva). Instead, he spends time on the basics (why backup?), what you should back up, how to back up, and even when you should be backing up things.

This one doesn’t just cover off the traditional centralised server / tape library combo so common for many years in enterprise shops. It also goes into more modern on-premises solutions (I think the kids call them hyper-converged) and cloud-native solutions of all different shapes and sizes. He talks about how to protect a wide variety of workloads and solution architectures, drills in on the importance of recovery testing, and even covers off the difference between backup and archive. Yes, they are different, and I’m not just saying that because I contributed that particular chapter. There’s talk of traditional data sources, deduplication technologies, and more fashionable stuff like Docker and Kubernetes.

The book comes in at a svelte 350ish pages, and you know that each chapter could have almost been a book on its own (or at least a very long whitepaper). That said, Preston does a great job of sticking to the topic at hand, and breaking down potentially complex scenarios in a concise and simple to digest fashion. As I like to say to anyone who’ll listen, this stuff can be hard to get right, and you want to get it right, so it helps if the book you’re using gets it right too.

Should you read this book? Yes. Particularly if you have data or know someone who has data. You may be a seasoned industry veteran or new to the game. It doesn’t matter. You might be a consultant, an architect, or an end user. You might even work at a data protection vendor. There’s something in this for everyone. I was one of the technical editors on this book, fancy myself as knowing a bit about data protection, and I still learnt a lot of stuff. Even if you’re not directly in charge of data protection for your own data or your organisation’s data, this is an extremely useful guide that covers off the things you should be looking at with your existing solution or with a new solution. You can buy it directly from O’Reilly, or from the big book sellers. It comes in electronic and physical versions and is well worth checking out. If you don’t believe me, ask Mellor, or Leib – they’ll tell you the same thing.

  • Publisher: O’Reilly
  • ISBN: 9781492094050

Finally, thanks to Preston for getting me involved in this project, for putting up with my English (AU) spelling, and for signing my copy of Unix Backup and Recovery.

Random Short Take #57

Welcome to Random Short Take #57. Only one player has worn 57 in the NBA. So it looks like this particular bit is done. Let’s get random.

  • In the early part of my career I spent a lot of time tuning up old UNIX workstations. I remember that lifting those SGI CRTs from desk to desk was never a whole lot of fun. This article about a Sun Ultra 1 project brought back a hint of nostalgia for those days (but not enough to really get into it again). Hat tip to Scott Lowe for the link.
  • As you get older, you realise that people talk a whole lot of rubbish most of the time. This article calling out audiophiles for the practice was great.
  • This article on the Backblaze blog about one company’s approach to building its streaming media capability on B2 made for interesting reading.
  • DH2i recently announced the general availability of DxEnterprise (DxE) for Containers, enabling cloud-native Microsoft SQL Server container Availability Groups outside and inside Kubernetes.
  • Speaking of press releases, Zerto has made a few promotions recently. You can keep up with that news here.
  • I’m terrible when it comes to information security, but if you’re looking to get started in the field, this article provides some excellent guidance on what you should be focussing on.
  • We all generally acknowledge that NTP is important, and most of us likely assume that it’s working. But have you been checking? This article from Tony does a good job of outlining some of the reasons you should be paying some more attention to NTP. (There’s a rough sketch of a basic clock-offset check after this list.)
  • This is likely the most succinct article from John you’ll ever read, and it’s right on the money too.
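
On the NTP point, a basic offset check is only a few lines. A minimal sketch, assuming the third-party ntplib package (pip install ntplib) and a reachable pool server; the drift threshold is an arbitrary example.

```python
# Quick NTP sanity check: how far is this host's clock from an NTP server?
import ntplib  # third-party: pip install ntplib

def check_clock(server="pool.ntp.org", max_offset_seconds=0.5):
    response = ntplib.NTPClient().request(server, version=3, timeout=5)
    print(f"offset from {server}: {response.offset:+.3f}s")
    if abs(response.offset) > max_offset_seconds:
        print("WARNING: clock drift exceeds threshold - check your NTP config")

if __name__ == "__main__":
    check_clock()
```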

Rubrik Basics – Multi-tenancy – Create An Organization

I covered multi-tenancy with Rubrik some time ago, but things have certainly advanced since then. One of the useful features of Rubrik CDM (and something that’s really required for Envoy to make sense) is the Organizations feature. This is the way in which you can use a combination of LDAP sources, roles, and tenant workloads to deliver a packaged multi-tenancy feature to organisations either within or external to your company. In this article I’ll run through the basics of setting up an Organization. If you’d like to see how it can be applied in a practical sense, it’s worth checking out my post on deploying Rubrik Envoy.

It starts, as these things often do, by clicking on the gear in the Rubrik CDM UI. Select Organizations (located under Access Management).

Click on Create Organization.

You’ll want to give it a name, and think about whether you want to give your tenant the ability to do per-tenant access control.

You’ll want an Org Admin Role to have particular abilities, and you might like to get fancy and add in some additional roles that will have some other capabilities.

At this point you’ll get to select which users you want in your Organization.

Hopefully you’ve added the tenant’s LDAP source to your environment already.

And it’s worth thinking about what users and / or groups you’ll be using from that LDAP source to populate your Organization’s user list.

You’ll also need to consider which role will be assigned to these users (rather than relying on Global Admins to do things for tenants).

You can then assign particular resources, including VMs, vApps, and so forth.

You can also select what SLA Domains the Organization has access to, as well as Archival locations, and replication targets and sources. This becomes important in a multi-tenanted environment as you don’t want folks putting data where they shouldn’t.

At this point you can download the Rubrik Envoy OVA, deploy it, and connect it to your Organization.

And then you’re done. Well, normally you would be, but I didn’t select a whole lot of objects in this example. Click Finish and you’re on your way.

Assuming you’ve assigned your roles correctly, when your tenant logs in, he or she will only be able to see and control resources that belong to that particular Organization.
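
If you’d rather script this sort of thing than click through the UI, Rubrik CDM also exposes a REST API. Below is a rough sketch of what that might look like – the endpoint path and payload shape are assumptions for illustration, so check the API documentation on your own cluster before relying on any of it.

```python
# Hypothetical sketch of creating an Organization via the Rubrik CDM REST
# API. Endpoint path and payload are assumptions -- verify against the API
# documentation on your own cluster.
import requests

CLUSTER = "https://rubrik.example.com"  # hypothetical cluster address
TOKEN = "your-api-token"                # generated via the CDM UI

response = requests.post(
    f"{CLUSTER}/api/internal/organization",        # assumed endpoint
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"name": "Tenant-A"},                     # assumed payload
    verify=False,  # lab only; use proper CA certificates in production
    timeout=30,
)
response.raise_for_status()
print(response.json())
```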


Rubrik Basics – Envoy Deployment

I’ve recently been doing some work with Rubrik Envoy in the lab and thought I’d run through the basics. There’s a new document outlining the process on the articles page.


Why Envoy?

This page explains it better than I do, but Envoy is ostensibly a way for service providers to deliver Rubrik services to customers sitting on networks that are isolated from the Rubrik environment. Why would you need to do this? There are all kinds of reasons why you don’t want to give your tenants direct access to your data protection resources, and most of these revolve around security (even if your Rubrik environment is secured appropriately). As many SPs will also tell you, bringing private networks from a tenant / edge into your core is usually not a great experience either.

At a high level, it looks like this.

In this example, Tenant A sits on a private network, and the Envoy Tenant Network is 10.0.1.10. The Rubrik Routable Network on the Envoy appliance is 192.168.0.201, and the data management interface on the Rubrik cluster is 192.168.0.200. The Envoy appliance talks to tenant hosts over ports 12800 and 12801. The Rubrik cluster communicates with Envoy over ports 7500 and 7501. The only time the tenant network communicates with the Rubrik cluster is when the Envoy / Rubrik UI is used by the tenant. This is accessed over a port specified when the Organization is created (see below), and the Envoy to cluster communication is over port 443.
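
If you want to confirm those flows are actually open, a quick TCP reachability check does the job. A minimal sketch using the example addresses above; run each check from the side of the network that initiates the connection.

```python
# Basic TCP reachability checks for the Envoy flows described above.
# Addresses come from the example; run each check from the connecting side
# (the cluster for 7500/7501, the Envoy appliance for 443, and so on).
import socket

CHECKS = [
    ("192.168.0.201", 7500),  # Rubrik cluster -> Envoy
    ("192.168.0.201", 7501),  # Rubrik cluster -> Envoy
    ("192.168.0.200", 443),   # Envoy -> Rubrik cluster (UI traffic)
]

for host, port in CHECKS:
    try:
        with socket.create_connection((host, port), timeout=3):
            print(f"{host}:{port} reachable")
    except OSError as err:
        print(f"{host}:{port} unreachable ({err})")
```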

Other Notes

Envoy isn’t a data mover in its current iteration, but rather a way for SPs to present some self-service capabilities to tenants in a controlled fashion without relying on third-party portals or network translation tools. So if you had a bunch of workloads sitting in a tenant’s environment, you’d be better served deploying Rubrik Air / Edge appliances and then replicating that data into the core. If your tenant has a vCenter environment with a few VMs, you can use the Rubrik Backup Service to back up those VMs, but you couldn’t set up vCenter as a source for the tenant unless you opened up networks between your environments by some other means and added it to your Rubrik cluster. This would be ugly at best.

Note also that the deployment assumes you’re creating an Organization in the Rubrik appliance that will be used to isolate the tenant’s data and access from other tenants in the environment. To get hold of the Envoy OVA appliance and credentials, you need to run through the Organization creation process and connect the Envoy appliance when prompted. You’ll also need to ensure that you’ve configured Roles correctly for your tenant’s environment.

If, for some reason, you need to change or view the IP configuration of the Envoy appliance, it’s important to note that the articles on the Rubrik support site are a little out of step with CentOS 7 (i.e. they were written for Ubuntu). I don’t know whether this is because I’m using Rubrik Air appliances in the lab, but I suspect it’s just a shift in the underlying OS. In any case, to get IP information, you need to log in to the console and look in /etc/sysconfig/network-scripts. You’ll find a couple of files (ifcfg-eth0 and ifcfg-eth1) that will tell you whether you’ve made a boo boo with your configuration or not.
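
If you’re doing that regularly, something like the following saves squinting at the files by hand – a minimal sketch that parses the ifcfg files mentioned above into dictionaries.

```python
# Dump the Envoy appliance's interface configuration by parsing the
# ifcfg files under /etc/sysconfig/network-scripts (CentOS 7 style).
from pathlib import Path

def read_ifcfg(name: str) -> dict:
    path = Path("/etc/sysconfig/network-scripts") / f"ifcfg-{name}"
    config = {}
    for line in path.read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            config[key] = value.strip('"')
    return config

for nic in ("eth0", "eth1"):
    print(nic, read_ifcfg(nic))
```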


Conclusion

I’m the first to admit it took me a little while to understand the utility of something like Envoy. Most SPs struggle to deliver self-service capabilities for services that don’t always do network multi-tenancy very well, and this is a good step towards solving some of the problems associated with that. It’s also important to understand that, if your tenant has workloads sitting in VMware Cloud Director, for example, they’ll be accessing Rubrik resources in a different fashion. As I mentioned before, if there’s a fair bit to protect at the edge site, it’s likely a better option to deploy a virtualised Rubrik appliance or a smaller cluster and replicate that data. In any case, I’ll update this post if I come across anything else useful.

Random Short Take #56

Welcome to Random Short Take #56. Only three players have worn 56 in the NBA. I may need to come up with a new bit of trivia. Let’s get random.

  • Are we nearing the end of blade servers? I’d hoped the answer was yes, but it’s not that simple, sadly. It’s not that I hate them, exactly. I bought blade servers from Dell when they first sold them. But they can present challenges.
  • 22dot6 emerged from stealth mode recently. I had the opportunity to talk to them and I’ll post something soon about that. In the meantime, this post from Mellor covers it pretty well.
  • It may be a Northern Hemisphere reference that I don’t quite understand, but Retrospect is running a “Dads and Grads” promotion offering 90 days of free backup subscriptions. Worth checking out if you don’t have something in place to protect your desktop.
  • Running VMware Cloud Foundation and want to stretch your vSAN cluster across two sites? Tony has you covered.
  • The site name in VMware Cloud Director can look a bit ugly. Steve O gives you the skinny on how to change it.
  • Pure//Accelerate happened recently / is still happening, and there was a bit of news from the event, including the new and improved Pure1 Digital Experience. As a former Pure1 user I can say this was a big part of the reason why I liked using Pure Storage.
  • Speaking of press releases, this one from PDI and its investment intentions caught my eye. It’s always good to see companies willing to spend a bit of cash to make progress.
  • I stumbled across Oxide on Twitter and fell for the aesthetic and design principles. Then I read some of the articles on the blog and got even more interested. Worth checking out. And I’ll be keen to see just how it goes for the company.

*Bonus Round*

I was recently on the Restore it All podcast with W. Curtis Preston and Prasanna Malaiyandi. It was a lot of fun as always, despite the fact that we talked about something that’s a pretty scary subject (data (centre) loss). No, I’m not a DC manager in real life, but I do have responsibility for what goes into our DC so I sort of am. Don’t forget there’s a discount code for the book in the podcast too.

Retrospect Announces Retrospect Backup 18 and Retrospect Virtual 2021

Retrospect recently announced new versions of its Backup (18) and Virtual (2021) products. I had the opportunity to speak to JG Heithcock (GM, Retrospect) about the announcement and thought I’d share some thoughts here.


What’s New?

New Management Console & Workflow 

  • Simplified workflows
  • Comprehensive reporting through an updated management console

The Retrospect Management Console now supports geo tracking with a worldwide map of all users, Retrospect Backup servers, and remote clients, down to the city.

[image courtesy of Retrospect]

Cloud Native

  • Deploy directly in the cloud
  • Protect application data

Note that cloud native means that you can deploy agents on cloud-based hypervisor workloads and protect them. It doesn’t mean support for things like Kubernetes.

Anti-Ransomware Protection

Enables users to set immutable retention periods and policies within Amazon S3, Wasabi, and Backblaze B2, and supports bucket-level object lock in Google Cloud Storage and Microsoft Azure.
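
To give you an idea of what immutable retention looks like under the covers, here’s a generic boto3 sketch of S3 object lock. This isn’t Retrospect’s code – just the mechanism the feature builds on – and it assumes a bucket created with object lock enabled.

```python
# Writing a backup object with a compliance-mode object lock via boto3.
# Generic illustration of S3 object lock, not Retrospect's implementation.
# Assumes the bucket was created with object lock enabled.
from datetime import datetime, timedelta, timezone
import boto3

s3 = boto3.client("s3")
s3.put_object(
    Bucket="backups-immutable",            # hypothetical bucket name
    Key="backup-sets/2021-06-01.bak",
    Body=b"...backup data...",
    ObjectLockMode="COMPLIANCE",           # retention can't be shortened
    ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=30),
)
```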

Pricing

There’s a variety of pricing options available. When you buy a perpetual license, you have access to any new minor or major version upgrades for 12 months. With the monthly subscription model you have access to the latest version of the product for as long as you keep the subscription active.

[image courtesy of Retrospect]


Thoughts And Further Reading

I’ve mentioned in my previous coverage of Retrospect that I’m a fan of the product, if only for the fact that the consumer and SME space is screaming out for simple-to-use data protection solutions. Any solution that can help users develop some kind of immunity to ransomware has to be a good thing, and it’s nice to see Retrospect getting there in terms of cloud support. This isn’t as fully featured a product as some of the enterprise solutions out there, but for the price it doesn’t need to be.

Ultimately, the success of software like this is a balance between usability, cost, and reliability. The Retrospect folks seem cognisant of this, and have gone some way to fill the gaps where they could, and are working on others. I’ll be taking this version for a spin in the lab in the very near future, and hope to report back with how it all went.

Rubrik Basics – Rubrik CDM Upgrades With Polaris – Part 2

This is the second part of the super exciting article “Rubrik CDM Upgrades With Polaris”. In the first episode, I connected my Polaris tenancy to a valid Rubrik Support account so it could check for CDM upgrades. In this post, I’ll be covering the actual update process using Polaris. Hold on to your hats.

To get started, login to Polaris, click on the Gear icon, and select CDM Upgrades.

If there’s a new version of CDM available for deployment, you’ll see it listed in the dashboard. In this example, my test Edge cluster has an update available (5.3.1-p3). Happy days!

You’ll need to get this update downloaded to the cluster you want to install it on first. Click on the ellipsis and select Download.

You can then choose to download the release from Rubrik or locally.

Click on the version you want to download and click Next.

You then get the opportunity to confirm the download. Click on Confirm to do this.

It will then let you know that it’s working on it.

Once the update has downloaded, you’ll see “Ready for upgrade” on the dashboard.

Select the cluster you’d like to upgrade and click on Upgrade.

At this point, you’ll get the option to schedule the upgrade, and to select whether to roll back if the upgrade fails for some reason.

Confirm the upgrade and you’ll be on your way.

Polaris lets you know that it’s working on it.

You can see the progress in the dashboard.

When it’s done, it’s done.

And that’s it. This highlights the utility of something like Polaris, particularly when you’re managing a large number of clusters and need to keep things in tip-top shape.

Rubrik Basics – Rubrik CDM Upgrades With Polaris – Part 1

I decided to break this article into 2 parts. Not because it’s super epic or particularly complicated, but because there are a lot of screenshots and it just looks weird if I put it in one big thing. Should it have been a downloadable article? Sure, probably. But here we are. It’s been some time since I ran through the Rubrik CDM upgrade process (on physical hardware, no less). I didn’t have access to Polaris GPS at that time, and thought it would be useful to run through what it looks like to perform platform upgrades via Polaris rather than the CLI. This post covers the process of configuring Polaris to check for CDM updates, and the second post covers deploying those updates to Rubrik clusters.

Login to your Polaris dashboard, click on the Gear icon, and select CDM Upgrades.

Click on Connect to Support Portal to enter your Rubrik support account details. This lets your Polaris instance communicate freely with the Rubrik Support Portal.

You’ll need a valid support account to connect.

If you’ve guessed your password successfully, you’ll get a message at the bottom of the screen letting you know as much.

If your environment was already fairly up to date, you may not see anything listed in the CDM Upgrades dashboard.

And that’s it for Part 1. I can hear you asking “how could it get any more exciting than this, Dan?”. I know, it’s pretty great. Just wait until I run you through deploying an update in this post.