Datadobi Announces DobiProtect

Datadobi recently announced DobiProtect. I had the opportunity to speak with Michael Jack and Carl D’Halluin about the announcement, and thought I’d share some thoughts here.

 

The Problem

Disaster Recovery

Modern disaster recovery solutions tend more towards business continuity than DR. The challenge with data replication solutions is that it's trivial to replicate corruption from your primary storage to your DR storage. Backup systems are vulnerable too, and in most instances you need to make some extra effort to ensure you've got a replicated catalogue, and that your backup data is not isolated. Invariably, you'll be looking to restore to like hardware in order to reduce the recovery time. Tape is still a pain to deal with, and you're also at the mercy of people and processes going wrong.

What Do Customers Need?

To get what you need out of a robust DR system, there are a few criteria that need to be met, including:

  • An easy way to select business-critical data;
  • A simple way to make a golden copy in native format;
  • A bunker site in a DC or cloud;
  • A manual air-gap procedure;
  • A way to restore to anything; and
  • A way to failover if required.

 

Enter DobiProtect

What Does It Do?

The idea is that you have two sites with a manual air-gap between them, usually controlled by a firewall of some type. The first site is where you run your production workload, and there'll likely be a subset of data that is really quite important to your business. You can use DobiProtect to get that data from your production site to DR (it might even be in a bunker!). To get the data from production to DR, DobiProtect scans the data before it's pulled across to DR. Note that the data is pulled, not pushed. This is important, as it means there's no obvious trace of the bunker's existence in production.
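Datadobi haven't published the mechanics, but the pull model is easy to picture: a process at the DR site connects out through the firewall window, scans the source, and copies anything that has changed, so production never holds any configuration pointing at the bunker. A rough sketch of the idea in Python (the function names and hashing approach are mine, not Datadobi's):

```python
import hashlib
import shutil
from pathlib import Path

def _digest(path: Path) -> str:
    """Content hash used to decide whether a file has changed."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def pull_changed_files(source: Path, bunker: Path) -> list[str]:
    """Pull-style sync: the bunker side initiates the copy, so the
    production site holds nothing that reveals the bunker's existence."""
    pulled = []
    for src_file in source.rglob("*"):
        if not src_file.is_file():
            continue
        rel = src_file.relative_to(source)
        dst_file = bunker / rel
        # Copy anything new, or anything whose content hash has changed.
        if not dst_file.exists() or _digest(src_file) != _digest(dst_file):
            dst_file.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src_file, dst_file)
            pulled.append(str(rel))
    return pulled
```

The key design point is that the copy loop runs at the destination; opening and closing the firewall window around it is what makes the air-gap manual.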

[image courtesy of Datadobi]

If things go bang, you can recover to any NAS or object storage platform.

  • Browse golden copy
  • Select by directory structure, folder, or object patterns
  • Mounts and shares
  • Specific versions

Bonus Use Case

One of the more popular use cases that Datadobi spoke to me about was heterogeneous edge-to-core protection. Data on the edge is usually more vulnerable, and not every organisation has the funding to put robust protection mechanisms in place at every edge site to protect critical data. With the advent of COVID-19, many organisations have been pushing more data to the edge so that remote workers have better access to it. The challenge then becomes keeping that data protected in a reliable fashion. DobiProtect can be used to pull that data back from the edge to the core. Because it's a software-only product, your edge storage can be anything that supports object, SMB, or NFS, and the core could be anything else. This provides a lot of flexibility and avoids much of the expense traditionally associated with DR at edge sites.

[image courtesy of Datadobi]

 

Thoughts and Further Reading

The idea of an air-gapped site in a bunker somewhere is the sort of thing you might associate with a James Bond story. In Australia these aren't exactly a common thing (bunkers, not James Bond stories), but Europe and the US are riddled with them. As Jack pointed out in our call, "[t]he first rule of bunker club – you don't talk about the bunker". Datadobi couldn't give me a list of customers using this type of solution, because none of those customers wanted people to know that they were doing things this way. It seems a bit like security through obscurity, but there's no point painting a big target on your back or handing out clues for would-be crackers to get into your environment and wreak havoc.

The idea that your RPO is a day, rather than minutes, is also confronting for some folks. But the idea of this solution is that you’ll use it for your absolutely mission critical can’t live without it data, not necessarily your virtual machines that you may be able to recover normally if you’re attacked or the magic black smoke escapes from one of your hosts. If you’ve gone to the trouble of looking into acquiring some rack space in a bunker, limited the people in the know to a handful, and can be bothered messing about with a manual air-gap process, the data you’re looking to protect is clearly pretty important.

Datadobi has a rich heritage in data migration for both file and object storage systems. It makes sense that customer demand would eventually drive them down this route to deliver a migration tool that ostensibly runs all the time as a sort of data protection tool. This isn't designed to protect everything in your environment, but for the stuff that will ruin your business if it goes away, it's very likely worth the effort and expense. There are some folks out there actively looking for ways to put you over a barrel, so it's important to think about what it's worth to your organisation to avoid that if possible.

BackupAssist Announces BackupAssist ER

BackupAssist recently announced BackupAssist ER. I had the opportunity to speak with Linus Chang (CEO), Craig Ryan, and Madeleine Tan about the announcement.

 

BackupAssist

Founded in 2001, BackupAssist is focussed primarily on the small to medium enterprise (under 500 seats). They sell the product via a variety of mechanisms, including:

  • Direct
  • Partners
  • Distribution channels

 

Challenges Are Everywhere

Some of the challenges faced by the average SME when it comes to data protection include the following:

  • Malware
  • COVID-19
  • Compliance

So what does the average SME need when it comes to selecting a data protection solution?

  • Make it affordable
  • Automatic offsite backups with history and retention
  • Most recoveries are local – make them fast!
  • The option to recover in the cloud if needed (the fallback to the fallback)

 

What Is It?

So what exactly is BackupAssist ER? It’s backup and recovery software.

[image courtesy of BackupAssist]

It’s deployed on Windows servers, and has support for disk to disk to cloud as a protection topology.

CryptoSafeGuard

Another cool feature is CryptoSafeGuard, providing the following features:

  • Shield from unauthorised access
  • Detect – Alert – Preserve

Disaster Recovery

  • VM Instant boot (converting into a Hyper-V guest)
  • BMR (catering for dissimilar hardware)
  • Download cloud backup anywhere

Data Recovery

The product supports the granular recovery of files, Exchange, and applications.

Data Handling and Control

A key feature of the solution is the approach to data handling, offering:

  • Accessibility
  • Portability
  • Retention

It uses the VHDX file format to store protection data. It can also back up to Blob storage. Chang also advised that they're working on introducing S3 compatibility at some stage.

Retention

The product supports a couple of different retention schemes, including:

  • Local – Keep N copies (GFS is coming)
  • Cloud – Keep X copies
  • Archival – Keep a backup on an HDD, and retain for years
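The announcement doesn't spell out the pruning logic, so as an illustration only, a "keep N copies" scheme boils down to sorting backups by age and expiring everything beyond the newest N:

```python
from datetime import datetime

def prune_keep_n(backups: list[datetime], n: int) -> tuple[list[datetime], list[datetime]]:
    """Return (kept, expired): the newest n backups are kept,
    and the remainder become eligible for deletion."""
    ordered = sorted(backups, reverse=True)  # newest first
    return ordered[:n], ordered[n:]
```

A GFS (grandfather-father-son) scheme would layer daily, weekly, and monthly buckets on top of the same basic idea.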

Pricing

BackupAssist ER is licensed in a variety of ways. Costs are as follows:

  • Per physical machine – $399 US annually;
  • Per virtual guest machine – $199 US annually; and
  • Per virtual host machine – $699 US annually.

There are discounts available for multi-year subscriptions, as well as discounts to be had if you’re looking to purchase licensing for more than 5 machines.
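To make the list pricing concrete (ignoring multi-year and volume discounts), here's a quick back-of-the-envelope calculator:

```python
# Annual list prices (USD) from the announcement.
PRICES = {"physical": 399, "virtual_guest": 199, "virtual_host": 699}

def annual_cost(machines: dict[str, int]) -> int:
    """Annual licensing cost before any multi-year or volume discounts."""
    return sum(PRICES[kind] * count for kind, count in machines.items())
```

So a shop with two physical servers and three virtual guests would be looking at 2 × $399 + 3 × $199 = $1,395 US per year before discounts.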

 

Thoughts and Further Reading

Chang noted that BackupAssist is “[n]ot trying to be the best, but the best fit”. You’ll see that a lot of the capability is Microsoft-centric, with support for Windows and Hyper-V. This makes sense when you look at what the SME market is doing in terms of leveraging Microsoft platforms to deliver their IT requirements. Building a protection product that covers every platform is time-consuming and expensive in terms of engineering effort. What Chang and the team have been focussed on is delivering data protection products to customers at a particular price point while delivering the right amount of technology.

The SME market is notorious for wanting to consume quality product at a particular price point. Every interaction I've had with customers in the SME segment has given me a crystal clear understanding of "Champagne tastes on a beer budget". But in much the same way that some big enterprise shops will never stop doing things at a glacial pace, so too will many SME shops continue to look for high value at a low cost. Ultimately, compromises need to be made to meet that price point, hence the lack of support for platforms such as VMware. That doesn't mean that BackupAssist can't meet your requirements, particularly if you're running your business's IT on a couple of Windows machines. For this it's well suited, and the flexibility on offer in terms of disk targets, retention, and recovery should be motivation to investigate further. It's a bit of a nasty world out there, so anything you can do to ensure your business data is a little safer should be worthy of further consideration. You can read the press release here.

Backup Awareness Month, Backblaze, And A Simple Question

Last month was Backup Awareness Month (at least according to Backblaze). It's not formally recognised by any government entity; it's just something Backblaze made up. But I'm a big fan of backup awareness, so I endorse making up stuff like this. I had a chance to chat with Yev over at Backblaze about the results of a survey Backblaze runs annually and thought I'd share my thoughts here. Yes, I know I'm a bit behind, but I've been busy.

As I mentioned previously, as part of the Backup Awareness Month celebrations, Backblaze reaches out to folks in the US and asks a basic question: "How often do you backup all the data on your computer?". The responses reveal some interesting facts about consumer backup habits. There has been a welcome decrease in the number of people stating that they have never backed up their data (down to around one fifth of respondents), and the frequency with which people back up has increased.

Other takeaways from the results include:

  • Almost 50% of people lose their data each year;
  • 41% of people do not completely understand the difference between cloud backup and cloud storage;
  • Millennials are the generation most likely to back up their data daily; and
  • Seniors (65+) have gone from being the best age group at backing up data to the worst.

 

Thoughts

I bang on a lot about how important backup (and recovery) is across both the consumer and enterprise space. Surveys like this are interesting because they highlight, I think, the importance of regularly backing up our data. We're making more and more of it, and it's not magically protected by the benevolent cloud fairies, so it's up to us to protect it. Particularly if it's important to us. It's scary to think that one in two people are losing data on a regular basis, and scarier still that most folks don't understand the distinction between cloud storage and cloud backup. I was surprised that Millennials are most likely to back up their data, but my experience with younger generations really only extends to my children, so they're maybe not the best indicator of what the average consumer is doing. It's also troubling that older folk are struggling to keep on top of backups. Anecdotally that lines up with my experience as well. So I think it's great that Yev and the team at Backblaze have been on something of a crusade to educate people about cloud backup and how it can help them. I love that the company is all about making it easier for consumers, not harder.

As an industry we need to be better at making things simple for people to consume, and more transparent in terms of what can be achieved with technology. I know this blog isn't really focused on consumer technology, and it might seem a bit silly that I carry on a bit about consumer backup. But you all have data stored some place or another that means something to you. And I know not all of you are protecting it appropriately. Backup is like insurance. It's boring. People don't like paying for it. But when something goes bang, you'll be glad you have it. If these kinds of posts can raise some awareness, and get one more person to protect the data that means something to them in an effective fashion, then I'll be happy with that.

World Backup Day 2020

World Backup Day has been and gone already (it’s 31st March each year). I don’t normally write much about it, as I’d like to think that every day is World Backup Day. But not everyone is into data protection in the same way I am though. Every year, some very nice people at a PR firm I work with send me a series of quotes about World Backup Day, and I invariably file them away, and don’t write anything on the topic. But I thought this year, “in these uncertain times”, that it might be an idea to put together a short article that included some of those quotes and some of my own thoughts on the topic.

 

The Vendor’s View

Steve Cochran (Chief Technology Officer, ConnectWise), had this to say on the topic:

“There are two major reasons why we should take backups seriously: Hardware failure and human error. Systems are not foolproof and every piece of hardware will fail eventually, so it’s not a question of if, but rather when, these failures will happen. If you haven’t kept up with your backups, you’ll get caught unprepared. There’s also a factor of human error where you might accidentally delete a file or photo. We put our entire lives on our computers and mobile devices, but we also make mistakes, and not having a backup system in place is almost silly at this point. While you need to dedicate some time to set up automatic backups, you don’t have to keep up with them — they simply run in the background.”

 

Yev Pusin (Director of Strategy, Backblaze), chipped in with this:

“World Backup Day is coming up, and while many folks will go with phrases like ‘Don’t be an April Fool, Backup Today,’ it is not the route I’ll go down this year. Backing up your data is something that should be taken seriously, especially with the recent increase in major ransomware attacks and the sudden increase in the amount of remote workers we are seeing in 2020 as a result of COVID-19.

While World Backup Day serves as a great reminder of the importance of backing up your data, data backup is something that should be an everyday activity. That used to be a daunting task, but it no longer has to be one!”

 

Carl D’Halluin (CTO, Datadobi), had this to say on the topic:

“Ultimately, in a world of rising threats, organizations must develop the ability to protect and back up their data quickly, flexibly, securely, and cost-effectively, so data can be backed up down to the individual file level.”

 

Data Protection is Everyone’s Problem

Data protection is everyone’s problem. But I don’t want that to sound like I’m trying to scare you. It’s one of those things that’s important though. More and more of our everyday activities revolve around technology and data. In much the same way as most of us now have home insurance, car insurance, and health insurance, we also need to consider the need for data insurance. This isn’t just a problem for companies, and it’s not just a problem for the end user, it’s a problem for everyone.

So what can you do? There’s all manner of things you can do to improve your personal and business data protection situation. From a personal perspective, I recommend you do the equivalent of going to your doctor for a health check, and do a health check on your data. Spend a day taking note of everything that you interact with, and question the data that’s generated during those interactions. Is it important to you? What would you do if you couldn’t access it? Then go and find a way to protect it if possible. That might be something as mundane as taking screenshots of messages (and backing up the resultant screenshots). It might be more complicated, and involve installing some software on your computer. Whatever it is, if you’re not doing it, and you think you should be, try and make it a priority. If it all seems too complicated, or something you don’t feel capable of doing yourself, don’t be afraid to ask people on the Internet for help.

The same goes for business. You might work for a company where the responsibility for data protection in a corporate sense lies with someone else, but I would suggest that, just like workplace health and safety, data protection (availability, integrity, and security) is everyone’s responsibility. If you’re generating data and keeping it on your laptop, how is your company going to protect that data? Is there a place you should be storing it? Why aren’t you doing that? Is your company relying on SaaS applications but not protecting those apps? Talk to the people responsible. Things go wrong all the time. You don’t want to be on the wrong end of it. Indeed, in celebration of World Backup Day, I recently jumped on a Druva podcast with W. Curtis Preston and Stephen Manley to talk about when things do go wrong. You can listen to it here.

Data protection can be difficult, but it’s not impossible. Particularly when you start to understand the value of your data. So let’s all try to make every day “World Backup Day”. Okay, I know that’s a terrible line, but you know what I mean.

Infrascale Protects Your Infrastructure At Scale

Disclaimer: I recently attended Storage Field Day 19.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Infrascale recently presented at Storage Field Day 19. You can see videos of the presentation here, and download my rough notes from here.

 

Infrascale?

Russ Reeder (CEO) introduced the delegates to Infrascale. If you’ve not heard of Infrascale before, it’s a service provider and vendor focused primarily on backup and disaster recovery services. It has around 150 employees and operates in 10 cities in 5 countries. Infrascale currently services around 60,000 customers and 250,000 VMs and endpoints. Reeder said Infrascale as a company is “[p]assionate about its customers’ happiness and success”.

 

Product Portfolio

There are four different products in the Infrascale portfolio.

Infrascale Cloud Backup (ICB)

  • Backup directly to the cloud
  • Recover data in seconds
  • Optimised for endpoints and branch office servers
  • Ransomware detection & remediation

Infrascale Cloud Application Backup (ICAB)

  • Defy cloud applications’ limited retention policies
  • Back up O365, SharePoint and OneDrive, G-Suite, Salesforce.com, box.com, and more
  • Recover individual mail items or mailboxes

Infrascale Disaster Recovery – Local (IDR-LOCAL)

  • Backup systems to an on-premises appliance
  • Run system replicas (locally) in minutes
  • Restore from on-premises appliance or the cloud
  • Archive / DR data to disk

Infrascale Disaster Recovery – Cloud (IDR-CLOUD)

  • Backup systems to an on-premises appliance and to a bootable cloud appliance
  • Run system replicas in minutes (locally or boot in the cloud)
  • Optimised for mission-critical physical and virtual servers

Support for Almost Everything

Infrascale offers support for almost everything, including VMware, Hyper-V, bare metal, endpoints, and public cloud workloads.

Other Features

Speedy DR locally or to the Cloud

  • IDR is very fast – boot ready in minutes
  • IDR enables recovery locally or in the cloud

Backup Target Optionality; Vigilant Data Security

  • ICB allows for backup targets “anywhere”
  • ICB detects ransomware and mitigates impact

Single View

The Infrascale dashboard does a pretty decent job of providing all of the information you might need about the service in a single view.

[image courtesy of Infrascale]

Appliances

There are a variety of appliance options available, as well as virtual editions of the appliance that you can use.

[image courtesy of Infrascale]

 

Thoughts and Further Reading

Regular readers of this blog would know that I’m pretty interested in data protection as a topic. I’m sad to say that I hadn’t heard of Infrascale prior to this presentation, but I’m glad I have now. There are a lot of service providers out there offering some level of data protection and disaster recovery as a service. These services offer varying levels of protection, features, and commercial benefits. Infrascale distinguishes itself by offering its own hardware platform as a core part of the offering, rather than building a solution based on one of the major data protection vendors.

In my day job I work a lot with product development for these types of solutions and, to be honest, the idea of developing a hardware data protection appliance is not something that appeals. As a lot of failed hardware vendors will tell you, it’s one thing to have a great idea, and quite another to execute successfully on that idea. But Infrascale has done the hard work on engineering the solution, and it seems to offer all of the features the average punter looks for in a DPaaS and DRaaS offering. I’m also a big fan of the fact that it offers support for endpoint protection, as I think this is a segment that is historically under-represented in the data protection space. It has a good number of customers, primarily in the SME range, and is continuing to add services to its product portfolio.

Disaster recovery and data protection are things that aren’t always done very well by small to medium enterprises. Unfortunately, these types of businesses tend to have the most to lose when something goes wrong with their critical business data (either via operator error, ransomware, or actual disaster). Something like Infrascale’s offering is a great way to take away a lot of the complexity traditionally associated with protecting that important data. I’m looking forward to hearing more about Infrascale in the future.

Random Short Take #27

Welcome to my semi-regular, random news post in a short format. This is #27. You’d think it would be hard to keep naming them after basketball players, and it is. None of my favourite players ever wore 27, but Marvin Barnes did surface as a really interesting story, particularly when it comes to effective communication with colleagues. Happy holidays too, as I’m pretty sure this will be the last one of these posts I do this year. I’ll try and keep it short, as you’ve probably got stuff to do.

  • This story of serious failure on El Reg had me in stitches.
  • I really enjoyed this article by Raj Dutt (over at Cohesity’s blog) on recovery predictability. As an industry we talk an awful lot about speeds and feeds and supportability, but sometimes I think we forget about keeping it simple and making sure we can get our stuff back as we expect.
  • Speaking of data protection, I wrote some articles for Druva about, well, data protection and things of that nature. You can read them here.
  • There have been some pretty important CBT-related patches released by VMware recently. Anthony has provided a handy summary here.
  • Everything’s an opinion until people actually do it, but I thought this research on cloud adoption from Leaseweb USA was interesting. I didn’t expect to see everyone putting their hands up and saying they’re all in on public cloud, but I was also hopeful that we, as an industry, hadn’t made things as unclear as they seem to be. Yay, hybrid!
  • Site sponsor StorONE has partnered with Tech Data Global Computing Components to offer an All-Flash Array as a Service solution.
  • Backblaze has done a nice job of talking about data protection and cloud storage through the lens of Star Wars.
  • This tip on removing particular formatting in Microsoft Word documents really helped me out recently. Yes I know Word is awful.
  • Someone was nice enough to give me an acknowledgement for helping review a non-fiction book once. Now I’ve managed to get a character named after me in one of John Birmingham’s epics. You can read it out of context here. And if you’re into supporting good authors on Patreon – then check out JB’s page here. He’s a good egg, and his literary contributions to the world have been fantastic over the years. I don’t say this just because we live in the same city either.

Aparavi Announces File Protect & Insight – Helps With Third Drawer Down

I recently had the opportunity to speak to Victoria Grey (CMO), Darryl Richardson (Chief Product Evangelist), and Jonathan Calmes (VP Business Development) from Aparavi regarding their File Protect and Insight solution. If you’re a regular reader, you may remember I’m quite a fan of Aparavi’s approach and have written about them a few times. I thought I’d share some of my thoughts on the announcement here.

 

FPI?

The title is a little messy, but think of your unstructured data in the same way you might look at the third drawer down in your kitchen. There’s a bunch of stuff in there and no-one knows what it all does, but you know it has some value. Aparavi describes File Protect and Insight (FPI) as “[f]ile by file data protection and archive for servers, endpoints and storage devices featuring data classification, content level search, and hybrid cloud retention and versioning”. It takes the data you’re not necessarily sure about, and makes it useful. Potentially.

It comes with a range of features out of the box, including:

  • Data Awareness
    • Data classification
    • Metadata aggregation
    • Policy driven workflows
  • Global Security
    • Role-based permissions
    • Encryption (in-flight and at rest)
    • File versioning
  • Data Search and Access
    • Anywhere / anytime file access
    • Seamless cloud integration
    • Full-content search

 

How Does It Work?

The solution is fairly simple to deploy. There’s a software appliance installed on-premises (this is known as the aggregator). There’s a web-accessible management console, and you configure your sources to be protected via network access.

[image courtesy of Aparavi]

You get the ability to mount backup data from any point in time, and you can provide a path that can be shared via the network to users to access that data. Regardless of where you end up storing the data, the index stays on-premises, and searches run against the index, not the source. This keeps searches fast and avoids putting load on the source. There’s also a good story to be had in terms of cloud provider compatibility. And if you’re looking to work with an on-premises / generic S3 provider, chances are high that the solution won’t have too many issues with that either.
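Aparavi haven't published the index format, so the following is purely illustrative, but the benefit of searching a local catalogue rather than the protected source looks something like this:

```python
def build_index(files: dict[str, str]) -> list[dict]:
    """Build a simple full-content index from {path: content}.
    In the real product this would live on the on-premises aggregator."""
    return [{"path": path, "words": set(content.lower().split())}
            for path, content in files.items()]

def search(index: list[dict], term: str) -> list[str]:
    """Search against the local index; the source is never touched."""
    return [entry["path"] for entry in index if term.lower() in entry["words"]]
```

Every query hits the local index; the original files (which may live in cloud object storage) only need to be touched when you actually restore something.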

 

Thoughts

Data protection is hard to do well at the best of times, and data management is even harder to get right. Enterprises are busy generating terabytes of data and are struggling to a) protect it successfully, and b) make use of that protected data in an intelligent fashion. It seems that it’s no longer enough to have a good story around periodic data protection – most of the vendors have proven themselves capable in this regard. What differentiates companies is the ability to make use of that protected data in new and innovative ways that can increase the value of that data to the business that’s generating it.

Companies like Aparavi are doing a pretty good job of taking the madness that is your third drawer down and providing you with some semblance of order in the chaos. This can be a real advantage in the enterprise, not only for day to day data protection activities, but also for extended retention and compliance challenges, as well as storage optimisation challenges that you may face. You still need to understand what the data is, but something like FPI can help you declutter it, making it easier to understand.

I also like some of the ransomware detection capabilities being built into the product. It’s relatively rudimentary for the moment, but keeping a close eye on the percentage of changed data is a good indicator of whether or not something is going badly wrong with the data sources you’re trying to protect. And if you find yourself the victim of a ransomware attack, the theory is that Aparavi has been storing a secondary, immutable copy of your data that you can recover from.
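Aparavi haven't detailed the heuristic, but a changed-data-percentage check is simple to sketch (the 50% threshold here is mine, purely for illustration):

```python
def change_rate(previous: dict[str, str], current: dict[str, str]) -> float:
    """Fraction of files whose content hash changed (or appeared)
    since the previous scan."""
    if not current:
        return 0.0
    changed = sum(1 for path, digest in current.items()
                  if previous.get(path) != digest)
    return changed / len(current)

def looks_like_ransomware(previous: dict[str, str], current: dict[str, str],
                          threshold: float = 0.5) -> bool:
    """Flag a source if an unusually large share of files changed at once,
    which is a common ransomware fingerprint."""
    return change_rate(previous, current) >= threshold
```

Legitimate workloads rarely rewrite most of a share between scans, which is why a sudden spike in the change rate is worth alerting on.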

People want a lot of different things from their data protection solutions, and sometimes it’s easy to expect more than is reasonable from these products without really considering some of the complexity that can arise from that increased level of expectation. That said, it’s not unreasonable that your data protection vendors should be talking to you about data management challenges and deriving extra value from your secondary data. A number of people have a number of ways to do this, and not every way will be right for you. But if you’ve started noticing a data sprawl problem, or you’re looking to get a bit more from your data protection solution, particularly for unstructured data, Aparavi might be of some interest. You can read the announcement here.

Backblaze Announces Version 7.0 – Keep Your Stuff For Longer

Backblaze recently announced Version 7.0 of its cloud backup solution for consumer and business and I thought I’d run through the announcement here.

 

Extended Version History

30 Days? 1 Year? 

One of the key parts of this announcement is support for extended retention of backup data. All Backblaze computer backup accounts have 30-Day Version History included with their backup license. But you can now extend that to 1 year if you like. Note that this will cost an additional $2/month and is charged based on your license type (monthly, yearly, or 2-year). It’s also prorated to align with your existing subscription.

Forever

Want to have a more permanent relationship with your protection data? You can also elect to keep it forever, at the cost of an additional $2/month (aligned to your license plan type) plus $0.005/GB/month for versions modified on your computer more than 1 year ago. There’s a handy FAQ that you can read here. Note that all pricing from Backblaze is in US dollars.
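To put that in concrete terms, here's the published formula as a quick calculator (a sketch of my own, so check Backblaze's FAQ for proration and edge cases):

```python
def forever_monthly_cost(gb_over_one_year_old: float) -> float:
    """Monthly cost of Forever Version History: a flat $2 US plus
    $0.005/GB for versions modified more than a year ago."""
    return 2.0 + 0.005 * gb_over_one_year_old
```

So 200GB of year-old versions would run you $2 + $1 = $3 US per month on top of your normal license.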

[image courtesy of Backblaze]

 

Other Updates

Are you trying to back up really large files (like videos)? You might already know that Backblaze takes large files and chunks them into smaller ones before uploading them to the Internet. Upload performance has now been improved, with the maximum packet size being increased from 30MB to 100MB. This allows the Backblaze app to transmit data more efficiently by better leveraging threading. According to Backblaze, this also “smoothes out upload performance, reduces sensitivity to latency, and leads to smaller data structures”.
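The chunking itself is conceptually simple. A simplified sketch (the real client obviously adds checksumming, retries, and thread management):

```python
def chunk_file(data: bytes, max_chunk: int = 100 * 2**20) -> list[bytes]:
    """Split a large payload into chunks of at most max_chunk bytes
    (100MB by default), so multiple chunks can be uploaded on
    parallel threads."""
    return [data[i:i + max_chunk] for i in range(0, len(data), max_chunk)]
```

With a 100MB maximum, a 1GB video becomes eleven chunks that can be pushed up concurrently, which is where the threading gains come from.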

Other highlights of this release include:

  • For the aesthetically minded amongst you, the installer now looks better on higher resolution displays;
  • For Windows users, an issue with OpenSSL and Intel’s Apollo Lake chipsets has now been resolved; and
  • For macOS users, support for Catalina is built in. (Note that this is also available with the latest version 6 binary).

Availability?

Version 7.0 will be rolled out to all users over the next few weeks. If you can’t wait, there are two ways to get hold of the new version:

 

Thoughts and Further Reading

It seems weird that I’ve been covering Backblaze as much as I have, given their heritage in the consumer data protection space, and my focus on service providers and enterprise offerings. But Backblaze has done a great job of making data protection accessible and affordable for a lot of people, and it’s done it in a fairly transparent fashion at the same time. Note also that this release covers both consumers and business users. The addition of extended retention capabilities to its offering, improved performance, and some improved compatibility is good news for Backblaze users. It’s really easy to set up and get started with the application, it supports a good variety of configurations, and you’ll sleep better knowing your data is safely protected (particularly if you accidentally fat-finger an important document and need to recover an older version). If you’re thinking about signing up, you can use this affiliate link I have and get yourself a free month (and I’ll get one too).

If you’d like to know more about the features of Version 7.0, there’s a webinar you can jump on with Yev. The webinar will be available on BrightTalk (registration is required) and you can sign up by visiting the Backblaze BrightTALK channel. You can also read more details on the Backblaze blog.

Random Short Take #23

Want some news? In a shorter format? And a little bit random? This listicle might be for you.

  • Remember Retrospect? They were acquired by StorCentric recently. I hadn’t thought about them in some time, but they’re still around, and celebrating their 30th anniversary. Read a little more about the history of the brand here.
  • Sometimes size does matter. This article around deduplication and block / segment size from Preston was particularly enlightening.
  • This article from Russ had some great insights into why it’s not wise to entirely rule out doing things the way service providers do just because you’re working in enterprise. I’ve had experience in both SPs and enterprise and I agree that there are things that can be learnt on both sides.
  • This is a great article from Chris Evans about the difficulties associated with managing legacy backup infrastructure.
  • The Pure Storage VM Analytics Collector is now available as an OVA.
  • If you’re thinking of updating your Mac’s operating environment, this is a fairly comprehensive review of what macOS Catalina has to offer, along with some caveats.
  • Anthony has been doing a bunch of cool stuff with Terraform recently, including using variable maps to deploy vSphere VMs. You can read more about that here.
  • Speaking of people who work at Veeam, Hal has put together a great article on orchestrating Veeam recovery activities to Azure.
  • Finally, the Brisbane VMUG meeting originally planned for Tuesday 8th has been moved to the 15th. Details here.

Random Short Take #19

Here are some links to some random news items and other content that I recently found interesting. You might find them interesting too. Episode 19 – let’s get tropical! It’s all happening.

  • I seem to link to Alastair’s blog a lot. That’s mainly because he’s writing about things that interest me, like this article on data governance and data protection. Plus he’s a good bloke.
  • Speaking of data protection, Chris M. Evans has been writing some interesting articles lately on things like backup as a service. Having worked in the service provider space for a piece of my career, I wholeheartedly agree that it can be a “leap of faith” on the part of the customer to adopt these kinds of services.
  • This post by Raffaello Poltronieri on VMware’s vRealize Operations session at Tech Field Day 19 makes for good reading.
  • This podcast episode from W. Curtis Preston was well worth the listen. I’m constantly fascinated by the challenges presented to infrastructure in media and entertainment environments, particularly when it comes to data protection.
  • I always enjoy reading Preston’s perspective on data protection challenges, and this article is no exception.
  • This article from Tom Hollingsworth was honest and probably cut a little close to the bone for a lot of readers. There are a lot of bad habits that we develop in our jobs, whether we’re coding, running infrastructure, or flipping burgers. The key is to identify those behaviours and work to address them where possible.
  • Over at SimplyGeek.co.uk, Gavin has been posting a number of Ansible-related articles, including this one on automating vSphere VM and OVA deployments. A number of folks in the industry talk a tough game when it comes to automation, and it’s nice to see Gavin putting it on wax and setting a great example.
  • The Mark Of Cain have announced a national tour to commemorate the 30th anniversary of their Battlesick album. Unfortunately I may not be in the country when they’re playing in my part of the woods, but if you’re in Australia you can find out more information here.