Random Short Take #26

Welcome to my semi-regular, random news post in a short format. This is #26. I was going to start naming them after my favourite basketball players. This one could be the Korver edition, for example. I don’t think that’ll last though. We’ll see. I’ll stop rambling now.

Datrium Enhances DRaaS – Makes A Cool Thing Cooler

Datrium recently made a few announcements to the market. I had the opportunity to speak with Brian Biles (Chief Product Officer, Co-Founder), Sazzala Reddy (Chief Technology Officer and Co-Founder), and Kristin Brennan (VP of Marketing) about the news and thought I’d cover it here.

 

Datrium DRaaS with VMware Cloud

Before we talk about the new features, let’s quickly revisit the DRaaS for VMware Cloud offering, announced by Datrium in August this year.

[image courtesy of Datrium]

The cool thing about this offering was that, according to Datrium, it “gives customers complete, one-click failover and failback between their on-premises data center and an on-demand SDDC on VMware Cloud on AWS”. There are some real benefits to be had for Datrium customers, including:

  • Highly optimised, and more efficient than some competing solutions;
  • Consistent management for both on-premises and cloud workloads;
  • Eliminates the headaches as enterprises scale;
  • Single-click resilience;
  • Simple recovery from current snapshots or old backup data;
  • Cost-effective failback from the public cloud; and
  • Purely software-defined DRaaS on hyperscale public clouds for reduced deployment risk long term.

But what if you want a little flexibility in terms of where those workloads are recovered? Read on.

Instant RTO

So you’re protecting your workloads in AWS, but what happens when you need to stand up stuff fast in VMC on AWS? This is where Instant RTO can really help. There’s no rehydration or backup “recovery” delay. Datrium tells me you can perform massively parallel VM restarts (hundreds at a time) and you’re ready to go in no time at all. The full RTO varies by run-book plan, but by booting VMs from a live NFS datastore, you know it won’t take long. Failback uses VADP.

[image courtesy of Datrium]

The only cost during normal business operations (when not testing or deploying DR) is the cost of storing ongoing backups. And these are automatically deduplicated, compressed and encrypted. In the event of a disaster, Datrium DRaaS provisions an on-demand SDDC in VMware Cloud on AWS for recovery. All the snapshots in S3 are instantly made executable on a live, cloud-native NFS datastore mounted by ESX hosts in that SDDC, with caching on NVMe flash. Instant RTO is available from Datrium today.

DRaaS Connect

DRaaS Connect extends the benefits of Instant RTO DR to any vSphere environment. DRaaS Connect is available for two different vSphere deployment models:

  • DRaaS Connect for VMware Cloud offers instant RTO disaster recovery from an SDDC in one AWS Availability Zone (AZ) to another;
  • DRaaS Connect for vSphere On Prem integrates with any vSphere physical infrastructure on-premises.

[image courtesy of Datrium]

DRaaS Connect for vSphere On Prem extends Datrium DRaaS to any vSphere on-premises infrastructure. It’s managed by a cloud-based DRaaS control plane, which you use to define VM protection groups and their frequency, replication, and retention policies. On failback, DRaaS returns only changed blocks to vSphere and the local on-premises infrastructure through DRaaS Connect.
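
To make the “changed blocks only” idea a bit more concrete, here’s a rough sketch of how that sort of failback decision can work. This is purely illustrative Python of my own (the block size and file names are made up), not Datrium’s implementation: hash the blocks of the recovered copy and the stale on-premises copy, and only ship back the blocks that differ.

import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # hypothetical 4MB block size

def block_hashes(path):
    # Return one SHA-256 digest per fixed-size block of the file.
    hashes = []
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            hashes.append(hashlib.sha256(block).hexdigest())
    return hashes

def changed_blocks(recovered_copy, on_prem_copy):
    # Only the block indexes returned here would need to travel on failback.
    recovered = block_hashes(recovered_copy)
    local = block_hashes(on_prem_copy)
    return [i for i, digest in enumerate(recovered)
            if i >= len(local) or local[i] != digest]

# e.g. changed_blocks("vm-disk-recovered.vmdk", "vm-disk-original.vmdk")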

The other cool things to note about DRaaS Connect are that:

  • There’s no Datrium DHCI system required
  • It’s a downloadable VM
  • You can start protecting workloads in minutes

DRaaS Connect will be available in Q1 2020.

 

Thoughts and Further Reading

Datrium announced some research around disaster recovery and ransomware in enterprise data centres in concert with the product announcements. Some of it wasn’t particularly astonishing, with folks keen to leverage pay-as-you-go models for DR, and wanting easier mechanisms for data mobility. What was striking is that one of the main causes of disasters is people, not nature. Years ago I remember we used to plan for disasters that invariably involved some kind of flood, fire, or famine. Nowadays, we need to plan for some script kiddie pumping nasty code onto our boxes and trashing critical data.

I’m a fan of companies that focus on disaster recovery, particularly if they make it easy for consumers to access their services. Disasters happen frequently. It’s not a matter of if, just a matter of when. Datrium has acknowledged that not everyone is using their infrastructure, but that doesn’t mean they can’t offer value to customers using VMC on AWS. I’m not 100% sold on Datrium’s vision for “disaggregated HCI” (despite Hugo’s efforts to educate me), but I am a fan of vendors focused on making things easier to consume and operate for customers. Instant RTO and DRaaS Connect are both features that round out the DRaaS for VMware Cloud on AWS offering quite nicely.

I haven’t dived as deep into this as I’d like, but Andre from Datrium has written a comprehensive technical overview that you can read here. Datrium’s product overview is available here, and the product brief is here.

Random Short Take #25

Want some news? In a shorter format? And a little bit random? Here’s a short take you might be able to get behind. Welcome to #25. This one seems to be dominated by things related to Veeam.

  • Adam recently posted a great article on protecting VMConAWS workloads using Veeam. You can read about it here.
  • Speaking of Veeam, Hal has released v2 of his MS Office 365 Backup Analysis Tool. You can use it to work out how much capacity you’ll need to protect your O365 workloads. And you can figure out what your licensing costs will be, as well as a bunch of other cool stuff.
  • And in more Veeam news, the VeeamON Virtual event is coming up soon. It will be run across multiple timezones and should be really interesting. You can find out more about that here.
  • This article by Russ on copyright and what happens when bots go wild made for some fascinating reading.
  • Tech Field Day turns 10 years old this year, and Stephen has been running a series of posts covering some of the history of the event. Sadly I won’t be able to make it to the celebration at Tech Field Day 20, but if you’re in the right timezone it’s worthwhile checking it out.
  • Need to connect to an SMB share on your iPad or iPhone? Check out this article (assuming you’re running iOS 13 or iPadOS 13.1).
  • It grinds my gears when this kind of thing happens. But if the mighty corporations have launched a line of products without thinking it through, we shouldn’t expect them to maintain that line of products. Right?
  • Storage and Hollywood can be a real challenge. This episode of Curtis‘s podcast really got into some of the details with Jeff Rochlin.

 

Veeam Basics – Cloud Tier And v10

Disclaimer: I recently attended Veeam Vanguard Summit 2019.  My flights, accommodation, and some meals were paid for by Veeam. There is no requirement for me to blog about any of the content presented and I am not compensated by Veeam for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Overview

Depending on how familiar you are with Veeam, you may already have heard of the Cloud Tier feature. This was new in Veeam Availability Suite 9.5 Update 4, and “is the built-in automatic tiering feature of Scale-out Backup Repository that offloads older backup files to more affordable storage, such as cloud or on-premises object storage”. The idea is you can use the cloud (or cloud-like on-premises storage resources) to make more effective (read: economical) use of your primary storage repositories. You can read more about Veeam’s object storage capabilities here.

 

v10 Enhancements

Move, Copy, Move and Copy

In 9.5 U4 the Move mode was introduced:

  • Policy allows chunks of data to be stripped out of backup files
  • Metadata remains locally on the performance tier
  • Data moved and offloaded into capacity tier
  • Capacity Tier backed by an object storage repository

The idea was that your performance tier provided the landing zone for backup data, and the capacity tier was an object storage repository that data was moved to. Rhys does a nice job of covering Cloud Tier here.

Copy + Move

In v10, you’ll be able to do both copy and move activities on older backup data. Here are some things to note about copy mode:

  • Still uses the same mechanics as Move
  • Data is chunked and offloaded to the Capacity Tier
  • Unlike Move, Copy doesn’t dehydrate the VBK / VIB / VRB files
  • Like Move, this ensures that all restore functionality is retained
  • Still makes use of the Archive Index, similar to Move
  • Will not duplicate blocks being offloaded from the Performance Tier
  • Using both Copy and Move together is fully supported
  • Copy and Move will share block data between them

[image courtesy of Veeam]

With Copy and Move the Capacity Tier will contain a copy of every backup file that has been created as well as offloaded data from the Performance Tier. Anthony does a great job of covering off the Cloud Tier Copy feature in more depth here.
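
If the distinction between Move and Copy (and how they share blocks) feels a bit abstract, here’s a toy model of the idea. It’s a sketch of my own and not Veeam code: both modes offload content-addressed blocks to a capacity tier keyed by hash, so a block that has already been offloaded is never stored twice. Move then dehydrates the local backup file down to metadata, while Copy leaves the local file fully hydrated.

import hashlib

capacity_tier = {}  # hash -> block data; stands in for the object storage repository

def offload(blocks):
    # Offload blocks to the capacity tier; blocks already present are not stored again.
    refs = []
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        capacity_tier.setdefault(digest, block)
        refs.append(digest)
    return refs

def copy_mode(backup_file):
    # Copy: offload the blocks, keep the local backup file fully hydrated.
    backup_file["offloaded"] = offload(backup_file["blocks"])

def move_mode(backup_file):
    # Move: offload the blocks, then dehydrate the local file to metadata only.
    backup_file["offloaded"] = offload(backup_file["blocks"])
    backup_file["blocks"] = []  # the data now lives only in the capacity tier

backup = {"name": "job1.vbk", "blocks": [b"aaaa", b"bbbb"]}
copy_mode(backup)           # both blocks land in the capacity tier
move_mode(backup)           # same hashes, so nothing is stored twice
print(len(capacity_tier))   # 2

That shared, hash-keyed pool is the reason running Copy and Move together doesn’t blow out your object storage consumption.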

Immutability

One of the features I’m really excited about (because I’m into some weird stuff) is the Cloud Tier Immutability feature.

  • Guarantees additional protection for data stored in Object storage
  • Protects against malicious users and accidental deletion (ITP Theory)
  • Applies to data offloaded to capacity tier for Move or Copy
  • Protects the most recent (more important) backup points
  • Beware of increased storage consumption and S3 costs
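
Veeam haven’t asked me to explain the plumbing, but the immutability concept maps naturally onto object storage features like S3 Object Lock. As a rough illustration only (boto3 against a hypothetical bucket created with Object Lock enabled, not Veeam’s actual mechanism), writing an object with a retain-until date means it can’t be deleted or overwritten before that date passes:

from datetime import datetime, timedelta, timezone
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and key; the bucket must have been created with Object Lock enabled.
s3.put_object(
    Bucket="capacity-tier-demo-bucket",
    Key="offload/block-0001",
    Body=b"backup block data",
    ObjectLockMode="COMPLIANCE",
    ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=30),
)
# Until the retain-until date passes, attempts to delete or overwrite this object
# version will fail, which is what protects recent restore points from malicious
# users and fat fingers.

It’s also why that last bullet matters: locked versions can’t be cleaned up early, so you wear the storage and S3 costs until the retention window expires.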

 

Thoughts and Further Reading

The idea of moving protection data to a cheaper storage repository isn’t a new one. Fifteen years ago we were excited to be enjoying backup to disk as a new way of doing data protection. Sure, it wasn’t (still isn’t) as cheap as tape, but it was a lot more flexible and performance oriented. Unfortunately, the problem with disk-based backup systems is that you need a lot of disk to keep up with the protection requirements of primary storage systems. And then you probably want to keep many, many copies of this data for a long time. Deduplication and compression helps with this problem, but it’s not magic. Hence the requirement to move protection data to lower tiers of storage.

Veeam may have been a little late to market with this feature, but their implementation in 9.5 U4 is rock solid. It’s the kind of thing we’ve come to expect from them. With v10 the addition of the Copy mode, and the Immutability feature in Cloud Tier, should give people cause to be excited. Immutability is a really handy feature, and provides the kind of security that people should be focused on when looking to pump data into the cloud.

I still have some issues with people using protection data as an “archive” – that’s not what it is. Rather, this is a copy of protection data that’s being kept for a long time. It keeps auditors happy. And fits nicely with people’s idea of what archives are. Putting my weird ideas about archives versus protection data aside, the main reason you’d want to move or copy data to a cheaper tier of disk is to save money. And that’s not a bad thing, particularly if you’re working with enterprise protection policies that don’t necessarily make sense (e.g. keeping all backup data for seven years). I’m looking forward to v10 coming soon, and taking these features for a spin.

Veeam Vanguard Summit 2019 – (Fairly) Full Disclosure

Disclaimer: I recently attended Veeam Vanguard Summit 2019.  My flights, accommodation, and some meals were paid for by Veeam. There is no requirement for me to blog about any of the content presented and I am not compensated by Veeam for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Here are my notes on gifts, etc, that I received as an attendee at Veeam Vanguard Summit 2019. Apologies if it’s a bit dry but I’m just trying to make it clear what I received during this event to ensure that we’re all on the same page as far as what I’m being influenced by. I’m going to do this in chronological order, as that was the easiest way for me to take notes during the week. Whilst every attendee’s situation is different, I was paid by my employer to be at this event.

 

Saturday

My wife kindly dropped me at the airport on Saturday evening. I flew Emirates economy class from BNE – DXB – PRG courtesy of Veeam. I had a 3 hour layover at DXB. In DXB I managed to locate the Emirates Business lounge and eventually found the smoked salmon. The Emirates lounge at BNE is also super nice compared to the Qantas one (sorry Qantas!).

 

Sunday

I landed in Prague Sunday afternoon and took a taxi to my friend Max‘s house. We went for a wander to Hanga’r Bar where I had 3 beers that Max kindly paid for. We then headed in to the city centre so Al Rasheed could drop his luggage off. We then dropped by Restaurace Mincova and had some sausage, pickled cheese and a couple more beers. Al kindly paid for this. We then returned to Max’s house for dinner with his family. Max’s family also put me up for the night.

 

Monday

On the way to the hotel (the Hilton in Prague Old Town) Monday, Max and I stopped by the Macao and Wok Restaurant for lunch. I had a variety of Chinese-style dumplings and 2 beers. I then caught up with the other Aussie Vanguards (and Drew). We stopped at a place called Sklep Na Porici and I had 2 Pilsner Urquell unfiltered beers. At the hotel before dinner Steven Onofaro bought me a beer in the hotel bar.

For dinner we had a welcome reception at T-Anker. It was a rooftop bar / restaurant with stunning views of the city. The staff were a little surprised that we all wanted to eat our meals at the same time, but I eventually managed to get hold of a chicken schnitzel and mashed potatoes. I also had 4 beers. We stopped at a bar called Potrefená Husa (?) on the way back to the hotel. I had another beer that David Kawula paid for. At the hotel I had another beer, paid for by Shane Williford, before heading to bed.

 

Tuesday

I had breakfast at the hotel, consisting of eggs, bacon, chicken sausage, and a flat white. The beauty of the hotel was that it didn’t matter what coffee you ordered, it would invariably be a flat white. Matt Crape gave me a 3D-printed Vanguard thing before the sessions started, and I picked up a Vanguard pin as well.

During the break I had coffee and a chicken, ham, and cheese panini snack. Lunch was in the hotel, and I had beef, fish, pasta, roast vegetables and some water. During the afternoon break I helped myself to some coffee and an apple tatin. Adam Fisher kindly gave me some CDs from his rock and roll days. They were really cool.

For dinner a few of us went to the Restaurant White Horse in the Old Town Square. I had a few beers and the grilled spicy sausage. I then had 2 beers at the hotel before retiring for the night.

 

Wednesday

For breakfast on Wednesday I headed to the hotel buffet and had mushrooms, bacon, scrambled eggs, yoghurt, cheese, ham, and 2 flat whites. During the morning break I helped myself to a bagel with smoked salmon and cream cheese and some coffee. Lunch was in the hotel, and I had basmati rice, chicken, perch, smoked salmon, water, and chocolate cake.

During the afternoon break I had some coffee, a small cheese cake tart, and a tiny tandoori chicken wrap. I had two beers at the hotel bar before we caught a shuttle over to the Staropramen brewery. There I had 5 or 6 beers and a variety of finger food, including a beef tartare with dry egg yolk, capers, onion, mayo and bread chips served in a bone. From there we headed to The Dubliner bar for a few more beers.

 

Thursday

I skipped breakfast on Thursday in favour of some sleep. I had a light lunch at the hotel, consisting of some pasta, rice, and beef. When I got back to my room I found a gift glass from Staropramen Brewery courtesy of Veeam.

For dinner about 10 of us headed to a Mexican restaurant called Agave. I had 3 Coronas, a burrito with prawns, and some guacamole. The food was great, as was the company, but the service was pretty slow.

 

Friday

On Friday I had breakfast at the hotel, consisting of mushrooms, bacon, scrambled eggs, yoghurt, cheese, ham, and 2 flat whites. I then walked around Prague for a few hours, and took a car service to the airport at my expense. Big thanks to Veeam for having me over for the week, and big thanks to everyone who spent time with me at the event (and after hours) – it’s a big part of what makes this stuff fun. And I’m looking forward to sharing some of what I learnt when I’m a little less jet-lagged.

Cohesity – NAS Data Migration Overview

Data Migration

Cohesity NAS Data Migration, part of SmartFiles, was recently announced as a generally available feature within the Cohesity DataPlatform 6.4 release (after being mentioned in the 6.3 release blog post). The idea behind it is that you can use the feature to perform the migration of NAS data from a primary source to the Cohesity DataPlatform. It is supported for NAS storage registered as SMB or NFS (so it doesn’t necessarily need to be a NAS appliance as such; it can also be a file share hosted somewhere).

 

What To Think About

There are a few things to think about when you configure your migration policy, including:

  • The last time the file was accessed;
  • The last time the file was modified; and
  • The size of the file.

You also need to think about how frequently you want to run the job. Finally, it’s worth considering which View you want the archived data to reside on.
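
If you want a feel for what a policy built on those criteria would actually select before you point a migration job at a share, something like the following quick sketch (my own Python with made-up thresholds and paths, not a Cohesity tool) can walk a share and report the candidates:

import os, time

AGE_DAYS = 180               # hypothetical thresholds
MIN_SIZE = 10 * 1024 * 1024  # 10MB

def migration_candidates(share_root):
    # Yield files not accessed or modified within AGE_DAYS that are larger than MIN_SIZE.
    cutoff = time.time() - AGE_DAYS * 86400
    for dirpath, _, filenames in os.walk(share_root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            if st.st_atime < cutoff and st.st_mtime < cutoff and st.st_size > MIN_SIZE:
                yield path, st.st_size

for path, size in migration_candidates(r"\\nas01\projects"):
    print(f"{size / 1048576:.1f} MB  {path}")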

 

What Happens?

When the data is migrated, an SMB2 symbolic link with the same name as the original file is left in its place, and the original data is moved to the Cohesity View. Note that on Windows boxes, remote-to-remote symbolic links are disabled by default, so you need to run these commands:

C:\Windows\system32>fsutil behavior set SymlinkEvaluation R2R:1
C:\Windows\system32>fsutil behavior query SymlinkEvaluation

Once the data is migrated to the Cohesity cluster, subsequent read and write operations are performed on the Cohesity host. You can move data back to the environment by mounting the Cohesity target View on a Windows client, and copying it back to the NAS.
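
Pulling data back is deliberately unexciting: mount the Cohesity target View on a Windows client and copy files back over the stubs. The sketch below (again my own illustration with hypothetical paths, not a supported Cohesity procedure) shows the general shape of it: find the symlink stubs on the NAS and replace each one with the real file from the mounted View.

import os, shutil

NAS_ROOT = r"\\nas01\projects"        # hypothetical UNC paths
VIEW_ROOT = r"\\cohesity01\migrated"  # the Cohesity target View mounted on a Windows client

def copy_back(nas_root, view_root):
    # Replace symlink stubs on the NAS with the original files from the Cohesity View.
    for dirpath, _, filenames in os.walk(nas_root):
        for name in filenames:
            stub = os.path.join(dirpath, name)
            if os.path.islink(stub):
                source = os.path.join(view_root, os.path.relpath(stub, nas_root))
                os.remove(stub)             # drop the stub
                shutil.copy2(source, stub)  # bring the data home

copy_back(NAS_ROOT, VIEW_ROOT)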

 

Configuration Steps

To get started, select File Services, and click on Data Migration.

Click on Migrate Data to configure a migration job.

You’ll need to give it a name.

 

The next step is to select the Source. If you already have a NAS source configured, you’ll see it here. Otherwise you can register a Source.

Click on the arrow to expand the registered NAS mount points.

Select the mount point you’d like to use.

Once you’ve selected the mount point, click on Add.

You then need to select the Storage Domain (formerly known as a ViewBox) to store the archived data on.

You’ll need to provide a name, and configure schedule options.

You can also configure advanced settings, including QoS and exclusions. Once you’re happy, click on Migrate and the job will be created.

You can then run the job immediately, or wait for the schedule to kick in.

 

Other Things To Consider

You’ll need to think about your anti-virus options as well. You can register external anti-virus software or install the anti-virus app from the Cohesity Marketplace.

 

Thoughts And Further Reading

Cohesity have long positioned their secondary storage solution as something more than just a backup and recovery solution. There’s some debate about the difference between storage management and data management, but Cohesity seem to have done a good job of introducing yet another feature that can help users easily move data from their primary storage to their secondary storage environment. Plenty of backup solutions have positioned themselves as archive solutions, but many have been focused on moving protection data, rather than primary data from the source. You’ll need to do some careful planning around sizing your environment, as there’s always a chance that an end user will turn up and start accessing files that you thought were stale. And I can’t say with 100% certainty that this solution will transparently work with every line of business application in your environment. But considering it’s aimed at SMB and NFS shares, it looks like it does what it says on the tin, and moves data from one spot to another.

You can read more about the new features in Cohesity DataPlatform 6.4 (Pegasus) on the Cohesity site, and Blocks & Files covered the feature here. Alastair also shared some thoughts on the feature here.

Random Short Take #24

Want some news? In a shorter format? And a little bit random? This listicle might be for you. Welcome to #24 – The Kobe Edition (not a lot of passing, but still entertaining). 8 articles too. Which one was your favourite Kobe? 8 or 24?

  • I wrote an article about how architecture matters years ago. It’s nothing to do with this one from Preston, but he makes some great points about the importance of architecture when looking to protect your public cloud workloads.
  • Commvault GO 2019 was held recently, and Chin-Fah had some thoughts on where Commvault’s at. You can read all about that here. Speaking of Commvault, Keith had some thoughts as well, and you can check them out here.
  • Still on data protection, Alastair posted this article a little while ago about using the Cohesity API for reporting.
  • Cade just posted a great article on using the right transport mode in Veeam Backup & Replication. Goes to show he’s not just a pretty face.
  • VMware vFORUM is coming up in November. I’ll be making the trip down to Sydney to help out with some VMUG stuff. You can find out more here, and register here.
  • Speaking of VMUG, Angelo put together a great 7-part series on VMUG chapter leadership and tips for running successful meetings. You can read part 7 here.
  • This is a great article on managing Rubrik users from the CLI from Frederic Lhoest.
  • Are you into Splunk? And Pure Storage? Vaughn has you covered with an overview of Splunk SmartStore on Pure Storage here.

Clumio’s DPaaS Is All SaaS

I recently had the chance to speak to Clumio’s Head of Product Marketing, Steve Siegel, about what they do, and thought I’d share a few notes here.

 

Clumio?

Clumio have raised $51M+ in Series A and B funding. They were founded in 2017, built on public cloud technology, and came out of stealth in August.

 

The Approach

Clumio want to be able to deliver a data management platform in the cloud. The first real opportunity they identified was Backup as a Service. The feeling was that there were too many backup models across private cloud, public cloud, and Software as a Service (SaaS), and that none of them were particularly easy to take advantage of in an effective manner. This can be a real problem when you’re looking to protect critical information assets.

 

Proper SaaS

The answer, as far as Clumio were concerned, was to develop an “authentic SaaS” offering. This offering provides all of the features you’d expect from a SaaS-based DPaaS (yes, we’re now officially in acronym hell), including:

  • On-demand scalability
  • Ease of management
  • Predictable costs
  • Global compliance
  • Always-on security – with data encrypted in-flight and at-rest

The platform is mainly built on AWS at this stage, but there are plans in place to leverage the other hyperscalers in the future. Clumio charge per VM, with the subscription fee including support. They have plans to improve capabilities, with:

  • AWS support in Dec 2019
  • O365 support in Q1 2020

They currently support the following public cloud workloads:

  • VMware Cloud on AWS; and
  • AWS – extending backup and recovery to support EBS and EC2 workloads (RDS to follow soon after)

 

Thoughts and Further Reading

If you’re a regular reader of this blog, you’ll notice that I’ve done a bit with data protection technologies over the years. From the big enterprise software shops to the “next-generation” data protection providers, as well as the consumer-side stuff and the “as a Service” crowd. There are a bunch of different ways to do data protection, and some work better than others. Clumio feel strongly that the “[s]implicity of SaaS is winning”, and there’s definitely an argument to be made that the simplicity of the approach is a big reason why the likes of Clumio will receive some significant attention from the marketplace.

That said, the success of services is ultimately determined by a few factors. In my opinion, a big part of what’s important when evaluating these types of services is whether they can technically service the requirements you have. If you’re an HP-UX shop running a bunch of on-premises tin, you might find that this type of service isn’t going to be much use. And if you’re using a cloud-based service but don’t really have decent connectivity to said cloud, you’re going to have a tough time getting your data back when something goes wrong. But that’s not all there is to it. You also need to look at how much it’s going to cost you to consume the service, and think about what it’s going to cost when something goes wrong. It’s all well and good if your daily ingress charges are relatively low with AWS, but if you need to get a bunch of data back out in a hurry, you might find it’s not a great experience, financially speaking. There are a bunch of factors that will impact this though, so you really need to do some modelling before you go down that path.

I’m a big fan of SaaS offerings when they’re done well, and I hope Clumio continue to innovate in the future and expand their support for workloads and infrastructure topologies. They’ve picked up a few customers, and are hiring smart people. You can read more about them over at Blocks & Files, and Ken Nalbone also covered them over at Gestalt IT.

Aparavi Announces File Protect & Insight – Helps With Third Drawer Down

I recently had the opportunity to speak to Victoria Grey (CMO), Darryl Richardson (Chief Product Evangelist), and Jonathan Calmes (VP Business Development) from Aparavi regarding their File Protect and Insight solution. If you’re a regular reader, you may remember I’m quite a fan of Aparavi’s approach and have written about them a few times. I thought I’d share some of my thoughts on the announcement here.

 

FPI?

The title is a little messy, but think of your unstructured data in the same way you might look at the third drawer down in your kitchen. There’s a bunch of stuff in there and no-one knows what it all does, but you know it has some value. Aparavi describe File Protect and Insight (FPI) as “[f]ile by file data protection and archive for servers, endpoints and storage devices featuring data classification, content level search, and hybrid cloud retention and versioning”. It takes the data you’re not necessarily sure about, and makes it useful. Potentially.

It comes with a range of features out of the box, including:

  • Data Awareness
    • Data classification
    • Metadata aggregation
    • Policy driven workflows
  • Global Security
    • Role-based permissions
    • Encryption (in-flight and at rest)
    • File versioning
  • Data Search and Access
    • Anywhere / anytime file access
    • Seamless cloud integration
    • Full-content search

 

How Does It Work?

The solution is fairly simple to deploy. There’s a software appliance installed on-premises (this is known as the aggregator). There’s a web-accessible management console, and you configure your sources to be protected via network access.

[image courtesy of Aparavi]

You get the ability to mount backup data from any point in time, and you can present a network-accessible path so users can get at that data. Regardless of where you end up storing the data, the index stays on-premises, and searches run against the index, not the source. This is a win in terms of both search performance and speed. There’s also a good story to be had in terms of cloud provider compatibility. And if you’re looking to work with an on-premises / generic S3 provider, chances are high that the solution won’t have too many issues with that either.
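
The “index stays on-premises, search the index rather than the source” point is the part I like. Conceptually it isn’t much more than the sketch below (an illustration of the idea in Python with SQLite and made-up paths, not Aparavi’s implementation): aggregate metadata once, then answer queries from the local index instead of grinding through the protected sources.

import os, sqlite3

con = sqlite3.connect("file_index.db")
con.execute("""CREATE TABLE IF NOT EXISTS files
               (path TEXT PRIMARY KEY, size INTEGER, mtime REAL, classification TEXT)""")

def index_source(root, classification="unclassified"):
    # Aggregate metadata from a protected source into the local index.
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            con.execute("INSERT OR REPLACE INTO files VALUES (?, ?, ?, ?)",
                        (path, st.st_size, st.st_mtime, classification))
    con.commit()

def search(term):
    # Searches are answered from the local index; the source systems never get touched.
    return con.execute("SELECT path, size FROM files WHERE path LIKE ?",
                       (f"%{term}%",)).fetchall()

index_source(r"D:\third-drawer-down", classification="unknown-but-valuable")
print(search("contract"))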

 

Thoughts

Data protection is hard to do well at the best of times, and data management is even harder to get right. Enterprises are busy generating terabytes of data and are struggling to a) protect it successfully, and b) make use of that protected data in an intelligent fashion. It seems that it’s no longer enough to have a good story around periodic data protection – most of the vendors have proven themselves capable in this regard. What differentiates companies is the ability to make use of that protected data in new and innovative ways that can increase the value of that data to the business that’s generating it.

Companies like Aparavi are doing a pretty good job of taking the madness that is your third drawer down and providing you with some semblance of order in the chaos. This can be a real advantage in the enterprise, not only for day to day data protection activities, but also for extended retention and compliance challenges, as well as storage optimisation challenges that you may face. You still need to understand what the data is, but something like FPI can help you to declutter what that data is, making it easier to understand.

I also like some of the ransomware detection capabilities being built into the product. It’s relatively rudimentary for the moment, but keeping a close eye on the percentage of changed data is a good indicator of whether or not something is going badly wrong with the data sources you’re trying to protect. And if you find yourself the victim of a ransomware attack, the theory is that Aparavi has been storing a secondary, immutable copy of your data that you can recover from.
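
The change-rate heuristic is simple but effective: if the percentage of files changed between two protection runs suddenly spikes, something (or someone) is probably rewriting your data wholesale. Here’s a rough sketch of the idea (mine, with a made-up threshold, not Aparavi’s detection logic):

def change_rate(previous_hashes, current_hashes):
    # Percentage of previously-seen files whose content hash changed since the last run.
    changed = sum(1 for path, digest in previous_hashes.items()
                  if current_hashes.get(path) not in (None, digest))
    return 100.0 * changed / max(len(previous_hashes), 1)

ALERT_THRESHOLD = 20.0  # hypothetical: a normal day changes a few percent, not a fifth

rate = change_rate(
    {"a.doc": "aaa", "b.xls": "bbb", "c.ppt": "ccc", "d.pdf": "ddd", "e.txt": "eee"},
    {"a.doc": "zzz", "b.xls": "yyy", "c.ppt": "xxx", "d.pdf": "ddd", "e.txt": "eee"},
)
if rate > ALERT_THRESHOLD:
    print(f"{rate:.0f}% of files changed since the last run - possible ransomware activity")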

People want a lot of different things from their data protection solutions, and sometimes it’s easy to expect more than is reasonable from these products without really considering some of the complexity that can arise from that increased level of expectation. That said, it’s not unreasonable that your data protection vendors should be talking to you about data management challenges and deriving extra value from your secondary data. A number of people have a number of ways to do this, and not every way will be right for you. But if you’ve started noticing a data sprawl problem, or you’re looking to get a bit more from your data protection solution, particularly for unstructured data, Aparavi might be of some interest. You can read the announcement here.

Backblaze Announces Version 7.0 – Keep Your Stuff For Longer

Backblaze recently announced Version 7.0 of its cloud backup solution for consumer and business and I thought I’d run through the announcement here.

 

Extended Version History

30 Days? 1 Year? 

One of the key parts of this announcement is support for extended retention of backup data. All Backblaze computer backup accounts have 30-Day Version History included with their backup license. But you can now extend that to 1 year if you like. Note that this will cost an additional $2/month and is charged based on your license type (monthly, yearly, or 2-year). It’s also prorated to align with your existing subscription.

Forever

Want to have a more permanent relationship with your protection data? You can also elect to keep it forever, at the cost of an additional $2/month (aligned to your license plan type) plus $0.005/GB/month for versions modified on your computer more than 1 year ago. There’s a handy FAQ that you can read here. Note that all pricing from Backblaze is in US dollars.
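
To put rough numbers on that (made-up figures, so check the FAQ for your own situation): if you had 200GB of file versions that were last modified on your computer more than a year ago, Forever Version History would add $2 + (200 x $0.005) = $3 per month on top of your normal backup licence.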

[image courtesy of Backblaze]

 

Other Updates

Are you trying to back up really large files (like videos)? You might already know that Backblaze takes large files and chunks them into smaller ones before uploading them to the Internet. Upload performance has now been improved, with the maximum packet size being increased from 30MB to 100MB. This allows the Backblaze app to transmit data more efficiently by better leveraging threading. According to Backblaze, this also “smoothes out upload performance, reduces sensitivity to latency, and leads to smaller data structures”.
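
Chunking large files and uploading the pieces in parallel is a pretty standard trick for making big uploads resilient over imperfect links. The sketch below is generic Python of my own (hypothetical file name and all), not the Backblaze client, but it shows the general pattern the announcement is describing: split a large file into fixed-size parts and push several of them at once.

from concurrent.futures import ThreadPoolExecutor

CHUNK_SIZE = 100 * 1024 * 1024  # the new, larger 100MB maximum packet size
THREADS = 4

def read_chunks(path):
    # Split a large file into fixed-size parts, yielding (index, data) pairs lazily.
    with open(path, "rb") as f:
        index = 0
        while True:
            data = f.read(CHUNK_SIZE)
            if not data:
                break
            yield index, data
            index += 1

def upload_chunk(part):
    index, data = part
    # A real client would POST the part to the backup service and retry on failure;
    # here we just report what we would have sent.
    return f"chunk {index}: {len(data)} bytes uploaded"

with ThreadPoolExecutor(max_workers=THREADS) as pool:
    for result in pool.map(upload_chunk, read_chunks("holiday-videos.mp4")):  # hypothetical file
        print(result)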

Other highlights of this release include:

  • For the aesthetically minded amongst you, the installer now looks better on higher resolution displays;
  • For Windows users, an issue with OpenSSL and Intel’s Apollo Lake chipsets has now been resolved; and
  • For macOS users, support for Catalina is built in. (Note that this is also available with the latest version 6 binary).

Availability?

Version 7.0 will be rolled out to all users over the next few weeks. If you can’t wait, there are two ways to get hold of the new version:

 

Thoughts and Further Reading

It seems weird that I’ve been covering Backblaze as much as I have, given their heritage in the consumer data protection space, and my focus on service providers and enterprise offerings. But Backblaze has done a great job of making data protection accessible and affordable for a lot of people, and they’ve done it in a fairly transparent fashion at the same time. Note also that this release covers both consumers and business users. The addition of extended retention capabilities to their offering, improved performance, and some improved compatibility is good news for Backblaze users. It’s really easy to setup and get started with the application, they support a good variety of configurations, and you’ll sleep better knowing your data is safely protected (particularly if you accidentally fat-finger an important document and need to recover an older version). If you’re thinking about signing up, you can use this affiliate link I have and get yourself a free month (and I’ll get one too).

If you’d like to know more about the features of Version 7.0, there’s a webinar you can jump on with Yev. The webinar will be available on BrightTalk (registration is required) and you can sign up by visiting the Backblaze BrightTALK channel. You can also read more details on the Backblaze blog.