Druva Update – Q3 2020

I caught up with my friend W. Curtis Preston from Druva a little while ago to talk about what the company has been up to. It seems like quite a bit, so I thought I’d share some notes here.

 

DXP and Company Update

First up, Druva’s inaugural conference, DXP, is coming up shortly. There’s an interesting range of topics and speakers, and it looks to be jam-packed with useful info. You can find out more and register for that here. The company seems to be going from strength to strength, enjoying 50% year-on-year growth overall, and 70% for Phoenix (its data centre product) in particular.

If you’re into Gartner Peer Insights – Druva has taken out the top award in 3 categories – file analysis, DRaaS, and data centre backup. Preston also tells me Druva is handling around 5 million backups a day, for what it’s worth. Finally, if you’re into super fluffy customer satisfaction metrics, Druva is reporting an “industry-leading NPS score of 88” that has been third-party verified.

 

Product News

It’s Fun To Read The CCPA

If you’re unfamiliar, California has enacted its own version of the GDPR, known as the California Consumer Privacy Act (CCPA). Druva has created a template for data types that shouldn’t be stored in plain text and can flag them as they’re backed up. It can do the same thing in email, and you can now run a federated search against both. If anything turns up that shouldn’t be there, you can go and remove the problematic files.
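Druva hasn’t published its matching logic, but conceptually this kind of flagging boils down to pattern matching over file contents as they pass through the backup stream. Here’s a deliberately minimal Python sketch of the idea (my own illustration, not Druva’s code; the patterns and paths are made up):

    import re
    from pathlib import Path

    # Illustrative patterns only - real products ship far more robust detectors.
    PATTERNS = {
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),           # US Social Security number
        "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # naive card number match
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    }

    def flag_sensitive(path):
        """Return the names of any sensitive patterns found in a file's plain text."""
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            return []
        return [name for name, rx in PATTERNS.items() if rx.search(text)]

    # Scan a staging area as files are ingested into the backup set.
    for f in Path("/backups/staging").rglob("*.txt"):
        hits = flag_sensitive(f)
        if hits:
            print(f"FLAG {f}: {', '.join(hits)}")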

ServiceNow Automation

Druva now has support for automated ServiceNow (SNOW) ticket creation, and it’s based on some reasonably advanced logic, too. For example, if a backup fails 3 times, a ticket will be created and can be routed to the people who should be caring about such things.
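Druva’s integration is baked into the product, but if you’re curious what’s happening under the hood, creating a ServiceNow incident is just a POST to the Table API. A rough sketch of the fail-three-times logic (the endpoint and field names are standard ServiceNow; the instance, credentials, and counting logic are my own illustration):

    import requests

    SNOW_INSTANCE = "https://example.service-now.com"  # hypothetical instance
    AUTH = ("api_user", "api_password")
    FAILURE_THRESHOLD = 3

    failure_counts = {}

    def record_backup_result(client_name, succeeded):
        """Track consecutive failures; open an incident when the threshold is hit."""
        if succeeded:
            failure_counts[client_name] = 0
            return
        failure_counts[client_name] = failure_counts.get(client_name, 0) + 1
        if failure_counts[client_name] == FAILURE_THRESHOLD:
            create_incident(client_name)

    def create_incident(client_name):
        # ServiceNow Table API: POST /api/now/table/incident
        resp = requests.post(
            f"{SNOW_INSTANCE}/api/now/table/incident",
            auth=AUTH,
            json={
                "short_description": f"Backup failed {FAILURE_THRESHOLD} times for {client_name}",
                "urgency": "2",
                "assignment_group": "Backup Operations",  # the people who should care
            },
            timeout=30,
        )
        resp.raise_for_status()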

More APIs

There’s been a lot of work done to deliver more APIs, along with a more robust RBAC implementation.

DRaaS

DRaaS is currently only for VMware, VMC, and AWS-based workloads. Preston tells me that users are getting an RTO of 15-20 minutes, and an RPO of 1 hour. Druva added failback support a little while ago (one VM at a time). That feature has now been enhanced, and you can fail back as many workloads as you want. You can also add a prefix or suffix to a VM name, and Druva has added a failover prerequisite check as well.

 

Other Notes

In other news, Druva is now certified on VMC on Dell. It’s also added support for both Microsoft Teams and Slack – useful if you’ve stopped storing your critical data in email and started storing it in collaboration apps instead.

Storage Insights and Recommendations

There’s also a storage insights feature that is particularly good for unstructured data. Say, for example, that 30% of your backups are media files; you might not want to back those up (unless you’re in the media streaming business, I guess). You can delete the offending files from backups, and automatically create an exclusion for those file types.
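I don’t know exactly how Druva computes this under the covers, but the underlying insight is simple enough to sketch: bucket a backup set’s contents by file type, work out which types dominate, and suggest exclusions. Something like this (my own toy illustration; the paths and the 30% threshold mirror the example above):

    from collections import Counter
    from pathlib import Path

    MEDIA_EXTENSIONS = {".mp4", ".mov", ".mkv", ".mp3", ".avi"}

    def size_by_extension(root):
        """Total bytes per file extension under a backup root."""
        sizes = Counter()
        for f in root.rglob("*"):
            if f.is_file():
                sizes[f.suffix.lower()] += f.stat().st_size
        return sizes

    sizes = size_by_extension(Path("/backups/fileserver01"))
    total = sum(sizes.values()) or 1
    media_share = sum(sizes[ext] for ext in MEDIA_EXTENSIONS) / total

    if media_share > 0.30:
        suggested = ["*" + ext for ext in sorted(MEDIA_EXTENSIONS) if sizes[ext]]
        print(f"Media files make up {media_share:.0%} of this backup set.")
        print(f"Suggested exclusion patterns: {suggested}")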

Support for K8s

Support for everyone’s favourite container orchestration system has been announced, though not yet released. Read about that here. You’ll be able to do a full backup of an entire K8s environment (AWS only in v1). This includes Docker containers, mounted volumes, and DBs referenced in those containers.
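Druva hasn’t shared implementation details, and the product handles the mechanics for you, but it’s worth appreciating just how much “an entire K8s environment” covers. Using the standard Kubernetes Python client, the inventory a backup tool has to walk looks roughly like this (a sketch of the discovery step only, not Druva’s code):

    from kubernetes import client, config

    config.load_kube_config()  # or load_incluster_config() when running in-cluster

    apps = client.AppsV1Api()
    core = client.CoreV1Api()

    # Workload definitions, which reference the container images in use.
    for d in apps.list_deployment_for_all_namespaces().items:
        images = [c.image for c in d.spec.template.spec.containers]
        print(f"deployment {d.metadata.namespace}/{d.metadata.name}: {images}")

    # Persistent volume claims - the mounted state that actually needs protecting.
    for pvc in core.list_persistent_volume_claim_for_all_namespaces().items:
        size = pvc.spec.resources.requests.get("storage")
        print(f"pvc {pvc.metadata.namespace}/{pvc.metadata.name}: {size}")

    # ConfigMaps (and, similarly, Secrets) round out the picture - DB endpoints
    # and other references to databases tend to live here.
    for cm in core.list_config_map_for_all_namespaces().items:
        print(f"configmap {cm.metadata.namespace}/{cm.metadata.name}")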

NAS Backup

Druva has enhanced its NAS backup in two ways, the first of which is performance. Preston tells me the current product is at least 10X faster than it was a year ago. And for customers already using a native recovery mechanism such as snapshots, Druva has added the option to back up directly to Glacier, which cuts your cost in half.
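Druva manages the tiering for you, but if you’re wondering what “directly to Glacier” means in AWS terms, it’s just a storage class on the object write – there’s no requirement to land the data in S3 Standard first. A minimal sketch with boto3 (the bucket and key names are hypothetical):

    import boto3

    s3 = boto3.client("s3")

    # Write the backup object straight into the Glacier storage class.
    s3.upload_file(
        Filename="/staging/fileserver01-2020-09-30.tar",
        Bucket="example-backup-bucket",
        Key="nas/fileserver01/2020-09-30.tar",
        ExtraArgs={"StorageClass": "GLACIER"},
    )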

Oracle Support

For Oracle, Druva has what Preston describes as “two solid options”. Right now there’s an OVA that provides a ready-to-go, appliance-like experience and uses the image copy format (supporting block-level incrementals and incremental merge). The other option will be announced next week at DXP.

 

Thoughts and Further Reading

Some of these features seem like incremental improvements, but when you put them all together, it makes for some impressive reading. Druva has done a really impressive job, in my opinion, of sticking with the “built in the cloud, for the cloud” mantra that dominates much of its product design. The big news is the support for K8s, but things like multi-VM failback in the DRaaS solution are nothing to sneeze at. There’s more news coming shortly, and I look forward to covering that. In the meantime, if you have the time, be sure to check out DXP – I think it will be quite an informative event.

 

 

Zerto Announces 8.5 and Zerto Data Protection

Zerto recently announced version 8.5 of its product, along with a new offering, Zerto Data Protection (ZDP). I had the good fortune to catch up with Caroline Seymour (VP, Product Marketing) about the news and thought I’d share some thoughts here.

 

ZDP, Yeah You Know Me

Global Pandemic for $200 Please, Alex

In “these uncertain times”, organisations are facing new challenges:

  • No downtime, no data loss, 24/7 availability
  • Influx of remote work
  • Data growth and sprawl
  • Security threats
  • Acceleration of cloud

Many of these things were already a problem, and the global pandemic has done a great job highlighting them.

“Legacy Architecture”

Zerto paints a bleak picture of the “legacy architecture” adopted by many traditional data protection solutions, positing that many IT shops need to use a variety of tools to get to a point where operations staff can sleep better at night. Disaster recovery, for example, is frequently handled via replication for mission-critical applications, with backup being performed via periodic snapshots for all other applications. ZDP aims to bring all this together under one banner of continuous data protection (the journalling idea is sketched after the list below), delivering:

  • Local continuous backup and long-term retention (LTR) to public cloud; and
  • Pricing optimised for backup.
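Zerto’s actual journalling is block-level and a good deal more sophisticated, but the core idea of continuous data protection – capture every write with a timestamp, then restore by replaying the journal to a chosen instant – fits in a few lines of illustrative Python (a toy model, not Zerto’s implementation; real journals replay on top of a base replica rather than an empty device):

    import time
    from dataclasses import dataclass, field

    @dataclass
    class JournalEntry:
        ts: float      # when the write happened
        offset: int    # where on the device it landed
        data: bytes    # what was written

    @dataclass
    class CdpJournal:
        entries: list = field(default_factory=list)

        def record_write(self, offset, data):
            """Every production write is mirrored into the journal as it happens."""
            self.entries.append(JournalEntry(time.time(), offset, data))

        def restore_to(self, point_in_time, device_size):
            """Replay the journal up to the chosen instant and return the image."""
            image = bytearray(device_size)
            for e in self.entries:  # entries are appended in time order
                if e.ts > point_in_time:
                    break  # everything after this is the corruption we're avoiding
                image[e.offset:e.offset + len(e.data)] = e.data
            return image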

[image courtesy of Zerto]

Features

[image courtesy of Zerto]

So what do you get with ZDP? Some neat features, including:

  • Continuous backup with journal
  • Instant restore from local journal
  • Application consistent recovery
  • Short-term SLA policy settings
  • Intelligent index and search
  • LTR to disk, object, or cloud (Azure, AWS)
  • LTR policies – daily incrementals with weekly, monthly, or yearly fulls
  • Data protection workflows

 

New Licensing

It wouldn’t be a new software product without some mention of new licensing. If you want to use ZDP, you get:

  • Backup for short-term retention and LTR;
  • On-premises or backup to cloud;
  • Analytics; and
  • Orchestration and automation for backup functions.

If you’re sticking with (the existing) Zerto Cloud Edition, you get:

  • Everything in ZDP;
  • Disaster Recovery for on-premises and cloud;
  • Multi-cloud support; and
  • Orchestration and automation.

 

Zerto 8.5

A big focus of Zerto’s recently has been VMware on public cloud support, including the various flavours of VMware on Azure, AWS, and Oracle Cloud. There are a bunch of reasons why this approach has proven popular with existing VMware customers looking to migrate from on-premises to public cloud, including:

  • Native VMware support – run existing VMware workloads natively on IaaS;
  • Policies and configuration don’t need to change;
  • Minimal changes – no need to refactor applications; and
  • IaaS benefits – reliability, scale, and operational model.

[image courtesy of Zerto]

New in 8.5

With 8.5, you can now back up directly to Microsoft Azure and AWS. You also get instant file and folder restores to production. There’s now disaster recovery and data protection support for VMware on public cloud, covering Microsoft Azure VMware Solution, Google Cloud VMware Engine, and the Oracle Cloud VMware Solution. You also get platform automation and lifecycle management features, including:

  • Auto-evacuate for recovery hosts;
  • Auto-populate for recovery hosts; and
  • Encryption capabilities.

Finally, a Zerto PowerShell Cmdlets Module has also been released.

 

Thoughts and Further Reading

The writing’s been on the wall for some time that Zerto might need to expand its solution offering to incorporate backup and recovery. Continuous data protection is a great feature and my experience with Zerto has been that it does what it says on the tin. The market, however, is looking for ways to consolidate solution offerings in order to save a few more dollarydoos and keep the finance department happy. I haven’t seen the street pricing for ZDP, but Seymour seemed confident that it stacks up well against the more traditional data protection options on the market, particularly against offerings that handle CDP and periodic data protection with separate tools. There’s a new TCO calculator on the Zerto website, and there’s also the opportunity to talk to a Zerto account representative about your particular needs.

I’ve always treated regular backup and recovery and disaster recovery as very different things, mainly because they are. Companies frequently make the mistake of trying to cobble together some kind of DR solution using traditional backup and recovery tools. I’m interested to see how Zerto goes with this approach. It’s not the first company to converge elements that fit in the data protection space together, and it will be interesting to see how much of the initial uptake of ZDP is with existing customers versus net new logos. The broadening of support for the VMware on X public cloud workloads is good news for enterprises too (putting aside my thoughts on whether or not that’s a great long-term strategy for said enterprises). There’s some interesting stuff happening, and I’m looking forward to seeing how the story unfolds over the next 6 to 12 months.

Datadobi Announces DobiProtect

Datadobi recently announced DobiProtect. I had the opportunity to speak with Michael Jack and Carl D’Halluin about the announcement, and thought I’d share some thoughts here.

 

The Problem

Disaster Recovery

Modern disaster recovery solutions tend more towards business continuity than DR. The challenge with data replication solutions is that it’s trivial to replicate corruption from your primary storage to your DR storage. Backup systems are vulnerable too, and in most instances you need to make some extra effort to ensure you’ve got a replicated catalogue, and that your backup data is not isolated. Invariably, you’ll be looking to restore to like hardware in order to reduce the recovery time. Tape is still a pain to deal with, and you’re also at the mercy of people and processes going wrong.

What Do Customers Need?

To get what you need out of a robust DR system, there are a few criteria that need to be met, including:

  • An easy way to select business-critical data;
  • A simple way to make a golden copy in native format;
  • A bunker site in a DC or cloud;
  • A manual air-gap procedure;
  • A way to restore to anything; and
  • A way to failover if required.

 

Enter DobiProtect

What Does It Do?

The idea is that you have two sites with a manual air-gap between them, usually controlled by a firewall of some type. The first site is where you run your production workload, and there’ll likely be a subset of data that is really quite important to your business. You can use DobiProtect to get that data from your production site to DR (which might even be in a bunker!). DobiProtect scans the data at the production side before it’s pulled across to DR. Note that the data is pulled, not pushed. This is important, as it means that there’s no obvious trace of the bunker’s existence in production.
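That pull detail is worth dwelling on. I don’t know the wire protocol DobiProtect actually uses, but the principle is easy to illustrate: the bunker side initiates every connection, so production holds no credentials for, and no configuration pointing at, the DR site. A sketch of a bunker-side pull over SFTP (hostnames, paths, and keys are all made up):

    import paramiko

    # Runs at the bunker/DR site only. Production has no knowledge of this host:
    # all credentials and connection state live on the pulling side.
    ssh = paramiko.SSHClient()
    ssh.load_system_host_keys()
    ssh.connect("prod-nas.example.internal", username="dobi-reader",
                key_filename="/secure/keys/pull_key")

    sftp = ssh.open_sftp()
    for name in sftp.listdir("/exports/golden-copy"):
        sftp.get(f"/exports/golden-copy/{name}", f"/bunker/golden-copy/{name}")
    sftp.close()
    ssh.close()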

[image courtesy of Datadobi]

If things go bang, you can recover to any NAS or object platform:

  • Browse golden copy
  • Select by directory structure, folder, or object patterns
  • Mounts and shares
  • Specific versions

Bonus Use Case

One of the more popular use cases that Datadobi spoke to me about was heterogeneous edge-to-core protection. Data on the edge is usually more vulnerable, and not every organisation has the funding to put robust protection mechanisms in place at every edge site to protect critical data. With the advent of COVID-19, many organisations have been pushing more data to the edge so that remote workers have better access to it. The challenge then becomes keeping that data protected in a reliable fashion. DobiProtect can be used to pull that data back from the edge to the core. Because it’s a software-only product, your edge storage can be anything that supports object, SMB, or NFS, and the core can be anything else. This provides a lot of flexibility, and avoids much of the expense traditionally associated with DR at edge sites.

[image courtesy of Datadobi]

 

Thoughts and Further Reading

The idea of an air-gapped site in a bunker somewhere is the sort of thing you might associate with a James Bond story. In Australia these aren’t exactly a common thing (bunkers, not James Bond stories), but Europe and the US are riddled with them. As Jack pointed out in our call, “[t]he first rule of bunker club – you don’t talk about the bunker”. Datadobi couldn’t give me a list of customers using this type of solution, because none of those customers want people to know they’re doing things this way. It seems a bit like security via obscurity, but there’s no point painting a big target on your back or handing out clues for would-be crackers to get into your environment and wreak havoc.

The idea that your RPO is a day, rather than minutes, is also confronting for some folks. But the idea of this solution is that you’ll use it for your absolutely mission-critical, can’t-live-without-it data, not necessarily the virtual machines you may be able to recover normally if you’re attacked or the magic black smoke escapes from one of your hosts. If you’ve gone to the trouble of looking into acquiring some rack space in a bunker, limited the people in the know to a handful, and can be bothered messing about with a manual air-gap process, the data you’re looking to protect is clearly pretty important.

Datadobi has a rich heritage in data migration for both file and object storage systems. It makes sense that customer demand would eventually drive it down this route – delivering a migration tool that ostensibly runs all the time as a sort of data protection tool. This isn’t designed to protect everything in your environment, but for the stuff that will ruin your business if it goes away, it’s very likely worth the effort and expense. There are some folks out there actively looking for ways to put you over a barrel, so it’s important to think about what it’s worth to your organisation to avoid that if possible.

BackupAssist Announces BackupAssist ER

BackupAssist recently announced BackupAssist ER. I had the opportunity to speak with Linus Chang (CEO), Craig Ryan, and Madeleine Tan about the announcement.

 

BackupAssist

Founded in 2001, BackupAssist is focussed primarily on the small to medium enterprise (under 500 seats). It sells the product via a variety of mechanisms, including:

  • Direct
  • Partners
  • Distribution channels

 

Challenges Are Everywhere

Some of the challenges faced by the average SME when it comes to data protection include the following:

  • Malware
  • COVID-19
  • Compliance

So what does the average SME need when it comes to selecting a data protection solution?

  • Make it affordable
  • Automatic offsite backups with history and retention
  • Most recoveries are local – make them fast!
  • The option to recover in the cloud if needed (the fallback to the fallback)

 

What Is It?

So what exactly is BackupAssist ER? It’s backup and recovery software.

[image courtesy of BackupAssist]

It’s deployed on Windows servers, and supports disk-to-disk-to-cloud as a protection topology.

CryptoSafeGuard

Another cool feature is CryptoSafeGuard, which provides the following:

  • Shield from unauthorised access
  • Detect – Alert – Preserve

Disaster Recovery

  • VM Instant boot (converting into a Hyper-V guest)
  • BMR (catering for dissimilar hardware)
  • Download cloud backup anywhere

Data Recovery

The product supports the granular recovery of files, Exchange, and applications.

Data Handling and Control

A key feature of the solution is the approach to data handling, offering:

  • Accessibility
  • Portability
  • Retention

It uses the VHDX file format to store protection data, and can also back up to Azure Blob storage. Chang also advised that they’re working on introducing S3 compatibility at some stage.
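If you’re curious what backing up to Blob storage looks like mechanically, the Azure SDK makes it a short exercise. A minimal sketch, assuming a connection string, container, and paths of my own invention (an illustration, not BackupAssist’s code):

    from azure.storage.blob import BlobServiceClient

    service = BlobServiceClient.from_connection_string(
        "DefaultEndpointsProtocol=https;AccountName=examplebackups;"
        "AccountKey=...;EndpointSuffix=core.windows.net"
    )
    blob = service.get_blob_client(container="server-backups",
                                   blob="fileserver01/2020-09-30.vhdx")

    # Stream the local VHDX into Blob storage, overwriting any same-named copy.
    with open(r"D:\Backups\fileserver01\2020-09-30.vhdx", "rb") as f:
        blob.upload_blob(f, overwrite=True)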

Retention

The product supports a couple of different retention schemes, including:

  • Local – Keep N copies (GFS is coming; the keep-N idea is sketched below)
  • Cloud – Keep X copies
  • Archival – Keep a backup on an HDD, and retain it for years
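As flagged above, keep-N is the simplest retention model there is: sort by age and drop everything past N. A toy sketch (mine, not BackupAssist’s code):

    from pathlib import Path

    def prune_keep_n(backup_dir, keep):
        """Delete all but the newest `keep` backup files; return what was removed."""
        backups = sorted(backup_dir.glob("*.vhdx"),
                         key=lambda p: p.stat().st_mtime,
                         reverse=True)  # newest first
        stale = backups[keep:]
        for p in stale:
            p.unlink()
        return stale

    removed = prune_keep_n(Path(r"D:\Backups\fileserver01"), keep=14)
    print(f"Pruned {len(removed)} old backups")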

Pricing

BackupAssist ER is licensed in a variety of ways. Costs are as follows:

  • Per physical machine – $399 US annually;
  • Per virtual guest machine – $199 US annually; and
  • Per virtual host machine – $699 US annually.

There are discounts available for multi-year subscriptions, as well as discounts to be had if you’re looking to purchase licensing for more than 5 machines.

 

Thoughts and Further Reading

Chang noted that BackupAssist is “[n]ot trying to be the best, but the best fit”. You’ll see that a lot of the capability is Microsoft-centric, with support for Windows and Hyper-V. This makes sense when you look at what the SME market is doing in terms of leveraging Microsoft platforms to deliver their IT requirements. Building a protection product that covers every platform is time-consuming and expensive in terms of engineering effort. What Chang and the team have focussed on is delivering data protection to customers at a particular price point, with the right amount of technology.

The SME market is notorious for wanting to consume quality product at a particular price point. Every interaction I’ve had with customers in the SME segment has given me a crystal clear understanding of “Champagne tastes on a beer budget”. But in much the same way that some big enterprise shops will never stop doing things at a glacial pace, so too will many SME shops continue to look for high value at a low cost. Ultimately, compromises need to be made to meet that price point, hence the lack of support for features such as VMware. That doesn’t mean that BackupAssist can’t meet your requirements, particularly if you’re running your business’s IT on a couple of Windows machines. For this it’s well suited, and the flexibility on offer in terms of disk targets, retention, and recovery should be motivation to investigate further. It’s a bit of a nasty world out there, so anything you can do to ensure your business data is a little safer should be worthy of further consideration. You can read the press release here.

Pure Storage and Cohesity Announce Strategic Partnership and Pure FlashRecover

Pure Storage and Cohesity announced a strategic partnership and a new joint solution today. I had the opportunity to speak with Amy Fowler and Biswajit Mishra from Pure Storage, along with Anand Nadathur and Chris Wiborg from Cohesity, and thought I’d share my notes here.

 

Friends In The Market

The announcement comes in two parts, with the first being that Pure Storage and Cohesity are forming a strategic partnership. The idea behind this is that, together, the companies will deliver “industry-leading storage innovations from Pure Storage with modern, flash-optimised backup from Cohesity”. There are plenty of things in common between the companies, including the fact that they’re both, as Wiborg puts it, “keenly focused on doing the right thing for the customer”.

 

Pure FlashRecover Powered By Cohesity

Partnerships are exciting and all, but what was of more interest was the Pure FlashRecover announcement. What is it exactly? It’s basically Cohesity DataProtect running on Cohesity-certified compute nodes (the whitebox gear you might be familiar with if you’ve bought Cohesity tin previously), using Pure’s FlashBlades as the storage backend.

[image courtesy of Pure Storage]

FlashRecover has a targeted general availability of Q4 CY2020 (October). It will be released in the US initially, with other regions to follow. From a go-to-market perspective, Pure will handle level 1 and level 2 support, with Cohesity support being engaged for escalations. Cohesity DataProtect will be added to the Pure price list, and Pure becomes a Cohesity Technology Partner.

 

Thoughts

My first thought when I heard about this was “why would you?”. I’ve traditionally associated scalable data protection and secondary storage with slower, high-capacity appliances. But as we talked through the use cases, it started to make sense. FlashBlades by themselves aren’t super high capacity devices, but neither are the individual nodes in Cohesity appliances. String a few together and you have enough capacity to do data protection and fast recovery in a predictable fashion. FlashBlade supports 75 nodes (I think) [Edit: it scales up to 150x 52TB nodes. Thanks for the clarification from Andrew Miller] and up to 1PB of data in a single namespace. Throw in some of the capabilities that Cohesity DataProtect brings to the table and you’ve got an interesting solution. The knock on some of the next-generation data protection solutions has been that recovery can still be quite time-consuming. The use of all-flash takes away a lot of that pain, especially when coupled with a solution like FlashBlade that delivers some pretty decent parallelism in terms of getting data recovered back to production quickly.

An evolving use case for protection data is data reuse. For years, application owners have been stuck with fairly clunky ways of getting test data into environments for application development and testing. Solutions like FlashRecover provide a compelling story around protection data being made available for reuse, not just recovery. Another cool thing is that when you invest in FlashBlade, you’re not locking yourself into a particular silo; you can use the FlashBlade solution for other things too.

I don’t work with Pure Storage and Cohesity on a daily basis anymore, but in my previous role I had the opportunity to kick the tyres extensively with both the Cohesity DataProtect solution and the Pure Storage FlashBlade. I’m an advocate of both companies because of the great support I received, from pre-sales through to post-sales. They are relentlessly customer focused, and that really translates in both the technology and the field experience. I can’t speak highly enough of the engagement I’ve had with both, as a blogger and as an end user.

FlashRecover isn’t going to be appropriate for every organisation. Most places, at the moment, can probably still get away with taking a little time to recover large amounts of data if required. But for industries where time is money, solutions like FlashRecover can absolutely make sense. If you’d like to know more, there’s a comprehensive blog post over at the Pure Storage website, and the solution brief can be found here.

Random Short Take #40

Welcome to Random Short Take #40. Quite a few players have worn 40 in the NBA, including the flat-top king Frank Brickowski. But my favourite player to wear number 40 was the Reign Man – Shawn Kemp. So let’s get random.

  • Dell EMC PowerProtect Data Manager 19.5 was released in early July and Preston covered it pretty comprehensively here.
  • Speaking of data protection software releases and enhancements, we’ve barely recovered from the excitement of Veeam v10 being released and Anthony is already talking about v11. More on that here.
  • Speaking of Veeam, Rhys posted a very detailed article on setting up a Veeam backup repository on NFS using a Pure Storage FlashBlade environment.
  • Sticking with the data protection theme, I penned a piece over at Gestalt IT for Druva talking about OneDrive protection and why it’s important.
  • OpenDrives has some new gear available – you can read more about that here.
  • The nice folks at Spectro Cloud recently announced that the company’s first product is generally available. You can read the press release here.
  • William Lam put out a great article on passing through the integrated GPU on Apple Mac minis with ESXi 7.
  • Time passes on, and Christian recently celebrated 10 years on his blog, which I think is a worthy achievement.

Happy Friday!

Backup Awareness Month, Backblaze, And A Simple Question

Last month was Backup Awareness Month (at least according to Backblaze). It’s not formally recognised by any government entities, and it’s more something that was made up by Backblaze. But I’m a big fan of backup awareness, so I endorse making up stuff like this. I had a chance to chat to Yev over at Backblaze about the results of a survey Backblaze runs annually and thought I’d share my thoughts here. Yes, I know I’m a bit behind, but I’ve been busy.

As I mentioned previously, as part of the Backup Awareness Month celebrations, Backblaze reaches out to folks in the US and asks a basic question: “How often do you backup all the data on your computer?”. The survey has revealed some interesting facts about consumer backup habits. There has been a welcome decrease in the number of people stating that they have never backed up their data (down to around one fifth of respondents), and the frequency with which people back up has increased.

Other takeaways from the results include:

  • Almost 50% of people lose their data each year;
  • 41% of people do not completely understand the difference between cloud backup and cloud storage;
  • Millennials are the generation most likely to back up their data daily; and
  • Seniors (65+) have gone from being the best age group at backing up data to the worst.

 

Thoughts

I bang on a lot about how important backup (and recovery) is across both the consumer and enterprise space. Surveys like this are interesting because they highlight, I think, the importance of regularly backing up our data. We’re making more and more of it, and it’s not magically protected by the benevolent cloud fairies, so it’s up to us to protect it. Particularly if it’s important to us. It’s scary to think that one in two people are losing data on a regular basis, and scarier still that most folks don’t understand the distinction between cloud storage and cloud backup. I was surprised that Millennials are the most likely to back up their data, but my experience with younger generations really only extends to my children, so they’re maybe not the best indicator of what the average consumer is doing. It’s also troubling that older folk are struggling to keep on top of backups. Anecdotally that lines up with my experience as well. So I think it’s great that Yev and the team at Backblaze have been on something of a crusade to educate people about cloud backup and how it can help them. I love that the company is all about making it easier for consumers, not harder.

As an industry we need to be better at making things simple for people to consume, and more transparent in terms of what can be achieved with technology. I know this blog isn’t really focused on consumer technology, and it might seem a bit silly that I carry on a bit about consumer backup. But you all have data stored some place or another that means something to you. And I know not all of you are protecting it appropriately. Backup is like insurance. It’s boring. People don’t like paying for it. But when something goes bang, you’ll be glad you have it. If these kinds of posts can raise some awareness, and get one more person to protect the data that means something to them in an effective fashion, then I’ll be happy with that.

Random Short Take #38

Welcome to Random Short Take #38. Not a huge number of players have worn 38 in the NBA, and I’m not going to pretend I was ever a Kwame Brown fan. Although it did seem like he had a tough time of it. Anyway, let’s get random.

  • Ransomware is the new hotness. Or, rather, protecting storage systems from ransomware is the new hotness. My man Chin-Fah had a writeup on that here. It’s not a matter of if, but rather when you’ll run into a problem. It’s been interesting to see the various approaches being taken by the storage vendors and the data protection companies.
  • Applications for the vExpert program intake for the second half of 2020 are open, but closing soon. It’s a fantastic program to be a part of, so if you think you’ve got the goods, you can apply here. I also recommend this article from Christopher on his experiences.
  • This was a great article from Alastair on some of the differences between networking with AWS and VMC on AWS. As someone who works for a VMware Cloud Provider, I can confirm that NSX (T or V, I don’t care) has a whole slew of capabilities and a whole slew of integration challenges.
  • Are you Zoomed out? I am. Even when you think the problem can’t be the network, it might just be the network (I hope my friends in networking appreciate that it’s not always the storage). John Nicholson posted a typically comprehensive overview of how your bandwidth might be one of the things keeping you from demonstrating excellent radio voice on those seemingly endless meetings you’re doing at the moment. It could also be that you’re using crap audio devices too, but I think John’s going to cover that in the future.
  • Scale Computing has a good story to tell about what it’s been doing with a large school district in the U.S. Read more about that here.
  • This is one of those promotions aimed at my friends in Northern America more than folks based where I am, but I’m always happy to talk about deals on data protection. StorCentric has launched its “Retrospect Dads & Grads Promotion” offering a free 90-Day subscription license for every Retrospect Backup product. You can read more about that here.
  • Pure//Accelerate Online was this week, and Max did a nice write-up on Pure Storage File Services over at Gestalt IT.
  • Rancher Labs recently announced the general availability of Longhorn (a cloud-native container storage solution). I’m looking forward to digging in to this a bit more over the next little while.

 

 

Random Short Take #36

Welcome to Random Short Take #36. Not a huge number of players have worn 36 in the NBA, but Shaq did (at the end of his career), and Marcus Smart does. This one, though, goes out to one of my favourite players from the modern era, Rasheed Wallace. It seems like Boston is the common thread here. Might have something to do with those hall of fame players wearing numbers in the low 30s. Or it might be entirely unrelated.

  • Scale Computing recently announced its all-NVMe HC3250DF as a new appliance targeting core data centre and edge computing use cases. It offers higher performance storage, networking and processing. You can read the press release here.
  • Dell EMC PowerStore has been announced. Chris Mellor covered the announcement here. I haven’t had time to dig into this yet, but I’m keen to learn more. Chris Evans also wrote about it here.
  • Rubrik Andes 5.2 was recently announced. You can read a wrap-up from Mellor here.
  • StorCentric’s Nexsan recently announced the E-Series 32F Storage Platform. You can read the press release here.
  • In what can only be considered excellent news, Preston de Guise has announced the availability of the second edition of his book, “Data Protection: Ensuring Data Availability”. It will be available in a variety of formats, with the ebook format already being out. I bought the first edition a few times to give as a gift, and I’m looking forward to giving away a few copies of this one too.
  • Backblaze B2 has been huge for the company, and Backblaze B2 with S3-compatible API access is even huger. Read more about that here. Speaking of Backblaze, it just released its hard drive stats for Q1, 2020. You can read more on that here.
  • Hal recently upgraded his NUC-based home lab to vSphere 7. You can read more about the process here.
  • Jon recently posted an article on a new upgrade command available in OneFS. If you’re into Isilon, you might just be into this.

VeeamON 2020 Is Online

VeeamON 2020 would have happened by now in a normal year, but these are crazy times, and like most vendors, Veeam has chosen to move the event online rather than run the gauntlet of having a whole bunch of folks in one place and risk the rapid spread of COVID-19. The new dates for the event are June 17 – 18. You can find more information about VeeamON 2020 here and register for the event here.

The agenda is jam-packed with a range of interesting topics around data protection, spread across several tracks, including Architecture and Design, Implementation Best Practices, and Operations and Support. It’s not just marketing fluff either; there’s plenty there for technical folk to sink their teeth into.

 

Thoughts

Six months ago I thought I’d be heading to Vegas for this event. But a lot can change in a short period of time, and a lot has changed. The broader topic of online conferences versus in-person events is an interesting one, and not something I can do justice to here. This isn’t something that Veeam necessarily wanted to do, but it makes sense not to put a whole mess of people in the same space. What I’m interested to see is whether the tech vendors, including Veeam, will notice that not running large-scale in-person events actually saves a bunch of money, and look to do more of these once things have gone back to whatever passes for normal in the future. Or whether, as a few people have commented, the events don’t get as much engagement because people aren’t present and can’t commit the time. As much as I’ve come to hate the frequent flights to the U.S.A. to attend tech conferences, being there does make it easier to be present in terms of time zones and distractions. If I’m watching events in Pacific Time from my home, it’s usually the middle of the night by the time the keynote starts. And I have the day job to consider as well.

That said, I think it’s fantastic that companies like Veeam have been able to adjust their approach to what was a fairly traditional model of customer and partner engagement. Sure, we won’t be able to get together for a meal in person, but we’ll still have the opportunity to hear about what Veeam’s been up to, and find out a little more about what’s coming next. Ultimately, that’s what these kinds of events are about.