Aparavi Announces File Protect & Insight – Helps With Third Drawer Down

I recently had the opportunity to speak to Victoria Grey (CMO), Darryl Richardson (Chief Product Evangelist), and Jonathan Calmes (VP Business Development) from Aparavi regarding their File Protect and Insight solution. If you’re a regular reader, you may remember I’m quite a fan of Aparavi’s approach and have written about them a few times. I thought I’d share some of my thoughts on the announcement here.

 

FPI?

The title is a little messy, but think of your unstructured data in the same way you might look at the third drawer down in your kitchen. There’s a bunch of stuff in there and no-one knows what it all does, but you know it has some value. Aparavi describe File Protect and Insight (FPI) as “[f]ile by file data protection and archive for servers, endpoints and storage devices featuring data classification, content level search, and hybrid cloud retention and versioning”. It takes the data you’re not necessarily sure about, and makes it useful. Potentially.

It comes with a range of features out of the box, including:

  • Data Awareness
    • Data classification
    • Metadata aggregation
    • Policy driven workflows
  • Global Security
    • Role-based permissions
    • Encryption (in-flight and at rest)
    • File versioning
  • Data Search and Access
    • Anywhere / anytime file access
    • Seamless cloud integration
    • Full-content search

 

How Does It Work?

The solution is fairly simple to deploy. There’s a software appliance installed on-premises (this is known as the aggregator). There’s a web-accessible management console, and you configure your sources to be protected via network access.

[image courtesy of Aparavi]

You get the ability to mount backup data from any point in time, and you can provide a path that can be shared via the network for users to access that data. Regardless of where you end up storing the data, the index stays on-premises, and you search against the index, not the source. This keeps searches fast and avoids putting load on the source systems. There’s also a good story to be had in terms of cloud provider compatibility. And if you’re looking to work with an on-premises / generic S3 provider, chances are high that the solution won’t have too many issues with that either.
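To make the “search the index, not the source” idea concrete, here’s a toy sketch in Python. All the names are my own invention (nothing here reflects Aparavi’s actual implementation); the point is simply that queries only ever touch a local index structure, never the backing storage.

```python
from collections import defaultdict

class LocalIndex:
    """Toy on-premises index: searches hit the index, never the backing store."""
    def __init__(self):
        self._terms = defaultdict(set)  # term -> set of file paths

    def ingest(self, path, content):
        # Index once at protection time; the content itself can live anywhere
        # (on-premises, public cloud, generic S3), it's never read at search time.
        for term in content.lower().split():
            self._terms[term].add(path)

    def search(self, term):
        # Pure local lookup: no round-trip to the storage backend.
        return sorted(self._terms[term.lower()])

idx = LocalIndex()
idx.ingest("/share/finance/q3.txt", "quarterly revenue report")
idx.ingest("/share/hr/policy.txt", "leave policy report")
print(idx.search("report"))  # ['/share/finance/q3.txt', '/share/hr/policy.txt']
```

However the real product does it, the shape of the win is the same: the expensive full-content scan happens once at ingest, and every subsequent search is a cheap local lookup.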

 

Thoughts

Data protection is hard to do well at the best of times, and data management is even harder to get right. Enterprises are busy generating terabytes of data and are struggling to a) protect it successfully, and b) make use of that protected data in an intelligent fashion. It seems that it’s no longer enough to have a good story around periodic data protection – most of the vendors have proven themselves capable in this regard. What differentiates companies is the ability to make use of that protected data in new and innovative ways that can increase the value of that data to the business that’s generating it.

Companies like Aparavi are doing a pretty good job of taking the madness that is your third drawer down and providing you with some semblance of order in the chaos. This can be a real advantage in the enterprise, not only for day to day data protection activities, but also for extended retention and compliance challenges, as well as storage optimisation challenges that you may face. You still need to understand what the data is, but something like FPI can help you declutter that data, making it easier to understand.

I also like some of the ransomware detection capabilities being built into the product. It’s relatively rudimentary for the moment, but keeping a close eye on the percentage of changed data is a good indicator of whether or not something is going badly wrong with the data sources you’re trying to protect. And if you find yourself the victim of a ransomware attack, the theory is that Aparavi has been storing a secondary, immutable copy of your data that you can recover from.
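As a rough illustration of that change-rate heuristic (my own sketch, not Aparavi’s detection logic), you could compare content hashes between consecutive backup runs and flag a suspicious spike. Ransomware tends to rewrite most of a data set in one hit, which looks very different from normal daily churn.

```python
def change_ratio(previous, current):
    """Fraction of files whose content hash changed between two backup runs."""
    changed = sum(1 for path, digest in current.items()
                  if previous.get(path) != digest)
    return changed / max(len(current), 1)

def looks_like_ransomware(previous, current, threshold=0.5):
    # A sudden spike in changed data is a crude but useful warning sign;
    # the threshold here is arbitrary and would need tuning in practice.
    return change_ratio(previous, current) > threshold

yesterday = {"a.doc": "h1", "b.xls": "h2", "c.ppt": "h3", "d.txt": "h4"}
today_bad = {"a.doc": "x1", "b.xls": "x2", "c.ppt": "x3", "d.txt": "h4"}
print(change_ratio(yesterday, today_bad))           # 0.75
print(looks_like_ransomware(yesterday, today_bad))  # True
```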

People want a lot of different things from their data protection solutions, and sometimes it’s easy to expect more than is reasonable from these products without really considering some of the complexity that can arise from that increased level of expectation. That said, it’s not unreasonable that your data protection vendors should be talking to you about data management challenges and deriving extra value from your secondary data. A number of people have a number of ways to do this, and not every way will be right for you. But if you’ve started noticing a data sprawl problem, or you’re looking to get a bit more from your data protection solution, particularly for unstructured data, Aparavi might be of some interest. You can read the announcement here.

Backblaze Announces Version 7.0 – Keep Your Stuff For Longer

Backblaze recently announced Version 7.0 of its cloud backup solution for consumers and businesses, and I thought I’d run through the announcement here.

 

Extended Version History

30 Days? 1 Year? 

One of the key parts of this announcement is support for extended retention of backup data. All Backblaze computer backup accounts have 30-Day Version History included with their backup license. But you can now extend that to 1 year if you like. Note that this will cost an additional $2/month and is charged based on your license type (monthly, yearly, or 2-year). It’s also prorated to align with your existing subscription.

Forever

Want to have a more permanent relationship with your protection data? You can also elect to keep it forever, at the cost of an additional $2/month (aligned to your license plan type) plus $0.005/GB/Month for versions modified on your computer more than 1 year ago. There’s a handy FAQ that you can read here. Note that all pricing from Backblaze is in US dollars.
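Putting those numbers together, here’s a quick back-of-the-envelope calculator based on the pricing above. The $6/month base license figure in the example is hypothetical, purely to show the arithmetic.

```python
def forever_version_history_cost(base_license_monthly, old_version_gb):
    """Rough monthly cost (USD) of Backblaze 'forever' version history.

    Per the announcement: the base license plus $2/month, plus
    $0.005/GB/month for versions modified more than a year ago.
    """
    EXTENDED_FEE = 2.00       # USD per month for the forever option
    OLD_VERSION_RATE = 0.005  # USD per GB per month for year-old versions
    return base_license_monthly + EXTENDED_FEE + old_version_gb * OLD_VERSION_RATE

# e.g. a hypothetical $6/month license with 200GB of year-old versions:
print(forever_version_history_cost(6.00, 200))  # 9.0
```

So the storage surcharge only really bites once you’re retaining a lot of old versions; for modest data sets it’s dominated by the flat $2/month fee.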

[image courtesy of Backblaze]

 

Other Updates

Are you trying to back up really large files (like videos)? You might already know that Backblaze takes large files and chunks them into smaller ones before uploading them to the Internet. Upload performance has now been improved, with the maximum packet size being increased from 30MB to 100MB. This allows the Backblaze app to transmit data more efficiently by better leveraging threading. According to Backblaze, this also “smoothes out upload performance, reduces sensitivity to latency, and leads to smaller data structures”.
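The chunking idea itself is straightforward. Here’s a simplified sketch (my own, not Backblaze’s actual client code) of splitting a file into fixed-size chunks and pushing them through a thread pool; the demo uses tiny 10-byte chunks in place of the real 100MB ones so it runs instantly.

```python
from concurrent.futures import ThreadPoolExecutor

MAX_CHUNK = 100 * 1024 * 1024  # the new 100MB ceiling, up from 30MB

def chunks(data, size):
    """Split a large file's bytes into numbered fixed-size parts."""
    for offset in range(0, len(data), size):
        yield offset // size, data[offset:offset + size]

def upload_chunk(part):
    index, payload = part  # stand-in for one HTTP upload of a chunk
    return index, len(payload)

def upload_file(data, size=MAX_CHUNK, workers=4):
    # Fewer, larger chunks per file means fewer requests; the thread pool
    # keeps several uploads in flight at once to hide latency.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sorted(pool.map(upload_chunk, chunks(data, size)))

# demo with a tiny "file" and 10-byte chunks instead of 100MB ones
print(upload_file(b"x" * 25, size=10))  # [(0, 10), (1, 10), (2, 5)]
```

Bigger chunks amortise per-request overhead, which is consistent with Backblaze’s claim that the change reduces sensitivity to latency.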

Other highlights of this release include:

  • For the aesthetically minded amongst you, the installer now looks better on higher resolution displays;
  • For Windows users, an issue with OpenSSL and Intel’s Apollo Lake chipsets has now been resolved; and
  • For macOS users, support for Catalina is built in. (Note that this is also available with the latest version 6 binary).

Availability?

Version 7.0 will be rolled out to all users over the next few weeks. If you can’t wait, there are two ways to get hold of the new version:

 

Thoughts and Further Reading

It seems weird that I’ve been covering Backblaze as much as I have, given their heritage in the consumer data protection space, and my focus on service providers and enterprise offerings. But Backblaze has done a great job of making data protection accessible and affordable for a lot of people, and they’ve done it in a fairly transparent fashion at the same time. Note also that this release covers both consumers and business users. The addition of extended retention capabilities to their offering, improved performance, and some improved compatibility is good news for Backblaze users. It’s really easy to set up and get started with the application, they support a good variety of configurations, and you’ll sleep better knowing your data is safely protected (particularly if you accidentally fat-finger an important document and need to recover an older version). If you’re thinking about signing up, you can use this affiliate link I have and get yourself a free month (and I’ll get one too).

If you’d like to know more about the features of Version 7.0, there’s a webinar you can jump on with Yev. The webinar will be available on BrightTalk (registration is required) and you can sign up by visiting the Backblaze BrightTALK channel. You can also read more details on the Backblaze blog.

Random Short Take #23

Want some news? In a shorter format? And a little bit random? This listicle might be for you.

  • Remember Retrospect? They were acquired by StorCentric recently. I hadn’t thought about them in some time, but they’re still around, and celebrating their 30th anniversary. Read a little more about the history of the brand here.
  • Sometimes size does matter. This article around deduplication and block / segment size from Preston was particularly enlightening.
  • This article from Russ had some great insights into why it’s not wise to entirely rule out doing things the way service providers do just because you’re working in enterprise. I’ve had experience in both SPs and enterprise and I agree that there are things that can be learnt on both sides.
  • This is a great article from Chris Evans about the difficulties associated with managing legacy backup infrastructure.
  • The Pure Storage VM Analytics Collector is now available as an OVA.
  • If you’re thinking of updating your Mac’s operating environment, this is a fairly comprehensive review of what macOS Catalina has to offer, along with some caveats.
  • Anthony has been doing a bunch of cool stuff with Terraform recently, including using variable maps to deploy vSphere VMs. You can read more about that here.
  • Speaking of people who work at Veeam, Hal has put together a great article on orchestrating Veeam recovery activities to Azure.
  • Finally, the Brisbane VMUG meeting originally planned for Tuesday 8th has been moved to the 15th. Details here.

Backblaze Has A (Pod) Birthday, Does Some Cool Stuff With B2

Backblaze has been on my mind a lot lately. And not just because of their recent expansion into Europe. The Storage Pod recently turned ten years old, and I was lucky enough to have the chance to chat with Yev Pusin and Andy Klein about that news and some of the stuff they’re doing with B2, Tiger Technology, and Veeam.

 

10 Years Is A Long Time

The Backblaze Storage Pod (currently version 6) recently turned 10 years old. That’s a long time for something to be around (and successful) in a market like cloud storage. I asked Yev and Andy about where they saw the pod heading, and whether they thought there was room for Flash in the picture. Andy pointed out that, with around 900PB under management, Flash still didn’t look like the most economical medium for this kind of storage task. That said, they have seen the main HDD manufacturers starting to hit a wall in terms of the capacity per drive that they can deliver. Nonetheless, the challenge isn’t just performance, it’s also the fact that people are needing more and more capacity to store their stuff. And it doesn’t look like they can produce enough Flash to cope with that increase in requirements at this stage.

Version 7.0

We spoke briefly about what Pod 7.0 would look like, and it’s going to be a “little bit faster”, with the following enhancements planned:

  • Updating the motherboard
  • Upgrading the CPU, possibly to an AMD CPU
  • Updating the power supply units, perhaps moving to one unit
  • Upgrading from 10Gbase-T to 10GbE SFP+ optical networking
  • Upgrading the SATA cards
  • Modifying the tool-less lid design

They’re looking to roll this out in 2020 some time.

 

Tiger Style?

So what’s all this about Veeam, Tiger Bridge, and Backblaze B2? Historically, if you’ve been using Veeam from the cheap seats, it’s been difficult to effectively leverage object storage to use as a repository for longer term data storage. Backblaze and Tiger Technology have gotten together to develop an integration that allows you to use B2 storage to copy your Veeam protection data to the Backblaze cloud. There’s a nice overview of the solution that you can read here, and you can read some more comprehensive instructions here.

 

Thoughts and Further Reading

I keep banging on about it, but ten years feels like a long time to be hanging around in tech. I haven’t managed to stay with one employer longer than 7 years (maybe I’m flighty?). Along with the durability of the solution, the fact that Backblaze made the design open source, and inspired a bunch of companies to do something similar, is a great story. It’s stuff like this that I find inspiring. It’s not always about selling black boxes to people. Sometimes it’s good to be a little transparent about what you’re doing, and relying on a great product, competitive pricing, and strong support to keep customers happy. Backblaze have certainly done that on the consumer side of things, and the team assures me that they’re experiencing success with the B2 offering and their business-oriented data protection solution as well.

The Veeam integration is an interesting one. While B2 is an object storage play, it’s not S3-compliant, so they can’t easily leverage a lot of the built-in options delivered by the bigger data protection vendors. What you will see, though, is that they’re super responsive when it comes to making integrations available across things like NAS devices, and stuff like this. If I get some time in the next month, I’ll look at setting this up in the lab and running through the process.

I’m not going to wax lyrical about how Backblaze is democratising data access for everyone, as they’re in business to make money. But they’re certainly delivering a range of products that is enabling a variety of customers to make good use of technology that has potentially been unavailable (in a simple to consume format) previously. And that’s a great thing. I glossed over the news when it was announced last year, but the “Rebel Alliance” formed between Backblaze, Packet and ServerCentral is pretty interesting, particularly if you’re looking for a more cost-effective solution for compute and object storage that isn’t reliant on hyperscalers. I’m looking forward to hearing about what Backblaze come up with in the future, and I recommend checking them out if you haven’t previously. You can read Ken’s take over at Gestalt IT here.

Brisbane VMUG – October 2019


*Update – This meeting has now been moved to the 15th October. 

The October 2019 edition of the Brisbane VMUG meeting will be held on Tuesday 8th October at Fishburners (Level 2, 155 Queen St, Brisbane) from 4pm – 6pm. It’s sponsored by Rubrik and promises to be a great afternoon.

Here’s the agenda:

  • VMUG Intro
  • VMware Presentation
  • Rubrik Presentation: Automating VM Protection in Rubrik with vSphere Tags (and other cool stuff….)
  • Q&A
  • Refreshments and drinks post-event.

Rubrik have gone to great lengths to make sure this will be a fun and informative session and I’m really looking forward to hearing about their solution. After the VMUG wraps up at 6pm, feel free to come along to Brewbrik at The Pool Terrace & Bar on Level 4 at Next Hotel, Queen Street Mall (just down the road from Fishburners). Brewbrik is an informal get together of Rubrik customers, partners, prospects and general hangers-on. Rubrik will be shouting drinks and food. You can find out more information and register for the event here. I hope to see you there. Also, if you’re interested in sponsoring one of these events, please get in touch with me and I can help make it happen.

Random Short Take #22

Oh look, another semi-regular listicle of random news items that might be of some interest.

  • I was at Pure Storage’s //Accelerate conference last week, and heard a lot of interesting news. This piece from Chris M. Evans on FlashArray//C was particularly insightful.
  • Storage Field Day 18 was a little while ago, but that doesn’t mean that the things that were presented there are no longer of interest. Stephen Foskett wrote a great piece on IBM’s approach to data protection with Spectrum Protect Plus that’s worth a read.
  • Speaking of data protection, it’s not just for big computers. Preston wrote a great article on the iOS recovery process that you can read here. As someone who had to recently recover my phone, I agree entirely with the idea that re-downloading apps from the app store is not a recovery process.
  • NetApp were recently named a leader in the Gartner Magic Quadrant for Primary Storage. Say what you will about the MQ, a lot of folks are still reading this report and using it to help drive their decision-making activities. You can grab a copy of the report from NetApp here. Speaking of NetApp, I’m happy to announce that I’m now a member of the NetApp A-Team. I’m looking forward to doing a lot more with NetApp in terms of both my day job and the blog.
  • Tom has been on a roll lately, and this article on IT hero culture, and this one on celebrity keynote speakers, both made for great reading.
  • VMworld US was a little while ago, but Anthony‘s wrap-up post had some great content, particularly if you’re working a lot with Veeam.
  • WekaIO have just announced some work they’re doing with the Aiden Lab at the Baylor College of Medicine that looks pretty cool.
  • Speaking of analyst firms, this article from Justin over at Forbes brought up some good points about these reports and how some of them are delivered.

Independent Research Firm Cites Druva As A Strong Performer in latest Data Resiliency Solutions Wave

Disclaimer: This is a sponsored post and you’ll probably see the content elsewhere on the Internet. Druva provided no editorial input and the words and opinions in this post are my own.

Druva was among the select companies that Forrester invited to participate in their latest Data Resiliency Solutions Wave, for Q3 2019. In its debut for this report, Druva was cited as a Strong Performer in Data Resilience. I recently had an opportunity to speak to W. Curtis Preston, Druva’s Chief Technologist, about the report, and thought I’d share some of my thoughts here.

 

Let’s Get SaaS-y

Druva was the only company listed in the Forrester Wave™ Data Resiliency Solutions whose products are only offered as a service. One of the great things about Software-as-a-Service (SaaS) is that the vendor takes care of everything for you. Other models of solution delivery require hardware, software (or both) to be installed on-premises or close to the workload you want to protect. The beauty of a SaaS delivery model is that Druva can provide you with a data protection solution that they manage from end to end. If you’re hoping that there’ll be some new feature delivered as part of the solution, you don’t have to worry about planning the upgrade to the latest version; Druva takes care of that for you. There’s no need for you to submit change management documentation or negotiate infrastructure outages with key stakeholders. And if something goes wrong with the platform upgrade, the vendor will take care of it. All you need to worry about is ensuring that your network access is maintained and you’re paying the bills. If your capacity is growing out of line with your expectations, it’s a problem for Druva, not you. And, as I alluded to earlier, you get access to features in a timely fashion. Druva can push those out when they’ve tested them, and everyone gets access to them without having to wait. Their time to market is great, and there aren’t a lot of really long release cycles involved.

 

Management

The report also called out how easy it was to manage Druva, as Forrester gave them their highest score of 5 (out of 5) in this category. All of their services are available via a single management interface. I don’t recall at what point in my career I started to pay attention to vendors talking to me about managing everything from a single pane of glass. I think that the nature of enterprise infrastructure operations dictates that we look for unified management solutions wherever we can. Enterprise infrastructure is invariably complicated, and we want simplicity wherever we can get it. Having everything on one screen doesn’t always mean that things will be simple, but Druva has focused on ensuring that the management experience delivers on the promise of simplified operations. The simplified operations are also comprehensive, and there’s support for cloud-native / AWS resources (with CloudRanger), data centre workloads (with Druva Phoenix) and SaaS workloads (with Druva inSync) via a single pane of glass. Although not included in the report, Druva also supports backing up endpoints, such as laptops and mobile devices.

 

Deduplication Is No Joke

One of Forrester’s criteria was whether or not a product offered deduplication. Deduplication has radically transformed the data protection storage market. Prior to the widespread adoption of deduplication and compression technologies in data protection storage, tape provided the best value in terms of price and capacity. This all changed when enterprises were able to store many copies of their data in the space required by one copy. Druva uses deduplication effectively in its solution, and has a patent on its implementation of the technology. They also leverage global deduplication in their solution, providing enterprises with an efficient use of protection data storage. Note that this capability needs to be in a single AWS region, as you wouldn’t want it running across regions. The key to Druva’s success with deduplication has also been its use of DynamoDB to support deduplication operations at scale.
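To illustrate why global deduplication saves so much protection storage, here’s a toy sketch (my own, not Druva’s patented implementation): content is split into blocks, blocks are identified by hash, and identical blocks across all clients are stored exactly once.

```python
import hashlib

class DedupStore:
    """Toy global deduplication: identical blocks are stored once."""
    def __init__(self):
        self.blocks = {}        # digest -> block bytes (the physical store)
        self.logical_bytes = 0  # what clients think they have stored

    def put(self, data, block_size=4):
        refs = []
        for i in range(0, len(data), block_size):
            block = data[i:i + block_size]
            digest = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(digest, block)  # store only if new
            refs.append(digest)                    # client keeps references
        self.logical_bytes += len(data)
        return refs

store = DedupStore()
store.put(b"AAAABBBBAAAA")  # the repeated 'AAAA' block is stored once
store.put(b"AAAACCCC")      # 'AAAA' also dedupes against the first client
physical = sum(len(b) for b in store.blocks.values())
print(store.logical_bytes, physical)  # 20 12
```

The gap between logical and physical bytes is the whole economic argument: many protected copies, one stored copy per unique block. A real implementation obviously needs a fast, scalable index for those digests, which is where something like DynamoDB comes in.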

 

Your Security Is Their Concern

Security was a key criterion in Forrester’s evaluation, and Druva received another 5 – the highest score possible – in that category as well. One of the big concerns for enterprises is the security of protection data being stored in cloud platforms. There’s no point spending a lot of money trying to protect your critical information assets if a copy of those same assets has been left exposed on the Internet for all to see. With Druva’s solution, everything stored in S3 is sharded and stored as separate objects. They’re not just taking big chunks of your protection data and storing them in buckets for everyone to see. Even if someone were able to access the storage, and put all of the pieces back together, it would be useless because all of these shards are also encrypted.  In addition, the metadata needed to re-assemble the shards is stored separately in DynamoDB and is also encrypted.
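As a rough illustration of the sharding idea (my own sketch, not Druva’s actual design; it leaves out the per-shard encryption and uses an in-memory dict as a stand-in for both S3 and DynamoDB), note how the shards alone are useless without the separately held manifest.

```python
import hashlib, os

def shard(data, shard_size=4):
    """Split a backup object into opaquely named shards.

    Without the manifest, the shards don't reveal which object they
    belong to or how to order them; in a real system each shard would
    also be encrypted before upload.
    """
    object_store = {}  # what lands in object storage: opaque keyed pieces
    manifest = []      # kept separately (e.g. in a metadata database)
    for i in range(0, len(data), shard_size):
        piece = data[i:i + shard_size]
        key = hashlib.sha256(os.urandom(16)).hexdigest()[:12]  # opaque name
        object_store[key] = piece
        manifest.append(key)
    return object_store, manifest

def reassemble(object_store, manifest):
    # Only the holder of the manifest can put the pieces back together.
    return b"".join(object_store[key] for key in manifest)

store, manifest = shard(b"patient-records-backup")
assert reassemble(store, manifest) == b"patient-records-backup"
print(len(manifest))  # 6
```

Splitting the data from the metadata needed to reassemble it means an attacker has to compromise two separately secured systems, and even then (in the real product) still faces encrypted shards.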

 

Thoughts

I believe being named a Strong Performer in the Forrester Wave™ Data Resiliency Solutions validates what Druva’s been telling me when it comes to their ability to protect workloads in the data centre, the cloud, and in SaaS environments. Their strength seems to lie in their ability to leverage native cloud tools effectively to provide their customers with a solution that is simple to operate and consume. If you have petabytes of seismic data you need to protect, Druva (and the laws of physics) may not be a great fit for you. But if you have less esoteric requirements and a desire to reduce your on-premises footprint and protect workloads across a number of environments, then Druva is worthy of further consideration. If you wanted to take a look at the report yourself, you can do so here (registration required).

Random Short Take #21

Here’s a semi-regular listicle of random news items that might be of some interest.

  • This is a great article covering QoS enhancements in Purity 5.3. Speaking of Pure Storage, I’m looking forward to attending Pure//Accelerate in Austin in the next few weeks. I’ll be participating in a Storage Field Day Exclusive event as well – you can find more details on that here.
  • My friends at Scale Computing have entered into an OEM agreement with Acronis to add more data protection and DR capabilities to the HC3 platform. You can read more about that here.
  • Commvault just acquired Hedvig for a pretty penny. It will be interesting to see how they bring them into the fold. This article from Max made for interesting reading.
  • DH2i are presenting a webinar on September 10th at 11am Pacific, “On the Road Again – How to Secure Your Network for Remote User Access”. I’ve spoken to the people at DH2i in the past and they’re doing some really interesting stuff. If your timezone lines up with this, check it out.
  • This was some typically insightful coverage of VMworld US from Justin Warren over at Forbes.
  • I caught up with Zerto while I was at VMworld US last week, and they talked to me about their VAIO announcement. Justin Paul did a good job of summarising it here.
  • Speaking of VMworld, William has posted links to the session videos – check it out here.
  • Project Pacific was big news at VMworld, and I really enjoyed this article from Joep.

Backblaze’s World Tour Of Europe

I spoke with Ahin Thomas at VMworld US last week about what Backblaze has been up to lately. The big news is that they’ve expanded data centre operations into Europe (Amsterdam specifically). Here’s a blog post from Backblaze talking about their new EU DC, and these three articles do a great job of explaining the process behind the DC selection.

So what does this mean exactly? If you’re not so keen on keeping your data in a US DC, you can create an account and start leveraging the EU region. There’s no facility to migrate existing data (at this stage), but if you have a lot of data you want to upload, you could use the B2 Fireball to get it in there.

 

Thoughts and Further Reading

When you think of Backblaze it’s likely that you think of their personal backup product, and the aforementioned hard drive stats and storage pod reference designs. So it might seem a little weird to see them giving briefings at a show like VMworld. But their B2 business is ramping up, and a lot of people involved in delivering VMware-based cloud services are looking at object storage as a way to do cost-effective storage at scale. There are also plenty of folks in the mid-market segment trying to find more cost effective ways to store older data and protect it without making huge investments in the traditional data protection offerings on the market.

It’s still early days in terms of some of the features on offer from Backblaze that can leverage multi-region capabilities, but the EU presence is a great first step in expanding their footprint and giving non-US customers the option to use resources that aren’t located on US soil. Sure, you’re still dealing with a US company, and you’re paying in US dollars, but at least you’ve got a little more choice in terms of where the data will be stored. I’ve been a Backblaze customer for my personal backups for some time, and I’m always happy to hear good news stories coming out of the company. I’m a big fan of the level of transparency they’ve historically shown, particularly when other vendors have chosen to present their solutions as magical black boxes. Sharing things like the storage pod design and hard drive statistics goes a long way to developing trust in Backblaze as the keeper of your stuff.

The business of using cloud storage for data protection and scalable file storage isn’t as simple as jamming a few rackmount boxes in a random DC, filling them with hard drives, charging $5 a month, and waiting for the money to roll in. There’s a lot more to it than that. You need to have a product that people want, you need to know how to deliver that product, and you need to be able to evolve as technology (and the market) evolves. I’m happy to see that Backblaze have moved into storage services with B2, and the move to the EU is another sign of that continuing evolution. I’m looking forward (with some amount of anticipation) to hearing what’s next with Backblaze.

If you’re thinking about taking up a subscription with Backblaze – you can use my link to sign up and I’ll get a free month and you will too.

VMware – VMworld 2019 – HBI3487BUS – Rethink Data Protection & Management for VMware

Disclaimer: I recently attended VMworld 2019 – US.  My flights and accommodation were paid for by Digital Sense, and VMware provided me with a free pass to the conference and various bits of swag. There is no requirement for me to blog about any of the content presented and I am not compensated by VMware for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Here are my rough notes from “HBI3487BUS – Rethink Data Protection & Management for VMware”, presented by Curt Hayes (Cloud and Data Center Engineer, Regeneron) and Mike Palmer (Chief Product Officer, Druva). You can grab a PDF copy of my notes from here.

 

The World is Changing

Cloud Storage Costs Continue To Decline

  • 67 price decreases in AWS storage, with a CAGR of (60%) – AWS
  • 68% (110+) of countries have data protection and privacy legislation – United Nations
  • 40% of IT will be “Versatilists” by 2021 – Gartner
  • 54% of CIOs believe streamlining storage is the best opportunity for cost optimisation – ESG
  • 80% of enterprises will migrate away from and close their on-premises DCs by 2025 – Gartner
  • 256% increase in demand for data scientists in the last 5 years – Indeed

Druva’s 4 Pillars of Value

  • Costs Decrease – storage designed to optimise performance and cost reduces per TB costs, leaving more money for innovation
  • Eliminate Effort – Capacity management, patching, upgrades, certification, training, professional services gone.
  • Retire HW/SW silos – Druva builds in data services: DR, Archive, eDiscovery and more
  • Put Data to work – eliminating silos allows global tagging. Searchability, access and governance.

The best work you can do is when you don’t have to do it.

Curt (customer) says “[d]ata is our greatest asset”.

Regeneron’s Drivers to Move to Cloud

  • Challenge: Ireland backup platform is nearing end-of-life
    • Opportunity: Regeneron has a perfect opportunity to consider cloud as an alternative solution for backup and DR
  • Challenge: 3 distinct tools for managing backups
    • Opportunity: Harmonize the backup tool set
  • Challenge: Expansion and upgrades are costly and time-consuming
    • Opportunity: Minimize operational overhead
  • Challenge: Need to improve business continuity posture
    • Opportunity: Instantly enable offsite backups and meet disaster recovery requirements
  • Challenge: Scientists have a tough time accessing the data they need
    • Opportunity: Advanced search capabilities to offer greater value-added data services

Regeneron’s TCO Analysis

Druva Enables Intelligent Tiering in the Cloud

Traditional, expensive, and inflexible on-premises storage

  • Limited and expensive to scale and store
  • Complex administration
  • Lack of visibility and data silos
  • Tradeoff between cost and visibility for Long Term Retention requirements

Modern, scalable and cost-effective multi-tier storage

  • Scalable, efficient cloud storage
  • Intelligent progressive tiering of data for maximum cost efficiency with minimum effort
  • Support for cloud bursting and hot/cold data
  • Cost-efficient storage on the most innovative AWS tiers
  • Enable reporting / audit on historical data

Regeneron’s Adoption of Cloud Journey

  • DC modernisation / consolidation
  • Workload migration to the cloud – Amazon EC2
  • Simplify and streamline backup / recovery and DR
  • Longer-term retention for advanced data mining
  • Protecting cloud applications – Sharepoint, O365, etc
  • Future – do more with data

 

How Did Druva help?

Basics

  • Cheaper
  • Simpler
  • Faster
  • Unified protection

Future Proof

  • Scalable
  • Ease of integration
  • No training
  • Business continuity

Data Value

  • Search
  • Data Mining
  • Analytics

Looking Beyond Data Protection …

 

Thoughts and Further Reading

I think the folks at Druva have been doing some cool stuff lately, and chances are quite high that I’ll be writing more about them in the future. There’s a good story with their cloud-native architecture, and it was nice to hear how a customer leveraged them to do things better than they had been doing previously.

Two things really stood out to me during this session. The first was the statement “[t]he best work you can do is when you don’t have to do it”. I’ve heard it said before that the best storage operation is one you don’t have to do, and I think we sometimes lose sight of how this approach can help us get stuff done in a more efficient fashion, ultimately leading to focussing our constrained resources elsewhere.

The second was the idea of looking beyond data protection. The “secondary storage” market is riding something of a gravy train at the moment, with big investment from the VC funds in current and next-generation data protection (management?) solutions. There’s been some debate over how effective these solutions are at actually deriving value from that secondary data, but you’d have to think they’re in a prime position to succeed. I’m curious to see just what shape that value takes when we all start to agree on the basic premise.

Sponsored sessions aren’t everyone’s cup of tea, but I like hearing from customers about how it’s worked out well for them. And the cool thing about VMworld is that there’s a broader ecosystem supporting VMware across a number of different technology stacks. This makes for a diverse bunch of sessions, and I think it makes for an extremely interesting vendor conference. If you want to learn a bit more about what Druva have been up to, check out my post from Tech Field Day 19 here, and you can also find a useful overview of the product here. Good session. 3.5 stars.