Veeam Basics – Configuring A Scale-Out Backup Repository

I’ve been doing some integration testing with Pure Storage and Veeam in the lab recently, and thought I’d write an article on configuring a scale-out backup repository (SOBR). To learn more about SOBR configurations, you can read the Veeam documentation here. This post from Rick Vanover also covers the what and the why of SOBR. In this example, I’m using a couple of FlashBlade-based NFS repositories that I’ve configured as per these instructions. Each NFS repository is mounted on a separate Linux virtual machine. I’m using a Windows-based Veeam Backup & Replication server running version 9.5 Update 4.

 

Process

Start by going to Backup Infrastructure -> Scale-out Repositories and click on Add Scale-out Repository.

Give it a name, maybe something snappy like “Scale-out Backup Repository 1”?

Click on Add to add the backup repositories.

When you click on Add, you’ll have the option to select the backup repositories you want to use. You can select them all, but for the purpose of this exercise, we won’t.

In this example, Backup Repository 1 and 2 are the NFS locations I configured previously. Select those two and click on OK.

You’ll now see the repositories listed as Extents.

Click on Advanced to check that the advanced settings are what you expect them to be. Click on OK.

Click Next to continue. You’ll see the following message.

You then choose the placement policy. It’s strongly recommended that you stick with Data locality as the placement policy.
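
If it helps to think about the difference in code, here’s a rough Python sketch of the two placement policies (my own illustration, not Veeam’s code or its actual selection algorithm): data locality keeps all the files in a backup chain on the same extent, while the performance policy tries to put full and incremental backup files on different extents.

```python
# Illustrative only - a toy model of SOBR extent selection, not Veeam's implementation.

def pick_extent(extents, backup_file, chain, policy="DataLocality"):
    """Return the extent a new backup file would land on.

    extents     - list of dicts, e.g. {"name": "Backup Repository 1", "free_tb": 10}
    backup_file - dict with a "type" key: "full" or "incremental"
    chain       - previously placed files in this chain, each recording its extent
    policy      - "DataLocality" or "Performance"
    """
    if policy == "DataLocality" and chain:
        # Keep the whole backup chain together on the extent already in use.
        return chain[-1]["extent"]

    if policy == "Performance" and backup_file["type"] == "incremental" and chain:
        # Prefer an extent other than the one holding the full backup.
        full_extent = next(f["extent"] for f in chain if f["type"] == "full")
        others = [e for e in extents if e["name"] != full_extent]
        if others:
            return max(others, key=lambda e: e["free_tb"])["name"]

    # Otherwise (first file in the chain, or no alternative), pick the extent
    # with the most free space.
    return max(extents, key=lambda e: e["free_tb"])["name"]
```

It’s only a toy model, but it shows why losing a single extent hurts more under data locality: the whole chain lives there.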

You can also pick object storage to use as a Capacity Tier.

You’ll also have an option to configure the age of the files to be moved, and when they can be moved. And you might want to encrypt the data uploaded to your object storage environment, depending on where that object storage lives.
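
Under the hood the move policy is essentially an age check. Here’s a rough Python sketch of the idea (my own simplification – the 14-day window and the file list are made up, and as I understand it Veeam only offloads files belonging to inactive backup chains, which is what the “sealed” flag stands in for):

```python
from datetime import datetime, timedelta

# Illustrative sketch of an age-based capacity tier move policy.
OPERATIONAL_WINDOW_DAYS = 14  # "move backup files older than N days"

def files_to_move(backup_files, now=None):
    """Return backup files old enough to be offloaded to the capacity tier."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=OPERATIONAL_WINDOW_DAYS)
    # Only sealed (inactive) chains are candidates for offload.
    return [f for f in backup_files
            if f["sealed"] and f["created"] < cutoff]

example = [
    {"name": "vm01-full.vbk", "created": datetime(2019, 4, 1), "sealed": True},
    {"name": "vm01-incr.vib", "created": datetime(2019, 5, 6), "sealed": False},
]
print([f["name"] for f in files_to_move(example, now=datetime(2019, 5, 7))])
# -> ['vm01-full.vbk']
```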

Once you’re happy, click on Apply. You’ll be presented with a summary of the configuration (and hopefully there won’t be any errors).

 

Thoughts

The SOBR feature, in my opinion, is pretty cool. I particularly like the ability to put extents in maintenance mode. And the option to use object storage as a capacity tier is a very useful feature. You get some granular control in terms of where you put your backup data, and what kind of performance you can throw at the environment. And as you can see, it’s not overly difficult to configure the environment. There are a few things to keep in mind though. Make sure your extents are stored on resilient hardware. If you keep your backup sets together with the data locality option, you’ll be a sad panda if that extent goes bye-bye. And the same goes for the performance option. You’ll also need the Enterprise or Enterprise Plus edition of Veeam Backup & Replication for this feature to work. And you can’t use this feature for these types of jobs:

  • Configuration backup job;
  • Replication jobs (including replica seeding);
  • VM copy jobs; and
  • Veeam Agent backup jobs created by Veeam Agent for Microsoft Windows 1.5 or earlier and Veeam Agent for Linux 1.0 Update 1 or earlier.

There are any number of reasons why a scale-out backup repository can be a handy feature to use in your data protection environment. I’ve had the misfortune in the past of working with products that were difficult to manage from a data mobility perspective. Too many times I’ve been stuck going through all kinds of mental gymnastics working out how to migrate data sets from one storage platform to the next. With this it’s a simple matter of a few clicks and you’re on your way with a new bucket. The tiering to object feature is also useful, particularly if you need to keep backup sets around for compliance reasons. There’s no need to spend money on these living on performance disk if you can comfortably have them sitting on capacity storage after a period of time. And if you can control this movement through a policy-driven approach, then that’s even better. If you’re new to Veeam, it’s worth checking out a feature like this, particularly if you’re struggling with media migration challenges in your current environment. And if you’re an existing Enterprise or Enterprise Plus customer, this might be something you can take advantage of.

Random Short Take #14

Here are a few links to some random news items and other content that I found interesting. You might find them interesting too. Episode 14 – giddy-up!

Dell EMC Announces PowerProtect Software (And Hardware)

Disclaimer: I recently attended Dell Technologies World 2019.  My flights, accommodation and conference pass were paid for by Dell Technologies via the Media, Analysts and Influencers program. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Last week at Dell Technologies World there were a number of announcements made regarding Data Protection. I thought I’d cover them here briefly. Hopefully I’ll have the chance to dive a little deeper into the technology in the next few weeks.

 

PowerProtect Software

The new PowerProtect software is billed as Dell EMC’s “Next Generation Data Management software platform” and provides “data protection, replication and reuse, as well as SaaS-based management and self-service capabilities that give individual data owners the autonomy to control backup and recovery operations”. It currently offers support for:

  • Oracle;
  • Microsoft SQL;
  • VMware;
  • Windows Filesystems; and
  • Linux Filesystems.

More workload support is planned to arrive in the next little while. There are some nice features included, such as automated discovery and on-boarding of databases, VMs and Data Domain protection storage. There’s also support for tiering protection data to public cloud environments, and support for SaaS-based management is a nice feature too. You can view the data sheet here.

 

PowerProtect X400

The PowerProtect X400 is being positioned by Dell EMC as a “multi-dimensional” appliance, with support for both scale out and scale up expansion.

There are three “bits” to the X400 story. There’s the X400 cube, which is the brains of the operation. You then scale it out using either X400F (All-Flash) or X400H (Hybrid) cubes. The All-Flash version can be configured from 64 – 448TB of capacity, delivering up to 22.4PB of logical capacity. The Hybrid version runs from 64 – 384TB of capacity, and can deliver up to 19.2PB of logical capacity. The logical capacity calculation is based on “10x – 50x deduplication ratio”. You can access the spec sheet here, and the data sheet can be found here.
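
The logical capacity figures are just usable capacity multiplied by the assumed deduplication ratio. Here’s a quick Python check of the data sheet maths (my own arithmetic, using the quoted 50x upper bound):

```python
# Logical capacity = usable capacity x assumed deduplication ratio (TB -> PB).
def logical_capacity_pb(usable_tb, dedupe_ratio):
    return usable_tb * dedupe_ratio / 1000

print(logical_capacity_pb(448, 50))  # X400F maximum: 22.4 PB
print(logical_capacity_pb(384, 50))  # X400H maximum: 19.2 PB
```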

Scale Up and Out?

So what do Dell EMC mean by “multi-dimensional” then? It’s a neat marketing term that means you can scale up and out as required.

  • Scale-up with grow-in-place capacity expansion (16TB); and
  • Scale-out compute and capacity with additional X400F or X400H cubes (starting at 64TB each).

This way you can “[b]enefit from the linear scale-out of performance, compute, network and capacity”.

 

IDPA

Dell EMC also announced that the Integrated Data Protection Appliance (IDPA) was being made available in an 8-24TB version, providing a lower capacity option to service smaller environments.

 

Thoughts and Further Reading

Everyone I spoke to at Dell Technologies World was excited about the PowerProtect announcement. Sure, it’s their job to be excited about this stuff, but there’s a lot here to be excited about, particularly if you’re an existing Dell EMC data protection customer. The other “next-generation” data protection vendors seem to have given the 800-pound gorilla the wake-up call it needed, and the PowerProtect offering is a step in the right direction. The scalability approach used with the X400 appliance is potentially a bit different to what’s available in the market today, but it seems to make sense in terms of reducing the footprint of the hardware to a manageable amount. There were some high numbers being touted in terms of performance, but I won’t be repeating any of those until I’ve seen this for myself in the wild. The all-flash option seems a little strange at first, as flash isn’t normally associated with data protection, but I think it’s a competitive nod to some of the other vendors offering top of rack, all-flash data protection.

So what if you’re an existing Data Domain / NetWorker / Avamar customer? There’s no need to panic. You’ll see continued development of these products for some time to come. I imagine it’s not a simple thing for an established company such as Dell EMC to introduce a new product that competes in places with something it already sells to customers. But I think it’s the right thing for them to do, as there’s been significant pressure from other vendors when it comes to telling a tale of simplified data protection leveraging software-defined solutions. Data protection requirements have seen significant change over the last few years, and this new architecture is a solid response to those changes.

The supported workloads are basic for the moment, but a cursory glance through most enterprise environments would be enough to reassure you that they have the most common stuff covered. I understand that existing DPS customers will also get access to PowerProtect to take it for a spin. There’s no word yet on what the migration path for existing customers looks like, but I have no doubt that people have already thought long and hard about what that would look like and are working to make sure the process is field ready (and hopefully straightforward). Dell EMC PowerProtect Software platform and PowerProtect X400 appliance will be generally available in July 2019.

For another perspective on the announcement, check out Preston‘s post here.

Random Short Take #13

Here are a few links to some random news items and other content that I found interesting. You might find them interesting too. Let’s dive in to lucky number 13.

Cohesity Marketplace – A Few Notes

 

Cohesity first announced their Marketplace offering in late February. I have access to a Cohesity environment (physical and virtual) in my lab, and I’ve recently had the opportunity to get up and running on some of the Marketplace-ready code, so I thought I’d share my experiences here.

 

Prerequisites

I’m currently running version 6.2 of Cohesity’s DataPlatform. I’m not sure whether this is widely available yet or still only available for early adopter testing. My understanding is that the Marketplace feature will be made generally available to Cohesity customers when 6.3 ships. The Cohesity team did install a minor patch (6.2a) on my cluster as it contained some small but necessary fixes. In this version of the code, a gflag is set to show the Apps menu. The “Enable Apps Management” option in the UI under Admin – Cluster Settings was also enabled. You’ll also need to nominate an unused private subnet for the apps to use.

 

Current Application Availability

The Cohesity Marketplace has a number of Cohesity-developed and third-party apps available to install, including:

  • Splunk – Turn machine data into answers
  • SentinelOne – AI-powered threat prevention purpose built for Cohesity
  • Imanis Data – NoSQL backup, recovery, and replication
  • Cohesity Spotlight – Analyse file audit logs and find anomalous file-access patterns
  • Cohesity Insight – Search inside unstructured data
  • Cohesity EasyScript – Create, upload, and execute customised scripts
  • ClamAV – Anti-virus scans for file data

Note that none of the apps need more than Read permissions on the nominated View(s).

 

Process

App Installation

To install the app you want to run on your cluster, click on “Get App”, then enter your Helios credentials.

Review the EULA and click on “Accept & Get” to proceed. You’ll then be prompted to select the cluster(s) you want to deploy the app on. In this example, I have 5 clusters in my Helios environment. I want to install the app on C1, as it’s the physical cluster.

Using An App

Once your app is installed, it’s fairly straightforward to run it. Click on More, then Apps to access your installed apps.

 

Then you just need to click on “Run App” to get started.

You’ll be prompted to set the Read Permissions for the App, along with QoS. It’s my understanding that the QoS settings are relative to other apps running on the cluster, not data protection activities, etc. The Read Permissions are applied to one or more Views. This can be changed after the initial configuration. Once the app is running you can click on Open App. In this example I’m using the Cohesity Insight app to look through some unstructured data stored on a View.

 

Thoughts

I’ve barely scratched the surface of what you can achieve with the Marketplace on Cohesity’s DataPlatform. The availability of the Marketplace (and the ability to run apps on the platform) is another step closer to Cohesity’s vision of extracting additional value from secondary storage. Coupled with Cohesity’s C4000 series hardware (or perhaps whatever flavour you want to run from Cisco or HPE or the like), I can imagine you’re going to be able to do a heck of a lot with this capability, particularly as more apps are validated with the platform.

I hope to do a lot more testing of this capability over the next little while, and I’ll endeavour to report back with my findings. If you’re a current Cohesity customer and haven’t talked to your account team about this capability, it’s worth getting in touch to see what you can do in terms of an evaluation. Of course, it’s also worth noting that, as with most things technology related, just because you can, doesn’t always mean you should. But if you have the use case, this is a cool capability on top of an already interesting platform.

IBM Spectrum Protect Plus – More Than Meets The Eye

Disclaimer: I recently attended Storage Field Day 18.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

IBM recently presented at Storage Field Day 18. You can see videos of their presentation here, and download my rough notes from here.

 

We Want A Lot From Data Protection

Data protection isn’t just about periodic protection of applications or files any more. Or, at the very least, we seem to want more than that from our data protection solutions. We want:

  • Application / data recovery – providing data availability;
  • Disaster Recovery – recovering from a minor to major data loss;
  • BCP – reducing the risk to the business, employees, market perception;
  • Application / data reuse – utilise for new routes to market; and
  • Cyber resiliency – recover the business from a compromise or attack.

There’s a lot to cover there. And it could be argued that you’d need five different solutions to meet those requirements successfully. With IBM Spectrum Protect Plus (SPP) though, you’re able to meet a number of those requirements.

 

There’s Much That Can Be Done

IBM are positioning SPP as a tool that can help you extend your protection options beyond the traditional periodic data protection solution. You can use it for:

  • Data management / operational recovery – modernised and expanded use cases with instant data access and instant recovery leveraging snapshots;
  • Backup – traditional backup / recovery using streaming backups; and
  • Archive – long-term data retention / compliance, corporate governance.

 

Key Design Principles

Easy Setup

  • Deploy Anywhere: virtual appliance, cloud, bare metal;
  • Zero touch application agents;
  • Automated deployment for IBM Cloud for VMware; and
  • IBM SPP Blueprints.

The benefits of this include:

  • Easy to get started;
  • Reduced deployment costs; and
  • Hybrid and multi-cloud configurations.

Protect

  • Protect databases and applications hosted on-premises or in cloud;
  • Incremental forever using native hypervisor, database, and OS APIs (a toy sketch of the idea follows the benefits list below); and
  • Efficient data reduction using deduplication and compression.

The benefits of this include:

  • Efficiency through reduced storage and network usage;
  • Stringent RPO compliance with a reduced backup window; and
  • Application backup with multi-cloud portability.
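
As promised above, here’s a toy Python model of the incremental-forever approach (my own illustration of the general concept, not IBM SPP’s implementation): the first run captures everything, subsequent runs capture only changed blocks, and a restore rolls the base forward through the increments.

```python
# Toy model of an incremental-forever backup chain - illustrative only.
class IncrementalForever:
    def __init__(self):
        self.base = {}        # block_id -> data from the first (and only) full backup
        self.increments = []  # one {block_id: data} dict per subsequent backup run

    def backup(self, current_blocks):
        """First run captures everything; later runs capture changed blocks only."""
        if not self.base:
            self.base = dict(current_blocks)
            return len(self.base)
        last = self.restore_latest()
        # Deleted blocks are ignored to keep the toy small.
        changed = {b: d for b, d in current_blocks.items() if last.get(b) != d}
        self.increments.append(changed)
        return len(changed)

    def restore_latest(self):
        """Roll the base forward through every increment to synthesise the latest state."""
        state = dict(self.base)
        for inc in self.increments:
            state.update(inc)
        return state

job = IncrementalForever()
print(job.backup({"b1": "aaa", "b2": "bbb"}))  # 2 - initial full
print(job.backup({"b1": "aaa", "b2": "ccc"}))  # 1 - only the changed block
print(job.restore_latest())                    # {'b1': 'aaa', 'b2': 'ccc'}
```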

Manage

  • Centralised, SLA-driven management;
  • Simple, secure RBAC based user self service; and
  • Lifecycle management of space efficient point-in-time snapshots.

The benefits of this include:

  • Lower TCO by reducing operational costs;
  • Consistent management / governance of multi-cloud environments; and
  • Secure by design with RBAC.

Recover, Reuse

  • Instant access / sandbox for DevOps and test environments;
  • Recover applications in cloud or data centre; and
  • Global file search and recovery.

The benefits of this include:

  • Improved RTO via instant access;
  • Eliminate time finding the right copy (file search across all snapshots with a globally indexed namespace);
  • Data reuse (versus backup as just an insurance policy); and
  • Improved agility; efficiently capture and use a copy of production data for testing.

 

One Workflow, Multiple Use Cases

There’s a lot you can do with SPP, and the following diagram shows the breadth of the solution.

[image courtesy of IBM]

 

Thoughts and Further Reading

When I first encountered IBM SPP at Storage Field Day 15, I was impressed with their approach to policy-driven protection. It’s my opinion that we’re asking more and more of modern data protection solutions. We don’t just want to use them as insurance for our data and applications any more. We want to extract value from the data. We want to use the data as part of test and development workflows. And we want to manipulate the data we’re protecting in ways that have proven difficult in years gone by. It’s not just about having a secondary copy of an important file sitting somewhere safe. Nor is it just about using that data to refresh an application so we can test it with current business problems. It’s all of those things and more. This adds complexity to the solution, as many people who’ve administered data protection solutions have found out over the years. To this end, IBM have worked hard with SPP to ensure that it’s a relatively simple process to get up and running, and that you can do what you need out of the box with minimal fuss.

If you’re already operating in the IBM ecosystem, a solution like SPP can make a lot of sense, as there are some excellent integration points available with other parts of the IBM portfolio. That said, there’s no reason you can’t benefit from SPP as a standalone offering. All of the normal features you’d expect in a modern data protection platform are present, and there’s good support for enhanced protection use cases, such as analytics.

Enrico had some interesting thoughts on IBM’s data protection lineup here, and Chin-Fah had a bit to say here.

Cohesity Is (Data)Locked In

Disclaimer: I recently attended Storage Field Day 18.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Cohesity recently presented at Storage Field Day 18. You can see their videos from Storage Field Day 18 here, and download a PDF copy of my rough notes from here.

 

The Cohesity Difference?

Cohesity covered a number of different topics in its presentation, and I thought I’d outline some of the Cohesity features before I jump into the meat and potatoes of my article. Some of the key things you get with Cohesity are:

  • Global space efficiency;
  • Data mobility;
  • Data resiliency & compliance;
  • Instant mass restore; and
  • Apps integration.

I’m going to cover 3 of the 5 here, and you can check the videos for details of the Cohesity MarketPlace and the Instant Mass Restore demonstration.

Global Space Efficiency

One of the big selling points for the Cohesity data platform is the ability to deliver data reduction and small file optimisation.

  • Global deduplication
    • Modes: inline, post-process
  • Archive to cloud is also deduplicated
  • Compression
    • Zstandard algorithm (read more about that here; a small usage sketch follows this list)
  • Small file optimisation
    • Better performance for reads and writes
    • Benefits from deduplication and compression
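
Since the compression list above calls out Zstandard, here’s a minimal Python round trip using the open source zstandard bindings (nothing Cohesity-specific, just the algorithm itself):

```python
# Minimal Zstandard round trip using the open source Python bindings
# (pip install zstandard). Nothing here is Cohesity-specific.
import zstandard as zstd

data = b"backup data " * 10000  # highly repetitive, so it compresses well

compressor = zstd.ZstdCompressor(level=3)
compressed = compressor.compress(data)

decompressor = zstd.ZstdDecompressor()
restored = decompressor.decompress(compressed)

assert restored == data
print(f"{len(data)} bytes -> {len(compressed)} bytes")
```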

Data Mobility

There’s also an excellent story when it comes to data mobility, with the platform delivering the following data mobility features:

  • Data portability across clouds
  • Multi-cloud replication and archival (1:many)
  • Integrated indexing and search across locations

You also get simultaneous, multi-protocol access and a comprehensive set of file permissions to work with.

 

But What About Archives And Stuff?

Okay, so all of that stuff is really cool, and I could stop there and you’d probably be happy enough that Cohesity delivers the goods when it comes to a secondary storage platform that delivers a variety of features. In my opinion, though, it gets a lot more interesting when you have a look at some of the archival features that are built into the platform.

Flexible Archive Solutions

  • Archive either on-premises or to cloud;
  • Policy-driven archival schedule for long-term data retention;
  • Data can be retrieved to the same or a different Cohesity cluster; and
  • Archived data is subject to further deduplication.

Data Resiliency and Compliance – ensures data integrity

  • Erasure coding;
  • Highly available; and
  • DataLock and legal hold.

Achieving Compliance with File-level DataLock

In my opinion, DataLock is where it gets interesting in terms of archive compliance.

  • DataLock enables WORM functionality at a file level;
  • DataLock adheres to regulatory acts;
  • Can automatically lock a file after a period of inactivity;
  • Files can be locked manually by setting file attributes;
  • Minimum and maximum retention times can be set (a simplified sketch follows this list); and
  • Cohesity provides a unique RBAC role for Data Security administration.
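
Here’s a small Python sketch of how that minimum/maximum retention behaviour can be reasoned about (my simplification of generic WORM behaviour, not Cohesity’s actual DataLock implementation): a locked file can’t be changed until its retention expires, and any requested retention is clamped into the configured window.

```python
from datetime import datetime, timedelta

# Simplified model of file-level WORM retention - illustrative only.
MIN_RETENTION = timedelta(days=30)
MAX_RETENTION = timedelta(days=365 * 7)

def lock_expiry(locked_at, requested_retention):
    """Clamp the requested retention period into the configured min/max window."""
    retention = max(MIN_RETENTION, min(requested_retention, MAX_RETENTION))
    return locked_at + retention

def can_modify(locked_at, requested_retention, now=None):
    """A locked file can't be changed or deleted until its retention expires."""
    now = now or datetime.utcnow()
    return now >= lock_expiry(locked_at, requested_retention)

locked = datetime(2019, 5, 1)
print(can_modify(locked, timedelta(days=90), now=datetime(2019, 5, 10)))  # False
print(lock_expiry(locked, timedelta(days=3650)))  # clamped to 7 years from the lock date
```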

DataLock on Backups

  • DataLock enables WORM functionality;
  • Prevent changes by locking Snapshots;
  • Applied via backup policy; and
  • Operations performed by Data Security administrators.

 

Ransomware Detection

Cohesity also recently announced the ability to look within Helios for Ransomware. The approach taken is as follows: Prevent. Detect. Respond.

Prevent

There’s some good stuff built into the platform to help prevent ransomware in the first place, including:

  • Immutable file system
  • DataLock (WORM)
  • Multi-factor authentication

Detect

  • Machine-driven anomaly detection (backup data, unstructured data) – a simplified sketch follows this list
  • Automated alert
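
To make the anomaly detection idea concrete, here’s a toy Python sketch (my own simplification, definitely not how Helios does it): flag a backup run whose changed-data volume sits well outside the recent norm.

```python
# Toy illustration of change-rate anomaly detection on backup jobs.
from statistics import mean, stdev

def is_anomalous(history_gb, latest_gb, threshold=3.0):
    """Flag a backup whose changed-data volume is far outside the recent norm."""
    if len(history_gb) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(history_gb), stdev(history_gb)
    if sigma == 0:
        return latest_gb != mu
    return abs(latest_gb - mu) / sigma > threshold

daily_change_gb = [12, 14, 11, 13, 12, 15, 13]  # typical incremental sizes
print(is_anomalous(daily_change_gb, 240))        # True - possible encryption event
print(is_anomalous(daily_change_gb, 14))         # False
```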

Respond

  • Scalable file system to store years’ worth of backup copies
  • Google-like global actionable search
  • Instant mass restore

 

Thoughts and Further Reading

The conversation with Cohesity got a little spirited in places at Storage Field Day 18. This isn’t unusual, as Cohesity has had some problems in the past with various folks not getting what they’re on about. Is it data protection? Is it scale-out NAS? Is it an analytics platform? There’s a lot going on here, and plenty of people (both inside and outside Cohesity) have had a chop at articulating the real value of the solution. I’m not here to tell you what it is or isn’t. I do know that a lot of the cool stuff with Cohesity wasn’t readily apparent to me until I actually had some stick time with the platform and had a chance to see some of its key features in action.

The DataLock / Security and Compliance piece is interesting to me though. I’m continually asking vendors what they’re doing in terms of archive platforms. A lot of them look at me like I’m high. Why wouldn’t you just use software to dump your old files up to the cloud or onto some cheap and deep storage in your data centre? After all, aren’t we all using software-defined data centres now? That’s certainly an option, but what happens when that data gets zapped? What if the storage platform you’re using, or the software you’re using to store the archive data, goes bad and deletes the data you’re managing with it? Features such as DataLock can help with protecting you from some really bad things happening.

I don’t believe that data protection data should be treated as an “archive” as such, although I think that data protection platform vendors such as Cohesity are well placed to deliver “archive-like” solutions for enterprises that need to retain protection data for long periods of time. I still think that pushing archive data to another, dedicated, tier is a better option than simply calling old protection data “archival”. Given Cohesity’s NAS capabilities, it makes sense that they’d be an attractive storage target for dedicated archive software solutions.

I like what Cohesity have delivered to date in terms of a platform that can be used to deliver data insights to derive value for the business. I think sometimes the message is a little muddled, but in my opinion some of that is because everyone’s looking for something different from these kinds of platforms. And these kinds of platforms can do an awful lot of things nowadays, thanks in part to some pretty smart software and some grunty hardware. You can read some more about Cohesity’s Security and Compliance story here,  and there’s a fascinating (if a little dated) report from Cohasset Associates on Cohesity’s compliance capabilities that you can access here. My good friend Keith Townsend also provided some thoughts on Cohesity that you can read here.

Veeam Vanguard 2019

I was very pleased to get an email from Rick Vanover yesterday letting me know I was accepted as part of the Veeam Vanguard Program for 2019. This is my first time as part of this program, but I’m really looking forward to participating in it. Big shout out to Dilupa Ranatunga and Anthony Spiteri for nominating me in the first place, and for Rick and the team for having me as part of the program. Also, (and I’m getting a bit parochial here) special mention of the three other Queenslanders in the program (Rhys Hammond, Nathan Oldfield, and Chris Gecks). There’s going to be a lot of cool stuff happening with Veeam and in data protection generally this year and I can’t wait to get started. More soon.

Imanis Data and MDL autoMation Case Study

Background

I’ve covered Imanis Data in the past, but am the first to admit that their focus area is not something I’m involved with on a daily basis. They recently posted a press release covering a customer success story with MDL autoMation. I had the opportunity to speak with both Peter Smails from Imanis Data, as well as Eric Gutmann from MDL autoMation. Whilst I enjoy speaking to vendors about their successes in the market, I’m even more intrigued by customer champions and what they have to say about their experience with a vendor’s offering. It’s one thing to talk about what you’ve come up with as a product, and how you think it might work well in the real world. It’s entirely another thing to have a customer take the time to speak to people on your behalf and talk about how your product works for them. Ultimately, these are usually interesting conversations, and it’s always useful for me to hear about how various technologies are applied in the real world. Note that I spoke to them separately, so Gutmann wasn’t being pushed in a certain direction by Imanis Data – he’s just really enthusiastic about the solution.

 

The Case Study

The Customer

Founded in 2006, MDL autoMation (MDL) is “one of the automotive industry’s leaders in the application of IoT and SaaS-based technologies for process improvement, automated customer recognition, vehicle tracking and monitoring, personalised customer service and sales, and inventory management”. Gutmann explained to me that for them, “every single customer is a VIP”. There’s a lot of stuff happening on the back-end to make sure that the customer’s experience is an extremely smooth one. MongoDB provides the foundation for the solution. When they first deployed the environment, they used MongoDB Cloud Manager to protect the environment, but struggled to get it to deliver the results they required.

 

Key Challenges

MDL moved to another provider, and spent approximately six months getting it running. It worked well at the time, and met their requirements, saving them money and delivering quick backups on-premises and quick restores. There were a few issues though, including:

  • Cost and complexity of backup and recovery for a 15-node, sharded MongoDB deployment across three data centres;
  • Time and complexity associated with daily refresh to non-sharded QA test cluster (it would take 2 days to refresh QA); and
  • Inability to use Active Directory for user access control.

 

Why Imanis Data?

So what got Gutmann and MDL excited about Imanis Data? There were a few reasons that Eric outlined for me, including:

  • 10x backup storage efficiency;
  • 26x faster QA refresh time – incremental restore;
  • 95% reduction in the number of policies to manage – with the enterprise policy engine, the number of policies was reduced from 40 to 2; and
  • Native integration with Active Directory.

It was cheaper again than the previous provider, and, as Gutmann puts it, “[i]t took literally hours to implement the Imanis product”. MDL are currently protecting 1.6TB of data, and it takes 7 minutes every hour to back up any changes.

 

Conclusion and Further Reading

Data protection is a problem that everyone needs to deal with at some level. Whether you have “traditional” infrastructure delivering your applications, or one of those fancy new NoSQL environments, you still need to protect your stuff. There are a lot of built-in features with MongoDB to ensure it’s resilient, but keeping the data safe is another matter. Coupled with that is the fact that developers have relied on data recovery activities to get data into quality assurance environments for years now. Add all that together and you start to see why customers like MDL are so excited when they come across a solution that does what they need it to do.

Working in IT infrastructure (particularly operations) can be a grind at times. Something always seems to be broken or about to break. Something always seems to be going a little bit wrong. The best you can hope for at times is that you can buy products that do what you need them to do to ensure that you can produce value for the business. I think Imanis Data have a good story to tell in terms of the features they offer to protect these kinds of environments. It’s also refreshing to see a customer that is as enthusiastic as MDL is about the functionality and performance of the product, and the engagement as a whole. And as Gutmann pointed out to me, his CEO is always excited about the opportunity to save money. There’s no shame in being honest about that requirement – it’s something we all have to deal with one way or another.

Note that neither of us wanted to focus on the previous / displaced solution, as it serves no real purpose to talk about another vendor in a negative light. Just because that product didn’t do what MDL wanted it to do, doesn’t mean that that product wouldn’t suit other customers and their particular use cases. Like everything in life, you need to understand what your needs and wants are, prioritise them, and then look to find solutions that can fulfil those requirements.

Cohesity – Helios Article and Upcoming Webinar

I’ve written about Cohesity’s Helios offering previously, and also wrote a short article on upgrading multiple clusters using Helios. I think it’s a pretty neat offering, so to that end I’ve written an article on Cohesity’s blog about some of the cool stuff you can do with Helios. I’m also privileged to be participating in a webinar in late January with Cohesity’s Jon Hildebrand. We’ll be running through some of these features from a more real-world perspective, including doing silly things like live demos. You can get further details on the webinar here.