Random Short Take #65

Welcome to Random Short Take #65. Last one for the year, I think.

  • First up, this handy article from Steve Onofaro on replacing certificates in VMware Cloud Director 10.3.1.
  • Speaking of cloud, I enjoyed this article from Chris M. Evans on the AWS “wobble” (as he puts it) in us-east-1 recently. Speaking of articles Chris has written recently, check out his coverage of the Pure Storage FlashArray//XL announcement.
  • Speaking of Pure Storage, my friend Jon wrote about his experience with ActiveCluster in the field recently. You can find that here. I always find these articles to be invaluable, if only because they demonstrate what’s happening out there in the real world.
  • Want some press releases? Here’s one from Datadobi announcing it has released new Starter Packs for DobiMigrate ranging from 1PB up to 7PB.
  • Data protection isn’t just something you do at the office – it’s a problem for home too. I’m always interested to hear how other people tackle the problem. This article from Jeff Geerling (and the associated documentation on Github) was great.
  • John Nicholson is a smart guy, so I think you should check out his articles on benchmarking (and what folks are getting wrong). At the moment this is a 2-part series, but I suspect that could be expanded. You can find Part 1 here and Part 2 here. He makes a great point that benchmarking can be valuable, but benchmarking like it’s 1999 may not be the best thing to do (I’m paraphrasing).
  • Speaking of smart people, Tom Andry put together a great article recently on dispelling myths around subwoofers. If you or a loved one are getting worked up about subwoofers, check out this article.
  • I had people ask me if I was doing a predictions post this year. I’m not crazy enough to do that, but Mellor is. You can read his article here.

In some personal news (and it’s not LinkedIn official yet) I recently quit my job and will be taking up a new role in the new year. I’m not shutting the blog down, but you might see a bit of a change in the content. I can’t see myself stopping these articles, but it’s likely there’ll be fewer of the data protection how-to articles being published. But we’ll see. In any case, wherever you are, stay safe, happy holidays, and see you on the line next year.

Random Short Take #64

Welcome to Random Short Take #64. It’s the start of the last month of the year. We’re almost there.

  • Want to read an article that’s both funny and informative? Look no further than this beginner’s guide to subnetting. I did Elizabethan literature at uni, so it was good to get a reminder on Shakespeare’s involvement in IP addressing.
  • Continuing with the amusing articles, Chris Colotti published a video of outtakes from some Cohesity lightboard sessions that had me cracking up. It’s always nice when people don’t take themselves too seriously.
  • On a more serious note, data hoarding is a problem (I know this because I’ve been guilty of it), and this article from Preston outlines some of the reasons why it can be a bad thing for business.
  • Still on data protection, Howard Oakley looks at checking the integrity of Time Machine backups in this post. I’ve probably mentioned this a few times previously, but if you find macOS behaviour baffling at times, Howard likely has an article that can explain why you’re seeing what you’re seeing.
  • Zerto recently announced Zerto In-Cloud for AWS – you can read more about that here. Zerto is really starting to put together a comprehensive suite of DR solutions. Worth checking out.
  • Still on press releases, Datadobi has announced new enhancements to DobiMigrate with 5.13. The company also recently validated Google Cloud Storage as an endpoint for its DobiProtect solution.
  • Leaseweb Global is also doing stuff with Google Cloud – you can read more about that here.
  • Finally, this article over at Blocks and Files on what constitutes a startup made for some interesting reading. Some companies truly are Peter Pans at this point, whilst others are holding on to the idea that they’re still in startup mode.

StorONE Announces S1:Backup

StorONE recently announced details of its S1:Backup product. I had the opportunity to talk with Gal Naor and George Crump about the news and thought I’d share some brief thoughts here.


The Problem

Talk to people in the tech sector today, and you’ll possibly hear a fair bit about how ransomware is a real problem for them, and a scary one at that. Almost all of the data protection solution vendors are talking about how they can help customers quickly recover from ransomware events, and some are particularly excited about how they can let you know you’ve been hit in a timely fashion. Which is great. A good data protection solution is definitely important to an organisation’s ability to rapidly recover when things go pop. But what about those software-based solutions that have themselves become targets of the ransomware gangs? What do you do when someone goes after both your primary and secondary storage solutions? It costs a lot of money to deliver immutable solutions that are resilient to the nastiness associated with ransomware. Unfortunately, most organisations continue to treat data protection as an overpriced insurance policy and are reluctant to spend more than the bare minimum to keep these types of solutions going. It’s alarming how often I’ve spoken to customers running software-based data protection solutions that are out of support with the vendor, just to save a few thousand dollars a year in maintenance costs.


The StorONE Solution

So what do you get with S1:Backup? Quite a bit, as it happens.

[image courtesy of StorONE]

You get Flash-based data ingestion in an immutable format, with snapshots being taken every 30 seconds.

[image courtesy of StorONE]

You also get fast consolidation of multiple incremental backup jobs (think synthetic fulls, etc.), thanks to the high performance of the StorONE platform. Speaking of performance, you also get quick recovery capabilities, and the other benefits of the StorONE platform (namely high availability and high performance).
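
If you haven’t bumped into synthetic fulls before, the general idea is that the platform merges the most recent full backup with the incrementals taken since, producing a new full without having to re-read the source. Here’s a minimal Python sketch of that concept – purely illustrative, with made-up data structures, and nothing to do with how StorONE actually implements it:

```python
# Toy illustration of synthetic full consolidation (conceptual only).
# Each backup is modelled as a dict of {path: file_version}; incrementals
# contain only the files that changed since the previous backup.

def synthesize_full(base_full, incrementals):
    """Merge a base full backup with ordered incrementals into a new full."""
    synthetic = dict(base_full)          # start from the last full
    for incremental in incrementals:     # apply changes oldest to newest
        synthetic.update(incremental)    # newer versions win
    return synthetic

full_sunday = {"/etc/hosts": "v1", "/var/db.dump": "v1"}
incr_monday = {"/var/db.dump": "v2"}
incr_tuesday = {"/home/report.txt": "v1"}

new_full = synthesize_full(full_sunday, [incr_monday, incr_tuesday])
# {'/etc/hosts': 'v1', '/var/db.dump': 'v2', '/home/report.txt': 'v1'}
```

The consolidation work all happens on the backup storage rather than the source, which is why having a high-performance platform underneath makes such a difference to how quickly those synthetic fulls can be rolled up.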

And if you’re looking for long-term retention that’s affordable, you can take advantage of StorONE’s ability to cope well with 90% capacity utilisation, rapid RAID rebuild times, and the ability to start small and grow.


Thoughts and Further Reading

Ransomware is a big problem, particularly when it hits you across both primary and secondary storage platforms. Storage immutability has become a super important piece of the puzzle that vendors are trying to solve. Like many things though, it does require some level of co-operation to make sure non-integrated systems are functioning across the stack in an integrated fashion. There are all kinds of ways to attack this issue, with some hardware vendors insisting that their particular interpretation of immutability is the only way to go, while some software vendors are quite keen on architecting air gaps into solutions to get around the problem. And I’m sure there’s a tape guy sitting up the back muttering about how tape is the ultimate air gap. Whichever way you want to look at it, I don’t think any one vendor has a solution that is 100% guaranteed to keep you safe from the folks in hoodies intent on trashing your data. So I’m pleased that StorONE is looking at this problem and wanting to work with the major vendors to develop a cost-effective solution to the issue. It may not be right for everyone, and that’s fine. But on the face of it, it certainly looks like a compelling solution when compared to rolling your own storage platforms and hoping that you don’t get hit.

Doing data protection well is hard, and made harder by virtue of the fact that many organisations treat it as a necessary evil. Sadly, it seems that CxOs only really start to listen after they’ve been rolled, not beforehand. Sometimes the best you can do is be prepared for when disaster strikes. If something like the StorONE solution is going to be the difference between losing the whole lot and coming back from an attack quickly, it seems like it’s worth checking out. I can assure you that ignoring the problem will only end in tears. It’s also important to remember that a robust data protection solution is just another piece of the puzzle. You still need to look at your overall security posture, including securing your assets and teaching your staff good habits. Finally, if it seems like I’m taking aim at software-based solutions, I’m not. I’m the first to acknowledge that any system is susceptible if it isn’t architected and deployed in a secure fashion – regardless of whether it’s integrated or not. Anyway, if you’d like another take on the announcement, Mellor covered it here.

Datadobi, DobiProtect, and Forward Progress

I recently had the opportunity to speak with Carl D’Halluin from Datadobi about DobiProtect, and thought I’d share some thoughts here. I wrote about DobiProtect in the past, particularly in relation to disaster recovery and air gaps. Things have progressed since then, as they invariably do, and there’s a bit more to the DobiProtect story now.


Ransomware Bad, Data Protection Good

If you’re paying attention to any data protection solution vendors at the moment, you’re no doubt hearing about ransomware attacks. These are considered to be Very Bad Things (™).

What Happens

  • Ransomware comes in through zero-day exploit or email attachments
  • Local drive content encrypted
  • Network shares encrypted – might be fast, might be slow
  • Encrypted file accessed and ransom message appears

How It Happens

Ransomware attacks are executed via many means, including social engineering, software exploits, and “malvertising” (my second favourite non-word next to performant). The timing of these attacks is important to note as well, as some ransomware will lie dormant and launch during a specific time period (a public holiday, for example). Sometimes ransomware will slowly and periodically encrypt content, but generally speaking it will begin encrypting files as quickly as possible. It might not encrypt everything either, but you can bet that it will be a pain regardless.

Defense In Depth

Ransomware protection isn’t just about data protection though. There are many layers you need to consider (and protect), including:

  • Human – hard to control, not very good at doing what they’re told.
  • Physical – securing the locations where data is stored is important.
  • End Points – BYOD can be a pain to manage effectively, and keeping stuff up to date seems to be challenging even for the most mature organisations.
  • Networks – there’s a lot of work that needs to go into making sure workloads are both secure and accessible.
  • Application – sometimes they’re just slapped in there and we’re happy they run.
  • Data – It’s everything, but super exposed if you don’t get the rest of this right.


DobiProtect Then?

The folks at Datadobi tell me DobiProtect is the ideal solution for protecting the data layer as part of your defence in depth strategy as it is:

  • Software defined
  • Designed for the scale and complexity of file and / or object datasets
  • A solution that complements existing capabilities such as storage system snapshots
  • Easy to deploy and does not impact existing configurations
  • A solution that is cost effective and flexible


Where Does It Fit?

DobiProtect plays to the strength of Datadobi – file and object storage. As such, it’s not designed to handle your traditional VM and DB protection; that remains the domain of the usual suspects.

[image courtesy of Datadobi]

Simple Deployment

The software-only nature of the solution, and the flexibility of going between file and object, means that it’s pretty easy to deploy as well.

[image courtesy of Datadobi]

Architecture

From an architecture perspective, it’s pretty straightforward as well, with the Core handling the orchestration and monitoring, and software proxies used for data movement.

[image courtesy of Datadobi]
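
To make the Core / proxy split a little more concrete, here’s a toy Python sketch of the general orchestrator-and-proxy pattern being described. To be clear, this is my own illustration of the concept – the class and method names are made up and it bears no relation to Datadobi’s actual code:

```python
# Toy orchestrator/proxy pattern: a core plans and monitors jobs,
# while lightweight proxies do the actual data movement.
from dataclasses import dataclass

@dataclass
class CopyJob:
    source: str
    target: str

class Proxy:
    """Stand-in for a software proxy that moves data."""
    def __init__(self, name):
        self.name = name

    def move(self, job):
        # A real proxy would stream file or object data here.
        print(f"{self.name}: copying {job.source} -> {job.target}")
        return "ok"

class Core:
    """Stand-in for the core: hands out jobs and tracks their status."""
    def __init__(self, proxies):
        self.proxies = proxies
        self.status = {}

    def run(self, jobs):
        for i, job in enumerate(jobs):
            proxy = self.proxies[i % len(self.proxies)]  # naive round-robin
            self.status[job.source] = proxy.move(job)
        return self.status

core = Core([Proxy("proxy-1"), Proxy("proxy-2")])
core.run([CopyJob("nas1:/share", "s3://bucket/share"),
          CopyJob("nas2:/share", "s3://bucket/share2")])
```

The appeal of this kind of split is that the control plane stays small and central, while you can add or remove data movers as the size of the dataset demands.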


Thoughts

I’ve been involved in the data protection business in some form or another for over two decades now. As you can imagine, I’ve seen a whole bunch of different ways to solve problems. In my day job I generally promote modern approaches to solving the challenge of protecting data in an efficient and cost-effective fashion. It can be hard to do this well, at scale, across the variety of workloads you find in the modern enterprise. It’s not just some home directories, a file server, and one database that you have to protect. Now there’s SaaS workloads, 5000 different database options, containers, endpoints, and all kinds of other crazy stuff. The thing linking it all together is data, and the requirement to protect that data in order for the business to do its business – whether that’s selling widgets or providing services to the general public.

Protecting file and object workloads can be a pain. But why not just use a vendor that can roughly do the job rather than a very specific solution like DobiProtect? I asked D’Halluin the same question, and his response was along the following lines. The kind of customers Datadobi is working with on a regular basis have petabytes of unstructured data they need to protect, and they absolutely need to be sure that it’s being protected properly. Not just from a quality of recovered data perspective, but also from a defensible compliance position. It’s not just about pointing out to the auditors that the data protection solution “should” be working. There’s a lot of legislation and regulation in place to ensure that it’s more than that. So it’s oftentimes worth investing in a solution that can reliably deliver against that compliance requirement.

Ransomware attacks can be the stuff of nightmares, particularly if you aren’t prepared. Any solution that is helping you to protect yourself (and, more importantly, recover) from attacks is a Very Good Thing™. Just be sure to check that the solution you’re looking at does what you think it will do. And then check again, because it’s not a matter of if, but when.

Random Short Take #62

Welcome to Random Short Take #62. It’s Friday afternoon, so I’ll try and keep this one brief.

  • Tony was doing some stuff in his lab and needed to clean up a bunch of ports in his NSX-T segment. Read more about what happened next here.
  • Speaking of people I think of when I think of automation, Chris Wahl wrote a thought-provoking article on deep work that is well worth checking out.
  • While we’re talking about work, Nitro has published its 2022 Productivity Report. You can read more here.
  • This article from Backblaze on machine learning and predicting hard drive failure rates was interesting. Speaking of Backblaze, if you’re thinking about signing up with them, use my code and we’ll both get some free time.
  • Had a security problem? Need to recover? How do you know when to hit the big red button? Preston can help.
  • Speaking of doom and gloom (i.e. losing data), Curtis’s recent podcast episode covering ZFS and related technologies made for some great listening.
  • Have you been looking for “A Unique Technology to Scan and Interrogate Petabyte-Scale Unstructured Data Lakes”? Maybe, maybe not. If you have, Datadobi has you covered with Datadobi Query Language. You can read the press release here.
  • I love when bloggers take the time to do hands-on articles, and this one from Dennis Faucher covering VMware Tanzu Community Edition was fantastic.

Random Short Take #61

Welcome to Random Short Take #61.

  • VMworld is on this week. I still find the virtual format (and timezones) challenging, and I miss the hallway track and the jet lag. There’s nonetheless some good news coming out of the event. One thing that was announced prior to the event was Tanzu Community Edition. William Lam talks more about that here.
  • Speaking of VMworld news, Viktor provided a great summary on the various “projects” being announced. You can read more here.
  • I’ve been a Mac user for a long time, and there’s stuff I’m learning every week via Howard Oakley’s blog. Check out this article covering the Recovery Partition. While I’m at it, this presentation he did on Time Machine is also pretty ace.
  • Facebook had a little problem this week, and the Cloudflare folks have provided a decent overview of what happened. As someone who works for a service provider, this kind of stuff makes me twitchy.
  • Fibre Channel? Cloud? Chalk and cheese? Maybe. Read Chin-Fah’s article for some more insights. Personally, I miss working with FC, but I don’t miss the arguing I had to do with systems and networks people when it came to the correct feeding and watering of FC environments.
  • Remote working has been a challenge for many organisations, with some managers not understanding that their workers weren’t just watching streaming video all day, but were actually being more productive. Not everything needs to be a video call, however, and this post / presentation has a lot of great tips on what does and doesn’t work with distributed teams.
  • I’ve had to ask this question before. And Jase has apparently had to answer it too, so he’s posted an article on vSAN and external storage here.
  • This is the best response to a trio of questions I’ve read in some time.

Random Short Take #60

Welcome to Random Short Take #60.

  • VMware Cloud Director 10.3 went GA recently, and this post will point you in the right direction when it comes to planning the upgrade process.
  • Speaking of VMware products hitting GA, VMware Cloud Foundation 4.3 became available about a week ago. You can read more about that here.
  • My friend Tony knows a bit about NSX-T, and certificates, so when he bumped into an issue with NSX-T and certificates in his lab, it was no big deal to come up with the fix.
  • Here’s everything you wanted to know about creating an external bootable disk for use with macOS 11 and 12 but were too afraid to ask.
  • I haven’t talked to the good folks at StarWind in a while (I miss you Max!), but this article on the new All-NVMe StarWind Backup Appliance by Paolo made for some interesting reading.
  • I loved this article from Chin-Fah on storage fear, uncertainty, and doubt (FUD). I’ve seen a fair bit of it slung about having been a customer and partner of some big storage vendors over the years.
  • This whitepaper from Preston on some of the challenges with data protection and long-term retention is brilliant and well worth the read.
  • Finally, I don’t know how I came across this article on hacking Playstation 2 machines, but here you go. Worth a read if only for the labels on some of the discs.

Infrascale Puts The Customer First

Disclaimer: I recently attended Storage Field Day 22.  Some expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Infrascale recently presented at Storage Field Day 22. You can see videos of the presentation here, and download my rough notes from here.


Infrascale and Customer Experience

Founded in 2011, Infrascale is headquartered in Reston, Virginia, with around 170 employees and offices in Ukraine and India as well. As COO Brian Kuhn points out in the presentation, the company is “[a]ll about customers and their data”. Infrascale’s vision is “to be the most trusted data protection provider”.

Build Trust via Four Ps

Predictable

  • Reliable connections, response time, product
  • Work side by side like a dependable friend

Personal

  • People powered – partners, not numbers
  • Your success is our success

Proficient

  • Support and product experts with the right tools
  • Own the issue from beginning to end

Proactive

  • Onboarding, outreach to proactively help you
  • Identify issues before they impact your business

“Human beings dealing with human beings”


Product Portfolio

Infrascale Cloud Application Backup (ICAB)

SaaS Backup

  • Back up Microsoft 365, Google Workspace, Salesforce, Box, and Dropbox
  • Recover individual items (mail, file, or record) or entire mailboxes, folders, or databases
  • Close the retention gap between the SaaS provider and corporate, legal, and / or regulatory policy

Infrascale Cloud Backup (ICB)

Endpoint Backup

  • Back up desktop, laptop, or mobile devices directly to the cloud – wherever you work
  • Recover data in seconds – and with ease
  • Optimised for branch office and remote / home workers
  • Provides ransomware detection and remediation

Infrascale Backup and Disaster Recovery (IBDR)

Backup and DR / DRaaS for Servers

  • Back up mission-critical servers to both an on-premises appliance and a bootable cloud appliance
  • Boot ready in ~2 minutes (locally or in the cloud)
  • Restore system images or files / folders
  • Optimised for VMware and Hyper-V VMs and Windows bare metal


Digging Deeper with IBDR

What Is It?

Infrascale describes IBDR as a hybrid-cloud solution, with hardware and software on-premises, and service infrastructure in the cloud. In terms of DR as a service, Infrascale provides the ability to back up and replicate your data to a secondary location. In the event of a disaster, customers have the option to restore individual files and folders, or the entire infrastructure if required. Restore locations are flexible as well, with a choice of on-premises or in the cloud. Importantly, you also have the ability to fail back when everything’s sorted out.

One of the nice features of the service is unlimited DR and failover testing, and there are no fees attached to testing, recovery, or disaster failover.

Range

The IBDR solution also comes in a few different versions, as the table below shows.

[image courtesy of Infrascale]

The appliances are also available in a range of shapes and sizes.

[image courtesy of Infrascale]

Replication Options

In terms of replication, there are multiple destinations available, and you can fairly easily fire up workloads in the Infrascale cloud if need be.

[image courtesy of Infrascale]


Thoughts and Further Reading

Anyone who’s worked with data protection solutions will understand that it can be difficult to put together a combination of hardware and software that meets the needs of the business from a commercial, technical, and process perspective – particularly when you’re starting at a small scale and moving up from there. Putting together a managed service for data protection and disaster recovery is possibly harder still, given that you’re trying to accommodate a wide variety of use cases and workloads. And doing this using commercial off-the-shelf offerings can be a real pain. You’re invariably tied to the roadmap of the vendor in terms of features, and your timeframes aren’t normally the same as your vendor’s (unless you’re really big). So there’s a lot to be said for doing it yourself. If you can get the software stack right, understand what your target market wants, and get everything working in a cost-effective manner, you’re onto a winner.

I commend Infrascale for the level of thought the company has given to this solution, its willingness to work with partners, and the fact that it’s striving to be the best it can in the market segment it’s targeting. My favourite part of the presentation was hearing the phrase “we treat [data] like it’s our own”. Data protection, as I’ve no doubt rambled on about before, is hard, and your customers are trusting you with getting them out of a pickle when something goes wrong. I think it’s great that the folks at Infrascale have this at the centre of everything they’re doing. I get the impression that it’s “all care, all responsibility” when it comes to the approach taken with this offering. I think this counts for a lot when it comes to data protection and DR as a service offerings. I’ll be interested to see how support for additional workloads gets added to the platform, but what they’re doing now seems to be enough for many organisations. If you want to know more about the solution, the resource library has some handy datasheets, and you can get an idea of some elements of the recommended retail pricing from this document.

Cohesity DataProtect Delivered As A Service – SaaS Connector

I recently wrote about my experience with Cohesity DataProtect Delivered as a Service. One thing I didn’t really go into in that article was the networking and resource requirements for the SaaS Connector deployment. It’s nothing earth-shattering, but I thought it was worth noting nonetheless.

In terms of the VM that you deploy for each SaaS Connector, it has the following system requirements:

  • 4 CPUs
  • 10 GB RAM
  • 20 GB disk space (100 MB/s throughput, 100 IOPS)
  • Outbound Internet connection

In terms of scalability, the advice from Cohesity at the time of writing is to deploy “one SaaS Connector for each 160 VMs or 16 TB of source data. If you have more data, we recommend that you stagger their first full backups”. Note that this is subject to change. The outbound Internet connectivity is important. You’ll (hopefully) have some kind of firewall in place, so the following ports need to be open (there’s a quick connectivity check sketched after the table).

Port | Protocol | Target | Direction (from Connector) | Purpose
443 | TCP | helios.cohesity.com | Outgoing | Connection used for control path
443 | TCP | helios-data.cohesity.com | Outgoing | Used to send telemetry data
22, 443 | TCP | rt.cohesity.com | Outgoing | Support channel
11117 | TCP | *.dmaas.helios.cohesity.com | Outgoing | Connection used for data path
29991 | TCP | *.dmaas.helios.cohesity.com | Outgoing | Connection used for data path
443 | TCP | *.cloudfront.net | Outgoing | To download upgrade packages
443 | TCP | *.amazonaws.com | Outgoing | For S3 data traffic
123, 323 | UDP | ntp.google.com or internal NTP | Outgoing | Clock sync
53 | TCP & UDP | 8.8.8.8 or internal DNS | Bidirectional | Host resolution
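
If you want to sanity-check those firewall rules before deploying, something like the following will do a basic outbound reachability test. This is a quick sketch of my own, not a Cohesity-provided tool – the wildcard targets need to be replaced with the actual hostnames your connector uses, and the UDP entries (NTP and DNS) aren’t covered by a simple TCP connect test:

```python
# Basic outbound TCP reachability check for the SaaS Connector endpoints
# listed in the table above (my own sketch, not a Cohesity tool).
import socket

CHECKS = [
    ("helios.cohesity.com", 443),        # control path
    ("helios-data.cohesity.com", 443),   # telemetry
    ("rt.cohesity.com", 22),             # support channel
    ("rt.cohesity.com", 443),            # support channel
    # Wildcard targets - substitute the real hostnames for your tenant:
    # ("<something>.dmaas.helios.cohesity.com", 11117),  # data path
    # ("<something>.dmaas.helios.cohesity.com", 29991),  # data path
]

def tcp_reachable(host, port, timeout=5):
    """Return True if an outbound TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in CHECKS:
    state = "open" if tcp_reachable(host, port) else "blocked / unreachable"
    print(f"{host}:{port} -> {state}")
```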

Cohesity recommends that you deploy more than one SaaS Connector, and you can scale them out depending on the number of VMs / how much data you’re protecting with the service.
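
As a rough worked example of that sizing guidance (one connector per 160 VMs or 16 TB of source data, whichever works out to more connectors), a back-of-the-envelope calculation might look like the sketch below. The function and numbers are just my own illustration of the quoted guidance, which, as noted above, is subject to change:

```python
# Back-of-the-envelope SaaS Connector sizing: one connector per
# 160 VMs or 16 TB of source data, whichever requires more.
import math

def connectors_needed(vm_count, source_tb, vms_per_connector=160, tb_per_connector=16):
    by_vms = math.ceil(vm_count / vms_per_connector)
    by_capacity = math.ceil(source_tb / tb_per_connector)
    return max(1, by_vms, by_capacity)

print(connectors_needed(vm_count=480, source_tb=40))  # 3 connectors
```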

If you have concerns about bandwidth, you can configure the bandwidth used by the SaaS Connector via Helios.

Navigate to Settings -> SaaS Connections and click on Bandwidth Usage Options. You can then add a rule.

You then schedule bandwidth usage, potentially for quiet times (particularly useful in small environments where Internet connections may be shared with end users). There’s support for upload and download traffic, and multiple schedules as well.

And that’s pretty much it. Once you have your SaaS Connectors deployed you can monitor everything from Helios.


Random Short Take #58

Welcome to Random Short Take #58.

  • One of the many reasons I like Chin-Fah is that he isn’t afraid to voice his opinion on various things. This article on what enterprise storage is (and isn’t) made for some insightful reading.
  • VMware Cloud Director 10.3 is now GA – you can read more about it here.
  • Feeling good about yourself? That’ll be quite enough of that thanks. This article from Tom on Value Added Resellers (VARs) and technical debt goes in a direction you might not expect. (Spoiler: staff are the technical debt). I don’t miss that part of the industry at all.
  • Speaking of work, this article from Preston on being busy was spot on. I’ve worked in many places in my time where it’s simply alarming how much effort gets expended in not achieving anything. It’s funny how people deal with it in different ways too.
  • I’m not done with articles by Preston though. This one on configuring a NetWorker AFTD target with S3 was enlightening. It’s been a long time since I worked with NetWorker, but this definitely wasn’t an option back then.  Most importantly, as Preston points out, “we backup to recover”, and he does a great job of demonstrating the process end to end.
  • I don’t think I talk about data protection nearly enough on this weblog, so here’s another article from a home user’s perspective on backing up data with macOS.
  • Do you have a few Rubrik environments lying around that you need to report on? Frederic has you covered.
  • Finally, the good folks at Backblaze are changing the way they do storage pods. You can read more about that here.

*Bonus Round*

I think this is the 1000th post I’ve published here. Thanks to everyone who continues to read it. I’ll be having a morning tea soon.