Apple TV (1st Generation) – A Few Notes (2022 Edition)


It seems silly to be writing about a device that went end of life over a decade ago, but I recently came across a 1st Generation Apple TV for less than the price of a carton of domestic beer and had an itch that needed scratching. You can read about the Apple TV family of devices here. There’s also an Apple TV (1st Generation) overview here. I think I bought my first one in 2009 or 2010 – not long before the end of its usefulness, if I recall correctly. At the time (and to this day), I was fascinated by the idea of being able to stream content to a television from my computer. I’d messed about with cheap hard drive-based streaming devices, and even have a Pixel Magic HD MediaBox sitting in one of my cupboards. This was my first foray into Apple-based media handling (beyond plugging my iMac into the TV). The 1st Generation device was cool in that it could store data locally on its hard drive, synced from iTunes. Unfortunately, the hardware was a little underpowered for what you paid for it, and you were locked into the Apple ecosystem when it came to content selection. By that time I’d already invested in iTunes for music, but the video side of the equation was still a ways away from the relatively seamless experience that it is today.

Enter The FireCore

It took about 5 minutes to realise that only being able to watch Apple content was going to be a pain, so I paid for an app (aTV Flash) from FireCore that effectively enabled the Apple TV to load software like nitoTV and XBMC. You booted off a USB stick, loaded some code, and then you could run the FireCore apps and the Apple TV code at the same time. I thought it was pretty neat, although it again highlighted how the Apple TV wasn’t that great a performer when it came to watching any real variety of media formats. That said, it handled music videos pretty well, and I remember it playing standard definition DivX without too much trouble. FireCore Support is still up on the website, and I was even able to log in to my account again and download the files I needed to get up and running with this new box.

High Definition, Or More Than Standard Definition? 

I harp on a bit about the specs of the Apple TV, but it really wasn’t all that bad. If you wanted support for 1080p content, however, you really needed to install a third-party card: the Broadcom Crystal HD card (model BCM70015). FireCore support for the card is outlined here. You can view an installation guide here. There were a few different options for accessing the capabilities of the Crystal HD card, including using FireCore. I think I booted a USB stick running Crystalbuntu. There’s also an article on playing non-iTunes video that’s worth looking at.

Other Notes

You probably won’t be able to watch Netflix with it, even with the HD card installed and a working copy of XBMC. The native YouTube app won’t work anymore either, I think. And there’s no chance you can log in to the Apple servers, or watch or listen to any of your content with modern versions of Apple Music or TV. I will admit, I have some old versions of macOS running as VMs, and I haven’t fired them up to see whether I could get iTunes to connect to the Apple TV at that point. Maybe something to waste a few more hours with on the weekend.

If you try to ssh into the box, you’ll likely get a key exchange error, and you’ll need to configure your client to deal with the legacy ssh connection.

ssh -oKexAlgorithms=+diffie-hellman-group1-sha1 frontrow@<apple-tv-address>

Once you’ve done that, you can log in to the box and have a poke around. The username is frontrow. The password isn’t hard to guess.
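If you’re connecting more than once, you can bake the legacy options into your ssh client config instead of typing them every time. A minimal sketch – the `appletv` alias and the 10.0.0.123 address are placeholders for your own setup, and the HostKeyAlgorithms line may also be needed on newer OpenSSH clients, which disable ssh-rsa host keys by default:

```shell
# Create ~/.ssh if it doesn't exist, then append a host stanza for the box.
# 10.0.0.123 is an assumed address -- substitute your Apple TV's actual IP.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
cat >> ~/.ssh/config <<'EOF'
Host appletv
    HostName 10.0.0.123
    User frontrow
    KexAlgorithms +diffie-hellman-group1-sha1
    HostKeyAlgorithms +ssh-rsa
EOF
```

After that, a plain `ssh appletv` picks up the legacy options automatically.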

Also, if you’re having trouble with the Smart Installer for nitoTV, you’ll need to track down a copy of MacOSXUpdCombo10.4.9Intel.dmg and ftp it to ~/Documents. You should then be able to run the installer.
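If you’d rather not set up an FTP client, scp works over the same legacy ssh connection. The sketch below builds the transfer as a dry run (the leading echo prints the command rather than running it); the IP address is an assumption, and newer scp builds may also want the -O flag to force the legacy scp protocol:

```shell
# Dry run: print the transfer command. Drop the leading "echo" to actually copy.
# 10.0.0.123 is a placeholder for your Apple TV's address.
ATV="frontrow@10.0.0.123"
DMG="MacOSXUpdCombo10.4.9Intel.dmg"
echo scp -oKexAlgorithms=+diffie-hellman-group1-sha1 \
    -oHostKeyAlgorithms=+ssh-rsa \
    "$DMG" "${ATV}:Documents/"
```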

 

Thoughts

I get what I need from my Apple TV (4th Generation) box nowadays via Plex and various streaming services, but my fascination with these little boxes that can connect you to various media sources remains a drain on my disposable income. Just as my Boxee box is no longer anything more than a fancy paperweight, so too has the utility of my various, older generation Apple TV devices waned over time. It’s not just an interesting lesson in the useful lifecycle of technology devices (“How dare I expect something to be functional after ten years”), but also a sobering reminder of how little control we have over the content we continue to pay the big studios for. I’m sure I’ve opined over the years about the number of times I’ve purchased Enter The Dragon and various Star Wars episodes on a plethora of different formats and resolutions, never really owning a “license” to consume the movie across formats and devices. But the Apple TV (1st Generation) really brings home the fact that, even when I’ve purchased a copy of media from Apple, when and how I watch that piece of media is somewhat out of my control.

Hey, I’m not saying you need to be a weirdo like me and buy everything on physical media and then own multiple players of various formats. Heck, they’re just movies after all. And when you’re buying digital content from Apple they are reasonably clear about the fact that you’re really not in control of said media. But it’s nonetheless scary to think about how much money we plough into this stuff, just to have working devices sitting obsolete on the shelf within five years. Which reminds me, I should fire up my 2nd and 3rd Generation devices and see what they can do.

Random Short Take #66

Happy New Year. Let’s get random.

  • Excited about VMware Cloud Director releases? Me too. 10.3.2 GA was recently announced, and you can read more about that here.
  • Speaking of Cloud Director, Al Rasheed put together this great post on deploying VCD 10.3.x – you can check it out here.
  • Getting started with VMware Cloud on AWS but feeling a bit confused by some of the AWS terminology? Me too. Check out this extremely useful post on Amazon VPCs from a VMware perspective.
  • Still on VMware Cloud on AWS. So you need some help with HCX? My colleague Greg put together this excellent guide a little while ago – highly recommended. This margarita recipe is also highly recommended, if you’re into that kind of thing. 
  • Speaking of hyperscalers, Mellor put together a nice overview of Hyve Solutions here.
  • Detecting audio problems in your home theatre? Are you though? Tom Andry breaks down what you should be looking for here.  
  • Working with NSX-T and needing to delete route advertisement filters via API? Say no more.
  • Lost the password you set on that Raspbian install? Frederic has you covered.

Random Short Take #65

Welcome to Random Short Take #65. Last one for the year, I think.

  • First up, this handy article from Steve Onofaro on replacing certificates in VMware Cloud Director 10.3.1.
  • Speaking of cloud, I enjoyed this article from Chris M. Evans on the AWS “wobble” (as he puts it) in us-east-1 recently. Speaking of articles Chris has written recently, check out his coverage of the Pure Storage FlashArray//XL announcement.
  • Speaking of Pure Storage, my friend Jon wrote about his experience with ActiveCluster in the field recently. You can find that here. I always find these articles to be invaluable, if only because they demonstrate what’s happening out there in the real world.
  • Want some press releases? Here’s one from Datadobi announcing it has released new Starter Packs for DobiMigrate ranging from 1PB up to 7PB.
  • Data protection isn’t just something you do at the office – it’s a problem for home too. I’m always interested to hear how other people tackle the problem. This article from Jeff Geerling (and the associated documentation on Github) was great.
  • John Nicholson is a smart guy, so I think you should check out his articles on benchmarking (and what folks are getting wrong). At the moment this is a 2-part series, but I suspect that could be expanded. You can find Part 1 here and Part 2 here. He makes a great point that benchmarking can be valuable, but benchmarking like it’s 1999 may not be the best thing to do (I’m paraphrasing).
  • Speaking of smart people, Tom Andry put together a great article recently on dispelling myths around subwoofers. If you or a loved one are getting worked up about subwoofers, check out this article.
  • I had people ask me if I was doing a predictions post this year. I’m not crazy enough to do that, but Mellor is. You can read his article here.

In some personal news (and it’s not LinkedIn official yet) I recently quit my job and will be taking up a new role in the new year. I’m not shutting the blog down, but you might see a bit of a change in the content. I can’t see myself stopping these articles, but it’s likely there’ll be fewer of the data protection how-to articles being published. But we’ll see. In any case, wherever you are, stay safe, happy holidays, and see you on the line next year.

StorCentric Announces Nexsan Unity 7.0

Nexsan (a StorCentric company) recently announced version 7.0 of its Unity software platform. I had the opportunity to speak to StorCentric CTO Surya Varanasi about the announcement and thought I’d share a few of my thoughts here.

 

What’s New?

In short, there’s a fair bit that’s gone into this release, and I’ll cover the highlights below.

Protocol Enhancements

The Unity platform already supported FC, iSCSI, NFS, and SMB. It now supports S3 as well, making interoperability with data protection software that supports S3 as a target even simpler. It also means you can do stuff with Object Locking, and I’ll cover that below.


[image courtesy of Nexsan]

There have also been some enhancements to the speeds supported on the Unity hardware interfaces: FC now supports up to 32Gbps, and Ethernet connectivity options span 1/10/25/40/100GbE.

Security, Compliance and Ransomware Protection

Unity now supports immutable volume and file system snapshots for data protection. This provides secure point-in-time copies of data for business continuity. As I mentioned before, there’s also support for object locking, enabling bucket- or object-level protection for a specified retention period to create immutable copies of data. This allows enterprises to address compliance, regulatory, and other data protection requirements. Finally, there’s now support for pool-scrubbing to detect and remediate bit rot to avoid data corruption.

Performance Improvements

There have been increases in total throughput capability, with Varanasi telling me that total throughput has increased up to 13GB/s on existing platforms. There’s also been a significant improvement in the Unity to Assureon ingestion rate. I’ve written a little about the Unbreakable Backup solution before, and there’s a lot to like about the architecture.

[image courtesy of Nexsan]

 

Thoughts

This is the first time that Nexsan has announced enhancements to its Unity platform without incorporating some kind of hardware refresh, so the company is testing the waters in some respects. I think it’s great when storage companies are able to upgrade their existing hardware platforms with software, offering improved performance and functionality. There’s a lot to like in this release, particularly when it comes to the improved security and data integrity capabilities. Sure, not everyone wants object storage available on their midrange storage array, but it makes it a lot more accessible, particularly if you only need a few hundred TB of object storage. The object lock capability, along with the immutable snapshotting for SMB and NFS users, really helps improve the overall integrity and resiliency of the platform as well.

StorCentric now has a pretty broad portfolio of storage and data protection products available, and you can see the integrations between the different lines are only going to increase as time goes on. The company has been positioning itself as a data-centric company for some time, and working hard to ensure that improved security is a big part of that solution. I think there’s a great story here for customers looking to leverage one vendor to deliver storage, data protection, and data security capabilities into the enterprise. The bad guys in hoodies are always looking for ways to make your day unpleasant, so when vendors are working to tighten up their integrations across a variety of products, it can only be a good thing in terms of improving the resilience and availability of your critical information assets. I’m looking forward to hearing what’s next with Nexsan and StorCentric.

Pure Storage – A Few Thoughts on Pure as-a-Service

I caught up with Matt Oostveen from Pure Storage in August to talk about Pure as-a-Service. It’s been a while since any announcements were made, but I’ve been meaning to write up a few notes on the offering and what I thought of it. So here we are.

 

What Is It?

Oostveen describes Pure Storage as a “software company that sells storage arrays”. The focus at Pure has always been on giving the customer an exceptional experience, which invariably means controlling the stack from end to end. To that end, Pure as-a-Service could be described more as a feat of financial, rather than technical, engineering. You’re “billed on actual consumption, with minimum commitments starting at 50 TiB”. Also of note is the burst capability, giving you a level of comfort in understanding both the floor and the ceiling of your consumption. You can choose what kind of storage you want – block, file, or object – and you get access to orchestration tools to manage everything. You also get Evergreen Storage, so your hardware stays up to date, and it’s all available in four easy-to-understand tiers of storage.

 

Why Is It?

In this instance, I think the what isn’t as interesting as the why. Oostveen and I spoke about the need for a true utility model to enable companies to deliver on the promise of digital transformation. He noted that many of the big transactions that were occurring were CFO to CFO engagements, rather than the CTO deciding on the path forward for applications and infrastructure. In short, price is always a driver, and simplicity is also very important. Pure has worked to ensure that the offering delivers on both of those fronts.

 

Thoughts

IT is complicated nowadays. You’re dealing with cloud, SaaS, micro-SaaS, distributed, and personalised IT. You’re invariably trying to accommodate the role of data in your organisation, and you’re no doubt facing challenges with getting applications running not just in your core, but also in the cloud and the edge. We talk a lot about how infrastructure can be used to solve a number of the challenges facing organisations, but I have no doubt that if most business leaders never had to deal with infrastructure and the associated challenges it presents they’d be over the moon. Offerings like Pure as-a-Service go some of the way to elevating that conversation from speeds and feeds to something more aligned with business outcomes. It strikes me that these kinds of offerings will have great appeal to both the folks in charge of finance inside big enterprises and potentially the technical folk trying to keep the lights on whilst a budget decrease gets lobbed at them every year.

I’ve written about Pure enthusiastically in the past because I think the company has a great grasp of some of the challenges that many organisations are facing nowadays. I think that the expansion into other parts of the cloud ecosystem, combined with a willingness to offer flexible consumption models for solutions that were traditionally offered as lease or buy is great. But I don’t think this makes sense without everything that Pure has done previously as a company, from the focus on getting the most out of All-Flash hardware, to a relentless drive for customer satisfaction, to the willingness to take a chance on solutions that are a little outside the traditional purview of a storage array company.

As I’ve said many times before, IT can be hard. There are a lot of things that you need to consider when evaluating the most suitable platform for your applications. Pure Storage isn’t the only game in town, but in terms of storage vendors offering flexible and powerful storage solutions across a variety of topologies, it seems to be a pretty compelling one, and definitely worth checking out.

Random Short Take #64

Welcome to Random Short Take #64. It’s the start of the last month of the year. We’re almost there.

  • Want to read an article that’s both funny and informative? Look no further than this beginner’s guide to subnetting. I did Elizabethan literature at uni, so it was good to get a reminder on Shakespeare’s involvement in IP addressing.
  • Continuing with the amusing articles, Chris Colotti published a video of outtakes from some Cohesity lightboard sessions that had me cracking up. It’s always nice when people don’t take themselves too seriously.
  • On a more serious note, data hoarding is a problem (I know this because I’ve been guilty of it), and this article from Preston outlines some of the reasons why it can be a bad thing for business.
  • Still on data protection, Howard Oakley looks at checking the integrity of Time Machine backups in this post. I’ve probably mentioned this a few times previously, but if you find macOS behaviour baffling at times, Howard likely has an article that can explain why you’re seeing what you’re seeing.
  • Zerto recently announced Zerto In-Cloud for AWS – you read more about that here. Zerto is really starting to put together a comprehensive suite of DR solutions. Worth checking out.
  • Still on press releases, Datadobi has announced new enhancements to DobiMigrate with 5.13. The company also recently validated Google Cloud Storage as an endpoint for its DobiProtect solution.
  • Leaseweb Global is also doing stuff with Google Cloud – you can read more about that here.
  • Finally, this article over at Blocks and Files on what constitutes a startup made for some interesting reading. Some companies truly are Peter Pans at this point, whilst others are holding on to the idea that they’re still in startup mode.

22dot6 Releases TASS Cloud Suite

22dot6 sprang from stealth in May 2021 and recently announced its TASS Cloud Suite. I had the opportunity to once again catch up with Diamond Lauffin about the announcement, and thought I’d share some thoughts here.

 

The Product

If you’re unfamiliar with the 22dot6 product, it’s basically a software or hardware-based storage offering that delivers:

  • File and storage management
  • Enterprise-class data services
  • Data and systems profiling and analytics
  • Performance and scalability
  • Virtual, physical, and cloud capabilities, with NFS, SMB, and S3 mixed protocol support

According to Lauffin, it’s built on a scale-out, parallel architecture, and can deliver great pricing and performance per GiB.

Components

It’s Linux-based, and can leverage any bare-metal machine or VM. Metadata services live on scale-out, redundant nodes (VSR nodes), and data services are handled via single, clustered, or redundant nodes (DSX nodes).

[image courtesy of 22dot6]

TASS

The key to this all making some kind of sense is TASS (the Transcendent Abstractive Storage System). 22dot6 describes this as a “purpose-built, objective based software integrating users, applications and data services with physical, virtual and cloud-based architectures globally”. Sounds impressive, doesn’t it? Valence is the software that drives everything, providing the ability to deliver NAS and object over physical and virtual storage, in on-premises, hybrid, or public cloud deployments. It’s multi-vendor capable, offering support for third-party storage systems, and does some really neat stuff with analytics to ensure your storage is performing the way you need it to.

 

The Announcement

22dot6 has announced the TASS Cloud Suite, an “expanded collection of cloud specific features to enhance its universal storage software Valence”. Aimed at solving many of the typical problems users face when using cloud storage, it addresses:

  • Private cloud, with a “point-and-click transcendent capability to easily create an elastic, scale-on-demand, any storage, anywhere, private cloud architecture”
  • Hybrid cloud, by combining local and cloud resources into one big pool of storage
  • Cloud migration and mobility, with a “zero stub, zero pointer” architecture
  • Cloud-based NAS / Block / S3 Object consolidation, with a “transparent, multi-protocol, cross-platform support for all security and permissions with a single point-and-click”

There’s also support for cloud-based data protection, WORM encoding of data, and a comprehensive suite of analytics and reporting.

 

Thoughts and Further Reading

I’ve had the pleasure of speaking to Lauffin about 22dot6 on two occasions now, and I’m convinced that he’s probably one of the most enthusiastic storage company founders / CEOs I’ve ever been briefed by. He’s certainly been around for a while, and has seen a whole bunch of stuff. In writing this post I’ve had a hard time articulating everything that Lauffin tells me 22dot6 can do, while staying focused on the cloud part of the announcement. Clearly I should have done an overview post in May and then I could just point you to that. In short, go have a look at the website and you’ll see that there’s quite a bit going on with this product.

The solution seeks to address a whole raft of issues that anyone familiar with modern storage systems will have come across at one stage or another. I remain continually intrigued by how various solutions work to address storage virtualisation challenges, while still making a system that works in a seamless manner. Then try and do that at scale, and in multiple geographical locations across the world. It’s not a terribly easy problem to solve, and if Lauffin and his team can actually pull it off, they’ll be well placed to dominate the storage market in the near future.

Spend any time with Lauffin and you realise that everything about 22dot6 speaks to many of the lessons learned over years of experience in the storage industry, and it’s refreshing to see a company trying to take on such a wide range of challenges and fix everything that’s wrong with modern storage systems. What I can’t say for sure, having never had any real stick time with the solution, is whether it works. In Lauffin’s defence, he has offered to get me in contact with some folks for a demo, and I’ll be taking him up on that offer. There’s a lot to like about what 22dot6 is trying to do here, with the TASS Cloud Suite being a small part of the bigger picture. I’m looking forward to seeing how this goes for 22dot6 over the next year or two, and will report back after I’ve had a demo.

StorONE Announces S1:Backup

StorONE recently announced details of its S1:Backup product. I had the opportunity to talk with Gal Naor and George Crump about the announcement and thought I’d share some brief thoughts here.

 

The Problem

Talk to people in the tech sector today, and you’ll possibly hear a fair bit about how ransomware is a real problem for them, and a scary one at that. Most of the data protection solution vendors are talking about how they can help customers quickly recover from ransomware events, and some are particularly excited about how they can let you know you’ve been hit in a timely fashion. Which is great. A good data protection solution is definitely important to an organisation’s ability to rapidly recover when things go pop. But what about those software-based solutions that themselves have become targets of the ransomware gangs? What do you do when someone goes after both your primary and secondary storage solution? It costs a lot of money to deliver immutable solutions that are resilient to the nastiness associated with ransomware. Unfortunately, most organisations continue to treat data protection as an overpriced insurance policy and are reluctant to spend more than the bare minimum to keep these types of solutions going. It’s alarming the number of times I’ve spoken to customers using software-based data protection solutions that are out of support with the vendor just to save a few thousand dollars a year in maintenance costs.

 

The StorONE Solution

So what do you get with S1:Backup? Quite a bit, as it happens.

[image courtesy of StorONE]

You get Flash-based data ingestion in an immutable format, with snapshots being taken every 30 seconds.

[image courtesy of StorONE]

You also get fast consolidation of multiple incremental backup jobs (think synthetic fulls, etc.), thanks to the high performance of the StorONE platform. Speaking of performance, you also get quick recovery capabilities, and the other benefits of the StorONE platform (namely high availability and high performance).

And if you’re looking for long term retention that’s affordable, you can take advantage of StorONE’s ability to cope well with 90% capacity utilisation, rapid RAID rebuild times, and the ability to start small and grow.

 

Thoughts and Further Reading

Ransomware is a big problem, particularly when it hits you across both primary and secondary storage platforms. Storage immutability has become a super important piece of the puzzle that vendors are trying to solve. Like many things though, it does require some level of co-operation to make sure non-integrated systems are functioning across the stack in an integrated fashion. There are all kinds of ways to attack this issue, with some hardware vendors insisting that their particular interpretation of immutability is the only way to go, while some software vendors are quite keen on architecting air gaps into solutions to get around the problem. And I’m sure there’s a tape guy sitting up the back muttering about how tape is the ultimate air gap. Whichever way you want to look at it, I don’t think any one vendor has the solution that is 100% guaranteed to keep you safe from the folks in hoodies intent on trashing your data. So I’m pleased that StorONE is looking at this problem and wanting to work with the major vendors to develop a cost-effective solution to the issue. It may not be right for everyone, and that’s fine. But on the face of it, it certainly looks like a compelling solution when compared to rolling your own storage platforms and hoping that you don’t get hit.

Doing data protection well is hard, and made harder by virtue of the fact that many organisations treat it as a necessary evil. Sadly, it seems that CxOs only really start to listen after they’ve been rolled, not beforehand. Sometimes the best you can do is be prepared for when disaster strikes. If something like the StorONE solution is going to be the difference between losing the whole lot, or coming back from an attack quickly, it seems like it’s worth checking out. I can assure you that ignoring the problem will only end in tears. It’s also important to remember that a robust data protection solution is just another piece of the puzzle. You still need to look at your overall security posture, including securing your assets and teaching your staff good habits. Finally, if it seems like I’m taking aim at software-based solutions, I’m not. I’m the first to acknowledge that any system is susceptible if it isn’t architected and deployed in a secure fashion – regardless of whether it’s integrated or not. Anyway, if you’d like another take on the announcement, Mellor covered it here.

Random Short Take #63

Welcome to Random Short Take #63. It’s Friday morning, and the weekend is in sight.

  • I really enjoyed this article from Glenn K. Lockwood about how just looking for an IOPS figure can be a silly thing to do, particularly with HPC workloads. “If there’s one constant in HPC, it’s that everyone hates I/O.  And there’s a good reason: it’s a waste of time because every second you wait for I/O to complete is a second you aren’t doing the math that led you to use a supercomputer in the first place.”
  • Speaking of things that are a bit silly, it seems like someone thought getting on the front foot with some competitive marketing videos was a good idea. It rarely is though.
  • Switching gears a little, you may have been messing about with Tanzu Community Edition and asking yourself how you could SSH to a node. Ask no more, as Mark has your answer.
  • Speaking of storage companies that are pretty pleased with how things are going, Weka has put out this press release on its growth.
  • Still on press releases, Imply had some good news to share at Druid Summit recently.
  • Intrigued by Portworx and want to know more? Check out these two blog posts on configuring multi-cloud application portability (here and here) – they are excellent. Hat tip to my friend Mike at Pure Storage for the links.
  • I loved this article on project heroics from Chris Wahl. I’ve got a lot more to say about this and the impact this behaviour can have on staff but some of it is best not committed to print at this stage.
  • Finally, I replaced one of my receivers recently and cursed myself once again for not using banana plugs. They just make things a bit easier to deal with.

Datadobi, DobiProtect, and Forward Progress

I recently had the opportunity to speak with Carl D’Halluin from Datadobi about DobiProtect, and thought I’d share some thoughts here. I wrote about DobiProtect in the past, particularly in relation to disaster recovery and air gaps. Things have progressed since then, as they invariably do, and there’s a bit more to the DobiProtect story now.

 

Ransomware Bad, Data Protection Good

If you’re paying attention to any data protection solution vendors at the moment, you’re no doubt hearing about ransomware attacks. These are considered to be Very Bad Things™.

What Happens

  • Ransomware comes in through zero-day exploit or email attachments
  • Local drive content encrypted
  • Network shares encrypted – might be fast, might be slow
  • Encrypted file accessed and ransom message appears

How It Happens

Ransomware attacks are executed via many means, including social engineering, software exploits, and “malvertising” (my second favourite non-word next to performant). The timing of these attacks is important to note as well, as some ransomware will lay dormant and launch during a specific time period (a public holiday, for example). Sometimes ransomware will slowly and periodically encrypt content, but generally speaking it will begin encrypting files as quickly as possible. It might not encrypt everything either, but you can bet that it will be a pain regardless.

Defense In Depth

Ransomware protection isn’t just about data protection though. There are many layers you need to consider (and protect), including:

  • Human – hard to control, not very good at doing what they’re told.
  • Physical – securing the locations where data is stored is important.
  • End Points – BYOD can be a pain to manage effectively, and keeping stuff up to date seems to be challenging for the most mature organisations.
  • Networks – there’s a lot of work that needs to go into making sure workloads are both secure and accessible.
  • Application – sometimes they’re just slapped in there and we’re happy they run.
  • Data – It’s everything, but super exposed if you don’t get the rest of this right.

 

DobiProtect Then?

The folks at Datadobi tell me DobiProtect is the ideal solution for protecting the data layer as part of your defence in depth strategy as it is:

  • Software defined
  • Designed for the scale and complexity of file and / or object datasets
  • A solution that complements existing capabilities such as storage system snapshots
  • Easy to deploy and does not impact existing configurations
  • A solution that is cost effective and flexible

 

Where Does It Fit?

DobiProtect plays to the strength of Datadobi – file and object storage. As such, it’s not designed to handle your traditional VM and DB protection; that remains the domain of the usual suspects.

[image courtesy of Datadobi]

Simple Deployment

The software-only nature of the solution, and the flexibility of going between file and object, means that it’s pretty easy to deploy as well.

[image courtesy of Datadobi]

Architecture

From an architecture perspective, it’s pretty straightforward as well, with the Core handling orchestration and monitoring, and software proxies used for data movement.

[image courtesy of Datadobi]

 

Thoughts

I’ve been involved in the data protection business in some form or another for over two decades now. As you can imagine, I’ve seen a whole bunch of different ways to solve problems. In my day job I generally promote modern approaches to solving the challenge of protecting data in an efficient and cost-effective fashion. It can be hard to do this well, at scale, across the variety of workloads that you find in the modern enterprise nowadays. It’s not just some home directories, a file server, and one database that you have to protect. Now there’s SaaS workloads, 5000 different database options, containers, endpoints, and all kinds of other crazy stuff. The thing linking that all together is data, and the requirement to protect that data in order for the business to do its business – whether that’s selling widgets or providing services to the general public.

Protecting file and object workloads can be a pain. But why not just use a vendor that can roughly do the job rather than using a very specific solution like DobiProtect? I asked D’Halluin the same question, and his response was along the following lines. The kind of customers Datadobi is working with on a regular basis have petabytes of unstructured data they need to protect, and they absolutely need to be sure that it’s being protected properly. Not just from a quality of recovered data perspective, but also from a defensible compliance position. It’s not just about pointing out to the auditors that the data protection solution “should” be working. There’s a lot of legislation in place requiring it to be more than that. So it’s oftentimes worth investing in a solution that can reliably deliver against that compliance requirement.

Ransomware attacks can be the stuff of nightmares, particularly if you aren’t prepared. Any solution that is helping you to protect yourself (and, more importantly, recover) from attacks is a Very Good Thing™. Just be sure to check that the solution you’re looking at does what you think it will do. And then check again, because it’s not a matter of if, but when.