Verity ES Springs Forth – Promises Swift Eradication of Data

Verity ES recently announced its official company launch and the commercial availability of its Verity ES data eradication enterprise software solution. I had the opportunity to speak to Kevin Enders about the announcement and thought I’d briefly share some thoughts here.

 

From Revert to Re-birth?

Revert, a sister company of Verity ES, is an on-site data eradication service provider. It’s also a partner for a number of Storage OEMs.

The Problem

The folks at Revert have had an awful lot of experience with data eradication in big enterprise environments. With that experience, they’d observed a few challenges, namely:

  • The software doing the data eradication was too slow;
  • Eradicating data in enterprise environments introduced particular requirements at high volumes; and
  • Larger-capacity HDDs and SSDs were a real problem to deal with.

The Real Problem?

Okay, so the process to get rid of old data on storage and compute devices is a bit of a problem. But what’s the real problem? Organisations need to get rid of end-of-life data – particularly from a legal standpoint – in a more efficient way. Just as data growth continues to explode, so too does the requirement to delete the old data.

 

The Solution

Verity ES was spawned to develop software to solve a number of the challenges Revert was coming across in the field. There are two ways to eliminate the data:

  • Destructively (via device shredding / degaussing); or
  • Non-destructively (using software-based eradication).

Why Eradicate?

Why eradicate? It’s a sustainable approach, enables residual value recovery, and allows for asset re-use. But it nonetheless needs to be secure, economical, and operationally simple. How does Verity ES address these requirements? It has Product Assurance Certification from ADISA. It’s also developed software that’s more efficient, particularly when it comes to those troublesome high-capacity drives.

[image courtesy of Verity ES]

Who’s Buying?

Who’s this product aimed at? Primarily enterprise DC operators, hyperscalers, IT asset disposal companies, and 3rd-party hardware maintenance providers.

 

Thoughts

If you’ve spent any time on my blog you’ll know that I write a whole lot about data protection, and this is probably one of the first times that I’ve written about data destruction as a product. But it’s an interesting problem that many organisations are facing now. There is a tonne of data being generated every day, and some of that data needs to be gotten rid of, either because it’s sitting on equipment that’s old and needs to be retired, or because legislatively there’s a requirement to get rid of the data.

The way we tackle this problem has changed over time too. One of the most popular articles on this blog was about making an EMC CLARiiON CX700 useful again after EMC did a certified erasure on the array. There was no data to be found on the array, but it was able to be repurposed as lab equipment, and enjoyed a few more months of usefulness. In the current climate, we’re all looking at doing more sensible things with our old disk drives, rather than simply putting a bullet in them (except for the Feds – but they’re a bit odd). Doing this at scale can be challenging, so it’s interesting to see Verity ES step up to the plate with a solution that promises to help with some of these challenges. It takes time to wipe drives, particularly when you need to do it securely.
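To put some rough numbers on that last point, here’s a back-of-the-envelope calculation. The drive size, sustained write speed, and pass count below are my own assumptions for illustration, not figures from Verity ES.

```python
# Back-of-the-envelope: how long does a software wipe take?
# Assumed figures (illustrative only): an 18 TB HDD sustaining
# 250 MB/s sequential writes, with a three-pass overwrite.
capacity_bytes = 18e12   # 18 TB (decimal, as drives are sold)
write_speed = 250e6      # 250 MB/s sustained
passes = 3               # e.g. a multi-pass overwrite standard

hours_per_pass = capacity_bytes / write_speed / 3600
print(f"Per pass: {hours_per_pass:.1f} h, total: {hours_per_pass * passes:.1f} h")
# Per pass: 20.0 h, total: 60.0 h
```

Multiply that across a few thousand drives in a decommissioning project and the appeal of more efficient eradication software becomes obvious.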

I should be clear that this product doesn’t go out and identify what data needs to be erased – you have to do that through some other tools. So it won’t tell you that a bunch of PII is buried in a home directory somewhere, or sitting in a spot it shouldn’t be. It also won’t go out and dig through your data protection data and tell you what needs to go. Hopefully, though, you’ve got tools that can handle that problem for you. What this solution does seem to do is provide organisations with options when it comes to cost-effective, efficient data eradication. And that’s something that’s going to become crucial as we continue to generate data, need to delete old data, and do so on larger and larger disk drives.

VMware Cloud on AWS – I4i.metal – A Few Notes …

At VMware Explore 2022 in the US, VMware announced a number of new offerings for VMware Cloud on AWS, including a new bare-metal instance type: the I4i.metal. You can read the official blog post here. I thought it would be useful to provide some high-level details and cover some of the caveats that punters should be aware of.

 

By The Numbers

What do you get from a specifications perspective?
  • The CPU is 3rd generation Intel Xeon Ice Lake @ 2.4GHz / Turbo 3.5GHz
  • 64 physical cores, supporting 128 logical cores with Hyper Threading (HT)
  • 1024 GiB memory
  • 30 TiB NVMe (Raw local capacity)
  • Up to 75 Gbps networking speed
So, how does the I4i.metal compare with the i3.metal? You get roughly 2x compute, storage, and memory, with improved network speed as well.

FAQ Highlights

Can I use custom core counts? Yep, the I4i will support physical custom core counts of 8, 16, 24, 30, 36, 48, and 64.

Is there stretched cluster support? Yes, you can deploy these in stretched clusters (of the same host type).

Can I do in-cluster conversions? Yes, read more about that here.

Other Considerations

Why does the sizer say 20 TiB useable for the I4i? Around 7 TiB is consumed by the cache tier at the moment, so you’ll see different numbers in the sizer. And your useable storage numbers will obviously be impacted by the usual constraints around failures to tolerate (FTT) and RAID settings.
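To make the FTT / RAID impact a little more concrete, here’s a rough sketch using the standard vSAN storage policy multipliers. The ~20 TiB usable figure comes from the sizer discussion above; everything else is illustrative, and the real sizer output will differ.

```python
# Rough effective-capacity sketch for an I4i.metal host
# (~30 TiB raw NVMe, ~20 TiB usable once the cache tier is carved out).
usable_tib = 20.0

raid_overhead = {
    "FTT=1, RAID-1": 2.0,    # mirror: 2x raw per usable TiB
    "FTT=1, RAID-5": 4 / 3,  # 3+1 erasure coding
    "FTT=2, RAID-6": 1.5,    # 4+2 erasure coding
}

for policy, factor in raid_overhead.items():
    print(f"{policy}: ~{usable_tib / factor:.1f} TiB effective capacity")
```
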
Region Support?
The I4i.metal instances will be available in the following Regions (and Availability Zones):
  • US East (N. Virginia) – use1-az1, use1-az2, use1-az4, use1-az5, use1-az6
  • US West (Oregon) – usw2-az1, usw2-az2, usw2-az3, usw2-az4
  • US West (N. California) – usw1-az1, usw1-az3
  • US East (Ohio) – use2-az1, use2-az2, use2-az3
  • Canada (Central) – cac1-az1, cac1-az2
  • Europe (Ireland) – euw1-az1, euw1-az2, euw1-az3
  • Europe (London) – euw2-az1, euw2-az2, euw2-az3
  • Europe (Frankfurt) – euc1-az1, euc1-az2, euc1-az3
  • Europe (Paris) –  euw3-az1, euw3-az2, euw3-az3
  • Asia Pacific (Singapore) – apse1-az1, apse1-az2, apse1-az3
  • Asia Pacific (Sydney) – apse2-az1, apse2-az2, apse2-az3
  • Asia Pacific (Tokyo) – apne1-az1, apne1-az2, apne1-az4

Other Regions will have availability over the coming months.

 

Thoughts

The i3.metal isn’t going anywhere, but it’s nice to have an option that supports more cores and a bit more storage and RAM. The I4i.metal is great for SQL workloads and VDI deployments where core count can really make a difference. Coupled with the addition of supplemental storage via VMware Cloud Flex Storage and Amazon FSx for NetApp ONTAP, there are some great options available to deal with the variety of workloads customers are looking to deploy on VMware Cloud on AWS.

On another note, if you want to hear more about all the cloudy news from VMware Explore US, I’ll be presenting at the Brisbane VMUG meeting on October 12th, and my colleague Ray will be doing something in Sydney on October 19th. If you’re in the area, come along.

StorONE Announces Per-Drive Licensing Model

StorONE recently announced details of its Per-Drive Licensing Model. I had the opportunity to talk with Gal Naor and George Crump about the news and thought I’d share some brief thoughts here.

 

Scale For Free?

Yes, at least from a licensing perspective. If you’ve bought storage from many of the traditional array vendors over the years, you’ve likely paid for capacity-based licensing. Every time you upgraded the capacity of your array, there was usually a charge associated with that upgrade, beyond the hardware uplift costs. The folks at StorONE think it’s probably time customers stopped being punished for using higher-capacity drives, so they’re shifting everything to a per-drive model.

How it Works

As I mentioned at the start, StorONE Scale-For-Free pricing is charged on a per-drive basis rather than metering capacity, so you can use the highest-capacity, highest-density drives without penalty. The pricing breaks down as follows:

  • Price per HDD $/month
  • Price per SSD $/month
  • Minimum $/month
  • Cloud Use Case – $ per month by VM instance required

The idea is that this ultimately lowers the storage price per TB and brings some level of predictability to storage pricing.
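As a quick illustration of why that matters, consider the sketch below. All of the prices are invented for the sake of the example; StorONE hasn’t published these figures.

```python
# Hypothetical comparison of per-drive vs per-TB licensing
# (all prices invented for illustration).
drives = 24
drive_tb = 20            # capacity per drive
per_drive_month = 15.0   # assumed $/drive/month
per_tb_month = 1.5       # assumed $/TB/month for a capacity-based model

raw_tb = drives * drive_tb
print(f"Per-drive model: ${drives * per_drive_month:,.0f}/month")
print(f"Capacity model:  ${raw_tb * per_tb_month:,.0f}/month")
# Swap the 20 TB drives for 30 TB drives and the per-drive bill
# stays flat, while the capacity-based bill grows by 50%.
```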

How?

The key to this model is the availability of some key features in the StorONE solution, namely:

  • A rewritten and collapsed I/O stack (meaning do more with a whole lot less)
  • Auto-tiering improvements (leading to more consistent and predictable performance across HDD and SSD)
  • High performance erasure coding (meaning super fast recovery from drive failure)

 

But That’s Not All

Virtual Storage Containers

With Virtual Storage Containers (VSC), you can apply different data services and performance profiles to different workloads (hosted on the same media) in a granular and flexible fashion. For example, if you need 4 drives and 50,000 IOPS for your File Services, you can do that. In the same environment you might also need to use a few drives for Object storage with different replication. You can do that too.

[image courtesy of StorONE]
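As a rough illustration of the idea, you could imagine VSC profiles expressed something like the sketch below. The field names and values are mine, invented for illustration; this is not StorONE’s actual interface.

```python
# Hypothetical sketch of Virtual Storage Container profiles sharing
# the same underlying media with different service levels.
vscs = [
    {"name": "file-services", "protocol": "NFS/SMB", "drives": 4,
     "iops_limit": 50_000, "replication": "sync"},
    {"name": "object-archive", "protocol": "S3", "drives": 3,
     "iops_limit": 5_000, "replication": "async"},
]

for vsc in vscs:
    print(f"{vsc['name']}: {vsc['drives']} drives, "
          f"{vsc['iops_limit']:,} IOPS cap, {vsc['replication']} replication")
```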

Ransomware Detection (and Protection)

StorONE has been pretty keen on its ransomware protection capabilities, with the option to take immutable snapshots of volumes every 30 seconds and store 500,000+ snapshots per volume. But it has also added some improved telemetry to enable earlier detection of potential ransomware events on volumes, as well as introducing dual-key deletion of snapshots and improved two-factor authentication.
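For a sense of what those numbers buy you, here’s the simple arithmetic on the figures quoted above:

```python
# How far back do 500,000 snapshots reach at one every 30 seconds?
snaps = 500_000
interval_s = 30

days = snaps * interval_s / 86_400
print(f"{days:.0f} days of 30-second snapshots")  # ~174 days
```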

 

Thoughts

There are many things that are certain in life, including the fact that no matter how much capacity you buy for your storage array on day one, by month 18 you’re looking at ways to replace some of those drives with higher-capacity ones. In my former life as a diskslinger I helped many customers upgrade their arrays with increased capacity drives, and most, if not all, of them had to pay a licensing bump as well as a hardware cost for the privilege. The storage vendors would argue that that’s just the model, and for as long as you can get away with it, it is. Particularly when hardware is getting cheaper and cheaper, you need something to drive revenue. So it’s nice to see a company like StorONE looking to shake things up a little in an industry that’s probably had its way with customers for a while now. Not every storage vendor is looking to punish customers for expanding their environments, but it’s nice that those customers who were struggling with this have the option to look at other ways of getting the capacity they need in a cost-effective and predictable manner.

This doesn’t really work without the other enhancements that have gone into StorONE, such as the improved erasure coding and automated tiering. Having a cool business model isn’t usually enough to deliver a great solution. I’m looking forward to hearing from the StorONE team in the near future about how this has been received by both existing and new customers, and what other innovations they come out with in the next 12 months.

Datadobi Announces StorageMAP

Datadobi recently announced StorageMAP – a “solution that provides a single pane of glass for organizations to manage unstructured data across their complete data storage estate”. I recently had the opportunity to speak with Carl D’Halluin about the announcement, and thought I’d share some thoughts here.

 

The Problem

So what’s the problem enterprises are trying to solve? They have data all over the place, and it’s no longer a simple activity to work out what’s useful and what isn’t. Consider the data on a typical file / object server inside BigCompanyX.

[image courtesy of Datadobi]

As you can see, there’re all kinds of data lurking about the place, including data you don’t want to have on your server (e.g. Barry’s slightly shonky home videos), and data you don’t need any more (the stuff you can move down to a cheaper tier, or even archive for good).

What’s The Fix?

So how do you fix this problem? Traditionally, you’ll try and scan the data to understand things like capacity, categories of data, age, and so forth. You’ll then make some decisions about the data based on that information and take actions such as relocating, deleting, or migrating it. Sounds great, but it’s frequently a tough thing to make decisions about business data without understanding the business drivers behind the data.

[image courtesy of Datadobi]
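If you’ve ever rolled your own version of this, it probably looked something like the minimal sketch below: walk the tree, bucket files by age, and tally capacity. This is a generic illustration (with a made-up share path), not anything Datadobi ships.

```python
# Minimal "scan first" sketch: bucket files by age and sum capacity.
import os
import time
from collections import Counter

def scan(root: str) -> Counter:
    buckets = Counter()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            try:
                st = os.stat(os.path.join(dirpath, name))
            except OSError:
                continue  # skip unreadable files
            age_years = (time.time() - st.st_mtime) / (365 * 86_400)
            bucket = ("hot (<1y)" if age_years < 1
                      else "warm (1-3y)" if age_years < 3
                      else "cold (>3y)")
            buckets[bucket] += st.st_size
    return buckets

for bucket, size in scan("/data/share").items():  # hypothetical path
    print(f"{bucket}: {size / 1e12:.2f} TB")
```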

What’s The Real Fix?

The real fix, according to Datadobi, is to add a bit more automation and smarts to the process, and this relies heavily on accurate tagging of the data you’re storing. D’Halluin pointed out to me that they don’t suggest you create complex tags for individual files, as you could be there for years trying to sort that out. Rather, you add tags to shares or directories, and let the StorageMAP engine make recommendations and move stuff around for you.

[image courtesy of Datadobi]

Tags can represent business ownership, the role of the data, any action to be taken, or other designations, and they’re user definable.

[image courtesy of Datadobi]
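To make the share-level tagging idea concrete, here’s a hypothetical sketch of tags driving simple recommendations. The tag names and rules are invented for illustration; Datadobi’s actual engine is, I’m sure, rather more sophisticated.

```python
# Hypothetical share-level tags feeding a simple recommendation rule.
shares = {
    "/shares/finance":   {"owner": "Accounting",  "role": "records", "action": "retain"},
    "/shares/scratch":   {"owner": "Engineering", "role": "temp",    "action": "review"},
    "/shares/marketing": {"owner": "Marketing",   "role": "media",   "action": "tier-down"},
}

def recommend(tags: dict) -> str:
    if tags["action"] == "tier-down":
        return "relocate to a cheaper tier"
    if tags["role"] == "temp":
        return "flag for deletion review"
    return "leave in place"

for share, tags in shares.items():
    print(f"{share}: {recommend(tags)}")
```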

How Does This Fix It?

You’ll notice that the process above looks awfully similar to the one before – so how does this fix anything? The key, in my opinion at least, is that StorageMAP takes away the requirement for intervention from the end user. Instead of going through some process every quarter to “clean up the server”, you’ve got a process in place to do the work for you. As a result, you’ll hopefully see improved cost control, better storage efficiency across your estate, and (hopefully) you’ll be getting a little bit more value from your data.

 

Thoughts

Tools that take care of everything for you have always had massive appeal in the market, particularly as organisations continue to struggle with data storage at any kind of scale. Gone are the days when your admins had an idea where everything on a 9GB volume was stored, or why it was stored there. We now have data stored all over the place (both officially and unofficially), and it’s becoming impossible to keep track of it all.

The key thing to consider with these kinds of solutions is that you need to put in the work to tag your data correctly in the first place. So there needs to be some thought put into what your data looks like in terms of business value. Remember that mp4 video files might not be warranted in the Accounting department, but your friends in Marketing will be underwhelmed if you create some kind of rule to automatically zap mp4s. The other thing to consider is that you need to put some faith in the system. This kind of solution will be useless if folks insist on not deleting anything, or not “believing” the output of the analytics and reporting. I used to work with customers who didn’t want to trust a vendor’s automated block storage tiering because “what does it know about my workloads?”. Indeed. The success of these kinds of intelligence and automation tools relies to a certain extent on folks moving away from faith-based computing as an operating model.

But enough ranting from me. I’ve covered Datadobi a bit over the last few years, and it makes sense that all of these announcements have finally led to the StorageMAP product. These guys know data, and how to move it.

StorCentric Announces Nexsan Unity NV10000

Nexsan (a StorCentric company) recently announced the Nexsan Unity NV10000. I thought I’d share a few of my thoughts here.

What Is It?

In the immortal words of Silicon Valley: “It’s a box“. But the Nexsan Unity NV10000 is a box with some fairly decent specifications packed into a small form factor, including support for various 1DWPD NVMe SSDs and the latest Intel Xeon processors.

Protocol Support

Protocol support, as would be expected with the Unity, is broad, with support for File (NFS, SMB), Block (iSCSI, FC), and Object (S3) data storage protocols within the one unified platform.

Performance Enhancements

These were hinted at with the release of Unity 7.0, and the Nexsan Unity NV10000 boosts performance with bandwidth of up to 25GB/s, enabling you to scale performance up as your application needs evolve.

Other Useful Features

As you’d expect from this kind of storage array, the Nexsan Unity NV10000 also delivers features such as:

  • High availability (HA);
  • Snapshots;
  • ESXi integration;
  • In-line compression;
  • FASTier™ caching;
  • Asynchronous replication;
  • Data at rest encryption; and
  • Storage pool scrubbing to protect against bit rot, avoiding silent data corruption.

Backup Target?

Unity supports a comprehensive Host OS matrix and is certified as a Veeam Ready Repository for backups. Interestingly, the Nexsan Unity NV10000 also provides data security, regulatory compliance, and ransomware recoverability. The platform supports immutability for block and file storage, along with S3 object locking, so backup data cannot be changed or encrypted, even by internal bad actors.

Thoughts

I’m not as much of a diskslinger as I used to be, but I’m always interested to hear about what StorCentric / Nexsan has been up to with its storage array releases. It strikes me that the company does well by focussing on those features that customers are looking for (fast storage, peace of mind, multiple protocols) and also by being able to put it in a form-factor that appeals in terms of storage density. While the ecosystem around StorCentric is extensive, it makes sense for the most part, with the various components coming together well to form a decent story. I like that the company has really focussed on ensuring that Unity isn’t just a cool product name, but also a key part of the operating environment that powers the solution.

Retrospect Announces Retrospect Backup 18.5

Retrospect recently announced an update to its Backup (18.5) product. I had the opportunity to speak to JG Heithcock (GM, Retrospect) about the announcement and thought I’d briefly share some thoughts here.

 

What’s New?

Anomaly Detection

You can now detect anomalies in systems based on customisable filters and thresholds tailored to individual environments. It still relies on someone doing something about it, but it’s definitely a positive step forward. You can also configure the anomaly detection to work with Retrospect’s scripting / orchestration engine, kicking off various processes when something has gone wrong.
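Conceptually, the filter-and-threshold approach looks something like the sketch below. The field names and thresholds are my assumptions for illustration, not Retrospect’s implementation.

```python
# Threshold-based anomaly detection on backup job stats (illustrative).
def is_anomalous(job: dict, history_avg_files: float,
                 change_threshold: float = 3.0) -> bool:
    """Flag a job whose changed-file count spikes well past its baseline,
    a common early signal of ransomware mass-encrypting files."""
    return job["files_changed"] > history_avg_files * change_threshold

job = {"name": "nightly-fileserver", "files_changed": 48_200}
if is_anomalous(job, history_avg_files=1_500):
    print(f"Anomaly on {job['name']}: kick off the response script")
```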

Retrospect Management Console Integration

This capability has been integrated with the Management Console, and you can now view anomalies across a business or partner’s entire client base in a single pane of glass.

[image courtesy of Retrospect]

Improved Microsoft Azure Blob Integration

You can now set individual immutable retention policies for different backup sets within the same Azure Storage Container. This capability was already available with Retrospect’s AWS S3 integration.
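For context, this maps to Azure’s version-level immutability capability, which lets each blob carry its own retention policy. Here’s a generic sketch using the azure-storage-blob SDK (12.10 or later, with version-level immutability enabled on the container); it illustrates the underlying Azure feature rather than Retrospect’s own code, and the connection string and blob path are placeholders.

```python
# Per-blob immutable retention with the Azure SDK (azure-storage-blob).
from datetime import datetime, timedelta, timezone
from azure.storage.blob import BlobClient, ImmutabilityPolicy

blob = BlobClient.from_connection_string(
    conn_str="<connection-string>",    # placeholder
    container_name="backups",
    blob_name="set-a/backup-001.rbf",  # hypothetical backup set path
)

# Blobs belonging to different backup sets can each carry their own window.
blob.set_immutability_policy(
    ImmutabilityPolicy(
        expiry_time=datetime.now(timezone.utc) + timedelta(days=30),
        policy_mode="Unlocked",
    )
)
```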

Streamlined Immutable Backup User Experience

Automatically create cloud buckets with immutable backups supported by default. There’s also support for StorCentric’s Unity S3 capability out of the box.

LTO-9 Support

Is tape dead? Maybe. But there are still people using it, and this release includes support for LTO-9, with capacities up to 18TB (45TB compressed).

 

Thoughts

Retrospect Backup 18.5 is a free upgrade to Retrospect Backup 18. While it doesn’t set the world on fire in terms of a broad range of features, there’s some stuff in here that should get existing users excited, and give those considering the product a little more to mull over. Retrospect has been chipping away slowly but surely over the years, and I think it provides the traditional SME market with something that’s been difficult to get until recently: a solid data protection solution, with modern capabilities such as ransomware detection and object storage support, for a price that won’t send customers in that segment packing. I think that’s pretty good, and I look forward to seeing how things progress over the next 6 – 12 months.

StorCentric Announces Nexsan Unity 7.0

Nexsan (a StorCentric company) recently announced version 7.0 of its Unity software platform. I had the opportunity to speak to StorCentric CTO Surya Varanasi about the announcement and thought I’d share a few of my thoughts here.

 

What’s New?

In short, there’s a fair bit that’s gone into this release, and I’ll cover these below.

Protocol Enhancements

The Unity platform already supported FC, iSCSI, NFS, and SMB. It now supports S3 as well, making interoperability with data protection software that supports S3 as a target even simpler. It also means you can do stuff with Object Locking, and I’ll cover that below.


[image courtesy of Nexsan]

There have also been some enhancements to the speeds supported on the Unity hardware interfaces: FC now supports up to 32Gbps, and Ethernet options span 1/10/25/40/100GbE.

Security, Compliance and Ransomware Protection

Unity now supports immutable volume and file system snapshots for data protection. This provides secure point-in-time copies of data for business continuity. As I mentioned before, there’s also support for object locking, enabling bucket or object-level protection for a specified retention period to create immutable copies of data. This allows enterprises to address compliance, regulatory and other data protection requirements. Finally, there’s now support for pool-scrubbing to detect and remediate bit rot to avoid data corruption.
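For a sense of how a client might use the object locking capability, here’s a generic S3 Object Lock call using boto3. The endpoint and bucket are placeholders, and treating Unity’s implementation as accepting these exact parameters is an assumption on my part, based on the object locking support described above.

```python
# Generic S3 Object Lock write, the kind of call a backup application
# could make against an S3-compatible target.
from datetime import datetime, timedelta, timezone
import boto3

s3 = boto3.client("s3", endpoint_url="https://unity.example.local:9000")

s3.put_object(
    Bucket="backups",
    Key="weekly/full-2022-08-01.bak",
    Body=b"...backup payload...",
    ObjectLockMode="COMPLIANCE",  # cannot be shortened or removed, even by admins
    ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=90),
)
```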

Performance Improvements

There have been increases in total throughput capability, with Varanasi telling me that Total Throughput has increased up to 13GB/s on existing platforms. There’s also been a significant improvement in the Unity to Assureon ingestion rate. I’ve written a little about the Unbreakable Backup solution before, and there’s a lot to like about the architecture.

[image courtesy of Nexsan]

 

Thoughts

This is the first time that Nexsan has announced enhancements to its Unity platform without incorporating some kind of hardware refresh, so the company is testing the waters in some respects. I think it’s great when storage companies are able to upgrade their existing hardware platforms via software, offering improved performance and functionality. There’s a lot to like in this release, particularly when it comes to the improved security and data integrity capabilities. Sure, not everyone wants object storage available on their midrange storage array, but this makes it a lot more accessible, particularly if you only need a few hundred TB of object storage. The object lock capability, along with the immutable snapshotting for SMB and NFS users, really helps improve the overall integrity and resiliency of the platform as well.

StorCentric now has a pretty broad portfolio of storage and data protection products available, and you can see the integrations between the different lines are only going to increase as time goes on. The company has been positioning itself as a data-centric company for some time, and working hard to ensure that improved security is a big part of that solution. I think there’s a great story here for customers looking to leverage one vendor to deliver storage, data protection, and data security capabilities into the enterprise. The bad guys in hoodies are always looking for ways to make your day unpleasant, so when vendors are working to tighten up their integrations across a variety of products, it can only be a good thing in terms of improving the resilience and availability of your critical information assets. I’m looking forward to hearing what’s next with Nexsan and StorCentric.

22dot6 Releases TASS Cloud Suite

22dot6 sprang from stealth in May 2021 and recently announced its TASS Cloud Suite. I had the opportunity to once again catch up with Diamond Lauffin about the announcement, and thought I’d share some thoughts here.

 

The Product

If you’re unfamiliar with the 22dot6 product, it’s basically a software or hardware-based storage offering that delivers:

  • File and storage management
  • Enterprise-class data services
  • Data and systems profiling and analytics
  • Performance and scalability
  • Virtual, physical, and cloud capabilities, with NFS, SMB, and S3 mixed protocol support

According to Lauffin, it’s built on a scale-out, parallel architecture, and can deliver great pricing and performance per GiB.

Components

It’s Linux-based, and can leverage any bare-metal machine or VM. Metadata services live on scale-out, redundant nodes (VSR nodes), and data services are handled via single, clustered, or redundant nodes (DSX nodes).

[image courtesy of 22dot6]

TASS

The key to this all making some kind of sense is TASS (the Transcendent Abstractive Storage System). 22dot6 describes this as a “purpose-built, objective based software integrating users, applications and data services with physical, virtual and cloud-based architectures globally”. Sounds impressive, doesn’t it? Valence is the software that drives everything, providing the ability to deliver NAS and object over physical and virtual storage, in on-premises, hybrid, or public cloud deployments. It’s multi-vendor capable, offering support for third-party storage systems, and does some really neat stuff with analytics to ensure your storage is performing the way you need it to.

 

The Announcement

22dot6 has announced the TASS Cloud Suite, an “expanded collection of cloud specific features to enhance its universal storage software Valence”. Aimed at solving many of the typical problems users face when using cloud storage, it addresses:

  • Private cloud, with a “point-and-click transcendent capability to easily create an elastic, scale-on-demand, any storage, anywhere, private cloud architecture”
  • Hybrid cloud, by combining local and cloud resources into one big pool of storage
  • Cloud migration and mobility, with a “zero stub, zero pointer” architecture
  • Cloud-based NAS / Block / S3 Object consolidation, with a “transparent, multi-protocol, cross-platform support for all security and permissions with a single point-and-click”

There’s also support for cloud-based data protection, WORM encoding of data, and a comprehensive suite of analytics and reporting.

 

Thoughts and Further Reading

I’ve had the pleasure of speaking to Lauffin about 22dot6 on 2 occasions now, and I’m convinced that he’s probably one of the most enthusiastic storage company founders / CEOs I’ve ever been briefed by. He’s certainly been around for a while, and has seen a whole bunch of stuff. In writing this post I’ve had a hard time articulating everything that Lauffin tells me 22dot6 can do, while staying focused on the cloud part of the announcement. Clearly I should have done an overview post in May and then I could just point you to that. In short, go have a look at the website and you’ll see that there’s quite a bit going on with this product.

The solution seeks to address a whole raft of issues that anyone familiar with modern storage systems will have come across at one stage or another. I remain continually intrigued by how various solutions work to address storage virtualisation challenges, while still making a system that works in a seamless manner. Then try and do that at scale, and in multiple geographical locations across the world. It’s not a terribly easy problem to solve, and if Lauffin and his team can actually pull it off, they’ll be well placed to dominate the storage market in the near future.

Spend any time with Lauffin and you realise that everything about 22dot6 speaks to many of the lessons learned over years of experience in the storage industry, and it’s refreshing to see a company trying to take on such a wide range of challenges and fix everything that’s wrong with modern storage systems. What I can’t say for sure, having never had any real stick time with the solution, is whether it works. In Lauffin’s defence, he has offered to get me in contact with some folks for a demo, and I’ll be taking him up on that offer. There’s a lot to like about what 22dot6 is trying to do here, with the TASS Cloud Suite being a small part of the bigger picture. I’m looking forward to seeing how this goes for 22dot6 over the next year or two, and will report back after I’ve had a demo.

StorONE Announces S1:Backup

StorONE recently announced details of its S1:Backup product. I had the opportunity to talk with Gal Naor and George Crump about the news and thought I’d share some brief thoughts here.

 

The Problem

Talk to people in the tech sector today, and you’ll possibly hear a fair bit about how ransomware is a real problem for them, and a scary one at that. Almost all of the data protection vendors are talking about how they can help customers quickly recover from ransomware events, and some are particularly excited about how they can let you know you’ve been hit in a timely fashion. Which is great. A good data protection solution is definitely important to an organisation’s ability to rapidly recover when things go pop. But what about those software-based solutions that themselves have become targets of the ransomware gangs? What do you do when someone goes after both your primary and secondary storage solution? It costs a lot of money to deliver immutable solutions that are resilient to the nastiness associated with ransomware. Unfortunately, most organisations continue to treat data protection as an overpriced insurance policy and are reluctant to spend more than the bare minimum to keep these types of solutions going. It’s alarming the number of times I’ve spoken to customers using software-based data protection solutions that are out of support with the vendor just to save a few thousand dollars a year in maintenance costs.

 

The StorONE Solution

So what do you get with S1:Backup? Quite a bit, as it happens.

[image courtesy of StorONE]

You get Flash-based data ingestion in an immutable format, with snapshots being taken every 30 seconds.

[image courtesy of StorONE]

You also get fast consolidation of multiple incremental backup jobs (think synthetic fulls, etc.), thanks to the high performance of the StorONE platform. Speaking of performance, you also get quick recovery capabilities, and the other benefits of the StorONE platform (namely high availability and high performance).

And if you’re looking for long term retention that’s affordable, you can take advantage of StorONE’s ability to cope well with 90% capacity utilisation, rapid RAID rebuild times, and the ability to start small and grow.
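That last point is easy to underestimate, so here’s the quick arithmetic (the raw capacity figure is invented for the example):

```python
# The same raw pool goes further if you can safely run it at 90% full
# instead of a conservative 70% ceiling.
raw_tb = 500  # example figure
for ceiling in (0.70, 0.90):
    print(f"At {ceiling:.0%} utilisation: {raw_tb * ceiling:.0f} TB usable for retention")
# At 70% utilisation: 350 TB usable for retention
# At 90% utilisation: 450 TB usable for retention
```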

 

Thoughts and Further Reading

Ransomware is a big problem, particularly when it hits you across both primary and secondary storage platforms. Storage immutability has become a super important piece of the puzzle that vendors are trying to solve. Like many things though, it does require some level of co-operation to make sure non-integrated systems are functioning across the stack in an integrated fashion. There are all kinds of ways to attack this issue, with some hardware vendors insisting that their particular interpretation of immutability is the only way to go, while some software vendors are quite keen on architecting air gaps into solutions to get around the problem. And I’m sure there’s a tape guy sitting up the back muttering about how tape is the ultimate air gap. Whichever way you want to look at it, I don’t think any one vendor has the solution that is 100% guaranteed to keep you safe from the folks in hoodies intent on trashing your data. So I’m pleased that StorONE is looking at this problem and wanting to work with the major vendors to develop a cost-effective solution to the issue. It may not be right for everyone, and that’s fine. But on the face of it, it certainly looks like a compelling solution when compared to rolling your own storage platforms and hoping that you don’t get hit.

Doing data protection well is hard, and made harder by virtue of the fact that many organisations treat it as a necessary evil. Sadly, it seems that CxOs only really start to listen after they’ve been rolled, not beforehand. Sometimes the best you can do is be prepared for when disaster strikes. If something like the StorONE solution is going to be the difference between losing the whole lot, or coming back from an attack quickly, it seems like it’s worth checking out. I can assure you that ignoring the problem will only end in tears. It’s also important to remember that a robust data protection solution is just another piece of the puzzle. You still need to look at your overall security posture, including securing your assets and teaching your staff good habits. Finally, if it seems like I’m taking aim at software-based solutions, I’m not. I’m the first to acknowledge that any system is susceptible if it isn’t architected and deployed in a secure fashion – regardless of whether it’s integrated or not. Anyway, if you’d like another take on the announcement, Mellor covered it here.

Random Short Take #57

Welcome to Random Short Take #57. Only one player has worn 57 in the NBA. So it looks like this particular bit is done. Let’s get random.

  • In the early part of my career I spent a lot of time tuning up old UNIX workstations. I remember that lifting those SGI CRTs from desk to desk was never a whole lot of fun. This article about a Sun Ultra 1 project brought back a hint of nostalgia for those days (but not enough to really get into it again). Hat tip to Scott Lowe for the link.
  • As you get older, you realise that people talk a whole lot of rubbish most of the time. This article calling out audiophiles for the practice was great.
  • This article on the Backblaze blog about one company’s approach to building its streaming media capability on B2 made for interesting reading.
  • DH2i recently announced the general availability of DxEnterprise (DxE) for Containers, enabling cloud-native Microsoft SQL Server container Availability Groups outside and inside Kubernetes.
  • Speaking of press releases, Zerto has made a few promotions recently. You can keep up with that news here.
  • I’m terrible when it comes to information security, but if you’re looking to get started in the field, this article provides some excellent guidance on what you should be focussing on.
  • We all generally acknowledge that NTP is important, and most of us likely assume that it’s working. But have you been checking? This article from Tony does a good job of outlining some of the reasons you should be paying more attention to NTP (there’s a quick spot-check sketched after this list).
  • This is likely the most succinct article from John you’ll ever read, and it’s right on the money too.
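As promised above, here’s a quick spot-check of your local clock’s offset against a public NTP pool, assuming the third-party ntplib package (pip install ntplib):

```python
# Query pool.ntp.org and report the local clock offset.
import ntplib

response = ntplib.NTPClient().request("pool.ntp.org", version=3)
print(f"Clock offset vs pool.ntp.org: {response.offset * 1000:.1f} ms")
# Worth alerting if the offset drifts beyond, say, 100 ms.
```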