Random Short Take #71

Welcome to Random Short Take #71. A bit of home IT in this one. Let’s get random.

Datadobi Announces StorageMAP

Datadobi recently announced StorageMAP – a “solution that provides a single pane of glass for organizations to manage unstructured data across their complete data storage estate”. I recently had the opportunity to speak with Carl D’Halluin about the announcement, and thought I’d share some thoughts here.

 

The Problem

So what’s the problem enterprises are trying to solve? They have data all over the place, and it’s no longer a simple activity to work out what’s useful and what isn’t. Consider the data on a typical file / object server inside BigCompanyX.

[image courtesy of Datadobi]

As you can see, there are all kinds of data lurking about the place, including data you don’t want to have on your server (e.g. Barry’s slightly shonky home videos), and data you don’t need any more (the stuff you can move down to a cheaper tier, or even archive for good).

What’s The Fix?

So how do you fix this problem? Traditionally, you’ll try to scan the data to understand things like capacity, categories of data, age, and so forth. You’ll then make some decisions about the data based on that information and take actions such as relocating, deleting, or migrating it. Sounds great, but it’s frequently tough to make decisions about business data without understanding the business drivers behind it.

[image courtesy of Datadobi]

What’s The Real Fix?

The real fix, according to Datadobi, is to add a bit more automation and smarts to the process, and this relies heavily on accurate tagging of the data you’re storing. D’Halluin pointed out to me that they don’t suggest you create complex tags for individual files, as you could be there for years trying to sort that out. Rather, you add tags to shares or directories, and let the StorageMAP engine make recommendations and move stuff around for you.

[image courtesy of Datadobi]

Tags can represent business ownership, the role of the data, any action to be taken, or other designations, and they’re user-definable.

[image courtesy of Datadobi]

How Does This Fix It?

You’ll notice that the process above looks awfully similar to the one before – so how does this fix anything? The key, in my opinion at least, is that StorageMAP takes away the requirement for intervention from the end user. Instead of going through some process every quarter to “clean up the server”, you’ve got a process in place to do the work for you. As a result, you’ll hopefully see improved cost control, better storage efficiency across your estate, and a little bit more value from your data.

 

Thoughts

Tools that take care of everything for you have always had massive appeal in the market, particularly as organisations continue to struggle with data storage at any kind of scale. Gone are the days when your admins had an idea where everything on a 9GB volume was stored, or why it was stored there. We now have data stored all over the place (both officially and unofficially), and it’s becoming impossible to keep track of it all.

The key thing to consider with these kinds of solutions is that you need to put in the work to tag your data correctly in the first place. So there needs to be some thought put into what your data looks like in terms of business value. Remember that mp4 video files might not be warranted in the Accounting department, but your friends in Marketing will be underwhelmed if you create some kind of rule to automatically zap mp4s. The other thing to consider is that you need to put some faith in the system. This kind of solution will be useless if folks insist on not deleting anything, or not “believing” the output of the analytics and reporting. I used to work with customers who didn’t want to trust a vendor’s automated block storage tiering because “what does it know about my workloads?”. Indeed. The success of these kinds of intelligence and automation tools relies to a certain extent on folks moving away from faith-based computing as an operating model.
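To make that tagging point concrete, here’s a rough sketch of how a tag-driven rule might treat mp4s differently depending on a share’s business-ownership tag. The `Share` structure, tag names, and actions are all my own illustration of the concept, not StorageMAP’s actual interface:

```python
from dataclasses import dataclass, field

@dataclass
class Share:
    """A share or directory carrying coarse-grained, user-defined tags."""
    path: str
    tags: dict = field(default_factory=dict)

def recommend_action(share: Share, extension: str) -> str:
    """Recommend an action for files of a given type on a share,
    taking the business-ownership tag into account."""
    owner = share.tags.get("business_owner", "unknown")
    # Video files are fair game in Marketing, suspect in Accounting.
    if extension == ".mp4":
        if owner == "marketing":
            return "keep"
        if owner == "accounting":
            return "review"  # flag for a human rather than just zapping it
    # Anything tagged for archive gets tiered down regardless of file type.
    if share.tags.get("action") == "archive":
        return "move_to_archive_tier"
    return "keep"

marketing = Share("/shares/campaign-videos", {"business_owner": "marketing"})
accounting = Share("/shares/finance", {"business_owner": "accounting"})

print(recommend_action(marketing, ".mp4"))   # keep
print(recommend_action(accounting, ".mp4"))  # review
```

The point is that the rule acts on coarse share-level tags rather than per-file classification, which is exactly the trade-off D’Halluin described.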

But enough ranting from me. I’ve covered Datadobi a bit over the last few years, and it makes sense that all of these announcements have finally led to the StorageMAP product. These guys know data, and how to move it.

StorCentric Announces Nexsan Unity NV10000

Nexsan (a StorCentric company) recently announced the Nexsan Unity NV10000. I thought I’d share a few of my thoughts here.

What Is It?

In the immortal words of Silicon Valley: “It’s a box”. But the Nexsan Unity NV10000 is a box with some fairly decent specifications packed into a small form-factor, including support for various 1DWPD NVMe SSDs and the latest Intel Xeon processors.

Protocol Support

Protocol support, as would be expected with the Unity, is broad, with support for File (NFS, SMB), Block (iSCSI, FC), and Object (S3) data storage protocols within the one unified platform.

Performance Enhancements

These were hinted at with the release of Unity 7.0, but the Nexsan Unity NV10000 boosts performance further, with bandwidth of up to 25GB/s, enabling you to scale performance up as your application needs evolve.

Other Useful Features

As you’d expect from this kind of storage array, the Nexsan Unity NV10000 also delivers features such as:

  • High availability (HA);
  • Snapshots;
  • ESXi integration;
  • In-line compression;
  • FASTier™ caching;
  • Asynchronous replication;
  • Data at rest encryption; and
  • Storage pool scrubbing to protect against bit rot, avoiding silent data corruption.

Backup Target?

Unity supports a comprehensive host OS matrix and is certified as a Veeam Ready Repository for backups. Interestingly, the Nexsan Unity NV10000 also provides data security, regulatory compliance, and ransomware recoverability. The platform supports immutable block and file snapshots, along with S3 object locking, for backup data that is unchangeable and cannot be encrypted, even by internal bad actors.

Thoughts

I’m not as much of a diskslinger as I used to be, but I’m always interested to hear about what StorCentric / Nexsan has been up to with its storage array releases. It strikes me that the company does well by focussing on those features that customers are looking for (fast storage, peace of mind, multiple protocols) and also by being able to put it in a form-factor that appeals in terms of storage density. While the ecosystem around StorCentric is extensive, it makes sense for the most part, with the various components coming together well to form a decent story. I like that the company has really focussed on ensuring that Unity isn’t just a cool product name, but also a key part of the operating environment that powers the solution.

Random Short Take #70

Welcome to Random Short Take #70. Let’s get random.

Random Short Take #69

Welcome to Random Short Take #69. Let’s get random.

StorCentric Announces Nexsan Unity 7.0

Nexsan (a StorCentric company) recently announced version 7.0 of its Unity software platform. I had the opportunity to speak to StorCentric CTO Surya Varanasi about the announcement and thought I’d share a few of my thoughts here.

 

What’s New?

In short, there’s a fair bit that’s gone into this release, and I’ll cover the highlights below.

Protocol Enhancements

The Unity platform already supported FC, iSCSI, NFS, and SMB. It now supports S3 as well, making interoperability with data protection software that supports S3 as a target even simpler. It also means you can do stuff with Object Locking, and I’ll cover that below.


[image courtesy of Nexsan]

There have also been some enhancements to the speeds supported on the Unity hardware interfaces: FC now supports up to 32Gbps, and Ethernet is supported at 1/10/25/40/100GbE.

Security, Compliance and Ransomware Protection

Unity now supports immutable volume and file system snapshots for data protection. This provides secure point-in-time copies of data for business continuity. As I mentioned before, there’s also support for object locking, enabling bucket- or object-level protection for a specified retention period to create immutable copies of data. This allows enterprises to address compliance, regulatory, and other data protection requirements. Finally, there’s now support for pool scrubbing to detect and remediate bit rot to avoid data corruption.
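For those who haven’t played with object locking before, here’s a minimal sketch of the generic S3-style retention model that this kind of feature follows: an object under COMPLIANCE-mode retention can’t be deleted by anyone until the retain-until date passes, while GOVERNANCE mode allows a privileged bypass. The function is my own simplification of those semantics, not Nexsan’s implementation:

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

def can_delete(lock_mode: Optional[str], retain_until: Optional[datetime],
               now: datetime, is_admin: bool = False) -> bool:
    """Return True if an object version may be deleted under S3-style locking."""
    if lock_mode is None or retain_until is None:
        return True      # no lock in place, normal deletion rules apply
    if now >= retain_until:
        return True      # retention period has expired
    if lock_mode == "GOVERNANCE" and is_admin:
        return True      # privileged bypass is permitted in GOVERNANCE mode
    return False         # COMPLIANCE mode: nobody can delete early

now = datetime(2022, 1, 1, tzinfo=timezone.utc)
hold = now + timedelta(days=365)

print(can_delete("COMPLIANCE", hold, now))                 # False
print(can_delete("COMPLIANCE", hold, now, is_admin=True))  # False: even admins wait
print(can_delete("GOVERNANCE", hold, now, is_admin=True))  # True
print(can_delete("COMPLIANCE", hold, hold))                # True: retention expired
```

It’s that COMPLIANCE-mode behaviour that makes the feature useful against ransomware operators who manage to get hold of administrative credentials.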

Performance Improvements

Total throughput capability has also improved, with Varanasi telling me that it has increased to up to 13GB/s on existing platforms. There’s also been a significant improvement in the Unity to Assureon ingestion rate. I’ve written a little about the Unbreakable Backup solution before, and there’s a lot to like about the architecture.

[image courtesy of Nexsan]

 

Thoughts

This is the first time that Nexsan has announced enhancements to its Unity platform without incorporating some kind of hardware refresh, so the company is testing the waters in some respects. I think it’s great when storage companies are able to upgrade their existing hardware platforms with software, offering improved performance and functionality. There’s a lot to like in this release, particularly when it comes to the improved security and data integrity capabilities. Sure, not everyone wants object storage available on their midrange storage array, but it makes object a lot more accessible, particularly if you only need a few hundred TB of it. The object lock capability, along with the immutable snapshotting for SMB and NFS users, really helps improve the overall integrity and resiliency of the platform as well.

StorCentric now has a pretty broad portfolio of storage and data protection products available, and you can see the integrations between the different lines are only going to increase as time goes on. The company has been positioning itself as a data-centric company for some time, and working hard to ensure that improved security is a big part of that solution. I think there’s a great story here for customers looking to leverage one vendor to deliver storage, data protection, and data security capabilities into the enterprise. The bad guys in hoodies are always looking for ways to make your day unpleasant, so when vendors are working to tighten up their integrations across a variety of products, it can only be a good thing in terms of improving the resilience and availability of your critical information assets. I’m looking forward to hearing what’s next with Nexsan and StorCentric.

Pure Storage – A Few Thoughts on Pure as-a-Service

I caught up with Matt Oostveen from Pure Storage in August to talk about Pure as-a-Service. It’s been a while since any announcements were made, but I’ve been meaning to write up a few notes on the offering and what I thought of it. So here we are.

 

What Is It?

Oostveen describes Pure Storage as a “software company that sells storage arrays”. The focus at Pure has always been on giving the customer an exceptional experience, which invariably means controlling the stack from end to end. To that end, Pure as-a-Service could be described more as a feat of financial, rather than technical, engineering. You’re “billed on actual consumption, with minimum commitments starting at 50 TiB”. Also of note is the burst capability, giving you a clear picture of both the floor and the ceiling of your consumption. You can choose what kind of storage you want – block, file, or object – and you get access to orchestration tools to manage everything. You also get access to Evergreen Storage, so your hardware stays up to date, and it’s available in four easy-to-understand tiers of storage.
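To illustrate how the floor and ceiling interact, here’s a sketch of consumption billing with a 50 TiB committed minimum (from the offering) and on-demand burst above it. The per-TiB rates are made-up numbers purely for illustration, not Pure’s actual pricing:

```python
def monthly_bill(used_tib: float, committed_tib: float = 50.0,
                 reserved_rate: float = 20.0, on_demand_rate: float = 30.0) -> float:
    """Consumption bill with a committed floor and on-demand burst.
    Rates (per TiB per month) are hypothetical numbers for illustration."""
    floor_charge = committed_tib * reserved_rate   # you pay the floor even if under it
    burst = max(0.0, used_tib - committed_tib)     # usage above commitment bills on demand
    return floor_charge + burst * on_demand_rate

print(monthly_bill(40))  # under the floor: 50 * 20 = 1000.0
print(monthly_bill(80))  # 50 * 20 + 30 * 30 = 1900.0
```

The committed floor gives the vendor predictable revenue, while the burst rate gives the customer elasticity without a procurement cycle – which is the “feat of financial engineering” part.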

 

Why Is It?

In this instance, I think the what isn’t as interesting as the why. Oostveen and I spoke about the need for a true utility model to enable companies to deliver on the promise of digital transformation. He noted that many of the big transactions that were occurring were CFO to CFO engagements, rather than the CTO deciding on the path forward for applications and infrastructure. In short, price is always a driver, and simplicity is also very important. Pure has worked to ensure that the offering delivers on both of those fronts.

 

Thoughts

IT is complicated nowadays. You’re dealing with cloud, SaaS, micro-SaaS, distributed, and personalised IT. You’re invariably trying to accommodate the role of data in your organisation, and you’re no doubt facing challenges with getting applications running not just in your core, but also in the cloud and the edge. We talk a lot about how infrastructure can be used to solve a number of the challenges facing organisations, but I have no doubt that if most business leaders never had to deal with infrastructure and the associated challenges it presents they’d be over the moon. Offerings like Pure as-a-Service go some way towards elevating that conversation from speeds and feeds to something more aligned with business outcomes. It strikes me that these kinds of offerings will have great appeal both to the folks in charge of finance inside big enterprises and potentially to the technical folk trying to keep the lights on whilst a budget decrease gets lobbed at them every year.

I’ve written about Pure enthusiastically in the past because I think the company has a great grasp of some of the challenges that many organisations are facing nowadays. I think that the expansion into other parts of the cloud ecosystem, combined with a willingness to offer flexible consumption models for solutions that were traditionally offered as lease or buy is great. But I don’t think this makes sense without everything that Pure has done previously as a company, from the focus on getting the most out of All-Flash hardware, to a relentless drive for customer satisfaction, to the willingness to take a chance on solutions that are a little outside the traditional purview of a storage array company.

As I’ve said many times before, IT can be hard. There are a lot of things that you need to consider when evaluating the most suitable platform for your applications. Pure Storage isn’t the only game in town, but in terms of storage vendors offering flexible and powerful storage solutions across a variety of topologies, it seems to be a pretty compelling one, and definitely worth checking out.

22dot6 Releases TASS Cloud Suite

22dot6 sprang from stealth in May 2021, and recently announced its TASS Cloud Suite. I had the opportunity to once again catch up with Diamond Lauffin about the announcement, and thought I’d share some thoughts here.

 

The Product

If you’re unfamiliar with the 22dot6 product, it’s basically a software or hardware-based storage offering that delivers:

  • File and storage management
  • Enterprise-class data services
  • Data and systems profiling and analytics
  • Performance and scalability
  • Virtual, physical, and cloud capabilities, with NFS, SMB, and S3 mixed protocol support

According to Lauffin, it’s built on a scale-out, parallel architecture, and can deliver great pricing and performance per GiB.

Components

It’s Linux-based, and can leverage any bare-metal machine or VM. Metadata services live on scale-out, redundant nodes (VSR nodes), and data services are handled via single, clustered, or redundant nodes (DSX nodes).

[image courtesy of 22dot6]
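As a conceptual sketch of that split, a read lands on the metadata layer first and is then routed to the node holding the data. The VSR / DSX names come from the briefing; the structures and routing logic here are my own illustration, not 22dot6’s code:

```python
# Conceptual sketch: VSR nodes own the metadata, DSX nodes serve the bytes.
VSR_METADATA = {
    "/projects/render.mov": {"dsx_node": "dsx-02", "object_id": "a1f3"},
}
DSX_NODES = {
    "dsx-02": {"a1f3": b"...video bytes..."},
}

def read_file(path: str) -> bytes:
    meta = VSR_METADATA[path]           # step 1: metadata lookup on a VSR node
    node = DSX_NODES[meta["dsx_node"]]  # step 2: route to the owning DSX node
    return node[meta["object_id"]]      # step 3: fetch the data itself

print(read_file("/projects/render.mov"))
```

Separating the two roles is what lets the metadata and data tiers scale out independently, which is presumably where the parallel-architecture claims come from.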

TASS

The key to this all making some kind of sense is TASS (the Transcendent Abstractive Storage System). 22dot6 describes this as a “purpose-built, objective based software integrating users, applications and data services with physical, virtual and cloud-based architectures globally”. Sounds impressive, doesn’t it? Valence is the software that drives everything, providing the ability to deliver NAS and object over physical and virtual storage, in on-premises, hybrid, or public cloud deployments. It’s multi-vendor capable, offering support for third-party storage systems, and does some really neat stuff with analytics to ensure your storage is performing the way you need it to.

 

The Announcement

22dot6 has announced the TASS Cloud Suite, an “expanded collection of cloud specific features to enhance its universal storage software Valence”. Aimed at solving many of the typical problems users face when using cloud storage, it addresses:

  • Private cloud, with a “point-and-click transcendent capability to easily create an elastic, scale-on-demand, any storage, anywhere, private cloud architecture”
  • Hybrid cloud, by combining local and cloud resources into one big pool of storage
  • Cloud migration and mobility, with a “zero stub, zero pointer” architecture
  • Cloud-based NAS / Block / S3 Object consolidation, with a “transparent, multi-protocol, cross-platform support for all security and permissions with a single point-and-click”

There’s also support for cloud-based data protection, WORM encoding of data, and a comprehensive suite of analytics and reporting.

 

Thoughts and Further Reading

I’ve had the pleasure of speaking to Lauffin about 22dot6 on two occasions now, and I’m convinced that he’s probably one of the most enthusiastic storage company founders / CEOs I’ve ever been briefed by. He’s certainly been around for a while, and has seen a whole bunch of stuff. In writing this post I’ve had a hard time articulating everything that Lauffin tells me 22dot6 can do, while staying focused on the cloud part of the announcement. Clearly I should have done an overview post in May and then I could just point you to that. In short, go have a look at the website and you’ll see that there’s quite a bit going on with this product.

The solution seeks to address a whole raft of issues that anyone familiar with modern storage systems will have come across at one stage or another. I remain continually intrigued by how various solutions work to address storage virtualisation challenges, while still making a system that works in a seamless manner. Then try and do that at scale, and in multiple geographical locations across the world. It’s not a terribly easy problem to solve, and if Lauffin and his team can actually pull it off, they’ll be well placed to dominate the storage market in the near future.

Spend any time with Lauffin and you realise that everything about 22dot6 speaks to many of the lessons learned over years of experience in the storage industry, and it’s refreshing to see a company trying to take on such a wide range of challenges and fix everything that’s wrong with modern storage systems. What I can’t say for sure, having never had any real stick time with the solution, is whether it works. In Lauffin’s defence, he has offered to get me in contact with some folks for a demo, and I’ll be taking him up on that offer. There’s a lot to like about what 22dot6 is trying to do here, with the TASS Cloud Suite being a small part of the bigger picture. I’m looking forward to seeing how this goes for 22dot6 over the next year or two, and will report back after I’ve had a demo.

StorONE Announces S1:Backup

StorONE recently announced details of its S1:Backup product. I had the opportunity to talk with Gal Naor and George Crump about the news and thought I’d share some brief thoughts here.

 

The Problem

Talk to people in the tech sector today, and you’ll possibly hear a fair bit about how ransomware is a real problem for them, and a scary one at that. Most of the data protection solution vendors are talking about how they can help customers quickly recover from ransomware events, and some are particularly excited about how they can let you know you’ve been hit in a timely fashion. Which is great. A good data protection solution is definitely important to an organisation’s ability to rapidly recover when things go pop. But what about those software-based solutions that have themselves become targets of the ransomware gangs? What do you do when someone goes after both your primary and secondary storage solution? It costs a lot of money to deliver immutable solutions that are resilient to the nastiness associated with ransomware. Unfortunately, most organisations continue to treat data protection as an overpriced insurance policy and are reluctant to spend more than the bare minimum to keep these types of solutions going. It’s alarming how many times I’ve spoken to customers using software-based data protection solutions that are out of support with the vendor, just to save a few thousand dollars a year in maintenance costs.

 

The StorONE Solution

So what do you get with S1:Backup? Quite a bit, as it happens.

[image courtesy of StorONE]

You get Flash-based data ingestion in an immutable format, with snapshots being taken every 30 seconds.

[image courtesy of StorONE]

You also get fast consolidation of multiple incremental backup jobs (think synthetic fulls, etc.), thanks to the high performance of the StorONE platform. Speaking of performance, you also get quick recovery capabilities, and the other benefits of the StorONE platform (namely high availability and high performance).

And if you’re looking for long term retention that’s affordable, you can take advantage of StorONE’s ability to cope well with 90% capacity utilisation, rapid RAID rebuild times, and the ability to start small and grow.
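To put the 30-second cadence in context, that’s 2,880 snapshots a day, which is why fast consolidation and affordable long-term retention matter. Here’s the arithmetic, with a thinning schedule that is my own illustration rather than StorONE’s actual retention policy:

```python
SECONDS_PER_DAY = 86_400

def snapshots_per_day(interval_seconds: int = 30) -> int:
    """How many snapshots a fixed cadence produces in a day."""
    return SECONDS_PER_DAY // interval_seconds

def retained_after_thinning(days: int) -> int:
    """Illustrative thinning: keep every snapshot for the first day,
    then hourly for the rest of the week, then daily after that."""
    total = snapshots_per_day(30)            # day 1: all 2,880 snapshots
    total += 24 * min(max(days - 1, 0), 6)   # days 2-7: hourly
    total += max(days - 7, 0)                # day 8 onwards: daily
    return total

print(snapshots_per_day())          # 2880
print(retained_after_thinning(30))
```

Without some form of thinning or consolidation, a month of 30-second snapshots would mean tracking close to 90,000 point-in-time copies, so the platform’s consolidation performance isn’t a nice-to-have.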

 

Thoughts and Further Reading

Ransomware is a big problem, particularly when it hits you across both primary and secondary storage platforms. Storage immutability has become a super important piece of the puzzle that vendors are trying to solve. Like many things though, it does require some level of co-operation to make sure non-integrated systems are functioning across the stack in an integrated fashion. There are all kinds of ways to attack this issue, with some hardware vendors insisting that their particular interpretation of immutability is the only way to go, while some software vendors are quite keen on architecting air gaps into solutions to get around the problem. And I’m sure there’s a tape guy sitting up the back muttering about how tape is the ultimate air gap. Whichever way you want to look at it, I don’t think any one vendor has the solution that is 100% guaranteed to keep you safe from the folks in hoodies intent on trashing your data. So I’m pleased that StorONE is looking at this problem and wanting to work with the major vendors to develop a cost-effective solution to the issue. It may not be right for everyone, and that’s fine. But on the face of it, it certainly looks like a compelling solution when compared to rolling your own storage platforms and hoping that you don’t get hit.

Doing data protection well is hard, and made harder by virtue of the fact that many organisations treat it as a necessary evil. Sadly, it seems that CxOs only really start to listen after they’ve been rolled, not beforehand. Sometimes the best you can do is be prepared for when disaster strikes. If something like the StorONE solution is going to be the difference between losing the whole lot, or coming back from an attack quickly, it seems like it’s worth checking out. I can assure you that ignoring the problem will only end in tears. It’s also important to remember that a robust data protection solution is just another piece of the puzzle. You still need to look at your overall security posture, including securing your assets and teaching your staff good habits. Finally, if it seems like I’m taking aim at software-based solutions, I’m not. I’m the first to acknowledge that any system is susceptible if it isn’t architected and deployed in a secure fashion – regardless of whether it’s integrated or not. Anyway, if you’d like another take on the announcement, Mellor covered it here.

Random Short Take #61

Welcome to Random Short Take #61.

  • VMworld is on this week. I still find the virtual format (and timezones) challenging, and I miss the hallway track and the jet lag. There’s nonetheless some good news coming out of the event. One thing that was announced prior to the event was Tanzu Community Edition. William Lam talks more about that here.
  • Speaking of VMworld news, Viktor provided a great summary on the various “projects” being announced. You can read more here.
  • I’ve been a Mac user for a long time, and there’s stuff I’m learning every week via Howard Oakley’s blog. Check out this article covering the Recovery Partition. While I’m at it, this presentation he did on Time Machine is also pretty ace.
  • Facebook had a little problem this week, and the Cloudflare folks have provided a decent overview of what happened. As someone who works for a service provider, this kind of stuff makes me twitchy.
  • Fibre Channel? Cloud? Chalk and cheese? Maybe. Read Chin-Fah’s article for some more insights. Personally, I miss working with FC, but I don’t miss the arguing I had to do with systems and networks people when it came to the correct feeding and watering of FC environments.
  • Remote working has been a challenge for many organisations, with some managers not understanding that their workers weren’t just watching streaming video all day, but actually being more productive. Not everything needs to be a video call, however, and this post / presentation has a lot of great tips on what does and doesn’t work with distributed teams.
  • I’ve had to ask this question before. And Jase has apparently had to answer it too, so he’s posted an article on vSAN and external storage here.
  • This is the best response to a trio of questions I’ve read in some time.