VMware Cloud on AWS – What’s New – February 2024

It’s been a little while since I posted an update on what’s new with VMware Cloud on AWS, so I thought I’d share some of the latest news.

 

M7i.metal-24xl Announced

It’s been a few months since it was announced at AWS re:Invent 2023, but the M7i.metal-24xl (one of the catchier host types I’ve seen) is going to change the way we approach storage-heavy VMC on AWS deployments.

What is it?

It’s a host without local storage. There are 48 physical cores (96 logical cores with Hyper-Threading enabled). It has 384 GiB memory. The key point is that there are flexible NFS storage options to choose from – VMware Cloud Flex Storage or Amazon FSx for NetApp ONTAP. There’s support for up to 37.5 Gbps networking speed, and it supports always-on memory encryption using Intel Total Memory Encryption (TME).

Why?

Some of the potential use cases for this kind of host type are as follows:

  • CPU Intensive workloads
    • Image processing
    • Video encoding
    • Gaming servers
  • AI/ML Workloads
    • Code Generation
    • Natural Language Processing
    • Classical Machine Learning
    • Workloads with limited resource requirements
  • Web and application servers
    • Microservices/Management services
    • Secondary data stores/database applications
  • Ransomware & Disaster Recovery
    • Modern Ransomware Recovery
    • Next-gen DR
    • Compliance and Risk Management

Other Notes

New (greenfield) customers can deploy the M7i.metal-24xl in the first cluster using 2-16 nodes. Existing (brownfield) customers can deploy the M7i.metal-24xl in secondary clusters in the same SDDC. In terms of connectivity, we recommend you take advantage of VPC peering for your external storage connectivity. Note that there is no support for multi-AZ deployments, nor is there support for single node deployments. If you’d like to know more about the M7i.metal-24xl, there’s an excellent technical overview here.

 

vSAN Express Storage Architecture on VMware Cloud on AWS

SDDC Version 1.24 was announced in November 2023, and with that came support for vSAN Express Storage Architecture (ESA) on VMC on AWS. There’s some great info on what’s included in the 1.24 release here, but I thought I’d focus on some of the key constraints you need to look at when considering ESA in your VMC on AWS environment.

Currently, the following restrictions apply to vSAN ESA in VMware Cloud on AWS:
  • vSAN ESA is available for clusters using i4i hosts only.
  • vSAN ESA is not supported with stretched clusters.
  • vSAN ESA is not supported with 2-host clusters.
  • After you have deployed a cluster, you cannot convert from vSAN ESA to vSAN OSA or vice versa.

So why do it? There are plenty of reasons, including better performance, enhanced resource efficiency, and several improvements in terms of speed and resiliency. You can read more about it here.

VMware Cloud Disaster Recovery Updates

There have also been some significant changes to VCDR, with the recent announcement that we now support a 15-minute Recovery Point Objective (down from 30 minutes). There have also been a number of enhancements to the ransomware recovery capability, including automatic Linux security sensor installation in the recovery workflow (trust me, once you’ve done it manually a few times you’ll appreciate this). With all the talk of supplemental storage above, it should be noted that “VMware Cloud DR does not support recovering VMs to VMware Cloud on AWS SDDC with NFS-mounted external datastores including Amazon FSx for NetApp datastores, Cloud Control Volumes or VMware Cloud Flex Storage”. Just in case you had an idea that this might be something you want to do.

 

Thoughts

Much of the news about VMware has been around the acquisition by Broadcom. It certainly was news. In the meantime, however, the VMware Cloud on AWS product and engineering teams have continued to work on releasing innovative features and incremental improvements. The encouraging thing about this is that they are listening to customers and continuing to adapt the solution architecture to satisfy those requirements. This is a good thing for both existing and potential customers. If you looked at VMware Cloud on AWS three years ago and ruled it out, I think it’s worth looking at again.

Nexsan Announces Unity NV6000

Nexsan recently announced the Nexsan Unity NV6000. I had the chance to speak to Andy Hill about it, and thought I’d share some thoughts here.

 

What Is It?

[image courtesy of Nexsan]

I’ve said it before, and I’ll say it again … in the immortal words of Silicon Valley: “It’s a box”. And a reasonably powerful one at that, coming loaded with the following specifications.

  • Supported Protocols: SAN (Fibre Channel, iSCSI), NAS (NFS, SMB 1.0 to 3.0, FTP), Object (S3)
  • Disk Bays | Rack U: 60 | 4U
  • Maximum Drives with Expansion: 180
  • Maximum Raw Capacity (chassis | total): 1.12 PB Raw | 3.36 PB Raw
  • System Memory (DRAM) per controller: up to 128GB
  • FASTier 2.5″ SSD Drives (TB): 1.92 | 3.84 | 7.68 | 15.36
  • 3.5″ 7.2K SAS Drives (TB): 4 | 6 | 8 | 10 | 12 | 14 | 16 | 18 | 20
  • 2.5″ NVMe 1DWPD SSDs (TB): N/A
  • Host Connectivity: 16/32Gb FC | 10/25/40/100 GbE
  • Max CIFS | NFS File Systems: 512
  • Data Protection: Immutable Snapshots, S3 Object-Locking, and optional Unbreakable Backup

It’s a dual-controller platform, with each controller containing 2x Intel Xeon Silver CPUs and a 12Gb/s SAS backplane. Note that you get access to the following features included as part of the platform license:

  • Nexsan’s FASTier® Caching – Use solid-state to accelerate the performance of the underlying spinning disks
  • Nexsan Unity software version 7.0, with important enhancements to power, enterprise-class security, compliance, and ransomware protection
  • Enhanced Performance – Up to 100,000 IOPS
  • Third-Party Software Support – Windows VSS, VMware, VAAI, Commvault, Veeam Ready Repository, and more
  • Multi-Protocol Support – SAN (Fibre Channel, iSCSI), NAS (NFS, CIFS, SMB1 to SMB3, FTP), Object (S3), 16/32Gb FC, 10/25/40/100 GbE
  • High Availability – No single point-of-failure architecture with dual redundant storage controllers, redundant power supplies and RAID

 

Other Features

Snapshot Immutability

The snapshot immutability claim caught my eye, as immutable means a lot of things to a lot of people. Hill mentioned that the snapshot IP used on Unity was developed in-house by Nexsan and isn’t the patched-together solution that some other vendors promote as immutable. There are some other smarts within Unity that should give users comfort that data can’t be easily gotten at. Once you’ve set retention periods for snapshots, for example, you can’t log in to the platform, set the date forward, and have those snapshots expire. The object storage component also supports S3 Object Lock, which is good news for punters looking to take advantage of this feature.
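If you want to see what the S3 Object Lock piece looks like in practice, here's a minimal sketch of writing a WORM-protected object from an S3-compatible client using boto3. The endpoint, credentials, bucket, and retention period are all placeholders, and whether Unity exposes object lock with exactly these semantics is an assumption on my part, so treat it as illustrative rather than gospel.

    from datetime import datetime, timedelta, timezone

    import boto3

    # Hypothetical S3-compatible endpoint and credentials - substitute your own.
    s3 = boto3.client(
        "s3",
        endpoint_url="https://unity.example.internal:9000",
        aws_access_key_id="EXAMPLE_KEY",
        aws_secret_access_key="EXAMPLE_SECRET",
    )

    # The target bucket needs to have been created with object lock enabled.
    with open("daily-2024-02-01.tar.gz", "rb") as data:
        s3.put_object(
            Bucket="backups",
            Key="daily/2024-02-01.tar.gz",
            Body=data,
            ObjectLockMode="COMPLIANCE",  # retention can't be shortened or removed
            ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=30),
        )

Once the retain-until date is set in compliance mode, the object can't be overwritten or deleted until that date passes, which lines up with the "can't just wind the clock forward" behaviour described above.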

Unified Protocol Support

It’s in the name, and Nexsan has done a good job of incorporating a variety of storage protocols and physical access methods into the Unity platform. There’s File, Block, and Object, and support for both FC and speedy Ethernet as well. In other words, something for everyone.

Assureon Integration

One of the other features I like about the Unity is the integration with Assureon. If you’re unfamiliar with Assureon, you can check it out here. It takes storage security and compliance to another level, and is worth looking into if you have a requirement for things like regulatory compliant storage, the ability to maintain chain of custody, and fun things like that.

 

Thoughts and Further Reading

Who cares about storage arrays any more? A surprising number of people, and with good reason. Some folks still need them in the data centre. And folks are also looking for storage arrays that can do more with less. I think this is where the Nexsan offering excels. With multi-protocol and multi-transport support, some decent security chops, and an all-inclusive licensing model, it provides cost-effective storage (thanks to a mix of spinning rust and solid-state drives) that competes well with the solutions that have traditionally dominated the midrange market. Additionally, integration with solutions like Assureon makes this a solution that’s worth a second look, particularly if you’re in the market for object storage with a lower barrier to entry (from a cost and capacity perspective) and the ability to deal with backup data in a secure fashion.

Arcitecta Announces Mediaflux Universal Data System

I had the opportunity to speak to Jason Lohrey and Robert Murphy from Arcitecta a little while ago about the company’s Mediaflux announcement. It was a great conversation, and I’m sad that I hadn’t heard about the company beforehand. In any case I figured I’d share some thoughts on the announcement.

 

What Is It?

The folks at Arcitecta describe the Mediaflux Universal Data System as “a convergence of data management, data orchestration, multi-protocol access, and storage in one platform”. The idea is that the system manages your data across all of your storage platforms. It’s not just clustered or distributed storage. It’s not just a control plane that gives you multi-protocol access to your storage platforms. It’s not just an orchestration engine that can move your data around as required. It’s all of these things and a bit more too. Features include:

  • Converges data management, orchestration and storage within a single platform – that’s right, it’s all in the one box.
  • Manages every aspect of the data lifecycle: On-premises and cloud, with globally distributed access.
  • Offers multi-protocol access and support. The system supports NFS, SMB, S3, SFTP and DICOM, among many others.
  • Empowers immense scalability. Mediaflux licensing is decoupled from the volume of data stored so organisations can affordably scale storage needs to hundreds of petabytes, accommodating hundreds of billions of files without the financial strain typically associated with such vast capacities. Note that Mediaflux’s pricing is based on the number of concurrent users.
  • Provides the option to forego third-party software and clustered file systems.
  • Supports multi-vendor storage environments, allowing customers to choose best-of-breed hardware.

Seem ambitious? Maybe, but it also seems like something that would be super useful.

 

Global Storage At A Worldwide Scale

One of the cool features of Mediaflux is how it handles distributed file systems, not just across data centres, but across continents. A key feature of the platform is the ability to deliver the same file system to every site.

[image courtesy of Arcitecta]

It has support for centralised file locking, as well as replication between sites. You can also configure variable retention policies for different site copies, giving you flexibility when it comes to how long you store your data in various locales. According to the folks at Arcitecta, it’s also happy to make the most of your links, and can use up to 95% of the available bandwidth.

 

Thoughts And Further Reading

There have been a few data management / orchestration / unified control plane companies that have had a tilt at doing universal storage access well, across distances, and with support for multiple protocols. Sometimes the end result looks like an engineering project at best, and you have to hold your mouth right to have any hope of seeing your data again once you send it on its way. Putting these kinds of platforms together is no easy task, and that’s why this has been something of a journey for the team at Arcitecta. The company previously supported Mediaflux on top of third-party file and object systems, but customers needed a solution that was more flexible and affordable.

So why not just use the cloud? Because some people don’t like storing stuff in hyperscaler environments. And sometimes there’s a requirement for better performance than you can reasonably pay for in a cloud environment. And not every hyperscaler might have a presence where you want your data to be. All that said, if you do have data in the cloud, you can manage it with Mediaflux too.

I’m the first to admit that I haven’t had any recent experience with the type of storage systems that would benefit from something like Mediaflux, but on paper it solves a lot of the problems that enterprises come across when trying to make large datasets available across the globe, while managing the lifecycle of those datasets and keeping them readily available. Given some of the reference customers that are making use of the platform, it seems reasonable to assume that the company has been doing something right. As with all things storage, your mileage might vary, but if you’re running into roadblocks with the storage platforms you know and love, it might be time to talk to the nice people in Melbourne about what they can do for you. If you’d like to read more, you can download a Solution Brief as well.

StorPool Announces Version 21

StorPool recently announced version 21 of its storage platform, offering improvements across data protection, efficiency, availability, and compatibility. I had the opportunity to speak to Boyan Krosnov and Alex Ivanov and wanted to share some thoughts.

 

“Magic” Scale-out Erasure Coding

One of the main features announced with Version 21 was “magic” scale-out erasure coding. Previously, StorPool offered triple replication protection of data across nodes. Now, with at least five all-NVMe storage servers, you can take advantage of this new erasure coding. Key capabilities include:

  • Near-zero performance impact even for Tier 0/Tier 1 workloads;
  • Data redundancy across nodes, as information is protected across servers with two parity objects so that any two servers can fail and data remains safe and accessible (there’s a quick efficiency comparison after this list);
  • Great flexibility and operational efficiency. With per-volume policy management, volumes can be protected with triple replication or Erasure Coding, with per-volume live conversion between data protection schemes;
  • Always-on, non-disruptive operations – up to two storage nodes can be rebooted or brought down for maintenance while the entire storage system remains running with all data remaining available; and
  • Incremental mesh encoding and recovery.
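To put some rough numbers on that redundancy point, here's a quick back-of-the-envelope comparison between triple replication and an erasure coding layout with two parity objects. The 3+2 stripe width is my own assumption for illustration rather than a published StorPool figure, but the shape of the maths holds: both schemes survive two simultaneous server failures, while erasure coding leaves far more of your raw capacity usable.

    def usable_fraction_replication(copies: int) -> float:
        """Usable fraction of raw capacity with N-way replication."""
        return 1.0 / copies

    def usable_fraction_ec(data_chunks: int, parity_chunks: int) -> float:
        """Usable fraction of raw capacity with data+parity erasure coding."""
        return data_chunks / (data_chunks + parity_chunks)

    # Triple replication: survives two failures, ~33% of raw capacity is usable.
    print(usable_fraction_replication(3))   # 0.333...

    # Assumed 3+2 layout on a five-node all-NVMe cluster: also survives two
    # server failures, but ~60% of raw capacity is usable.
    print(usable_fraction_ec(3, 2))          # 0.6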

 

Other New Features

But that’s not all. There’s also been work done in the following areas:

  • Improved iSCSI scalability – with support for exporting up to 1000 iSCSI targets per server
  • CloudStack plug-in improvements – introduces support for CloudStack’s volume encryption and partial zone-wide storage that enables easy live migration between compute hosts.
  • OpenNebula add-on improvements – now supports multi-cluster deployments where multiple StorPool sub-clusters behave as a single large-scale primary storage system with a unified global namespace
  • OpenStack Cinder driver improvements – Easy deployment with Canonical Charmed OpenStack and OpenStack instances managed with kolla-ansible
  • Deep integration with Proxmox Virtual Environment – introduces end-to-end automation of all storage operations in Proxmox VE deployments
  • Additional hardware and software compatibility – increased the number of validated hardware and operating systems resulting in easier deployment of StorPool Storage in customers’ preferred environments

 

Thoughts and Further Reading

It’s been a little while since I’ve written about StorPool, and the team continues to add features to the platform and grow in terms of customer adoption and maturity in the market. Every time I speak to Alex and Boyan, I get a strong sense that they’re relentlessly focussed on making the platform more stable, more performance-oriented, and easier to operate. I’m a fan of many of the design principles the company has adopted for its platform, including the use of standard server hardware, fitting in with customer workflows, and addressing the needs of demanding applications. It’s great that it scales linearly, but it’s equally exciting, at least to me, that it “fades into the background”. Good infrastructure doesn’t want to be mentioned every day, it just needs to work (and work well). The folks at StorPool understand this, and seem to be working hard to ensure that the platform, and the service that supports it, meets this requirement to fade into the background. It’s not necessarily “magic”, but it can be done with good code. StorPool has been around since 2012, is self-funded, profitable, and growing. I’ve enjoyed watching the evolution of the product since I was first introduced to it, and am looking forward to seeing what’s next in future releases. For another perspective on the announcement, check out this article over at Gestalt IT.

VMware Cloud on AWS – Melbourne Region Added

VMware recently announced that VMware Cloud on AWS is now available in the AWS Asia-Pacific (Melbourne) Region. I thought I’d share some brief thoughts here along with a video I did with my colleague Satya.

 

What?

VMware Cloud on AWS is now available to consume in three Availability Zones (apse4-az1, apse4-az2, apse4-az3) in the Melbourne Region. From a host type perspective, you have the option to deploy either I3en.metal or I4i.metal hosts. There is also support for stretched clusters and PCI-DSS compliance if required. The full list of VMware Cloud on AWS Regions and Availability Zones is here.

 

Why Is This Interesting?

Since the launch of VMware Cloud on AWS, customers have only had one choice when it comes to a Region – Sydney. This announcement gives organisations the ability to deploy architectures that can benefit from both increased availability and resiliency by leveraging multi-regional capabilities.

Availability

VMware Cloud on AWS already offers platform availability at a number of levels, including a choice of Availability Zones, Partition Placement groups, and support for stretched clusters across two Availability Zones. There’s also support for VMware High Availability, as well as support for automatically remediating failed hosts.

Resilience

In addition to the availability options customers can take advantage of, VMware Cloud on AWS also provides support for a number of resilience solutions, including VMware Cloud Disaster Recovery (VCDR) and VMware Site Recovery. Previously, customers in Australia and New Zealand were able to leverage these VMware (or third-party) solutions and deploy them across multiple Availability Zones. Invariably, it would look like the below diagram, with workloads hosted in one Availability Zone, and a second Availability Zone being used as the recovery location for those production workloads.

With the introduction of a second Region in A/NZ, customers can now look to deploy resilience solutions that are more like this diagram:

In this example, they can choose to run production workloads in the Melbourne Region and recover workloads into the Sydney Region if something goes pear-shaped. Note that VCDR is not currently available to deploy in the Melbourne Region, although it’s expected to be made available before the end of 2023.

 

Why Else Should I Care?

Data Sovereignty 

There are a variety of legal, regulatory, and administrative obligations governing the access, use, security and preservation of information within various government and commercial organisations in Victoria. These regulations are both national and state-based, and in the case of the Melbourne Region, provide organisations in Victoria the opportunity to store data in VMware Cloud on AWS that may not otherwise have been possible.

Data Locality

Not all applications and data reside in the same location. Many organisations have a mix of workloads residing on-premises and in the cloud. Some of these applications are latency-sensitive, and the launch of the Melbourne Region provides organisations with the ability to host applications closer to that data, as well as access native AWS services with improved responsiveness over applications hosted in the Sydney Region.

 

How?

If you’re an existing VMware Cloud on AWS customer, head over to https://cloud.vmware.com. Log in to the Cloud Services Console. Click on the VMware Cloud on AWS tile. Click on Inventory. Then click on Create SDDC.
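If you'd rather script the deployment than click through the console, the same operation is exposed via the VMware Cloud on AWS REST API. The sketch below is a rough illustration only: the org ID, token, and SDDC parameters are placeholders, and the exact field names and region/instance-type values should be checked against the current API documentation rather than taken from here.

    import requests

    CSP_TOKEN_URL = ("https://console.cloud.vmware.com/csp/gateway/am/api/"
                     "auth/api-tokens/authorize")
    VMC_API = "https://vmc.vmware.com/vmc/api"

    # Exchange a CSP API token (generated in the Cloud Services Console) for an access token.
    resp = requests.post(CSP_TOKEN_URL, data={"refresh_token": "YOUR_CSP_API_TOKEN"})
    resp.raise_for_status()
    headers = {"csp-auth-token": resp.json()["access_token"],
               "Content-Type": "application/json"}

    # Create a small SDDC - field names and values are illustrative, so verify
    # them against the published SDDC config schema before relying on this.
    org_id = "YOUR_ORG_ID"
    sddc_spec = {
        "name": "melbourne-sddc-01",
        "provider": "AWS",
        "region": "ap-southeast-4",
        "num_hosts": 2,
        "host_instance_type": "i4i.metal",
    }
    task = requests.post(f"{VMC_API}/orgs/{org_id}/sddcs",
                         json=sddc_spec, headers=headers)
    print(task.json())  # returns a task object you can poll for provisioning progress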

 

Thoughts

Some of the folks in the US and Europe are probably wondering why on earth this is such a big deal for the Australian and New Zealand market. And plenty of folks in this part of the world are probably not that interested either. Not every organisation is going to benefit from or look to take advantage of the Melbourne Region. Many of them will continue to deploy workloads into one or two of the Sydney-based Availability Zones, with DR in another Availability Zone, and not need to do any more. But for those organisations looking for resiliency across geographical regions, this is a great opportunity to really do some interesting stuff from a disaster recovery perspective. And while it seems delightfully antiquated to think that, in this global world we live in, some information can’t cross state lines, there are plenty of organisations in Victoria facing just that issue, and looking at ways to store that data in a sensible fashion close to home. Finally, we talk a lot about data having gravity, and this provides many organisations in Victoria with the ability to run workloads closer to that centre of data gravity.

If you’d like to hear me talking about this with my learned colleague Satya, you can check out the video here. Thanks to Satya for prompting me to do the recording, and for putting it all together. We’re aiming to do this more regularly on a variety of VMware-related topics, so keep an eye out.

Verity ES Springs Forth – Promises Swift Eradication of Data

Verity ES recently announced its official company launch and the commercial availability of its Verity ES data eradication enterprise software solution. I had the opportunity to speak to Kevin Enders about the announcement and thought I’d briefly share some thoughts here.

 

From Revert to Re-birth?

Revert, a sister company of Verity ES, is an on-site data eradication service provider. It’s also a partner for a number of Storage OEMs.

The Problem

The folks at Revert have had an awful lot of experience with data eradication in big enterprise environments. With that experience, they’d observed a few challenges, namely:

  • The software doing the data eradication was too slow;
  • Eradicating data in enterprise environments introduced particular requirements at high volumes; and
  • Larger capacity HDDs and SSDs were a real problem to deal with.

The Real Problem?

Okay, so the process to get rid of old data on storage and compute devices is a bit of a problem. But what’s the real problem? Organisations need to get rid of end-of-life data – particularly from a legal standpoint – in a more efficient way. Just as data growth continues to explode, so too does the requirement to delete the old data.

 

The Solution

Verity ES was spawned to develop software to solve a number of the challenges Revert were coming across in the field. There are two ways to do it:

  • Eliminate the data destructively (via device shredding / degaussing); or
  • Non-destructively (using software-based eradication).

Why Eradicate?

Why eradicate? It’s a sustainable approach, enables residual value recovery, and allows for asset re-use. But it nonetheless needs to be secure, economical, and operationally simple to do. How does Verity ES address these requirements? It has Product Assurance Certification from ADISA. It’s also developed software that’s more efficient, particularly when it comes to those troublesome high capacity drives.

[image courtesy of Verity ES]

Who’s Buying?

Who’s this product aimed at? Primarily enterprise DC operators, hyperscalers, IT asset disposal companies, and 3rd-party hardware maintenance providers.

 

Thoughts

If you’ve spent any time on my blog you’ll know that I write a whole lot about data protection, and this is probably one of the first times that I’ve written about data destruction as a product. But it’s an interesting problem that many organisations are facing now. There is a tonne of data being generated every day, and some of that data needs to be gotten rid of, either because it’s sitting on equipment that’s old and needs to be retired, or because legislatively there’s a requirement to get rid of the data.

The way we tackle this problem has changed over time too. One of the most popular articles on this blog was about making an EMC CLARiiON CX700 useful again after EMC did a certified erasure on the array. There was no data to be found on the array, but it was able to be repurposed as lab equipment, and enjoyed a few more months of usefulness. In the current climate, we’re all looking at doing more sensible things with our old disk drives, rather than simply putting a bullet in them (except for the Feds – but they’re a bit odd). Doing this at scale can be challenging, so it’s interesting to see Verity ES step up to the plate with a solution that promises to help with some of these challenges. It takes time to wipe drives, particularly when you need to do it securely.

I should be clear that this product doesn’t go out and identify what data needs to be erased – you have to do that through some other tools. So it won’t tell you that a bunch of PII is buried in a home directory somewhere, or sitting in a spot it shouldn’t be. It also won’t go out and dig through your data protection data and tell you what needs to go. Hopefully, though, you’ve got tools that can handle that problem for you. What this solution does seem to do is provide organisations with options when it comes to cost-effective, efficient data eradication. And that’s something that’s going to become crucial as we continue to generate data, need to delete old data, and do so on larger and larger disk drives.

VMware Cloud on AWS – I4i.metal – A Few Notes …

At VMware Explore 2022 in the US, VMware announced a number of new offerings for VMware Cloud on AWS, including a new bare-metal instance type: the I4i.metal. You can read the official blog post here. I thought it would be useful to provide some high-level details and cover some of the caveats that punters should be aware of.

 

By The Numbers

What do you get from a specifications perspective?

  • The CPU is 3rd generation Intel Xeon Ice Lake @ 2.4GHz / Turbo 3.5GHz
  • 64 physical cores, supporting 128 logical cores with Hyper-Threading (HT)
  • 1024 GiB memory
  • 30 TiB NVMe (raw local capacity)
  • Up to 75 Gbps networking speed

So, how does the I4i.metal compare with the i3.metal? You get roughly 2x the compute, storage, and memory, with improved network speed as well.
FAQ Highlights

Can I use custom core counts? Yep, the I4i will support physical custom core counts of 8, 16, 24, 30, 36, 48, and 64.

Is there stretched cluster support? Yes, you can deploy these in stretched clusters (of the same host type).

Can I do in-cluster conversions? Yes, read more about that here.

Other Considerations

Why does the sizer say 20 TiB useable for the I4i? Around 7 TiB is consumed by the cache tier at the moment, so you’ll see different numbers in the sizer. And your useable storage numbers will obviously be impacted by the usual constraints around failures to tolerate (FTT) and RAID settings.
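As a rough worked example of how those numbers interact (my own arithmetic, using commonly quoted vSAN policy overheads, not the sizer's actual logic), the sketch below shows how the 30 TiB of raw NVMe per host shrinks once the cache tier and a storage policy are taken into account.

    RAW_TIB = 30.0      # raw local NVMe per I4i.metal host
    CACHE_TIB = 7.0     # roughly consumed by the cache tier today

    usable_tib = RAW_TIB - CACHE_TIB   # in the same ballpark as the sizer's ~20 TiB figure

    # Approximate raw-consumed-per-usable-TiB for common vSAN policies (assumed values).
    policy_overhead = {
        "FTT=1, RAID-1 (mirroring)": 2.0,
        "FTT=1, RAID-5 (3+1)": 1.33,
        "FTT=2, RAID-6 (4+2)": 1.5,
    }

    for policy, overhead in policy_overhead.items():
        print(f"{policy}: ~{usable_tib / overhead:.1f} TiB effective per host")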
Region Support?

The I4i.metal instances will be available in the following Regions (and Availability Zones):
  • US East (N. Virginia) – use1-az1, use1-az2, use1-az4, use1-az5, use1-az6
  • US West (Oregon) – usw2-az1, usw2-az2, usw2-az3, usw2-az4
  • US West (N. California) – usw1-az1, usw1-az3
  • US East (Ohio) – use2-az1, use2-az2, use2-az3
  • Canada (Central) – cac1-az1, cac1-az2
  • Europe (Ireland) – euw1-az1, euw1-az2, euw1-az3
  • Europe (London) – euw2-az1, euw2-az2, euw2-az3
  • Europe (Frankfurt) – euc1-az1, euc1-az2, euc1-az3
  • Europe (Paris) –  euw3-az1, euw3-az2, euw3-az3
  • Asia Pacific (Singapore) – apse1-az1, apse1-az2, apse1-az3
  • Asia Pacific (Sydney) – apse2-az1, apse2-az2, apse2-az3
  • Asia Pacific (Tokyo) – apne1-az1, apne1-az2, apne1-az4

Other Regions will have availability over the coming months.

 

Thoughts

The i3.metal isn’t going anywhere, but it’s nice to have an option that supports more cores and a bit more storage and RAM. The I4i.metal is great for SQL workloads and VDI deployments where core count can really make a difference. Coupled with the addition of supplemental storage via VMware Cloud Flex Storage and Amazon FSx for NetApp ONTAP, there are some great options available to deal with the variety of workloads customers are looking to deploy on VMware Cloud on AWS.

On another note, if you want to hear more about all the cloudy news from VMware Explore US, I’ll be presenting at the Brisbane VMUG meeting on October 12th, and my colleague Ray will be doing something in Sydney on October 19th. If you’re in the area, come along.

StorONE Announces Per-Drive Licensing Model

StorONE recently announced details of its Per-Drive Licensing Model. I had the opportunity to talk about the announcement with Gal Naor and George Crump about the news and thought I’d share some brief thoughts here.

 

Scale For Free?

Yes, at least from a licensing perspective. If you’ve bought storage from many of the traditional array vendors over the years, you would have likely paid for capacity-based licensing. Every time you upgraded the capacity of your array, there was usually a charge associated with that upgrade, beyond the hardware uplift costs. The folks at StorONE think it’s probably time to stop punishing customers for using higher capacity drives, so they’re shifting everything to a per-drive model.

How it Works

As I mentioned at the start, StorONE Scale-For-Free pricing is on a per-drive basis rather than metered by capacity, so you can use the highest capacity, highest density drives without penalty. The pricing is broken down thusly:

  • Price per HDD $/month
  • Price per SSD $/month
  • Minimum $/month
  • Cloud Use Case – $ per month by VM instance required

The idea is that this ultimately lowers the storage price per TB and brings some level of predictability to storage pricing.
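To illustrate the difference (with entirely made-up dollar figures, not StorONE's price list), compare a capacity-based licence with a per-drive licence as drive sizes grow: the per-TB model costs the same per terabyte no matter what you buy, while the per-drive model gets cheaper per terabyte every time you move to denser drives.

    def cost_per_tb(drive_tb: float, drives: int,
                    per_tb_fee: float = 0.0, per_drive_fee: float = 0.0) -> float:
        """Effective licence cost per TB per month (illustrative figures only)."""
        capacity_tb = drive_tb * drives
        return (capacity_tb * per_tb_fee + drives * per_drive_fee) / capacity_tb

    # Hypothetical: $5/TB/month versus $40/drive/month across a 24-drive shelf.
    for size_tb in (8, 16, 30):
        per_tb = cost_per_tb(size_tb, 24, per_tb_fee=5)
        per_drive = cost_per_tb(size_tb, 24, per_drive_fee=40)
        print(f"{size_tb} TB drives: ${per_tb:.2f}/TB (capacity-based) "
              f"vs ${per_drive:.2f}/TB (per-drive)")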

How?

The key to this model is the availability of some key features in the StorONE solution, namely:

  • A rewritten and collapsed I/O stack (meaning do more with a whole lot less)
  • Auto-tiering improvements (leading to more consistent and predictable performance across HDD and SSD)
  • High performance erasure coding (meaning super fast recovery from drive failure)

 

But That’s Not All

Virtual Storage Containers

With Virtual Storage Containers (VSC), you can apply different data services and performance profiles to different workloads (hosted on the same media) in a granular and flexible fashion. For example, if you need 4 drives and 50,000 IOPS for your File Services, you can do that. In the same environment you might also need to use a few drives for Object storage with different replication. You can do that too.

[image courtesy of StorONE]

Ransomware Detection (and Protection)

StorONE has been pretty keen on its ransomware protection capabilities, with the option to take immutable snapshots on volumes every 30 seconds and store 500,000+ snaps per volume. But it has added in some improved telemetry to enable earlier detection of potential ransomware events on volumes, as well as introducing dual-key deletion of snapshots and improved two-factor authentication.
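As a quick sanity check on what those numbers imply (my arithmetic, not a StorONE claim), snapshots every 30 seconds with a 500,000-snap ceiling works out to several months of 30-second-granularity recovery points on a single volume:

    snap_interval_s = 30
    max_snaps = 500_000

    window_days = (snap_interval_s * max_snaps) / 86_400
    print(round(window_days, 1), "days")   # ~173.6 days of 30-second recovery points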

 

Thoughts

There are many things that are certain in life, including the fact that no matter how much capacity you buy for your storage array on day one, by month 18 you’re looking at ways to replace some of that capacity with higher-capacity drives. In my former life as a diskslinger I helped many customers upgrade their arrays with increased capacity drives, and most, if not all of them, had to pay a licensing bump as well as a hardware cost for the privilege. The storage vendors would argue that that’s just the model, and for as long as you can get away with it, it is. Particularly when hardware is getting cheaper and cheaper, you need something to drive revenue. So it’s nice to see a company like StorONE looking to shake things up a little in an industry that’s probably had its way with customers for a while now. Not every storage vendor is looking to punish customers for expanding their environments, but it’s nice that those customers that were struggling with this have the option to look at other ways of using the capacity they need in a cost-effective and predictable manner.

This doesn’t really work without the other enhancements that have gone into StorONE, such as the improved erasure coding and automated tiering. Having a cool business model isn’t usually enough to deliver a great solution. I’m looking forward to hearing from the StorONE team in the near future about how this has been received by both existing and new customers, and what other innovations they come out with in the next 12 months.

Datadobi Announces StorageMAP

Datadobi recently announced StorageMAP – a “solution that provides a single pane of glass for organizations to manage unstructured data across their complete data storage estate”. I recently had the opportunity to speak with Carl D’Halluin about the announcement, and thought I’d share some thoughts here.

 

The Problem

So what’s the problem enterprises are trying to solve? They have data all over the place, and it’s no longer a simple activity to work out what’s useful and what isn’t. Consider the data on a typical file / object server inside BigCompanyX.

[image courtesy of Datadobi]

As you can see, there’re all kinds of data lurking about the place, including data you don’t want to have on your server (e.g. Barry’s slightly shonky home videos), and data you don’t need any more (the stuff you can move down to a cheaper tier, or even archive for good).

What’s The Fix?

So how do you fix this problem? Traditionally, you’ll try and scan the data to understand things like capacity, categories of data, age, and so forth. You’ll then make some decisions about the data based on that information and take actions such as relocating, deleting, or migrating it. Sounds great, but it’s frequently a tough thing to make decisions about business data without understanding the business drivers behind the data.

[image courtesy of Datadobi]

What’s The Real Fix?

The real fix, according to Datadobi, is to add a bit more automation and smarts to the process, and this relies heavily on accurate tagging of the data you’re storing. D’Halluin pointed out to me that they don’t suggest you create complex tags for individual files, as you could be there for years trying to sort that out. Rather, you add tags to shares or directories, and let the StorageMAP engine make recommendations and move stuff around for you.

[image courtesy of Datadobi]

Tags can represent business ownership, the role of the data, any action to be taken, or other designations, and they’re user definable.

[image courtesy of Datadobi]

How Does This Fix It?

You’ll notice that the process above looks awfully similar to the one before – so how does this fix anything? The key, in my opinion at least, is that StorageMAP takes away the requirement for intervention from the end user. Instead of going through some process every quarter to “clean up the server”, you’ve got a process in place to do the work for you. As a result, you’ll hopefully see improved cost control, better storage efficiency across your estate, and (hopefully) you’ll be getting a little bit more value from your data.
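To make the tagging idea a little more concrete, here's a toy sketch of tag-driven housekeeping. It's purely illustrative and not how StorageMAP is implemented: the tags, thresholds, and actions are all invented for the example, but it shows the general shape of "tag the share once, let a policy engine act on it from then on".

    from dataclasses import dataclass

    @dataclass
    class Share:
        path: str
        owner_tag: str          # e.g. "finance", "marketing"
        role_tag: str           # e.g. "project-data", "scratch"
        days_since_access: int

    def recommend(share: Share) -> str:
        """Toy policy engine: map tags and age to an action (illustrative only)."""
        if share.role_tag == "scratch" and share.days_since_access > 90:
            return "delete"
        if share.role_tag == "project-data" and share.days_since_access > 365:
            return "relocate to archive tier"
        return "keep"

    shares = [
        Share("/corp/finance/q3-models", "finance", "project-data", 400),
        Share("/corp/marketing/render-tmp", "marketing", "scratch", 120),
    ]
    for s in shares:
        print(s.path, "->", recommend(s))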

 

Thoughts

Tools that take care of everything for you have always had massive appeal in the market, particularly as organisations continue to struggle with data storage at any kind of scale. Gone are the days when your admins had an idea where everything on a 9GB volume was stored, or why it was stored there. We now have data stored all over the place (both officially and unofficially), and it’s becoming impossible to keep track of it all.

The key thing to consider with these kinds of solutions is that you need to put in the work with tagging your data correctly in the first place. So there needs to be some thought put into what your data looks like in terms of business value. Remember that mp4 video files might not be warranted in the Accounting department, but your friends in Marketing will be underwhelmed if you create some kind of rule to automatically zap mp4s. The other thing to consider is that you need to put some faith in the system. This kind of solution will be useless if folks insist on not deleting anything, or not “believing” the output of the analytics and reporting. I used to work with customers who didn’t want to trust a vendor’s automated block storage tiering because “what does it know about my workloads?”. Indeed. The success of these kinds of intelligence and automation tools relies to a certain extent on folks moving away from faith-based computing as an operating model.

But enough ranting from me. I’ve covered Datadobi a bit over the last few years, and it makes sense that all of these announcements have finally led to the StorageMAP product. These guys know data, and how to move it.

StorCentric Announces Nexsan Unity NV10000

Nexsan (a StorCentric company) recently announced the Nexsan Unity NV10000. I thought I’d share a few of my thoughts here.

What Is It?

In the immortal words of Silicon Valley: “It’s a box”. But the Nexsan Unity NV10000 is a box with some fairly decent specifications packed into a small form-factor, including support for various 1DWPD NVMe SSDs and the latest Intel Xeon processors.

Protocol Support

Protocol support, as would be expected with the Unity, is broad, with support for File (NFS, SMB), Block (iSCSI, FC), and Object (S3) data storage protocols within the one unified platform.

Performance Enhancements

These were hinted at with the release of Unity 7.0, but the Nexsan Unity NV10000 boosts performance, with bandwidth of up to 25GB/s, enabling you to scale performance up as your application needs evolve.

Other Useful Features

As you’d expect from this kind of storage array, the Nexsan Unity NV10000 also delivers features such as:

  • High availability (HA);
  • Snapshots;
  • ESXi integration;
  • In-line compression;
  • FASTier™ caching;
  • Asynchronous replication;
  • Data at rest encryption; and
  • Storage pool scrubbing to protect against bit rot, avoiding silent data corruption.

Backup Target?

Unity supports a comprehensive host OS matrix and is certified as a Veeam Ready Repository for backups. Interestingly, the Nexsan Unity NV10000 also provides data security, regulatory compliance, and ransomware recoverability. The platform also supports immutable block and file storage and S3 object locking, for backup data that is unchangeable and cannot be encrypted, even by internal bad actors.

Thoughts

I’m not as much of a diskslinger as I used to be, but I’m always interested to hear about what StorCentric / Nexsan has been up to with its storage array releases. It strikes me that the company does well by focussing on those features that customers are looking for (fast storage, peace of mind, multiple protocols) and also by being able to put it in a form-factor that appeals in terms of storage density. While the ecosystem around StorCentric is extensive, it makes sense for the most part, with the various components coming together well to form a decent story. I like that the company has really focussed on ensuring that Unity isn’t just a cool product name, but also a key part of the operating environment that powers the solution.