Rubrik CDM 4.1.1 – A Few Notes

Here are a few random notes on things in Rubrik’s Cloud Data Management (CDM) 4.1.1-p4-2319 that I’ve come across in my recent testing in the lab. There’s not enough in each item to warrant a full post, hence the “few notes” format. Note that some of these things have been around for a while; I just wanted to note the specific version of Rubrik CDM I’m working with.

 

Guest OS Credentials

Rubrik uses Guest OS credentials for access to a VM’s operating system. When you add a VM workload to your Rubrik environment, you may see the following message in the logs.

Note that it’s a warning, not an error. You can still back up the VM, just not to the level you might have hoped for. If you want to do a direct restore on a Linux guest, you’ll need an account with write access. For Windows, you’ll need something with administrative access. You could achieve this with either local or domain administrator accounts. This isn’t recommended though, and Rubrik suggests “a credential for a domain level account that has a small privilege set that includes administrator access to the relevant guests”. You could use a number of credentials across multiple groups of machines to reduce (to a small extent) the level of exposure, but there are plenty of CISOs and Windows administrators who are not going to like this approach.

So what happens if you don’t provide the credentials? My understanding is that you can still do file system consistent snapshots (provided you have a current version of VMware Tools installed); you just won’t be able to do application-consistent backups. For your reference, here’s the table from Rubrik discussing the various levels of available consistency.

Inconsistent – A backup that consists of copying each file to the backup target without quiescence.

  • File operations are not stopped
  • The result is inconsistent time stamps across the backup and, potentially, corrupted files

Rubrik usage: Not provided.

Crash consistent – A point-in-time snapshot, but without quiescence.

  • Time stamps are consistent
  • Pending updates for open files are not saved
  • In-flight I/O operations are not completed

The snapshot can be used to restore the virtual machine to the same state that a hard reset would produce.

Rubrik usage: Provided only when:

  • The Guest OS does not have VMware Tools;
  • The Guest OS has an out-of-date version of VMware Tools; or
  • The VM’s Application Consistency was manually set to Crash Consistent in the Rubrik UI.

File system consistent – A point-in-time snapshot with quiescence.

  • Time stamps are consistent
  • Pending updates for open files are saved
  • In-flight I/O operations are completed
  • Application-specific operations may not be completed

Rubrik usage: Provided when the guest OS has an up-to-date version of VMware Tools and application consistency is not supported for the guest OS.

Application consistent – A point-in-time snapshot with quiescence and application-awareness.

  • Time stamps are consistent
  • Pending updates for open files are saved
  • In-flight I/O operations are completed
  • Application-specific operations are completed

Rubrik usage: Provided when the guest OS has an up-to-date version of VMware Tools and application consistency is supported for the guest OS.
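The decision logic in that table boils down to a few conditions. Here’s a quick sketch of my reading of it; the function and its inputs are mine for illustration, not anything from Rubrik’s actual implementation (the “inconsistent” level is excluded, as Rubrik doesn’t provide it).

```python
def consistency_level(has_tools: bool, tools_current: bool,
                      app_consistency_supported: bool,
                      forced_crash_consistent: bool = False) -> str:
    """Sketch of the consistency matrix in the table above."""
    if not has_tools or not tools_current or forced_crash_consistent:
        # No (or stale) VMware Tools, or manually pinned in the Rubrik UI
        return "crash consistent"
    if app_consistency_supported:
        # Quiescence plus application-awareness
        return "application consistent"
    # Quiesced snapshot, but no application hooks for this guest OS
    return "file system consistent"
```

So, for example, a Linux guest with current tools but no supported application consistency would land on `consistency_level(True, True, False)`, i.e. file system consistent.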

 

open-vm-tools

If you’re running something like Debian in your vSphere environment you may have chosen to use open-vm-tools rather than VMware’s package. There’s nothing wrong with this (it’s a VMware-supported configuration), but you’ll see that Rubrik currently has a bit of an issue with it.

It will still back up the VM, just not at the consistency level you may be hoping for. It’s on Rubrik’s list of things to fix. And VMware Tools is still a valid (and arguably preferred) option for supported Linux distributions. The point of open-vm-tools is that appliance vendors can distribute the tools with their VMs without violating licensing agreements.

 

Download Logs

It seems like a simple thing, but I really like the ability to download logs related to a particular error. In this example, I’ve got some issues with a SQL cluster I’m backing up. I can click on “Download Logs” and grab the info I need related to the SLA Activity. It’s a small thing, but it makes wading through logs to identify issues a little less painful.

Rubrik Basics – Multi-tenancy

I’ve been doing some work with Rubrik in our lab and thought it worth covering some of the basic features that I think are pretty neat. In this edition of Rubrik Basics, I thought I’d quickly cover off how to get started with the multi-tenancy feature. You can read a little about it here. And yes, I know, some of the Rubrik documentation doesn’t hyphenate the word. But this is the hill I’m dying on apparently.

 

Multi-tenancy and Role-based Access

Multi-tenancy means a lot of different things to a lot of different people. In the case of Rubrik, multi-tenancy is an extension of the RBAC scheme that enables a central organisation to delegate administrative capabilities to multiple tenant organisations. That is, you’ll likely have one global administrator (probably the managed service provider) looking after the Rubrik environment and carving it up for use by a number of different client organisations (tenants).

Each tenant organisation has a subset of administrative privileges defined by the global organisation. A tenant’s administrative privileges are also specified on a per-organisation basis. The administrators of the tenant can then go and do their thing independently of the cluster administrator. Because Rubrik supports multiple Active Directory domains, you can still use AD authentication on a per-tenant basis.

 

A Rubrik cluster can have one central organisation and any number of tenant organisations. An organisation is a collection of the following elements:

  • Protected objects
  • Replication and archival targets
  • SLA Domains
  • Local users
  • Active Directory users and groups
  • Service credentials
  • Reports
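An organisation, then, is essentially a named bag of those elements plus a privilege subset granted by the global org. A rough sketch of that shape (the types and field names are mine, not Rubrik’s):

```python
from dataclasses import dataclass, field

@dataclass
class Organization:
    """Sketch of an organisation as a collection of the elements listed above."""
    name: str
    is_global: bool = False
    protected_objects: set = field(default_factory=set)
    replication_targets: set = field(default_factory=set)
    archival_targets: set = field(default_factory=set)
    sla_domains: set = field(default_factory=set)
    local_users: set = field(default_factory=set)
    ad_users_and_groups: set = field(default_factory=set)
    service_credentials: set = field(default_factory=set)
    reports: set = field(default_factory=set)
    # Administrative privileges are specified per-organisation by the global org
    privileges: set = field(default_factory=set)

# One central organisation, any number of tenant organisations
global_org = Organization("MSP", is_global=True)
tenant_a = Organization("Tenant-A", privileges={"manage_sla", "restore"})
```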

 

The Impact

SLA Domains are the mechanism used to protect objects in the Rubrik environment. In the case of multi-tenancy, SLA Domains are impacted by virtue of which organisation creates them. If the SLA Domain is created outside of a tenant organisation (and assigned to that organisation), it cannot be altered by the users or AD groups of the tenant organisation. Those that are created within a tenant can be modified by that tenant.

Note also that a Tenant Organisation does not inherit Guest OS Credentials from the Global Organisation. If you want to use the Guest OS Credentials of the global org you’ll need to assign those on a per-tenant basis.
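Those two rules (who gets to edit an SLA Domain, and the non-inheritance of Guest OS Credentials) can be expressed as a couple of simple predicates. A hypothetical sketch of the behaviour as described, not Rubrik’s code:

```python
def tenant_can_modify_sla(sla_created_by_org: str, tenant_org: str) -> bool:
    # SLA Domains created outside the tenant (and then assigned to it) are
    # read-only for the tenant's users and AD groups; those created within
    # the tenant can be modified by that tenant.
    return sla_created_by_org == tenant_org

def effective_guest_os_credentials(tenant_creds: list,
                                   assigned_from_global: list) -> list:
    # A tenant does NOT inherit the global organisation's Guest OS
    # Credentials; the global org must explicitly assign them per tenant.
    return tenant_creds + assigned_from_global
```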

 

Other Thoughts

When it comes to offering products as a service, there’s a bit more to multi-tenancy in terms of network connectivity, reporting, QoS, and other things like that. But the foundation, in my opinion, is the ability to create tenant organisations on the platform and have those remain independent of each other. The key to this is tying multi-tenancy into your RBAC scheme to ensure that the rules of the tenancy are being observed. Once you have that working correctly, it becomes a relatively simple exercise to start to add features to the platform that can take advantage of those rules.

Rubrik introduced multi-tenancy into Rubrik CDM with 4.1, and it seems to be a pretty well thought out implementation. It’s not a feature that enterprise bods are interested in, but it’s certainly something that service providers require to be able to satisfy their customers that the right people will be touching the right stuff. I’m looking forward to testing out some more of these features in the near future.

Cloudistics, Choice and Private Cloud

I’ve had my eye on Cloudistics for a little while now. They published a post recently on virtualisation and private cloud. It makes for an interesting read, and I thought I’d comment briefly and post this article if for no other reason than you can find your way to the post and check it out.

TL;DR – I’m rambling a bit, but it’s not about X versus Y, it’s more about getting your people and processes right.

 

Cloud, Schmoud

There are a bunch of different reasons why you’d want to adopt a cloud operating model, be it public, private or hybrid. These include the ability to take advantage of:

  • On-demand service;
  • Broad network access;
  • Resource pooling;
  • Rapid elasticity; and
  • Measured service, or pay-per-use.

Some of these aspects of cloud can be more useful to enterprises than others, depending in large part on where they are in their journey (I hate calling it that). The thing to keep in mind is that cloud is really just a way of doing things slightly differently to address deficiencies in areas that are normally not tied to one particular piece of technology. What I mean by that is that cloud is a way of dealing with some of the issues that you’ve probably seen in your IT organisation. These include:

  • Poor planning;
  • Complicated network security models;
  • Lack of communication between IT and the business;
  • Applications that don’t scale; and
  • Lack of capacity planning.

Operating Expenditure

These are all difficult problems to solve, primarily because people running IT organisations need to be thinking not just about technology problems, but also about people and business problems. And solving those problems takes resources, something that’s often in short supply. Couple that with the fact that many businesses feel like they’ve been handing out too much money to their IT organisations for years, and you start to understand why many enterprises are struggling to adapt to new ways of doing things.

One thing that public cloud does give you is a way to consume resources via OpEx rather than CapEx. The benefit here is that you’re only consuming what you need, and not paying for the whole thing to be built out on the off chance you’ll use it all over the five-year life of the infrastructure. Private cloud can still provide this kind of benefit to the business via “showback” mechanisms that can really highlight the cost of infrastructure being consumed by internal business units. Everyone has complained at one time or another about the Finance group having 27 test environments; now they can let the executives know just what that actually costs.

Are You Really Cloud Native?

Another issue with moving to cloud is that a lot of enterprises are still looking to leverage Infrastructure-as-a-Service (IaaS) as an extension of on-premises capabilities rather than using cloud-native technologies. If you’ve gone with lift and shift (or “move and improve”) you’ve potentially just jammed a bunch of the same problems you had on-premises in someone else’s data centre. The good thing about moving to a cloud operating model (even if it’s private) is that you’ll get people (hopefully) used to consuming services from a catalogue, and taking responsibility for how much their footprint occupies. But if your idea of transformation is running SQL 2005 on Windows Server 2003 deployed from VMware vRA then I think you’ve got a bit of work to do.

 

Conclusion

As Cloudistics point out in their article, it isn’t really a conversation about virtualisation versus private cloud, as virtualisation (in my mind at least) is the platform that makes a lot of what we do nowadays with private cloud possible. What is more interesting is the private versus public debate. But even that one is no longer as clear cut as vendors would like you to believe. If a number of influential analysts are right, most of the world has started to realise that it’s all about a hybrid approach to cloud. The key benefits of adopting a new way of doing things are more about fixing up the boring stuff, like process. If you think you can get your house in order simply by replacing the technology that underpins it, then you’re in for a tough time.

Rubrik Basics – Role-based Access Control

I’ve been doing some work with Rubrik in our lab and thought it worth covering some of the basic features that I think are pretty neat. In this edition of Rubrik Basics, I thought I’d quickly cover off how to get started with the Role-Based Access Control (RBAC) feature.

 

Roles

The concept of RBAC is not a new one. It is, however, one of the first things that companies with more than one staff member ask for when they have to manage infrastructure. Rubrik uses the concept of Roles to deliver particular access to their environment. The available roles are as follows:

  • Administrator role – Full access to all Rubrik operations on all objects;
  • End User role – For assigned objects: browse snapshots, recover files and Live Mount; and
  • No Access role – Cannot log in to the Rubrik UI and cannot make REST API calls.

The End User role has a set of privileges that align with the requirements of a backup operator role.

Download data from backups – Data download only from assigned object types:

  • vSphere virtual machines
  • Hyper-V virtual machines
  • AHV virtual machines
  • Linux & Unix hosts
  • Windows hosts
  • NAS hosts
  • SQL Server databases
  • Managed volumes

Live Mount or Export virtual machine snapshot – Live Mount or Export a snapshot only from specified virtual machines, and only to specified target locations.

Export data from backups – Export data only from specified source objects.

Restore data over source – Write data from backups to the source location, overwriting existing data, only for assigned objects, and only when ‘Allow overwrite of original’ is enabled for the user account or group account.
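Putting the three roles and the End User privilege set together, the access check conceptually works something like the sketch below. This is my reading of the behaviour described above, with made-up names, not Rubrik’s actual implementation.

```python
# Roles as described above; the End User set approximates a backup operator
ROLE_PRIVILEGES = {
    "administrator": {"*"},  # full access to all operations on all objects
    "end_user": {"download", "live_mount", "export", "restore_over_source"},
    "no_access": set(),      # cannot log in to the UI or make REST API calls
}

def is_allowed(role: str, action: str, obj: str,
               assigned_objects: set, allow_overwrite: bool = False) -> bool:
    privs = ROLE_PRIVILEGES[role]
    if "*" in privs:
        return True
    # End Users only operate on objects explicitly assigned to them
    if action not in privs or obj not in assigned_objects:
        return False
    # 'Restore data over source' additionally requires the
    # 'Allow overwrite of original' flag on the account
    if action == "restore_over_source" and not allow_overwrite:
        return False
    return True
```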

The good news is that Rubrik supports local authentication as well as Active Directory. You can then tie these roles to particular groups within your organisation. You can have more than one domain that you use for authentication, but I’ll cover that in a future post on multi-tenancy.

I don’t believe that the ability to create custom roles is present (at least in the UI). I’m happy for people from Rubrik to correct me if I’ve gotten that wrong.

 

Configuration

Configuring access to the Rubrik environment for users is fairly straightforward. In this example I’ll be giving my domain account access to the Brik as an administrator. To get started, click on the Gear icon in the UI and select Users (under Access Management).

I don’t know who Grant Authorization is in real life, but he’s the guy who can help you out here (my dad jokes are both woeful and plentiful – just ask my children).

In this example I’m granting access to a domain user.

This example also assumes that you’ve added the domain to the appliance in the first place (and note that you can add multiple domains). In the dropdown box, select the domain the user resides in.

You can then search for a name. In this example, the user I’m searching for is danf. Makes sense, if you think about it.

Select the user account and click on Continue.

By default users are assigned No Access. If you have one of these accounts, the UI will let you enter a username and password and then kick you back to the login screen.

If I assign the user the End User role, I can assign access to various objects in the environment. Note that I can also provide access to overwrite original files if required. This is disabled by default.

In this example, however, I’m providing my domain account with full access via the Administrator role. Click on Assign to continue.

I can now log in to the Rubrik UI with my domain user account and do things.

And that’s it. In a future post I’ll be looking in to multi-tenancy and fun things you can do with organisations and multiple access levels.

Pure//Accelerate 2018 – Wrap-up and Link-o-rama

Disclaimer: I recently attended Pure//Accelerate 2018.  My flights, accommodation and conference pass were paid for by Pure Storage via the Analysts and Influencers program. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Here’s a quick post with links to the other posts I did covering Pure//Accelerate 2018, as well as links to other articles related to the event that I found interesting.

 

Gestalt IT Articles

I wrote a series of articles about Pure Storage for Gestalt IT.

Pure Storage – You’ve Come A Long Way

//X Gon Give it to Ya

Green is the New Black

The Case for Data Protection with FlashBlade

 

Event-Related

Here are the posts I did during the show. These were mainly from the analyst sessions I attended.

Pure//Accelerate 2018 – Wednesday General Session – Rough Notes

Pure//Accelerate 2018 – Thursday General Session – Rough Notes

Pure//Accelerate 2018 – Wednesday – Chat With Charlie Giancarlo

Pure//Accelerate 2018 – (Fairly) Full Disclosure

 

Pure Storage Press Releases

Here are some of the press releases from Pure Storage covering the major product announcements and news.

The Future of Infrastructure Design: Data-Centric Architecture

Introducing the New FlashArray//X: Shared Accelerated Storage for Every Workload

Pure Storage Announces AIRI™ Mini: Complete, AI-Ready Infrastructure for Everyone

Pure Storage Delivers Pure Evergreen Storage Service (ES2) Along with Major Upgrade to Evergreen Program

Pure Storage Launches New Partner Program

 

Pure Storage Blog Posts

A New Era Of Storage With NVMe & NVMe-oF

New FlashArray//X Family: Shared Accelerated Storage For Every Workload

Building A Data-Centric Architecture To Power Digital Business

Pure’s Evergreen Delivers Right-Sized Storage, Again And Again And Again

Pure1 Expands AI Capabilities And Adds Full Stack Analytics

 

Conclusion

I had a busy but enjoyable week. I would have liked to get to more of the technical sessions, but being given access to some of the top executives in the company via the Analyst and Influencer Experience was invaluable. Thanks again to Pure Storage (particularly Armi Banaria and Terri McClure) for having me along to the show.

Rubrik Basics – Archival Locations

I’ve been doing some work with Rubrik in our lab and thought it worth covering some of the basic features that I think are pretty neat. In this edition of Rubrik Basics, I thought I’d quickly cover off how to get started with the Archival Locations feature. You can read the datasheet here.

 

Rubrik and Archiving Policies

So what can you do with Archival Locations? Well, the idea is that you can copy data to another location for safe-keeping. Normally this data will live in that location for a longer period than it will in the on-premises Brik you’re using. You might, for example, keep data on your appliance for 30 days, and have archive data living in a cloud location for another 2 years.

 

Archival Location Support

Rubrik supports a variety of Archival Locations, including:

  • Public Cloud: Amazon Web Services S3, S3-IA, S3-RRS and Glacier; Microsoft Azure Blob Storage LRS, ZRS and GRS; Google Cloud Platform Nearline, Coldline, Multi-Regional and Regional; (also includes support for Government Cloud Options in AWS and Azure);
  • Private Cloud (S3 Object Store): Basho Riak, Cleversafe, Cloudian, EMC ECS, Hitachi Content Platform, IIJ GIO, Red Hat Ceph, Scality;
  • NFS: Any NFS v3 Compliant Target; and
  • Tape: All Major Tape Vendors via QStar.

What’s cool is that multiple, active archival locations can be configured for a Rubrik cluster. You can then select an archival location when an SLA policy is created or edited. This is particularly useful when you have a number of different tenants hosted on the same Brik.

 

Setup

To set up an Archival Location, click on the “Gear” icon in the Rubrik interface (in this example I’m using Rubrik CDM 4.1) and select “Archival Locations”.

Click on the + sign.

You can then choose the archival type, selecting from Amazon S3 (or Glacier), Azure, Google Cloud Platform, NFS or Tape (via QStar). In this example I’m setting up an Amazon S3 bucket.

You then need to select the Region and Storage Class, and provide your AWS Access Key, Secret Key and S3 Bucket.

You also need to choose the encryption type. I’m not using an external KMS in our lab, so I’ve used OpenSSL to generate a key using the following command.

Once you run that command, paste the contents of the PEM file.
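The screenshot with the exact OpenSSL invocation isn’t reproduced here; the usual approach is to generate an RSA private key (something like `openssl genrsa -out rubrik_encryption_key.pem 2048` — treat the filename and key size as my assumptions, and check Rubrik’s documentation for the supported key type). Before pasting the PEM contents into the UI, a quick sanity check along these lines can save some head-scratching:

```python
def looks_like_pem_key(pem_text: str) -> bool:
    """Rough check that a blob of text is a complete PEM-encoded private key."""
    lines = [line.strip() for line in pem_text.strip().splitlines()]
    return (len(lines) >= 2
            and lines[0].startswith("-----BEGIN ")
            and lines[0].endswith(" PRIVATE KEY-----")
            and lines[-1].startswith("-----END ")
            and lines[-1].endswith(" PRIVATE KEY-----"))

# e.g. verify the file before copying it into the Rubrik UI:
# with open("rubrik_encryption_key.pem") as f:
#     assert looks_like_pem_key(f.read())
```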

Once you’ve added the location, you’ll see it listed, along with some high level statistics.

Once you have an Archival Location configured, you can add it to existing SLA Domains, or use it when you create a new SLA Domain.

Instant Archive

The Instant Archive feature can also be used to immediately queue a task to copy a new snapshot to a specified archival location. Note that the Instant Archive feature does not change the amount of time that a snapshot is retained locally on the Rubrik cluster. The Retention On Brik setting determines how long a snapshot is kept on the Rubrik cluster.

 

Thoughts

Rubrik’s Data Archival is flexible as well as simple to use. It’s easy to set up and works as promised. There’s a bunch of stuff happening within the Rubrik environment that means you can access protection data across multiple locations as well, so you might find that a combination of a Rubrik Brik and some cheap and deep NFS storage is a good option for storing backup data for an extended period of time. You might also think about using this feature as a way to do data mobility or disaster recovery, depending on the type of disaster you’re trying to recover from.

Updated Articles Page

I recently had the opportunity to deploy a Rubrik r344 4-node appliance and thought I’d run through the basics of the installation. There’s a new document outlining the process on the articles page.

SolarWinds Articles

I’ve been writing some articles over in the SolarWinds Geek Speak community and other areas of the site covering fun things like SNMP, syslog and disaster recovery stuff. You can check them out here.

Syslog – The Blue-Collar Worker Of The Data Center

SNMP – It’s Not a Trap!

This is a Disaster! Knowing When to Call It

Disaster Recovery – How Logging Can Help Ensure You’ll Get There

Disaster Recovery – The Postmortem

Panasas Overview

A good friend of mine is friends with someone who works at Panasas and suggested I might like to hear from them. I had the opportunity to speak to some of the team, and I thought I’d write a brief overview of what they do. Hopefully I’ll have the opportunity to cover them in the future as I think they’re doing some pretty neat stuff.

 

It’s HPC, But Not As You Know It

I don’t often like to include that slide where the vendor compares themselves to other players in the market. In this case, though, I thought Panasas’s positioning of themselves as “commercial” HPC versus the traditional HPC storage (and versus enterprise scale-out NAS) is an interesting one. We talked through this a little, and my impression is that they’re starting to deal more and more with the non-traditional HPC-like use cases, such as media and entertainment, oil and gas, genomics folks, and so forth. A number of these workloads fall outside HPC, in the sense that traditional HPC has lived almost exclusively in government and the academic sphere. The roots are clearly in HPC, but there are “enterprise” elements creeping in, such as ease of use (at scale) and improved management functionality.

[image courtesy of Panasas]

 

Technology

It’s Really Parallel

The real value in Panasas’s offering is the parallel access to the storage. The more nodes you add, the more performance improves. In a serial system, a client accesses data via one node in the cluster, regardless of the number of nodes available. In a parallel system, such as this one, a client accesses data that is spread across multiple nodes.
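The serial-versus-parallel distinction can be illustrated with a toy striped read. Here the “storage nodes” are just dictionaries and the parallelism is a thread pool — purely a sketch of the access pattern, nothing to do with Panasas’s actual implementation:

```python
from concurrent.futures import ThreadPoolExecutor

# Toy cluster: a file's stripes are spread across several storage nodes
nodes = [{"stripe-0": b"par"}, {"stripe-1": b"all"}, {"stripe-2": b"el!"}]

def fetch(stripe_id):
    # Find the node holding this stripe and read it
    return next(n[stripe_id] for n in nodes if stripe_id in n)

def read_serial(stripe_ids):
    # Serial access: one request at a time through a single path
    return b"".join(fetch(s) for s in stripe_ids)

def read_parallel(stripe_ids):
    # Parallel access: all stripe reads issued concurrently, so adding
    # storage nodes adds aggregate bandwidth
    with ThreadPoolExecutor(max_workers=len(nodes)) as pool:
        return b"".join(pool.map(fetch, stripe_ids))
```

Both paths reassemble the same file; the difference is that the parallel path fans the reads out to every node holding a stripe, which is where the scaling comes from.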

 

What About The Hardware?

The current offering from Panasas is called ActiveStor. The platform comprises PanFS running on Director Blades and Storage Blades. Here’s a picture of the Director Blades (ASD-100) and the Storage Blades (ASH-100). The Director has been transitioned to a 2U4N form factor (it used to sit in the blade chassis).

[image courtesy of Panasas]

 

Director Nodes are the Control Plane of PanFS, and handle:

  • Metadata processing: directories, file names, access control checks, timestamps, etc.
  • Use of a transaction log to ensure atomicity and durability of structural changes
  • Coordination of client system actions to ensure single-system view and data-cache-coherence
  • “Realm” membership (Panasas’s name for the storage cluster), realm self-repair, etc.
  • Realm maintenance: file reconstruction, automatic capacity balancing, scrubbing, etc.

Storage Nodes are the Data Plane of PanFS, and deal with:

  • Storage of bulk user data for the realm, accessed in parallel by client systems
  • Also stores, but does not operate on, all the metadata of the system for the Director Nodes
  • API based upon the T10 SCSI “Object-Based Storage Device” that Panasas helped define

Storage nodes offer a variety of HDD (4TB, 6TB, 8TB, 10TB, or 12TB) and SSD capacities (480GB, 960GB, 1.9TB) depending on the type of workload you’re dealing with. The SSD is used for metadata and files smaller than 60KB. Everything else is stored on the larger drives.
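That placement policy is simple enough to state in code. A one-liner sketch of the rule as described to me (the 60KB threshold is from the conversation above; the function is mine):

```python
SMALL_FILE_THRESHOLD = 60 * 1024  # 60KB, per the description above

def placement(kind: str, size_bytes: int) -> str:
    """Metadata and small files land on SSD; bulk data goes to the big HDDs."""
    if kind == "metadata" or size_bytes < SMALL_FILE_THRESHOLD:
        return "ssd"
    return "hdd"
```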

 

DirectFlow Protocol

DirectFlow is a big part of what differentiates Panasas from your average scale-out NAS offering. It does some stuff that’s pretty cool, including:

  • Support for parallel delivery of data to / from Storage Nodes
  • Support for fully POSIX-compliant semantics, unlike NFS and SMB
  • Support for strong data cache-coherency across client systems

It’s a proprietary protocol between clients and ActiveStor components, and there’s an installable kernel module for each client system (Linux and macOS). They tell me that pNFS is based upon DirectFlow, and they had a hand in defining pNFS.

 

Resilience

Scale out NAS is exciting but us enterprise types want to know about resilience. It’s all fun and games until someone fat fingers a file, or a disk dies. Well, Panasas, as it happens, have a little heritage when it comes to disk resilience. They use an N + 2 RAID 6 (10 wide + P & Q). You could have more disks working for you, but this number seems to work best for Panasas customers. In terms of realms, there are 3, 5 or 7 “rep sets” per realm. There’s also a “realm president”, and every Director has a backup director. On top of that:

  • Per-file erasure coding of striped files allows the whole cluster to help rebuild a file after a failure;
  • Only need to rebuild data protection on specific files instead of entire drive(s); and
  • The percentage of files in the cluster affected by any given failure approaches zero at scale.
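The “approaches zero at scale” claim follows from the stripe width being fixed while the drive count grows. A back-of-the-envelope sketch (the 12-drive stripe is the 10 wide + P & Q figure above; uniformly random stripe placement is my simplifying assumption):

```python
def expected_fraction_affected(stripe_width: int, total_drives: int) -> float:
    # With stripes placed uniformly at random, the chance that any one file
    # has a component on a given failed drive is stripe_width / total_drives.
    return stripe_width / total_drives

# 10 data + 2 parity, across ever-larger clusters
for drives in (24, 120, 1200):
    fraction = expected_fraction_affected(12, drives)
    print(f"{drives} drives: {fraction:.1%} of files touched by one failure")
```

So a single drive failure in a 24-drive realm touches roughly half the files, but at 1,200 drives only about 1% need any rebuild work, and the whole cluster shares that work.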

 

Thoughts and Further Reading

I’m the first to admit that my storage experience to date has been firmly rooted in the enterprise space. But, much like my fascination with infrastructure associated with media and entertainment, I fancy myself as an HPC-friendly storage guy. This is for no other reason than I think HPC workloads are pretty cool, and they tend to scale beyond what I normally see in the enterprise space (keeping in mind that I work in a smallish market). You say genomics to someone, or AI, and they’re enthusiastic about the outcomes. You say SQL 2012 to someone and they’re not as interested.

Panasas are positioning themselves as being suitable, primarily, for commercial HPC storage requirements. They have a strong heritage with traditional HPC workloads, and they seem to have a few customers using their systems for more traditional, enterprise-like NAS deployments as well. This convergence of commercial HPC, traditional and enterprise NAS requirements has presented some interesting challenges, but it seems like Panasas have addressed those in the latest iteration of their hardware. Dealing with stonking great big amounts of data at scale is a challenge for plenty of storage vendors, but Panasas have demonstrated an ability to adapt to the evolving demands of their core market. I’m looking forward to seeing the next incarnation of their platform, and how they incorporate technologies such as InfiniBand into their offering.

There’s a good white paper available on the Panasas architecture that you can access here (registration required). El Reg also has some decent coverage of the current hardware offering here.

SwiftStack Announces 1space

SwiftStack recently announced 1space, and I was lucky enough to snaffle some time with Joe Arnold to talk more about what it all means. I thought it would be useful to do a brief post, as I really do like SwiftStack, and I feel like I don’t spend enough time talking about them.

 

The Announcement

So what exactly is 1space? It’s basically SwiftStack delivering access to their storage across both on-premises and public cloud. But what does that mean? Well, you get some cool features as a result, including:

  • Integrated multi-cloud access
  • Scale-out & high-throughput data movement
  • Highly reliable & available policy execution
  • Policies for lifecycle, data protection & migration
  • Optional, scale-out containers with AWS S3 support
  • Native access in public cloud (direct to S3, GCS, etc.)
  • Data created in public cloud accessible on-premises
  • Native format enabling cloud-native services

[image courtesy of SwiftStack]

According to Arnold, one of the really cool things about this is that it “provides universal access over both file protocols and object APIs to a single storage namespace, it is increasingly used for distributed workflows across multiple geographic regions and multiple clouds”.

 

Metadata Search

But wait …

One of the really nice things that SwiftStack has done is add integrated metadata search via a desktop client for Windows, macOS, and Linux. It’s called MetaSync.

 

Thoughts

This has been a somewhat brief post, but something I did want to focus on was the fact that this product has been open-sourced. SwiftStack have been pretty keen on open source as a concept, and I think that comes through when you have a look at some of their contributions to the community. These contributions shouldn’t be underestimated, and I think it’s important that we call out when vendors are contributing to the open source community. Let’s face it, a whole lot of startups are taking advantage of code generated by the open source community, and a number of them have the good sense to know that it’s most certainly a two-way street, and they can’t relentlessly pillage the community without it eventually falling apart.

But this announcement isn’t just me celebrating the contributions of neckbeards from within the vendor community and elsewhere. SwiftStack have delivered something that is really quite cool. In much the same way that storage types won’t shut up about NVMe over Fabrics, cloud folks are really quite enthusiastic about the concept of multi-cloud connectivity. There are a bunch of different use cases where it makes sense to leverage a universal namespace for your applications. If you’d like to see SwiftStack in action, check out this YouTube channel (there’s a good video about 1space here) and if you’d like to take SwiftStack for a spin, you can do that here.