Updated Articles Page

I recently had the opportunity to configure multi-tenancy in my Cohesity lab environment and thought I’d run through the basics. There’s a new document outlining the process on the articles page.

Pure Storage Goes All In On Hybrid … Cloud

I recently had the opportunity to hear from Chadd Kenney about Pure Storage’s Cloud Data Services announcement and thought it worthwhile covering here. But before I get into that, Pure have done a little re-branding recently. You’ll now hear them referring to Cloud Data Infrastructure (their on-premises instances of FlashArray, FlashBlade, FlashStack) and Cloud Data Management (being their Pure1 instances).

 

The Announcement

So what is “Cloud Data Services”? It comprises three offerings, each covered below: Cloud Block Store for AWS, CloudSnap, and StorReduce for AWS.

According to Kenney, “[t]he right strategy is and not or, but the enterprise is not very cloudy, and the cloud is not very enterprise-y”. If you’ve spent time in any IT organisation, you’ll see that there is, indeed, a “Cloud divide” in play. What we’ve seen in the last 5 – 10 years is a marked difference in application architectures, consumption and management, and even storage offerings.

[image courtesy of Pure Storage]

 

Cloud Block Store

The first part of the puzzle is probably the most interesting for those of us struggling to move traditional application stacks to a public cloud solution.

[image courtesy of Pure Storage]

According to Pure, Cloud Block Store offers:

  • High reliability, efficiency, and performance;
  • Hybrid mobility and protection; and
  • Seamless APIs on-premises and cloud.

Kenney likens building a Purity solution on AWS to the approach Pure took in the early days of their existence, when they took off-the-shelf components and used optimised software to make them enterprise-ready. Now they’re doing the same thing with AWS, addressing a number of the shortcomings of the underlying infrastructure through the application of the Purity architecture.

Features

So why would you want to run virtual Pure controllers on AWS? The idea is that Cloud Block Store:

  • Aggregates performance and reliability across many cloud stores;
  • Can be deployed HA across two availability zones (using ActiveCluster);
  • Is always thin, deduplicated, and compressed;
  • Delivers instant space-saving snapshots; and
  • Is always encrypted.

Management and Orchestration

If you have previous experience with Purity, you’ll appreciate that the management and orchestration experience remains the same.

  • Same management, with Pure1 managing on-premises instances and instances in the cloud
  • Consistent APIs on-premises and in cloud (see the sketch after this list)
  • Plugins to AWS and VMware automation
  • Open, full-stack orchestration
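
If you’re wondering what that consistency might look like in practice, here’s a minimal sketch using the open-source purestorage Python REST client. The endpoint addresses and API tokens are placeholders, and I’m assuming (as Pure’s messaging suggests) that Cloud Block Store presents the same REST surface as an on-premises FlashArray.

```python
import purestorage

# Same client, same calls - only the management endpoint differs.
# Addresses and API tokens below are placeholders.
on_prem = purestorage.FlashArray("flasharray.dc.example.com", api_token="ON-PREM-TOKEN")
in_cloud = purestorage.FlashArray("cbs.aws.example.com", api_token="CLOUD-TOKEN")

for array in (on_prem, in_cloud):
    # An identical provisioning workflow, regardless of where Purity runs.
    array.create_volume("app-vol01", "1T")
    print([vol["name"] for vol in array.list_volumes()])
```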

Use Cases

Pure say that you can use this kind of solution in a number of different scenarios, including DR, backup, and migration in and between clouds. If you want to use ActiveCluster between AWS regions, you might have some trouble with latency, but in those cases other replication options are available.

[image courtesy of Pure Storage]

Note that Cloud Block Store is available in a few different deployment configurations (there’s a rough sketch of the production option after this list):

  • Test/Dev – using a single controller instance (EBS can’t be attached to more than one EC2 instance)
  • Production – ActiveCluster (2 controllers, either within or across availability zones)
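
To make the production option a little more concrete, here’s a rough boto3 sketch of standing up two controller instances across Availability Zones. The AMI ID, instance type, and tags are all placeholders (Pure will presumably ship their own deployment tooling), so treat it as illustrative only.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder values - the real controller AMI and sizing come from Pure.
CONTROLLER_AMI = "ami-0123456789abcdef0"
INSTANCE_TYPE = "m5.2xlarge"

# Two controllers in different AZs for an ActiveCluster-style pairing.
for zone in ("us-east-1a", "us-east-1b"):
    ec2.run_instances(
        ImageId=CONTROLLER_AMI,
        InstanceType=INSTANCE_TYPE,
        MinCount=1,
        MaxCount=1,
        Placement={"AvailabilityZone": zone},
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "Role", "Value": f"cbs-controller-{zone}"}],
        }],
    )
```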

 

CloudSnap

Pure tell us that we’ve moved away from “disk to disk to tape” as a data protection philosophy and should now be looking at “Flash to Flash to Cloud”. CloudSnap allows FlashArray snapshots to be easily sent to Amazon S3. Note that you don’t necessarily need FlashBlade in your environment to make this work.

[image courtesy of Pure Storage]

For the moment, this is only certified on AWS.

 

StorReduce for AWS

Pure acquired StorReduce a few months ago and now they’re doing something with it. If you’re not familiar with them, “StorReduce is an object storage deduplication engine, designed to enable simple backup, rapid recovery, cost-effective retention, and powerful data re-use in the Amazon cloud”. You can leverage any array, or existing backup software – it doesn’t need to be a Pure FlashArray.
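
If deduplication engines are new to you, here’s a toy Python illustration of the core idea: store each unique chunk once and keep a recipe of hashes to rebuild the original. Real engines like StorReduce use content-defined, variable-length chunking and persist chunks to object storage rather than to memory, but the principle is the same, and with backup data being as repetitive as it is, it’s easy to see where big reduction claims come from.

```python
import hashlib

def dedupe_chunks(data: bytes, chunk_size: int = 4096):
    """Toy fixed-size-chunk dedupe: keep one copy of each unique chunk,
    plus a recipe of hashes that can reassemble the original object."""
    store, recipe = {}, []
    for offset in range(0, len(data), chunk_size):
        chunk = data[offset:offset + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # only the first copy is kept
        recipe.append(digest)
    return store, recipe

# Highly redundant input (think repeated backups of the same data).
data = b"A" * 4096 * 1000
store, recipe = dedupe_chunks(data)
reduction = 1 - len(store) * 4096 / len(data)
print(f"unique chunks: {len(store)}, reduction: {reduction:.1%}")  # 99.9%
```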

Features

According to Pure, you get a lot of benefits with StorReduce, including:

  • Object fabric – secure, enterprise ready, highly durable cloud object storage;
  • Efficient – Reduces storage and bandwidth costs by up to 97%, enabling cloud storage to cost-effectively replace disk & tape;
  • Fast – Fastest Deduplication engine on the market. 10s of GiB/s or more sustained 24/7;
  • Cloud Native – Native S3 interface enabling openness, integration, and data portability. All Data & Metadata stored in object store;
  • Single namespace – Stores in a single data hub across your data centre to enable fast local performance and global data protection; and
  • Scalability – Software nodes scale linearly to deliver 100s of PBs and 10s of GBs bandwidth.

 

Thoughts and Further Reading

The title of this post was a little misleading, as Pure have been doing various cloud things for some time. But sometimes I give in to my baser instincts and like to try and be creative. It’s fine. In my mind the Cloud Block Store for AWS piece of the Cloud Data Services announcement is possibly the most interesting one. It seems like a lot of companies are announcing these kinds of virtualised versions of their hardware-based appliances that can run on public cloud infrastructure. Some of them are just encapsulated instances of the original code, modified to deal with a VM-like environment, whilst others take better advantage of the public cloud architecture.

So why are so many of the “traditional” vendors producing these kinds of solutions? Well, the folks at AWS are pretty smart, but it’s a generally well understood fact that the enterprise moves at enterprise pace. To that end, they may not be terribly well positioned to spend the time and effort required to refactor their applications to a more cloud-friendly architecture. That hasn’t stopped the CxOs from being convinced that they don’t need their own infrastructure anymore, though. So the operations folks are being pushed to migrate out of their DCs and into public cloud provider infrastructure. The problem is that, if you’ve spent a few minutes looking at what the likes of AWS and GCP offer, you’ll see that they’re not really doing things in the same way that their on-premises comrades are. AWS expects you to replicate your data at an application level, for example, because those EC2 instances will sometimes just up and disappear.

So how do you get around the problem of forcing workloads into public cloud without a lot of the safeguards associated with on-premises deployments? You leverage something like Pure’s Cloud Block Store. It overcomes a lot of the issues associated with just running EC2 on EBS, and has the additional benefit of giving your operations folks a consistent management and orchestration experience. Additionally, you can still do things like run ActiveCluster between and within Availability Zones, so your mission critical internal kitchen roster application can stay up and running when an EC2 instance goes bye bye. You might pay a bit more (or less) than you would with plain EBS, but you’ll get some other features too.

I’ve argued before that if enterprises are really serious about getting into public cloud, they should be looking to work towards refactoring their applications. But I also understand that the reality of enterprise application development means that this type of approach is not always possible. After all, enterprises are (generally) in the business of making money. If you come to them and can’t show exactly how they’d save money by moving to public cloud (and let’s face it, it’s not always an easy argument), then you’ll find it even harder to convince them to undertake significant software engineering efforts simply because the public cloud folks like to do things a certain way. I’m rambling a bit, but my point is that these types of solutions solve a problem that we all wish didn’t exist, but does.

Justin did a great write-up here that I recommend reading. Note that both Cloud Block Store and StorReduce are in Beta with planned general availability in 2019.

OT – I Voted. Now It’s Over To You

Eric Siebert has opened up voting for the Top vBlog 2018. I’m listed on the vLaunchpad and you can vote for me under storage and independent blog categories as well. There are a bunch of great blogs listed on Eric’s vLaunchpad, so if nothing else you may discover someone you haven’t heard of before, and chances are they’ll have something to say that’s worth checking out. If this stuff seems a bit needy, it is. But it’s also nice to have people actually acknowledging what you’re doing. I’m hoping that people find this blog useful, because it really is a labour of love (random vendor t-shirts notwithstanding).

Rubrik Announces Cloud Data Management 5.0 – Drops In A Shedload Of Enhancements

I recently had the opportunity to hear from Chris Wahl about Rubrik CDM 5.0 (codename Andes) and thought it worthwhile covering here.

 

Announcement Summary

  • Instant recovery for Oracle databases;
  • NAS Direct Archive to protect massive unstructured data sets;
  • Microsoft Office 365 support via Polaris SaaS Platform;
  • SAP-certified protection for SAP HANA;
  • Policy-driven protection for Epic EHR; and
  • Rubrik works with Rubrik Datos IO to protect NoSQL databases.

 

New Features and Enhancements

As you can see from the list above, there’s a bunch of new features and enhancements. I’ll try and break down a few of these in the section below.

Oracle Protection

Rubrik have had some level of capability with Oracle protection for a little while now, but things are starting to hot up with 5.0.

  • Simplified configuration (Oracle Auto Protection and Live Mount, Oracle Granular SLA Policy Assignments, and Oracle Automated Instance and Database Discovery)
  • Orchestration of operational and PiT recoveries
  • Increased control for DBAs

NAS Direct Archive

People have lots of data now. Like, a real lot. I don’t know how many Libraries of Congress exactly, but it can be a lot. Previously, you’d have to buy a bunch of Briks to store this data. Rubrik have recognised that this can be a bit of a problem in terms of footprint. With NAS Direct Archive, you can send the data to an “archive” target of your choice. So now you can protect a big chunk of data that goes through the Rubrik environment to an end target such as object storage, public cloud, or NFS. The idea is to reduce the number of Rubrik devices you need to buy. Which seems a bit weird, but their customers will be pretty happy to spend their money elsewhere.

[image courtesy of Rubrik]

It’s simple to get going, requiring little more than the tick of a box to configure. The metadata remains with the Rubrik cluster, and the good news is that nothing changes from the end user recovery experience.
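
Rubrik haven’t published the internals, but conceptually it’s a metadata-local, data-remote split. Here’s a hypothetical sketch (the bucket name and helpers are made up) of why the end user experience doesn’t change: the catalogue lookup stays local, and only the bulk data fetch goes out to the archive target.

```python
import hashlib
import boto3

s3 = boto3.client("s3")
ARCHIVE_BUCKET = "nas-archive-target"  # hypothetical archive target
metadata_index = {}                    # stands in for the cluster's local catalogue

def archive_file(path: str, payload: bytes) -> None:
    """Push file data out to the archive target; keep only metadata locally."""
    key = hashlib.sha256(payload).hexdigest()
    s3.put_object(Bucket=ARCHIVE_BUCKET, Key=key, Body=payload)
    metadata_index[path] = {"key": key, "size": len(payload)}

def restore_file(path: str) -> bytes:
    """Recovery consults the local catalogue first, so the user-facing
    lookup is unchanged; only the data fetch goes remote."""
    entry = metadata_index[path]
    obj = s3.get_object(Bucket=ARCHIVE_BUCKET, Key=entry["key"])
    return obj["Body"].read()
```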

Elastic App Service (EAS)

Rubrik now provides the ability to ingest DBs across a wider spectrum, allowing you to protect more of the DB-based applications you want, not just SQL and Oracle workloads.

SAP HANA Protection

I’m not really into SAP HANA, but plenty of organisations are. Rubrik now offer an SAP-certified solution which, if you’ve had the misfortune of trying to protect SAP workloads before, is kind of a neat feature.

[image courtesy of Rubrik]

SQL Server Enhancements

There have been some nice enhancements with SQL Server protection, including:

  • A Change Block Tracking (CBT) filter driver to decrease backup windows; and
  • Support for group Volume Shadow Copy Service (VSS) snapshots.

So what about Group Backups? The nice thing about these is that you can protect many databases on the same SQL Server. Rather than process each VSS snapshot individually, Rubrik will group the databases that belong to the same SLA Domain and process the snapshots as a batch group. There are a few benefits to this approach (there’s a quick sketch of the idea after this list):

  • It reduces SQL Server overhead and decreases the amount of time a backup takes to complete; and
  • In turn, it allows customers to take more frequent backups of their databases, delivering a lower RPO to the business.
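
Here’s a hypothetical sketch of the batching idea (the database names and snapshot function are made up): grouping by SLA Domain cuts the number of VSS operations from one per database to one per group.

```python
from collections import defaultdict

# (database, SLA Domain) pairs on one SQL Server - names are made up.
databases = [
    ("sales", "gold"), ("hr", "silver"), ("crm", "gold"), ("wiki", "silver"),
]

def snapshot_group(dbs):
    # Stand-in for a single VSS snapshot covering a whole batch of databases.
    print(f"one VSS snapshot for: {', '.join(dbs)}")

# Batch the databases per SLA Domain rather than snapshotting individually.
groups = defaultdict(list)
for db, sla in databases:
    groups[sla].append(db)

for sla, dbs in groups.items():
    snapshot_group(dbs)  # 2 snapshots here instead of 4
```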

vSphere Enhancements

Rubrik have done vSphere things since forever, and this release includes a few nice enhancements, including:

  • Live Mount VMDKs from a Snapshot – providing the option to mount specific VMDKs instead of an entire VM; and
  • After selecting the VMDKs, the user can select a specific compatible VM to attach the mounted VMDKs.

Multi-Factor Authentication

The Rubrik Andes 5.0 integration with RSA SecurID will include RSA Authentication Manager 8.2 SP1+ and RSA SecurID Cloud Authentication Service. Note that CDM will not be supporting the older RADIUS protocol. Enabling this is a two-step process:

  • Add the RSA Authentication Manager or RSA Cloud Authentication Service in the Rubrik Dashboard; and
  • Enable RSA and associate a new or existing local Rubrik user or a new or existing LDAP server with the RSA Authentication Manager or RSA Cloud Authentication Service.

You also get the ability to generate API tokens. Note that if you want to interact with the Rubrik CDM CLI (and have MFA enabled) you’ll need these.
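
As a rough illustration, a script holding a valid API token authenticates with a standard bearer header rather than interactive credentials. The cluster address is a placeholder and the endpoint path is an assumption on my part, so check the Rubrik API documentation for the real details.

```python
import requests

RUBRIK_CLUSTER = "https://rubrik.example.com"  # placeholder address
API_TOKEN = "<generated-api-token>"            # generated via the UI or API

# With MFA enabled, interactive username/password auth won't fly for
# scripts, so requests carry the pre-generated token instead.
resp = requests.get(
    f"{RUBRIK_CLUSTER}/api/v1/cluster/me",     # assumed endpoint path
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    verify=False,  # labs often use self-signed certs; verify in production
)
resp.raise_for_status()
print(resp.json())
```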

Other Bits and Bobs

There are a few other enhancements included, including:

  • Windows Bare Metal Recovery;
  • SLA Policy Advanced Configuration;
  • Additional Reporting and Metrics; and
  • Snapshot Retention Enhancements.

 

Thoughts and Further Reading

Wahl introduced the 5.0 briefing by talking about digital transformation as being, at its core, an automation play. The availability of a bunch of SaaS services can lead to fragmentation in your environment, and legacy technology doesn’t deal well with that fragmentation, which makes transformation hard. Rubrik are positioning themselves as a modern company, well-placed to help you with the challenges of protecting what can quickly become a complex and hard to contain infrastructure. It’s easy to sit back and tell people how transformation can change their business for the better, but these kinds of conversations often gloss over the high levels of technical debt in the enterprise that the business is doing its best to ignore. I don’t really think that transformation is as simple as some vendors would have us believe, but I do support the idea that Rubrik are working hard to make complex concepts and tasks as simple as possible. They’ve dropped a shedload of features and enhancements in this release, and have managed to do so in a way that means you won’t need to install a bunch of new applications to support them, and you won’t need to do a lot to get up and running either. For me, this is the key advantage that the “next generation” data protection companies have over their more mature competitors. If you haven’t been around for decades, you very likely don’t offer support for every platform and application under the sun. You also likely don’t have customers that have been with you for 20 years that you need to support regardless of the official support status of their applications. This gives the likes of Rubrik the flexibility to deliver features as and when customers require them, while still focussing on keeping the user experience simple.

I particularly like the NAS Direct Archive feature, as it shows that Rubrik aren’t simply in this to push a bunch of tin onto their customers. A big part of transformation is about doing things smarter, not just faster. The folks at Rubrik understand that there are other solutions out there that can deliver large capacity solutions for protecting big chunks of data (i.e. NAS workloads), so they’ve focussed on leveraging other capabilities, rather than trying to force their customers to fill their data centres with Rubrik gear. This is the kind of thinking that potential customers should find comforting. I think it’s also the kind of approach that a few other vendors would do well to adopt.

*Update*

Here’re some links to other articles on Andes from other folks I read that you may find useful:

Cloudtenna Announces DirectSearch GA

I’ve covered Cloudtenna in the past and had the good fortune to chat with Aaron Ganek about the general availability of Cloudtenna’s universal search product – DirectSearch. I thought I’d share some of my thoughts here.

 

About Cloudtenna

Cloudtenna are focussed on delivering “[t]urn-key search infrastructure designed specifically for files”. If you think of Elasticsearch as being synonymous with log search, then you might also like to think of Cloudtenna delivering an equivalent capability with file search.

The Challenge

According to Cloudtenna, the problem is that “[e]nterprises can’t keep track of files that are scattered across on-premises, cloud, and SaaS apps” and traditional search is a one-size-fits-all solution. In Cloudtenna’s opinion, though, file search requires personalised search that reflects things such as ACLs. It’s expensive and difficult to scale.

Cloudtenna’s Solution

So what do Cloudtenna do then? The key features are the ability to:

  • Efficiently ingress massive amounts of data
  • Understand and adhere to user permissions
  • Return queries in near real-time
  • Reduce index storage and compute costs

“DirectSearch” is now generally available, and allows for cross-silo search across services such as DropBox, Gmail, Slack, Confluence, and so on. It seems reasonably priced at $10 US per user per month. Note that users who sign up before December 1st 2018 can get a 3-month free trial (no credit card details required).

DirectSearch CORE

In parallel to the release of DirectSearch, Cloudtenna are also announcing DirectSearch CORE – delivered via an OEM Model. I asked Ganek where he thought this kind of solution was a good fit. He told me that he saw it falling into three main categories:

  • Digital workspace category – eg. VMware, Citrix. Companies that want to be able to connect files into virtual digital workspaces;
  • Storage space – large storage vendors with SMB and NFS solutions – they might want to provide a global namespace over those transports; and
  • SaaS collaboration – eg. companies delivering chat, bug tracking, word processing – unify those offerings and give a single view of files.

Cloudtenna describe DirectSearch CORE as a turn-key file search infrastructure offering:

  • Fast query latency;
  • ACL crunching;
  • Deduplication; and
  • Contextual intelligence.

ACLs

One of the big challenges with delivering a solution like DirectSearch is that every data source has its own permissions, and ACL enforcement is a big challenge. Keep in mind that all of these different applications have their own authentication mechanisms, with some using open directory standards and others doing proprietary stuff. And once you have authentication sorted out, you still need to ensure that users only get access to what they’re allowed to see. Cloudtenna tackle this challenge by ingesting “native ACLs” and normalising those ACLs with metadata.
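
To make that a little more concrete, here’s a hypothetical sketch of the normalisation idea. The schema and source mappings are my own invention, not Cloudtenna’s, but they show how very different source ACLs can collapse into one shape that the search layer can filter on per user.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NormalisedAcl:
    """Common shape that source-specific ACLs get mapped into."""
    doc_id: str
    allowed_users: frozenset

def from_slack(channel_members, doc_id):
    # Slack-style: channel membership implies read access.
    return NormalisedAcl(doc_id, frozenset(channel_members))

def from_posix(owner, group_members, doc_id):
    # POSIX-style: owner plus group members can read.
    return NormalisedAcl(doc_id, frozenset([owner, *group_members]))

index = [
    from_slack({"alice", "bob"}, "slack-msg-1"),
    from_posix("carol", {"alice"}, "smb-file-7"),
]

def search(user, hits=index):
    # Personalised search: filter candidate hits by the normalised ACLs.
    return [acl.doc_id for acl in hits if user in acl.allowed_users]

print(search("alice"))  # ['slack-msg-1', 'smb-file-7']
print(search("bob"))    # ['slack-msg-1']
```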

 

Thoughts

Search is hard to do well. You want it to be quick, accurate, and easy to use. You also generally want it to be able to find stuff in all kinds of places. One of the problems with modern infrastructure is that we have access to a whole bunch of content repositories as part of our everyday corporate endeavours. I work with Slack, Dropbox, Box, OneDrive, SharePoint, file servers, Microsoft Teams, iMessage, email, and all kinds of systems as part of my job. I’m the first to admit that I don’t always have a good handle on where some stuff is. And sometimes I use the wrong system because it’s more convenient to access than the correct one is. Now multiply this problem out by the thousands of users in a decent-sized enterprise and you’ve got a recipe for disaster in terms of finding corporate knowledge in a timely fashion. Combine that with billions of files and you’re a passenger on Terry Tate’s pain train. Cloudtenna has quite a job on its hands in terms of delivering on the promise of “[b]ringing order to file chaos”, but if they can do that, it’ll be pretty cool. I’ll be signing up for a trial in the very near future and, if chaotic files aren’t your bag, then maybe you should give it a spin too.

Scale Computing Have Been Busy

I recently had the opportunity to get on a call with Alan Conboy to talk about what’s been happening with Scale Computing lately. It was an interesting chat, as always, and I thought I’d share some of the news here.

 

Detroit Rock City

It’s odd how sometimes I forget that pretty much every type of business in existence uses some form of IT. Arts and performance organisations, such as the Detroit Symphony Orchestra, are no exception. They are also now very happy Scale customers. There’s a YouTube video detailing their experiences that you can check out here.

 

Lenovo Partnership

Scale and Lenovo recently announced a strategic partnership, focussed primarily on edge workloads, with particular emphasis on retail and industrial environments. You can download a solution brief here. This doesn’t mean that Lenovo are giving up on some of their other HCI partnerships, but it does give them a competent partner to attack the edge infrastructure market.

 

GCG, Yeah You Know Me

Grupo Colón Gerena is a Puerto Rico-based “restaurant management company that owns franchises of brands including Wendy’s, Applebee’s, Famous Dave’s, Sizzler’s, Longhorn Steakhouse, Olive Garden and Red Lobster throughout the island”. You may recall Puerto Rico suffered through some pretty devastating weather in 2017 thanks to Hurricane Maria. GCG have been running the bulk of their workload in Google Cloud since just before the event, and are still deciding whether they really want to move it back to an on-premises solution. There’s definitely a good story with Scale delivering workloads from the edge to the core and through to Google Cloud. You can read the full case study here.

 

Thoughts

It’s no big secret that I’m a fan of Scale Computing. And not just because I have an old HC1000 in my office that I fire up every now and then (Collier, I’m still waiting on those SSDs you promised me a few years ago). They are relentlessly focussed on delivering easy to use solutions that work well and deliver great resiliency and performance, particularly in smaller environments. Their DRaaS play, and partnership with Google, has opened up some doors to customers that may not have considered Scale previously. The Lenovo partnership, and success with customers like GCG and DSO, is proof that Scale are doing a lot of good stuff in the HCI space.

Anyone who’s had the good fortune to deal with Scale, from their executives and founders through to their support staff, will tell you that they’re super easy to deal with and pretty good at what they do. It’s great to see them enjoying some success. It strikes me that they go about their business without a lot of the chest beating and carry on associated with some other vendors in the industry. This is a good thing, and I’m looking forward to seeing what comes next for them.

Brisbane VMUG – November 2018


The November 2018 edition of the Brisbane VMUG meeting (and last one of the year) will be held on Tuesday 20th November at Toobirds at 127 Creek Street from 4:30 pm – 6:30 pm. It’s sponsored by Cisco and promises to be a great afternoon.

Here’s the agenda:

  • VMUG Intro
  • VMware Presentation: Workspace ONE UEM Modern Management for Windows 10
  • Cisco Presentation: Cloud First in a Multi-cloud World
  • Q&A
  • Refreshments and drinks.

Cisco have gone to great lengths to make sure this will be a fun and informative session and I’m really looking forward to hearing what they’ve been up to. You can find out more information and register for the event here. I hope to see you there. Also, if you’re interested in sponsoring one of these events, please get in touch with me and I can help make it happen.

Maxta Announces MxIQ

Maxta recently announced MxIQ. I had the opportunity to speak to Barry Phillips (Chief Marketing Officer) and Kiran Sreenivasamurthy (VP, Product Management) and thought I’d share some information from the announcement here. It’s been a while since I’ve covered Maxta, and you can read my previous thoughts on them here.

 

Introducing MxIQ

MxIQ is Maxta’s support and analytics solution and it focuses on four key aspects:

  • Proactive support through data analytics;
  • Preemptive recommendation engine;
  • Capacity and performance trend forecasting; and
  • Resource planning assistance.

Historical data trends for capacity and performance are available, as well as metadata concerning cluster configuration, licensing information, VM inventory and logs.

Architecture

MxIQ is a server-client solution, with the server component currently hosted by Maxta in AWS. This can be decoupled from AWS and hosted in a private DC environment if customers don’t want their data sitting in AWS. The downside of this is that Maxta won’t have visibility into the environment, and you’ll lose a lot of the advantages of aggregated support data and analytics.

[image courtesy of Maxta]

There is a client component that runs on every node in the cluster at the customer site. Note that one agent in each cluster is active, with the other agents communicating with the active agent. From a security perspective, you only need to configure an outbound connection, as the server responds to client requests, but doesn’t initiate communications with the client. This may change in the future as Maxta adds increased functionality to the solution.

From a heartbeat perspective, the agent talks to the server every minute or so. If, for some reason, it doesn’t check in, a support ticket is automatically opened.
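
Maxta haven’t described the mechanics in detail, but server-side it presumably looks something like the sketch below. The miss threshold and the ticketing helper are assumptions on my part.

```python
import time

HEARTBEAT_INTERVAL = 60  # the agent checks in roughly every minute
MISSED_THRESHOLD = 3     # assumed: how many misses before a ticket is raised

last_seen = {}           # cluster_id -> timestamp of last heartbeat

def record_heartbeat(cluster_id: str) -> None:
    """Called by the server whenever a cluster's active agent checks in."""
    last_seen[cluster_id] = time.time()

def check_clusters() -> None:
    """Periodic server-side sweep: open a ticket for any cluster gone quiet."""
    now = time.time()
    for cluster_id, seen in last_seen.items():
        if now - seen > HEARTBEAT_INTERVAL * MISSED_THRESHOLD:
            open_support_ticket(cluster_id)

def open_support_ticket(cluster_id: str) -> None:
    # Hypothetical stand-in for the automated ticket creation.
    print(f"ticket opened: no heartbeat from {cluster_id}")
```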

[image courtesy of Maxta]

Privileges

There are three privilege levels that are available with the MxIQ solution.

  • Customer
  • Partner
  • Admin

Note that Admin (Maxta support) access needs to be approved by the customer.

[image courtesy of Maxta]

The dashboard provides an easy to consume overview of what’s going on with managed Maxta clusters, and you can tell at a glance if there are any problems or areas of concern.

[image courtesy of Maxta]

 

Thoughts

I asked the Maxta team if they thought this kind of solution would result in more work for support staff as there’s potentially more information coming in and more support calls being generated. Their opinion was that, as more and more activities were automated, the workload would decrease. Additionally, logs are collected every four hours. This saves Maxta support staff time chasing environmental information after the first call is logged. I also asked whether the issue resolution was automated. Maxta said it wasn’t right now, as it’s still early days for the product, but that’s the direction it’s heading in.

The type of solution that Maxta are delivering here is nothing new in the marketplace, but that doesn’t mean it’s not valuable for Maxta and their customers. I’m a big fan of adding automated support and monitoring to infrastructure environments. It makes it easier for the vendor to gather information about how their product is being used, and it provides the ability for them to be proactive, and super responsive, to customer issues as they arise.

From what I can gather from my conversation with the Maxta team, it seems like there’s a lot of additional functionality they’ll be looking to add to the product as it matures. The real value of the solution will increase over time as customers contribute more and more telemetry and support data to the environment. This will obviously improve Maxta’s ability to respond quickly to support issues, and, potentially, give them enough information to avoid some of the more common problems in the first place. Finally, the capacity planning feature will no doubt prove invaluable as customers continue to struggle with growth in their infrastructure environments. I’m really looking forward to seeing how this product evolves over time.

NVMesh 2 – A Compelling Sequel From Excelero

The Announcement

Excelero recently announced NVMesh 2 – the next iteration of their NVMesh product. NVMesh is a software-only solution designed to pool NVMe-based PCIe SSDs.

[image courtesy of Excelero]

Key Features

There are three key features that have been added to NVMesh.

  • MeshConnect – adding support for traditional network technologies TCP/IP and Fibre Channel, giving NVMesh the widest selection of supported protocols and fabrics of any software-defined storage platform, alongside the already supported InfiniBand, RoCE v2, RDMA and NVMe-oF.
  • MeshProtect – offering flexible protection levels for differing application needs, including mirrored and parity-based redundancy.
  • MeshInspect – with performance analytics for pinpointing anomalies quickly and at scale.

Performance

Excelero have said that NVMesh delivers “shared NVMe at local performance and 90+% storage efficiency that helps further drive down the cost per GB”.

Protection

There’s also a range of protection options available now. Excelero tell me that you can start at level 0 (no protection, lowest latency) and go all the way to “MeshProtect 10+2 (distributed dual parity)”. This allows customers to “choose their preferred level of performance and protection. [While] Distributing data redundancy services eliminates the storage controller bottleneck.”
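
The raw-capacity arithmetic behind those options is straightforward: an N+M parity scheme leaves you with N/(N+M) of usable capacity. Here’s a quick sketch (noting that vendor efficiency figures may be measured differently):

```python
def usable_fraction(data_units: int, parity_units: int) -> float:
    """Storage efficiency of an N+M erasure/parity scheme."""
    return data_units / (data_units + parity_units)

print(f"mirroring (1+1):  {usable_fraction(1, 1):.0%}")   # 50%
print(f"MeshProtect 10+2: {usable_fraction(10, 2):.0%}")  # 83%
```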

Visibility

One of my favourite things about NVMesh 2 is the MeshInspect feature, with a “built-in statistical collection and display, stored in a scalable NoSQL database”.

[image courtesy of Excelero]

 

Thoughts And Further Reading

Excelero emerged from stealth mode at Storage Field Day 12. I was impressed with their offering back then, and they continue to add features while focussing on delivering top-notch performance via a software-only solution. It feels like there’s a lot of attention on NVMe-based storage solutions, and with good reason. These things can go really, really fast. There are a bunch of startups with an NVMe story, and the bigger players are all delivering variations on these solutions as well.

Excelero seem well placed to capitalise on this market interest, and their decision to focus on a software-only play seems wise, particularly given that some of the standards, such as NVMe over TCP, haven’t been fully ratified yet. This approach will also appeal to the aspirational hyperscalers, because they can build their own storage solution, source their own devices, and still benefit from a fast software stack that can deliver performance in spades. Excelero also supports a wide range of transports now, with the addition of NVMe over FC and TCP support.

NVMesh 2 looks to be smoothing some of the rougher edges that were present with version 1, and I’m pumped to see the focus on enhanced visibility via MeshInspect. In my opinion these kinds of tools are critical to the uptake of solutions such as NVMesh in both the enterprise and cloud markets. The broadening of the connectivity story, as well as the enhanced resiliency options, make this something worth investigating. If you’d like to read more, you can access a white paper here (registration required).

Random Short Take #8

Here are a few links to some news items and other content that might be useful. Maybe.