Brisbane VMUG – November 2018


The November 2018 edition of the Brisbane VMUG meeting (and the last one of the year) will be held on Tuesday 20th November at Toobirds at 127 Creek Street from 4:30 pm to 6:30 pm. It’s sponsored by Cisco and promises to be a great afternoon.

Here’s the agenda:

  • VMUG Intro
  • VMware Presentation: Workspace ONE UEM Modern Management for Windows 10
  • Cisco Presentation: Cloud First in a Multi-cloud world
  • Q&A
  • Refreshments and drinks

Cisco have gone to great lengths to make sure this will be a fun and informative session and I’m really looking forward to hearing what they’ve been up to. You can find out more information and register for the event here. I hope to see you there. Also, if you’re interested in sponsoring one of these events, please get in touch with me and I can help make it happen.

Maxta Announces MxIQ

Maxta recently announced MxIQ. I had the opportunity to speak to Barry Phillips (Chief Marketing Officer) and Kiran Sreenivasamurthy (VP, Product Management) and thought I’d share some information from the announcement here. It’s been a while since I’ve covered Maxta, and you can read my previous thoughts on them here.

 

Introducing MxIQ

MxIQ is Maxta’s support and analytics solution and it focuses on four key aspects:

  • Proactive support through data analytics;
  • Preemptive recommendation engine;
  • Forecast capacity and performance trends; and
  • Resource planning assistance.

Historical data trends for capacity and performance are available, as well as metadata concerning cluster configuration, licensing information, VM inventory and logs.

Architecture

MxIQ is a client-server solution, and the server component is currently hosted by Maxta in AWS. It can be decoupled from AWS and hosted in a private data centre environment if customers don’t want their data sitting in AWS. The downside of this is that Maxta won’t have visibility into the environment, and you’ll lose a lot of the advantages of aggregated support data and analytics.

[image courtesy of Maxta]

There is a client component that runs on every node of the cluster at the customer site. Note that one agent in each cluster is active, and the other agents communicate with the active agent. From a security perspective, you only need to configure an outbound connection, as the server responds to client requests but doesn’t initiate communications with the client. This may change in the future as Maxta adds increased functionality to the solution.

From a heartbeat perspective, the agent talks to the server every minute or so. If, for some reason, it doesn’t check in, a support ticket is automatically opened.
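To make the pattern concrete, here’s a minimal sketch of what an outbound-only heartbeat loop like this might look like. This is my illustration, not Maxta’s implementation; the endpoint URL, payload fields, and exact interval are all assumptions based on the description above.

```python
import time

import requests

# Hypothetical endpoint and payload; Maxta haven't published their API.
HEARTBEAT_URL = "https://mxiq.example.com/api/v1/heartbeat"
INTERVAL_SECONDS = 60  # the agent checks in roughly every minute


def send_heartbeat(cluster_id: str) -> None:
    # Outbound-only: the agent initiates every connection and the server
    # only ever responds; it never reaches back into the customer site.
    response = requests.post(
        HEARTBEAT_URL,
        json={"cluster_id": cluster_id, "timestamp": time.time()},
        timeout=10,
    )
    response.raise_for_status()


if __name__ == "__main__":
    while True:
        try:
            send_heartbeat("cluster-01")
        except requests.RequestException:
            # Nothing to do client-side; a missed check-in window is what
            # triggers the automatic support ticket on the server.
            pass
        time.sleep(INTERVAL_SECONDS)
```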

[image courtesy of Maxta]

Privileges

There are three privilege levels available with the MxIQ solution.

  • Customer
  • Partner
  • Admin

Note that Admin (Maxta support) access needs to be approved by the customer.

[image courtesy of Maxta]

The dashboard provides an easy-to-consume overview of what’s going on with managed Maxta clusters, and you can tell at a glance if there are any problems or areas of concern.

[image courtesy of Maxta]

 

Thoughts

I asked the Maxta team if they thought this kind of solution would result in more work for support staff as there’s potentially more information coming in and more support calls being generated. Their opinion was that, as more and more activities were automated, the workload would decrease. Additionally, logs are collected every four hours. This saves Maxta support staff time chasing environmental information after the first call is logged. I also asked whether the issue resolution was automated. Maxta said it wasn’t right now, as it’s still early days for the product, but that’s the direction it’s heading in.

The type of solution that Maxta are delivering here is nothing new in the marketplace, but that doesn’t mean it’s not valuable for Maxta and their customers. I’m a big fan of adding automated support and monitoring to infrastructure environments. It makes it easier for the vendor to gather information about how their product is being used, and it gives them the ability to be proactive, and super responsive, to customer issues as they arise.

From what I can gather from my conversation with the Maxta team, it seems like there’s a lot of additional functionality they’ll be looking to add to the product as it matures. The real value of the solution will increase over time as customers contribute more and more telemetry and support data to the environment. This will obviously improve Maxta’s ability to respond quickly to support issues, and, potentially, give them enough information to avoid some of the more common problems in the first place. Finally, the capacity planning feature will no doubt prove invaluable as customers continue to struggle with growth in their infrastructure environments. I’m really looking forward to seeing how this product evolves over time.

NVMesh 2 – A Compelling Sequel From Excelero

The Announcement

Excelero recently announced NVMesh 2 – the next iteration of their NVMesh product. NVMesh is a software-only solution designed to pool NVMe-based PCIe SSDs.

[image courtesy of Excelero]

Key Features

There are three key features that have been added to NVMesh.

  • MeshConnect – adding support for traditional network technologies TCP/IP and Fibre Channel alongside the already supported InfiniBand, RoCE v2, RDMA and NVMe-oF, giving NVMesh the widest selection of supported protocols and fabrics among software-defined storage platforms.
  • MeshProtect – offering flexible protection levels for differing application needs, including mirrored and parity-based redundancy.
  • MeshInspect – with performance analytics for pinpointing anomalies quickly and at scale.

Performance

Excelero have said that NVMesh delivers “shared NVMe at local performance and 90+% storage efficiency that helps further drive down the cost per GB”.

Protection

There’s also a range of protection options available now. Excelero tell me that you can start at level 0 (no protection, lowest latency) all the way to “MeshProtect 10+2 (distributed dual parity)”. This allows customers to “choose their preferred level of performance and protection. [While] Distributing data redundancy services eliminates the storage controller bottleneck.”
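As a rough way to reason about those trade-offs, the usable capacity of an n+k parity layout is n/(n+k), while mirroring halves your raw capacity. Here’s a quick back-of-the-envelope sketch (mine, not an Excelero tool):

```python
def usable_fraction(data_stripes: int, parity_stripes: int) -> float:
    """Usable fraction of raw capacity for an n+k parity layout."""
    return data_stripes / (data_stripes + parity_stripes)

# Mirroring (1 data + 1 copy) keeps 50% of raw capacity, while a
# MeshProtect-style 10+2 distributed dual parity layout keeps ~83%.
print(f"Mirrored: {usable_fraction(1, 1):.0%}")   # 50%
print(f"10+2:     {usable_fraction(10, 2):.0%}")  # 83%
```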

Visibility

One of my favourite things about NVMesh 2 is the MeshInspect feature, with a “built-in statistical collection and display, stored in a scalable NoSQL database”.

[image courtesy of Excelero]

 

Thoughts And Further Reading

Excelero emerged from stealth mode at Storage Field Day 12. I was impressed with their offering back then, and they continue to add features while focussing on delivering top-notch performance via a software-only solution. It feels like there’s a lot of attention on NVMe-based storage solutions, and with good reason. These things can go really, really fast. There are a bunch of startups with an NVMe story, and the bigger players are all delivering variations on these solutions as well.

Excelero seem well placed to capitalise on this market interest, and their decision to focus on a software-only play seems wise, particularly given that some of the standards, such as NVMe over TCP, haven’t been fully ratified yet. This approach will also appeal to the aspirational hyperscalers, because they can build their own storage solution, source their own devices, and still benefit from a fast software stack that can deliver performance in spades. Excelero also supports a wide range of transports now, with the addition of NVMe over FC and TCP support.

NVMesh 2 looks to be smoothing some of the rougher edges that were present with version 1, and I’m pumped to see the focus on enhanced visibility via MeshInspect. In my opinion these kinds of tools are critical to the uptake of solutions such as NVMesh in both the enterprise and cloud markets. The broadening of the connectivity story, as well as the enhanced resiliency options, make this something worth investigating. If you’d like to read more, you can access a white paper here (registration required).

Random Short Take #8

Here are a few links to some news items and other content that might be useful. Maybe.

Vembu BDR Suite 4.0 Is Coming

Disclaimer

Vembu are a site sponsor of PenguinPunk.net. They’ve asked me to look at their product and write about it. I’m in the early stages of evaluating the BDR Suite in the lab, but thought I’d pass on some information about their upcoming 4.0 release. As always, if you’re interested in these kinds of solutions, I’d encourage you to do your own evaluation and get in touch with the vendor, as everyone’s situation and requirements are different. I can say from experience that the Vembu sales and support staff are very helpful and responsive, and should be able to help you with any queries. I recently did a brief article on getting started with BDR Suite 3.9.1 that you can download from here.

 

New Features

So what’s coming in 4.0?

Hyper-V Cluster Backup

Vembu will support backing up VMs in a Hyper-V cluster. Even if VMs configured for backup are moved from one host to another, incremental backups will continue without any interruption.

Shared VHDx Backup

Vembu now supports backing up shared VHDx disks in Hyper-V.

CheckSum-based Incrementals

Vembu uses Changed Block Tracking (CBT) for incremental backups. For cases where CBT fails, they will fall back to a checksum-based comparison so that incremental backups can continue without interruption.
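To illustrate the general technique (this is a generic sketch, not Vembu’s code), a checksum-based incremental hashes fixed-size blocks of the current disk image and compares them against the digests recorded at the last backup, transferring only the blocks that differ:

```python
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # 4 MiB blocks, chosen arbitrarily here


def block_checksums(path: str) -> list[str]:
    """Return a SHA-256 digest for each fixed-size block of a disk image."""
    digests = []
    with open(path, "rb") as f:
        while chunk := f.read(BLOCK_SIZE):
            digests.append(hashlib.sha256(chunk).hexdigest())
    return digests


def changed_blocks(previous: list[str], current: list[str]) -> list[int]:
    """Indices of blocks whose checksums differ since the last backup."""
    return [
        i for i, digest in enumerate(current)
        if i >= len(previous) or digest != previous[i]
    ]
```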

Credential Manager

There’s no need to enter credentials every time: Vembu Credential Manager now allows you to manage the credentials of the host and the VMs running on it. This will be particularly handy if you’re doing a lot of application-aware backup job configuration.

 

Thoughts

I had a chance to speak with Vembu about the product’s functionality. There’s a lot to like in terms of breadth of features. I’m interested in seeing how 4.0 goes when it’s released and hope to do a few more articles on the product then. If you’re looking to evaluate the product, this evaluator’s guide is as good a place as any to start. As an aside, Vembu are also offering 10% off their suite this Halloween (until November 2nd) – see here for more details.

For a fuller view of what’s coming in 4.0, you can read Vladan’s coverage here.

Updated Articles Page

I recently had the opportunity to deploy a Vembu BDR 3.9.1 Update 1 appliance and thought I’d run through the basics of getting started. There’s a new document outlining the process on the articles page.

Cohesity Basics – Excluding VMs Using Tags

I’ve been doing some work with Cohesity in our lab and thought it worth covering some of the basic features that I think are pretty neat. In this edition of Cohesity Basics, I thought I’d quickly cover off how to exclude VMs from protection jobs based on assigned tags. In this example I’m using version 6.0.1b_release-20181014_14074e50 (a “feature release”).

 

Process

The first step is to find the VM in vCenter that you want to exclude from a protection job. Right-click on the VM and select Tags & Custom Attributes. Click on Assign Tag.

In the Assign Tag window, click on the New Tag icon.

Assign a name to the new tag, and add a description if that’s what you’re into.

In this example, I’ve created a tag called “COH-Test”, and put it in the “Backup” category.
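If you’d rather script the vCenter side of this, the tag and category can be created programmatically too. Here’s a rough sketch against the vSphere Automation REST API; treat the endpoints and field names as assumptions to verify against your vCenter version, and the hostname and credentials are obviously placeholders.

```python
import requests

VCENTER = "https://vcenter.example.com"  # placeholder vCenter address

session = requests.Session()
session.verify = False  # lab only; use proper certificates in production

# Authenticate and store the API session token.
token = session.post(
    f"{VCENTER}/rest/com/vmware/cis/session",
    auth=("administrator@vsphere.local", "VMware1!"),
).json()["value"]
session.headers["vmware-api-session-id"] = token

# Create the "Backup" category, then the "COH-Test" tag within it.
category_id = session.post(
    f"{VCENTER}/rest/com/vmware/cis/tagging/category",
    json={"create_spec": {
        "name": "Backup",
        "description": "Tags used to drive protection job membership",
        "cardinality": "SINGLE",
        "associable_types": ["VirtualMachine"],
    }},
).json()["value"]

tag_id = session.post(
    f"{VCENTER}/rest/com/vmware/cis/tagging/tag",
    json={"create_spec": {
        "name": "COH-Test",
        "description": "Exclude tagged VMs from Cohesity protection jobs",
        "category_id": category_id,
    }},
).json()["value"]
print(f"Created tag {tag_id} in category {category_id}")
```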

Now go to the protection job you’d like to edit.

Click on the Tag icon on the right-hand side. You can then select the tag you created in vCenter. Note that you may need to refresh your vCenter source for this new tag to be reflected.

When you select the tag, you can choose to Auto Protect or Exclude the VM based on the applied tags.

If you drill in to the objects in the protection job, you can see that the VM I wanted to exclude from this job has been excluded based on the assigned tag.

 

Thoughts

I’ve written enthusiastically about Cohesity’s Auto Protect feature previously. Sometimes, though, you need to exclude VMs from protection jobs. Using tags is a quick and easy way to do this, and it’s something that your virtualisation admin team will be happy to use too.

Elastifile Announces v3.0

Elastifile recently announced version 3.0 of their product. I had the opportunity to speak to Jerome McFarland (VP of Marketing) and thought I’d share some information from the announcement here. If you haven’t heard of them before, “Elastifile augments public cloud capabilities and facilitates cloud consumption by delivering enterprise-grade, scalable file storage in the cloud”.

 

The Announcement

ClearTier

One of the major features of the 3.0 release is “ClearTier”, delivering integration between file and object storage in public clouds. With ClearTier, object storage expands the file system namespace. The cool thing about this is that Elastifile’s ECFS provides transparent read/write access to all data. There’s no need to re-tool applications to take advantage of the improved economics of object storage in the public cloud.

How Does It Work?

All data is accessible through ECFS via a standard NFS mount, and application access to object data is routed automatically. Data tiering occurs automatically according to user-defined policies specifying:

  • Targeted capacity ratio between file and object;
  • Eligibility for data demotion (i.e. minimum time since last access); and
  • Promotion policies controlling the response to object data access.
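Elastifile haven’t published the policy syntax as part of this announcement, so purely as an illustration, a policy covering those three knobs might look something like the following. Every field name here is hypothetical.

```python
# Hypothetical ClearTier tiering policy; all field names are invented
# for illustration and don't reflect Elastifile's actual configuration.
tiering_policy = {
    "name": "demote-cold-data",
    "capacity_ratio": {"file": 0.3, "object": 0.7},         # targeted file:object split
    "demotion": {"min_days_since_last_access": 30},         # eligibility for demotion
    "promotion": {"on_object_access": "promote_to_file"},   # response to object access
}
```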

Bursting

ClearTier gets even more interesting when you combine it with Elastifile’s CloudConnect: use CloudConnect to get data to the public cloud in the first place, and then use ClearTier to push data to object storage.

[image courtesy of Elastifile]

It becomes a simple process, and consists of two steps:

  1. Move on-premises data (from any NAS) to cloud-based object storage using CloudConnect; and
  2. Deploy ECFS with a pointer to the designated object store.

Get Snappy

ClearTier also provides the ability to store snapshots on an object tier. Snapshots occur automatically according to user-defined policies specifying:

  • Data to include;
  • Destination for snapshot (i.e. file storage / object storage); and
  • Schedule for snapshot creation.

The great thing is that all snapshots are accessible through ECFS via the same NFS mount.

 

Thoughts And Further Reading

I was pretty impressed with Elastifile’s CloudConnect solution when they first announced it. When you couple CloudConnect with something like ClearTier, and have it sitting on top of the ECFS foundation, it strikes me as a pretty cool solution. If you’re using applications that rely heavily on NFS, for example, ClearTier gives you a way to leverage the traditionally low cost of cloud object storage with the improved performance of file. I like the idea that you can play with the ratio of file and object, and I’m a big fan of not having to re-tool my file-centric applications to take advantage of object economics. The ability to store a bunch of snapshots on the object tier also adds increased flexibility in terms of data protection and storage access options.

The ability to burst workloads is exactly the kind of technical public cloud use case that we’ve been talking about in slideware for years now. The reality, however, has been somewhat different. It looks like Elastifile are delivering a solution that competes aggressively with some of the leading cloud providers’ object solutions, whilst also giving the storage array vendors, now dabbling in cloud solutions, pause for thought. There are a bunch of interesting use cases, particularly if you need to access a bunch of compute, and large data sets via file-based storage, in a cloud environment for short periods of time. If you’re looking for a cost-effective, scalable storage solution, I think that Elastifile are worth checking out.

Storage Field Day 17 – Wrap-up and Link-o-rama

Disclaimer: I recently attended Storage Field Day 17.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

This is a quick post to say thanks once again to Stephen and Ben, and the presenters at Storage Field Day 17. I had a super fun and educational time. For easy reference, here’s a list of the posts I did covering the events (they may not match the order of the presentations).

Storage Field Day – I’ll Be At Storage Field Day 17

Storage Field Day 17 – (Fairly) Full Disclosure

I Need Something Like Komprise For My Garage

NGD Systems Are On The Edge Of Glory

Intel’s Form Factor Is A Factor

StarWind Continues To Do It Their Way

 

Also, here’s a number of links to posts by my fellow delegates (in no particular order). They’re all very smart people, and you should check out their stuff, particularly if you haven’t before. I’ll attempt to keep this updated as more posts are published. But if it gets stale, the Storage Field Day 17 landing page will have updated links.

 

Max Mortillaro (@DarkkAvenger)

I will be at Storage Field Day 17! Wait what is a “Storage Field Day”?

The Rise of Computational Storage

Komprise: Data Management Made Easy

What future for Intel Optane?

 

Ray Lucchesi (@RayLucchesi)

Screaming IOP performance with StarWind’s new NVMeoF software & Optane SSDs

GreyBeards talk Computational Storage with Scott Shadley VP Marketing NGD Systems

 

Howard Marks (@DeepStorageNet)

 

Arjan Timmerman (@ArjanTim)

EP10 – Computational Storage: A Paradigm Shift In The Storage Industry with Scott Shadley and NGD Systems

 

Aaron Strong (@TheAaronStrong)

Komprise Systems Overview from #SFD17

NGD Systems from #SFD17

StarWind NVMeoF

 

Jeffrey Powers (@Geekazine)

Komprise Transforming Data Management with Disruption at SFD17

Starwind NVMe Over Fabrics for SMB and ROBO at SFD17

NGD Systems Help Make Cat Searches Go Faster with Better Results at SFD17

 

Joe Houghes (@JHoughes)

 

Luigi Danakos (@NerdBlurt)

Tech Stand UP Episode 8 – SFD17 – Initial Thoughts on Komprise Podcast

Tech Stand Up Episode 9 – SFD17 – Initial thoughts NGD Systems Podcast

 

Mark Carlton (@MCarlton1983)

 

Enrico Signoretti (@ESignoretti)

Secondary Storage Is The New Primary

The Era of Composable Storage Infrastructures is Coming

The Fascinating World Of Computational Storage

Secondary Data and Komprise with Krishna Subramanian

 

Jon Hudson (@_Desmoden)

 

[photo courtesy of Ramon]

StarWind Continues To Do It Their Way

Disclaimer: I recently attended Storage Field Day 17.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

 

StarWind recently presented at Storage Field Day 17. You can see their videos from Storage Field Day 17 here, and download a PDF copy of my rough notes from here.

 

StarWind Do All Kinds Of Stuff

I’ve written enthusiastically about StarWind previously. If you’re unfamiliar with them, they have three main focus areas:

They maintain a strict focus on the SMB and Enterprise ROBO markets, and aren’t looking to be the next big thing in the enterprise any time soon.

 

So What’s All This About NVMe [over Fabrics]?

According to Max and the team, NVMe over Fabrics is “the next big thing in [network] storage”. Here’s a photo of Max saying just that.

Why Hate SAS?

It’s not that people hate SAS, it’s just that the SAS protocol was designed for disk, and NVMe was designed for Flash devices.

SAS (iSCSI / iSER)                       | NVMe [over Fabrics]
---------------------------------------- | ----------------------------------------------
Complex driver built around archaic SCSI | Simple driver built around block device (R/W)
Single short queue per controller        | One device = one controller, no bottlenecks
Single short queue per device            | Many long queues per device
Serialised access, locks                 | Non-serialised access, no locks
Many-to-One-to-Many                      | Many-to-Many, true Point-to-Point

 

You Do You, Boo

StarWind have developed their own NVMe SPDK for Windows Server (as Intel doesn’t currently provide one). In early development they had some problems with high CPU overheads. CPU might be a “cheap resource”, but you still don’t want to use up 8 cores dishing out IO for a single device. They’ve managed to move a lot of the work to user space and cut down on core consumption. They’ve also built their own Linux (CentOS) based initiator for NVMe over Fabrics, and they’ve developed an NVMe-oF initiator for Windows by combining a Linux initiator and stub driver in the hypervisor. “We found the elegant way to bring missing SPDK functionality to Windows Server: Run it in a VM with proper OS! First benefit – CPU is used more efficiently”. They’re looking to do something similar with ESXi in the very near future.

 

Thoughts And Further Reading

I like to think of StarWind as the little company from Ukraine that can. They have a long, rich heritage in developing novel solutions to everyday storage problems in the data centre. They’re not necessarily trying to take over the world, but they’ve demonstrated before that they have an ability to deliver solutions that are unique (and sometimes pioneering) in the marketplace. They’ve spent a lot of time developing software storage solutions over the years, so it makes sense that they’d be interested to see what they could do with the latest storage protocols and devices. And if you’ve ever met Max and Anton (and the rest of their team), it makes even more sense that they wouldn’t necessarily wait around for Intel to release a Windows-based SPDK to see what type of performance they could get out of these fancy new Flash devices.

All of the big storage companies are coming out with various NVMe-based products, and a number are delivering NVMe over Fabrics solutions as well. There’s a whole lot of legacy storage that continues to dominate the enterprise and SMB storage markets, but I think it’s clear from presentations such as StarWind’s that the future is going to look a lot different in terms of the performance available to applications (both at the core and edge).

You can check out this primer on NVMe over Fabrics here, and the ratified 1.0a specification can be viewed here. Ray Lucchesi, as usual, does a much better job than I do of explaining things, and shares his thoughts here.