Brisbane VMUG – November 2022

The November 2022 edition of the Brisbane VMUG meeting will be held on Thursday 24th November at the Cube (QUT) from 5pm – 7pm. It’s sponsored by Pure Storage and promises to be a great afternoon. Register here.

Raise Your Kubernetes Infrastructure Status From Zero to Hero

If your developers or platform architects are asking you for storage features commonly found in your vSphere infrastructure but targeted towards Kubernetes, you are not alone – let Portworx help you go from “I don’t know” to “No problem”!

Locking yourself into a storage solution that is dependent on specific infrastructure is a sure way to reduce efficiency and flexibility for your developers and where their applications can run – Portworx elevates you to “Hero” status by:

  • Providing your team a consistent, cloud native storage layer you can utilise on ANY Kubernetes platform – whether on-premises or in the public cloud
  • Giving you the capability to provide Kubernetes native DR and business continuity not only for your persistent storage, but all of the Kubernetes objects associated with your applications (think SRM and vMSC for Kubernetes!)
  • Enabling you to provide Kubernetes-aware data protection, including ransomware protection and 3-2-1 backup compliance with RBAC roles that can fit the existing policies within your organisation
  • Delighting your developers who need access to modern databases such as Kafka, PostgreSQL, Cassandra, and more by delivering self-service deployments with best practices “built-in”, accelerating development cycles without a dinosaur DBA or the need to learn complex Kubernetes operators
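
To make the “consistent, cloud native storage layer” idea a little more concrete: provisioning Portworx-backed volumes on any Kubernetes platform generally comes down to defining a StorageClass. The sketch below uses the official Kubernetes Python client; the provisioner name and parameters are based on Portworx’s public documentation, but treat the specific values as illustrative rather than a recommended configuration.

```python
# A minimal sketch: defining a Portworx-backed StorageClass with the
# official Kubernetes Python client. Parameter values are illustrative.
from kubernetes import client, config

config.load_kube_config()  # assumes a working kubeconfig

storage_class = client.V1StorageClass(
    metadata=client.V1ObjectMeta(name="px-replicated"),
    provisioner="pxd.portworx.com",  # Portworx CSI provisioner
    parameters={
        "repl": "2",                 # two synchronous replicas of each volume
        "io_profile": "db_remote",   # tune for database-style workloads
    },
    allow_volume_expansion=True,
)

client.StorageV1Api().create_storage_class(storage_class)
```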

Come join us to see how we can create your “Better Together” story with Tanzu and give you the tools and knowledge to bring agility for your developers to your underlying infrastructure for modern applications running on Kubernetes!

Mike Carpendale

Mike joined Pure Storage in April 2021 as the APJ Regions Platform Architect. He has 20+ years’ experience in the industry, ranging from expert-level, hands-on design and management of large-scale on-premises as-a-service offerings underpinned by VMware, to his more recent work in the public cloud.

PIZZA AND NETWORKING BREAK!


This will be followed by:

VMware Session

Peter Hauck – Senior Solutions Engineer

VMware


And we will be finishing off with:


Preparing for VMware Certifications

With job requirements ramping up over the last few years, certifications help you demonstrate your skills and take a step towards better roles. In this Community Session we will help you understand how to prepare for a VMware certification exam and share some useful tips you can use during the exam.

We will talk about:

  • Different types of exams
  • How to schedule an exam
  • Where to get material to study
  • Lessons learned from the field per type of exam

Francisco Fernandez Cardarelli – Senior Consultant (4 x VCIX)


Soft drinks and vBeers will be available throughout the evening! We look forward to seeing you there! Doors open at 5pm. Please make your way to The Cube.

Random Short Take #80

Welcome to Random Short Take #80. Lots of press release news this week and some parochial book recommendations. Let’s get random.

Verity ES Springs Forth – Promises Swift Eradication of Data

Verity ES recently announced its official company launch and the commercial availability of its Verity ES data eradication enterprise software solution. I had the opportunity to speak to Kevin Enders about the announcement and thought I’d briefly share some thoughts here.


From Revert to Re-birth?

Revert, a sister company of Verity ES, is an on-site data eradication service provider. It’s also a partner for a number of Storage OEMs.

The Problem

The folks at Revert have had an awful lot of experience with data eradication in big enterprise environments. With that experience, they’d observed a few challenges, namely:

  • The software doing the data eradication was too slow;
  • Eradicating data in enterprise environments introduced particular requirements at high volumes; and
  • Larger capacity HDDs and SSDs were a real problem to deal with.

The Real Problem?

Okay, so the process of getting rid of old data on storage and compute devices is a bit of a problem. But what’s the real problem? Organisations need to get rid of end-of-life data – particularly from a legal standpoint – in a more efficient way. Just as data growth continues to explode, so too does the requirement to delete the old data.


The Solution

Verity ES was spawned to develop software to solve a number of the challenges Revert were coming across in the field. There are two ways to do it:

  • Eliminate the data destructively (via device shredding / degaussing); or
  • Non-destructively (using software-based eradication).
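
To give a feel for what the software-based path involves (and why speed on high-capacity drives matters so much), here’s a deliberately simplified Python sketch of a multi-pass overwrite. It’s a toy, not Verity ES’s implementation – real eradication products have to deal with SSD wear-levelling, remapped sectors, and the verification and certification requirements that a naive overwrite can’t satisfy.

```python
# Toy sketch of software-based data eradication: overwrite a block
# device with random data, then zeros, then verify the final pass.
# NOT a substitute for a certified eradication product -- SSD
# wear-levelling and remapped sectors defeat naive overwrites.
import os

CHUNK = 4 * 1024 * 1024  # 4 MiB writes

def overwrite_pass(path: str, size: int, pattern: bytes | None) -> None:
    """One full pass over the device; pattern=None means random data."""
    with open(path, "r+b", buffering=0) as dev:
        written = 0
        while written < size:
            n = min(CHUNK, size - written)
            dev.write(os.urandom(n) if pattern is None else pattern[:1] * n)
            written += n
        os.fsync(dev.fileno())

def eradicate(path: str, size: int) -> None:
    overwrite_pass(path, size, None)     # pass 1: random data
    overwrite_pass(path, size, b"\x00")  # pass 2: zeros
    with open(path, "rb", buffering=0) as dev:  # verify the final pass
        remaining = size
        while remaining:
            chunk = dev.read(min(CHUNK, remaining))
            if not chunk:
                raise IOError("device shorter than expected")
            assert chunk == b"\x00" * len(chunk), "verification failed"
            remaining -= len(chunk)
```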

Why Eradicate?

Why eradicate? It’s a sustainable approach, enables residual value recovery, and allows for asset re-use. But it nonetheless needs to be secure, economical, and operationally simple to do. How does Verity ES address these requirements? It has Product Assurance Certification from ADISA. It’s also developed software that’s more efficient, particularly when it comes to those troublesome high capacity drives.

[image courtesy of Verity ES]

Who’s Buying?

Who’s this product aimed at? Primarily enterprise DC operators, hyperscalers, IT asset disposal companies, and 3rd-party hardware maintenance providers.


Thoughts

If you’ve spent any time on my blog you’ll know that I write a whole lot about data protection, and this is probably one of the first times that I’ve written about data destruction as a product. But it’s an interesting problem that many organisations are facing now. There is a tonne of data being generated every day, and some of that data needs to be gotten rid of, either because it’s sitting on equipment that’s old and needs to be retired, or because legislatively there’s a requirement to get rid of the data.

The way we tackle this problem has changed over time too. One of the most popular articles on this blog was about making an EMC CLARiiON CX700 useful again after EMC did a certified erasure on the array. There was no data to be found on the array, but it was able to be repurposed as lab equipment, and enjoyed a few more months of usefulness. In the current climate, we’re all looking at doing more sensible things with our old disk drives, rather than simply putting a bullet in them (except for the Feds – but they’re a bit odd). Doing this at scale can be challenging, so it’s interesting to see Verity ES step up to the plate with a solution that promises to help with some of these challenges. It takes time to wipe drives, particularly when you need to do it securely.

I should be clear that this product doesn’t go out and identify what data needs to be erased – you have to do that through some other tools. So it won’t tell you that a bunch of PII is buried in a home directory somewhere, or sitting in a spot it shouldn’t be. It also won’t go out and dig through your data protection data and tell you what needs to go. Hopefully, though, you’ve got tools that can handle that problem for you. What this solution does seem to do is provide organisations with options when it comes to cost-effective, efficient data eradication. And that’s something that’s going to become crucial as we continue to generate data, need to delete old data, and do so on larger and larger disk drives.

Random Short Take #78

Welcome to Random Short Take #78. We’re hurtling towards the silly season. Let’s get random.

Brisbane VMUG – October 2022

The October 2022 edition of the Brisbane VMUG meeting will be held on Wednesday 12th October at the Cube (QUT) from 5pm – 7pm. It’s sponsored by NetApp and promises to be a great afternoon.

Two’s Company, Three’s a Cloud – NetApp, VMware and AWS

NetApp has had a strategic relationship with VMware for over 20 years, and with AWS for over 10 years. Recently at VMware Explore we made a significant announcement about VMC support for NFS Datastores provided by the AWS FSx for NetApp ONTAP service.

Come and learn about this exciting announcement and more on the benefits of NetApp with VMware Cloud. We will discuss architecture concepts, use cases and cover topics such as migration, data protection and disaster recovery as well as Hybrid Cloud configurations.

There will be a lucky door prize as well as a prize for the best question on the night. Looking forward to seeing you there!

Wade Juppenlatz – Specialist Systems Engineer – QLD/NT

Chris (Gonzo) Gondek – Partner Technical Lead QLD/NT


PIZZA AND NETWORKING BREAK!

This will be followed by:

All the News from VMware Explore – (without the jet lag)

We will cover a variety of cloudy announcements from VMware Explore, including:

  • vSphere 8
  • vSAN 8
  • VMware Cloud on AWS
  • VMware Cloud Flex Storage
  • GCVE, OCVS, AVS
  • Cloud Universal
  • VMware Ransomware Recovery for Cloud DR

Dan Frith – Staff Solutions Architect – VMware Cloud on AWS, VMware


And we will be finishing off with:

Preparing for VMware Certifications

With job requirements ramping up over the last few years, certifications help you demonstrate your skills and take a step towards better roles. In this Community Session we will help you understand how to prepare for a VMware certification exam and share some useful tips you can use during the exam.


We will talk about:

  • Different types of exams
  • How to schedule an exam
  • Where to get material to study
  • Lessons learned from the field per type of exam

Francisco Fernandez Cardarelli – Senior Consultant (4 x VCIX)


Soft drinks and vBeers will be available throughout the evening! We look forward to seeing you there!

Doors open at 5pm. Please make your way to The Atrium, on Level 6.

You can find out more information and register for the event here. I hope to see you there. Also, if you’re interested in sponsoring one of these events, please get in touch with me and I can help make it happen.

Random Short Take #77

Welcome to Random Short Take #77. Spring has sprung. Let’s get random.

Finally, the blog turned 15 years old recently (about a month ago). I’ve been so busy with the day job that I forgot to appropriately mark the occasion. But I thought we should do something. So if you’d like some stickers (I have some small ones for laptops, and some big ones because I can’t measure things properly), send me your address via this contact form and I’ll send you something as a thank you for reading along.

VMware Cloud on AWS – Supplemental Storage – A Few Notes …

At VMware Explore 2022 in the US, VMware announced a number of new offerings for VMware Cloud on AWS, including something we’re calling “Supplemental Storage”. There are some great (official) posts that have already been published, so I won’t go through everything here. I thought it would be useful to provide some high-level details and cover some of the caveats that punters should be aware of.


The Problem

VMware Cloud on AWS has been around for just over 5 years now, and in that time it’s proven to be a popular platform for a variety of workloads, industry verticals, and organisations of all different sizes. However, one of the challenges that a hyper-converged architecture presents is that resource growth is generally linear (depending on the types of nodes you have available). In the case of VMware Cloud on AWS, we (now) have three node types available for use: the I3, I3en, and I4i. Each of these instances provides a fixed amount of CPU, RAM, and vSAN storage for use within your VMC cluster. So when your storage grows past a certain threshold (80%), you need to add an additional node. This is a long-winded way of saying that, even if you don’t need the additional CPU and RAM, you need to add it anyway. To address this challenge, VMware now offers what’s called “Supplemental Storage” for VMware Cloud on AWS. This is ostensibly external datastores presented to the VMC hosts over NFS. It comes in two flavours: Amazon FSx for NetApp ONTAP and VMware Cloud Flex Storage. I’ll cover each in a little more detail below.

[image courtesy of VMware]
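
To put some (illustrative) numbers around that linear growth problem, here’s a quick back-of-the-envelope sketch. The usable capacity per node and the 80% threshold are assumptions for the purpose of the example – actual usable vSAN capacity varies by instance type and storage policy – but it shows how storage demand alone can drive the node count.

```python
# Back-of-the-envelope: how many VMC nodes does storage demand force?
# Numbers are illustrative assumptions, not official VMware sizing.
import math

USABLE_TIB_PER_NODE = 20.0   # assumed usable vSAN capacity per node
THRESHOLD = 0.80             # add a node once utilisation passes 80%

def nodes_for_storage(storage_tib: float) -> int:
    """Smallest cluster size that keeps vSAN below the threshold."""
    return max(2, math.ceil(storage_tib / (USABLE_TIB_PER_NODE * THRESHOLD)))

# 100 TiB of data needs 7 nodes (16 effective TiB per node),
# even if 3 nodes would have been plenty for the CPU and RAM.
print(nodes_for_storage(100))  # -> 7
```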


Amazon FSx for NetApp ONTAP

The first cab off the rank is Amazon FSx for NetApp ONTAP (or FSxN to its friends). This is ONTAP-like storage made available to your VMC environment as a native service. The file system itself is fully customer managed, while the networking that connects it to the SDDC is managed by VMware.

[image courtesy of VMware]

There’s a 99.99% Availability SLA attached to the service. It’s based on NetApp ONTAP, and offers support for:

  • Multi-Tenancy
  • SnapMirror
  • FlexClone

Note that it currently requires VMware Managed Transit Gateway (vTGW) for Multi-AZ deployment (the only deployment architecture currently supported), and it can connect to multiple clusters and SDDCs for scale. You’ll need to be on SDDC version 1.20 (or greater) to leverage this service in your SDDC, and there is currently no support for attachment to stretched clusters. While you can only connect datastores to VMC hosts using NFSv3, there is support for connecting directly to guests via other protocols. More information can be found in the FAQ here. There’s also a simulator you can access here that runs you through the onboarding process.
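
If you’re curious what the AWS side of an FSxN deployment looks like, here’s a hedged boto3 sketch of creating a Multi-AZ FSx for NetApp ONTAP file system. The subnet IDs are placeholders, and note that attaching the resulting file system to your SDDC as an NFS datastore (including the vTGW plumbing) is done through VMware Cloud tooling rather than this API call.

```python
# Sketch: create a Multi-AZ FSx for NetApp ONTAP file system with boto3.
# IDs are placeholders; attaching it to a VMC SDDC as an NFS datastore
# is done through VMware Cloud (vTGW) tooling, not this API.
import boto3

fsx = boto3.client("fsx", region_name="ap-southeast-2")

response = fsx.create_file_system(
    FileSystemType="ONTAP",
    StorageCapacity=1024,                              # GiB
    SubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],  # placeholders
    OntapConfiguration={
        "DeploymentType": "MULTI_AZ_1",  # the Multi-AZ flavour VMC requires
        "PreferredSubnetId": "subnet-aaaa1111",
        "ThroughputCapacity": 512,       # MB/s
    },
)
print(response["FileSystem"]["FileSystemId"])
```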


VMware Cloud Flex Storage

The other option for supplemental storage is VMware Cloud Flex Storage (sometimes referred to as VMC-FS). This is a datastore presented to your hosts over NFSv3.

Overview

VMware Cloud Flex Storage is:

  • A natively integrated cloud storage service for VMware Cloud on AWS that is fully managed by VMware;
  • A cost-effective, multi-cloud storage solution built on SCFS;
  • Delivered via a two-tier architecture for elasticity and performance (AWS S3 and a local NVMe cache); and
  • Integrated with data management capabilities.

In short, VMware has taken a lot of the technology used in VMware Cloud Disaster Recovery (the result of the Datrium acquisition in 2020) and used it to deliver up to 400 TiB of storage per SDDC.

[image courtesy of VMware]

The intent of the solution, at this stage at least, is that it is only offered as a datastore for hosts via NFSv3, rather than other protocols directly to guests. There are some limitations around the supported topologies too, with stretched clusters not currently supported. From a disaster recovery perspective, it’s important to note that VMware Cloud Flex Storage is currently only offered on a single-AZ basis (although the supporting components are spread across multiple Availability Zones), and there is currently no support for VMware Cloud Disaster Recovery co-existence with this solution.
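
The two-tier architecture is easier to picture with a toy model: reads are served from the local NVMe cache where possible and fall back to the S3 tier otherwise. The sketch below is purely my own illustration of that idea – it says nothing about how SCFS is actually implemented.

```python
# Toy model of a two-tier read path (local cache in front of object
# storage). Purely illustrative -- not how SCFS is implemented.
class TwoTierStore:
    def __init__(self, object_store: dict[str, bytes]):
        self.cache: dict[str, bytes] = {}  # stands in for local NVMe
        self.object_store = object_store   # stands in for S3

    def read(self, key: str) -> bytes:
        if key in self.cache:              # fast path: cache hit
            return self.cache[key]
        data = self.object_store[key]      # slow path: fetch from S3
        self.cache[key] = data             # populate cache for next time
        return data

store = TwoTierStore({"block-0": b"hello"})
store.read("block-0")  # miss: fetched from the object tier
store.read("block-0")  # hit: served from the cache tier
```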


Thoughts

I’ve only been at VMware for a short period of time, but I’ve had numerous conversations with existing and potential VMware Cloud on AWS customers looking to solve their storage problems without necessarily putting everything on vSAN. There are plenty of reasons why you wouldn’t want to use vSAN for high-capacity storage workloads, and I believe these two initial solutions go some way towards solving that issue. Many of the caveats wrapped around these two products at General Availability will be removed over time, and the traditional objection that VMware Cloud on AWS isn’t great at high-capacity, cost-effective storage will fall away with them.

Finally, if you’re an existing NetApp ONTAP customer, and you’re thinking about what to do with that Petabyte of unstructured data you have lying about when you move to VMware Cloud on AWS, or you want to take advantage of the sweat equity you’ve poured into managing your ONTAP environment over the years, I think we’ve got you covered as well.

Random Short Take #76

Welcome to Random Short Take #76. Summer’s almost here. Let’s get random.


Random Short Take #75

Welcome to Random Short Take #75. Half the year has passed us by already. Let’s get random.

  • I talk about GiB all the time when sizing up VMware Cloud on AWS for customers, but I should take the time to check in with folks if they know what I’m blithering on about. If you don’t, this explainer from my friend Vincent is easy to follow along with – A little bit about Gigabyte (GB) and Gibibyte (GiB) in computer storage. There’s also a quick numeric illustration after this list.
  • MinIO has been in the news a bit recently, but this article from my friend Chin-Fah is much more interesting than all of that drama – Beyond the WORM with MinIO object storage.
  • Jeff Geerling seems to do a lot of projects that I either can’t afford to do, or don’t have the time to do. Either way, thanks Jeff. This latest one – Building a fast all-SSD NAS (on a budget) – looked like fun.
  • You like ransomware? What if I told you you can have it cross-platform? Excited yet? Read Melissa’s article on Multiplatform Ransomware for a more thorough view of what’s going on out there.
  • Speaking of storage and clouds, Chris M. Evans recently published a series of videos over at Architecting IT where he talks to NetApp’s Matt Watt about the company’s hybrid cloud strategy. You can see it here.
  • Speaking of traditional infrastructure companies doing things with hyperscalers, here’s the July 2022 edition of What’s New in VMware Cloud on AWS.
  • In press release news, Aparavi and Backblaze have joined forces. You can read more about that here.
  • I’ve spent a lot of money over the years trying to find the perfect media streaming device for home. I currently favour the Apple TV 4K, but only because my Boxee Box can’t keep up with more modern codecs. This article on the Best Device for Streaming for Any User – 2022 seems to line up well with my experiences to date, although I admit I haven’t tried the NVIDIA device yet. I do miss playing ISOs over the network with the HD Mediabox 100, but those were simpler times I guess.
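
On the GB / GiB point above: a gigabyte is 10^9 bytes while a gibibyte is 2^30 bytes, and the roughly 7.4% gap between them is exactly the sort of thing that bites you when sizing storage. A quick sketch:

```python
# GB (decimal, 10**9 bytes) vs GiB (binary, 2**30 bytes).
GB = 10**9
GiB = 2**30

print(GiB / GB)           # 1.073741824 -> a GiB is ~7.4% bigger than a GB
print(100 * GB / GiB)     # "100 GB" is only ~93.13 GiB
print(16 * 2**40 / 1e12)  # a "16 TiB" allocation is ~17.59 "decimal" TB
```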

StorONE Announces Per-Drive Licensing Model

StorONE recently announced details of its Per-Drive Licensing Model. I had the opportunity to talk about the announcement with Gal Naor and George Crump about the news and thought I’d share some brief thoughts here.


Scale For Free?

Yes, at least from a licensing perspective. If you’ve bought storage from many of the traditional array vendors over the years, you’ve likely paid for capacity-based licensing. Every time you upgraded the capacity of your array, there was usually a charge associated with that upgrade, beyond the hardware uplift costs. The folks at StorONE think it’s probably time to stop punishing customers for using higher capacity drives, so they’re shifting everything to a per-drive model.

How it Works

As I mentioned at the start, StorONE Scale-For-Free pricing is on a per-drive basis, so you can use the highest capacity, highest density drives without penalty, rather than metering capacity. The pricing is broken down thusly:

  • Price per HDD ($/month)
  • Price per SSD ($/month)
  • Minimum spend ($/month)
  • Cloud use case – $/month by required VM instance

The idea is that this ultimately lowers the storage price per TB and brings some level of predictability to storage pricing.
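
It’s easy to sanity-check the price-per-TB claim with some toy numbers: under per-drive pricing, the effective $/TB falls as drive capacity rises, while capacity-based licensing stays flat. The figures below are made up purely for illustration – they’re not StorONE’s price list.

```python
# Toy comparison: per-drive vs capacity-based licensing.
# All dollar figures are made up for illustration only.
PER_HDD_MONTH = 10.0  # assumed $/drive/month (per-drive model)
PER_TB_MONTH = 1.5    # assumed $/TB/month (capacity model)

for drive_tb in (8, 16, 24):
    drives, total_tb = 24, 24 * drive_tb
    per_drive_cost = drives * PER_HDD_MONTH
    capacity_cost = total_tb * PER_TB_MONTH
    print(f"{drive_tb:>2} TB drives: "
          f"per-drive ${per_drive_cost / total_tb:.3f}/TB/mo, "
          f"capacity ${capacity_cost / total_tb:.2f}/TB/mo")
# Per-drive $/TB falls as drive capacity rises; capacity-based stays flat.
```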

How?

The key to this model is the availability of some key features in the StorONE solution, namely:

  • A rewritten and collapsed I/O stack (meaning do more with a whole lot less)
  • Auto-tiering improvements (leading to more consistent and predictable performance across HDD and SSD)
  • High performance erasure coding (meaning super fast recovery from drive failure)


But That’s Not All

Virtual Storage Containers

With Virtual Storage Containers (VSC), you can apply different data services and performance profiles to different workloads (hosted on the same media) in a granular and flexible fashion. For example, if you need 4 drives and 50,000 IOPS for your File Services, you can do that. In the same environment you might also need to use a few drives for Object storage with different replication. You can do that too.

[image courtesy of StorONE]

Ransomware Detection (and Protection)

StorONE has been pretty keen on its ransomware protection capabilities, with the option to take immutable snapshots of volumes every 30 seconds and store 500,000+ snapshots per volume. It has also added improved telemetry to enable earlier detection of potential ransomware events on volumes, as well as introducing dual-key deletion of snapshots and improved two-factor authentication.
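
It’s worth doing the arithmetic on those snapshot numbers, because the retention window they imply at a 30-second cadence is surprisingly long:

```python
# How far back does 500,000 snapshots at one every 30 seconds reach?
snaps, interval_s = 500_000, 30
seconds = snaps * interval_s
print(seconds / 86_400)  # ~173.6 days of 30-second recovery points
```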


Thoughts

There are many things that are certain in life, including the fact that no matter how much capacity you buy for your storage array on day one, by month 18 you’re looking at ways to replace some of it with higher-capacity drives. In my former life as a diskslinger I helped many customers upgrade their arrays with increased capacity drives, and most, if not all, of them had to pay a licensing bump as well as a hardware cost for the privilege. The storage vendors would argue that that’s just the model, and for as long as you can get away with it, it is. Particularly when hardware is getting cheaper and cheaper, you need something to drive revenue. So it’s nice to see a company like StorONE looking to shake things up a little in an industry that’s probably had its way with customers for a while now. Not every storage vendor is looking to punish customers for expanding their environments, but it’s nice that those customers who were struggling with this have the option to look at other ways of acquiring the capacity they need in a cost-effective and predictable manner.

This doesn’t really work without the other enhancements that have gone into StorONE, such as the improved erasure coding and automated tiering. Having a cool business model isn’t usually enough to deliver a great solution. I’m looking forward to hearing from the StorONE team in the near future about how this has been received by both existing and new customers, and what other innovations they come out with in the next 12 months.
This doesn’t really work without the other enhancements that have gone in to StorONE, such as the improved erasure coding and automated tiering. Having a cool business model isn’t usually enough to deliver a great solution. I’m looking forward to hearing from the StorONE team in the near future about how this has been received by both existing and new customers, and what other innovations they come out with in the next 12 months.