Brisbane VMUG – New Year VMUG Beers

Now that the holiday season is over, Brisbane VMUG would like to say thank you to the community and sponsors who supported them as they got back into in-person meetings last year. They’ve secured The Terrace at QUT from 2pm until 5pm on Friday 17th February, and would like to invite you to join them for some drinks, nibbles and networking.

There will be some prize giveaways and an opportunity to chill out and mingle with like-minded people from the vCommunity.

Please register here!

Random Short Take #83

Welcome to Random Short Take #83. Quite a few press releases in this one, so let’s get random.

Random Short Take #82

Happy New Year (to those who celebrate). Let’s get random.

Random Short Take #81

Welcome to Random Short Take #81. Last one for the year, because who really wants to read this stuff over the holiday season? Let’s get random.

Take care of yourselves and each other, and I’ll hopefully see you all on the line or in person next year.

VMware Cloud on AWS – TMCHAM – Part 8 – TRIM/UNMAP

In this edition of Things My Customers Have Asked Me (TMCHAM), I’m going to delve into some questions around TRIM/UNMAP and capacity reclamation on the VMware-managed VMware Cloud on AWS platform.

 

Why TRIM/UNMAP?

TRIM/UNMAP, in short, is the capability for operating systems to reclaim space that’s no longer in use on thin-provisioned filesystems. Why is this important? Imagine you have a thin-provisioned volume that has 100GB of capacity allocated to it. It consumes maybe 1GB when it’s first deployed. You then add 50GB of data to it, and later delete that 50GB of data from the volume. You’ll still see 51GB of capacity being consumed on the filesystem. This is because older operating systems just mark the blocks as deleted, but don’t zero them out. Modern operating systems do support TRIM/UNMAP, but the hypervisor needs to understand the commands being sent to it. You can read more on that here.
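To make that arithmetic concrete, here’s a minimal, purely illustrative Python sketch of thin-provisioned block accounting. The ThinVolume class and the numbers in it are made up for this example and don’t represent any VMware or guest OS API; the point is simply that deletions without TRIM/UNMAP leave the backing allocation untouched.

```python
# Toy model of thin-provisioned block accounting (illustrative only).
class ThinVolume:
    def __init__(self, provisioned_gib, initial_gib=0):
        self.provisioned = provisioned_gib
        self.allocated = initial_gib   # GiB actually backed by the datastore
        self.live_data = initial_gib   # GiB the guest filesystem considers in use

    def write(self, gib):
        self.live_data += gib
        self.allocated += gib          # new writes force new allocations

    def delete(self, gib, trim=False):
        self.live_data -= gib
        if trim:
            self.allocated -= gib      # UNMAP tells the hypervisor the blocks are free
        # without TRIM the guest just marks blocks as deleted; allocation is unchanged

vol_no_trim = ThinVolume(100, initial_gib=1)
vol_no_trim.write(50)
vol_no_trim.delete(50, trim=False)
print(vol_no_trim.allocated)   # 51 -> capacity still consumed on the datastore

vol_trim = ThinVolume(100, initial_gib=1)
vol_trim.write(50)
vol_trim.delete(50, trim=True)
print(vol_trim.allocated)      # 1 -> the hypervisor can reclaim the freed blocks
```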

How Do I Do This For VMware Cloud on AWS?

You can contact your account team, and we’ll raise a ticket to get the feature enabled. We had some minor issues recently that meant we weren’t enabling the feature, but if you’re running M16v12 or M18v5 (or above) on your SDDCs, you should be good to go. Note that this feature is enabled on a per-cluster basis, and you’ll need to reboot the VMs in the cluster for it to take effect.

What About Migrating With HCX?

Do the VMs come across thin? Do you need to reclaim space first? If you’re using HCX to go from thick to thin, you should be fine. If you’re migrating thin to thin, it’s worth checking whether you have any space reclamation in place on the source side. I’ve had customers report that some environments migrated across with higher than expected storage usage because no space reclamation was happening on the source storage environment. You can use something like Live Optics to report on capacity consumed vs allocated, and how much capacity can be reclaimed.
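As a back-of-the-envelope illustration of that consumed vs allocated comparison, here’s a short Python sketch. The VM names and figures below are entirely hypothetical; in practice you’d pull these numbers from a tool like Live Optics or your source array’s own reporting rather than hard-coding them.

```python
# Hypothetical per-VM figures (GiB). In reality these would come from a
# reporting tool such as Live Optics or the source array's statistics.
source_vms = [
    {"name": "app01", "guest_used": 120, "source_consumed": 310},
    {"name": "db01",  "guest_used": 450, "source_consumed": 720},
    {"name": "web01", "guest_used": 40,  "source_consumed": 45},
]

total_reclaimable = 0
for vm in source_vms:
    # space the source array is still holding for blocks the guest no longer uses
    reclaimable = max(vm["source_consumed"] - vm["guest_used"], 0)
    total_reclaimable += reclaimable
    print(f'{vm["name"]}: ~{reclaimable} GiB potentially reclaimable')

print(f"Total potentially reclaimable before migration: ~{total_reclaimable} GiB")
```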

Why Isn’t This Enabled By Default?

I don’t know for sure, but I imagine it has something to do with the fact that TRIM/UNMAP can have a performance impact from a latency perspective, depending on the workloads running in the environment and the amount of capacity being reclaimed at any given time. We recommend that you “schedule large space reclamation jobs during off-peak hours to reduce any potential impact”. Given that VMware Cloud on AWS is a fully-managed service, I imagine we want to control as many of the performance variables as possible to ensure our customers enjoy a reliable and stable platform. That said, TRIM/UNMAP is a really useful feature, and you should look at getting it enabled if you’re concerned about the potential for wasted capacity in your SDDC.

Brisbane VMUG – November 2022

The November 2022 edition of the Brisbane VMUG meeting will be held on Thursday 24th November at The Cube (QUT) from 5pm to 7pm. It’s sponsored by Pure Storage and promises to be a great evening. Register here.

Raise Your Kubernetes Infrastructure Status From Zero to Hero

If your developers or platform architects are asking you for storage features commonly found in your vSphere infrastructure but targeted towards Kubernetes, you are not alone – let Portworx help you go from “I don’t know” to “No problem”!

Locking yourself into a storage solution that is dependent on specific infrastructure is a sure way to reduce efficiency and flexibility for your developers and where their applications can run – Portworx elevates you to “Hero” status by:

  • Providing your team a consistent, cloud native storage layer you can utilise on ANY Kubernetes platform – whether on-premises or in the public cloud
  • Giving you the capability to provide Kubernetes native DR and business continuity not only for your persistent storage, but all of the Kubernetes objects associated with your applications (think SRM and vMSC for Kubernetes!)
  • Enabling you to provide Kubernetes-aware data protection, including ransomware protection and 3-2-1 backup compliance with RBAC roles that can fit the existing policies within your organisation
  • Delighting your developers who need access to modern data services such as Kafka, PostgreSQL, Cassandra, and more by delivering self-service deployments with best practices “built-in”, which accelerate development cycles without needing a dinosaur DBA or having to learn complex Kubernetes operators

Come join us to see how we can create your “Better Together” story with Tanzu and give you the tools and knowledge to bring agility for your developers to your underlying infrastructure for modern applications running on Kubernetes!

Mike Carpendale

Mike joined Pure Storage in April 2021 as the APJ Regions Platform Architect. He has 20+ years’ experience in the industry, ranging from expert-level hands-on experience designing and managing large-scale on-prem as-a-service offerings underpinned by VMware, to his more recent work in the public cloud.

 

PIZZA AND NETWORKING BREAK!

 

This will be followed by:

VMware Session

Peter Hauck – Senior Solutions Engineer

VMware

 

And we will be finishing off with:

 

Preparing for VMware Certifications

With job requirements increasing over the last few years, certifications help you demonstrate your skills and move a step closer to better roles. In this Community Session we will help you understand how to prepare for a VMware certification exam, along with some useful tips you can use during the exam.

We will talk about:

  • Different types of exams
  • How to schedule an exam
  • Where to get material to study
  • Lessons learned from the field per type of exam

Francisco Fernandez Cardarelli – Senior Consultant (4 x VCIX)

 

Soft drinks and vBeers will be available throughout the evening! We look forward to seeing you there! Doors open at 5pm. Please make your way to The Cube.

Random Short Take #80

Welcome to Random Short Take #80. Lots of press release news this week and some parochial book recommendations. Let’s get random.

Verity ES Springs Forth – Promises Swift Eradication of Data

Verity ES recently announced its official company launch and the commercial availability of its Verity ES data eradication enterprise software solution. I had the opportunity to speak to Kevin Enders about the announcement and thought I’d briefly share some thoughts here.

 

From Revert to Re-birth?

Revert, a sister company of Verity ES, is an on-site data eradication service provider. It’s also a partner for a number of Storage OEMs.

The Problem

The folks at Revert have had an awful lot of experience with data eradication in big enterprise environments. With that experience, they’d observed a few challenges, namely:

  • The software doing the data eradication was too slow;
  • Eradicating data in enterprise environments introduced particular requirements at high volumes; and
  • Larger capacity HDDs and SSDs were a real problem to deal with.

The Real Problem?

Okay, so the process to get rid of old data on storage and compute devices is a bit of a problem. But what’s the real problem? Organisations need to get rid of end of life data – particularly from a legal standpoint – in a more efficient way. Just as data growth continues to explode, so too does the requirement to delete the old data.

 

The Solution

Verity ES was spawned to develop software to solve a number of the challenges Revert were coming across in the field. There are two ways to do it:

  • Eliminate the data destructively (via device shredding / degaussing); or
  • Eliminate it non-destructively (using software-based eradication).

Why Eradicate?

Why eradicate? It’s a sustainable approach, enables residual value recovery, and allows for asset re-use. But it nonetheless needs to be secure, economical, and operationally simple to do. How does Verity ES address these requirements? It has Product Assurance Certification from ADISA. It’s also developed software that’s more efficient, particularly when it comes to those troublesome high capacity drives.

[image courtesy of Verity ES]

Who’s Buying?

Who’s this product aimed at? Primarily enterprise DC operators, hyperscalers, IT asset disposal companies, and 3rd-party hardware maintenance providers.

 

Thoughts

If you’ve spent any time on my blog you’ll know that I write a whole lot about data protection, and this is probably one of the first times that I’ve written about data destruction as a product. But it’s an interesting problem that many organisations are facing now. There is a tonne of data being generated every day, and some of that data needs to be gotten rid of, either because it’s sitting on equipment that’s old and needs to be retired, or because legislatively there’s a requirement to get rid of the data.

The way we tackle this problem has changed over time too. One of the most popular articles on this blog was about making an EMC CLARiiON CX700 useful again after EMC did a certified erasure on the array. There was no data to be found on the array, but it was able to be repurposed as lab equipment, and enjoyed a few more months of usefulness. In the current climate, we’re all looking at doing more sensible things with our old disk drives, rather than simply putting a bullet in them (except for the Feds – but they’re a bit odd). Doing this at scale can be challenging, so it’s interesting to see Verity ES step up to the plate with a solution that promises to help with some of these challenges. It takes time to wipe drives, particularly when you need to do it securely.

I should be clear that this software doesn’t go out and identify what data needs to be erased – you have to do that with some other tools. So it won’t tell you that a bunch of PII is buried in a home directory somewhere, or sitting in a spot it shouldn’t be. It also won’t go out and dig through your data protection data and tell you what needs to go. Hopefully, though, you’ve got tools that can handle that problem for you. What this solution does seem to do is provide organisations with options when it comes to cost-effective, efficient data eradication. And that’s something that’s going to become crucial as we continue to generate data, need to delete old data, and do so on larger and larger disk drives.

Random Short Take #79

Welcome to Random Short Take #79. Where did October go? Let’s get random.

Random Short Take #78

Welcome to Random Short Take #78. We’re hurtling towards the silly season. Let’s get random.