Random Short Take #88

Welcome to Random Short Take #88. This one’s been sitting in my drafts folder for a while. Let’s get random.

Brisbane VMUG – April 2022

The April 2022 edition of the Brisbane VMUG meeting will be held on Thursday 28th April. It’s powered by VMware, Google Cloud, and Queensland University of Technology and promises to be a great event. It’s also an opportunity to welcome the new leaders to the Brisbane VMUG team: Claire O’Dwyer and Antony West.

Agenda

Google Cloud VMware Engine (GCVE) – Tech Overview and Key Use Cases

In this session we will cover the GCVE platform in depth, along with its technical advantages when compared to other “VMware on X” solutions. Beyond the platform itself, we will also cover some key technical use cases, e.g. backup/DR options from an on-premises DC to GCVE.

Delivered by Clay Quinn, Customer Engineer, Google Cloud.

This will be followed by:

Automating Deployments and Configuration Management with Salt

Salt is an open-source configuration management tool with some interesting and useful features. In this session we will cover some of the key capabilities and concepts of Salt and demonstrate how we can use it to deploy, configure, and manage environments.

Delivered by Mark Foley, Senior Solutions Engineer, VMware.
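
If you haven’t come across Salt before, here’s a quick taste ahead of the session – a minimal sketch using Salt’s Python client API to target a group of minions from the master. The “web*” target and “webserver” state are hypothetical examples, and this assumes a working master/minion setup:

```python
# A minimal sketch of driving Salt from Python. This is meant to run
# on the Salt master (salt.client.LocalClient reads the master config);
# the 'web*' target and 'webserver' state are hypothetical examples.
import salt.client

local = salt.client.LocalClient()

# Confirm the targeted minions are responding.
print(local.cmd('web*', 'test.ping'))

# Apply a (hypothetical) 'webserver' state to those minions -
# equivalent to running: salt 'web*' state.apply webserver
results = local.cmd('web*', 'state.apply', ['webserver'])
for minion, state_results in results.items():
    print(minion, state_results)
```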

PIZZA AND NETWORKING BREAK! (Exciting!)

And we will be finishing off with:

Migrating from NSX-V to NSX-T

Delivered by Tony Williamson, Senior Consultant, VMware PSO.

Soft drinks and vBeers will be available throughout the evening. We look forward to seeing you there! Doors open at 5pm.

You can find out more information and register for the event here. Note that the March 2022 meeting had to be rescheduled due to ‘Rona issues – I’ll update the blog when I have new dates for that one.

ANZ VMUG Virtual Event – November 2020


The November edition of the Brisbane VMUG meeting is a special one – we’re doing a joint session with a number of the other VMUG chapters in Australia and New Zealand. It will be held on Tuesday 17th November on Zoom, from 3pm to 5pm AEST. It’s sponsored by Google Cloud for VMware and promises to be a great afternoon.

Here’s the agenda:

  • VMUG Intro
  • VMware Presentation: VMware SASE
  • Google Presentation: Google Cloud VMware Engine Overview
  • Q&A

Google Cloud has gone to great lengths to make sure this will be a fun and informative session and I’m really looking forward to hearing about Google Cloud VMware Engine. You can find out more information and register for the event here. I hope to see you there. Also, if you’re interested in sponsoring one of these events, please get in touch with me and I can help make it happen.

Backblaze B2 And A Happy Customer

Backblaze recently published a case study with AK Productions. I had the opportunity to speak to Aiden Korotkin and thought I’d share some of my notes here.


The Problem

Korotkin’s problem was a fairly common one – he had lots of data from previous projects that had built up over the years. He’d been using a bunch of external drives to store this data, and had had a couple of them fail, including the backup drives. Google’s cloud storage option “seemed like a more redundant and safer investment financially to go into the cloud space”, and he was already using G Suite, so he migrated his old projects off hard drives and into the cloud. He had a credit with Google to use its cloud platform for a year, but after that it became pretty expensive – not really feasible. Korotkin also noted that calculating the expected costs was difficult, and he felt that he needed to find something more private and secure.


The Solution

So how did he come by Backblaze? He did a bunch of research, and Backblaze B2 consistently showed up in the top 15 results when online magazines published their guides to cloud storage. He’d heard of it before, and had possibly seen a demo. The technology seemed very streamlined – exactly what he needed for his business. A bonus was that there were no extra steps required to back up his QNAP NAS as well. It seemed like the best option.

Current Workflow

I asked Korotkin to walk me through his current workflow. B2 is being used as a backup target for the moment – physics being what it is, it’s still “[h]ard to do video editing direct on the cloud”. The QNAP NAS houses current projects, with data mirrored to B2, and archives are uploaded to a different area of B2. Over time, data is archived entirely to the cloud.
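
As an aside, B2 exposes an S3-compatible API, so a “mirror current projects, archive the rest” workflow like this can be scripted fairly easily. Here’s a rough sketch using boto3 – the endpoint region, bucket name, and paths are all made up for illustration:

```python
# A rough sketch of mirroring local project folders to Backblaze B2 via
# its S3-compatible API using boto3. The endpoint region, bucket name,
# and paths below are hypothetical; credentials come from a B2
# application key.
from pathlib import Path

import boto3

b2 = boto3.client(
    's3',
    endpoint_url='https://s3.us-west-004.backblazeb2.com',  # region is an assumption
    aws_access_key_id='YOUR_KEY_ID',
    aws_secret_access_key='YOUR_APPLICATION_KEY',
)

def mirror(local_dir: str, bucket: str, prefix: str) -> None:
    """Upload every file under local_dir to bucket/prefix, preserving paths."""
    root = Path(local_dir)
    for path in root.rglob('*'):
        if path.is_file():
            key = f"{prefix}/{path.relative_to(root).as_posix()}"
            b2.upload_file(str(path), bucket, key)

# Mirror current projects to one area of the bucket, archives to another.
mirror('/volume1/projects/current', 'ak-productions-backup', 'current')
mirror('/volume1/projects/archive', 'ak-productions-backup', 'archive')
```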

How About Ingest?

Korotkin needed to move 12TB from Google to Backblaze. He used Flexify.IO to transfer the data from one cloud to the other, and the Flexify.IO team walked him through the process. The good news is that the transfer was done in 12 hours.
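
As a back-of-the-envelope check, 12TB in 12 hours works out to a sustained rate of a couple of gigabits per second – the kind of throughput you get from a cloud-to-cloud transfer service, and not from pulling the data down and pushing it back up over an office connection:

```python
# Back-of-the-envelope throughput for the migration described above,
# using decimal units (1 TB = 10**12 bytes).
data_bytes = 12 * 10**12   # 12TB moved
duration_s = 12 * 3600     # 12 hours

bytes_per_s = data_bytes / duration_s
gbit_per_s = bytes_per_s * 8 / 10**9

print(f"{bytes_per_s / 10**6:.0f} MB/s, or {gbit_per_s:.1f} Gbit/s sustained")
# -> roughly 278 MB/s, or about 2.2 Gbit/s sustained
```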

It’s About Support

Korotkin noted that between Backblaze and Flexify.IO “the tech support experience was incredible”. He said that he “[f]elt like I was very much taken care of”, and he got the strong impression that the support staff enjoyed helping him and were with him every step of the way. The most frustrating part of the migration, according to Korotkin, was dealing with Google generally. Offloading the data from Google cost more money than he’s paid to date with Backblaze. “As a small business owner I don’t have $1500 just to throw away”.


Thoughts

I’ve been a fan of Backblaze for some time. I’m a happy customer when it comes to the consumer backup product, and I’ve always enjoyed the transparency it’s displayed as a company with regards to its pod designs and the process required to get to where it is today. I remain fascinated by the workflows required to do multimedia content creation successfully, and I think this story is a great tribute to the support culture of Backblaze. It’s nice to see that smaller shops, such as Korotkin’s, are afforded the same kind of care and support experience that some of the bigger customers receive. This is a noticeable point of distinction when compared to working with the hyperscalers. It’s not that those folks aren’t happy to help; they’re just operating at a different level.

Korotkin’s approach was not unreasonable, or unusual, particularly for content creators. Keeping data safe is a challenge for small businesses, and solutions that make storing and protecting data easier are always going to be popular. Korotkin’s story is a good one, and I’m always happy to hear these kinds of stories. If you find yourself shuffling external drives, or need a lot of capacity but don’t want to invest too heavily in on-premises storage, Backblaze has a good story to tell in terms of both cloud storage and data protection.

Random Short Take #18

Here are some links to some random news items and other content that I recently found interesting. You might find them interesting too. Episode 18 – buckle up kids! It’s all happening.

  • Cohesity added support for Active Directory protection with version 6.3 of the DataPlatform. Matt covered it pretty comprehensively here.
  • Speaking of Cohesity, Alastair wrote this article on getting started with the Cohesity PowerShell Module.
  • In keeping with the data protection theme (hey, it’s what I’m into), here’s a great article from W. Curtis Preston on SaaS data protection, and what you need to consider to not become another cautionary tale on the Internet. Curtis has written a lot about data protection over the years, and you could do a lot worse than reading what he has to say. And that’s not just because he signed a book for me.
  • Did you ever stop and think just how insecure some of the things that you put your money into are? It’s a little scary. Shell are doing some stuff with Cybera to improve things. Read more about that here.
  • I used to work with Vincent, and he’s a super smart guy. I’ve been at him for years to start blogging, and he’s started to put out some articles. He’s very good at taking complex topics and distilling them down to something that’s easy to understand. Here’s his summary of VMware vRealize Automation configuration.
  • Tom’s take on some recent CloudFlare outages makes for good reading.
  • Google Cloud has announced it’s acquiring Elastifile. That part of the business doesn’t seem to be as brutal as the broader Alphabet group when it comes to acquiring and discarding companies, and I’m hoping that the good folks at Elastifile are looked after. You can read more on that here.
  • A lot of people are getting upset with terms like “disaggregated HCI”. Chris Mellor does a bang-up job explaining the differences between the various architectures here. It’s my belief that there’s a place for all of this, and assuming that one architecture will suit every situation is a little naive. But what do I know?

Google WiFi – A Few Notes

Like a lot of people who work in IT as their day job, the IT situation at my house is a bit of a mess. I think the real reason for this is that, once the working day is done, I don’t want to put any thought into this kind of stuff. As a result, like a lot of tech folk, I have way more devices and blinking lights in my house than I really need. And I’m always sure to pile on a good helping of technical debt any time I make changes at home. It wouldn’t be any fun without random issues to deal with from time to time.

Some Background – Apple Airport

I’ve been running an Apple Airport Extreme and a number of Airport Express devices in my house for a while in a mesh network configuration. Our house is two storeys, and it was too hard to wire up properly with Ethernet after we bought it. I liked the Apple devices primarily because of the easy-to-use interface (via browser or phone), and Airplay, in my mind at least, was a killer feature. So I stuck with these things for some time, despite the frequent flakiness I experienced with the mesh network (I’d often end up connected to an isolated access point with no network access – a reboot of the base station seemed to fix this) and the sometimes frustrating lack of visibility into what was going on in the network.

Enter Google Wifi

I had some Frequent Flyer points available that meant I could get a 3-pack of Google access points for under $200 AU (I think that’s about $15 in US currency). I’d already put up the Christmas tree, so I figured I could waste a few hours on re-doing the home network. I’m not going to do a full review of the Google Wifi solution, but if you’re interested in that kind of thing, Josh Odgers does a great job of that here. In short, it took me about an hour to place the three access points in the house and get everything connected. I have about 30 to 40 devices running, some of which are hardwired to a switch connected to my ISP’s NBN gateway, and most of which connect wirelessly.

So What’s The Problem?

The problem was that I’d kind of just jammed the primary Google Wifi point into the network (attached to a dumb switch downstream of the modem). As a result, everything connecting wirelessly via the Google network ended up in the 192.168.86.x range, while all of my other devices were in the existing 10.x.x.x range. This wasn’t a massive problem, as the Google solution does a great job of routing between the “wan” and “lan” subnets, but I started to notice that my pi-hole device wasn’t picking up hostnames properly, and some devices were getting confused about which DNS server to use. Oh, and my port mapping for Plex was a bit messed up too. I also had wired devices (i.e. my desktop machine) that couldn’t see Airplay devices on the wireless network without turning on Wifi.

The Solution?

After a lot of Googling, I found part of the solution via this Reddit thread. Basically, what I needed to do was follow a more structured topology, with my primary Google device hanging off my ISP’s switch (and connected via the “wan” port on the Google Wifi device). I then connected the “lan” port on the Google device to my downstream switch (the one with the pi-hole, NAS devices, and other stuff connected to it). 

Now the pi-hole could play nicely on the network, and I could point my devices to it as the DNS server via the Google interface. I also added a few more reservations to my existing list of hostnames on the pi-hole (instructions here) so that it could correctly identify any non-DHCP clients. I also changed the DHCP range on the Google Wifi to a single IP address (the one used by the pi-hole) and made sure that there was a reservation set for the pi-hole on the Google side of things. The reason for this (I think) is that you can’t disable DHCP on the Google Wifi device.

To solve the Plex port mapping issue, I set a manual port mapping on my ISP modem and pointed it at the static IP address of the primary Google Wifi device, then created a port mapping on the Google side of things pointing to my Plex Media Server. It took a little while, but eventually everything started to work.
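
If you ever find yourself doing a similar double-hop port mapping (ISP modem to Google Wifi to server), it’s worth confirming from outside the LAN that the whole chain actually works. Here’s a minimal sketch – the hostname below is a placeholder, and Plex’s default port of 32400 is the only other assumption:

```python
# Quick check that a forwarded port is actually reachable end to end.
# Run it from outside the LAN (or against your public IP); the hostname
# below is a placeholder, and 32400 is Plex's default port.
import socket

def port_open(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(port_open('your-public-ip.example.com', 32400))
```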

It’s also worth noting that I was able to reconfigure the Airport Express devices connected to speakers to join the new Wifi network and I can still use Airplay around the house as I did before.

Conclusion 

This seems like a lot of mucking about for what is meant to be a plug-and-play wireless solution. In Google’s defence though, my home network topology is a bit more fiddly than the average punter’s would be. If I wasn’t so in love with pi-hole, and didn’t have devices that needed static IP addresses and specific DNS settings, I wouldn’t have had as many problems with the setup as I did. From a performance and usability standpoint, I think the Google solution is excellent. Of course, this might all go to hell in a handbasket when I ramp up IPv6 in the house, but for now it’s been working well. Couple that with the fact that my networking skills are pretty subpar, and we should all just be happy I was able to post this article on the Internet from my house.