Tech Field Day 19 – (Fairly) Full Disclosure

Disclaimer: I recently attended Tech Field Day 19.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Here are my notes on gifts, etc, that I received as a conference attendee at Tech Field Day 19. This is by no stretch an interesting post from a technical perspective, but it’s a way for me to track and publicly disclose what I get and how it looks when I write about various things. I’m going to do this in chronological order, as that was the easiest way for me to take notes during the week.

 

Sunday

My wife kindly dropped me at the domestic airport so I could make my way from BNE to SFO via SYD. I had ham and cheese in the Qantas Club before my flight. I flew Qantas economy class to SFO. The flights were paid for by Tech Field Day. Plane food was consumed on the flight. It was a generally good experience, lack of sleep notwithstanding. I stayed at the Plaza Suites Hotel. This was covered by Tech Field Day as well. I made my way to a friend’s house to spend some time before the event.

On Sunday evening I was fortunate to have dinner at Georgiana Comsa’s house with her and some friends. I’m always appreciative of good food and good conversation, and it was great to spend some time with them (jet lag notwithstanding).

 

Tuesday

On Tuesday I arrived at the hotel in the afternoon and was provided with a bag of various snacks by the Tech Field Day team. I had 3 Firestone Lager beers at the hotel bar. We had dinner at the hotel, consisting of:

  • Panzanella, with plums, cucumber, nduja vinaigrette, burrata, and brioche
  • Gazpacho, with shoyu-marinated heirloom tomatoes, and avocado creme fraiche
  • Artichokes, consisting of Monterey grilled artichoke halves, and mint-chutney mayo
  • Bavette, with 100% grass-fed beef from Marin Sun Farms, and pistachio leek chimichurri
  • Chicken, with mixed nut puree, summer squash, and harissa
  • Asparagus, à la plancha, with Romesco sauce
  • Homemade gelato, in some kind of banana and miso flavour.

I also had a Firestone lager and some pinot noir. It was all very delicious. I received a very cool wooden sign from Alastair Cooke as part of the Yankee gift swap. Emre also gave each of us some real Turkish Delight.

 

Wednesday

Wednesday morning was breakfast at the hotel, consisting of scrambled eggs, sausage, pancake, bacon, and coffee. During the pre-TFD delegate meeting, I was given OpenTechCast, VirtualBonzo.com, and TFD stickers.

During the Ixia session I had a coffee. At NetApp I had 2 still spring waters. Lunch at NetApp was antioxidant salad with feta cheese and pomegranate vinaigrette, braised short ribs with chilli citrus marinade, grilled salmon with feta and olive, and grilled asparagus. I also had a chocolate chip cookie after lunch. We had the dinner / reception at Faz Restaurant in San Jose. There was a variety of finger food on offer, including Mediterranean Bruschetta, black sesame encrusted Ahi Tuna with wasabi, vegetarian spring rolls with teriyaki dipping sauce, crab cakes with chipotle aioli, and fire-roasted beef mini-kabobs. I had 3 Modelo Especial beers, and 2 Firestone lagers at the hotel.

 

Thursday

Thursday morning was breakfast at the hotel, consisting of scrambled eggs, sausage, pancake, bacon, and coffee. At Automation Anywhere I grabbed a bottle of water before our session. Automation Anywhere also provided us with lunch from Dish Dash. I had a chicken shawarma wrap, tabouli, leafy salad, rice, falafel, and water. They also gave us a branded water container and phone pop socket.

At Druva we were given some personalised gifts by W. Curtis Preston. This was the first time I’ve had a particular tweet about a dream become a reality.

Thanks Curtis! (and thanks for signing my copy of your book too!)

We were also each given a Holy Stone HS160 drone and personalised coffee mug. I really like when sponsors take the time to find out a little more about each of the delegates. After the session we wandered over to Philz Coffee. I had a large, iced Mocha Tesora with dark chocolate, caramel, and cocoa. For dinner we went to River Rock Taproom. I had 2 Russian River Brewing STS Pils beers from the tap (there are about 40 different beers to choose from). We also had a variety of snacks, including:

  • Bruschetta, with tomatoes, herbs, onions, garlic, balsamic vinegar, olive oil, parmesan cheese on toasted baguette slices
  • Sliders, consisting of creamy habanero beef with caramelised onions and cheddar cheese
  • Garlic French Fries
  • Buffalo wings with creamy habanero
  • Popcorn shrimp served with orange chilli garlic
  • Candied bacon, consisting of caramelised steakhouse bacon served on toasted bread

By that stage jet lag had really caught up with me, so I retired to my room for an early night.

 

Friday

Friday morning was breakfast at the hotel, consisting of scrambled eggs, sausage, pancake, bacon, and coffee. At the VMware office I had a coffee, and picked up a vROps ninja sticker, Indy VMUG sticker, and VMware Cloud Assembly sticker and key ring. We had lunch at VMware. It was Mexican, and I had a tortilla, rice, refried beans, grated cheese, guacamole, sour cream and salsa, and some water.

At the finish of proceedings some delegates made their way to the airport, and the rest of us hung around for one last dinner together. I had a Firestone lager at the hotel. We walked to Faultline Brewing Company for dinner. I had 2 Kolsch beers, and the Brewhouse Bacon Cheeseburger, consisting of an 8 oz. Angus beef patty, sharp cheddar cheese, applewood-smoked bacon, lettuce, tomatoes, red onion, roasted garlic aioli, a brioche bun, and kettle chips. I then had 3 Firestone lagers at the hotel bar.

 

Saturday

I was staying a little longer in the Bay Area, so on Saturday morning I had breakfast at the hotel, consisting of scrambled eggs, sausage, pancake, bacon, and coffee. I then took a car to a friend’s house in the Bay Area, courtesy of Tech Field Day. It was a great week. Thanks again to the Tech Field Day team for having me, thanks to the other delegates for being super nice and smart, and thanks to the presenters for some really educational and engaging sessions.

Tech Field Day 19 – Day 0

Disclaimer: I recently attended Tech Field Day 19.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

This is just a quick post to share some thoughts on day zero at Tech Field Day 19. Thanks again Stephen and the team at Gestalt IT for having me back, making sure everything is running according to plan and for just being really very decent people. I’ve really enjoyed catching up with the people I’ve met before and meeting the new delegates. Look out for some posts related to the Tech Field Day sessions in the next few weeks. And if you’re in a useful timezone, check out the live streams from the event here, or the recordings afterwards.

Here’s the rough schedule for the next three days (all times are ‘Merican Pacific).

  • Wednesday, Jun 26, 8:30-9:30 – Keysight Technologies Presents at Tech Field Day 19
  • Wednesday, Jun 26, 10:30-12:30 – NetApp Presents at Tech Field Day 19
  • Wednesday, Jun 26, 13:00-14:00 – On-Premise IT Roundtable Podcast Recording at Tech Field Day 19 (Moderator: Ken Nalbone)
  • Thursday, Jun 27, 9:00-12:00 – Automation Anywhere Presents at Tech Field Day 19
  • Thursday, Jun 27, 14:00-16:00 – Druva Presents at Tech Field Day 19
  • Thursday, Jun 27, 16:00-17:00 – On-Premise IT Roundtable Podcast Recording at Tech Field Day 19 (Moderator: Ken Nalbone)
  • Friday, Jun 28, 9:30-14:00 – VMware Presents at Tech Field Day 19

Or you can follow along with the live stream here.

Spectra Logic – BlackPearl Overview

I recently had the opportunity to take a briefing with Jeff Braunstein and Susan Merriman from Spectra Logic (one of those rare occasions where getting your badge scanned at a conference proves valuable), and thought I’d share some of my notes here.

 

BlackPearl Family

Spectra Logic sell a variety of products, but this briefing was focused primarily on the BlackPearl series. Braunstein described it as a “gateway” device, with both NAS and object front end interfaces, and backend capability that can move data to multiple types of archives.

[image courtesy of Spectra Logic]

It’s a hardware box, but at its core the value is in the software product. The idea is that the BlackPearl acts as a disk cache, and you configure policies to send the data to one or more storage targets. The cool thing is that it supports multiple retention policies, and these can be permanent too. By that I mean you could spool one copy to tape for long term storage, and have another copy of your data sit on disk for 90 days (or however long you wanted).

 

Local vs Remote Storage

Local

There are a few different options for local storage, including BlackPearl Object Storage Disk, which functions as a “near line archive”. This is configured with 107 enterprise-quality SATA drives (and they’re looking at introducing 16TB drives next month), providing roughly 1.8PB of raw capacity. The drives function as power-down archive drives (using the drive spin-down settings), and the system delivers a level of resilience and reliability by using ZFS as the file system. There are also customer-configurable parity settings. Alternatively, you can pump data to Spectra Tape Libraries, for those of you who still want to use tape as a storage format.

 

Remote Storage Targets

In terms of remote storage targets, BlackPearl can leverage either public cloud, or other BlackPearl devices as replication targets. Replication to BlackPearl can be one way or bi-directional. Public Cloud support is available via Amazon S3 (and S3-like products such as Cloudian and Wasabi), and MS Azure. There is a concept of data immutability in the product, and you can turn on versioning to prevent your data management applications (or users) from accidentally clobbering your data.

Braunstein also pointed out that tape generations evolve, and BlackPearl has auto-migration capabilities. You can potentially have data migrate transparently from tape to tape (think LTO-6 to LTO-7), tape to disk, and tape to cloud.

 

[image courtesy of Spectra Logic]

In terms of how you leverage BlackPearl, some of that is dependent on the workflows you have in place to move your data. This could be manual, semi-automated, or automated (or potentially purpose built into existing applications). There’s a Spectra S3 RESTful API, and there’s heaps of information on developer.spectralogic.com on how to integrate BlackPearl into your existing applications and media workflows.
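To give a rough sense of what that integration can look like, here’s a minimal sketch that lands a file in a BlackPearl bucket using the AWS Tools for PowerShell. This assumes the bucket is exposed through a standard S3-compatible endpoint, and the endpoint address, bucket name, key, and credentials below are all placeholders I’ve made up; for anything serious, the Spectra S3 SDKs and sample code on developer.spectralogic.com are the supported path.

# Placeholder endpoint, bucket, and credentials - substitute your own BlackPearl details
Import-Module AWSPowerShell
$endpoint = "https://blackpearl.example.com"
$bucket   = "media-archive"

# Push a local file into the bucket via the S3-compatible interface
Write-S3Object -BucketName $bucket -Key "projects/demo/clip001.mov" -File "C:\media\clip001.mov" `
  -EndpointUrl $endpoint -AccessKey "YOUR_ACCESS_KEY" -SecretKey "YOUR_SECRET_KEY"

# List the bucket contents to confirm the object arrived
Get-S3Object -BucketName $bucket -EndpointUrl $endpoint -AccessKey "YOUR_ACCESS_KEY" -SecretKey "YOUR_SECRET_KEY"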

 

Thoughts

If you’re listening to the next-generation data protection vendors and big box storage folks, you’d wonder why companies such as Spectra Logic still focus on tape. It’s not because they have a rich heritage and deep experience in the tape market (although they do). There are plenty of use cases where tape still makes sense in terms of its ability to economically store large amounts of data in a relatively secure (off-line if required) fashion. Walk into any reasonably sized film production house and you’ll still see tape in play. From a density perspective (and durability), there’s a lot to like about tape. But BlackPearl is also pretty adept at getting data from workflows that were traditionally file-based and putting them on public cloud environments (the kind of environments that heavily leverage object storage interfaces). Sure, you can pump the data up to AWS yourself if you’re so inclined, but the real benefit of the BlackPearl approach, in my opinion, is that it’s policy-driven and fully automated. There’s less chance that you’ll fat finger the transfer of critical data to another location. This gives you the ability to focus on your core business, and not have to worry about data management.

I’ve barely scratched the surface of what BlackPearl can do, and I recommend checking out their product site for more information.

Random Short Take #17

Here are some links to some random news items and other content that I recently found interesting. You might find them interesting too. Episode 17 – am I over-sharing? There’s so much I want you to know about.

  • I seem to always be including a link from the Backblaze blog. That’s mainly because they write about things I’m interested in. In this case, they’ve posted an article discussing the differences between availability and durability that I think is worth your time.
  • Speaking of interesting topics, Preston posted an article on NetWorker Pools with Data Domain that’s worth looking at if you’re into that kind of thing.
  • Maintaining the data protection theme, Alastair wrote an interesting article titled “The Best Automation Is One You Don’t Write” (you know, like the best IO is one you don’t need to do?) as part of his work with Cohesity. It’s a good article, and not just because he mentions my name in it.
  • I recently wanted to change the edition of Microsoft Office I was using on my MacBook Pro and couldn’t really work out how to do it. In the end, the answer is simple. Download a Microsoft utility to remove your Office licenses, and then fire up an Office product and it will prompt you to re-enter your information at that point.
  • This is an old article, but it answered my question about validating MD5 checksums on macOS.
  • Excelero have been doing some cool stuff with Imperial College London – you can read more about that here.
  • Oh hey, Flixster Video is closing down. I received this in my inbox recently: “[f]ollowing the announcement by UltraViolet that it will be discontinuing its service on July 31, 2019, we are writing to provide you notice that Flixster Video is planning to shut down its website, applications and operations on October 31, 2019”. It makes sense, obviously, given UltraViolet’s demise, but it still drives me nuts. The ephemeral nature of digital media is why I still have a house full of various sized discs with various kinds of media stored on them. I think the answer is to give yourself over to the streaming lifestyle, and understand that you’ll never “own” media like you used to think you did. But I can’t help but feel like people outside of the US are getting shafted in that scenario.
  • In keeping up with the “random” theme of these posts, it was only last week that I learned that “Television, the Drug of the Nation” from the very excellent album “Hypocrisy Is the Greatest Luxury” by The Disposable Heroes of Hiphoprisy was originally released by Michael Franti and Rono Tse when they were members of The Beatnigs. If you’re unfamiliar with any of this I recommend you check them out.

Cohesity Basics – Configuring An External Target For Cloud Archive

I’ve been working in the lab with Pure Storage’s ObjectEngine and thought it might be nice to document the process to set it up as an external target for use with Cohesity’s Cloud Archive capability. I’ve written in the past about Cloud Tier and Cloud Archive, but in that article I focused more on the Cloud Tier capability. I don’t want to sound too pretentious, but I’ll quote myself from the other article: “With Cloud Archive you can send copies of snapshots up to the cloud to keep as a copy separate to the backup data you might have replicated to a secondary appliance. This is useful if you have some requirement to keep a monthly or six-monthly copy somewhere for compliance reasons.”

I would like to be clear that this process hasn’t been blessed or vetted by Pure Storage or Cohesity. I imagine they are working on delivering a validated solution at some stage, as they have with Veeam and Commvault. So don’t go out and jam this in production and complain to me when Pure or Cohesity tell you it’s wrong.

There are a couple of ways you can configure an external target via the Cohesity UI. In this example, I’ll do it from the dashboard, rather than during the protection job configuration. Click on Protection and select External Target.

You’ll then be presented with the New Target configuration dialogue.

In this example, I’m calling my external target PureOE, and setting its purpose as Archival (as opposed to Tiering).

The Type of target is “S3 Compatible”.

Once you select that, you’ll be asked for a bunch of S3-type information, including Bucket Name and Access Key ID. This assumes you’ve already created the bucket and configured appropriate security on the ObjectEngine side of things.

Enter the required information. I’ve de-selected compression and source-side deduplication, as I want the data reduction to be done by the ObjectEngine. I’ve also disabled encryption, as I’m guessing this will have an impact on the ObjectEngine as well. I need to confirm that with my friends at Pure. I’m using the fully qualified domain name of the ObjectEngine as the endpoint here as well.
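Before clicking Register, it can also be worth a quick sanity check that the bucket and credentials actually work from a client machine. Here’s a rough example using the AWS Tools for PowerShell; the bucket name, endpoint, and keys are placeholders for whatever you’ve configured on the ObjectEngine side.

# Placeholder values - use the bucket, FQDN, and keys you set up on the ObjectEngine
Import-Module AWSPowerShell
Get-S3Object -BucketName "cohesity-archive" -EndpointUrl "https://objectengine.example.com" `
  -AccessKey "YOUR_ACCESS_KEY" -SecretKey "YOUR_SECRET_KEY"
# A successful (even empty) listing suggests Cohesity should be able to reach the bucket with the same details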

Once you click on Register, you’ll be presented with a summary of the configuration.

You’re then right to use this as an external target for the Archival portion of protection jobs within your Cohesity environment. Once you’ve run a few protection jobs, you should start to see files within the test bucket on the ObjectEngine. Don’t forget that, as far as I’m aware, it’s still very difficult (impossible?) to remove external targets from the Cohesity Data Platform, so don’t get too carried away with configuring a bunch of different test targets thinking that you can remove them later.

Scale Computing Announces HE500 Range

Scale Computing recently announced its “HC3 Edge Platform”. I had a chance to talk to Alan Conboy about it, and thought I’d share some of my thoughts here.

 

The Announcement

The HE500 series has been introduced to provide smaller customers and edge infrastructure environments with components that better meet the sizing and pricing requirements of those environments. There are a few different flavours of nodes, with every node offering Intel E-2100 CPUs, 32GB to 64GB of RAM, and dual power supplies. There are a couple of minor differences with regard to other configuration options.

  • HE500 – 4x 1, 2, 4, or 8TB HDD; 4x 1GbE; 4x 10GbE
  • HE550 – 1x 480GB or 960GB SSD; 3x 1, 2, or 4TB HDD; 4x 1GbE; 4x 10GbE
  • HE550F – 4x 240GB, 480GB, or 960GB SSD; 4x 1GbE; 4x 10GbE
  • HE500T – 4x 1, 2, 4, or 8TB HDD, or 8x 4TB or 8TB HDD; 2x 1GbE
  • HE550TF – 4x 240GB, 480GB, or 960GB SSD; 2x 1GbE

The “T” version comes in a tower form factor, and offers 1GbE connectivity. Everything runs on Scale’s HC3 platform, and offers all of the features and support you expect with that platform. In terms of scalability, you can run up to 8 nodes in a cluster.

 

Thoughts And Further Reading

In the past I’ve made mention of Scale Computing and Lenovo’s partnership, and the edge infrastructure approach is also something that lends itself well to this arrangement. If you don’t necessarily want to buy Scale-badged gear, you’ll see that the models on offer look a lot like the SR250 and ST250 models from Lenovo. In my opinion, the appeal of Scale’s hyper-converged infrastructure story has always been the software platform that sits on the hardware, rather than the specifications of the nodes they sell. That said, these kinds of offerings play an important role in the market, as they give potential customers simple options to deliver solutions at a very competitive price point. Scale tell me that an entry-level 3-node cluster comes in at about US $16K, with additional nodes costing approximately $5K. Conboy described it as “[l]owering the barrier to entry, reducing the form factor, but getting access to the entire stack”.

Combine some of these smaller solutions with various reference architectures and you’ve got a pretty powerful offering that can be deployed in edge sites for a small initial outlay. People often deploy compute at the edge because they have to, not because they necessarily want to. Anything that can be done to make operations and support simpler is a good thing. Scale Computing are focused on delivering an integrated stack that meets those requirements in a lightweight form factor. I’ll be interested to see how the market reacts to this announcement. For more information on the HC3 Edge offering, you can grab a copy of the data sheet here, and the press release is available here. There’s a joint Lenovo – Scale Computing case study that can be found here.

Random Short Take #16

Here are a few links to some random news items and other content that I recently found interesting. You might find them interesting too. Episode 16 – please enjoy these semi-irregular updates.

  • Scale Computing has been doing a bit in the healthcare sector lately – you can read news about that here.
  • This was a nice roundup of the news from Apple’s recent WWDC from Six Colors. Hat tip to Stephen Foskett for the link. Speaking of WWDC news, you may have been wondering what happened to all of your purchased content with the imminent demise of iTunes on macOS. It’s still a little fuzzy, but this article attempts to shed some light on things. Spoiler: you should be okay (for the moment).
  • There’s a great post on the Dropbox Tech Blog from James Cowling discussing the mission versus the system.
  • The more things change, the more they remain the same. For years I had a Windows PC running Media Center and recording TV. I used IceTV as the XMLTV-based program guide provider. I then started to mess about with some HDHomeRun devices and the PC died and I went back to a traditional DVR arrangement. Plex now has DVR capabilities and it has been doing a reasonable job with guide data (and recording in general), but they’ve decided it’s all a bit too hard to curate guides and want users (at least in Australia) to use XMLTV-based guides instead. So I’m back to using IceTV with Plex. They’re offering a free trial at the moment for Plex users, and setup instructions are here. No, I don’t get paid if you click on the links.
  • Speaking of axe-throwing, the Cohesity team in Queensland is organising a social event for Friday 21st June from 2 – 4 pm at Maniax Axe Throwing in Newstead. You can get in contact with Casey if you’d like to register.
  • VeeamON Forum Australia is coming up soon. It will be held at the Hyatt Regency Hotel in Sydney on July 24th and should be a great event. You can find out more information and register for it here. The Vanguards are also planning something cool, so hopefully we’ll see you there.
  • Speaking of Veeam, Anthony Spiteri recently published his longest title in the Virtualization is Life! catalogue – Orchestration Of NSX By Terraform For Cloud Connect Replication With vCloud Director. It’s a great article, and worth checking out.
  • There’s a lot of talk and slideware devoted to digital transformation, and a lot of it is rubbish. But I found this article from Chin-Fah to be particularly insightful.

Tech Field Day – I’ll Be At Tech Field Day 19

Here’s some good news for you. I’ll be heading to the US in late June for my first Tech Field Day event – Tech Field Day 19 (as opposed to the Storage Field Day events I’ve attended previously). If you haven’t heard of the very excellent Tech Field Day events, you should check them out. I’m looking forward to a little time travel and spending time with some really smart people for a few days. It’s also worth checking back on the Tech Field Day 19 website during the event (June 26 – 28) as there’ll be video streaming and updated links to additional content. You can also see the list of delegates and event-related articles that have been published.

I think it’s a great line-up of both delegates and presenting companies this time around. (If more companies are added to the agenda I’ll update this).

I’d like to publicly thank in advance the nice folks from Tech Field Day who’ve seen fit to have me back, as well as my employer for letting me take time off to attend these events. Also big thanks to the companies presenting. It’s going to be a lot of fun. Seriously.

Cohesity Basics – Excluding VMs Using Tags – Real World Example

I’ve written before about using VM tags with Cohesity to exclude VMs from a backup. I wanted to write up a quick article using a real world example in the test lab. In this instance, we had someone deploying 200 VMs over a weekend to test a vendor’s storage array with a particular workload. The problem was that I had Cohesity set to automatically protect any new VMs that are deployed in the lab. This wasn’t a problem from a scalability perspective. Rather, the problem was that we were backing up a bunch of test data that didn’t dedupe well and didn’t need to be protected by what are ultimately finite resources.

As I pointed out in the other article, creating tags for VMs and using them as a way to exclude workloads from Cohesity is not a new concept, and is fairly easy to do. You can also apply the tags in bulk using the vSphere Web Client if you need to. But a quicker way to do it (and something that can be done post-deployment) is to use PowerCLI to search for VMs with a particular naming convention and apply the tags to those.

Firstly, you’ll need to log in to your vCenter.

PowerCLI C:\> Connect-VIServer vCenter

In this example, the test VMs are deployed with the prefix “PSV”, so this makes it easy enough to search for them.

PowerCLI C:\> get-vm | where {$_.name -like "PSV*"} | New-TagAssignment -Tag "COH-NoBackup"

This assumes that the tag already exists on the vCenter side of things, and you have sufficient permissions to apply tags to VMs. You can check your work with the following command.

PowerCLI C:\> get-vm | where {$_.name -like "PSV*"} | Get-TagAssignment
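If the tag (or a category to put it in) doesn’t exist yet, you can create both from PowerCLI before assigning anything. This is just a sketch; the “Backup” category name is an example I’ve made up, so adjust it to suit your environment.

# "Backup" is an example category name - create the category and tag only if they're missing
if (-not (Get-TagCategory -Name "Backup" -ErrorAction SilentlyContinue)) {
    New-TagCategory -Name "Backup" -Cardinality Single -EntityType VirtualMachine
}
if (-not (Get-Tag -Name "COH-NoBackup" -ErrorAction SilentlyContinue)) {
    New-Tag -Name "COH-NoBackup" -Category "Backup"
}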

One thing to note. If you’ve updated the tags of a bunch of VMs in your vCenter environment, you may notice that the objects aren’t immediately excluded from the Protection Job on the Cohesity side of things. The reason for this is that, by default, Cohesity only refreshes vCenter source data every 4 hours. One way to force the update is to manually refresh the source vCenter in Cohesity. To do this, go to Protection -> Sources. Click on the ellipsis on the right-hand side of your vCenter source you’d like to refresh, and select Refresh.

You’ll then see that the tagged VMs are excluded in the Protection Job. Hat tip to my colleague Mike for his help with PowerCLI. And hat tip to my other colleague Mike for causing the problem in the first place.

Brisbane VMUG – July 2019


The July edition of the Brisbane VMUG meeting will be held on Tuesday 23rd July at Fishburners from 4 – 6pm. It’s sponsored by Pivotal and promises to be a great afternoon.

Here’s the agenda:

  • VMUG Intro
  • VMware and Pivotal Presentation: Rapid and automated deployment of Kubernetes with VMware and Pivotal
  • Q&A
  • Refreshments and drinks.

Pivotal have gone to great lengths to make sure this will be a fun and informative session and I’m really looking forward to hearing more about what they’re doing. You can find out more information and register for the event here. I hope to see you there. Also, if you’re interested in sponsoring one of these events, please get in touch with me and I can help make it happen.