Random Short Take #39

Welcome to Random Short Take #39. Not a huge number of players have worn 39 in the NBA, and I’m not going to pretend I’m any real fan of The Dwightmare. But things are tough all around, so let’s remain optimistic and push through to number 40. Anyway, let’s get random.

  • VeeamON 2020 was online this week, and Anthony Spiteri has done a great job of summarising the major technical session announcements here.
  • I’ve known Howard Marks for a while now, and always relish the opportunity to speak with him when I can. This post is pretty hilarious, and I’m looking forward to reading the followup posts.
  • This is a great article from Alastair Cooke on COVID-19 and what En-Zed has done so effectively to stop the spread. It was interesting to hear his thoughts on returning to the US, and I do agree that it’s going to be some time until I make the trip across the Pacific again.
  • Sometimes people get crazy ideas about how they might repurpose some old bits of technology. It’s even better when they write about their experiences in doing so. This article on automating an iPod Hi-Fi’s volume control over at Six Colors was fantastic.
  • Chris M. Evans put out a typically thought-provoking piece on data migration challenges recently that I think is worth checking out. I’ve been talking a lot to customers who are facing these challenges on a daily basis, and it’s interesting to see how similar the problems are regardless of the industry vertical they operate in; it’s sometimes just the depth that varies, so to speak.
  • I frequently bump into Ray Lucchesi at conferences, and he knows a fair bit about what does and doesn’t work. This article on his experiences recently with a number of virtual and online conferences is the epitome of constructive criticism.
  • Speaking of online conferences, the Australian VMUG UserCon will be virtual this year and will be held on the 30th July. You can find out more and register here.
  • Finally, if you’ve spent any time with me socially, you’ll know I’m a basketball nut. And invariably I’ll tell you that Deftones is my favouritest band ever. So it was great to come across this article about White Pony on one of my favourite sports (and popular culture) websites. If you’re a fan of Deftones, this is one to check out.

 

OT – Upgrading From macOS Mojave To Catalina (The Hard Way)

This post is really about the boring stuff I do when I have a day off, so it isn’t terribly exciting. TL;DR I had some problems upgrading to Catalina, and had to start from scratch.

 

Background

I’ve had an Apple Mac since around 2008. I upgraded from a 24″ iMac to a 27″ iMac and was super impressed with the process of migrating between machines, primarily because of Time Machine’s ability to recover settings, applications, and data in a fairly seamless fashion. I can’t remember what version of macOS I started with (maybe Leopard?), but I’ve moved steadily through the last few versions with a minimal amount of fuss. I was running Mojave on my iMac late last year when I purchased a refurbished 2018 Mac mini. At the time, I decided not to upgrade to Catalina, as I’d had a few issues with my work laptop and didn’t need the aggravation. So I migrated from the iMac to the Mac mini and kept on keeping on with Mojave.

Fast forward to April this year, and the Mac mini gave up the ghost. With Apple shutting down its stores here in response to COVID-19, it was a two-week turnaround at the local repair place to get the machine fixed. In the meantime, I was able to use Time Machine to load everything onto a 2012 MacBook Pro that was being used sparingly. It was a bit clunky, but it had an internal SSD and 16GB of RAM, so it could handle the basics pretty comfortably. When the Mac mini was repaired, I used Time Machine once again to move everything back. It’s important to note that this is everything (settings, applications, and data) that had been accumulated since 2008, so there’s a bit of cruft associated with this build: a bunch of 32-bit applications that I’d lost track of, widgets that were no longer really in use, and so on.

 

The Big Update

I took the day off on Friday last week. I’d been working a lot of hours since COVID-19 restrictions kicked in here, and I’d been filling my commuting time with day job work (sorry blog!). I thought it would be fun to upgrade the Mac mini to Catalina. I felt that things were in a reasonable enough state that I could work with what it had to offer, and I get twitchy when there’s an upgrade notification on the Settings icon. Just sitting there, taunting me.

I downloaded the installer and pressed on. No dice: my system volume wasn’t formatted with APFS. How could this be? Well, even though APFS has been around for a little while now, I’d been moving my installation across various machines. When the APFS conversion became part of the macOS upgrade, I was running an iMac with a spinning disk as the system volume, so the installer never prompted me to convert. When I moved to the Mac mini, I didn’t do any macOS upgrade, so I guess it just kept working with the HFS+ volume. It seems a bit weird that Catalina doesn’t offer a workaround for this, but I may just have been looking in the wrong place.

Now, there was a lot of chatter in the forums about rebooting into Recovery Mode and converting the drive to an APFS volume. No matter what I tried, I was unable to do this effectively (either using the Recovery Mode console with Mojave or with Catalina booted from USB). I followed articles like this one but just didn’t have the same experience. And when I erased the system drive and attempted to recover from Time Machine backups, it would re-erase the volume as HFS+. So, I don’t know, I guess I’m an idiot. The solution that finally worked for me was to erase the drive, format it as APFS, install Mojave from scratch, and recover from a Time Machine backup. Unfortunately, this seemed to only want to transfer around 800KB of settings data. The normal “wait a few hours while we copy your stuff” just didn’t happen. Sod knows why, but what I did know was that I was really wasting my day off.
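As an aside, you can check ahead of time whether your system volume is still on HFS+, rather than waiting for the installer to complain. The sketch below is a minimal example, assuming a macOS host (for the built-in diskutil command) and Python 3.7 or later; running diskutil info / in Terminal gives you the same answer, and the conversion command mentioned in the comments is the standard diskutil one, not anything specific to my setup.

```python
# Rough sketch: report the filesystem personality of the system volume so you
# know whether an APFS conversion is needed before running the Catalina installer.
# Assumes macOS (for the diskutil CLI) and Python 3.7+ (for capture_output).
import subprocess

def filesystem_personality(volume: str = "/") -> str:
    """Return the 'File System Personality' reported by diskutil for a volume."""
    result = subprocess.run(
        ["diskutil", "info", volume],
        capture_output=True, text=True, check=True,
    )
    for line in result.stdout.splitlines():
        if "File System Personality" in line:
            return line.split(":", 1)[1].strip()
    return "unknown"

if __name__ == "__main__":
    personality = filesystem_personality("/")
    print(f"System volume filesystem: {personality}")
    if "APFS" not in personality:
        # The in-place conversion I couldn't get to work reliably is the standard
        # one, run from Recovery Mode: diskutil apfs convert <device identifier>
        print("Still HFS+ (or similar), so the Catalina installer will baulk at it.")
```

It wouldn’t have fixed anything in my case, but it would have saved a fairly large download before the bad news arrived.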

I also ran into an issue trying to do the installation from USB. You can read about booting from external devices and the T2 security chip here, here, and here. I lost patience with the process and took a different approach.

 

Is That So Bad?

Not really. I have my Photos library and iTunes media on a separate volume. I have one email account that we’ve used POP with over the years, but I installed Thunderbird, recovered the profile from my Time Machine data, and modified profiles.ini to point to that profile (causing some flashbacks to my early days on a help desk supporting a Netscape user base). The other thing I had to do was recover my Plex database. You can read more on that here. It actually went reasonably well. I’d been storing my iPhone backups on a separate volume too, and had to follow this process to relocate those backup files. Otherwise, Microsoft, to their credit, have made the reinstallation process super simple with Microsoft 365. Once I had most everything set up again, I was able to perform the upgrade to Catalina.
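For what it’s worth, the profiles.ini change itself is trivial: the file lives under ~/Library/Thunderbird on macOS, and it just needs to point at the restored profile directory. Here’s a rough sketch of doing it in Python rather than by hand, assuming a standard profiles.ini with a [Profile0] section. The profile directory name is a placeholder (use whatever came back from your Time Machine restore), and editing the file in a text editor works just as well.

```python
# Minimal sketch: point Thunderbird's profiles.ini at a restored profile.
# Assumes macOS paths and a standard [Profile0] section; the profile directory
# name below is a placeholder. Back up the file and quit Thunderbird first.
import configparser
from pathlib import Path

profiles_ini = Path.home() / "Library" / "Thunderbird" / "profiles.ini"

config = configparser.ConfigParser()
config.optionxform = str  # preserve the key casing Thunderbird expects
config.read(profiles_ini)

# "Profiles/restored.default" is hypothetical - use the directory name that
# came back from your Time Machine restore.
config["Profile0"]["IsRelative"] = "1"
config["Profile0"]["Path"] = "Profiles/restored.default"
config["Profile0"]["Default"] = "1"

with open(profiles_ini, "w") as handle:
    config.write(handle, space_around_delimiters=False)
```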

 

Conclusion

If this process sounds like it was a bit of a pain, it was. I don’t know that Apple has necessarily dropped the ball in terms of usability in the last few years, but sometimes it feels like it. I think I just had really high expectations based on some good fortune I’d enjoyed over the past 12 years. I’m not sure what the term is exactly, but it’s possible that because I’ve invested this much money in a product, I’m more forgiving of the issues associated with it. Apple has done a great job historically of masking the complexity of technology from the end user. Sometimes, though, you’re going to come across odd situations that push you down an equally odd path. That’s what I tell myself anyway as I rue the time I lost on this upgrade. Was anyone else’s upgrade to Catalina this annoying?

Random Short Take #32

Welcome to Random Short Take #32. A lot of good players have worn 32 in the NBA. I’m a big fan of Magic Johnson, but honourable mentions go to Jimmer Fredette and Blake Griffin. It’s a bit of a weird time around the world at the moment, but let’s get to it.

  • Veeam 10 was finally announced a little while ago and is now available for deployment. I work for a service provider, and we use Veeam, so this article from Anthony was just what I was after. There’s a What’s New article from Veeam you can view here too.
  • I like charts, and I like Apple laptops, so this chart was a real treat. The lack of ports is nice to look at, I guess, but carrying a bag of dongles around with me is a bit of a pain.
  • VMware recently made some big announcements around vSphere 7, amongst other things. Ather Beg did a great job of breaking down the important bits. If you like to watch videos, this series from VMware’s recent presentations at Tech Field Day 21 is extremely informative.
  • Speaking of VMware Cloud Foundation, Cormac Hogan recently wrote a great article on getting started with VCF 4.0. If you’re new to VCF – this is a great resource.
  • Leaseweb Global recently announced the availability of 2nd Generation AMD EPYC powered hosts as part of its offering. I had a chance to speak with Mathijs Heikamph about it a little while ago. One of the most interesting things he said, when I questioned him about the market appetite for dedicated servers, was “[t]here’s no beating a dedicated server when you know the workload”. You can read the press release here.
  • This article is just … ugh. I used to feel a little sorry for businesses being disrupted by new technologies. My sympathy is rapidly diminishing though.
  • There’s a whole bunch of misinformation on the Internet about COVID-19 at the moment, but sometimes a useful nugget pops up. This article from Kieren McCarthy over at El Reg delivers some great tips on working from home – something more and more of us (at least in the tech industry) are doing right now. It’s not all about having a great webcam or killer standup desk.
  • Speaking of things to do when you’re working at home, JB posted a handy note on what he’s doing when it comes to lifting weights and getting in some regular exercise. I’ve been using this opportunity to get back into garage weights, but apparently it’s important to lift stuff more than once a month.

Random Short Take #31

Welcome to Random Short Take #31. A lot of good players have worn 31 in the NBA. You’d think I’d call this the Reggie edition (and I appreciate him more after watching Winning Time), but this one belongs to Brent Barry. This may be related to some recency bias I have, based on the fact that Brent is a commentator in NBA 2K19, but I digress …

  • Late last year I wrote about Scale Computing’s big bet on a small form factor. Scale Computing recently announced that Jerry’s Foods is using the HE150 solution for in-store computing.
  • I find Plex to be a pretty rock solid application experience, and most of the problems I’ve had with it have been client-related. I recently had a problem with a server update that borked my installation though, and had to roll back. Here’s the quick and dirty way to do that on macOS.
  • Here are 7 contentious thoughts on data protection from Preston. I think there are some great ideas here and I recommend taking the time to read this article.
  • I recently had the chance to speak with Michael Jack from Datadobi about the company’s announcement of its new DIY Starter Pack for NAS migrations. Whilst it seems that the professional services market for NAS migrations has diminished over the last few years, there’s still plenty of data out there that needs to be moved from one box to another. Robocopy and rsync aren’t always the best option when you need to move this much data around.
  • There are a bunch of things that people need to learn to do operations well. A lot of them are learnt the hard way. This is a great list from Jan Schaumann.
  • Analyst firms are sometimes misunderstood. My friend Enrico Signoretti has been working at GigaOm for a little while now, and I really enjoyed this article on the thinking behind the GigaOm Radar.
  • Nexsan recently announced some enhancements to its “BEAST” storage platforms. You can read more on that here.
  • Alastair isn’t just a great writer and moustache aficionado, he’s also a trainer across a number of IT disciplines, including AWS. He recently posted this useful article on what AWS newcomers can expect when it comes to managing EC2 instances.

Dell EMC PowerOne – Not V(x)block 2.0

Disclaimer: I recently attended Storage Field Day 19.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Dell EMC recently presented at Storage Field Day 19. You can see videos of the presentation here, and download my rough notes from here.

 

Not VxBlock 2.0?

Dell EMC describes PowerOne as “all-in-one autonomous infrastructure”. It’s converged infrastructure, meaning your storage, compute, and networking are all built into the rack. It’s a transportation-tested package that ships fully assembled. When it arrives, you can plug it in, fire up the API, and be up and going “within a few hours”.

Trey Layton is no stranger to Vblock / VxBlock, and he was very clear with the delegates that PowerOne is not replacing VxBlock. After all, VxBlock lets them sell Dell EMC external storage into Cisco UCS customers.

 

So What Is It Then?

It’s a rack or racks full of gear. All of which is now Dell EMC gear. And it’s highly automated and has some proper management around it too.

[image courtesy of Dell EMC]

So what’s in those racks?

  • PowerMax Storage – World’s “fastest” storage array
  • PowerEdge MX – industry leading compute
  • PowerSwitch – Declarative system fabric
  • PowerOne Controller – API-powered automation engine

PowerMax Storage

  • Zero-touch SAN config
  • Discovery / inventory of storage resources
  • Dynamically create storage volumes for clusters
  • Intelligent load balancing

PowerEdge MX Compute

  • Dynamically provision compute resources into clusters
  • Automated chassis expansion
  • Telemetry aggregation
  • Kinetic infrastructure

System Fabrics

  • Switches are 32Gbps
  • 98% reduction in network configuration steps
  • System fabric visibility and lifecycle management
  • Intent-based automated deployment and provision
  • PowerSwitch open networking

PowerOne Controller

  • Highly automates 1000s of tasks
  • Powered by Kubernetes and Ansible
  • Delivers next-gen autonomous outcomes via robust API capabilities

From a scalability perspective, you can go to 275 nodes in a pod, and you can look after up to 32 pods (I think). The technical specifications are here.

 

Thoughts and Further Reading

Converged infrastructure has always been an interesting architectural choice for the enterprise. When VCE first came into being 10+ years ago via Acadia, delivering consistent infrastructure experiences in the average enterprise was a time-consuming endeavour and not a lot of fun. It was also hard to do well. VCE changed a lot of that with Vblock, but you paid a premium. The reason you paid that premium was that VCE did a pretty decent job of putting together an architecture that was reliable and, more importantly, supportable by the vendor. It wasn’t just the IP behind this that made it successful though, it was the effort put into logistics and testing. And yes, a lot of that was built on the strength of spreadsheets and the blood, sweat and tears of the deployment engineers out in the field.

PowerOne feels like a very different beast in this regard. Dell EMC took us through a demo of the “unboxing” experience, and talked extensively about the lifecycle of the product. They also demonstrated many of the automation features included in the solution that weren’t always there with Vblock. I’ve been responsible for Vblock environments over the years, and a lot of the lifecycle management activities were very thoroughly documented, and extremely manual. PowerOne, on the other hand, doesn’t look like it relies extensively on documentation and spreadsheets to be managed effectively. But maybe that’s just because Trey and the team were able to demonstrate things so effectively.

So why would the average enterprise get tangled up in converged infrastructure nowadays, what with all the kids and their HCI solutions, the public cloud, and the plethora of easy-to-consume infrastructure solutions available via competitive consumption models? Well, some enterprises don’t like relying on people within the organisation to deliver solutions for mission-critical applications. These enterprises would rather leave that type of outcome in the hands of one trusted vendor. But they might still want that outcome to be hosted on-premises. Think of big financial institutions, and various government agencies looking after very important things. These are the kinds of customers that PowerOne is well suited to.

That doesn’t mean that what Dell EMC is doing with PowerOne isn’t innovative. In fact, I think what they’ve managed to do is very innovative, within the confines of converged infrastructure. This type of approach isn’t for everyone though. There’ll always be organisations that can do it faster and cheaper themselves, but they may or may not have as much at stake as some of the other guys. I’m curious to see how much uptake this particular solution gets in the market, particularly in environments where HCI and public cloud adoption is on the rise. It strikes me that Dell EMC has turned a corner in terms of system integration too, as the out-of-the-box experience looks really well thought out compared to some of its previous attempts at integration.

Random Short Take #30

Welcome to Random Short Take #30. You’d think 30 would be an easy choice, given how much I like Wardell Curry II, but for this one I’m giving a shout out to Rasheed Wallace instead. I’m a big fan of ‘Sheed. I hope you all enjoy these little trips down NBA memory lane. Here we go.

  • Veeam 10’s release is imminent. Anthony has been doing a bang-up job covering some of the enhancements in the product. This article was particularly interesting because I work for a company that sells Veeam and uses vCloud Director.
  • Sticking with data protection, Curtis wrote an insightful article on backups and frequency.
  • If you’re in Europe or parts of the US (or can get there easily), like writing about technology, and you’re into cars and stuff, this offer from Cohesity could be right up your alley.
  • I was lucky enough to have a chat with Sheng Liang from Rancher Labs a few weeks ago about how it’s going in the market. I’m relatively Kubernetes illiterate, but it sounds like there’s a bit going on.
  • For something completely different, this article from Christian on Raspberry Pi, volumio and HiFiBerry was great. Thanks for the tip!
  • Spinning disk may be as dead as tape, if these numbers are anything to go by.
  • This was a great article from Matt Crape on home lab planning.
  • Speaking of home labs, Shanks posted an interesting article on what he has running. The custom-built rack is inspired.

Random Short Take #23

Want some news? In a shorter format? And a little bit random? This listicle might be for you.

  • Remember Retrospect? They were acquired by StorCentric recently. I hadn’t thought about them in some time, but they’re still around, and celebrating their 30th anniversary. Read a little more about the history of the brand here.
  • Sometimes size does matter. This article around deduplication and block / segment size from Preston was particularly enlightening.
  • This article from Russ had some great insights into why it’s not wise to entirely rule out doing things the way service providers do just because you’re working in enterprise. I’ve had experience in both SPs and enterprise and I agree that there are things that can be learnt on both sides.
  • This is a great article from Chris Evans about the difficulties associated with managing legacy backup infrastructure.
  • The Pure Storage VM Analytics Collector is now available as an OVA.
  • If you’re thinking of updating your Mac’s operating environment, this is a fairly comprehensive review of what macOS Catalina has to offer, along with some caveats.
  • Anthony has been doing a bunch of cool stuff with Terraform recently, including using variable maps to deploy vSphere VMs. You can read more about that here.
  • Speaking of people who work at Veeam, Hal has put together a great article on orchestrating Veeam recovery activities to Azure.
  • Finally, the Brisbane VMUG meeting originally planned for Tuesday 8th has been moved to the 15th. Details here.

Random Short Take #17

Here are some links to some random news items and other content that I recently found interesting. You might find them interesting too. Episode 17 – am I over-sharing? There’s so much I want you to know about.

  • I seem to always be including a link from the Backblaze blog. That’s mainly because they write about things I’m interested in. In this case, they’ve posted an article discussing the differences between availability and durability that I think is worth your time.
  • Speaking of interesting topics, Preston posted an article on NetWorker Pools with Data Domain that’s worth looking at if you’re into that kind of thing.
  • Maintaining the data protection theme, Alastair wrote an interesting article titled “The Best Automation Is One You Don’t Write” (you know, like the best IO is one you don’t need to do?) as part of his work with Cohesity. It’s a good article, and not just because he mentions my name in it.
  • I recently wanted to change the edition of Microsoft Office I was using on my MacBook Pro and couldn’t really work out how to do it. In the end, the answer is simple. Download a Microsoft utility to remove your Office licenses, and then fire up an Office product and it will prompt you to re-enter your information at that point.
  • This is an old article, but it answered my question about validating MD5 checksums on macOS; there’s a quick sketch of the same idea after this list.
  • Excelero have been doing some cool stuff with Imperial College London – you can read more about that here.
  • Oh hey, Flixster Video is closing down. I received this in my inbox recently: “[f]ollowing the announcement by UltraViolet that it will be discontinuing its service on July 31, 2019, we are writing to provide you notice that Flixster Video is planning to shut down its website, applications and operations on October 31, 2019”. It makes sense, obviously, given UltraViolet’s demise, but it still drives me nuts. The ephemeral nature of digital media is why I still have a house full of various sized discs with various kinds of media stored on them. I think the answer is to give yourself over to the streaming lifestyle, and understand that you’ll never “own” media like you used to think you did. But I can’t help but feel like people outside of the US are getting shafted in that scenario.
  • In keeping up with the “random” theme of these posts, it was only last week that I learned that “Television, the Drug of the Nation” from the very excellent album “Hypocrisy Is the Greatest Luxury” by The Disposable Heroes of Hiphoprisy was originally released by Michael Franti and Rono Tse when they were members of The Beatnigs. If you’re unfamiliar with any of this I recommend you check them out.
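Following up on the MD5 item above: the built-in md5 and shasum commands do the job on macOS, but if you’d rather script the check, here’s a minimal sketch in Python. The filename and the expected digest below are placeholders only.

```python
# Minimal sketch: verify a downloaded file against a published MD5 checksum.
# The filename and expected digest below are placeholders, not real values.
import hashlib
from pathlib import Path

def md5_of(path: Path, chunk_size: int = 1024 * 1024) -> str:
    """Return the hex MD5 digest of a file, reading it in chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    downloaded = Path("installer.dmg")                 # placeholder path
    published = "d41d8cd98f00b204e9800998ecf8427e"     # placeholder checksum
    actual = md5_of(downloaded)
    print("OK" if actual == published else f"Mismatch: got {actual}")
```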

Random Short Take #16

Here are a few links to some random news items and other content that I recently found interesting. You might find them interesting too. Episode 16 – please enjoy these semi-irregular updates.

  • Scale Computing has been doing a bit in the healthcare sector lately – you can read news about that here.
  • This was a nice roundup of the news from Apple’s recent WWDC from Six Colors. Hat tip to Stephen Foskett for the link. Speaking of WWDC news, you may have been wondering what happened to all of your purchased content with the imminent demise of iTunes on macOS. It’s still a little fuzzy, but this article attempts to shed some light on things. Spoiler: you should be okay (for the moment).
  • There’s a great post on the Dropbox Tech Blog from James Cowling discussing the mission versus the system.
  • The more things change, the more they remain the same. For years I had a Windows PC running Media Center and recording TV. I used IceTV as the XMLTV-based program guide provider. I then started to mess about with some HDHomeRun devices and the PC died and I went back to a traditional DVR arrangement. Plex now has DVR capabilities and it has been doing a reasonable job with guide data (and recording in general), but they’ve decided it’s all a bit too hard to curate guides and want users (at least in Australia) to use XMLTV-based guides instead. So I’m back to using IceTV with Plex. They’re offering a free trial at the moment for Plex users, and setup instructions are here. No, I don’t get paid if you click on the links.
  • Speaking of axe-throwing, the Cohesity team in Queensland is organising a social event for Friday 21st June from 2 – 4 pm at Maniax Axe Throwing in Newstead. You can get in contact with Casey if you’d like to register.
  • VeeamON Forum Australia is coming up soon. It will be held at the Hyatt Regency Hotel in Sydney on July 24th and should be a great event. You can find out more information and register for it here. The Vanguards are also planning something cool, so hopefully we’ll see you there.
  • Speaking of Veeam, Anthony Spiteri recently published his longest title in the Virtualization is Life! catalogue – Orchestration Of NSX By Terraform For Cloud Connect Replication With vCloud Director. It’s a great article, and worth checking out.
  • There’s a lot of talk and slideware devoted to digital transformation, and a lot of it is rubbish. But I found this article from Chin-Fah to be particularly insightful.

Liqid Are Dynamic In The DC

Disclaimer: I recently attended Dell Technologies World 2019.  My flights, accommodation and conference pass were paid for by Dell Technologies via the Media, Analysts and Influencers program. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

As part of my attendance at Dell Technologies World 2019 I had the opportunity to attend Tech Field Day Extra sessions. You can view the videos from the session here, and download my rough notes from here.

 

Liqid

One of the presenters at Tech Field Day Extra was Liqid, a company that specialises in composable infrastructure. So what does that mean then? Liqid “enables Composable Infrastructure with a PCIe fabric and software that orchestrates and manages bare-metal servers – storage, GPU, FPGA / TPU, Compute, Networking”. They say they’re not disaggregating DRAM, as the industry’s not ready for that yet. Interestingly, Liqid have made sure they can do all of this with bare metal, as “[c]omposability without bare metal, with disaggregation, that’s just hyper-convergence”.

 

[image courtesy of Liqid]

The whole show is driven through Liqid Command Center, and there’s a switching PCIe fabric as well. You then combine this with various hardware elements, such as:

  • JBoF – Flash;
  • JBoN – Network;
  • JBoG – GPU; and
  • Compute nodes.

There are various expansion chassis options (network, storage, and graphics) and you can add in standard x86 servers. You can read about Liqid’s announcement around Dell EMC PowerEdge servers here.
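To make the composability idea a little more concrete, here’s a purely illustrative sketch in Python. To be clear, this is not Liqid’s API or data model (the real thing is driven through Command Center over a PCIe fabric); it just models the general concept of carving a bare-metal node out of shared resource pools and handing the devices back when the workload is finished.

```python
# Purely illustrative model of composable infrastructure: resources live in
# shared pools, get attached to a bare-metal node on demand, and are returned
# afterwards. This is a conceptual sketch, not Liqid's actual API or data model.
from dataclasses import dataclass, field

@dataclass
class ResourcePool:
    name: str
    free: list = field(default_factory=list)

    def allocate(self, count: int) -> list:
        """Hand out count devices from the pool, failing if there aren't enough."""
        if count > len(self.free):
            raise RuntimeError(f"Not enough free resources in {self.name} pool")
        taken, self.free = self.free[:count], self.free[count:]
        return taken

    def release(self, devices: list) -> None:
        """Return devices to the pool for the next workload."""
        self.free.extend(devices)

@dataclass
class ComposedNode:
    name: str
    gpus: list
    drives: list

# Example: compose a GPU-heavy node for a training job, then decompose it.
gpu_pool = ResourcePool("GPU", [f"gpu{i}" for i in range(8)])
nvme_pool = ResourcePool("NVMe", [f"nvme{i}" for i in range(16)])

node = ComposedNode("training-01", gpu_pool.allocate(4), nvme_pool.allocate(2))
print(f"{node.name} composed with {node.gpus} and {node.drives}")

# When the job finishes, the devices go straight back into the pools.
gpu_pool.release(node.gpus)
nvme_pool.release(node.drives)
```

The point of the exercise (and of the real thing) is that the GPUs and drives aren’t welded to a particular server; the “server” is just whatever the fabric says it is at that moment.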

Other Interesting Use Cases

Some of the more interesting use cases discussed by Liqid included “brownfield” deployments where customers don’t want to disaggregate everything. If they just want to disaggregate GPUs, for example, they can add a GPU pool to a fabric. This can be done with storage as well. Why would you want to do this kind of thing with networking? There are apparently a few service providers that like the composable networking use case. You can also have multiple fabric types, with Liqid managing cross-composability.

[image courtesy of Liqid]

Customers?

Liqid have customers across a variety of workload types, including:

  • AI & Deep Learning
    • GPU Scale out
    • Enable GPU Peer-2-Peer at scale
    • GPU Dynamic Reallocation/Sharing
  • Dynamic Cloud
    • CSP, ISP, Private Cloud
    • Flexibility, Resource Utilisation, TCO
    • Bare Metal Cloud Product Offering
  • HPC & Clustering
    • High Performance Computing
    • Lowest Latency Interconnect
    • Enables Massive Scale Out
  • 5G Edge
    • Utilisation & Reduced Foot Print
    • High Performance Edge Compute
    • Flexibility and Ease of Scale Out

Thoughts and Further Reading

I’ve written enthusiastically about composable infrastructure in the past, and it’s an approach to infrastructure that continues to fascinate me. I love the idea of being able to move pools of resources around the DC based on workload requirements. This isn’t just moving VMs to machines that are bigger as required (although I’ve always thought that was cool). This is moving resources to where they need to be. We have the kind of interconnectivity technology available now that means we don’t need to be beholden to “traditional” x86 server architectures. Of course, the success of this approach is in no small part dependent on the maturity of the organisation. There are some workloads that aren’t going to be a good fit with composable infrastructure. And there are going to be some people that aren’t going to be a good fit either. And that’s fine. I don’t think we’re going to see traditional rack-mount servers and centralised storage disappear off into the horizon any time soon. But the possibilities that composable infrastructure presents to organisations that have possibly struggled in the past with getting the right resources to the right workload at the right time are really interesting.

There are still a small number of companies that are offering composable infrastructure solutions. I think this is in part because it’s viewed as a niche requirement that only certain workloads can benefit from. But as companies like Liqid are demonstrating, the technology is maturing at a rapid pace and, much like our approach to on-premises infrastructure versus the public cloud, I think it’s time that we take a serious look at how this kind of technology can help businesses worry more about their business and less about the resources needed to drive their infrastructure. My friend Max wrote about Liqid last year, and I think it’s worth reading his take if you’re in any way interested in what Liqid are doing.