Pure Storage Acquires Portworx

Pure Storage announced its intention to acquire Portworx in mid-September. Around that time I had the opportunity to talk about the news with Goutham Rao (Portworx CTO) and Matt Kixmoeller (Pure Storage VP, Strategy) and thought I’d share some brief thoughts here.

 

The News

Pure and Portworx have entered an agreement that will see Pure pay approximately $370M US in cash. Portworx will form a new Cloud Native Business Unit inside Pure to be led by Portworx CEO Murli Thirumale. All Portworx founders are joining Pure, with Pure investing significantly to grow the new business unit. According to Pure, “Portworx software to continue as-is, supporting deployments in any cloud and on-premises, and on any bare metal, VM, or array-based storage”. It was also noted that “Portworx solutions to be integrated with Pure yet maintain a commitment to an open ecosystem”.

About Portworx

Described as the “leading Kubernetes data services platform”, Portworx was founded in 2014 in Los Altos, CA. It runs a 100% software, subscription, and cloud business model with development and support sites in California, India, and Eastern Europe. The product has been GA since 2017, and is used by some of the largest enterprise and Cloud / SaaS companies globally.

 

What’s A Portworx?

The idea behind Portworx is that it gives you data services for any application, on any Kubernetes distribution, running on any cloud, any infrastructure, and at any stage of the application lifecycle. To that end, it’s broken up into a number of components, and runs in the K8s control plane adjacent to the applications. There’s a rough sketch of what consuming it looks like after the component rundown below.

PX-Store

  • Software-defined storage layer that automates container storage for developers and admins
  • Consistent storage APIs: cloud, bare metal, or arrays

PX-Migrate

  • Easily move applications between clusters
  • Enables hybrid cloud and multi-cloud mobility

PX-Backup

  • Application-consistent backup for cloud native apps with all k8s artefacts and state
  • Backup to any cloud or on-premises object storage

PX-Secure

  • Implement consistent encryption and security policies across clouds
  • Enable multi-tenancy with access controls

PX-DR

  • Sync and async replication between Availability Zones and regions
  • Zero RPO active / active for high resiliency

PX-Autopilot

  • GitOps-driven automation gives non-storage experts an easier platform for deploying stateful applications; it monitors everything about an application and reacts to prevent problems before they happen
  • Auto-scale storage as your app grows to reduce costs
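
To make that a bit more concrete, here’s a rough sketch (using the Python Kubernetes client) of what consuming PX-Store looks like from a developer’s point of view: define a Portworx-backed StorageClass, then request a volume with a regular PersistentVolumeClaim. The provisioner name and the replication parameter below are illustrative assumptions on my part, so check the Portworx documentation for the values that match your deployment.

```python
# Illustrative only: the provisioner name and parameters are assumptions, not
# verified Portworx settings for any particular release.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in a pod

# A Portworx-backed StorageClass with (assumed) 3-way replication.
storage_class = client.V1StorageClass(
    metadata=client.V1ObjectMeta(name="px-replicated"),
    provisioner="pxd.portworx.com",   # assumed CSI provisioner name
    parameters={"repl": "3"},         # assumed Portworx replication parameter
    allow_volume_expansion=True,
)
client.StorageV1Api().create_storage_class(storage_class)

# Developers then just ask for storage the Kubernetes way.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="px-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="px-replicated",
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)
client.CoreV1Api().create_namespaced_persistent_volume_claim("default", pvc)
```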

 

How It Fits Together

When you bring Portworx into the fold, you start to see how well it fits with the existing Pure Storage portfolio. In the image below you’ll also see support for the standard Container Storage Interface (CSI), allowing it to work with other vendors’ storage.

[image courtesy of Pure Storage]

Also worth noting is that PX-Essentials remains free forever for workloads under 5TB and 5 nodes.

 

Thoughts and Further Reading

I think this is a great move by Pure, mainly because it lends them a whole lot more credibility with the DevOps folks. Pure was starting to make inroads with Pure Storage Orchestrator, and I think this move will strengthen that story. Giving Portworx access to Pure’s salesforce globally is also going to broaden its visibility in the market and open up doors to markets that may have been difficult to get into previously.

Persistent storage for containers is heating up. As Rao pointed out in our discussion, “as container adoption grows, storage becomes a problem”. Portworx already had a good story to tell in this space, and Pure is no slouch when it comes to delivering advanced storage capabilities across a variety of platforms. I like that the messaging has been firmly based in maintaining the openness of the platform and I’m interested to see what other integrations happen as the two companies start working more closely together. If you’d like another perspective on the news, check out Chris Evans’s article here.

Pure Storage Announces Second Generation FlashArray//C with QLC

Disclaimer: I recently attended Storage Field Day 20.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Pure Storage recently announced its second generation FlashArray//C – an all-QLC offering that delivers scads of capacity in a dense form factor. Pure Storage presented on this topic at Storage Field Day 20. You can see videos of the presentation here, and download my rough notes from here.

 

It’s A Box!

FlashArray//C burst onto the scene last year as an all-flash, capacity-optimised storage option for customers looking for storage that didn’t need to go quite as fast as the FlashArray//X, but that wasn’t built on spinning disk. Available capacities range from 1.3PB to 5.2PB (effective).

[image courtesy of Pure Storage]

There are a number of models available, with a variety of capacities and densities.

Model       | Capacity                                | Physical
//C60-366   | Up to 1.3PB effective**; 366TB raw**    | 3U; 1000–1240 watts (nominal–peak); 97.7 lbs (44.3 kg) fully loaded; 5.12” x 18.94” x 29.72” chassis
//C60-494   | Up to 1.9PB effective**; 494TB raw**    | 3U; 1000–1240 watts (nominal–peak); 97.7 lbs (44.3 kg) fully loaded; 5.12” x 18.94” x 29.72” chassis
//C60-840   | Up to 3.2PB effective**; 840TB raw**    | 6U; 1480–1760 watts (nominal–peak); 177.0 lbs (80.3 kg) fully loaded; 10.2” x 18.94” x 29.72” chassis
//C60-1186  | Up to 4.6PB effective**; 1.2PB raw**    | 6U; 1480–1760 watts (nominal–peak); 185.4 lbs (84.1 kg) fully loaded; 15.35” x 18.94” x 29.72” chassis
//C60-1390  | Up to 5.2PB effective**; 1.4PB raw**    | 9U; 1960–2280 watts (nominal–peak); 273.2 lbs (123.9 kg) fully loaded; 15.35” x 18.94” x 29.72” chassis
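
It’s also worth keeping in mind what those numbers imply. The effective figures bundle Pure’s data reduction assumptions and system overheads on top of the raw capacity, so the quick arithmetic below (using the raw TB figure baked into each model name) just shows the effective-to-raw ratio the table implies; it isn’t a guarantee of what your data will reduce to.

```python
# Effective-to-raw ratios implied by the published figures above.
models = {
    "//C60-366":  (366,  1300),   # raw TB, effective TB
    "//C60-494":  (494,  1900),
    "//C60-840":  (840,  3200),
    "//C60-1186": (1186, 4600),
    "//C60-1390": (1390, 5200),
}
for name, (raw_tb, effective_tb) in models.items():
    print(f"{name}: {effective_tb / raw_tb:.1f}:1")
# Works out to roughly 3.5:1 to 3.9:1 across the range.
```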

Workloads

There are reasons why the FlashArray//C could be a really compelling option for workload consolidation. More and more workloads are “business critical” in terms of both performance and availability. There’s a requirement to do more with less, while battling complexity, and a strong desire to manage everything via a single pane of glass.

There are some other cool things you could use the //C for as well, including:

  • Automated policy-based VM tiering between //X and //C arrays;
  • DR using the //X at production and //C at your secondary site;
  • Consolidating multiple //X array workloads on a single //C array for test and dev; and
  • Consolidating multiple //X array snapshots to a single //C array for long-term retention.

 

It’s a QLC World, Sort Of

The second generation of FlashArray//C means you can potentially now have flash all through the data centre.

  • Apps and VMs – provision your high performance workloads to //X, lower performance / high capacity workloads to //C
  • Modern Data Protection & Disaster Recovery – on-premises production applications on //X efficiently replicated or backed up to //C at DR site
  • User File Shares – User file access with Purity 6.0 via SMB, NFS

QLC nonetheless presents significant engineering challenges, with traditionally higher write latency and lower endurance than SLC, MLC, and TLC. Pure Storage’s answer to that problem has been to engineer the crap out of DirectFlash to get the required results. I’d do a bad job of explaining it, so instead I recommend you check out Pete Kirkpatrick’s explanation.
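
To give a feel for the endurance side of that equation, here’s some back-of-the-envelope arithmetic. The numbers are entirely hypothetical (they’re not Pure’s figures); the point is that the standard capacity x drive-writes-per-day x days calculation shows how much less write headroom a QLC-class module has, and why the write path needs to buffer and coalesce writes so carefully.

```python
# Hypothetical endurance arithmetic - not Pure's numbers.
def lifetime_writes_tb(capacity_tb: float, dwpd: float, years: float) -> float:
    """Total terabytes that can be written over the module's life."""
    return capacity_tb * dwpd * 365 * years

print(lifetime_writes_tb(24, 0.3, 5))  # hypothetical QLC-class module: 13,140 TB
print(lifetime_writes_tb(24, 1.0, 5))  # hypothetical TLC-class module: 43,800 TB
```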

 

Thoughts And Further Reading

I covered the initial FlashArray//C announcement here and many of the reasons why this type of offering is appealing remain the same. The knock on Pure Storage in the last few years has been that, while FlashArray//X is nice and fast and a snap to use, it couldn’t provide the right kind of capacity (i.e. cheap and deep) that a number of price-sensitive punters wanted.  Sure, they could go and buy the FlashArray//X and then look to another vendor for a dense storage option, but the motivation to run with a number of storage vendors in smaller enterprise shops is normally fairly low. The folks in charge of technology in these environments are invariably stretched in terms of bodies on the floor to run the environments, and cash in the bank to procure those solutions. A single vendor solution normally makes sense for them (as opposed to some of the larger shops, or specialist organisations that really have very specific requirements that can only be serviced by particular solutions).

So now Pure Storage has the FlashArray//C, and you can get it with some decent density, some useful features (thanks in part to some new features in Purity 6), and integration with the things you know and like about Pure Storage, such as Pure1 and Evergreen storage. It seems like Pure Storage has done an awful lot of work to squeeze performance out of QLC whilst ensuring that the modules don’t need replacing every other week. There’s a lot to like about the evolving Pure Storage story, and I’m interested to see how they tie it all together as the portfolio continues to expand. You can read the press release here, access the data sheet here, and read Mellor’s take on the news here.

Pure Storage and Cohesity Announce Strategic Partnership and Pure FlashRecover

Pure Storage and Cohesity announced a strategic partnership and a new joint solution today. I had the opportunity to speak with Amy Fowler and Biswajit Mishra from Pure Storage, along with Anand Nadathur and Chris Wiborg from Cohesity, and thought I’d share my notes here.

 

Friends In The Market

The announcement comes in two parts, with the first being that Pure Storage and Cohesity are forming a strategic partnership. The idea behind this is that, together, the companies will deliver “industry-leading storage innovations from Pure Storage with modern, flash-optimised backup from Cohesity”.  There are plenty of things in common between the companies, including the fact that they’re both, as Wiborg puts it, “keenly focused on doing the right thing for the customer”.

 

Pure FlashRecover Powered By Cohesity

Partnerships are exciting and all, but what was of more interest was the Pure FlashRecover announcement. What is it exactly? It’s basically Cohesity DataProtect running on Cohesity-certified compute nodes (the whitebox gear you might be familiar with if you’ve bought Cohesity tin previously), using Pure’s FlashBlades as the storage backend.

[image courtesy of Pure Storage]

FlashRecover has a targeted general availability for Q4 CY2020 (October). It will be released in the US initially, with other regions to follow. From a go to market perspective, Pure will handle level 1 and level 2 support, with Cohesity support being engaged for escalations. Cohesity DataProtect will be added to the Pure price list, and Pure becomes a Cohesity Technology Partner.

 

Thoughts

My first thought when I heard about this was why would you? I’ve traditionally associated scalable data protection and secondary storage with slower, high-capacity appliances. But as we talked through the use cases, it started to make sense. FlashBlades by themselves aren’t super high capacity devices, but neither are the individual nodes in Cohesity appliances. String a few together and you have enough capacity to do data protection and fast recovery in a predictable fashion. FlashBlade supports 75 nodes (I think) [Edit: it scales up to 150x 52TB nodes. Thanks for the clarification from Andrew Miller] and up to 1PB of data in a single namespace. Throw in some of the capabilities that Cohesity DataProtect brings to the table and you’ve got an interesting solution. The knock on some of the next-generation data protection solutions has been that recovery can still be quite time-consuming. The use of all-flash takes away a lot of that pain, especially when coupled with a solution like FlashBlade that delivers some pretty decent parallelism in terms of getting data recovered back to production quickly.

An evolving use case for protection data is data reuse. For years, application owners have been stuck with fairly clunky ways of getting test data into environments to use with application development and testing. Solutions like FlashRecover provide a compelling story around protection data being made available for reuse, not just recovery. Another cool thing is that when you invest in FlashBlade, you’re not locking yourself into a particular silo, you can use the FlashBlade solution for other things too.

I don’t work with Pure Storage and Cohesity on a daily basis anymore, but in my previous role I had the opportunity to kick the tyres extensively with both the Cohesity DataProtect solution and the Pure Storage FlashBlade. I’m an advocate of both of these companies because of the great support I received from them, from pre-sales through to post-sales. They are relentlessly customer focused, and that really translates in both the technology and the field experience. I can’t speak highly enough of the engagement I’ve experienced with both companies, as a blogger and as an end user.

FlashRecover isn’t going to be appropriate for every organisation. Most places, at the moment, can probably still get away with taking a little time to recover large amounts of data if required. But for industries where time is money, solutions like FlashRecover can absolutely make sense. If you’d like to know more, there’s a comprehensive blog post over at the Pure Storage website, and the solution brief can be found here.

Random Short Take #38

Welcome to Random Short Take #38. Not a huge amount of players have worn 38 in the NBA, and I’m not going to pretend I was ever a Kwame Brown fan. Although it did seem like he had a tough time of it. Anyway let’s get random.

  • Ransomware is the new hotness. Or, rather, protecting storage systems from ransomware is the new hotness. My man Chin-Fah had a writeup on that here. It’s not a matter of if, but rather when you’ll run into a problem. It’s been interesting to see the various approaches being taken by the storage vendors and the data protection companies.
  • Applications for the vExpert program intake for the second half of 2020 are open, but closing soon. It’s a fantastic program to be a part of, so if you think you’ve got the goods, you can apply here. I also recommend this article from Christopher on his experiences.
  • This was a great article from Alastair on some of the differences between networking with AWS and VMC on AWS. As someone who works for a VMware Cloud Provider, I can confirm that NSX (T or V, I don’t care) has a whole slew of capabilities and a whole slew of integration challenges.
  • Are you Zoomed out? I am. Even when you think the problem can’t be the network, it might just be the network (I hope my friends in networking appreciate that it’s not always the storage). John Nicholson posted a typically comprehensive overview of how your bandwidth might be one of the things keeping you from demonstrating excellent radio voice on those seemingly endless meetings you’re doing at the moment. It could also be that you’re using crap audio devices too, but I think John’s going to cover that in the future.
  • Scale Computing has a good story to tell about what it’s been doing with a large school district in the U.S. Read more about that here.
  • This is one of those promotions aimed at my friends in Northern America more than folks based where I am, but I’m always happy to talk about deals on data protection. StorCentric has launched its “Retrospect Dads & Grads Promotion” offering a free 90-Day subscription license for every Retrospect Backup product. You can read more about that here.
  • Pure//Accelerate Online was this week, and Max did a nice write-up on Pure Storage File Services over at Gestalt IT.
  • Rancher Labs recently announced the general availability of Longhorn (a cloud-native container storage solution). I’m looking forward to digging in to this a bit more over the next little while.

 

 

Random Short Take #37

Welcome to Random Short Take #37. Not a huge amount of players have worn 37 in the NBA, but Metta World Peace did a few times. When he wasn’t wearing 15, and other odd numbers. But I digress. Let’s get random.

  • Pavilion Data recently added S3 capability to its platform. It’s based on a variant of MinIO, and adds an interesting dimension to what Pavilion Data has traditionally offered. Mellor provided some good coverage here.
  • Speaking of object storage, Dell EMC recently announced ECS 3.5. You can read more on that here. The architectural white paper has been updated to reflect the new version as well.
  • Speaking of Dell EMC, Preston posted a handy article on Data Domain Retention Lock and NetWorker. Have you pre-ordered Preston’s book yet? I’ll keep asking until you do.
  • Online events are all the rage at the moment, and two noteworthy events are coming up shortly: Pure//Accelerate and VeeamON 2020. Speaking of online events, we’re running a virtual BNEVMUG next week. Details on that here. ZertoCON Virtual is also a thing.
  • Speaking of Pure Storage, this article from Cody Hosterman on NVMe and vSphere 7 is lengthy, but definitely worth the read.
  • I can’t recall whether I mentioned that this white paper  covering VCD on VCF 3.9 is available now, and I can’t be bothered checking. So here it is.
  • I’m not just a fan of Backblaze because of its cool consumer backup solution and object storage platform, I’m also a big fan because of its blog. Articles like this one are a great example of companies doing corporate culture right (at least from what I can see).
  • I have the impression that Datadobi has been doing some cool stuff recently, and this story certainly seems to back it up.

Random Short Take #26

Welcome to my semi-regular, random news post in a short format. This is #26. I was going to start naming them after my favourite basketball players. This one could be the Korver edition, for example. I don’t think that’ll last though. We’ll see. I’ll stop rambling now.

Pure//Accelerate 2019 – Cloud Block Store for AWS

Disclaimer: I recently attended Pure//Accelerate 2019.  My flights, accommodation, and conference pass were paid for by Pure Storage. There is no requirement for me to blog about any of the content presented and I am not compensated by Pure Storage for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Cloud Block Store for AWS from Pure Storage has been around for a little while now. I had the opportunity to hear about it in more depth at the Storage Field Day Exclusive event at Pure//Accelerate 2019 and thought I’d share some thoughts here. You can grab a copy of my rough notes from the session here, and video from the session is available here.

 

Cloud Vision

Pure Storage has been focused on making everything related to their products effortless from day 1. An example of this approach is the FlashArray setup process – it’s really easy to get up and running and serving up storage to workloads. They wanted to do the same thing with anything they deliver via cloud services as well. There is, however, something of a “cloud divide” in operation in the industry. If you’re familiar with the various cloud deployment options, you’ll likely be aware that on-premises and hosted clouds are a bit different to public cloud. The two sides of that divide:

  • Deliver different application architectures;
  • Deliver different management and consumption experience; and
  • Use different storage.

So what if Pure could build application portability and deliver common shared data services?

Pure have architected their cloud service to leverage what they call “Three Pillars”:

  • Build Your Cloud
  • Run anywhere
  • Protect everywhere

 

What Is It?

So what exactly is Cloud Block Store for AWS then? Well, imagine if you will, that you’re watching an episode of Pimp My Ride, and Xzibit is talking to an enterprise punter about how he or she likes cloud, and how he or she likes the way Pure Storage’s FlashArray works. And then X says, “Hey, we heard you liked these two things so we put this thing in the other thing”. Look, I don’t know the exact situation where this would happen. But anyway …

  • 100% software – deploys instantly as a virtual appliance in the cloud, runs only as long as you need it;
  • Efficient – deduplication, compression, and thin provisioning deliver capacity and performance economically;
  • Hybrid – easily migrate data bidirectionally, delivering data portability and protection across your hybrid cloud;
  • Consistent APIs – developers connect to storage the same way on-premises and in the cloud (there’s a quick sketch of this below). Automated deployment with CloudFormation templates;
  • Reliable, secure – delivers industrial-strength performance, reliability & protection with Multi-AZ HA, NDU, instant snaps and data-at-rest encryption; and
  • Flexible – pay as you go consumption model to best match your needs for production and development.

[image courtesy of Pure Storage]
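
On the “consistent APIs” point, the quick sketch below shows the idea: because Cloud Block Store presents the same FlashArray management API as the physical arrays, the same client code can provision a volume on-premises or in AWS. I’ve used the purestorage Python REST client here, and the hostnames and API token are obviously placeholders.

```python
import purestorage  # FlashArray REST client ("pip install purestorage")

def provision(target: str, api_token: str) -> None:
    # Same call whether the target is a physical FlashArray or a CBS instance.
    array = purestorage.FlashArray(target, api_token=api_token)
    array.create_volume("app-data-01", "1T")
    array.invalidate_cookie()  # end the REST session

provision("flasharray.onprem.example.com", "API-TOKEN")  # on-premises array
provision("cbs.vpc.internal.example.com", "API-TOKEN")   # Cloud Block Store on AWS
```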

Architecture

At the heart of it, the architecture for CBS is not dissimilar to the FlashArray architecture. There are controllers, drives, NVRAM, and a virtual shelf.

  • EC2: CBS Controllers
  • EC2: Virtual Drives
  • Virtual Shelf: 7 Virtual drives in Spread Placement Group
  • EBS IO1: NVRAM, Write Buffer (7 total)
  • S3: Durable persistent storage
  • Instance Store: Non-Persistent Read Mirror

[image courtesy of Pure Storage]

What’s interesting, to me at least, is how they use S3 for persistent storage.
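
If you’re curious about the spread placement group piece, the snippet below shows the underlying AWS primitive. To be clear, this isn’t how you deploy CBS (Pure handles that via CloudFormation); it’s just the building block that keeps the virtual drive instances on distinct underlying hardware, which is what lets the virtual shelf ride out the loss of an individual host.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A spread placement group: instances launched into it land on distinct hardware.
ec2.create_placement_group(
    GroupName="cbs-virtual-shelf-demo",  # hypothetical name
    Strategy="spread",
)
```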

Procurement

How do you procure CBS for AWS? I’m glad you asked. There are two procurement options.

A – Pure as-a-Service

  • Offered via SLED / CLED process
  • Minimums 100TiB effective used capacity
  • Unified hybrid contracts (on-premises and CBS, or CBS only)
  • 1 year to 3 year contracts

B – AWS Marketplace

  • Direct to customer
  • Minimum 10TiB effective used capacity
  • CBS only
  • Month to month contract or 1 year contract

 

Use Cases

There are a raft of different use cases for CBS. Some of them made sense to me straight away, some of them took a little time to bounce around in my head.

Disaster Recovery

  • Production instance on-premises
  • Replicate data to public cloud
  • Fail over in DR event
  • Fail back and recover

Lift and shift

  • Production instance on-premises
  • Replicate data to public cloud
  • Run the same architecture as before
  • Run production on CBS

Dev / Test

  • Replicate data to public cloud
  • Instantiate test / dev instances in public cloud
  • Refresh test / dev periodically
  • Bring changes back on-premises
  • Snapshots are more costly and slower to restore in native AWS

ActiveCluster

  • HA within an availability zone and / or across availability zones in an AWS region (ActiveCluster needs <11ms latency)
  • No downtime when a Cloud Block Store Instance goes away or there is a zone outage
  • Pure1 Cloud Mediator Witness (simple to manage and deploy)

Migrating VMware Environments

VMware Challenges

  • AWS does not recognise VMFS
  • Replicating volumes with VMFS will not do any good

Workaround

  • Convert VMFS datastore into vVols
  • Now each volume has the Guest VM’s file system (NTFS, EXT3, etc.)
  • Replicate VMDK vVols to CBS
  • Now the volumes can be mounted to EC2 with matching OS

Note: This is for the VM’s data volumes. The VM boot volume will not be usable in AWS. The VM’s application will need to be redeployed in native AWS EC2.

VMware Cloud

VMware Challenges

  • VMware Cloud does not support external storage; it only supports vSAN

Workaround

  • Connect Guest VMs directly to CBS via iSCSI

Note: I haven’t verified this myself, and I suspect there may be other ways to do this. But in the context of Pure’s offering, it makes sense.

 

Thoughts and Further Reading

There’s been a feeling in some parts of the industry for the last 5-10 years that the rise of the public cloud providers would spell the death of the traditional storage vendor. That’s clearly not been the case, but it has been interesting to see the major storage slingers evolving their product strategies to both accommodate and leverage the cloud providers in a more effective manner. Some have used the opportunity to get themselves as close as possible to the cloud providers, without actually being in the cloud. Others have deployed virtualised versions of their offerings inside public cloud and offered users the comfort of their traditional stack, but off-premises. There’s value in these approaches, for sure. But I like the way that Pure has taken it a step further and optimised its architecture to leverage some of the features of what AWS can offer from a cloud hardware perspective.

In my opinion, the main reason you’d look to leverage something like CBS on AWS is if you have an existing investment in Pure and want to keep doing things a certain way. You’re also likely using a lot of traditional VMs in AWS and want something that can improve the performance and resilience of those workloads. CBS is certainly a great way to do this. If you’re already running a raft of cloud-native applications, it’s likely that you don’t necessarily need the features on offer from CBS, as you’re already (hopefully) using them natively. I think Pure understands this though, and isn’t pushing CBS for AWS as the silver bullet for every cloud workload.

I’m looking forward to seeing what the market uptake on this product is like. I’m also keen to crunch the numbers on running this type of solution versus the cost associated with doing something on-premises or via other means. In any case, I’m looking forward to seeing how this capability evolves over time, and I think CBS on AWS is definitely worthy of further consideration.

Random Short Take #23

Want some news? In a shorter format? And a little bit random? This listicle might be for you.

  • Remember Retrospect? They were acquired by StorCentric recently. I hadn’t thought about them in some time, but they’re still around, and celebrating their 30th anniversary. Read a little more about the history of the brand here.
  • Sometimes size does matter. This article around deduplication and block / segment size from Preston was particularly enlightening.
  • This article from Russ had some great insights into why it’s not wise to entirely rule out doing things the way service providers do just because you’re working in enterprise. I’ve had experience in both SPs and enterprise and I agree that there are things that can be learnt on both sides.
  • This is a great article from Chris Evans about the difficulties associated with managing legacy backup infrastructure.
  • The Pure Storage VM Analytics Collector is now available as an OVA.
  • If you’re thinking of updating your Mac’s operating environment, this is a fairly comprehensive review of what macOS Catalina has to offer, along with some caveats.
  • Anthony has been doing a bunch of cool stuff with Terraform recently, including using variable maps to deploy vSphere VMs. You can read more about that here.
  • Speaking of people who work at Veeam, Hal has put together a great article on orchestrating Veeam recovery activities to Azure.
  • Finally, the Brisbane VMUG meeting originally planned for Tuesday 8th has been moved to the 15th. Details here.

Pure//Accelerate 2019 – (Fairly) Full Disclosure

Disclaimer: I recently attended Pure//Accelerate 2019.  My flights, accommodation, and conference pass were paid for by Pure Storage. There is no requirement for me to blog about any of the content presented and I am not compensated by Pure Storage for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Here are my notes on gifts, etc, that I received as an attendee at Pure//Accelerate 2019. Apologies if it’s a bit dry but I’m just trying to make it clear what I received during this event to ensure that we’re all on the same page as far as what I’m being influenced by. I’m going to do this in chronological order, as that was the easiest way for me to take notes during the week. Whilst every attendee’s situation is different, I was paid by my employer to be at this event.

 

Saturday

My wife kindly dropped me at the airport. I flew Qantas economy class from BNE – LAX – AUS courtesy of Pure Storage. I had a 5-hour layover at LAX. I stopped at the Rolling Stone Bar and Grill in Terminal 7 and had a breakfast burrito. It wasn’t the best, but anything is pretty good after the smell of airplane food. When I got to Austin I was met by a driver that Pure had organised. I grabbed my suitcase and we travelled to the Fairmont Austin (paid for by Pure) in one of those big black SUVs that are favoured by many of the limousine companies.

I got presentable and then went down to the hotel bar to catch up with Alastair Cooke and his wife Tracey, Matt Leib, Gina Minks, and Leah Schoeb. I had a gin and tonic, paid for by Alastair, and then took the hotel courtesy car to Austin City Limits with Matt to see Joe Russo’s Almost Dead. It’s not the sort of gig I’d normally go to, but I appreciate live music in most forms, the crowd was really into it, and it’s always great to spend time with Matt. Matt also very kindly paid for my entry to the gig and bought me a beer there (a 16oz can of Land Shark Lager). I had a second beer and bought one for Matt too.

 

Sunday

I hadn’t really eaten since LAX, so I hit up Matt to come to lunch with me. We went for a wander downtown in Austin and ended up on 6th Street at Chupacabra Cantina y Tacqueria. I had one of the West Coast Burritos, a huge flour tortilla stuffed with refried beans, green chilli rice, jack cheese, crispy potato, lettuce, tomato, onion and chicken Tinga filling. It was delicious. I also had two Twisted X Austin Lager beers to wash it down.

In the afternoon I caught up with Matt and Chris Evans in the hotel bar. I had 3 Modelo Especial beers – these were kindly paid for by Emily Gallagher from Touchdown PR.

The Tech Field Day people all got together for dinner at Revue in the hotel. I had 3 Vista Adair Kolsch beers, some shrimp gyoza, chilli wonton dumplings, and okonomiyaki. This was paid for by Tech Field Day.

 

Monday

On Monday morning I had breakfast at the hotel. This was a buffet-style affair and I had scrambled eggs, huevos rancheros, bacon, jalapeño sausage, charcuterie, salmon and cream cheese, and coffee. This was paid for by Pure Storage. I received a gift bag at registration. This included a:

  • Pure//Accelerate cloth tote bag;
  • Rocketbook Everlast notebook;
  • “Flash Was Only The Beginning” hardcover book;
  • Porter 12 oz portable ceramic mug;
  • h2go Concord 25 oz stainless steel bottle; and
  • 340g bag of emporium medium house blend cuvée coffee.

For lunch I had beef brisket, BBQ sauce and some green salad. I also picked up a Pure FlashArray//C t-shirt during the Storage Field Day Exclusive event.

Before dinner I had a Modelo in the hotel – this was paid for by Tech Field Day. We then attended an Analysts and Influencers reception at Banger’s. I had 3 beers there (some kind of Pilsner) and a small amount of BBQ. I then made my way over to Parkside on 6th Street for an APJ event. I had 4 Austin Limits Lagers there and some brisket and macaroni and cheese. I should have smoke-bombed at that point but didn’t and ended up getting my phone swiped from a bar. Lesson learnt.

 

Tuesday

I skipped breakfast in favour of some more sleep. For lunch I had beef tacos in the Analysts area. Dinner was an Analyst and Influencer and Executive Program reception at the hotel. I had 3 Modelo beers, some dumplings, and some beef skewers. I turned in relatively early as the jet-lag was catching up with me.

 

Wednesday

For breakfast we were in the Solutions Exchange area for a private tour of the Pure setup. I had a greasy ham, cheese and egg croissant, some fruit, and 2 coffees. After the keynote I picked up some Rubrik socks.

In the afternoon I took a taxi to the Austin PD to attempt to report my phone. I then grabbed lunch with Matt Leib at P. Terry’s Burger Stand downtown. I had a hamburger and a chocolate shake. Matt paid for this. Matt then paid for a ride-sharing service to the local Apple Store where I picked up a new handset. We then took another car back to the hotel, which Matt kindly paid for.

We had dinner at Banger’s with the remaining Tech Field Day crew. I had 3 Austin Beerworks Pearl-Snap beers, boiled peanuts, chilli fries, and jalapeño sausage. It was delicious. This was paid for by Tech Field Day. I then headed to Austin City Limits for the Pure//Accelerate appreciation party. Weezer were playing, and I was lucky enough to get a photo with them (big thanks to Stephen Foskett and Armi Banaria for sorting me out!).

I had 3 Landshark Lager beers during the concert. After the show we retired to the hotel bar where I had 2 more Modelo beers before calling it a night.

 

Thursday

On Thursday morning I ran into Craig Waters and Justin Warren and joined them for a coffee at Houndstooth Coffee (I had the iced latte to try and fight off the heat). This was paid for by Craig. We then headed to Fareground. I had a burger with bacon and cheese from Contigo. It was delicious. This was also paid for by Craig.

Returning to the hotel, I bumped into my old mentor Andrew Fisher and he bought me a few Modelos in the bar while re-booking his flights due to some severe weather issues in Houston. I then took a Pure-provided car service to the airport and made my way home to Brisbane via LAX.

Big thanks to Pure Storage for having me over for the week, and big thanks to everyone who spent time with me at the event (and after hours) – it’s a big part of why I keep coming back to these types of events.

Random Short Take #22

Oh look, another semi-regular listicle of random news items that might be of some interest.

  • I was at Pure Storage’s //Accelerate conference last week, and heard a lot of interesting news. This piece from Chris M. Evans on FlashArray//C was particularly insightful.
  • Storage Field Day 18 was a little while ago, but that doesn’t mean that the things that were presented there are no longer of interest. Stephen Foskett wrote a great piece on IBM’s approach to data protection with Spectrum Protect Plus that’s worth a read.
  • Speaking of data protection, it’s not just for big computers. Preston wrote a great article on the iOS recovery process that you can read here. As someone who had to recently recover my phone, I agree entirely with the idea that re-downloading apps from the app store is not a recovery process.
  • NetApp were recently named a leader in the Gartner Magic Quadrant for Primary Storage. Say what you will about the MQ, a lot of folks are still reading this report and using it to help drive their decision-making activities. You can grab a copy of the report from NetApp here. Speaking of NetApp, I’m happy to announce that I’m now a member of the NetApp A-Team. I’m looking forward to doing a lot more with NetApp in terms of both my day job and the blog.
  • Tom has been on a roll lately, and this article on IT hero culture, and this one on celebrity keynote speakers, both made for great reading.
  • VMworld US was a little while ago, but Anthony‘s wrap-up post had some great content, particularly if you’re working a lot with Veeam.
  • WekaIO have just announced some work they’re doing with the Aiden Lab at the Baylor College of Medicine that looks pretty cool.
  • Speaking of analyst firms, this article from Justin over at Forbes brought up some good points about these reports and how some of them are delivered.