Here are some links to some random news items and other content that I recently found interesting. You might find them interesting too. Episode 19 – let’s get tropical! It’s all happening.
I seem to link to Alastair’s blog a lot. That’s mainly because he’s writing about things that interest me, like this article on data governance and data protection. Plus he’s a good bloke.
Speaking of data protection, Chris M. Evans has been writing some interesting articles lately on things like backup as a service. Having worked in the service provider space for part of my career, I wholeheartedly agree that it can be a “leap of faith” on the part of the customer to adopt these kinds of services.
This podcast episode from W. Curtis Preston was well worth the listen. I’m constantly fascinated by the challenges presented to infrastructure in media and entertainment environments, particularly when it comes to data protection.
This article from Tom Hollingsworth was honest and probably cut a little too close to the bone for a lot of readers. There are a lot of bad habits that we develop in our jobs, whether we’re coding, running infrastructure, or flipping burgers. The key is to identify those behaviours and work to address them where possible.
Over at SimplyGeek.co.uk, Gavin has been posting a number of Ansible-related articles, including this one on automating vSphere VM and OVA deployments. A number of folks in the industry talk a tough game when it comes to automation, and it’s nice to see Gavin putting it on wax and setting a great example.
The Mark Of Cain have announced a national tour to commemorate the 30th anniversary of their Battlesick album. Unfortunately I may not be in the country when they’re playing in my neck of the woods, but if you’re in Australia you can find out more information here.
Disclaimer: I recently attended Tech Field Day 19. My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event. Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.
This is a quick post to say thanks once again to Stephen, Ken, and Ben, and the presenters at Tech Field Day 19. I had a super fun and educational time. For easy reference, here’s a list of the posts I did covering the events (they may not match the order of the presentations).
Also, here are a number of links to posts by my fellow delegates (in no particular order). They’re all very smart people, and you should check out their stuff, particularly if you haven’t before. I’ll attempt to keep this updated as more posts are published. But if it gets stale, the Tech Field Day 19 landing page will have updated links.
Finally, thanks again to Stephen and the team at Gestalt IT for making it all happen. It was an educational and enjoyable week and I really valued the opportunity I was given to attend. Here’s a photo of the Tech Field Day 19 delegates.
Disclaimer: I recently attended Tech Field Day 19. My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event. Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.
Automation Anywhere recently presented at Tech Field Day 19. You can see videos of their presentation here, and download my rough notes from here.
Robotic What?
Robotic Process Automation (RPA) is the new hotness in enterprise software. Automation Anywhere raised over $550 million in funding in the last 12 months. That’s a lot of money. But what is RPA? It’s a way to develop workflows so that business processes can be automated. One of the cool things, though, is that it can develop these automation actions by observing the user perform the actions in the GUI, and then repeating those actions. There’s potential to make this more accessible to people who aren’t necessarily software development types.
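To make the observe-and-repeat idea a bit more concrete, here’s a toy sketch of what a recorded GUI workflow might look like once it’s been captured and is ready to replay. This is not Automation Anywhere’s product or format, just an illustration using the pyautogui library, and the coordinates and field values are made up.

```python
# A toy "record and replay" sketch, not Automation Anywhere's actual engine.
# Assumes pyautogui is installed; coordinates and text are hypothetical.
import time
import pyautogui

# A recording of a user working through a GUI form. A real RPA tool would
# capture something like this by watching the user perform the task.
recorded_steps = [
    {"action": "click", "x": 480, "y": 320},    # focus the customer ID field
    {"action": "type", "text": "CUST-00042"},   # enter the customer ID
    {"action": "click", "x": 480, "y": 380},    # press the Search button
]

def replay(steps, delay=0.5):
    """Replay a recorded GUI workflow, one step at a time."""
    for step in steps:
        if step["action"] == "click":
            pyautogui.click(step["x"], step["y"])
        elif step["action"] == "type":
            pyautogui.typewrite(step["text"], interval=0.05)
        time.sleep(delay)  # crude pacing between steps

replay(recorded_steps)
```

The real products obviously do a lot more (object recognition rather than raw coordinates, error handling, orchestration), but the core loop of capturing user actions and replaying them is the bit that makes this accessible to non-developers.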
Automation Anywhere started back in 2003, and the idea was to automate any application. Automation Anywhere want to “democratise automation”, and believe that “anything that can be automated, should be automated”. The real power of this kind of approach is that it, potentially, allows you to do things you never did before. Automation Anywhere want us to “imagine a world where every job has a digital assistant working side by side, allowing people to do what they do best”.
[image courtesy of Automation Anywhere]
Humans are the Resource
This whole automating all the things mantra has been around for some time, and the idea has always been that we’re “[m]oving humans up the value chain”. Not only that, but RPA isn’t about digital transformation in the sense that a lot of companies see it currently, i.e. as a way to change the way they do things to better leverage digital tools. What’s interesting is that RPA is more focused on automating what you already have. You can then decide whether the process is optimal or whether it should be changed. I like this idea, if only because of the number of times I’ve witnessed small and large companies go through “transformations”, only to realise that what they were doing previously was pretty good, and they’d just made a few mistakes in terms of manual process creeping in.
Automation Anywhere told us that some people start with “I know that my job cannot be automated”, but it turns out that about 80% of their job is based on business tools, and a lack of automation is holding them back from thinking strategically. We’ve seen this problem throughout the various industrial revolutions that have occurred, and people have invariably argued against steam-powered devices, and factory lines, and self-healing infrastructure.
Thoughts and Further Reading
Automation is a funny thing. It’s often sold to people as a means to give them back time in their day to do “higher order” activities within the company. That message has been around for as long as I’ve been in IT. There’s an idea that every worker is capable of doing things that could provide more value to the company, if only they had more time. Sometimes, though, I think some folks are just good at breaking rocks. They don’t want to do anything else. They may not really be capable of doing anything else. And change is hard, and is going to be hard for them in particular. I’m not anticipating that RPA will take over every single aspect of the workplace, but there’s certainly plenty of scope for it to have a big presence in the modern enterprise. So much time is wasted on process that should really be automated, and automating it gives you back a lot of time in your day. And it also provides the consistency that human resources lack.
As Automation Anywhere pointed out in their presentation, “every piece of software in the world changes how we work, but rarely do you have the opportunity to change what the work is”. And that’s kind of the point, I think. We’re so tied to doing things in a business a certain way, and oftentimes we fill the gaps in workflows with people because the technology can’t keep up with what we’re trying to do. But if you can introduce tools into the business that can help you move past those shortfalls in workflow, and identify ways to improve those workflows, that could really be something interesting. I don’t know if RPA will solve all of our problems overnight, because humans are unfortunately still heavily involved in the decision making process inside the enterprise, but it seems like there’s scope to do some pretty cool stuff with it.
If you’d like to read some articles that don’t just ramble on, check out Adam’s article here, Jim’s view here, and Liselotte’s article here. Marina posted a nice introduction to Automation Anywhere here, and Scott’s impression of Automation Anywhere’s security approach made for interesting reading. There’s a wealth of information on the Automation Anywhere website, and a community edition you can play with too.
Disclaimer: I recently attended Tech Field Day 19. My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event. Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.
Druva recently presented at Tech Field Day 19. You can see videos of their presentation here, and download my rough notes from here. Here’s a photo of Jaspreet Singh kicking things off.
Let’s Talk About You
What do people want in a backup system?
I’ll tell you what we want. What we really, really want. Fewer Spice Girls earworms. And a data protection service. It seems simplistic, but it’s true. A lot of organisations are tired of being IT organisations, and they just want to consume services from companies that are IT organisations. That’s not a copout. They want to make money doing the things they’re good at. It’s one of the big reasons public cloud has proven so popular. Druva offer a service, and are positioning themselves as being to backups what Salesforce is to CRM. The key selling point is that they can do data protection simpler, faster, cheaper, and safer. And you get the two big benefits of SaaS:
There’s nothing to maintain; and
New features are made available immediately.
Am I The Ideal Druva Customer?
Are you a good fit though? If you’re running modern / virtualised workloads, Druva want to talk to you. To wit, if you find yourself in one of these categories you should be okay:
“Versatilist” Users;
Cloud focus or initiative;
Hybrid cloud environment;
Distributed workloads, including laptops;
SaaS adopter (SFDC, O365, G Suite); and
Moving away from legacy Unix and apps.
The more distributed your company is, the better Druva looks.
Who’s not a good fit for Druva though? Enterprises that:
Must have an on-premises backup system;
Have no desire to leverage cloud; and
Want a backup system for legacy OS / apps.
Poor enterprises, missing out again.
Challenges Solved by Druva
Curtis knows a bit about data protection, and he’s been around for a while now, so he remembers when not everything was peaches and cream in the data protection world. He talked about the various trends in data protection over the years and used a table of challenges as an anchor point. The gist of it is that a solution such as the one Druva has doesn’t face quite as many of those challenges as the more “traditional” data protection systems we’ve been using for the last 20-plus years (yes, and longer still, I know).
The challenges included:
Designing, maintaining, and refreshing physical backup servers and storage;
Over-provisioning compute and storage for growth and variable load;
Not being easy to scale;
Unexpected and variable costs;
Massive capital expenditures;
The first large backup; and
Any large restore.
Every vendor can look good when you take tape out of consideration. Tape has an awful lot of advantages in terms of capacity and economy, but the execution can often be a real pain. Druva also compete pretty well with the “hyper-converged” backup vendors, although I think those vendors get a bad rap for a focus on hardware that isn’t necessarily as much of a problem as some people think. The real killer feature for Druva is the cloud-native architecture, and the SaaS story in general.
Thoughts and Further Reading
It’s no secret that I’ve been a fan of Curtis for years, so when he moved to Druva I was intrigued and wanted to hear more. But Druva isn’t just Curtis. There are a whole bunch of people at the company who know cloud, and data protection, and have managed to put them together into a solution that makes a lot of sense. And I like what I’ve seen thus far. There’s a really good story here, particularly if you’re all in on cloud, and running relatively modern applications. The heritage in endpoint protection has helped them overcome some obstacles that other vendors haven’t had to deal with yet. They’re also willing to admit that not everything is perfect, particularly when it comes to getting that first large backup done. They also believe that “[w]ithin the limits of physics they can scale to meet the needs of most customers”. You’re not going to be able to achieve RPO 0 and RTO 0 with Druva. But that’s what things like replication are for. What they do offer, however, is an RTO of minutes, not hours. A few other things they don’t do include VM live mount and native support for Azure and GCP.
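On the RPO/RTO point, it’s worth remembering that the numbers are mostly simple arithmetic. Here’s a rough back-of-the-envelope sketch; the figures are made up for illustration and aren’t Druva’s.

```python
# Back-of-the-envelope RPO/RTO arithmetic for a periodic backup service.
# All numbers are illustrative, not Druva specifications.
backup_interval_hours = 24           # how often a backup runs
dataset_tb = 10                      # size of the data set you might restore
restore_throughput_tb_per_hour = 2   # assumed effective restore speed

# Worst case, you lose everything written since the last successful backup.
worst_case_rpo_hours = backup_interval_hours

# A full restore takes roughly dataset size divided by restore throughput.
full_restore_rto_hours = dataset_tb / restore_throughput_tb_per_hour

print(f"Worst-case RPO: {worst_case_rpo_hours} hours")
print(f"Full restore RTO: {full_restore_rto_hours:.1f} hours")
```

Which is why anything approaching RPO 0 and RTO 0 needs replication or continuous data protection rather than periodic backups, regardless of the vendor.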
What Druva do do well is understand that customers have requirements that can be satisfied through the use of protection data. They also understand the real operational value (in terms of resiliency and reduced spend) that can be had with SaaS-based offerings. We all talk a tough game when it comes to buying what we think is the absolute best solution to protect our data, and rightly so. A business’s data is (hopefully) one of its most critical assets, and we should do anything we can to protect it. Druva are as dedicated as the next company to that philosophy, but they’ve also realised that the average business is under constant pressure to reduce costs wherever possible. Now you don’t just get the benefits of running your applications in the cloud – you can also get the benefit of protecting them in the cloud too.
Tape was hard to do well, and many of us have horror stories about things going wrong. Cloud can be hard to do well too, and there are plenty of stories of cloud going horribly wrong. Druva isn’t magic, but it does help take away a lot of the complexity that’s frequently been attached to protecting cloud-native workloads.
Disclaimer: I recently attended Tech Field Day 19. My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event. Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.
VMware recently presented at Tech Field Day 19. You can see videos of their presentation here, and download my rough notes from here.
Operations, And No Operators
VMware has a pretty comprehensive suite of management tools that you can use to manage your VMware cloud products, including vRealize Operations.
One of the keys to building successful management and monitoring tools is delivering the ability to perform activities in an autonomous fashion. To wit, there are parts of your infrastructure that you want to be “self-driving”. Taruna Gandhi talked about the “4 Tenets of self-driving operations”. These are:
Continuous Performance Optimisation – Assure application performance with atomic workload placement and balancing workloads based on business and operational intent
Efficient Capacity Management – Run infrastructure like a public cloud – optimal densification, proactive planning, and procurement
Intelligent Remediation – Predict, prevent, and troubleshoot across SDDC and multiple clouds, from apps to infrastructure
Integrated Compliance – Reduce risk and enforce IT and regulatory standards with integrated compliance and automated remediation
The idea behind tools like vRealize Operations is that you can run your VMware-based infrastructure in an autonomous fashion.
It’s A Small Thing, But It’s Really Quite Useful
One small thing that VMware brought up was the ability to use tags for licensing enforcement and VM placement using DRS. You can read about how to do that here. I think the capability was first introduced in vROps 6.7. Why would you need to move workloads around for licensing enforcement? Just five years ago I was working with enterprise environments that had to limit the number of CPU sockets exposed to various operating systems (when virtualised) or line of business applications. The way to combat the requirement was to deploy dedicated clusters of compute for particular software packages, which is pretty stupid when it comes to getting value from virtualisation. Nowadays the cluster is no longer the barrier to VM mobility, so you can move workloads around more easily. The general feeling on the Internet might be that the likes of Microsoft and Oracle have made these kinds of workarounds harder to do (and stay compliant), but there are still plenty of smaller software vendors that have odd requirements when it comes to the number of sockets consumed in virtual environments. Being able to leverage tags sounds like just the sort of thing that we’ve talked about for years in terms of operational overheads that shouldn’t be overheads. It strikes me as something that many enterprise customers could be interested in. As VMware pointed out though, some of the enterprises needing this capability ironically may not have upgraded to the required version yet.
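For reference, the DRS construct sitting underneath that kind of tag-based placement is just a VM group, a host group, and a rule tying them together. Here’s a minimal pyVmomi sketch of those objects; it’s not the vROps tag-based workflow itself, and the vCenter address, credentials, and object names are all hypothetical.

```python
# A minimal sketch of DRS VM/host groups and a "must run on" rule via pyVmomi.
# Not the vROps tag-based workflow itself; names and endpoints are hypothetical.
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret")  # SSL handling omitted
content = si.RetrieveContent()

def find_all(vimtype):
    """Return all managed objects of a given type from the inventory."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    return list(view.view)

cluster = next(c for c in find_all(vim.ClusterComputeResource) if c.name == "Prod-Cluster")
licensed_vms = [v for v in find_all(vim.VirtualMachine) if v.name.startswith("oracle-")]
licensed_hosts = [h for h in find_all(vim.HostSystem)
                  if h.name in ("esx01.example.com", "esx02.example.com")]

spec = vim.cluster.ConfigSpecEx()
spec.groupSpec = [
    vim.cluster.GroupSpec(operation="add", info=vim.cluster.VmGroup(name="oracle-vms", vm=licensed_vms)),
    vim.cluster.GroupSpec(operation="add", info=vim.cluster.HostGroup(name="oracle-hosts", host=licensed_hosts)),
]
spec.rulesSpec = [
    vim.cluster.RuleSpec(operation="add", info=vim.cluster.VmHostRuleInfo(
        name="keep-oracle-on-licensed-hosts",
        enabled=True,
        mandatory=True,                       # "must run on" rather than "should run on"
        vmGroupName="oracle-vms",
        affineHostGroupName="oracle-hosts",
    )),
]

# modify=True merges this change with the cluster's existing configuration.
task = cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
```

The point of driving it from tags, as described above, is that you don’t have to maintain groups like this by hand as workloads come and go.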
Thoughts and Further Reading
I’m the first to admit that I haven’t spent nearly enough time keeping up to date on what VMware’s been delivering with the vRealize Operations product. I used it early on and then moved into roles where it became someone else’s problem. So it was nice to get up to speed on some of the capabilities they’ve added to the product in the past few years. It’s my opinion that if you don’t have to do certain operations in your environment, that’s a good thing. Infrastructure operations is a hectic business at the best of times, and the requirement to intervene in a manual way is not just potentially a burden on your workforce (particularly when something goes awry at 3 in the morning), it’s also an opportunity for other things to go wrong. The good thing about automating the management of infrastructure is that things get done in a consistent fashion. And there are, generally speaking, fewer opportunities for human error to creep in. This does require a certain amount of intelligence to be built into the platform, but VMware seem to have a pretty good grasp of what’s happening in the average vSphere environment, and they’ve coupled this with many years of field experience to build a platform that can get you out of a spot before you get in one.
vRealize Operations is more than just a glorified dashboard application with some cool traffic lights that keep management happy. If you’re running any type of reasonably sized virtual infrastructure, and you’re not leveraging vROps, I think you’re making things unnecessarily difficult for your operational staff. Obviously, vROps isn’t some silver bullet when it comes to IT operations, but it has a lot of power under the hood, and I think there’s some great potential that can be leveraged in the platform. You still need people to do stuff, but with tools like this you won’t need them to do quite as much of that tedious stuff. I’d also recommend you check out the other parts of VMware’s presentation at Tech Field Day 19, because they covered a lot of really cool stuff in terms of their vision for cloud management tools.
Disclaimer: I recently attended Tech Field Day 19. My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event. Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.
NetApp recently presented at Tech Field Day 19. You can see videos of their presentation here, and download my rough notes from here.
Management or Monitoring?
James Holden (Director, Cloud Analytics) delivered what I think was a great presentation on NetApp Cloud Insights. Early on he made the comment that “[w]e’re as read-only as we possibly can be. Being actionable puts you in a conversation where you’re doing something with the infrastructure that may not be appropriate.” It’s a comment that resonated with me, particularly as I’ve been on both sides of the infrastructure management and monitoring fence (yes, I know, it sounds like a weird fence – just go with it). I remember vividly providing feedback to vendors that I wanted their fancy single pane of glass monitoring solution to give me more management capabilities as well. And while they were at it, it would be great if they could develop software that would automagically fix issues in my environment as they arose.
But do you want your cloud monitoring tools to really have that much control over your environment? Sure, there’s a lot of benefit to be had deploying solutions that can reduce the stick time required to keep things running smoothly, but I also like the idea that the software won’t just dive in and fix what it perceives as errors in an environment based on a bunch of pre-canned constraints that have been developed by people who may or may not always have a good grip on what’s really happening in these types of environments.
Keep Your Cloud Happy
So what can you do with Cloud Insights? As it turns out, all kinds of stuff, including cost optimisation. It doesn’t always sound that cool, but customers are frequently concerned with the cost of their cloud investment. What they get with Cloud Insights is:
Understanding
What did the last few months cost?
What’s the current month’s running cost?
What’s the cost broken down by AWS service, account, and region?
Does it meet the budget?
Analysis
Real-time cost analysis to alert on a sudden rise in cost (there’s a rough sketch of this kind of check below)
It’s all wrapped up in an alarmingly simple SaaS solution, meaning quick deployment and faster time to value.
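Cloud Insights is a SaaS product, so you’re not writing this yourself, but as a rough sketch of the kind of cost-spike check it automates, here’s what it might look like if you did it by hand against the AWS Cost Explorer API. It assumes boto3 and working AWS credentials, and the spike threshold is arbitrary.

```python
# A rough, hand-rolled version of a "sudden rise in cost" alert using the AWS
# Cost Explorer API. Purely illustrative; Cloud Insights does this (and more)
# as a service. Assumes boto3 is installed and AWS credentials are configured.
import boto3
from datetime import date, timedelta

ce = boto3.client("ce")  # Cost Explorer

def daily_costs(days=14):
    """Return (day, unblended cost in USD) pairs for the last `days` days."""
    end = date.today()
    start = end - timedelta(days=days)
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": start.isoformat(), "End": end.isoformat()},
        Granularity="DAILY",
        Metrics=["UnblendedCost"],
    )
    return [
        (r["TimePeriod"]["Start"], float(r["Total"]["UnblendedCost"]["Amount"]))
        for r in resp["ResultsByTime"]
    ]

def check_for_spike(costs, factor=1.5):
    """Flag the most recent day if it exceeds the trailing average by `factor`."""
    *history, (latest_day, latest_cost) = costs
    average = sum(cost for _, cost in history) / len(history)
    if latest_cost > factor * average:
        print(f"Cost spike on {latest_day}: ${latest_cost:,.2f} vs ~${average:,.2f}/day")

check_for_spike(daily_costs())
```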
The Full Picture
One of my favourite bits of the solution though is that NetApp are striving to give you access to the full picture:
There are application services running in the environment; and
There are operating systems and hardware underneath.
“The world is not just VMs on compute with backend storage”, and NetApp have worked hard to ensure that the likes of microservices are also supported.
Thoughts and Further Reading
One of the recurring themes of Tech Field Day 19 was that of management and monitoring. When you really dig into the subject, every vendor has a different take on what can be achieved through software. And it’s clear that every customer also has an opinion on what they want to achieve with their monitoring and management solutions. Some folks are quite keen for their monitoring solutions to take action as events arise to resolve infrastructure issues. Some people just want to be alerted about the problem and have a human intervene. And some enterprises just want an easy way to report to their C-level what they’re spending their money on. With all of these competing requirements, it’s easy to see how I’ve ended up working in enterprises running 10 different solutions to monitor infrastructure. They also had little idea what the money was being spent on, and had a large team of operations staff dealing with issues that weren’t always reported by the tools, or they got buried in someone’s inbox.
IT operations has been a hard nut to crack for a long time, and it’s not always the fault of the software vendors that it isn’t improving. It’s not just about generating tonnes of messages that no-one will read. It’s about doing something with the data that people can derive value from. That said, I think NetApp’s solution is a solid attempt at providing a useful platform to deliver on some pretty important requirements for the modern enterprise. I really like the holistic view they’ve taken when it comes to monitoring all aspects of the infrastructure, and the insights they can deliver should prove invaluable to organisations struggling with the myriad of moving parts that make up their (private and public) cloud footprint. If you’d like to know more, you can access the data sheet here, and the documentation is hosted here.
Disclaimer: I recently attended Tech Field Day 19. My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event. Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.
Ixia recently presented at Tech Field Day 19. You can see videos of their presentation here, and download my rough notes from here.
Overview
Recep Ozdag, VP of Product Management at Ixia, presented first on Ixia’s company overview and history. Here’s a bad photo of Recep.
Ixia were acquired by Keysight in 2017, and the parent company’s heritage goes back an awfully long time:
1939 – 1998 – The HP years
1999 – 2013 – The Agilent Technologies years
2014 – Keysight Technologies launched
In February 2019, they launched the Vision E1S with Hawkeye, and in June 2019 the Vision X was announced.
Ixia Visibility Solutions
So Ixia specialise in “network visibility”, but why is that important? What’s the real thing you need to know about on your network? Is it performance? That’s important, sure. But the really big thing that keeps network folks awake at night is security. It’s constantly changing and there’s always a lot of ground to cover. To wit, you have:
BYOD – uncontrolled devices on the network;
Encryption – hidden traffic means hidden threats;
IoT – billions more endpoints to protect; and
Cloud – secure data on or off-premises.
According to Ixia, every day there are approximately 5 million IoT devices being connected to networks. Some of these cheap security cameras are even sitting on shelves pre-installed with malware. Happy days! With better visibility you have the opportunity to enhance your existing investments. Within a bank, for example, there are 15 different tools doing stuff and they all want to see a different piece of the data.
So how does Ixia help to improve visibility inside your network? Network packet brokers.
[image courtesy of Ixia]
And what can these things do? All kinds of cool stuff, including “Context Aware” Data Processing:
Deduplication (see the sketch after this list)
Packet trimming
Adaptive packet filtering
Data masking
GRE tunnel termination
SSL decryption
Geo location
NetFlow generation
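To give a feel for one of those functions, here’s a toy illustration of the deduplication step: when the same packet is seen at multiple tap points, a packet broker forwards it to the monitoring tools only once. This is just a sketch using scapy against a hypothetical capture file, not how Ixia implement it in hardware.

```python
# A toy illustration of packet deduplication, the kind of processing a network
# packet broker does before forwarding traffic to monitoring tools.
# Assumes scapy is installed; "capture.pcap" is a hypothetical capture file.
import hashlib
from scapy.all import rdpcap, raw

def dedupe(packets):
    """Yield each unique packet once, keyed on a hash of its raw bytes."""
    seen = set()
    for pkt in packets:
        digest = hashlib.sha256(raw(pkt)).hexdigest()
        if digest not in seen:
            seen.add(digest)
            yield pkt

packets = rdpcap("capture.pcap")
unique = list(dedupe(packets))
print(f"{len(packets)} packets in, {len(unique)} forwarded after deduplication")
```

Real packet brokers do this at line rate, typically within a time window and while ignoring fields that legitimately differ between copies, which is exactly the sort of work you want done in dedicated hardware rather than on your monitoring tools.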
The Struggle Is Real
As I mentioned before, securing your network can be a challenge, and every day things are changing and new threats are popping up. Keeping up with all of this stuff is a struggle. You’re looking at challenges with:
DDoS
SSL and IPsec
Data leakage
Advanced persistent threats
Malware and vulnerabilities
BYOD
Enter the Vision X
The Vision X is a network packet broker delivered via a modular platform. You can make it do anything you want it to do today, and add functionality as it’s developed.
High-density
2 Terabits/sec per unit
60 ports of 200Gb
108 ports of 50Gb
76 ports of 40Gb
108 ports of 10Gb
108 ports of 25Gb
High availability
5 redundant, hot-swappable fans
4 redundant, hot-swappable power supplies
6.4TB per second switching capacity
2Tb per second of PacketStack
NEBS 3 certification
Out of band and inline
PacketStack – intelligent packet filtering, manipulation and transport
Deduplication
Data masking
Time-stamping
Protocol trimming
Header stripping
GRE tunneling
NetStack – robust filtering, aggregation, replication, and more
Three stages of filtering
Dynamic filter compiler
Aggregation
Replication
Load balancing
VLAN tagging
But The Edge!
Ixia also have the Vision E1S (the E stands for Edge). As Ixia pointed out during their presentation, a lot of customer data doesn’t always traverse to the cloud or DC – it stays local. “If you want to monitor something – you monitor where the data is”.
Thoughts And Further Reading
One of my favourite things about attending Tech Field Day events is that I hear from companies that I don’t deal with on a daily basis. As anyone who’s worked with me can attest, my networking chops aren’t great at all. So hearing about things like network packet brokers has been really interesting.
One of the biggest challenges in both enterprise and service provider environments is visibility into what’s happening at various levels of infrastructure – be it storage, compute, network or application. Tools like the ones offered by Ixia seem to do a pretty comprehensive job of ensuring that visibility is not the reason that you don’t know what’s going on in your network. I was intrigued by the security theme of the presentation, and I agree wholeheartedly that security concerns should be at the forefront of everything we do from an infrastructure perspective. Managing your critical infrastructure isn’t just knowing about what’s happening in your environment, but also being able to keep up with threats as they arise. Network packet brokers don’t automagically make your environment more secure, nor do they increase your security posture as new threats arise. That said, the kind of visibility you’ll get with these kinds of solutions takes away the concern that you can’t see what’s going on.
Monitoring and visibility solutions come in all shapes and sizes, and they can make a system administrator’s life a lot simpler or add to the noise in the environment. Given that almost all infrastructure depends on network connectivity at some point, and network connectivity can have such a big impact on the end user’s ability to do what they need to do to engage in their core business activities, it makes a lot of sense to look at solutions like network packet brokers to get a deeper understanding of what’s going on in any particular environment.
Ixia’s range of products seems to do a pretty good job of covering both the core DC and edge workload requirements (along with cloud), and coupled with their rich history in network visibility, they’re delivering a good story when it comes to improving visibility within your environment. If you’re struggling to understand what your East-West traffic looks like, or what your applications are doing, or if someone’s been silly and plonked a malware-ridden security camera in their office, you’d do well to check out what Ixia has to offer. For another view, check out Wes’s take on Ixia’s portfolio here.
Disclaimer: I recently attended Tech Field Day 19. My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event. Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.
Here are my notes on gifts, etc, that I received as a conference attendee at Tech Field Day 19. This is by no stretch an interesting post from a technical perspective, but it’s a way for me to track and publicly disclose what I get and how it looks when I write about various things. I’m going to do this in chronological order, as that was the easiest way for me to take notes during the week.
Sunday
My wife kindly dropped me at the domestic airport so I could make my way from BNE to SFO via SYD. I had ham and cheese in the Qantas Club before my flight. I flew Qantas economy class to SFO. The flights were paid for by Tech Field Day. Plane food was consumed on the flight. It was a generally good experience, lack of sleep notwithstanding. I stayed at the Plaza Suites Hotel. This was covered by Tech Field Day as well. I made my way to a friend’s house to spend some time before the event.
On Sunday evening I was fortunate to have dinner at Georgiana Comsa’s house with her and some friends. I’m always appreciative of good food and good conversation, and it was great to spend some time with them (jet lag notwithstanding).
Tuesday
On Tuesday I arrived at the hotel in the afternoon and was provided with a bag of various snacks by the Tech Field Day team. I had 3 Firestone Lager beers at the hotel bar. We had dinner at the hotel, consisting of:
Panzanella, with plums, cucumber, nduja vinaigrette, burrata, and brioche
Gazpacho, with shoyu-marinated heirloom tomatoes, and avocado creme fraiche
Artichokes, consisting of Monterey grilled artichoke halves, and mint-chutney mayo
Bavette, with 100% grass-fed beef from Marin Sun Farms, and pistachio leek chimichurri
Chicken, with mixed nut puree, summer squash, and harissa
Asparagus, à la plancha, with Romesco sauce
Homemade gelato, in some kind of banana and miso flavour.
I also had a Firestone lager and some pinot noir. It was all very delicious. I received a very cool wooden sign from Alastair Cooke as part of the Yankee gift swap. Emre also gave each of us some real Turkish Delight.
Wednesday
Wednesday morning was breakfast at the hotel, consisting of scrambled eggs, sausage, pancake, bacon, and coffee. During the pre-TFD delegate meeting, I was given some OpenTechCast, VirtualBonzo.com, and TFD stickers.
During the Ixia session I had a coffee. At NetApp I had 2 still spring waters. Lunch at NetApp was an antioxidant salad with feta cheese and pomegranate vinaigrette, braised short ribs with chilli citrus marinade, grilled salmon with feta and olive, and grilled asparagus. I also had a chocolate chip cookie after lunch. We had the dinner / reception at Faz Restaurant in San Jose. There was a variety of finger food on offer, including Mediterranean Bruschetta, black sesame encrusted Ahi Tuna with wasabi, vegetarian spring rolls with teriyaki dipping sauce, crab cakes with chipotle aioli, and fire-roasted beef mini-kabobs. I had 3 Modelo Especial beers, and 2 Firestone lagers at the hotel.
Thursday
Thursday morning was breakfast at the hotel, consisting of scrambled eggs, sausage, pancake, bacon, and coffee. At Automation Anywhere I grabbed a bottle of water before our session. Automation Anywhere also provided us with lunch from Dish Dash. I had a chicken shawarma wrap, tabouli, leafy salad, rice, falafel, and water. They also gave us a branded water container and phone pop socket.
At Druva we were given some personalised gifts by W. Curtis Preston. This was the first time I’ve had a particular tweet about a dream become a reality.
Thanks Curtis! (and thanks for signing my copy of your book too!)
We were also each given a Holy Stone HS160 drone and personalised coffee mug. I really like when sponsors take the time to find out a little more about each of the delegates. After the session we wandered over to Philz Coffee. I had a large iced Mocha Tesora with dark chocolate, caramel, and cocoa. For dinner we went to River Rock Taproom. I had 2 Russian River Brewing STS Pils beers from the tap (there are about 40 different beers to choose from). We also had a variety of snacks, including:
Bruschetta, with tomatoes, herbs, onions, garlic, balsamic vinegar, olive oil, parmesan cheese on toasted baguette slices
Sliders, consisting of creamy habanero beef with caramelised onions and cheddar cheese
Garlic French Fries
Buffalo wings and creamy habanero
Popcorn shrimp served with orange chilli garlic
Candied bacon with caramelised steakhouse bacon served on toasted bread
By that stage jet lag had really caught up with me, so I retired to my room for an early night.
Friday
Friday morning was breakfast at the hotel, consisting of scrambled eggs, sausage, pancake, bacon, and coffee. At the VMware office I had a coffee, and picked up a vROps ninja sticker, Indy VMUG sticker, and VMware Cloud Assembly sticker and key ring. We had lunch at VMware. It was Mexican, and I had a tortilla, rice, refried beans, grated cheese, guacamole, sour cream and salsa, and some water.
At the finish of proceedings some delegates made their way to the airport, and the rest of us hung around for one last dinner together. I had a Firestone lager at the hotel. We then walked to Faultline Brewing Company for dinner. I had 2 Kolsch beers, and the Brewhouse Bacon Cheeseburger, consisting of an 8 oz. Angus beef patty, sharp cheddar cheese, applewood-smoked bacon, lettuce, tomatoes, red onion, roasted garlic aioli, a brioche bun, and kettle chips. I then had 3 Firestone lagers at the hotel bar.
Saturday
I was staying a little longer in the Bay Area, so on Saturday morning I had breakfast at the hotel, consisting of scrambled eggs, sausage, pancake, bacon, and coffee. I then took a car to a friend’s house in the Bay Area, courtesy of Tech Field Day. It was a great week. Thanks again to the Tech Field Day team for having me, thanks to the other delegates for being super nice and smart, and thanks to the presenters for some really educational and engaging sessions.
Disclaimer: I recently attended Tech Field Day 19. My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event. Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.
This is just a quick post to share some thoughts on day zero at Tech Field Day 19. Thanks again Stephen and the team at Gestalt IT for having me back, making sure everything is running according to plan and for just being really very decent people. I’ve really enjoyed catching up with the people I’ve met before and meeting the new delegates. Look out for some posts related to the Tech Field Day sessions in the next few weeks. And if you’re in a useful timezone, check out the live streams from the event here, or the recordings afterwards.
Here’s the rough schedule for the next three days (all times are ‘Merican Pacific).
Here’s some good news for you. I’ll be heading to the US in late June for my first Tech Field Day event – Tech Field Day 19 (as opposed to the Storage Field Day events I’ve attended previously). If you haven’t heard of the very excellent Tech Field Day events, you should check them out. I’m looking forward to a little time travel and spending time with some really smart people for a few days. It’s also worth checking back on the Tech Field Day 19 website during the event (June 26 – 28) as there’ll be video streaming and updated links to additional content. You can also see the list of delegates and event-related articles that have been published.
I think it’s a great line-up of both delegates and presenting companies this time around. (If more companies are added to the agenda I’ll update this).
I’d like to publicly thank in advance the nice folks from Tech Field Day who’ve seen fit to have me back, as well as my employer for letting me take time off to attend these events. Also big thanks to the companies presenting. It’s going to be a lot of fun. Seriously.