Random Short Take #51

Welcome to Random Short Take #51. A few players have worn 51 in the NBA, including Lawrence Funderburke (I remember the Ohio State team wearing grey Nikes on TV and thinking that was a really cool sneaker colour – something I haven’t been able to shake more than 25 years later). My pick is Boban Marjanović though. Let’s get random.

  • Folks don’t seem to spend much time making sure the fundamentals are sound, particularly when it comes to security. This article from Jess provides a handy list of things you should be thinking about, and doing, when it comes to securing your information systems. As she points out, it’s just a starting point, but I think it should be seen as a bare minimum / entry level set of requirements that you could wrap around most environments out in the wild.
  • Could there be a new version of AIX on the horizon? Do I care? Not really. But I do sometimes yearn for the “simpler” times I spent working on a myriad of proprietary open systems, particularly when it came to storage array support.
  • StorCentric recently announced Nexsan Assureon Cloud Edition. You can read the press release here.
  • Speaking of press releases, Zerto continues to grow its portfolio of cloud protection technology. You can read more on that here.
  • Spectro Cloud has been busy recently, announcing support for the management of existing Kubernetes deployments. The news on that can be found here.
  • Are you a data hoarder? I am. This article won’t help you quit data, but it will help you understand some of the things you can do to protect your data.
  • So you’ve found yourself with a publicly facing vCenter? Check out this VMware security advisory, and get patching ASAP. vCenter isn’t the only thing you need to be patching either, but hopefully you knew that already. There’s a quick version-check sketch after this list.
  • John Birmingham is one of my favourite writers. Not just for his novels with lots of things going bang, but also for his blog posts about food. And things of that nature.
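
If you’re not sure where your vCenter sits relative to a given advisory, the first step is confirming exactly what version and build you’re running. Here’s a minimal sketch using the vSphere Automation REST API – the hostname and credentials are placeholders, so adjust to suit your environment.

```python
# Check what version and build a vCenter appliance is running.
# Hostname and credentials below are placeholders.
import requests

VCENTER = "vcsa.example.internal"  # hypothetical vCenter FQDN

# Create an API session (this endpoint is available in vSphere 6.5 and later)
session = requests.post(
    f"https://{VCENTER}/rest/com/vmware/cis/session",
    auth=("administrator@vsphere.local", "changeme"),
    verify=False,  # lab convenience only - use proper certificates in production
)
token = session.json()["value"]

# Fetch the appliance version and build number
resp = requests.get(
    f"https://{VCENTER}/rest/appliance/system/version",
    headers={"vmware-api-session-id": token},
    verify=False,
)
info = resp.json()["value"]
print(f"vCenter {info['version']} build {info['build']}")
```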

NetApp Keystone – How Do you Want It?

Disclaimer: I recently attended Storage Field Day 21.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

NetApp recently presented at Storage Field Day 21. You can see videos of the presentation here, and download my rough notes from here. This post is focussed on the Keystone presentation, but I recommend you check out the Oracle performance analysis session as well – I found it extremely informative.

Keystone? What Is It?

According to the website, “Keystone provides a portfolio of payment solutions and storage-as-a-service offerings for hybrid cloud environments to deliver greater agility, financial flexibility, and reduced financial risk that helps you meet your business outcomes”. In short, it gives you a flexible way to consume the broader portfolio of NetApp solutions as a service on-premises (and off-premises).

How Much XaaS Is Too Much?

According to NetApp’s research, no amount of XaaS is too much. The market is apparently hungry for everything as a service to be a thing. It seems we’re no longer just looking to do boring old Infrastructure or Software as a Service. We really seem to want everything as a Service.

[image courtesy of NetApp]

Why?

There are some compelling reasons to consume things as a service via operating expenditure rather than upfront capital expenditure. In the olden days, when I needed some storage for my company, I usually had a line item in the budget for some kind of storage array. What invariably happened was that the budget would only be made available once every 3 – 5 years. It doesn’t make any sense necessarily, but I’m sure there are accounting reasons behind it. So I would try to estimate how much storage the company would need for the next 5 years (and usually miss the estimate by a country mile). I’d then buy as much storage as I could and watch it fill up at an alarming rate.

The other problem with this approach was that we were paying for spindles that weren’t necessarily in use for the entirety of the asset’s lifecycle. There was also an issue that some storage vendors would offer special discounting to buy everything up front. When you went to add additional storage, however, you’d be slugged with pricing that was nowhere near as good as it was if you’d have bought everything up front. The appeal of solutions like storage as a service is that you can start with a smallish footprint and grow it as required, spending what you need, and going from there. It’s also nice for the vendors, as the sales engagement is a little more regular, and the opportunity to sell other services into the environment that may not have been identified previously becomes a reality.
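
To put some very rough numbers around that, here’s a toy comparison of the two models. Every price and growth rate in it is made up for illustration – the point is the shape of the spend, not the totals.

```python
# Toy comparison: buy five years of estimated storage growth up front,
# versus paying per TB per month for what's actually consumed.
# All prices and growth rates are invented for illustration.

upfront_price_per_tb = 400       # hypothetical $/TB with day-one discounting
service_price_per_tb_month = 15  # hypothetical $/TB/month as-a-service rate
start_tb = 100                   # capacity actually needed in year one
annual_growth = 0.35             # estimated yearly growth (often wrong!)

# CapEx model: buy the year-five estimate on day one
year_five_tb = start_tb * (1 + annual_growth) ** 4
capex_total = year_five_tb * upfront_price_per_tb

# Consumption model: pay monthly for the capacity actually in use
opex_total = 0.0
tb = start_tb
for year in range(5):
    opex_total += tb * service_price_per_tb_month * 12
    tb *= 1 + annual_growth

print(f"Buy up front for the 5-year estimate: ${capex_total:,.0f}")
print(f"Pay per TB/month as you grow:         ${opex_total:,.0f}")
```

Depending on the numbers you plug in, either model can come out ahead on raw dollars; the real difference is that the consumption model spends in line with actual usage, so a bad five-year estimate hurts a lot less.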

No, But Really Why?

If you’ve watched the NetApp Keystone presentation, and maybe checked out the supporting documentation, you’re going to wonder why folks aren’t just moving everything to public cloud, and skipping the disk slinging middle man. As anyone who’s worked with or consulted for enterprise IT organisations will be able to tell you though, it’s rarely that simple. There may indeed be appetite to leverage public cloud storage services, for example, but there may also be a raft of reasons why this can’t be done, including latency requirements, legacy application support, data sovereignty concerns, and so forth.

[image courtesy of NetApp]

Sometimes the best thing that can happen is that there’s a compromise to be had between the desire for the business to move to a different operating model and the ability for the IT organisation to service that need.

Thoughts and Further Reading

The growth of XaaS over the last decade has been fascinating to see. There really is an expectation that you can do pretty much anything as a service, and folks are queuing up for the privilege. As I mentioned earlier, I think there are reasons why it’s popular on both sides, and I certainly don’t want to come across as some weird on-premises storage hugger who doesn’t believe the future of infrastructure is heavily entwined with as a service offerings. Heck, my day job is at a company that is entirely built on this model. What I do wonder at times is whether folks in organisations looking to transform their services are really ready to relinquish control of the infrastructure part of the equation in exchange for a per-GB / per month option. Offerings like Keystone aren’t just fancy financial models to make getting kit on the floor easier; they’re changing the way that vendors and IT organisations interact at a fairly fundamental level. In much the same way that public cloud has changed the role of the IT bod in the organisation, so too does XaaS change that value proposition.

I think the folks at NetApp have done quite a good job with Keystone, particularly recognising that there is still a place for on-premises infrastructure, but acknowledging that the market wants both a “cloud-like” experience, and a new way of consuming these services. I’ll be interested to see how Keystone develops over the next 12 – 24 months now that it’s been released to the market at large. We all talk about as a service being the future, so I’m keen to see if folks are really buying it.

Intel Optane – Challenges and Triumphs

Disclaimer: I recently attended Storage Field Day 21.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Intel recently presented at Storage Field Day 21. You can see videos of the presentation here, and download my rough notes from here.

Alive and Kicking

Kristie Mann, Sr. Director Products, Intel Optane Group, kicked off the session by telling us that “Intel Optane is alive and well”. I don’t think anyone thought it was really going away, particularly given the effort folks inside Intel have put in to get this product to market. But from a consumer perspective, it’s potentially been a little confusing.

[image courtesy of Intel]

In terms of data centre penetration, it’s been a different story, and taking Optane from idea to reality has been quite a journey. It was also noted that the “strong uptake of PMem in HPC was unexpected”, but no doubt welcome.

Learnings

Some of the other learnings that were covered as part of the session were as follows.

Software Really Matters

It’s one thing to come out with some super cool hardware that is absolutely amazing, but it’s quite another to get software support for that hardware. Unfortunately, the hardware doesn’t give you much without the software, no matter how well it performs. While this has been something of a challenge for Optane until recent times, there’s definitely been more noise from the big ISVs about enhanced Optane support.
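
To make that concrete: in App Direct mode, PMem is byte-addressable, but only if the operating system and application know what to do with it. Here’s a minimal sketch of mapping a file on a DAX-mounted PMem filesystem – the /mnt/pmem mount point is an assumption, and real software would use something like PMDK rather than raw mmap.

```python
# Why software support matters: byte-addressable access to persistent
# memory via a DAX-mounted filesystem. /mnt/pmem is an assumed mount point.
import mmap
import os

PMEM_FILE = "/mnt/pmem/example.dat"

# Create and size the backing file
fd = os.open(PMEM_FILE, os.O_CREAT | os.O_RDWR)
os.ftruncate(fd, 4096)

# Map it and store directly - on DAX, these writes bypass the page cache
with mmap.mmap(fd, 4096) as mm:
    mm[0:13] = b"hello, optane"
    mm.flush()  # make sure the stores are durable
os.close(fd)
```

Real PMem-aware software leans on libraries like PMDK for proper flushing and fencing; the point is that none of this happens for free without software support.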

IaaS Adoption

Adoption in IaaS has not been great, mainly due to some uneven performance. This will only improve as the software support matures. But the IaaS market can be tough for a bunch of reasons. IaaS vendors are trying to deliver services at a certain price point. That doesn’t mean that they’re still making people run VMs on spinning disk (hopefully), but rolling out All-Flash support for platforms is something that’s only going to be done when the $/GB makes sense for the providers. You also might have seen in the field that IaaS providers are pretty particular about performance and quality of service. It makes sense when you’re trying to host a whole bunch of different workloads at large scale. So it follows that they’d be somewhat cautious about launching new media types on their platforms without running through a whole bunch of performance and integration testing. I’m not saying they’re not going to get there, they just may not be the first cabs off the rank.

Can you spell OEM?

OEM qualifications have been slow to date with Optane. This is key to getting the product out there. Enterprise folks don’t like to buy things until their favourite Tier 1 vendors are offering it as a default option in their server / storage array / fabric switch. If Dell has the Optane Inside sticker (not a real thing, but you know what I mean), the infrastructure architects inside large government entities are more likely to get on board.

Battling The Status Quo

Status quo thinking makes it hard to understand this isn’t just memory or storage. This has been something of a problem for Intel since Optane became a thing. I’m still having conversations with people and running up against significant confusion about the difference between PMem and Optane SSD. I think that’s going to improve as time goes on, but it can make things difficult when it comes to broad market penetration.

Thoughts and Further Reading

I don’t want people reading this to think that I’m down on Intel and what it’s done with Optane. If anything, I’m really into it. I enjoyed the presentation at Storage Field Day 21 tremendously, and not just because my friend Howard was on the panel repping VAST Data. It’s unusual that a vendor as big as Intel would be so frank about some of the challenges that it’s faced with getting new media to market. But I think it’s the willingness to share some of this information that demonstrates how committed Intel is to Optane moving forward. I was lucky enough to speak to Intel Senior Fellow Al Fazio about the Optane journey, and it was clear that there’s a whole lot of innovation and sweat that goes into making a product like this work.

Some folks think that these panel presentations are marketing disguised as a presentation. Invariably, the reference customers are friendly with the company, and you’ll only ever hear good stories. But I think those stories from those customers are still extremely powerful. After all, having a customer jump on a session to tell the world about how good your product has been means you’ve done something right. As a consumer of these products, I find these kinds of testimonials invaluable. Ultimately, products are successful in the market when they serve the market’s needs. From what I can see, Intel Optane is on its way to meeting those needs, and it has a bright future.

Hammerspace, Storageless Data, And One Tough Problem

Disclaimer: I recently attended Storage Field Day 21.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Hammerspace recently presented at Storage Field Day 21. You can see videos of the presentation here, and download my rough notes from here.

Storageless Data You Say?

David Flynn kicked off the presentation from Hammerspace talking about storageless data. Storageless data? What on earth is that, then? Ultimately your data has to live on storage, but this is all about consumption-side abstraction. Hammerspace doesn’t want you to care about how your application maps to servers, or how it maps to storage. It’s more of a data-focussed approach to storage than we’re used to, perhaps. Some of the key requirements of the solution are as follows:

  • The agent needs to run on everything – virtual, physical, containers – it can’t be bound to specific hardware
  • Needs to be multi-vendor and support multi-protocol
  • Presumes metadata
  • Make data into a routed resource
  • Deliver objective-based orchestration

The trick is that you have to be able to do all of this without killing the benefits of the infrastructure (performance, reliability, cost, and management). Simple, huh?
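
To give a feel for what objective-based orchestration means, here’s a toy model of my own – declare what the data needs, and let a reconciler work out placement. This is purely illustrative, not Hammerspace’s actual API.

```python
# Illustrative sketch of objective-based orchestration: declare what the
# data requires, and let a reconciler choose where it lives.
# This is a toy model, not Hammerspace's actual API.
from dataclasses import dataclass

@dataclass
class Objective:
    copies: int          # how many copies must exist
    max_latency_ms: int  # how close the data must be to its consumers
    cloud_ok: bool       # whether a cloud target is acceptable

SITES = {
    "edge-bne": {"latency_ms": 2, "cloud": False},
    "dc-syd": {"latency_ms": 12, "cloud": False},
    "aws-ap-southeast-2": {"latency_ms": 35, "cloud": True},
}

def place(objective):
    """Pick sites that satisfy the objective, nearest first."""
    candidates = [
        name
        for name, site in sorted(SITES.items(), key=lambda kv: kv[1]["latency_ms"])
        if site["latency_ms"] <= objective.max_latency_ms
        and (objective.cloud_ok or not site["cloud"])
    ]
    if len(candidates) < objective.copies:
        raise RuntimeError("objective cannot be met by the available sites")
    return candidates[: objective.copies]

# Two copies, within 40ms, cloud permitted -> the two closest qualifying sites
print(place(Objective(copies=2, max_latency_ms=40, cloud_ok=True)))
```

A real implementation obviously has to deal with replication, consistency, and the cost of moving data around, but the declarative shape of the problem is the point.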

Stitching It Together

A key part of the Hammerspace story is the decoupling of the control plane and the data plane. This allows it to focus on getting the data where it needs to be, from edge to cloud, over whatever protocol makes sense.

[image courtesy of Hammerspace]

Other Notes

Hammerspace officially supports 8 sites at the moment, and the team has tested the solution with 32 sites. It uses an eventually consistent model, and the Global Namespace is global per share, providing flexible deployment options. Metadata replication can be set up to run periodically, and customised at each site. You always rehydrate the data and serve it locally over NAS via SMB or NFS.

Licensing Notes

Hammerspace is priced on capacity (data under management). You can also purchase it via the AWS Marketplace. Note that, from a Hammerspace perspective, you can access up to 10TB free on the public cloud vendors (AWS, GCP, Azure).

Thoughts and Further Reading

I was fortunate to have a followup session with Douglas Fallstrom and Brendan Wolfe to revisit the Hammerspace story, ask a few more questions, and check out some more demos. I asked Fallstrom about the kind of use cases they were seeing in the field for Hammerspace. One popular use case was for disaster recovery. Obviously, there’s a lot more to doing DR than just dumping data in multiple locations, but it seems that there’s appetite for this very thing. At a high level, Hammerspace is a great choice for getting data into multiple locations, regardless of the underlying platform. Sure, there’s a lot more that needs to be done once it’s in another location, or when something goes bang. But from the perspective of keeping things simple, this one is up there.

Fallstrom was also pretty clear with me that this isn’t Primary Data 2.0, regardless of the number of folks that work at Hammerspace with that heritage. I think it’s a reasonable call, given that Hammerspace is doubling down on the data story, and really pushing the concept of a universal file system, regardless of location or protocol.

So are we finally there in terms of data abstraction? It’s been a problem since computers became common in the enterprise. As technologists we frequently get caught up in the how, and not as much in the why of storage. It’s one thing to say that I can scale this to this many Petabytes, or move these blocks from this point to that one. It’s an interesting conversation for sure, and has proven to be a difficult problem to solve at times. But I think as a result of this, we’ve moved away from understanding the value of data, and data management, and focused too much on the storage and services supporting the data. Hammerspace has the noble goal of moving us beyond that conversation to talking about data and the value that it can bring to the enterprise. Is it there yet in terms of that goal? I’m not sure. It’s a tough thing to be able to move data all over the place in a reliable fashion and still have it do what it needs to do with regards to performance and availability requirements. Nevertheless I think that the solution does a heck of a lot to remove some of the existing roadblocks when it comes to simplified data management. Is serverless compute really a thing? No, but it makes you think more about the applications rather than what they run on. Storageless data is aiming to do the same thing. It’s a bold move, and time will tell whether it pays off or not. Regardless of the success or otherwise of the marketing team, I’m thinking that we’ll be seeing a lot more innovation coming out of Hammerspace in the near future. After all, all that data isn’t going anywhere any time soon. And someone needs to take care of it.

Storage Field Day 21 – (Fairly) Full Disclosure

Disclaimer: I recently attended Storage Field Day 21.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Here are my notes on gifts, etc, that I received as a conference attendee at Storage Field Day 21. This is by no stretch an interesting post from a technical perspective, but it’s a way for me to track and publicly disclose what I get and how it looks when I write about various things. With all of … this stuff … happening, it’s not going to be as lengthy as normal, but I did receive a couple of boxes of stuff in the mail, so I wanted to disclose it.

The Tech Field Day team sent a keyboard cloth (a really useful thing to save the monitor on my laptop from being bashed against the keyboard), a commemorative TFD coin, and some TFD patches. The team also sent me a snack pack with a variety of treats in it, including Crunch ‘n Munch caramel popcorn with peanuts, fudge brownie M&M’s, Pop Rocks, Walker’s Monster Munch pickled onion flavour baked corn snacks, peanut butter Cookie Dough Bites, Airheads, Razzles, a giant gobstopper, Swedish Fish, a Butterfinger bar, some Laffy Taffy candy, Hershey’s Kisses, Chewy Lemonhead, Bottlecaps, Airheads, Candy Sours and Milk Duds. I don’t know what most of this stuff is but I guess I’ll find out. I can say the pickled onion flavour baked corn snacks were excellent.

Pliops came through with the goods and sent me a Lume Cube Broadcast Lighting Kit. Hammerspace sent a stainless steel water bottle and Hammerspace-branded Leeman notepad. Nasuni threw in a mug, notepad, and some pens, while NetApp gave me a travel mug and notepad. Tintri kindly included a Tintri trucker cap, Tintri-branded hard drive case and Tintri-branded OGIO backpack in the swag box.

My Secret Santa gift was the very excellent “Working for the clampdown: The Clash, the dawn of neoliberalism and the political promise of punk”, edited by Colin Coulter.

It wasn’t fancy food and limos this time around. But it was nonetheless an enjoyable event. Hopefully we can get back to in-person events some time this decade. Thanks again to Stephen and the team for having me back.

Back To The Future With Tintri

Disclaimer: I recently attended Storage Field Day 21.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Tintri recently presented at Storage Field Day 21. You can see videos of the presentation here, and download my rough notes from here.

Tintri? 

Remember Tintri? The company was founded in 2008, fell upon difficult times in 2018, and was acquired by DDN. It’s still going strong, and now offers a variety of products under the Tintri brand, including VMstore, IntelliFlash, and NexentaStor. I’ve had exposure to all of these different lines of business over the years, and was interested to see how it was all coming together under the DDN acquisition.

Does Your Storage Drive Itself?

Ever since I got into the diskslinger game, self-healing infrastructure has been talked about as the next big thing in terms of reducing operational overheads. We build this stuff and can teach it how to do things, so surely we can get it to fix itself when it goes bang? As those of you who’ve been in the industry for some time would likely know, we’re still some ways off that being a reality across a broad range of infrastructure solutions. But we do seem closer than we were a while ago.

Autonomous Infrastructure

Tintri spent some time talking about what it was trying to achieve with its infrastructure by comparing it to autonomous vehicle development. If you think about it for a minute, it’s a little easier to grasp the concept of a vehicle driving itself somewhere, using a lot of telemetry and little computers to get there, than it is to think about how disk storage might be able to self-repair and redirect resources where they’re most needed. Of most interest to me was the distinction made between analytics and intelligence. It’s one thing to collect a bunch of telemetry data (something that storage companies have been reasonably good at for some time now) and analyse it after the fact to come to conclusions about what the storage is doing well and what it’s doing poorly. It’s quite another thing to use that data on the fly to make decisions about what the storage should be doing, without needing the storage manager to intervene.

[image courtesy of Tintri]

If you look at the various levels of intelligence, you’ll see that autonomy eventually kicks in and the concept of supervision and management moves away. The key to the success of this is making sure that your infrastructure is doing the right things autonomously.
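
The analytics / intelligence distinction is easier to see as a toy control loop (my own illustration, not Tintri’s implementation): analytics summarises telemetry after the fact for a human, while intelligence acts on each sample as it arrives.

```python
# Analytics versus intelligence as a toy control loop. Analytics reports
# after the fact; intelligence acts on telemetry without waiting for the
# storage admin. My own illustration, not Tintri's implementation.
import random

def collect_telemetry(vm):
    """Stand-in for the per-VM stats the platform gathers continuously."""
    return {"vm": vm, "latency_ms": random.uniform(0.5, 8.0)}

def analytics(samples):
    # After the fact: summarise, and leave the decision to a human
    worst = max(samples, key=lambda s: s["latency_ms"])
    print(f"report: worst latency was {worst['latency_ms']:.1f}ms on {worst['vm']}")

def autonomous(sample, threshold_ms=5.0):
    # On the fly: act immediately when a workload misses its target
    if sample["latency_ms"] > threshold_ms:
        print(f"action: rebalancing {sample['vm']} away from its busy node")

samples = [collect_telemetry(f"vm-{i:02d}") for i in range(10)]
analytics(samples)       # tells you what happened
for s in samples:
    autonomous(s)        # does something about it
```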

So What Do You Really Get?

[image courtesy of Tintri]

You get an awful lot from Tintri in terms of information that helps the platform decide what it needs to do to service workloads in an appropriate fashion. It’s interesting to see how the different layers deliver different outcomes in terms of frequency as well. Some of this is down to physics, and time to value. The info in the cloud may not help you make an immediate decision on what to do with your workloads, but it will certainly help when the hapless capacity manager comes asking for the 12-month forecast.

Conclusion

I was being a little cheeky with the title of this post. I was a big fan of what Tintri was able to deliver in terms of storage analytics with a virtualisation focus all those years ago. It feels like some things haven’t changed, particularly when looking at the core benefits of VMstore. But that’s okay, because all of the things that were cool about VMstore back then are still cool, and absolutely still valuable in most enterprise storage shops. I don’t doubt that there are VMware shops that have taken up vVols and wouldn’t get as much out of VMstore as those shops running oldey timey LUNs, but there are plenty of organisations that just need storage to host VMs on – storage that gives them insight into how it’s performing. Maybe it’s even storage that can move some stuff around on the fly to make things work a little better.

It’s a solid foundation upon which to add a bunch of pretty cool features. I’m not 100% convinced that what Tintri is proposing is the reality in a number of enterprise shops (have you ever had to fill out a change request to storage vMotion a VM before?), but that doesn’t mean it’s not a noble goal, and certainly one worth pursuing. I’m a fan of any vendor that is actively working to take the work out of infrastructure, and allowing people to focus on the business of doing business (or whatever it is that they need to focus on). It looks like Tintri has made some real progress towards reducing the overhead of infrastructure, and I’m keen to see how that plays out across the product portfolio over the next year or two.

Random Short Take #49

Happy new year and welcome to Random Short Take #49. Not a great many players have worn 49 in the NBA (2 as it happens). It gets better soon, I assure you. Let’s get random.

  • Frederic has written a bunch of useful articles about all things Rubrik. This one on setting up authentication to use Active Directory came in handy recently. I’ll be digging into some of Rubrik’s multi-tenancy capabilities in the near future, so keep an eye out for that.
  • In more things Rubrik-related, this article by Joshua Stenhouse on fully automating Rubrik EDGE / AIR deployments was great.
  • Speaking of data protection, Chris Colotti wrote this useful article on changing the Cloud Director database IP address. You can check it out here.
  • You want more data protection news? How about this press release from BackupAssist talking about its partnership with Wasabi?
  • Fine, one more data protection article. Six backup and cloud storage tips from Backblaze.
  • Speaking of press releases, WekaIO has enjoyed some serious growth in the last year. Read more about that here.
  • I loved this article from Andrew Dauncey about things that go wrong and learning from mistakes. We’ve all likely got a story about something that went so spectacularly wrong that you only made that mistake once. Or twice at most. It also reminds me of those early days of automated ESX 2.5 builds and building magical installation CDs that would happily zap LUN 0 on FC arrays connected to new hosts. Fun times.
  • Finally, I was lucky enough to talk to Intel Senior Fellow Al Fazio about what’s happening with Optane, how it got to this point, and where it’s heading. You can read the article and check out the video here.

Storage Field Day 21 – I’ll Be At Storage Field Day 21

Here’s some news that will get you excited. I’ll be virtually heading to the US next week for another Storage Field Day event. If you haven’t heard of the very excellent Tech Field Day events, you should check them out. It’s also worth visiting the Storage Field Day 21 website during the event (January 20 – 22) as there’ll be video streaming and updated links to additional content. You can also see the list of delegates and event-related articles that have been published.

I think it’s a great line-up of both delegates and presenting companies this time around. I know most of them, but there may also still be a few companies added to the line-up. I’ll update this if and when they’re announced.

I’d like to publicly thank in advance the nice folks from Tech Field Day who’ve seen fit to have me back, as well as my employer for letting me take time off to attend these events. Also big thanks to the companies presenting. It’s going to be a lot of fun. Last time was a little weird doing this virtually, rather than in person, but I think it still worked. I’m really looking forward to this, even if it means doing the night shift for a few days. I’ll post details of the presentation times when I have them.

[Update – here’s the schedule]

  • Wednesday, Jan 20, 9:30 – 11:00: MinIO Presents at Storage Field Day 21. Presenters: AB Periasamy, Daniel Valdiva, Eco Willson
  • Wednesday, Jan 20, 12:00 – 15:30: Tintri Presents at Storage Field Day 21. Presenters: Erwin Daria, Rob Girard, Shawn Meyers, Tomer Hagay Nevel
  • Thursday, Jan 21, 8:00 – 10:00: NetApp Presents at Storage Field Day 21. Presenters: Arun Raman, Dave Krenik, Jeffrey Stein, Mike McNamara, Sunitha Rao
  • Thursday, Jan 21, 11:00 – 13:00: Nasuni Presents at Storage Field Day 21. Presenter: Andres Rodriguez
  • Friday, Jan 22, 8:00 – 9:30: Hammerspace Presents at Storage Field Day 21. Presenters: David Flynn, Douglas Fallstrom
  • Friday, Jan 22, 10:30 – 11:30: Pliops Presents at Storage Field Day 21
  • Friday, Jan 22, 12:30 – 14:30: Intel Presents at Storage Field Day 21

Cisco Introduces HyperFlex 4.5

Disclaimer: I recently attended Storage Field Day 20.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Cisco presented a sneak preview of HyperFlex 4.5 at Storage Field Day 20 a little while ago. You can see videos of the presentation here, and download my rough notes from here. Note that this preview was done some time before the product was officially announced, so there may be a few things that did or didn’t make it into the final product release.

Announcing HyperFlex 4.5

4.5: Meat and Potatoes

So what are the main components of the 4.5 announcement?

  • iSCSI Block storage
  • N:1 Edge data replication
  • New edge platforms / SD-WAN
  • HX Application Platform (KVM)
  • Intersight K8s Service
  • Intersight Workload Optimizer

Other Cool Stuff

  • HX Boost Mode – a virtual CPU configuration change in the HX controller VM; the boost is persistent (scale up).
  • ESXi and vCenter 7.0 support, plus the HX native HTML5 vCenter plugin (available since HX 4.0); note that 6.0 is EoS.
  • Secure Boot – protect the hypervisor against bootloader attacks with secure boot anchored in Cisco hardware root of trust
  • Hardened SDS Controller – reduce the attack surface and mitigate against compromised admin credentials

The HX240 Short Depth nodes have been available since HX 4.0, but there’s now a new Edge option – the HX240 Edge. This is a new 2RU form factor option for HX Edge (2N / 3N / 4N), available in all-flash and hybrid configurations, with 1 or 2 sockets, up to 3TB RAM and 175TB capacity, and PCIe slots for dense GPUs.

iSCSI in HX 4.5(1a)

[image courtesy of Cisco]

iSCSI Topologies

[image courtesy of Cisco]
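
From the host side, consuming these targets should look like the standard open-iscsi workflow on Linux. A rough sketch follows – the portal IP is a placeholder, and the target IQNs come back from discovery.

```python
# Standard Linux open-iscsi workflow for discovering and logging in to
# iSCSI targets. The portal IP is a placeholder for an HX cluster IP.
import subprocess

PORTAL = "192.168.10.50"  # hypothetical iSCSI portal address

# Discover the targets presented at the portal
out = subprocess.run(
    ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL],
    capture_output=True, text=True, check=True,
).stdout

# Each discovery line looks like "<ip>:<port>,<tpgt> <iqn>"
for line in out.splitlines():
    iqn = line.split()[-1]
    subprocess.run(
        ["iscsiadm", "-m", "node", "-T", iqn, "-p", PORTAL, "--login"],
        check=True,
    )
```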

Thoughts and Further Reading

Some of the drama traditionally associated with HCI marketing seems to have died down now, and people have mostly stopped debating what it is or isn’t, and started focusing on what they can get from the architecture over more traditional infrastructure deployments. Hyperconverged has always had a good story when it comes to compute and storage, but the networking piece has proven problematic in the field. Sure, there have been attempts at making software-defined networking more effective, but some of these efforts have run into trouble when they’ve hit the northbound switches.

When I think of Cisco HyperFlex I think of it as the little HCI solution that could. It doesn’t dominate the industry conversation like some of the other vendors, but it’s certainly had an impact, in much the same way UCS has. I’ve been a big fan of Springpath for some time, and HyperFlex has taken a solid foundation and turned it into something even more versatile and fully featured. I think the key thing to remember with HyperFlex is that it’s a networking company selling this stuff – a networking company that knows what’s up when it comes to connecting all kinds of infrastructure together.

The addition of iSCSI keeps the block storage crowd happy, and the new edge form-factor will have appeal for customers trying to squeeze these boxes into places they probably shouldn’t be going. I’m looking forward to seeing more HyperFlex from Cisco over the next 12 months, as I think it finally has a really good story to tell, particularly when it comes to integration with other Cisco bits and pieces.

Random Short Take #46

Welcome to Random Short Take #46. Not a great many players have worn 46 in the NBA, but one player who has is one of my favourite Aussie players: Aron “Bangers” Baynes. So let’s get random.

  • Enrico recently attended Cloud Field Day 9, and had some thoughts on NetApp’s identity in the new cloud world. You can read his insights here.
  • This article from Chris Wahl on multi-cloud design patterns was fantastic, and well worth reading.
  • I really enjoyed this piece from Russ on technical debt, and some considerations when thinking about how we can “future-proof” our solutions.
  • The Raspberry Pi 400 was announced recently. My first computer was an Amstrad CPC 464, so I have a real soft spot for jamming computers inside keyboards.
  • I enjoyed this piece from Chris M. Evans on hybrid storage, and what it really means nowadays.
  • Working from home a bit this year? Me too. Tom wrote a great article on some of the security challenges associated with the new normal.
  • Everyone has a quadrant nowadays, and Zerto has found itself in another one recently. You can read more about that here.
  • Working with VMware Cloud Director and wanting to build a custom theme? Check out this article.