Storage Field Day 21 – Wrap-up and Link-o-rama

Disclaimer: I recently attended Storage Field Day 21.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

This is a quick post to say thanks once again to Stephen and Ben, and the presenters at Storage Field Day 21. I had a great time. For easy reference, here’s a list of the posts I did covering the events (they may not match the order of the presentations).

Storage Field Day 21 – I’ll Be At Storage Field Day 21

Storage Field Day 21 – (Fairly) Full Disclosure

Back To The Future With Tintri

Hammerspace, Storageless Data, And One Tough Problem

Intel Optane – Challenges and Triumphs

NetApp Keystone – How Do you Want It?

Pliops – Can We Take Fast And Make It Faster?

Nasuni Puts Your Data Where You Need It

MinIO – Cloud, Edge, Everywhere …

Also, here are a number of links to posts by my fellow delegates (in no particular order). They’re all very smart people, and you should check out their stuff, particularly if you haven’t before. I’ll attempt to keep this updated as more posts are published. But if it gets stale, the Storage Field Day 21 landing page will have updated links.

 

Jason Collier (@BocaNuts)

 

Barry Coombs (@VirtualisedReal)

#SFD21 – Storage Field Day 21 – Tintri

#SFD21 – Storage Field Day 21 – NetApp

#SFD21 – Storage Field Day 21 – Nasuni

#SFD21 – Storage Field Day 21 – MinIO Session

#SFD21 – Storage Field Day 21 – Pliops

#SFD21 – Storage Field Day 21 – Hammerspace

#SFD21 – Storage Field Day 21 – Intel

 

Becky Elliott (@BeckyLElliott)

 

Matthew Leib (@MBLeib)

 

Ray Lucchesi (@RayLucchesi)

The rise of MinIO object storage

Data Science storage with NetApp’s Python Toolkit

Storageless data!?

115-GreyBeards talk database acceleration with Moshe Twitto, CTO & Co-founder, Pliops

 

Andrea Mauro (@Andrea_Mauro)

 

Max Mortillaro (@DarkkAvenger)

Nasuni – Cloud-Scale NAS Without Cloud Worries

Storage Field Day 21 – The TECHunplugged Take on Nasuni

Pliops: Re-Imagining Storage, Crushing Bottlenecks and a Bright Future in the Cloud

 

Keiran Shelden (@Keiran_Shelden)

 

Enrico Signoretti (@esignoretti)

Object Storage Is Heating Up

Storage Options for the Distributed Enterprise

 

Paul Stringfellow (@TechStringy)

Looking ahead with Storage Field Day 21 – Barry Coombs, Jason Collier, Max Mortillaro – Ep 149

Storageless data, really? – Doug Fallstrom – Ep156

 

Frederic Van Haren (@FredericVHaren)

 

On-Premise IT Podcast

Is Storageless Storage Just Someone Else’s Storage?

 

Now please enjoy this group photo.

[image courtesy of Gestalt IT]

MinIO – Cloud, Edge, Everywhere …

Disclaimer: I recently attended Storage Field Day 21.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

MinIO recently presented at Storage Field Day 21. You can see videos of the presentation here, and download my rough notes from here.

 

What Is It?

To quote the good folks at MinIO, it is a “high performance, Kubernetes-native object store”. It is designed to be used for large-scale data infrastructure, and was built from scratch to be cloud native.

[image courtesy of MinIO]

Design Principles

MinIO has been built with the following principles in mind:

  • Cloud Native – born in the cloud with “cloud native DNA”
  • Performance Focussed – MinIO believes it is the fastest object store in existence
  • Simplicity – designed for simplicity because “simplicity scales”

S3 Compatibility

MinIO is heavily focussed on S3 compatibility. It was first to market with support for the S3 V4 API signature, and it is one of the few vendors to support S3 Select. It has also been strictly consistent from inception.
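That compatibility is straightforward to demonstrate: because MinIO speaks the S3 API, standard S3 tooling works against it unmodified. Here’s a minimal sketch using Python’s boto3 client, assuming a MinIO server is already running locally; the endpoint and credentials are placeholder development values, not anything from the presentation:

```python
# A minimal sketch of using standard S3 tooling (boto3) against MinIO.
# Assumes a MinIO server is already running at localhost:9000; the
# endpoint and credentials below are placeholder development values.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",  # point the S3 client at MinIO
    aws_access_key_id="minioadmin",        # placeholder dev credentials
    aws_secret_access_key="minioadmin",
)

s3.create_bucket(Bucket="demo-bucket")
s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello object storage")
print(s3.get_object(Bucket="demo-bucket", Key="hello.txt")["Body"].read())
```

The point is that nothing in the client code knows or cares that it isn’t talking to AWS.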

Put Me In Your Favourite Box

The cloud native part of MinIO was no accident, and as a result more than 62% of MinIO instances run in containers (according to MinIO), with 43% of those instances managed via Kubernetes. It’s not just about jamming this solution into your favourite container platform though. Its lightweight nature means you can deploy it pretty much anywhere. As the MinIO folks pointed out during the presentation, MinIO is going everywhere that AWS S3 isn’t.

 

Thoughts And Further Reading

I love object storage. Maybe not in the way I love my family or listening to records or beer, but I do love it. It’s not just useful for storage for the great unwashed of the Internet, but also backup and recovery, disaster recovery, data archives, and analytics. And I’m a big fan of MinIO, primarily because of the S3 compatibility and simplicity of deployment. Like it or not, S3 is the way forward in terms of a standard for object storage for cloud native (and a large number of enterprise) workloads. I’ve written before about other vendors being focussed on this compatibility, and I think it’s great that MinIO has approached this challenge with just as much vigour. There are plenty of problems to be had deploying applications at the best of times, and being able to rely on the storage vendor sticking to the script in terms of S3 compatibility takes one more potential headache away.

The simplicity of deployment is a big part of what intrigues me about MinIO too. I’m old enough to remember some deployments of early generation on-premises object storage systems that involved a bunch of hardware and complicated software interactions for what ultimately wasn’t a great experience. Something like MinIO can be up and running on some pretty tiny footprints in no time at all. A colleague of mine shared some insights into that process here.

And that’s what makes this cool. It’s not that MinIO is trying to take a piece of the AWS pie. Rather, it’s positioning the solution as one that can operate everywhere the hyperscalers aren’t. Putting object storage solutions in edge locations has historically been a real pain. That’s no longer the case. Part of this has to do with the fact that we’ve got access to really small computers and compact storage. But it also has a bit to do with lightweight code that can be up and running in a snap. Like some of the other on-premises object vendors, MinIO has done a great job of turning people on to the possibility of doing cool storage for cloud native workloads outside of the cloud. It seems a bit odd until you think about all of the use cases in enterprise that might work really well in cloud, but aren’t allowed to be hosted in the cloud. It’s my opinion that MinIO has done a great job of filling that gap (and exceeding expectations) when it comes to lightweight, easy to deploy object storage. I’m looking forward to seeing what’s next for them, particularly as other vendors start to leverage the solution. For another perspective on MinIO’s growth, check out Ray’s article here.

Nasuni Puts Your Data Where You Need It

Disclaimer: I recently attended Storage Field Day 21.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Nasuni recently presented at Storage Field Day 21. You can see videos of the presentation here, and download my rough notes from here.

 

Nasuni?

The functionality is in the product name. It’s NAS that offers a unified file system delivered via the cloud. The key feature is that it’s cloud-native, rather than being built on any particular infrastructure solution.

[image courtesy of Nasuni]

The platform comprises five key components.

UniFS

  • Consolidates files and metadata in cloud storage – “Gold Copy”
  • Ensures durability by storing files as immutable, read-only objects
  • Stores an unlimited version history of every file

Virtual Edge Appliances

  • Caches active files with a 99% hit rate (see the latency sketch after this component list)
  • 98% smaller footprint vs traditional file server / NAS
  • Scales across all sites, including VDI
  • Supports standard file sharing protocols
  • Built-in web server enables remote file access via web browser (HTTP)

Management Console

  • Administers appliances, volumes, shares and file recovery
  • Automated through central GUI and REST API
  • Provides centralised monitoring, reporting, and alerting

Orchestration Center

  • Multi-site file sync keeps track of versions
  • Advanced version control with Nasuni Global File Lock
  • Multi-region cloud support to ensure performance

Analytics Connector

  • Translates file data into native object storage format
  • Leverage any public cloud services (AI, data analytics, search)
  • Multi-cloud support so you can run any cloud service against your data
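To get a feel for why that 99% cache hit rate on the Virtual Edge Appliances is the headline number, here’s a back-of-the-envelope sketch of effective read latency. Both latency figures are illustrative assumptions, not Nasuni measurements:

```python
# Back-of-the-envelope effective read latency for a caching edge appliance.
# Both latency figures are illustrative assumptions, not Nasuni numbers.
LOCAL_MS = 1.0    # assumed latency for a cache hit served at the edge
CLOUD_MS = 50.0   # assumed latency for a miss fetched from cloud object storage

def effective_latency(hit_rate: float) -> float:
    """Weighted average read latency for a given cache hit rate."""
    return hit_rate * LOCAL_MS + (1.0 - hit_rate) * CLOUD_MS

for rate in (0.90, 0.99):
    print(f"hit rate {rate:.0%}: {effective_latency(rate):.2f} ms average")
```

With these assumed figures, moving from a 90% to a 99% hit rate cuts average read latency from roughly 5.9ms to roughly 1.5ms, which is why the hit rate matters so much for the end user experience.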

 

Thoughts and Further Reading

I’m the first to admit I’ve had a bit of a blind spot for Nasuni for a little while now. Not because I think the company doesn’t do cool stuff – it really does. Rather, my former employer was an investor in the tech and was keen to see how we could use the platform at every opportunity. Even when the opportunity wasn’t appropriate.

Distributed storage for file sharing has been a pain in the rear for enterprises ever since enterprises have been a thing. The real challenge has been doing something sensible about managing data across multiple locations in a cogent fashion. As local becomes global, this becomes even more of an issue, particularly when folks all across the world need to work on the same data. Email isn’t really great for this, and some of those sync and share solutions don’t cope well with the scale that is sometimes required. In the end, file serving is still a solution that can solve a problem for a lot of enterprise use cases.

The advent of public cloud has been great in terms of demonstrating that workloads can be distributed, and you don’t need to have a bunch of tin sitting in the office to get value from infrastructure. Nasuni recognised this over ten years ago, and it has put together a platform that seeks to solve that problem by taking advantage of the distributed nature of cloud, whilst acknowledging that virtualised resources can make for a useful local presence when it comes to having the right data in the right place. One of my favourite things about the solution is that you can also do stuff via the Analytics Connector to derive further value from your unstructured data. This is not a unique feature, but it’s certainly something that gives the impression that Nasuni isn’t just here to serve up your data.

The elegance of the Nasuni solution is in the fact that the complexity is well hidden from the end user. It’s a normal file access experience, but it’s hosted in the cloud. When you contrast that with what you get from the sync solutions of the world or the clumsy web-based document management systems so prevalent in the enterprise, this kind of simplicity is invaluable. It’s my opinion that there is very much a place for this kind of solution in the marketplace. The world is becoming increasingly global, but we still need solutions that can provide data where we need it. We also need those solutions to accommodate the performance and resilience needs of the enterprise.

If you’re after a great discussion on storage options for the distributed enterprise, check out Enrico’s article over at GigaOm.

Pliops – Can We Take Fast And Make It Faster?

Disclaimer: I recently attended Storage Field Day 21.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Pliops recently presented at Storage Field Day 21. You can see videos of the presentation here, and download my rough notes from here.

 

The Problem

You might have heard of solid-state drives (SSDs). You might have one in your computer. You might even have a data centre full of them. They’re quiet and they’re fast. It’s generally accepted that SSDs perform way better than HDDs. The problem, however, is that CPUs haven’t kept up with that performance increase. The folks at Pliops have also pointed out that resiliency technologies such as RAID generally suck, particularly when you’re using SSDs. In essence, you’re wasting a good chunk of your Flash.

 

The Solution?

The solution, according to Pliops, is the Pliops Storage Processor. This is “a hardware-based storage accelerator that enables cloud and enterprise customers to offload and accelerate data-intensive workloads using just a fraction of the computational load and power”. Capabilities include increased performance, capacity expansion, improved endurance, and data protection.

System Integration

From an integration perspective, the card is a half-height, half-length PCIe device that fits in any standard rackmount server. There’s a Pliops Agent that’s installed, and it supports a variety of different deployment options. There’s even a cloud / as-a-service option available.

[image courtesy of Pliops]

Use Cases

The Pliops SP is targeted primarily at RDBMS, NoSQL and Analytics workloads, due in large part to its ability to reduce read and write amplification on SSDs – something that’s been a problem since Flash became more prevalent in the data centre.

[image courtesy of Pliops]
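Write amplification is worth a quick worked example. The WAF values below are assumptions for the sake of illustration, not Pliops measurements, but they show why shaving the write amplification factor matters for both performance and flash endurance:

```python
# Illustrative write-amplification arithmetic. The WAF values are
# assumptions for the sake of the example, not Pliops measurements.
# WAF = bytes physically written to flash / bytes the host asked to write.
HOST_WRITES_GB = 100.0

def flash_writes(host_gb: float, waf: float) -> float:
    """Physical flash writes implied by a given write amplification factor."""
    return host_gb * waf

for label, waf in (("conventional stack", 5.0), ("accelerated stack", 1.5)):
    print(f"{label}: {HOST_WRITES_GB:.0f} GB from the host becomes "
          f"{flash_writes(HOST_WRITES_GB, waf):.0f} GB on flash (WAF {waf})")
```

Less amplification means more of the drive’s bandwidth and endurance budget is spent on useful work rather than housekeeping.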

 

Thoughts and Further Reading

On the surface, the Pliops Storage Processor seems to be solving a fairly specific problem. It’s not a problem that gets a lot of airplay, but that doesn’t mean it’s not an important one to solve. There are scads of solutions in the market that have been developed to address the problem of legacy systems design. For example, the way we addressed resilience previously (i.e. RAID) doesn’t work that well as drive capacities continue to increase. We’ve also fundamentally changed the media we’re working with, but haven’t necessarily developed new ways of taking advantage of that media.

Whenever I see add-in devices like this I worry that they would be a pain to manage at any sort of scale. But then I remember that literally everything related to hardware is a pain to manage at any kind of scale. The Pliops folks tell us that it’s not actually too bad, and any disadvantages related to having super specialised add-in cards deployed in servers are more than made up for by the improved outcomes achieved with those cards.

Ultimately, the value of a solution like the Pliops Storage Processor is absolutely tied to whether you’ve had a problem with this in the past. If you have, you’ll understand that this kind of solution is a reasonably elegant way of addressing the problem. It has the added bonus of taking fast media and eking out even more performance from that media.

Pliops has only been around since 2017, but it recently announced a decent funding round and the product is being readied for mass deployment. I’ll happily admit that I’ve done a fairly poor job of explaining the Pliops Storage Processor and what it does, so I recommend you check out the solution brief on the Pliops website. If you’d like another perspective, be sure to head over to TECHunplugged to read Max’s thoughts on Pliops.

NetApp Keystone – How Do you Want It?

Disclaimer: I recently attended Storage Field Day 21.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

NetApp recently presented at Storage Field Day 21. You can see videos of the presentation here, and download my rough notes from here. This post is focussed on the Keystone presentation, but I recommend you check out the Oracle performance analysis session as well – I found it extremely informative.

 

Keystone? What Is It?

According to the website, “Keystone provides a portfolio of payment solutions and storage-as-a-service offerings for hybrid cloud environments to deliver greater agility, financial flexibility, and reduced financial risk that helps you meet your business outcomes”. In short, it gives you a flexible way to consume the broader portfolio of NetApp solutions as a service on-premises (and off-premises).

 

How Much XaaS Is Too Much?

According to NetApp’s research, no amount of XaaS is too much. The market is apparently hungry for everything as a service to be a thing. It seems we’re no longer just looking to do boring old Infrastructure or Software as a Service. We really seem to want everything as a Service.

[image courtesy of NetApp]

Why?

There are some compelling reasons to consume things as a service via operating expenditure rather than upfront capital expenditure. In the olden days, when I needed some storage for my company, I usually had a line item in the budget for some kind of storage array. What invariably happened was that the budget would only be made available once every 3 – 5 years. It doesn’t make any sense necessarily, but I’m sure there are accounting reasons behind it. So I would try to estimate how much storage the company would need for the next 5 years (and usually miss the estimate by a country mile). I’d then buy as much storage as I could and watch it fill up at an alarming rate.

The other problem with this approach was that we were paying for spindles that weren’t necessarily in use for the entirety of the asset’s lifecycle. There was also an issue where some storage vendors would offer special discounting to buy everything up front. When you went to add additional storage, however, you’d be slugged with pricing that was nowhere near as good as what you’d have got by buying everything up front. The appeal of solutions like storage as a service is that you can start with a smallish footprint and grow it as required, spending what you need, and going from there. It’s also nice for the vendors, as the sales engagement is a little more regular, and the opportunity to sell other services into the environment that may not have been identified previously becomes a reality.
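To make that concrete, here’s a toy model of the two approaches. Every figure in it is a made-up assumption, but it captures the shape of the trade-off: the upfront buyer pays for five years of estimated capacity on day one, while the as-a-service consumer pays monthly for what’s actually in use:

```python
# A toy comparison of an upfront array purchase vs consumption pricing.
# Every figure here is a made-up assumption for illustration only.
YEARS = 5
UPFRONT_TB = 500                # capacity bought on day one to cover 5 years
UPFRONT_PRICE_PER_TB = 300.0    # assumed discounted upfront $/TB

START_TB = 100                  # capacity actually needed in year one
ANNUAL_GROWTH = 1.4             # assumed 40% capacity growth per year
MONTHLY_PRICE_PER_TB = 8.0      # assumed as-a-service $/TB/month

capex_total = UPFRONT_TB * UPFRONT_PRICE_PER_TB

opex_total = 0.0
tb = START_TB
for _ in range(YEARS):
    opex_total += tb * MONTHLY_PRICE_PER_TB * 12
    tb *= ANNUAL_GROWTH

print(f"upfront: ${capex_total:,.0f} for {UPFRONT_TB} TB sitting there from day one")
print(f"as-a-service: ${opex_total:,.0f}, paid monthly for capacity in use")
```

The point isn’t which total comes out lower (change the assumptions and it flips); it’s that the consumption model tracks actual usage, so a bad five-year estimate doesn’t cost you anything up front.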

No, But Really Why?

If you’ve watched the NetApp Keystone presentation, and maybe checked out the supporting documentation, you’re going to wonder why folks aren’t just moving everything to public cloud, and skipping the disk slinging middle man. As anyone who’s worked with or consulted for enterprise IT organisations will be able to tell you though, it’s rarely that simple. There may indeed be appetite to leverage public cloud storage services, for example, but there may also be a raft of reasons why this can’t be done, including latency requirements, legacy application support, data sovereignty concerns, and so forth.

[image courtesy of NetApp]

Sometimes the best thing that can happen is that there’s a compromise to be had between the desire for the business to move to a different operating model and the ability for the IT organisation to service that need.

 

Thoughts and Further Reading

The growth of XaaS over the last decade has been fascinating to see. There really is an expectation that you can do pretty much anything as a service, and folks are queuing up for the privilege. As I mentioned earlier, I think there are reasons why it’s popular on both sides, and I certainly don’t want to come across as some weird on-premises storage hugger who doesn’t believe the future of infrastructure is heavily entwined with as-a-service offerings. Heck, my day job is at a company that is entirely built on this model. What I do wonder at times is whether folks in organisations looking to transform their services are really ready to relinquish control of the infrastructure part of the equation in exchange for a per-GB, per-month option. Offerings like Keystone aren’t just fancy financial models to make getting kit on the floor easier, they’re changing the way that vendors and IT organisations interact at a fairly fundamental level. In much the same way that public cloud has changed the role of the IT bod in the organisation, so too does XaaS change that value proposition.

I think the folks at NetApp have done quite a good job with Keystone, particularly recognising that there is still a place for on-premises infrastructure, but acknowledging that the market wants both a “cloud-like” experience, and a new way of consuming these services. I’ll be interested to see how Keystone develops over the next 12 – 24 months now that it’s been released to the market at large. We all talk about as a service being the future, so I’m keen to see if folks are really buying it.

Intel Optane – Challenges and Triumphs

Disclaimer: I recently attended Storage Field Day 21.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Intel recently presented at Storage Field Day 21. You can see videos of the presentation here, and download my rough notes from here.

 

Alive and Kicking

Kristie Mann, Sr. Director Products, Intel Optane Group, kicked off the session by telling us that “Intel Optane is alive and well”. I don’t think anyone thought it was really going away, particularly given the effort that folks inside Intel have put in to get this product to market. But from a consumer perspective, it’s potentially been a little confusing.

[image courtesy of Intel]

In terms of data centre penetration, it’s been a different story, and taking Optane from idea to reality has been quite a journey. It was also noted that the “strong uptake of PMem in HPC was unexpected”, but no doubt welcome.

 

Learnings

Some of the other learnings that were covered as part of the session were as follows.

Software Really Matters

It’s one thing to come out with some super cool hardware that is absolutely amazing, but it’s quite another to get software support for that hardware. Unfortunately, the hardware doesn’t give you much without the software, no matter how well it performs. While this has been something of a challenge for Optane until recent times, there’s definitely been more noise from the big ISVs about enhanced Optane support.

IaaS Adoption

Adoption in IaaS has not been great, mainly due to some uneven performance. This will only improve as the software support matures. But the IaaS market can be tough for a bunch of reasons. IaaS vendors are trying to deliver things at a certain price point. That doesn’t mean that they’re still making people run VMs on spinning disk (hopefully), but rolling out All-Flash support for platforms is something that’s only going to be done when the $/GB makes sense for the providers. You also might have seen in the field that IaaS providers are pretty particular about performance and quality of service. That’s understandable when you’re trying to host a whole bunch of different workloads at large scale. So it makes sense that they’d be somewhat cautious about launching new media types on their platforms without running through a whole bunch of performance and integration testing. I’m not saying they’re not going to get there, they just may not be the first cabs off the rank.

Can you spell OEM?

OEM qualifications have been slow to date with Optane. This is key to getting the product out there. Enterprise folks don’t like to buy things until their favourite Tier 1 vendors are offering it as a default option in their server / storage array / fabric switch. If Dell has the Optane Inside sticker (not a real thing, but you know what I mean), the infrastructure architects inside large government entities are more likely to get on board.

Battling The Status Quo

Status quo thinking makes it hard to understand that Optane isn’t just memory or storage. This has been something of a problem for Intel since Optane became a thing. I’m still having conversations with people and running up against significant confusion about the difference between PMem and Optane SSDs. I think that’s going to improve as time goes on, but it can make things difficult when it comes to broad market penetration.
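The distinction is easier to see in code. With an Optane SSD, data travels through the NVMe block stack in block-sized units; with PMem in App Direct mode, the application maps the media and performs ordinary loads and stores. Here’s a minimal sketch, assuming a DAX-mounted PMem filesystem at a hypothetical /mnt/pmem; real code would also flush CPU caches (e.g. via libpmem) to guarantee persistence:

```python
# A sketch of what makes PMem different from an Optane SSD: the application
# memory-maps a file on a DAX-enabled filesystem and performs byte-granularity
# loads and stores with no block I/O in the path. The mount point is
# hypothetical, and real code would also flush CPU caches (e.g. via libpmem)
# to guarantee persistence.
import mmap
import os

PATH = "/mnt/pmem/example.dat"  # assumed fsdax-mounted PMem namespace
SIZE = 4096

fd = os.open(PATH, os.O_CREAT | os.O_RDWR, 0o644)
os.ftruncate(fd, SIZE)

with mmap.mmap(fd, SIZE) as buf:
    buf[0:5] = b"hello"  # an ordinary store lands directly on persistent media
    print(buf[0:5])
os.close(fd)
```

Same bytes, completely different path, and that’s exactly the distinction that trips people up.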

Thoughts and Further Reading

I don’t want people reading this to think that I’m down on Intel and what it’s done with Optane. If anything, I’m really into it. I enjoyed the presentation at Storage Field Day 21 tremendously, and not just because my friend Howard was on the panel repping VAST Data. It’s unusual that a vendor as big as Intel would be so frank about some of the challenges that it’s faced with getting new media to market. But I think it’s the willingness to share some of this information that demonstrates how committed Intel is to Optane moving forward. I was lucky enough to speak to Intel Senior Fellow Al Fazio about the Optane journey, and it was clear that there’s a whole lot of innovation and sweat that goes into making a product like this work.

Some folks think that these panel presentations are marketing disguised as a presentation. Invariably, the reference customers are friendly with the company, and you’ll only ever hear good stories. But I think those stories from those customers are still extremely powerful. After all, having a customer jump on a session to tell the world about how good your product has been means you’ve done something right. As a consumer of these products, I find these kinds of testimonials invaluable. Ultimately, products are successful in the market when they serve the market’s needs. From what I can see, Intel Optane is on its way to meeting those needs, and it has a bright future.

Hammerspace, Storageless Data, And One Tough Problem

Disclaimer: I recently attended Storage Field Day 21.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Hammerspace recently presented at Storage Field Day 21. You can see videos of the presentation here, and download my rough notes from here.

 

Storageless Data You Say?

David Flynn kicked off the presentation from Hammerspace talking about storageless data. Storageless data? What on earth is that, then? Ultimately your data has to live on storage somewhere, but this is all about consumption-side abstraction. Hammerspace doesn’t want you to care about how your application maps to servers, or how it maps to storage. It’s a more data-focussed approach to storage than we’re used to, perhaps. Some of the key requirements of the solution are as follows:

  • The agent needs to run on everything – virtual, physical, containers – it can’t be bound to specific hardware
  • Needs to be multi-vendor and support multi-protocol
  • Presumes metadata
  • Make data into a routed resource
  • Deliver objective-based orchestration (sketched below)

The trick is that you have to be able to do all of this without killing the benefits of the infrastructure (performance, reliability, cost, and management). Simple, huh?
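To make “objective-based orchestration” a little more concrete: rather than scripting data movement imperatively, you declare the outcomes you want (copies, tiers, locations) and the system reconciles placement against them. The sketch below is entirely hypothetical; the field names are invented for illustration and are not Hammerspace’s actual policy syntax or API:

```python
# An entirely hypothetical sketch of objective-based orchestration: declare
# the outcomes you want for a share, and let the system reconcile placement
# against them. Field names are invented for illustration and are not
# Hammerspace's actual policy syntax or API.
objective = {
    "share": "/projects/render",
    "rules": [
        {"copies": 2, "sites": ["site-sydney"]},         # keep 2 copies locally
        {"copies": 1, "sites": ["aws-ap-southeast-2"]},  # plus 1 copy in cloud
    ],
}

def satisfied(placements, rules):
    """Toy check: does the current data placement meet every declared rule?"""
    return all(
        any(p["site"] in rule["sites"] and p["copies"] >= rule["copies"]
            for p in placements)
        for rule in rules
    )

current = [
    {"site": "site-sydney", "copies": 2},
    {"site": "aws-ap-southeast-2", "copies": 1},
]
print(satisfied(current, objective["rules"]))  # True, so no data movement needed
```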

Stitching It Together

A key part of the Hammerspace story is the decoupling of the control plane and the data plane. This allows it to focus on getting the data where it needs to be, from edge to cloud, and over whatever protocol it needs to be done over.

[image courtesy of Hammerspace]

Other Notes

Hammerspace officially supports 8 sites at the moment, and the team has tested the solution with 32 sites. It uses an eventually consistent model, and the Global Namespace is global per share, providing flexible deployment options. Metadata replication can be set up to be periodic – and customised at each site. You always rehydrate the data and serve it locally over NAS via SMB or NFS.

Licensing Notes

Hammerspace is priced on capacity (data under management). You can also purchase it via the AWS Marketplace. Note that Hammerspace offers up to 10TB of data under management free on the public cloud vendors (AWS, GCP, Azure).

 

Thoughts and Further Reading

I was fortunate to have a followup session with Douglas Fallstrom and Brendan Wolfe to revisit the Hammerspace story, ask a few more questions, and check out some more demos. I asked Fallstrom about the kind of use cases they were seeing in the field for Hammerspace. One popular use case was for disaster recovery. Obviously, there’s a lot more to doing DR than just dumping data in multiple locations, but it seems that there’s appetite for this very thing. At a high level, Hammerspace is a great choice for getting data into multiple locations, regardless of the underlying platform. Sure, there’s a lot more that needs to be done once it’s in another location, or when something goes bang. But from the perspective of keeping things simple, this one is up there.

Fallstrom was also pretty clear with me that this isn’t Primary Data 2.0, regardless of the number of folks that work at Hammerspace with that heritage. I think it’s a reasonable call, given that Hammerspace is doubling down on the data story, and really pushing the concept of a universal file system, regardless of location or protocol.

So are we finally there in terms of data abstraction? It’s been a problem since computers became common in the enterprise. As technologists we frequently get caught up in the how, and not as much in the why of storage. It’s one thing to say that I can scale this to this many Petabytes, or move these blocks from this point to that one. It’s an interesting conversation for sure, and has proven to be a difficult problem to solve at times. But I think as a result of this, we’ve moved away from understanding the value of data, and data management, and focused too much on the storage and services supporting the data. Hammerspace has the noble goal of moving us beyond that conversation to talking about data and the value that it can bring to the enterprise. Is it there yet in terms of that goal? I’m not sure. It’s a tough thing to be able to move data all over the place in a reliable fashion and still have it do what it needs to do with regards to performance and availability requirements. Nevertheless I think that the solution does a heck of a lot to remove some of the existing roadblocks when it comes to simplified data management. Is serverless compute really a thing? No, but it makes you think more about the applications rather than what they run on. Storageless data is aiming to do the same thing. It’s a bold move, and time will tell whether it pays off or not. Regardless of the success or otherwise of the marketing team, I’m thinking that we’ll be seeing a lot more innovation coming out of Hammerspace in the near future. After all, all that data isn’t going anywhere any time soon. And someone needs to take care of it.

Storage Field Day 21 – (Fairly) Full Disclosure

Disclaimer: I recently attended Storage Field Day 21.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Here are my notes on gifts, etc, that I received as a conference attendee at Storage Field Day 21. This is by no stretch an interesting post from a technical perspective, but it’s a way for me to track and publicly disclose what I get and how it looks when I write about various things. With all of … this stuff … happening, it’s not going to be as lengthy as normal, but I did receive a couple of boxes of stuff in the mail, so I wanted to disclose it.

The Tech Field Day team sent a keyboard cloth (a really useful thing to save the monitor on my laptop from being bashed against the keyboard), a commemorative TFD coin, and some TFD patches. The team also sent me a snack pack with a variety of treats in it, including Crunch ‘n Munch caramel popcorn with peanuts, fudge brownie M&M’s, Pop Rocks, Walker’s Monster Munch pickled onion flavour baked corn snacks, peanut butter Cookie Dough Bites, Airheads, Razzles, a giant gobstopper, Swedish Fish, a Butterfinger bar, some Laffy Taffy candy, Hershey’s Kisses, Chewy Lemonhead, Bottlecaps, Airheads, Candy Sours and Milk Duds. I don’t know what most of this stuff is but I guess I’ll find out. I can say the pickled onion flavour baked corn snacks were excellent.

Pliops came through with the goods and sent me a Lume Cube Broadcast Lighting Kit. Hammerspace sent a stainless steel water bottle and Hammerspace-branded Leeman notepad. Nasuni threw in a mug, notepad, and some pens, while NetApp gave me a travel mug and notepad. Tintri kindly included a Tintri trucker cap, Tintri-branded hard drive case and Tintri-branded OGIO backpack in the swag box.

My Secret Santa gift was the very excellent “Working for the clampdown: The Clash, the dawn of neoliberalism and the political promise of punk”, edited by Colin Coulter.

It wasn’t fancy food and limos this time around. But it was nonetheless an enjoyable event. Hopefully we can get back to in-person events some time this decade. Thanks again to Stephen and the team for having me back.

Back To The Future With Tintri

Disclaimer: I recently attended Storage Field Day 21.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Tintri recently presented at Storage Field Day 21. You can see videos of the presentation here, and download my rough notes from here.

 

Tintri? 

Remember Tintri? The company was founded in 2008, fell upon difficult times in 2018, and was acquired by DDN. It’s still going strong, and now offers a variety of products under the Tintri brand, including VMstore, IntelliFlash, and NexentaStor. I’ve had exposure to all of these different lines of business over the years, and was interested to see how it was all coming together under the DDN acquisition.

 

Does Your Storage Drive Itself?

Ever since I got into the diskslinger game, self-healing infrastructure has been talked about as the next big thing in terms of reducing operational overheads. We build this stuff and can teach it how to do things, so surely we can get it to fix itself when it goes bang? As those of you who’ve been in the industry for some time would likely know, we’re still some way off that being a reality across a broad range of infrastructure solutions. But we do seem closer than we were a while ago.

Autonomous Infrastructure

Tintri spent some time talking about what it was trying to achieve with its infrastructure by comparing it to autonomous vehicle development. If you think about it for a minute, it’s a little easier to grasp the concept of a vehicle driving itself somewhere, using a lot of telemetry and little computers to get there, than it is to think about how disk storage might be able to self-repair and redirect resources where they’re most needed. Of most interest to me was the distinction made between analytics and intelligence. It’s one thing to collect a bunch of telemetry data (something that storage companies have been reasonably good at for some time now) and analyse it after the fact to come to conclusions about what the storage is doing well and what it’s doing poorly. It’s quite another thing to use that data on the fly to make decisions about what the storage should be doing, without needing the storage manager to intervene.

[image courtesy of Tintri]

If you look at the various levels of intelligence, you’ll see that autonomy eventually kicks in and the concept of supervision and management moves away. The key to the success of this is making sure that your infrastructure is doing the right things autonomously.

So What Do You Really Get?

[image courtesy of Tintri]

You get an awful lot from Tintri in terms of information that helps the platform decide what it needs to do to service workloads in an appropriate fashion. It’s interesting to see how the different layers deliver different outcomes in terms of frequency as well. Some of this is down to physics, and time to value. The info in the cloud may not help you make an immediate decision on what to do with your workloads, but it will certainly help when the hapless capacity manager comes asking for the 12-month forecast.

 

Conclusion

I was being a little cheeky with the title of this post. I was a big fan of what Tintri was able to deliver in terms of storage analytics with a virtualisation focus all those years ago. It feels like some things haven’t changed, particularly when looking at the core benefits of VMstore. But that’s okay, because all of the things that were cool about VMstore back then are still actually cool, and absolutely still valuable in most enterprise storage shops. I don’t doubt that there are VMware shops that have taken up vVols and wouldn’t get as much out of VMstore as those shops running oldey timey LUNs, but there are plenty of organisations that just need storage to host VMs on, storage that gives them insight into how it’s performing. Maybe it’s even storage that can move some stuff around on the fly to make things work a little better.

It’s a solid foundation upon which to add a bunch of pretty cool features. I’m not 100% convinced that what Tintri is proposing is the reality in a number of enterprise shops (have you ever had to fill out a change request to storage vMotion a VM before?), but that doesn’t mean it’s not a noble goal, and it’s certainly one worth pursuing. I’m a fan of any vendor that is actively working to take the work out of infrastructure, allowing people to focus on the business of doing business (or whatever it is they need to focus on). It looks like Tintri has made some real progress towards reducing the overhead of infrastructure, and I’m keen to see how that plays out across the product portfolio over the next year or two.

 

 

Storage Field Day 21 – I’ll Be At Storage Field Day 21

Here’s some news that will get you excited. I’ll be virtually heading to the US next week for another Storage Field Day event. If you haven’t heard of the very excellent Tech Field Day events, you should check them out. It’s also worth visiting the Storage Field Day 21 website during the event (January 20 – 22) as there’ll be video streaming and updated links to additional content. You can also see the list of delegates and event-related articles that have been published.

I think it’s a great line-up of both delegates and presenting companies this time around. I know most of them, but there may also still be a few companies added to the line-up. I’ll update this if and when they’re announced.

I’d like to publicly thank in advance the nice folks from Tech Field Day who’ve seen fit to have me back, as well as my employer for letting me take time off to attend these events. Also big thanks to the companies presenting. It’s going to be a lot of fun. Last time was a little weird doing this virtually, rather than in person, but I think it still worked. I’m really looking forward to this, even if it means doing the night shift for a few days. I’ll post details of the presentation times when I have them.

[Update – here’s the schedule]

Wednesday, Jan 20, 9:30-11:00 – MinIO Presents at Storage Field Day 21. Presenters: AB Periasamy, Daniel Valdiva, Eco Willson

Wednesday, Jan 20, 12:00-15:30 – Tintri Presents at Storage Field Day 21. Presenters: Erwin Daria, Rob Girard, Shawn Meyers, Tomer Hagay Nevel

Thursday, Jan 21, 8:00-10:00 – NetApp Presents at Storage Field Day 21. Presenters: Arun Raman, Dave Krenik, Jeffrey Stein, Mike McNamara, Sunitha Rao

Thursday, Jan 21, 11:00-13:00 – Nasuni Presents at Storage Field Day 21. Presenter: Andres Rodriguez

Friday, Jan 22, 8:00-9:30 – Hammerspace Presents at Storage Field Day 21. Presenters: David Flynn, Douglas Fallstrom

Friday, Jan 22, 10:30-11:30 – Pliops Presents at Storage Field Day 21

Friday, Jan 22, 12:30-14:30 – Intel Presents at Storage Field Day 21