Storage Field Day 21 – (Fairly) Full Disclosure

Disclaimer: I recently attended Storage Field Day 21.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Here are my notes on gifts, etc, that I received as a conference attendee at Storage Field Day 21. This is by no stretch an interesting post from a technical perspective, but it’s a way for me to track and publicly disclose what I get and how it looks when I write about various things. With all of … this stuff … happening, it’s not going to be as lengthy as normal, but I did receive a couple of boxes of stuff in the mail, so I wanted to disclose it.

The Tech Field Day team sent a keyboard cloth (a really useful thing to save the monitor on my laptop from being bashed against the keyboard), a commemorative TFD coin, and some TFD patches. The team also sent me a snack pack with a variety of treats in it, including Crunch ‘n Munch caramel popcorn with peanuts, fudge brownie M&M’s, Pop Rocks, Walker’s Monster Munch pickled onion flavour baked corn snacks, peanut butter Cookie Dough Bites, Airheads, Razzles, a giant gobstopper, Swedish Fish, a Butterfinger bar, some Laffy Taffy candy, Hershey’s Kisses, Chewy Lemonhead, Bottlecaps, Airheads, Candy Sours and Milk Duds. I don’t know what most of this stuff is but I guess I’ll find out. I can say the pickled onion flavour baked corn snacks were excellent.

Pliops came through with the goods and sent me a Lume Cube Broadcast Lighting Kit. Hammerspace sent a stainless steel water bottle and Hammerspace-branded Leeman notepad. Nasuni threw in a mug, notepad, and some pens, while NetApp gave me a travel mug and notepad. Tintri kindly included a Tintri trucker cap, Tintri-branded hard drive case and Tintri-branded OGIO backpack in the swag box.

My Secret Santa gift was the very excellent “Working for the clampdown: The Clash, the dawn of neoliberalism and the political promise of punk”, edited by Colin Coulter.

It wasn’t fancy food and limos this time around. But it was nonetheless an enjoyable event. Hopefully we can get back to in-person events some time this decade. Thanks again to Stephen and the team for having me back.

Back To The Future With Tintri

Disclaimer: I recently attended Storage Field Day 21.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Tintri recently presented at Storage Field Day 21. You can see videos of the presentation here, and download my rough notes from here.

 

Tintri? 

Remember Tintri? The company was founded in 2008, fell upon difficult times in 2018, and was acquired by DDN. It’s still going strong, and now offers a variety of products under the Tintri brand, including VMstore, IntelliFlash, and NexentaStor. I’ve had exposure to all of these different lines of business over the years, and was interested to see how it was all coming together under the DDN acquisition.

 

Does Your Storage Drive Itself?

Ever since I got into the diskslinger game, self-healing infrastructure has been talked about as the next big thing in terms of reducing operational overheads. We build this stuff and teach it how to do things, so surely we can get it to fix itself when it goes bang? As those of you who’ve been in the industry for some time would likely know, we’re still some ways off that being a reality across a broad range of infrastructure solutions. But we do seem closer than we were a while ago.

Autonomous Infrastructure

Tintri spent some time talking about what it was trying to achieve with its infrastructure by comparing it to autonomous vehicle development. If you think about it for a minute, it’s a little easier to grasp the concept of a vehicle driving itself somewhere, using a lot of telemetry and little computers to get there, than it is to think about how disk storage might be able to self-repair and redirect resources where they’re most needed. Of most interest to me was the distinction made between analytics and intelligence. It’s one thing to collect a bunch of telemetry data (something that storage companies have been reasonably good at for some time now) and analyse it after the fact to come to conclusions about what the storage is doing well and what it’s doing poorly. It’s quite another thing to use that data on the fly to make decisions about what the storage should be doing, without needing the storage manager to intervene.

[image courtesy of Tintri]

If you look at the various levels of intelligence, you’ll see that autonomy eventually kicks in and the concept of supervision and management moves away. The key to the success of this is making sure that your infrastructure is doing the right things autonomously.

So What Do You Really Get?

[image courtesy of Tintri]

You get an awful lot from Tintri in terms of information that helps the platform decide what it needs to do to service workloads in an appropriate fashion. It’s interesting to see how the different layers deliver different outcomes in terms of frequency as well. Some of this is down to physics, and some to time to value. The info in the cloud may not help you make an immediate decision on what to do with your workloads, but it will certainly help when the hapless capacity manager comes asking for the 12-month forecast.

 

Conclusion

I was being a little cheeky with the title of this post. I was a big fan of what Tintri was able to deliver in terms of storage analytics with a virtualisation focus all those years ago. It feels like some things haven’t changed, particularly when looking at the core benefits of VMstore. But that’s okay, because all of the things that were cool about VMstore back then are still actually cool, and absolutely still valuable in most enterprise storage shops. I don’t doubt that there are VMware shops that have definitely taken up vVols, and wouldn’t get as much out of VMstore as those shops running oldey timey LUNs, but there are plenty of organisations that just need storage to host VMs on, storage that gives them insight into how it’s performing. Maybe it’s even storage that can move some stuff around on the fly to make things work a little better.

It’s a solid foundation upon which to add a bunch of pretty cool features. I’m not 100% convinced that what Tintri is proposing is the reality in a number of enterprise shops (have you ever had to fill out a change request to storage vMotion a VM before?), but that doesn’t mean it’s not a noble goal, and certainly one worth pursuing. I’m a fan of any vendor that is actively working to take the work out of infrastructure, and allowing people to focus on the business of doing business (or whatever it is that they need to focus on). It looks like Tintri has made some real progress towards reducing the overhead of infrastructure, and I’m keen to see how that plays out across the product portfolio over the next year or two.

 

 

VMware – VMworld 2019 – HBI3516BUS – Scaling Virtual Infrastructure for the Enterprise: Truths, Beliefs and the Real World

Disclaimer: I recently attended VMworld 2019 – US.  My flights and accommodation were paid for by Digital Sense, and VMware provided me with a free pass to the conference and various bits of swag. There is no requirement for me to blog about any of the content presented and I am not compensated by VMware for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

These are my rough notes from "HBI3516BUS – Scaling Virtual Infrastructure for the Enterprise: Truths, Beliefs and the Real World", a panel session hosted by George Crump (of Storage Switzerland fame) and sponsored by Tintri by DDN. The panellists are identified by their initials in the notes below.

JP: Hyper-V is not really for the enterprise. Configuration, and automation were a challenge. Tintri made it easier to deal with the hypervisor.

JD: You put a bunch of disks and connect it up to what you want to. It’s really simple to setup. “Why would you want to go complex if you didn’t have to?”

MB: When we had block storage, we were beholden to the storage team. We’ve never had problems with their [Tintri’s] smallest hybrid arrays.

AA: Back in the ESX 2.5 days – single LUN per VM. We would buy our arrays half-populated – ready to grow. We’re now running 33 – 34 devices. Tintri was great with QoS for VMs. It became a great troubleshooting tool for VMware.

GC: Reporting and analytics with Tintri has always been great.

MB: We use Tintri analytics to create reports for global infrastructure. Tintri will give you per-VM allocation by default. Performance like a Tivo – you can go back and look at analytics at a very granular level.

GC: How did the addition of new arrays go with Global Center?

MB: We manage our purchases based on capacity or projects. 80 – 85% we consider additional capacity. Global Center has a Pools function. It does a storage vMotion “like” feature to move data between arrays. There’s no impact.

JP: We used a UCS chassis, Tintri arrays, and Hyper-V hypervisor. We used a pod architecture. We knew how many users we wanted to host per pod. We have 44000 users globally. VDI is the only thing the bank uses.

AA: We’re more of a compute / core based environment, rather than users.  One of the biggest failings of Tintri is that it just works. When you’re not causing problems – people aren’t paying attention to it.

MB: HCI in general has a problem with very large VMs.

AA: We use a lot of scripting, particularly on the Red Hat (RHV) side of things. Tintri is fixing a lot of those at a different level.

GC: What would you change?

JP: I would run VMware.

MB: The one thing that can go wrong is the network. It was never a standardised network deployment. We had different network people in different regions doing different things.

JP: DR in the cloud. How do you do bank infrastructure in the cloud? Can we DR into the cloud? Tested Tintri replicating into Azure.

AA: We’re taking on different people. Moving “up” the stack.

Consistency in environments. It’s still a hard thing to do.

Wishlist?

  • Containers
  • A Virtual Appliance

 

Thoughts

Some folks get upset about these sponsored sessions at VMworld. I’ve heard it said before that they’re nothing more than glorified advertising for the company that sponsors the session. I’m not sure that it’s really any different to a vendor holding a four day conference devoted to themselves, but some people like to get ornery about stuff like that. One of my favourite things about working with technology is hearing from people out in the field about how they use that technology to do their jobs better / faster / more efficiently.

Sure, this session was a bit of a Tintri fan panel, but I think the praise is warranted. I’ve written enthusiastically in the past about how I thought Tintri has really done some cool stuff in terms of storage for virtualisation. I was sad when things went south for them as a company, but I have hopes that they’ll recover and continue to innovate under the control of DDN.

When everything I’d been hearing from the keynote speakers at this conference revolved around cloud-native tools and digital transformation, it was interesting to come across a session where the main challenges still involved getting consistent, reliable and resilient performance from block storage to serve virtual desktop workloads to the enterprise. That’s not to say that we shouldn’t be looking at what’s happening with Kubernetes, etc, but I think there’s still room to understand what’s making these bigger organisations tick in terms of successful storage infrastructure deployments.

Useful session. 4 stars.

Tintri Announces Centralised Upgrades

Announcement

Tintri recently announced centralised upgrades for users of Tintri Global Center (TGC). I normally wouldn’t get too excited about minor innovations from storage vendors, but I do get a little dizzy when I hear about vendors making life easier for the hapless storage admin. In this case, a new TGC feature lets you bulk-select the storage arrays you’re managing and set them to upgrade.

This probably isn’t a major issue if you’re running one or two arrays, but if you have 8 or 16 under your watch (or up to 64 per TGC), then this is going to save you a bit of time at the console.
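
I haven’t scripted this particular feature myself, but if you wanted to drive it programmatically rather than clicking through the console, the general shape would look something like the sketch below. Note that the endpoint paths, payloads and field names are placeholders I’ve made up for illustration – they’re not the documented TGC API, so check Tintri’s API documentation before trying anything like this.

```python
import requests

TGC = "https://tgc.example.com/api"   # hypothetical base URL for your TGC instance
DESIRED_VERSION = "4.4.1.1"           # placeholder target release

session = requests.Session()
session.verify = False  # lab shortcut only; use proper certificates in production

# Log in - the endpoint and payload here are assumptions, not the documented API.
session.post(f"{TGC}/session/login",
             json={"username": "admin", "password": "changeme"})

# Fetch the VMstores that TGC is managing and pick out the ones on an older release.
vmstores = session.get(f"{TGC}/vmstore").json()
to_upgrade = [v for v in vmstores if v.get("osVersion") != DESIRED_VERSION]

# Submit the centralised upgrade for the whole batch in one request.
resp = session.post(f"{TGC}/vmstore/upgrade",
                    json={"uuids": [v["uuid"] for v in to_upgrade]})
print(f"Requested upgrade for {len(to_upgrade)} arrays: HTTP {resp.status_code}")
```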

 

Conclusion

I remember when I started out with midrange storage arrays that the process to upgrade them was tedious at best and oftentimes went pear-shaped thanks to odd behaviour with Java or mis-typed commands at a console. The process to perform the upgrade often ran to tens of pages and involved an awful lot of pre-flight checks. The worst part of broken upgrades was the requirement to trudge to the data centre to find out what was up with that, and if you were lucky, you had a sophisticated enough connectivity solution that your storage vendor could access the array and fix things remotely.

Thankfully, the days of less-than-seamless array upgrades with hundreds of steps are behind us. Instead, most vendors have introduced automated mechanisms to deliver a painless upgrade process that can be performed during the day. Tintri have taken this philosophy a step further and made it easier to do this at scale. I’m all for vendors introducing technology that means I don’t have to perform repetitive tasks, particularly when it comes to mundane operational activities like storage operating environment upgrades.

WHOA.com Are A Happy Tintri Customer

Disclaimer: This is a sponsored post and you’ll probably see the content elsewhere on the Internet. Tintri provided no editorial input and the words and opinions in this post are my own.

Introduction

I recently had the opportunity to speak to Brock Mowry (CTO of WHOA.com) about the company’s experience adopting Tintri in their environment. You can read the case study on Tintri’s website, but sometimes it’s nice to get a perspective straight from the source. If you don’t know of WHOA.com, they were established in 2013 and deliver a “[c]ybersecure cloud hosting platform with an emphasis on compliance workloads, [including] HIPAA regulation and PCI regulation”. They have a data centre presence in Las Vegas, NV and Miami, FL and plans to expand that footprint.

 

Challenges?

I asked Mowry what one of the main challenges was as a growing cloud service provider and he said “[s]torage was one of the challenges”. The problem, it seems, was when they looked at how much time they spent on keeping the environment running, there was a lot of operational overhead with their storage platform, and they “didn’t want to be scaling by head count – [they] wanted to scale by technology”.

 

What solutions did they look at?

According to Mowry, at WHOA.com they “optimise [the] network for NFS traffic and get really, really good results operating NFS in [the] infrastructure. Again, Tintri being an NFS-based platform, it was really an easy choice from there”. The benefit of deploying an IP-based storage solution was that they were “able to eliminate an entire fibre channel fabric within [the] data centre”. The added benefit of this was that they were able to reduce the number of “employees that are required to operate that platform. That’s a huge cost saving for [them] because at the end of the day head count is typically one of the most expensive things to operate a cloud infrastructure”.

 

Why not look at hyperconverged solutions then?

It turns out they looked at a number of hyperconverged vendors, including solutions from Nutanix and Cisco. At the time they ran across a problem with the converged nature of the resources in hyperconverged environments. Mowry provided an example where there was a “need to increase […] CPU and RAM capacity to meet a customer’s workload. Well now I’m sitting on a bunch of excess storage that I really don’t want to power, I really don’t want to cool, and I really don’t want to manage, because it’s not needed”. Note that a number of vendors now offer solutions to that problem, with “storage-only” nodes being available to counter the requirement to scale memory, CPU and storage in equal amounts. At the time, however, Mowry felt that it was best to go with what he describes as a “broken-out” architecture, where they “have storage arrays or storage appliances and [they] have UCS blade systems so [they] can increase RAM and increase CPU to the customer’s workloads without having to scale out our storage at the same time where it might not be used”.

 

Why go All-Flash?

WHOA.com have deployed both All-Flash and hybrid arrays, because, as Mowry points out, they “have customers who are demanding that lower tier. And a lot of times they’re trying to hit a price point, they’re not trying to hit a performance point”.

 

Conclusion

WHOA.com are obviously very happy Tintri customers, but not simply because the Tintri arrays they’ve deployed give them per-VM control or nice APIs to use with their own products. Vendors often focus on the technical advantages of the solutions they sell, because they think that’s what demonstrates value to their (potential) customers. But discussions around decreasing operational overhead and improving configuration simplicity by removing fibre channel fabrics are real world examples of how businesses can, in some instances, save money and improve their bottom line by choosing an architecture that aligns well with their operational strengths and experience. People are normally the most expensive part of any type of managed service, so if you can deploy efficient systems that don’t need a lot of people to run them, you’ll be in a good place.

Of interest also was the decision to continue with a decoupled infrastructure architecture that provided them with a solution that scales the way they want it to. In my opinion this is a great example of a business choosing a solution that suits them for a number of reasons, not all of which are technical. Customers like WHOA.com show how to understand your requirements (from both a technical and financial perspective), understand your market, and work to your strengths. You can download a full transcript of my chat with Mowry from here.

Tintri FlexDrive Goes GA – Is A Very Handy Feature

Disclaimer: This is a sponsored post and you’ll probably see the content elsewhere on the Internet. Tintri provided no editorial input and the words and opinions in this post are my own.

Tintri recently announced the general availability of FlexDrive, a storage expansion feature for EC6000 all-flash arrays that gives you the ability to increase system capacity to meet a specific storage need by adding drives incrementally. This feature is included at no charge with Tintri’s 4.4.1 release.

 

What is It?

With FlexDrive, you can purchase a partially populated EC6000 all-flash array (with as few as thirteen drives) and add as little as one drive at a time to increase the capacity. Once the drive’s been added, all you need to do is click on the "Expand" button in the management interface and you’re all set. It’s a non-disruptive activity that you can do yourself, so there’s no need for drawn-out change control meetings or extensive planning for support staff to be on-site to deploy the capacity. Imagine working with storage infrastructure that no longer forces you to deploy flash drives as if they were spinning disks. Instead of scrambling to deploy additional rack and power capacity that you don’t really need, you can slot in a drive and get what you need out of the equipment, rather than dreading the idea of a capacity upgrade.

 

Who’s the Estimator?

Many all-flash systems leverage data reduction technologies such as deduplication and compression to provide effective resource utilisation in a dense footprint. So how do you know how the addition of one drive to your array will impact the capacity and performance of the array? Tintri has incorporated an expansion estimator into its standard user interface so that you can model the impact of adding drives. The estimator also uses historical system workload profiles to help you get to the right number of extra drives, ensuring you get the necessary number to meet your target capacity. With the estimator by your side, you not only have clarity around what the future holds, you also get the benefit of some really smart analytics ensuring you get the performance and capacity outcome you need based on what you’ve used the array for previously.
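
The estimator itself is Tintri’s, but the basic arithmetic behind "how many more drives do I need?" is easy to picture. Here’s a rough sketch with made-up numbers that works backwards from a target effective capacity using an observed data reduction ratio – the real estimator also models performance and historical workload profiles, which this ignores entirely.

```python
import math

def drives_needed(target_effective_tb, current_effective_tb,
                  raw_tb_per_drive, usable_ratio, data_reduction_ratio):
    """Rough estimate of how many extra drives close the gap between what the
    array can hold today and the target effective capacity. All inputs are
    illustrative; a real estimator would also model performance headroom."""
    shortfall_tb = max(0.0, target_effective_tb - current_effective_tb)
    # Effective capacity one drive adds once formatting/overhead and data
    # reduction (dedupe + compression) are taken into account.
    effective_tb_per_drive = raw_tb_per_drive * usable_ratio * data_reduction_ratio
    return math.ceil(shortfall_tb / effective_tb_per_drive)

# Example: we want 40TB effective, have 28TB today, with 3.84TB drives,
# ~80% usable after overheads, and a 2.5:1 observed data reduction ratio.
print(drives_needed(40, 28, 3.84, 0.8, 2.5))   # -> 2 drives
```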

 

The Benefits are Real

Storage provisioning has been a somewhat clumsy process for years. Anyone familiar with deploying terabytes of disk storage will also be painfully familiar with the various vendors’ requirements to deploy disks in certain, minimum numbers to satisfy configuration and performance requirements. This approach was extremely important when spinning disk ruled the world. Those days are behind us though, with flash-dominant arrays becoming the norm in data centres all over the world. When you have flash as your primary medium, you don’t necessarily have to deploy a lot of it to meet your performance requirements. So why buy five more disks when you only need the capacity of two? It’s a waste of money, not just in terms of the asset, but also the additional costs, such as power, cooling and (possibly) additional rack space. All to meet a requirement that keeps systems from the early part of the century happy.

With FlexDrive you don’t need to do this, and nor should you have to. The estimator adds a level of intelligence to the expansion activity that has previously been an exercise in guesswork and hope. Tintri aren’t interested in making you buy trays of disk just because that’s the way it’s always been done. Tintri are interested in you getting the solution that matches your requirements in terms of capacity and performance. This approach yields real benefits when it comes to budget allocations and controlling costs for your storage environment. You no longer have to spend over the odds on storage to get the performance you need for the next six months.

Tintri ChatOps – Because All I Do Is Hang Out On Slack Anyway

 

I’m a bit behind the times with my tech news, but Tintri sent me a link to a video they did demonstrating their new "ChatOps" feature. I was going to make fun of it, but it’s actually pretty neat. If you’ve used Slack before, you probably know it’s got a fairly extensible engine that you can use to do a bunch of cool things. With ChatOps, you can send your Tintri arrays commands and things get done. Not only does it do stuff for you, it does it in a sensible / efficient fashion as well. And since I spend a lot of time on Slack in any case, this feature just might take off.
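
I haven’t seen the code behind Tintri’s integration, but the general pattern for this kind of thing is well established: Slack posts your slash command to a webhook, and a small service translates it into an API call against the array. A minimal sketch (using Flask, with an entirely made-up array endpoint) might look something like this:

```python
from flask import Flask, request, jsonify
import requests

app = Flask(__name__)
VMSTORE_API = "https://vmstore.example.com/api"   # hypothetical array endpoint

@app.route("/slack/tintri", methods=["POST"])
def tintri_command():
    # Slack slash commands arrive as form-encoded POSTs; "text" holds
    # everything after the command, e.g. "/tintri snapshot web-vm-01".
    args = request.form.get("text", "").split()
    if len(args) == 2 and args[0] == "snapshot":
        vm_name = args[1]
        # Hypothetical call - the real VMstore API paths will differ.
        requests.post(f"{VMSTORE_API}/vm/{vm_name}/snapshot",
                      json={"description": "requested via ChatOps"})
        reply = f"Snapshot requested for {vm_name}."
    else:
        reply = "Usage: /tintri snapshot <vm-name>"
    # "in_channel" makes the reply visible to everyone in the channel.
    return jsonify({"response_type": "in_channel", "text": reply})
```

The nice side effect of this pattern is that the channel history doubles as a lightweight audit trail of who asked the array to do what.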

You can read more about this and some other new features from Tintri at El Reg. And I agree with Chris that a focus by Tintri beyond table stakes is a smart move.

Testing Tintri’s Lightning Lab and Pizza

Disclaimer: I was offered a pizza to write this post.  I haven’t taken up the offer yet, but I will be.

[image: Tintri logo]

I had the opportunity to test drive Tintri’s “Lightning Lab” about six months ago and the nice folks at Tintri thought I might like to post about my experiences. They’ve offered me a pizza for my troubles, which, coincidentally, ties in nicely with their current promotion “The Tintri Pizza Challenge”. If you’re in the US or Canada it’s worth checking out.

In any case, the Lightning Lab is Tintri’s internet-accessible lab that showcases a number of its arrays and provides you with an opportunity to take their gear for a spin. From a hardware perspective it’s pretty well provisioned, with T5060, T880, T620 & T540 arrays, along with a Dell R720 host with 128GB of RAM and two Dell R610 servers with 48GB of RAM. From a software perspective, the version of the lab I used had VMware vSphere 5.5U2b installed, but I believe this has since been updated. There’s also a functional version of Tintri Global Center, and both the Web Client Plug-in and the vROps plugin configured. Networking-wise, management runs over a 1GbE Dell switch, with data travelling via a 10GbE Arista switch.

[image: Lightning Lab overview diagram]

Global Center has a pretty neat login screen. Like all good admins, I use many dots in my password too.

[image: Tintri Global Center login screen]

There’s a bunch of stuff I could show from the interface, but one of my favourite bits is the ability to see an aggregated view of your deployed VMstores.

[image: aggregated view of deployed VMstores in Global Center]

The interface is simple to operate and painfully colourful too. It’s also simple to navigate and makes it really easy to get a quick view of what’s going on in your environment without having to do a lot of digging.

 

Conclusion

There’s a lot more I could write about Tintri. If you’re aligned with their use case (NFS-only), they have a compelling offering that’s worth checking out. The Lightning Lab is an excellent tool to take their platform for a spin and gain a good understanding of just what you can do with the VMstore and Global Center. I think these kinds of offerings are great, and not just because there’s pizza involved. If more storage vendors read this and think that they should be doing something like this, then that’s a great thing. I’ve barely scratched the surface, so you should head over to Andrea Mauro’s blog and check out his thorough write-up of his Lightning Lab experience.

Storage Field Day 10 – Wrap-up and Link-o-rama

Disclaimer: I recently attended Storage Field Day 10.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

[image: Storage Field Day logo]

This is a quick post to say thanks once again to Stephen, Tom, Megan and the presenters at Storage Field Day 10. I had an enjoyable and educational time. For easy reference, here’s a list of the posts I did covering the event (they may not match the order of the presentations).

Storage Field Day – I’ll Be At SFD10

Storage Field Day 10 – Day 0

Storage Field Day 10 – (Fairly) Full Disclosure

Kaminario are doing some stuff we’ve seen before, but that’s okay

Pure Storage really aren’t a one-trick pony

Tintri Keep Doing What They Do, And Well

Nimble Storage are Relentless in Their Pursuit of Support Excellence

Cloudian Does Object Smart and at Scale

Exablox Isn’t Just Pretty Hardware

It’s Hedvig, not Hedwig

The Cool Thing About Datera Is Intent

Data Virtualisation is More Than Just Migration for Primary Data

 

Also, here are a number of links to posts by my fellow delegates (and Tom!). They’re all really quite smart, and you should check out their stuff, particularly if you haven’t before. I’ll try to keep this updated as more posts are published. But if it gets stale, the SFD10 landing page has updated links.

 

Chris M Evans (@ChrisMEvans)

Storage Field Day 10 Preview: Hedvig

Storage Field Day 10 Preview: Primary Data

Storage Field Day 10 Preview: Exablox

Storage Field Day 10 Preview: Nimble Storage

Storage Field Day 10 Preview: Datera

Storage Field Day 10 Preview: Tintri

Storage Field Day 10 Preview: Pure Storage

Storage Field Day 10 Preview: Kaminario

Storage Field Day 10 Preview: Cloudian

Object Storage: Validating S3 Compatibility

 

Ray Lucchesi (@RayLucchesi)

Surprises in flash storage IO distributions from 1 month of Nimble Storage customer base

Has triple parity Raid time come?

Pure Storage FlashBlade well positioned for next generation storage

Exablox, bring your own disk storage

Hedvig storage system, Docker support & data protection that spans data centers

 

Jon Klaus (@JonKlaus)

I will be flying out to Storage Field Day 10!

Ready for Storage Field Day 10!

Simplicity with Kaminario Healthshield & QoS

Breaking down storage silos with Primary Data DataSphere

Cloudian Hyperstore: manage more PBs with less FTE

FlashBlade: custom hardware still makes sense

Squashing assumptions with Data Science

Bringing hyperscale operations to the masses with Datera

Making life a whole lot easier with Tintri VM-aware storage

 

Enrico Signoretti (@ESignoretti)

VM-aware storage, is it still a thing?

Scale-out, flash, files and objects. How cool is Pure’s FlashBlade?

 

Josh De Jong (@EuroBrew)

 

Max Mortillaro (@DarkkAvenger)

Follow us live at Storage Field Day 10

Primary Data: a true Software-defined Storage platform?

If you’re going to SFD10 be sure to wear microdrives in your hair

Hedvig Deep Dive – Is software-defined the future of storage?

Pure Storage’s FlashBlade – Against The Grain

Pure Storage Flashblade is now available!

 

Gabe Maentz (@GMaentz)

Heading to Tech Field Day

 

Arjan Timmerman (@ArjanTim)

We’re almost live…

Datera: Elastic Data Fabric

 

Francesco Bonetti (@FBonez)

EXABLOX – A different and smart approach to NAS for SMB

 

Marco Broeken (@MBroeken)

 

Rick Schlander (@VMRick)

Storage Field Day 10 Next Week

Hedvig Overview

 

Tom Hollingsworth (@networkingnerd)

Flash Needs a Highway

 

Finally, thanks again to Stephen, Tom, Megan (and Claire in absentia). It was an educational and enjoyable few days and I really valued the opportunity I was given to attend.

[image: Storage Field Day 10 group photo]

Tintri Keep Doing What They Do, And Well

Disclaimer: I recently attended Storage Field Day 10.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

[image: Tintri logo]

Before I get into it, you can find a link to my notes on Tintri‘s presentation here. You can also see videos of the presentation here.

I’ve written about Tintri recently. As recently, in fact, as a week before I saw them at SFD10. You can check out my article on their most recent product announcements here.

 

VAS but not AAS (and that’s alright)

Tintri talk a lot about VM-aware Storage (or VAS as they put it). There’s something about the acronym that makes me cringe, but the sentiment is admirable. They put it all over their marketing stuff. They’re committed to the acronym, whether I like it or not. But what exactly is VM-aware Storage? According to Tintri, it provides:

  • VM-level QoS;
  • VM-level analytics;
  • VM data management;
  • VM-level automation with PowerShell and REST (see the sketch after this list); and
  • Support across multiple hypervisors (VMware, Hyper-V, OpenStack, Red Hat).
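
To make the automation point a little more concrete, here’s a rough sketch of the sort of per-VM reporting you could do against a REST API like Tintri’s. The endpoint paths and stat field names below are placeholders I’ve invented for illustration – the real API (and the PowerShell toolkit) will have its own names – so treat this as the shape of the thing rather than copy-and-paste material.

```python
import requests

VMSTORE = "https://vmstore.example.com/api"   # hypothetical base path
s = requests.Session()
s.verify = False  # lab shortcut only

# Log in and pull per-VM statistics. Endpoint and field names are invented
# stand-ins for whatever the real API exposes.
s.post(f"{VMSTORE}/session/login",
       json={"username": "admin", "password": "changeme"})
vms = s.get(f"{VMSTORE}/vm").json()

# Flag VMs whose latency is dominated by the host rather than the array -
# the same sort of per-VM breakdown the platform surfaces in its UI.
for vm in vms:
    stats = vm.get("stat", {})
    if stats.get("latencyHostMs", 0) > stats.get("latencyDiskMs", 0):
        print(f'{vm["name"]}: host latency {stats["latencyHostMs"]}ms '
              f'exceeds array latency {stats["latencyDiskMs"]}ms')
```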

Justin Lauer, Global Evangelist with Tintri, took us through a demo of VAS and the QoS capabilities built in to the Tintri platform.

[image: Justin Lauer presenting at Storage Field Day 10]

I particularly liked the fact that I can get a view of end to end latency (host / network / storage (contention and flash) / throttle latency). In my opinion this is something that people have struggled with for some time, and it looks like Tintri have a really good story to tell here. I also liked the look of the “Capacity gas gauge” (petrol for Antipodeans), providing an insight into when you’ll run out of either performance, capacity, or both.

So what’s AAS then? Well, in my mind at least, this is the ability to delve into application-level performance and monitoring, rather than just VM-level. And I don’t think Tintri are doing that just yet. Which, to my way of thinking, isn’t a problem, as I think a bunch of other vendors are struggling to really do this in a cogent fashion either. But I want to know what my key web server tier is doing, for example, and I don’t want to assume that it still lives on the datastore that I tagged for it when I first deployed it. I’m not sure that I get this with VAS, but I still think it’s a long way ahead of where we were a few years ago, getting stats out of volumes and not a lot else.

 

Further Reading and Conclusion

In the olden days (a good fifteen years ago) I used to struggle to get multiple Oracle instances to play nicely on the same NT4 host. But I didn’t have a large number of physical hosts to play with, and I had limited options when I wanted to share resources across applications. Virtualisation lets us slice up physical resources in a more precise fashion, and as a result it’s become simple to justify running one application per VM. In this way we can still get insights into our applications by understanding what our VMs are doing. This is no minor thing when you’re looking after storage in the enterprise – it’s a challenge at the best of times. Tintri has embraced the concept of intelligent analytics in their arrays in the same way that Nimble and Pure have started really making use of the thousands of data points that they collect every minute.

But what if you’re not running virtualised workloads? Well, you’re not going to get as much from this. But you’ve probably got a whole lot of different requirements you’re working to as well. Tintri is really built from the ground up to deliver insight into virtualised workloads that has been otherwise unavailable. I’m hoping to see them take it to the next level with application-centric monitoring.

Finally, Enrico did some more thorough analysis here that’s worth your time. And Chris’s SFD10 preview post on Tintri is worth a gander as well.