Tintri Announces Centralised Upgrades

Announcement

Tintri recently announced centralised upgrades for users of Tintri Global Center (TGC). I normally wouldn’t get too excited about minor innovations from storage vendors, but I do get a little dizzy when I hear about vendors making life easier for the hapless storage admin. In this case, if you’re using TGC you can leverage a new feature that allows you to bulk-select the storage arrays you’re managing and set them to upgrade.

This probably isn’t a major issue if you’re running one or two arrays, but if you have 8 or 16 under your watch (or up to 64 per TGC), then this is going to save you a bit of time at the console.
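If you’d rather script this sort of thing than click through the console, the sketch below is roughly how I’d think about it against TGC’s REST API. Fair warning: I haven’t lifted these endpoints from Tintri’s documentation, so treat the session handling, resource paths and field names as hypothetical placeholders rather than gospel.

```python
# Rough sketch only: the session handling, VMstore and upgrade endpoints below
# are hypothetical placeholders, not Tintri's documented API.
import requests

TGC = "https://tgc.example.com/api"

def bulk_upgrade(session_id, target_version):
    headers = {"cookie": f"JSESSIONID={session_id}"}

    # Fetch the list of VMstores that TGC is managing (hypothetical endpoint).
    vmstores = requests.get(f"{TGC}/vmstore", headers=headers, verify=False).json()

    # Queue an upgrade for every array that isn't already on the target version.
    for vs in vmstores:
        if vs.get("version") != target_version:
            requests.post(
                f"{TGC}/vmstore/{vs['uuid']}/upgrade",   # hypothetical endpoint
                json={"targetVersion": target_version},
                headers=headers,
                verify=False,
            )
            print(f"Queued upgrade for {vs.get('name', vs['uuid'])}")
```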

 

Conclusion

I remember when I started out with midrange storage arrays that the process to upgrade them was tedious at best and oftentimes went pear-shaped thanks to odd behaviour with Java or mis-typed commands at a console. The process to perform the upgrade often ran to tens of pages and involved an awful lot of pre-flight checks. The worst part of a broken upgrade was having to trudge to the data centre to work out what had gone wrong; if you were lucky, you had a sophisticated enough connectivity solution that your storage vendor could access the array and fix things remotely.

Thankfully, the days of less than seamless array upgrades with hundreds of steps are behind us. Instead, most vendors have introduced automated mechanisms to deliver a painless upgrade process that can be performed during the day. Tintri have taken this philosophy a step further and made it easier to do this at scale. I’m all for vendors introducing technology that means I don’t have to perform repetitive tasks, particularly when it comes to mundane operational activities like storage operating environment upgrades.

WHOA.com Are A Happy Tintri Customer

Disclaimer: This is a sponsored post and you’ll probably see the content elsewhere on the Internet. Tintri provided no editorial input and the words and opinions in this post are my own.

Introduction

I recently had the opportunity to speak to Brock Mowry (CTO of WHOA.com) about the company’s experience adopting Tintri in their environment. You can read the case study on Tintri’s website, but sometimes it’s nice to get a perspective straight from the source. If you don’t know of WHOA.com, they were established in 2013 and deliver a “[c]ybersecure cloud hosting platform with an emphasis on compliance workloads, [including] HIPAA regulation and PCI regulation”. They have a data centre presence in Las Vegas, NV and Miami, FL and plans to expand that footprint.

 

Challenges?

I asked Mowry what one of the main challenges was as a growing cloud service provider and he said “[s]torage was one of the challenges”. The problem, it seems, was when they looked at how much time they spent on keeping the environment running, there was a lot of operational overhead with their storage platform, and they “didn’t want to be scaling by head count – [they] wanted to scale by technology”.

 

What solutions did they look at?

According to Mowry, at WHOA.com they “optimise [the] network for NFS traffic and get really, really good results operating NFS in [the] infrastructure. Again, Tintri being an NFS-based platform, it was really an easy choice from there”. The benefit of deploying an IP-based storage solution was that they were “able to eliminate an entire fibre channel fabric within [the] data centre”. The added benefit of this was that they were able to reduce the number of “employees that are required to operate that platform. That’s a huge cost saving for [them] because at the end of the day head count is typically one of the most expensive things to operate a cloud infrastructure”.

 

Why not look at hyperconverged solutions then?

It turns out they looked at a number of hyperconverged vendors, including solutions from Nutanix and Cisco. At the time they ran across a problem with the converged nature of the resources in hyperconverged environments. Mowry provided an example where there was a “need to increase […] CPU and RAM capacity to meet a customer’s workload. Well now I’m sitting on a bunch of excess storage that I really don’t want to power, I really don’t want to cool, and I really don’t want to manage, because it’s not needed”. Note that a number of vendors now offer solutions to that problem, with “storage-only” nodes being available to counter the requirement to scale memory, CPU and storage in equal amounts. At the time, however, Mowry felt that it was best to go with what he describes as a “broken-out” architecture, where they “have storage arrays or storage appliances and [they] have UCS blade systems so [they] can increase RAM and increase CPU to the customer’s workloads without having to scale out our storage at the same time where it might not be used”.

 

Why go All-Flash?

WHOA.com have deployed both All-Flash and hybrid arrays, because, as Mowry points out, they “have customers who are demanding that lower tier. And a lot of times they’re trying to hit a price point, they’re not trying to hit a performance point”.

 

Conclusion

WHOA.com are obviously very happy Tintri customers, but not simply because the Tintri arrays they’ve deployed give them per-VM control or nice APIs to use with their own products. Vendors often focus on the technical advantages of the solutions they sell, because they think that’s what demonstrates value to their (potential) customers. But discussions around decreasing operational overhead and improving configuration simplicity by removing fibre channel fabrics are real world examples of how businesses can, in some instances, save money and improve their bottom line by choosing an architecture that aligns well with their operational strengths and experience. People are normally the most expensive part of any type of managed service, so if you can deploy efficient systems that don’t need a lot of people to run them, you’ll be in a good place.

Of interest also was the decision to continue with a decoupled infrastructure architecture that provided them with a solution that scales the way they want it to. In my opinion this is a great example of a business choosing a solution that suits them for a number of reasons, not all of which are technical. Customers like WHOA.com provide a great example of how to understand your requirements (from both a technical and financial perspective), understand your market, and work to your strengths. You can download a full transcript of my chat with Mowry from here.

Tintri FlexDrive Goes GA – Is A Very Handy Feature

Disclaimer: This is a sponsored post and you’ll probably see the content elsewhere on the Internet. Tintri provided no editorial input and the words and opinions in this post are my own.

Tintri recently announced the general availability of FlexDrive, a storage expansion feature for EC6000 all-flash arrays that gives you the ability to increase system capacity to meet a specific storage need by adding drives incrementally. This feature is included at no charge with Tintri’s 4.4.1 release.

 

What is It?

With FlexDrive, you can purchase a partially populated EC6000 all-flash array (with as few as thirteen drives) and add as little as one drive at a time to increase the capacity. Once the drive’s been added, all you need to do is click the “Expand” button in the management interface and you’re all set. It’s a non-disruptive activity that you can do yourself, so there’s no need for drawn-out change control meetings or extensive planning for support staff to be on-site to deploy the capacity. Imagine working with storage infrastructure that no longer forces you to deploy flash drives as if they were spinning disks. Instead of scrambling to deploy additional rack and power capacity that you don’t really need, you can slot in a drive and get what you need out of the equipment, rather than dreading the idea of a capacity upgrade.

 

What’s the Estimator?

Many all-flash systems leverage data reduction technologies such as deduplication and compression to provide effective resource utilisation in a dense footprint. So how do you know how the addition of one drive will impact the capacity and performance of the array? Tintri has incorporated an expansion estimator into its standard user interface so that you can model the impact of adding drives. The estimator also uses historical workload profiles to help you land on the right number of extra drives to meet your target capacity. With the estimator by your side, you not only have clarity around what the future holds, you also get the benefit of some really smart analytics ensuring you get the performance and capacity outcome you need, based on what you’ve used the array for previously.
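For those who like to sanity-check vendor numbers, the rough maths behind this kind of estimate isn’t hard to reason about. The snippet below is my own back-of-envelope simplification (not Tintri’s estimator, which also folds in workload history): it just scales raw capacity by a usable fraction and by the data reduction ratio you’ve been seeing.

```python
def estimate_effective_capacity(current_drives, added_drives,
                                raw_tb_per_drive, usable_fraction,
                                observed_reduction_ratio):
    """Back-of-envelope estimate of effective capacity after an expansion.

    My own simplification, not Tintri's estimator: scale raw capacity by a
    usable fraction (overheads, spares, metadata) and by the data reduction
    ratio the array has been achieving historically.
    """
    total_drives = current_drives + added_drives
    raw_tb = total_drives * raw_tb_per_drive
    usable_tb = raw_tb * usable_fraction
    return usable_tb * observed_reduction_ratio

# e.g. 13 drives today, adding 2 x 3.84TB drives, ~75% usable, 2.5:1 reduction
print(round(estimate_effective_capacity(13, 2, 3.84, 0.75, 2.5), 1), "TB effective")
```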

 

The Benefits are Real

Storage provisioning has been a somewhat clumsy process for years. Anyone familiar with deploying terabytes of disk storage will also be painfully familiar with the various vendors’ requirements to deploy disks in certain minimum numbers to satisfy configuration and performance requirements. This approach was extremely important when spinning disk ruled the world. Those days are behind us though, with Flash-dominant arrays becoming the norm in data centres all over the world. When you have Flash as your primary medium, you don’t necessarily have to deploy a lot of it to meet your performance requirements. So why buy five more disks when you only need the capacity of two? It’s a waste of money, not just in terms of the asset, but also the additional costs, such as power, cooling and (possibly) additional rack space. All to meet a requirement that keeps systems from the early part of the century happy. With FlexDrive you don’t need to do this, nor should you have to. The estimator adds a level of intelligence to an expansion activity that has previously been an exercise in guesswork and hope. Tintri aren’t interested in making you buy trays of disk just because that’s the way it’s always been done. Tintri are interested in you getting the solution that matches your requirements in terms of capacity and performance. This approach yields real benefits when it comes to budget allocations and controlling costs for your storage environment. You no longer have to spend over the odds on storage to get the performance you need for the next six months.

Tintri ChatOps – Because All I Do Is Hang Out On Slack Anyway

 

I’m a bit behind the times with my tech news, but Tintri sent me a link to a video they did demonstrating their new “ChatOps” feature. I was going to make fun of it, but it’s actually pretty neat. If you’ve used Slack before, you probably know it’s got a fairly extensible engine that you can use to do a bunch of cool things. With ChatOps, you can send your Tintri arrays commands and things get done. Not only does it do stuff for you, it does it in a sensible and efficient fashion as well. And since I spend a lot of time on Slack in any case, this feature just might take off.
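For the curious, the plumbing behind this kind of integration is pretty approachable. The sketch below is purely illustrative and has nothing to do with Tintri’s actual implementation: it’s a minimal Slack slash-command handler that relays a status request to an array, with the array endpoint and field names made up for the example.

```python
# Minimal ChatOps plumbing sketch, not Tintri's actual integration.
# The datastore stats endpoint below is a made-up placeholder.
import requests
from flask import Flask, request, jsonify

app = Flask(__name__)
VMSTORE = "https://vmstore.example.com/api"

@app.route("/slack/tintri", methods=["POST"])
def tintri_command():
    # Slack slash commands arrive as form-encoded POSTs; "text" holds the arguments.
    args = request.form.get("text", "").split()
    if args and args[0] == "status":
        stats = requests.get(f"{VMSTORE}/datastore/stats", verify=False).json()  # placeholder
        reply = f"Latency {stats.get('latencyMs', '?')} ms, {stats.get('freeTB', '?')} TB free"
    else:
        reply = "Usage: /tintri status"
    # Reply visibly in the channel rather than as an ephemeral message.
    return jsonify({"response_type": "in_channel", "text": reply})

if __name__ == "__main__":
    app.run(port=3000)
```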

You can read more about this and some other new features from Tintri at El Reg. And I agree with Chris that a focus by Tintri beyond table stakes is a smart move.

Testing Tintri’s Lightning Lab and Pizza

Disclaimer: This is a sponsored post and I was offered a pizza to write it. I haven’t taken up the offer yet, but I will.

Tintri_Logo_Horizontal_1024

I had the opportunity to test drive Tintri’s “Lightning Lab” about six months ago and the nice folks at Tintri thought I might like to post about my experiences. They’ve offered me a pizza for my troubles, which, coincidentally, ties in nicely with their current promotion “The Tintri Pizza Challenge“. If you’re in the US or Canada it’s worth checking out.

In any case, the Lightning Lab is Tintri’s internet-accessible lab that showcases a number of its arrays and provides you with an opportunity to take their gear for a spin. From a hardware perspective it’s pretty well provisioned, with T5060, T880, T620 & T540 arrays, along with a Dell R720 host with 128GB of RAM and 2 Dell R610 servers with 48GB of RAM. From a software perspective, the version of the lab I used had VMware vSphere 5.5U2b installed, but I believe this has since been updated. There’s also a functional version of Tintri Global Center, with both the Web Client Plug-in and the vROps plugin configured. Networking-wise, management runs over a 1GbE Dell switch, with data travelling via a 10GbE Arista switch.

Lab_Overview

Global Center has a pretty neat login screen. Like all good admins, I use many dots in my password too.

tintri09

There’s a bunch of stuff I could show from the interface, but one of my favourite bits is the ability to see an aggregated view of your deployed VMstores.

tintri04

The interface is simple to operate and painfully colourful too. It’s also easy to navigate, making it straightforward to get a quick view of what’s going on in your environment without having to do a lot of digging.

 

Conclusion

There’s a lot more I could write about Tintri. If you’re aligned with their use case (NFS-only), they have a compelling offering that’s worth checking out. The Lightning Lab is an excellent tool to take their platform for a spin and gain a good understanding of just what you can do with the VMstore and Global Center. I think these kinds of offerings are great, and not just because there’s pizza involved. If more storage vendors read this and think that they should be doing something like this, then that’s a great thing. I’ve barely scratched the surface, so you should head over to Andrea Mauro’s blog and check out his thorough write-up of his Lightning Lab experience.

Storage Field Day 10 – Wrap-up and Link-o-rama

Disclaimer: I recently attended Storage Field Day 10.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

SFD-Logo2-150x150

This is a quick post to say thanks once again to Stephen, Tom, Megan and the presenters at Storage Field Day 10. I had an enjoyable and educational time. For easy reference, here’s a list of the posts I did covering the event (they may not match the order of the presentations).

Storage Field Day – I’ll Be At SFD10

Storage Field Day 10 – Day 0

Storage Field Day 10 – (Fairly) Full Disclosure

Kaminario are doing some stuff we’ve seen before, but that’s okay

Pure Storage really aren’t a one-trick pony

Tintri Keep Doing What They Do, And Well

Nimble Storage are Relentless in Their Pursuit of Support Excellence

Cloudian Does Object Smart and at Scale

Exablox Isn’t Just Pretty Hardware

It’s Hedvig, not Hedwig

The Cool Thing About Datera Is Intent

Data Virtualisation is More Than Just Migration for Primary Data

 

Also, here are a number of links to posts by my fellow delegates (and Tom!). They’re all really quite smart, and you should check out their stuff, particularly if you haven’t before. I’ll try to keep this updated as more posts are published. But if it gets stale, the SFD10 landing page has updated links.

 

Chris M Evans (@ChrisMEvans)

Storage Field Day 10 Preview: Hedvig

Storage Field Day 10 Preview: Primary Data

Storage Field Day 10 Preview: Exablox

Storage Field Day 10 Preview: Nimble Storage

Storage Field Day 10 Preview: Datera

Storage Field Day 10 Preview: Tintri

Storage Field Day 10 Preview: Pure Storage

Storage Field Day 10 Preview: Kaminario

Storage Field Day 10 Preview: Cloudian

Object Storage: Validating S3 Compatibility

 

Ray Lucchesi (@RayLucchesi)

Surprises in flash storage IO distributions from 1 month of Nimble Storage customer base

Has triple parity Raid time come?

Pure Storage FlashBlade well positioned for next generation storage

Exablox, bring your own disk storage

Hedvig storage system, Docker support & data protection that spans data centers

 

Jon Klaus (@JonKlaus)

I will be flying out to Storage Field Day 10!

Ready for Storage Field Day 10!

Simplicity with Kaminario Healthshield & QoS

Breaking down storage silos with Primary Data DataSphere

Cloudian Hyperstore: manage more PBs with less FTE

FlashBlade: custom hardware still makes sense

Squashing assumptions with Data Science

Bringing hyperscale operations to the masses with Datera

Making life a whole lot easier with Tintri VM-aware storage

 

Enrico Signoretti (@ESignoretti)

VM-aware storage, is it still a thing?

Scale-out, flash, files and objects. How cool is Pure’s FlashBlade?

 

Josh De Jong (@EuroBrew)

 

Max Mortillaro (@DarkkAvenger)

Follow us live at Storage Field Day 10

Primary Data: a true Software-defined Storage platform?

If you’re going to SFD10 be sure to wear microdrives in your hair

Hedvig Deep Dive – Is software-defined the future of storage?

Pure Storage’s FlashBlade – Against The Grain

Pure Storage Flashblade is now available!

 

Gabe Maentz (@GMaentz)

Heading to Tech Field Day

 

Arjan Timmerman (@ArjanTim)

We’re almost live…

Datera: Elastic Data Fabric

 

Francesco Bonetti (@FBonez)

EXABLOX – A different and smart approach to NAS for SMB

 

Marco Broeken (@MBroeken)

 

Rick Schlander (@VMRick)

Storage Field Day 10 Next Week

Hedvig Overview

 

Tom Hollingsworth (@networkingnerd)

Flash Needs a Highway

 

Finally, thanks again to Stephen, Tom, Megan (and Claire in absentia). It was an educational and enjoyable few days and I really valued the opportunity I was given to attend.

SFD10_GroupPhoto

Tintri Keep Doing What They Do, And Well

Disclaimer: I recently attended Storage Field Day 10.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Tintri_Logo_Horizontal_1024

Before I get into it, you can find a link to my notes on Tintri‘s presentation here. You can also see videos of the presentation here.

I’ve written about Tintri recently. As recently, in fact, as a week before I saw them at SFD10. You can check out my article on their most recent product announcements here.

 

VAS but not AAS (and that’s alright)

Tintri talk a lot about VM-aware Storage (or VAS as they put it). There’s something about the acronym that makes me cringe, but the sentiment is admirable. They put it all over their marketing stuff. They’re committed to the acronym, whether I like it or not. But what exactly is VM-aware Storage? According to Tintri, it provides:

  • VM-level QoS;
  • VM-level analytics;
  • VM data management;
  • VM-level automation with PowerShell and REST; and
  • Support across multiple hypervisors (VMware, Hyper-V, OpenStack, Red Hat).

Justin Lauer, Global Evangelist with Tintri, took us through a demo of VAS and the QoS capabilities built in to the Tintri platform.

SFD10_Tintri_Justin

I particularly liked the fact that I can get a view of end-to-end latency (host / network / storage (contention and flash) / throttle latency). In my opinion this is something that people have struggled with for some time, and it looks like Tintri have a really good story to tell here. I also liked the look of the “Capacity gas gauge” (petrol for Antipodeans), providing an insight into when you’ll run out of either performance, capacity, or both.
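Because all of this is exposed per-VM (and VM-level automation with PowerShell and REST is part of the VAS pitch in the list above), you can pull those latency components out programmatically too. The sketch below shows the general idea, but note that the endpoint and field names are placeholders I’ve made up for illustration, not lifted from Tintri’s API documentation.

```python
# Sketch only: the endpoint and field names are illustrative placeholders,
# not taken from Tintri's API documentation.
import requests

VMSTORE = "https://vmstore.example.com/api"

def worst_latency_vms(session_id, top_n=5):
    headers = {"cookie": f"JSESSIONID={session_id}"}
    vms = requests.get(f"{VMSTORE}/vm", headers=headers, verify=False).json()

    # Sum the end-to-end components (host / network / storage / throttle)
    # per VM and surface the worst offenders.
    def total_latency(vm):
        s = vm.get("stats", {})
        return sum(s.get(k, 0) for k in
                   ("hostLatencyMs", "networkLatencyMs",
                    "storageLatencyMs", "throttleLatencyMs"))

    for vm in sorted(vms, key=total_latency, reverse=True)[:top_n]:
        print(vm.get("name", "?"), round(total_latency(vm), 2), "ms")
```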

So what’s AAS then? Well, in my mind at least, this is the ability to delve into application-level performance and monitoring, rather than just VM-level. And I don’t think Tintri are doing that just yet. Which, to my way of thinking, isn’t a problem, as I think a bunch of other vendors are struggling to really do this in a cogent fashion either. But I want to know what my key web server tier is doing, for example, and I don’t want to assume that it still lives on the datastore that I tagged for it when I first deployed it. I’m not sure that I get this with VAS, but I still think it’s a long way ahead of where we were a few years ago, getting stats out of volumes and not a lot else.

 

Further Reading and Conclusion

In the olden days (a good fifteen years ago) I used to struggle to get multiple Oracle instances to play nicely on the same NT4 host. But I didn’t have a large number of physical hosts to play with, and I had limited options when I wanted to share resources across applications. Virtualisation lets us slice up physical resources in a more granular fashion, and as a result it’s become simple to justify running one application per VM. In this way we can still get insights into our applications from understanding what our VMs are doing. This is no minor thing when you’re looking after storage in the enterprise – it’s a challenge at the best of times. Tintri have embraced the concept of intelligent analytics in their arrays in the same way that Nimble and Pure have started really making use of the thousands of data points that they collect every minute.

But what if you’re not running virtualised workloads? Well, you’re not going to get as much from this. But you’ve probably got a whole lot of different requirements you’re working to as well. Tintri is really built from the ground up to deliver insight into virtualised workloads that has been otherwise unavailable. I’m hoping to see them take it to the next level with application-centric monitoring.

Finally, Enrico did some more thorough analysis here that’s worth your time. And Chris’s SFD10 preview post on Tintri is worth a gander as well.

 

Storage Field Day 10 – (Fairly) Full Disclosure

Disclaimer: I recently attended Storage Field Day 10.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

SFD-Logo2-150x150

Here are my notes on gifts, etc, that I received as a delegate at Storage Field Day 10. I’d like to point out that I’m not trying to play companies off against each other. I don’t have feelings one way or another about receiving gifts at these events (although I generally prefer small things I can fit in my suitcase). Rather, I’m just trying to make it clear what I received during this event to ensure that we’re all on the same page as far as what I’m being influenced by. Some presenters didn’t provide any gifts as part of their session – which is totally fine. I’m going to do this in chronological order, as that was the easiest way for me to take notes during the week. While every delegate’s situation is different, I’d also like to clarify that I took 5 days of training / work time to be at this event (thanks to my employer for being on board).

 

Saturday

I paid for my taxi to BNE airport. I had a burger at Benny Burger in SYD airport. It was quite good. I flew Qantas economy class to SFO. The flights were paid for by Tech Field Day. Plane food was consumed on the flight. It was a generally good experience.

 

Tuesday

When I arrived at the hotel I was given a bag of snacks by Tom. The iced coffee and granola bars came in handy. We had dinner at Il Fornaio at the Westin Hotel. I had some antipasti, pizza fradiavola and 2 Hefeweizen beers (not sure of the brewery).

 

Wednesday

We had breakfast in the hotel. I had bacon, eggs, sausage, fruit and coffee. We also did the Yankee Gift Swap at that time and I scored a very nice stovetop Italian espresso coffee maker (thanks Enrico!). We also had lunch at the hotel, it was something Italian. Cloudian gave each delegate a green pen, bottle opener, 1GB USB stick, and a few Cloudian stickers. We had dinner at Gordon Biersch in San Jose. I had some sliders (hamburgers for small people) and about 5 Golden Export beers.

 

Thursday

Pure Storage gave each delegate a Tile, a pen, some mints, and an 8GB USB stick. Datera gave each delegate a Datera-branded “vortex 16oz double wall 18/8 stainless steel copper vacuum insulated thermal pilsner” (a cup) with our twitter handles on them. Tintri provided us with a Tintri / Nike golf polo shirt, a notepad, a pen, an 8GB USB stick, and a 2600mAh USB charger. We then had happy hour at Tintri. I had a Pt. Bonita Pilsner beer and a couple of fistfuls of prawns. For dinner we went to Taplands. I had a turkey sandwich and 2 Fieldwork Brewing Company Pilsners.

 

Friday

We had breakfast on Friday at Nimble Storage. I had some bacon, sausage and eggs for breakfast with an orange juice. I don’t know why my US comrades struggle so much with the concept of tomato sauce (ketchup) with bacon. But there you go. Nimble gave us each a custom baseball jersey with our name on the back and the Nimble logo. They also gave us each a white lab coat with the Nimble logo on it. My daughters love the coat. Hedvig provided us with a Hedvig sticker and a Hedvig-branded Rogue bluetooth speaker. We had lunch at Hedvig, which was a sandwich, some water, and a really delicious choc-chip cookie. Exablox gave each of us an Exablox-branded aluminium water bottle. We then had happy hour at Exablox. I had two Anchor Brewing Liberty Ale beers (“tastes like freedom”) and some really nice cheese. To finish off we had dinner at Mexicali in Santa Clara. I had a prawn burrito. I didn’t eat anything on the flight home.

 

Conclusion

I’d like to extend my thanks once again to the Tech Field Day organisers and the companies presenting at the event. I had a super enjoyable and educational time. Here’s a photo.

SFD10_disclosure1

 

Storage Field Day 10 – Day 0

Disclaimer: I recently attended Storage Field Day 10.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

SFD-Logo2-150x150

This is just a quick post to share some thoughts on day zero at Storage Field Day 10. I can do crappy tourist snaps as well as, if not better than, the next guy. Here’s the obligatory wing shot. No wait, here’s two – one leaving SYD and the other coming in to SFO. Bet you can’t guess which is which.

SFD10_plane1     SFD10_plane2

We all got together for dinner on Tuesday night in the hotel. I had the pizza. It was great.

SFD10_Food

But enough with the holiday snaps and underwhelming travel journal. Thanks again Stephen, Tom, Claire and Megan for having me back, making sure everything is running according to plan and for just being really very decent people. I’ve really enjoyed catching up with the people I’ve met before and meeting the new delegates. Look out for some posts related to the Tech Field Day sessions in the next few weeks. And if you’re in a useful timezone, check out the live streams from the event here, or the recordings afterwards.

Here’s the rough schedule (all times are ‘Merican Pacific).

Wednesday, May 25 9:30 – 11:30 Kaminario Presents at Storage Field Day 10
Wednesday, May 25 12:30 – 14:30 Primary Data Presents at Storage Field Day 10
Wednesday, May 25 15:00 – 17:00 Cloudian Presents at Storage Field Day 10
Thursday, May 26 9:30 – 11:30 Pure Storage Presents at Storage Field Day 10
Thursday, May 26 13:00 – 15:00 Datera Presents at Storage Field Day 10
Thursday, May 26 16:00 – 18:00 Tintri Presents at Storage Field Day 10
Friday, May 27 8:00 – 10:00 Nimble Storage Presents at Storage Field Day 10
Friday, May 27 10:30 – 12:30 Hedvig Presents at Storage Field Day 10
Friday, May 27 13:30 – 15:30 Exablox Presents at Storage Field Day 10

You can also follow along with the live stream here.


Tintri Announces New Scale-Out Storage Platform

I’ve had a few briefings with Tintri now, and talked about Tintri’s T5040 here. Today they announced a few enhancements to their product line, including:

  • Nine new Tintri VMstore T5000 all flash models with capacity expansion capabilities;
  • VM Scale-out software;
  • Tintri Analytics for predictive capacity and performance planning; and
  • Two new Tintri Cloud offerings.

 

Scale-out Storage Platform

You might be familiar with the T5040, T5060 and T5080 models, with the Tintri VMstore T5000 all-flash series being introduced in August 2015. All three models have been updated with new capacity options ranging from 17 TB to 308 TB. These systems use the latest in 3D NAND technology and high density drives to offer organizations both higher capacity and lower $/GB.

Tintri03_NewModels

The new models have the following characteristics:

  • Federated pool of storage. You can now treat multiple Tintri VMstores—both all-flash and hybrid-flash nodes—as a pool of storage. This makes management, planning and resource allocation a lot simpler. You can have up to 32 VMstores in a pool.
  • Scalability and performance. The storage platform is designed to scale to more than one million VMs. Tintri tell me that the  “[s]eparation of control flow from data flow ensures low latency and scalability to a very large number of storage nodes”.
  • This allows you to scale from small to very large with new and existing, all flash and hybrid, partially or fully populated systems.
  • The VM Scale-out software works across any standard high performance Ethernet network, eliminating the need for proprietary interconnects. The VM Scale-out software automatically provides best placement recommendation for VMs.
  • Scale compute and storage independently. Loose coupling of storage and compute provides customers with maximum flexibility to scale these elements independently. I think this is Tintri’s way of saying they’re not (yet) heading down the hyperconverged path.

 

VM Scale-out Software

Tintri’s new VM Scale-out Software (*included with Tintri Global Center Advanced license) provides the following capabilities:

  • Predictive analytics derived from one million statistics collected every 10 minutes from 30 days of history, accounting for peak loads instead of average loads, providing (according to Tintri) for the most accurate predictions. Deep workload analysis identifies VMs that are growing rapidly and applies sophisticated algorithms to model the growth ahead and avoid resource constraints.
  • Least-cost optimization based on multi-dimensional modelling. The control algorithm constantly optimizes across the thousands of VMs in each pool of VMstores, taking into account space savings, resources required by each VM, and the cost in time and data to move VMs, and makes the least-cost recommendation for VM migration that optimizes the pool.
  • Retain VM policy settings and stats. When a VM is moved, not only are the snapshots moved with the VM, the statistics, protection and QoS policies migrate as well, using an efficient compressed and deduplicated replication protocol.
  • Supports all major hypervisors.

Tintri04_ScaleOut

You can check out a YouTube video on Tintri VM Scale-out (covering optimal VM distribution) here.
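To make the least-cost optimisation described in the list above a bit more concrete, here’s a toy version of that sort of placement logic. To be clear, this is my own illustration of the concept (greedily scoring moves off the most loaded VMstore), not Tintri’s algorithm, which models far more dimensions than this.

```python
# Toy illustration of least-cost placement, not Tintri's algorithm.
def recommend_move(vms, vmstores, move_cost_per_gb=0.001):
    """Suggest the cheapest single VM move off the most loaded VMstore.

    vms:      list of {"name", "store", "size_gb", "iops"}
    vmstores: list of {"name", "capacity_gb", "max_iops"}
    """
    # Tally what each VMstore is currently carrying.
    used = {s["name"]: {"gb": 0, "iops": 0} for s in vmstores}
    for vm in vms:
        used[vm["store"]]["gb"] += vm["size_gb"]
        used[vm["store"]]["iops"] += vm["iops"]

    def pressure(store, extra_gb=0, extra_iops=0):
        """Worst of capacity and performance utilisation, with an optional delta applied."""
        u = used[store["name"]]
        return max((u["gb"] + extra_gb) / store["capacity_gb"],
                   (u["iops"] + extra_iops) / store["max_iops"])

    hottest = max(vmstores, key=pressure)
    best = None
    for vm in (v for v in vms if v["store"] == hottest["name"]):
        for target in (s for s in vmstores if s["name"] != hottest["name"]):
            # Benefit: pressure taken off the hottest store by moving this VM.
            relief = pressure(hottest) - pressure(hottest, -vm["size_gb"], -vm["iops"])
            # Cost: data that has to be copied, plus pressure added to the target.
            cost = vm["size_gb"] * move_cost_per_gb + pressure(target, vm["size_gb"], vm["iops"])
            score = relief - cost
            if best is None or score > best[0]:
                best = (score, vm["name"], target["name"])
    return best  # (score, VM to move, destination VMstore), or None if there's nothing to move
```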

 

Tintri Analytics

Tintri has always offered real-time, VM-level analytics as part of its Tintri Operating System and Tintri Global Center management system. This has now been expanded to include a SaaS offering of predictive analytics that provides organizations with the ability to model both capacity and performance requirements. Powered by big data engines such as Apache Spark and Elastic Search, Tintri Analytics is capable of analyzing stats from 500,000 VMs over several years in one second. By mining the rich VM-level metadata, Tintri Analytics provides customers with information about their environment to help them make better decisions about applications’ behaviours and storage needs.

Tintri Analytics is a SaaS tool that allows you to model storage needs up to 6 months into the future based on up to 3 years of historical data.

Tintri01_Analytics

Here is a shot of the dashboard. You can see a few things here, including:

  • Your live resource usage for your entire footprint of up to 32 VMstores;
  • Average consumption per VM (bottom left); and
  • The types of applications that are your largest consumers of Capacity, Performance and Working Set (bottom center).

Tintri02_Analytics

Here you can see exactly how your usage of capacity, performance and working set has been trending over time. You can also see when you can expect to run out of these resources (and which is on the critical path). It also provides the ability to change the timeframe to alter the projections, or drill into specific application types to understand their impact on your footprint.
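The run-out projections are the bit I find most useful, and the underlying idea is easy to illustrate. The sketch below is a simple linear fit over historical consumption, which is my own simplification rather than how Tintri Analytics actually models things (they account for peaks and workload types, for a start).

```python
# Rough sketch of trend-based run-out projection, not how Tintri Analytics works.
def days_until_full(daily_used_tb, capacity_tb):
    """Fit a straight line to historical daily usage and project it forward."""
    n = len(daily_used_tb)
    xs = list(range(n))
    x_mean = sum(xs) / n
    y_mean = sum(daily_used_tb) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, daily_used_tb))
             / sum((x - x_mean) ** 2 for x in xs))
    if slope <= 0:
        return None  # usage is flat or shrinking, so no projected run-out
    intercept = y_mean - slope * x_mean
    full_on_day = (capacity_tb - intercept) / slope
    return max(0, round(full_on_day - (n - 1)))

# e.g. 30 days of samples trending from 40 TB towards 46 TB on a 60 TB pool
history = [40 + 0.2 * d for d in range(30)]
print(days_until_full(history, 60), "days until this pool is full")
```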

There are also a number of videos covering Tintri Analytics that I think are worth checking out.

 

Tintri Cloud Suites

Tintri have also come up with a new packaging model called “Tintri Cloud”. Aimed at folks still keen on private cloud deployments, Tintri Cloud combines the Tintri Scale-out platform and the all-flash VMstores.

Customers can start with a single Tintri VMstore T5040 with 17 TB of effective capacity and scale out to the Tintri Foundation Cloud with 1.2 PB in as few as 8 rack units. Or they can grow all the way to the Tintri Ultimate Cloud, which delivers a 10 PB cloud-ready storage infrastructure for up to 160,000 VMs, delivering over 6.4 million IOPS in 64 RU for less than $1/GB effective. Both the Foundation Cloud and Ultimate Cloud include Tintri’s complete set of software offerings for storage management, VM-level analytics, VM Scale-out, replication, QoS, and lifecycle management.

 

Further Reading and Thoughts

There’s another video covering setting policies on groups of VMs in Tintri Global Center here. You might also like to check out the Tintri Product Launch webinar.

Tintri have made quite a big deal about their “VM-aware” storage in the past, and haven’t been afraid to call out the bigger players on their approach to VM-centric storage. While I think they’ve missed the mark with some of their comments, I’ve enjoyed the approach they’ve taken with their own products. I’ve also certainly been impressed with the demonstrations I’ve been given on the capability built into the arrays and available via Global Center. Deploying workload to the public cloud isn’t for everyone, and Tintri are doing a bang-up job of going for those who still want to run their VM storage decoupled from their compute and in their own data centre. I love the analytics capability, and the UI looks to be fairly straightforward and informative. Trending still seems to be a thing that companies are struggling with, so if a dashboard can help them with further insight then it can’t be a bad thing.