Back To The Future With Tintri

Disclaimer: I recently attended Storage Field Day 21.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Tintri recently presented at Storage Field Day 21. You can see videos of the presentation here, and download my rough notes from here.

 

Tintri? 

Remember Tintri? The company was founded in 2008, fell upon difficult times in 2018, and was acquired by DDN. It’s still going strong, and now offers a variety of products under the Tintri brand, including VMstore, IntelliFlash, and NexentaStor. I’ve had exposure to all of these different lines of business over the years, and was interested to see how it was all coming together under the DDN acquisition.

 

Does Your Storage Drive Itself?

Ever since I got into the diskslinger game, self-healing infrastructure has been talked about as the next big thing in terms of reducing operational overheads. We build this stuff and can teach it how to do things, so surely we can get it to fix itself when it goes bang? As those of you who’ve been in the industry for some time will know, we’re still some way off that being a reality across a broad range of infrastructure solutions. But we do seem closer than we were a while ago.

Autonomous Infrastructure

Tintri spent some time talking about what it was trying to achieve with its infrastructure by comparing it to autonomous vehicle development. If you think about it for a minute, it’s a little easier to grasp the concept of a vehicle driving itself somewhere, using a lot of telemetry and little computers to get there, than it is to think about how disk storage might be able to self-repair and redirect resources where they’re most needed. Of most interest to me was the distinction made between analytics and intelligence. It’s one thing to collect a bunch of telemetry data (something that storage companies have been reasonably good at for some time now) and analyse it after the fact to come to conclusions about what the storage is doing well and what it’s doing poorly. It’s quite another thing to use that data on the fly to make decisions about what the storage should be doing, without needing the storage manager to intervene.

[image courtesy of Tintri]

If you look at the various levels of intelligence, you’ll see that autonomy eventually kicks in and the concept of supervision and management moves away. The key to the success of this is making sure that your infrastructure is doing the right things autonomously.

So What Do You Really Get?

[image courtesy of Tintri]

You get an awful lot from Tintri in terms of information that helps the platform decide what it needs to do to service workloads in an appropriate fashion. It’s interesting to see how the different layers deliver different outcomes in terms of frequency as well. Some of this is down to physics, and some to time to value. The info in the cloud may not help you make an immediate decision on what to do with your workloads, but it will certainly help when the hapless capacity manager comes asking for the 12-month forecast.
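That longer-term telemetry lends itself to straightforward trend analysis. As a toy illustration only (the figures below are invented, and real analytics platforms do far more than a straight line), here’s the sort of least-squares projection a capacity manager might run over monthly usage samples:

```python
# Toy 12-month capacity forecast from monthly telemetry samples.
# The usage figures are made up purely for illustration.
# Ordinary least-squares fit: used_tb ~ slope * month + intercept.

months = list(range(6))                           # months 0..5 of observed telemetry
used_tb = [40.0, 42.5, 45.1, 47.8, 50.2, 52.9]   # capacity used (TB), illustrative

n = len(months)
mean_x = sum(months) / n
mean_y = sum(used_tb) / n

# Slope = covariance(x, y) / variance(x)
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(months, used_tb)) / \
        sum((x - mean_x) ** 2 for x in months)
intercept = mean_y - slope * mean_x

# Project 12 months past the last observed sample.
forecast_12mo = slope * (months[-1] + 12) + intercept
print(f"Growth ~ {slope:.2f} TB/month; projected usage in 12 months ~ {forecast_12mo:.1f} TB")
```

With these invented numbers the fit shows growth of roughly 2.6TB/month, which is the kind of answer that only falls out of telemetry collected and retained over months, not the second-by-second stats used for on-box decisions.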

 

Conclusion

I was being a little cheeky with the title of this post. I was a big fan of what Tintri was able to deliver in terms of storage analytics with a virtualisation focus all those years ago. It feels like some things haven’t changed, particularly when looking at the core benefits of VMstore. But that’s okay, because all of the things that were cool about VMstore back then are still cool, and absolutely still valuable in most enterprise storage shops. I don’t doubt that there are VMware shops that have taken up vVols and wouldn’t get as much out of VMstore as those shops running oldey timey LUNs, but there are plenty of organisations that just need storage to host VMs on, storage that gives them insight into how it’s performing. Maybe it’s even storage that can move some stuff around on the fly to make things work a little better.

It’s a solid foundation upon which to add a bunch of pretty cool features. I’m not 100% convinced that what Tintri is proposing is the reality in a number of enterprise shops (have you ever had to fill out a change request to storage vMotion a VM before?), but that doesn’t mean it’s not a noble goal, and certainly one worth pursuing. I’m a fan of any vendor that is actively working to take the work out of infrastructure and allow people to focus on the business of doing business (or whatever it is that they need to focus on). It looks like Tintri has made some real progress towards reducing the overhead of infrastructure, and I’m keen to see how that plays out across the product portfolio over the next year or two.

Nexenta Announces NexentaCloud

I haven’t spoken to Nexenta in some time, but that doesn’t mean they haven’t been busy. They recently announced NexentaCloud in AWS, and I had the opportunity to speak to Michael Letschin about the announcement.

 

What Is It?

In short, it’s a version of NexentaStor that you can run in the cloud. It’s essentially an EC2 instance running in your virtual private cloud, using EBS for storage on the backend. It:

  • Is available in the AWS Marketplace;
  • Is deployed on preconfigured Amazon Machine Images; and
  • Delivers unified file and block services (NFS, SMB, iSCSI).
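To make the deployment model concrete, here’s a minimal sketch of the shape of such a launch: an instance from a Marketplace AMI with an EBS volume attached as the backend storage pool. Every identifier and size below (AMI ID, subnet, instance type, volume size) is a hypothetical placeholder I’ve made up for illustration, not anything from Nexenta’s documentation; the dictionary mirrors the parameters that boto3’s `run_instances` call takes.

```python
# Hypothetical sketch of launching a NexentaCloud-style appliance:
# an EC2 instance from a Marketplace AMI, with an EBS volume attached
# as the backend storage pool. All IDs and sizes are made-up placeholders.

launch_params = {
    "ImageId": "ami-0123456789abcdef0",   # hypothetical Marketplace AMI
    "InstanceType": "m4.xlarge",          # hypothetical sizing
    "SubnetId": "subnet-0abc1234",        # a subnet in your VPC
    "MinCount": 1,
    "MaxCount": 1,
    "BlockDeviceMappings": [
        {
            # Additional EBS volume to serve as the appliance's storage pool.
            "DeviceName": "/dev/sdf",
            "Ebs": {
                "VolumeSize": 500,             # GiB, illustrative
                "VolumeType": "gp2",
                "DeleteOnTermination": False,  # keep data if the instance dies
            },
        }
    ],
}

# With boto3 installed and AWS credentials configured, the actual call would
# look like this (commented out so the sketch stays self-contained):
# import boto3
# ec2 = boto3.client("ec2", region_name="us-east-1")
# reservation = ec2.run_instances(**launch_params)

total_ebs_gib = sum(m["Ebs"]["VolumeSize"] for m in launch_params["BlockDeviceMappings"])
print(f"Would launch 1 x {launch_params['InstanceType']} with {total_ebs_gib} GiB of EBS")
```

The point of the sketch is that the appliance is ordinary EC2 plumbing underneath, which is also why the data services listed below matter: raw EBS gives you none of them.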

According to Nexenta, the key benefits include:

  • Access to a fully-featured file (NFS and SMB) and block (iSCSI) storage array;
  • Improved cloud resource efficiency through:
    • data reduction;
    • thin provisioning; and
    • snapshots and clones;
  • Seamless replication to/from NexentaStor and NexentaCloud;
  • Rapid deployment of NexentaCloud instances for test/dev operations;
  • Centralised management of NexentaStor and NexentaCloud;
  • Advanced analytics across your entire Nexenta storage environment; and
  • Migration of legacy applications to the cloud without re-architecting.

There’s an hourly or annual subscription model, and I believe there are also capacity-based licensing options available.

 

But Why?

Some of the young people reading this blog who wear jeans to work every day probably wonder why on earth you’d want to deploy a virtual storage array in your VPC in the first place. Why would your cloud-native applications care about iSCSI access? It’s very likely they don’t. But one of the key reasons why you might consider the NexentaCloud offering is because you’ve not got the time or resources to re-factor your applications and you’ve simply lifted and shifted a bunch of your enterprise applications into the cloud. These are likely applications that depend on infrastructure-level resiliency rather than delivering their own application-level resiliency. In this case, a product like NexentaCloud makes sense in that it provides some of the data services and resiliency that are otherwise lacking with those enterprise applications.

 

Thoughts

I’m intrigued by the NexentaCloud offering (and by Nexenta the company, for that matter). They have a solid history of delivering interesting software-defined storage solutions at a reasonable cost and with decent scale. If you’ve had the chance to play with NexentaStor (or deployed it in production), you’ll know it’s a fairly solid offering with a lot of the features you’d look for in a traditional storage platform. I’m curious to see how many enterprises take advantage of the NexentaCloud product, although I know there are plenty of NexentaStor users out in the wild, and I have no doubt their CxOs are placing a great amount of pressure on them to don the cape and get “to the cloud” post haste.

Storage Field Day 6 – Wrap-up and Link-o-rama

Disclaimer: I recently attended Storage Field Day 6.  My flights, accommodation and other expenses were paid for by Tech Field Day and their sponsors. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

This is a quick post to say thanks once again to the organisers and sponsors of Storage Field Day 6. I had a great time, learnt a lot, and didn’t get much sleep. For easy reference, here’s a list of the posts I did covering the event (not necessarily in chronological order).

Storage Field Day 6 – Day 0
Storage Field Day 6 – Day 1 – Avere
Storage Field Day 6 – Day 1 – StorMagic
Storage Field Day 6 – Day 1 – Tegile
Storage Field Day 6 – Day 2 – Coho Data
Storage Field Day 6 – Day 2 – Nexenta
Storage Field Day 6 – Day 2 – Pure Storage
Storage Field Day 6 – Day 3 – Nimble Storage
Storage Field Day 6 – Day 3 – NEC
Storage Field Day 6 – (Fairly) Full Disclosure

Also, here are a number of links to posts by my fellow delegates. They’re all really smart folks, and you’d do well to check out what they’re writing about. I’ll update this list as more posts are published.

Eric Shanks
Storage Field Day 6
Local Premises Storage for EC2 Provided by Avere Systems
Nimble Storage Data Analytics – InfoSight
Will All New Storage Arrays be Hybrid?

John Obeto
Today at Storage Field Day 6
Day 2 at Storage Field Day 6: Coho Data
Day 2 at Storage Field Day 6: Nexenta Systems

Arjan Timmerman
Storage Field Day Starts Today :D

Nigel Poulton
Nexenta – Back in da house…

Enrico Signoretti
Avere Systems, great technology but…

Chin-Fah Heoh
MASSive, Impressive, Agile, TEGILE

Jon Klaus
Storage Field Day 6 Day 0 – Sightseeing and Yankee swap
SFD6 Day 1 – Avere, StorMagic, Tegile

Finally, thanks again to Stephen, Claire and Tom, it was a great few days and I really valued the opportunity I was given to attend.

Storage Field Day 6 – (Fairly) Full Disclosure


My full disclosure post will be nowhere near as epic as Justin’s, although he is my role model for this type of thing. Here are my notes on gifts, etc, that I received as a delegate at Storage Field Day 6. I’m going to do this in chronological order, as that was the easiest way for me to take notes during the week.

Tuesday

My wife paid for parking at BNE airport when she dropped me off. I also bought McDonald’s for lunch at SYD, paid for by myself (in more ways than one). A period of time passed and I consumed plane “food”. This was included in the price of the ticket. Alcoholic beverages were not, but I stuck with water. Bless you United for keeping the economy class sober on long-haul flights.

On Tuesday night we had the Delegate dinner at Genji Steak House – a Teppanyaki restaurant in San Jose. I had the gyoza, followed by chicken and prawns with water. This was paid for by Tech Field Day. I also received 2 Manchester (City and United) beanies and 3 large blocks of Cadbury chocolate as part of the gift swap. Tom also gave me a care pack of various American snacks, including some Starbucks Iced Coffee and Nutella. I gave most of it to an American friend prior to my departure. I also had 2 Dos Equis beers in the hotel bar with everyone. This was also paid for by Tech Field Day.

Wednesday

At Avere’s presentation on Wednesday morning I was given an Avere t-shirt. We had lunch at Bhava Communications in Emeryville (the location of the first two presentations). I had some sandwiches, a cookie, and a can of Coke. At StorMagic’s presentation I was given a 4GB USB stick with StorMagic info on it, as well as a personalised, leather-bound notebook. At Tegile’s presentation I received a 2,200mAh portable USB charger thing. I also had a bottle of water.

On Wednesday night we had a delegate dinner (paid for by Tech Field Day) at an “Asian fusion” restaurant called Mosaic. I had the Triple Crown (Calamari, scallops, tiger prawns, asparagus, ginger, white wine garlic sauce). We then went to “Tech Field Day at the Movies” with delegates and friends at the Camera 12 Downtown. We watched Twelve Monkeys. I had a bottle of water from the concession stand. Tech Field Day covered entry for delegates and friends.

Thursday

Thursday morning we had breakfast at Coho Data. I had a sausage and egg roll, some Greek yoghurt and an orange juice. I also received a personalised LEGO minifig, a LEGO Creator kit (31018), a foam fish hat (!) and a Coho Data sticker. At Nexenta’s session I received a Nexenta notepad, orange Converse sneakers with Nexenta embroidered on them and a Nexenta-branded orange squishy ball. Lunch was pizza, some cheesy bread, boneless chicken and a bottle of water. At the Pure Storage session I received a Pure Storage-branded pilsener glass and an 8GB USB stick in a nice little wooden box.

For dinner on Thursday we had canapés and drinks at Cucina Venti, an Italian restaurant. This was paid for by Tech Field Day, as was my entry to the Computer History Museum that night (a personal highlight).

Friday

Breakfast was had at Nimble Storage’s office. I had bacon, eggs and juice. I also received a Nimble-branded jacket and a Raspberry Pi kit. At NEC, I received a set of NEC-branded headphones. We had lunch at NEC’s office, which consisted of Thai food.

I then made my own way to SFO with a friend.

Conclusion

I’d like to extend my thanks to the Storage Field Day organisers and the sponsors of the event. I had a great time. Since I can’t think of a good way to wrap up this post I’ll leave you with a photo.

[image: SFD6 swag]

 

Storage Field Day 6 – Day 2 – Nexenta


I’ve been a fan of Nexenta for some time, having used the Community Edition of NexentaStor in various lab environments over the past few years, so I was interested to hear what they had to talk about. But firstly, you can see video footage of the Nexenta presentation here. You can also download my raw notes from the presentation here. Finally, here’s a link to the Nexenta website that covers some of what they presented.

[image: Nexenta at SFD6]

Murat Karslioglu presented on SDS for VSAN and VDI, with the focus being primarily on NexentaConnect for VMware Virtual SAN. According to Murat, VMware’s Virtual SAN currently has the following limitations.

  • No storage for file
  • Data is locked to VSAN cluster nodes
  • Limited to 32 nodes
  • All ESXi hosts must be on the HCL
  • Must have VSAN license
  • Every ESXi host contributing storage needs SSD
  • SSD is only used for cache
  • Does not support SCSI reservations
  • 2TB vmdk size limitation [not convinced this is still the case]
  • No compression, no deduplication

Murat described Virtual SAN as “more shared scale-out datastore than SAN”.

[image: Nexenta vSAN]

So what does NexentaConnect for Virtual SAN get you?

  • File storage:
    • User data
    • Departmental data
    • ISO files
  • Managed through the vSphere web client
  • Inline compression
  • Performance and health monitoring
  • Folder and volume level snapshots
  • Maintains data integrity during stressful tasks
  • HA file services via VMware HA

He also mentioned that EVO:RAIL now has file services via Nexenta’s partnership with Dell and Supermicro.

So this all sounds technically pretty neat, and I applaud Nexenta for taking something they’ve historically been good at and applying it to a new type of infrastructure that people are talking about, if not adopting in droves. The problem is I’m just not sure why you would bother doing this with VMware Virtual SAN and Nexenta in this type of combination. The licensing alone, at least from a VMware perspective, must be a little bit awful. I would have thought it would make more sense to save the money on Virtual SAN licenses and invest in some solid infrastructure using NexentaStor.

I guess the other use case is a situation where someone has already invested in Virtual SAN, potentially for a VDI deployment, and just needs to provide some file access to a few physical hosts or some other non-standard client. But I’m struggling to see where that might be the case on a regular basis. All that said though, I work in a different market to the US, and I recall having conversations with people at VMworld this year about the benefits of virtualising workloads, so I guess there are people out there who are going to go for this solution.

Don’t misunderstand, I really like Nexenta’s core product, I’m just not convinced that this takes them in the right direction. In any case, go and check them out if you haven’t already, I think it’s worthwhile.