Cloudian Does Object Smart and at Scale

Disclaimer: I recently attended Storage Field Day 10.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

cloudian-logo

Before I get started, you can find a link to my raw notes on Cloudian's presentation here. You can also see videos of the presentation here.

I’m quite keen on Cloudian’s story, having seen them in action at Storage Field Day 7. I also got to have a beer with Michael Tso at the SFD10 mixer and talk all things Australian.

SFD10_Cloudian_MichaelTso_Cropped

 

Smart and at Scale

Cloudian took us through some of their driving design principles, and I thought it was worth covering these off again. You’ll notice the word “scale” gets used a lot, and this has been a particularly important capability for Cloudian. They did a blog post on it too.

One of the key features of the HyperStore solution is that it needed to support what Cloudian term “Smart Operations at Scale”. This requires the tech to:

  • Be simple and intuitive;
  • Be fully automated from an operations perspective (e.g. add/remove drives/nodes, upgrades);
  • Provide visual storage analytics to automatically see hot spots; and
  • Offer self service consumption (via a policy based approach).

Cloudian have also worked hard to ensure they can provide “Extreme Durability at Scale”, with the HyperStore solution offering the ability to:

  • Be always repaired, always verified;
  • Offer automated failure avoidance (through the use of Dynamic Object Routing); and
  • Be “enterprise” grade.

One of the keys to being able to deliver a scalable solution has been the ability to provide the end user with “Smart Support at Scale”, primarily through the use of:

  • Proactive (not reactive) support;
  • Continuous monitoring; and
  • Global analytics.

The analytics piece is a big part of the Cloudian puzzle, and something they’ve been working hard on recently. With their visual analytics you can analyse your data across the globe and plan for the future based on your demand. Cloudian not only perform analytics at scale, but have also designed the platform to facilitate operations at scale, with:

  • One screen for hundreds of nodes (in a kind of “beehive” layout);
  • Instant view of a node’s health;
  • The ability to add nodes with one click; and
  • The ability to dynamically rebalance the cluster.

When it comes to software defined storage platforms, the simple things matter, particularly as it relates to your interactions with the hardware platform. To that end, with HyperStore you’ve got the ability to do some basic stuff, like:

  • Identifying node types;
  • Blinking suspect servers; and
  • Blinking suspect drives.

When you’re running a metric s**t-tonne of these devices in a very big data centre, this kind of capability is really important, especially when it comes to maintenance. As is the ability to perform rolling upgrades of the platform with no downtime and in an automated fashion. When it comes to rebuilds, Cloudian provides insight into both data rebuild information and cluster rebalance information – both handy things to know when something’s gone sideways.

The Cloudian platform also does “Smart Disk Balancing”. If there’s a disk imbalance it will change the tokens pointing from “highly used disk to low used disk”. If there’s a disk failure, new data automatically routes to newly assigned resources. Makes sense, and nice to see they’ve thought it through.
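Cloudian didn’t go into the implementation details of how the token shuffling works under the covers, so here’s a rough sketch of the general idea in Python. To be clear, the Disk class, the “1% of capacity per token” assumption and the rebalance threshold are all my own illustrative inventions, not HyperStore internals.

```python
# Illustrative sketch only - not Cloudian HyperStore code.
# Each disk owns a set of "tokens"; objects hash to a token, and the token
# determines which disk stores the object. Rebalancing moves tokens from
# hot (highly used) disks to cold (lightly used) ones.
from dataclasses import dataclass, field


@dataclass
class Disk:
    name: str
    used_pct: float                      # current utilisation (assumed metric)
    tokens: list = field(default_factory=list)


def rebalance(disks, max_spread=10.0):
    """Move tokens (and the data that hashes to them) from the most used
    disk to the least used one until the utilisation spread is within
    max_spread percentage points."""
    while True:
        disks.sort(key=lambda d: d.used_pct)
        cold, hot = disks[0], disks[-1]
        if hot.used_pct - cold.used_pct <= max_spread or not hot.tokens:
            return disks
        cold.tokens.append(hot.tokens.pop())
        # Assume, for illustration, each token maps to roughly 1% of a disk.
        hot.used_pct -= 1.0
        cold.used_pct += 1.0


disks = [Disk("disk0", used_pct=82.0, tokens=list(range(0, 50))),
         Disk("disk1", used_pct=41.0, tokens=list(range(50, 100)))]
for d in rebalance(disks):
    print(d.name, round(d.used_pct, 1), len(d.tokens), "tokens")
```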

 

Further Reading and Conclusion

Cloudian make quite a big deal of their S3 compatibility. They even gave me a sticker that says it’s guaranteed. It looks a lot like this:

Badge_S3YourDataCenter_transparent2

Chris Evans also did a series of posts on S3 and Cloudian that you can read here, here and here. He also did a great preview post prior to SFD10 which is also worth a look. He’s a good lad, he is. Particularly when I need to point you, my loyal reader, to well written articles on topics I’m a little sketchy on.

S3 compatibility is a big thing for a lot of people looking at deploying object storage, primarily because AWS are leaps and bounds ahead of the pack in terms of object storage functionality, deployed instances, and general mindshare. Cloudian haven’t just hitched their wagon to S3 compatibility though. In my opinion they’ve improved on the S3 experience through clever design and a solid approach to some fundamental issues that arise when you’re deploying a whole bunch of devices in data centres that don’t often have staff members present.
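The practical upside of that S3 compatibility is that your existing S3 tooling should just work once you point it at a HyperStore endpoint. Here’s a minimal boto3 sketch to illustrate the idea – the endpoint URL, credentials and bucket name are placeholders, not anything Cloudian-specific.

```python
# Minimal sketch: standard S3 tooling (boto3) pointed at an S3-compatible
# endpoint. The endpoint, credentials and bucket name are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example.internal",   # placeholder object store endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.create_bucket(Bucket="demo-bucket")
s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello object storage")

# List what we just wrote.
for obj in s3.list_objects_v2(Bucket="demo-bucket").get("Contents", []):
    print(obj["Key"], obj["Size"])
```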

Nimble Storage are Relentless in their Pursuit of Support Excellence

Disclaimer: I recently attended Storage Field Day 10.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

400px-Nimble_logo

Before I get cracking, you can find a link to my raw notes on Nimble Storage's presentation here. You can also see videos of the presentation here.

I’ve written about Nimble recently. I went to their Predictive Flash Platform launch in San Francisco earlier this year. You can read about that here (disclosure is here). I’ve also talked about InfoSight with some level of enthusiasm. I think this all ties in nicely with my thoughts on their SFD10 presentation.

Before I get into that though, kudos to Tom McKnight (VP Hardware Engineering) for his demo on component resilience (pulling six drives and a PSU, and forcing a controller failure). Demos are tough at the best of times, and it’s always nice to see people with the confidence to stand behind their product and run it through its paces in front of a “live studio audience”.

SFD10_Nimble_TomMcKnight

 

Tier 3 Before You Know It

Rod Bagg (VP Analytics and Customer Support) provided an overview of InfoSight. He spoke a lot about what he called the “app-data gap”, with the causes of problems in the environment being:

  • Storage related;
  • Configuration issues;
  • Non-storage best practices;
  • Interoperability issues; and
  • Host, compute, VM, etc.

But closing the app-data gap with tech (in this case, SSDs) is often not enough. You need predictive analytics. Every week InfoSight analyses more than a trillion data points. And it’s pretty good at helping you make your infrastructure transparent. According to Rod, it:

  • Proactively informs and guides without alarm fatigue;
  • Predicts future needs and simplifies planning; and
  • Delivers a transformed support experience from Level 3 experts.

Nimble say that 9 out of 10 issues are detected before you know about them. “If we know about an issue, it shouldn’t happen to you”. Rod also spoke at some length about the traditional Level 3 Support model vs. Nimble’s approach. He said that you could “pick up the phone, dial 1-877-364-6253, and get Level 3 Support”, with the average hold time being <1 minute. This isn’t your standard vendor support experience, and Nimble were very keen to remind us of that.

SFD10_Nimble_TraditionalSupport

 

Further Reading and Conclusion

I’ve said before that I think InfoSight is a really cool tool. It’s not just about Nimble’s support model, but the value of the data they collect and what they’re doing with that data to solve support issues in a proactive fashion. It also provides insight (!) into what customers are doing out in the real world with their arrays. Ray Lucchesi had a nice write-up on IO distribution here that is well worth a read. Chris M. Evans also did a handy preview post on Nimble that you can find here.

Whenever people have asked me in the past what they should be looking for in a storage array, I’ve been reluctant to recommend vendors based purely on performance specifications or the pretty bezel. When I was working in operations, the key success criterion for me was the vendor’s ability to follow up on issues with reliable, prompt support. Nothing works perfectly, despite what vendors tell you. Having the ability to fix things in a timely fashion, through solid logistics, good staff and a really capable analytics platform, provides vendors like Nimble with an advantage over their competitors. Indeed, a few other vendors, including Pure and Kaminario, have seen the value in this approach and are taking similar approaches with their support models. It will be really interesting to see how the platform evolves over time and how Nimble’s relentless pursuit of support excellence scales as the company grows bigger.

Tintri Keep Doing What They Do, And Well

Disclaimer: I recently attended Storage Field Day 10.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Tintri_Logo_Horizontal_1024

Before I get into it, you can find a link to my notes on Tintri's presentation here. You can also see videos of the presentation here.

I’ve written about Tintri recently. As recently, in fact, as a week before I saw them at SFD10. You can check out my article on their most recent product announcements here.

 

VAS but not AAS (and that’s alright)

Tintri talk a lot about VM-aware Storage (or VAS as they put it). There’s something about the acronym that makes me cringe, but the sentiment is admirable. They put it all over their marketing stuff. They’re committed to the acronym, whether I like it or not. But what exactly is VM-aware Storage? According to Tintri, it provides:

  • VM-level QoS;
  • VM-level analytics;
  • VM data management;
  • VM-level automation with PowerShell and REST (a rough REST sketch follows this list); and
  • Support across multiple hypervisors (VMware, Hyper-V, OpenStack, Red Hat).
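Tintri didn’t walk us through the API itself during the session, so the snippet below is purely a hypothetical illustration of what VM-level automation over REST looks like in practice, using Python’s requests library. The base URL, paths and JSON field names are my own assumptions for illustration, not the documented Tintri API – check Tintri’s API documentation for the real thing.

```python
# Hypothetical sketch of VM-level automation over REST using Python's
# requests library. The base URL, paths and JSON fields below are my own
# assumptions for illustration - not the documented Tintri API.
import requests

BASE = "https://vmstore.example.internal/api"       # placeholder address
session = requests.Session()
session.verify = False                              # lab only; use proper certs in production

# Authenticate (endpoint and payload are assumed for illustration).
session.post(f"{BASE}/session/login",
             json={"username": "admin", "password": "secret"})

# Pull per-VM stats and flag anything with high end-to-end latency.
for vm in session.get(f"{BASE}/vm").json():
    latency_ms = vm.get("totalLatencyMs", 0)
    if latency_ms > 10:
        print(f"{vm.get('name')}: {latency_ms} ms end-to-end latency")
```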

Justin Lauer, Global Evangelist with Tintri, took us through a demo of VAS and the QoS capabilities built in to the Tintri platform.

SFD10_Tintri_Justin

I particularly liked the fact that I can get a view of end to end latency (host / network / storage (contention and flash) / throttle latency). In my opinion this is something that people have struggled with for some time, and it looks like Tintri have a really good story to tell here. I also liked the look of the “Capacity gas gauge” (petrol for Antipodeans), providing an insight into when you’ll run out of either performance, capacity, or both.

So what’s AAS then? Well, in my mind at least, this is the ability to delve into application-level performance and monitoring, rather than just VM-level. And I don’t think Tintri are doing that just yet. Which, to my way of thinking, isn’t a problem, as I think a bunch of other vendors are struggling to really do this in a cogent fashion either. But I want to know what my key web server tier is doing, for example, and I don’t want to assume that it still lives on the datastore that I tagged for it when I first deployed it. I’m not sure that I get this with VAS, but I still think it’s a long way ahead of where we were a few years ago, getting stats out of volumes and not a lot else.

 

Further Reading and Conclusion

In the olden days (a good fifteen years ago) I used to struggle to get multiple Oracle instances to play nicely on the same NT4 host. But I didn’t have a large number of physical hosts to play with, and I had limited options when I wanted to share resources across applications. Virtualisation gave us a way to slice up physical resources in a more precise fashion, and as a result it’s become simple to justify running one application per VM. In this way we can still get insights into our applications from understanding what our VMs are doing. This is no minor thing when you’re looking after storage in the enterprise – it’s a challenge at the best of times. Tintri have embraced the concept of intelligent analytics in their arrays in the same way that Nimble and Pure have started really making use of the thousands of data points that they collect every minute.

But what if you’re not running virtualised workloads? Well, you’re not going to get as much from this. But you’ve probably got a whole lot of different requirements you’re working to as well. Tintri is really built from the ground up to deliver insight into virtualised workloads that has been otherwise unavailable. I’m hoping to see them take it to the next level with application-centric monitoring.

Finally, Enrico did some more thorough analysis here that’s worth your time. And Chris’s SFD10 preview post on Tintri is worth a gander as well.

 

Pure Storage really aren’t a one-trick pony

Disclaimer: I recently attended Storage Field Day 10.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

PureStorage_logo

Before I get started, you can find a link to the Pure Storage presentation at Storage Field Day 10 here. You can also see the videos of the presentation here.

In talking with people about Pure Storage, some of the feedback I received was that they were a “one-trick pony”. By that I mean that people thought all they did was offer an all-flash array and nothing more. I think there’s always been a lot more to Pure than just the array. To wit, their approach to hardware maintenance and lifecycles, via the Evergreen Storage program, as well as their implementation of Pure1 has had me thinking for a while that they’re not your father’s AFA vendor.

 

FlashBlade

I wrote about FlashBlade when it was first announced and I was cautiously optimistic that they were onto something kind of cool. As it happened, Pure spent a lot of time at SFD10 giving us a run-through on what some of the thinking around the design of the FlashBlade was, and it solidified some of my ideas around the product.

Here’s a happy snap of Brian Gold (@briantgold) taking us through the hardware in the FlashBlade.

Pure_SFD10

Among the challenges Pure aimed to address when designing the FlashBlade was the need for scale in terms of:

  • Capacity – from terabytes to petabytes;
  • Concurrency – from a few users to thousands; and
  • Access patterns – from small files and metadata to large, streaming workloads.

They also wanted to do this without drowning the users or administrators in complexity.

One of the key approaches to this problem was to adopt a modular architecture through the use of the blade chassis. While we talk a lot about the flash in Pure’s FlashBlade, the network architecture shouldn’t be underestimated. A key component of Pure’s “software-defined networking” is hardware (no, the irony is not lost on me), with two Broadcom Trident-II Ethernet switch ASICs collapsing three networks (Front End, Back End and Control) into one high performance fabric providing eight 40Gb/s QSFP connections into customer Top of Rack switches. This gives Pure a high performance, integrated fabric connected to scalable server nodes. While some of the specifications at the time of announcement were limited to the chassis, you’ll start to see these numbers increase as the SDN component is improved over time.

Brian was keen to see us thinking about the FlashBlade hardware design in the following terms:

  • An integrated blade chassis provides density and simplicity;
  • All-flash storage unlocks the parallelism inside an SSD; and
  • An NVRAM engine built for distributed transaction processing.

Rob Lee then went on to talk about the software side of the equation, with the key takeaways from the software side of things being Pure’s desire to:

  • Achieve scalability through parallelism at all layers;
  • Create parallelism through deep partitioning and distribution; and
  • Minimise the cost of distributed coordination.

 

Further Reading and Conclusion

Chris Evans did a nice article on Pure prior to SFD10. Chris Mellor did a decent write-up (something he’s prone to) at the time of release, and Enrico put together some interesting insights as well. Pure are certainly bucking the trend of commodity hardware by using their own stuff. They’re doing scale out differently as well, which is something some pundits aren’t entirely pleased about. All that said, I think the next 12 months will be critical to the success of the scale-out file and object play. Pure’s ability to execute on a technically compelling roadmap, as well as grabbing the interest of customers in rich media, analytics and technical computing will be the ultimate measure of what looks to be a well thought out product architecture. If nothing else, they’ve come up with a chassis that does this …

Scale Computing – If Only Everything Else Were This Simple

Disclaimer: Scale Computing have provided me with the use of a refurbished HC3 system comprised of 3 HC1000 nodes, along with an HP 2920-24G Gigabit switch. They are keen for me to post about my experiences using the platform and I am too, so this arrangement works out well for both parties. I’m not a professional equipment reviewer, and everyone’s situation is different, so if you’re interested in this product I recommend getting in touch with Scale Computing.

Scale_Logo_High_Res

Introduction

This is the first of what is going to be a few posts covering my experiences with the Scale Computing HC3 platform. By way of introduction, I recently wrote a post on some of the new features available with HC3. Trevor Pott provides an excellent overview of the platform here, and Taneja Group provides an interesting head to head comparison with VSAN here.

In this post I’d like to cover off my installation experience and make a brief mention of the firmware update process.

 

Background

I’d heard of Scale Computing around the traps, but hadn’t really taken the time to get to know them. For whatever reason I was given a run-through of their gear by Alan Conboy. We agreed that it would be good for me to get hands on with some kit, and by the next week I had three boxes of HC1000 nodes lob up at my front door. Scale support staff contacted me to arrange installation as soon as they knew the delivery had occurred. I had to travel, however, so I asked them to send me through the instructions and I’d get to it when I got home. Long story short, I’d been back a week before I got these things out of the box and started looking into getting stuff set up. By the way, the screwdriver included with every node was a nice touch.

The other issue I had is that I really haven’t had a functional lab environment at home for some time, so I had no switching or basic infrastructure services to speak of. And my internet connection is ADSL 1, so uploading big files can be a pain. And while I have DNS in the house it’s really not enterprise grade. In some ways, my generally shonky home “lab” environment is similar to a lot of small business environments I’ve come across during my career. Perhaps this is why Scale support staff are never too stressed about the key elements being missing.

As I mentioned in the disclaimer, Scale also kindly provided me with an HP 2920-24G Gigabit switch. I set this up per the instructions here. In the real world, you’d be running off two switches. But as anyone who’s been to my house can attest, my home office is a long way from the real world.

HC1000_1

I do have a rack at home, but it’s full of games consoles for the moment. I’m currently working on finding a more appropriate home for the HC3 cluster.

 

First Problem

So, I unpacked and cabled up the three nodes as per the instructions in the HC3 Getting Started Guide. I initialised each node, and then started the cluster initialisation process on the first node. I couldn’t, however, get the nodes to join the cluster or talk to each other. I’d spent about an hour unpacking everything and then another hour futzing my way about the nodes trying to get them to talk to each other (or me). Checked cables, checked switch configuration, and so forth. No dice. It was Saturday afternoon, so I figured I’d move on to some LEGO. I sent off a message to Scale support to provide an update on my progress and figured I’d hear back Tuesday AM my time (the beauty of living +10 GMT). To my surprise I got a response back from Tom Roeder five minutes after I sent the e-mail. I chipped him about working too late but he claims he was already working on another case.

It turns out that the nodes I was sent had both the 10Gbps and 1Gbps cards installed in them, and by default the 10Gbps cards were being used. The simplest fix for this (given that I wouldn’t be using 10Gbps in the short term) was to remove the cards from each node.

HC1000_2

Once this was done, I had to log into each node as the root user and run the following command:

/opt/scale/libexec/40-scale-persistent-net-generate

I then rebooted and reinitialised each node. At this point I was then able to get them joining the cluster. This took about twenty minutes. Happy days.

 

Second Problem

So I thought everything was fine, but I started getting messages about the second node being unable to update from the configured time source. I messaged Tom and he got me to open up a support tunnel on one of the working nodes (this has been a bloody awesome feature of the support process). While the cluster looked in good shape, he wasn’t able to ping external DNS servers (e.g. 8.8.8.8) from node 2, nor could he get it to synchronise with the NTP pool I’d nominated. I checked and re-checked the new Ethernet cables I’d used. I triple-checked the switch config. I rebooted the Airport Express (!) that everything was hanging off in my office. Due to the connectivity weirdness I was also unable to update the firmware on the cluster. I grumbled and moaned a lot.

Tom then had another poke around and noticed that, for some reason, no gateway was configured on node 2. He added one in and voilà, the node started merrily chatting to its peers and the outside world. Tom has the patience of a saint. And I was pretty excited that it was working.
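If you want to rule out the basics yourself before bothering someone like Tom, a quick reachability check is easy enough to script from anything on the same subnet. This is a generic sketch – the gateway address is a placeholder and nothing here is Scale-specific.

```python
# Generic reachability checks for a misbehaving node's network: can we
# reach the default gateway and an external DNS server? The gateway
# address is a placeholder and nothing here is Scale-specific.
import socket


def tcp_reachable(host, port, timeout=3):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


checks = {
    "default gateway (SSH)": ("192.168.1.1", 22),   # placeholder gateway address
    "external DNS (Google)": ("8.8.8.8", 53),
}

for label, (host, port) in checks.items():
    print(f"{label}: {'OK' if tcp_reachable(host, port) else 'unreachable'}")
```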

SC_Tom_tweet

Tom has been super supportive in, well, supporting me during this installation. He’s been responsive, knowledgeable and has made the whole installation experience a breeze, minor glitches notwithstanding.

 

Firmware Update

I thought I’d also quickly run through the firmware update process, as it’s extremely simple and I like posts with screenshots. I think it took 10 minutes, tops. Which is a little unusual, and probably down to a few factors, including the lack of VMs running on the nodes (the day job has been a bit busy) and the fact that it was a fairly minor update. Scale generally suggest 20 minutes per node for updates.

Here’s the process. Firstly, if there’s new firmware available to install, you’ll see it in the top-right corner of the HC3 GUI.

sc01

Click on “Update Available” for more information. You can also access the release notes from here.

sc02

If you click on “Apply Update” you’ll be asked to confirm your decision.

sc03

You’ll then see a bunch of activity.

sc04

 

sc05

 

sc06

And once it’s done, the version will change. In this case I went from 6.4.1 to 6.4.2 (a reasonably minor update).

sc08

The whole thing took about 10 minutes, according to the cluster logs.

sc09

 

Conclusion

Getting up and running with the HC3 platform has been a snap so far, even though there were some minor issues getting started. Support staff were super responsive, instructions were easy to follow and the general experience has been top notch. Coupled with the fact that the interface is really easy to use, I think Scale are onto a winner here, particularly given the market they’re aiming at and the price point. I’m looking forward to putting together some more articles on actually using the kit.

 

Kaminario are doing some stuff we’ve seen before, but that’s okay

Disclaimer: I recently attended Storage Field Day 10.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

kaminario

For each of the xFD-related posts I do I like to include a few things, namely a link to my presentation notes and a link to the videos of the presentation.

I wrote a brief summary of my last encounter with Kaminario at Storage Field Day 7 – you can check it out here. In this post I’d like to look more into what they’re doing from an analytics and array intelligence perspective. But before I do that I’d like to tip my hat to Shachar Fienblit. Fienblit did a great presentation on what’s coming in storage technology (from Kaminario’s perspective on the industry) and I urge you to check out the video.

SFD10_Kaminario_Fienblit

So it’s probably not really fair to say that Kaminario “are doing some stuff we’ve seen before, but that’s okay”. A better description might be that I think there are thematic commonalities between Kaminario’s approach and Nimble, Tintri and Pure’s various approaches to array analytics and understanding workload.

 

HealthShield

Kaminario have been developing a vision of what they call “DC-aware” storage, with their key imperative being to “simplify and optimise storage deployment and the integration between layers of the IT stack without compromising on cost”. In this fashion they say they differentiate themselves from the current hyperconverged infrastructure paradigm. So what are Kaminario doing?

Firstly, HealthShield is helping them to make some interesting design decisions for the K2 product. What’s HealthShield? It’s “a cloud-based, call-home and analytics engine that delivers proactive monitoring and troubleshooting. Tightly integrated with Kaminario’s world-class support, HealthShield complements high-availability features by ensuring hardware failures never impact availability”. As part of Kaminario’s strategy around understanding the infrastructure better, they’re looking to extend their HealthShield Analytics infrastructure to ensure that storage is both optimised and aware of the full IT deployment.

To this end, with HealthShield they’ve created a Software-as-a-Service offering that helps to “consolidate, analyse, optimise, and automate storage configurations”. While Kaminario’s technical area of expertise is with their storage arrays, they’re very keen to extend this capability beyond the storage array and into the broader application and infrastructure stack. This can only be a good thing, as I think array vendors historically have done something of a shoddy job at understanding what the environment is doing beyond simple IO measurements. The cool thing about making this a SaaS offering is that, as Kaminario say, you can do “Analytics in the cloud so your controllers can work on IO”. Which is a good point too, as we’ve all witnessed the problems that can occur when your storage controllers are working hard on processing a bunch of data points rather than dishing up the required throughput for your key applications. Both Pure and Nimble do this with Pure1 and InfoSight, and I think it’s a very sensible approach to managing your understanding of your infrastructure’s performance. Finally, HealthShield collects thousands of data points, and partner involvement only happens once the customer gives permission.

 

So What Can I Do With This Data?

Shai Maskit, Director of Technical Marketing (pictured below) did a great demo on Kaminario’s Quality of Service (QoS) capabilities that I think added a level of clarity to the HealthShield story. While HealthShield can be a great help to Kaminario in tuning their arrays, it also provides insight into the right settings to use when applying QoS policies.

SFD10_Kaminario_Demo

But what’s the problem with QoS that Kaminario are trying to solve? In Kaminario’s opinion, existing QoS solutions are complicated to administer and don’t integrate into the broader set of application delivery operations. Kaminario have set out to do a few things.

Firstly, they want to simplify storage QoS. They do this by abstracting QoS based on customer-defined policies. In this scenario, the customer also defines preferences, not just how to implement QoS in the environment. The key benefit of this approach is that you can then integrate QoS with the application, allowing you to set QoS policies for specific workloads (e.g. OLTP vs OLAP), while closing the gap between the database and its storage platform.
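Kaminario didn’t share the actual policy format, but conceptually a policy-based approach to QoS looks something like the sketch below. The policy names, limits and tag-based selection are my own illustrative assumptions, not K2 settings.

```python
# Conceptual illustration of policy-based QoS. The policy names, limits
# and tag-based selection are illustrative assumptions, not K2 settings.
from dataclasses import dataclass


@dataclass
class QosPolicy:
    name: str
    max_iops: int
    max_mbps: int
    latency_target_ms: float


POLICIES = {
    "oltp":     QosPolicy("oltp", max_iops=50_000, max_mbps=500, latency_target_ms=1.0),
    "olap":     QosPolicy("olap", max_iops=10_000, max_mbps=2_000, latency_target_ms=10.0),
    "dev-test": QosPolicy("dev-test", max_iops=2_000, max_mbps=200, latency_target_ms=20.0),
}


def policy_for(volume_tags):
    """Pick a QoS policy from a volume's tags; fall back to dev-test."""
    for tag in volume_tags:
        if tag in POLICIES:
            return POLICIES[tag]
    return POLICIES["dev-test"]


print(policy_for({"olap", "reporting"}).max_iops)   # -> 10000
```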

Another key benefit is the availability of performance data, with analytics being made available to detect changing performance patterns and automatically adapt. This also provides insight into workload migration to the K2 environment based on application performance. This can be extremely handy when you don’t want to run everything on your all flash array.

 

Conclusion

I love that every storage vendor I talk to now is either heavily promoting their analytics capability or gushing about its prominence on their product roadmap. While each vendor does things slightly differently, I think it’s great that there’s been real progress in the marketplace to extend the discussion beyond speeds and feeds into a more mature conversation around understanding how applications are behaving and what can be done to improve performance to enable improved business operations. QoS doesn’t have to be a super onerous endeavour either. Kaminario have certainly taken an interesting approach to this, and I look forward to seeing how HealthShield develops over the next little while.

Brisbane VMUG – June 2016

hero_vmug_express_2011

The second Brisbane VMUG for 2016 will be held on Tuesday 21st June at the Pig ‘N’ Whistle Riverside in the city (Riverside Centre, 123 Eagle Street, Brisbane) from 2 – 4 pm (*note the new time). It’s sponsored by Pure Storage and should be a lot of fun.

Here’s the agenda:

  • VMware Presentation: How to use vRealize Orchestration to Automate IT
  • User Presentation: QUT vRealize Automation Demo and Lessons Learned
  • Pure Storage Presentation: vRealize your Future with VMware and Pure Storage
  • Refreshments

I’m really looking forward to Michael Francis starting his enablement series on vRO – these will become a regular feature of our meetings. I’m also stoked to have QUT on board for the user presentation and always happy to have veteran VMUGger Craig Waters up to present on Pure Storage. You can find out more information and register for the event here. I hope to see you there. Also, if you’re interested in sponsoring one of these events, please get in touch with me and I can help make it happen.

Storage Field Day 10 – (Fairly) Full Disclosure

Disclaimer: I recently attended Storage Field Day 10.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

SFD-Logo2-150x150

Here are my notes on gifts, etc, that I received as a delegate at Storage Field Day 10. I’d like to point out that I’m not trying to play companies off against each other. I don’t have feelings one way or another about receiving gifts at these events (although I generally prefer small things I can fit in my suitcase). Rather, I’m just trying to make it clear what I received during this event to ensure that we’re all on the same page as far as what I’m being influenced by. Some presenters didn’t provide any gifts as part of their session – which is totally fine. I’m going to do this in chronological order, as that was the easiest way for me to take notes during the week. While every delegate’s situation is different, I’d also like to clarify that I took 5 days of training / work time to be at this event (thanks to my employer for being on board).

 

Saturday

I paid for my taxi to BNE airport. I had a burger at Benny Burger in SYD airport. It was quite good. I flew Qantas economy class to SFO. The flights were paid for by Tech Field Day. Plane food was consumed on the flight. It was a generally good experience.

 

Tuesday

When I arrived at the hotel I was given a bag of snacks by Tom. The iced coffee and granola bars came in handy. We had dinner at Il Fornaio at the Westin Hotel. I had some antipasti, pizza fradiavola and 2 Hefeweizen beers (not sure of the brewery).

 

Wednesday

We had breakfast in the hotel. I had bacon, eggs, sausage, fruit and coffee. We also did the Yankee Gift Swap at that time and I scored a very nice stovetop Italian espresso coffee maker (thanks Enrico!). We also had lunch at the hotel, it was something Italian. Cloudian gave each delegate a green pen, bottle opener, 1GB USB stick, and a few Cloudian stickers. We had dinner at Gordon Biersch in San Jose. I had some sliders (hamburgers for small people) and about 5 Golden Export beers.

 

Thursday

Pure Storage gave each delegate a Tile, a pen, some mints, and an 8GB USB stick. Datera gave each delegate a Datera-branded “vortex 16oz double wall 18/8 stainless steel copper vacuum insulated thermal pilsner” (a cup) with our twitter handles on them. Tintri provided us with a Tintri / Nike golf polo shirt, a notepad, a pen, an 8GB USB stick, and a 2600mAh USB charger. We then had happy hour at Tintri. I had a Pt. Bonita Pilsner beer and a couple of fistfuls of prawns. For dinner we went to Taplands. I had a turkey sandwich and 2 Fieldwork Brewing Company Pilsners.

 

Friday

We had breakfast on Friday at Nimble Storage. I had some bacon, sausage and eggs for breakfast with an orange juice. I don’t know why my US comrades struggle so much with the concept of tomato sauce (ketchup) with bacon. But there you go. Nimble gave us each a custom baseball jersey with our name on the back and the Nimble logo. They also gave us each a white lab coat with the Nimble logo on it. My daughters love the coat. Hedvig provided us with a Hedvig sticker and a Hedvig-branded Rogue bluetooth speaker. We had lunch at Hedvig, which was a sandwich, some water, and a really delicious choc-chip cookie. Exablox gave each of us an Exablox-branded aluminium water bottle. We then had happy hour at Exablox. I had two Anchor Brewing Liberty Ale beers (“tastes like freedom”) and some really nice cheese. To finish off we had dinner at Mexicali in Santa Clara. I had a prawn burrito. I didn’t eat anything on the flight home.

 

Conclusion

I’d like to extend my thanks once again to the Tech Field Day organisers and the companies presenting at the event. I had a super enjoyable and educational time. Here’s a photo.

SFD10_disclosure1

 

Storage Field Day 10 – Day 0

Disclaimer: I recently attended Storage Field Day 10.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

SFD-Logo2-150x150

This is just a quick post to share some thoughts on day zero at Storage Field Day 10. I can do crappy tourist snaps as well as, if not better than, the next guy. Here’s the obligatory wing shot. No wait, here are two – one leaving SYD and the other coming in to SFO. Bet you can’t guess which is which.

SFD10_plane1     SFD10_plane2

We all got together for dinner on Tuesday night in the hotel. I had the pizza. It was great.

SFD10_Food

But enough with the holiday snaps and underwhelming travel journal. Thanks again Stephen, Tom, Claire and Megan for having me back, making sure everything is running according to plan and for just being really very decent people. I’ve really enjoyed catching up with the people I’ve met before and meeting the new delegates. Look out for some posts related to the Tech Field Day sessions in the next few weeks. And if you’re in a useful timezone, check out the live streams from the event here, or the recordings afterwards.

Here’s the rough schedule (all times are ‘Merican Pacific).

  • Wednesday, May 25, 9:30 – 11:30 – Kaminario Presents at Storage Field Day 10
  • Wednesday, May 25, 12:30 – 14:30 – Primary Data Presents at Storage Field Day 10
  • Wednesday, May 25, 15:00 – 17:00 – Cloudian Presents at Storage Field Day 10
  • Thursday, May 26, 9:30 – 11:30 – Pure Storage Presents at Storage Field Day 10
  • Thursday, May 26, 13:00 – 15:00 – Datera Presents at Storage Field Day 10
  • Thursday, May 26, 16:00 – 18:00 – Tintri Presents at Storage Field Day 10
  • Friday, May 27, 8:00 – 10:00 – Nimble Storage Presents at Storage Field Day 10
  • Friday, May 27, 10:30 – 12:30 – Hedvig Presents at Storage Field Day 10
  • Friday, May 27, 13:30 – 15:30 – Exablox Presents at Storage Field Day 10

You can also follow along with the live stream here.


Tintri Announces New Scale-Out Storage Platform

I’ve had a few briefings with Tintri now, and talked about Tintri’s T5040 here. Today they announced a few enhancements to their product line, including:

  • Nine new Tintri VMstore T5000 all flash models with capacity expansion capabilities;
  • VM Scale-out software;
  • Tintri Analytics for predictive capacity and performance planning; and
  • Two new Tintri Cloud offerings.

 

Scale-out Storage Platform

You might be familiar with the T5040, T5060 and T5080 models, with the Tintri VMstore T5000 all-flash series being introduced in August 2015. All three models have been updated with new capacity options ranging from 17 TB to 308 TB. These systems use the latest in 3D NAND technology and high density drives to offer organizations both higher capacity and lower $/GB.

Tintri03_NewModels

The new models have the following characteristics:

  • Federated pool of storage. You can now treat multiple Tintri VMstores—both all-flash and hybrid-flash nodes—as a pool of storage. This makes management, planning and resource allocation a lot simpler. You can have up to 32 VMstores in a pool.
  • Scalability and performance. The storage platform is designed to scale to more than one million VMs. Tintri tell me that the  “[s]eparation of control flow from data flow ensures low latency and scalability to a very large number of storage nodes”.
  • This allows you to scale from small to very large with new and existing, all flash and hybrid, partially or fully populated systems.
  • The VM Scale-out software works across any standard high performance Ethernet network, eliminating the need for proprietary interconnects. The VM Scale-out software automatically provides best placement recommendation for VMs.
  • Scale compute and storage independently. Loose coupling of storage and compute provides customers with maximum flexibility to scale these elements independently. I think this is Tintri’s way of saying they’re not (yet) heading down the hyperconverged path.

 

VM Scale-out Software

Tintri’s new VM Scale-out Software (*included with Tintri Global Center Advanced license) provides the following capabilities:

  • Predictive analytics derived from one million statistics collected every 10 minutes across 30 days of history, accounting for peak loads instead of average loads, which (according to Tintri) provides the most accurate predictions. Deep workload analysis identifies VMs that are growing rapidly and applies sophisticated algorithms to model the growth ahead and avoid resource constraints.
  • Least-cost optimization based on multi-dimensional modelling (a toy sketch of the idea follows this list). The control algorithm constantly optimizes across the thousands of VMs in each pool of VMstores, taking into account space savings, the resources required by each VM, and the cost in time and data to move VMs, and makes the least-cost recommendation for VM migration that optimizes the pool.
  • Retain VM policy settings and stats. When a VM is moved, not only are the snapshots moved with the VM, the statistics, protection and QoS policies migrate as well, using an efficient compressed and deduplicated replication protocol.
  • Supports all major hypervisors.
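Tintri obviously didn’t disclose the algorithm in that level of detail, but the “least-cost recommendation” idea can be sketched as a simple cost comparison across candidate moves. The cost model below is a toy of my own making, purely to illustrate the concept.

```python
# Toy sketch of a "least-cost" migration recommendation: for each VM on a
# constrained VMstore, find destinations that can absorb it and recommend
# the cheapest move. The cost model is entirely illustrative.
def recommend_move(vms, stores):
    """vms: list of dicts with 'name', 'store', 'size_gb', 'iops'.
    stores: dict of store name -> {'free_gb', 'headroom_iops'}."""
    best = None
    for vm in vms:
        src = stores[vm["store"]]
        if src["free_gb"] > 0 and src["headroom_iops"] > 0:
            continue    # source isn't constrained; leave this VM alone
        for name, dst in stores.items():
            if name == vm["store"]:
                continue
            if dst["free_gb"] < vm["size_gb"] or dst["headroom_iops"] < vm["iops"]:
                continue    # destination can't absorb the VM
            move_cost = vm["size_gb"]    # proxy for the time/data cost of the move
            if best is None or move_cost < best[2]:
                best = (vm["name"], name, move_cost)
    return best    # (vm, destination, cost) or None


stores = {
    "t5040-a": {"free_gb": 0, "headroom_iops": -500},        # constrained
    "t5080-b": {"free_gb": 4000, "headroom_iops": 20000},
}
vms = [{"name": "sql01", "store": "t5040-a", "size_gb": 800, "iops": 3000},
       {"name": "web01", "store": "t5040-a", "size_gb": 120, "iops": 400}]
print(recommend_move(vms, stores))    # -> ('web01', 't5080-b', 120)
```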

Tintri04_ScaleOut

You can check out a YouTube video on Tintri VM Scale-out (covering optimal VM distribution) here.

 

Tintri Analytics

Tintri has always offered real-time, VM-level analytics as part of its Tintri Operating System and Tintri Global Center management system. This has now been expanded to include a SaaS offering of predictive analytics that provides organizations with the ability to model both capacity and performance requirements. Powered by big data engines such as Apache Spark and Elasticsearch, Tintri Analytics is capable of analyzing stats from 500,000 VMs over several years in one second. By mining the rich VM-level metadata, Tintri Analytics provides customers with information about their environment to help them make better decisions about applications’ behaviours and storage needs.

Tintri Analytics is a SaaS tool that allows you to model storage needs up to 6 months into the future based on up to 3 years of historical data.
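The mechanics of that kind of projection are simple enough to illustrate with a linear trend fit. This is a generic sketch of the concept with made-up numbers – not how Tintri Analytics actually models things under the hood.

```python
# Generic sketch of projecting capacity from historical usage with a
# linear trend fit - not Tintri's actual model, just the basic idea.
import numpy as np

# 36 months of historical used capacity in TB (made-up numbers).
months = np.arange(36)
used_tb = 40 + 2.5 * months + np.random.normal(0, 1.5, size=36)

slope, intercept = np.polyfit(months, used_tb, 1)   # fit a straight line
capacity_tb = 180.0

# Project six months past the end of the history.
future = np.arange(36, 42)
projected = slope * future + intercept
months_to_full = (capacity_tb - intercept) / slope - months[-1]

print(f"Projected usage in 6 months: {projected[-1]:.1f} TB")
print(f"Estimated months until full: {months_to_full:.1f}")
```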

Tintri01_Analytics

Here is a shot of the dashboard. You can see a few things here, including:

  • Your live resource usage for your entire footprint up to 32 VMstores;
  • Average consumption per VM (bottom left); and
  • The types of applications that are your largest consumers of Capacity, Performance and Working Set (bottom center).

Tintri02_Analytics

Here you can see exactly how your usage of capacity, performance and working set have been trending over time. You can see also when you can expect to run out of these resources (and which is on the critical path). It also provides the ability to change the timeframe to alter the projections, or drill into specific application types to understand their impact on your footprint.

There are a number of videos covering Tintri Analytics that I think are worth checking out:

 

Tintri Cloud Suites

Tintri have also come up with a new packaging model called “Tintri Cloud”. Aimed at folks still keen on private cloud deployments, Tintri Cloud combines the Tintri Scale-out platform and the all-flash VMstores.

Customers can start with a single Tintri VMstore T5040 with 17 TB of effective capacity and scale out to the Tintri Foundation Cloud with 1.2 PB in as few as 8 rack units. Or they can grow all the way to the Tintri Ultimate Cloud, which delivers a 10 PB cloud-ready storage infrastructure for up to 160,000 VMs, delivering over 6.4 million IOPS in 64 RU for less than $1/GB effective. Both the Foundation Cloud and Ultimate Cloud include Tintri’s complete set of software offerings for storage management, VM-level analytics, VM Scale-out, replication, QoS, and lifecycle management.
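Just to put those headline numbers in some sort of context, here’s my own back-of-envelope arithmetic on the Ultimate Cloud figures (decimal units assumed; these are my calculations, not Tintri’s).

```python
# Back-of-envelope numbers from the Ultimate Cloud figures quoted above
# (my arithmetic, decimal units assumed - not Tintri's marketing material).
capacity_tb = 10_000        # 10 PB effective
vm_count = 160_000
iops = 6_400_000
rack_units = 64

print(f"Capacity per VM: {capacity_tb * 1000 / vm_count:.1f} GB")   # ~62.5 GB
print(f"IOPS per VM:     {iops / vm_count:.0f}")                    # ~40
print(f"Capacity per RU: {capacity_tb / rack_units:.1f} TB")        # ~156 TB
print(f"IOPS per RU:     {iops / rack_units:,.0f}")                 # 100,000
```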

 

Further Reading and Thoughts

There’s another video covering setting policies on groups of VMs in Tintri Global Center here. You might also like to check out the Tintri Product Launch webinar.

Tintri have made quite a big deal about their “VM-aware” storage in the past, and haven’t been afraid to call out the bigger players on their approach to VM-centric storage. While I think they’ve missed the mark with some of their comments, I’ve enjoyed the approach they’ve taken with their own products. I’ve also certainly been impressed with the demonstrations I’ve been given on the capability built into the arrays and available via Global Center. Deploying workload to the public cloud isn’t for everyone, and Tintri are doing a bang-up job of going for those who still want to run their VM storage decoupled from their compute and in their own data centre. I love the analytics capability, and the UI looks to be fairly straightforward and informative. Trending still seems to be a thing that companies are struggling with, so if a dashboard can help them with further insight then it can’t be a bad thing.