Storage Field Day – I’ll be at SFD10


Woohoo! I’ll be heading to the US in just over a fortnight for another Storage Field Day event. If you haven’t heard of the very excellent Tech Field Day events, you should check them out. I’m looking forward to time travel and spending time with some really smart people for a few days. It’s also worth checking back on the SFD10 website during the event as there’ll be video streaming and updated links to additional content. You can also see the list of delegates and event-related articles that they’ve published.

I think it’s a great line-up of companies this time around, with some I’m familiar with and some not so much.

SFD10_Companies

I’d also like to publicly thank in advance the nice folk from Tech Field Day (Stephen, Claire and Tom) who’ve seen fit to have me back, as well as my employer for giving me time to attend these events. Also big thanks to the companies presenting.

 


Storage Field Day – I’ll be at SFD8


Giddyup. I’m excited to say that I’ll be heading to the US in just over a month for another Storage Field Day event. If you haven’t heard of the very excellent Tech Field Day events, you should check them out. I’m looking forward to crossing time zones (it’s about the destination, not the journey) and hob-nobbing with some really smart people for a few days. It’s also worth checking back on the SFD8 website during the event as there’ll be video streaming and the like. You can also see the (evolving) list of delegates and related articles that they’ve published.

I’d also like to publicly thank in advance the nice folk from Tech Field Day (Stephen, Claire and Tom) who’ve seen fit to have me back. Also thanks to the companies presenting (more about them later).

 


Storage Field Day – I’ll be at SFD7

If you haven’t heard of the very excellent Tech Field Day, head on over and check it out. They do all kinds of Field Days, including one for storage. Number 7 is happening in less than a month and, yes, I’ll be there again. Woohoo! Twice in 6 months. Look, you can see my picture on the internet and everything. I’m looking forward to crossing time zones and hob-nobbing with some really smart people for a few days. It’s also worth checking back on the SFD7 website during the event as there’ll be video streaming and the like. You can also check out the evolving list of delegates.

I’d also like to publicly thank in advance the nice folk from Tech Field Day (Stephen, Claire and Tom) who’ve seen fit to have me back. Also thanks to the sponsors (more about them later).

Storage Field Day 6 – Wrap-up and Link-o-rama

Disclaimer: I recently attended Storage Field Day 6.  My flights, accommodation and other expenses were paid for by Tech Field Day and their sponsors. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

This is a quick post to say thanks once again to the organisers and sponsors of Storage Field Day 6. I had a great time, learnt a lot, and didn’t get much sleep. For easy reference, here’s a list of the posts I did covering the event (not necessarily in chronological order).

Storage Field Day 6 – Day 0
Storage Field Day 6 – Day 1 – Avere
Storage Field Day 6 – Day 1 – StorMagic
Storage Field Day 6 – Day 1 – Tegile
Storage Field Day 6 – Day 2 – Coho Data
Storage Field Day 6 – Day 2 – Nexenta
Storage Field Day 6 – Day 2 – Pure Storage
Storage Field Day 6 – Day 3 – Nimble Storage
Storage Field Day 6 – Day 3 – NEC
Storage Field Day 6 – (Fairly) Full Disclosure

Also, here are a number of links to posts by my fellow delegates. They’re all really smart folks, and you’d do well to check out what they’re writing about. I’ll update this list as more posts are published.

 

Eric Shanks
Storage Field Day 6
Local Premises Storage for EC2 Provided by Avere Systems
Nimble Storage Data Analytics – InfoSight
Will All New Storage Arrays be Hybrid?

John Obeto
Today at Storage Field Day 6
Day 2 at Storage Field Day 6: Coho Data
Day 2 at Storage Field Day 6: Nexenta Systems

Arjan Timmerman
Storage Field Day Starts Today :D

Nigel Poulton
Nexenta – Back in da house…

Enrico Signoretti
Avere Systems, great technology but…

Chin-Fah Heoh
MASSive, Impressive, Agile, TEGILE

Jon Klaus
Storage Field Day 6 Day 0 – Sightseeing and Yankee swap
SFD6 Day 1 – Avere, StorMagic, Tegile
 

Finally, thanks again to Stephen, Claire and Tom. It was a great few days and I really valued the opportunity to attend.

Storage Field Day 6 – Day 2 – Coho Data

Disclaimer: I recently attended Storage Field Day 6.  My flights, accommodation and other expenses were paid for by Tech Field Day and their sponsors. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

For each of the presentations I attended at SFD6, there are a few things I want to include in the post. Firstly, you can see video footage of the Coho Data presentation here. You can also download my raw notes from the presentation here. Finally, here’s a link to the Coho Data website as well as their blog that covers some of what they presented.

First up, while there were no “winners” at SFD6, I’d like to give Coho Data a special shout-out for thinking of giving the delegates customised LEGO minifigs. Here’s a picture of mine. If you’ve ever met me in real life you’ll know that that’s pretty much what I look like, except I don’t cruise around holding a fish.

SFD6_CohoData_Minfig

Secondly, the Coho Data session, delivered by Coho Data’s CTO Andrew Warfield, was an exceptionally technical deep dive and is probably best summarised thusly: go and watch the video again. And probably again after that. In the meantime, here’s a few of my notes on what they do.

Coho Data offers a software stack that is qualified for, and ships on, commodity hardware. It is ostensibly an NFS v3 server for VMware. If you look at the architecture diagram below, the bottom half effectively virtualises flash.

SFD6_CohoData_Architecture

Coho Data is about to ship v2.0, which adds remote asynchronous site-to-site replication (read about that here) and some API additions. Andrew also covered off some analytics and looked at Coho Data’s implementation thereof. You can check out a video that covers roughly the same topic here.

Andrew is a super-smart dude and I simply cannot do justice to his presentation by blabbering on here. Go check out the videos; they’re well worth it.

 

Storage Field Day 6 – Day 3 – NEC

Disclaimer: I recently attended Storage Field Day 6.  My flights, accommodation and other expenses were paid for by Tech Field Day and their sponsors. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

For each of the presentations I attended at SFD6, there are a few things I want to include in the post. Firstly, you can see video footage of the NEC presentation here. You can also download my raw notes from the presentation here. Finally, here’s a link to the NEC website that covers some of what they presented.

I’ll say up front that the NEC session was a little bizarre, in that the two lines of products presented, the M-Series block storage and the HYDRAstor deduplication solution, seemed years apart in terms of capability and general, erm, currency when compared to other vendor offerings. All I’ll say about the M-Series is that it seems like a solid product, but it felt a lot like someone had taken a CLARiiON and added a SAS backend to it. (As an aside, a few people argued that that’s what EMC did with the first VNX a few years ago too.) That wouldn’t be doing it real justice though, so I’ll stick with covering the HYDRAstor here.

Here are some of the highlights from that part of the session. The HYDRAstor is based on a scalable grid storage architecture using a community of smart nodes. These nodes are:

  • Industry standard x86 servers
  • Multiple types allowed (cross-generation clusters)
  • Heterogeneous and open software

The system:

  • Is fully distributed
  • Is self-aware
  • Provides data management services
  • Virtualises hardware
  • Provides on-line upgrades and expansions with multi-generation nodes (see the sketch after this list)
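
To give a feel for how a grid of peer nodes can place data, and keep most of it in place when a newer-generation node joins, here’s a generic consistent-hashing sketch in Python. It’s my own illustration of grid-style placement, not NEC’s actual HYDRAstor internals, which weren’t presented at this level of detail.

```python
import bisect
import hashlib


class HashRing:
    """Generic consistent-hash ring: chunks map to nodes, and adding a
    node (say, a newer-generation server) relocates only a small share
    of chunks. Illustrative only; HYDRAstor's real placement logic is
    its own."""

    def __init__(self, nodes, vnodes=64):
        self.ring = []  # sorted list of (hash position, node name)
        for node in nodes:
            self.add_node(node, vnodes)

    @staticmethod
    def _h(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_node(self, node, vnodes=64):
        for i in range(vnodes):  # virtual nodes smooth the distribution
            bisect.insort(self.ring, (self._h(f"{node}#{i}"), node))

    def node_for(self, chunk_id):
        pos = bisect.bisect(self.ring, (self._h(chunk_id), ""))
        return self.ring[pos % len(self.ring)][1]


# Toy usage: add a fifth (new-generation) node and count relocated chunks.
ring = HashRing([f"gen1-node{i}" for i in range(4)])
before = {f"chunk{i}": ring.node_for(f"chunk{i}") for i in range(1000)}
ring.add_node("gen2-node0")
moved = sum(1 for c, n in before.items() if ring.node_for(c) != n)
print(f"{moved} of 1000 chunks relocated")  # expect roughly a fifth
```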

There is no virtual edition, as NEC wants to control the performance of the whole thing.

The hands-free management also delivers:

  • Simple, fast deployment
  • Self-discovering capacity
  • Self-tuning and resource management
  • Self-healing
  • Web-browser GUI

I’ll say now that the GUI was a massive improvement over the M-Series Windows 2000-themed thing. It wasn’t amazing, but it was light-years ahead of where the M-Series is. NEC say that the system can scale to 165 nodes. Right now the biggest system in the US is 50 nodes.

In summary, I wasn’t a huge fan of what I saw from the M-Series, although I think it could be a solid workhorse in the data centre. I did, however, like the look of the HYDRAstor offering and would recommend you give it a look if you’re in the market for that kind of thing.

Storage Field Day 6 – (Fairly) Full Disclosure

Disclaimer: I recently attended Storage Field Day 6.  My flights, accommodation and other expenses were paid for by Tech Field Day and their sponsors. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

My full disclosure post will be nowhere near as epic as Justin’s, although he is my role model for this type of thing. Here are my notes on gifts, etc., that I received as a delegate at Storage Field Day 6. I’m going to do this in chronological order, as that was the easiest way for me to take notes during the week.

Tuesday

My wife paid for parking at BNE airport when she dropped me off. I also bought McDonald’s for lunch at SYD, paid for by myself (in more ways than one). A period of time passed and I consumed plane “food”. This was included in the price of the ticket. Alcoholic beverages were not, but I stuck with water. Bless you United for keeping the economy class sober on long-haul flights.

On Tuesday night we had the Delegate dinner at Genji Steak House – a Teppanyaki restaurant in San Jose. I had the gyoza, followed by chicken and prawns with water. This was paid for by Tech Field Day. I also received 2 Manchester (City and United) beanies and 3 large blocks of Cadbury chocolate as part of the gift swap. Tom also gave me a care pack of various American snacks, including some Starbucks Iced Coffee and Nutella. I gave most of it to an American friend prior to my departure. I also had 2 Dos Equis beers in the hotel bar with everyone. This was also paid for by Tech Field Day.

Wednesday

At Avere’s presentation on Wednesday morning I was given an Avere t-shirt. We had lunch at Bhava Communications in Emeryville (the location of the first two presentations). I had some sandwiches, a cookie, and a can of Coke. At StorMagic’s presentation I was given a 4GB USB stick with StorMagic info on it, as well as a personalised, leather-bound notebook. At Tegile’s presentation I received a 2200mAh portable USB charger thing. I also had a bottle of water.

On Wednesday night we had a delegate dinner (paid for by Tech Field Day) at an “Asian fusion” restaurant called Mosaic. I had the Triple Crown (Calamari, scallops, tiger prawns, asparagus, ginger, white wine garlic sauce). We then went to “Tech Field Day at the Movies” with delegates and friends at the Camera 12 Downtown. We watched Twelve Monkeys. I had a bottle of water from the concession stand. Tech Field Day covered entry for delegates and friends.

Thursday

Thursday morning we had breakfast at Coho Data. I had a sausage and egg roll, some Greek yoghurt and an orange juice. I also received a personalised LEGO minifig, a LEGO Creator kit (31018), a foam fish hat (!) and a Coho Data sticker. At Nexenta’s session I received a Nexenta notepad, orange Converse sneakers with Nexenta embroidered on them and a Nexenta-branded orange squishy ball. Lunch was pizza, cheesy bread, boneless chicken, and a bottle of water. At the Pure Storage session I received a Pure Storage-branded pilsener glass and an 8GB USB stick in a nice little wooden box.

For dinner on Thursday we had canapés and drinks at Cucina Venti, an Italian restaurant. This was paid for by Tech Field Day, as was my entry to the Computer History Museum that night (a personal highlight).

Friday

Breakfast was had at Nimble Storage’s office. I had bacon, eggs and juice. I also received a Nimble-branded jacket and a Raspberry Pi kit. At NEC, I received a set of NEC-branded headphones. We had Thai food for lunch at NEC’s office.

I then made my own way to SFO with a friend.

Conclusion

I’d like to extend my thanks to the Storage Field Day organisers and the sponsors of the event. I had a great time. Since I can’t think of a good way to wrap up this post I’ll leave you with a photo.

SFD6_Swag

 

Storage Field Day 6 – Day 1 – Tegile

Disclaimer: I recently attended Storage Field Day 6.  My flights, accommodation and other expenses were paid for by Tech Field Day and their sponsors. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

For each of the presentations I attended at SFD6, there are a few things I want to include in the post. Firstly, you can see video footage of the Tegile presentation here. You can also download my raw notes from the presentation here. Finally, here’s a link to the Tegile website that covers some of what they presented.

Before the session we took a quick tour of the lab. Here’s a blurry phone snap of shiny, shiny machines.

SFD6_Tegile

Tegile spent a fair bit of time taking us through their system architecture, which I found interesting, as I wasn’t overly familiar with their story. You can read about their system hardware in my presentation notes. I thought for this post I’d highlight some of the features in the data services layer.

SFD6_Tegile_IntelliFlash

Data Reduction is offered via:

  • In-line deduplication
    • block level
    • dedupe across media
    • performance multiplier
  • In-line compression
    • block level
    • turn on/off at LUN / share level
    • algorithm – LZ4, LZJB, GZIP
    • perf multiplier
  • Thin provisioning
    • array-level thin
    • for LUNs and shares
    • supports VMware VAAI “STUN”
    • JIT storage provisioning

Interestingly, Tegile chooses to compress then dedupe, which seems at odds with a few other offerings out there.
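
To make the ordering trade-off concrete, here’s a rough Python sketch of both pipelines. It’s purely illustrative and not Tegile’s code: zlib stands in for the LZ4/LZJB/GZIP options above, and SHA-256 serves as the dedupe fingerprint. Because deterministic compression maps identical blocks to identical outputs, compressing first still dedupes correctly; the cost is CPU spent compressing duplicates that the other ordering would skip.

```python
import hashlib
import zlib

BLOCK_SIZE = 8192  # hypothetical fixed block size, just for the sketch


def compress_then_dedupe(blocks, store):
    """Tegile-style ordering: compress every block, then dedupe on the
    compressed payload. Identical logical blocks still dedupe because
    zlib is deterministic, but CPU is spent compressing duplicates."""
    for block in blocks:
        payload = zlib.compress(block)             # LZ4/LZJB/GZIP in a real array
        key = hashlib.sha256(payload).hexdigest()  # dedupe fingerprint
        store.setdefault(key, payload)             # store only if unseen


def dedupe_then_compress(blocks, store):
    """The more common ordering: fingerprint the raw block first, and
    only compress (and store) blocks that haven't been seen before."""
    for block in blocks:
        key = hashlib.sha256(block).hexdigest()
        if key not in store:
            store[key] = zlib.compress(block)


# Toy usage: four blocks, two of them duplicates, dedupe to two payloads.
blocks = [b"A" * BLOCK_SIZE, b"B" * BLOCK_SIZE, b"A" * BLOCK_SIZE, b"B" * BLOCK_SIZE]
store = {}
dedupe_then_compress(blocks, store)
print(len(store))  # 2
```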

 

From a data protection perspective, Tegile offers:

  • Instantaneous thin snapshots
    • point-in-time copies of data
    • space allocated only for changed blocks
    • no reserve space for snapshots
    • unlimited number of snapshots
    • VM-consistent and application-consistent
  • Instantaneous thin clones
    • mountable copies
    • R/W-able copies
    • point-in-time copies
    • Space allocated only for deltas
  • Detect and correct silent corruption (see the sketch after this list)
    • checksums all data blocks
    • data and checksum in separate locations
    • match data/checksum for integrity
    • corrupt / mismatched data fixed using blocks from mirrored copy
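
The detect-and-correct flow lends itself to a short sketch. This is the general technique with hypothetical store structures, not Tegile’s implementation: the checksum lives apart from the data, a mismatch on read flags silent corruption, and the block is repaired from the mirrored copy.

```python
import hashlib


def read_with_verify(block_id, data_store, checksum_store, mirror_store):
    """Read a block, verify it against its separately stored checksum,
    and self-heal from the mirrored copy if silent corruption is found."""
    data = data_store[block_id]
    expected = checksum_store[block_id]  # checksum kept apart from the data
    if hashlib.sha256(data).hexdigest() == expected:
        return data
    # Checksum mismatch: silent corruption. Try the mirrored copy.
    mirror = mirror_store[block_id]
    if hashlib.sha256(mirror).hexdigest() != expected:
        raise IOError(f"block {block_id}: both copies fail checksum")
    data_store[block_id] = mirror  # repair the corrupt copy in place
    return mirror


# Toy usage: corrupt the primary copy and watch the read self-heal.
good = b"important data"
checksums = {0: hashlib.sha256(good).hexdigest()}
primary, mirror = {0: b"importXnt data"}, {0: good}
assert read_with_verify(0, primary, checksums, mirror) == good
assert primary[0] == good  # the primary copy was repaired from the mirror
```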

From a data recovery perspective, the Tegile solution offers:

  • Instantaneous stable recovery
    • data-consistent VM snapshots
    • hypervisor integrated
    • MSFT VSS co-ordinated data-consistent snapshots
    • VM-consistent and application-consistent snapshots
  • Intelligent data reconstruction
    • no need to rebuild entire drive
    • only portion of data rebuilt
    • metadata acceleration speeds up rebuilds
  • WAN-optimized replication (see the sketch after this list)
    • snapshot-based site-to-site replication
    • no need to replicate multiple updates to a block within the replication interval
    • minimizes bandwidth usage
    • one to many / many to one replication
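
Here’s a hypothetical sketch of that coalescing point: writes within a replication interval simply overwrite each other in a pending map, so only the final version of each dirty block crosses the WAN at the snapshot boundary. My own illustration again, not Tegile’s code.

```python
class IntervalReplicator:
    """Coalesce writes per block within a replication interval so that
    only the last version of each block is shipped to the remote site."""

    def __init__(self, send):
        self.send = send   # callable that ships (block_id, data) over the WAN
        self.pending = {}  # block_id -> latest data written this interval

    def write(self, block_id, data):
        self.pending[block_id] = data  # later writes overwrite earlier ones

    def replicate_interval(self):
        """Called at each snapshot boundary: ship one update per dirty block."""
        for block_id, data in self.pending.items():
            self.send(block_id, data)
        shipped = len(self.pending)
        self.pending.clear()
        return shipped


# Toy usage: five overwrites of block 7 plus one write to block 9
# result in only two block updates crossing the WAN.
sent = []
replicator = IntervalReplicator(lambda block_id, data: sent.append(block_id))
for i in range(5):
    replicator.write(7, f"version {i}".encode())
replicator.write(9, b"other data")
assert replicator.replicate_interval() == 2
assert sorted(sent) == [7, 9]
```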

Overall, I found Tegile’s presentation pretty interesting, and will be looking for opportunities to examine their products in more depth in the future. I also recommend checking out Scott D. Lowe’s article that looks into the overall theme of simplicity presented by a number of the vendors at SFD6.

 

Storage Field Day 6 – Day 3 – Nimble Storage

Disclaimer: I recently attended Storage Field Day 6.  My flights, accommodation and other expenses were paid for by Tech Field Day and their sponsors. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

For each of the presentations I attended at SFD6, there are a few things I want to include in the post. Firstly, you can see video footage of the Nimble Storage presentation here. You can also download my raw notes from the presentation here. Finally, here’s a link to the Nimble Storage website that covers some of what they presented.

I had a briefing from Nimble Storage in Australia last year when I was still working in customer land. At the time I liked what they had to offer, but couldn’t really move forward with getting hold of their product. So this time around it was interesting to dive into some other aspects of the Nimble Storage story that make it a pretty neat offering.

Rod Bagg, VP of Customer Support, spoke about Nimble’s desire to “[m]aintain a maniacal focus on providing the industry’s most enviable customer support”. They achieve this through a combination of products and strategies, but InfoSight underpins the success of this undertaking.

As with most storage vendors, customers have been asking Nimble:

  • Why can’t vendors proactively monitor customer systems for insights?
  • Can vendors predict and prevent problems before they occur?

Since 94% of the deployed Nimble Storage arrays connect back to the mothership for monitoring and analysis, it would seem a shame not to leverage that information. With InfoSight, it appears that they’ve made some headway towards solving these types of problems.

From a telemetry perspective, Nimble collects between 12 and 70 million sensor data points per array daily, with data collected every 5 minutes and on demand. They then perform systems modelling, correlations, trending and projection. Some of the benefits of this approach include the ability to perform:

  • Monitoring and alerting
  • Visualisation, capacity planning, and performance management

This leads to what Nimble calls “Proactive wellness”, where the vast majority of cases are opened by Nimble, and they have secure, on-demand system access to resolve issues. What they really seem to be about, though, is “[l]everaging pervasive network connectivity and big data analytics to automate support and enable cloud-based management”. They use HP Vertica as the analytics engine.
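
As a toy illustration of the “trending and projection” idea, and nothing more than that (it bears no relationship to how InfoSight or Vertica actually work), here’s a least-squares capacity projection in Python: fit a line to daily usage samples and estimate when the array fills.

```python
def days_until_full(samples, capacity_gb):
    """Fit a straight line to (day, used_gb) telemetry samples and
    project the day on which usage crosses the array's capacity.
    Least-squares slope/intercept computed by hand to avoid dependencies."""
    n = len(samples)
    sx = sum(d for d, _ in samples)
    sy = sum(u for _, u in samples)
    sxx = sum(d * d for d, _ in samples)
    sxy = sum(d * u for d, u in samples)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    if slope <= 0:
        return None  # usage flat or shrinking: no projected fill date
    return (capacity_gb - intercept) / slope


# Toy usage: ~10 GB/day growth on a 2 TB array, starting around 500 GB.
samples = [(0, 500), (1, 510), (2, 521), (3, 529), (4, 541)]
print(round(days_until_full(samples, 2000)))  # roughly day 149
```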

The demo looked super pretty, you can use InfoSight to assist with sizing, and overall it seemed like a pretty solid offering. I don’t think this post does enough justice to the InfoSight tool, and I heartily recommend checking out the SFD6 video and reaching out to Nimble for a demo – it’s really cool stuff. Also, they have a Peet’s machine in their kitchen. While coffee in the U.S. is pretty awful, Peet’s is kind of okay.

SFD6_Nimble_Coffee

 

Storage Field Day 6 – Day 2 – Nexenta

Disclaimer: I recently attended Storage Field Day 6.  My flights, accommodation and other expenses were paid for by Tech Field Day and their sponsors. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

I’ve been a fan of Nexenta for some time, having used the Community Edition of NexentaStor in various lab environments over the past few years, so I was interested to hear what they had to talk about. But firstly, you can see video footage of the Nexenta presentation here. You can also download my raw notes from the presentation here. Finally, here’s a link to the Nexenta website that covers some of what they presented.

SFD6_Nexenta

Murat Karslioglu presented on SDS for VSAN and VDI, with the focus being primarily on NexentaConnect for VMware Virtual SAN. According to Murat, VMware’s Virtual SAN currently has the following limitations.

  • No storage for file
  • Data is locked to VSAN cluster nodes
  • Limited to 32 nodes
  • All ESXi hosts must be on the HCL
  • Must have VSAN license
  • Every ESXi host contributing storage needs SSD
  • SSD is only used for cache
  • Does not support SCSI reservations
  • 2TB vmdk size limitation [not convinced this is still the case]
  • No compression, no deduplication

Murat described Virtual SAN as “more shared scale-out datastore than SAN”.

Nexenta_vSAN-357px

So what does NexentaConnect for Virtual SAN get you?

  • File Storage
  • User data
  • Departmental data
  • ISO files
  • Managed through the vSphere web client
  • Inline compression
  • Performance and health monitoring
  • Folder and volume level snapshots
  • Maintains data integrity during stressful tasks
  • HA File Services via VMware HA

He also mentioned that EVO:RAIL now has file services via Nexenta’s partnership with Dell and Supermicro.

So this all sounds technically pretty neat, and I applaud Nexenta for taking something they’ve historically been good at and applying it to a new type of infrastructure that people are talking about, if not adopting in droves. The problem is that I’m just not sure why you’d bother combining VMware Virtual SAN and Nexenta in this fashion. The licensing alone, at least from a VMware perspective, must be a little bit awful. I would have thought it would make more sense to save the money on Virtual SAN licenses and invest in some solid infrastructure using NexentaStor.

I guess the other use case is a situation where someone has already invested in Virtual SAN, potentially for a VDI deployment, and just needs to provide some file access to a few physical hosts or some other non-standard client. But I’m struggling to see where that might be the case on a regular basis. All that said though, I work in a different market to the US, and I recall having conversations with people at VMworld this year about the benefits of virtualising workloads, so I guess there are people out there who are going to go for this solution.

Don’t misunderstand me: I really like Nexenta’s core product; I’m just not convinced that this takes them in the right direction. In any case, go and check them out if you haven’t already. I think it’s worthwhile.