Pure Storage Announces FlashArray//m, Evergreen Storage and Pure1

That’s one of the wordier titles I’ve used for a blog post in recent times, but I think it captures the essence of Pure Storage’s recent announcements. Firstly, I’m notoriously poor at covering product announcements, so if you want a really good insight into what is going on, check out Dave Henry’s post here. There were three key announcements made today:

  • FlashArray//m;
  • Evergreen Storage; and
  • Pure1 Cloud-Based Management and Support.

 

FlashArray//m

Besides having some slightly weird spelling, the FlashArray//m (mini because it fits in 3RU and modular because, well, you can swap modules in it) is Pure’s next-generation storage appliance. Here’s a picture.

Pure_hardware1

There are three models: the //m20, //m50, and //m70, each offering a different capacity and performance point. I’ve included an overview from the datasheet, but note that this is subject to change before GA of the tin.

Pure_hardware2

The key takeaway for me is that, after some time using other people’s designs, this is Pure’s crack at using their own hardware design, and it will be interesting to see how this plays out over the expected life of the gear.

 

Evergreen Storage

Pure_evergreen

In the olden days, when I was a storage customer, I would have been pretty excited about a program like Evergreen Storage. Far too often I found myself purchasing storage only to have the latest version released a month later, sometimes before the previous generation had hit the loading dock. I was rarely given a heads up from the vendor that something new was coming, and often had the feeling I was just using up their old stock. Pure don’t want you to have that feeling with them. Instead, for as long as the array is under maintenance, Pure will help customers upgrade the controllers, storage, and software in a non-disruptive fashion. The impression I got was that these arrays would keep on keeping on for around 7 – 10 years, with the modular design enabling easy upgrades of key technologies as well as capacity.

 

Pure1 Cloud-Based Management and Support

I’ve never been a Pure Storage customer, so I can’t comment on how easy or difficult it currently is to get support. Nonetheless, I imagine the Pure1 announcement might be a bit exciting for the average punter slogging through storage ops. Basically, Pure1 provides improved analytics and management of your storage infrastructure, all of which can be done via a web browser. And, if you’re so inclined, you can turn on a call home feature and have Pure collect info from your arrays every 30 seconds. This provides both the customer and Pure with a wealth of information to make decisions about performance, resilience and upgrades. You can get the datasheet here.
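To make that 30-second call-home cadence a little more concrete, here’s a minimal sketch of what a phone-home loop could look like in principle. The endpoint, headers, payload fields and identifiers are all placeholders of my own for illustration; this is not Pure’s actual mechanism or API.

```python
# Hypothetical call-home loop: gather local array stats and POST them to a
# cloud endpoint every 30 seconds. The endpoint, headers and payload fields
# are placeholders for illustration -- this is not Pure's actual mechanism.
import json
import time
import urllib.request

CLOUD_ENDPOINT = "https://pure1.example.com/api/telemetry"  # placeholder URL
ARRAY_ID = "array-001"                                      # placeholder ID

def collect_local_stats():
    """Stand-in for whatever the array exposes locally (IOPS, latency, capacity)."""
    return {"array_id": ARRAY_ID, "timestamp": time.time(),
            "iops": 125000, "latency_ms": 0.6, "capacity_used_pct": 63.2}

while True:
    payload = json.dumps(collect_local_stats()).encode("utf-8")
    req = urllib.request.Request(CLOUD_ENDPOINT, data=payload,
                                 headers={"Content-Type": "application/json"})
    try:
        urllib.request.urlopen(req, timeout=10)
    except OSError:
        pass          # call-home is best effort; the array keeps serving I/O regardless
    time.sleep(30)    # the 30-second interval mentioned above
```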

 

Final Thoughts

I like Pure Storage. I was lucky enough to visit them during Storage Field Day 6 and was impressed by their clarity of vision and different approach to flash storage architecture. I like the look of the new hardware, although the proof will be in field performance. The Evergreen Storage announcement is fantastic from the customer’s perspective, although I’ll be interested to see just how long they can keep something like that going.

 

Storage Field Day 6 – Wrap-up and Link-o-rama

Disclaimer: I recently attended Storage Field Day 6.  My flights, accommodation and other expenses were paid for by Tech Field Day and their sponsors. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

This is a quick post to say thanks once again to the organisers and sponsors of Storage Field Day 6. I had a great time, learnt a lot, and didn’t get much sleep. For easy reference, here’s a list of the posts I did covering the event (not necessarily in chronological order).

Storage Field Day 6 – Day 0
Storage Field Day 6 – Day 1 – Avere
Storage Field Day 6 – Day 1 – StorMagic
Storage Field Day 6 – Day 1 – Tegile
Storage Field Day 6 – Day 2 – Coho Data
Storage Field Day 6 – Day 2 – Nexenta
Storage Field Day 6 – Day 2 – Pure Storage
Storage Field Day 6 – Day 3 – Nimble Storage
Storage Field Day 6 – Day 3 – NEC
Storage Field Day 6 – (Fairly) Full Disclosure

Also, here are a number of links to posts by my fellow delegates. They’re all really smart folks, and you’d do well to check out what they’re writing about. I’ll update this list as more posts are published.

 

Eric Shanks
Storage Field Day 6
Local Premises Storage for EC2 Provided by Avere Systems
Nimble Storage Data Analytics – InfoSight
Will All New Storage Arrays be Hybrid?

 

John Obeto
Today at Storage Field Day 6
Day 2 at Storage Field Day 6: Coho Data
Day 2 at Storage Field Day 6: Nexenta Systems

 

Arjan Timmerman
Storage Field Day Starts Today :D

 

Nigel Poulton
Nexenta – Back in da house…

 

Enrico Signoretti
Avere Systems, great technology but…

 

Chin-Fah Heoh
MASSive, Impressive, Agile, TEGILE

 

Jon Klaus
Storage Field Day 6 Day 0 – Sightseeing and Yankee swap
SFD6 Day 1 – Avere, StorMagic, Tegile

 

Finally, thanks again to Stephen, Claire and Tom; it was a great few days and I really valued the opportunity to attend.

Storage Field Day 6 – Day 2 – Coho Data

Disclaimer: I recently attended Storage Field Day 6.  My flights, accommodation and other expenses were paid for by Tech Field Day and their sponsors. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

For each of the presentations I attended at SFD6, there are a few things I want to include in the post. Firstly, you can see video footage of the Coho Data presentation here. You can also download my raw notes from the presentation here. Finally, here’s a link to the Coho Data website as well as their blog that covers some of what they presented.

Firstly, while there were no “winners” at SFD6, I’d like to give Coho Data a special shout-out for thinking of giving the delegates customised LEGO minifigs. Here’s a picture of mine. If you’ve ever met me in real life you’ll know that that’s pretty much what I look like, except I don’t cruise around holding a fish.

SFD6_CohoData_Minfig

Secondly, the Coho Data session, delivered by Coho Data’s CTO Andrew Warfield, was an exceptionally technical deep dive and is probably best summarised thusly: go and watch the video again. And probably again after that. In the meantime, here’s a few of my notes on what they do.

Coho Data offers a software stack that it qualifies and ships on commodity hardware. It is, ostensibly, an NFS v3 server for VMware. If you look at the architecture diagram below, the bottom half effectively virtualises flash.

SFD6_CohoData_Architecture

Coho Data is about to ship v2.0, which adds remote asynchronous site to site replication (read about that here) and some API additions. Andrew also covered off some analytics and looked at Coho Data’s implementation thereof. You can check out a video that covers roughly the same topic here.

Andrew is a super-smart dude and I simply cannot do justice to his presentation by blabbering on here. Go check out the videos; they’re well worth it.

 

Storage Field Day 6 – Day 3 – NEC

Disclaimer: I recently attended Storage Field Day 6.  My flights, accommodation and other expenses were paid for by Tech Field Day and their sponsors. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

For each of the presentations I attended at SFD6, there are a few things I want to include in the post. Firstly, you can see video footage of the NEC presentation here. You can also download my raw notes from the presentation here. Finally, here’s a link to the NEC website that covers some of what they presented.

Firstly, I’d like to say up front that the NEC session was a little bizarre, in that the two lines of products presented, the M-Series block storage and the HYDRAstor deduplication solution, seemed years apart in terms of capability and general, erm, currency when compared to other vendor offerings. All I’ll say about the M-Series is that it seems like a solid product, but felt a lot like someone had taken a CLARiiON and added a SAS backend to it. (As an aside, a few people argued that that’s what EMC did with the first VNX a few years ago too). That would not be doing it real justice though, so I’ll stick with covering the HYDRAstor here.

Here are some of the highlights from that part of the session. The HYDRAstor is based on a scalable grid storage architecture using a community of smart nodes. These nodes are:

  • Industry standard x86 servers
  • Multiple types allowed (cross generation clusters)
  • Heterogeneous and open software

The system:

  • Is fully distributed
  • Is self-aware
  • Provides data management services
  • Virtualises hardware
  • And supports online upgrades and expansions with multi-generation nodes

There is no virtual edition, as NEC wants to control the performance of the whole thing.
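The session didn’t go into how data is actually placed across the grid, so purely as a way of picturing “a community of smart nodes” that can grow online, here’s a toy sketch of content-hash-based placement across a changing node set. The ring-style placement is my own assumption for illustration, not NEC’s implementation.

```python
# Toy illustration of grid-style placement: chunks are addressed by a content
# hash and mapped onto whichever nodes are currently in the grid, so a node
# can be added online and only a share of chunks needs to move.
# This is my own sketch, not how HYDRAstor actually places or protects data.
import hashlib
from bisect import bisect

class ToyGrid:
    def __init__(self, nodes):
        self.ring = sorted((self._key(n.encode()), n) for n in nodes)

    @staticmethod
    def _key(data: bytes) -> int:
        return int(hashlib.sha256(data).hexdigest(), 16)

    def node_for(self, chunk: bytes) -> str:
        """Pick the owning node for a chunk by hashing its content onto the ring."""
        keys = [k for k, _ in self.ring]
        idx = bisect(keys, self._key(chunk)) % len(self.ring)
        return self.ring[idx][1]

    def add_node(self, node: str):
        """Online expansion: the new node takes over only part of the hash space."""
        self.ring = sorted(self.ring + [(self._key(node.encode()), node)])

grid = ToyGrid(["node-1", "node-2", "node-3"])
print(grid.node_for(b"a 64KB chunk of backup data"))
grid.add_node("node-4")   # grow the grid without taking it offline
print(grid.node_for(b"a 64KB chunk of backup data"))
```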

The hands-free management also delivers:

  • Simple, fast deployment
  • Self-discovering capacity
  • Self-tuning and resource management
  • Self-healing
  • Web-browser GUI

I’ll say now that the GUI was a massive improvement over the M-Series Windows 2000-themed thing. It wasn’t amazing, but it was light-years ahead of where the M-Series is. NEC say that the system can scale to 165 nodes. Right now the biggest system in the US is 50 nodes.

In summary, I wasn’t a huge fan of what I saw from the M-Series, although I think it could be a solid workhorse in the data centre. I did, however, like the look of the HYDRAstor offering and would recommend you give it a look if you’re in the market for that kind of thing.

Storage Field Day 6 – (Fairly) Full Disclosure

Disclaimer: I recently attended Storage Field Day 6.  My flights, accommodation and other expenses were paid for by Tech Field Day and their sponsors. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

My full disclosure post will be nowhere near as epic as Justin’s, although he is my role model for this type of thing. Here are my notes on gifts, etc, that I received as a delegate at Storage Field Day 6. I’m going to do this in chronological order, as that was the easiest way for me to take notes during the week.

Tuesday

My wife paid for parking at BNE airport when she dropped me off. I also bought McDonald’s for lunch at SYD, paid for by myself (in more ways than one). A period of time passed and I consumed plane “food”. This was included in the price of the ticket. Alcoholic beverages were not, but I stuck with water. Bless you, United, for keeping economy class sober on long-haul flights.

On Tuesday night we had the Delegate dinner at Genji Steak House – a Teppanyaki restaurant in San Jose. I had the gyoza, followed by chicken and prawns with water. This was paid for by Tech Field Day. I also received 2 Manchester (City and United) beanies and 3 large blocks of Cadbury chocolate as part of the gift swap. Tom also gave me a care pack of various American snacks, including some Starbucks Iced Coffee and Nutella. I gave most of it to an American friend prior to my departure. I also had 2 Dos Equis beers in the hotel bar with everyone. This was also paid for by Tech Field Day.

Wednesday

At Avere’s presentation on Wednesday morning I was given an Avere t-shirt. We had lunch at Bhava Communications in Emeryville (the location of the first two presentations). I had some sandwiches, a cookie, and a can of Coke. At StorMagic’s presentation I was given a 4GB USB stick with StorMagic info on it, as well as a personalised, leather-bound notebook. At Tegile’s presentation I received a 2,200mAh portable USB charger thing. I also had a bottle of water.

On Wednesday night we had a delegate dinner (paid for by Tech Field Day) at an “Asian fusion” restaurant called Mosaic. I had the Triple Crown (Calamari, scallops, tiger prawns, asparagus, ginger, white wine garlic sauce). We then went to “Tech Field Day at the Movies” with delegates and friends at the Camera 12 Downtown. We watched Twelve Monkeys. I had a bottle of water from the concession stand. Tech Field Day covered entry for delegates and friends.

Thursday

Thursday morning we had breakfast at Coho Data. I had a sausage and egg roll, some Greek yoghurt and an orange juice. I also received a personalised LEGO minifig, a LEGO Creator kit (31018), a foam fish hat (!) and a Coho Data sticker. At Nexenta’s session I received a Nexenta notepad, orange Converse sneakers with Nexenta embroidered on them and a Nexenta-branded orange squishy ball. Lunch was pizza, cheesy bread, boneless chicken and a bottle of water. At the Pure Storage session I received a Pure Storage-branded pilsener glass and an 8GB USB stick in a nice little wooden box.

For dinner on Thursday we had canapés and drinks at Cucina Venti, an Italian restaurant. This was paid for by Tech Field Day, as was my entry to the Computer History Museum that night (a personal highlight).

Friday

Breakfast was had at Nimble Storage’s office. I had bacon, eggs and juice. I also received a Nimble-branded jacket and a Raspberry Pi kit. At NEC, I received a set of NEC-branded headphones. We had lunch at NEC’s office, which consisted of Thai food.

I then made my own way to SFO with a friend.

Conclusion

I’d like to extend my thanks to the Storage Field Day organisers and the sponsors of the event. I had a great time. Since I can’t think of a good way to wrap up this post I’ll leave you with a photo.

SFD6_Swag

 

Storage Field Day 6 – Day 1 – Tegile

Disclaimer: I recently attended Storage Field Day 6.  My flights, accommodation and other expenses were paid for by Tech Field Day and their sponsors. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

For each of the presentations I attended at SFD6, there are a few things I want to include in the post. Firstly, you can see video footage of the Tegile presentation here. You can also download my raw notes from the presentation here. Finally, here’s a link to the Tegile website that covers some of what they presented.

Before the session we took a quick tour of the lab. Here’s a blurry phone snap of shiny, shiny machines.

SFD6_Tegile

Tegile spent a fair bit of time taking us through their system architecture, which I found interesting as I wasn’t overly familiar with their story. You can read about their system hardware in my presentation notes. I thought for this post I’d highlight some of the features in the data services layer.

SFD6_Tegile_IntelliFlash

Data Reduction is offered via:

  • In-line deduplication
    • block level
    • dedupe across media
    • performance multiplier
  • In-line compression
    • block level
    • turn on/off at LUN / share level
    • algorithm – LZ4, LZJB, GZIP
    • perf multiplier
  • Thin provisioning
    • array-level thin
    • for LUNs and shares
    • supports VMware VAAI “STUN”
    • JIT storage provisioning

Interestingly, Tegile chooses to compress then dedupe, which seems at odds with a few other offerings out there.
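To illustrate what that ordering choice actually means, here’s a toy sketch of the two pipelines on fixed-size blocks, with zlib standing in for LZ4/LZJB/GZIP and SHA-256 as the block fingerprint. It’s my own simplification rather than any vendor’s code; the point is simply that the order decides whether you fingerprint raw blocks or compressed ones.

```python
# Toy comparison of compress-then-dedupe versus dedupe-then-compress on fixed
# 4 KiB blocks. zlib stands in for LZ4/LZJB/GZIP and SHA-256 is the block
# fingerprint; this is an illustration, not any vendor's implementation.
import hashlib
import zlib

BLOCK = 4096

def blocks(data: bytes):
    return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

def compress_then_dedupe(data: bytes) -> dict:
    store = {}
    for blk in blocks(data):
        comp = zlib.compress(blk)                                  # compress first...
        store.setdefault(hashlib.sha256(comp).hexdigest(), comp)   # ...then dedupe
    return store

def dedupe_then_compress(data: bytes) -> dict:
    store = {}
    for blk in blocks(data):
        fingerprint = hashlib.sha256(blk).hexdigest()              # fingerprint raw block...
        store.setdefault(fingerprint, zlib.compress(blk))          # ...compress only unique ones
    return store

data = (b"A" * BLOCK) * 3 + (b"B" * BLOCK) * 2                     # obvious duplicates
print(len(compress_then_dedupe(data)), len(dedupe_then_compress(data)))   # 2 unique blocks either way
```

With a deterministic block compressor both orders find the same duplicates; compressing first means the fingerprinting and lookups operate on smaller data, while deduping first avoids compressing blocks that are about to be discarded, which is presumably the trade-off behind the differing vendor choices.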

 

From a data protection perspective, Tegile offers:

  • Instantaneous thin snapshots
    • point-in-time copies of data
    • space allocated only for changed blocks
    • no reserve space for snapshots
    • unlimited number of snapshots
    • VM-consistent and application-consistent
  • Instantaneous thin clones
    • mountable copies
    • R/W-able copies
    • point-in-time copies
    • Space allocated only for deltas
  • Detect and correct silent corruption (a rough sketch follows this list)
    • checksums all data blocks
    • data and checksum in separate locations
    • match data/checksum for integrity
    • corrupt / mismatched data fixed using blocks from mirrored copy
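Here’s that silent-corruption sketch: the interesting part is that the checksum lives apart from the data block, so a bad read can be detected and repaired from the mirrored copy. This is my own minimal illustration of the read path, not Tegile’s code.

```python
# Minimal sketch of end-to-end checksum verification on read: the checksum is
# stored apart from the data block, so a silently corrupted block can be
# detected and repaired from the mirrored copy. Illustration only, not Tegile's code.
import hashlib

def checksum(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

def read_block(primary: dict, mirror: dict, checksums: dict, addr: int) -> bytes:
    block = primary[addr]
    if checksum(block) == checksums[addr]:
        return block                            # data and separately stored checksum agree
    good = mirror[addr]                         # silent corruption detected on the primary...
    assert checksum(good) == checksums[addr]    # ...make sure the mirror copy is sound
    primary[addr] = good                        # ...and fix the bad copy in place
    return good

checksums = {0: checksum(b"hello world")}
primary = {0: b"hello w0rld"}                   # bit rot on the primary copy
mirror = {0: b"hello world"}
print(read_block(primary, mirror, checksums, 0))   # b'hello world', and the primary is repaired
```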

From a data recovery perspective, the Tegile solution offers:

  • Instantaneous stable recovery
    • data-consistent VM snapshots
    • hypervisor integrated
    • MSFT VSS co-ordinated data-consistent snapshots
    • VM-consistent and application-consistent snapshots
  • Intelligent data reconstruction
    • no need to rebuild entire drive
    • only portion of data rebuilt
    • metadata acceleration speeds up rebuilds
  • WAN-optimized replication
    • snapshot-based site-to-site replication
    • no need to replicate multiple updates to a block within the replication interval (see the sketch after this list)
    • minimizes bandwidth usage
    • one to many / many to one replication
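And here’s the replication sketch: because replication is snapshot-based and runs per interval, only the latest version of each changed block crosses the WAN, no matter how many times it was rewritten in between. Again, this is my own toy illustration of the general idea rather than Tegile’s implementation.

```python
# Toy sketch of interval-coalesced, snapshot-style replication: within one
# replication interval only the latest version of each changed block is kept,
# so a block rewritten many times is shipped across the WAN exactly once.
# My own illustration of the general idea, not Tegile's implementation.

class IntervalReplicator:
    def __init__(self):
        self.pending = {}                 # block address -> latest contents this interval

    def on_write(self, addr: int, data: bytes):
        self.pending[addr] = data         # later writes to the same block replace earlier ones

    def replicate(self, send):
        """Called once per replication interval, e.g. by a snapshot schedule."""
        for addr, data in sorted(self.pending.items()):
            send(addr, data)              # one WAN transfer per changed block
        self.pending.clear()

rep = IntervalReplicator()
for i in range(100):
    rep.on_write(7, f"version {i}".encode())    # block 7 rewritten 100 times
rep.on_write(9, b"new data")
rep.replicate(lambda addr, data: print(addr, data))   # only two blocks cross the WAN
```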

Overall, I found Tegile’s presentation pretty interesting, and will be looking for opportunities to examine their products in more depth in the future. I also recommend checking out Scott D. Lowe’s article that looks into the overall theme of simplicity presented by a number of the vendors at SFD6.

 

Storage Field Day 6 – Day 3 – Nimble Storage

Disclaimer: I recently attended Storage Field Day 6.  My flights, accommodation and other expenses were paid for by Tech Field Day and their sponsors. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

For each of the presentations I attended at SFD6, there are a few things I want to include in the post. Firstly, you can see video footage of the Nimble Storage presentation here. You can also download my raw notes from the presentation here. Finally, here’s a link to the Nimble Storage website that covers some of what they presented.

I had a briefing from Nimble Storage in Australia last year when I was still working in customer land. At the time I liked what they had to offer, but couldn’t really move on getting hold of their product. So this time around it was interesting to dive into some other aspects of the Nimble Storage story that make it a pretty neat offering.

Rod Bagg, VP of Customer Support, spoke about Nimble’s desire to “[m]aintain a maniacal focus on providing the industry’s most enviable customer support”. They achieve this through a combination of products and strategies, but InfoSight underpins the success of this undertaking.

As with most storage vendors, customers have been asking Nimble:

  • Why can’t vendors proactively monitor customer systems for insights?
  • Can vendors predict and prevent problems before they occur?

Since 94% of the deployed Nimble Storage arrays connect back to the mothership for monitoring and analysis, it would seem a shame not to leverage that information. With InfoSight, it appears that they’ve made some headway towards solving these types of problems.

From a telemetry perspective, Nimble collects between 12 and 70 million sensor values per array daily, with data collected every 5 minutes and on demand. They then perform systems modelling, correlations, trending and projection. Some of the benefits of this approach include the ability to perform:

  • Monitoring and alerting
  • Visualisation, capacity planning, and performance management

This leads to what Nimble calls “Proactive wellness”, where the vast majority of cases are opened by Nimble, and they have secure, on-demand system access to resolve issues. What they really seem to be about, though, is “[l]everaging pervasive network connectivity and big data analytics to automate support and enable cloud-based management”. They use HP Vertica as the analytics engine.
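Some back-of-the-envelope arithmetic on those telemetry numbers helps explain why a proper analytics engine is needed. The 5-minute cadence and the 12–70 million daily sensor values are from the session; the arithmetic is mine.

```python
# Back-of-the-envelope on the telemetry volume quoted above: with a sample taken
# every 5 minutes, how many sensor values per sample does 12-70 million values
# per array per day imply? The figures are from the session; the arithmetic is mine.
samples_per_day = 24 * 60 // 5            # 288 five-minute samples a day
for daily_values in (12_000_000, 70_000_000):
    per_sample = daily_values / samples_per_day
    print(f"{daily_values:>10,} values/day ~ {per_sample:>9,.0f} sensor values per 5-minute sample")
# Roughly 42,000 at the low end and 243,000 at the high end -- which is why a
# big-data analytics engine like Vertica makes more sense than a plain log store.
```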

The demo looked super pretty, you can use InfoSight to assist with sizing, and overall it seemed like a pretty solid offering. I don’t think this post does enough justice to the InfoSight tool, and I heartily recommend checking out the SFD6 video and reaching out to Nimble for a demo – it’s really cool stuff. Also, they have a Peet’s machine in their kitchen. While coffee in the U.S. is pretty awful, Peet’s is kind of okay.

SFD6_Nimble_Coffee

 

Storage Field Day 6 – Day 2 – Nexenta

Disclaimer: I recently attended Storage Field Day 6.  My flights, accommodation and other expenses were paid for by Tech Field Day and their sponsors. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

I’ve been a fan of Nexenta for some time, having used the Community Edition of NexentaStor in various lab environments over the past few years, so I was interested to hear what they had to talk about. But firstly, you can see video footage of the Nexenta presentation here. You can also download my raw notes from the presentation here. Finally, here’s a link to the Nexenta website that covers some of what they presented.

SFD6_Nexenta

Murat Karslioglu presented on SDS for VSAN and VDI, with the focus being primarily on NexentaConnect for VMware Virtual SAN. According to Murat, VMware’s Virtual SAN currently has the following limitations.

  • No storage for file
  • Data is locked to VSAN cluster nodes
  • Limited to 32 nodes
  • All ESXi hosts must be on the HCL
  • Must have VSAN license
  • Every ESXi host contributing storage needs SSD
  • SSD is only used for cache
  • Does not support SCSI reservations
  • 2TB vmdk size limitation [not convinced this is still the case]
  • No compression, no deduplication

Murat described Virtual SAN as “more shared scale-out datastore than SAN”.

Nexenta_vSAN-357px

So what does NexentaConnect for Virtual SAN get you?

  • File Storage
  • User data
  • Departmental data
  • ISO files
  • Managed through the vSphere web client
  • Inline compression
  • Performance and health monitoring
  • Folder and volume level snapshots
  • Maintains data integrity during stressful tasks
  • HA File Services via VMware HA

He also mentioned that EVO:RAIL now has file services via Nexenta’s partnership with Dell and Supermicro.

So this all sounds technically pretty neat, and I applaud Nexenta for taking something they’ve historically been good at and applying it to a new type of infrastructure that people are talking about, if not adopting in droves. The problem is I’m just not sure why you would bother doing this with VMware Virtual SAN and Nexenta in this type of combination. The licensing alone, at least from a VMware perspective, must be a little bit awful. I would have thought it would make more sense to save the money on Virtual SAN licenses and invest in some solid infrastructure using NexentaStor.

I guess the other use case is a situation where someone has already invested in Virtual SAN, potentially for a VDI deployment, and just needs to provide some file access to a few physical hosts or some other non-standard client. But I’m struggling to see where that might be the case on a regular basis. All that said though, I work in a different market to the US, and I recall having conversations with people at VMworld this year about the benefits of virtualising workloads, so I guess there are people out there who are going to go for this solution.

Don’t misunderstand, I really like Nexenta’s core product, I’m just not convinced that this takes them in the right direction. In any case, go and check them out if you haven’t already, I think it’s worthwhile.

Storage Field Day 6 – Day 1 – StorMagic

Disclaimer: I recently attended Storage Field Day 6.  My flights, accommodation and other expenses were paid for by Tech Field Day and their sponsors. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

For each of the presentations I attended at SFD6, there are a few things I want to include in the post. Firstly, you can see video footage of the StorMagic presentation here. You can also download my raw notes from the presentation here. Finally, here’s a link to the StorMagic website that covers some of what they presented.

 

StorMagic know their limitations, and are really looking to address storage problems at the edge. The “Distributed enterprise” is:

  • Virtualising remote infrastructure;
  • Introducing new remote services;
  • Systems in harsh environments;
  • Operating without local IT;
  • Seeking to reduce support costs; and
  • Experiencing downtime of critical, remote applications.

They’ve found that the average remote site has:

  • 2TB average data capacity;
  • 7 or 8 key applications;
  • Separation of servers on site;
  • No computer room; and
  • A set-and-forget mentality.

What these businesses need is:

  • High Availability;
  • Centralised Management;
  • A Small IT footprint; and
  • Simple, automated deployment.

Traditional storage does not fit at the remote site. Adding a traditional SAN in these environments leads to:

  • A single point of failure;
  • Complexity;
  • Specialist staff;
  • Depreciating value;
  • High capex and opex; and
  • Being tied to a hardware vendor.

StorMagic have basically taken this on board in the design of their SvSAN product, and also claim to get around a number of the current limitations of VMware Virtual SAN. In a nutshell, SvSAN is a VSA that:

  • Uses shared storage (internal or DAS);
  • Provides synchronous mirroring between nodes (a toy sketch follows this list);
  • Runs as VSA independent of storage hardware;
  • Provides HA – withstands server or storage failure; and is
  • Scalable – 2 nodes to many nodes.
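Here’s that toy sketch of the synchronous mirroring idea: the write is acknowledged to the hypervisor only after both nodes have committed it, which is what lets either node fail without losing acknowledged data. This is a general illustration of the technique, not StorMagic’s actual code.

```python
# Toy two-node synchronous mirror: a write is acknowledged only once both nodes
# have committed it, so either node can fail without losing acknowledged data.
# A general illustration of the technique, not StorMagic's implementation.

class Node:
    def __init__(self, name: str):
        self.name = name
        self.blocks = {}
        self.alive = True

    def commit(self, addr: int, data: bytes):
        if not self.alive:
            raise IOError(f"{self.name} is down")
        self.blocks[addr] = data

class SyncMirror:
    def __init__(self, a: Node, b: Node):
        self.nodes = (a, b)

    def write(self, addr: int, data: bytes) -> str:
        for node in self.nodes:
            node.commit(addr, data)       # commit to both copies synchronously...
        return "ack"                      # ...before acknowledging to the hypervisor

    def read(self, addr: int) -> bytes:
        for node in self.nodes:           # survive the loss of a single node
            if node.alive and addr in node.blocks:
                return node.blocks[addr]
        raise IOError("no surviving copy")

vsa1, vsa2 = Node("vsa-1"), Node("vsa-2")
mirror = SyncMirror(vsa1, vsa2)
mirror.write(0, b"vm data")
vsa1.alive = False                        # lose a node after the write was acknowledged
print(mirror.read(0))                     # b'vm data' is still served from vsa-2
```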

The StorMagic guys were asked whether their focus on beating VMware Virtual SAN at the smaller end of the market was a mistake. They seemed to think that, moving forward, VMware would be a lot more interested in the mid- to high-end of the market, leaving them to play in the two-node, edge storage scenarios. It seems like a solid strategy, and it seems like a solid bit of technology. I recommend looking at them if you have this kind of use case come up.

Storage Field Day 6 – Day 2 – Pure Storage

Disclaimer: I recently attended Storage Field Day 6.  My flights, accommodation and other expenses were paid for by Tech Field Day and their sponsors. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

For each of the presentations I attended at SFD6, there are a few things I want to include in the post. Firstly, you can see video footage of the Pure Storage presentation here. You can also download my raw notes from the presentation here. Finally, here’s a link to the Pure Storage website that covers some of what they presented.

SFD6_PureStorage

If you’ve watched any of the Storage Field Day sessions from this year or previous events, you’ll know that the delegates and vendors tend to like to go into technical deep dives on a pretty regular basis. While this also happened with Pure Storage, it was refreshing, in my mind at least, to also cover some of the issues that customers are faced with from a business perspective. Pure Storage are keen to make sure that we’re asking the right questions when evaluating the requirements for an all-flash array (AFA) solution. To wit, these are the wrong areas to focus on:

  • Can my array do one meeellion IOPS?
  • Architecture swordplay;
  • Raw price per gigabyte; and
  • How do I adjust the Tiering? RAID? Caching? Disk layout? etc.

Instead, you’re much better off looking at:

  • How does the app perform?
  • What is the usable storage cost for my applications?
  • How simple is this for my admins, DBAs, etc?
  • How do I scale capacity and performance?

I thought Vaughn Stewart did a bang up job of covering off these questions, as well as talking about Pure Storage’s mantra of simplicity.

  • How simple is this for my storage admin?
  • Is storage invisible for my application owner?

There’s no more need for application alignment; it’s invisible. There’s no more need for storage tuning; it’s all about keeping it simple. And automated…

Operate & Automate with:

  • Web-based GUI;
  • CLI; and
  • RESTful API.
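As a rough example of the RESTful API point above, here’s what driving an array over REST tends to look like: authenticate, then pull back JSON you can script against. The host, paths, headers and field names below are placeholders of my own, not Pure’s documented endpoints.

```python
# Rough sketch of array automation over a REST API: obtain a session token, then
# request the volume list as JSON. The host, paths, headers and field names are
# placeholders for illustration -- consult the vendor's actual API reference.
import json
import urllib.request

ARRAY = "https://array.example.local"      # placeholder management address

def api(path, token=None, payload=None):
    headers = {"Content-Type": "application/json"}
    if token:
        headers["X-Auth-Token"] = token    # placeholder auth header
    data = json.dumps(payload).encode("utf-8") if payload is not None else None
    req = urllib.request.Request(ARRAY + path, data=data, headers=headers)
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

session = api("/api/session", payload={"api_token": "your-api-token"})   # placeholder credentials
for vol in api("/api/volumes", token=session.get("token", "")):
    print(vol["name"], vol["size"])
```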

And integrated with your favourite stack. There’s support for:

  • OpenStack;
  • VMware VAAI;
  • vCenter plugin;
  • Log Insight plugin; and
  • MSFT’s VSS Provider.

We also need to understand vendor performance claims versus real-world requirements. Pure have been looking at a bunch of data and are seeing that the average I/O size is 40.6K. As Vaughn said during the session, we need to get out of the “4K Vanity Zone” favoured by vendors when it comes to published benchmark data. What it comes down to is knowing the I/O size of the application you’re running. There’s nothing earth-shattering in this approach, but it’s something we seem to get distracted from at times.
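A quick bit of arithmetic shows why the distinction matters: for a fixed amount of array bandwidth, the achievable IOPS drops by roughly a factor of ten when the average I/O grows from 4K to the 40.6K Pure are seeing in the field. The bandwidth figure in the example is purely an illustrative assumption.

```python
# Why the "4K Vanity Zone" misleads: for a fixed amount of array bandwidth, the
# achievable IOPS drops roughly tenfold when the average I/O grows from 4 KiB to
# the ~40.6 KiB Pure report seeing in the field. The 4 GiB/s bandwidth figure is
# an illustrative assumption, not a quoted specification.
BANDWIDTH = 4 * 1024**3                     # assume 4 GiB/s of usable bandwidth

for io_size_kib in (4, 40.6):
    iops = BANDWIDTH / (io_size_kib * 1024)
    print(f"{io_size_kib:>5} KiB average I/O -> ~{iops:,.0f} IOPS from the same bandwidth")
# ~1,048,576 IOPS at 4 KiB, but only roughly 103,000 IOPS at 40.6 KiB
```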

And, I have to say, as much as I enjoy hearing about how storage works in its various incarnations, and whether providers dedupe or compress first, and what happens to the data after it hits write cache, sometimes my customers just don’t care about that. And all the technical presentations in the world won’t get them to think otherwise. While I’m not saying you need to forget all about the tech and how it works, sometimes it really is important to keep it simple. I think the Pure Storage message does a pretty good job of that.