Intel Optane And The DAOS Storage Engine

Disclaimer: I recently attended Storage Field Day 20.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Intel recently presented at Storage Field Day 20. You can see videos of the presentation here, and download my rough notes from here.

 

Intel Optane Persistent Memory

If you’re a diskslinger, you’ve very likely heard of Intel Optane. You may have even heard of Intel Optane Persistent Memory. It’s a little different to Optane SSD, and Intel describes it as “memory technology that delivers a unique combination of affordable large capacity and support for data persistence”. It looks a lot like DRAM, but the capacity is greater, and there’s data persistence across power losses. This all sounds pretty cool, but isn’t it just another form factor for fast storage? Sort of, but the application of the engineering behind the product is where I think it starts to get really interesting.

 

Enter DAOS

Distributed Asynchronous Object Storage (DAOS) is described by Intel as “an open source software-defined scale-out object store that provides high bandwidth, low latency, and high I/O operations per second (IOPS) storage containers to HPC applications”. It’s essentially a software stack built from the ground up to take advantage of the crazy speeds you can achieve with Optane, and at scale. There’s a handy overview of the architecture available on Intel’s website. Traditional object (and other) storage systems haven’t really been built to take advantage of Optane in quite the same way DAOS has.

[image courtesy of Intel]

There are some cool features built into DAOS, including:

  • Ultra-fine grained, low-latency, and true zero-copy I/O
  • Advanced data placement to account for fault domains
  • Software-managed redundancy supporting both replication and erasure code with online rebuild
  • End-to-end (E2E) data integrity
  • Scalable distributed transactions with guaranteed data consistency and automated recovery
  • Dataset snapshot capability
  • Security framework to manage access control to storage pools
  • Software-defined storage management to provision, configure, modify, and monitor storage pools

Exciting? Sure is. There’s also integration with Lustre. The best thing about this is that you can grab it from GitHub under the Apache 2.0 license.
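
To make the object model a little more concrete, here’s a rough Python sketch of the DAOS data hierarchy as I understand it from the architecture overview: pools contain containers, containers contain objects, and objects are addressed by distribution keys (dkeys) and attribute keys (akeys). To be clear, this is not the DAOS API – the class and method names below are purely illustrative; the real client library is libdaos and its language bindings from the GitHub repository.

```python
# Purely illustrative model of the DAOS data hierarchy
# (pool -> container -> object -> dkey -> akey -> value).
# This is NOT the DAOS client API; see the libdaos bindings on GitHub
# for the real thing.

from collections import defaultdict


class MockDaosObject:
    """An object addressed by distribution keys (dkeys) and attribute keys (akeys)."""

    def __init__(self):
        # dkey -> akey -> value; in DAOS the dkey also influences how the
        # records are distributed across storage targets.
        self._records = defaultdict(dict)

    def put(self, dkey: str, akey: str, value: bytes) -> None:
        self._records[dkey][akey] = value

    def get(self, dkey: str, akey: str) -> bytes:
        return self._records[dkey][akey]


class MockDaosContainer:
    """A container holds many objects; snapshots and ACLs apply at this level."""

    def __init__(self):
        self._objects = {}

    def object(self, oid: int) -> MockDaosObject:
        return self._objects.setdefault(oid, MockDaosObject())


# A pool sits above containers and maps onto the Optane PMem / NVMe targets.
cont = MockDaosContainer()
obj = cont.object(oid=1)
obj.put("2020-07-01", "temperature", b"22.5")
print(obj.get("2020-07-01", "temperature"))
```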

 

Thoughts And Further Reading

Object storage is in its relative infancy when compared to some of the storage architectures out there. It was designed to be highly scalable and generally does a good job of cheap and deep storage at “web scale”. It’s my opinion that object storage becomes even more interesting as a storage solution when you put a whole bunch of really fast storage media behind it. I’ve seen some media companies do this with great success, and there are a few of the bigger vendors out there starting to push the All-Flash object story. Even then, though, many of the more popular object storage systems aren’t necessarily optimised for products like Intel Optane PMEM. This is what makes DAOS so interesting – the ability for the storage to fundamentally do what it needs to do at massive scale, and have it go as fast as the media will let it go. You don’t need to worry as much about the storage architecture being optimised for the storage it will sit on, because the folks developing it have access to the team that developed the hardware.

The other thing I really like about this project is that it’s open source. This tells me that Intel are both focused on Optane being successful, and also focused on the industry making the most of the hardware it’s putting out there. It’s a smart move – come up with some super fast media, and then give the market as much help as possible to squeeze the most out of it.

You can grab the admin guide from here, and check out the roadmap here. Intel has plans to release a new version every 6 months, and I’m really looking forward to seeing this thing gain traction. For another perspective on DAOS and Intel Optane, check out David Chapa’s article here.

 

 

Pure Storage Expands Portfolio, Adds Capacity And Performance

Disclaimer: I recently attended Pure//Accelerate 2019.  My flights, accommodation, and conference pass were paid for by Pure Storage. There is no requirement for me to blog about any of the content presented and I am not compensated by Pure Storage for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Pure Storage announced two additions to its portfolio of products today: FlashArray//C and DirectMemory Cache. I had the opportunity to hear about these two products at the Storage Field Day Exclusive event at Pure//Accelerate 2019 and thought I’d share some thoughts here.

 

DirectMemory Cache

DirectMemory Cache is a high-speed caching system that reduces read latency for high-locality, performance-critical applications.

  • High speed: based on Intel Optane SCM drives
  • Caching system: repeated accesses to “hot data” are sped up automatically – no tiering = no configuration
  • Read latency: only read latency is improved – write latency is unchanged
  • High-locality: only workloads that frequently re-use a dataset that fits in the cache will benefit
  • Performance-critical: high-throughput, latency-sensitive workloads

According to Pure, “DirectMemory Cache is the functionality within Purity that provides direct access to data and accelerates performance critical applications”. Note that this is only for read data; write caching is still done via DRAM.
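
Conceptually, you can think of it as a read-only cache sitting in front of the NAND: hot reads get promoted and served from the Optane layer, while writes bypass the cache entirely. Here’s a toy Python sketch of that general principle – it’s my illustration of how a read cache behaves, not how Purity actually implements DirectMemory Cache.

```python
# Conceptual sketch of a read-only cache in front of slower media: reads of
# "hot" data are served from the cache, writes bypass it entirely.  This is
# a toy illustration of the general principle, not Purity's implementation.

from collections import OrderedDict


class ReadCache:
    def __init__(self, backend: dict, capacity: int = 4):
        self.backend = backend          # stands in for the NAND layer
        self.capacity = capacity        # stands in for the Optane SCM modules
        self.cache = OrderedDict()      # LRU ordering: oldest entry first

    def read(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)         # cache hit: low-latency path
            return self.cache[key]
        value = self.backend[key]               # cache miss: read from NAND
        self.cache[key] = value                 # hot data promoted automatically
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)      # evict the coldest entry
        return value

    def write(self, key, value):
        self.backend[key] = value               # writes are not cached here
        self.cache.pop(key, None)               # keep the cache consistent


store = ReadCache(backend={"lun0:block42": b"data"})
store.read("lun0:block42")   # first read populates the cache
store.read("lun0:block42")   # second read is a hit
```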

How Can This Help?

Pure has used Pure1 Meta analysis to arrive at the following figures:

  • 80% of arrays can achieve 20% lower latency
  • 40% of arrays can achieve 30-50% lower latency (up to 2x boost)

So there’s some real potential to improve existing workloads via the use of this read cache.

DirectMemory Configurations

Pure Storage DirectMemory Modules plug directly into the chassis of the FlashArray//X70 and //X90, and are available in the following configurations:

  • 3TB (4x750GB) DirectMemory Modules
  • 6TB (8x750GB) DirectMemory Modules

Top of Rack Architecture

Pure are positioning the “top of rack” architecture as a way to compete with some of the architectures that have jammed a bunch of flash in DAS or in compute to gain increased performance. The idea is that you can:

  • Eliminate the need for data locality;
  • Bring storage and compute closer;
  • Provide storage services that are not possible with DAS;
  • Bring the efficiency of FlashArray to traditional DAS applications; and
  • Offload storage and networking load from application CPUs.

 

FlashArray//C

Typical challenges in Tier 2

Things can be tough in the tier 2 storage world. Pure outlined some of the challenges they were seeking to address by delivering a capacity optimised product.

Management complexity

  • Complexity / management
  • Different platforms and APIs
  • Interoperability challenges

Inconsistent Performance

  • Variable app performance
  • Anchored by legacy disk
  • Undersized / underperforming

Not enterprise class

  • <99.9999% resiliency
  • Disruptive upgrades
  • Not evergreen

The C Stands For Capacity Optimised All-Flash Array

Flash performance at disk economics

  • QLC architecture enables tier 2 applications to benefit from the performance of all-flash – predictable 2-4ms latency, with 5.2PB (effective) in 9U delivering 10x consolidation over racks and racks of disk.

Optimised end-to-end for QLC Flash

  • Deep integration from software to QLC NAND solves QLC wear concerns and delivers market-leading economics. Includes the same evergreen maintenance and wear replacement as every FlashArray

“No Compromise” enterprise experience

  • Built for the same 99.9999%+ availability, Pure1 cloud management, API automation, and AI-driven predictive support of every FlashArray

Flash for every data workflow

  • Policy driven replication, snapshots, and migration between arrays and clouds – now use Flash for application tiering, DR, Test / Dev, Backup, and retention

Configuration Details

Configuration options include:

  • 366TB RAW – 1.3PB effective
  • 878TB RAW – 3.2PB effective
  • 1.39PB RAW – 5.2PB effective
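
Doing some quick arithmetic on those numbers (mine, not Pure’s), the raw-to-effective figures imply an assumed data reduction ratio of roughly 3.5:1 to 3.7:1, which will obviously vary with your actual workloads:

```python
# Back-of-the-envelope check on the raw vs. effective figures above.  The
# resulting ratios reflect Pure's assumed data reduction; your mileage will
# vary with your workloads.

configs_tb = [           # (raw TB, effective TB)
    (366, 1_300),
    (878, 3_200),
    (1_390, 5_200),
]

for raw, effective in configs_tb:
    print(f"{raw}TB raw -> {effective}TB effective "
          f"(~{effective / raw:.2f}:1 data reduction)")
```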

Use Cases

  • Policy based VM tiering between //X and //C
  • Multi-cloud data protection and DR – on-premises and multi-site
  • Multi-cloud test / dev – workload consolidation

*File support (NFS / SMB) coming in 2020 (across the entire FlashArray family, not just //C)

 

Thoughts

I’m a fan of companies that expand their portfolio based on customer requests. It’s a good way to make more money, and sometimes it’s simplest to give the people what they want. The market has been in Pure’s ear for some time about delivering some kind of capacity storage solution. I think it was simply a matter of time before the economics and the technology intersected at a point where it made sense for it to happen. If you’re an existing Pure customer, this is a good opportunity to deploy Pure across all of your tiers of storage, with the benefit of Pure1 keeping an eye on everything, and your “slow” arrays will still be relatively performance-focused thanks to NVMe throughout the box. Good times in IT aren’t just about speeds and feeds though, so I think this announcement is more important in terms of simplifying the story for existing Pure customers that may be using other vendors to deliver Tier 2 capabilities.

I’m also pretty excited about DirectMemory Cache, if only because it’s clear that Pure has done its homework (i.e. they’ve run the numbers on Pure1 Meta) and realised that they could improve the performance of existing arrays via a reasonably elegant solution. A lot of the cool kids do DAS, because that’s what they’ve been told will yield great performance. And that’s mostly true, but DAS can be a real pain in the rear when you want to move workloads around, or consolidate performance, or do useful things like data services (e.g. replication). Centralised storage arrays have been doing this stuff for years, and it’s about time they were also able to deliver the performance required in order for those companies not to have to compromise.

You can read the press release here, and the Tech Field Day videos can be viewed here.

Burlywood Tech Announces TrueFlash

Burlywood Tech came out of stealth late last year and recently announced their TrueFlash product. I had the opportunity to speak with Mike Tomky about what they’ve been up to since emerging from stealth and thought I’d cover the announcement here.

 

Burlywood TrueFlash

So what is TrueFlash? It’s a “modular controller architecture that accelerates time-to-market of new flash adoption”. The idea is that Burlywood can deliver a software-defined solution that will sit on top of commodity Flash. They say that one size doesn’t fit all, particularly with Flash, and this solution gives customers the opportunity to tailor the hardware to better meet their requirements.

It offers the following features:

  • Multiple interfaces (SATA, SAS, NVMe)
  • FTL Translation (Full SSD to None)
  • Capacity up to 100TB+
  • Traffic optimisation
  • Multiple Protocols (Block (NVMe, NVMe/F), File, Object, Direct Memory)

[image courtesy of Burlywood Tech]

 

Who’s Buying?

This isn’t really an enterprise play – enterprises aren’t the types of companies that buy Flash at the scale where this would make sense. This is really aimed at the hyperscalers, cloud providers, and AFA / HCI vendors. They sell the software, controller and SSD Reference Design to the hyperscalers, but treat the cloud providers and AFA vendors a little differently, generally delivering a completed SSD for them. All of their customers benefit from:

  • A dedicated support team (in-house drive team);
  • Manufacturing assembly & test;
  • Technical & strategic support in all phases; and
  • Collaborative roadmap planning.

The key selling point for Burlywood is that they claim to be able to reduce costs by 10 – 20% through better capacity utilisation, improved supply chain and faster product qualification times.

 

Thoughts

You know you’re doing things at a pretty big scale if you’re thinking it’s a good idea to be building your own SSDs to match particular workloads in your environment. But there are reasons to do this, and from what I can see, it makes sense for a few companies. It’s obviously not for everyone, and I don’t think you’ll be seeing this in the enterprise anytime soon. Which is the funny thing, when you think about it. I remember when Google first started becoming a serious search engine and they talked about some of their earliest efforts with DIY servers and battles with doing things at the scale they needed. Everyone else was talking about using appliances or pre-built solutions “optimised” by the vendors to provide the best value for money or best performance or whatever. As the likes of Dropbox, Facebook and LinkedIn have shown, there is value in going the DIY route, assuming the right amount of scale is there.

I’ve said it before, very few companies really qualify for the “hyper” in hyperscalers. So a company like Burlywood Tech isn’t necessarily going to benefit most enterprises directly. That said, these kinds of companies, if they’re successful in helping the hyperscalers drive the cost of Flash in a downwards direction, will indirectly help enterprises by forcing the major Flash vendors to look at how they can do things more economically. And sometimes it’s just nice to peek behind the curtain to see how this stuff comes about. I’m oftentimes more interested in how networks put together their streaming media services than a lot of the content they actually deliver on those platforms. I think Burlywood Tech falls into that category as well. I don’t care for some of the services that the hyperscalers deliver, but I’m interested in how they do it nonetheless.

Storbyte Come Out Of Stealth Swinging

I had the opportunity to speak to Storbyte‘s Chief Evangelist and Design Architect Diamond Lauffin recently and thought I’d share some information on their recent announcement.

 

Architecture

ECO-FLASH

Storbyte have announced ECO-FLASH, positioning it as “a new architecture and flash management system for non-volatile memory”. Its integrated circuit, ASIC-based architecture abstracts independent SSD memory modules within the flash drive and presents the unified architecture as a single flash storage device.

 

Hydra

Each ECO-FLASH drive is made up of 16 mSATA modules running in RAID 0. Each “sub-master” Hydra manages 4 of the mSATA modules, and a master Hydra manages the 4 sub-masters. This makes up one drive that supports RAID 0, 5, 6 and N, so if you’re only running a single-drive solution (think out at the edge), you can configure the modules to run in RAID 5 or 6.
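
If it helps to picture the hierarchy, here’s a rough sketch of how I understand the drive hangs together – a master Hydra over four sub-master Hydras, each managing four mSATA modules. The names and structure below are mine, purely for illustration:

```python
# Rough sketch of the Hydra hierarchy as I understand it from the briefing:
# a master Hydra manages four sub-master Hydras, each of which manages four
# mSATA modules, giving 16 modules per ECO-FLASH drive.  Illustrative only.

SUB_MASTERS_PER_MASTER = 4
MODULES_PER_SUB_MASTER = 4

drive = {
    f"sub_master_{s}": [f"msata_{s}_{m}" for m in range(MODULES_PER_SUB_MASTER)]
    for s in range(SUB_MASTERS_PER_MASTER)
}

total_modules = sum(len(mods) for mods in drive.values())
print(f"{total_modules} mSATA modules behind one master Hydra")  # 16
```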

 

[image courtesy of Storbyte]

 

Show Me The Product

[image courtesy of Storbyte]

 

The ECO-FLASH drives come in 4, 8, 16 and 32TB configurations, and these fit into a variety of arrays. Storbyte is offering three ECO-FLASH array models:

  • 131TB raw capacity in 1U (using 4 drives);
  • 262TB raw capacity in 2U (using 16 drives); and
  • 786TB raw capacity in 4U (using 48 drives).
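
A bit of quick division (my arithmetic, not Storbyte’s) suggests the 1U model is built from the 32TB drives, while the larger models use the 16TB drives:

```python
# Dividing the quoted raw capacities by drive count (my own arithmetic, not
# Storbyte's figures) to see which drive sizes the arrays are likely using.

models = [                    # (raw TB, drives, rack units)
    (131, 4, 1),
    (262, 16, 2),
    (786, 48, 4),
]

for raw_tb, drives, ru in models:
    print(f"{raw_tb}TB raw / {drives} drives = "
          f"~{raw_tb / drives:.1f}TB per drive in {ru}U")
```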

Storbyte’s ECO-FLASH supports a blend of Ethernet, iSCSI, NAS and InfiniBand primary connectivity simultaneously. You can also add Storbyte’s 4U 1.18PB spinning disk JBOD expansion units to deliver a hybrid solution.

 

Thoughts

The idea behind Storbyte came about because some people were working in forensic security environments that had a very heavy write workload, and they needed to find a better way to add resilience to the high performance storage solutions they were using. Storbyte are offering a 10 year warranty on their product, so they’re clearly convinced that they’ve worked through a lot of the problems previously associated with the SSD Write Cliff (read more about that here, here, and here). They tell me that Hydra is the primary reason that they’re able to mitigate a number of the effects of the write cliff and can provide performance for a longer period of time.

Storbyte’s is not a standard approach by any stretch. They’re talking some big numbers out of the gate and have a pretty reasonable story to tell around capacity, performance, and resilience as well. I’ve scheduled another session with Storbyte to talk some more about how it all works, and I’ll be watching these folks with some interest as they enter the market and start to get some units running workloads on the floor. There’s certainly interesting heritage there, and the write cliff has been an annoying problem to solve. Couple that with some aggressive economics and support for a number of connectivity options and I can see this solution going into a lot of DCs and being used for some cool stuff. If you’d like to read another perspective, check out what Rich over at Gestalt IT wrote about them, and you can read the full press release here.

Kingston’s NVMe Line-up Is The Life Of The Party

Disclaimer: I recently attended VMworld 2017 – US.  My flights were paid for by ActualTech Media, VMware provided me with a free pass to the conference and various bits of swag, and Tech Field Day picked up my hotel costs. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

You can view the video of Kingston‘s presentation at Tech Field Day Extra VMworld US 2017 here, and download a PDF copy of my rough notes from here.

 

It’s A Protocol, Not Media

NVMe has been around for a few years now, and some people mistake it for a new kind of media that they plug into their servers. It’s not; it’s a standard specification for accessing Flash media via the PCI Express bus. There are a bunch of reasons why you might choose to use NVMe instead of SAS, including lower latency and less CPU overhead. My favourite thing about it, though, is the plethora of form factors available. Kingston touched on these in their presentation at Tech Field Day Extra recently. You can get them in half-height, half-length (HHHL) add-in cards (AIC), U.2 (2.5″) and M.2 sizes. To give you an idea of the use cases for each of these, Kingston suggested the following applications:

  • HHHL (AIC) card
    • Server / DC applications
    • High-end workstations
  • U.2 (2.5″)
    • Direct-attached, server backplane, just a bunch of flash (JBOF)
    • White box and OEM-branded
  • M.2
    • Client applications
    • Notebooks, desktops, workstations
    • Specialised systems

 

It’s Pretty Fast

NVMe has proven to be pretty fast, and a number of companies are starting to develop products that leverage the protocol in an extremely efficient manner. Couple that with the rise of NVMe/F solutions and you’ve got some pretty cool stuff coming to market. The price is also becoming a lot more reasonable, with Kingston telling us that their DCP1000 NVMe HHHL comes in at around “$0.85 – $0.90 per GB at the moment”. It’s obviously not as cheap as things that spin at 7200RPM but the speed is mighty fine. Kingston also noted that the 2.5″ form factor would be hanging around for some time yet, as customers appreciated the serviceability of the form factor.

 

[Kingston DCU1000 – Image courtesy of Kingston]

 

This Stuff’s Everywhere

Flash media has been slowly but surely taking over the world for a little while now. The cost per GB is reducing (slowly, but surely), and the range of form factors means there’s something for everyone’s needs. Protocol advancements such as NVMe make things even easier, particularly at the high end of town. It’s also been interesting to see these “high end” solutions trickle down to affordable form factors such as PCIe add-in cards. With the relative ubiquity of operating system driver support, NVMe has become super accessible. The interesting thing to watch now is how we effectively leverage these advancements in protocol technologies. Will we use them to make interesting advances in platforms and data access? Or will we keep using the same software architectures we fell in love with 15 years ago (albeit with dramatically improved performance specifications)?

 

Conclusion and Further Reading

I’ll admit it took me a little while to come up with something to write about after the Kingston presentation. Not because I don’t like them or didn’t find their content interesting. Rather, I felt like I was heading down the path of delivering another corporate backgrounder coupled with speeds and feeds and I know they have better qualified people to deliver that messaging to you (if that’s what you’re into). Kingston do a whole range of memory-related products across a variety of focus areas. That’s all well and good but you probably already knew that. Instead, I thought I could focus a little on the magic behind the magic. The Flash era of storage has been absolutely fascinating to witness, and I think it’s only going to get more interesting over the next few years. If you’re into this kind of thing but need a more comprehensive primer on NVMe, I recommend you check out J Metz’s article on the Cisco blog. It’s a cracking yarn and enlightening to boot. Data Centre Journal also provide a thorough overview here.

Dell EMC Announces Isilon All-Flash

You get a flash, you get a flash, you all get a flash

Last week at Dell EMC World it was announced that the Isilon All-Flash NAS (formerly “Project Nitro“) offering was available for pre-order (and GA in early 2017). You can check out the specs here, but basically each chassis is comprised of 4 nodes in 4RU. Dell EMC says this provides an “[e]xtreme density, modular and incredibly scalable all-flash tier”, with the ability to scale to 100 systems (400 nodes), storing 92.4PB of capacity and delivering 25M IOPS and up to 1.5TB/s of total aggregate bandwidth—all within a single file system and single volume. All OneFS features are supported, and a OneFS update will be required to add these to existing clusters.
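
If you divide those cluster maximums evenly across the 400 nodes, you get a feel for the per-node figures. These are my back-of-the-envelope numbers, not Dell EMC’s:

```python
# My own division of the quoted cluster maximums across 400 nodes
# (100 chassis x 4 nodes); these per-node figures are just the quoted
# totals divided evenly, not Dell EMC numbers.

nodes = 400
capacity_pb = 92.4
iops_m = 25
bandwidth_tbs = 1.5

print(f"~{capacity_pb * 1000 / nodes:.0f}TB capacity per node")       # ~231TB
print(f"~{iops_m * 1_000_000 / nodes:,.0f} IOPS per node")            # ~62,500
print(f"~{bandwidth_tbs * 1000 / nodes:.2f}GB/s bandwidth per node")  # ~3.75GB/s
```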


[image via Dell EMC]

 

Why?

Dell EMC are saying this solution provides 6x greater IOPS per RU over existing Isilon nodes. It also helps in areas where Isilon hasn’t been as competitive, providing:

  • High throughput for large datasets of large files for parallel processing;
  • IOPS intensive: You can now work on billions of small files and large datasets for parallel processing;
  • Predictable latency and performance for mixed workloads; and
  • Improved cost of ownership, with higher density flash providing some level of relief in terms of infrastructure and energy efficiency.

 

Use Cases?

Dell EMC covered the usual suspects – but with greater performance:

  • Media and entertainment;
  • Life sciences;
  • Geoscience;
  • IoT; and
  • High Performance Computing.

 

Thoughts and Further Reading

If you followed along with the announcements from Dell EMC last week you would have noticed that there have been some incremental improvements in the current storage portfolio, but no drastic changes. While it might make for an exciting article when Dell EMC decide to kill off a product, these changes make a lot more sense (FluidFS for XtremIO, enhanced support for Compellent, and the addition of a PowerEdge offering for VxRail). The addition of an all-flash offering for Isilon has been in the works for some time, and gives the platform a little extra boost in areas where it may have previously struggled. I’ve been a fan of the Isilon platform since I first heard about it, and while I don’t have details of pricing, if you’re already an Isilon shop the all-flash offering should make for interesting news.

Vipin V.K did a great write-up on the announcement that you can read here. The press release from Dell EMC can be found here. There’s also a decent overview from ESG here. Along with the above links to El Reg, there’s a nice article on Nitro here.

Tintri Announces New Scale-Out Storage Platform

I’ve had a few briefings with Tintri now, and talked about Tintri’s T5040 here. Today they announced a few enhancements to their product line, including:

  • Nine new Tintri VMstore T5000 all flash models with capacity expansion capabilities;
  • VM Scale-out software;
  • Tintri Analytics for predictive capacity and performance planning; and
  • Two new Tintri Cloud offerings.

 

Scale-out Storage Platform

You might be familiar with the T5040, T5060 and T5080 models, with the Tintri VMstore T5000 all-flash series being introduced in August 2015. All three models have been updated with new capacity options ranging from 17 TB to 308 TB. These systems use the latest in 3D NAND technology and high density drives to offer organizations both higher capacity and lower $/GB.

[image: Tintri03_NewModels]

The new models have the following characteristics:

  • Federated pool of storage. You can now treat multiple Tintri VMstores—both all-flash and hybrid-flash nodes—as a pool of storage. This makes management, planning and resource allocation a lot simpler. You can have up to 32 VMstores in a pool.
  • Scalability and performance. The storage platform is designed to scale to more than one million VMs. Tintri tell me that the  “[s]eparation of control flow from data flow ensures low latency and scalability to a very large number of storage nodes”.
  • This allows you to scale from small to very large with new and existing, all flash and hybrid, partially or fully populated systems.
  • The VM Scale-out software works across any standard high performance Ethernet network, eliminating the need for proprietary interconnects. The VM Scale-out software automatically provides best placement recommendation for VMs.
  • Scale compute and storage independently. Loose coupling of storage and compute provides customers with maximum flexibility to scale these elements independently. I think this is Tintri’s way of saying they’re not (yet) heading down the hyperconverged path.

 

VM Scale-out Software

Tintri’s new VM Scale-out Software (*included with Tintri Global Center Advanced license) provides the following capabilities:

  • Predictive analytics derived from one million statistics collected every 10 minutes from 30 days of history, accounting for peak loads instead of average loads, providing (according to Tintri) the most accurate predictions. Deep workload analysis identifies VMs that are growing rapidly and applies sophisticated algorithms to model the growth ahead and avoid resource constraints.
  • Least-cost optimization based on multi-dimensional modelling. The control algorithm constantly optimizes across the thousands of VMs in each pool of VMstores, taking into account space savings, the resources required by each VM, and the cost in time and data to move VMs, and makes the least-cost recommendation for VM migration that optimizes the pool (see the sketch after this list for the general idea).
  • Retain VM policy settings and stats. When a VM is moved, not only are the snapshots moved with the VM, but the statistics, protection and QoS policies migrate as well, using an efficient compressed and deduplicated replication protocol.
  • Supports all major hypervisors.
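
To give a feel for the least-cost placement idea mentioned above, here’s a deliberately simplified sketch – my own toy heuristic with made-up numbers, not Tintri’s actual algorithm:

```python
# A deliberately simplified illustration of least-cost VM rebalancing: pick
# the move that relieves the most-loaded VMstore for the least migration
# cost.  My own toy heuristic with made-up numbers, not Tintri's algorithm.

vmstores = {"vmstore-a": 0.92, "vmstore-b": 0.55, "vmstore-c": 0.60}  # projected utilisation
vms_on_a = {"sql01": {"size_gb": 800, "load": 0.20},
            "web03": {"size_gb": 120, "load": 0.18}}


def candidate_moves(source_vms, targets, threshold=0.80):
    for vm, stats in source_vms.items():
        for target, util in targets.items():
            if util + stats["load"] < threshold:
                # cost score: data to move per unit of load relieved
                yield stats["size_gb"] / stats["load"], vm, target


other_stores = {k: v for k, v in vmstores.items() if k != "vmstore-a"}
cost, vm, target = min(candidate_moves(vms_on_a, other_stores))
print(f"Recommend moving {vm} to {target} (cost score {cost:.0f})")
```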

[image: Tintri04_ScaleOut]

You can check out a YouTube video on Tintri VM Scale-out (covering optimal VM distribution) here.

 

Tintri Analytics

Tintri has always offered real-time, VM-level analytics as part of its Tintri Operating System and Tintri Global Center management system. This has now been expanded to include a SaaS offering of predictive analytics that provides organizations with the ability to model both capacity and performance requirements. Powered by big data engines such as Apache Spark and Elasticsearch, Tintri Analytics is capable of analyzing stats from 500,000 VMs over several years in one second. By mining the rich VM-level metadata, Tintri Analytics provides customers with information about their environment to help them make better decisions about applications’ behaviours and storage needs.

Tintri Analytics is a SaaS tool that allows you to model storage needs up to 6 months into the future based on up to 3 years of historical data.

[image: Tintri01_Analytics]

Here is a shot of the dashboard. You can see a few things here, including:

  • Your live resource usage for your entire footprint up to 32 VMstores;
  • Average consumption per VM (bottom left); and
  • The types of applications that are your largest consumers of Capacity, Performance and Working Set (bottom center).

[image: Tintri02_Analytics]

Here you can see exactly how your usage of capacity, performance and working set have been trending over time. You can also see when you can expect to run out of these resources (and which is on the critical path). It also provides the ability to change the timeframe to alter the projections, or drill into specific application types to understand their impact on your footprint.
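
The underlying “when do I run out?” question is simple enough to sketch with a trivial linear extrapolation. Tintri’s model is far more sophisticated (peak-aware, per-application, multi-dimensional); the numbers below are made up purely to illustrate the concept:

```python
# Trivial linear capacity projection to illustrate the idea of "when do I
# run out?".  Tintri's model is far more sophisticated; this is just the
# concept, with made-up numbers.

monthly_used_tb = [40, 44, 47, 52, 55, 61]   # hypothetical usage history
capacity_tb = 100

growth_per_month = (monthly_used_tb[-1] - monthly_used_tb[0]) / (len(monthly_used_tb) - 1)
months_left = (capacity_tb - monthly_used_tb[-1]) / growth_per_month
print(f"~{growth_per_month:.1f}TB/month growth, "
      f"capacity exhausted in ~{months_left:.0f} months")
```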

There are a number of videos covering Tintri Analytics that I think are worth checking out.

 

Tintri Cloud Suites

Tintri have also come up with a new packaging model called “Tintri Cloud”. Aimed at folks still keen on private cloud deployments, Tintri Cloud combines the Tintri Scale-out platform and the all-flash VMstores.

Customers can start with a single Tintri VMstore T5040 with 17 TB of effective capacity and scale out to the Tintri Foundation Cloud with 1.2 PB in as few as 8 rack units. Or they can grow all the way to the Tintri Ultimate Cloud, which delivers a 10 PB cloud-ready storage infrastructure for up to 160,000 VMs, delivering over 6.4 million IOPS in 64 RU for less than $1/GB effective. Both the Foundation Cloud and Ultimate Cloud include Tintri’s complete set of software offerings for storage management, VM-level analytics, VM Scale-out, replication, QoS, and lifecycle management.

 

Further Reading and Thoughts

There’s another video covering setting policies on groups of VMs in Tintri Global Center here. You might also like to check out the Tintri Product Launch webinar.

Tintri have made quite a big deal about their “VM-aware” storage in the past, and haven’t been afraid to call out the bigger players on their approach to VM-centric storage. While I think they’ve missed the mark with some of their comments, I’ve enjoyed the approach they’ve taken with their own products. I’ve also certainly been impressed with the demonstrations I’ve been given on the capability built into the arrays and available via Global Center. Deploying workload to the public cloud isn’t for everyone, and Tintri are doing a bang-up job of going for those who still want to run their VM storage decoupled from their compute and in their own data centre. I love the analytics capability, and the UI looks to be fairly straightforward and informative. Trending still seems to be a thing that companies are struggling with, so if a dashboard can help them with further insight then it can’t be a bad thing.

New eBook from Dell

I recently had the opportunity to contribute to an eBook from Dell (just quietly it feels more like a pamphlet) called “10 Ways to Flash Forward: Future-Ready Storage Insights from the Experts”. Besides the fact that I need to get a headshot that isn’t the same as my work ID card, I think it’s worth checking out if only for the insights that other people have provided. You can grab a PDF copy here. It’s also available via SlideShare.

Violin Memory Announces Additions To FSP Range

I got a chance to speak to Violin Memory at Storage Field Day 8 and was impressed by the company’s “new” approach to all-flash arrays. They recently announced the addition of the FSP 7600 and the FSP 7250 to the Flash Storage Platform. I’ve been told these will be GA in December 2015. Please note that I’ve not used either of these products in the wild, and recommend that you test them in your own environment prior to making any purchasing decisions.

Violin positions FSP as a competitive differentiator with Concerto OS 7 offering the following features:

  • Comprehensive data protection services (including synchronous and asynchronous replication, and CDP);
  • Stretch cluster for zero downtime and zero data loss;
  • Granular deduplication and compression;
  • Sustained low latency with the Flash Fabric Architecture;
  • Simple, single-pane-of-glass management; and
  • Integrated data migration and ecosystem integration.

The FSP 7250 is being positioned as an entry-level, sub-$100K US AFA that is:

  • Data Reduction Optimized (Always on Dedupe);
  • Integrated 3U Platform;
  • 8-26TB Raw; and
  • Up to 92TB Effective capacity.

The FSP 7600 sits just below the FSP 7700, and offers:

  • “Extreme” Performance
  • An integrated 3U Platform
  • 35-140TB Raw
  • 1.1M IOPS at <500 μs latency

Unfortunately I don’t currently have links to useful things like data sheets, but you can read a nice summary article at El Reg here, and a link to the Violin Memory press release can be found here.

Pure Storage Announces FlashArray//m, Evergreen Storage and Pure1

That’s one of the wordier titles I’ve used for a blog post in recent times, but I think it captures the essence of Pure Storage‘s recent announcements. Firstly, I’m notoriously poor at covering product announcements, so if you want a really good insight into what is going on, check out Dave Henry’s post here. There were three key announcements made today:

  • FlashArray//m;
  • Evergreen Storage; and
  • Pure1 Cloud-Based Management and Support.

 

FlashArray//m

Besides having some slightly weird spelling, the FlashArray//m (mini because it fits in 3RU and modular because, well, you can swap modules in it) is Pure’s next-generation storage appliance. Here’s a picture.

[image: Pure_hardware1]

There are three models, the //m20, //m50, and //m70. Each of these has various capabilities. I’ve included an overview from the datasheet, but note that this is subject to change before GA of the tin.

[image: Pure_hardware2]

The key takeaway for me is that, after some time using other people’s designs, this is Pure’s crack at using their own hardware design, and it will be interesting to see how this plays out over the expected life of the gear.

 

Evergreen Storage

[image: Pure_evergreen]

In the olden days, when I was a storage customer, I would have been pretty excited about a program like Evergreen Storage. Far too often I found myself purchasing storage only to have the latest version released a month later, sometimes before the previous generation had hit the loading dock. I was rarely given a heads up from the vendor that something new was coming, and often had the feeling I was just using up their old stock. Pure don’t want you to have that feeling with them. Instead, for as long as the array is under maintenance, Pure will help customers upgrade the controllers, storage, and software in a non-disruptive fashion. The impression I got was that these arrays would keep on keeping on for around 7 – 10 years, with the modular design enabling easy upgrades of key technologies as well as capacity.

 

Pure1 Cloud-Based Management and Support

I’ve never been a Pure Storage customer, so I can’t comment on how easy or difficult it currently is to get support. Nonetheless, I imagine the Pure1 announcement might be a bit exciting for the average punter slogging through storage ops. Basically, Pure1 gives you improved analytics and management of your storage infrastructure, all of which can be accessed via a web browser. And, if you’re so inclined, you can turn on a call home feature and have Pure collect info from your arrays every 30 seconds. This provides both the customer and Pure with a wealth of information to make decisions about performance, resilience and upgrades. You can get the datasheet here.

 

Final Thoughts

I like Pure Storage. I was lucky enough to visit them during Storage Field Day 6 and was impressed by their clarity of vision and different approach to flash storage architecture. I like the look of the new hardware, although the proof will be in field performance. The Evergreen Storage announcement is fantastic from the customer’s perspective, although I’ll be interested to see just how long they can keep something like that going.