Storage Field Day 13 – Wrap-up and Link-o-rama

Disclaimer: I recently attended Storage Field Day 13. My flights, accommodation and other expenses were paid for by Tech Field Day and Pure Storage. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event. Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

 

This is a quick post to say thanks once again to Stephen, Tom and Claire, and the presenters at Storage Field Day 13 and Pure//Accelerate. I had a super fun and educational time. For easy reference, here’s a list of the posts I did covering the events (they may not match the order of the presentations).

Storage Field Day – I’ll Be At Storage Field Day 13

Storage Field Day Exclusive at Pure//Accelerate 2017 – General Session Notes

Storage Field Day Exclusive at Pure//Accelerate 2017 – FlashBlade 2.0

Storage Field Day Exclusive at Pure//Accelerate 2017 – Purity Update

Storage Field Day 13 – Day 0

NetApp Doesn’t Want You To Be Special (This Is A Good Thing)

Dell EMC’s in the Midst of a Midrange Resurrection

X-IO Technologies Are Living On The Edge

SNIA’s Swordfish Is Better Than The Film

ScaleIO Is Not Your Father’s SDS

Dell EMC’s Isilon All-Flash Is Starting To Make Sense

Primary Data Attacks Application Ignorance

StorageCraft Are In Your Data Centre And In The Cloud

Storage Field Day 13 – (Fairly) Full Disclosure

 

Also, here are links to posts by my fellow delegates (in no particular order). They’re all very smart people, and you should check out their stuff, particularly if you haven’t before. I’ll attempt to keep this updated as more posts are published. But if it gets stale, the Storage Field Day 13 and Storage Field Day Exclusive at Pure Accelerate 2017 landing pages will have updated links.

 

Alex Galbraith (@AlexGalbraith)

Storage Field Day 13 (SFD13) – Preview

Long Term Data Retention – What do I do?

Does Cloud Provide Infinite Storage Capacity and Retention?

 

Brandon Graves (@BrandonGraves08)

Delegate For Storage Field Day 13

Storage Field Day Is Almost Here

X-IO Technologies Axellio At SFD13

 

Chris Evans (@ChrisMEvans)

Pure Accelerate: FlashArray Gets Synchronous Replication

Pure1 META – Analytics for Pure Storage Arrays

 

Erik Ableson (@EAbleson)

 

Matthew Leib (@MBLeib)

Pure Storage Accelerate/Storage Field Day 13 – PreFlight

 

Jason Nash (@TheJasonNash)

 

Justin Warren (@JPWarren)

Pure Storage Charts A Course To The Future Of Big Data

 

Max Mortillaro (@DarkkAvenger)

See you at Storage Field Day 13 and Pure Accelerate!

Storage Field Day 13 Primer – Exablox

SFD13 Primer – X-IO Axellio Edge Computing Platform

Real-time Storage Analytics: one step further towards AI-enabled storage arrays?

X-IO Axellio and Edge Computing: an NVMe-enabled emerging architecture model?

 

Mark May (@CincyStorage)

What the heck is tail latency anyways?

Cloudified Snapshots done two ways

 

Mike Preston (@MWPreston)

A field day of Storage lies ahead!

Primary Data set to make their 5th appearance at Storage Field Day

Hear more from Exablox at Storage Field Day 13

X-IO Technology – A #SFD13 preview

SNIA comes back for another Storage Field Day

Is there still a need for specialized administrators?

The concept of “Scale In” for high volume data

 

Ray Lucchesi (@RayLucchesi)

Axellio, next gen, IO intensive server for RT analytics by X-IO Technologies

 

Scott D. Lowe (@OtherScottLowe)

Backup and Recovery in the Cloud: Simplification is Actually Really Hard

The Purity of Hyperconverged Infrastructure: What’s in a Name?

 

Stephen Foskett (@SFoskett)

The Year of Cloud Extension

 

Finally, thanks again to Stephen and the team at Gestalt IT for making it all happen. It was an educational and enjoyable week and I really valued the opportunity I was given to attend. Here’s a photo of the Storage Field Day 13 delegates.

[image courtesy of Tech Field Day]

Primary Data Attacks Application Ignorance


Primary Data recently presented at Storage Field Day 13. You can see videos of their presentation here, and download my rough notes from here.

 

My Applications Are Ignorant

I had the good fortune of being in a Primary Data presentation at Storage Field Day. I’ve written about them a few times (here, here and here) so I won’t go into the basic overview again. What I would like to talk about is the idea raised by Primary Data that “applications are unaware”. Unaware of what exactly? Usually it’s the underlying infrastructure. When you deploy applications they generally want to run as fast as they can, or as fast as they need to. Rarely, though, can they detect the platform they run on; they simply run as fast as the supporting infrastructure allows. This is different to phone applications, for example, which are normally written to operate within the constraints of the hardware.

An application’s ignorance has an impact. According to Primary Data, this impact can be in terms of performance, cost, protection (or all three). The cost of unawareness can have the following impact on your environment:

  • Bottlenecks hinder performance
  • Cold data hogs hot capacity
  • Over provisioning creates significant overspending
  • Migration headaches keep data stuck until retirement
  • Vendor lock-in limits agility and adds more cost

As well as this, the following trends are being observed in the data centre:

  • Cost: Budgets are getting smaller;
  • Time: We never have enough; and
  • Resources: We have limited resources and infrastructure to run all of this stuff.

Put this all together and you’ve got a problem on your hands.

 

Primary Data In The Mix

Primary Data tells us they’re addressing these pain points with DataSphere, “a metadata engine that automates the flow of data across the enterprise infrastructure and the cloud to meet evolving application demands”. It:

  • Is storage and vendor agnostic;
  • Virtualises the view of data;
  • Automates the flow of data; and
  • Solves the inefficiency of traditional storage and compute architectures.
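To make the idea of objective-based data placement a little more concrete, here’s a toy sketch of picking the cheapest tier that still meets an application’s performance objective. The tiers, latencies and prices are all made up for illustration; this is my own model of the concept, not DataSphere’s actual engine.

```python
# Toy objective-based placement: choose the cheapest storage tier
# that still meets a file's latency objective. Tier names, latency
# figures and prices are invented for illustration.
TIERS = [  # (name, max_latency_ms, $/GB-month)
    ("nvme", 1, 0.30),
    ("ssd", 5, 0.15),
    ("cloud", 50, 0.02),
]

def place(objective_latency_ms):
    """Return the cheapest tier whose latency meets the objective."""
    candidates = [t for t in TIERS if t[1] <= objective_latency_ms]
    return min(candidates, key=lambda t: t[2])[0]

print(place(5))    # 'ssd' meets a 5ms objective more cheaply than NVMe
print(place(100))  # cold data with a loose objective can drop to 'cloud'
```

The useful property is that as an application’s behaviour (and hence its objective) changes, re-running the same policy moves the data to a different tier without the application knowing or caring.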

 

Can We Be More Efficient?

Probably. But the traditional approach of architecting infrastructure for various workloads isn’t really working as well as we’d like. I like the way Primary Data are solving the problem of application ignorance. But I think it’s treating a symptom, rather than providing a cure. I’m not suggesting that I think what Pd are doing is wrong by any stretch, but rather that my applications will still remain ignorant. They’re still not going to have an appreciation of the infrastructure they’re running on, and they’re still going to run at the speed they want to run at. That said, with the approach that Primary Data takes to managing data, I have a better chance of having applications running with access to the storage resources they need.

Application awareness means different things to different people. For some people, it’s about understanding how the application is going to behave based on the constraints it was designed within, and what resources they think it will need to run as expected. For other people, it’s about learning the behaviour of the application based on past experiences of how the application has run and providing infrastructure that can accommodate that behaviour. And some people want their infrastructure to react to the needs of the application in real time. I think this is probably the nirvana of infrastructure and application interaction.

Ultimately, I think Primary Data provides a really cool way of managing various bits of heterogeneous storage in a way that aligns with some interesting notions of how applications should behave. I think the way they pick up on the various behaviours of applications within the infrastructure and move data around accordingly is also pretty neat. I think we’re still some ways away from running the kind of infrastructure that interacts intelligently with applications at the right level, but Primary Data’s solution certainly helps with some of the pain of running ignorant applications.

You can read more about DataSphere Extended Services (DSX) here, and the DataSphere metadata engine here.

StorageCraft Are In Your Data Centre And In The Cloud


StorageCraft presented at Storage Field Day 13 recently. You can see videos of their presentation here, and download my rough notes from here.

 

StorageCraft Are In Your Data Centre

And have been for a while (well, since 2003 in any case). I first came across StorageCraft about ten years ago when the small integrator I was working for at the time was looking for a solution to make P2V migrations easier. ShadowProtect did this (and a lot more stuff) really well. In fact, a colleague of mine used it last year to P2P some SAP cluster nodes from a hospital to a managed infrastructure environment (don’t even get me started). I hated P2Vs, but I loved the idea of moving away from crappy physical machines. While SPX has been primarily aimed at smaller organisations, it supports a range of operating systems, hypervisors and hardware combinations. And, most importantly, it’s quite good at recovering data too.

 

And In The Cloud

How does that work then? Well, you create local backup images with ShadowProtect SPX or StorageCraft ShadowProtect. You then use ImageManager to replicate those encrypted backups offsite to the StorageCraft Cloud. You can also seed the cloud data with your base backup image and any existing consolidated incremental images by replicating these to a seed drive and shipping it to StorageCraft (this is a feature I’ve always been fond of with CrashPlan). SPX is installed on each machine, and ImageManager runs on a single machine. Its job is to verify your backup images and perform consolidation functions. There’s a portal you can access to manage resources.
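That consolidation step can be pictured as applying each incremental image’s changed blocks over the base image, with later changes winning. This is a toy model of the general incremental-backup technique, not StorageCraft’s actual image format:

```python
# Toy model of consolidating incremental backup images: the base
# image holds every block, each incremental records only changed
# blocks, and applying incrementals in order yields the
# consolidated image. Block IDs and contents are made up.
def consolidate(base, incrementals):
    image = dict(base)
    for inc in incrementals:
        image.update(inc)  # later changed blocks win
    return image

base = {0: "A", 1: "B", 2: "C"}
incs = [{1: "B1"}, {2: "C2", 1: "B2"}]
print(consolidate(base, incs))  # → {0: 'A', 1: 'B2', 2: 'C2'}
```

Collapsing a chain of incrementals into one image like this is what keeps restore times bounded even when backups run every 15 minutes.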

[image courtesy of StorageCraft]

StorageCraft’s cloud (OpenStack with Ceph distributed storage) is a managed service from them. Their primary data centre is located in the Rocky Mountains in Utah, with a secondary site in Georgia. They also have a DC presence in “the UK somewhere”, Canada and Sydney. The cool thing about using ShadowProtect is that there’s support for physical and virtual environments, with support for Windows and Linux (CentOS, Red Hat, Ubuntu). You can also replicate to other clouds if required (but StorageCraft can’t guarantee you can recover when you’re in those clouds). There’s also a built-in connector to AWS if you need to dump a bunch of data there.

Is the recovery temporary? Yes. It’s not a permanent restore. This is really only something you’d use in the event of a disaster. Not just because you’ve decided to save yourself some power in your own DC. As such, there’s not really the facility to run some “supporting infrastructure” VMs (such as DNS, authentication services, etc). You can test the facility though. The connection to the cloud is asynchronous, with ImageManager using an FTP connection to get the data to the cloud.

In terms of granularity, you can go down to 15 minutes, but customers typically do a nightly synchronisation of data. The average-sized customer is backing up 20 – 30 servers. You can find out more about the guaranteed service levels here.

 

Conclusion and Further Reading

When I heard StorageCraft were presenting at Storage Field Day 13, I was hoping they’d spend some time on their plans for Exablox integration with their existing products. We storage pundits had various thoughts on what it might look like, and were hoping that they’d give us at least a hint. It wasn’t to be, which isn’t really a bad thing, as the acquisition only happened in January this year, and I’d rather they spent the time to do it properly. Instead, I was pleasantly reminded of a lot of the things I like about StorageCraft’s offering.

Backup and recovery is hard to do at times. And DR as a service has been hard to do because the software mechanisms and the bandwidth haven’t been available to make the solutions work really well. This is even more of a problem in smaller environments who don’t have the luxury of hiring specialists to look after their data protection requirements (both on-premises and off). The cool thing about StorageCraft’s offering is that it’s terribly simple to manage and not very complex to get up and running. And, as I mentioned at the start, it’s quite good at recovering data when required. This is critical to the success of these kinds of offerings. StorageCraft are likely never going to dominate the data protection market (although they may have plans to do just that), but they offer a technically interesting and price-competitive offering that should appeal to smaller places looking for peace of mind.

I’m looking forward to seeing what they come up with in terms of Exablox integration, as I think that acquisition gives them a really cool hardware play to leverage as well, at the right price point for their target market. If you’d like to know more, you can find more technical information on StorageCraft’s offering here, and the data sheet can be downloaded from here.

ScaleIO Is Not Your Father’s SDS


I’ve written about ScaleIO before (here and here), but thought it might be useful to deliver a basic overview of what ScaleIO actually is and what it can do. You can see Dell EMC’s Storage Field Day presentation video here and you can grab a copy of my rough notes from here.

 

ScaleIO Overview

What is it?

In a nutshell, it’s a software-defined storage product that leverages captive server storage at scale.

 

Benefits

According to Dell EMC, the useful life of ScaleIO is perpetual.

  • Deploy once
  • Grow incrementally
  • No data migration
  • Rolling upgrades
  • Perpetual software licenses

 

ScaleIO Vision and Architecture

Core, Fundamental Features of ScaleIO

Configuration Flexibility

  • Hyperconverged and/or 2-layers

Highly scalable

  • 100s / 1000s of nodes

High performance / low footprint

  • Performance scales linearly
  • High I/O parallelism
  • Gets the maximum from flash media
  • Various caching options (RAM, flash)

Platform agnostic

  • Bare-metal: Linux / Windows
  • Virtual: ESX, XEN, KVM, Hyper-V

Any network

  • Slow, fast, shared, dedicated, IPv6

Flash and Magnetic

  • SSD, NVMe, PCI or HDD
  • Manual and automatic multi-tiering

Elastic / flexible / multi-tenancy

  • Add, move, remove nodes or disks “on the fly”
  • Auto-balance

Various partitioning schemes:

  • Protection-domains
  • Storage pools
  • Fault sets
  • Seamlessly move assets from one partition to another
  • QoS – bandwidth/IOPS limiter

Resilient

  • Distributed mirroring
  • Fast auto many-to-many rebuild
  • Extensive failure handling / HA
  • Background disk scanner

Secure

  • AD/LDAP, RBAC integration
  • Secure cluster formation and component authentication
  • Secure connectivity with components, secure external client communication
  • D@RE (SW, followed by SED*)

Ease of management & operation

  • GUI, CLI, REST, OpenStack Cinder, vSphere plugin and more
  • Instant maintenance mode
  • NDU

Competent Snapshots

  • Writeable, no hierarchy limits
  • Large consistency groups
  • Automatic policies*

Thin-provisioning

Space-efficient layout*

  • Fine-grain snapshots and thin-provisioning*
  • Compression*

*Soon

 

Two-ways

You can use ScaleIO in a hyperconverged configuration and a “two-layer” configuration. With hyperconverged, you can run:

  • Application and storage in the same node, where ScaleIO is yet another application running alongside other applications
  • Asymmetric nodes, where nodes may have a different number of spindles, etc.

You can also run ScaleIO in a two-layer configuration:

  • App-only nodes can access ScaleIO volumes
  • App+storage nodes are hyperconverged nodes

 

Components

ScaleIO Components

  • ScaleIO Data Client (SDC) exposes shared block volumes to the application (block device driver)
  • ScaleIO Data Server (SDS) owns local storage that contributes to the ScaleIO storage pool (daemon/service)

SDS and SDC in the same host

  • Can live together
  • SDC serves the I/O requests of the resident host applications
  • SDS serves the I/O requests of various SDCs

 

Volume Layout, Redundancy and Elasticity

A volume appears as a single object to the application.

Volume Layout (No Redundancy)

  • Chunks (1MB) are spread across the cluster in a balanced manner
  • No hot spots, no I/O splitting

2-Copy Mirror Scheme

Free and Spare Capacity

  • Free and reserved space scattered across the cluster
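As a rough mental model of that balanced chunk spread, here’s a round-robin sketch of my own. It illustrates why there are no hot spots, but it is not ScaleIO’s actual placement algorithm:

```python
# Toy model of balanced chunk placement: a volume is carved into
# 1MB chunks which are spread evenly across all disks in the pool.
# My own round-robin illustration, not ScaleIO's real algorithm.
CHUNK_MB = 1

def place_chunks(volume_mb, disks):
    """Assign each 1MB chunk of a volume to a disk, round-robin."""
    layout = {d: [] for d in disks}
    for chunk in range(volume_mb // CHUNK_MB):
        layout[disks[chunk % len(disks)]].append(chunk)
    return layout

layout = place_chunks(volume_mb=8, disks=["sds1", "sds2", "sds3", "sds4"])
# Each of the 4 disks ends up with 2 chunks - no hot spots.
print({d: len(c) for d, c in layout.items()})
```

Because every disk holds an equal slice of every volume, reads and writes for any volume fan out across the whole cluster rather than hammering one device.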

 

Fast, balanced and smart rebuild

Forwards Rebuild

  • Once a disk/node fails, the rebuild load is balanced across all the cluster partition disks/nodes -> faster and smoother rebuild

Backwards Rebuild

  • Smart and selective transition to “backwards” rebuild (re-silvering), once a failed node is back alive
  • Short outage = small penalty

 

Elasticity, Auto-rebalance

Add: one may add nodes or disks dynamically -> the system automatically rebalances the storage

  • Old volumes can use the wider striping
  • No extra exposure
  • Minimal data transferred

Remove: One may remove nodes / disks dynamically -> the system automatically rebalances the storage

  • Minimal data transferred in a many to many fashion

Combination: The same rebalance plan could handle additions and removals simultaneously
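The “minimal data transferred” claim can be illustrated with a toy rebalance plan of my own devising (not ScaleIO’s actual planner): when a disk is added, only move enough chunks from the fullest disks onto the new one to even things out.

```python
# Toy rebalance: when a disk is added, move just enough chunks from
# over-full disks onto the new one to level the cluster, so the
# minimum amount of data is transferred. My own illustration of the
# idea, not ScaleIO's real rebalance logic.
def rebalance(layout, new_disk):
    layout[new_disk] = []
    target = sum(len(chunks) for chunks in layout.values()) // len(layout)
    for disk, chunks in layout.items():
        while (disk != new_disk and len(chunks) > target
               and len(layout[new_disk]) < target):
            layout[new_disk].append(chunks.pop())
    return layout

# Three disks with 4 chunks each; adding a fourth disk moves only
# 3 of the 12 chunks, leaving every disk with 3.
layout = {d: list(range(4)) for d in ["sds1", "sds2", "sds3"]}
rebalance(layout, "sds4")
print({d: len(c) for d, c in layout.items()})
```

The same levelling logic works in reverse for removals, which is why a single rebalance plan can handle additions and removals at once.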

 

Conclusion and Further Reading

I’ve spoken to a range of people in the industry, from customers to Dell EMC folks to competitive vendors, and one thing that gets raised constantly is that Dell EMC are offering both ScaleIO and VMware vSAN. If you’ve been following along at home, you’ll know that this isn’t the first time Dell EMC have offered up products that could be seen as competing for the same market share. But I think they’re doing different things and are aimed at different use cases. My second favourite Canadian Chad Sakac explains this better than I would here.

Put this software on the right hardware (the key to any successful software defined storage product) and you’ve got something that can deliver very good block storage performance across a range of use cases. If you want to know more, Dell EMC have a pretty handy architecture overview document you can get here, and you can download a best practice white paper from here. You can also access a brief introduction to ScaleIO here (registration required). But the best bit is you can download ScaleIO for free from here.

ScaleIO.Next promises to deliver a range of new features, including space-efficient storage and NVMe support. I’m curious to see what the market uptake has been, given the accessibility of the software and the availability of ScaleIO-ready nodes based on Dell PowerEdge hardware. In any case, if you’ve got some spare tin, I recommend taking ScaleIO for a spin.

SNIA’s Swordfish Is Better Than The Film


SNIA presented on Swordfish, amongst other things, at Storage Field Day 13 recently. You can see videos of their presentation here, and download my rough notes from here.

 

Swordfish

What is SNIA Swordfish?

It’s not the John Travolta movie that I love and hate so much. So what is it? Well, SNIA are looking to:

  • Refactor and leverage SMI-S schema into a simplified model that is client oriented;
  • Move to class of service based provisioning and monitoring;
  • Cover block, file and object storage; and
  • Extend traditional storage domain coverage to include converged environments (covering servers, storage and fabric together).

 

How do you make Swordfish?

SNIA are leveraging Redfish heavily for Swordfish by:

  • Leveraging and extending DMTF Redfish Specification (focuses on hardware and system management – you can read an introduction on it here);
  • Building using DMTF’s Redfish technologies (RESTful interface over HTTPS in JSON format based on OData v4); and
  • Implementing Swordfish as a seamless extension of the Redfish specification.
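In practice that means a Swordfish service is just another Redfish-style REST endpoint returning JSON. Here’s a hypothetical example of the kind of payload a storage service resource might return and how a client would consume it; the field values are invented for illustration, and only the general Redfish/OData shape (e.g. the `@odata.id` property) is taken from the specs.

```python
import json

# Hypothetical JSON payload of the sort a Swordfish endpoint
# (e.g. under /redfish/v1/StorageServices) might return over HTTPS.
# The values are made up; only the Redfish/OData shape is real.
payload = """
{
    "@odata.id": "/redfish/v1/StorageServices/1",
    "Id": "1",
    "Name": "Block Service",
    "Status": {"Health": "OK"}
}
"""

# A client just parses JSON - no vendor-specific SDK required.
service = json.loads(payload)
print(service["Name"], service["Status"]["Health"])  # → Block Service OK
```

That plain HTTPS+JSON surface is what makes it easy to fold storage management into the same automation tooling already used for Redfish server management.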

 

Conclusion

SMI-S originally delivered the capability to identify storage device properties and attributes and has since been significantly extended to provide all types of management capabilities. It was a fantastic idea that was let down at times by various vendor interpretations of the implementation. Storage in the data centre also looks a lot different to what it did 15 years ago. It’s not just the hyperscalers who need tools to manage their environments in a consistent fashion; it’s cloud folks as well. These environments lean heavily on automation as a key construct within their management capability, and the development of Swordfish on Redfish certainly provides a large part of this capability.

Managing infrastructure resources can be hard at the best of times. At the worst of times there’s usually a whole bunch of things that are (figuratively) on fire and it’s hard to know where to look for resolution. Ultimately, we’re all looking for simple ways to allocate, manage and monitor storage that integrates easily into our existing operational framework and processes. It feels like Swordfish provides this in theory, and SNIA have certainly put a lot of thought into how this experience can be improved and modernized when compared to SMI-S. I’ll be watching to see just how this plays out in reality, and just how well the vendors take to this new standard.

I’ve previously waxed lyrical about the role that SNIA plays in the industry. Initiatives like Swordfish prove once again how important SNIA is to the industry, with key people from various vendors coming together for the common good. If you do nothing else today, go check out SNIA’s website and, if you’re in the industry, get involved. It’s good for all of us.

Dell EMC’s Isilon All-Flash Is Starting To Make Sense


I’ve written about Dell EMC Isilon All-Flash before (here and here). You can see Dell EMC’s Storage Field Day presentation video here and you can grab a copy of my rough notes from here.

 

The Problem?

Dell EMC’s Isilon (and OneFS) has been around for a while now, and Dell EMC tell us it offers the following advantages over competing scale-out NAS offerings:

  • Single, scalable file system;
  • Fully symmetric, clustered architecture;
  • Truly multi-protocol data lake;
  • Transparent tiering with heterogeneous clusters; and
  • Non-disruptive platform and OneFS upgrades.

While this is most likely true, the world (and its workloads) is changing. To this end, Dell EMC have been working with Isilon customers to address some key industry challenges, including:

  • Electronic Design Automation – 7nm and 3D Chip designs;
  • Life Sciences – population-scale genomics;
  • Media and Entertainment – 4K Content and Distribution; and
  • Enterprise – big data and analytics.

 

The Solution?

To cope with the ever-increasing throughput requirements, Dell EMC have developed an all-flash offering for their Isilon range of NAS devices, along with some changes in their OneFS operating environment. The idea of the “F” series of devices is that you can “start small and scale”, with capacities ranging from 72TB – 924TB (RAW) in 4RU. Dell EMC tell me you can go to over 33PB in a single file system. From a performance perspective, Dell EMC say that you can push 250K IOPS (or 15GB/s) in just 4RU and scale to 9M IOPS. These are pretty high numbers, and pointless if your editing workstation is plugged into a 1Gbps switch. But that’s generally not the case nowadays.

One of the neater resilience features that Dell EMC discussed is that the file system layout is “sled-aware”. There are 5 drive sleds per node and 20 sleds per 4RU chassis, and a given file uses at most one drive per sled. This allows a sled to be removed for service without data unavailability, with its drives treated as temporarily offline.
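Here’s my own toy sketch of that constraint (not OneFS code): build a file’s stripe by taking at most one drive per sled, so pulling any single sled takes out at most one drive of the file.

```python
from collections import Counter

# Toy "sled-aware" striping: pick one drive from each of `width`
# different sleds, so removing any single sled affects at most one
# drive of the file. My own illustration, not OneFS's real layout
# code; drive counts per sled are made up.
def stripe(sleds, width):
    chosen = []
    for sled_id, drives in list(sleds.items())[:width]:
        chosen.append((sled_id, drives[0]))  # at most one drive per sled
    return chosen

# 5 sleds, a few drives each
sleds = {s: [f"sled{s}-d{d}" for d in range(3)] for s in range(5)}
layout = stripe(sleds, width=4)

# No sled appears twice in the stripe:
per_sled = Counter(sled_id for sled_id, _ in layout)
print(max(per_sled.values()))  # → 1
```

With protection that tolerates one missing drive per stripe, that placement rule is exactly what turns “pull a sled for service” into a non-event.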

 

Is All-Flash the Answer (Or Just Another Step?)

I’ve been fascinated with the storage requirements (and IT requirements in general) for media and entertainment workloads for some time. I have absolutely no real-world experience with these types of environments, and it would be silly for me to position myself as any kind of expert in the field. [I am, of course, happy for people working in M&E to get in touch with me and tell me all about what they do]. What I do have is a lot of information that tells me that the move from 2K to 4K (and 8K) is forcing people to rethink their requirements for high bandwidth storage in the ranges of capacities that studios are now starting to look at.

Whilst I was initially a little confused around the move to all-flash on the Isilon platform, the more I think about it, the more it makes sense. You’re always going to have a bunch of data hanging around that you might want to keep on-line for a long time, but it may not need to be retrieved at great speed (think “cheap and deep” storage). For this, it seems that the H (Hybrid) series of Isilon does the job, and does it well. But for workloads where large amounts of data need to be processed in a timely fashion, all-flash options are starting to make a lot more sense.

Is an all-flash offering the answer to everything? Probably not. Particularly not if you’re on a budget. And no matter how much money people have invested in the movie / TV show / whatever, I can guarantee that most of that is going to talent and content, not infrastructure. But there’s definitely a shift from spinning disk to Flash and this will continue as Flash media prices continue to fall. And then we’ll wonder how we ever did anything with those silly spinning disks. Until the next magic medium comes along. In the meantime, if you want to take OneFS for a spin, you can grab a copy of the version 8.1 simulator here. There’s also a very good Isilon overview document that I recommend you check out if that’s the kind of thing you’re into.

X-IO Technologies Are Living On The Edge


X-IO Technologies presented on their Axellio Edge product, amongst other things, at Storage Field Day 13 recently. You can see video of the presentation here, and download my rough notes from here.

 

What Edge?

So what is the “edge”? Well, a lot of data has mass. And I’m not talking about those big old 1.8″ SCSI drives I used to pull from servers when I was a young man. Some applications (think geosciences, for example) generate a bunch of data very close to their source. This data invariably needs to be analysed to realise its value. Which is all well and good, but if you’re sitting on a boat somewhere you might have more data than you can easily transport to your public cloud provider in a timely fashion. Once the dataset becomes big or fast enough, it’s easier to move the application to the data than vice versa. X-IO say Axellio focuses on the situation where “moving the data processing power closer to where the data is being generated – closer to the source” makes sense. This also means you need the appropriate CPU/RAM combination to run the application attached to the large dataset. And that’s what X-IO means by edge computing.

 

Show Me Your Specs

[image via X-IO Technologies]

 

2RU form factor

4-socket Intel Xeon E5-26xx v4 CPUs

  • 16 to 88 cores and 24 to 176 threads
  • Core optimised or frequency optimised

32 DIMMs, 16GB – 2TB

  • Optional NVDIMMs for storage cache

Industry-Standard NVMe Storage

  • Up to 72x 2.5” NVMe SSDs
  • 460TB of NVMe Flash with 6.4TB NVMe SSDs (1PB coming)
  • >12 million IOPS, as low as 35 microseconds latency, 60GB/s sustained
  • Optane ready

Optional offload modules

  • 2x Intel Phi – CPU extension for parallel compute
  • 2x Nvidia K2 GPU – Video processing, VDI
  • 2x Nvidia P100 Tesla – Sci Comp, Machine Learning
  • Solarflare Precision Timing Protocol (PTP) Packet Capture (PCAP) offload

 

FabricXpress

X-IO’s FabricXpress is the magic that makes the product work as well as it does. X-IO says it extends the native PCIe bus significantly.

PCIe based Interconnect

  • Up to 72 NVMe SSDs – significantly more SSDs
  • Between server modules
  • Offload modules

Dual ported NVMe architecture

  • Allows access to the same data on the same SSD from both servers
  • Shared access for HA solutions
  • Enables independent server behaviour on shared data

[image courtesy of X-IO Technologies]

 

Networking and Offloading Module

Networking

  • 1×16 PCIe per server module for networking
  • Supports standard off the shelf NICs/HCAs/HBAs
  • Supports HHHL or FHHL cards
  • Ethernet, InfiniBand, FC
  • Up to 2x100GbE per module

Offloading Module

  • Two centre modules are replaced with a single carrier
  • Holds two FHFL DW, x16 PCIe cards
  • Nvidia P100: +18.6 Teraflops (sp)
  • Nvidia V100: +30 Teraflops (sp)

 

Doing What at The Edge?

Edge Data Analytics Platform

The point of Axellio Edge is to ingest and analyse data at really very high speeds. The neat thing about this is that a 2RU chassis replaces a rack of scale out gear. X-IO claim that it’s “uniquely qualified for real-time big data analytics”.

[image courtesy of X-IO Technologies]

 

Conclusion and Further Reading

I hadn’t previously given a lot of thought to the particular use cases X-IO presented as being ideally suited to the Axellio Edge offering. My day job revolves primarily around large enterprises running ridiculously critical and crusty SQL-based applications (eww, legacy). Whilst I’ve had some experience with scientific types doing interesting things with data out in the middle of nowhere, it’s not been at the scale or speed that X-IO talked about. Aside from the fact that there’s a whole lot to like about Axellio in terms of speed and capability in this 2RU box, I also like the range of scenarios that this thing delivers.

We’re working with bigger and bigger data sets, and it’s getting harder and harder to move this data close to our compute platform in a timely fashion. Particularly if that compute platform is sitting in public cloud. And even more so if we have to respect the laws of physics (stupid physics!). Instead of trying to push a whole tonne of data from the source to the application, X-IO have taken a different approach and are bringing the processing back to the data at the source.

The Axellio Edge isn’t going to be the right platform for everyone, but it seems that, if the use case lines up, it’s a pretty compelling offering. It also helps that the X-IO customers I’ve spoken to have been very staunch advocates for the company. The people I had the pleasure of speaking with at X-IO are all very switched on and have put a lot of thought into what they’re doing.

For more information on PCIe, have a look here. You can also find more info on NVM Express here. You can grab a copy of the Axellio data sheet from here, and there’s a good whitepaper on edge computing and IoT that you can find here (registration required).

Dell EMC’s in the Midst of a Midrange Resurrection

Disclaimer: I recently attended Storage Field Day 13.  My flights, accommodation and other expenses were paid for by Tech Field Day and Pure Storage. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

 

Dell EMC presented on their Unity range of midrange storage at Storage Field Day 13 recently. You can see video of the presentation here, and download my rough notes from here.

 

We’re Talking About Unity, Man

All of the Software

Dell EMC have been paying attention to their customers, and all of the software for Unity is now included:

  • Block, File or VVol
  • Snapshots and AppSync Basic
  • Replication (including RecoverPoint Basic)
  • Inline compression
  • D@RE (Data at Rest Encryption)
  • AV enabler
  • QoS
  • Cloud Tiering
  • Unisphere (now running on HTML5, die Java, die!)

There’s no need to go hunting for licenses or enablers like we had to in the VNX and CLARiiON days. This is a good thing, and tells me a lot about Dell EMC’s willingness to listen to customers when they say they want this stuff to be simple to consume without a bunch of extra costs.

 

Architecture

Dell EMC tell us that the Unity array is built on an active-active, fully redundant, dual node architecture. I can’t confirm whether this is the case or not, but I’m fairly sure that it’s an improvement on the ALUA days of yore. The Unity is also really a unified design now, with file, block or VMware Virtual Volume storage sharing the same pool of storage. Again, this is a significant improvement over the somewhat kludgy “Unified” approach that EMC took with the VNX range of arrays.

Dell EMC claim that the Unity array takes “10 minutes to install and 30 minutes to production”. I’m not sure how I feel about these numbers, and I’m not sure I’d make purchasing decisions based on how long it takes me to put some storage in a rack. Heck, I’ve worked in environments where it takes 2 hours to fill out the change request forms to deploy the arrays, and another 4 days to get these activities approved. I guess it’s nice to know that at the end of that administrative pain you could jam this gear in a rack pretty quickly and focus on other, more interesting activities.

Dell EMC are positioning the Unity as “compact and powerful: cloud integrated 500TB all-flash in 2RU”. Not unlike the Mazda3, you get a lot in a fairly compact form factor. And you likely won’t pay huge amounts for it either. Cloud integrated means a lot of things to a lot of people, but Dell EMC have been paying attention to what the likes of Pure Storage and Nimble Storage have been doing, and have delivered a pretty cool offering in CloudIQ, and I’m optimistic that the rest of Dell EMC’s tools will be following suit, if they haven’t already.

 

The Midrange Isn’t Dead

Okay, people weren’t actually saying that midrange is dead. But sometimes it feels like the focus has been on a lot of other things, like super scale out, hyper-object storage and terribly sexy, high-end all-flash storage that runs to a large number of petabytes and connects directly into a port at the base of the end user’s skull. Added to that, Dell EMC have had to do some careful balancing of product portfolios, while doing a pretty decent job of selling the benefits of both the Unity and SC series. I’ve had exposure to both products over time, and can see the good in each line of products. It’s not unreasonable to expect that they’ll merge in the future, but when that will be is anyone’s guess. When Unity initially launched it felt a bit rushed (you can read my coverage here and here). Dell EMC have been working pretty hard to smooth out some of the roughness and bring to market some cool features that were missing in the first iteration of the product.

I’ve been fond of midrange arrays for a long time. The damn things tend to just run, and you can’t walk into most data centres without bumping into some kind of midrange array. Sometimes, midrange is really all you need to get the job done. And there’s no shame in that either. We’re also seeing a bunch of features that were traditionally considered “high-end” being implemented further down the stack. This should only be considered a good thing.

 

Further Reading

You can download the Unity Simulator here, and read my thoughts on Dell EMC’s midrange update from Dell EMC World 2017 here. You can also grab a copy of the Dell EMC Unity VSA from here.

 

NetApp Doesn’t Want You To Be Special (This Is A Good Thing)

Disclaimer: I recently attended Storage Field Day 13.  My flights, accommodation and other expenses were paid for by Tech Field Day and Pure Storage. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

 

I had the good fortune of seeing Andy Banta present at Storage Field Day 13. He spoke about a number of topics, including the death of the specialised admin, and VMware Virtual Volumes integration with SolidFire. You can find a copy of my rough notes from NetApp’s presentation here. You can also find videos from his presentation here.

Changing Times

People have been thinking about the changing role of the specialised IT admin for a while now. The discussion has been traditionally focused on the server administrator’s place in a cloudy world, but the storage administrator’s role is coming under scrutiny in much the same fashion. The reasons for the changing landscape are mostly identical to those that impacted the server administrator’s role:

  • Architectures are becoming easier to manage
  • Tools are designed for rapid deployment, not constant adjustment
  • The hardware is becoming commoditised
  • Software is defining the features and the admin duties

Storage requirements are more dynamic than before, with transient workloads being seen as more commonplace than the static loads once prevalent in the enterprise. The pace of change is also increasing.

According to NetApp, the key focus areas for operational staff have changed as expectations have evolved.

  • Traditional IT has focused on things being “Available and Reliable”
  • The virtualisation age gave us the opportunity to do more with less
  • The cloud age is causing things to happen faster; and
  • As-a-Service is driving the application evolution.

These new focus areas bring with them a new set of challenges though. As we move from the “legacy” DC to the new now, there are other things we have to consider.

Legacy Data Centre → Next Generation Data Centre

  • Single Tenant → Multi-tenant
  • Isolated Workloads → Mixed Workloads
  • Dedicated Infrastructure → Shared Infrastructure
  • Scale Up → Scale Out
  • Pre-provisioned Capacity → Capacity on Demand
  • Hardware Defined → Software Defined
  • Project Based → Self Service
  • Manual Administration → Automation

 

In case you hadn’t realised it, we’re in a bit of a bad way in a lot of enterprises when it comes to IT operations. NetApp neatly identified what’s going wrong in terms of both business and operational limitations.

Business Limitations

  • Unpredictable application performance
  • Slow response to changing business needs
  • Under-utilisation of expensive resources

Operational Limitations

  • Storage policies tied to static capabilities
  • All virtual disks treated the same
  • Minimal visibility and control on array
  • Very hands on

 

The idea is to embrace the “New Evolution” which will improve the situation from both a business and operational perspective.

Business Benefits

  • Guarantee per-application performance
  • Immediately respond to changing needs
  • Scale to match utilisation requirements

Operational benefits

  • Dynamically match storage to application
  • Align virtual disk performance to workload
  • Fully automate control of storage resources

 

No One is Exempt

Operations is hard. No one misses being focused purely on server administration, as there’s higher value to be had in virtualisation administration. NetApp argue that there are higher value activities for the storage discipline as well. Andy summed it up nicely when he said that “[e]nabling through integrations is the goal”.

People like tuning in to events like Storage Field Day because the presenters and delegates often get deep into the technology to highlight exactly how widget X works and why product Y is super terrific. But there’s a lot of value to be had in understanding the context within which these products exist too. We run technology to serve applications that help businesses do business things. It doesn’t matter how fast the latest NVMe/F product is if the application it dishes up is platformed on Windows 2003 and SQL 2005. Sometimes it’s nice to talk about things that aren’t directly focused on technology to understand why a lot of us are actually here.

Ultimately, the cloud (in its many incantations) is having a big impact on the day job of a lot of people, as are rapid developments in key cloud technologies, such as storage, compute, virtualisation and software defined everything. It’s not only operations staff, but also architects, sales people, coffee shop owners, and all kinds of IT folks within the organisation that are coming to grips with the changing IT landscape. I don’t necessarily buy into the “everything is DevOps now and you should learn to code or die” argument, but I also don’t think the way we did things 10 years ago is sustainable anywhere but in the largest and crustiest of enterprise IT shops.

NetApp have positioned this viewpoint because they want us to think that what they’re selling is going to help us transition from rock breakers to automation rock stars. And they’re not the first to think that they can help make it happen. Plenty of companies have come along and told us (for years it seems) that they can improve our lot and make all of our problems go away with some smart automation and a good dose of common sense. Unfortunately, people are still running businesses, and people are still making decisions on how technology is being deployed in the businesses. Which is a shame, because I’d much rather let scripts handle the bulk of the operational work and get on with cool stuff like optimising workloads to run faster and smarter and give more value back to the business. I’m also not saying that what NetApp is selling doesn’t work as they say it will. I’m just throwing in the people element as a potential stumbling block.

Is the role of the specialised storage administrator dead? I think it may be a little premature to declare it dead at this stage. But if you’re spending all of your time carving out LUNs by hand and manually zoning fabrics you should be considering the next step in your evolution. You’re not exempt. You’re not doing things that are necessarily special or unique. A lot of this stuff can be automated. And should be. This stuff is science, not wizardry. Let a program do the heavy lifting. You can focus on providing the right inputs. I’m not by any stretch saying that this is an easy transition. Nor do I think that a lot of people have the answers when confronted with this kind of change. But I think it is coming. While vendors like NetApp have been promising to make administration and management of their products easy for years, it feels like we’re a lot closer to that operational nirvana than we were a few years ago. Which I’m really pretty happy about, and you should be too. So don’t be special, at least not in an operational way.
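To make that concrete: the LUN carving mentioned above is exactly the kind of thing a short script can take over. This is purely a hypothetical sketch – the endpoint path, field names and sizing convention are made up for illustration, since every array vendor’s REST API differs – but it shows the shape of letting a program do the heavy lifting while you focus on providing the right inputs:

```python
import json

def build_lun_request(name: str, size_gb: int, pool: str) -> dict:
    """Build a provisioning request for a hypothetical array REST API.
    Validates the inputs up front so a typo can't provision a 0GB (or
    500TB) LUN -- the sort of sanity check a human carving LUNs by hand
    has to remember every single time."""
    if size_gb <= 0 or size_gb > 16384:
        raise ValueError(f"suspicious LUN size: {size_gb}GB")
    if not name.isidentifier():
        raise ValueError(f"invalid LUN name: {name!r}")
    return {
        "method": "POST",
        "path": "/api/storage/luns",          # hypothetical endpoint
        "body": json.dumps({
            "name": name,
            "size_bytes": size_gb * 1024**3,  # assume the API wants bytes
            "pool": pool,
        }),
    }

# Dry-run: build the request without sending it anywhere
req = build_lun_request("sql01_data", 500, "pool0")
```

In practice you’d send this with the vendor’s SDK or an HTTP client, and wrap the whole thing in whatever change control your shop requires – the point is that the repeatable, error-prone part becomes code.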

[image courtesy of Stephen Foskett]

Storage Field Day 13 – (Fairly) Full Disclosure

Disclaimer: I recently attended Storage Field Day 13.  My flights, accommodation and other expenses were paid for by Tech Field Day and Pure Storage. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

 

Here are my notes on gifts, etc., that I received as a delegate at Storage Field Day 13 and Storage Field Day Exclusive at Pure Accelerate 2017. I’d like to point out that I’m not trying to play companies off against each other. I don’t have feelings one way or another about receiving gifts at these events (although I generally prefer small things I can fit in my suitcase). Rather, I’m just trying to make it clear what I received during this event to ensure that we’re all on the same page as far as what I’m being influenced by. Some presenters didn’t provide any gifts as part of their session – which is totally fine. I’m going to do this in chronological order, as that was the easiest way for me to take notes during the week. While every delegate’s situation is different, I’d also like to clarify that I took 5 days of holiday time to be at this event.

 

Saturday

My employer paid for my taxi to BNE airport. I had a chicken burger at SYD airport. It was okay. I flew Qantas economy class to SFO. The flights were paid for by Pure Storage. Plane food was consumed on the flight. It was a generally good experience, lack of sleep notwithstanding.

 

Sunday

When I picked up my conference badge I was given a Pure Storage and Rubrik branded backpack and a Google Home device.

 

Monday

On Monday morning I had breakfast at the hotel buffet courtesy of Pure.

I had lunch in the analyst and influencer area at Pure//Accelerate. This was grilled chicken breast skewers, various salads, a chocolate chip cookie, and bottled water. We then went to a media event at Anchor Distillery where I had 3 Anchor Steam Beers and a cheese wurst. It was also nice to get to spend some time with Vaughn.

Pure also gave us a Pure branded, stainless steel “muddler”, a book about cocktails and a pen. I’m including a picture of the muddler here. You can make your own judgement on its application.

[Image stolen from Max Mortillaro]

Dinner was at Osha Thai Restaurant courtesy of Tech Field Day. I had some crispy wontons, some kind of duck roll entree and the Thai fried rice. I washed this down with 2 Singha beers. Stephen and I left dinner slightly early to catch the last 6 minutes of Game 5 of the NBA Finals. Go Dubs!

 

Tuesday

For breakfast I had organic yoghurt, fresh fruit, pastry, orange juice and a cappuccino from the coffee cart at Pure//Accelerate. At the keynote there were Toshiba branded fidget spinners on every seat.

For lunch I had some roast chicken and garden salad, a tiny cupcake and a bottle of water. Pure gave me an 8GB USB stick with presentations from the event on it.

We then took a car to SFO. This was covered by Tech Field Day. We travelled economy to Denver courtesy of Tech Field Day. One of the key benefits of travelling with Stephen (besides his good company and extensive knowledge of all things trivial) is that he can get you emergency row seats on United flights. Upon arrival at Denver International we took a car to our hotel. This (and accommodation at the Denver Tech Center Marriott) was covered by Tech Field Day as well.

 

Wednesday

We had breakfast at the hotel. This was scrambled eggs, sausages, fruit, and coffee. It was nice.

Lunch was also at the hotel. I had a bacon and cheddar cheese burger, fries, water, and apple pie and ice cream. I also got a Cleveland Cavaliers t-shirt in the delegate gift exchange. Go Cavs?

At the NetApp office I had 2 Sunnyvale Double IPA beers. I also received a Disruptosaurus sticker from Andy Banta. We then visited West Flanders Brewery where I had 3 Hoffmeister pilsner beers, and some fries, wings and other appetisers. Dinner was chicken, mac and cheese and salad.

 

Thursday

We had an early start and Tech Field Day very kindly organised for us to pick up Starbucks coffee at the hotel before hitting the road.

We had a very quick breakfast at the SNIA office, consisting of a steak, egg and cheese thing on a bagel, coffee and water.

Dell EMC gave me 2x “Unity” 8GB USB drives with copies of collateral and the Unity VSA pre-loaded. I love the USB sticks that look like storage arrays, lack of size notwithstanding.

We had lunch at the SNIA office. This was tacos from “On The Border Mexican Grill and Cantina Catering”, water and some fresh fruit.

SNIA gave each of us a SNIA Swordfish polo shirt. I also consumed a coffee and a cookie during the break between SNIA’s and Primary Data’s presentations.

Primary Data gave us Tetris-shaped post-it notes.

For dinner we went to the Garden of the Gods Club. I had some steak, salmon and salad. I also had 3 New Belgium Fat Tire beers.

 

Friday

For breakfast I had eggs benedict and coffee at the hotel. It was pretty good. And one of the only ways I can think of to get bacon cooked the correct way in America.

X-IO Technologies gave us each a t-shirt, pen and ice cube ball thing, as well as a 4GB USB stick with a copy of the slides. Scott Lowe kindly donated his t-shirt to me. I’ll be giving it away at an upcoming VMUG meeting.

Lunch at X-IO was Thai food from Thai Basil. I then took a car to Denver airport (paid for by Tech Field Day). Here’s a happy snap from Colorado Springs.