Storage Field Day 19 – Wrap-up and Link-o-rama

Disclaimer: I recently attended Storage Field Day 19.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

This is a quick post to say thanks once again to Stephen and Ben, and the presenters at Storage Field Day 19. I had a super fun and educational time. For easy reference, here’s a list of the posts I did covering the event (they may not match the order of the presentations).

Storage Field Day – I’ll Be at Storage Field Day 19

Storage Field Day 19 – (Fairly) Full Disclosure

Tiger Technology Is Bridging The Gap

Western Digital, Composable Infrastructure, Hyperscalers, And You

Infrascale Protects Your Infrastructure At Scale

MinIO – Not Your Father’s Object Storage Platform

Dell EMC Isilon – Cloudy With A Chance Of Scale Out

NetApp And The StorageGRID Evolution

Komprise – Non-Disruptive Data Management

Stellus Is Doing Something With All That Machine Data

Dell EMC PowerOne – Not V(x)block 2.0

WekaIO And A Fresh Approach

Dell EMC, DevOps, And The World Of Infrastructure Automation

Also, here’s a number of links to posts by my fellow delegates (in no particular order). They’re all very smart people, and you should check out their stuff, particularly if you haven’t before. I’ll attempt to keep this updated as more posts are published. But if it gets stale, the Storage Field Day 19 landing page will have updated links.

 

Becky Elliott (@BeckyLElliott)

SFD19: No Komprise on Knowing Thy Data

SFD19: DellEMC Does DevOps

 

Chin-Fah Heoh (@StorageGaga)

Hadoop is truly dead – LOTR version

Zoned Technologies With Western Digital

Is General Purpose Object Storage Disenfranchised?

Tiger Bridge extending NTFS to the cloud

Open Source and Open Standards open the Future

Komprise is a Winner

Rebooting Infrascale

DellEMC Project Nautilus Re-imagine Storage for Streams

Paradigm shift of Dev to Storage Ops

StorageGRID gets gritty

Dell EMC Isilon is an Emmy winner!

 

Chris M Evans (@ChrisMEvans)

Storage Field Day 19 – Vendor Previews

Storage Management and DevOps – Architecting IT

Stellus delivers scale-out storage with NVMe & KV tech – Architecting IT

Can Infrascale Compete in the Enterprise Backup Market?

 

Ray Lucchesi (@RayLucchesi)

097: GreyBeards talk open source S3 object store with AB Periasamy, CEO MinIO

Gaming is driving storage innovation at WDC

 

Enrico Signoretti (@ESignoretti)

Storage Field Day 19 RoundUp

Tiers, Tiers, and More Storage Tiers

The Hard Disk is Dead! (But Only in Your Datacenter)

Dell EMC PowerOne is Next-Gen Converged Infrastructure

Voices in Data Storage – Episode 35: A Conversation with Krishna Subramanian of Komprise

 

Gina Rosenthal (@GMinks)

Storage Field Day 19: Getting Back to My Roots

Is storage still relevant?

Tiger Technology Brings the Cloud to You

Taming Unstructured Data with Dell EMC Isilon

Project Nautilus emerged as Dell’s Streaming Data Platform

 

Joey D’Antoni (@JDAnton)

Storage Field Day 19–Current State of the Storage Industry #SFD19

Storage Field Day 19–Western Digital #SFD19

Storage Field Day 19 MinIO #SFD19

 

Keiran Shelden (@Keiran_Shelden)

California, Show your teeth… Storage Field Day 19

Western Digital Presents at SFD19

 

Ruairi McBride (@McBride_Ruairi)

 

Arjan Timmerman (@ArjanTim)

TECHunplugged at Storage Field Day 19

TECHunplugged VideoCast SFD19 Part 1

Preview Storage Field Day 19 – Day 1

 

Vuong Pham (@Digital_KungFu)

 

[photo courtesy of Stephen Foskett]

WekaIO And A Fresh Approach

Disclaimer: I recently attended Storage Field Day 19.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

WekaIO recently presented at Storage Field Day 19. You can see videos of their presentation here, and download my rough notes from here.

 

More Data And New Architectures

Liran Zvibel (Co-founder and CEO) spent some time talking about the expected explosion in data storage requirements over the next 4 – 5 years, with most of that growth coming in the form of unstructured data. The problem with today’s storage systems, he suggested, is that storage has been broken into “Islands of Compromise” categories – each category has its leader, but each one forces a trade-off. What does that mean exactly? DAS and SAN can’t share data easily, and the performance of a number of NAS and Object architectures isn’t great.

A New Storage Category

WekaIO is positioning itself in a new storage category. One that delivers:

  • The highest performance for any workload
  • Complete data shareability
  • Cloud native, hybrid cloud support
  • Full enterprise features
  • Simple management

Unique Product Differentiation

So what is it that sets WekaIO apart from the rest of the storage industry? Zvibel listed a number of differentiators, including:

  • Only POSIX namespace that scales to exabytes of capacity and trillions of files
  • Only networked file system that is faster than local storage
    • Massively parallel
    • Lowest latency
  • Snap to object
    • Unique blend of All-Flash and Object storage for instant backup to cloud storage (no backup software required)
  • Cloud burst from on-premises to public cloud
    • Fully hybrid cloud enabled with highest performance
  • End-to-end data encryption with no performance degradation
    • Critical for modern workloads and compliance

[image courtesy of Barbara Murphy]

 

Customer Examples

This all sounds great, but where is WekaIO really being used effectively? Barbara Murphy spent some time talking with the delegates about a number of customer examples across the following market verticals.

Life sciences

  • Genomics sequencing and analytics
  • Drug discovery
  • Microscopy

Deep Learning

  • Machine Learning / Artificial Intelligence
  • Real-time analytics
  • IoT

 

Thoughts and Further Reading

I’ve written enthusiastically about WekaIO before. It’s easy to get caught up in some of the hype that seems to go hand in hand with WekaIO presentations. But WekaIO has a lot of data to back up its claims, and it’s taken an interesting approach to solving traditional storage problems in a non-traditional fashion. I like that there’s a strong cloud story there, as well as the potential to leverage the latest hardware advancements to deliver the performance companies need.

The analysts and storage vendors drone on and on about the explosion in data growth over the coming years, but it’s a real problem. Our workload challenges are changing as well, and it seems like a new way of thinking is needed for how we approach some of these challenges. The scale of the data that needs to be crunched doesn’t always mean that DAS is a good option. You’re more likely to see these kinds of challenges show up in the science and technology industries. And WekaIO seems to be well-positioned to meet these challenges, whether it’s in public cloud or on-premises. It strikes me that WekaIO’s focus on performance and resilience, along with a robust software-defined architecture, has it in a good position to tackle the types of workload problems we’re seeing at the edge and in AI / ML focused environments. I’m really looking forward to seeing what comes next for WekaIO.

Dell EMC PowerOne – Not V(x)block 2.0

Disclaimer: I recently attended Storage Field Day 19.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Dell EMC recently presented at Storage Field Day 19. You can see videos of the presentation here, and download my rough notes from here.

 

Not VxBlock 2.0?

Dell EMC describes PowerOne as “all-in-one autonomous infrastructure”. It’s converged infrastructure, meaning your storage, compute, and networking are all built into the rack. It’s a transportation-tested package and fully assembled when it ships. When it arrives, you can plug it in, fire up the API, and be up and going “within a few hours”.

Trey Layton is no stranger to Vblock / VxBlock, and he was very clear with the delegates that PowerOne is not replacing VxBlock. After all, VxBlock lets them sell Dell EMC external storage into Cisco UCS customers.

 

So What Is It Then?

It’s a rack or racks full of gear. All of which is now Dell EMC gear. And it’s highly automated and has some proper management around it too.

[image courtesy of Dell EMC]

So what’s in those racks?

  • PowerMax Storage – World’s “fastest” storage array
  • PowerEdge MX – industry leading compute
  • PowerSwitch – Declarative system fabric
  • PowerOne Controller – API-powered automation engine

PowerMax Storage

  • Zero-touch SAN config
  • Discovery / inventory of storage resources
  • Dynamically create storage volumes for clusters
  • Intelligent load balancing

PowerEdge MX Compute

  • Dynamically provision compute resources into clusters
  • Automated chassis expansion
  • Telemetry aggregation
  • Kinetic infrastructure

System Fabrics

  • Switches are 32Gbps
  • 98% reduction in network configuration steps
  • System fabric visibility and lifecycle management
  • Intent-based automated deployment and provision
  • PowerSwitch open networking

PowerOne Controller

  • Highly automates 1000s of tasks
  • Powered by Kubernetes and Ansible
  • Delivers next-gen autonomous outcomes via robust API capabilities

From a scalability perspective, you can go to 275 nodes in a pod, and you can look after up to 32 pods (I think). The technical specifications are here.
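
Dell EMC didn’t walk through the API itself, so the endpoint and payload below are entirely hypothetical, but they illustrate the declarative, outcome-oriented style of automation the PowerOne Controller is pitching: you describe the cluster you want, and the controller orchestrates the underlying storage, compute, and fabric tasks.

```python
import requests

# Hypothetical PowerOne Controller endpoint and resource names - illustrative
# only, not the actual Dell EMC API.
CONTROLLER = "https://powerone-controller.example.local"
TOKEN = "REPLACE_WITH_API_TOKEN"

# Declare the outcome we want; an autonomous controller is meant to work out
# the thousands of underlying storage, compute, and fabric tasks itself.
cluster_request = {
    "name": "prod-vsphere-01",
    "compute_nodes": 8,
    "storage": {"capacity_tb": 100, "service_level": "diamond"},
}

resp = requests.post(
    f"{CONTROLLER}/api/v1/cluster-resource-groups",  # hypothetical resource
    json=cluster_request,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
print("Requested cluster:", resp.json())
```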

 

Thoughts and Further Reading

Converged infrastructure has always been an interesting architectural choice for the enterprise. When VCE first came into being 10+ years ago via Acadia, delivering consistent infrastructure experiences in the average enterprise was a time-consuming endeavour and not a lot of fun. It was also hard to do well. VCE changed a lot of that with Vblock, but you paid a premium. The reason you paid that premium was that VCE did a pretty decent job of putting together an architecture that was reliable and, more importantly, supportable by the vendor. It wasn’t just the IP behind this that made it successful though, it was the effort put into logistics and testing. And yes, a lot of that was built on the strength of spreadsheets and the blood, sweat and tears of the deployment engineers out in the field.

PowerOne feels like a very different beast in this regard. Dell EMC took us through a demo of the “unboxing” experience, and talked extensively about the lifecycle of the product. They also demonstrated many of the automation features included in the solution that weren’t always there with Vblock. I’ve been responsible for Vblock environments over the years, and a lot of the lifecycle management activities were very thoroughly documented, and extremely manual. PowerOne, on the other hand, doesn’t look like it relies extensively on documentation and spreadsheets to be managed effectively. But maybe that’s just because Trey and the team were able to demonstrate things so effectively.

So why would the average enterprise get tangled up in converged infrastructure nowadays? What with all the kids and their HCI solutions, and the public cloud, and the plethora of easy to consume infrastructure solutions available via competitive consumption models? Well, some enterprises don’t like relying on people within the organisation to deliver solutions for mission critical applications. These enterprises would rather leave that type of outcome in the hands of one trusted vendor. But they might still want that outcome to be hosted on-premises. Think of big financial institutions, and various government agencies looking after very important things. These are the kinds of customers that PowerOne is well suited to.

That doesn’t mean that what Dell EMC is doing with PowerOne isn’t innovative. In fact I think what they’ve managed to do with converged infrastructure is very innovative, within the confines of converged infrastructure. This type of approach isn’t for everyone though. There’ll always be organisations that can do it faster and cheaper themselves, but they may or may not have as much at stake as some of the other guys. I’m curious to see how much uptake this particular solution gets in the market, particularly in environments where HCI and public cloud adoption is on the rise. It strikes me that Dell EMC has turned a corner in terms of system integration too, as the out of the box experience looks really well thought out compared to some of its previous attempts at integration.

Stellus Is Doing Something With All That Machine Data

Disclaimer: I recently attended Storage Field Day 19.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Stellus Technologies recently came out of stealth mode. I had the opportunity to see the company present at Storage Field Day 19 and thought I’d share my thoughts here. You can grab a copy of my notes here.

 

Company Background

Jeff Treuhaft (CEO) spent a little time discussing the company background and its development up to this point in time.

  • Founded in 2016
  • Data Path architecture developed in 2017
  • Data path validations in 2018
  • First customer deployments in 2019
  • Commercial availability in 2020

 

The Problem

What’s the problem Stellus is trying to solve then? There’s been a huge rise in unstructured data (driven in large part by AI / ML workloads) and an exponential increase in the size of data sources that enterprises are working with. There have also been significant increases in performance requirements for unstructured data. This has been driven primarily by:

  • Life sciences;
  • Media and entertainment; and
  • IoT.

The result is that the storage solutions supporting these workloads need to:

  • Offer scalable, consistent performance;
  • Support common global namespaces;
  • Work with variable file sizes;
  • Deliver high throughput;
  • Ensure that there are no parallel access penalties;
  • Easily manage data over time; and
  • Function as a data system of record.

It’s Stellus’s belief that “[c]urrent competitors have built legacy file systems at the time when spinning disk and building private data centres were the focus”.

 

Stellus Data Platform

Bala Ganeshan (CTO and VP of Engineering) walked the delegates through the Stellus Data Platform.

Design Goals

  • Parallelism
  • Scale
  • Throughput
  • Constant performance
  • Decoupling capacity and performance
  • Independently scale performance and capacity on commodity hardware
  • Distributed, share-everything, KV-based data model with a data path ready for new memory technologies
  • Consistently high performance even as system scales

File System as Software

  • Stores unstructured data closest to native format: objects
  • Data Services provided on Stellus objects
  • Stateless – state in Key Value Stores
  • User mode enables
    • On-premises
    • Cloud
    • Hybrid
  • Independent from custom hardware and kernel

Stellus doesn’t currently have deduplication capability built in.

Algorithmic Data Locality and Data Services

  • Enables scale by algorithmically determining location – no cluster-wide maps (see the sketch after this list)
  • Built for resilience to multiple failures – pets vs. cattle
  • Understands topology of persistent stores
  • Architecture maintains versions – enables data services such as snapshots
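
Stellus didn’t share the specifics of its placement algorithm, but the principle of computing an object’s location from its key – rather than consulting a cluster-wide map – is the same idea behind techniques like consistent hashing. A minimal, generic sketch (not Stellus’s implementation):

```python
import bisect
import hashlib

def _hash(key: str) -> int:
    """Stable 256-bit hash of a key, as an integer."""
    return int(hashlib.sha256(key.encode()).hexdigest(), 16)

class ConsistentHashRing:
    """Toy consistent-hash ring: an object's location is computed from its
    key, so no cluster-wide placement map has to be stored or queried."""

    def __init__(self, nodes, vnodes=64):
        self._ring = sorted(
            (_hash(f"{node}#{v}"), node) for node in nodes for v in range(vnodes)
        )

    def locate(self, object_key: str, replicas: int = 2):
        """Return the distinct nodes responsible for object_key."""
        start = bisect.bisect(self._ring, (_hash(object_key), ""))
        owners = []
        for i in range(len(self._ring)):
            node = self._ring[(start + i) % len(self._ring)][1]
            if node not in owners:
                owners.append(node)
            if len(owners) == replicas:
                break
        return owners

ring = ConsistentHashRing(["kv-node-1", "kv-node-2", "kv-node-3", "kv-node-4"])
# Every client computes the same placement without asking a metadata service.
print(ring.locate("genomics/sample-0001.bam"))
```

Because every client computes the same answer, there’s no central lookup table to grow, shard, or lose as the cluster scales.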

Key-Value-over-NVMe Fabrics

  • Decoupled data services and persistence requires transport
  • Architecture maintains native data structure – objects
  • NVMe-over-Fabric protocol enhanced to transport KV commands
  • Transport independent
    • RDMA
    • TCP/IP

Native Key-Value Stores

  • Unstructured data is generally immutable
  • Updates result in new objects (see the sketch after this list)
  • Available in different sizes and performance characteristics
  • Uses application-specific KV stores, such as those for:
    • Immutable data
    • Short-lived updates
    • Metadata
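
As flagged in the list above, the fact that updates produce new objects rather than mutating existing ones is what makes version retention and snapshot-style data services straightforward. A minimal, generic illustration – not Stellus’s actual data path – using content-derived keys:

```python
import hashlib

class ImmutableKVStore:
    """Toy store where writes never overwrite: keys are derived from content,
    so an 'update' always creates a new object and old versions remain."""

    def __init__(self):
        self._objects = {}    # content hash -> bytes
        self._versions = {}   # logical name -> list of content hashes

    def put(self, name: str, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()
        self._objects[key] = data                       # new object, never mutated
        self._versions.setdefault(name, []).append(key)
        return key

    def get(self, name: str, version: int = -1) -> bytes:
        return self._objects[self._versions[name][version]]

store = ImmutableKVStore()
store.put("sensor-42/readings", b"temp=21.5")
store.put("sensor-42/readings", b"temp=22.1")    # "update" = new object
print(store.get("sensor-42/readings"))            # latest version
print(store.get("sensor-42/readings", 0))         # earlier version still addressable
```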

 

Thoughts and Further Reading

Every new company emerging from stealth has a good story to tell. And they all want it to be a memorable one. I think Stellus certainly has a good story to tell in terms of how it’s taking newer technologies to solve more modern storage problems. Not every workload requires massive amounts of scalability at the storage layer. But for those that do, it can be hard to solve that problem with traditional storage architectures. The key-value implementation from Stellus allows it to do some interesting stuff with larger drives, and I can see how this will have appeal as we move towards the use of larger and larger SSDs to store data. Particularly as a large amount of modern storage workloads are leveraging unstructured data.

More and more NVMe-oF solutions are hitting the market now. I think this is a sign that evolving workload requirements are pushing the capabilities of traditional storage solutions. A lot of the data we’re dealing with is coming from machines, not people. It’s not about how I derive value from a spreadsheet. It’s about how I derive value from terabytes of log data from Internet of Things devices. This requires scale – in terms of both capacity and performance. Using key-value over NVMe-oF is an interesting approach to the challenge – one that I’m keen to explore further as Stellus makes its way in the market. In the meantime, check out Chris Evans’s article on Stellus over at Architecting IT.

Komprise – Non-Disruptive Data Management

Disclaimer: I recently attended Storage Field Day 19.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Komprise recently presented at Storage Field Day 19. You can see videos of the presentation here, and download my rough notes from here.

 

What Do You Need From A Data Management Solution?

Komprise took us through the 6 tenets used to develop the solution:

  • Insight into our data
  • Make the insight actionable
  • Don’t get in front of hot data
  • Show us a path to the cloud
  • Scale to manage massive quantities of data
  • Transparent data movement

3 Architectural pillars

  • Dynamic Data Analytics – analyses data so you can make the right decision before buying more storage or backup
  • Transparent Move Technology – moves data with zero interference to apps, users, or hot data
  • Direct Data Access – puts you in control of your data – not your vendor

Archive successfully

  • No disruption
    • Transparency
    • No interference with hot data
  • Save money
  • Without lock-in
  • Extract value

 

Architecture

So what does the Komprise architecture look like? There are a couple of components.

  • The Director is a VM that can be hosted on-premises or in a cloud. This hosts the console, exposes the API, and stores configuration information.
  • The Observer runs on-premises and can run on ESXi, or can be hosted on Linux bare metal. It’s used to discover the storage (and should be hosted in the same DC as said storage).
  • Deep Analytics indexes the files, and the Director can run queries against it. It can also be used to tag the data. Deep Analytics supports multiple Observers (across multiple DCs), giving you a “global metadata lake” and can also deliver automatic performance throttling for scans.

One neat feature is that you can choose to put a second copy somewhere when you’re archiving data. Komprise said that the typical customer starting size is 1PB or more.
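
Komprise didn’t go deep into the implementation, and its Transparent Move Technology does considerably more than this, but the basic trick of relocating a cold file while keeping the original path usable can be sketched as follows (a generic illustration with made-up paths, not Komprise’s actual mechanism):

```python
import shutil
from pathlib import Path

def archive_with_link(source: Path, archive_root: Path) -> Path:
    """Move a cold file to an archive tier and leave a link at the original
    path, so users and applications keep the same view of the file system.
    Purely illustrative - not how Komprise implements transparent moves."""
    archive_root.mkdir(parents=True, exist_ok=True)
    target = archive_root / source.name
    shutil.move(str(source), str(target))   # data now lives on the cheaper tier
    source.symlink_to(target)               # original path still resolves
    return target

# Hypothetical paths for the example.
moved = archive_with_link(
    Path("/shares/finance/budget-2012.xlsx"),
    Path("/mnt/archive-tier/finance"),
)
print(f"Archived to {moved}; the original path still opens the same data.")
```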

 

Thoughts and Further Reading

I’ve previously written enthusiastically about what I’ve seen from Komprise. Data management is a difficult thing to get right at the best of times. I believe the growth in primary, unstructured storage has meant that the average punter / enterprise can’t really rely on file systems and directories to store data in a sensible location. There’s just so much stuff that gets generated daily. And a lot of it is important (well, at least a fair chunk of it is). One of the keys to getting value from the data you generate, though, is the ability to quickly access that data after it’s been generated. Going back to a file in 6 months time to refer to something can be immensely useful. But it’s a hard thing to do if you’ve forgotten about the file, or what was in it. So it’s a nice thing to have a tool that can track this stuff for you in a relatively sane fashion.

Komprise can also guide you down the path when it comes to intelligently accessing and storing your unstructured data. It can help with reducing your primary storage footprint, reducing your infrastructure spend and, hopefully, your operational costs. What’s more exciting, though, is the fact that all of this can be done in a transparent fashion to the end user. Betty in the finance department can keep generating documents that have ridiculous file names, and storing them forever, and Komprise will help you move those spreadsheets to where they’re of most use.

Storage is cheaper than it once was, but we’re also storing insanely large amounts of data. And for much longer than we have previously. Even if my effective $/GB stored is low compared to what it was in the year 2000, my number of GB stored is exponentially higher. Anything I can do to reduce that spend is going to be something that my enterprise is interested in. It seems like Komprise is well-positioned to help me do that. Its biggest customer has close to 100PB of data being looked after by Komprise.

You can download a whitepaper overview of the Komprise architecture here (registration required). For a different perspective on Komprise, check out Becky’s article here. Chin-Fah also shared his thoughts here.

Dell EMC, DevOps, And The World Of Infrastructure Automation

Disclaimer: I recently attended Storage Field Day 19.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Dell EMC recently presented at Storage Field Day 19. You can see videos of the presentation here, and download my rough notes from here.

 

Silos? We Don’t Need No Silos

The data centre is changing, as is the way we manage it. There’s been an observable evolution of the applications we run in the DC and a need for better tools. The traditional approach to managing infrastructure, with siloed teams of storage, network, and compute administrators, is also becoming less common. One of the key parts of this story is the growing need for automation. As operational organisations in charge of infrastructure and applications, we want to:

  • Manage large scale operations across the hybrid cloud;
  • Enable DevOps and CI/CD models with infrastructure as code (operational discipline); and
  • Deliver self service experience.

Automation has certainly gotten easier, and as an industry we’re moving from brute force scripting to assembling pre-built modules.

 

Enablers for Dell EMC Storage (for Programmers)

REST

All of our automation Power Tools use REST

  • Arrays have a REST API
  • REST APIs are versioned APIs
  • Organised by resource for simple navigation

Secure

  • HTTPS, TLS 1.2 or higher
  • Username / password or token based
  • Granular RBAC

With REST, development is accelerated
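
Dell EMC didn’t step through individual calls in the session, so the endpoint and resource paths below are hypothetical, but from a script’s point of view the pattern described above – HTTPS, a versioned REST API, basic or token authentication, resources organised for navigation – looks something like this:

```python
import requests
from requests.auth import HTTPBasicAuth

# Hypothetical array management endpoint and resource paths - illustrative only.
ARRAY = "https://array-mgmt.example.local:8443"

session = requests.Session()
session.auth = HTTPBasicAuth("storage-automation", "REPLACE_ME")
session.verify = "/etc/pki/array-ca.pem"   # HTTPS; TLS 1.2+ is enforced server-side

# Versioned API, organised by resource: list existing volumes, then create one.
volumes = session.get(f"{ARRAY}/api/v1/volumes", timeout=30)
volumes.raise_for_status()

new_volume = session.post(
    f"{ARRAY}/api/v1/volumes",
    json={"name": "sql-data-01", "size_gb": 512},
    timeout=30,
)
new_volume.raise_for_status()
print(new_volume.json())
```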

 

Ansible for Storage?

Ansible is a pretty cool automation engine that’s already in use in a lot of organisations.

Minimal Setup

  • Install from yum or apt-get on a Linux server / VM
  • No agents anywhere

Low bar of entry to automation

  • Near zero programming
  • Simple syntax

 

Dell EMC and vRO for storage

VMware’s vRealize Orchestrator has been around for some time. It has a terrible name, but does deliver on its promise of simple automation for VMware environments.

  • Plugins allow full automation, from storage to VM
  • Easily integrated with other automation tools

The cool thing about the plugin is that you can replace homegrown scripts with a pre-written set of plugins fully supported by Dell EMC.

You can also use vRO to implement automated policy based workflows:

  • Automatic extension of datastores;
  • Configure storage the same way every time; and
  • Tracking of operations in a single place.

vRO plugs in to vRealize Automation as well, giving you self service catalogue capabilities along with support for quotas and roles.

What does the vRO plugin support?

Supported Arrays

  • PowerMax / VMAX All-Flash (Enterprise)
  • Unity (Midrange)
  • XtremIO

Storage Provisioning Operations

  • Adds
  • Moves
  • Changes

Array Level Data Protection Services

  • Snapshots
  • Remote replication

 

Thoughts and Further Reading

DevOps means a lot of things to a lot of people. Which is a bit weird, because some smart folks have written a handbook that lays it all out for us to understand. But the point is that automation is a big part of what makes DevOps work at a functional level. The key to a successful automation plan, though, is that you need to understand what you want to automate, and why you want to automate it. There’s no point automating every process in your organisation if you don’t understand why you do that process in the first place.

Does the presence of a vRO plugin mean that Dell EMC will make it super easy for you to automate daily operations in your storage environment? Potentially. As long as you understand the need for those operations and they’re serving a function in your organisation. I’m waffling, I know, but the point I’m attempting to make is that having a tool bag / shed / whatever is great, and automating daily processes is great, but the most successful operations environments are mature enough to understand not just the how but the why. Taking what you do every day and automating it can be a terrifically time-consuming activity. The important thing to understand is why you do that activity in the first place.

I’m really pleased that Dell EMC has made this level of functionality available to end users of its storage platforms. Storage administration and operations can still be a complicated endeavour, regardless of whether you’re a storage administrator comfortably ensconced in an operational silo, or one of those cool site reliability engineers wearing jeans to work every day and looking after thousands of cloud-native apps. I don’t think it’s the final version of what these tools look like, or what Dell EMC want to deliver in terms of functionality, but it’s definitely a step in the right direction.

NetApp And The StorageGRID Evolution

Disclaimer: I recently attended Storage Field Day 19.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

NetApp recently presented at Storage Field Day 19. You can see videos of the presentation here, and download my rough notes from here.

 

StorageGRID

If you haven’t heard of it before, StorageGRID is NetApp’s object storage platform. It offers a lot of the features you’d expect from an object storage platform. The latest version of the offering, 11.3, was released a little while ago, and includes a number of enhancements, as well as some new hardware models.

Workloads Are Changing

Object storage has been around a bit longer than you might think, and its capabilities and use cases have changed over that time. Some of the newer object workloads don’t just need a scale out bucket to store old archive data. Instead, they want more performance and flexibility.

Higher performance

  • Ingest / Retrieve
  • Delete

Flexibility

  • Support mixed workloads and multiple tenants
  • Granular data protection policies
    • Optimise data placement and retention
    • Adapt to new requirements and regulations

Agility / Simplicity

  • Leverage resources across multiple clouds – Move data to and from public cloud
  • Open standards for data portability
  • Low touch operations

 

New Hardware

SG1000

  • Load Balancer
  • Can run Admin node
Description: Compute Appliance – Gateway Node

Performance: High performance load balancer and optional Admin node function

Key Features:

  • 1U
  • Dual-socket Intel platform
  • 768GB memory
  • Two dual-port 100GbE Mellanox NICs (10/25/40/100GbE)
  • Dual 1GBase-T ports for management
  • Redundant power and cooling
  • Two internal NVMe SSDs

Multi-shelf SG6060

[image courtesy of NetApp]

The SG6060 is mighty dense, offering 2PB in a single node.

SGF6024

[image courtesy of NetApp]

The SGF6024 is an All-Flash Storage Node.

Description: All-Flash 24 SSD Appliance

Performance: High performance, low latency, small object workloads

Key Features:

  • 2U (3U with compute node)
  • 40 x 2.4GHz CPU cores (compute node)
  • 192GB memory (compute node)
  • 4 x 10GbE / 4 x 25GbE NICs
  • 24 SSD drives

Max capacity: 367.2TB raw (with 15.3TB SSDs)

SSD drive support: Non-FDE: 800GB, 3.8TB, 7.6TB, 15.3TB; FIPS: 1.6TB; SED: 3.8TB

 

Architecture

Flexible Deployment Options

  • Appliance-based
  • VMware-based
  • Software only

Storage Nodes

  • Manages metadata
  • Manages storage
    • Disk
    • Cloud

Policy Engine

  • Applies policy at ingest
  • Continual data integrity checks
  • Applies new policy if applicable

Minimum 3 storage nodes required per site

Admin Nodes

Admin / Tenant portal

  • Create tenants
  • Define grid configuration
  • Create ILM policies

Audit

  • Granular audit log of tenant actions

Metrics

  • Collect and store metrics via Prometheus

Load balancer

  • Create HA groups for Storage Nodes and optionally Admin portal

Service Provider Model

Separation between GRID admin and Tenant admin

Grid administration

  • Manages infrastructure
  • Creates data management policies
  • Creates tenant accounts – No data access

Tenant administration

  • Storage User administration
  • Tenant data is isolated by default
  • Use standard S3 IAM and bucket policies (see the sketch after this list)
  • Leverage multi-cloud Platform Services (Cloud mirror, SNS, ElasticSearch)
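
Because tenants are presented with standard S3 IAM and bucket policies, anything that speaks the S3 API can be pointed at a StorageGRID endpoint. A minimal boto3 sketch, assuming a tenant account already exists; the endpoint, credentials, and ARNs are placeholders:

```python
import json
import boto3

# Placeholder endpoint and credentials for a StorageGRID tenant.
s3 = boto3.client(
    "s3",
    endpoint_url="https://storagegrid-lb.example.local:10443",
    aws_access_key_id="TENANT_ACCESS_KEY",
    aws_secret_access_key="TENANT_SECRET_KEY",
)

s3.create_bucket(Bucket="research-archive")
s3.put_object(
    Bucket="research-archive",
    Key="runs/2020-01/results.csv",
    Body=b"sample,value\n1,0.98\n",
)

# Standard S3 bucket policy syntax - here, read-only access for another user
# within the same tenant (the principal ARN is a placeholder).
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::tenant-account:user/analyst"},
        "Action": ["s3:GetObject"],
        "Resource": ["arn:aws:s3:::research-archive/*"],
    }],
}
s3.put_bucket_policy(Bucket="research-archive", Policy=json.dumps(policy))
```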

 

Thoughts and Further Reading

I’ve been a fan of StorageGRID for some time, and not just because I work at a service provider that sells it as a consumable service. NetApp has a good grasp of what’s required to make an object storage platform do what it needs to do to satisfy most requirements, and it also understands what’s required to ensure that the platform delivers on its promise of reliability and durability. I’m a big fan of the flexible deployment models, and the focus on service providers and multi-tenancy is a big plus.

The new hardware introduced in this update helps remove the requirement for a hypervisor to run admin VMs to keep the whole shooting match going. This is particularly appealing if you really just want to run a storage as a service offering and don’t want to mess about with all that pesky compute. Or you might want to use this as a backup repository for one of the many products that can write to it.

NetApp has owned Bycast for around 10 years now, and continues to evolve the StorageGRID platform in terms of resiliency, performance, and capabilities. I’m really quite keen to see what the next 10 years have in store. You can read more about what’s new with StorageGRID 11.3 here.

Dell EMC Isilon – Cloudy With A Chance Of Scale Out

Disclaimer: I recently attended Storage Field Day 19.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Dell EMC recently presented at Storage Field Day 19. You can see videos of the presentation here, and download my rough notes from here.

 

It’s A Scaling Thing

Unbounded Scaling

One of the key features of the Isilon platform has been its scalability. OneFS automatically expands the filesystem across additional nodes. This scalability is impressive, and the platform has the ability to linearly scale both capacity and performance. It supports up to 252 nodes, petabytes of capacity and millions of file operations. My favourite thing about the scalability story, though, is that it’s non-disruptive. Dell EMC says it takes less than 60 seconds to add a node. That assumes you’ve done a bit of pre-work, but it’s a good story to tell. Even better, Isilon supports automated workload rebalancing – so your data is automatically redistributed to take advantage of new nodes when they’re added.

One Filesystem

They call it OneFS for a reason. Clients can read / write from any Isilon node, and client connections are distributed across the cluster. Each file is automatically distributed across the cluster, which means that the larger the cluster, the better the efficiency and performance. OneFS is also natively multi-protocol – clients can read / write the same data over multiple protocols.

Always-on

There are some neat features in terms of resiliency too.

  • The cluster can sustain multiple failures with no impact – no impact for failures of up to 4 nodes or 4 drives in each pool
  • Non-disruptive tech refresh – non-disruptively add, remove or replace nodes in the cluster
  • No dedicated spare nodes or drives – better efficiency as no node or drive is unused

There is support for an ultra dense configuration: 4 nodes in 4U, offering up to 240TB raw per RU.

 

Comprehensive Enterprise Software

  • SmartDedupe and Compression – storage efficiency
  • SmartPools – Automated Tiering
  • CloudPools – Cloud tiering
  • SmartQuotas – Thin provisioning
  • SmartConnect – Connection rebalancing
  • SmartLock – Data integrity
  • SnapshotIQ – Rapid Restore
  • SyncIQ – Disaster Recovery

Three Approaches to Data Reduction

  1. Inline compression and deduplication
  2. Post-process deduplication
  3. Small file packing

Configurable tiering based on time

  • Policy-based tiering at the file level (see the sketch after this list)
  • Transparent to clients / apps
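
As flagged in the list above, the policy itself is simple to reason about: files that haven’t been touched for a defined period become candidates for a cheaper tier, transparently to clients. A generic sketch of that idea (not the actual SmartPools / CloudPools implementation):

```python
import time
from pathlib import Path

# Generic illustration of a time-based tiering policy check - not OneFS code.
COLD_AFTER_DAYS = 180

def select_cold_files(root: Path, cold_after_days: int = COLD_AFTER_DAYS):
    """Yield files that haven't been modified within the policy window."""
    cutoff = time.time() - cold_after_days * 86400
    for path in root.rglob("*"):
        if path.is_file() and path.stat().st_mtime < cutoff:
            yield path

# Hypothetical share path; a real policy engine would move these to a lower
# (or cloud) tier rather than just reporting them.
for cold_file in select_cold_files(Path("/ifs/data/projects")):
    print("candidate for cloud tier:", cold_file)
```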

 

Other Cool Stuff

SmartConnect with NFS Failover

  • High Availability
  • No RTO or RPO

SnapshotIQ

  • Very fast file recovery
  • Low RTO and RPO

SyncIQ via LAN

  • Disk-based backup and business continuity
  • Medium RTO and RPO

SyncIQ via WAN

  • Offsite DR
  • Medium – high RTO and RPO

NDMP Backup

  • Backup to tape
  • FC backup accelerator
  • Higher RTO and RPO

Scalability

Key Features

  • Support for files up to 16TB in size
  • Increase of 4X over previous versions

Benefits

  • Support applications and workloads that typically deal with large files
  • Use Isilon as a destination or temporary staging area for backups and databases

 

Isilon in the Cloud

All this Isilon stuff is good, but what if you want to leverage those features in a more cloud-friendly way? Dell EMC has you covered. There’s a good story with getting data to and from the major public cloud providers (in a limited amount of regions), and there’s also an interesting solution when it comes to running OneFS in the cloud itself.

[image courtesy of Dell EMC]

 

Thoughts and Further Reading

If you’re familiar with Isilon, a lot of what I’ve covered here wouldn’t be news, and would likely be a big part of the reason why you might even be an existing customer. But the OneFS in the public cloud stuff may come as a bit of a surprise. Why would you do it? Why would you pay over the odds to run appliance-like storage services when you could leverage native storage services from these cloud providers? Because the big public cloud providers expect you to have it all together, and run applications that can leverage existing public cloud concepts of availability and resiliency. Unfortunately, that isn’t always the case, and many enterprises find themselves lifting and shifting workloads to public clouds. OneFS gives those customers access to features that may not be available to them using the platform natively. These kinds of solutions can also be interesting in the verticals where Isilon has traditionally proven popular. Media and entertainment workloads, for example, often still rely on particular tools and workflows that aren’t necessarily optimised for public cloud. You might have a render job that you need to get done quickly, and the amount of compute available in the public cloud would make that a snap. So you need storage that integrates nicely with your render workflow. Suddenly these OneFS in X Cloud services are beginning to make sense.

It’s been interesting to watch the evolution of the traditional disk slingers in the last 5 years. I don’t think the public cloud has eaten their lunch by any means, but enterprises continue to change the way they approach the need for core infrastructure services, across all of the verticals. Isilon continues to do what it did in the first place – scale out NAS – very well. But Dell EMC has also realised that it needs to augment its approach in order to keep up with what the hyperscalers are up to. I don’t see on-premises Isilon going away any time soon, but I’m also keen to see how the product portfolio develops over the next few years. You can read some more on OneFS in Google Cloud here.

MinIO – Not Your Father’s Object Storage Platform

Disclaimer: I recently attended Storage Field Day 19.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

MinIO recently presented at Storage Field Day 19. You can see videos of the presentation here, and download my rough notes from here.

 

MinIO – A Sharp Looking 3-Piece Suite

AB Periasamy spoke to the delegates first, describing MinIO as “a high performance, software-defined, distributed object storage server, designed for peta-scale data infrastructure”. It was built from scratch with the private cloud as its target and is comprised of three components:

  • MinIO Server
  • MinIO Client
  • MinIO SDK

He noted that “the private cloud is a very different beast to the public cloud”.

Why Object?

The MinIO founders felt strongly that data would continue to grow, S3 would overtake POSIX, and that the bulk of data would exist outside of AWS. It’s Periasamy’s opinion that the private cloud finally started emerging as a real platform last year.

 

Architecture

A number of guiding principles were adopted by MinIO when designing the platform. MinIO is:

  • Focused on performance. They believe it is the fastest object store in existence;
  • Cloud native. It is the most K8s-friendly solution available for the private cloud;
  • 100% open-source enables an increasingly dominant position in the enterprise;
  • Built for scale using the same philosophy as web scalers; and
  • Designed for simplicity. Simplicity scales – across clients, clouds, and machines.

 

[image courtesy of MinIO]

 

Other Features

Some of the other key features MinIO is known for include:

  • Scalability;
  • Support for erasure coding;
  • Identity and Access Management capability;
  • Encryption; and
  • Lifecycle Management.

MinIO is written in Go and is 100% open source. “The idea of holding customers hostage with a license key – those days are over”.
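
One nice side effect of the three-piece design mentioned earlier is how quickly you can kick the tyres with the SDK. A minimal sketch using the MinIO Python SDK – the endpoint and keys are placeholders for your own deployment:

```python
from minio import Minio

# Point this at your own MinIO deployment; endpoint and keys are placeholders.
client = Minio(
    "minio.example.local:9000",
    access_key="YOUR_ACCESS_KEY",
    secret_key="YOUR_SECRET_KEY",
    secure=True,
)

bucket = "training-data"
if not client.bucket_exists(bucket):
    client.make_bucket(bucket)

# Upload a local file as an object, then list what ended up in the bucket.
client.fput_object(bucket, "datasets/day-001.parquet", "/tmp/day-001.parquet")
for obj in client.list_objects(bucket, recursive=True):
    print(obj.object_name, obj.size)
```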

 

Deployment Use Cases

MinIO delivers usable object storage capability in all of the places you would expect it to.

  • Big Data / Machine Learning environments
  • HDFS replacements
  • High performance data lake / warehouse infrastructure
  • Cloud native applications (replacing file and block)
  • Multi-cloud environments (portability)
  • Endpoint for streaming workloads

 

Thoughts and Further Reading

If you watch the MinIO presentation, or check my notes, you’ll see a lot of slides with some impressive numbers in terms of both performance and market penetration. MinIO is not your standard object storage stack. A number of really quite big customers use it internally to service their object storage requirements. And, because it’s open source, a whole lot of people are really curious about how it all works, and have taken it for a spin at some stage or another. The story here isn’t that MinIO is architecturally a bit different from some other vendors’ storage offerings. Rather, it’s the fact that it’s open source and accessible to every punter who wants to grab it. This is exactly the reason why neckbeards get excited about open source products. Because you can take a core infrastructure function, and build a product that does something extremely useful from it. And you can contribute back to the community.

The big question, though, is how to make money off this kind of operating model. A well-known software company made a pretty decent stab at leveraging open source products as a business model, delivering enhanced support services as a way to keep the cash coming in. This is very much what MinIO is doing as well. It has a number of very big customers willing to pay for an enhanced support experience via a subscription. It’s an interesting idea. Come up with a product that does what it says it will quite well. Make it easy to get hold of. Help big companies adopt it at scale. Then keep them up and running when said open source code becomes a mission critical piece of their business workflow. I want this model to work, I really do. And I have no evidence to say that it won’t. The folks at MinIO were pretty confident about what they could deliver with SUBNET in terms of the return on investment. I’m optimistic that MinIO will be around for a while longer, as the product looks the goods, and the people behind the product have spent some time thinking through what this will look like in the future. I also recommend checking out Chin-Fah’s recent article for another perspective.

Infrascale Protects Your Infrastructure At Scale

Disclaimer: I recently attended Storage Field Day 19.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Infrascale recently presented at Storage Field Day 19. You can see videos of the presentation here, and download my rough notes from here.

 

Infrascale?

Russ Reeder (CEO) introduced the delegates to Infrascale. If you’ve not heard of Infrascale before, it’s a service provider and vendor focused primarily on backup and disaster recovery services. It has around 150 employees and operates in 10 cities in 5 countries. Infrascale currently services around 60000 customers / 250000 VMs and endpoints. Reeder said Infrascale as a company is “[p]assionate about its customers’ happiness and success”.

 

Product Portfolio

There are four different products in the Infrascale portfolio.

Infrascale Cloud Backup (ICB)

  • Backup directly to the cloud
  • Recover data in seconds
  • Optimised for endpoints and branch office servers
  • Ransomware detection & remediation

Infrascale Cloud Application Backup (ICAB)

  • Defy cloud applications’ limited retention policies
  • Backup O365, SharePoint and OneDrive, G-Suite, Salesforce.com, box.com, and more
  • Recover individual mail items or mailboxes

Infrascale Disaster Recovery – Local (IDR-LOCAL)

  • Backup systems to an on-premises appliance
  • Run system replicas (locally) in minutes
  • Restore from on-premises appliance or the cloud
  • Archive / DR data to disk

Infrascale Disaster Recovery – Cloud (IDR-CLOUD)

  • Backup systems to an on-premises appliance and to a bootable cloud appliance
  • Run system replicas in minutes (locally or boot in the cloud)
  • Optimised for mission-critical physical and virtual servers

Support for Almost Everything

Infrascale offers support for almost everything, including VMware, Hyper-V, bare metal, endpoints, and public cloud workloads.

Other Features

Speedy DR locally or to the Cloud

  • IDR is very fast – boot ready in minutes
  • IDR enables recovery locally or in the cloud

Backup Target Optionality; Vigilant Data Security

  • ICB allows for backup targets “anywhere”
  • ICB detects ransomware and mitigates impact

Single View

The Infrascale dashboard does a pretty decent job of providing all of the information you might need about the service in a single view.

[image courtesy of Infrascale]

Appliances

There are a variety of appliance options available, as well as virtual editions of the appliance that you can use.

[image courtesy of Infrascale]

 

Thoughts and Further Reading

Regular readers of this blog would know that I’m pretty interested in data protection as a topic. I’m sad to say that I hadn’t heard of Infrascale prior to this presentation, but I’m glad I have now. There are a lot of service providers out there offering some level of data protection and disaster recovery as a service. These services offer varying levels of protection, features, and commercial benefits. Infrascale distinguishes itself by offering its own hardware platform as a core part of the solution, rather than building on one of the major data protection vendors.

In my day job I work a lot with product development for these types of solutions and, to be honest, the idea of developing a hardware data protection appliance is not something that appeals. As a lot of failed hardware vendors will tell you, it’s one thing to have a great idea, and quite another to execute successfully on that idea. But Infrascale has done the hard work on engineering the solution, and it seems to offer all of the features the average punter looks for in a DPaaS and DRaaS offering. I’m also a big fan of the fact that it offers support for endpoint protection, as I think this is a segment that is historically under-represented in the data protection space. It has a good number of customers, primarily in the SME range, and is continuing to add services to its product portfolio.

Disaster recovery and data protection are things that aren’t always done very well by small to medium enterprises. Unfortunately, these types of businesses tend to have the most to lose when something goes wrong with their critical business data (whether via operator error, ransomware, or actual disaster). Something like Infrascale’s offering is a great way to take away a lot of the complexity traditionally associated with protecting that important data. I’m looking forward to hearing more about Infrascale in the future.