InfiniteIO And Your Data – Making Meta Better

InfiniteIO recently announced its new Application Accelerator. I had the opportunity to speak about the news with Liem Nguyen (VP of Marketing) and Kris Meier (VP of Product Management) from InfiniteIO and thought I’d share some thoughts here.

 

Metadata Is Good, And Bad

When you think about file metadata you might think about photos and the information they store that tells you about where the photo was taken, when it was taken, and the kind of camera used. Or you might think of an audio file and the metadata that it contains, such as the artist name, year of release, track number, and so on. Metadata is a really useful thing that tells us an awful lot about data we’re storing. But things like simple file read operations make use of a lot of metadata just to open the file:

  • During the typical file read, 7 out of 8 operations are metadata requests, which significantly increases latency (see the sketch after this list); and
  • Up to 90% of all requests going to NAS systems are for metadata.
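
To make that first point a little more concrete, here’s a rough Python sketch of the call sequence an application might run through just to read one small file. It’s purely illustrative (it isn’t InfiniteIO’s methodology, and the exact calls vary by application and filesystem), but it shows how most of the work is metadata, with only a single call actually moving data.

```python
import os
import tempfile

# Create a small file to read back.
path = os.path.join(tempfile.mkdtemp(), "example.txt")
with open(path, "w") as f:
    f.write("hello metadata\n")

metadata_ops = 0
data_ops = 0

# A fairly typical sequence an application (or the C library underneath it)
# runs through before and after actually reading any data.
os.stat(path)                    # does the file exist, and how big is it?  (metadata)
metadata_ops += 1
os.access(path, os.R_OK)         # are we allowed to read it?               (metadata)
metadata_ops += 1
fd = os.open(path, os.O_RDONLY)  # path lookup and open                     (metadata)
metadata_ops += 1
os.fstat(fd)                     # attributes of the open file              (metadata)
metadata_ops += 1
payload = os.read(fd, 4096)      # the only call that moves file data       (data)
data_ops += 1
os.close(fd)                     # release the handle                       (metadata)
metadata_ops += 1

print(f"metadata ops: {metadata_ops}, data ops: {data_ops}")
# On a NAS mount, each of those metadata calls typically becomes a round
# trip to the filer, which is where the latency adds up.
```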

[image courtesy of InfiniteIO]

 

Fire Up The Metadata Engine

Imagine how much faster storage would be if it only had to service 10% of the requests it does today. The Application Accelerator helps with this by:

  • Separating metadata request processing from file I/O
  • Responding directly to metadata requests at the speed of DRAM – much faster than a file system

[image courtesy of InfiniteIO]

The cool thing is that it’s a simple deployment – it’s installed like a network switch and requires no changes to workflows.

 

Thoughts and Further Reading

Metadata is a key part of information management. It provides data with a lot of extra information that makes that data more useful to applications that consume it and to the end users of those applications. But this metadata has a cost associated with it. You don’t think about the amount of activity that happens with simple file operations, but there is a lot going on. It gets worse when you look at activities like AI training and software build operations. The point of a solution like the Application Accelerator is that, according to InfiniteIO, your primary storage devices could be performing at another level if another device was doing the heavy lifting when it came to metadata operations.

Sure, it’s another box in the data centre, but the key to the Application Accelerator’s success is the software that sits on the platform. When I saw the name my initial reaction was that filesystem activities aren’t applications. But they really are, and more and more applications are leveraging data on those filesystems. If you could reduce the load on those filesystems to the extent that InfiniteIO suggest then the Application Accelerator becomes a critical piece of the puzzle.

You might not care about increasing the performance of your applications when accessing filesystem data. And that’s perfectly fine. But if you’re using a lot of applications that need high performance access to data, or your primary devices are struggling under the weight of your workload, then something like the Application Accelerator might be just what you need. For another view, Chris Mellor provided some typically comprehensive coverage here.

Random Short Take #26

Welcome to my semi-regular, random news post in a short format. This is #26. I was going to start naming them after my favourite basketball players. This one could be the Korver edition, for example. I don’t think that’ll last though. We’ll see. I’ll stop rambling now.

Excelero And The NVEdge

It’s been a little while since I last wrote about Excelero. I recently had the opportunity to catch up with Josh Goldenhar and Tom Leyden and thought I’d share some of my thoughts here.

 

NVMe Performance Good, But Challenging

NVMe has really delivered storage performance improvements in recent times.

All The Kids Are Doing It

Great performance:

  • Up to 1.2M IOPS, 6GB/s per drive
  • Ultra-low latency (20μs)

Game changer for data-intensive workloads:

  • Mission-Critical Databases
  • Analytical Processing
  • AI and Machine Learning

But It’s Not Always What You’d Expect

IOPS and Bandwidth Utilisation

  • Applications struggle to use local NVMe performance beyond 3-4 drives
  • Stranded IOPS and / or bandwidth = poor ROI

Sharing is the Logical Answer, with local latency

  • Physical disaggregation is often operationally desirable
  • 24 Drive servers are common and readily available

Data Protection Desired

  • NVMe performs, but by itself offers no data protection
  • Local data protection does not protect against server failures

Some NVMe-over-Fabrics solutions offer controller-based data protection, but they limit IOPS and bandwidth and sacrifice latency.

 

Scale Up Or Out?

NVMesh – Scale-out design: data centre scale

  • Disaggregated & converged architecture
  • No CPU overhead: no noisy neighbours
  • Lowest latency: 5μs

NVEdge – Scale-up design: rack scale

  • Disaggregated architecture
  • Full bandwidth even at 4K IO
  • Client-less architecture with NVMe-oF initiators
  • Enterprise-ready: RAID 1/0, High Availability with fast failover, Thin Provisioning, CRC

 

Flexible Deployment Models

There are a few different ways you can deploy Excelero.

Converged – Local NVMe drives in Application Servers

  • Single, unified storage pool
  • NVMesh initiator and client on all nodes
  • NVMesh bypasses server CPU
  • Various protection levels
  • No dedicated storage servers needed
  • Linearly scalable
  • Highest aggregate bandwidth

Top-of-Rack Flash

  • Single, unified storage pool
  • NVMesh Target runs on dedicated storage nodes
  • NVMesh Client runs on application servers
  • Applications get performance of local NVMe storage
  • Various Protection Levels
  • Linearly scalable

Data Protection

There are also a number of options when it comes to data resiliency.

[image courtesy of Excelero]

Networking Options

You can choose either TCP/IP or RDMA. TCP/IP incurs a latency hit, but it works with any NIC (and your existing infrastructure). RDMA has super low latency, but is only available on a limited subset of NICs.

 

NVEdge Then?

Excelero described NVEdge as “block storage software for building NVMe Flash Arrays for demanding workflows such as AI, ML and databases in the Cloud and at the Edge”.

Scale-up architecture

  • High NVMe AFA performance, leveraging NVMe-oF
  • Full bandwidth performance even at 4K block size

High availability, supporting:

  • Dual-port NVMe drives
  • Dual controllers (with fast failover, less than 100ms)
  • Active/active controller operation and active/passive logical volume access

Data services include:

  • RAID 1/0 data protection
  • Thin Provisioning: thousands of striped volumes of up to 1PB each
  • Enterprise-grade block checksums (CRC 16/32/64) – see the sketch after this list
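
Block checksums are conceptually simple: compute a CRC when a block is written, store it alongside the block, and verify it on every read so silent corruption is caught before it reaches the application. Here’s a minimal Python sketch of that idea – it’s only an illustration of the general technique, not how NVEdge implements its CRC 16/32/64 checksums.

```python
import os
import zlib

BLOCK_SIZE = 4096

def write_block(data: bytes) -> tuple:
    """Store a block along with its CRC32 checksum."""
    return data, zlib.crc32(data)

def read_block(data: bytes, stored_crc: int) -> bytes:
    """Verify the checksum before handing the block back."""
    if zlib.crc32(data) != stored_crc:
        raise IOError("block checksum mismatch: data corrupted in flight or at rest")
    return data

block, crc = write_block(os.urandom(BLOCK_SIZE))
assert read_block(block, crc) == block      # a clean read passes verification

corrupted = bytearray(block)
corrupted[0] ^= 0xFF                        # flip some bits to simulate corruption
try:
    read_block(bytes(corrupted), crc)
except IOError as err:
    print(err)                              # the corruption is caught on read
```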

Hardware Compatibility?

Supported Platforms

  • x86-based systems for higher aggregate performance
  • SmartNIC-based architectures for lower power & cost

HW Requirements

  • Each controller has PCIe connectivity to all drives
  • Controllers can communicate over a network
  • Controllers communicate over both the network and drive pairs to identify connectivity (failure) issues

Supported Networking

  • RDMA (InfiniBand or Ethernet) or TCP/IP networking

 

Thoughts and Further Reading

NVMe has been a good news story for folks struggling with the limitations of the SAS protocol. I’ve waxed lyrical in the past about how impressed I was with Excelero’s offering. Not every workload is necessarily suited to NVMesh though, and NVEdge is an interesting approach to solving that problem. Where NVMesh provides a tonne of flexibility when it comes to deployment options and the hardware used, NVEdge doubles down on availability and performance for different workloads.

NVMe isn’t a handful of magic beans that will instantly transform your storage workloads. You need to be able to feed it to really get value from it, and you need to be able to protect it too. It comes down to understanding what it is you’re trying to achieve with your applications, rather than just splashing cash on the latest storage protocol in the hope that it will make your business more money.

At this point I’d make some comment about data being the new oil, but I don’t really have enough background in the resources sector to be able to carry that analogy much further than that. Instead I’ll say this: data (in all of its various incarnations) is likely very important to your business. It might be something relatively straightforward like seismic data, financial results, or policy documents, or it might be the value that you can extract from that data by having fast access to a lot of it. Whatever you’re doing with it, you’re likely investing in hardware and software that helps you get to that value. Excelero appears to have focused on ensuring that the ability to access data in a timely fashion isn’t the thing that holds you back from achieving your data value goals.

Datrium Enhances DRaaS – Makes A Cool Thing Cooler

Datrium recently made a few announcements to the market. I had the opportunity to speak with Brian Biles (Chief Product Officer, Co-Founder), Sazzala Reddy (Chief Technology Officer and Co-Founder), and Kristin Brennan (VP of Marketing) about the news and thought I’d cover it here.

 

Datrium DRaaS with VMware Cloud

Before we talk about the new features, let’s quickly revisit the DRaaS for VMware Cloud offering, announced by Datrium in August this year.

[image courtesy of Datrium]

The cool thing about this offering was that, according to Datrium, it “gives customers complete, one-click failover and failback between their on-premises data center and an on-demand SDDC on VMware Cloud on AWS”. There are some real benefits to be had for Datrium customers, including:

  • Highly optimised, and more efficient than some competing solutions;
  • Consistent management for both on-premises and cloud workloads;
  • Eliminates the headaches as enterprises scale;
  • Single-click resilience;
  • Simple recovery from current snapshots or old backup data;
  • Cost-effective failback from the public cloud; and
  • Purely software-defined DRaaS on hyperscale public clouds for reduced deployment risk long term.

But what if you want a little flexibility in terms of where those workloads are recovered? Read on.

Instant RTO

So you’re protecting your workloads in AWS, but what happens when you need to stand up stuff fast in VMC on AWS? This is where Instant RTO can really help. There’s no rehydration or backup “recovery” delay. Datrium tells me you can perform massively parallel VM restarts (hundreds at a time) and you’re ready to go in no time at all. The full RTO varies by run-book plan, but by booting VMs from a live NFS datastore, you know it won’t take long. Failback uses VADP.

[image courtesy of Datrium]

The only cost during normal business operations (when not testing or deploying DR) is the cost of storing ongoing backups. And these are automatically deduplicated, compressed and encrypted. In the event of a disaster, Datrium DRaaS provisions an on-demand SDDC in VMware Cloud on AWS for recovery. All the snapshots in S3 are instantly made executable on a live, cloud-native NFS datastore mounted by ESX hosts in that SDDC, with caching on NVMe flash. Instant RTO is available from Datrium today.

DRaaS Connect

DRaaS Connect extends the benefits of Instant RTO DR to any vSphere environment. DRaaS Connect is available for two different vSphere deployment models:

  • DRaaS Connect for VMware Cloud offers instant RTO disaster recovery from an SDDC in one AWS Availability Zone (AZ) to another;
  • DRaaS Connect for vSphere On Prem integrates with any vSphere physical infrastructure on-premises.

[image courtesy of Datrium]

DRaaS Connect for vSphere On Prem extends Datrium DRaaS to any vSphere on-premises infrastructure. It will be managed by a DRaaS cloud-based control plane to define VM protection groups and their frequency, replication and retention policies. On failback, DRaaS will return only changed blocks to vSphere and the local on-premises infrastructure through DRaaS Connect.

The other cool things to note about DRaaS Connect are that:

  • There’s no Datrium DHCI system required
  • It’s a downloadable VM
  • You can start protecting workloads in minutes

DRaaS Connect will be available in Q1 2020.

 

Thoughts and Further Reading

Datrium announced some research around disaster recovery and ransomware in enterprise data centres in concert with the product announcements. Some of it wasn’t particularly astonishing, with folks keen to leverage pay as you go models for DR, and wanting easier mechanisms for data mobility. What was striking is that one of the main causes of disasters is people, not nature. Years ago I remember we used to plan for disasters that invariably involved some kind of flood, fire, or famine. Nowadays, we need to plan for some script kid pumping some nasty code onto our boxes and trashing critical data.

I’m a fan of companies that focus on disaster recovery, particularly if they make it easy for consumers to access their services. Disasters happen frequently. It’s not a matter of if, just a matter of when. Datrium has acknowledged that not everyone is using their infrastructure, but that doesn’t mean they can’t offer value to customers using VMC on AWS. I’m not 100% sold on Datrium’s vision for “disaggregated HCI” (despite Hugo’s efforts to educate me), but I am a fan of vendors focused on making things easier to consume and operate for customers. Instant RTO and DRaaS Connect are both features that round out the DRaaS for VMware Cloud on AWS offering quite nicely.

I haven’t dived as deep into this as I’d like, but Andre from Datrium has written a comprehensive technical overview that you can read here. Datrium’s product overview is available here, and the product brief is here.

Random Short Take #25

Want some news? In a shorter format? And a little bit random? Here’s a short take you might be able to get behind. Welcome to #25. This one seems to be dominated by things related to Veeam.

  • Adam recently posted a great article on protecting VMConAWS workloads using Veeam. You can read about it here.
  • Speaking of Veeam, Hal has released v2 of his MS Office 365 Backup Analysis Tool. You can use it to work out how much capacity you’ll need to protect your O365 workloads. And you can figure out what your licensing costs will be, as well as a bunch of other cool stuff.
  • And in more Veeam news, the VeeamON Virtual event is coming up soon. It will be run across multiple timezones and should be really interesting. You can find out more about that here.
  • This article by Russ on copyright and what happens when bots go wild made for some fascinating reading.
  • Tech Field Day turns 10 years old this year, and Stephen has been running a series of posts covering some of the history of the event. Sadly I won’t be able to make it to the celebration at Tech Field Day 20, but if you’re in the right timezone it’s worthwhile checking it out.
  • Need to connect to an SMB share on your iPad or iPhone? Check out this article (assuming you’re running iOS 13 or iPadOS 13.1).
  • It grinds my gears when this kind of thing happens. But if the mighty corporations have launched a line of products without thinking it through, we shouldn’t expect them to maintain that line of products. Right?
  • Storage and Hollywood can be a real challenge. This episode of Curtis’s podcast really got into some of the details with Jeff Rochlin.

 

SwiftStack Announces 7

SwiftStack recently announced version 7 of their solution. I had the opportunity to speak to Joe Arnold and Erik Pounds from SwiftStack about the announcement and thought I’d share some thoughts here.

 

Insane Data Requirements

We spoke briefly about just how insane modern data requirements are becoming, in terms of both volume and performance. The example offered up was that of an Advanced Driver-Assistance System (ADAS). These things need a lot of capacity to work, with training data sets starting at 15PB and performance requirements approaching 100GB/s.

  • Autonomy – Level 2+
  • 10 Deep neural networks needed
  • Survey car – 2MP cameras
  • 2PB per year per car
  • 100 NVIDIA DGX-1 servers per car

When your hot data is 15-30PB and growing, it’s a problem.

 

What’s New In 7?

SwiftStack has been working to address those kinds of challenges with version 7.

Ultra-scale Performance Architecture

They’ve managed to get some pretty decent numbers under their belt, delivering over 100GB/s at scale with a platform that’s designed to scale linearly to higher levels. The numbers stack up well against some of their competitors, and have been validated through:

  • Independent testing;
  • Comparing similar hardware and workloads; and
  • Results being posted publicly (with solutions based on Cisco Validated Designs).

 

ProxyFS Edge

ProxyFS Edge takes advantage of SwiftStack’s file services to deliver distributed file services between edge, core, and cloud. The idea is that you can use it for “high-throughput, data-intensive use cases”.

[image courtesy of SwiftStack]

Enabling functionality:

  • Containerised deployment of ProxyFS agent for orchestrated elasticity
  • Clustered filesystem enables scale-out capabilities
  • Caching at the edge, minimising latency for improved application performance
  • Load-balanced, high-throughput API-based communication to the core

 

1space File Connector

But what if you have a bunch of unstructured data sitting in file environments that you want to use with your more modern apps? 1space File Connector brings enterprise file data into the cloud namespace, and “[g]ives modern, cloud-native applications access to existing data without migration”. The thinking is that you can modernise your workflows incrementally, rather than having to deal with the app and the storage all in one go.

[image courtesy of SwiftStack]

Enabling functionality:

  • Containerised deployment of the 1space File Connector for orchestrated elasticity
  • File data is accessible using S3 or Swift object APIs (see the sketch after this list)
  • Scales out and is load balanced for high throughput
  • 1space policies can be applied to file data when migration is desired
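
To give a feel for what “file data accessible via object APIs” looks like from the application side, here’s a hedged sketch using boto3 against a generic S3-compatible endpoint. The endpoint, credentials, bucket and key names are placeholders of my own (not SwiftStack defaults), and I haven’t run this against a 1space deployment – it simply shows the access pattern a cloud-native app would use.

```python
import boto3

# Placeholder endpoint and credentials - substitute your own S3-compatible
# namespace details (these are illustrative values, not SwiftStack defaults).
s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# List "objects" that are actually files living on the existing filer,
# surfaced through the cloud namespace.
response = s3.list_objects_v2(Bucket="engineering-data", Prefix="builds/")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])

# Read one of them exactly as a cloud-native application would read any object.
body = s3.get_object(Bucket="engineering-data", Key="builds/latest/manifest.json")
print(body["Body"].read()[:200])
```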

The SwiftStack AI Architecture

SwiftStack have also developed a comprehensive AI Architecture model, describing it as “the customer-proven stack that enables deep learning at ultra-scale”. You can read more on that here.

Ultra-Scale Performance

  • Shared-nothing distributed architecture
  • Keep GPU compute complexes busy

Elasticity from Edge-to-Core-to-Cloud

  • With 1space, ingest and access data anywhere
  • Eliminate data silos and move beyond one cloud

Data Immutability

  • Data can be retained and referenced indefinitely as it was originally written
  • Enabling traceability, accountability, confidence, and safety throughout the life of a DNN

Optimal TCO

  • Compelling savings compared to public cloud or all-flash arrays

Real-World Confidence

  • Notable AI deployments for autonomous vehicle development

SwiftStack PRO

The final piece is the SwiftStack PRO offering, a support service delivering:

  • 24×7 remote management and monitoring of your SwiftStack production cluster(s);
  • Incorporating operational best-practices learned from 100s of large-scale production clusters;
  • Including advanced monitoring software suite for log aggregation, indexing, and analysis; and
  • Operations integration with your internal team to ensure end-to-end management of your environment.

 

Thoughts And Further Reading

The sheer scale of data enterprises are working with every day is pretty amazing. And data is coming from previously unexpected places as well. The traditional enterprise workloads hosted on NAS or in structured applications are insignificant in size when compared to the PB-scale stuff going on in some environments. So how on earth do we start to derive value from these enormous data sets? I think the key is to understand that data is sometimes going to be in places that we don’t expect, and that we sometimes have to work around that constraint. In this case, SwiftStack have recognised that not all data is going to be sitting in the core, or the cloud, and they’re using some interesting technology to get that data where you need it to get the most value from it.

Getting the data from the edge to somewhere usable (or making it usable at the edge) is one thing, but the ability to use unstructured data sitting in file environments with modern applications is also pretty cool. There’s often reticence associated with making wholesale changes to data sources, and this solution helps to make that transition a little easier. And it gives the punters an opportunity to address data challenges in places that may have been inaccessible in the past.

SwiftStack have good pedigree in delivering modern scale-out storage solutions, and they’ve done a lot of work to ensure that their platform adds value. Worth checking out.

Backblaze Has A (Pod) Birthday, Does Some Cool Stuff With B2

Backblaze has been on my mind a lot lately. And not just because of their recent expansion into Europe. The Storage Pod recently turned ten years old, and I was lucky enough to have the chance to chat with Yev Pusin and Andy Klein about that news and some of the stuff they’re doing with B2, Tiger Technology, and Veeam.

 

10 Years Is A Long Time

The Backblaze Storage Pod (currently version 6) recently turned 10 years old. That’s a long time for something to be around (and successful) in a market like cloud storage. I asked Yev and Andy about where they saw the pod heading, and whether they thought there was room for Flash in the picture. Andy pointed out that, with around 900PB under management, Flash still didn’t look like the most economical medium for this kind of storage task. That said, they have seen the main HDD manufacturers starting to hit a wall in terms of the capacity per drive that they can deliver. Nonetheless, the challenge isn’t just performance, it’s also the fact that people are needing more and more capacity to store their stuff. And it doesn’t look like the manufacturers can produce enough Flash to cope with that increase in requirements at this stage.

Version 7.0

We spoke briefly about what Pod 7.0 would look like, and it’s going to be a “little bit faster”, with the following enhancements planned:

  • Updating the motherboard
  • Upgrading the CPU, and considering a move to an AMD CPU
  • Updating the power supply units, perhaps moving to one unit
  • Upgrading from 10Gbase-T to 10GbE SFP+ optical networking
  • Upgrading the SATA cards
  • Modifying the tool-less lid design

They’re looking to roll this out in 2020 some time.

 

Tiger Style?

So what’s all this about Veeam, Tiger Bridge, and Backblaze B2? Historically, if you’ve been using Veeam from the cheap seats, it’s been difficult to effectively leverage object storage to use as a repository for longer term data storage. Backblaze and Tiger Technology have gotten together to develop an integration that allows you to use B2 storage to copy your Veeam protection data to the Backblaze cloud. There’s a nice overview of the solution that you can read here, and you can read some more comprehensive instructions here.

 

Thoughts and Further Reading

I keep banging on about it, but ten years feels like a long time to be hanging around in tech. I haven’t managed to stay with one employer longer than 7 years (maybe I’m flighty?). Along with the durability of the solution, the fact that Backblaze made the design open source, and inspired a bunch of companies to do something similar, is a great story. It’s stuff like this that I find inspiring. It’s not always about selling black boxes to people. Sometimes it’s good to be a little transparent about what you’re doing, and relying on a great product, competitive pricing, and strong support to keep customers happy. Backblaze have certainly done that on the consumer side of things, and the team assures me that they’re experiencing success with the B2 offering and their business-oriented data protection solution as well.

The Veeam integration is an interesting one. While B2 is an object storage play, it’s not S3-compliant, so they can’t easily leverage a lot of the built-in options delivered by the bigger data protection vendors. What you will see, though, is that they’re super responsive when it comes to making integrations available across things like NAS devices, and stuff like this. If I get some time in the next month, I’ll look at setting this up in the lab and running through the process.

I’m not going to wax lyrical about how Backblaze is democratising data access for everyone, as they’re in business to make money. But they’re certainly delivering a range of products that is enabling a variety of customers to make good use of technology that has potentially been unavailable (in a simple to consume format) previously. And that’s a great thing. I glossed over the news when it was announced last year, but the “Rebel Alliance” formed between Backblaze, Packet and ServerCentral is pretty interesting, particularly if you’re looking for a more cost-effective solution for compute and object storage that isn’t reliant on hyperscalers. I’m looking forward to hearing about what Backblaze come up with in the future, and I recommend checking them out if you haven’t previously. You can read Ken’s take over at Gestalt IT here.

Pure Storage Expands Portfolio, Adds Capacity And Performance

Disclaimer: I recently attended Pure//Accelerate 2019.  My flights, accommodation, and conference pass were paid for by Pure Storage. There is no requirement for me to blog about any of the content presented and I am not compensated by Pure Storage for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Pure Storage announced two additions to its portfolio of products today: FlashArray//C and DirectMemory Cache. I had the opportunity to hear about these two products at the Storage Field Day Exclusive event at Pure//Accelerate 2019 and thought I’d share some thoughts here.

 

DirectMemory Cache

DirectMemory Cache is a high-speed caching system that reduces read latency for high-locality, performance-critical applications.

  • High speed: based on Intel Optane SCM drives
  • Caching system: repeated accesses to “hot data” are sped up automatically – no tiering = no configuration
  • Read latency: only read performance is improved – write latency is unchanged
  • High-locality: only workloads that frequently reuse a data set that fits in the cache will benefit
  • Performance-critical: high-throughput, latency-sensitive workloads

According to Pure, “DirectMemory Cache is the functionality within Purity that provides direct access to data and accelerates performance critical applications”. Note that this is only for read data; write caching is still done via DRAM.
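
If you haven’t spent much time with caching systems, the behaviour being described – repeated reads of hot data served from a faster medium, with no tiering policy to configure – boils down to a read-through cache. Here’s a toy Python sketch of that general pattern (an LRU cache sitting in front of a slow read path). It’s a conceptual illustration only, not Purity’s implementation.

```python
from collections import OrderedDict
import time

class ReadThroughCache:
    """Toy read cache: hot blocks are served from memory, misses fall
    through to the (slow) backend. Purely illustrative."""

    def __init__(self, backend_read, capacity=1024):
        self.backend_read = backend_read
        self.capacity = capacity
        self.cache = OrderedDict()

    def read(self, block_id):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)      # keep hot blocks hot
            return self.cache[block_id]           # fast path (cache hit)
        data = self.backend_read(block_id)        # slow path (cache miss)
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)        # evict the coldest block
        return data

def slow_backend_read(block_id):
    time.sleep(0.001)                             # simulate backend media latency
    return b"x" * 4096

cache = ReadThroughCache(slow_backend_read)
cache.read(42)        # first access pays the backend latency
cache.read(42)        # repeat access is served from the cache
```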

How Can This Help?

Pure has used Pure1 Meta analysis to arrive at the following figures:

  • 80% of arrays can achieve 20% lower latency
  • 40% of arrays can achieve 30-50% lower latency (up to 2x boost)

So there’s some real potential to improve existing workloads via the use of this read cache.

DirectMemory Configurations

Pure Storage DirectMemory Modules plug directly into the FlashArray//X70 and //X90 chassis and are available in the following configurations:

  • 3TB (4x750GB) DirectMemory Modules
  • 6TB (8x750GB) DirectMemory Modules

Top of Rack Architecture

Pure are positioning the “top of rack” architecture as a way to compete with some of the architectures that have jammed a bunch of flash into DAS or into compute to gain increased performance. The idea is that you can:

  • Eliminate the need for data locality;
  • Bring storage and compute closer;
  • Provide storage services that are not possible with DAS;
  • Bring the efficiency of FlashArray to traditional DAS applications; and
  • Offload storage and networking load from application CPUs.

 

FlashArray//C

Typical challenges in Tier 2

Things can be tough in the tier 2 storage world. Pure outlined some of the challenges they were seeking to address by delivering a capacity optimised product.

Management complexity

  • Complexity / management
  • Different platforms and APIs
  • Interoperability challenges

Inconsistent Performance

  • Variable app performance
  • Anchored by legacy disk
  • Undersized / underperforming

Not enterprise class

  • <99.9999% resiliency
  • Disruptive upgrades
  • Not evergreen

The C Stands For Capacity Optimised All-Flash Array

Flash performance at disk economics

  • QLC architecture enables tier 2 applications to benefit from the performance of all-flash (predictable 2-4ms latency), while 5.2PB (effective) in 9U delivers 10x consolidation for racks and racks of disk.

Optimised end-to-end for QLC Flash

  • Deep integration from software to QLC NAND solves QLC wear concerns and delivers market-leading economics. Includes the same evergreen maintenance and wear replacement as every FlashArray

“No Compromise” enterprise experience

  • Built for the same 99.9999%+ availability, Pure1 cloud management, API automation, and AI-driven predictive support of every FlashArray

Flash for every data workflow

  • Policy driven replication, snapshots, and migration between arrays and clouds – now use Flash for application tiering, DR, Test / Dev, Backup, and retention

Configuration Details

Configuration options include:

  • 366TB RAW – 1.3PB effective
  • 878TB RAW – 3.2PB effective
  • 1.39PB RAW – 5.2PB effective (the implied data reduction ratios are worked through below)
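
Working backwards from those raw and effective figures gives a sense of the data reduction ratio Pure is assuming – a little over 3.5:1 in each case. Here’s a quick back-of-the-envelope check (assuming 1PB = 1000TB):

```python
# Raw vs effective capacity as quoted for FlashArray//C (1PB = 1000TB assumed).
configs = [
    ("366TB raw",  366,  1300),   # 1.3PB effective
    ("878TB raw",  878,  3200),   # 3.2PB effective
    ("1.39PB raw", 1390, 5200),   # 5.2PB effective
]

for name, raw_tb, effective_tb in configs:
    print(f"{name}: implied data reduction ~{effective_tb / raw_tb:.1f}:1")

# 366TB raw: implied data reduction ~3.6:1
# 878TB raw: implied data reduction ~3.6:1
# 1.39PB raw: implied data reduction ~3.7:1
```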

Use Cases

  • Policy based VM tiering between //X and //C
  • Multi-cloud data protection and DR – on-premises and multi-site
  • Multi-cloud test / dev – workload consolidation

*File support (NFS / SMB) coming in 2020 (across the entire FlashArray family, not just //C)

 

Thoughts

I’m a fan of companies that expand their portfolio based on customer requests. It’s a good way to make more money, and sometimes it’s simplest to give the people what they want. The market has been in Pure’s ear for some time about delivering some kind of capacity storage solution. I think it was simply a matter of time before the economics and the technology intersected at a point where it made sense for it to happen. If you’re an existing Pure customer, this is a good opportunity to deploy Pure across all of your tiers of storage, and you get the benefit of Pure1 keeping an eye on everything, and your “slow” arrays will still be relatively performance-focused thanks to NVMe throughout the box. Good times in IT aren’t just about speeds and feeds though, so I think this announcement is more important in terms of simplifying the story for existing Pure customers that may be using other vendors to deliver Tier 2 capabilities.

I’m also pretty excited about DirectMemory Cache, if only because it’s clear that Pure has done its homework (i.e. they’ve run the numbers on Pure1 Meta) and realised that they could improve the performance of existing arrays via a reasonably elegant solution. A lot of the cool kids do DAS, because that’s what they’ve been told will yield great performance. And that’s mostly true, but DAS can be a real pain in the rear when you want to move workloads around, or consolidate performance, or do useful things like data services (e.g. replication). Centralised storage arrays have been doing this stuff for years, and it’s about time they were also able to deliver the performance required in order for those companies not to have to compromise.

You can read the press release here, and the Tech Field Day videos can be viewed here.

VMware – VMworld 2019 – HBI2537PU – Cloud Provider CXO Panel with Cohesity, Cloudian and PhoenixNAP

Disclaimer: I recently attended VMworld 2019 – US.  My flights and accommodation were paid for by Digital Sense, and VMware provided me with a free pass to the conference and various bits of swag. There is no requirement for me to blog about any of the content presented and I am not compensated by VMware for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Here are my rough notes from “HBI2537PU – Cloud Provider CXO Panel with Cohesity, Cloudian and PhoenixNAP”, a panel-type presentation with the following people:

You can grab a PDF copy of my notes from here.

Introductions are done.

YR: William, given your breadth of experience, what are some of the emerging trends you’ve been seeing?

WB: Companies are struggling to keep up with the pace of information generation. Understanding the data, storing and retaining it, and protecting it. Multi-cloud adds a lot of complexity. We’ve heard studies that say 22% of data generated is actually usable. It’s just sitting there. Public cloud is still hot, but it’s settling down a little.

YR: William comes from a massive cloud provider. What are you guys using?

WB: We’ve standardised on vCloud Director (vCD) and vSphere. We came from building our own, but it wasn’t providing the value that we hoped it would. Customers want a seamless way to manage multiple cloud resources.

YR: Are you guys familiar with VCPP?

AP: VCPP is the crown jewel of our partner program at VMware. 4000+ providers, 120+ countries, 10+ million VMs, 10000+ DCs. We help you save money, make money (things are services ready). We’re continuing to invest in vCD. Kubernetes, GPUs, etc. Lots of R&D.

YR: William, you mentioned you standardised on the VMware platform. Talk to us about your experience. Why vCD?

WB: It’s been a checkered past for vCD. We were one of the first five on the vCloud Express program in 2010 / 11. We didn’t like vCD in its 1.0 version. We thought we could do this better. And we did. We launched the first on-demand, pay by the hour public cloud for enterprise in 2011. But it didn’t really work out. 2012 / 13 we started to see investments being made in vCD. 5.0 / 5.5 improved. Many people thought vCD was going to die. We now see a modern, flexible portal that can be customised. And we can take our devs and have them customise vCD, rather than build a customised portal. That’s where we can put our time and effort. We’ve always done things differently. Always been doing other things. How do we bring our work in visual cloud into that cloud provider portal with vCD?

YR: You have an extensive career at VMware.

RR: I was one of the first people to take vCD out to the world. But Enterprise wasn’t mature enough. When we focused on SPs, it was the right thing to do. DIY portals need a lot of investment. VMware allows a lot of extensibility now. For us, as Cohesity, we want to be able to plug in to that as well.

WB: At one point we had 45 devs working on a proprietary portal.

YR: We’ve been doing a lot on the extensibility side. What role are services playing in cloud providers?

AP: It takes away the complexities of deploying the stack.

JT: We’re specifically in object. A third of our customers are service providers. You guys know that object is built for scale, easy to manage, cost-effective. 20% of the data gets used. We hear that customers want to improve on that. People are moving away from tape. There’s a tremendous opportunity for services built on storage. Amazon has shown that. Data protection like Cohesity. Big data with Splunk. You can offer an industry standard, but differentiate based on other services.

YR: As we move towards a services-oriented world, William how do you see cloud management services evolving?

WB: It’s not good enough to provide some compute infrastructure any more. You have to do something more. We’re stubbornly focussed on different types of IaaS. We’re not doing generic x86 on top of vSphere. Backup, DR – those are in our wheelhouse. From a platform perspective, more and more customers want some kind of single pane of glass across their data. For some that’s on-premises, for some it’s public, for some it’s SaaS. You have to be able to provide value to the customer, or they will disappear. Object storage, backup with Cohesity. You need to keep pace with data movement. Any cloud, any data, anywhere.

AP: I’ve been at VMware long enough not to drink the Kool-Aid. Our whole cloud provider business is rooted in some humility. vCD can help other people doing better things to integrate. vCD has always been about reducing OPEX. Now we’re hitting the top line. Any cloud management platform today needs to be open and extensible, and not try to do everything.

YR: Is the crowd seeing pressure on pure IaaS?

Commentator: Coming from an SP to enterprise is different. Economics. Are you able to do showback with vCD 9 and vROps?

WB: We’re putting that in the hands of customers. Looking at CloudHealth. There’s a benefit to being in the business management space. You have the opportunity to give customers a better service. That, and more flexible business models. Moving into flexible billing models – gives more freedom to the enterprise customer. Unless you’re the largest of the large – enterprises have difficulty acting as a service provider. Citibank are an exception to this. Honeywell do it too. If you’re Discount Tire – it’s hard. You’re the guy providing the service, and you’re costing them money. There’s animosity – and there’s no choice.

Commentator: Other people have pushed to public because chargeback is more effective than internal showback with private cloud.

WB: IT departments are poorly equipped to offer a breadth of services to their customers.

JT: People are moving workloads around. They want choice and flexibility. VMware with S3 compatible storage. A common underlying layer.

YR: Economics, chargeback. Is VMware (and VCPP) doing enough?

WB: The two guys to my right (RR and JT) have committed to building products that let me do that. I’ve been working on object storage use cases. I was talking to a customer. They’re using our IaaS and connected to Amazon S3. You’ve gone to Amazon. They didn’t know about it though. Experience and cost that can be the same or better. Egress in Amazon S3 is ridiculous. You don’t know what you don’t know. You can take that service and deliver it cost-effectively.

YR: RR talk to us about the evolution of data protection.

RR: Information has grown. Data is fragmented. Information placement is almost unmanageable. Services have now become available in a way that can be audited, secured, managed. At Cohesity, first thing we did was data protection, and I knew the rest was coming. Complexity’s a problem.

YR: JT. We know Cloudian’s a leader in object storage. Where do you see object going?

JT: It’s the underlying storage layer of the cloud. Brings down cost of your storage layer. It’s all about TCO. What’s going to help you build more revenue streams? Cloudian has been around since 2011. New solutions in backup, DR, etc, to help you build new revenue streams. S3 users on Amazon are looking for alternatives. Many of Cloudian’s customers are ex-Amazon customers. What are we doing? vCD integration. Search Cloudian and vCD on YouTube. Continuously working to drive down the cost of managing storage. 1.5PB in a 4RU box in collaboration with Seagate.

WB: Expanding service delivery, specifically around object storage, is important. You can do some really cool stuff – not just backup, it’s M&E, it’s analytics. Very few of our customers are using object just to store files and folders.

YR: We have a lot of providers in the room. JT can you talk more about these key use cases?

JT: It runs the gamut. You can break it down by verticals. M&E companies are offering editing suites via service providers. People are doing that for the legal profession. Accounting – storing financial records. Dental records and health care. The back end is the same thing – compute with S3 storage behind it. Cloudian provides multi-tenanted, scalable performance. Cost is driven down as you get larger.

YR: RR your key use cases?

RR: DRaaS is hot right now. When I was at VMware we did stuff with SRM. DR is hard. It’s so simple now. Now every SP can do it themselves. Use S3 to move data around from the same interface. And it’s very needed too. Everyone should have ubiquitous access to their data. We have that capability. We can now do vulnerability scans on the data we store on the platform. We can tell you if a VM is compromised. You can orchestrate the restoration of an environment – as a service.

YR: WB what are the other services you want us to deliver?

WB: We’re an odd duck. One of our major practices is information security. The idea that we have intelligent access to data residing in our infrastructure. Being able to detect vulnerabilities, taking action, sending an email to the customer, that’s the type of thing that cloud providers have. You might not be doing it yet – but you could.

YR: Security, threat protection. RR – do you see Cohesity as the driver to solve that problem?

RR: Cohesity will provide the platform. Data is insecure because it’s fragmented. Cohesity lets you run applications on the platform. Virus scanners, run books, all kinds of stuff you can offer as a service provider.

YR: William, where does the onus lie, how do you see it fitting together?

WB: The key for us is being open. E.g. Cohesity integration into vCD. If I don’t want to – I don’t have to. Freedom of choice to pick and choose where we want to deliver our own IP to the customer. I don’t have to use Cohesity for everything.

JT: That’s exactly what we’re into. Choice of hardware, management. That’s the point. Standards-based top end.

YR: Security

*They had 2 minutes to go but I ran out of time and had to get to another meeting. Informative session. 4 stars.

Formulus Black Announces Forsa 3.0

Formulus Black recently announced version 3.0 of its Forsa product. I had the opportunity to speak with Mark Iwanowski and Jing Xie about the announcement and wanted to share some thoughts here.

 

So What’s A Forsa Again?

It’s a software solution for running applications in memory without needing to re-tool your applications or hardware. You can present persistent storage (think Intel Optane) or non-persistent memory (think DRAM) as a block device to the host and run your applications on that. Here’s a look at the architecture.

[image courtesy of Formulus Black]

Is This Just a Linux Thing?

No, not entirely. There’s Ubuntu and CentOS support out of the box, and Red Hat support is imminent. If you don’t use those operating systems though, don’t stress. You can also run this using a KVM-based hypervisor. So anything supported by that can be supported by Forsa.

But What If My Memory Fails?

Formulus Black has a technology called “BLINK” which provides the ability to copy your data down to SSDs, or you can fail the data over to another host.

Won’t I Need A Bunch Of RAM?

Formulus Black uses Bit Markers – a memory-efficiency technology (similar to deduplication) – to make the most of the available memory. They call it “amplification” as opposed to deduplication, as it amplifies the available space.
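
Formulus Black hasn’t shared the internals of Bit Markers with me, but the general idea of getting more logical capacity out of physical memory will be familiar to anyone who has looked at deduplication: identical blocks are stored once and referenced many times. Here’s a deliberately naive Python sketch of that general technique (content-hashed pages with reference counting), offered purely to illustrate the concept rather than their implementation.

```python
import hashlib

PAGE_SIZE = 4096

class DedupedMemory:
    """Naive content-addressed page store: duplicate pages are stored once."""

    def __init__(self):
        self.pages = {}        # digest -> page bytes (the physical copy)
        self.refcounts = {}    # digest -> number of logical references
        self.volume = []       # logical page number -> digest

    def write_page(self, data: bytes) -> int:
        digest = hashlib.sha256(data).digest()
        if digest not in self.pages:
            self.pages[digest] = data              # the first copy costs real memory
        self.refcounts[digest] = self.refcounts.get(digest, 0) + 1
        self.volume.append(digest)                 # later copies are just references
        return len(self.volume) - 1                # logical page number

    def read_page(self, logical_page: int) -> bytes:
        return self.pages[self.volume[logical_page]]

mem = DedupedMemory()
zero_page = b"\x00" * PAGE_SIZE
for _ in range(1000):
    mem.write_page(zero_page)                      # 1000 logical pages...

print(f"logical pages: {len(mem.volume)}, physical pages stored: {len(mem.pages)}")
# logical pages: 1000, physical pages stored: 1 -> the "amplification" effect
```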

Is This Going To Cost Me?

A little, but not as much as you’d think (because nothing’s ever free). The software is licensed on a per-socket basis, so if you decide to add memory capacity you’re not up for additional licensing costs.

 

Thoughts and Further Reading

I don’t do as much work with folks requiring in-memory storage solutions as I’d like to, but I do appreciate the requirement for these kinds of solutions. The big appeal here is the lack of requirement to re-tool your applications to work in-memory. All you need is something that runs on Linux or KVM and you’re pretty much good to go. Sure, I’m over-simplifying things a little, but it looks like there’s a good story here in terms of the lack of integration required to get some serious performance improvements.

Formulus Black came out of stealth around 4 and a bit months ago and have already introduced a raft of improvements over version 2.0 of their offering. It’s great to see the speed with which they’ve been able to execute on new features in their offering. I’m curious to see what’s next, as there’s obviously been a great focus on performance and simplicity.

The cool kids are all talking about the benefits of NVMe-based, centralised storage solutions. And they’re right to do this, as most applications will do just fine with these kinds of storage platforms. But there are still going to be minuscule bottlenecks associated with these devices. If you absolutely need things to run screamingly fast, you’ll likely want to run them in-memory. And if that’s the case, Formulus Black’s Forsa solution might be just what you’re looking for. Plus, it’s a pretty cool name for a company, or possibly an aspiring wizard.