Pure Storage Expands Portfolio, Adds Capacity And Performance

Disclaimer: I recently attended Pure//Accelerate 2019.  My flights, accommodation, and conference pass were paid for by Pure Storage. There is no requirement for me to blog about any of the content presented and I am not compensated by Pure Storage for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Pure Storage announced two additions to its portfolio of products today: FlashArray//C and DirectMemory Cache. I had the opportunity to hear about these two products at the Storage Field Day Exclusive event at Pure//Accelerate 2019 and thought I’d share some thoughts here.

 

DirectMemory Cache

DirectMemory Cache is a high-speed caching system that reduces read latency for high-locality, performance-critical applications.

  • High speed: based on Intel Optane SCM drives
  • Caching system: repeated accesses to “hot data” are sped up automatically – no tiering = no configuration
  • Read latency: only read performance is improved – write latency is unchanged
  • High-locality: only workloads that frequently reuse a dataset that fits in the cache will benefit
  • Performance-critical: high-throughput, latency-sensitive workloads

According to Pure, “DirectMemory Cache is the functionality within Purity that provides direct access to data and accelerates performance critical applications”. Note that this applies to read data only; write caching is still done via DRAM.
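
To make the caching behaviour a little more concrete, here’s a minimal sketch of how a read-through cache behaves in principle. This is purely illustrative Python, not Purity code, and the names (ReadCache, backing_read) are mine, not Pure’s.

```python
from collections import OrderedDict

class ReadCache:
    """Illustrative read-through cache: hot reads are served from a fast tier,
    misses fall through to the slower backing store. Writes bypass the cache
    entirely, mirroring the 'reads only' behaviour described above."""

    def __init__(self, backing_read, capacity_blocks):
        self._backing_read = backing_read      # e.g. a read from the flash layer
        self._capacity = capacity_blocks
        self._cache = OrderedDict()            # block_id -> data, in LRU order

    def read(self, block_id):
        if block_id in self._cache:
            self._cache.move_to_end(block_id)  # cache hit: refresh recency
            return self._cache[block_id]
        data = self._backing_read(block_id)    # cache miss: go to backing flash
        self._cache[block_id] = data
        if len(self._cache) > self._capacity:
            self._cache.popitem(last=False)    # evict the least recently used block
        return data
```

The sketch also illustrates the “high-locality” caveat above: the cache only helps when the working set of repeatedly read blocks fits inside the cache capacity; a workload that streams through data larger than the cache sees little benefit.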

How Can This Help?

Pure has used Pure1 Meta analysis to arrive at the following figures:

  • 80% of arrays can achieve 20% lower latency
  • 40% of arrays can achieve 30-50% lower latency (up to 2x boost)

So there’s some real potential to improve existing workloads via the use of this read cache.

DirectMemory Configurations

Pure Storage DirectMemory Modules plug directly into the FlashArray//X70 and //X90 chassis and are available in the following configurations:

  • 3TB (4x750GB) DirectMemory Modules
  • 6TB (8x750GB) DirectMemory Modules

Top of Rack Architecture

Pure are positioning the “top of rack” architecture as a way to compete with some of the architectures that have jammed a bunch of flash into DAS or into compute to gain increased performance. The idea is that you can:

  • Eliminate the need for data locality;
  • Bring storage and compute closer;
  • Provide storage services that are not possible with DAS;
  • Bring the efficiency of FlashArray to traditional DAS applications; and
  • Offload storage and networking load from application CPUs.

 

FlashArray//C

Typical challenges in Tier 2

Things can be tough in the tier 2 storage world. Pure outlined some of the challenges they were seeking to address by delivering a capacity optimised product.

Management complexity

  • Complexity / management
  • Different platforms and APIs
  • Interoperability challenges

Inconsistent Performance

  • Variable app performance
  • Anchored by legacy disk
  • Undersized / underperforming

Not enterprise class

  • <99.9999% resiliency
  • Disruptive upgrades
  • Not evergreen

The C Stands For Capacity Optimised All-Flash Array

Flash performance at disk economics

  • QLC architecture enables tier 2 applications to benefit from the performance of all-flash: predictable 2-4ms latency, and 5.2PB (effective) in 9U, delivering 10x consolidation of racks and racks of disk.

Optimised end-to-end for QLC Flash

  • Deep integration from software to QLC NAND solves QLC wear concerns and delivers market-leading economics. Includes the same evergreen maintenance and wear replacement as every FlashArray

“No Compromise” enterprise experience

  • Built for the same 99.9999%+ availability, Pure1 cloud management, API automation, and AI-driven predictive support of every FlashArray

Flash for every data workflow

  • Policy driven replication, snapshots, and migration between arrays and clouds – now use Flash for application tiering, DR, Test / Dev, Backup, and retention

Configuration Details

Configuration options include the following (a quick check of the implied data-reduction ratios follows the list):

  • 366TB RAW – 1.3PB effective
  • 878TB RAW – 3.2PB effective
  • 1.39PB RAW – 5.2PB effective
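
As a quick sanity check on those numbers, the raw versus effective figures imply a data-reduction assumption of roughly 3.5-3.75:1 across the three configurations. This is my own back-of-the-envelope arithmetic, not a figure from Pure:

```python
# Implied effective-to-raw ratios for the published FlashArray//C configurations.
# Capacities are expressed in TB (1.3PB = 1300TB, and so on).
configs = [
    ("366TB raw",  366,  1300),
    ("878TB raw",  878,  3200),
    ("1.39PB raw", 1390, 5200),
]

for name, raw_tb, effective_tb in configs:
    print(f"{name}: ~{effective_tb / raw_tb:.1f}:1 effective-to-raw ratio")

# Prints roughly 3.6:1 for each configuration, i.e. the effective capacities
# assume typical FlashArray data reduction on the data being stored.
```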

Use Cases

  • Policy based VM tiering between //X and //C
  • Multi-cloud data protection and DR – on-premises and multi-site
  • Multi-cloud test / dev – workload consolidation

*File support (NFS / SMB) coming in 2020 (across the entire FlashArray family, not just //C)

 

Thoughts

I’m a fan of companies that expand their portfolio based on customer requests. It’s a good way to make more money, and sometimes it’s simplest to give the people what they want. The market has been in Pure’s ear for some time about delivering some kind of capacity storage solution, and I think it was simply a matter of time before the economics and the technology intersected at a point where it made sense for it to happen. If you’re an existing Pure customer, this is a good opportunity to deploy Pure across all of your tiers of storage: you get the benefit of Pure1 keeping an eye on everything, and your “slow” arrays will still be relatively performance-focused thanks to NVMe throughout the box. IT isn’t just about speeds and feeds though, so I think this announcement is more important in terms of simplifying the story for existing Pure customers that may be using other vendors to deliver Tier 2 capabilities.

I’m also pretty excited about DirectMemory Cache, if only because it’s clear that Pure has done its homework (i.e. they’ve run the numbers on Pure1 Meta) and realised that they could improve the performance of existing arrays via a reasonably elegant solution. A lot of the cool kids do DAS, because that’s what they’ve been told will yield great performance. And that’s mostly true, but DAS can be a real pain in the rear when you want to move workloads around, or consolidate performance, or do useful things like data services (e.g. replication). Centralised storage arrays have been doing this stuff for years, and it’s about time they were also able to deliver the performance required in order for those companies not to have to compromise.

You can read the press release here, and the Tech Field Day videos can be viewed here.

Independent Research Firm Cites Druva As A Strong Performer In Latest Data Resiliency Solutions Wave

Disclaimer: This is a sponsored post and you’ll probably see the content elsewhere on the Internet. Druva provided no editorial input and the words and opinions in this post are my own.

Druva was among the select companies that Forrester invited to participate in their latest Data Resiliency Solutions Wave, for Q3 2019. In its debut for this report, Druva was cited as a Strong Performer in Data Resilience. I recently had an opportunity to speak to W. Curtis Preston, Druva’s Chief Technologist, about the report, and thought I’d share some of my thoughts here.

 

Let’s Get SaaS-y

Druva was the only company listed in the Forrester Wave™ Data Resiliency Solutions whose products are only offered as a service. One of the great things about Software-as-a-Service (SaaS) is that the vendor takes care of everything for you. Other models of solution delivery require hardware, software (or both) to be installed on-premises or close to the workload you want to protect. The beauty of a SaaS delivery model is that Druva can provide you with a data protection solution that they manage from end to end. If you’re hoping that there’ll be some new feature delivered as part of the solution, you don’t have to worry about planning the upgrade to the latest version; Druva takes care of that for you. There’s no need for you to submit change management documentation or negotiate infrastructure outages with key stakeholders. And if something goes wrong with the platform upgrade, the vendor will take care of it. All you need to worry about is ensuring that your network access is maintained and you’re paying the bills. If your capacity is growing out of line with your expectations, it’s a problem for Druva, not you. And, as I alluded to earlier, you get access to features in a timely fashion. Druva can push those out when they’ve tested them, and everyone gets access to them without having to wait. Their time to market is great, and there aren’t a lot of really long release cycles involved.

 

Management

The report also called out how easy it is to manage Druva, with Forrester giving them the highest possible score, 5 (out of 5), in this category. All of their services are available via a single management interface. I don’t recall at what point in my career I started to pay attention to vendors talking to me about managing everything from a single pane of glass. I think that the nature of enterprise infrastructure operations dictates that we look for unified management solutions wherever we can. Enterprise infrastructure is invariably complicated, and we want simplicity wherever we can get it. Having everything on one screen doesn’t always mean that things will be simple, but Druva has focused on ensuring that the management experience delivers on the promise of simplified operations. The simplified operations are also comprehensive, and there’s support for cloud-native / AWS resources (with CloudRanger), data centre workloads (with Druva Phoenix) and SaaS workloads (with Druva inSync) via a single pane of glass.  Although not included in the report, Druva also supports backing up endpoints, such as laptops and mobile devices.

 

Deduplication Is No Joke

One of Forrester’s criteria was whether or not a product offered deduplication. Deduplication has radically transformed the data protection storage market. Prior to the widespread adoption of deduplication and compression technologies in data protection storage, tape provided the best value in terms of price and capacity. This all changed when enterprises were able to store many copies of their data in the space required by one copy. Druva uses deduplication effectively in its solution, and has a patent on its implementation of the technology. They also leverage global deduplication in their solution, providing enterprises with an efficient use of protection data storage. Note that this capability operates within a single AWS region, as you wouldn’t want it running across regions. A key to Druva’s success with deduplication has been its use of DynamoDB to support deduplication operations at scale.
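
For those unfamiliar with how deduplication works conceptually, here’s a minimal, illustrative sketch. It is not Druva’s implementation (which is patented, and which uses DynamoDB for the chunk index at scale); it just shows the basic “hash the chunk, store it only once” idea that makes the capacity savings possible.

```python
import hashlib

class DedupStore:
    """Toy content-addressed store: each unique chunk is stored once and
    referenced by its hash. Real products add variable-length chunking,
    encryption, and a distributed index to make this work at scale."""

    def __init__(self, chunk_size=4096):
        self.chunk_size = chunk_size
        self.chunks = {}          # hash -> chunk bytes (the shared "global" pool)

    def ingest(self, data: bytes):
        """Split data into fixed-size chunks, store only unseen ones, and
        return the recipe (list of hashes) needed to rebuild the data."""
        recipe = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in self.chunks:     # new data: store it
                self.chunks[digest] = chunk
            recipe.append(digest)             # duplicate data: just reference it
        return recipe

    def restore(self, recipe):
        return b"".join(self.chunks[d] for d in recipe)
```

Because the chunk pool is shared across everything ingested, a second backup of mostly unchanged data adds very little new storage, which is where the global deduplication efficiency mentioned above comes from; the single-region caveat applies to where that shared index and pool live.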

 

Your Security Is Their Concern

Security was a key criterion in Forrester’s evaluation, and Druva received another 5 – the highest score possible – in that category as well. One of the big concerns for enterprises is the security of protection data being stored in cloud platforms. There’s no point spending a lot of money trying to protect your critical information assets if a copy of those same assets has been left exposed on the Internet for all to see. With Druva’s solution, everything stored in S3 is sharded and stored as separate objects. They’re not just taking big chunks of your protection data and storing them in buckets for everyone to see. Even if someone were able to access the storage, and put all of the pieces back together, it would be useless because all of these shards are also encrypted.  In addition, the metadata needed to re-assemble the shards is stored separately in DynamoDB and is also encrypted.
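
To illustrate the pattern being described (shard the data, encrypt each shard independently, and keep the reassembly metadata somewhere else), here’s a small hypothetical sketch. It uses the third-party cryptography library’s Fernet recipe purely for illustration; it is not a description of Druva’s actual encryption scheme or key management.

```python
import uuid
from cryptography.fernet import Fernet  # pip install cryptography

def shard_and_encrypt(data: bytes, shard_size: int = 1024):
    """Split data into shards, encrypt each shard independently, and return
    (objects, metadata). The objects would live in object storage (e.g. S3);
    the metadata needed to reassemble them would live in a separate,
    encrypted store (e.g. DynamoDB)."""
    key = Fernet.generate_key()          # illustrative only: real key management differs
    f = Fernet(key)
    objects = {}
    order = []
    for i in range(0, len(data), shard_size):
        object_id = str(uuid.uuid4())    # opaque object name, reveals nothing about content
        objects[object_id] = f.encrypt(data[i:i + shard_size])
        order.append(object_id)
    metadata = {"shard_order": order, "key": key}   # stored separately, also encrypted
    return objects, metadata

def reassemble(objects, metadata):
    f = Fernet(metadata["key"])
    return b"".join(f.decrypt(objects[oid]) for oid in metadata["shard_order"])
```

The point is the one made above: even if an attacker obtained the objects, they would have encrypted fragments with no way to order or decrypt them without the separately stored (and separately encrypted) metadata.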

 

Thoughts

I believe being named a Strong Performer in the Forrester Wave™ Data Resiliency Solutions validates what Druva’s been telling me when it comes to their ability to protect workloads in the data centre, the cloud, and in SaaS environments. Their strength seems to lie in their ability to leverage native cloud tools effectively to provide their customers with a solution that is simple to operate and consume. If you have petabytes of seismic data you need to protect, Druva (and the laws of physics) may not be a great fit for you. But if you have less esoteric requirements and a desire to reduce your on-premises footprint and protect workloads across a number of environments, then Druva is worthy of further consideration. If you wanted to take a look at the report yourself, you can do so here (registration required).

VMware – VMworld 2019 – Wrap-Up And Link-O-Rama

Disclaimer: I recently attended VMworld 2019 – US.  My flights and accommodation were paid for by Digital Sense, and VMware provided me with a free pass to the conference and various bits of swag. There is no requirement for me to blog about any of the content presented and I am not compensated by VMware for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

A quick post to provide some closing thoughts on VMworld US 2019 and link to the posts I did during the event. Not in that order. I’ll add to this as I come across interesting posts from other people too.

 

Link-o-rama

Here’s my stuff.

Intro

VMware – VMworld 2019 – See you in San Francisco

Session Notes

VMware – VMworld 2019 – Monday General Session Notes

VMware – VMworld 2019 – HCI2888BU – Site Recovery Manager 8.2: What’s New and Demo

VMware – VMworld 2019 – HBI2537PU – Cloud Provider CXO Panel with Cohesity, Cloudian and PhoenixNAP

VMware – VMworld 2019 – HBI3516BUS – Scaling Virtual Infrastructure for the Enterprise: Truths, Beliefs and the Real World

VMware – VMworld 2019 – HBI3487BUS – Rethink Data Protection & Management for VMware

Tech Field Day Extra at VMworld US 2019

NetApp, Workloads, and Pizza

Apstra’s Intent – What Do They Mean?

Disclosure

VMware – VMworld 2019 – (Fairly) Full Disclosure

 

Articles From Elsewhere (And Some Press Releases)

VMworld 2019 US – Community Blog Posts

Other Tech Field Day Extra Delegates

A Software First Approach

Is VMware Project Pacific ‘Kubernetes done right’ for the enterprise?

General Session Replays

See the General Session Replays

NSX-T

NSX-T 2.5 – A New Marker on the Innovation Timeline

VMware Announces NSX-T 2.5

VMware Tanzu

Introducing VMware Tanzu Mission Control to Bring Order to Cluster Chaos

VMware Tanzu Completes the Modern Applications Picture

VMware Announces VMware Tanzu Portfolio to Transform the Way Enterprises Build, Run and Manage Software on Kubernetes

Project Pacific

Introducing Project Pacific

Project Pacific – Technical Overview

Project Pacific: Kubernetes to the Core

Workspace ONE

VMware Unveils Innovations Across Its Industry-Leading Workspace ONE Platform to Help Organizations Grow, Expand and Transform Their Business

vRealize

Announcing VMware vRealize Automation 8.0

vRealize Automation 8 – What’s New Overview

Announcing VMware vRealize Operations 8.0

vRealize Suite Lifecycle Manager 8.0 – What’s New

VCPP

VMware Enables Cloud Providers to Deliver the Software-Defined Data Center From any Cloud

VCF

Introducing VMware Cloud Foundation for Cloud Providers

Accelerating Kubernetes Adoption with VMware PKS on Cloud Foundation

Announcing VMware Cloud Foundation and HPE Synergy with HPE GreenLake

Extending Composable Hybrid Cloud for Workload Mobility Use Cases

 

Wrap-up

This was my fourth VMworld US event, and I had a lot of fun. I’d like to thank all the people who helped me out with getting there, the people who stopped and chatted to me at the event, everyone participating in the vCommunity, and VMware for putting on a great show. I’m looking forward to (hopefully) getting along to it again in 2020 (August 30 – September 3).

Apstra’s Intent – What Do They Mean?

Disclaimer: I recently attended VMworld 2019 – US.  My flights and accommodation were paid for by Digital Sense, and VMware provided me with a free pass to the conference and various bits of swag. There is no requirement for me to blog about any of the content presented and I am not compensated by VMware for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

As part of my attendance at VMworld US 2019 I had the opportunity to attend Tech Field Day Extra sessions. You can view the videos from the Apstra session here, and download my rough notes from here.

 

More Than Meets The Eye

A lot of people like to talk about how organisations need to undertake “digital transformation”. One of the keys to success with this kind of transformation comes in the form of infrastructure transformation. The idea is that, if you’re doing it right, you can improve:

  • Business agility;
  • Application reliability; and
  • Cost control.

Apstra noted that “a lot of organisations start with choosing their hardware and all other choices are derived from that choice, including the software”. As a result of this, you’re constrained by the software you’ve bought from that vendor. The idea is you need to focus on business-oriented outcomes, which are then used to determine the technical direction you’ll need to take to achieve those outcomes.

But even if you’ve managed to get yourself a platform that helps you achieve the outcomes you’re after, if you don’t have an appropriate amount of automation and visibility in your environment, you’re going to struggle with slow deployments. You’ll likely also find that a lack of efficient automation can lead to:

  • Physical and logical topologies that are decoupled but dependent;
  • Error-prone deployments; and
  • No end to end validation.

When you’re in that situation, you’ll invariably find that you’ll struggle with reduced operational agility and a lack of visibility. This makes it hard to troubleshoot issues in the field, and people generally feel sad (I imagine).

 

Intent, Is That What You Mean?

So how can Apstra help? Will they magically make everything work the way you want it to? Not necessarily. There are a bunch of cool features available within the Apstra solution, but you need to do some work up front to understand what you’re trying to achieve in the first place. But once you have the framework in place, you can do some neat stuff, using AOS to accelerate initial and day 2 fabric configuration. You can, for example, deploy new racks and L2 / L3 fabric VLANs at scale in a few clicks:

  • Streamline new rack design and deployment;
  • Automate fabric VLAN deployment;
  • Closed-loop validation (endpoint configuration, EVPN routes expectations); and
  • Include jumbo frame configuration for overlay networks.

The idea behind intent-based networking (IBN) is fairly straightforward:

  • Collect intent;
  • Expose intent;
  • Validate; and
  • Remediate.

You can read a little more about IBN here, and a white paper on intent-based DCs can be found here.
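
As a way of making that loop concrete, here’s a minimal, hypothetical sketch of what a collect / expose / validate / remediate cycle might look like in code. It is not AOS or any Apstra API, just an illustration of the closed-loop idea.

```python
from dataclasses import dataclass

@dataclass
class Intent:
    """A declarative statement of what part of the network should look like."""
    name: str
    expected: dict        # e.g. {"vlan": 200, "mtu": 9000, "evpn_routes": 4}

def validate(intent: Intent, observed: dict) -> dict:
    """Compare observed state against intent and return the deviations."""
    return {k: (v, observed.get(k)) for k, v in intent.expected.items()
            if observed.get(k) != v}

def remediation_plan(deviations: dict) -> list:
    """Turn deviations into corrective actions (a real system would push these to devices)."""
    return [f"set {k} to {want} (currently {got})" for k, (want, got) in deviations.items()]

# Example closed loop: collected intent versus hypothetical telemetry from the fabric.
intent = Intent("leaf1-rack-vlan", {"vlan": 200, "mtu": 9000, "evpn_routes": 4})
observed = {"vlan": 200, "mtu": 1500, "evpn_routes": 3}

for action in remediation_plan(validate(intent, observed)):
    print(action)   # e.g. "set mtu to 9000 (currently 1500)"
```

The value in a real system is that validation runs continuously against telemetry, so drift (like the MTU mismatch above) is surfaced and remediated rather than being discovered during an outage.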

 

Thoughts

I don’t deal with complicated network deployments on a daily basis, but I do know some people who play that role on TV. Apstra delivered a really interesting session that had me thinking about the effectiveness of software solutions to control infrastructure architecture at scale. There’s been a lot of talk during conference keynotes about the importance of digital transformation in the enterprise and how we all need to be leveraging software-defined widgets to make our lives better. I’m all for widgets making life easier, but they’re only going to be able to do that when you’ve done a bit of work to understand what it is you’re trying to do with all of this technology. The thing that struck me about Apstra is that they seem to understand that, while they’re selling some magic software, it’s not going to be any good to you if you haven’t done some work to prepare yourself for it.

I rabbit on a lot about how technology organisations struggle to understand what “the business” is trying to achieve. This isn’t a one-way problem either, and the business frequently struggles with the idea that technology seems to be a constant drain on an organisation’s finances without necessarily adding value to the business. In most cases though, technology is doing some really cool stuff in the background to make businesses run better, and more efficiently. Apstra is a good example of using technology to deliver reliable services to the business. Whether you’re an enterprise networker, or toiling away at a cloud service provider, I recommend checking out how Apstra can make things easier when it comes to keeping your network under control.

Random Short Take #21

Here’s a semi-regular listicle of random news items that might be of some interest.

  • This is a great article covering QoS enhancements in Purity 5.3. Speaking of Pure Storage, I’m looking forward to attending Pure//Accelerate in Austin in the next few weeks. I’ll be participating in a Storage Field Day Exclusive event as well – you can find more details on that here.
  • My friends at Scale Computing have entered into an OEM agreement with Acronis to add more data protection and DR capabilities to the HC3 platform. You can read more about that here.
  • Commvault just acquired Hedvig for a pretty penny. It will be interesting to see how they bring them into the fold. This article from Max made for interesting reading.
  • DH2i are presenting a webinar on September 10th at 11am Pacific, “On the Road Again – How to Secure Your Network for Remote User Access”. I’ve spoken to the people at DH2i in the past and they’re doing some really interesting stuff. If your timezone lines up with this, check it out.
  • This was some typically insightful coverage of VMworld US from Justin Warren over at Forbes.
  • I caught up with Zerto while I was at VMworld US last week, and they talked to me about their VAIO announcement. Justin Paul did a good job of summarising it here.
  • Speaking of VMworld, William has posted links to the session videos – check it out here.
  • Project Pacific was big news at VMworld, and I really enjoyed this article from Joep.

Backblaze’s World Tour Of Europe

I spoke with Ahin Thomas at VMworld US last week about what Backblaze has been up to lately. The big news is that they’ve expanded data centre operations into Europe (Amsterdam specifically). Here’s a blog post from Backblaze talking about their new EU DC, and these three articles do a great job of explaining the process behind the DC selection.

So what does this mean exactly? If you’re not so keen on keeping your data in a US DC, you can create an account and start leveraging the EU region. There’s no facility to migrate existing data (at this stage), but if you have a lot of data you want to upload, you could use the B2 Fireball to get it in there.

 

Thoughts and Further Reading

When you think of Backblaze it’s likely that you think of their personal backup product, their hard drive stats, and their storage pod reference designs. So it might seem a little weird to see them giving briefings at a show like VMworld. But their B2 business is ramping up, and a lot of people involved in delivering VMware-based cloud services are looking at object storage as a way to do cost-effective storage at scale. There are also plenty of folks in the mid-market segment trying to find more cost-effective ways to store older data and protect it without making huge investments in the traditional data protection offerings on the market.

It’s still early days in terms of some of the features on offer from Backblaze that can leverage multi-region capabilities, but the EU presence is a great first step in expanding their footprint and giving non-US customers the option to use resources that aren’t located on US soil. Sure, you’re still dealing with a US company, and you’re paying in US dollars, but at least you’ve got a little more choice in terms of where the data will be stored. I’ve been a Backblaze customer for my personal backups for some time, and I’m always happy to hear good news stories coming out of the company. I’m a big fan of the level of transparency they’ve historically shown, particularly when other vendors have chosen to present their solutions as magical black boxes. Sharing things like the storage pod design and hard drive statistics goes a long way to developing trust in Backblaze as the keeper of your stuff.

The business of using cloud storage for data protection and scalable file storage isn’t as simple as jamming a few rackmount boxes in a random DC, filling them with hard drives, charging $5 a month, and waiting for the money to roll in. There’s a lot more to it than that. You need to have a product that people want, you need to know how to deliver that product, and you need to be able to evolve as technology (and the market) evolves. I’m happy to see that Backblaze have moved into storage services with B2, and the move to the EU is another sign of that continuing evolution. I’m looking forward (with some amount of anticipation) to hearing what’s next with Backblaze.

If you’re thinking about taking up a subscription with Backblaze – you can use my link to sign up and I’ll get a free month and you will too.

NetApp, Workloads, and Pizza

Disclaimer: I recently attended VMworld 2019 – US.  My flights and accommodation were paid for by Digital Sense, and VMware provided me with a free pass to the conference and various bits of swag. There is no requirement for me to blog about any of the content presented and I am not compensated by VMware for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

 

As part of my attendance at VMworld US 2019 I had the opportunity to attend Tech Field Day Extra sessions. You can view the videos from the NetApp session here, and download my rough notes from here.

 

Enhanced DC Workloads

In The Beginning There Were Workloads

Andy Banta started his presentation by talking about the evolution of the data centre (DC). The first-generation DCs were resource-constrained: as long as there was something limiting (disk, CPU, memory), things didn’t get done. The later first-generation DCs consisted of standalone hosts running applications. Andy described 2nd-generation DCs as hosts that were able to run multiple workloads. The evolution of these 2nd-generation DCs was virtualisation – now you could run multiple applications and operating systems on one host.

The DC though, is still all about compute, memory, throughput, and capacity. As Andy described it, “the DC is full of boxes”.

[image courtesy of NetApp]

 

But There’s Cool Stuff Happening

Things are changing in the DC though, primarily thanks to a few shifts in key technologies that have developed in recent times.

Persistent Memory

Persistent memory has become more mainstream, and application vendors are developing solutions that can leverage this technology effectively. There’s also technology out there that will let you slice this stuff up and share it around, just like you would a pizza. And it’s resilient too, so if you drop your pizza, there’ll be some still left on your plate (or someone else’s plate). Okay I’ll stop with the tortured analogy.

Microvisors

Microvisors are being deployed more commonly in the DC (and particularly at the edge). What’s a microvisor? “Imagine a Hypervisor stripped down to only what you need to run modern Linux based containers”. The advent of the microvisor is leading to different types of workloads (and hardware) popping up in racks where they may not have previously been found.

Specialised Cores on Demand

You can now also access specialised cores on demand from most service providers. You need access to some GPUs to get some particular work done? No problem. There are a bunch of different ways you can slice this stuff up, and everyone’s hip to the possibility that you might only need them for a short time, so you can pay a consumption fee for however long that time turns out to be.

HPC

Even High Performance Compute (HPC) is doing stuff with new technology (in this case NVMeoF). What kinds of workloads?

  • Banking – low-latency transactions
  • Fluid dynamics – lots of data being processed quickly in a parallel stream
  • Medical and nuclear research

 

Thoughts

My favourite quote from Andy was “NVMe is grafting flesh back on to the skeleton of fibre channel”. He (and most of us in the room) is of the belief that FC (in its current incarnation at least) is dead. Andy went on to say that “[i]t’s out there for high margin vendors” and “[t]he more you can run on commodity hardware, the better off you are”.

The DC is changing, and not just in the sense that a lot of organisations aren’t running their own DCs any more, but also in the sense that the types of workloads in the DC (and their form factor) are a lot different to those we’re used to running in first-generation DC deployments.

Where does NetApp fit in all of this? The nice thing about having someone like Andy speak on their behalf is that you’re not going to get a product pitch. Andy has been around for a long time, and has seen a lot of different stuff. What he can tell you, though, is that NetApp have started developing (or selling) technology that can accommodate these newer workloads and newer DC deployments. NetApp will be happy to sell you storage that runs over IP, but they can also help you out with compute workloads (in the core and edge), and show you how to run Kubernetes across your estate.

The DC isn’t just full of apps running on hosts accessing storage any more – there’s a lot more to it than that. Workload diversity is becoming more and more common, and it’s going to be really interesting to see where it’s at in ten years from now.

VMware – VMworld 2019 – HBI3487BUS – Rethink Data Protection & Management for VMware

Disclaimer: I recently attended VMworld 2019 – US.  My flights and accommodation were paid for by Digital Sense, and VMware provided me with a free pass to the conference and various bits of swag. There is no requirement for me to blog about any of the content presented and I am not compensated by VMware for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Here are my rough notes from “HBI3487BUS – Rethink Data Protection & Management for VMware”, presented by Curt Hayes (Cloud and Data Center Engineer, Regeneron) and Mike Palmer (Chief Product Officer, Druva). You can grab a PDF copy of my notes from here.

 

The World is Changing

Cloud Storage Costs Continue To Decline

  • 67 price decreases in AWS storage with CAGR of (60%) – AWS
  • 68% (110+) of countries have Data protection and privacy legislation – United Nations
  • 40% of IT will be “Versatilists” by 2021 – Gartner
  • 54% of CIOs believe streamlining storage is best opportunity for cost optimisation – ESG
  • 80% of enterprises will migrate away and close their on-premises DCs by 2025 – Gartner
  • 256% increase in demand for data scientists in the last 5 years – Indeed

Druva’s 4 Pillars of Value

  • Costs Decrease – storage designed to optimise performance and cost reduces per TB costs, leaving more money for innovation
  • Eliminate Effort – Capacity management, patching, upgrades, certification, training, professional services gone.
  • Retire HW/SW silos – Druva builds in data services: DR, Archive, eDiscovery and more
  • Put Data to work – eliminating silos allows global tagging. Searchability, access and governance.

The best work you can do is when you don’t have to do it.

Curt (customer) says “[d]ata is our greatest asset”.

Regeneron’s Drivers to Move to Cloud

This was presented as a table of challenges and opportunities, with each challenge paired with the opportunity it creates:

  • Ireland backup platform is nearing end-of-life → a perfect opportunity to consider cloud as an alternative solution for backup and DR
  • 3 distinct tools for managing backups → harmonize the backup tool set
  • Expansion and upgrades are costly and time-consuming → minimize operational overhead
  • Need to improve business continuity posture → instantly enable offsite backups and meet the disaster recovery requirement
  • Scientists have a tough time accessing the data they need → advanced search capabilities to offer greater value-added data services

Regeneron’s TCO Analysis

Druva Enables Intelligent Tiering in the Cloud

Traditional, expensive, and inflexible on-premises storage

  • Limited and expensive to scale and store
  • Complex administration
  • Lack of visibility and data silos
  • Tradeoff between cost and visibility for Long Term Retention requirements

Modern, scalable and cost-effective multi-tier storage

  • Scalable, efficient cloud storage
  • Intelligent progressive tiering of data for maximum cost efficiency with minimum effort
  • Support cloud bursting, hot/cold data
  • Cost efficient storage on most innovative AWS tiers
  • Enable reporting / audit on historical data

Regeneron’s Adoption of Cloud Journey

  • DC modernisation / consolidation
  • Workload migration to the cloud – Amazon EC2
  • Simplify and streamline backup / recovery and DR
  • Longer-term retention for advanced data mining
  • Protecting cloud applications – Sharepoint, O365, etc
  • Future – do more with data

 

How Did Druva help?

Basics

  • Cheaper
  • Simpler
  • Faster
  • Unified protection

Future Proof

  • Scalable
  • Ease of integration
  • No training
  • Business continuity

Data Value

  • Search
  • Data Mining
  • Analytics

Looking Beyond Data Protection …

 

Thoughts and Further Reading

I think the folks at Druva have been doing some cool stuff lately, and chances are quite high that I’ll be writing more about them in the future. There’s a good story with their cloud-native architecture, and it was nice to hear how a customer leveraged them to do things better than they had been doing previously.

Two things really stood out to me during this session. The first was the statement “[t]he best work you can do is when you don’t have to do it”. I’ve heard it said before that the best storage operation is one you don’t have to do, and I think we sometimes lose sight of how this approach can help us get stuff done in a more efficient fashion, ultimately leading to focussing our constrained resources elsewhere.

The second was the idea of looking beyond data protection. The “secondary storage” market is riding something of a gravy train at the moment, with big investment from the VC funds in current and next-generation data protection (management?) solutions. There’s been some debate over how effective these solutions are at actually deriving value from that secondary data, but you’d have to think they’re in a prime position to succeed. I’m curious to see just what shape that value takes when we all start to agree on the basic premise.

Sponsored sessions aren’t everyone’s cup of tea, but I like hearing from customers about how it’s worked out well for them. And the cool thing about VMworld is that there’s a broader ecosystem supporting VMware across a number of different technology stacks. This makes for a diverse bunch of sessions, and I think it makes for an extremely interesting vendor conference. If you want to learn a bit more about what Druva have been up to, check out my post from Tech Field Day 19 here, and you can also find a useful overview of the product here. Good session. 3.5 stars.

VMware – VMworld 2019 – HBI3516BUS – Scaling Virtual Infrastructure for the Enterprise: Truths, Beliefs and the Real World

Disclaimer: I recently attended VMworld 2019 – US.  My flights and accommodation were paid for by Digital Sense, and VMware provided me with a free pass to the conference and various bits of swag. There is no requirement for me to blog about any of the content presented and I am not compensated by VMware for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

These are my rough notes from “HBI3516BUS – Scaling Virtual Infrastructure for the Enterprise: Truths, Beliefs and the Real World”, a sponsored panel session hosted by George Crump (of Storage Switzerland fame) and sponsored by Tintri by DDN. The panellists were:

JP: Hyper-V is not really for the enterprise. Configuration, and automation were a challenge. Tintri made it easier to deal with the hypervisor.

JD: You put a bunch of disks and connect it up to what you want to. It’s really simple to setup. “Why would you want to go complex if you didn’t have to?”

MB: When we had block storage, we were beholden to the storage team. We’ve never had problems with their [Tintri’s] smallest hybrid arrays.

AA: Back in the ESX 2.5 days – single LUN per VM. We would buy our arrays half-populated – ready to grow. We’re now running 33 – 34 devices. Tintri was great with QoS for VMs. It became a great troubleshooting tool for VMware.

GC: Reporting and analytics with Tintri has always been great.

MB: We use Tintri analytics to create reports for global infrastructure. Tintri will give you per-VM allocation by default. Performance like a Tivo – you can go back and look at analytics at a very granular level.

GC: How did the addition of new arrays go with Global Center?

MB: We manage our purchases based on capacity or projects. 80 – 85% we consider additional capacity. Global Center has a Pools function. It does a storage vMotion “like” feature to move data between arrays. There’s no impact.

JP: We used a UCS chassis, Tintri arrays, and Hyper-V hypervisor. We used a pod architecture. We knew how many users we wanted to host per pod. We have 44000 users globally. VDI is the only thing the bank uses.

AA: We’re more of a compute / core based environment, rather than users.  One of the biggest failings of Tintri is that it just works. When you’re not causing problems – people aren’t paying attention to it.

MB: HCI in general has a problem with very large VMs.

AA: We use a lot of scripting, particularly on the Red Hat (RHV) side of things. Tintri is fixing a lot of those at a different level.

GC: What would you change?

JP: I would run VMware.

MB: The one thing that can go wrong is the network. It was never a standardised network deployment. We had different network people in different regions doing different things.

JP: DR in the cloud. How do you do bank infrastructure in the cloud? Can we DR into the cloud? Tested Tintri replicating into Azure.

AA: We’re taking on different people. Moving “up” the stack.

Consistency in environments. It’s still a hard thing to do.

Wishlist?

  • Containers
  • A Virtual Appliance

 

Thoughts

Some folks get upset about these sponsored sessions at VMworld. I’ve heard it said before that they’re nothing more than glorified advertising for the company that sponsors the session. I’m not sure that it’s really any different to a vendor holding a four day conference devoted to themselves, but some people like to get ornery about stuff like that. One of my favourite things about working with technology is hearing from people out in the field about how they use that technology to do their jobs better / faster / more efficiently.

Sure, this session was a bit of a Tintri fan panel, but I think the praise is warranted. I’ve written enthusiastically in the past about how I thought Tintri has really done some cool stuff in terms of storage for virtualisation. I was sad when things went south for them as a company, but I have hopes that they’ll recover and continue to innovate under the control of DDN.

When everything I’ve been hearing from the keynote speakers at this conference revolved around cloud-native tools and digital transformation, it was interesting to come across a session where the main challenges still involved getting consistent, reliable and resilient performance from block storage to serve virtual desktop workloads to the enterprise. That’s not to say that we shouldn’t be looking at what’s happening with Kubernetes, etc, but I think there’s still room to understand what’s making these bigger organisations tick in terms of successful storage infrastructure deployments.

Useful session. 4 stars.

VMware – VMworld 2019 – (Fairly) Full Disclosure

Disclaimer: I recently attended VMworld 2019 – US.  My flights and accommodation were paid for by Digital Sense, and VMware provided me with a free pass to the conference and various bits of swag. There is no requirement for me to blog about any of the content presented and I am not compensated by VMware for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Here are my notes on gifts, etc, that I received as an attendee at VMworld US 2019. Apologies if it’s a bit dry but I’m just trying to make it clear what I received during this event to ensure that we’re all on the same page as far as what I’m being influenced by. I’m going to do this in chronological order, as that was the easiest way for me to take notes during the week. Whilst every attendee’s situation is different, I was paid by my employer to be at this event.

 

Saturday

My wife kindly dropped me at the airport. I flew Qantas economy class from BNE – SYD – SFO courtesy of my employer (Digital Sense). My taxi to the hotel was also covered by my employer. I stayed at The Fairmont on Nob Hill. This was also covered by my employer. On Saturday night we went out and fought valiantly against jet-lag.

 

Sunday

On Sunday I went to the conference venue and picked up my VMworld backpack (containing a notepad, pen, and water bottle). I was also given a VMworld-branded pop socket because I’d uploaded my photo for my badge to the portal earlier.

On Sunday afternoon I attended a VMware Cloud Provider Technical Advisory Board (TAB) Meeting. Lunch consisted of rice, chicken, fish, and salad. My manager saved the day by fetching real flat white coffees from Bluestone Lane. During the break I had some coffee and a choc-chip cookie. As we left we were all given a VMware cloud provider platform polo shirt and Keen wireless charging desk clock.

That night there was an attendee welcome reception in the Solutions Exchange. I had 2 Sapporo beers, some shrimp and some cheese. I also picked up:

I then headed over to the VMunderground party at Tabletop Tap House. I caught up with a few people and helped myself to 2 Coronas, 1 Firestone 805 and some bruschetta.

 

Monday

Before the general session I grabbed some coffee and a muffin from the Square. I also managed to grab a vExpert gift bag, consisting of one of those drawstring bags, a vExpert 2019 pin and sticker, and a Raspberry Pi 3. I then swung by the VMUG stand and picked up my VMUG leader gift – a nice leather notepad folio.

For lunch I had one of the boxed turkey sandwiches.

For dinner a group of us met at Osha Thai Restaurant. We convinced them to let us share plates from the set menu, and I had a bit of everything. For appetizers we had:

  • Kobe beef wasabi roll with carrot, celery and mint
  • Miang Kham Shrimp – lettuce wrapped with shrimp, ginger, lime, roasted coconut, peanut, fresh chilli, and coconut herb caramel.
  • Vegetarian crispy roll – silver noodle, shiitake mushroom, cabbage and carrot served with sweet and sour plum sauce
  • Tuna Tower – yellow fin tuna tartare with mango, avocado and sriracha sesame oil

There was also a really tasty Shrimp Tom-Kha coconut soup. And for the main course they served:

  • Volcanic Beef – wok-fried grilled premium USDA flank steak with Thai basil, bell pepper in lava sauce
  • Chu-chi salmon – pan-seared salmon fillet served with “Chu-Chi” fragrant and flavourful red curry
  • Pad Thai – Classic Pad Thai noodles, tamarind reduction, peanut with choice of chicken or tofu
  • Country chicken – stir-fried lightly battered chicken with cashew, onion, garlic and honey-ginger sauce

It was all really nice. These people put down their credit cards to pay for it:

Thanks to Keith Townsend for organising the evening. I also had 3 Singha beers. I took a ride-sharing service paid for by Stephen Foskett and walked from his hotel back to mine.

 

Tuesday

On Tuesday morning I was fortunate enough to be selected to attend a vExpert breakfast with Cohesity CEO Mohit Aron at Grill in the St Regis hotel. I had 2 cups of coffee, and the “Local Farm” egg sandwich, with a scramble of organic Petaluma farmed eggs, Hobbs smoked black pepper bacon, cheddar cheese, and sourdough bread. We were also given a Cohesity mug that surprisingly survived the flight home. Regular readers of the blog will know I’m a fan of Cohesity and this was a great opportunity to learn more about Mohit.

After the general session I did another whip around the Solutions Exchange and picked up:

I had the box lunch again, which consisted of a chilli-rubbed beef torta sandwich with avocado, black bean puree, grilled onions, roasted tomatoes and cotija cheese on telera bread, a lemon bar dessert, and an apple. Later in the afternoon I stopped by Super Duper as I knew I wouldn’t have much time for dinner in the evening. I had the Super Burger with cheese, and a Pilsner. My arteries did not thank me.

I then headed over to the Dell Technologies Cloud & VMware VeloCloud MeetUp at The Grid on 4th Street. I had 2 Holy Ghost Pilsner beers and Dell very kindly gave me a DJI Osmo Pocket camera and 32GB Sandisk microSD card. Big thanks to Konnie for having me along.

I then headed back to my hotel and Howard Marks swung by in a cab to take me to The Orpheum to see Hamilton. I now understand what all of the fuss is about. Big thanks to John White at Expedient for the tickets. I took a ride-sharing service back to the hotel – this was paid for by Becky Elliot.

 

Wednesday

On Wednesday morning I walked down to the venue with Becky. We stopped at Starbucks and I had an egg, bacon, and cheese sandwich – it was just what I needed. I attended two Tech Field Day Extra sessions in the morning, and had 2 coffees and some extremely tasty Baklava (provided by Al Rasheed). For lunch we had Mexican, consisting of corn tortillas, rice & beans, house made carnitas, cheese quesadillas, sour cream, guacamole, salsa, and house made churros.

After lunch I headed back to the Solutions Exchange for a final walk around and picked up:

  • A Druva shirt;
  • Some stickers, a cable organizer, and a glasses cleaner from Apstra;
  • Some more Gorilla Guides and stickers from ActualTech Media; and
  • A Faction T-shirt (one of my favourites).

I then attended a 3-hour VCPP APJ Roundtable event at the W Hotel. I helped myself to some bottled water while I was there.

For dinner I caught up with some of the Tech Field Day crew at Thirsty Bear. This is a tapas-style place, and I had devilled eggs, pulled pork empanadas, buffalo chicken empanadas, bacon and corn flatbread, and bacon-wrapped shrimp. I also had 3 Kolsch beers. We then retreated across the road to the bar at the W Hotel where I had 1 Cal Lager. This was paid for by Stephen Foskett. I then took a ride-sharing service back to the hotel with Becky Elliot. This was paid for by Becky.

 

Thursday

On Thursday my colleagues and I attended a NetApp EBC in Santa Clara. NetApp paid for our transport to and from the city. We had breakfast there, consisting of coffee, potato gems, bacon, quiche and a muffin. We were also given a NetApp-branded notepad and socks. For lunch we had 6 cheese mac and cheese, beef short rib, grilled chicken, mashed potato, salad, and water.

I had 2 Stella Artois beers and some crumbed prawns during the happy hour. We then headed to Birk’s Restaurant for dinner. I had 3 Firestone Pivo beers, blackened ribeye and mashed potatoes, and a shrimp cocktail for dinner. This was paid for by NetApp.

When we returned to the hotel we checked out the Tonga Room. I had a Fog Lifter cocktail, which seemed to have a lot of rum and crushed ice in it. This was paid for by my colleague.

 

Friday

We went for breakfast at Lori’s Diner and I had the Blues Burger. The blue cheese worked pretty well I thought. We then headed over to Pier 39 to check out some of the tourist shops and ended up having lunch at the Barrel House Tavern in Sausalito. I had the Tartare Tacos consisting of four wonton tacos (2 ahi tuna and 2 salmon tartare) with avocado mousse, summer slaw, chili oil, soy lime vinaigrette, and chili aioli. It was really nice. I also had 3 Kolsch beers. Once we were back at the hotel I took a cab to SFO and flew home via LAX. Please now enjoy this photo of a baseball card with my likeness on it – thanks Rubrik!