Brisbane VMUG – October 2022

The October 2022 edition of the Brisbane VMUG meeting will be held on Wednesday 12th October at the Cube (QUT) from 5pm – 7pm. It’s sponsored by NetApp and promises to be a great evening.

Two’s Company, Three’s a Cloud – NetApp, VMware and AWS

NetApp has had a strategic relationship with VMware for over 20 years, and with AWS for over 10 years. Recently at VMware Explore we made a significant announcement about VMC support for NFS Datastores provided by the AWS FSx for NetApp ONTAP service.

Come and learn about this exciting announcement and more on the benefits of NetApp with VMware Cloud. We will discuss architecture concepts, use cases and cover topics such as migration, data protection and disaster recovery as well as Hybrid Cloud configurations.

There will be a lucky door prize as well as a prize for best question on the night. Looking forward to seeing you there!

Wade Juppenlatz – Specialist Systems Engineer – QLD/NT

Chris (Gonzo) Gondek – Partner Technical Lead QLD/NT

 

PIZZA AND NETWORKING BREAK!

This will be followed by:

All the News from VMware Explore – (without the jet lag)

We will cover a variety of cloudy announcements from VMware Explore, including:

  • vSphere 8
  • vSAN 8
  • VMware Cloud on AWS
  • VMware Cloud Flex Storage
  • GCVE, OCVS, AVS
  • Cloud Universal
  • VMware Ransomware Recovery for Cloud DR

Dan Frith – Staff Solutions Architect – VMware Cloud on AWS, VMware

 

And we will be finishing off with:

Preparing for VMware Certifications

With job requirements increasing over the last few years, certifications help you demonstrate your skills and take a step towards better roles. In this Community Session we will help you understand how to prepare for a VMware certification exam and share some useful tips you can use during the exam.

 

We will talk about:

  • Different types of exams
  • How to schedule an exam
  • Where to get material to study
  • Lessons learned from the field per type of exam

Francisco Fernandez Cardarelli – Senior Consultant (4 x VCIX)

 

Soft drinks and vBeers will be available throughout the evening! We look forward to seeing you there!

Doors open at 5pm. Please make your way to The Atrium, on Level 6.

You can find out more information and register for the event here. I hope to see you there. Also, if you’re interested in sponsoring one of these events, please get in touch with me and I can help make it happen.

VMware Cloud on AWS – Supplemental Storage – A Few Notes …

At VMware Explore 2022 in the US, VMware announced a number of new offerings for VMware Cloud on AWS, including something we’re calling “Supplemental Storage”. There are some great (official) posts that have already been published, so I won’t go through everything here. I thought it would be useful to provide some high-level details and cover some of the caveats that punters should be aware of.

 

The Problem

VMware Cloud on AWS has been around for just over 5 years now, and in that time it’s proven to be a popular platform for a variety of workloads, industry verticals, and organisations of all different sizes. However, one of the challenges that a hyper-converged architecture presents is that resource growth is generally linear (depending on the types of nodes you have available). In the case of VMware Cloud on AWS, we (now) have three node types available for use: the I3, I3en, and I4i. Each of these instances provides a fixed amount of CPU, RAM, and vSAN storage for use within your VMC cluster. So when your storage grows past a certain threshold (80%), you need to add an additional node. This is a longwinded way of saying that, even if you don’t need the additional CPU and RAM, you need to add it anyway. To address this challenge, VMware now offers what’s called “Supplemental Storage” for VMware Cloud on AWS. This is essentially external datastores presented to the VMC hosts over NFS. It comes in two flavours: FSx for NetApp ONTAP and VMware Cloud Flex Storage. I’ll cover both in a little more detail below.
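To make the linear-scaling problem concrete, here’s a rough back-of-the-envelope sketch in Python. The per-host capacity figures are hypothetical placeholders (check the current VMware Cloud on AWS configuration maximums for real numbers), but the shape of the problem holds: once vSAN usage crosses the threshold, you buy a whole host, CPU and RAM included.

```python
import math

# Hypothetical per-host figures for illustration only -- check the official
# VMware Cloud on AWS configuration maximums for current numbers.
HOSTS = {
    "i3": {"vsan_tib": 10.37, "cores": 36, "ram_gib": 512},
    "i3en": {"vsan_tib": 45.84, "cores": 96, "ram_gib": 768},
}

def hosts_needed(storage_tib: float, host_type: str, threshold: float = 0.80) -> int:
    """Return the host count needed to keep vSAN usage under the threshold.

    Illustrates the linear-scaling problem: storage, CPU, and RAM all grow
    together, even when only storage is the constraint.
    """
    usable_per_host = HOSTS[host_type]["vsan_tib"] * threshold
    return max(2, math.ceil(storage_tib / usable_per_host))  # assume a 2-host minimum

# 100 TiB of data on smaller hosts drags 13 hosts' worth of CPU/RAM along for the ride
print(hosts_needed(100, "i3"))
print(hosts_needed(100, "i3en"))
```

Supplemental Storage breaks that coupling: the capacity lives outside the cluster, so the host count can be sized for compute instead.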

[image courtesy of VMware]

 

Amazon FSx for NetApp ONTAP

The first cab off the rank is Amazon FSx for NetApp ONTAP (or FSxN to its friends). This one is ONTAP-like storage made available to your VMC environment as a native service. The service itself is fully customer managed, while the networking connectivity is VMware managed.

[image courtesy of VMware]

There’s a 99.99% Availability SLA attached to the service. It’s based on NetApp ONTAP, and offers support for:

  • Multi-Tenancy
  • SnapMirror
  • FlexClone

Note that it currently requires VMware Managed Transit Gateway (vTGW) for Multi-AZ deployment (the only deployment architecture currently supported), and can connect to multiple clusters and SDDCs for scale. You’ll need to be on SDDC version 1.20 (or greater) to leverage this service in your SDDC, and there is currently no support for attachment to stretched clusters. While you can only connect datastores to VMC hosts using NFSv3, there is support for connecting directly to guests via other protocols. More information can be found in the FAQ here. There’s also a simulator you can access here that runs you through the onboarding process.

 

VMware Cloud Flex Storage

The other option for supplemental storage is VMware Cloud Flex Storage (sometimes referred to as VMC-FS). This is a datastore presented to your hosts over NFSv3.

Overview

VMware Cloud Flex Storage is:

  • A natively integrated cloud storage service for VMware Cloud on AWS that is fully managed by VMware;
  • A cost-effective multi-cloud storage solution built on SCFS;
  • Delivered via a two-tier architecture for elasticity and performance (AWS S3 and local NVMe cache); and
  • Provides integrated data management.

In short, VMware has taken a lot of the technology used in VMware Cloud Disaster Recovery (the result of the Datrium acquisition in 2020) and used it to deliver up to 400 TiB of storage per SDDC.
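The two-tier idea is easier to see with a toy model: a small, fast cache in front of a large capacity tier, where durability always comes from the capacity tier and the cache just accelerates reads. This is purely illustrative Python, not how VMware Cloud Flex Storage is actually implemented; the class and its behaviour are invented for the example.

```python
from collections import OrderedDict

class TwoTierStore:
    """Toy model of a two-tier design: a small, fast cache (standing in for
    local NVMe) in front of a large, slow capacity tier (standing in for S3).
    Purely illustrative -- not VMware Cloud Flex Storage's implementation.
    """
    def __init__(self, cache_blocks: int):
        self.cache = OrderedDict()          # block id -> data, in LRU order
        self.capacity_tier = {}             # the "S3" tier: every block lives here
        self.cache_blocks = cache_blocks

    def write(self, block: str, data: bytes) -> None:
        self.capacity_tier[block] = data    # durability comes from the capacity tier
        self._cache_put(block, data)

    def read(self, block: str) -> bytes:
        if block in self.cache:             # cache hit: served at "NVMe" speed
            self.cache.move_to_end(block)
            return self.cache[block]
        data = self.capacity_tier[block]    # cache miss: fetch from the "S3" tier
        self._cache_put(block, data)
        return data

    def _cache_put(self, block: str, data: bytes) -> None:
        self.cache[block] = data
        self.cache.move_to_end(block)
        if len(self.cache) > self.cache_blocks:
            self.cache.popitem(last=False)  # evict the least recently used block

store = TwoTierStore(cache_blocks=2)
store.write("a", b"1"); store.write("b", b"2"); store.write("c", b"3")
print("a" in store.cache)  # False -- evicted from cache, still safe in the capacity tier
print(store.read("a"))     # b'1' -- refetched and re-cached on demand
```

The appeal of the design is that capacity scales with the cheap tier while hot data still gets fast-media latency.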

[image courtesy of VMware]
The intent of the solution, at this stage at least, is that it is only offered as a datastore for hosts via NFSv3, rather than other protocols directly to guests. There are some limitations around the supported topologies too, with stretched clusters not currently supported. From a disaster recovery perspective, it’s important to note that VMware Cloud Flex Storage is currently only offered on a single-AZ basis (although the supporting components are spread across multiple Availability Zones), and there is currently no support for VMware Cloud Disaster Recovery co-existence with this solution.

 

Thoughts

I’ve only been at VMware for a short period of time, but I’ve had numerous conversations with existing and potential VMware Cloud on AWS customers looking to solve their storage problems without necessarily putting everything on vSAN. There are plenty of reasons why you wouldn’t want to use vSAN for high capacity storage workloads, and I believe these two initial solutions go some way towards solving that issue. I expect many of the caveats wrapped around these two products at General Availability will be removed over time, along with the traditional objection that VMware Cloud on AWS isn’t great for high-capacity, cost-effective storage.
Finally, if you’re an existing NetApp ONTAP customer, and were thinking about what you were going to do with that Petabyte of unstructured data you had lying about when you moved to VMware Cloud on AWS, or wanting to take advantage of the sweat equity you’ve poured into managing your ONTAP environment over the years, I think we’ve got you covered as well.

Random Short Take #73

Welcome to Random Short Take #73. Let’s get random.

Ransomware? More Like Ransom Everywhere …

Stupid title, but ransomware has been in the news quite a bit recently. I’ve had some tabs open in my browser for over twelve months with articles about ransomware that I found interesting. I thought it was time to share them and get this post out there. This isn’t comprehensive by any stretch, but rather it’s a list of a few things to look at when looking into anti-ransomware solutions, particularly for NAS environments.

 

It Kicked Him Right In The NAS

The way I see it (and I’m really not the world’s strongest security person), there are (at least) three approaches to NAS and ransomware concerns.

The Endpoint

This seems to be where most companies operate – addressing ransomware as it enters the organisation via the end users. There are a bunch of solutions out there that are designed to protect humans from themselves. But this approach doesn’t always help with alternative attack vectors, and it’s only as good as the processes you have in place to keep those endpoints up to date. I’ve worked in a few shops where endpoint protection solutions were deployed and then inadvertently clobbered by system updates or users with too many privileges. The end result was that the systems didn’t do what they were meant to and there was much angst.

The NAS Itself

There are things you can do with NetApp solutions, for example, that are kind of interesting. Something like Stealthbits looks neat, and Varonis also uses FPolicy to get a similar result. Your mileage will vary with some of these solutions, and, again, it comes down to the ability to effectively ensure that these systems are doing what they say they will, when they will.

Data Protection

A number of the data protection vendors are talking about their ability to recover quickly from ransomware attacks. The capabilities vary, as they always do, but most of them have a solid handle on quick recovery once an infection is discovered. They can even help you discover that infection by analysing patterns in your data protection activities. For example, if a whole bunch of data changes overnight, it’s likely that you have a bit of a problem. But some of the effectiveness of these solutions is limited by the frequency of data protection activity, and whether anyone is reading the alerts. The challenge here is that it’s a reactive approach, rather than something preventative. That said, a company like Rubrik is working hard to enhance its Radar capability into something a whole lot more interesting.
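The overnight-change heuristic mentioned above can be sketched in a few lines. This is not any vendor’s actual detection algorithm, just the basic statistical idea: flag a backup run whose changed-data volume is a big outlier against recent history.

```python
from statistics import mean, stdev

def flag_anomalous_change(daily_changed_gb: list, threshold_sigmas: float = 3.0) -> bool:
    """Flag a backup run whose changed-data volume is an outlier versus history.

    A crude version of the heuristic data protection vendors describe: if far
    more data changed than usual (e.g. mass encryption by ransomware), alert.
    """
    *history, latest = daily_changed_gb
    if len(history) < 2:
        return False                        # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest != mu                 # flat history: any change is anomalous
    return (latest - mu) / sigma > threshold_sigmas

# A week of typical incremental sizes, then a massive overnight change
print(flag_anomalous_change([12, 15, 11, 14, 13, 12, 480]))  # True
print(flag_anomalous_change([12, 15, 11, 14, 13, 12, 14]))   # False
```

Real products layer entropy analysis and file-type checks on top of this, but the frequency limitation stands: you can only detect changes as often as you protect the data.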

Other Things

Other things that can help limit your exposure to ransomware include adopting generally robust security practices across the board, monitoring all of your systems, and talking to your users about not clicking on unknown links in emails. Some of these things are easier to do than others.

 

Thoughts

I don’t think any of these solutions provides everything you need in isolation, but the challenge is going to be coming up with something that is supportable and, potentially, affordable. It would be great if it actually works, too. Ransomware is a problem, and becoming a bigger problem every day. I don’t want to sound like I’m selling you insurance, but it’s almost not a question of if, but when. Paying attention to some of the above points will help you on your way. Of course, sometimes Sod’s Law applies, and things will go badly for you no matter how well you think you’ve designed your systems. At that point, it’s going to be really important that you’ve set up your data protection systems correctly, otherwise you’re in for a tough time. Remember, it’s always worth thinking about what your data is worth to you when you’re evaluating the relative value of security and data protection solutions. Some further reading:

  • This article from Chin-Fah had some interesting insights into the problem.
  • This article from Cohesity outlined a comprehensive approach to holistic cyber security.
  • This article from Andrew over at Pure Storage did a great job of outlining some of the challenges faced by organisations when rolling out these systems.
  • This list of NIST ransomware resources from Melissa is great.
  • And if you’re looking for a useful resource on ransomware from VMware’s perspective, check out this site.

NetApp Keystone – How Do you Want It?

Disclaimer: I recently attended Storage Field Day 21.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

NetApp recently presented at Storage Field Day 21. You can see videos of the presentation here, and download my rough notes from here. This post is focussed on the Keystone presentation, but I recommend you check out the Oracle performance analysis session as well – I found it extremely informative.

 

Keystone? What Is It?

According to the website, “Keystone provides a portfolio of payment solutions and storage-as-a-service offerings for hybrid cloud environments to deliver greater agility, financial flexibility, and reduced financial risk that helps you meet your business outcomes”. In short, it gives you a flexible way to consume the broader portfolio of NetApp solutions as a service on-premises (and off-premises).

 

How Much XaaS Is Too Much?

According to NetApp’s research, no amount of XaaS is too much. The market is apparently hungry for everything as a service to be a thing. It seems we’re no longer just looking to do boring old Infrastructure or Software as a Service. We really seem to want everything as a Service.

[image courtesy of NetApp]

Why?

There are some compelling reasons to consume things as a service via operating expenditure rather than upfront capital expenditure. In the olden days, when I needed some storage for my company, I usually had a line item in the budget for some kind of storage array. What invariably happened was that the budget would only be made available once every 3 – 5 years. It doesn’t make any sense necessarily, but I’m sure there are accounting reasons behind it. So I would try to estimate how much storage the company would need for the next 5 years (and usually miss the estimate by a country mile). I’d then buy as much storage as I could and watch it fill up at an alarming rate.

The other problem with this approach was that we were paying for spindles that weren’t necessarily in use for the entirety of the asset’s lifecycle. There was also an issue that some storage vendors would offer special discounting to buy everything up front. When you went to add additional storage, however, you’d be slugged with pricing that was nowhere near as good as it was if you’d bought everything up front. The appeal of solutions like storage as a service is that you can start with a smallish footprint and grow it as required, spending what you need, and going from there. It’s also nice for the vendors, as the sales engagement is a little more regular, and the opportunity to sell other services into the environment that may not have been identified previously becomes a reality.
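A toy model makes the two spend profiles easy to compare. All the prices and growth figures below are invented for illustration; the point is the shape of the spend (pay for the 5-year guess up front versus pay monthly for what you actually consume), not the totals.

```python
def upfront_cost(capacity_tb_year5: float, price_per_tb: float) -> float:
    """Buy day one for the 5-year capacity estimate (which is usually wrong anyway)."""
    return capacity_tb_year5 * price_per_tb

def as_a_service_cost(start_tb: float, growth_tb_per_month: float,
                      price_per_tb_month: float, months: int = 60) -> float:
    """Pay monthly only for the capacity actually consumed as it grows."""
    return sum((start_tb + m * growth_tb_per_month) * price_per_tb_month
               for m in range(months))

# Hypothetical numbers: 100 TB growing linearly towards 400 TB over 5 years
capex = upfront_cost(400, price_per_tb=500)
opex = as_a_service_cost(100, growth_tb_per_month=5, price_per_tb_month=15)
print(f"up front: ${capex:,.0f}, as a service: ${opex:,.0f}")
```

Depending on the numbers, the as-a-service total can come out higher over the full term; what you’re buying is the matching of spend to consumption and the ability to correct a bad estimate.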

No, But Really Why?

If you’ve watched the NetApp Keystone presentation, and maybe checked out the supporting documentation, you’re going to wonder why folks aren’t just moving everything to public cloud, and skipping the disk slinging middle man. As anyone who’s worked with or consulted for enterprise IT organisations will be able to tell you though, it’s rarely that simple. There may indeed be appetite to leverage public cloud storage services, for example, but there may also be a raft of reasons why this can’t be done, including latency requirements, legacy application support, data sovereignty concerns, and so forth.

[image courtesy of NetApp]

Sometimes the best thing that can happen is that there’s a compromise to be had between the desire for the business to move to a different operating model and the ability for the IT organisation to service that need.

 

Thoughts and Further Reading

The growth of XaaS over the last decade has been fascinating to see. There really is an expectation that you can do pretty much anything as a service, and folks are queuing up for the privilege. As I mentioned earlier, I think there are reasons why it’s popular on both sides, and I certainly don’t want to come across as some weird on-premises storage hugger who doesn’t believe the future of infrastructure is heavily entwined with as a service offerings. Heck, my day job is at a company that is entirely built on this model. What I do wonder at times is whether folks in organisations looking to transform their services are really ready to relinquish the control of the infrastructure part of the equation in exchange for a per-GB / per month option. Offerings like Keystone aren’t just fancy financial models to make getting kit on the floor easier, they’re changing the way that vendors and IT organisations interact at a fairly fundamental level. In much the same way that public cloud has changed the role of the IT bod in the organisation, so too does XaaS change that value proposition.

I think the folks at NetApp have done quite a good job with Keystone, particularly recognising that there is still a place for on-premises infrastructure, but acknowledging that the market wants both a “cloud-like” experience, and a new way of consuming these services. I’ll be interested to see how Keystone develops over the next 12 – 24 months now that it’s been released to the market at large. We all talk about as a service being the future, so I’m keen to see if folks are really buying it.

Storage Field Day 21 – (Fairly) Full Disclosure

Disclaimer: I recently attended Storage Field Day 21.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Here are my notes on gifts, etc, that I received as a conference attendee at Storage Field Day 21. This is by no stretch an interesting post from a technical perspective, but it’s a way for me to track and publicly disclose what I get and how it looks when I write about various things. With all of … this stuff … happening, it’s not going to be as lengthy as normal, but I did receive a couple of boxes of stuff in the mail, so I wanted to disclose it.

The Tech Field Day team sent a keyboard cloth (a really useful thing to save the monitor on my laptop from being bashed against the keyboard), a commemorative TFD coin, and some TFD patches. The team also sent me a snack pack with a variety of treats in it, including Crunch ‘n Munch caramel popcorn with peanuts, fudge brownie M&M’s, Pop Rocks, Walker’s Monster Munch pickled onion flavour baked corn snacks, peanut butter Cookie Dough Bites, Airheads, Razzles, a giant gobstopper, Swedish Fish, a Butterfinger bar, some Laffy Taffy candy, Hershey’s Kisses, Chewy Lemonhead, Bottlecaps, Airheads, Candy Sours and Milk Duds. I don’t know what most of this stuff is but I guess I’ll find out. I can say the pickled onion flavour baked corn snacks were excellent.

Pliops came through with the goods and sent me a Lume Cube Broadcast Lighting Kit. Hammerspace sent a stainless steel water bottle and Hammerspace-branded Leeman notepad. Nasuni threw in a mug, notepad, and some pens, while NetApp gave me a travel mug and notepad. Tintri kindly included a Tintri trucker cap, Tintri-branded hard drive case and Tintri-branded OGIO backpack in the swag box.

My Secret Santa gift was the very excellent “Working for the clampdown: The Clash, the dawn of neoliberalism and the political promise of punk“, edited by Colin Coulter.

It wasn’t fancy food and limos this time around. But it was nonetheless an enjoyable event. Hopefully we can get back to in-person events some time this decade. Thanks again to Stephen and the team for having me back.

Random Short Take #46

Welcome to Random Short Take #46. Not a great many players have worn 46 in the NBA, but one player who has is one of my favourite Aussie players: Aron “Bangers” Baynes. So let’s get random.

  • Enrico recently attended Cloud Field Day 9, and had some thoughts on NetApp’s identity in the new cloud world. You can read his insights here.
  • This article from Chris Wahl on multi-cloud design patterns was fantastic, and well worth reading.
  • I really enjoyed this piece from Russ on technical debt, and some considerations when thinking about how we can “future-proof” our solutions.
  • The Raspberry Pi 400 was announced recently. My first computer was an Amstrad CPC 464, so I have a real soft spot for jamming computers inside keyboards.
  • I enjoyed this piece from Chris M. Evans on hybrid storage, and what it really means nowadays.
  • Working from home a bit this year? Me too. Tom wrote a great article on some of the security challenges associated with the new normal.
  • Everyone has a quadrant nowadays, and Zerto has found itself in another one recently. You can read more about that here.
  • Working with VMware Cloud Director and wanting to build a custom theme? Check out this article.

NetApp And The StorageGRID Evolution

Disclaimer: I recently attended Storage Field Day 19.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

NetApp recently presented at Storage Field Day 19. You can see videos of the presentation here, and download my rough notes from here.

 

StorageGRID

If you haven’t heard of it before, StorageGRID is NetApp’s object storage platform. It offers a lot of the features you’d expect from an object storage platform. The latest version of the offering, 11.3, was released a little while ago, and includes a number of enhancements, as well as some new hardware models.

Workloads Are Changing

Object storage has been around a bit longer than you might think, and its capabilities and use cases have changed over that time. Some of the newer object workloads don’t just need a scale out bucket to store old archive data. Instead, they want more performance and flexibility.

Higher performance

  • Ingest / Retrieve
  • Delete

Flexibility

  • Support mixed workloads and multiple tenants
  • Granular data protection policies
    • Optimise data placement and retention
    • Adapt to new requirements and regulations

Agility / Simplicity

  • Leverage resources across multiple clouds – Move data to and from public cloud
  • Open standards for data portability
  • Low touch operations

 

New Hardware

SG1000

  • Load Balancer
  • Can run Admin node
Description: Compute Appliance – Gateway Node

Performance: High performance load balancer and optional Admin Node function

Key Features:

  • 1U
  • Dual-socket Intel platform
  • 768GB memory
  • Two dual-port 100GbE Mellanox NICs (10/25/40/100GbE)
  • Dual 1GBase-T ports for management
  • Redundant power and cooling
  • Two internal NVMe SSDs

Multi-shelf SG6060

[image courtesy of NetApp]

The SG6060 is mighty dense, offering 2PB in a single node.

SGF6024

[image courtesy of NetApp]

The SGF6024 is an All-Flash Storage Node.

Description: All-Flash 24 SSD Appliance

Performance: High performance, low latency, small object workloads

Key Features:

  • 2U (3U with compute node)
  • 40 2.4 GHz CPU cores (compute node)
  • 192 GB memory (compute node)
  • 4x 10GbE / 4x 25GbE NICs
  • 24 SSD drives

Max capacity: 367.2 TB raw (with 15.3 TB SSDs)

SSD drive support: Non-FDE: 800GB, 3.8TB, 7.6TB, 15.3TB; FIPS: 1.6TB; SED: 3.8TB

 

Architecture

Flexible Deployment Options

  • Appliance-based
  • VMware-based
  • Software only

Storage Nodes

Manages metadata

Manages storage

  • Disk
  • Cloud

Policy Engine

  • Applies policy at ingest
  • Continual data integrity checks
  • Applies new policy if applicable

Minimum 3 storage nodes required per site

Admin Nodes

Admin / Tenant portal

  • Create tenants
  • Define grid configuration
  • Create ILM policies

Audit

  • Granular audit log of tenant actions

Metrics

  • Collect and store metrics via Prometheus

Load balancer

  • Create HA groups for Storage Nodes and optionally Admin portal
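Since the Admin Node collects and stores metrics via Prometheus, it’s worth seeing what Prometheus-style collection looks like in general. The job names and targets below are hypothetical, and this is not StorageGRID’s actual internal configuration (which NetApp manages within the appliance); it just shows the pull-based scrape model Prometheus uses.

```yaml
# Generic Prometheus scrape configuration -- hypothetical targets, shown only
# to illustrate the pull-based collection model the Admin Node relies on.
global:
  scrape_interval: 60s

scrape_configs:
  - job_name: "storage-nodes"
    scheme: https
    static_configs:
      - targets:
          - "storage-node-1.example.internal:9090"
          - "storage-node-2.example.internal:9090"
          - "storage-node-3.example.internal:9090"
```

Each target exposes a metrics endpoint that Prometheus polls on the configured interval, which is what lets the Admin Node chart grid health over time.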

Service Provider Model

Separation between GRID admin and Tenant admin

Grid administration

  • Manages infrastructure
  • Creates data management policies
  • Creates tenant accounts – No data access

Tenant administration

  • Storage User administration
  • Tenant data is isolated by default
  • Use standard S3 IAM and Bucket Policy
  • Leverage multi-cloud Platform Services (Cloud mirror, SNS, ElasticSearch)
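To illustrate the “standard S3 IAM and Bucket Policy” point, here’s a sketch of a bucket policy granting one tenant user read/write access to its own bucket only. The account ID, user, and bucket names are all hypothetical, and StorageGRID tenant account identifiers differ from native AWS account ARNs, so check the StorageGRID S3 documentation for the exact principal format.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "TenantAppReadWrite",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::tenant-account-id:user/app-user" },
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::tenant-bucket",
        "arn:aws:s3:::tenant-bucket/*"
      ]
    }
  ]
}
```

Because tenants are isolated by default, policies like this are how a tenant administrator selectively opens up access within their own slice of the grid.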

 

Thoughts and Further Reading

I’ve been a fan of StorageGRID for some time, and not just because I work at a service provider that sells it as a consumable service. NetApp has a good grasp of what’s required to make an object storage platform do what it needs to do to satisfy most requirements, and it also understands what’s required to ensure that the platform delivers on its promise of reliability and durability. I’m a big fan of the flexible deployment models, and the focus on service providers and multi-tenancy is a big plus.

The new hardware introduced in this update helps remove the requirement for a hypervisor to run admin VMs to keep the whole shooting match going. This is particularly appealing if you really just want to run a storage as a service offering and don’t want to mess about with all that pesky compute. Or you might want to use this as a backup repository for one of the many products that can use it.

NetApp has owned Bycast for around 10 years now, and continues to evolve the StorageGRID platform in terms of resiliency, performance, and capabilities. I’m really quite keen to see what the next 10 years have in store. You can read more about what’s new with StorageGRID 11.3 here.

NetApp Announces New AFF And FAS Models

NetApp recently announced some new storage platforms at INSIGHT 2019. I didn’t attend the conference, but I had the opportunity to be briefed on these announcements recently and thought I’d share some thoughts here.

 

All Flash FAS (AFF) A400

Overview

  • 4U enclosure
  • Replacement for AFF A300
  • Available in two possible configurations:
    • Ethernet: 4x 25Gb Ethernet (SFP28) ports
    • Fibre Channel: 4x 16Gb FC (SFP+) ports
  • Based on latest Intel Cascade Lake processors
  • 25GbE and 16Gb FC host support
  • 100GbE RDMA over Converged Ethernet (RoCE) connectivity to NVMe expansion storage shelves
  • Full 12Gb/sec SAS connectivity expansion storage shelves

It wouldn’t be a storage product announcement without a box shot.

[image courtesy of NetApp]

More Numbers

Each AFF A400 packs some grunt in terms of performance and capacity:

  • 40 CPU cores
  • 256GB RAM
  • Max drives: 480

Aggregates and Volumes

  • Maximum number of volumes: 2,500
  • Maximum aggregate size: 800 TiB
  • Maximum volume size: 100 TiB
  • Minimum root aggregate size: 185 GiB
  • Minimum root volume size: 150 GiB

Other Notes

NetApp is looking to position the A400 as a replacement for the A300 and A320. That said, they will continue to offer the A300. Note that it supports NVMe, but also SAS SSDs – and you can mix them in the same HA pair, same aggregate, and even the same RAID group (if you were so inclined). For those of you looking for MetroCluster support, FC MCC support is targeted for February, with MetroCluster over IP being targeted for the ONTAP 9.8 release.

 

FAS8300 And FAS8700

Overview

  • 4U enclosure
  • Two models available
    • FAS8300
    • FAS8700
  • Available in two possible configurations
    • Ethernet: 4x 25Gb Ethernet (SFP28) ports
    • Unified: 4x 16Gb FC (SFP+) ports

[image courtesy of NetApp]

  • Based on latest Intel Cascade Lake processors
  • Uses NVMe M.2 connection for onboard Flash Cache™
  • 25GbE and 16Gb FC host support
  • Full 12Gbps SAS connectivity expansion storage shelves

Aggregates and Volumes

  • Maximum number of volumes: 2,500
  • Maximum aggregate size: 400 TiB
  • Maximum volume size: 100 TiB
  • Minimum root aggregate size: 185 GiB
  • Minimum root volume size: 150 GiB

Other Notes

The 8300 can do everything the 8200 can do, and more! And it also supports more drives (720 vs 480). The 8700 supports a maximum of 1440 drives.

 

Thoughts And Further Reading

Speeds and feeds announcement posts aren’t always the most interesting things to read, but this one demonstrates that NetApp is continuing to evolve both its AFF and FAS lines, and coupled with improvements in ONTAP 9.7, there’s a lot to like about these new iterations. It looks like there’s enough here to entice customers looking to scale up their array performance. Whilst this adds to the existing portfolio, NetApp is mindful of that, and working on streamlining the portfolio. Shipments are expected to start mid-December.

Midrange storage isn’t always the sexiest thing to read about. But the fact that “midrange” storage now offers up this kind of potential performance is pretty cool. Think back to 5 – 10 years ago, and your bang for buck wasn’t anywhere near like it is now. This is to be expected, given the improvements we’ve seen in processor performance over the last little while, but it’s also important to note that improvements in the software platform are also helping to drive performance improvements across the board.

There have also been some cool enhancements announced with StorageGRID, and NetApp has also announced an “All-SAN” AFF model, with none of the traditional NAS features available. The All-SAN idea had a few pundits scratching their heads, but it makes sense in a way. The market for block-only storage arrays is still in the many billions of dollars worldwide, and NetApp doesn’t yet have a big part of that pie. This is a good way to get into opportunities that it may have been excluded from previously. I don’t think there’s been any suggestion that file or hybrid isn’t the way for them to go, but it is interesting to see this being offered up as a direct competitor to some of the block-only players out there.

I’ve written a bit about NetApp’s cloud vision in the past, as that’s seen quite a bit of evolution in recent times. But that doesn’t mean that they don’t have a good hardware story to tell, and I think it’s reflected in these new product announcements. NetApp has been doing some cool stuff lately. I may have mentioned it before, but NetApp’s been named a leader in the Gartner 2019 Magic Quadrant for Primary Storage. You can read a comprehensive roundup of INSIGHT news over here at Blocks & Files.

Random Short Take #22

Oh look, another semi-regular listicle of random news items that might be of some interest.

  • I was at Pure Storage’s //Accelerate conference last week, and heard a lot of interesting news. This piece from Chris M. Evans on FlashArray//C was particularly insightful.
  • Storage Field Day 18 was a little while ago, but that doesn’t mean that the things that were presented there are no longer of interest. Stephen Foskett wrote a great piece on IBM’s approach to data protection with Spectrum Protect Plus that’s worth a read.
  • Speaking of data protection, it’s not just for big computers. Preston wrote a great article on the iOS recovery process that you can read here. As someone who had to recently recover my phone, I agree entirely with the idea that re-downloading apps from the app store is not a recovery process.
  • NetApp were recently named a leader in the Gartner Magic Quadrant for Primary Storage. Say what you will about the MQ, a lot of folks are still reading this report and using it to help drive their decision-making activities. You can grab a copy of the report from NetApp here. Speaking of NetApp, I’m happy to announce that I’m now a member of the NetApp A-Team. I’m looking forward to doing a lot more with NetApp in terms of both my day job and the blog.
  • Tom has been on a roll lately, and this article on IT hero culture, and this one on celebrity keynote speakers, both made for great reading.
  • VMworld US was a little while ago, but Anthony‘s wrap-up post had some great content, particularly if you’re working a lot with Veeam.
  • WekaIO have just announced some work they’re doing with the Aiden Lab at the Baylor College of Medicine that looks pretty cool.
  • Speaking of analyst firms, this article from Justin over at Forbes brought up some good points about these reports and how some of them are delivered.