NetApp Announces New AFF And FAS Models

NetApp recently announced some new storage platforms at INSIGHT 2019. I didn’t attend the conference, but I had the opportunity to be briefed on these announcements recently and thought I’d share some thoughts here.

 

All Flash FAS (AFF) A400

Overview

  • 4U enclosure
  • Replacement for AFF A300
  • Available in two possible configurations:
    • Ethernet: 4x 25Gb Ethernet (SFP28) ports
    • Fibre Channel: 4x 16Gb FC (SFP+) ports
  • Based on latest Intel Cascade Lake processors
  • 25GbE and 16Gb FC host support
  • 100GbE RDMA over Converged Ethernet (RoCE) connectivity to NVMe expansion storage shelves
  • Full 12Gb/s SAS connectivity to expansion storage shelves

It wouldn’t be a storage product announcement without a box shot.

[image courtesy of NetApp]

More Numbers

Each AFF A400 packs some grunt in terms of performance and capacity:

  • 40 CPU cores
  • 256GB RAM
  • Max drives: 480

Aggregates and Volumes

  • Maximum number of volumes: 2500
  • Maximum aggregate size: 800 TiB
  • Maximum volume size: 100 TiB
  • Minimum root aggregate size: 185 GiB
  • Minimum root volume size: 150 GiB

Other Notes

NetApp is looking to position the A400 as a replacement for the A300 and A320. That said, they will continue to offer the A300. Note that it supports both NVMe and SAS SSDs – and you can mix them in the same HA pair, same aggregate, and even the same RAID group (if you were so inclined). For those of you looking for MetroCluster support, FC MCC support is targeted for February, with MetroCluster over IP being targeted for the ONTAP 9.8 release.

 

FAS8300 And FAS8700

Overview

  • 4U enclosure
  • Two models available
    • FAS8300
    • FAS8700
  • Available in two possible configurations
    • Ethernet: 4x 25Gb Ethernet (SFP28) ports
    • Unified: 4x 16Gb FC (SFP+) ports

[image courtesy of NetApp]

  • Based on latest Intel Cascade Lake processors
  • Uses NVMe M.2 connection for onboard Flash Cache™
  • 25GbE and 16Gb FC host support
  • Full 12Gb/s SAS connectivity to expansion storage shelves

Aggregates and Volumes

  • Maximum number of volumes: 2500
  • Maximum aggregate size: 400 TiB
  • Maximum volume size: 100 TiB
  • Minimum root aggregate size: 185 GiB
  • Minimum root volume size: 150 GiB

Other Notes

The 8300 can do everything the 8200 can do, and more! It also supports more drives (720 vs 480). The 8700 supports a maximum of 1440 drives.

 

Thoughts And Further Reading

Speeds and feeds announcement posts aren’t always the most interesting things to read, but this one demonstrates that NetApp is continuing to evolve both its AFF and FAS lines, and coupled with improvements in ONTAP 9.7, there’s a lot to like about these new iterations. It looks like there’s enough here to entice customers looking to scale up their array performance. Whilst these models add to an already broad portfolio, NetApp is mindful of this and is working on streamlining the lineup. Shipments are expected to start mid-December.

Midrange storage isn’t always the sexiest thing to read about. But the fact that “midrange” storage now offers up this kind of potential performance is pretty cool. Think back to 5 – 10 years ago, and your bang for buck wasn’t anywhere near what it is now. This is to be expected, given the improvements we’ve seen in processor performance over the last little while, but it’s also important to note that improvements in the software platform are also helping to drive performance improvements across the board.

There have also been some cool enhancements announced with StorageGRID, and NetApp has also announced an “All-SAN” AFF model, with none of the traditional NAS features available. The All-SAN idea had a few pundits scratching their heads, but it makes sense in a way. The market for block-only storage arrays is still in the many billions of dollars worldwide, and NetApp doesn’t yet have a big part of that pie. This is a good way to get into opportunities that it may have been excluded from previously. I don’t think there’s been any suggestion that file or hybrid isn’t the way for them to go, but it is interesting to see this being offered up as a direct competitor to some of the block-only players out there.

I’ve written a bit about NetApp’s cloud vision in the past, as that’s seen quite a bit of evolution in recent times. But that doesn’t mean that they don’t have a good hardware story to tell, and I think it’s reflected in these new product announcements. NetApp has been doing some cool stuff lately. I may have mentioned it before, but NetApp’s been named a leader in the Gartner 2019 Magic Quadrant for Primary Storage. You can read a comprehensive roundup of INSIGHT news over here at Blocks & Files.

Random Short Take #22

Oh look, another semi-regular listicle of random news items that might be of some interest.

  • I was at Pure Storage’s //Accelerate conference last week, and heard a lot of interesting news. This piece from Chris M. Evans on FlashArray//C was particularly insightful.
  • Storage Field Day 18 was a little while ago, but that doesn’t mean that the things that were presented there are no longer of interest. Stephen Foskett wrote a great piece on IBM’s approach to data protection with Spectrum Protect Plus that’s worth a read.
  • Speaking of data protection, it’s not just for big computers. Preston wrote a great article on the iOS recovery process that you can read here. As someone who had to recently recover my phone, I agree entirely with the idea that re-downloading apps from the app store is not a recovery process.
  • NetApp were recently named a leader in the Gartner Magic Quadrant for Primary Storage. Say what you will about the MQ, a lot of folks are still reading this report and using it to help drive their decision-making activities. You can grab a copy of the report from NetApp here. Speaking of NetApp, I’m happy to announce that I’m now a member of the NetApp A-Team. I’m looking forward to doing a lot more with NetApp in terms of both my day job and the blog.
  • Tom has been on a roll lately; this article on IT hero culture and this one on celebrity keynote speakers both made for great reading.
  • VMworld US was a little while ago, but Anthony‘s wrap-up post had some great content, particularly if you’re working a lot with Veeam.
  • WekaIO have just announced some work they’re doing with the Aiden Lab at the Baylor College of Medicine that looks pretty cool.
  • Speaking of analyst firms, this article from Justin over at Forbes brought up some good points about these reports and how some of them are delivered.

NetApp Wants You To See The Whole Picture

Disclaimer: I recently attended Tech Field Day 19.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

NetApp recently presented at Tech Field Day 19. You can see videos of their presentation here, and download my rough notes from here.

 

Management or Monitoring?

James Holden (Director, Cloud Analytics) delivered what I think was a great presentation on NetApp Cloud Insights. Early on he made the comment that “[w]e’re as read-only as we possibly can be. Being actionable puts you in a conversation where you’re doing something with the infrastructure that may not be appropriate.” It’s a comment that resonated with me, particularly as I’ve been on both sides of the infrastructure management and monitoring fence (yes, I know, it sounds like a weird fence – just go with it). I remember vividly providing feedback to vendors that I wanted their fancy single pane of glass monitoring solution to give me more management capabilities as well. And while they were at it, it would be great if they could develop software that would automagically fix issues in my environment as they arose.

But do you want your cloud monitoring tools to really have that much control over your environment? Sure, there’s a lot of benefit to be had in deploying solutions that can reduce the stick time required to keep things running smoothly, but I also like the idea that the software won’t just dive in and fix what it perceives as errors in an environment based on a bunch of pre-canned constraints that have been developed by people that may or may not always have a good grip on what’s really happening in these types of environments.

Keep Your Cloud Happy

So what can you do with Cloud Insights? As it turns out, all kinds of stuff, including cost optimisation. It doesn’t always sound that cool, but customers are frequently concerned with the cost of their cloud investment. What they get with Cloud Insights is:

Understanding

  • What was my cost over the last few months?
  • What’s my current month’s running cost?
  • How does cost break down by AWS service, account, and region? (see the sketch after this list)
  • Does it meet the budget?
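
As an aside, if you wanted to pull a similar per-service breakdown yourself, the AWS Cost Explorer API makes it reasonably straightforward. Here’s a minimal sketch using boto3; it assumes standard AWS credentials and a default region are configured, and it’s purely an illustration of the idea rather than anything to do with how Cloud Insights gathers its data.

```python
import boto3
from datetime import date, timedelta

# Cost Explorer client - assumes credentials and a default region are configured.
ce = boto3.client("ce")

end = date.today()
start = end - timedelta(days=90)  # roughly "the last few months"

response = ce.get_cost_and_usage(
    TimePeriod={"Start": start.isoformat(), "End": end.isoformat()},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

# Print a simple month-by-month, per-service cost breakdown.
for period in response["ResultsByTime"]:
    print(period["TimePeriod"]["Start"])
    for group in period["Groups"]:
        service = group["Keys"][0]
        cost = float(group["Metrics"]["UnblendedCost"]["Amount"])
        print(f"  {service}: ${cost:,.2f}")
```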

Analysis

  • Real-time cost analysis to alert on sudden rises in cost
  • Projected cost over a period of time

Optimisation

  • Save costs by using “reserved instances”
  • Right sizing compute resources
  • Remove waste: idle EC2 instances, unattached EBS volumes, unused reserved instances (see the sketch after this list)
  • Spot instance use
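
The waste removal piece can also be approximated with a few API calls. The sketch below (again assuming boto3 and standard AWS credentials) finds unattached EBS volumes and stopped EC2 instances; it uses “stopped” as a crude stand-in for “idle”, since spotting genuinely idle instances would need CloudWatch metrics. It’s an illustration of the concept only, not how Cloud Insights does it.

```python
import boto3

ec2 = boto3.client("ec2")

# Unattached EBS volumes - a status of "available" means the volume
# isn't attached to any instance but is still being billed.
volumes = ec2.describe_volumes(
    Filters=[{"Name": "status", "Values": ["available"]}]
)
for vol in volumes["Volumes"]:
    print(f"Unattached volume {vol['VolumeId']}: {vol['Size']} GiB")

# Stopped instances - not a perfect proxy for "idle", but a useful
# starting point for a waste review.
reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["stopped"]}]
)
for res in reservations["Reservations"]:
    for inst in res["Instances"]:
        print(f"Stopped instance {inst['InstanceId']} ({inst['InstanceType']})")
```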

There are a heap of other features, including:

  • Alerting and impact analysis; and
  • Forensic analysis.

It’s all wrapped up in an alarmingly simple SaaS solution, meaning quick deployment and faster time to value.

The Full Picture

One of my favourite bits of the solution though is that NetApp are striving to give you access to the full picture:

  • There are application services running in the environment; and
  • There are operating systems and hardware underneath.

“The world is not just VMs on compute with backend storage”, and NetApp have worked hard to ensure that the likes of microservices are also supported.

 

Thoughts and Further Reading

One of the recurring themes of Tech Field Day 19 was that of management and monitoring. When you really dig into the subject, every vendor has a different take on what can be achieved through software. And it’s clear that every customer also has an opinion on what they want to achieve with their monitoring and management solutions. Some folks are quite keen for their monitoring solutions to take action as events arise to resolve infrastructure issues. Some people just want to be alerted about the problem and have a human intervene. And some enterprises just want an easy way to report to their C-level what they’re spending their money on. With all of these competing requirements, it’s easy to see how I’ve ended up working in enterprises running 10 different solutions to monitor infrastructure. Those same enterprises had little idea what the money was being spent on, and had a large team of operations staff dealing with issues that weren’t always reported by the tools, or that got buried in someone’s inbox.

IT operations has been a hard nut to crack for a long time, and it’s not always the fault of the software vendors that it isn’t improving. It’s not just about generating tonnes of messages that no-one will read. It’s about doing something with the data that people can derive value from. That said, I think NetApp’s solution is a solid attempt at providing a useful platform to deliver on some pretty important requirements for the modern enterprise. I really like the holistic view they’ve taken when it comes to monitoring all aspects of the infrastructure, and the insights they can deliver should prove invaluable to organisations struggling with the myriad of moving parts that make up their (private and public) cloud footprint. If you’d like to know more, you can access the data sheet here, and the documentation is hosted here.

NetApp And The Space In Between

Disclaimer: I recently attended Storage Field Day 18.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

NetApp recently presented at Storage Field Day 18. You can see their videos from Storage Field Day 18 here, and download a PDF copy of my rough notes from here.

 

Bye, Dave

We were lucky enough to have Dave Hitz (now “Founder Emeritus” at NetApp) spend time with us on his last day in the office. I’ve only met him a few times but I’ve always enjoyed listening to his perspectives on what’s happening in the industry.

Cloud First?

In a previous life I worked in a government department architecting storage and virtualisation solutions for a variety of infrastructure scenarios. The idea, generally speaking, was that those solutions would solve particular business problems, or at least help to improve the processes to resolve those problems. At some point, probably late 2008 or early 2009, we started to talk about developing a “Cloud First” architecture policy, with the idea being that we would resolve to adopt cloud technologies where we could, and reduce our reliance on on-premises solutions as time passed. The beauty of working in enterprise environments is that things can take an awfully long time to happen, so that policy didn’t really come into effect until some years later.

So what does cloud first really mean? It’s possibly not as straightforward as having a “virtualisation first” policy. With the virtualisation first approach, there was a simple qualification process we undertook to determine whether a particular workload was suited to run on our virtualisation platform. This involved all the standard stuff, like funding requirements, security constraints, anticipated performance needs, and licensing concerns. We then pushed the workload one of two ways. With cloud though, there are a few more ways you can skin the cat, and it’s becoming more obvious to me that cloud means different things to different people. Some people want to push workloads to the cloud because they have a requirement to reduce their capital expenditure. Some people have to move to cloud because the CIO has determined that there needs to be a reduction in the workforce managing infrastructure activities. Some people go to cloud because they saw a cool demo at a technology conference. Some people go to cloud because their peers in another government department told them it would be easy to do. The common thread is that “people’s paths to the cloud can be so different”.

Can your workload even run in the cloud? Hitz gave us a great example of some stuff that just can’t (a printing press). The printing press needs to pump out jobs at a certain time of the day every day. It’s not going to necessarily benefit from elastic scalability for its compute workload. The systems driving the presses would likely run a static workload.

Should it run in the cloud?

It’s a good question to ask. Most of the time, I’d say the answer is yes. This isn’t just because I work for a telco selling cloud products. There are a tonne of benefits to be had in running various, generic workloads in the cloud. Hitz suggests, though, that the “should it” question is a corporate strategy question, and I think he’s spot on. When you embed “cloud first” in your infrastructure architecture, you’re potentially impacting a bunch of stuff outside of infrastructure architecture, including financial models, workforce management, and corporate security postures. It doesn’t have to be a big deal, but it’s something that people sometimes don’t think about. And just because you start with that as your mantra, doesn’t mean you need to end up in cloud.

Does It Feel Cloudy?

Cloudy? It’s my opinion that NetApp’s cloud story is underrated. But, as Hitz noted, they’ve had the occasional misstep. When they first introduced Cloud ONTAP, Anthony Lye said it “didn’t smell like cloud”. Instead, Hitz told us he said it “feels like a product for storage administrators”. Cloudy people don’t want that, and they don’t want to talk to storage administrators. Some cloudy people were formerly storage folks, and some have never had the misfortune of managing over-provisioned midrange arrays at scale. Cloud comes in all different flavours, but it’s clear that just shoving a traditional on-premises product on a public cloud provider’s infrastructure isn’t really as cloudy as we’d like to think.

 

Bridging The Gap

NetApp are focused now on “finding the space between the old and the new, and understanding that you’ll have both for a long time”, and that’s what they’re concentrating on moving forward. They’re not just working on cloud-only solutions, and they have no plans to ditch their on-premises portfolio. Indeed, as Hitz noted in his presentation, “having good cloudy solutions will help them gain share in on-premises footprint”. It’s a good strategy, as the on-premises market will be around for some time to come (do you like how vague that is?). It’s been my belief for some time that companies, like NetApp, that can participate in both the on-premises and cloud market effectively will be successful.

 

Thoughts and Further Reading

So why did I clumsily paraphrase a How To Destroy Angels song title and ramble on about the good old days of my career in this article instead of waxing lyrical about Charlotte Brooks’s presentation on NetApp Data Availability Services? I’m not exactly sure. I do recommend checking out Charlotte’s demo and presentation, because she’s really quite good at getting the message across, and NDAS looks pretty interesting.

Perhaps I spent the time focusing on the “cloud first” conversation because it was Dave Hitz, and it’s likely the last time I’ll see him presenting in this kind of forum. But whether it was Dave or not, conversations like this one are important, in my opinion. It often feels like we’re putting the technology ahead of the why. I’m a big fan of cloud first, but I’m an even bigger fan of people understanding the impact that their technology decisions can have on the business they’re working for. It’s nice to see a vendor who can comfortably operate on both sides of the equation having this kind of conversation, and I think it’s one that more businesses need to be having with their vendors and their internal staff.

Random Short Take #8

Here are a few links to some news items and other content that might be useful. Maybe.

NetApp Announces NetApp ONTAP AI

As a member of NetApp United, I had the opportunity to sit in on a briefing from Mike McNamara about NetApp‘s recently announced AI offering, the snappily named “NetApp ONTAP AI”. I thought I’d provide a brief overview here and share some thoughts.

 

The Announcement

So what is NetApp ONTAP AI? It’s a “proven” architecture delivered via NetApp’s channel partners. It comprises compute, storage and networking. Storage is delivered over NFS. The idea is that you can start small and scale out as required.

Hardware

Software

  • NVIDIA GPU Cloud Deep Learning Stack
  • NetApp ONTAP 9
  • Trident, dynamic storage provisioner (see the sketch below)
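
For those unfamiliar with Trident, it’s NetApp’s dynamic storage provisioner for Kubernetes: you register a StorageClass that points at Trident, and persistent volume claims then provision NFS-backed volumes on demand. Here’s a rough sketch using the Kubernetes Python client; the provisioner name and the backendType parameter are assumptions that vary by Trident version and backend configuration, so treat it as illustrative only.

```python
from kubernetes import client, config

config.load_kube_config()  # assumes a working kubeconfig for the cluster

# A StorageClass pointing at Trident. The provisioner string and the
# "backendType" parameter are assumptions - check your Trident docs.
storage_class = client.V1StorageClass(
    metadata=client.V1ObjectMeta(name="ontap-ai-nfs"),
    provisioner="csi.trident.netapp.io",
    parameters={"backendType": "ontap-nas"},
)
client.StorageV1Api().create_storage_class(body=storage_class)

# A claim against that class - Trident provisions the NFS volume on demand.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="training-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteMany"],  # shared access suits multi-node training
        storage_class_name="ontap-ai-nfs",
        resources=client.V1ResourceRequirements(requests={"storage": "1Ti"}),
    ),
)
client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```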

Support

  • Single point of contact support
  • Proven support model

 

[image courtesy of NetApp]

 

Thoughts and Further Reading

I’ve written about NetApp’s Edge to Core to Cloud story before, and this offering certainly builds on the work they’ve done with big data and machine learning solutions. Artificial Intelligence (AI) and Machine Learning (ML) solutions are like big data from five years ago, or public cloud: you can’t go to any industry event, or take a briefing from an infrastructure vendor, without hearing all about how they’re delivering solutions focused on AI. What you do with the gear once you’ve bought one of these spectacularly ugly boxes is up to you, obviously, and I don’t want to get into whether some of these solutions are really “AI” or not (hint: they’re usually not). While the vendors are gushing breathlessly about how AI will conquer the world, if you tone down the hyperbole a bit, there are still some fascinating problems being solved with these kinds of solutions.

I don’t think that every business, right now, will benefit from an AI strategy. As much as the vendors would like to have you buy one of everything, these kinds of solutions are very good at doing particular tasks, most of which are probably not in your core remit. That’s not to say that you won’t benefit in the very near future from some of the research and development being done in this area. And it’s for this reason that I think architectures like this one, and those from NetApp’s competitors, are contributing something significant to the ongoing advancement of these fields.

I also like that this is delivered via channel partners. It indicates, at least at first glance, that AI-focused solutions aren’t simply something you can slap a SKU on and sell hundreds of. Partners generally have a better breadth of experience across the various hardware, software and services elements and their respective constraints, and will often be in a better position to spend time understanding the problem at hand rather than treating everything as the same problem with one solution. There’s also less chance that the partner’s sales people will have performance accelerators tied to selling one particular line of products. This can be useful when trying to solve problems that are spread across multiple disciplines and business units.

The folks at NVIDIA have made a lot of noise in the AI / ML marketplace lately, and with good reason. They know how to put together blazingly fast systems. I’ll be interested to see how this architecture goes in the marketplace, and whether customers are primarily from the NetApp side of the fence, from the NVIDIA side, or perhaps both. You can grab a copy of the solution brief here, and there’s an AI white paper you can download from here. The real meat and potatoes though, is the reference architecture document itself, which you can find here.

Come And Splash Around In NetApp’s Data Lake

Disclaimer: I recently attended Storage Field Day 15.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

NetApp recently presented at Storage Field Day 15. You can see videos of their presentation here, and download my rough notes from here.

 

You Say Day-ta, I Say Dar-ta

Santosh Rao (Senior Technical Director, Workloads and Ecosystems) took us through some of the early big data platform challenges NetApp are looking to address.

 

Early Generation Big Data Analytics Platform

These were designed to deliver initial analytics solutions and were:

  • Implemented as a proof of concept; and
  • Solved a point project need.

The primary considerations of these solutions were usually cost and agility:

  • The focus was to limit up-front costs and get the system operational quickly; and
  • Scalability, availability, and governance were afterthoughts.

A typical approach to this was to use cloud or commodity infrastructure. This ended up becoming the final architecture. The problem with this approach, according to NetApp, is that it led to unpredictable behaviour as copies manifested. You’d end up with 3-5 replicas of data copied across lines of business and various functions. Not a great situation.

 

Early Generation Analytics Platform Challenges

Other challenges with this architecture included:

  • Unpredictable performance;
  • Inefficient storage utilisation;
  • Media and node failures;
  • Total cost of ownership;
  • Not enterprise ready; and
  • Storage and compute tied (creating imbalance).

 

Next Generation Data Pipeline

So what do we really need from a data pipeline? According to NetApp, the key is “Unified Insights across LoBs and Functions”. By this they mean:

  • A unified enterprise data lake;
  • Federated data sources across the 2nd and 3rd platforms;
  • In-place access to the data pipeline (copy avoidance);
  • Spanned across edge, core and cloud; and
  • Future proofed to allow shifts in architecture.

Another key consideration is the deployment. The first proof of concept is performed by the business unit, but it needs to scale for production use.

  • Scale edge, core and cloud as a single pipeline
  • Predictable availability
  • Governance, data protection, security on data pipeline

This provides for a lower TCO over the life of the solution.

 

Data Pipeline Requirements

We’re not just playing in the core any more, or exclusively in the cloud. This stuff is everywhere. And everywhere you look the requirements differ as well.

Edge

  • Massive data (few TB/device/day)
  • Real-time Edge Analytics / AI
  • Ultra Low Latency
  • Network Bandwidth
  • Smart Data Movement

Core

  • Ultra high IO bandwidth (20 – 200+ GBps)
  • Ultra-low latency (micro – nanosecond)
  • Linear scale (1 – 128 node AI)
  • Overall TCO for 1-100+ PB

Cloud

  • Cloud analytics, AI/DL/ML
  • Consume and not operate
  • Cloud vendor vs on-premises stack
  • Cost-effective archive
  • Need to avoid cloud lock-in

Here’s a picture of what the data pipeline looks like for NetApp.

[Image courtesy of NetApp]

 

NetApp provided the following overview of what the data pipeline looks like for AI / Deep Learning environments. You can read more about that here.

[Image courtesy of NetApp]

 

What Does It All Mean?

NetApp have a lot of tools at their disposal, and a comprehensive vision for meeting the requirements of big data, AI and deep learning workloads from a number of different angles. It’s not just about performance, it’s about understanding where the data needs to be to be considered useful to the business. I think there’s a good story to tell here with NetApp’s Data Fabric, but it felt a little like there remains some integration work to do. Big data, AI and deep learning means different things to different people, and there’s sometimes a reluctance to change the way people do things for the sake of adopting a new product. NetApp’s biggest challenge will be demonstrating the additional value they bring to the table, and the other ways in which they can help enterprises succeed.

NetApp, like some of the other Tier 1 storage vendors, has a broad portfolio of products at its disposal. The Data Fabric play is a big bet on being able to tie this all together in a way that their competitors haven’t managed to do yet. Ultimately, the success of this strategy will rely on NetApp’s ability to listen to customers and continue to meet their needs. As a few companies have found out the hard way, it doesn’t matter how cool you think your idea is, or how technically innovative it is, if you’re not delivering results for the business you’re going to struggle to gain traction in the market. At this stage I think NetApp are in a good place, and hopefully they can stay there by continuing to listen to their existing (and potentially new) customers.

For an alternative perspective, I recommend reading Chin-Fah’s thoughts from Storage Field Day 15 here.

NetApp United – Happy To Be Part Of The Team

 

NetApp recently announced the 2018 list of NetApp United members and I’m happy to see I’m on the list. If you’re not familiar with the NetApp United program, it’s “NetApp’s global influencer program. Founded in 2017, NetApp United is a community of 140+ individuals united by their passion for great technology and the desire to share their expertise with the world”. One of the nice things about it is the focus on inclusiveness, community and knowledge sharing. I’m doing a lot more with NetApp than I have in the past and I’m really looking forward to the year ahead from both the NetApp United perspective and the broader NetApp view. You can read the announcement here and view the list of members. Chan also did a nice little write-up you can read here. And while United isn’t a football team, I will leave you with this great quote from the inimitable Eric Cantona – “When the seagulls follow the trawler, it is because they think sardines will be thrown into the sea“.

Brisbane VMUG – November 2017


   

The November edition of the Brisbane VMUG meeting will be held on Thursday 30th November at the Toobirds Bistro and Bar (127 Creek Street, Brisbane) from 4 – 6:30pm. It’s sponsored by HyTrust and NetApp and promises to be a great afternoon.

Here’s the jam-packed agenda:

  • Refreshments and drinks
  • VMUG Intro (by me)
  • VMware Presentation: vRealize Lifecycle Manager (Michael Francis, VCDX #42)
  • HyTrust Presentation: Regain Control of the Cloud (Kevin Middleton)
  • NetApp Presentation: What’s new @ NetApp? SolidFire, HCI & an irrational love of VVols
  • Skills and Career Progression (Claire O’Dwyer, Sydney VMUG Leader and Recruitment Specialist with FTS Resourcing)
  • Q&A
  • Refreshments and drinks

HyTrust and NetApp have gone to great lengths to make sure this will be a fun and informative session. I’m really looking forward to hearing about HyTrust’s take on protecting virtualised cloud infrastructure and virtual workloads. I’m also interested to hear more about NetApp’s HCI offering, what’s happening with SolidFire and their VVols integration. You can find out more information and register for the event here. I hope to see you there. Also, if you’re interested in sponsoring one of these events, please get in touch with me and I can help make it happen.

The Thing About NetApp HCI Is …

Disclaimer: I recently attended VMworld 2017 – US.  My flights were paid for by ActualTech Media, VMware provided me with a free pass to the conference and various bits of swag, and Tech Field Day picked up my hotel costs. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

You can view the video of NetApp‘s presentation here, and download a copy of my rough notes from here.

 

What’s In A Name?

There’s been some amount of debate about whether NetApp’s HCI offering is really HCI or CI. I’m not going to pick sides in this argument. I appreciate that words mean things and definitions are important, but I’d like to focus more on what NetApp’s offering delivers, rather than whether someone in Tech Marketing made the right decision to call this HCI. Let’s just say they’re closer to HCI than WD is to cloud.

 

Ye Olde Architectures (The HCI Tax)

NetApp spent some time talking about the “HCI Tax” – the overhead of providing various data services with first generation HCI appliances. Gabe touched on the impact of running various iterations of controller VMs, along with the increased memory requirements for services such as deduplication, erasure coding, compression, and encryption. The model for first generation HCI is simple – grow your storage and compute in lockstep as your performance requirements increase. The great thing with this approach is that you can start small and grow your environment as required. The problem with this approach is that you may only need to grow your storage, or you may only need to grow your compute requirement, but not necessarily both. Granted, a number of HCI vendors now offer storage-only nodes to accommodate this requirement, but NetApp don’t think the approach is as polished as it could be. The requirement to add compute as you add storage can also have a financial impact in terms of the money you’ll spend in licensing for CPUs. Whilst one size fits all has its benefits for linear workloads, this approach still has some problems.

 

The New Style?

NetApp suggest that their solution offers the ability to “scale on your terms”. With this you can:

  • Optimise and protect existing investments;
  • Scale storage and compute together or independently; and
  • Eliminate the “HCI Tax”.

Note that only the storage nodes have disks; the compute nodes get blanks. The disks are on the front of the unit and the nodes are stateless. You can’t have different tiers of storage nodes as it’s all one cluster. It’s also BYO switch for connectivity, supporting 10/25Gbps. In terms of scalability, from a storage perspective you can scale as much as SolidFire can nowadays (around 100 nodes), and your compute nodes are limited by vSphere’s maximum configuration.

There are “T-shirt sizes” for implementation, and you can start small with as little as two blocks (2 compute nodes and 4 storage nodes). I don’t believe you mix t-shirt sizes in the same cluster. Makes sense if you think about it for more than a second.

 

Thoughts

Converged and hyper-converged are different things, and I think this post from Nick Howell (in the context of Cohesity as HCI) sums up the differences nicely. However, what was interesting for me during this presentation wasn’t whether or not this qualifies as HCI or not. Rather, it was about NetApp building on the strengths of SolidFire’s storage offering (guaranteed performance with QoS and good scale) coupled with storage / compute independence to provide customers with a solution that seems to tick a lot of boxes for the discerning punter.

Unless you’ve been living under a rock for the last few years, you’ll know that NetApp are quite a different beast to the company founded 25 years ago. The great thing about them (and the other major vendors) entering the already crowded HCI market is that they offer choices that extend beyond the HCI play. For the next few years at least, there are going to be workloads that just may not go so well with HCI. If you’re already a fan of NetApp, chances are they’ll have an alternative solution that will allow you to leverage their capability and still get the outcome you need. Gabe made the excellent point that “[y]ou can’t go from traditional to cloud overnight, you need to evaluate your apps to see where they fit”. This is exactly the same with HCI. I’m looking forward to seeing how they go against the more established HCI vendors in the marketplace, and whether the market responds positively to some of the approaches they’ve taken with the solution.