NetApp Announces New AFF And FAS Models

NetApp recently announced some new storage platforms at INSIGHT 2019. I didn’t attend the conference, but I had the opportunity to be briefed on these announcements recently and thought I’d share some thoughts here.

 

All Flash FAS (AFF) A400

Overview

  • 4U enclosure
  • Replacement for AFF A300
  • Available in two possible configurations:
    • Ethernet: 4x 25Gb Ethernet (SFP28) ports
    • Fibre Channel: 4x 16Gb FC (SFP+) ports
  • Based on latest Intel Cascade Lake processors
  • 25GbE and 16Gb FC host support
  • 100GbE RDMA over Converged Ethernet (RoCE) connectivity to NVMe expansion storage shelves
  • Full 12Gb/sec SAS connectivity to expansion storage shelves

It wouldn’t be a storage product announcement without a box shot.

[image courtesy of NetApp]

More Numbers

Each AFF A400 packs some grunt in terms of performance and capacity:

  • 40 CPU cores
  • 256GB RAM
  • Max drives: 480

Aggregates and Volumes

  • Maximum number of volumes: 2500
  • Maximum aggregate size: 800 TiB
  • Maximum volume size: 100 TiB
  • Minimum root aggregate size: 185 GiB
  • Minimum root volume size: 150 GiB

Other Notes

NetApp is looking to position the A400 as a replacement for the A300 and A320. That said, they will continue to offer the A300. Note that it supports NVMe, but also SAS SSDs – and you can mix them in the same HA pair, same aggregate, and even the same RAID group (if you were so inclined). For those of you looking for MetroCluster support, FC MCC support is targeted for February, with MetroCluster over IP being targeted for the ONTAP 9.8 release.

 

FAS8300 And FAS8700

Overview

  • 4U enclosure
  • Two models available
    • FAS8300
    • FAS8700
  • Available in two possible configurations
    • Ethernet: 4x 25Gb Ethernet (SFP28) ports
    • Unified: 4x 16Gb FC (SFP+) ports

[image courtesy of NetApp]

  • Based on latest Intel Cascade Lake processors
  • Uses NVMe M.2 connection for onboard Flash Cache™
  • 25GbE and 16Gb FC host support
  • Full 12Gbps SAS connectivity to expansion storage shelves

Aggregates and Volumes

  • Maximum number of volumes: 2500
  • Maximum aggregate size: 400 TiB
  • Maximum volume size: 100 TiB
  • Minimum root aggregate size: 185 GiB
  • Minimum root volume size: 150 GiB

Other Notes

The FAS8300 can do everything the FAS8200 can do, and more, including support for more drives (720 vs 480). The FAS8700 supports a maximum of 1440 drives.

 

Thoughts And Further Reading

Speeds and feeds announcement posts aren’t always the most interesting things to read, but this one demonstrates that NetApp is continuing to evolve both its AFF and FAS lines. Coupled with the improvements in ONTAP 9.7, there’s a lot to like about these new iterations, and there looks to be enough here to entice customers looking to scale up their array performance. Whilst this announcement adds to the existing line-up, NetApp is mindful of that and is working on streamlining the portfolio. Shipments are expected to start mid-December.

Midrange storage isn’t always the sexiest thing to read about. But the fact that “midrange” storage now offers up this kind of potential performance is pretty cool. Think back 5 – 10 years, and the bang for buck was nowhere near what it is now. This is to be expected, given the improvements we’ve seen in processor performance over the last little while, but it’s also important to note that improvements in the software platform are helping to drive performance improvements across the board.

There have also been some cool enhancements announced with StorageGRID, and NetApp has also announced an “All-SAN” AFF model, with none of the traditional NAS features available. The All-SAN idea had a few pundits scratching their heads, but it makes sense in a way. The market for block-only storage arrays is still in the many billions of dollars worldwide, and NetApp doesn’t yet have a big part of that pie. This is a good way to get into opportunities that it may have been excluded from previously. I don’t think there’s been any suggestion that file or hybrid isn’t the way for them to go, but it is interesting to see this being offered up as a direct competitor to some of the block-only players out there.

I’ve written a bit about NetApp’s cloud vision in the past, as that’s seen quite a bit of evolution in recent times. But that doesn’t mean that they don’t have a good hardware story to tell, and I think it’s reflected in these new product announcements. NetApp has been doing some cool stuff lately. I may have mentioned it before, but NetApp’s been named a leader in the Gartner 2019 Magic Quadrant for Primary Storage. You can read a comprehensive roundup of INSIGHT news over here at Blocks & Files.

NetApp Wants You To See The Whole Picture

Disclaimer: I recently attended Tech Field Day 19.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

NetApp recently presented at Tech Field Day 19. You can see videos of their presentation here, and download my rough notes from here.

 

Management or Monitoring?

James Holden (Director, Cloud Analytics) delivered what I think was a great presentation on NetApp Cloud Insights. Early on he made the comment that “[w]e’re as read-only as we possibly can be. Being actionable puts you in a conversation where you’re doing something with the infrastructure that may not be appropriate.” It’s a comment that resonated with me, particularly as I’ve been on both sides of the infrastructure management and monitoring fence (yes, I know, it sounds like a weird fence – just go with it). I remember vividly providing feedback to vendors that I wanted their fancy single pane of glass monitoring solution to give me more management capabilities as well. And while they were at it, it would be great if they could develop software that would automagically fix issues in my environment as they arose.

But do you want your cloud monitoring tools to really have that much control over your environment? Sure, there’s a lot of benefit to be had deploying solutions that can reduce the stick time required to keep things running smoothly. But I also like the idea that the software won’t just dive in and fix what it perceives as errors in an environment, based on a bunch of pre-canned constraints developed by people who may or may not have a good grip on what’s really happening in these types of environments.

Keep Your Cloud Happy

So what can you do with Cloud Insights? As it turns out, all kinds of stuff, including cost optimisation. It doesn’t always sound that cool, but customers are frequently concerned with the cost of their cloud investment. What they get with Cloud Insights is:

Understanding

  • What were my costs over the last few months?
  • What’s my current month’s running cost?
  • What’s the cost broken down by AWS service, account, and region?
  • Does it meet the budget?

Analysis

  • Real-time cost analysis to alert on sudden rises in cost
  • Projected cost over a period of time

Optimisation

  • Save costs by using “reserved instances”
  • Right-sizing compute resources
  • Remove waste: idle EC2 instances, unattached EBS volumes, unused reserved instances (see the sketch below)
  • Spot instance use
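
To make the waste and cost items above a little more concrete, here’s a minimal Python sketch of the sort of checks a cost-focused monitoring tool runs under the covers. To be clear, this is not Cloud Insights code or its API; it’s a hypothetical example that assumes the boto3 SDK, working AWS credentials, and Cost Explorer enabled on the account, and the 1.5x “sudden rise” threshold is made up for illustration.

```python
"""Hypothetical cost checks. This is NOT Cloud Insights code; it's just an
illustration of the questions above, run with boto3 against one AWS account."""
import boto3
from datetime import date, timedelta

ce = boto3.client("ce")    # Cost Explorer
ec2 = boto3.client("ec2")

today = date.today()
month_start = today.replace(day=1)  # note: this sketch breaks on the 1st of the month

# Understanding: current month's spend, broken down by AWS service.
by_service = ce.get_cost_and_usage(
    TimePeriod={"Start": month_start.isoformat(), "End": today.isoformat()},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)
for group in by_service["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    print(f"{service}: ${amount:,.2f}")

# Analysis: crude "sudden rise" alert, comparing yesterday's spend with the
# average of the previous seven days (the 1.5x threshold is arbitrary).
daily = ce.get_cost_and_usage(
    TimePeriod={"Start": (today - timedelta(days=8)).isoformat(),
                "End": today.isoformat()},
    Granularity="DAILY",
    Metrics=["UnblendedCost"],
)
spend = [float(r["Total"]["UnblendedCost"]["Amount"])
         for r in daily["ResultsByTime"]]
baseline = sum(spend[:-1]) / len(spend[:-1])
if spend[-1] > baseline * 1.5:
    print(f"Cost alert: ${spend[-1]:,.2f} yesterday vs ${baseline:,.2f} 7-day average")

# Optimisation: unattached ("available") EBS volumes are capacity you pay
# for that nothing is using.
orphans = ec2.describe_volumes(
    Filters=[{"Name": "status", "Values": ["available"]}]
)["Volumes"]
for vol in orphans:
    print(f"Unattached EBS volume {vol['VolumeId']} ({vol['Size']} GiB)")
```

The appeal of a product like Cloud Insights is that checks along these lines (and far more sophisticated ones) come pre-built and run across every account and region, rather than living in a pile of scripts someone has to maintain.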

There are a heap of other features, including:

  • Alerting and impact analysis; and
  • Forensic analysis.

It’s all wrapped up in an alarmingly simple SaaS solution, meaning quick deployment and faster time to value.

The Full Picture

One of my favourite bits of the solution though is that NetApp are striving to give you access to the full picture:

  • There are application services running in the environment; and
  • There are operating systems and hardware underneath.

“The world is not just VMs on compute with backend storage”, and NetApp have worked hard to ensure that the likes of microservices are also supported.
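
As a toy illustration of what “more than just VMs” means in practice, the hypothetical sketch below simply lists the pods running in a Kubernetes cluster and the nodes hosting them, which is the kind of container-level inventory a full-stack view has to line up against the VMs, operating systems, and storage underneath. It assumes the kubernetes Python client and a working kubeconfig, and says nothing about how Cloud Insights actually collects this data.

```python
# Hypothetical example only: enumerate pods and the nodes they run on,
# i.e. the container layer sitting above the VMs, operating systems, and
# storage that a monitoring tool also has to see.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() inside a cluster
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    print(f"{pod.metadata.namespace}/{pod.metadata.name} -> node {pod.spec.node_name}")
```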

 

Thoughts and Further Reading

One of the recurring themes of Tech Field Day 19 was that of management and monitoring. When you really dig into the subject, every vendor has a different take on what can be achieved through software. And it’s clear that every customer also has an opinion on what they want to achieve with their monitoring and management solutions. Some folks are quite keen for their monitoring solutions to take action as events arise to resolve infrastructure issues. Some people just want to be alerted about the problem and have a human intervene. And some enterprises just want an easy way to report to their C-level what they’re spending their money on. With all of these competing requirements, it’s easy to see how I’ve ended up working in enterprises running 10 different solutions to monitor infrastructure. Those same enterprises often had little idea what the money was being spent on, and had a large team of operations staff dealing with issues that either weren’t reported by the tools or got buried in someone’s inbox.

IT operations has been a hard nut to crack for a long time, and it’s not always the fault of the software vendors that it isn’t improving. It’s not just about generating tonnes of messages that no-one will read. It’s about doing something with the data that people can derive value from. That said, I think NetApp’s solution is a solid attempt at providing a useful platform to deliver on some pretty important requirements for the modern enterprise. I really like the holistic view they’ve taken when it comes to monitoring all aspects of the infrastructure, and the insights they can deliver should prove invaluable to organisations struggling with the myriad of moving parts that make up their (private and public) cloud footprint. If you’d like to know more, you can access the data sheet here, and the documentation is hosted here.

NetApp And The Space In Between

Disclaimer: I recently attended Storage Field Day 18.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

NetApp recently presented at Storage Field Day 18. You can see their videos from Storage Field Day 18 here, and download a PDF copy of my rough notes from here.

 

Bye, Dave

We were lucky enough to have Dave Hitz (now “Founder Emeritus” at NetApp) spend time with us on his last day in the office. I’ve only met him a few times but I’ve always enjoyed listening to his perspectives on what’s happening in the industry.

Cloud First?

In a previous life I worked in a government department architecting storage and virtualisation solutions for a variety of infrastructure scenarios. The idea, generally speaking, was that those solutions would solve particular business problems, or at least help to improve the processes to resolve those problems. At some point, probably late 2008 or early 2009, we started to talk about developing a “Cloud First” architecture policy, with the idea being that we would resolve to adopt cloud technologies where we could, and reduce our reliance on on-premises solutions as time passed. The beauty of working in enterprise environments is that things can take an awfully long time to happen, so that policy didn’t really come into effect until some years later.

So what does cloud first really mean? It’s possibly not as straightforward as having a “virtualisation first” policy. With the virtualisation first approach, there was a simple qualification process we undertook to determine whether a particular workload was suited to run on our virtualisation platform. This involved all the standard stuff, like funding requirements, security constraints, anticipated performance needs, and licensing concerns. We then pushed the workload one of two ways. With cloud though, there are a few more ways you can skin the cat, and it’s becoming more obvious to me that cloud means different things to different people. Some people want to push workloads to the cloud because they have a requirement to reduce their capital expenditure. Some people have to move to cloud because the CIO has determined that there needs to be a reduction in the workforce managing infrastructure activities. Some people go to cloud because they saw a cool demo at a technology conference. Some people go to cloud because their peers in another government department told them it would be easy to do. The common thread is that “people’s paths to the cloud can be so different”.

Can your workload even run in the cloud? Hitz gave us a great example of some stuff that just can’t (a printing press). The printing press needs to pump out jobs at a certain time of the day, every day. It’s not necessarily going to benefit from elastic scalability for its compute workload; the systems driving the presses would likely run a static workload.

Should it run in the cloud?

It’s a good question to ask. Most of the time, I’d say the answer is yes. This isn’t just because I work for a telco selling cloud products. There are a tonne of benefits to be had in running various, generic workloads in the cloud. Hitz suggests, though, that the “should it” question is a corporate strategy question, and I think he’s spot on. When you embed “cloud first” in your infrastructure architecture, you’re potentially impacting a bunch of stuff outside of infrastructure architecture, including financial models, workforce management, and corporate security postures. It doesn’t have to be a big deal, but it’s something that people sometimes don’t think about. And just because you start with that as your mantra doesn’t mean you need to end up in cloud.

Does It Feel Cloudy?

Cloudy? It’s my opinion that NetApp’s cloud story is underrated. But, as Hitz noted, they’ve had the occasional misstep. When they first introduced Cloud ONTAP, Anthony Lye said it “didn’t smell like cloud”; rather, Hitz told us, Lye said it “feels like a product for storage administrators”. Cloudy people don’t want that, and they don’t want to talk to storage administrators. Some cloudy people were formerly storage folks, and some have never had the misfortune of managing over-provisioned midrange arrays at scale. Cloud comes in all different flavours, but it’s clear that just shoving a traditional on-premises product on a public cloud provider’s infrastructure isn’t really as cloudy as we’d like to think.

 

Bridging The Gap

NetApp are now focused on “finding the space between the old and the new, and understanding that you’ll have both for a long time”, and that’s the space the company is working in moving forward. They’re not working solely on cloud solutions, and they have no plans to ditch their on-premises portfolio. Indeed, as Hitz noted in his presentation, “having good cloudy solutions will help them gain share in on-premises footprint”. It’s a good strategy, as the on-premises market will be around for some time to come (do you like how vague that is?). It’s been my belief for some time that companies, like NetApp, that can participate effectively in both the on-premises and cloud markets will be successful.

 

Thoughts and Further Reading

So why did I clumsily paraphrase a How To Destroy Angels song title and ramble on about the good old days of my career in this article instead of waxing lyrical about Charlotte Brooks’s presentation on NetApp Data Availability Services? I’m not exactly sure. I do recommend checking out Charlotte’s demo and presentation, because she’s really quite good at getting the message across, and NDAS looks pretty interesting.

Perhaps I spent the time focusing on the “cloud first” conversation because it was Dave Hitz, and it’s likely the last time I’ll see him presenting in this kind of forum. But whether it was Dave or not, conversations like this one are important, in my opinion. It often feels like we’re putting the technology ahead of the why. I’m a big fan of cloud first, but I’m an even bigger fan of people understanding the impact that their technology decisions can have on the business they’re working for. It’s nice to see a vendor who can comfortably operate on both sides of the equation having this kind of conversation, and I think it’s one that more businesses need to be having with their vendors and their internal staff.

NetApp United – Happy To Be Part Of The Team

 

NetApp recently announced the 2018 list of NetApp United members and I’m happy to see I’m on the list. If you’re not familiar with the NetApp United program, it’s “NetApp’s global influencer program. Founded in 2017, NetApp United is a community of 140+ individuals united by their passion for great technology and the desire to share their expertise with the world”. One of the nice things about it is the focus on inclusiveness, community and knowledge sharing. I’m doing a lot more with NetApp than I have in the past and I’m really looking forward to the year ahead from both the NetApp United perspective and the broader NetApp view. You can read the announcement here and view the list of members. Chan also did a nice little write-up you can read here. And while United isn’t a football team, I will leave you with this great quote from the inimitable Eric Cantona – “When the seagulls follow the trawler, it is because they think sardines will be thrown into the sea”.