Intel – It’s About Getting The Right Kind Of Fast At The Edge

Disclaimer: I recently attended Storage Field Day 22.  Some expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Intel recently presented at Storage Field Day 22. You can see videos of the presentation here, and download my rough notes from here.

 

The Problem

A lot of countries have used lockdowns as a way to combat the community transmission of COVID-19. Apparently, this has led to an uptick in the consumption of streaming media services. If you’re somewhat familiar with streaming media services, you’ll understand that your favourite episode of Hogan’s Heroes isn’t being delivered from a giant storage device sitting in the bowels of your streaming media provider’s data centre. Instead, it’s invariably served to your device from a content delivery network (CDN) node located much closer to you.

 

Content Delivery What?

CDNs are not a new concept. The idea is that you have a bunch of web servers geographically distributed delivering content to users who are also geographically distributed. Think of it as a way to cache things closer to your end users. There are many reasons why this can be a good idea. Your content will load faster for users if it resides on servers in roughly the same area as them. Your bandwidth costs are generally a bit cheaper, as you’re not transmitting as much data from your core all the way out to the end user. Instead, those end users are getting the content from something close to them. You can potentially also deliver more versions of content (in terms of resolution) easily. It can also be beneficial in terms of resiliency and availability – an outage on one part of your network, say in Palo Alto, doesn’t need to necessarily impact end users living in Sydney. Cloudflare does a fair bit with CDNs, and there’s a great overview of the technology here.

 

Isn’t All Content Delivery The Same?

Not really. As Intel covered in its Storage Field Day presentation, there are some differences between the performance requirements of video on demand and live-linear streaming CDN solutions.

Live-Linear Edge Cache

Live-linear video streaming is similar to the broadcast model used in television. It’s basically programming content streamed 24/7, rather than stuff that the user has to search for. Several minutes of content are typically cached to accommodate out-of-sync users and pause / rewind activities. You can read a good explanation of live-linear streaming here.

[image courtesy of Intel]

In the example above, Intel Optane PMem was used to address the needs of live-linear streaming.

  • Live-linear workloads consume a lot of memory capacity to maintain a short-lived video buffer.
  • Intel Optane PMem is less expensive than DRAM.
  • Intel Optane PMem has extremely high endurance, to handle frequent overwrite.
  • Flexible deployment options – Memory Mode or App-Direct, consuming zero drive slots.

With this solution, Intel was able to achieve better channel and stream density per server than with DRAM-based solutions.
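App Direct mode is the interesting one here, because the application maps persistent memory directly and flushes writes from user space. Here’s a minimal sketch using PMDK’s libpmem – the mount path, buffer size, and contents are purely illustrative, and it assumes a DAX-capable filesystem sitting on PMem. Link with -lpmem.

```c
#include <libpmem.h>
#include <string.h>

int main(void)
{
    size_t mapped_len;
    int is_pmem;

    /* Map a file on a DAX-mounted PMem filesystem (path is illustrative). */
    char *buf = pmem_map_file("/mnt/pmem0/live-linear.buf", 1 << 20,
                              PMEM_FILE_CREATE, 0666, &mapped_len, &is_pmem);
    if (buf == NULL)
        return 1;

    /* Overwrite the buffer -- the frequent-overwrite pattern live-linear
       caching relies on -- and flush it to media from user space. */
    const char msg[] = "a few seconds of video";
    memcpy(buf, msg, sizeof(msg));
    if (is_pmem)
        pmem_persist(buf, sizeof(msg));   /* CPU cache flush, no syscall */
    else
        pmem_msync(buf, sizeof(msg));     /* fallback on non-PMem media */

    pmem_unmap(buf, mapped_len);
    return 0;
}
```

In Memory Mode, by contrast, none of this code is needed – the PMem simply presents as a big pool of volatile memory to the operating system, with DRAM acting as a cache in front of it.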

Video on Demand (VoD)

VoD providers typically offer a large library of content allowing users to view it at any time (e.g. Netflix and Disney+). VoD servers are a little different to live-linear streaming CDNs. They:

  • Typically require large capacity and drive fanout for performance / failure domains; and
  • Have a read-intensive workload, with typically large IOs.

[image courtesy of Intel]
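To make that workload profile concrete, here’s a rough sketch of the read-intensive, large-I/O pattern a VoD cache node services – plain POSIX C, with the file path and I/O size being my own illustrative choices rather than anything Intel presented.

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    const size_t io_size = 1 << 20;              /* 1MB per I/O */
    int fd = open("/cache/title.seg", O_RDONLY | O_DIRECT);
    if (fd < 0)
        return 1;

    void *buf;
    if (posix_memalign(&buf, 4096, io_size) != 0) /* O_DIRECT needs alignment */
        return 1;

    /* Sequential, large reads -- the read-mostly pattern described above. */
    off_t offset = 0;
    ssize_t n;
    while ((n = pread(fd, buf, io_size, offset)) > 0)
        offset += n;

    free(buf);
    close(fd);
    return 0;
}
```

The O_DIRECT flag bypasses the page cache, which is a reasonable choice when the working set is far bigger than RAM and every miss is going to flash anyway.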

 

Thoughts and Further Reading

I first encountered the magic of CDNs years ago when working in a data centre that hosted some Akamai infrastructure. Windows Server updates were super zippy, and it actually saved me from having to spend a lot of time standing in the cold aisle. Fast forward about 15 years, and CDNs are being used for all kinds of content delivery on the web. With whatever the heck this is in terms of the new normal, folks are putting more and more strain on those CDNs by streaming high-quality, high-bandwidth TV and movie titles into their homes (except in backwards places like Australia). As a result, content providers are constantly searching for ways to tweak the throughput of these CDNs to serve more and more customers, and deliver more bandwidth to those users.

I’ve barely skimmed the surface of how CDNs help providers deliver content more effectively to end users. What I did find interesting about this presentation was that it reinforced the idea that different workloads require different infrastructure solutions to deliver the right outcomes. It sounds simple when I say it like this, but I guess I’ve thought about streaming video CDNs as being roughly the same all over the place. Clearly they aren’t, and it’s not just a matter of jamming some SSDs in 1RU servers and hoping that your content will be delivered faster to punters. It’s important to understand that Intel Optane PMem and Intel Optane 3D NAND can give you different results depending on what you’re trying to do, with PMem arguably giving you better value for money (per GB) than DRAM. There are some great papers on this topic available on the Intel website. You can read more here and here.

Random Short Take #55

Welcome to Random Short Take #55. A few players have worn 55 in the NBA. I wore some Mutombo sneakers in high school, and I enjoy watching Duncan Robinson light it up for the Heat. My favourite ever to wear 55 was “White Chocolate” Jason Williams. Let’s get random.

  • This article from my friend Max around Intel Optane and VMware Cloud Foundation provided some excellent insights.
  • Speaking of friends writing about VMware Cloud Foundation, this first part of a 4-part series from Vaughn makes a compelling case for VCF on FlashStack. Sure, he gets paid to say nice things about the company he works for, but there is plenty of info in here that makes a lot of sense if you’re evaluating which hardware platform pairs well with VCF.
  • Speaking of VMware, if you’re a VCD shop using NSX-V, it’s time to move on to NSX-T. This article from VMware has the skinny.
  • You want an open source version of BMC? Fine, you got it. Who would have thought securing BMC would be a thing? (Yes, I know it should be.)
  • Stuff happens, hard drives fail. Backblaze recently published its drive stats report for Q1. You can read about that here.
  • Speaking of drives, check out this article from Netflix on its Netflix Drive product. I find it amusing that I get more value from Netflix’s tech blog than I do its streaming service, particularly when one is free.
  • The people in my office laugh nervously when I say I hate being in meetings where people feel the need to whiteboard. It’s not that I think whiteboard sessions can’t be valuable, but oftentimes the information on those whiteboards should be documented somewhere and easy to bring up on a screen. But if you find yourself in a lot of meetings and need to start drawing pictures about new concepts or whatever, this article might be of some use.
  • Speaking of office things not directly related to tech, this article from Preston de Guise on interruptions was typically insightful. I loved the “Got a minute?” reference too.

 

Intel Optane – Challenges and Triumphs

Disclaimer: I recently attended Storage Field Day 21.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Intel recently presented at Storage Field Day 21. You can see videos of the presentation here, and download my rough notes from here.

 

Alive and Kicking

Kristie Mann, Sr. Director Products, Intel Optane Group, kicked off the session by telling us that “Intel Optane is alive and well”. I don’t think anyone thought it was really going away, particularly given the effort folks inside Intel have put in to get this product to market. But from a consumer perspective, it’s potentially been a little confusing.

[image courtesy of Intel]

In terms of data centre penetration, it’s been a different story, and taking Optane from idea to reality has been quite a journey. It was also noted that the “strong uptake of PMem in HPC was unexpected”, but no doubt welcome.

 

Learnings

Some of the other learnings that were covered as part of the session were as follows.

Software Really Matters

It’s one thing to come out with some super cool hardware that is absolutely amazing, but it’s quite another to get software support for that hardware. Unfortunately, the hardware doesn’t give you much without the software, no matter how well it performs. While this has been something of a challenge for Optane until recent times, there’s definitely been more noise from the big ISVs about enhanced Optane support.

IaaS Adoption

Adoption in IaaS has not been great, mainly due to some uneven performance. This will only improve as the software support matures. But the IaaS market can be tough for a bunch of reasons. IaaS vendors are trying to do things at a certain price point. That doesn’t mean they’re still making people run VMs on spinning disk (hopefully), but rolling out all-flash support for platforms is something that’s only going to be done when the $/GB makes sense for the providers. You might also have seen in the field that IaaS providers are pretty particular about performance and quality of service, which makes sense when you’re trying to host a whole bunch of different workloads at large scale. So it makes sense that they’d be somewhat cautious about launching new media types on their platforms without running through a whole bunch of performance and integration testing. I’m not saying they’re not going to get there; they just may not be the first cabs off the rank.

Can you spell OEM?

OEM qualifications have been slow for Optane to date. This is key to getting the product out there. Enterprise folks don’t like to buy things until their favourite Tier 1 vendors are offering it as a default option in their server / storage array / fabric switch. If Dell has the Optane Inside sticker (not a real thing, but you know what I mean), the infrastructure architects inside large government entities are more likely to get on board.

Battling The Status Quo

Status quo thinking makes it hard for people to understand that this isn’t just memory or storage. This has been something of a problem for Intel since Optane became a thing. I’m still having conversations with people and running up against significant confusion about the difference between PMem and Optane SSD. I think that’s going to improve as time goes on, but it can make things difficult when it comes to broad market penetration.

Thoughts and Further Reading

I don’t want people reading this to think that I’m down on Intel and what it’s done with Optane. If anything, I’m really into it. I enjoyed the presentation at Storage Field Day 21 tremendously, and not just because my friend Howard was on the panel repping VAST Data. It’s unusual that a vendor as big as Intel would be so frank about some of the challenges that it’s faced with getting new media to market. But I think it’s the willingness to share some of this information that demonstrates how committed Intel is to Optane moving forward. I was lucky enough to speak to Intel Senior Fellow Al Fazio about the Optane journey, and it was clear that there’s a whole lot of innovation and sweat that goes into making a product like this work.

Some folks think that these panel presentations are marketing disguised as a presentation. Invariably, the reference customers are friendly with the company, and you’ll only ever hear good stories. But I think those stories from those customers are still extremely powerful. After all, having a customer jump on a session to tell the world about how good your product has been means you’ve done something right. As a consumer of these products, I find these kinds of testimonials invaluable. Ultimately, products are successful in the market when they serve the market’s needs. From what I can see, Intel Optane is on its way to meeting those needs, and it has a bright future.

Random Short Take #49

Happy new year and welcome to Random Short Take #49. Not a great many players have worn 49 in the NBA (2 as it happens). It gets better soon, I assure you. Let’s get random.

  • Frederic has written a bunch of useful articles about Rubrik. This one on setting up authentication to use Active Directory came in handy recently. I’ll be digging in to some of Rubrik’s multi-tenancy capabilities in the near future, so keep an eye out for that.
  • In more things Rubrik-related, this article by Joshua Stenhouse on fully automating Rubrik EDGE / AIR deployments was great.
  • Speaking of data protection, Chris Colotti wrote this useful article on changing the Cloud Director database IP address. You can check it out here.
  • You want more data protection news? How about this press release from BackupAssist talking about its partnership with Wasabi?
  • Fine, one more data protection article. Six backup and cloud storage tips from Backblaze.
  • Speaking of press releases, WekaIO has enjoyed some serious growth in the last year. Read more about that here.
  • I loved this article from Andrew Dauncey about things that go wrong and learning from mistakes. We’ve all likely got a story about something that went so spectacularly wrong that you only made that mistake once. Or twice at most. It also reminds me of those early days of automated ESX 2.5 builds and building magical installation CDs that would happily zap LUN 0 on FC arrays connected to new hosts. Fun times.
  • Finally, I was lucky enough to talk to Intel Senior Fellow Al Fazio about what’s happening with Optane, how it got to this point, and where it’s heading. You can read the article and check out the video here.

Intel Optane And The DAOS Storage Engine

Disclaimer: I recently attended Storage Field Day 20.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Intel recently presented at Storage Field Day 20. You can see videos of the presentation here, and download my rough notes from here.

 

Intel Optane Persistent Memory

If you’re a diskslinger, you’ve very likely heard of Intel Optane. You may have even heard of Intel Optane Persistent Memory. It’s a little different to Optane SSD, and Intel describes it as “memory technology that delivers a unique combination of affordable large capacity and support for data persistence”. It looks a lot like DRAM, but the capacity is greater, and there’s data persistence across power losses. This all sounds pretty cool, but isn’t it just another form factor for fast storage? Sort of, but the application of the engineering behind the product is where I think it starts to get really interesting.

 

Enter DAOS

Distributed Asynchronous Object Storage (DAOS) is described by Intel as “an open source software-defined scale-out object store that provides high bandwidth, low latency, and high I/O operations per second (IOPS) storage containers to HPC applications”. It’s essentially a software stack built from the ground up to take advantage of the crazy speeds you can achieve with Optane, and at scale. There’s a handy overview of the architecture available on Intel’s website. Traditional object (and other) storage systems haven’t really been built to take advantage of Optane in quite the same way DAOS has.

[image courtesy of Intel]

There are some cool features built into DAOS, including:

  • Ultra-fine grained, low-latency, and true zero-copy I/O
  • Advanced data placement to account for fault domains
  • Software-managed redundancy supporting both replication and erasure code with online rebuild
  • End-to-end (E2E) data integrity
  • Scalable distributed transactions with guaranteed data consistency and automated recovery
  • Dataset snapshot capability
  • Security framework to manage access control to storage pools
  • Software-defined storage management to provision, configure, modify, and monitor storage pools

Exciting? Sure is. There’s also integration with Lustre. The best thing about this is that you can grab it from GitHub under the Apache 2.0 license.

 

Thoughts And Further Reading

Object storage is in its relative infancy when compared to some of the storage architectures out there. It was designed to be highly scalable and generally does a good job of cheap and deep storage at “web scale”. It’s my opinion that object storage becomes even more interesting as a storage solution when you put a whole bunch of really fast storage media behind it. I’ve seen some media companies do this with great success, and there are a few of the bigger vendors out there starting to push the All-Flash object story. Even then, though, many of the more popular object storage systems aren’t necessarily optimised for products like Intel Optane PMem. This is what makes DAOS so interesting – the ability for the storage to fundamentally do what it needs to do at massive scale, and have it go as fast as the media will let it go. You don’t need to worry as much about the storage architecture being optimised for the storage it will sit on, because the folks developing it have access to the team that developed the hardware.

The other thing I really like about this project is that it’s open source. This tells me that Intel are both focused on Optane being successful, and also focused on the industry making the most of the hardware it’s putting out there. It’s a smart move – come up with some super fast media, and then give the market as much help as possible to squeeze the most out of it.

You can grab the admin guide from here, and check out the roadmap here. Intel has plans to release a new version every 6 months, and I’m really looking forward to seeing this thing gain traction. For another perspective on DAOS and Intel Optane, check out David Chapa’s article here.

 

 

Intel’s Form Factor Is A Factor

Disclaimer: I recently attended Storage Field Day 17.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

 

The Intel Optane team recently presented at Storage Field Day 17. You can see their videos from Storage Field Day 17 here, and download a PDF copy of my rough notes from here. I urge you to check out the videos, as there was a heck of a lot of stuff in there. But rather than talk about benchmarks and the SPDK, I’m going to focus on what’s happening with Intel’s approach to storage in terms of the form factor.

 

Of Form Factors And Other Matters Of Import

An Abbreviated History Of Drive Form Factors

Let’s start with a little bit of history to get you going. IBM introduced the first hard drive – the IBM 350 disk storage unit – in 1956. Over time we’ve gone from a variety of big old drives to smaller form factors. I’m not old enough to reminisce about the Winchester drives, but I do remember the 5.25″ drives in the XT. Wikipedia provides as good a place to start as any if you’re interested in knowing more about hard drives. In any case, we now have the following prevailing form factors in use as hard drive storage:

  • 3.5″ drives – still reasonably common in desktop computers and “cheap and deep” storage systems;
  • 2.5″ drives (SFF) – popular in laptops and used as a “dense” form factor for a variety of server and storage solutions;
  • U.2 – mainstream PCIe SSD form factor that has the same dimensions as 2.5″ drives; and
  • M.2 – designed for laptops and tablets.

Challenges

There are a number of challenges associated with the current drive form factors. The most notable of these is the density issue. Drive (and storage) vendors have been struggling for years to try and cram more and more devices into smaller spaces whilst increasing device capacities as well. This has led to problems with cooling, power, and overall reliability. Basically, there’s only so much you can put in 1RU without the whole lot melting.

 

A Ruler? Wait, what?

Intel’s “Ruler” is a ruler-like, long (or short) drive based on the EDSFF (Enterprise and Datacenter Storage Form Factor) specification. There’s a tech brief you can view here. There are a few different versions (basically long and short), and it still leverages NVMe via PCIe.

[image courtesy of Intel]

It’s Denser

You can cram a lot of these things in a 1RU server, as Super Micro demonstrated a few months ago.

  • Up to 32 E1.L 9.5mm drives per 1RU
  • Up to 48 E1.S drives per 1RU

Which means you could be looking at around a petabyte of raw storage in 1RU (32 x 32TB E1.L drives gives you 1,024TB). This number is only going to go up as capacities increase. Instead of half a rack of 4TB SSDs, you can do it all in 1RU.

It’s Cooler

Cooling has been a problem for storage systems for some time. A number of storage vendors have found out the hard way that jamming a bunch of drives in a small enclosure has a cost in terms of power and cooling. Intel tell us that they’ve had some (potentially) really good results with the E1.L and E1.S based on testing to date (in comparison to traditional SSDs). They talked about:

  • Up to 2x less airflow needed per E1.L 9.5mm SSD vs. U.2 15mm (based on Intel’s internal simulation results); and
  • Up to 3x less airflow needed per E1.S SSD vs. U.2 7mm.

Still Serviceable

You can also replace these things when they break. Intel say the drives:

  • Are fully front serviceable with an integrated pull latch;
  • Support integrated, programmable LEDs; and
  • Support remote, drive-specific power cycling.

 

Thoughts And Further Reading

SAN and NAS became popular in the data centre because you could jam a whole bunch of drives in a central location and you weren’t limited by what a single server could support. For some workloads though, having storage decoupled from the server can be problematic either in terms of latency, bandwidth, or both. Some workloads need their storage as close to the processor as possible. Technologies such as NVMe over Fabrics are addressing that issue to an extent, and other vendors are working to bring the compute closer to the storage. But some people just want to do what they do, and they need more and more storage to do it. I think the “ruler” form factor is an interesting approach to the issue traditionally associated with cramming a bunch of capacity in a small space. It’s probably going to be some time before you see this kind of thing in data centres as a matter of course, because it takes a long time to change the way that people design their servers to accommodate new standards. Remember how long it took for SFF drives to become as common in the DC as they are? No? Well it took a while. Server designs are sometimes developed years (or at least months) ahead of their release to the market. That said, I think Intel have come up with a really cool idea here, and if they can address the cooling and capacity issues as well as they say they can, this will likely take off. Of course, the idea of having 1PB of data sitting in 1RU should be at least a little scary in terms of failure domains, but I’m sure someone will work that out. It’s just physics after all, isn’t it?

There’s also an interesting article at The Register on the newer generation of drive form factors that’s worth checking out.

Intel Are Putting Technology To Good Use

Disclaimer: I recently attended Storage Field Day 12.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

 

Here are some notes from Intel‘s presentation at Storage Field Day 12. You can view the video here and download my rough notes here.

 

I/O Can Be Hard Work

With the advent of NVM Express, things go pretty fast nowadays. Or, at least, faster than they used to with those old-timey spinning disks we’ve loved for so long. According to Intel, systems with multiple NVMe SSDs are now capable of performing millions of I/Os per second. Which is great, but it results in many CPU cores’ worth of software overhead with a kernel-based, interrupt-driven driver model. The answer, according to Intel, is the Storage Performance Development Kit (SPDK). The SPDK frees up CPU cycles for storage services, with lower I/O latency. The great news is that there’s almost no premium now on capacity to do IOPS with a system. So how does this help in the real world?
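To give a feel for what the polled-mode, userspace model looks like, here’s a condensed sketch along the lines of SPDK’s hello_world example: the application owns the NVMe device, submits a read, and polls for the completion rather than taking an interrupt. Treat it as illustrative rather than definitive – error handling is minimal, and API details can differ between SPDK releases.

```c
#include <stdbool.h>
#include <stddef.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static struct spdk_nvme_ctrlr *g_ctrlr;
static struct spdk_nvme_ns *g_ns;
static bool g_done;

static bool probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                     struct spdk_nvme_ctrlr_opts *opts)
{
    return true; /* attach to every NVMe controller we find */
}

static void attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                      struct spdk_nvme_ctrlr *ctrlr,
                      const struct spdk_nvme_ctrlr_opts *opts)
{
    g_ctrlr = ctrlr;
    g_ns = spdk_nvme_ctrlr_get_ns(ctrlr, 1); /* first namespace */
}

static void read_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
    g_done = true;
}

int main(void)
{
    struct spdk_env_opts opts;
    spdk_env_opts_init(&opts);
    if (spdk_env_init(&opts) < 0)
        return 1;

    /* Userspace driver: no kernel block layer, no interrupts. */
    if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0 || !g_ns)
        return 1;

    struct spdk_nvme_qpair *qpair =
        spdk_nvme_ctrlr_alloc_io_qpair(g_ctrlr, NULL, 0);
    void *buf = spdk_dma_zmalloc(4096, 4096, NULL);

    /* Submit one read of LBA 0, then poll for its completion. */
    spdk_nvme_ns_cmd_read(g_ns, qpair, buf, 0, 1, read_done, NULL, 0);
    while (!g_done)
        spdk_nvme_qpair_process_completions(qpair, 0);

    spdk_dma_free(buf);
    spdk_nvme_ctrlr_free_io_qpair(qpair);
    spdk_nvme_detach(g_ctrlr);
    return 0;
}
```

The polling loop at the end is the whole trick: no interrupts and no kernel transitions, just the CPU asking the device “are you done yet?” as fast as it can.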

 

Real World Applications?

SPDK VM I/O Efficiency

The SPDK offers some excellent performance improvements when dishing up storage to VMs.

  • NVMe ephemeral storage
  • SPDK-based 3rd party storage services

It leverages existing infrastructure for:

  • QEMU vhost-scsi;
  • QEMU/DPDK vhost-net user.

Features and benefits

  • High performance storage virtualisation
  • Reduced VM exit
  • Lower latency
  • Increased VM density
  • Reduced tail latencies
  • Higher throughput

Intel say that Ali Cloud sees a ~300% improvement in IOPS and latency using SPDK.

 

VM Ephemeral Storage

  • Improves storage virtualisation
  • Works with KVM/QEMU
  • 6x efficiency vs kernel host
  • 10x efficiency vs QEMU virtio
  • Increased VM density

 

SPDK and NVMe over Fabrics

SPDK also works a treat with NVMe over Fabrics.

VM Remote Storage

  • Enable disaggregation and migration of VMs using remote storage
  • Improves storage virtualisation and flexibility
  • Works with KVM/QEMU

 

NVMe over Fabrics

  • Utilises the NVM Express (NVMe) Polled Mode Driver – reduced overhead per NVMe I/O
  • RDMA Queue Pair Polling – no interrupt overhead
  • Connections pinned to CPU cores – no synchronisation overhead

 

NVMe-oF Key Takeaways

  • Preserves the latency-optimised NVMe protocol through network hops
  • Potentially radically efficient, depending on implementation
  • Actually fabric agnostic: InfiniBand, RDMA, TCP/IP, FC … all ok!
  • Underlying protocol for existing and emerging technologies
  • Using SPDK, can integrate NVMe and NVMe-oF directly into applications
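On that last point, here’s a hedged sketch of what “directly into applications” can look like with SPDK – connecting to a remote NVMe-oF subsystem via a transport ID string. The address, port, and subsystem NQN are made up for illustration, and this assumes an initialised SPDK environment.

```c
#include <string.h>
#include "spdk/nvme.h"

/* Connect to a remote NVMe-oF subsystem over RDMA. The address, port,
   and NQN below are illustrative, not from Intel's presentation. */
struct spdk_nvme_ctrlr *connect_remote(void)
{
    struct spdk_nvme_transport_id trid;

    memset(&trid, 0, sizeof(trid));
    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:RDMA adrfam:IPv4 traddr:192.168.0.10 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0)
        return NULL;

    /* Same polled-mode driver as local PCIe NVMe -- only the transport
       ID changes, which is rather the point of NVMe-oF. */
    return spdk_nvme_connect(&trid, NULL, 0);
}
```

Because the NQN addresses a subsystem rather than a physical slot, the same code works whether the namespace lives in the same chassis or in the next rack.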

 

VM I/O Efficiency Key Takeaways

  • Huge improvement in latency for VM workloads
  • Applications see 3-4x performance gains
  • Applications are unmodified: it’s all under the covers
  • Virtuous cycle with VM density
  • Fully compatible with NVMe-oF!

 

Further Reading and Conclusion

Intel said during the presentation that “[p]eople find ways of consuming resources you provide to them”. This is true, and one of the reasons I became interested in storage early in my career. What’s been most interesting about the last few years’ worth of storage developments (as we’ve moved beyond spinning disks and simple file systems to super fast flash subsystems and massively scaled-out object storage systems) is that people are still really only interested in having lots of storage that is fast and reliable. The technologies talked about during this presentation obviously aren’t showing up in consumer products just yet, but it’s an interesting insight into the direction the market is heading. I’m mighty excited about NVMe over Fabrics and looking forward to this technology being widely adopted in the data centre.

If you’ve had the opportunity to watch the video from Storage Field Day 12 (and some other appearances by Intel Storage at Tech Field Day events), you’ll quickly understand that I’ve barely skimmed the surface of what Intel are doing in the storage space, and just how much is going on before your precious bits are hitting the file system / object store / block device. NVMe is the new way of doing things fast, and I think Intel are certainly pioneering the advancement of this technology through real-world applications. This is, after all, the key piece of the puzzle – understanding how to take blazingly fast technology and apply a useful programmatic framework that companies can build upon to deliver useful outcomes.

For another perspective, have a look at Chan’s article here. You also won’t go wrong checking out Glenn’s post here.

Storage Field Day – I’ll Be At Storage Field Day 12

In what can only be considered excellent news, I’ll be heading to the US in early March for another Storage Field Day event. If you haven’t heard of the very excellent Tech Field Day events, you should check them out. I’m looking forward to time travel and spending time with some really smart people for a few days. It’s also worth checking back on the Storage Field Day 12 website during the event (March 8 – 10) as there’ll be video streaming and updated links to additional content. You can also see the list of delegates and event-related articles that have been published.

I think it’s a great line-up of presenting companies this time around. There are a few I’m very familiar with and some I’ve not seen in action before.

 

It’s not quite a total greybeard convention this time around, but I think that’s only because of Jon’s relentless focus on personal grooming. I won’t do the delegate rundown, but having met a number of these people I can assure you the videos will be worth watching.

Here’s the rough schedule (all times are ‘Merican Pacific and may change).

Wednesday, March 8 10:00 – 12:00 StarWind Presents at Storage Field Day 12
Wednesday, March 8 13:00 – 15:00 Elastifile Presents at Storage Field Day 12
Wednesday, March 8 16:00 – 18:00 Excelero Presents at Storage Field Day 12
Thursday, March 9 08:00 – 10:00 Nimble Storage Presents at Storage Field Day 12
Thursday, March 9 11:00 – 13:00 NetApp Presents at Storage Field Day 12
Thursday, March 9 14:00 – 16:00 Datera Presents at Storage Field Day 12
Friday, March 10 09:00 – 10:00 SNIA Presents at Storage Field Day 12
Friday, March 10 10:30 – 12:30 Intel Presents at Storage Field Day 12

I’d like to publicly thank in advance the nice folks from Tech Field Day who’ve seen fit to have me back, as well as my employer for giving me time to attend these events. Also big thanks to the companies presenting. It’s going to be a lot of fun. Seriously.

Storage Field Day 8 – Wrap-up and Link-o-rama

Disclaimer: I recently attended Storage Field Day 8.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

This is a quick post to say thanks once again to Stephen, Claire and the presenters at Storage Field Day 8. I had a fantastic time and learnt a lot. For easy reference, here’s a list of the posts I did covering the event (not necessarily in chronological order).

Storage Field Day – I’ll be at SFD8

Storage Field Day 8 – Day 0

Storage Field Day 8 – (Fairly) Full Disclosure

Cohesity – There’s more to this than just “Secondary Storage”

Violin Memory – Sounds a lot better than it used to

Pure Storage – Orange is the new black, now what?

INFINIDAT – What exactly is a “Moshe v3.0”?

Nimble Storage – InfoSight is still awesome

Primary Data – Because we all want our storage to do well

NexGen Storage – Storage QoS and other good things

Coho Data – Numerology, but not as we know it

Intel – They really are “Inside”

Qumulo – Storage for people who care about their data

 

Also, here’s a number of links to posts by my fellow delegates. They’re all switched-on people, and you’d do well to check out what they’re writing about. I’ll try and update this list as more posts are published. But if it gets stale, the SFD8 landing page has updated links.

 

Ray Lucchesi (@RayLucchesi)

Coho Data, the packet processing squeeze and working set exploits

Primary data’s path to better data storage presented at SFD8

PB are the new TB, GreyBeards talk with Brian Carmody, CTO Inifinidat

 

Mark May (@CincyStorage)

Can Violin Step back From the Brink?

Storage software can change enterprise workflow

Redefining Secondary Storage

 

Scott D. Lowe (@OtherScottLowe)

IT as a Change Agent: It’s Time to Look Inward, Starting with Storage

Overcoming “New Vendor Risk”: Pure Storage’s Techniques

So, What is Secondary Storage Cohesity-Style?

Data Awareness Is Increasingly Popular in the Storage Biz

 

Jon Klaus (@JonKlaus)

Storage Field Day – I will be attending SFD8!

Wow it’s early – Traveling to Storage Field Day 8

Coho Data: storage transformation without disruption

Pure Storage: Non Disruptive Everything

Cohesity is changing the definition of secondary storage

Qumulo: data-aware scale-out NAS

Nimble Storage – InfoSight VMVision

NexGen Storage: All-Flash Arrays can be hybrids too!

Infinidat: Enterprise reliability and performance

 

Alex Galbraith (@AlexGalbraith)

Looking Forward to Storage Field Day 8

Without good Analytics you dont have a competitive storage product

How often do you upgrade your storage array software?

Where and why is my data growing?…

Why are storage snapshots so painful?

 

Jarett Kulm (@JK47TheWeapon)

Storage Field Day 8 – Here I come!

 

Enrico Signoretti (@ESignoretti)

#SFD8, it’s storage prime time!

Analytics, the key to (storage) happiness

We are entering the Data-aware infrastructure era

Has the next generation of monolithic storage arrived?

Juku.beats 25: Qumulo, data-aware scale-out NAS

Infinidat: awesome tech, great execution

Juku.beats 27: NexGen Storage, QoS made easy.

Software defined? No no no, it’s poorly defined storage (and why Primary Data is different)

Juku.beats 28 – Infinidat storage: multiple nine resiliency, high performance, $1/GB

Are you going All-Flash? Nah, the future is hybrid

 

Vipin V.K. (@VipinVK111)

Tech Field Day Calling…! – #SFD8

Infinibox – Enterprise storage solution from Infinidat

Understanding NVMe (Non-Volatile Memory Express)

All-Flash symphony from Violin Memory

Cohesity – Secondary storage consolidation

With FLASH, things are changing ‘in a flash’ !?

 

Josh De Jong (@EuroBrew)

Storage Field Day Here I Come!

Thoughts in the Airport

NexGen Storage – The Future is Hybrid

Pure Storage – Enterprise Ready, Pure and Simple

 

Finally, thanks again to Stephen, Claire (and Tom in absentia). It was an educational and enjoyable few days and I really valued the opportunity I was given to attend.


 

Intel – They really are “Inside”

Disclaimer: I recently attended Storage Field Day 8.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

For each of the presentations I attended at SFD8, there are a few things I want to include in the post. Firstly, you can see video footage of the Intel presentation here. You can also download my raw notes from the presentation here. Finally, here’s a link to the Intel website that covers some of what they presented.

 


If you’ve spent any time shooting the breeze with me, you’ll probably know I’m into punk music. I may have also gotten into a monologue about how much I enjoy listening to the Dead Kennedys. I have a vinyl copy of Jello Biafra‘s third spoken-word album, “I blow minds for a living“. This is a great album, and I recommend listening to it if you haven’t already. While this is a somewhat tortured segue, what I’m trying to say is that a few of the guys working at Intel seem to also specialise in blowing minds for a living, because I walked out of that presentation at SFD8 with very little understanding of what I’d just seen :)

 

Intel is working hard so you don’t have to

There’s a whole lot to the Intel presentation, and I heartily recommend you watch it for yourself. I found Nate Marushak’s part of the presentation, “Enabling the storage transformation – Intel ISA-L & SPDK”, particularly interesting. As I stated previously, I didn’t really keep up with a lot of it. Here are a few of the notes I was able to take.

Intel are keen to address the bottleneck pendulum with a few key pieces of technology:

  • 25/50/100GbE
  • Intel 3D XPoint
  • RDMA

They want to “enable the storage transformation” in a couple of ways. The first of these is the Storage Performance Development Kit (SPDK). Built on the Data Plane Development Kit (DPDK), it provides:

  • Software infrastructure to accelerate packet I/O to the Intel CPU

Userspace Network Services (UNS)

  • TCP/IP stack implemented as a polling, lock-light library, bypassing kernel bottlenecks and enabling accessibility

Userspace NVMe, Intel Xeon / Intel Atom processor DMA, and Linux AIO drivers

  • Optimises back-end driver performance and prevents kernel bottlenecks from forming at the back end of the I/O chain

Reference Software and Example Application

  • Intel provides a customer-relevant example application leveraging ISA-L, with support provided on a best-effort basis

SPDK

What is Provided?

  • Builds upon optimised DPDK technology
  • Optimised UNS TCP/IP technology
  • Optimised storage target SW stack
  • Optimised persistent media SW stack
  • Supports Linux OS

How does it help?

  • Avoids legacy SW bottlenecks
  • Removes overhead due to interrupt processing (uses polling instead)
  • Removes overhead due to kernel transitions
  • Removes overhead due to locking
  • Enables greater system level performance
  • Enables lower system level latency

Intel Intelligent Storage Acceleration Library

This is an algorithmic library to address key storage market segment needs:

  • Optimised libraries for Xeon, Atom architectures
  • Enhances performance for data integrity, security / encryption, data protection, deduplication and compression
  • Provides C-language demo functions to aid comprehension of the library
  • Tested on Linux, FreeBSD, MacOS and Windows Server OS

ISA-L Functions include

  • Performance Optimisation
  • Data Protection – XOR (RAID 5), P+Q (RAID 6), Reed-Solomon erasure code
  • Data Integrity – CRC-T10, CRC-IEEE (802.3), CRC32-iSCSI
  • Cryptographic Hashing – Multi-buffer: SHA-1, SHA-256, SHA-512, MD5
  • Compression “Deflate” – IGZIP: Fast Compression
  • Encryption
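
To give you a tiny taste of the library in practice, here’s a hedged sketch computing two of the data-integrity checksums listed above using ISA-L’s accelerated routines. The header path can vary by packaging, the data is made up, and you’d link with -lisal.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <isa-l/crc.h>   /* header location may vary by distro packaging */

int main(void)
{
    unsigned char block[4096];
    memset(block, 0xAB, sizeof(block));

    /* Two of the data-integrity functions listed above, using ISA-L's
       SIMD-accelerated implementations. */
    uint16_t t10 = crc16_t10dif(0, block, sizeof(block));
    uint32_t eth = crc32_ieee(0, block, sizeof(block));

    printf("CRC-T10DIF: 0x%04x  CRC32 (IEEE 802.3): 0x%08x\n", t10, eth);
    return 0;
}
```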

 

Closing Thoughts and Further Reading

As I stated at the start of this post, a lot of what I heard in this presentation went way over my head. I urge you to check out the Intel website and links above to get a feel for just how much they’re doing in this space to make things easier for the various vendors of SDS offerings out there. If you think about just how much Intel is inside everything nowadays, you’ll get a good sense of just how important their work is to the continued evolution of storage platforms in the modern data centre. And if nothing else you might find yourself checking out a Jello Biafra record.

 
