NGD Systems Are On The Edge Of Glory

Disclaimer: I recently attended Storage Field Day 17.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.


NGD Systems recently presented at Storage Field Day 17. You can see videos of their presentation here, and download a PDF copy of my rough notes from here.


Edgy

Storage and processing requirements at the edge aren’t necessarily new problems. People have been trying to process data outside of their core data centres for some time now. NGD Systems have a pretty good handle on the situation, and explained it thusly:

  • A massive amount of data is now produced at the edge;
  • AI algorithms demand large amounts of data; and
  • Moving data to the cloud is often not practical.

They’ve taken a different approach with “computational storage”, moving the compute to the storage. The problem then becomes one of Power/TB, $/GB, and in-situ processing. Their focus has been on delivering a power-efficient, low-cost computational storage solution.

A Novel Solution – Move Compute to Storage

Key attributes:

  • Maintain familiar methodology (no new learning)
  • Use standard protocols (NVMe) and processes (no new commands)
  • Minimise interface traffic (power and time savings)
  • Enhance a limited footprint with maximum benefit (customer TCO)

Moving Computation to Data Is Cheaper than Moving Data

  • A computation requested by an application is much more efficient if it is executed near the data it operates on (see the rough sketch after this list):
    • Minimises network traffic;
    • Increases effective throughput and performance of the system (e.g. Hadoop Distributed File System); and
    • Enables distributed processing.
  • This is especially true for big data (analytics): large data sets and unstructured data.
  • The traditional approach of high-performance servers coupled with SAN/NAS storage is eventually limited by networking bottlenecks.
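
To make the data-locality argument a bit more concrete, here’s a rough back-of-the-envelope sketch. This is a minimal model of my own, not NGD code, and the drive count, per-drive capacity, link bandwidth and query selectivity figures are all assumptions. It compares how much data crosses the host interface when filtering happens on the host versus in situ on each drive:

```python
# Back-of-the-envelope model: host-side filtering vs in-situ (computational storage) filtering.
# All figures are illustrative assumptions, not NGD specifications.

DRIVES = 24                  # drives in the enclosure (assumed)
DATA_PER_DRIVE_TB = 8        # data scanned per drive, in TB (assumed)
LINK_GB_PER_S = 12           # usable host link bandwidth in GB/s, shared by all drives (assumed)
SELECTIVITY = 0.01           # fraction of scanned data the query actually needs (assumed)

total_tb = DRIVES * DATA_PER_DRIVE_TB

# Traditional approach: every byte crosses the interface before the host filters it.
host_side_tb = total_tb
host_side_s = host_side_tb * 1000 / LINK_GB_PER_S

# Computational storage approach: each drive filters locally and only returns matches.
in_situ_tb = total_tb * SELECTIVITY
in_situ_s = in_situ_tb * 1000 / LINK_GB_PER_S

print(f"Host-side filtering moves {host_side_tb:.0f} TB over the link (~{host_side_s / 60:.0f} minutes)")
print(f"In-situ filtering moves {in_situ_tb:.2f} TB over the link (~{in_situ_s:.0f} seconds)")
```

Under those assumptions the interface (and the power spent driving it) only carries results rather than raw data, which is where the “minimise interface traffic” attribute above comes from.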


Thoughts and Further Reading

NGD are targeting some interesting use cases, including:

  • Hyperscalers;
  • Content Delivery Networks; and
  • The “fog storage” market.

They say that these storage solutions deliver the low-power, more efficient compute that’s needed without placing additional strain on edge and “fog” platforms. I found the CDN use case to be particularly interesting. When you have a bunch of IP-addressable storage sitting in a remote point of presence, it can be a pain to have it talking back to a centralised server to get decryption keys for protected content, for example. In this case you can have the drives do the key handling and authentication themselves, providing faster access to protected content than would otherwise be possible in latency-constrained environments.
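
As a purely illustrative sketch (my own pseudo-flow under assumed latencies, not NGD’s implementation or API), the difference between calling home for keys and handling them on the drive looks something like this:

```python
import time

# Illustrative latencies only: assumptions, not measured NGD or CDN figures.
ORIGIN_RTT_S = 0.150    # round trip from a remote PoP back to a central key server (assumed)
LOCAL_UNWRAP_S = 0.002  # key unwrap performed on the drive itself (assumed)

def serve_centralised(content_id: str) -> float:
    """Traditional model: the PoP fetches the decryption key from a central server."""
    start = time.monotonic()
    time.sleep(ORIGIN_RTT_S)    # stand-in for the WAN round trip to the key server
    # ... decrypt and stream the protected content ...
    return time.monotonic() - start

def serve_in_situ(content_id: str) -> float:
    """Computational storage model: the drive handles key unwrapping and authentication locally."""
    start = time.monotonic()
    time.sleep(LOCAL_UNWRAP_S)  # stand-in for the on-drive key handling
    # ... decrypt and stream the protected content ...
    return time.monotonic() - start

if __name__ == "__main__":
    print(f"Centralised key fetch: {serve_centralised('asset-001') * 1000:.0f} ms before first byte")
    print(f"In-situ key handling:  {serve_in_situ('asset-001') * 1000:.0f} ms before first byte")
```

Multiply that WAN round trip across every request in a latency-constrained PoP and the appeal of keeping the key handling on the drive becomes obvious.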

It seems silly to quote Gaga Herself when writing about tech, but I think NGD Systems are taking a really interesting approach to solving some of the compute problems at the edge. They’re not just talking about jamming a bunch of disks together with some compute. Instead, they’re putting the compute inside each of the disks. It’s not a traditional approach to solving the challenges of the edge, but it seems like it has legs for the use cases mentioned above. Edge compute and storage is often deployed in reasonably rugged environments that aren’t as well equipped as large DCs in terms of cooling and power. The focus on delivering processing at the storage layer, with minimal power draw and standard protocols, is intriguing. They say they can do it at a reasonable price too, making the solution all the more appealing for companies struggling with more traditional edge storage and compute solutions.

You can check out the specifications of the Newport Platform here. Note that the available capacities depend on the form factor you are consuming. There’s also a great paper on computational storage that you can download from here. For some other perspectives on computational storage, check out Max’s article here, and Enrico’s article here.