NGD Systems Are On The Edge Of Glory

Disclaimer: I recently attended Storage Field Day 17.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

 

NGD Systems recently presented at Storage Field Day 17. You can see their videos from Storage Field Day 17 here, and download a PDF copy of my rough notes from here.

 

Edgy

Storage and compute / processing requirements at the edge aren’t necessarily new problems. People have been trying to process data outside of their core data centres for some time now. NGD Systems have a pretty good handle on the situation, and explained it like this:

  • A massive amount of data is now produced at the edge;
  • AI algorithms demand large amounts of data; and
  • Moving data to cloud is often not practical.

They’ve taken a different approach with “computational storage” by moving the compute to the storage. The problem then becomes one of balancing power per TB, cost per GB, and in-situ processing capability. Their focus has been on delivering a power-efficient, low-cost computational storage solution.

A Novel Solution – Move Compute to Storage

Key attributes:

  • Maintain familiar methodology (no new learning)
  • Use standard protocols (NVMe) and processes (no new commands)
  • Minimise interface traffic (power and time savings)
  • Enhance a limited footprint with maximum benefit (customer TCO)

Moving Computation to Data Is Cheaper than Moving Data

  • A computation requested by an application is much more efficient if it is executed near the data it operates on:
    • Minimises network traffic;
    • Increases the effective throughput and performance of the system (e.g. the Hadoop Distributed File System); and
    • Enables distributed processing.
  • This is especially true for big data analytics, with its large and unstructured data sets.
  • The traditional approach of high-performance servers coupled with SAN/NAS storage is eventually limited by networking bottlenecks.
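To put a rough number on the interface-traffic argument, here’s a back-of-the-envelope sketch. The figures and the helper function are entirely my own illustration, not NGD’s, but they show why a selective query executed on the drive moves a tiny fraction of the bytes a host-side filter would:

```python
# Rough model (my own illustrative numbers, not NGD's) of the traffic
# that crosses the storage interface when a filter runs in-situ on the
# drive versus on a host that must pull the whole dataset first.

def bytes_moved(dataset_bytes, selectivity, in_situ):
    """Bytes crossing the interface for one filtering query.

    selectivity: fraction of the dataset the query actually matches.
    in_situ: True if the computation runs on the drive itself.
    """
    if in_situ:
        return int(dataset_bytes * selectivity)  # only the results move
    return dataset_bytes  # the entire dataset moves to the host

dataset = 8 * 1024**4   # one 8TiB drive's worth of data
hit_rate = 0.001        # query matches 0.1% of the data

host_side = bytes_moved(dataset, hit_rate, in_situ=False)
drive_side = bytes_moved(dataset, hit_rate, in_situ=True)
print(f"host-side filter moves {host_side / 1024**3:,.0f} GiB")
print(f"in-situ filter moves   {drive_side / 1024**3:,.1f} GiB")
```

Multiply that saving across a shelf of drives and the networking bottleneck mentioned above largely disappears, which is the whole pitch.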

 

Thoughts and Further Reading

NGD are targeting some interesting use cases, including:

  • Hyperscalers;
  • Content delivery networks; and
  • The “fog storage” market.

They say these storage solutions deliver low-power, more efficient compute without placing strain on edge and “fog” platforms. I found the CDN use case to be particularly interesting. When you have a bunch of IP-addressable storage sitting in a remote point of presence, it can be a pain to have it talking back to a centralised server to get decryption keys for protected content, for example. In this case you can have the drives do the key handling and authentication themselves, providing faster access to content than would otherwise be possible in latency-constrained environments.
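To illustrate why on-drive key handling matters in that CDN scenario, here’s a hypothetical latency model. The numbers and the function are my own assumptions, not NGD’s implementation; the point is simply that a WAN round trip to a central key server dominates the time to serve the first byte:

```python
# Hypothetical sketch (not NGD's implementation) of serving protected
# content from a remote point of presence: the drive either fetches the
# decryption key from a central server over the WAN, or looks it up
# locally because it handles keys and authentication itself.

def time_to_first_byte_ms(key_on_drive, wan_rtt_ms=150.0,
                          local_lookup_ms=0.5, read_ms=2.0):
    """Rough time to serve the first byte of protected content."""
    key_fetch = local_lookup_ms if key_on_drive else wan_rtt_ms
    return key_fetch + read_ms

print(f"central key server: {time_to_first_byte_ms(False):.1f} ms")
print(f"key held on drive:  {time_to_first_byte_ms(True):.1f} ms")
```

With illustrative figures like these, the local lookup is a couple of orders of magnitude faster, which is exactly the sort of win you want in a latency-constrained PoP.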

It seems silly to quote Gaga Herself when writing about tech, but I think NGD Systems are taking a really interesting approach to solving some of the compute problems at the edge. They’re not just talking about jamming a bunch of disks together with some compute. Instead, they’re jamming the compute into each of the disks. It’s not a traditional approach to solving the challenges of the edge, but it seems like it has legs for the use cases mentioned above. Edge compute and storage is often deployed in reasonably rugged environments that are not as well-equipped as large DCs in terms of cooling and power. The focus on delivering processing at the storage layer using minimal power and standard protocols is intriguing. They say they can do it at a reasonable price too, making the solution all the more appealing for companies facing difficulties with more traditional edge storage and compute solutions.

You can check out the specifications of the Newport Platform here. Note that the various capacities depend on the form factor you’re consuming. There’s also a great paper on computational storage that you can download from here. For some other perspectives on computational storage, check out Max’s article here, and Enrico’s article here.

VMware and StorMagic – Happy Days at the Edge

It seems like only a few months ago that I was introduced to StorMagic via Storage Field Day 6. You can read my thoughts on that here. I was pretty impressed with StorMagic’s focus on their strengths and the solution’s capacity to solve some difficult problems when it came to virtualised storage at the edge of the network.

In any case, StorMagic recently announced that they’ve officially partnered with VMware as the ROBO storage solution of choice when it comes to deploying a VSA at the edge. What that translates to is one SKU from VMware for the software and licences, and one SKU from StorMagic to get your hands on a very solid edge storage VSA solution. Here’s a link to StorMagic’s solution brief on their website. And here’s a picture.

StorMagic_ROBO

The solution runs on anything in the VMware HCL, can scale down to two servers (as opposed to vSAN’s 3-node requirement), and provides edge HA for the large enterprise.

You can also read a great write-up from Amit Panchal here, as well as a typically astute analysis from Jon Klaus here. I think it’s great that StorMagic have been able to make this announcement and look forward to hearing about future developments.

My SAN-OS skills are wack

I was creating some port-channels between one of our MDS 9513 director switches and a 9124e edge switch and managed to add the interfaces to the wrong port-channel. Here are the basic steps I took on the 9124e end to rectify the issue. I’ve created a PDF file which, while inconvenient, solves the problems related to both my WordPress skills and the age of the theme I use; a four-page document was going to look pretty ugly inserted in-line. I apologise in advance for the inconvenience you will no doubt experience.

Cisco 9124(e) firmware downgrade

Sometimes, for any number of reasons, you’ll find yourself wanting to downgrade the firmware on your Cisco edge devices to match what you have running in the core. Fortunately, at least for the 9100-series switches, this is basically the same as upgrading the firmware. I’ve included the commands to run here, and also the full output of the process. For the director-class switches, there are a few more things to do, such as clearing out the space on the standby supervisor as well as the active sup card. I’ll try and post something 9500-series specific in the next few weeks.

In short, do this (assuming you’re loading version 3.3(4a) of the code):

! Back up the running configuration locally and to a TFTP server
copy running-config startup-config
copy startup-config tftp://192.168.101.9/startup-config_FOSLAB5A08_28072010

! Check the installed modules and the currently running firmware
show module

! Copy the system and kickstart images from TFTP to bootflash
copy tftp://192.168.101.9/m9100-s2ek9-mz.3.3.4a.bin bootflash:m9100-s2ek9-mz.3.3.4a.bin
copy tftp://192.168.101.9/m9100-s2ek9-kickstart-mz.3.3.4a.bin bootflash:m9100-s2ek9-kickstart-mz.3.3.4a.bin

! Confirm the images are in place and check for incompatibilities
dir bootflash:
show version image bootflash:m9100-s2ek9-mz.3.3.4a.bin
show incompatibility system m9100-s2ek9-mz.3.3.4a.bin

! Run the install (answer y when prompted), then verify the result
install all system bootflash:m9100-s2ek9-mz.3.3.4a.bin kickstart bootflash:m9100-s2ek9-kickstart-mz.3.3.4a.bin
y
show module
show version

You can also see the full output here. Note that this process works equally well for HP’s 9124e switches (the type you find in the back of a c7000 blade chassis, for instance), although you should download the firmware from HP’s site, not Cisco’s.
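If you’re doing a few of these, it’s handy to sanity-check the result of that final `show version` rather than eyeballing it. Here’s a small helper I’d use for that; the function and the captured output fragment are my own illustration (the exact `show version` layout varies between NX-OS/SAN-OS releases), but the `system: version x.y(z)` line it looks for is the one that matters:

```python
import re

# Quick sanity-check sketch (my own helper, not a Cisco tool): pull the
# system version out of captured "show version" output and compare it
# to the release you intended to load.

def running_version(show_version_output):
    """Extract the system version string from `show version` output."""
    match = re.search(r"system:\s+version\s+(\S+)", show_version_output)
    return match.group(1) if match else None

# Illustrative output fragment; the exact layout varies by release.
captured = """
Software
  BIOS:      version 1.0.19
  kickstart: version 3.3(4a)
  system:    version 3.3(4a)
"""

assert running_version(captured) == "3.3(4a)", "downgrade did not take"
print("switch is running the expected release")
```

Capture the session output with your terminal emulator (or over SSH with your tool of choice), feed it through something like this, and you’ll catch a supervisor that quietly came back up on the old image.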