The HE500 series has been introduced to provide smaller customers and edge infrastructure environments with components that better meet their sizing and pricing requirements. There are a few different flavours of node, with every node offering Intel E-2100 CPUs, 32 – 64GB RAM, and dual power supplies. There are a couple of minor differences in the other configuration options.
HE500 – 4x 1, 2, 4 or 8TB HDD, 4x 1GbE, 4x 10GbE
HE550 – 1x 480GB or 960GB SSD, 3x 1, 2, or 4TB HDD, 4x 1GbE, 4x 10GbE
HE500T – 4x 1, 2, 4 or 8TB HDD, or 8x 4TB or 8TB HDD, 2x 1GbE
HE550TF – 4x 240GB, 480GB, or 960GB SSD, 2x 1GbE
The “T” versions come in a tower form factor and offer 1GbE connectivity. Everything runs on Scale’s HC3 platform, with all of the features and support you’d expect from that platform. In terms of scalability, you can run up to 8 nodes in a cluster.
Thoughts And Further Reading
In the past I’ve made mention of Scale Computing and Lenovo’s partnership, and the edge infrastructure approach is also something that lends itself well to this arrangement. If you don’t necessarily want to buy Scale-badged gear, you’ll see that the models on offer look a lot like the SR250 and ST250 models from Lenovo. In my opinion, the appeal of Scale’s hyper-converged infrastructure story has always been the software platform that sits on the hardware, rather than the specifications of the nodes they sell. That said, these kinds of offerings play an important role in the market, as they give potential customers simple options to deliver solutions at a very competitive price point. Scale tell me that an entry-level 3-node cluster comes in at about US $16K, with additional nodes costing approximately $5K. Conboy described it as “[l]owering the barrier to entry, reducing the form factor, but getting access to the entire stack”.
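As a rough back-of-envelope check on those figures (assuming the per-node pricing stays roughly linear, which is my assumption rather than anything Scale has confirmed), here’s what scaling from the entry-level 3-node cluster to the 8-node maximum looks like:

```python
# Approximate pricing from the announcement: ~US$16K for a 3-node
# entry-level cluster, ~US$5K per additional node (linearity assumed).
BASE_CLUSTER_COST = 16_000  # 3-node entry-level cluster
EXTRA_NODE_COST = 5_000     # each additional node
MAX_NODES = 8               # HC3 edge cluster limit mentioned above


def cluster_cost(nodes: int) -> int:
    """Estimated cost (USD) of an HC3 edge cluster of `nodes` nodes."""
    if nodes < 3 or nodes > MAX_NODES:
        raise ValueError("clusters run from 3 to 8 nodes")
    return BASE_CLUSTER_COST + (nodes - 3) * EXTRA_NODE_COST


for n in (3, 5, 8):
    print(n, cluster_cost(n))  # 3 -> 16000, 5 -> 26000, 8 -> 41000
```

So even a maxed-out 8-node edge cluster lands at roughly US $41K, which goes some way to explaining the “lowering the barrier to entry” positioning.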
Combine some of these smaller solutions with various reference architectures and you’ve got a pretty powerful offering that can be deployed in edge sites for a small initial outlay. People often deploy compute at the edge because they have to, not because they necessarily want to. Anything that can be done to make operations and support simpler is a good thing. Scale Computing are focused on delivering an integrated stack that meets those requirements in a lightweight form factor. I’ll be interested to see how the market reacts to this announcement. For more information on the HC3 Edge offering, you can grab a copy of the data sheet here, and the press release is available here. There’s a joint Lenovo – Scale Computing case study that can be found here.
Disclaimer: I recently attended Storage Field Day 17. My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event. Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.
Storage and compute / processing requirements at the edge aren’t necessarily new problems. People have been trying to process data outside of their core data centres for some time now. NGD Systems have a pretty good handle on the situation, and explained it thusly:
A massive amount of data is now produced at the edge;
AI algorithms demand large amounts of data; and
Moving data to cloud is often not practical.
They’ve taken a different approach with “computational storage”, moving the compute to the storage itself. The problem then becomes one of optimising Power/TB, $/GB, and in-situ processing together. Their focus has been on delivering a power-efficient, low-cost computational storage solution.
A Novel Solution – Move Compute to Storage
Maintain familiar methodology (no new learning)
Use standard protocols (NVMe) and processes (no new commands)
Minimise interface traffic (power and time savings)
Enhance a limited footprint with maximum benefit (customer TCO)
Moving Computation to Data is Cheaper than Moving Data
A computation requested by an application is much more efficient if it is executed near the data it operates on
Minimises network traffic
Increases effective throughput and performance of the system (e.g. Hadoop Distributed File System)
Enables distributed processing
Especially true for big data (analytics): large sets and unstructured data
Traditional approach: high-performance servers coupled with SAN/NAS storage – Eventually limited by networking bottlenecks
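To make the “move compute, not data” argument concrete, here’s a minimal, illustrative model. The figures are made-up assumptions for the sake of the example, not NGD benchmarks: it simply compares the time to ship a dataset back over an edge site’s uplink against scanning it in place across a shelf of computational drives.

```python
# Illustrative only: compare shipping data over the network with
# processing it in situ on the drives. All figures are assumptions.

def transfer_seconds(gb: float, link_gbit_per_s: float) -> float:
    """Time to move `gb` gigabytes over a `link_gbit_per_s` Gbit/s link."""
    return gb * 8 / link_gbit_per_s


def in_situ_seconds(gb: float, drives: int, scan_gb_per_s: float) -> float:
    """Time for `drives` computational drives to each scan their share."""
    return (gb / drives) / scan_gb_per_s


DATASET_GB = 1_000  # 1TB of data produced at the edge site

move = transfer_seconds(DATASET_GB, 1)        # 1Gbit/s uplink -> 8000 s
local = in_situ_seconds(DATASET_GB, 24, 0.5)  # 24 drives @ 0.5GB/s -> ~83 s

print(f"move data: {move:.0f}s, compute in situ: {local:.0f}s")
```

Even with deliberately modest per-drive scan rates, the in-situ path wins by orders of magnitude, and the uplink carries only results rather than raw data — which is the “minimises network traffic” point from the list above.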
They say that these storage solutions solve the low-power, more efficient compute needs without placing strain on the edge and “fog” platforms. I found the CDN use case to be particularly interesting. When you have a bunch of IP-addressable storage sitting in a remote point of presence, it can sometimes be a pain to have it talking back to a centralised server to get decryption keys for protected content, for example. In this case you can have the drives do the key handling and authentication themselves, providing faster access to content than would be possible in latency-constrained environments.
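As a sketch of that CDN scenario (the class and method names here are hypothetical and illustrative, not NGD’s actual interface), the difference is that the traditional model pays a WAN round-trip to a central key server on every request, while the computational drive holds the key-handling logic locally:

```python
# Hypothetical sketch of drive-local key handling at a CDN edge PoP.
# Names are illustrative; the "decryption" is a toy XOR, not real crypto.

class CentralKeyServer:
    """Traditional model: every decryption needs a WAN round-trip."""
    RTT_MS = 80  # assumed edge-to-core latency per key fetch

    def get_key(self, content_id: str) -> bytes:
        return b"key-for-" + content_id.encode()


class ComputationalDrive:
    """Drive with embedded compute: keys are handled in situ."""

    def __init__(self, keys: dict[str, bytes]):
        self._keys = keys  # provisioned once, not fetched per request

    def serve(self, content_id: str, ciphertext: bytes) -> bytes:
        key = self._keys[content_id]  # local lookup, no WAN hop
        # Toy XOR transform standing in for real decryption.
        return bytes(c ^ key[i % len(key)]
                     for i, c in enumerate(ciphertext))
```

With the drive doing the work, the assumed 80ms round-trip per key fetch simply disappears from the request path — which is where the latency win in constrained PoPs comes from.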
It seems silly to quote Gaga Herself when writing about tech, but I think NGD Systems are taking a really interesting approach to solving some of the compute problems at the edge. They’re not just talking about jamming a bunch of disks together with some compute. Instead, they’re jamming the compute into each of the disks. It’s not a traditional approach to solving the challenges of the edge, but it seems like it has legs for the use cases mentioned above. Edge compute and storage is often deployed in reasonably rugged environments that aren’t as well equipped as large DCs in terms of cooling and power. The focus on delivering processing at the storage layer, with minimal power draw and standard protocols, is intriguing. They say they can do it at a reasonable price too, making the solution all the more appealing for companies facing difficulties with more traditional edge storage and compute solutions.
You can check out the specifications of the Newport Platform here. Note that the various capacities depend on the form factor you are consuming. There’s also a great paper on computational storage that you can download from here. For some other perspectives on computational storage, check out Max’s article here, and Enrico’s article here.
It seems like only a few months ago that I was introduced to StorMagic via Storage Field Day 6. You can read my thoughts on that here. I was pretty impressed with StorMagic’s focus on their strengths and the solution’s capacity to solve some difficult problems when it came to virtualised storage at the edge of the network.
In any case, StorMagic announced recently that they’ve officially partnered with VMware as the ROBO storage solution of choice when it comes to deploying a VSA at the edge. What that translates to is one SKU from VMware to order the software and licences, and one SKU from StorMagic to get your hands on a very solid edge storage VSA solution. Here’s a link to StorMagic’s solution brief on their website. And here’s a picture.
The solution runs on anything in the VMware HCL, can scale down to 2 servers (as opposed to vSAN’s 3-node requirement), and provides edge HA for the large enterprise.
You can also read a great write-up from Amit Panchal here, as well as a typically astute analysis from Jon Klaus here. I think it’s great that StorMagic have been able to make this announcement and look forward to hearing about future developments.
I was creating some port-channels between one of our MDS 9513 director switches and a 9124e edge switch and managed to add the interfaces to the wrong port-channel. Here are the basic steps I took on the 9124e end to rectify the issue. I’ve created a PDF file which, while inconvenient, solves the problems related to both my WordPress skills and the age of the theme I use; a 4-page doc was going to look pretty ugly if I tried to insert it in-line. I apologise in advance for the inconvenience you will no doubt experience.
Sometimes, for any number of reasons, you’ll find yourself wanting to downgrade the firmware on your Cisco edge devices to match what you have running in the core. Fortunately, at least for the 9100-series switches, this is basically the same as upgrading the firmware. I’ve included the commands to run here, and also the full output of the process. For the director-class switches, there are a few more things to do, such as clearing out the space on the standby supervisor as well as the active sup card. I’ll try and post something 9500-series specific in the next few weeks.
In short, do this (assuming you’re loading version 3.3(4a) of the code):
show version image bootflash:m9100-s2ek9-mz.3.3.4a.bin (verifies the image on bootflash is valid and reports the version it contains)
show incompatibility system m9100-s2ek9-mz.3.3.4a.bin (flags any running configuration features that are incompatible with the target release)
install all system bootflash:m9100-s2ek9-mz.3.3.4a.bin kickstart bootflash:m9100-s2ek9-kickstart-mz.3.3.4a.bin (installs both the system and kickstart images)
You can also see the full output here. Note that this process works equally well for HP’s 9124e switches (the type you find in the back of c7000 blade chassis for instance), although you should be downloading the firmware from HP’s site, not Cisco’s.