Disclaimer: I recently attended Storage Field Day 19. My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event. Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.
Stellus Technologies recently came out of stealth mode. I had the opportunity to see the company present at Storage Field Day 19 and thought I’d share my thoughts here. You can grab a copy of my notes here.
Jeff Treuhaft (CEO) spent a little time discussing the company background and its development up to this point in time.
- Founded in 2016
- Data Path architecture developed in 2017
- Data path validations in 2018
- First customer deployments in 2019
- Commercial availability in 2020
What’s the problem Stellus is trying to solve then? There’s been a huge rise in unstructured data (driven in large part by AI / ML workloads) and an exponential increase in the size of data sources that enterprises are working with. There have also been significant increases in performance requirements for unstructured data. This has been driven primarily by:
- Life sciences; and
- Media and entertainment.
The result is that the storage solutions supporting these workloads need to:
- Offer scalable, consistent performance;
- Support common global namespaces;
- Work with variable file sizes;
- Deliver high throughput;
- Ensure that there are no parallel access penalties;
- Easily manage data over time; and
- Function as a data system of record.
It’s Stellus’s belief that “[c]urrent competitors have built legacy file systems at the time when spinning disk and building private data centres were the focus”.
Stellus Data Platform
- Constant performance
- Decoupling capacity and performance
- Independently scale performance and capacity on commodity hardware
- Distributed, share-everything, KV-based data model; data path ready for new memories
- Consistently high performance even as system scales
File System as Software
- Stores unstructured data closest to native format: objects
- Data Services provided on Stellus objects
- Stateless – state in Key Value Stores
- Runs in user mode
- Independent of custom hardware and the kernel
The platform doesn’t currently have deduplication built in.
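To make the “stateless, with state in key-value stores” idea concrete, here’s a minimal sketch (my own illustration, not Stellus code): the file-system layer holds no state of its own, so every write records both the payload and the path-to-object mapping in an external KV store, and any instance pointed at that store can serve the same files.

```python
import hashlib


class StatelessFS:
    """Toy file-system layer that keeps no local state.

    All state -- object payloads and path metadata -- lives in an
    external key-value store (a plain dict stands in for it here).
    """

    def __init__(self, kv):
        self.kv = kv  # any mapping-like KV store

    def write(self, path: str, data: bytes) -> str:
        # Store the payload as an object keyed by its content hash...
        obj_key = hashlib.sha256(data).hexdigest()
        self.kv[f"obj:{obj_key}"] = data
        # ...and record the path -> object mapping, also in the KV store.
        self.kv[f"meta:{path}"] = obj_key
        return obj_key

    def read(self, path: str) -> bytes:
        obj_key = self.kv[f"meta:{path}"]
        return self.kv[f"obj:{obj_key}"]


kv = {}  # stand-in for a shared, distributed KV store
fs = StatelessFS(kv)
fs.write("/genomes/sample.bam", b"ACGT")
```

Because the layer itself is stateless, a second `StatelessFS` instance handed the same `kv` immediately sees the same namespace, which is what lets such a design scale out without coordinating instance-local state.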
Algorithmic Data Locality and Data Services
- Enables scale by algorithmically determining location – no cluster-wide maps
- Built for resilience to multiple failures – pets vs. cattle
- Understands topology of persistent stores
- Architecture maintains versions – enables data services such as snapshots
- Decoupled data services and persistence requires transport
- Architecture maintains native data structure – objects
- NVMe-over-Fabric protocol enhanced to transport KV commands
- Transport independent
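The “algorithmically determining location – no cluster-wide maps” point is worth unpacking. One well-known way to do this (a sketch of the general technique, not necessarily what Stellus implements) is rendezvous hashing: any client can compute an object’s placement from the key and the node list alone, so no placement map ever needs to be stored or looked up, and losing a node only relocates the objects that lived on it.

```python
import hashlib


def object_location(object_key: str, nodes: list, replicas: int = 2) -> list:
    """Pick which nodes hold an object via rendezvous (HRW) hashing.

    Placement is a pure function of the key and the node list, so every
    client computes the same answer without a cluster-wide location map.
    """
    def score(node: str) -> int:
        digest = hashlib.sha256(f"{node}:{object_key}".encode()).hexdigest()
        return int(digest, 16)

    # Highest-scoring nodes win; replicas give resilience to node failures.
    ranked = sorted(nodes, key=score, reverse=True)
    return ranked[:replicas]


nodes = ["node-a", "node-b", "node-c", "node-d"]
placement = object_location("bucket/genome-sample-42", nodes)
```

A useful property: removing a node that doesn’t hold the object leaves its placement unchanged, because each node’s score is independent of the others.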
Native Key-Value Stores
- Unstructured data is generally immutable
- Updates result in new objects
- Available in different sizes and performance characteristics
- Uses application-specific KV stores, such as:
  - Immutable data
  - Short-lived updates
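The “unstructured data is immutable, updates result in new objects” model can be sketched in a few lines (again, my own toy illustration): writes never overwrite anything, each one creates a new content-addressed object, and a name simply points at its version history. Snapshots then fall out for free, since an older version is just a read pinned to an earlier index.

```python
import hashlib


class VersionedStore:
    """Toy immutable object store: every update creates a new object."""

    def __init__(self):
        self.objects = {}   # content hash -> immutable payload
        self.versions = {}  # name -> list of content hashes, oldest first

    def put(self, name: str, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()
        self.objects[key] = data                      # nothing is mutated
        self.versions.setdefault(name, []).append(key)
        return key

    def get(self, name: str, version: int = -1) -> bytes:
        # A snapshot is just a read pinned to an older version index.
        return self.objects[self.versions[name][version]]


store = VersionedStore()
store.put("results.csv", b"run-1 data")
store.put("results.csv", b"run-2 data")  # new object, old one still intact
```

This is the same design choice that makes the architecture’s snapshot-style data services cheap: no copy-on-write bookkeeping is needed when nothing is ever overwritten in place.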
Thoughts and Further Reading
Every new company emerging from stealth has a good story to tell. And they all want it to be a memorable one. I think Stellus certainly has a good story to tell in terms of how it’s taking newer technologies to solve more modern storage problems. Not every workload requires massive amounts of scalability at the storage layer. But for those that do, it can be hard to solve that problem with traditional storage architectures. The key-value implementation from Stellus allows it to do some interesting stuff with larger drives, and I can see how this will have appeal as we move towards the use of larger and larger SSDs to store data, particularly as so many modern storage workloads leverage unstructured data.
More and more NVMe-oF solutions are hitting the market now. I think this is a sign that evolving workload requirements are pushing the capabilities of traditional storage solutions. A lot of the data we’re dealing with is coming from machines, not people. It’s not about how I derive value from a spreadsheet. It’s about how I derive value from terabytes of log data from Internet of Things devices. This requires scale – in terms of both capacity and performance. Using key-value over NVMe-oF is an interesting approach to the challenge – one that I’m keen to explore further as Stellus makes its way in the market. In the meantime, check out Chris Evans’s article on Stellus over at Architecting IT.