Disclaimer: I recently attended Storage Field Day 8. My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event. Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.
For each of the presentations I attended at SFD8, there are a few things I want to include in the post. Firstly, you can see video footage of the Primary Data presentation here. You can also download my raw notes from the presentation here. Finally, here’s a link to the Primary Data website that covers some of what they presented.
Primary Data presented at Storage Field Day 7, and I did a bit of a write-up here. At the time they had no shipping product, but now they do. Primary Data were big on putting data in the right place, and they’ve continued that theme with DataSphere.
Because we all want our storage to do well
I talk to my customers a lot about the concept of service catalogues for their infrastructure. Everyone is talking about X as a Service and the somewhat weird concept of applying consumer behaviour to the enterprise. But for a long time this approach has been painful, at least with mid-range storage products, because coming up with classifications of performance and availability for these environments is a non-trivial task. In larger environments, it’s also likely you won’t have consistent storage types across applications, with buckets of data stored all over the place and accessed via a bunch of different protocols. The following image demonstrates nicely the different kinds of performance levels you might apply to your environment, as not all applications are created equal. Neither are storage arrays, come to think of it.
[image courtesy of Primary Data]
Primary Data say that “every single capability and characteristic of the storage system can be thought of in terms of whether the data needs it or not”. As I said before, you need to look at your application requirements in terms of both (there’s a rough sketch of how you might model these after the list):
- Performance (reads vs writes, IOPS, bandwidth, latency); and
- Protection (data durability, availability, security).
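To make this a little more concrete, here’s a minimal sketch (my own illustration, not Primary Data’s API or data model) of how those two dimensions might be captured per class of data:

```python
from dataclasses import dataclass

@dataclass
class PerformanceObjective:
    """Illustrative performance targets for a class of data."""
    max_latency_ms: float    # worst acceptable latency
    min_iops: int            # sustained IOPS floor
    min_bandwidth_mbps: int  # throughput floor
    read_write_ratio: float  # e.g. 0.7 means 70% reads

@dataclass
class ProtectionObjective:
    """Illustrative protection targets for a class of data."""
    durability_nines: int    # e.g. 11 for "eleven nines" of durability
    availability_pct: float  # e.g. 99.99
    encrypted_at_rest: bool  # a stand-in for broader security requirements
```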
Of course, this isn’t simple when you then attempt to apply compute and network requirements to the applications as well, but let’s just stick with storage requirements for the time being. Once you understand the client, and the value of meeting those objectives for its data, you can start to apply Objectives and “Smart Objectives” (Objectives applied to particular types of data). With this approach, you can begin to understand the cost of specific performance and protection levels. Everyone wants Platinum, until they start having to pay for it. These costs can then be translated and presented as Service Level Agreements in your organisation’s service catalogue for consumption by various applications.
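To illustrate that last point, here’s a rough sketch of how priced tiers might sit behind a service catalogue, and how you’d match an application’s objectives to the cheapest tier that satisfies them. The tier names, numbers and prices are all invented for the example, not anything Primary Data presented:

```python
# Hypothetical catalogue -- tier specs and prices are made up for illustration.
CATALOGUE = {
    "Platinum": {"max_latency_ms": 1.0,  "availability_pct": 99.999, "cost_gb_month": 0.90},
    "Gold":     {"max_latency_ms": 5.0,  "availability_pct": 99.99,  "cost_gb_month": 0.40},
    "Silver":   {"max_latency_ms": 20.0, "availability_pct": 99.9,   "cost_gb_month": 0.10},
}

def cheapest_tier(max_latency_ms: float, availability_pct: float) -> str:
    """Return the cheapest tier that still meets the application's objectives."""
    candidates = [
        (spec["cost_gb_month"], name)
        for name, spec in CATALOGUE.items()
        if spec["max_latency_ms"] <= max_latency_ms
        and spec["availability_pct"] >= availability_pct
    ]
    if not candidates:
        raise ValueError("no tier satisfies these objectives")
    return min(candidates)[1]

# Everyone wants Platinum, until they see the price per GB:
print(cheapest_tier(max_latency_ms=20.0, availability_pct=99.9))    # Silver
print(cheapest_tier(max_latency_ms=1.0,  availability_pct=99.999))  # Platinum
```

The point of the exercise isn’t the code, it’s that once each tier has a cost attached, conversations about needing Platinum for everything tend to resolve themselves.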
Closing Thoughts and Further Reading
Primary Data has a lot of smart people involved in the product, and they always put on one hell of a whiteboard session when I see them present. To my mind, the key thing to understand with DataSphere isn’t that it will automatically transform your storage environment into a fully optimised, service-catalogue-enabled, application performance nirvana. Rather, it’s simply providing the smarts to leverage the insights that you provide. If you don’t have a service catalogue, or a feeling in your organisation that this might be a good thing to have, then you’re not going to get the full value out of Primary Data’s offering.
And while you’re at it, check out Cormac’s post for a typically thorough overview of the technology involved.