Storage Field Day 22 – Wrap-up and Link-o-rama

Disclaimer: I recently attended Storage Field Day 22.  Some expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

This is a quick post to say thanks once again to Stephen and Ben, and the presenters at Storage Field Day 22. I had a great time. For easy reference, here’s a list of the posts I wrote covering the event (they may not match the order of the presentations).

Storage Field Day 22 – I’ll Be At Storage Field Day 22

Storage Field Day 22 – (Fairly) Full Disclosure

Komprise – It’s About Data, Not Storage

Infrascale Puts The Customer First

Fujifilm Object Archive – Not Your Father’s Tape Library

Intel – It’s About Getting The Right Kind Of Fast At The Edge

CTERA – Storage The Way Your Users Want It

Pure Storage – Pure1 Makes Life Easy

Also, here are a number of links to posts by my fellow delegates (in no particular order). They’re all very smart people, and you should check out their stuff, particularly if you haven’t before. I’ll attempt to keep this updated as more posts are published. But if it gets stale, the Storage Field Day 22 landing page will have updated links.

Erik Ableson (@EAbleson)

Jason Benedicic (@JABenedicic)

Brandon Graves (@BrandonGraves08)

Mikael Korsgaard Jensen (@Jekomi)

David Klee (@KleeGeek)

Rob Koper (@50mu)

Ray Lucchesi (@RayLucchesi)

CTERA, Cloud NAS on steroids

Christian Mohn (@h0bbel)

Storage Field Day #22 — Here We Go

Enrico Signoretti (@esignoretti)

Wolfgang Stief (@SpeicherStief)

Storage Field Day 22: Ein Ausblick

Data://express 10: Datenmanagement, Backup Und Die Cloud

Justin Warren (@JPWarren)

Gestalt IT (@GestaltIT)

CTERA: Multi-cloud Unstructured Data Management

Handling Object Storage at Scale with Fujifilm Object Archive

Intel and Lightbits Labs Make Storage Optimization Easy

Pure1 by Pure Storage Optimizes Hybrid Storage Management

Streamlining BDR with DRaaS from Infrascale

Analytics-Driven Unstructured Data Management from Komprise
[image courtesy of Stephen Foskett]

Pure Storage – Pure1 Makes Life Easy

Disclaimer: I recently attended Storage Field Day 22.  Some expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Pure Storage recently presented at Storage Field Day 22. You can see videos of the presentation here, and download my rough notes from here.

 

What You Need

If you’ve spent any time working with storage infrastructure, you’ll know that it can be a pain to manage and operate efficiently. Pure1 has always been a great tool for managing your Pure Storage fleet. But Pure has taken that idea of collecting and analysing a whole bunch of telemetry data even further. So what is it you need?

Management and Observation

  • Setup needs to be easy to reduce risk and accelerate delivery
  • Alerting needs to be predictive to prevent downtime
  • Management has to be possible from anywhere to be responsive

Planning and Upgrades

  • Determining when to buy requires forecasting to manage costs
  • Workload optimisations should be intuitive to help keep users happy
  • Upgrades need to be non-disruptive to prevent downtime

Purchasing and Scaling

  • Resources should be available as a service for on-demand scaling
  • Data service purchasing should be self-service for speed and simplicity
  • Hybrid cloud should be available from one vendor, in one place

 

Pure1 Has It

Sounds great, so how do you get that with Pure1? Pure breaks it down into three key areas:

  • Optimise
  • Recommend
  • Empower

Optimise

Reduce the time you spend on management and take the guesswork out of support. With aggregated fleet / group metrics, you get:

  • Capacity utilisation
  • Performance
  • Data reduction savings
  • Alerts and support cases

[image courtesy of Pure Storage]

Recommend

Every organisation wants to improve the speed and accuracy of resource planning while enhancing user experience. Pure1 provides the ability to use “What-If” modelling to stay ahead of demands.

  • Select application to be added
  • Provide sizing details
  • Get recommendations based on Pure best practices and AI analysis of our telemetry databases

[image courtesy of Pure Storage]

The process is alarmingly simple:

  • Pick a Workload Type – Choose a preset application type from a list of the most deployed enterprise applications, including SAP HANA, Microsoft SQL, and more.
  • Set Application Parameter – Define size of the deployment. Attributes are auto-populated based on Pure1 analytics across its global database. Adjust as needed for your environment.
  • Simulate Deployment – Identify where you want to deploy the application data. Pure1 analyses the impact on performance and capacity.
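As a rough illustration of what a check like this involves (every preset, number, and name below is hypothetical – this is not Pure1’s actual model), here’s a toy version of the sizing simulation:

```python
# Toy "what-if" deployment check. All presets and figures are made up for
# illustration; the real service draws on fleet-wide telemetry and AI analysis.
WORKLOAD_PRESETS = {
    "sql-server": {"gb_per_user": 2.0, "iops_per_user": 30},
    "sap-hana":   {"gb_per_user": 5.0, "iops_per_user": 50},
}

def simulate_deployment(array, workload, users):
    """Estimate a new workload's footprint and check it against array headroom."""
    preset = WORKLOAD_PRESETS[workload]
    needed_gb = preset["gb_per_user"] * users
    needed_iops = preset["iops_per_user"] * users
    fits = (array["free_gb"] >= needed_gb
            and array["headroom_iops"] >= needed_iops)
    return {"needed_gb": needed_gb, "needed_iops": needed_iops, "fits": fits}

array = {"name": "array-01", "free_gb": 20_000, "headroom_iops": 150_000}
result = simulate_deployment(array, "sql-server", users=5_000)
print(result["fits"])   # True: 10,000 GB and 150,000 IOPS fit within headroom
```

The real value, of course, is in the quality of the presets – Pure1 auto-populates those attributes from analytics across its global install base rather than asking you to guess.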

Empower

Build your hybrid-cloud infrastructure your way and on demand without the headaches of legacy purchasing. Pure has a great story to tell when it comes to Pure as-a-Service and OpEx acquisition models.

 

Thoughts and Further Reading

In a previous job, I was a Pure1 user and found the overall experience to be tremendous. Much has changed with Pure1 since I first installed it on my phone, and it’s my opinion that the integration and usefulness of the service have both increased exponentially. The folks at Pure have always understood that it’s not enough to deliver high-performance storage solutions built on All-Flash. This is considered table-stakes nowadays. Instead, Pure has done a great job of focussing on the management and operation of these high-performance storage solutions to ensure that users get what they need from the system. I sound like a broken record, I’m sure, but it’s this relentless focus on the customer experience that I think sets Pure apart from many of its competitors.

Most of the tier 1 storage vendors have had a chop at delivering management and operations systems that make extensive use of field telemetry data and support knowledge to deliver proactive support for customers. Everyone is talking about how they use advanced analytics, AI / ML, and so on to deliver a great support experience. But I think it’s the other parts of the equation that really bring it together nicely for Pure: the “evergreen” hardware lifecycle options, the consumption flexibility, and the focus on constantly improving the day 2 operations experience that’s required when managing storage at scale in the enterprise. Add to that the willingness to embrace hybrid cloud technologies, and the expanding product portfolio, and I’m looking forward to seeing what’s next for Pure. Finally, shout out to Stan Yanitskiy for jumping in at the last minute to present when his colleague had a comms issue – I think the video shows that he handled it like a real pro.

CTERA – Storage The Way Your Users Want It

Disclaimer: I recently attended Storage Field Day 22.  Some expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

CTERA recently presented at Storage Field Day 22. You can see their videos from Storage Field Day 22 here, and download a PDF copy of my rough notes from here.

 

CTERA?

In a nutshell, CTERA is:

  • Enterprise NAS over Object
  • 100% Private
  • Multi-cloud, hybrid consistent
  • Delivers data placement policy and mobility
  • Caching, not tiering
  • Zero-trust

 

The Problem

So what’s the problem we’re trying to solve with unstructured data?

  • Every IT environment is hybrid
  • More data is being generated at the edge
  • Workload placement strategies are driving storage placement
  • Storage must be instrumented and accessible anywhere

Seems simple enough, but edge storage is hard to get right.

[image courtesy of CTERA]

What Else Do You Want?

We want a lot from our edge storage solutions, including the ability to:

  • Migrate data to cloud, while keeping a fast local cache
  • Connect branches and users over a single namespace
  • Enjoy a HQ-grade experience regardless of location
  • Achieve 80% cost savings with global dedupe and cloud economics

 

The Solution?

CTERA Multi-cloud Global File System – a “software-defined file over object with distributed SMB/NFS edge caching and endpoint collaboration”.

[image courtesy of CTERA]

CTERA Architecture

  • Single namespace connecting HQ, branches and users with ACL support
  • Object-native backend with cache accelerated access for remote sites
  • Multi-cloud scale-out to customer’s private or public infrastructure
  • Source-based encryption and global deduplication
  • Multi-tenant administration scalable to thousands of sites
  • Data management ecosystem for content security, analytics and DevOps automation

[image courtesy of CTERA]
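The “caching, not tiering” point is worth pausing on: the object store always holds the authoritative copy, and the edge just keeps a bounded cache of hot data, so evicting something locally never loses it. A toy sketch of that behaviour (the names and LRU eviction policy here are mine, not CTERA’s):

```python
class EdgeFiler:
    """Toy sketch of 'caching, not tiering': the object store always holds the
    authoritative copy; the edge only holds a bounded cache of hot files."""
    def __init__(self, object_store, cache_capacity=2):
        self.object_store = object_store   # dict standing in for the cloud backend
        self.cache = {}                    # path -> bytes (fast local copy)
        self.capacity = cache_capacity
        self.lru = []                      # least-recently-used order

    def write(self, path, data):
        self.object_store[path] = data     # write-through: cloud copy is authoritative
        self._cache_put(path, data)

    def read(self, path):
        if path in self.cache:             # fast local hit
            self._touch(path)
            return self.cache[path]
        data = self.object_store[path]     # cache miss: pull from the object backend
        self._cache_put(path, data)
        return data

    def _touch(self, path):
        self.lru.remove(path)
        self.lru.append(path)

    def _cache_put(self, path, data):
        if path in self.cache:
            self._touch(path)
        else:
            if len(self.cache) >= self.capacity:
                evicted = self.lru.pop(0)  # safe to evict: it's a cache, not a tier
                del self.cache[evicted]
            self.lru.append(path)
        self.cache[path] = data

cloud = {}
filer = EdgeFiler(cloud)
filer.write("/finance/q1.xlsx", b"...")
filer.write("/media/promo.mp4", b"...")
filer.write("/docs/spec.pdf", b"...")      # evicts the oldest cached file locally
assert "/finance/q1.xlsx" in cloud         # but the cloud copy is untouched
```

With tiering, by contrast, the evicted copy would have been the only copy, and getting it back becomes a migration exercise rather than a cache fill.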

Use Cases?

  • NAS Modernisation – Hybrid Edge Filer, Object-Based Filesystem, Elastic Scaling, Built-in Backup & DR
  • Remote Workforce – Endpoint Sync, Share, Backup & Cached Drive; Distributed VDI Clusters; Small-Form-Factor Filer; Mobile Collaboration
  • Media – Large Dataset Handling, Ultra-Fast Cloud Sync, macOS Experience, Cloud Streaming
  • Multi-site Collaboration – Global File System; Distributed Sync; Scalable Central Management
  • Edge Data Processing – Integrated HCI Filers; Distributed Data Analysis; Machine-Generated Data
  • Container-Native – Global File System Across Distributed Kubernetes Clusters and Tethered Cloud Services

 

Thoughts and Further Reading

It should come as no surprise that people expect data to be available to them everywhere nowadays. And that’s not just sync and share solutions or sneaker net products on USB drives. No, folks want to be able to access corporate data in a non-intrusive fashion. It gets worse for the IT department though, because your end users aren’t just “heavy spreadsheet users”. They’re also editing large video files, working on complicated technical design diagrams, and generating gigabytes of log files for later analysis. And it’s not enough to say “hey, can you download a copy and upload it later and hope that no-one else has messed with the file?”. Users are expecting more from their systems. There are a variety of ways to deal with this problem, and CTERA seems to have provided a fairly robust solution, with many ways of accessing data, collaborating, and storing data in the cloud and at the edge. The focus isn’t limited to office automation data, with key verticals such as media and entertainment, healthcare, and financial services all having solutions suited to their particular requirements.

CTERA’s Formula One slide is impressive, as is the variety of ways it works to help organisations address the explosion of unstructured data in the enterprise. With large swathes of knowledge workers now working more frequently outside the confines of the head office, these kinds of solutions are only going to be in greater demand, particularly those that can leverage cloud in an effective (and transparent) fashion. I’m excited to see what’s to come with CTERA. Check out Ray’s article for a more comprehensive view of what CTERA does.

Intel – It’s About Getting The Right Kind Of Fast At The Edge

Disclaimer: I recently attended Storage Field Day 22.  Some expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Intel recently presented at Storage Field Day 22. You can see videos of the presentation here, and download my rough notes from here.

 

The Problem

A lot of countries have used lockdowns as a way to combat the community transmission of COVID-19. Apparently, this has led to an uptick in the consumption of streaming media services. If you’re somewhat familiar with streaming media services, you’ll understand that your favourite episode of Hogan’s Heroes isn’t being delivered from a giant storage device sitting in the bowels of your streaming media provider’s data centre. Instead, it’s invariably being delivered to your device from a content delivery network (CDN) device.

 

Content Delivery What?

CDNs are not a new concept. The idea is that you have a bunch of web servers geographically distributed delivering content to users who are also geographically distributed. Think of it as a way to cache things closer to your end users. There are many reasons why this can be a good idea. Your content will load faster for users if it resides on servers in roughly the same area as them. Your bandwidth costs are generally a bit cheaper, as you’re not transmitting as much data from your core all the way out to the end user. Instead, those end users are getting the content from something close to them. You can potentially also deliver more versions of content (in terms of resolution) easily. It can also be beneficial in terms of resiliency and availability – an outage on one part of your network, say in Palo Alto, doesn’t need to necessarily impact end users living in Sydney. Cloudflare does a fair bit with CDNs, and there’s a great overview of the technology here.
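To make that “serve locally, fetch from the origin on a miss” behaviour concrete, here’s a minimal read-through cache sketch – nothing vendor-specific, and all names are mine:

```python
import time

class EdgeCache:
    """Toy CDN edge node: serve cached content locally, fall back to origin on a miss."""
    def __init__(self, origin_fetch, ttl_seconds=300):
        self.origin_fetch = origin_fetch      # callable that retrieves content from the core
        self.ttl = ttl_seconds                # how long a cached copy stays fresh
        self.store = {}                       # url -> (content, fetched_at)

    def get(self, url):
        entry = self.store.get(url)
        if entry is not None:
            content, fetched_at = entry
            if time.time() - fetched_at < self.ttl:
                return content, "HIT"         # served from the edge, close to the user
        content = self.origin_fetch(url)      # cache miss: one trip back to the origin
        self.store[url] = (content, time.time())
        return content, "MISS"

def origin(url):
    return f"<content of {url}>"

edge = EdgeCache(origin)
print(edge.get("/episode-1")[1])  # MISS - first request goes back to the origin
print(edge.get("/episode-1")[1])  # HIT  - subsequent requests stay at the edge
```

Every HIT is bandwidth that never traverses the core network, which is where the latency and cost benefits described above come from.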

 

Isn’t All Content Delivery The Same?

Not really. As Intel covered in its Storage Field Day presentation, there are some differences with the performance requirements of video on demand and live-linear streaming CDN solutions.

Live-Linear Edge Cache

Live-linear video streaming is similar to the broadcast model used in television. It’s basically programming content streamed 24/7, rather than stuff that the user has to search for. Several minutes of content are typically cached to accommodate out-of-sync users and pause / rewind activities. You can read a good explanation of live-linear streaming here.

[image courtesy of Intel]

In the example above, Intel Optane PMem was used to address the needs of live-linear streaming.

  • Live-linear workloads consume a lot of memory capacity to maintain a short-lived video buffer.
  • Intel Optane PMem is less expensive than DRAM.
  • Intel Optane PMem has extremely high endurance, to handle frequent overwrite.
  • Flexible deployment options – Memory Mode or App-Direct, consuming zero drive slots.

With this solution they were able to achieve better channel and stream density per server than with DRAM-based solutions.
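The short-lived buffer behaviour described above can be sketched as a rolling segment cache – segment length and window size here are illustrative, not Intel’s numbers:

```python
from collections import deque

SEGMENT_SECONDS = 2      # typical streaming segment length (illustrative)
BUFFER_MINUTES = 5       # "several minutes" kept for pause/rewind and lagging viewers

class LiveLinearBuffer:
    """Rolling cache of the most recent live segments; older segments age out."""
    def __init__(self):
        self.max_segments = (BUFFER_MINUTES * 60) // SEGMENT_SECONDS
        self.segments = deque()               # (sequence_number, payload)

    def ingest(self, seq, payload):
        self.segments.append((seq, payload))
        while len(self.segments) > self.max_segments:
            self.segments.popleft()           # constant overwrite: high-endurance media helps

    def fetch(self, seq):
        for s, payload in self.segments:
            if s == seq:
                return payload                # out-of-sync viewer still inside the window
        return None                           # too far behind the live edge

buf = LiveLinearBuffer()
for seq in range(1000):
    buf.ingest(seq, b"segment-bytes")
print(buf.fetch(999) is not None)   # True  - recent segment is still available
print(buf.fetch(0) is not None)     # False - aged out of the five-minute window
```

The buffer is overwritten continuously and never needs to survive a reboot, which is why a large, cheap, high-endurance memory tier suits it better than either DRAM or conventional SSDs.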

Video on Demand (VoD)

VoD providers typically offer a large library of content allowing users to view it at any time (e.g. Netflix and Disney+). VoD servers are a little different to live-linear streaming CDNs. They:

  • Typically require large capacity and drive fanout for performance / failure domains; and
  • Have a read-intensive workload, with typically large IOs.

[image courtesy of Intel]

 

Thoughts and Further Reading

I first encountered the magic of CDNs years ago when working in a data centre that hosted some Akamai infrastructure. Windows Server updates were super zippy, and it actually saved me from having to spend a lot of time standing in the cold aisle. Fast forward about 15 years, and CDNs are being used for all kinds of content delivery on the web. With whatever the heck this is in terms of the new normal, folks are putting more and more strain on those CDNs by streaming high-quality, high-bandwidth TV and movie titles into their homes (except in backwards places like Australia). As a result, content providers are constantly searching for ways to tweak the throughput of these CDNs to serve more and more customers, and deliver more bandwidth to those users.

I’ve barely skimmed the surface of how CDNs help providers deliver content more effectively to end users. What I did find interesting about this presentation was that it reinforced the idea that different workloads require different infrastructure solutions to deliver the right outcomes. It sounds simple when I say it like this, but I guess I’ve thought about streaming video CDNs as being roughly the same all over the place. Clearly they aren’t, and it’s not just a matter of jamming some SSDs in 1RU servers and hoping that your content will be delivered faster to punters. It’s important to understand that Intel Optane PMem and Intel Optane 3D NAND can give you different results depending on what you’re trying to do, with PMem arguably giving you better value for money (per GB) than DRAM. There are some great papers on this topic available on the Intel website. You can read more here and here.

Random Short Take #60

Welcome to Random Short Take #60.

  • VMware Cloud Director 10.3 went GA recently, and this post will point you in the right direction when it comes to planning the upgrade process.
  • Speaking of VMware products hitting GA, VMware Cloud Foundation 4.3 became available about a week ago. You can read more about that here.
  • My friend Tony knows a bit about NSX-T, and certificates, so when he bumped into an issue with NSX-T and certificates in his lab, it was no big deal to come up with the fix.
  • Here’s everything you wanted to know about creating an external bootable disk for use with macOS 11 and 12 but were too afraid to ask.
  • I haven’t talked to the good folks at StarWind in a while (I miss you Max!), but this article on the new All-NVMe StarWind Backup Appliance by Paolo made for some interesting reading.
  • I loved this article from Chin-Fah on storage fear, uncertainty, and doubt (FUD). I’ve seen a fair bit of it slung about having been a customer and partner of some big storage vendors over the years.
  • This whitepaper from Preston on some of the challenges with data protection and long-term retention is brilliant and well worth the read.
  • Finally, I don’t know how I came across this article on hacking PlayStation 2 machines, but here you go. Worth a read if only for the labels on some of the discs.

Fujifilm Object Archive – Not Your Father’s Tape Library

Disclaimer: I recently attended Storage Field Day 22.  Some expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Fujifilm recently presented at Storage Field Day 22. You can see videos of the presentation here, and download my rough notes from here.

 

Fujifilm Overview

You’ve heard of Fujifilm before, right? They do a whole bunch of interesting stuff – batteries, cameras, copiers. Nami Matsumoto, Director of DMS Marketing and Operations, took us through some of Fujifilm’s portfolio. Fujifilm’s slogan is “Value From Innovation”, and it certainly seems to be looking to extract maximum value from its $1.4B annual spend on research and development. The Recording Media Products Division is focussed on helping “companies future proof their data”.

[image courtesy of Fujifilm]

 

The Problem

The challenge, as always (it seems), is that data growth continues apace while budgets remain flat. As a result, both security and scalability are frequently sacrificed when solutions are deployed in enterprises.

  • Rapid data creation: “More than 59 Zettabytes (ZB) of data will be created, captured, copied, and consumed in the world this year” (IDC 2020)
  • Shift from File to Object Storage
  • Archive Market – 60 – 80%
  • Flat IT budgets
  • Cybersecurity concerns
  • Scalability

 

Enter The Archive

FUJIFILM Object Archive

Chris Kehoe, Director of DMS Sales and Engineering, spent time explaining what exactly FUJIFILM Object Archive was. “Object Archive is an S3 based archival tier designed to reduce cost, increase scale and provide the highest level of security for long-term data retention”. In short, it:

  • Works like Amazon S3 Glacier in your DC
  • Simply integrates with other object storage
  • Scales on tape technology
  • Secure with air gap and full chain of custody
  • Predictable costs and TCO with no API or egress fees

Workloads?

It’s optimised to handle the long-term retention of data, which is useful if you’re doing any of these things:

  • Digital preservation
  • Scientific research
  • Multi-tenant managed services
  • Storage optimisation
  • Active archiving

What Does It Look Like?

There are a few components that go into the solution, including a:

  • Storage Server
  • Smart cache
  • Tape Server

[image courtesy of Fujifilm]

Tape?

That’s right, tape. The tape library supports LTO7, LTO8, and TS1160. The data is written using the “OTFormat” specification (you can read about that here). The idea is that it packs a bunch of objects together so they get written efficiently.

[image courtesy of Fujifilm]
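I haven’t implemented OTFormat itself, but the underlying packing idea – many small objects coalesced into large sequential records so the tape drive can stream rather than stop and seek per object – can be sketched like this (the record layout below is my own simplification, not the actual specification):

```python
import io
import struct

def pack_objects(objects, record_size=512 * 1024):
    """Illustrative only: pack (key, bytes) objects into large fixed-size records
    so the tape drive streams sequentially instead of seeking per object."""
    records, current, index = [], io.BytesIO(), {}
    for key, data in objects:
        header = struct.pack(">I", len(data))          # 4-byte length prefix
        if current.tell() + len(header) + len(data) > record_size and current.tell():
            records.append(current.getvalue())         # record full: flush to "tape"
            current = io.BytesIO()
        index[key] = (len(records), current.tell())    # (record number, offset)
        current.write(header)
        current.write(data)
    if current.tell():
        records.append(current.getvalue())
    return records, index

def read_object(records, index, key):
    """Locate one object via the index, then read only its record."""
    rec_no, offset = index[key]
    rec = records[rec_no]
    (length,) = struct.unpack_from(">I", rec, offset)
    return rec[offset + 4 : offset + 4 + length]

objs = [(f"obj-{i}", bytes([i % 256]) * 200_000) for i in range(10)]
records, index = pack_objects(objs)
assert read_object(records, index, "obj-7") == objs[7][1]
```

The index is the important bit: it lets a retrieval jump straight to the right record rather than scanning the whole tape, which is what makes object semantics workable on sequential media.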

Object Storage Too

It uses an “S3-compatible” API – the S3 server is built on Scality’s Zenko. From an object storage perspective, it works with Cloudian HyperStore, Caringo Swarm, NetApp StorageGRID, and Scality RING. It also has Starfish and Tiger Bridge support.

Other Notes

The product starts at 1PB of licensing. You can read the Solution Brief here. There’s an informative White Paper here. And there’s one of those nice Infographic things here.

Deployment Example

So what does this look like from a deployment perspective? One example was a typical primary storage deployment, with data archived to an on-premises object storage platform (in this case NetApp StorageGRID). When your archive got really “cold”, it would be moved to the Object Archive.

[image courtesy of Fujifilm]

[image courtesy of Fujifilm]

 

Thoughts

Years ago, when a certain deduplication storage appliance company was acquired by a big storage slinger, stickers with “Tape is dead, get over it” were given out to customers. I think I still have one or two in my office somewhere. And I think the sentiment is spot on, at least in terms of the standard tape library deployments I used to see in small to mid to large enterprise. The problem that tape was solving for those organisations at the time has largely been dealt with by various disk-based storage solutions. There are nonetheless plenty of use cases where tape is still considered useful. I’m not going to go into every single reason, but the cost per GB of tape, at a particular scale, is hard to beat. And when you want to safely store files for a long period of time, even offline? Tape, again, is hard to beat. This podcast from Curtis got me thinking about the demise of tape, and I think this presentation from Fujifilm reinforced the thinking that it was far from on life support – at least in very specific circumstances.

Data keeps growing, and we need to keep it somewhere, apparently. We also need to think about keeping it in a way that means we’re not continuing to negatively impact the environment. It doesn’t necessarily make sense to keep really old data permanently online, despite the fact that it has some appeal in terms of instant access to everything ever. Tape is pretty good when it comes to relatively low energy consumption, particularly given the fact that we can’t yet afford to put all this data on All-Flash storage. And you can keep it available in systems that can be relied upon to get the data back, just not straight away. As I said previously, this doesn’t necessarily make sense for the home punter, or even for the small to midsize enterprise (although I’m tempted now to resurrect some of my older tape drives and see what I can store on them). It really works better at large scale (dare I say hyperscale?). Given that we seem determined to store a whole bunch of data with the hyperscalers, and for a ridiculously long time, it makes sense that solutions like this will continue to exist, and evolve. Sure, Fujifilm has sold something like 170 million tapes worldwide. But this isn’t simply a tape library solution. This is a wee bit smarter than that. I’m keen to see how this goes over the next few years.

Infrascale Puts The Customer First

Disclaimer: I recently attended Storage Field Day 22.  Some expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Infrascale recently presented at Storage Field Day 22. You can see videos of the presentation here, and download my rough notes from here.

 

Infrascale and Customer Experience

Founded in 2011, Infrascale is headquartered in Reston, Virginia, with around 170 employees and offices in Ukraine and India as well. As COO Brian Kuhn points out in the presentation, the company is “[a]ll about customers and their data”. Infrascale’s vision is “to be the most trusted data protection provider”.

Build Trust via Four Ps

Predictable

  • Reliable connections, response time, product
  • Work side by side like a dependable friend

Personal

  • People powered – partners, not numbers
  • Your success is our success

Proficient

  • Support and product experts with the right tools
  • Own the issue from beginning to end

Proactive

  • Onboarding, outreach to proactively help you
  • Identify issues before they impact your business

“Human beings dealing with human beings”

 

Product Portfolio

Infrascale Cloud Application Backup (ICAB)

SaaS Backup

  • Back up Microsoft 365, Google Workspace, Salesforce, Box, and Dropbox
  • Recover individual items (mail, file, or record) or entire mailboxes, folders, or databases
  • Close the retention gap between the SaaS provider and corporate, legal, and / or regulatory policy

Infrascale Cloud Backup (ICB)

Endpoint Backup

  • Back up desktop, laptop, or mobile devices directly to the cloud – wherever you work
  • Recover data in seconds – and with ease
  • Optimised for branch office and remote / home workers
  • Provides ransomware detection and remediation

Infrascale Backup and Disaster Recovery (IBDR)

Backup and DR / DRaaS for Servers

  • Back up mission-critical servers to both an on-premises appliance and a bootable cloud appliance
  • Boot ready in ~2 minutes (locally or in the cloud)
  • Restore system images or files / folders
  • Optimised for VMware and Hyper-V VMs and Windows bare metal

 

Digging Deeper with IBDR

What Is It?

Infrascale describes IBDR as a hybrid-cloud solution, with hardware and software on-premises, and service infrastructure in the cloud. In terms of DR as a service, Infrascale provides the ability to back up and replicate your data to a secondary location. In the event of a disaster, customers have the option to restore individual files and folders, or the entire infrastructure if required. Restore locations are flexible as well, with a choice of on-premises or in the cloud. Importantly, you also have the ability to fail back when everything’s sorted out.

One of the nice features of the service is unlimited DR and failover testing, and there are no fees attached to testing, recovery, or disaster failover.

Range

The IBDR solution also comes in a few different versions, as the table below shows.

[image courtesy of Infrascale]

The appliances are also available in a range of shapes and sizes.

[image courtesy of Infrascale]

Replication Options

In terms of replication, there are multiple destinations available, and you can fairly easily fire up workloads in the Infrascale cloud if need be.

[image courtesy of Infrascale]

 

Thoughts and Further Reading

Anyone who’s worked with data protection solutions will understand that it can be difficult to put together a combination of hardware and software that meets the needs of the business from a commercial, technical, and process perspective – particularly when you’re starting at a small scale and moving up from there. Putting together a managed service for data protection and disaster recovery is possibly harder still, given that you’re trying to accommodate a wide variety of use cases and workloads. And doing this using commercial off-the-shelf offerings can be a real pain. You’re invariably tied to the roadmap of the vendor in terms of features, and your timeframes aren’t normally the same as your vendor (unless you’re really big). So there’s a lot to be said for doing it yourself. If you can get the software stack right, understand what your target market wants, and get everything working in a cost-effective manner, you’re onto a winner.

I commend Infrascale for the level of thought the company has given to this solution, its willingness to work with partners, and the fact that it’s striving to be the best it can in the market segment it’s targeting. My favourite part of the presentation was hearing the phrase “we treat [data] like it’s our own”. Data protection, as I’ve no doubt rambled on about before, is hard, and your customers are trusting you with getting them out of a pickle when something goes wrong. I think it’s great that the folks at Infrascale have this at the centre of everything they’re doing. I get the impression that it’s “all care, all responsibility” when it comes to the approach taken with this offering. I think this counts for a lot when it comes to data protection and DR as a service offerings. I’ll be interested to see how support for additional workloads gets added to the platform, but what they’re doing now seems to be enough for many organisations. If you want to know more about the solution, the resource library has some handy datasheets, and you can get an idea of some elements of the recommended retail pricing from this document.

Komprise – It’s About Data, Not Storage

Disclaimer: I recently attended Storage Field Day 22.  Some expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Komprise recently presented at Storage Field Day 22. You can see their videos from Storage Field Day 22 here, and download a PDF copy of my rough notes from here.

 

The Age Of Data, Not Storage

It’s probably been the age of data for some time now, but I couldn’t think of a catchy heading. One comment from the Komprise folks during the presentation that really stood out to me was “Data outlives its storage infrastructure”. If I think back ten years to how I thought about managing data movement, it was certainly tied to the storage platform hosting the data, rather than what the data did. Whenever I had to move from one array to the next, or one protocol to another, I wasn’t thinking in terms of where the data would necessarily be best placed to serve the business. Generally speaking, I was approaching the problem in terms of getting good performance for blocks and files, but rarely was I thinking in terms of the value of the data to the business. Nowadays, it seems that there’s an improved focus on getting the “[d]ata in the right place at the right time – not just for efficiency – but to extract maximum value”. We’re no longer thinking about data in terms of old stuff living on slow storage, and fresh bits living on the fast stuff. As the amount of data being managed in enterprises continues to grow at an insane rate, it’s becoming more important than ever to understand just what usefulness the data offers the business.

[image courtesy of Komprise]

The variety of storage platforms available now is also a little more extensive than it was last century, and that presents some more interesting challenges in getting the data to where it needs to be. As I mentioned earlier, data growth is going berserk the world over. Add to this ubiquitous cloud access (with IT departments struggling to keep up with the governance necessary to wrangle these solutions into some sensible shape) and the pressure on most enterprises to save money wherever possible, and data management presents real problems for a lot of enterprise shops.

[image courtesy of Komprise]


Analytics To The Rescue!

Komprise has come up with an analytics-driven approach to data management that is built on some sound foundational principles. The solution needs to:

  1. Go beyond storage efficiency – it’s not just about dedupe and compression at a certain scale.
  2. Be multi-directional – you need to be able to get stuff back.
  3. Not disrupt users and workflows – do that and you may as well throw the solution in the bin.
  4. Create new uses for your data – it’s all about value, after all.
  5. Put your data first.

The final point is possibly the most critical one. If I think about the storage-centric approaches to data management that I’ve seen over the years, there’s definitely been a viewpoint that the underlying storage infrastructure would heavily influence how the data is used, rather than the data dictating how the storage platforms should be architected. Some of that is a question of visibility – if you don’t understand your data, it’s hard to come up with tailored solutions. Some of the problem is also the disconnect that seems to exist between “the business” and IT departments in a large number of enterprises. It’s not an easy problem to solve, by any stretch, but it does explain some of the novel approaches to data management that I’ve seen over the years.


Thoughts and Further Reading

Data management is hard, and it keeps getting harder because we keep making more and more data. And we frequently don’t have the time, or take the time, to work out what value the data actually has. This problem isn’t going to go away, so it’s good to see Komprise moving the conversation past that and into the realm of how we can best focus on deriving value from the data itself. There was certainly some interesting discussion during the presentation about the term analytics, and what that really meant in terms of the Komprise solution. Ultimately, though, I’m a fan of anything that elevates the conversation beyond “I can move your terabytes from this bucket to that bucket”. I want something that starts to tell me more about what type of data I’m storing, who’s using it, and how they’re using it. That’s when it gets interesting from a data management perspective. I think there’s a ways to go in terms of getting this solution right for everyone, but it strikes me that Komprise is on the right track, and I’m looking forward to seeing how the solution evolves alongside the storage technologies it’s using to get the most from everyone’s data. You can read more on the Komprise approach here.

Storage Field Day 22 – (Fairly) Full Disclosure

Disclaimer: I recently attended Storage Field Day 22. Some expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event. Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Here are my notes on gifts, etc, that I received as a conference attendee at Storage Field Day 22. This is by no stretch an interesting post from a technical perspective, but it’s a way for me to track and publicly disclose what I get and how it looks when I write about various things. With all of this stuff happening (waves hands around), it’s not going to be as lengthy as normal, but I did receive a box of stuff in the mail, so I wanted to disclose it.

The Tech Field Day team sent over some stickers, a TFD tote bag, a TFD pin, and a TFD patch. Fujifilm kindly gave me a 16GB USB drive (with both USB 2 and Lightning connectors), a webcam cover, stylus, USB charging cable, a Bluetooth tracker, a phone cradle, and a beach towel. Komprise sent over some neat socks, three Komprise-branded Titleist golf balls, and a sticker.

It wasn’t fancy food and limos this time around, but it was nonetheless an enjoyable event. Hopefully we can get back to in-person events some time this decade. Thanks again to Stephen and the team for having me back. Thanks also to my employer for giving me time away from the office to attend.

Random Short Take #59

Welcome to Random Short Take #59.

  • It’s been a while since I’ve looked at Dell Technologies closely, but Tech Field Day recently ran an event and Pietro put together a pretty comprehensive view of what was covered.
  • Dr Bruce Davie is a smart guy, and this article over at El Reg on decentralising Internet services made for some interesting reading.
  • Clean installs and Time Machine system recoveries on macOS aren’t as nice as they used to be. I found this out a day or two before this article was published. It’s worth reading nonetheless, particularly if you want to get your head around the various limitations with Recovery Mode on more modern Apple machines.
  • If you follow me on Instagram, you’ll likely realise I listen to records a lot. I don’t do it because they “sound better” though, I do it because it works for me as a more active listening experience. There are plenty of clowns on the Internet ready to tell you that it’s a “warmer” sound. They’re wrong. I’m not saying you should fight them, but if you find yourself in an argument this article should help.
  • Speaking of technologies that have somewhat come and gone (relax – I’m joking!), this article from Chris M. Evans on HCI made for some interesting reading. I always liked the “start small” approach with HCI, particularly when comparing it to larger midrange storage systems. But things have definitely changed when it comes to available storage and converged options.
  • In news via press releases, Datadobi announced version 5.12 of its data mobility engine.
  • Leaseweb Global has also made an announcement about a new acquisition.
  • Russ published an interesting article on new approaches to traditional problems. Speaking of new approaches, I was recently a guest on the On-Premise IT Podcast discussing when it was appropriate to scrap existing storage system designs and start again.