Dell – Dell Technologies World 2018 – See You In Las Vegas

This is a quick post to let you all know that I’ll be heading to Dell EMC’s annual conference (now known as Dell Technologies World) this year in Las Vegas, NV. I’m looking forward to catching up with some old friends and meeting some new ones. If you haven’t registered yet but feel like that’s something you might want to do – the registration page is here. To get a feel for what’s on offer, you can check out the agenda here. I’m keen to hear the latest from Dell EMC.

Massive thanks to Konstanze and Debbie from Dell EMC for organising the “influencer” pass for me. Keep an eye out for me at the conference and surrounding events and don’t be afraid to come and say hi (if you need a visual – think Grandad Wolverine).

Western Digital – The A Is For Active, The S Is For Scale

Disclaimer: I recently attended Storage Field Day 15.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.


Western Digital recently presented at Storage Field Day 15. You might recall there are a few different brands under the WD umbrella, including Tegile and HGST, and folks from both presented during Storage Field Day 15. I’d like to talk about the ActiveScale session, however, mainly because I’m interested in object solutions. I’ve written about Tegile previously, although obviously a fair bit has changed for them too. You can see their videos from Storage Field Day 15 here, and download a PDF copy of my rough notes from here.


ActiveScale, Probably Not What You Thought It Was

ActiveScale isn’t some kind of weight measurement tool for exercise fanatics, but rather the brand of scalable object system that HGST sells. It comes in two flavours: the P100 and X100. Apparently the letters in product names sometimes do mean things, with the “P” standing for Petabyte, and the “X” for Exabyte (possibly in the same way that X stands for Excellent). From a speeds and feeds perspective, the typical specs are as follows:

  • P100 – starts as low as 720TB, goes to 18PB. 17x 9s data durability, 4.6KVA typical power consumption; and
  • X100 – 5.4PB in a rack, 840TB – 52PB, 17x 9s data durability, 6.5KVA typical power consumption.

You can scale out to 9 expansion racks, with 52PB of scale out object storage goodness per namespace. Some of the key capabilities of the ActiveScale platform include:

  • Archive and Backup;
  • Active Data for Analytics;
  • Data Forever Architecture;
  • Versioning;
  • Encryption;
  • Replication;
  • Single Pane Management;
  • S3 Compatible APIs;
  • Multi-Geo Availability Zones; and
  • Scale Up and Scale Out.

They use “BitSpread” for dynamic data placement and you can read a little about their erasure coding mechanism here. “BitDynamics” assures continuous data integrity, offering the following features:

  • Background – verification process always running
  • Performance – not impacted by verification or repair
  • Automatic – all repairs happen with no intervention

There’s also a feature called “GeoSpread” for geographical availability, offering:

  • Single – Distributed erasure coded copy;
  • Available – Can sustain the loss of an entire site; and
  • Efficient – Better than 2 or 3 copy replication.
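
To put that last point in context, here’s a rough back-of-the-envelope comparison of raw capacity requirements. The 12+6 data and parity split below is purely an assumption on my part for illustration, not necessarily how BitSpread or GeoSpread is actually configured.

```python
# Rough raw-capacity comparison: one erasure coded copy vs 2 or 3 full copies.
# The 12+6 (data + parity) split is an assumption for illustration only, not
# ActiveScale's actual BitSpread/GeoSpread configuration.

def raw_per_usable(data_shards: int, parity_shards: int) -> float:
    """Raw capacity needed per unit of usable data under erasure coding."""
    return (data_shards + parity_shards) / data_shards

usable_pb = 1.0  # protect 1PB of user data

print(f"Erasure coded (12+6): {usable_pb * raw_per_usable(12, 6):.2f}PB raw")  # 1.50PB
print(f"2-copy replication:   {usable_pb * 2:.2f}PB raw")                      # 2.00PB
print(f"3-copy replication:   {usable_pb * 3:.2f}PB raw")                      # 3.00PB
```

Spread those 18 shards evenly across three sites and you can lose an entire site and still have enough shards left to reconstruct the data, which is the general idea behind a single distributed copy being both available and efficient.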


What Do I Use It For Again?

Like a number of other object storage systems in the market, ActiveScale is being positioned as a very suitable platform for:

  • Media & Entertainment
    • Media Archive
    • Tape replacement and augmentation
    • Transcoding
    • Playout
  • Life Sciences
    • Bio imaging
    • Genomic Sequencing
  • Analytics


Thoughts And Further Reading

Unlike a lot of people, I find technical sessions discussing object storage at extremely large scale to be really interesting. It’s weird, I know, but there’s something that I really like about the idea of petabytes of storage servicing media and entertainment workloads. Maybe it’s because I don’t frequently come across these types of platforms in my day job. If I’m lucky I get to talk to folks about using object as a scalable archive platform. Occasionally I’ll bump into someone doing life sciences work in a higher education setting, but they’ve invariably built something that’s a little more home-brew than HGST’s offering. Every now and then I’m lucky enough to spend some time with media types who regale me with tales of things that go terribly wrong when the wrong bit of storage infrastructure is put in the path of a particular editing workflow or transcode process. Oh how we laugh. I can certainly see these types of scalable platforms being a good fit for archive and tape replacement. I’m not entirely convinced they make for a great transcode or playout platform, but I’m relatively naive when it comes to those kinds of workloads. If there are folks reading this who are familiar with that kind of stuff, I’d love to have a chat.

But enough with my fascination with the media and entertainment industry’s infrastructure requirements. From what I’ve seen of ActiveScale, it looks to be a solid platform with a lot of very useful features. Coupled with the cloud management feature it seems like they’re worth a look. Western Digital aren’t just making hard drives for your NAS (and other devices), they’re doing a whole lot more, and a lot of it is really cool. You can read El Reg’s article on the X100 here.

NetApp United – Happy To Be Part Of The Team


NetApp recently announced the 2018 list of NetApp United members and I’m happy to see I’m on the list. If you’re not familiar with the NetApp United program, it’s “NetApp’s global influencer program. Founded in 2017, NetApp United is a community of 140+ individuals united by their passion for great technology and the desire to share their expertise with the world”. One of the nice things about it is the focus on inclusiveness, community and knowledge sharing. I’m doing a lot more with NetApp than I have in the past and I’m really looking forward to the year ahead from both the NetApp United perspective and the broader NetApp view. You can read the announcement here and view the list of members. Chan also did a nice little write-up you can read here. And while United isn’t a football team, I will leave you with this great quote from the inimitable Eric Cantona – “When the seagulls follow the trawler, it is because they think sardines will be thrown into the sea”.

Cohesity Understands The Value Of What Lies Beneath

Disclaimer: I recently attended Storage Field Day 15.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Cohesity recently presented at Storage Field Day 15. It’s not the first time I’ve spoken about them, and you can read a few of my articles on them here and here. You can see their videos from Storage Field Day 15 here, and download a PDF copy of my rough notes from here.


The Data Centre Is Boring

Well, not boring exactly. Okay, it’s a little boring. Cohesity talk a lot about the concept of secondary storage and, in their view, most of the storage occupying the DC is made up of secondary storage. Think of your primary storage tier as your applications, and your secondary storage as being comprised of:

  • Backups;
  • Archival data;
  • Analytics;
  • Test/Dev workloads; and
  • File shares.

In other words, it’s a whole lot of unstructured data. Cohesity like to talk about the “storage iceberg”, and it’s a pretty reasonable analogy for what’s happening.

[Image courtesy of Cohesity]


Cohesity don’t see all this secondary data as simply a steaming pile of unmanaged chaos and pain. Instead, they see it as a potential opportunity for modernisation. The secondary storage market has delivered, in Cohesity’s view, an opportunity to “[c]lean up the mess left by enterprise backup products”. The idea is that you can use an “Apple-like UI”, operating at “Google-like scale”, to consolidate workloads on the Cohesity DataPlatform and then take advantage of copy data management to really extract value from that data.


The Cohesity Difference

So what differentiates Cohesity from other players in the secondary storage space?

Mohit Aron (pictured above) took us through a number of features in the Cohesity DataPlatform that are making secondary storage both useful and interesting. These include:

  • Global Space Efficiency
    • Variable length dedupe
    • Erasure coding
  • QoS
    • Multi workload isolation
    • Noisy neighbour prevention
  • Instant Mass Restore
    • Any point in time
    • Highly available
  • Data Resiliency
    • Strict consistency
    • Ensures data integrity
  • Cloud/Apps Integration
    • Multiprotocol
    • Universal access

I’ve been fortunate enough to have some hands on experience with the Cohesity solution and can attest that these features (particularly things like storage efficiency and resiliency) aren’t just marketing. There are some other neat features, such as public cloud support with AWS and Azure that are also worthy of further investigation.
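
As an aside, the “variable length dedupe” mentioned above is generally built on content-defined chunking, where chunk boundaries are derived from the data itself rather than fixed offsets, so an insert near the start of a file doesn’t shift every subsequent chunk. The sketch below shows the general idea only; it’s not Cohesity’s implementation, and the window size, boundary mask and chunk limits are arbitrary values I’ve picked for illustration.

```python
import hashlib

# Minimal content-defined chunking sketch -- the general idea behind
# variable-length dedupe. Not Cohesity's implementation; the window size,
# boundary mask and chunk size limits below are arbitrary.

WINDOW = 48            # trailing bytes used to decide chunk boundaries
MASK = (1 << 12) - 1   # boundary on average every ~4KiB of data
MIN_CHUNK = 2 * 1024   # 2KiB
MAX_CHUNK = 16 * 1024  # 16KiB

def chunk_boundaries(data: bytes):
    """Yield (start, end) offsets of variable-length, content-defined chunks."""
    start = 0
    for pos in range(1, len(data) + 1):
        if pos - start < MIN_CHUNK:
            continue
        window = data[pos - WINDOW:pos]
        fingerprint = int.from_bytes(
            hashlib.blake2b(window, digest_size=8).digest(), "big"
        )
        if (fingerprint & MASK) == 0 or pos - start >= MAX_CHUNK:
            yield start, pos
            start = pos
    if start < len(data):
        yield start, len(data)

def dedupe(data: bytes) -> dict:
    """Store each unique chunk once, keyed by its SHA-256 digest."""
    store = {}
    for s, e in chunk_boundaries(data):
        chunk = data[s:e]
        store[hashlib.sha256(chunk).hexdigest()] = chunk
    return store

# Inserting a few bytes near the front of a file only changes the chunks
# around the edit; downstream boundaries re-synchronise, so most chunk
# hashes (and therefore most stored data) are unchanged.
```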


Thoughts And Further Reading

There’s a lot to like about Cohesity’s approach to leveraging secondary storage in the data centre. For a very long time, the value of secondary storage hasn’t been at the forefront of enterprise analytics activities. Or, more bluntly put, copy data management has been something of an ongoing fiasco, with a number of different tools and groups within organisations being required to draw value from the data that’s just sitting there. Cohesity don’t like to position themselves simply as a storage target for data protection, because the DataPlatform is certainly capable of doing a lot more than that. While the messaging has occasionally been confusing, the drive of the company to deliver a comprehensive data management solution that extends beyond traditional solutions shouldn’t be underestimated. Coupled with a relentless focus on ease of use and scalability, the Cohesity offering looks to be a great way of digging into the “dark data” in your organisation to make sense of what’s going on.

There are still situations where Cohesity may not be the right fit (at the moment), particularly if you have requirements around non-x86 workloads or particularly finicky (read: legacy) enterprise applications. That said, Cohesity are working tirelessly to add new features to the solution at a rapid pace, and are looking to close the gap between themselves and some of the more established players in the market. The value here, however, isn’t just in the extensive data protection capability, but also in the analytics that can be leveraged to provide further insight into your organisation’s protected data. It’s sometimes not immediately obvious why you need to be mining your unstructured data for information. But get yourself the right tools and the right people and you can discover a whole lot of very useful (and sometimes scary) information about your organisation that you wouldn’t otherwise know. And it’s that stuff that lies beneath the surface that can have a real impact on your organisation’s success. Even if it is a little boring.

WekaIO – Not The Matrix You’re Thinking Of

Disclaimer: I recently attended Storage Field Day 15.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

WekaIO recently presented at Storage Field Day 15. It’s not the first time I’ve heard from them, and you can read my initial thoughts on them here. You can see their videos from Storage Field Day 15 here, and download a PDF copy of my rough notes from here.


Enter The Matrix

Fine, I just rewatched The Matrix on the plane home. But any company with Matrix in the product name is going to get a few extra points from me. So what is it, Neo?

  • Fully coherent POSIX file system that delivers local file system performance;
  • Distributed Coding, more resilient at scale, fast rebuilds, end to end data protection;
  • Instantaneous snapshots, clones, tiering to S3, partial file rehydration;
  • InfiniBand or Ethernet, Hyper-converged or Dedicated Storage Server; and
  • Bare-metal, containerised, or running in a VM.

There’s an on-premises version and one built for public cloud use.

Liran Zvibel (Co-founder and CEO) took us through some of the key features of the architecture.

Software based for dynamic scalability

  • Software scales to thousands of nodes and trillions of records;
  • Significantly more scalable than any appliance offering; and
  • Metadata scales to thousands of servers.

Patented erasure coding technology

  • Allows the use of 66% less NVMe compared to triple replication;
  • Fully distributed data and metadata for best parallelism / performance; and
  • Snapshots for “free” with no performance impact.

Integrated tiering in a single namespace

  • Allows for an unlimited namespace, critical for deep learning; and
  • Enables backup and cloud bursting to public cloud.


I Know Kung Fu

[Look, I’m just going to torture the Matrix analogy for a little longer, so bear with me]. So what do I do with all of this performance in a storage subsystem? Well, the key focus areas for WekaIO include:

  • Machine learning / AI;
  • Digital Radiology / Pathology;
  • Algorithmic Trading; and
  • Genomic Sequencing and Analytics.

Most of these workloads deal with millions of files, very large capacities, and are very sensitive to poor latency. There’s also a cool use case for media and entertainment environments that’s worth checking out if you’re into that sort of thing.



WekaIO are aiming to do about 30% of their sales directly, meaning they lean heavily on the channel. Both HPE and Penguin Computing are OEM providers, and obviously there’s also a software-only play with the AWS version. They’re talking about delivering some very big numbers when it comes to performance, but my favourite thing about them is the focus on being able to access the same data through all interfaces, and quickly.

WekaIO make some strong claims about their ability to deliver a fast and scalable file system solution, but they certainly have the pedigree to deliver a solution that meets a number of those claims. There are some nice features, such as the ability to add servers with different profiles to the cluster, and to run nodes in hyper-converged mode. When it comes down to it, performance is defined by the number of cores available. If you add more compute, you get more performance.

In my mind, the solution isn’t for everyone right now, but if you have a requirement for a performance-focused, massively parallel, scale-out storage solution with the ability to combine NVMe and S3, you could do worse than to check out what WekaIO can do.

StarWind VTL? What? Yes, And It’s Great!

Disclaimer: I recently attended Storage Field Day 15.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

StarWind recently presented at Storage Field Day 15. You can see videos of their presentation here, and download my rough notes from here.


VTL? Say What Now?

Max and Anton from StarWind are my favourites. If I was a professional analyst I wouldn’t have favourites, but I do. Anyone who calls their presentation “From Dusk Till Dawn” is alright in my books. Here’s a shot of Max presenting.


In The Beginning

The concept of sending recovery data to tape is not a new one. After all, tape was often referred to as “backup’s best friend”. Capacity-wise it’s always been cheap compared to disk, and it’s been a (relatively) reliable medium to work with. This was certainly the case in the late 90s when I got my start in IT. Since then, though, disks have come a long way in terms of capacity (and reduced cost). StorageTek introduced Virtual Tape Libraries (VTLs) in the late 90s and a lot of people moved to using disk storage for their backups. Tape still played a big part in this workflow, with a lot of people being excited about disk to disk to tape (D2D2T) architectures in the early 2000s. It was cool because it was a fast way to do backups (when it worked). StarWind call this the “dusk” of the VTL era.


Disks? Object? The Cloud? Heard Of Them?

According to StarWind though (and I have anecdotal evidence to support this), backup applications (early on) struggled to speak sensibly to disk. Since then, object storage has become much more popular. StarWind also suggested that it’s hard to do file or block to object effectively.

Tape (or a tape-like mechanism) for cold data is still a great option.  No matter how you slice it, tape is still a lot cheaper than disk. At least in terms of raw $/GB. It also offers:

  • Longevity;
  • Can be stored offline; and
  • Streams at a reasonably high bandwidth.

Object storage is a key cloud technology. And object storage can deliver similar behaviour to tape, in that it is:

  • Non-blocking;
  • Capable of big IO; and
  • Doesn’t need random writes.
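
That tape-like behaviour is essentially what an object API gives you anyway: you write big, sequential pieces and never update them in place. As a purely conceptual illustration (this is not how StarWind’s VTL is implemented, and the bucket, key and part size are made up), writing a virtual tape image to S3 might look something like this with boto3’s multipart upload.

```python
import boto3

# Conceptual sketch: streaming a "virtual tape" to object storage as one big,
# sequential multipart upload -- big IO, no random writes, no in-place updates.
# This is NOT how StarWind VTL works; bucket, key and part size are made up.

PART_SIZE = 128 * 1024 * 1024  # 128MiB parts: object stores like big IO

def write_virtual_tape(tape_stream, bucket="my-vtl-bucket", key="tapes/TAPE001.vtape"):
    s3 = boto3.client("s3")
    upload = s3.create_multipart_upload(Bucket=bucket, Key=key)
    parts, part_number = [], 1
    try:
        while True:
            chunk = tape_stream.read(PART_SIZE)
            if not chunk:
                break
            resp = s3.upload_part(
                Bucket=bucket, Key=key,
                UploadId=upload["UploadId"],
                PartNumber=part_number, Body=chunk,
            )
            parts.append({"ETag": resp["ETag"], "PartNumber": part_number})
            part_number += 1
        s3.complete_multipart_upload(
            Bucket=bucket, Key=key,
            UploadId=upload["UploadId"],
            MultipartUpload={"Parts": parts},
        )
    except Exception:
        s3.abort_multipart_upload(Bucket=bucket, Key=key, UploadId=upload["UploadId"])
        raise
```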

From StarWind’s perspective, the “dawn” of VTL is back. The combination of cheap disk, mature object storage technology and newer backup software means that VTL can be a compelling option for businesses that still need a tape-like workflow. They offer a turnkey appliance, based on NL-SAS. It has 16 drives per appliance (in a 3.5” form factor), delivering roughly 120TB of capacity before deduplication. You can read more about it here.


Thoughts And Conclusion

StarWind never fail to deliver an interesting presentation at Tech Field Day events. I confess I didn’t expect to be having a conversation with someone about their VTL offering. But I must also confess that I do come across customers in my day job who still need to leverage VTL technologies to ensure their data protection workflow continues to work. Why don’t they re-tool their data protection architecture to get with the times? I wish it were that simple. Sometimes the easiest part of modernising your data protection environment is simply replacing the hardware.

StarWind are not aiming to compete in enterprise environments, focusing more on the SMB market. There are some nice integration points with their existing product offerings. And the ability to get the VTL data to a public cloud offering will keep CxOs playing the “cloud at all cost” game happy as well.

[Image courtesy of StarWind]


There are a lot of reasons to get your data protected in as many locations as possible. StarWind has a good story here with the on-premises part of the equation. According to StarWind, VTL will remain around “until backup applications (all of them) learn all cloud and on-premises object storage APIs … or until all object storage settles on a single, unified “standard” API”. This looks like it might still be some time away. A lot of environments are still using technology from last decade to perform business-critical functions inside their companies. There’s no shame in delivering products that can satisfy that market segment. It would be nice if everyone would refactor their applications for cloud, but it’s simply not the case right now. StarWind understand this, and understand that VTL performs a useful function right now, particularly in environments where the advent of virtualisation might still be a recent event. I know people still using VTL in crusty mainframe environments and flashy, cloud-friendly, media and entertainment shops. Tape might be dead, but it feels like there are a lot of folks still using it, or its virtual counterpart.

Cohesity Basics – Auto Protect

I’ve been doing some work with Cohesity in our lab and thought it worth covering some of the basic features that I think are pretty neat. In this edition of Cohesity Basics, I thought I’d quickly cover off the “Auto Protect” feature. If you read their white paper on data protection, you’ll find the following line: “As new virtual machines are added, they are auto discovered and included in the protection policy that meets the desired SLAs”. It seems like a pretty cool feature, and was introduced in version 4.0. I wanted to find out a bit more about how it works.


What Is It?

Auto Protect will “protect new VMs that are added to a selected parent Object (such as a Datacenter, Folder, Cluster or Host)”. The idea behind this is that you can add a source and have Cohesity automatically protect all of the VMs in a folder, cluster, etc. The cool thing is that it will also protect any new VMs added to that source.

When you’re adding Objects to a Protection Job, you can select what to auto protect. In the screenshot below you can see that the Datacenter in my vCenter has Auto Protect turned off.

The good news is that you can explicitly exclude Objects as well. Here’s what the various icons mean.

[Image courtesy of Cohesity]


What Happens?

When you create a Protection Job in Cohesity you add Objects to the job. If you select to Auto Protect this Object, anything under that Object will automatically be protected. Every time the Protection Job runs, if the Object hierarchy has been refreshed on the Cohesity Cluster, new VMs are also backed up even though they haven’t been manually added to the Protection Job. There are two ways that the Object hierarchy gets refreshed. It is automatically done every 4 hours by the cluster. If you’re in a hurry though, you can do it manually. Go to Protection -> Sources and click on the Source you’d like to refresh. There’s a refresh button to click on and you’ll see your new Objects showing up.
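
If you’d rather script that manual refresh than click through the UI, it can presumably be driven via the Cohesity REST API as well. The sketch below is exactly that, a sketch: the endpoint paths, payload fields and the source ID are assumptions on my part, so check them against the API documentation for your cluster version before relying on it.

```python
import requests

# Hypothetical sketch only: the endpoint paths and payload fields below are
# assumptions about Cohesity's v1 REST API and may differ between versions.
# The idea: refresh the Object hierarchy for a registered source on demand
# instead of waiting for the ~4 hour automatic refresh.

CLUSTER = "https://cohesity01.example.com"  # made-up cluster address
SOURCE_ID = 42                              # made-up registered source ID

def refresh_source(username: str, password: str, domain: str = "LOCAL") -> None:
    # Assumed token endpoint
    token_resp = requests.post(
        f"{CLUSTER}/irisservices/api/v1/public/accessTokens",
        json={"username": username, "password": password, "domain": domain},
        verify=False,  # lab cluster with a self-signed certificate
    )
    token_resp.raise_for_status()
    token = token_resp.json()["accessToken"]

    # Assumed refresh endpoint for a registered protection source
    refresh_resp = requests.post(
        f"{CLUSTER}/irisservices/api/v1/public/protectionSources/refresh/{SOURCE_ID}",
        headers={"Authorization": f"Bearer {token}"},
        verify=False,
    )
    refresh_resp.raise_for_status()
```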


Why Wouldn’t You?

As part of my testing, I’ve been creating “catchall” Protection Jobs and adding all the VMs in the environment into the jobs. But we have some VMware NSX Controller VMs in our lab, and VMware “only supports backing up the NSX Edge and controller through the NSX Manager”. Not only that, but it simply won’t work.

In any case, you can use FTP to back up your NSX VMs if you really feel like that’s something you want to do. More info on that is here. You also want to be careful that you’re not backing up stuff you don’t need to, such as clones and odds and sods. Should I try protecting the Cohesity Virtual Edition appliance VM? I don’t know about that …



I generally prefer data protection configurations that “protect everything and exclude as required”. While Auto Protect is turned off by default, it’s simple enough to turn on when you get started. And it’s a great feature, particularly in dynamic environments where there’s no automation of data protection when new workloads are provisioned (a problem for another time). Hat tip to my Cohesity SE Pete Marfatia for pointing this feature out to me.

Dropbox – It’s Scale Jim, But Not As We Know It

Disclaimer: I recently attended Storage Field Day 15.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Dropbox recently presented at Storage Field Day 15. You can see videos of their presentation here, and download my rough notes from here.


What’s That In Your Pocket?

James Cowling spent some time talking to us about Dropbox’s “Magic Pocket” system architecture. Despite the naff name, it’s a pretty cool bit of tech. Here’s a shot of James answering a question.


Magic Pocket

Dropbox uses Magic Pocket to store users’ file content:

  • 1+ EB of user file data currently stored
  • Growing at over 10PB per month

Customising the stack end-to-end allowed them to:

  • Improve performance and reliability for their unique use case
  • Improve economics


Inside the Magic Pocket

Brief history of development

  • Prototype and development
  • Production validation
    • Ran in dark phase to find any unknown bugs
    • Deleted first byte of data from third party cloud provider in February 2015
  • Scale out and cut over
    • 600,000+ disks
    • 3 regions in USA, expanding to EU
  • Migrated more than 500PB of user data from third party cloud provider into MP in 6 months
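
For a sense of what that migration actually implies, here’s the back-of-the-envelope sustained throughput needed to move 500PB in six months (assuming decimal petabytes and 30-day months).

```python
# Back-of-the-envelope: sustained throughput to migrate 500PB in ~6 months.
PETABYTE = 10**15                # decimal petabyte, in bytes
seconds = 6 * 30 * 24 * 60 * 60  # ~6 months of 30 days

bytes_per_second = 500 * PETABYTE / seconds
print(f"{bytes_per_second / 10**9:.0f} GB/s sustained")        # ~32 GB/s
print(f"{bytes_per_second * 8 / 10**9:.0f} Gbit/s sustained")  # ~257 Gbit/s
```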

It’s worth watching the video to get a feel for the scale of the operation. You can also read more on the Magic Pocket here and here. Chan also did a nice write-up that you can access here.


Beyond Public Cloud

A bit’s been made of Dropbox’s move from public cloud back to its own infrastructure, but Dropbox were careful to point out that they used third parties where it made sense for them, and still leveraged various public cloud and SaaS offerings as part of their daily operations. The key for them was understanding whether building their own solution made sense or not. To that end, they asked themselves three questions:

  • At what scale is investment in infrastructure cost effective?
  • Will this scale enable innovation by building custom services and integrating hardware / software more tightly?
  • Can that innovation add value for users?

From a scale perspective, it was fairly simple, with Dropbox being one of the oldest, largest and most used collaboration platforms around. From an integration perspective, they needed a lot of network and storage horsepower, which set them apart from some of the other web-scale services out there. They were able to add value to users through an optimised stack, increased reliability and better security.


It Makes Sense, But It’s Not For Everyone

That all sounds pretty good, but one of the key things to remember is that they haven’t just cobbled together a bunch of tin and a software stack and become web-scale overnight. While the time to production was short, all things considered, there was still investment (in terms of people, infrastructure and so forth) in making the platform work. When you commit to going your own way, you need to be mindful that there are a lot of ramifications involved, including the requirement to invest in people who know what they’re doing, the capacity to do what you need to do from a hardware perspective, and the right minds to come up with the platform to make it all work together. The last point is probably hardest for people to understand. I’ve ranted before about companies not being anywhere near the scale of Facebook, Google or the other hyperscalers and expecting that they can deliver similar services, for a similar price, with minimal investment.

Scale at this level is a hard thing to do well, and it takes investment in terms of time and resources to get it right. And to make that investment it has to make sense for your business. If your company’s main focus is putting nuts and bolts together on an assembly line, then maybe this kind of approach to IT infrastructure isn’t really warranted. I’m not suggesting that we can’t all learn something from the likes of Dropbox in terms of how to do cool infrastructure at scale. But I think the key takeaways should be that Dropbox have:

  • Been around for a while;
  • Put a lot of resources into solving the problems they faced; and
  • Spent a lot of time deciding what did and did not make sense to do themselves.

I must confess I was ignorant of the scale at which Dropbox is operating, possibly because I saw them as a collaboration piece and didn’t really think of them as an infrastructure platform company. The great thing, however, is they’re not just a platform company. In the same way that Netflix does a lot of really cool stuff with their tech, Dropbox understands that users value performance, reliability and security, and have focused their efforts on ensuring that the end user experience meets those requirements. The Dropbox backend infrastructure makes for a fascinating story, because the scale of their operations is simply not something we come across every day. But I think the real success for Dropbox is their relentless focus on making the end user experience a positive one.

IBM Spectrum Protect Plus Has A Nice Focus On Modern Data Protection

Disclaimer: I recently attended Storage Field Day 15.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

IBM recently presented at Storage Field Day 15. You can see videos of their presentation here, and download my rough notes from here.


SLA Policy-driven, Eh?

IBM went to some length to talk about their “SLA-based data protection” available with their Spectrum Protect Plus product (not to be confused with Spectrum Protect). So, what is Spectrum Protect Plus? IBM defined it as a “Data Reuse solution for virtual environments and applications supporting multiple use cases”, offering the following features:

  • Simple, flexible, lightweight (easy to deploy, configure and manage);
  • Pre-defined SLA based protection;
  • Self-service (RBAC) administration;
  • Enterprise proven, scalable;
  • Utilise copied data for production workflows;
  • Data recovery and reuse automation; and
  • Easily fits your budget.

They also spoke about SLA-based automation, with the following capabilities:

  • Define frequency of copies, retention, and target location of data copies for any resources assigned to the SLA;
  • Comes installed with 3 pre-defined policies (Gold, Silver, and Bronze);
  • Modify or create as many SLAs as necessary to meet business needs;
  • Supports policy-based include / exclude rules;
  • Capability to offload data to IBM Spectrum Protect ensuring corporate governance / compliance with long term retention / archiving; and
  • Enable administrators to create customised templates that provide values for desired RPO.

This prompted a tweet from Chris Evans during the session.

He went on to write an insightful post on the difference between service level agreements, objectives, and policy-based configuration, amongst other things. It was great, and well worth a read.


So It’s Not A Service Level Agreement?

No, that’s not really what IBM are delivering. What they are delivering, however, is software that supports the ability to meet SLAs through configuration-level service level objectives (SLOs), or policies. I like SLOs better simply because a policy could just be something that the business has to adhere to and may not have anything to do with the technology or its relative usefulness. An SLO, on the other hand, is helping you to meet your SLAs. “Policy-driven” looks and sounds better when it’s splashed all over marketing slides though.

The pre-defined SLOs are great, because you’d be surprised how many organisations just don’t know where to start with their data protection activities. In my opinion though, one of the most important steps in configuring these SLOs is taking a step back and understanding what you need to protect, how often you need to protect it, and how long you’ll have to get it back if you need to. More importantly, you need to be sure that you have the same understanding of this as the people running your business do.
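
As a trivial illustration of that exercise (the Gold, Silver and Bronze frequencies and retention values below are numbers I’ve made up, not IBM’s shipped defaults), the job is really just mapping each workload’s required RPO to the least aggressive tier that still satisfies it.

```python
# Illustrative only: the tier frequencies and retention values are made up,
# not IBM Spectrum Protect Plus' shipped Gold/Silver/Bronze defaults.

TIERS = {  # copy frequency (hours) and retention (days) per tier
    "Gold":   {"frequency_hours": 4,  "retention_days": 30},
    "Silver": {"frequency_hours": 12, "retention_days": 14},
    "Bronze": {"frequency_hours": 24, "retention_days": 7},
}

workloads = {  # workload -> required RPO in hours, agreed with the business
    "billing-db": 4,
    "file-shares": 24,
    "test-dev": 48,
}

for name, rpo_hours in workloads.items():
    # Tiers whose copy frequency meets the required RPO
    eligible = [t for t, p in TIERS.items() if p["frequency_hours"] <= rpo_hours]
    if not eligible:
        print(f"{name}: RPO {rpo_hours}h -> no predefined tier fits, create a custom SLA")
        continue
    # Pick the least aggressive tier that still satisfies the requirement
    tier = max(eligible, key=lambda t: TIERS[t]["frequency_hours"])
    print(f"{name}: RPO {rpo_hours}h -> {tier}")
```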


You Say Potato …

Words mean things. I get mighty twitchy when people conflate premise and premises and laugh it off, telling me that language is evolving. It’s not evolving in that way. That’s like excusing a native English speaker’s misuse of their, they’re and there. It’s silly. Maybe it’s because I pronounce premise and premises differently. In any case, SLAs are different to SLOs. But I’m not going to have IBM’s lunch over this, because I think what’s more exciting about the presentation I saw is that IBM are possibly dragging themselves and their customers into the 21st century with Spectrum Protect Plus.

Plenty of people I’ve spoken to have been quick to tell me that SPP isn’t terribly exciting and that other vendors (namely startups or smaller competitors) have been delivering these kind of capabilities for some time. This is likely very true, and those vendors are doing well in their respective markets and keeping their customers happy with SLO-focused data protection capabilities. But I’ve historically spent a lot of my career toiling away in enterprise IT environments and those places are not what you’d call progressive environments (on a number of levels, unfortunately). IBM has a long and distinguished history in the industry, and service a large number of enterprise shops. Heck, they’re still making a bucket of cash selling iSeries and pSeries boxes. So I think it’s actually pretty cool when a company like IBM steps up and delivers capabilities in its software that enables businesses to meet their data protection requirements in a fashion that doesn’t rely on methods developed decades ago.

Storage Field Day 15 – (Fairly) Full Disclosure

Disclaimer: I recently attended Storage Field Day 15.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Here are my notes on gifts, etc, that I received as a delegate at Storage Field Day 15. I’d like to point out that I’m not trying to play companies off against each other. I don’t have feelings one way or another about receiving gifts at these events (although I generally prefer small things I can fit in my suitcase). Rather, I’m just trying to make it clear what I received during this event to ensure that we’re all on the same page as far as what I’m being influenced by. Some presenters didn’t provide any gifts as part of their session – which is totally fine. I’m going to do this in chronological order, as that was the easiest way for me to take notes during the week. Whilst every delegate’s situation is different, I took 5 days of unpaid leave to be at this event.



My wife drove me to the domestic airport. I recently acquired some status with Qantas so I get the special treatment and lounge access. In Sydney I partook of some very nice blue cheese and a couple of Coopers Original Pale Ales in the lounge.



I caught up with some of the Nutanix folks for a quick coffee (an American version of a flat white) and a choc chip muffin at a Starbucks near their EBC. This was paid for by Nutanix.

When I arrived at the hotel on Tuesday afternoon I was given a snack bag by the Tech Field Day crew filled with various snacks and bottles of water.

We had dinner at The Farmers Union in downtown San Jose. I had focaccia with herb butter, mushroom cigars with porcini aioli, and creamy tomato soup. For the main I had Flat Iron Steak, with dirty fries, Fresno chile chimichurri and a fried egg. For dessert I had the warm chocolate brownie with ice cream. I washed this down with 3 Firestone Pivo Pilsner beers. It’s a pretty neat place with nice food. As part of the Yankee Gift Swap I received a variety of spicy snack foods from Glenn Dekhayser.



For breakfast at the hotel I had bacon, scrambled eggs, sausage, strawberry yoghurt, and coffee. We were all given a Gestalt IT clear bag with a few bits and pieces in it. WekaIO gave each of us a branded notepad, water bottle and sticker.

We had lunch at the hotel. This was tacos, chicken and rice.

Dropbox gave us each a Moleskine notebook and Dropbox sticker. I declined to take the small bag of coffee they had available as well. FYI my eldest daughter will be your best friend if you keep sending home Moleskine notepads.

We had dinner at Faultline. I had some “crispy calamari” for an entree and the Brewhouse Bacon cheeseburger and 3 Faultline Kolsch beers for the main. The burger was really quite good.



Breakfast at Hedvig was pancakes, berries and cream, bacon, sausage and coffee. Hedvig also kindly gave each delegate a Kenneth Cole backpack. It’s really a lot flasher than I’d normally use. At NetApp I had some coffee and water during the session. For lunch I had steak, chicken, potato, polenta, salad and some bite-sized eclairs.

We were at Levi’s Stadium for the Western Digital and Datrium sessions, so we all took a self-guided tour of the 49ers Museum before dinner. At the social event I had a few sliders and 3 Anchor Steam beers. This was all covered by Datrium. We then went back to the hotel where I had a gin martini and some clam chowder at the hotel bar. This was paid for by Tech Field Day.



As we had our first session at the hotel, breakfast was scrambled eggs, bacon, eggs benedict, potato and coffee. I’m almost used to the way Americans cook bacon. Almost. Mariusz Kaczorek gave each of us a block of Polish chocolate.

Cohesity provided each delegate with a gift box containing a water bottle, fidget spinner, pen, sunglasses, multi-format charging cable, and USB battery. We had lunch at Cohesity. This was felafel, hummus, wraps and one of those zero orange vitamin water things.

After the last session Tech Field Day put me in a car to SFO. Apparently Qantas share lounge space with Air France (thanks to my wife for finding this out), so I spent a few hours there and helped myself to some bread and cheese before travelling home. The only thing paying for this is my waistline.

All in all, it was a great trip. Thanks again to Tech Field Day for having me, thanks to the other delegates for being super nice and smart, and thanks to the presenters for some educational and engaging sessions. Please enjoy this photo of a statue of Joe Montana and Bill Walsh from the 49ers Museum.