Random Short Take #22

Oh look, another semi-regular listicle of random news items that might be of some interest.

  • I was at Pure Storage’s //Accelerate conference last week, and heard a lot of interesting news. This piece from Chris M. Evans on FlashArray//C was particularly insightful.
  • Storage Field Day 18 was a little while ago, but that doesn’t mean that the things that were presented there are no longer of interest. Stephen Foskett wrote a great piece on IBM’s approach to data protection with Spectrum Protect Plus that’s worth a read.
  • Speaking of data protection, it’s not just for big computers. Preston wrote a great article on the iOS recovery process that you can read here. As someone who had to recently recover my phone, I agree entirely with the idea that re-downloading apps from the app store is not a recovery process.
  • NetApp were recently named a leader in the Gartner Magic Quadrant for Primary Storage. Say what you will about the MQ, a lot of folks are still reading this report and using it to help drive their decision-making activities. You can grab a copy of the report from NetApp here. Speaking of NetApp, I’m happy to announce that I’m now a member of the NetApp A-Team. I’m looking forward to doing a lot more with NetApp in terms of both my day job and the blog.
  • Tom has been on a roll lately, and this article on IT hero culture, and this one on celebrity keynote speakers, both made for great reading.
  • VMworld US was a little while ago, but Anthony’s wrap-up post had some great content, particularly if you’re working a lot with Veeam.
  • WekaIO have just announced some work they’re doing with the Aiden Lab at the Baylor College of Medicine that looks pretty cool.
  • Speaking of analyst firms, this article from Justin over at Forbes brought up some good points about these reports and how some of them are delivered.

IBM Spectrum Protect Plus – More Than Meets The Eye

Disclaimer: I recently attended Storage Field Day 18.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

IBM recently presented at Storage Field Day 18. You can see videos of their presentation here, and download my rough notes from here.


We Want A Lot From Data Protection

Data protection isn’t just about periodic protection of applications or files any more. Or, at the very least, we seem to want more than that from our data protection solutions. We want:

  • Application / data recovery – providing data availability;
  • Disaster Recovery – recovering from a minor to major data loss;
  • BCP – reducing the risk to the business, employees, market perception;
  • Application / data reuse – utilise for new routes to market; and
  • Cyber resiliency – recover the business after a compromise or attack.

There’s a lot to cover there. And it could be argued that you’d need five different solutions to meet those requirements successfully. With IBM Spectrum Protect Plus (SPP) though, you’re able to meet a number of those requirements.


There’s Much That Can Be Done

IBM are positioning SPP as a tool that can help you extend your protection options beyond the traditional periodic data protection solution. You can use it for:

  • Data management / operational recovery – modernised and expanded use cases with instant data access, instant recovery leveraging snapshots;
  • Backup – traditional backup / recovery using streaming backups; and
  • Archive – long-term data retention / compliance, corporate governance.


Key Design Principles

Easy Setup

  • Deploy Anywhere: virtual appliance, cloud, bare metal;
  • Zero touch application agents;
  • Automated deployment for IBM Cloud for VMware; and
  • IBM SPP Blueprints.

The benefits of this include:

  • Easy to get started;
  • Reduced deployment costs; and
  • Hybrid and multi-cloud configurations.

Comprehensive Protection

  • Protect databases and applications hosted on-premises or in cloud;
  • Incremental forever using native hypervisor, database, and OS APIs; and
  • Efficient data reduction using deduplication and compression.

The benefits of this include:

  • Efficiency through reduced storage and network usage;
  • Stringent RPO compliance with a reduced backup window; and
  • Application backup with multi-cloud portability.

Simplified Management

  • Centralised, SLA-driven management;
  • Simple, secure RBAC based user self service; and
  • Lifecycle management of space efficient point-in-time snapshots.

The benefits of this include:

  • Lower TCO by reducing operational costs;
  • Consistent management / governance of multi-cloud environments; and
  • Secure by design with RBAC.

Recover, Reuse

  • Instant access / sandbox for DevOps and test environments;
  • Recover applications in cloud or data centre; and
  • Global file search and recovery.

The benefits of this include:

  • Improved RTO via instant access;
  • Eliminate time finding the right copy (file search across all snapshots with a globally indexed namespace);
  • Data reuse (versus backup as just an insurance policy); and
  • Improved agility; efficiently capture and use a copy of production data for test.


One Workflow, Multiple Use Cases

There’s a lot you can do with SPP, and the following diagram shows the breadth of the solution.

[image courtesy of IBM]


Thoughts and Further Reading

When I first encountered IBM SPP at Storage Field Day 15, I was impressed with their approach to policy-driven protection. It’s my opinion that we’re asking more and more of modern data protection solutions. We don’t just want to use them as insurance for our data and applications any more. We want to extract value from the data. We want to use the data as part of test and development workflows. And we want to manipulate the data we’re protecting in ways that have proven difficult in years gone by. It’s not just about having a secondary copy of an important file sitting somewhere safe. Nor is it just about using that data to refresh an application so we can test it with current business problems. It’s all of those things and more. This adds complexity to the solution, as many people who’ve administered data protection solutions have found out over the years. To this end, IBM have worked hard with SPP to ensure that it’s a relatively simple process to get up and running, and that you can do what you need out of the box with minimal fuss.

If you’re already operating in the IBM ecosystem, a solution like SPP can make a lot of sense, as there are some excellent integration points available with other parts of the IBM portfolio. That said, there’s no reason you can’t benefit from SPP as a standalone offering. All of the normal features you’d expect in a modern data protection platform are present, and there’s good support for enhanced protection use cases, such as analytics.

Enrico had some interesting thoughts on IBM’s data protection lineup here, and Chin-Fah had a bit to say here.

IBM Spectrum Protect Plus Has A Nice Focus On Modern Data Protection

Disclaimer: I recently attended Storage Field Day 15.  My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

IBM recently presented at Storage Field Day 15. You can see videos of their presentation here, and download my rough notes from here.


SLA Policy-driven, Eh?

IBM went to some length to talk about their “SLA-based data protection” available with their Spectrum Protect Plus product (not to be confused with Spectrum Protect). So, what is Spectrum Protect Plus? IBM defined it as a “Data Reuse solution for virtual environments and applications supporting multiple use cases”, offering the following features:

  • Simple, flexible, lightweight (easy to deploy, configure and manage);
  • Pre-defined SLA based protection;
  • Self-service (RBAC) administration;
  • Enterprise proven, scalable;
  • Utilise copied data for production workflows;
  • Data recovery and reuse automation; and
  • Easily fits your budget.

They also spoke about SLA-based automation, with the following capabilities:

  • Define frequency of copies, retention of data, and target location of data copies for any resources assigned to the SLA;
  • Comes installed with 3 pre-defined policies (Gold, Silver, and Bronze);
  • Modify or create as many SLAs as necessary to meet business needs;
  • Supports policy-based include / exclude rules;
  • Capability to offload data to IBM Spectrum Protect ensuring corporate governance / compliance with long term retention / archiving; and
  • Enable administrators to create customised templates that provide values for desired RPO.
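
To make the tiering idea concrete, here’s a purely illustrative sketch. This is not SPP’s actual policy format, and the frequency and retention numbers are invented; only the Gold / Silver / Bronze names come from the presentation. The point is that each SLA boils down to a copy frequency and a retention, and choosing one is just a matter of matching the frequency against your required RPO.

```shell
# Hypothetical SLA tiers -- the tier names match SPP's pre-defined policies,
# but the frequency and retention values are made up for illustration.
cat > /tmp/slas.csv <<'EOF'
sla,frequency_hours,retention_days
Gold,4,30
Silver,12,30
Bronze,24,7
EOF

# Print the SLAs whose copy frequency satisfies a required RPO of 6 hours.
awk -F, 'NR > 1 && $2 <= 6 { print $1 }' /tmp/slas.csv
```

With these made-up numbers, only the Gold tier (copies every 4 hours) would satisfy a 6-hour RPO.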

This triggered Chris Evans to tweet about it during the session.

He went on to write an insightful post on the difference between service level agreements, objectives, and policy-based configuration, amongst other things. It was great, and well worth a read.


So It’s Not A Service Level Agreement?

No, that’s not really what IBM are delivering. What they are delivering, however, is software that supports the ability to meet SLAs through configuration-level service level objectives (SLOs), or policies. I like SLOs better simply because a policy could just be something that the business has to adhere to and may not have anything to do with the technology or its relative usefulness. An SLO, on the other hand, is helping you to meet your SLAs. “Policy-driven” looks and sounds better when it’s splashed all over marketing slides though.

The pre-defined SLOs are great, because you’d be surprised how many organisations just don’t know where to start with their data protection activities. In my opinion though, one of the most important steps in configuring these SLOs is taking a step back and understanding what you need to protect, how often you need to protect it, and how long you’ll have if you need to get it back. More importantly, you need to be sure that you have the same understanding of this as the people running your business do.


You Say Potato …

Words mean things. I get mighty twitchy when people conflate premise and premises and laugh it off, telling me that language is evolving. It’s not evolving in that way. That’s like excusing a native English speaker’s misuse of their, they’re and there. It’s silly. Maybe it’s because I pronounce premise and premises differently. In any case, SLAs are different to SLOs. But I’m not going to have IBM’s lunch over this, because I think what’s more exciting about the presentation I saw is that IBM are possibly dragging themselves and their customers into the 21st century with Spectrum Protect Plus.

Plenty of people I’ve spoken to have been quick to tell me that SPP isn’t terribly exciting and that other vendors (namely startups or smaller competitors) have been delivering these kinds of capabilities for some time. This is likely very true, and those vendors are doing well in their respective markets and keeping their customers happy with SLO-focused data protection capabilities. But I’ve historically spent a lot of my career toiling away in enterprise IT environments and those places are not what you’d call progressive environments (on a number of levels, unfortunately). IBM has a long and distinguished history in the industry, and services a large number of enterprise shops. Heck, they’re still making a bucket of cash selling iSeries and pSeries boxes. So I think it’s actually pretty cool when a company like IBM steps up and delivers capabilities in its software that enable businesses to meet their data protection requirements in a fashion that doesn’t rely on methods developed decades ago.

IBM SVC – svcinfo Basics – Part 2

In part 2 of a series of posts on random informational commands you can type into the SVC, here are a few more commands you may find helpful.

First one is lsiogrp. The lsiogrp command returns a concise list or a detailed view of I/O groups visible to the cluster. More information can be found here.

IBM_2145:dc1-0001svccl:admin>svcinfo lsiogrp -delim ,
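
With -delim , the concise view comes back as comma-separated rows, which makes it easy to slice with awk. A sketch of that: the column names below follow the lsiogrp concise view, but the values themselves are made up for illustration.

```shell
# Hypothetical `lsiogrp -delim ,` output -- column names follow the concise
# view, the values are invented.
cat > /tmp/iogrps.csv <<'EOF'
id,name,node_count,vdisk_count,host_count
0,io_grp0,2,136,15
1,io_grp1,2,120,15
EOF

# List the I/O groups carrying more than 125 vdisks.
awk -F, 'NR > 1 && $4 > 125 { print $2 }' /tmp/iogrps.csv
```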

Another useful command is lshost. The lshost command generates a list with concise information about all the hosts visible to the cluster and detailed information about a single host. More information can be found here.

IBM_2145:dc1-0001svccl:admin>svcinfo lshost
id               name              port_count     iogrp_count
0                dc1-0031esx      2              4
1                dc1-0032esx      2              4
2                dc1-0024d        2              4
3                dc1-0025d        2              4
4                dc1-0026d        2              4
5                dc1-0027dq       2              4
6                dc1-0028d        2              4
7                dc1-0029d        2              4
8                dc1-0001esx      2              4
9                dc1-0002esx      2              4
239              dc1-0071esx      2              4
240              dc1-0072esx      2              4
241              dc1-0073esx      2              4
242              dc1-0048iwsuat   2              1
243              dc1-0047iwsuat   2              1
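
A quick way to spot the odd ones out in a listing like this – say, hosts that aren’t associated with all four I/O groups – is to push the columns through awk. A sketch, using a couple of rows sampled from the listing above:

```shell
# Two rows sampled from the lshost listing above.
cat > /tmp/hosts.txt <<'EOF'
id               name              port_count     iogrp_count
9                dc1-0002esx      2              4
242              dc1-0048iwsuat   2              1
EOF

# Print the hosts associated with fewer than 4 I/O groups.
awk 'NR > 1 && $4 < 4 { print $2 }' /tmp/hosts.txt
```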

Want to find out some more information on a particular host? Use lshost again, but specify the hostname.

IBM_2145:dc1-0001svccl:admin>svcinfo lshost dc1-0001esx
id 148
name dc1-0001esx
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 2101001B32BF64F1
node_logged_in_count 2
state active
WWPN 2100001B329F64F1
node_logged_in_count 2
state active
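
The detailed view is key-value output, so repeated fields fall out easily with awk. For example, pulling just the WWPNs from output like the above (the lines below are sampled from that listing):

```shell
# Key-value lines sampled from the detailed lshost output above.
cat > /tmp/host.txt <<'EOF'
name dc1-0001esx
port_count 2
WWPN 2101001B32BF64F1
state active
WWPN 2100001B329F64F1
state active
EOF

# Print each WWPN recorded for the host.
awk '$1 == "WWPN" { print $2 }' /tmp/host.txt
```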

Need to know what I/O Groups a given host is a member of? The lshostiogrp command displays a list of all the I/O groups that are associated with a specified host. More information can be found here.

IBM_2145:dc1-0001svccl:admin>svcinfo lshostiogrp dc1-0001esx
id               name
0                io_grp0
1                io_grp1
2                io_grp2
3                io_grp3

IBM SVC – svcinfo Basics – Part 1

I was doing an Exchange 2010 storage health check recently and needed some information some volumes presented to the environment from our SVC. My colleague gave me some commands to get the information I needed. I also found a useful website with pretty much identical commands listed. Check out the “SAN Admin Newbie — My notes on Useful Commands” blog, the post I looked at was “Commands to look around the SVC -> svcinfo”, located here. This is basic stuff for the seasoned SVC admin, but I’m really new to it, so I’m putting it up here.

The first order of business was to identify the vdisks that were mapped to one of the hosts I was looking at. To do this I used lshostvdiskmap. The lshostvdiskmap command displays a list of volumes that are mapped to a given host. These are the volumes that are recognized by the specified host. More info can be found here

IBM_2145:dc1-0001svccl:admin>svcinfo lshostvdiskmap dc1-0041esx
id               name              SCSI_id        vdisk_id       vdisk_name        vdisk_UID
148              dc1-0041esx      4              56             b3-003vol_4R1     60050768018E82BD3800000000000247
148              dc1-0041esx      5              57             b3-003vol_5R2     60050768018E82BD3800000000000248
148              dc1-0041esx      6              58             b3-003vol_6R3     60050768018E82BD3800000000000249
148              dc1-0041esx      7              59             b3-004vol_7R1     60050768018E82BD380000000000024A
148              dc1-0041esx      8              60             b3-004vol_8R2     60050768018E82BD380000000000024B
148              dc1-0041esx      9              61             b3-004vol_9R3     60050768018E82BD380000000000024C
148              dc1-0041esx      10             129            dc1C2T3L010       60050768018E82BD3800000000000253
148              dc1-0041esx      11             130            dc1C2T3L011       60050768018E82BD3800000000000254
148              dc1-0041esx      72             106            B3_3vol_72R0      60050768018E82BD3800000000000233
148              dc1-0041esx      73             127            B3_4vol_73R0      60050768018E82BD3800000000000234

So now I know the vdisks, but what if I want to check the capacity or find out the IO Group or MDisk name? I can use lsvdisk to get the job done. The lsvdisk command displays a concise list or a detailed view of volumes that are recognized by the clustered system. More information on this command can be found here.

IBM_2145:dc1-0001svccl:admin>svcinfo lsvdisk B3_4vol_73R0
id 127
name B3_4vol_73R0
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 4
mdisk_grp_name G00304ST100007
capacity 700.00GB
type striped
formatted no
vdisk_UID 60050768018E82BD3800000000000234
throttling 0
preferred_node_id 1
fast_write_state not_empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 4
mdisk_grp_name G00304ST100007
type striped
fast_write_state empty
used_capacity 700.00GB
real_capacity 700.00GB
free_capacity 0.00MB
overallocation 100
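
Since this detailed view is also key-value output, a single field can be pulled out the same way. A sketch, with a few lines sampled from the output above:

```shell
# Key-value lines sampled from the detailed lsvdisk output above.
cat > /tmp/vdisk.txt <<'EOF'
name B3_4vol_73R0
capacity 700.00GB
used_capacity 700.00GB
real_capacity 700.00GB
free_capacity 0.00MB
EOF

# Pull a single field (capacity) out of the key-value detailed view.
awk '$1 == "capacity" { print $2 }' /tmp/vdisk.txt
```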

Great, so what about the MDisk group that that vdisk sits on? Let’s use lsmdiskgrp for that one. The lsmdiskgrp command returns a concise list or a detailed view of MDisk groups visible to the cluster. More information can be found here.

IBM_2145:dc1-0001svccl:admin>svcinfo lsmdiskgrp G00304ST100007
id 4
name G00304ST100007
status online
mdisk_count 32
vdisk_count 136
capacity 57.3TB
extent_size 2048
free_capacity 454.0GB
virtual_capacity 56.85TB
used_capacity 56.85TB
real_capacity 56.85TB
overallocation 99
warning 0

Now let’s find out all the vdisks residing on a given MDisk group. In this example I’ve filtered by mdisk_grp_name, as well as adding -delim , so that I can dump the output to a CSV file and work with it in a spreadsheet application.

IBM_2145:dc1-0001svccl:admin>svcinfo lsvdisk -delim , -filtervalue mdisk_grp_name=G00304ST100007
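
Once the output is in CSV form you can also total things up before it ever reaches a spreadsheet. A sketch, using a hypothetical cut-down dump – the real concise view has many more columns, so only id, name, mdisk_grp_name and capacity are kept here, with values modelled on the listings above:

```shell
# Hypothetical cut-down CSV dump of the filtered lsvdisk output -- only a
# few of the real columns, values modelled on the earlier listings.
cat > /tmp/grp_vdisks.csv <<'EOF'
id,name,mdisk_grp_name,capacity
106,B3_3vol_72R0,G00304ST100007,700.00GB
127,B3_4vol_73R0,G00304ST100007,700.00GB
EOF

# Count the vdisks and total their capacity in GB.
awk -F, 'NR > 1 { n++; gsub(/GB/, "", $4); total += $4 }
         END { printf "%d vdisks, %.2f GB\n", n, total }' /tmp/grp_vdisks.csv
```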


IBM – SAN Volume Controller Information Center

This is just a quick one for my own reference. The SVC Information Center is a great starting point if you’re new to SVC and need to get up to speed quickly. The link can be found here. Of particular interest to me was the information on the CLI – this can be found here. There are also some Flash and non-Flash tutorials that I found quite useful – these can be found here. I also recommend you check out some of the Redbooks available from IBM, particularly “Implementing the IBM System Storage SAN Volume Controller V6.1“. Enjoy!