Getting Started With The Pure Storage CLI

I used to write a lot about how to manage CLARiiON and VNX storage environments with EMC’s naviseccli tool. I’ve been doing some stuff with Pure Storage FlashArrays in our lab and thought it might be worth covering off some of the basics of their CLI. This will obviously be no replacement for the official administration guide, but I thought it might come in useful as a starting point.


Basics

Unlike EMC’s CLI, there’s no executable to install – it’s all on the controllers. If you’re using Windows, PuTTY is still a good choice of ssh client; otherwise, the macOS ssh client does a reasonable job too. When you first set up your FlashArray, a virtual IP (VIP) was configured. It’s easiest to connect to the VIP, and Purity then directs your session to whichever controller is currently primary. Note that you can also connect via a controller’s physical IP address if that’s how you want to do things.

The first step is to log in to the array as pureuser, with the password that you’ve definitely changed from the default one.

login as: pureuser
pureuser@10.xxx.xxx.30's password:
Last login: Fri Aug 10 09:36:05 2018 from 10.xxx.xxx.xxx

Mon Aug 13 10:01:52 2018
Welcome pureuser. This is Purity Version 4.10.4 on FlashArray purearray
http://www.purestorage.com/

Run “purehelp” to list the available commands.

pureuser@purearray> purehelp
Available commands:
-------------------
pureadmin
purealert
pureapp
purearray
purecert
pureconfig
puredns
puredrive
pureds
purehelp
purehgroup
purehost
purehw
purelog
pureman
puremessage
purenetwork
purepgroup
pureplugin
pureport
puresmis
puresnmp
puresubnet
puresw
purevol
exit
logout

If you want to get some additional help with a command, you can run “command -h” (or “command --help”).

pureuser@purearray> purevol -h
usage: purevol [-h]
               {add,connect,copy,create,destroy,disconnect,eradicate,list,listobj,monitor,recover,remove,rename,setattr,snap,truncate}
               ...

positional arguments:
  {add,connect,copy,create,destroy,disconnect,eradicate,list,listobj,monitor,recover,remove,rename,setattr,snap,truncate}
    add                 add volumes to protection groups
    connect             connect one or more volumes to a host
    copy                copy a volume or snapshot to one or more volumes
    create              create one or more volumes
    destroy             destroy one or more volumes or snapshots
    disconnect          disconnect one or more volumes from a host
    eradicate           eradicate one or more volumes or snapshots
    list                display information about volumes or snapshots
    listobj             list objects associated with one or more volumes
    monitor             display I/O performance information
    recover             recover one or more destroyed volumes or snapshots
    remove              remove volumes from protection groups
    rename              rename a volume or snapshot
    setattr             set volume attributes (increase size)
    snap                take snapshots of one or more volumes
    truncate            truncate one or more volumes (reduce size)

optional arguments:
  -h, --help            show this help message and exit

There’s also a facility to access the man page for each command. Just run “pureman command” to read it.
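
For example, to bring up the full manual page for the volume commands:

pureuser@purearray> pureman purevol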

Want to see how much capacity there is on the array? Run “purearray list --space”.

pureuser@purearray> purearray list --space
Name        Capacity  Parity  Thin Provisioning  Data Reduction  Total Reduction  Volumes  Snapshots  Shared Space  System  Total
purearray  12.45T    100%    86%                2.4 to 1        17.3 to 1        350.66M  3.42G      3.01T         0.00    3.01T

Need to check the software version or general availability of the controllers? Run “purearray list --controller”.

pureuser@purearray> purearray list --controller
Name  Mode       Model   Version  Status
CT0   secondary  FA-450  4.10.4   ready
CT1   primary    FA-450  4.10.4   ready


Connecting A Host

To connect a host to an array (assuming you’ve already zoned it to the array), you’d use the following commands.

purehost create [hostname]
purehost setattr --wwnlist [WWNs] [hostname]
purehost list
purevol connect --host [hostname] [volume]
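
To make that concrete, here’s what it might look like for a single host with two HBAs (the names and WWNs here are made up, so substitute your own):

purevol create --size 1T esxi01_vol01
purehost create esxi01
purehost setattr --wwnlist 21:00:00:24:FF:12:34:56,21:00:00:24:FF:12:34:57 esxi01
purevol connect --host esxi01 esxi01_vol01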


Host Groups

You might need to create a Host Group if you’re running ESXi and want multiple hosts accessing the same volumes. Here are the commands you’ll need. Firstly, create the Host Group.

purehgroup create [hostgroup]

Add the hosts to the Host Group (these hosts should already exist on the array).

purehgroup setattr --hostlist host1,host2,host3 [hostgroup]

You can then assign volumes to the Host Group.

purehgroup connect --vol [volume] [hostgroup]
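
Putting that together for a hypothetical three-node ESXi cluster (all names made up):

purehgroup create esxcluster01
purehgroup setattr --hostlist esxi01,esxi02,esxi03 esxcluster01
purehgroup connect --vol esxcluster01_vol01 esxcluster01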


Other Volume Operations

Some other neat (and sometimes destructive) things you can do with volumes are listed below.

To resize a volume, use the following commands.

purevol setattr --size 500G [volume]
purevol truncate --size 20GB [volume]

Note that when you shrink a volume with truncate, Purity keeps an undo snapshot for 24 hours so you can roll back if required. This is good if you’ve shrunk a volume to be smaller than the data on it and have consequently munted the filesystem.
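
If you do find yourself in that position, the undo snapshot shows up alongside the volume’s other snapshots and (as I understand it) can be copied back over the volume. The snapshot suffix below is just a placeholder – check “purevol list --snap” for the real name:

purevol list --snap [volume]
purevol copy --overwrite [volume].[suffix] [volume]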

When you destroy a volume it immediately becomes unavailable to hosts, but it remains on the array for 24 hours. Note that you’ll need to disconnect the volume from any hosts connected to it first.

purevol disconnect [volume] --host [hostname]
purevol destroy [volume]

If you’re running short of capacity, or are just curious about when a deleted volume will disappear, use the following command.

purevol list --pending
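
And if you’ve destroyed the wrong thing (it happens), a volume that’s still pending eradication can be brought back and reconnected:

purevol recover [volume]
purevol connect --host [host] [volume]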

If you need the capacity back immediately, the deleted volume can be eradicated with the following command.

purevol eradicate [volume]


Further Reading

The Pure CLI is obviously not a new thing, and plenty of bright folks have already done a few articles about how you can use it as part of a provisioning workflow. This one from Chadd Kenney is a little old now but still demonstrates how you can bring it all together to do something pretty useful. You can obviously extend that to do some pretty interesting stuff, and there’s solid parity between the GUI and CLI in the Purity environment.
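
As a trivial sketch of the idea (assuming you’ve set up key-based ssh access for pureuser, and reusing the made-up names from earlier), you can drive the array from a script without installing anything on the client:

ssh pureuser@purearray "purevol create --size 1T esxcluster01_vol02"
ssh pureuser@purearray "purehgroup connect --vol esxcluster01_vol02 esxcluster01"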

It seems like a small thing, but the fact that there’s no need to install an executable is a big thing in my book. Array vendors (and infrastructure vendors in general) insisting on installing some shell extension or command environment is a pain in the arse, and should be seen as an act of hostility akin to requiring Java to complete simple administration tasks. The sooner we get everyone working with either HTML5 or simple ssh access the better. In any case, I hope this was a useful introduction to the Purity CLI. Check out the Administration Guide for more information.

QNAP – Upgrading Firmware via the CLI

For some reason, I keep persisting with my QNAP TS-639 II, despite the fact that every time something goes wrong with it I spend hours trying to revive it. In any case, I recently had an issue with a disk showing SMART warnings. I figured it would be a good idea to replace it before it became a big problem. I had some disks on the shelf from the last upgrade. When I popped one in, however, the NAS sent me this e-mail.

Server Name: qnap639
IP Address: 192.168.0.110
Date/Time: 28/05/2015 06:27:00
Level: Warning
The firmware versions of the system built-in flash (4.1.3 Build 20150408) and the hard drive (4.1.2 Build 20150126) are not consistent. It is recommended to update the firmware again for higher system stability.

Not such a great result. I ignored the warning and manually rebuilt the /dev/md0 device. When I rebooted, however, I still had the warning, and a disk missing from the md0 device (but that’s a story for later). To get around this problem, it’s recommended that you reinstall the array firmware via the shell. I took my instructions from here. In short, you copy the image file to a share, copy that to an update directory, run a script, and reboot. This fixed the warning, but I’m still having issues getting a drive to join the RAID device. I’m currently clearing the array again and will put in a new drive next week. Here’s what it looks like when you upgrade the firmware this way.

[/etc/config] # cd /
[/] # mkdir /mnt/HDA_ROOT/update
mkdir: Cannot create directory `/mnt/HDA_ROOT/update': File exists
[/] # cd /mnt/HDA_ROOT/update
[/mnt/HDA_ROOT/update] # ls
[/mnt/HDA_ROOT/update] # cd /
[/] # cp /share/Public/TS-639_20150408-4.1.3.img /mnt/HDA_ROOT/update/
[/] # ln -sf /mnt/HDA_ROOT/update /mnt/update
[/] # /etc/init.d/update.sh /mnt/HDA_ROOT/update/TS-639_20150408-4.1.3.img 
cksum=238546404
Check RAM space available for FW update: OK.
Using 120-bit encryption - (QNAPNASVERSION4)
len=1048576
model name = TS-639
version = 4.1.3
boot/
bzImage
bzImage.cksum
config/
fw_info
initrd.boot
initrd.boot.cksum
libcrypto.so.1.0.0
libssl.so.1.0.0
qpkg.tar
qpkg.tar.cksum
rootfs2.bz
rootfs2.bz.cksum
rootfs_ext.tgz
rootfs_ext.tgz.cksum
update/
update_img.sh
4.1.3 20150408 
OLD MODEL NAME = TS-639
Allow upgrade
Allow upgrade
/mnt/HDA_ROOT/update
1+0 records in
1+0 records out
tune2fs 1.41.4 (27-Jan-2009)
Setting maximal mount count to -1
Setting interval between checks to 0 seconds
Update image using HDD ...
bzImage cksum ... Pass
initrd.boot cksum ... Pass
rootfs2.bz cksum ... Pass
rootfs_ext.tgz cksum ... Pass
rootfs_ext.tgz cksum ... Pass
qpkg.tar cksum ... Pass
Update RFS1...
mke2fs 1.41.4 (27-Jan-2009)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
13832 inodes, 55296 blocks
0 blocks (0.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=56623104
7 block groups
8192 blocks per group, 8192 fragments per group
1976 inodes per group
Superblock backups stored on blocks: 
8193, 24577, 40961
Writing inode tables: done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 21 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
Checking bzImage ... ok
Checking initrd.boot ... ok
Checking rootfs2.bz ... ok
Checking rootfs_ext.tgz ... ok
Update RFS2...
mke2fs 1.41.4 (27-Jan-2009)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
13832 inodes, 55296 blocks
0 blocks (0.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=56623104
7 block groups
8192 blocks per group, 8192 fragments per group
1976 inodes per group
Superblock backups stored on blocks: 
8193, 24577, 40961
Writing inode tables: done                            
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 31 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
1+0 records in
1+0 records out
Update Finished.
Make a Backup
/share/MD0_DATA
qpkg.tar cksum ... Pass
set cksum [238546404]
[/] # reboot
[/] #


Dell Compellent – Getting started with CompCU.jar

CompCU.jar is the Compellent Command Utility. You can download it from Compellent’s support site (registration required). This is a basic article that demonstrates how to get started.

The first thing you’ll want to do is create an authentication file that you can re-use, similar to what you do with EMC’s naviseccli tool. The file I specify is saved in the directory I’m working from, and the Storage Center IP is the cluster IP, not the IP address of the controllers.

E:\CU060301_002A>java -jar CompCU.jar -default -defaultname saved_default -host StorageCenterIP -user Admin -password SCPassword

Now you can run commands without having to input credentials each time. I like to output to a text file, although you’ll notice that CompCU also dumps output on the console at the same time. The “system show” command provides a brief summary of the system configuration.

E:\CU060301_002A>java -jar CompCU.jar -defaultname saved_default.cli -c "system show -txt 'outputfile.txt'"
Compellent Command Utility (CompCU) 6.3.1.2
 =================================================================================================
User Name: Admin
Host/IP Address: 192.168.0.10
Single Command: system show -txt 'outputfile.txt'
=================================================================================================
Connecting to Storage Center: 192.168.0.10 with user: Admin
java.lang.IllegalStateException: TrustManagerFactoryImpl is not initialized
Running Command: system show -txt 'outputfile.txt'
SerialNumber Name ManagementIP Version OperationMode PortsBalanced MailServer BackupMailServer
----------------- -------------------------------- ---------------- ---------------- -------------- -------------- -------------------- --------------------
22640 Compellent1 192.168.0.10 6.2.2.15 Normal Yes 192.168.0.200 192.168.0.201
Save to Text (txt) File: outputfile.txt
Successfully finished running Compellent Command Utility (CompCU) application.

Notice that I get a Java exception every time I run this command. I think it’s related to an expired certificate, but I need to research that further. Another useful command is “storagetype show”. Here’s one I prepared earlier.

E:\CU060301_002A>java -jar CompCU.jar -defaultname saved_default.cli -c "storagetype show -txt 'storagetype.txt'"
Compellent Command Utility (CompCU) 6.3.1.2
=================================================================================================
User Name: Admin
Host/IP Address: 192.168.0.10
Single Command: storagetype show -txt 'storagetype.txt'
=================================================================================================
Connecting to Storage Center: 192.168.0.10 with user: Admin
java.lang.IllegalStateException: TrustManagerFactoryImpl is not initialized
Running Command: storagetype show -txt 'storagetype.txt'
Index Name DiskFolder Redundancy PageSize PageSizeBlocks SpaceUsed SpaceUsedBlocks SpaceAllocated SpaceAllocatedBlocks
------ -------------------------------- -------------------- -------------------- ---------- --------------- -------------------- -------------------- -------------------- --------------------
1 Assigned-Redundant-4096 Assigned Redundant 2.00 MB 4096 1022.51 GB 2144350208 19.67 TB 42232291328
Save to Text (txt) File: storagetype.txt
Successfully finished running Compellent Command Utility (CompCU) application.
E:\CU060301_002A>

There’s a bunch of useful things you can do with CompCU, particularly when it comes to creating volumes and allocating them to hosts. I’ll cover these in the next little while. In the meantime, I hope this was a useful introduction to CompCU.
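
As a taste of what’s to come (treat these as a sketch from memory rather than gospel – check the CompCU user guide or “java -jar CompCU.jar -h” for the exact switches), creating a volume and mapping it to a server looks something like this:

E:\CU060301_002A>java -jar CompCU.jar -defaultname saved_default.cli -c "volume create -name 'testvol01' -size 100g"
E:\CU060301_002A>java -jar CompCU.jar -defaultname saved_default.cli -c "volume map -name 'testvol01' -server 'esxi01'"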

IBM SVC – svcinfo Basics – Part 1

I was doing an Exchange 2010 storage health check recently and needed some information about volumes presented to the environment from our SVC. My colleague gave me some commands to get the information I needed. I also found a useful website with pretty much identical commands listed. Check out the “SAN Admin Newbie — My notes on Useful Commands” blog; the post I looked at was “Commands to look around the SVC -> svcinfo”, located here. This is basic stuff for the seasoned SVC admin, but I’m really new to it, so I’m putting it up here.

The first order of business was to identify the vdisks that were mapped to one of the hosts I was looking at. To do this I used lshostvdiskmap. The lshostvdiskmap command displays a list of volumes that are mapped to a given host. These are the volumes that are recognized by the specified host. More info can be found here.

IBM_2145:dc1-0001svccl:admin>svcinfo lshostvdiskmap dc1-0041esx
id               name              SCSI_id        vdisk_id       vdisk_name        vdisk_UID
148              dc1-0041esx      4              56             b3-003vol_4R1     60050768018E82BD3800000000000247
148              dc1-0041esx      5              57             b3-003vol_5R2     60050768018E82BD3800000000000248
148              dc1-0041esx      6              58             b3-003vol_6R3     60050768018E82BD3800000000000249
148              dc1-0041esx      7              59             b3-004vol_7R1     60050768018E82BD380000000000024A
148              dc1-0041esx      8              60             b3-004vol_8R2     60050768018E82BD380000000000024B
148              dc1-0041esx      9              61             b3-004vol_9R3     60050768018E82BD380000000000024C
148              dc1-0041esx      10             129            dc1C2T3L010       60050768018E82BD3800000000000253
148              dc1-0041esx      11             130            dc1C2T3L011       60050768018E82BD3800000000000254
148              dc1-0041esx      72             106            B3_3vol_72R0      60050768018E82BD3800000000000233
148              dc1-0041esx      73             127            B3_4vol_73R0      60050768018E82BD3800000000000234

So now I know the vdisks, but what if I want to check the capacity or find out the IO Group or MDisk name? I can use lsvdisk to get the job done. The lsvdisk command displays a concise list or a detailed view of volumes that are recognized by the clustered system. More information on this command can be found here.

IBM_2145:dc1-0001svccl:admin>svcinfo lsvdisk B3_4vol_73R0
id 127
name B3_4vol_73R0
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 4
mdisk_grp_name G00304ST100007
capacity 700.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018E82BD3800000000000234
throttling 0
preferred_node_id 1
fast_write_state not_empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 4
mdisk_grp_name G00304ST100007
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 700.00GB
real_capacity 700.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize

Great, so what about the MDisk group that vdisk sits on? Let’s use lsmdiskgrp for that one. The lsmdiskgrp command returns a concise list or a detailed view of MDisk groups visible to the cluster. More information can be found here.

IBM_2145:dc1-0001svccl:admin>svcinfo lsmdiskgrp G00304ST100007
id 4
name G00304ST100007
status online
mdisk_count 32
vdisk_count 136
capacity 57.3TB
extent_size 2048
free_capacity 454.0GB
virtual_capacity 56.85TB
used_capacity 56.85TB
real_capacity 56.85TB
overallocation 99
warning 0
IBM_2145:dc1-0001svccl:admin>

Now let’s find out all the vdisks residing on a given MDisk group. In this example I’ve filtered by mdisk_grp_name and added “-delim ,” so that I can dump the output into a csv file and work with it in a spreadsheet application.

IBM_2145:dc1-0001svccl:admin>svcinfo lsvdisk -delim , -filtervalue mdisk_grp_name=G00304ST100007
id,name,IO_group_id,IO_group_name,status,mdisk_grp_id,mdisk_grp_name,capacity,type,FC_id,FC_name,RC_id,RC_name,vdisk_UID,fc_map_count,copy_count,fast_write_state
0,dc1_418D_0,0,io_grp0,online,4,G00304ST100007,1000.00GB,striped,,,,,60050768018E82BD380000000000022A,0,1,not_empty
1,B3-01MITMBX_5R0,0,io_grp0,online,4,G00304ST100007,300.00GB,striped,,,,,60050768018E82BD3800000000000236,0,1,not_empty
5,dc1-0027dq_1,0,io_grp0,online,4,G00304ST100007,150.00GB,striped,,,,,60050768018E82BD3800000000000180,0,1,not_empty
10,CL7dc1_000,0,io_grp0,online,4,G00304ST100007,550.00GB,striped,,,,,60050768018E82BD3800000000000181,0,1,not_empty
11,B3-01RMSQCL_1R1,0,io_grp0,online,4,G00304ST100007,1.00GB,striped,,,,,60050768018E82BD3800000000000182,0,1,empty
16,dc1-0001LMF_R0,0,io_grp0,online,4,G00304ST100007,350.00GB,striped,,,,,60050768018E82BD3800000000000010,0,1,not_empty
24,dc1-0006svm_0,0,io_grp0,online,4,G00304ST100007,50.00GB,striped,,,,,60050768018E82BD3800000000000018,0,1,not_empty
25,dc1-0006svm_1,0,io_grp0,online,4,G00304ST100007,20.00GB,striped,,,,,60050768018E82BD3800000000000019,0,1,not_empty
29,dc1-0001vdq_1,0,io_grp0,online,4,G00304ST100007,100.00GB,striped,,,,,60050768018E82BD380000000000001D,0,1,not_empty
33,dc1-WIC864DQ_0,0,io_grp0,online,4,G00304ST100007,200.00GB,striped,,,,,60050768018E82BD3800000000000021,0,1,empty
36,dc1-0051dp_r9,0,io_grp0,online,4,G00304ST100007,50.00GB,striped,,,,,60050768018E82BD3800000000000024,0,1,empty
39,dc1-0052dq_0,0,io_grp0,online,4,G00304ST100007,50.00GB,striped,,,,,60050768018E82BD3800000000000027,0,1,empty
[snip]
440,dc1-0006qcl_1,0,io_grp0,online,4,G00304ST100007,270.00GB,striped,,,,,60050768018E82BD38000000000001FE,0,1,empty
444,dc1-0006qcl_5,0,io_grp0,online,4,G00304ST100007,150.00GB,striped,,,,,60050768018E82BD3800000000000202,0,1,empty
448,dc1-0006qcl_9,0,io_grp0,online,4,G00304ST100007,380.00GB,striped,,,,,60050768018E82BD3800000000000206,0,1,empty
452,dc1-0006qcl_13,0,io_grp0,online,4,G00304ST100007,300.00GB,striped,,,,,60050768018E82BD380000000000020A,0,1,empty
454,dc1-0006qcl_15,0,io_grp0,online,4,G00304ST100007,50.00GB,striped,,,,,60050768018E82BD380000000000020C,0,1,empty
455,dc1-0006qcl_16,0,io_grp0,online,4,G00304ST100007,2.00GB,striped,,,,,60050768018E82BD380000000000020D,0,1,empty
456,dc1-0006qcl_17,0,io_grp0,online,4,G00304ST100007,2.00GB,striped,,,,,60050768018E82BD380000000000020E,0,1,empty
457,b3-0007iwsuat_1,0,io_grp0,online,4,G00304ST100007,100.00GB,striped,,,,,60050768018E82BD380000000000020F,0,1,not_empty
IBM_2145:dc1-0001svccl:admin>
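
A note on getting that csv off the box: if you’ve set up key-based ssh access from your workstation, you can run the same command non-interactively and redirect the output straight to a file, like so:

ssh admin@dc1-0001svccl "svcinfo lsvdisk -delim , -filtervalue mdisk_grp_name=G00304ST100007" > vdisks.csv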


QNAP – How to repair RAID brokenness – Redux

I did a post a little while ago (you can see it here) that covered using mdadm to repair a munted RAID config on a QNAP NAS. So I popped another disk recently, and took the opportunity to get some proper output. Ideally you’ll want to use the web interface on the QNAP to do this type of thing but sometimes it no worky. So here you go.

Stop everything on the box.

[~] # /etc/init.d/services.sh stop
Stop service: recycled.sh mysqld.sh atalk.sh ftp.sh bt_scheduler.sh btd.sh ImRd.sh init_iTune.sh twonkymedia.sh Qthttpd.sh crond.sh nfs smb.sh lunportman.sh iscsitrgt.sh nvrd.sh snmp rsyslog.sh qsyncman.sh iso_mount.sh antivirus.sh .
Stop qpkg service: Disable Optware/ipkg
Shutting down SlimServer...
Stopping SqueezeboxServer 7.5.1-30836 (please wait) .... OK.
Stopping thttpd-ssods .. OK.
/etc/rcK.d/QK107Symform: line 48: /share/MD0_DATA/.qpkg/Symform/Symform.sh: No such file or directory

(By the way, it really annoys me when I’ve asked software to remove itself and it doesn’t cleanly uninstall – I’m looking at you, Symform plugin.)

Unmount the volume.

[~] # umount /dev/md0

Stop the array.

[~] # mdadm -S /dev/md0
mdadm: stopped /dev/md0

Reassemble the volume.

[~] # mdadm --assemble /dev/md0 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3 /dev/sde3 /dev/sdf3
mdadm: /dev/md0 has been started with 5 drives (out of 6).

Wait, wha? What about that other disk that I think is okay?

[~] # mdadm --detail /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Fri May 22 21:05:28 2009
Raid Level : raid5
Array Size : 9759728000 (9307.60 GiB 9993.96 GB)
Used Dev Size : 1951945600 (1861.52 GiB 1998.79 GB)
Raid Devices : 6
Total Devices : 5
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Wed Dec 14 19:09:25 2011
State : clean, degraded
Active Devices : 5
Working Devices : 5
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
UUID : 7c440c84:4b9110fe:dd7a3127:178f0e97
Events : 0.4311172
Number Major Minor RaidDevice State
0 8 3 0 active sync /dev/sda3
1 0 0 1 removed
2 8 35 2 active sync /dev/sdc3
3 8 51 3 active sync /dev/sdd3
4 8 67 4 active sync /dev/sde3
5 8 83 5 active sync /dev/sdf3

Or, in other words:

[~] # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md0 : active raid5 sda3[0] sdf3[5] sde3[4] sdd3[3] sdc3[2]
9759728000 blocks level 5, 64k chunk, algorithm 2 [6/5] [U_UUUU]
md6 : active raid1 sdf2[2](S) sde2[3](S) sdd2[4](S) sdc2[1] sda2[0]
530048 blocks [2/2] [UU]
md13 : active raid1 sdb4[2] sdc4[0] sdf4[5] sde4[4] sdd4[3] sda4[1]
458880 blocks [6/6] [UUUUUU]
bitmap: 0/57 pages [0KB], 4KB chunk
md9 : active raid1 sdf1[1] sda1[0] sdc1[4] sdd1[3] sde1[2]
530048 blocks [6/5] [UUUUU_]
bitmap: 34/65 pages [136KB], 4KB chunk
unused devices: <none>

So, when you see [U_UUUU] you’ve got a disk missing, but you knew that already. You can add it back into the array thusly.

[~] # mdadm --add /dev/md0 /dev/sdb3
mdadm: re-added /dev/sdb3

So let’s check on the progress.

[~] # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md0 : active raid5 sdb3[6] sda3[0] sdf3[5] sde3[4] sdd3[3] sdc3[2]
9759728000 blocks level 5, 64k chunk, algorithm 2 [6/5] [U_UUUU]
[>....................] recovery = 0.0% (355744/1951945600) finish=731.4min speed=44468K/sec
md6 : active raid1 sdf2[2](S) sde2[3](S) sdd2[4](S) sdc2[1] sda2[0]
530048 blocks [2/2] [UU]
md13 : active raid1 sdb4[2] sdc4[0] sdf4[5] sde4[4] sdd4[3] sda4[1]
458880 blocks [6/6] [UUUUUU]
bitmap: 0/57 pages [0KB], 4KB chunk
md9 : active raid1 sdf1[1] sda1[0] sdc1[4] sdd1[3] sde1[2]
530048 blocks [6/5] [UUUUU_]
bitmap: 34/65 pages [136KB], 4KB chunk
unused devices: <none>
[~] #

And it will rebuild. Hopefully. Unless the disk is really truly dead. You should probably order yourself a spare in any case.
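
If you want to keep an eye on the rebuild without mashing the up arrow, a quick and dirty shell loop does the job (plain sh, so it should work fine with the QNAP’s BusyBox):

[~] # while true; do grep recovery /proc/mdstat; sleep 60; done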

EMC – naviseccli -AddUserSecurity

There are any number of reasons why you mightn’t want to store your CLARiiON credentials in an encrypted file in your home directory. I can’t think of any. This post will cover the basics of setting yourself up with a security file that means you won’t have to keep entering your username, scope and password every time you want to use naviseccli.

-AddUserSecurity
This is the command to add user security information to the security file on this host. You need to use the -scope switch to add scope information to the security file. You can also use the -password switch, or enter your password at the prompt, to supply the required password information. If you don’t specify the -user switch, naviseccli assumes that the currently logged-in user is the username you wish to use. The -secfilepath switch is also optional with this command; it lets you store the security file somewhere other than your default home directory. Keep in mind that if you do that, you’ll then need to use the -secfilepath switch in each subsequent command you issue. You might find this tiresome.

-RemoveUserSecurity
This blats any user security information about the current user from the security file on this host.

-scope 0|1|2
Specifies whether the user account on the storage system you want to log in to is global (0), local (1), or LDAP (2). A global account is, as the name implies, global for the Navisphere / Unisphere domain you’re working in. A local account is effective on only the storage systems for which the administrator creates the account. LDAP maps the username/password entries to an external LDAP or Active Directory server for authentication.

-secfilepath filepath
Stores the security file in a specified location. This is useful if for some reason you don’t want the security file stored in your default home directory.

Enough talk. Here’s an example of how to setup the security file.

c:\>naviseccli -AddUserSecurity -Scope 0 -user san_admin

Enter password:

Assuming that the user san_admin is valid for the domain, and assuming that I’ve entered the password correctly, I can now run commands against any array in the domain without entering the username, password or scope. When you have a long password, this can lead to some real time savings :)
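
With the security file in place, subsequent commands just need the array address. A quick way to test that it’s working is something like the following (the IP here is made up – point it at one of your SPs):

c:\>naviseccli -h 192.168.0.50 getagent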