QNAP – Expand Volume With Larger Drives

This is one of those articles I’ve been meaning to post for a while, simply because every time I do it I forget how to do it properly. One way to expand the capacity of your QNAP NAS (non-disruptively) is to replace the drives one at a time with larger capacity drives. It’s recommended that you follow this process, rather than just ripping the drives out one by one and waiting for the RAID Group to expand. It’s a simple enough process to follow, although the QNAP UI has always struck me as a little confusing to navigate, so I took some pictures. Note that this was done on QNAP firmware 5.0.0.1986.

Firstly, go to Storage & Snapshots under Storage in the Control Panel. Click on Manage.

Select the Storage Pool you want to expand, and click on Manage again.

This will give you a drop-down menu. Select Replace Disks One by One.

Now select the disk you want to replace and click on Change.

Once you’ve done this for all of the disks (and it will take some time to rebuild depending on a variety of factors), click on Expand Capacity. It will ask you if you’re sure and hopefully you’ll click OK.

It’ll take a while for the RAID Group to synchronise.
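If you’d rather keep an eye on the synchronisation from a shell than keep refreshing the UI, you can SSH to the NAS and poll mdstat. The md device number below is just an example and will depend on how your pool is laid out.

[~] # cat /proc/mdstat
[~] # mdadm --detail /dev/md1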

You’ll notice then that, while the Storage Pool has expanded, the Volume is still the original size. Select the Volume and click on Manage.

Now you can click on Resize Volume.

The Wizard will show you the Storage Pool capacity and give you the option to set the new capacity of the volume. I usually click on Set to Max.

It will warn you about that. Click on OK because you like to live on the edge.

It will take a little while, but eventually your Volume will have expanded to fill the space.
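If you like to double-check these things from the shell, a quick df over SSH will confirm the new size once the resize has finished. The volume path varies from model to model, so the one below is just an example.

[~] # df -h /share/CACHEDEV1_DATA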

 

macOS Catalina and Frequent QNAP SMB Disconnections

TL;DR – I have no idea why this is happening as frequently as it is, what’s causing it, or how to stop it. So I’m using AFP for the moment.

Background

I run a Plex server on my Mac mini 2018 running macOS Catalina 10.15.6. I have all of my media stored on a QNAP TS-831X NAS running QTS 4.4.3.1400 with volumes connected to macOS over SMB. This has worked reasonably well with Catalina for the last year (?) or so, but with the latest Catalina update I’ve had frequent (as in every few hours) disconnections from the shares. I’ve tried a variety of fixes, and thought I’d document them here. None of them really worked, so what I’m hoping is that someone with solid macOS chops will be able to tell me what I’m doing wrong.

 

Possible Solutions

QNAP and SMB

I made sure I was running the latest QNAP firmware version. I noticed in the release notes for 4.4.3.1381 that it fixed a problem where “[u]sers could not mount NAS shared folders and external storage devices at the same time on macOS via SMB”. This wasn’t quite the issue I was experiencing, but I was nonetheless hopeful. Unfortunately it made no difference. This thread talked about SMB support levels. I was running my shares with support for SMB 2.1 through 3.0. I’ve since changed that to 3.0 only. No dice.
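One thing worth doing on the client side is confirming which SMB dialect macOS has actually negotiated with the NAS, rather than trusting the setting on the QNAP. Once a share is mounted, smbutil will tell you.

smbutil statshares -a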

macOS

This guy on this thread thinks he’s nailed it. He may have, but not for me. I’ve included some of the text for reference below.

Server Performance Mode

https://support.apple.com/en-us/HT202528

The description reads:

Performance mode changes the system parameters of your Mac. These changes take better advantage of your hardware for demanding server applications. A Mac that needs to run high-performance services can turn on performance mode to dedicate additional system resources for server applications.

“Solution:

1 – First check to see if server performance mode is enabled on your machine using this Terminal command. You should see the command return serverperfmode=1 if it is enabled.

nvram boot-args

2 – If you do not see serverperfmode=1 returned, enter this following line of code to enable it. (I recommend rebooting your system afterwards)

sudo nvram boot-args="serverperfmode=1 $(nvram boot-args 2>/dev/null | cut -f 2-)"

I’ve also tried changing the power settings on the Mac mini, and disabled power nap. No luck there either. I’ve also tried using the FQDN of the NAS as opposed to the short name of the device when I map the drives. Nope, nothing.
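For completeness, here’s roughly what the power settings changes look like from Terminal, in case you’d rather not click through System Preferences. Treat this as a sketch, and check the current values with pmset -g before changing anything.

pmset -g
sudo pmset -a powernap 0
sudo pmset -a sleep 0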

 

Solution

My QNAP still supports Apple File Protocol, and it supports multiple protocols for the same share. So I turned on AFP and mapped the drives that way. I’m pleased to say that I haven’t had the shares disconnect since (and have thus had a much smoother Plex experience), but I’m sad to say that this is the only solution I have to offer for the moment. And if your storage device doesn’t support AFP? Sod knows. I haven’t tried doing it via NFS, but I’ve heard reports that NFS was its own special bin fire in recent versions of Catalina. It’s an underwhelming situation, and maybe one day I’ll happen across the solution. And I can share it here and we can all do a happy dance.
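Incidentally, if you’d rather script the AFP mount than use Finder’s Connect to Server dialog, mount_afp works from Terminal. The server, share and mount point below are placeholders, so adjust to taste.

sudo mkdir -p /Volumes/media
sudo mount_afp "afp://user:password@qnapnas.local/media" /Volumes/media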

Random Short Take #42

Welcome to Random Short Take #42. A few players have worn 42 in the NBA, including Vin Baker, but my favourite from this list is Walt Williams. A big man with a jumpshot and a great tube sock game. Let’s get random.

  • Datadobi has formed a partnership with Melillo Consulting to do more in the healthcare data management space. You can read the release here.
  • It’s that time of the year when Backblaze releases its quarterly hard drive statistics. It makes for some really interesting reading, and I’m a big fan of organisations that are willing to be as transparent as Backblaze is with the experience it’s having in the field. It has over 142000 drives in the field, across a variety of vendors, and the insights it delivers with this report are invaluable. In my opinion this is nothing but a good thing for customers and the industry in general. You can read more about the report here.
  • Was Airplay the reason you littered your house with Airport Express boxes? Same here. Have you been thinking it might be nice to replace the Airport Express with a Raspberry Pi since you’ve moved on to a different wireless access point technology? Same here. This article might just be the thing you’ve been looking for. I’m keen to try this out.
  • I’ve been trying to optimise my weblog, and turned on Cloudflare via my hosting provider. The website ran fine, but I had issues accessing the WordPress admin page after a while. This article got me sorted out.
  • I’ve been a bit loose with the security of my home infrastructure from time to time, but even I don’t use WPS. Check out this article if you’re thinking it might somehow be a good idea.
  • This article on caching versus tiering from Chris Evans made for some interesting reading.
  • This was a thorough review of the QNAP QSW-308-1C Unmanaged Switch, an 11 (!) port unmanaged switch boasting three 10Gbps ports and eight 1Gbps ports. It’s an intriguing prospect, particularly given the price.
  • DH2i has announced it’s extending free access to DxOdyssey Work From Home (WFH) Software until December 31st. Read more about that here.

 

QNAP – TR-004 Firmware Issue Workaround

I’ve been a user of QNAP products for over 10 years now. I have a couple of home systems running at the moment, including a TS-831X with a TR-004 enclosure attached to it. Last week I was prompted to update the external enclosure firmware to 1.1.0. After I did that, I had an issue where, once the unit spun down its disks, the volume would be marked as “Not active” by the system and I’d lose access to the data. Recovery was simple enough – I could either reboot the box or manually recover the enclosure via the QTS interface. I raised a job with QNAP web support, and we went back and forth with troubleshooting over the course of a week. The ticket was eventually escalated, and it was acknowledged that the current fix was to roll back to version 1.0.4 of the enclosure firmware.

The box is only used for media storage for Plex, but I figured it was worth backing up the contents of the external enclosure to another location in case something went wrong with the rollback. In any case, I’ve not done a downgrade on a QNAP device before, so I thought it was worth documenting the procedure here.
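On the backup front, if you’d rather script it than drag folders around in File Station, something like rsync from the NAS shell does the job. The source and destination paths below are placeholders and will depend on how your volumes and shares are named.

[~] # rsync -av --progress /share/external_volume/ /share/CACHEDEV1_DATA/tr004-backup/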

For some reason I needed to use Chrome over Safari in this example. I don’t know why that is. But whatever. In QTS, click on Storage & Snapshots, then Storage. Click on External RAID Management and then click on Check for Update.

You’ll see in this example that the installed TR-004 version is 1.1.0. Click on Browse to get the firmware file you want to roll back to.

You’ll get a stern warning that this kind of thing might cause problems.

Take a backup. Then tick the box.

The update will progress. It doesn’t take too long.

You then need to power off the enclosure and power it back on.

And, hopefully, your data will still be there. One side effect I noted was that the shared folder on that particular volume no longer had the correct permissions associated with the share. Fortunately, this is a home environment, and I’m using one user account to provide access to the share. I don’t know what you’d do if you had a complicated permissions situation in place.
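If you hit the same thing in a simple, single-user setup, resetting ownership and permissions on the affected folder from the NAS shell is one way out. The path and account below are placeholders, and I wouldn’t try this blindly on anything with a complicated ACL arrangement.

[~] # chown -R admin:administrators /share/external_volume/media
[~] # chmod -R 775 /share/external_volume/media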

And there you go. Like most things with QNAP, it’s a fairly simple process. This is the first time I’ve had to use QNAP support, and I found them responsive and helpful. I’ll report back if I get any other issues with the enclosure.

Random Short Take #11

Here are a few links to some random news items and other content that I found interesting. You might find them interesting too. Maybe. Happy New Year too. I hope everyone’s feeling fresh and ready to tackle 2019.

  • I’m catching up with the good folks from Scale Computing in the next little while, but in the meantime, here’s what they got up to last year.
  • I’m a fan of the fruit company nowadays, but if I had to build a PC, this would be it (hat tip to Stephen Foskett for the link).
  • QNAP announced the TR-004 over the weekend and I had one delivered on Tuesday. It’s unusual that I have cutting edge consumer hardware in my house, so I’ll be interested to see how it goes.
  • It’s not too late to register for Cohesity’s upcoming Helios webinar. I’m looking forward to running through some demos with Jon Hildebrand and talking about how Helios helps me manage my Cohesity environment on a daily basis.
  • Chris Evans has published NVMe in the Data Centre 2.0 and I recommend checking it out.
  • I went through a basketball card phase in my teens. This article sums up my somewhat confused feelings about the card market (or lack thereof).
  • Elastifile Cloud File System is now available on the AWS Marketplace – you can read more about that here.
  • WekaIO have posted some impressive numbers over at spec.org if you’re into that kind of thing.
  • Applications are still open for vExpert 2019. If you haven’t already applied, I recommend it. The program is invaluable in terms of vendor and community engagement.


QNAP – Manually Mounting a Belligerent eSATA Drive Filesystem

It seems I’m complaining about my oldest QNAP every other week. Arguably, given the sweat equity invested in these devices, I’d be better off just replacing it. But I persist nonetheless. Recently I had to evacuate the array again and replace a drive. No, I know I shouldn’t have to do that but … look, fine. Anyway, I had used a combination of eSATA docked HDDs, a Windows 7 HTPC and some USB drives (basically whatever had capacity) to copy the data off. When I had the NAS sorted, I started to copy data back. This all went fine until I got to the last SATA drive. I kept getting a message that the filesystem wasn’t recognised. Even though the NAS had created said NTFS filesystem for me. It was a bit weird.


The solution, after a bit of searching, was to manually mount the filesystem and copy off the files. Here’s how to do it.

Firstly, you can identify the device via dmesg.

[150521.371058] scsi 2:0:0:0: Direct-Access     Seagate  ST2000DL003-9VT1 CC32 PQ: 0 ANSI: 5
[150521.377213] Check proc_name[ahci].
[150521.389517] sd 2:0:0:0: [sdza] 3907029168 512-byte logical blocks: (2.00 TB/1.81 TiB)
[150521.396630] Check proc_name[ahci].
[150521.403685] sd 2:0:0:0: [sdza] Write Protect is off
[150521.409037] sd 2:0:0:0: [sdza] Mode Sense: 00 3a 00 00
[150521.409217] sd 2:0:0:0: Attached scsi generic sg676 type 0
[150521.415783] sd 2:0:0:0: [sdza] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[150521.448217]  sdza: sdza1 sdza2 sdza3 sdza4
[150521.469368] sd 2:0:0:0: [sdza] Attached SCSI disk

Create a mount point on the NAS to mount the device to.

[~] # cd /mnt
[/mnt] # ls
HDA_ROOT/ HDB_ROOT/ HDC_ROOT/ HDD_ROOT/ HDE_ROOT/ HDF_ROOT/ QUPNP/    config/   ext/
[/mnt] # mkdir tmpmnt

Once that’s done, you can mount the device. In this case I’m using ntfs-3g as I know it’s an NTFS filesystem. If you’re using something else, like ext4, then the mount command will be different.

[/mnt] # ntfs-3g /dev/sdza3 /mnt/tmpmnt/
[/mnt] # cd /mnt/tmpmnt/
[/mnt/tmpmnt] # ls
movies/
[/mnt/tmpmnt] #

And you can then copy the files to where you need them to be. Note that while I said this was a problem with eSATA, it was really a problem with the filesystem, not the transport mechanism, as using USB didn’t work either.
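For the copy itself, something along these lines works; the destination share below is just an example, so use whatever suits your layout. Don’t forget to unmount the temporary mount point once you’re done with it.

[/mnt/tmpmnt] # cp -a movies /share/MD0_DATA/Qmultimedia/
[/mnt/tmpmnt] # cd /
[/] # umount /mnt/tmpmnt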

 

QNAP – Upgrading Firmware via the CLI

For some reason, I keep persisting with my QNAP TS-639 II, despite the fact that every time something goes wrong with it I spend hours trying to revive it. In any case, I recently had an issue with a disk showing SMART warnings. I figured it would be a good idea to replace it before it became a big problem. I had some disks on the shelf from the last upgrade. When I popped one in, however, it sent me this e-mail.

Server Name: qnap639
IP Address: 192.168.0.110
Date/Time: 28/05/2015 06:27:00
Level: Warning
The firmware versions of the system built-in flash (4.1.3 Build 20150408) and the hard drive (4.1.2 Build 20150126) are not consistent. It is recommended to update the firmware again for higher system stability.

Not such a great result. I ignored the warning and manually rebuilt the /dev/md0 device. When I rebooted, however, I still had the warning. And a missing disk from the md0 device (but that’s a story for later). To get around this problem, it is recommended that you reinstall the NAS firmware via the shell. I took my instructions from here. In short, you copy the image file to a share, copy that to an update directory, run a script, and reboot. It fixed my problem as it relates to that warning, but I’m still having issues getting a drive to join the RAID device. I’m currently clearing the array again and will put in a new drive next week. Here’s what it looks like when you upgrade the firmware this way.

[/etc/config] # cd /
[/] # mkdir /mnt/HDA_ROOT/update
mkdir: Cannot create directory `/mnt/HDA_ROOT/update': File exists
[/] # cd /mnt/HDA_ROOT/update
[/mnt/HDA_ROOT/update] # ls
[/mnt/HDA_ROOT/update] # cd /
[/] # cp /share/Public/TS-639_20150408-4.1.3.img /mnt/HDA_ROOT/update/
[/] # ln -sf /mnt/HDA_ROOT/update /mnt/update
[/] # /etc/init.d/update.sh /mnt/HDA_ROOT/update/TS-639_20150408-4.1.3.img 
cksum=238546404
Check RAM space available for FW update: OK.
Using 120-bit encryption - (QNAPNASVERSION4)
len=1048576
model name = TS-639
version = 4.1.3
boot/
bzImage
bzImage.cksum
config/
fw_info
initrd.boot
initrd.boot.cksum
libcrypto.so.1.0.0
libssl.so.1.0.0
qpkg.tar
qpkg.tar.cksum
rootfs2.bz
rootfs2.bz.cksum
rootfs_ext.tgz
rootfs_ext.tgz.cksum
update/
update_img.sh
4.1.3 20150408 
OLD MODEL NAME = TS-639
Allow upgrade
Allow upgrade
/mnt/HDA_ROOT/update
1+0 records in
1+0 records out
tune2fs 1.41.4 (27-Jan-2009)
Setting maximal mount count to -1
Setting interval between checks to 0 seconds
Update image using HDD ...
bzImage cksum ... Pass
initrd.boot cksum ... Pass
rootfs2.bz cksum ... Pass
rootfs_ext.tgz cksum ... Pass
rootfs_ext.tgz cksum ... Pass
qpkg.tar cksum ... Pass
Update RFS1...
mke2fs 1.41.4 (27-Jan-2009)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
13832 inodes, 55296 blocks
0 blocks (0.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=56623104
7 block groups
8192 blocks per group, 8192 fragments per group
1976 inodes per group
Superblock backups stored on blocks: 
8193, 24577, 40961
Writing inode tables: done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 21 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
Checking bzImage ... ok
Checking initrd.boot ... ok
Checking rootfs2.bz ... ok
Checking rootfs_ext.tgz ... ok
Update RFS2...
mke2fs 1.41.4 (27-Jan-2009)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
13832 inodes, 55296 blocks
0 blocks (0.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=56623104
7 block groups
8192 blocks per group, 8192 fragments per group
1976 inodes per group
Superblock backups stored on blocks: 
8193, 24577, 40961
Writing inode tables: done                            
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 31 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
1+0 records in
1+0 records out
Update Finished.
Make a Backup
/share/MD0_DATA
qpkg.tar cksum ... Pass
set cksum [238546404]
[/] # reboot
[/] #


QNAP – Add swap to your NAS for large volume fsck activities

That’s right, another heading from the Department of not terribly catchy blog article titles. I’ve been having a mighty terrible time with one of my QNAP arrays lately. After updating to 4.1.2, I’ve been getting some weird symptoms. For example, every time the NAS reboots, the filesystem is marked as unclean. Worse, it mounts as read-only from time to time. And it seems generally flaky. So I’ve spent the last week trying to evacuate the data with the thought that maybe I can re-initialize it and clear out some of the nasty stuff that’s built up over the last 5 years. Incidentally, while we all like to moan about how slow SATA disks are, try moving a few TB via a USB2 interface. The eSATA seems positively snappy after that.

Of course, QNAP released version 4.1.3 of their platform recently, and a lot of the symptoms I’ve been experiencing have stopped occurring. I’m going to continue down this path though, as I hadn’t experienced these problems on my other QNAP, and just don’t have a good feeling about the state of the filesystem. And you thought that I would be all analytical about it, didn’t you?

In any case, I’ve been running e2fsck on the filesystem fairly frequently, particularly when it goes read-only and I have to stop the services, unmount and remount the volume.

[/] # cd /share/MD0_DATA/
[/share/MD0_DATA] # cd Qmultimedia/    
[/share/MD0_DATA/Qmultimedia] # mkdir temp         
mkdir: Cannot create directory `temp': Read-only file system
[/share/MD0_DATA/Qmultimedia] # cd /
[/] # /etc/init.d/services.sh stop
Stop qpkg service: chmod: /share/MD0_DATA/.qpkg: Read-only file system
Shutting down Download Station: OK
Disable QUSBCam ... 
Shutting down SlimServer... 
Error: Cannot stop, SqueezeboxServer is not running.
WARNING: rc.ssods ERROR: script /opt/ssods4/etc/init.d/K20slimserver failed.
Stopping thttpd-ssods .. OK.
rm: cannot remove `/opt/ssods4/var/run/thttpd-ssods.pid': Read-only file system
WARNING: rc.ssods ERROR: script /opt/ssods4/etc/init.d/K21thttpd-ssods failed.
Shutting down QiTunesAir services: Done
Disable Optware/ipkg
.
Stop service: cloud3p.sh vpn_openvpn.sh vpn_pptp.sh ldap_server.sh antivirus.sh iso_mount.sh qbox.sh qsyncman.sh rsyslog.sh snmp lunportman.sh iscsitrgt.sh twonkymedia.sh init_iTune.sh ImRd.sh crond.sh nvrd.sh StartMediaService.sh bt_scheduler.sh btd.sh mysqld.sh recycled.sh Qthttpd.sh atalk.sh nfs ftp.sh smb.sh versiond.sh .
[/] # umount /dev/md0

 

So then I run e2fsck to check the filesystem. But on a large volume (in this case 8 and a bit TB), it uses a lot of RAM. And invariably runs out of swap space.

[/] # e2fsck /dev/md0
e2fsck 1.41.4 (27-Jan-2009)
/dev/md0: recovering journal
/dev/md0 contains a file system with errors, check forced.
Pass 1: Checking inodes, blocks, and sizes
Error allocating block bitmap (4): Memory allocation failed
e2fsck: aborted

 

So here’s what I did to enable some additional swap on a USB stick (courtesy of a QNAP forum post from RottUlf).

Insert a USB stick with more than 3GB of space. Create a swap file on it.

[/] # dd if=/dev/zero of=/share/external/sdi1/myswapfile bs=1M count=3072

Set it as a swap file.

[/] # mkswap /share/external/sdi1/myswapfile

Enable it as swap for the system.

[/] # swapon /share/external/sdi1/myswapfile

Check it.

[/] # cat /proc/swaps
Filename Type Size Used Priority
/dev/md8 partition 530040 8216 -1
/share/external/sdi1/myswapfile file 3145720 12560 -2

You should then be able to run e2fsck. Note that the example I linked to used e2fsck_64, but this isn’t available on the TS639 Pro II. Once you’ve fixed your filesystem issues, you’ll want to disable the swap file on the stick, remount the volume and restart your services.

[/] # swapoff /share/external/sdi1/myswapfile
[/] # mount /dev/md0
mount: can't find /dev/md0 in /etc/fstab or /etc/mtab

Oh no …

[/] # mount /dev/md0 /share/MD0_DATA/

Yeah, I don’t know what’s going on there either. I’ll report back in a while when I’ve wiped it and started again.
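And once the volume is mounted again, don’t forget to start the services back up. I’m assuming the start argument of the same init script used earlier behaves the way you’d expect.

[/] # /etc/init.d/services.sh start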


QNAP – Increase RAID rebuild times with mdadm

I recently upgraded some disks in my TS-412 NAS and it was taking some time. I vaguely recalled playing with min and max settings on the TS-639. Here’s a link to the QNAP forums on how to do it. The key is the min setting, and, as explained in the article, it really depends on how much you want to clobber the CPU. Keep in mind, also, that you can only do so much with a 3+1 RAID 5 configuration. I had my max set to 200000, and the min was set to 1000. As a result I was getting about 20MB/s, and each disk was taking a little less than 24 hours to rebuild. I bumped up the min setting to 50000, and it’s now rebuilding at about 40MB/s. The CPU is hanging at around 100%, but the NAS isn’t used that frequently.

To check your settings, use the following commands:

cat /proc/sys/dev/raid/speed_limit_max
cat /proc/sys/dev/raid/speed_limit_min

To increase the min setting, issue the following command:

echo 50000 >/proc/sys/dev/raid/speed_limit_min

And you’ll notice that, depending on the combination of disks, CPU and RAID configuration, your rebuild will go a wee bit faster than before.
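You can keep an eye on the rebuild (and the estimated finish time) via mdstat, and it’s worth dropping the min setting back to something sensible once the rebuild is done so normal I/O isn’t starved.

cat /proc/mdstat
echo 1000 >/proc/sys/dev/raid/speed_limit_min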

QNAP – How to repair RAID brokenness – Redux

I did a post a little while ago (you can see it here) that covered using mdadm to repair a munted RAID config on a QNAP NAS. So I popped another disk recently, and took the opportunity to get some proper output. Ideally you’ll want to use the web interface on the QNAP to do this type of thing but sometimes it no worky. So here you go.

Stop everything on the box.

[~] # /etc/init.d/services.sh stop
Stop service: recycled.sh mysqld.sh atalk.sh ftp.sh bt_scheduler.sh btd.sh ImRd.sh init_iTune.sh twonkymedia.sh Qthttpd.sh crond.sh nfs smb.sh lunportman.sh iscsitrgt.sh nvrd.sh snmp rsyslog.sh qsyncman.sh iso_mount.sh antivirus.sh .
Stop qpkg service: Disable Optware/ipkg
Shutting down SlimServer...
Stopping SqueezeboxServer 7.5.1-30836 (please wait) .... OK.
Stopping thttpd-ssods .. OK.
/etc/rcK.d/QK107Symform: line 48: /share/MD0_DATA/.qpkg/Symform/Symform.sh: No such file or directory

(By the way it really annoys me when I’ve asked software to remove itself and it doesn’t cleanly uninstall – I’m looking at you Symform plugin)

Unmount the volume

[~] # umount /dev/md0

Stop the array

[~] # mdadm -S /dev/md0
mdadm: stopped /dev/md0

Reassemble the volume

[~] # mdadm --assemble /dev/md0 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3 /dev/sde3 /dev/sdf3
mdadm: /dev/md0 has been started with 5 drives (out of 6).

Wait, wha? What about that other disk that I think is okay?

[~] # mdadm --detail /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Fri May 22 21:05:28 2009
Raid Level : raid5
Array Size : 9759728000 (9307.60 GiB 9993.96 GB)
Used Dev Size : 1951945600 (1861.52 GiB 1998.79 GB)
Raid Devices : 6
Total Devices : 5
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Wed Dec 14 19:09:25 2011
State : clean, degraded
Active Devices : 5
Working Devices : 5
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
UUID : 7c440c84:4b9110fe:dd7a3127:178f0e97
Events : 0.4311172
Number Major Minor RaidDevice State
0 8 3 0 active sync /dev/sda3
1 0 0 1 removed
2 8 35 2 active sync /dev/sdc3
3 8 51 3 active sync /dev/sdd3
4 8 67 4 active sync /dev/sde3
5 8 83 5 active sync /dev/sdf3

Or in other words

[~] # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md0 : active raid5 sda3[0] sdf3[5] sde3[4] sdd3[3] sdc3[2]
9759728000 blocks level 5, 64k chunk, algorithm 2 [6/5] [U_UUUU]
md6 : active raid1 sdf2[2](S) sde2[3](S) sdd2[4](S) sdc2[1] sda2[0]
530048 blocks [2/2] [UU]
md13 : active raid1 sdb4[2] sdc4[0] sdf4[5] sde4[4] sdd4[3] sda4[1]
458880 blocks [6/6] [UUUUUU]
bitmap: 0/57 pages [0KB], 4KB chunk
md9 : active raid1 sdf1[1] sda1[0] sdc1[4] sdd1[3] sde1[2]
530048 blocks [6/5] [UUUUU_]
bitmap: 34/65 pages [136KB], 4KB chunk
unused devices: <none>

So, when you see [U_UUUU] you’ve got a disk missing, but you knew that already. You can add it back into the array thusly.

[~] # mdadm --add /dev/md0 /dev/sdb3
mdadm: re-added /dev/sdb3

So let’s check on the progress.

[~] # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md0 : active raid5 sdb3[6] sda3[0] sdf3[5] sde3[4] sdd3[3] sdc3[2]
9759728000 blocks level 5, 64k chunk, algorithm 2 [6/5] [U_UUUU]
[>....................] recovery = 0.0% (355744/1951945600) finish=731.4min speed=44468K/sec
md6 : active raid1 sdf2[2](S) sde2[3](S) sdd2[4](S) sdc2[1] sda2[0]
530048 blocks [2/2] [UU]
md13 : active raid1 sdb4[2] sdc4[0] sdf4[5] sde4[4] sdd4[3] sda4[1]
458880 blocks [6/6] [UUUUUU]
bitmap: 0/57 pages [0KB], 4KB chunk
md9 : active raid1 sdf1[1] sda1[0] sdc1[4] sdd1[3] sde1[2]
530048 blocks [6/5] [UUUUU_]
bitmap: 34/65 pages [136KB], 4KB chunk
unused devices: <none>
[~] #

And it will rebuild. Hopefully. Unless the disk is really truly dead. You should probably order yourself a spare in any case.