Manually Repair QNAP RAID 5 Array

I have a QNAP TS-421 with a failed RAID 5 array. The Web interface shows the RAID as degraded: drive 2 has failed. I've replaced drive 2 (/dev/sdb) with a new 2TB drive to match the other three (working) drives.

The Web interface fails to rebuild the array. Whilst the data on the RAID is still accessible, the array is in degraded mode and has no fault tolerance.

Resorting to SSH / PuTTY to fix this:

Check each of the drives with:

fdisk -l /dev/sda
fdisk -l /dev/sdb
fdisk -l /dev/sdc
fdisk -l /dev/sdd

Each drive yields output similar to:

[~] # fdisk -l /dev/sda

Disk /dev/sda: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1                1          66      530125   83  Linux
/dev/sda2               67         132      530142   83  Linux
/dev/sda3              133      243138  1951945693   83  Linux
/dev/sda4           243139      243200      498012   83  Linux

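In my case the QNAP had already partitioned the replacement drive to match the others. If your new drive shows up blank in fdisk, one option is to copy the partition layout from a healthy member. This is a sketch assuming sfdisk is available on the NAS (the stripped-down QNAP shell may not include it):

[~] # sfdisk -d /dev/sda > /tmp/sda_parts.txt
[~] # sfdisk /dev/sdb < /tmp/sda_parts.txt
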
Check the status of the RAID with mdadm:

[~] # mdadm --detail /dev/md0
/dev/md0:
        Version : 01.00.03
  Creation Time : Tue Jan 24 17:34:17 2017
     Raid Level : raid5
     Array Size : 5855836608 (5584.56 GiB 5996.38 GB)
  Used Dev Size : 1951945536 (1861.52 GiB 1998.79 GB)
   Raid Devices : 4
  Total Devices : 3
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Wed Jun 7 09:25:11 2017
          State : clean, degraded
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           Name : 0
           UUID : f15b5712:01afc7a3:61851d59:5d6f8393
         Events : 4391

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       1       0        0        1      removed
       2       8       35        2      active sync   /dev/sdc3
       3       8       51        3      active sync   /dev/sdd3


As expected, we can see that /dev/sdb3 is missing from the array above (RaidDevice 1 shows as removed).
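
A quicker summary of the same information is available from /proc/mdstat:

[~] # cat /proc/mdstat

In its output you should see something like [4/3] [U_UU]: one U per healthy member and an underscore for each missing one; the underscore in slot 1 here corresponds to the removed /dev/sdb3.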

Make sure that the partition we want to add to the array isn't mounted, using:

umount /dev/sdb3
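
To double-check, grep the mount table; no output means the partition is not mounted:

[~] # mount | grep sdb3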

Next, add the partition to the array (this will destroy any existing data on the drive being added, /dev/sdb in this case):

[~] # mdadm /dev/md0 --add /dev/sdb3
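
If the add is refused because the partition still carries a stale RAID superblock from a previous array (common when reusing a drive), the md metadata can be wiped first. This is a recovery step for that situation only, and it destroys the md superblock on /dev/sdb3:

[~] # mdadm --zero-superblock /dev/sdb3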

Now check the details of the array again:

[~] # mdadm --detail /dev/md0
/dev/md0:
        Version : 01.00.03
  Creation Time : Tue Jan 24 17:34:17 2017
     Raid Level : raid5
     Array Size : 5855836608 (5584.56 GiB 5996.38 GB)
  Used Dev Size : 1951945536 (1861.52 GiB 1998.79 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Wed Jun 7 09:27:20 2017
          State : clean, degraded, recovering
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 64K

 Rebuild Status : 0% complete

           Name : 0
           UUID : f15b5712:01afc7a3:61851d59:5d6f8393
         Events : 4421

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync       /dev/sda3
       4       8       19        1      spare rebuilding  /dev/sdb3
       2       8       35        2      active sync       /dev/sdc3
       3       8       51        3      active sync       /dev/sdd3

Here we can see that /dev/sdb3 is now part of the array and rebuilding.

The Web GUI should now also show the array rebuilding.
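
The rebuild will take several hours on 2TB drives. You can watch progress from the shell with:

[~] # cat /proc/mdstat

On a stock Linux md setup the minimum rebuild speed can be raised to shorten the rebuild, at the cost of I/O responsiveness whilst it runs; I haven't confirmed the QNAP firmware honours this tunable:

[~] # echo 50000 > /proc/sys/dev/raid/speed_limit_min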
