It was more complicated than I had thought, but in the end it worked, so it is possible to move a system from (software) RAID-1 using 2 disks to (software) RAID-5 using 3 disks without the help of an additional disk during the transfer.
Some things I had to keep in mind:
- One hard disk (say, hda) has to be excluded from the RAID-1, i.e. it must be marked as failed. Otherwise the array would automatically get re-assembled at boot time, as the partitions are still flagged as type 0xFD (“Linux RAID autodetect”). The boot manager needs a new entry to boot from this disk, and the /etc/fstab on hda needs to be modified to mount the hda devices directly instead of the md devices.
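Marking and removing the disk can be done per array with the raidtools, roughly like this (/dev/md0 and /dev/hda1 are placeholders for each actual array/member pair; mdadm’s --fail/--remove works as well):
# raidsetfaulty /dev/md0 /dev/hda1
# raidhotremove /dev/md0 /dev/hda1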
- An additional PCI ATA controller appears as a SCSI boot device in the BIOS. If hda is moved over to this controller, make the SCSI device the first one to boot from. The kernel needs support for the controller’s chipset compiled in. Note that the devices on the PCI ATA controller appear as hd[a-d], and those on the mainboard’s controller come after them!
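Which controller ends up with which device names can be verified from the kernel’s IDE probe messages, e.g.:
# dmesg | grep -E 'ide[0-9]|hd[a-h]'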
- It is then possible to create a degraded RAID-5 with the other two disks:
# mkraid --dangerous-no-resync --really-force /dev/md1
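mkraid takes the array layout from /etc/raidtab; the entry for the degraded creation looks roughly like this, with the still-missing disk declared as failed-disk (the third partition, hda2, and the chunk size are examples here - adapt them to your layout):
    raiddev /dev/md1
        raid-level            5
        nr-raid-disks         3
        nr-spare-disks        0
        persistent-superblock 1
        parity-algorithm      left-symmetric
        chunk-size            32
        device                /dev/hdb2
        raid-disk             0
        device                /dev/hde2
        raid-disk             1
        device                /dev/hda2
        failed-disk           2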
RAID-5 knows the two states ‘dirty’ (i.e. up and running) and ‘clean’ (set right before shutdown). RAID-5 does not re-assemble automatically at boot time, and ‘raidstart’ fails if the array has not been marked as ‘clean’ before. You need mdadm to mark the array as clean for the first time:
# mdadm --stop /dev/md1
# mdadm --assemble /dev/md1 /dev/hdb2 /dev/hde2
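Whether the degraded array came up can then be checked with the following; the output should list only two of the three member devices:
# cat /proc/mdstat
# mdadm --detail /dev/md1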
- As the number of md devices in my case had decreased from 8 to 4, I had a problem with the “Linux RAID autodetect” partitions on hda: they were still re-assembled at boot time as /dev/mdX for X > 3! I therefore had to reference those automatically detected md devices instead of the plain hda devices in the /etc/fstab on hda, although each of them contained only a single hda device. But that was not stable: after booting from a Linux live CD, the numbering of the md devices changed its order *permanently*, i.e. the changed order remained even after booting from the hard disks again!
To get around that problem I started over with a different strategy: I duplicated the partitioning of hda to the other two disks and recreated the original RAID-1 arrays there. Then I could do a (backup) sync from hda to the RAID-1 devices. After rebooting from that RAID, I flagged the partitions of hda as ordinary Linux partitions (type 0x83) - the data even remained intact! Rebooting with hda as root, there was no more autodetection of the hda partitions, and I could create the RAID-5 with the other two disks as described above.
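In terms of commands, that was roughly the following (a sketch - the partition number is an example, and sfdisk will happily overwrite the wrong disk if you mistype):
# sfdisk -d /dev/hda | sfdisk /dev/hdb
# sfdisk -d /dev/hda | sfdisk /dev/hde
and later, for each partition of hda to be flagged as ordinary Linux (here partition 2):
# sfdisk --change-id /dev/hda 2 83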
After that I could do the usual mounting, rsync’ing and dual booting, praying that no disk would fail right in that “degraded array” phase. As everything seemed to work, I repartitioned hda and raidhotadd’ed it to the running array. The sync for the 147 GB array took about 90 minutes.
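For the record, those final steps looked roughly like this (the mount point and the hda partition are placeholders):
# mount /dev/md1 /mnt/md1
# rsync -avx / /mnt/md1/
# raidhotadd /dev/md1 /dev/hda2
# cat /proc/mdstat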