Checking for and installing GRUB on both disks of an md RAID 1 array

If you are using a two-disk md RAID 1 for your system disk to provide redundancy, then you want to be able to boot from either drive in the event of a disk failure. That means you need GRUB installed in the boot block (MBR) of both disks. The check is easy:

-bash-4.1# dd if=/dev/sdb bs=1 count=512 | grep -o -a --color '[\x20\x30-\x7a]\{2,4\}' 
512+0 records in
512+0 records out
512 bytes (512 B) copied, 0.000566734 s, 903 kB/s

The command above intentionally uses grep rather than the more convenient strings, because if you boot into linux rescue from a flash drive you won't have strings available! If you are being proactive and setting this up before a disk failure, you can use strings instead. The dd dumps the first 512 bytes of the disk (the MBR) to stdout and pipes them into grep. The -a tells grep to process the bytes as if they were text, and the -o instructs grep to print only the matched strings. If you see GRUB in the output, you are good to go.
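If the second disk comes up empty, GRUB legacy can be installed onto it from the grub shell. A sketch along these lines, assuming /dev/sdb is the second array member and /boot lives on its first partition (hd0,0); the device line temporarily remaps (hd0) to /dev/sdb, so the stage1 written to its MBR will look for stage2 on that same disk if it ever has to boot alone. Adjust the device and partition numbers to your layout before running it:

```
-bash-4.1# grub --batch <<EOF
device (hd0) /dev/sdb
root (hd0,0)
setup (hd0)
quit
EOF
```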
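The same check wrapped in a loop makes it easy to inspect both array members at once. This is a sketch, demonstrated here against two 512-byte image files standing in for /dev/sda and /dev/sdb (the image names and the signature offset are assumptions for the demo; point the loop at your real device nodes in practice):

```shell
#!/bin/sh
# Build stand-in boot sectors: only the first carries a GRUB signature.
dd if=/dev/zero of=disk1.img bs=512 count=1 2>/dev/null
printf 'GRUB' | dd of=disk1.img bs=1 seek=392 conv=notrunc 2>/dev/null
dd if=/dev/zero of=disk2.img bs=512 count=1 2>/dev/null

# Check the first 512 bytes of each "disk" for the GRUB stage1 signature.
for disk in disk1.img disk2.img; do
    if dd if="$disk" bs=1 count=512 2>/dev/null | grep -qa GRUB; then
        echo "$disk: GRUB present"
    else
        echo "$disk: GRUB missing"
    fi
done
```

Because grep -q sets the exit status, the same test drops straight into a monitoring script or a pre-flight check before swapping a drive.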

Editing initrd (Initial ramdisk)

Editing an initrd is often simpler than creating one.

I needed to create a new device node for the system md device: while working on the system RAID 1 md under linux rescue after a power failure, the minor device number had changed, and init was throwing the error "mount: could not find filesystem /dev/root".
The following steps are what I did to fix it…