If you are using a two-disk md RAID 1 for your system to provide redundancy, then you want to be able to boot from either drive in the event of a disk failure. That means you need GRUB installed in the boot block of both drives. The check is easy:
-bash-4.1# dd if=/dev/sdb bs=1 count=512 | grep -o -a --color '[\x20\x30-\x7a]\{2,4\}'
ZR
;D
GRUB
Ha
512+0 records in
512+0 records out
512 bytes (512 B) copied, 0.000566734 s, 903 kB/s
-bash-4.1#
The command above intentionally shows the use of grep rather than the more convenient strings, because if you boot into linux rescue from a flash drive you won’t have strings! If you are being proactive and setting this up before a disk failure, then you can use strings. The dd dumps the first 512 bytes of the disk to stdout, which is piped into grep. The -a tells grep to process the bytes as if they were text, and the -o instructs grep to only show the matched strings. If you see GRUB you are good to go.
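If you do have strings available (for example, when you are setting this up proactively on a running system), the equivalent check is something along these lines; run it against each member of the mirror and look for GRUB in the output:

-bash-4.1# dd if=/dev/sdb bs=512 count=1 2>/dev/null | strings | grep GRUB
-bash-4.1# dd if=/dev/sdc bs=512 count=1 2>/dev/null | strings | grep GRUB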
But what if you don’t see GRUB? Let’s assume your two disks are /dev/sdb and /dev/sdc. If you have shut down the system and removed the failed drive, e.g. /dev/sdc, then when you boot into linux rescue you will need to install GRUB onto /dev/sdb. If your flash drive comes up as /dev/sda (it should) and /dev/sdb is the system disk that linux rescue found and mounted under /mnt/sysimage, then you will run GRUB and install the boot loader onto BIOS drive (hd1). (The BIOS will have enumerated /dev/sda as (hd0).)
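If you are not sure which BIOS drive number the surviving disk was given, one quick sanity check from the grub prompt is the find command, which prints every (hdX,Y) that contains the named file. Note the path is /grub/stage1 rather than /boot/grub/stage1 because /boot is its own partition here:

grub> find /grub/stage1

In this layout it should report (hd1,1). The full session to install the boot loader looks like this: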
-bash-4.1# grub
Probing devices to guess BIOS drives. This may take a long time.

    GNU GRUB  version 0.97  (640K lower / 3072K upper memory)

 [ Minimal BASH-like line editing is supported.  For the first word, TAB
   lists possible command completions.  Anywhere else TAB lists the possible
   completions of a device/filename.]

grub> root (hd1,1)
root (hd1,1)
 Filesystem type is ext2fs, partition type 0xfd

grub> setup (hd1)
setup (hd1)
 Checking if "/boot/grub/stage1" exists... no
 Checking if "/grub/stage1" exists... yes
 Checking if "/grub/stage2" exists... yes
 Checking if "/grub/e2fs_stage1_5" exists... yes
 Running "embed /grub/e2fs_stage1_5 (hd1)"... 27 sectors are embedded.
succeeded
 Running "install /grub/stage1 (hd1) (hd1)1+27 p (hd1,1)/grub/stage2 /grub/grub.conf"... succeeded
Done.

grub> quit
quit
-bash-4.1#
The first thing to do after launching grub is to set the “root device” to the boot partition of the working drive. You have to know the partition map of the drive:
-bash-4.1# sfdisk -l /dev/sdb

Disk /dev/sdb: 30401 cylinders, 255 heads, 63 sectors/track
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
/dev/sdb1          0+    522-    523-   4194304   82  Linux swap / Solaris
/dev/sdb2   *    522+    587-     66-    524288   fd  Linux raid autodetect
/dev/sdb3        587+  30401-  29814- 239478784   fd  Linux raid autodetect
/dev/sdb4          0       -       0          0    0  Empty
-bash-4.1#
You can see that /dev/sdb1 is swap and /dev/sdb2 is my /boot partition, with /dev/sdb3 as /. From grub’s perspective the partitions are numbered 0, 1, 2, so we refer to the boot partition as (hd1,1). (Remember, the flash drive is /dev/sda right now, which is (hd0) to grub.) The grub setup command will do the magic for you: it will find the required files on your boot partition and write the required sectors onto hd1. You can check for GRUB using the dd | grep command described above.
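If you would rather not type the commands interactively (say, you are repairing several machines), grub legacy will also read the same commands from stdin in batch mode. A minimal sketch, assuming the same layout as above:

-bash-4.1# grub --batch <<EOF
root (hd1,1)
setup (hd1)
quit
EOF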
You should now be able to remove the flash drive, reboot, and have the system come up on the remaining working drive of the md RAID 1.
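Once the system is back up, it is worth confirming the state of the arrays; with the failed disk removed you would expect /proc/mdstat to show each mirror running degraded, with one member missing (a _ in the [U_] status):

-bash-4.1# cat /proc/mdstat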
Of course, you will want to put in a new drive of similar size and rebuild the md array to get back to normal operation. I’ll write that up next.