Boot Problem after Crash

Tony Abou-Assaleh taa-HInyCGIudOg at public.gmane.org
Mon Mar 10 04:17:39 UTC 2008


Tony Abou-Assaleh wrote:
> Lennart Sorensen wrote:
>> On Fri, Mar 07, 2008 at 04:56:21PM -0400, Tony Abou-Assaleh wrote:
>>> Lennart Sorensen wrote:
>>>> On Fri, Mar 07, 2008 at 01:39:18PM -0400, Tony Abou-Assaleh wrote:
>>>>> Hi tlugers,
>>>>>
>>>>> My PSU was toasted. I took the HDD out and installed it in another
>>>>> PC. The file system had problems, but after e2fsck it appears
>>>>> stable. No data was lost as far as I can tell, but I couldn't boot
>>>>> into my Ubuntu 7.10 (Xubuntu) Linux.
>>>>>
>>>>> I booted using a live CD and went into rescue mode. I
>>>>> updated/upgraded packages using apt-get, executed update-initramfs,
>>>>> and all seemed well. I can even start apache and sshd from this
>>>>> rescue shell.
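
(For the record, those rescue-shell steps were roughly the following;
update-initramfs -u regenerates the initramfs for the newest installed
kernel:)

--
# apt-get update && apt-get upgrade
# update-initramfs -u
--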
>>>>>
>>>>> When I try to boot from the HDD I consistently get the same thing: 
>>>>> I get the BusyBox initramfs shell and I don't know how to go past 
>>>>> that. Nothing is mounted.
>>>>>
>>>>> I have the boot partition on /dev/sda1 and the root partition on a
>>>>> RAID1 volume. I am using only a single drive from the RAID array.
>>>>>
>>>>> Any ideas why I'm getting the initramfs prompt on boot and how to 
>>>>> get past that?
>>>> Perhaps the new machine enumerates the disks differently so it doesn't
>>>> know where to look for the root.
>>>>
>>>> What do you get from 'cat /proc/partitions' in the initramfs shell?
>>> -- 
>>> major    minor    #blocks    name
>>> 8    0    xxx    sda
>>> 8    1    xxx    sda1
>>> 8    2    xxx    sda2
>>> 9    0    xxx    md0
>>> -- 
>>>
>>> I omitted the block counts. sda1 is the boot partition, and sda2 is
>>> the RAID container. When I run the same command from the recovery
>>> shell, I get two additional entries:
>>>
>>> -- 
>>> 253    0    xxx    dm-0
>>> 253    1    xxx    dm-1
>>> -- 
>>>
>>> Where dm-0 is the swap partition and dm-1 is the root partition.
>>>
>>>> How about /proc/mdstat?
>>> -- 
>>> Personalities : [raid1]
>>> md0 : active raid1 sda2[0]
>>>     xxx blocks [2/1] [U_]
>>
>> So one disk in the RAID failed to appear?
> 
> Yes.
> 
>>> unused devices: <none>
>>> -- 
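
(For reference, the "[2/1] [U_]" line above means the array expects two
members but only one, sda2, is active; the underscore marks the missing
disk. Where the mdadm tool is available, the long form of the same
state can be read with:)

--
# mdadm --detail /dev/md0
--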
>>>
>>>> Perhaps you can find out what root is called and change the boot loader
>>>> to use root= whatever that is.
>>>>
>>>> I do everything in the boot loader and fstab by UUID these days just to
>>>> avoid this kind of hassle.
>>> I changed it to use root=UUID=xxx in the grub menu.lst, same thing.
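
(For anyone following along: the UUID comes from blkid against the root
filesystem, and the menu.lst stanza then looks roughly like the sketch
below. The vg0-root name is a placeholder, and "xxx" stands for the
UUID I omitted:)

--
# blkid /dev/mapper/vg0-root
/dev/mapper/vg0-root: UUID="xxx" TYPE="ext3"

title   Ubuntu 7.10
root    (hd0,0)
kernel  /vmlinuz-2.6.22-14-386 root=UUID=xxx ro
initrd  /initrd.img-2.6.22-14-386
--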
>>
>> Can you mount the root partitions from initramfs and then pivotroot or
>> whatever they call it now to it and continue the boot?
> 
> When I did an LVM scan, it showed my root and swap volumes as
> inactive. After I activated them, I was able to mount them. I tried to
> chroot to the root, but that didn't go too well because none of the
> libraries required by many standard utilities were available. So I
> don't know what to do next from the initramfs prompt.
> 
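
(For anyone stuck at the same point, the scan/activate/mount sequence
from the BusyBox shell goes roughly like this; the vg0 volume group
name is a placeholder, since mine isn't shown here:)

--
(initramfs) lvm vgscan
(initramfs) lvm vgchange -ay
(initramfs) mount /dev/mapper/vg0-root /root
--

(On Ubuntu's initramfs-tools, mounting the root volume at /root and
then exiting the shell will usually let the normal boot continue.)
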
>>>> The other option is that your initramfs only loaded the driver modules
>>>> needed by the old machine and not the ones for the new one, although
>>>> most I have seen recently (in Debian at least) try to load pretty much
>>>> everything.  Is the new machine perhaps too new or simply not supported
>>>> by your Linux version?
>>> I can boot fine from a live CD, so the machine is Linux-compatible.
>>> I was able to connect to the Internet, start apache and sshd
>>> manually, and connect to them from another machine.
>>>
>>> Also, I ran update-initramfs on the new machine, so again that's not
>>> likely to be the problem.
>>>
>>> It looks like the RAID volume is recognized during the boot
>>> sequence, but the partitions within it are not. Any ideas?
>>
>> I have never used partitions on RAID.  I always run LVM on RAID.  I know
>> how that works.  LVM is much more flexible than partitions, so why use
>> partitions on RAID?
> 
> My bad. I am actually using LVM and they're volumes, not partitions.

After spending hours and hours reading the forums, I found one posting
where a similar problem was solved by installing lvm2. For me, lvm2 was
already installed, and generating new initrd files and even installing
new kernels had only made things worse. However, after reading this
post, I tried the following command from a chrooted rescue shell:

--
# dpkg-reconfigure lvm2
Backing up any LVM2 metadata that may exist...done.
update-initramfs: Generating /boot/initrd.img-2.6.22-14-386
--
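
(Since the image is just a gzipped cpio archive, one way to check that
the regenerated initramfs actually picked up the LVM pieces is:)

--
# zcat /boot/initrd.img-2.6.22-14-386 | cpio -it | grep lvm
--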

I tried rebooting with this new initrd and boom! Problem solved.

Thanks for your assistance, Lennart and Tyler; it helped me look in the
right direction.

Cheers,

TAA

-- 
Tony Abou-Assaleh
Email:    taa-HInyCGIudOg at public.gmane.org
Web site: http://tony.abou-assaleh.net