Rescue software RAID system

William Muriithi william.muriithi-Re5JQEeQqe8AvxtiuMwx3w at public.gmane.org
Mon Feb 3 20:19:24 UTC 2014


>
> 1. Boot the working drive to confirm it works.
>    Was grub enabled on both drives ?
>    A lot forget this step when making a raid1.
>
> 2. Power down and drop in a new drive
>
> 3. fdisk and mdadm it and get the raid in sync.
>
The system comes up fine now and runs from md0. The problem now is
fixing the data partition, which is on md2.
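
For reference, the quoted steps 2 and 3 can be sketched as shell commands.
The drive letters here are assumptions, not from the thread: this assumes
the surviving drive is /dev/sdb and the blank replacement is /dev/sda, so
adjust to the real layout before running anything.

```shell
# Sketch only -- drive letters are assumptions; verify with fdisk -l first.
sfdisk -d /dev/sdb | sfdisk /dev/sda   # clone the partition table onto the new disk
mdadm --add /dev/md0 /dev/sda1         # re-add the RAID1 (system) member
grub-install /dev/sda                  # install grub so either drive can boot
cat /proc/mdstat                       # watch the array resync
```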

William
> Teddy
>
>
>
> William Muriithi wrote:
>>
>>  > > So I booted from a rescue CD, used mdadm to start the RAID device,
>>  > > mounted the device, and tried to chroot into the mount directory,
>>  > > but it didn't work; there didn't seem to be a chroot binary in my
>>  > > rescue environment.
>>  >
>>  > If this is the centos rescue environment, it definitely should include
>>  > chroot. The approach you're taking seems fine to me. I've done this in
>>  > the past.
>>
>> This eventually worked. I have now been able to bring one of the RAID
>> devices up fully: md0, which is RAID1. I am still struggling with md2,
>> though, which is a RAID5 device. Here is how it looks.
>>
>> mdadm --query --detail /dev/md2
>>
>> /dev/md2:
>>         Version : 1.2
>>   Creation Time : Thu Dec 13 11:04:35 2012
>>      Raid Level : raid5
>>   Used Dev Size : -1
>>    Raid Devices : 3
>>   Total Devices : 2
>>     Persistence : Superblock is persistent
>>
>>     Update Time : Thu Nov 21 09:20:07 2013
>>           State : active, degraded, Not Started
>>  Active Devices : 2
>> Working Devices : 2
>>  Failed Devices : 0
>>   Spare Devices : 0
>>
>>          Layout : left-symmetric
>>      Chunk Size : 512K
>>
>>            Name : gfs1.jamar.com:2  (local to host gfs1.jamar.com)
>>            UUID : 0202547e:d9a3bd67:fdeaf168:a780aa38
>>          Events : 6292
>>
>>     Number   Major   Minor   RaidDevice   State
>>        0       0        0        0        removed
>>        1       8       18        1        active sync   /dev/sdb2
>>        3       8       34        2        active sync   /dev/sdc2
>>
>> mdadm --add /dev/md2 /dev/sda2
>> mdadm: add new device failed for /dev/sda2 as 4: Invalid argument
>>
>> On dmesg, I see the error
>>
>> MD2: ADD_NEW_DISK not supported
>>
>> (The above actually worked for the RAID 1 device on the same system. The
>> system has two arrays: RAID 1 for the system partition and RAID 5 for
>> the data partition.)
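
The "ADD_NEW_DISK not supported" failure is consistent with the state shown
above: md2 is assembled but "Not Started", and mdadm cannot hot-add a device
to an array that is not running. A hedged sketch of getting the degraded
array running first, with device names taken from the output above (verify
them before running):

```shell
# Sketch only: start the degraded RAID5 from its two good members.
mdadm --stop /dev/md2                                  # drop the half-assembled array
mdadm --assemble --run /dev/md2 /dev/sdb2 /dev/sdc2    # --run starts it despite being degraded
cat /proc/mdstat                                       # md2 should now show active (degraded)
```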
>>
>> mdadm --examine /dev/sd[abc]2 | egrep 'dev|Update|Role|State|Chunk Size'
>> /dev/sda2:
>>           State : active
>>     Update Time : Thu Nov 21 09:20:07 2013
>>     Device Role : spare
>>     Array State : AAA ('A' == active, '.' == missing)
>> /dev/sdb2:
>>           State : active
>>     Update Time : Thu Nov 21 09:20:07 2013
>>     Device Role : Active device 1
>>     Array State : AAA ('A' == active, '.' == missing)
>> /dev/sdc2:
>>           State : active
>>     Update Time : Thu Nov 21 09:20:07 2013
>>     Device Role : Active device 2
>>     Array State : AAA ('A' == active, '.' == missing)
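
Note that --examine reports /dev/sda2 with Device Role "spare" rather than
active device 0, which suggests stale metadata on that partition. One
cautious approach, once md2 is up and running in degraded mode, is to clear
that superblock and re-add the partition. This is a sketch, not a guaranteed
fix, and --zero-superblock destroys the RAID metadata on sda2, so be sure
the array is running from sdb2 and sdc2 first:

```shell
# Sketch only: clear the stale "spare" metadata and re-add the member.
mdadm --zero-superblock /dev/sda2   # destructive to sda2's RAID metadata
mdadm --add /dev/md2 /dev/sda2      # rebuild onto sda2 starts automatically
watch cat /proc/mdstat              # monitor the resync progress
```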
>>
>> What does "used device size" mean when the number is negative?
>>
>> Anybody see something I may have approached wrongly? It's a backup
>> system, so I'm not too worried for now. I plan to just destroy the
>> device and create a new one if I can't bring it up by the end of the
>> week. It would be satisfying to bring it back, though.
>>
>> William
>>  >
>>  > Also, in the unfortunate event that you have problems during the
>>  > rebuild, be aware that unless the first failed disk is completely
>>  > fubared, you may be able to use some of its data if required.
>>  > Hopefully not, but keep it in mind.
>>  >
>>  > Thanks
>>  > -Ben
>>  > --
>>  > The Toronto Linux Users Group.      Meetings: http://gtalug.org/
>>  > TLUG requests: Linux topics, No HTML, wrap text below 80 columns
>>  > How to UNSUBSCRIBE: http://gtalug.org/wiki/Mailing_lists
>>