Rescue software RAID system

William Muriithi william.muriithi-Re5JQEeQqe8AvxtiuMwx3w at public.gmane.org
Tue Feb 4 21:35:06 UTC 2014


> > > > Have already tried this with the same error. I suspected the two
> > > > commands are like aliases.
> > >
> > > What is the current state of the raid:
> > >
> > > cat /proc/mdstat
> > >
> >
> > Looks like it's inactive.
> >
> > Personalities : [raid1] [raid6] [raid5] [raid4]
> >
> > md2 : inactive sdb2[1] sdc2[3]
> >       5692755968 blocks super 1.2
> >
> > md0 : active raid1 sda1[2] sdb1[1]
> >       83886008 blocks super 1.0 [2/2] [UU]
> >       bitmap: 0/1 pages [0KB], 65536KB chunk
> >
> > unused devices: <none>
> > > --
> > > Len Sorensen
> Actually this got me on the proper path. I googled for "inactive raid
> device" and it does look like you need to stop it first.
>
> Then, I started it and got this message
> mdadm --assemble /dev/md2 /dev/sda2 /dev/sdb2 /dev/sdc2
>
> mdadm: device 3 in /dev/md2 has wrong state in superblock, but /dev/sda2
> seems ok
> mdadm: /dev/md2 assembled from 2 drives and 1 spare - not enough to start
> the array while not clean - consider --force.
>
> Have never liked to use force.
>
So I forced the assembly and all seems fine now. Thanks a lot, Len. I am
good now.

mdadm --assemble --force /dev/md2 /dev/sda2 /dev/sdb2 /dev/sdc2
mdadm: clearing FAULTY flag for device 0 in /dev/md2 for /dev/sda2
mdadm: Marking array /dev/md2 as 'clean'
md/raid:md2: raid level 5 active with 2 out of 3 devices, algorithm 2
mdadm: /dev/md2 has started with 2 drives (out of 3) and 1 spare

cat /proc/mdstat

Personalities : [raid1] [raid6] [raid5] [raid4]
md2 : active raid5 sdb2[1] sda2[4] sdc2[3]
      5692754944 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [_UU]
      [>....................]  recovery =  1.7% (50805888/2846377472) finish=431.8min speed=107884K/sec

md0 : active raid1 sda1[2] sdb1[1]
      83886008 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

unused devices: <none>
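As an aside, the recovery percentage and ETA can be pulled out of
/proc/mdstat with a quick grep. The snippet below is only a sketch and runs
against a captured sample of the recovery line above; on a live box you
would read /proc/mdstat directly instead of the here-string:

```shell
# Sample of the md2 recovery line captured above; on a live system,
# replace the sample with e.g.: sample=$(grep -A2 '^md2' /proc/mdstat)
sample='      [>....................]  recovery =  1.7% (50805888/2846377472) finish=431.8min speed=107884K/sec'

# Pull out the completion percentage and the estimated finish time
pct=$(echo "$sample" | grep -oE '[0-9]+\.[0-9]+%' | head -n1)
eta=$(echo "$sample" | grep -oE 'finish=[^ ]+')
echo "rebuild: $pct, $eta"
```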

/dev/md2:
        Version : 1.2
  Creation Time : Thu Dec 13 11:04:35 2012
     Raid Level : raid5
     Array Size : 5692754944 (5429.03 GiB 5829.38 GB)
  Used Dev Size : 2846377472 (2714.52 GiB 2914.69 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Tue Feb  4 16:18:14 2014
          State : clean, degraded, recovering
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

 Rebuild Status : 11% complete

           Name : gfs1.jamar.com:2  (local to host gfs1.jamar.com)
           UUID : 0202547e:d9a3bd67:fdeaf168:a780aa38
         Events : 6296

    Number   Major   Minor   RaidDevice State
       4       8        2        0      spare rebuilding   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2
       3       8       34        2      active sync   /dev/sdc2
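For scripting, the degraded state is also visible in the "[3/2] [_UU]"
fields of /proc/mdstat: the underscore marks the missing member. A minimal
sketch, using that status string as a stand-in for the live file:

```shell
# Status fields as shown in /proc/mdstat for md2 while rebuilding;
# on a live system, grab them with e.g.: status=$(grep '^md2' -A1 /proc/mdstat)
status='[3/2] [_UU]'

# Each '_' is an absent/rebuilding member; count them
missing=$(printf '%s' "$status" | tr -cd '_' | wc -c)
echo "degraded members: $missing"
```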

William
> > > The Toronto Linux Users Group.      Meetings: http://gtalug.org/
> > > TLUG requests: Linux topics, No HTML, wrap text below 80 columns
> > > How to UNSUBSCRIBE: http://gtalug.org/wiki/Mailing_lists

