Stupid RAID question
Alejandro Imass
aimass-EzYyMjUkBrFWk0Htik3J/w at public.gmane.org
Tue Oct 4 00:44:09 UTC 2011
On Mon, Oct 3, 2011 at 7:27 PM, William Muriithi
<william.muriithi-Re5JQEeQqe8AvxtiuMwx3w at public.gmane.org> wrote:
>
>> It depends. It's usual, and actually quite typical to have the OS and
>> boot system in a non-RAID drive and then start your array from that
>> disk. For example, older servers had a boot-up IDE drive and then from
>> there you would start the array.
>>
> Hmm, why would you do that when you can boot straight off the RAID
> system and hence avoid a single point of failure? That was a good
> solution when GRUB was not able to handle RAID.
>
Usually you want to keep the OS you boot from separate from your RAID
volumes. It's actually standard practice.
>>
>> 1 small boot drive and then 2 others in raid 1, or 4 in raid 10
>>
> RAID 10 would waste too much space, I think. For more than 3 drives,
> RAID 5 would offer him more space. That would still be overkill for a
> personal desktop; a RAID 1 on two drives would be the cheapest
> solution.
>
Again, it depends on your goal: reducing risk, performance, or the
size of the array? RAID 10 usually provides the lowest risk, and its
simplicity greatly outweighs RAID 5 in most situations where you have
a small number of spindles. With anything less than 10 drives I would
usually go with RAID 10 despite the "smaller" usable space (not
actually that much less). In a 4-drive RAID 10 one drive can fail on
each mirror pair and the array remains fully functional; in RAID 5 any
two-drive failure renders the array unusable and you have to restore
from backup.
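A quick back-of-the-envelope sketch of the space trade-off, assuming 4
drives of 1 TB each (the numbers are illustrative, not from any
particular setup):

```shell
#!/bin/sh
# Usable capacity for 4 drives of 1 TB each (illustrative numbers).
N=4
SIZE=1
RAID10=$(( N * SIZE / 2 ))    # half the drives are mirrors -> 2 TB
RAID5=$(( (N - 1) * SIZE ))   # one drive's worth of parity -> 3 TB
echo "RAID 10 usable: ${RAID10} TB"
echo "RAID 5  usable: ${RAID5} TB"
```

So with 4 drives, RAID 5 only buys you one extra drive's worth of
space, at the cost of the failure behaviour described above.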
Regarding Linux software RAID in particular, I had a very bad
experience with mdadm/RAID 5 in the past. It works in the lab, but in
a real-world scenario it will bite you in the ass, so I wouldn't touch
it with a ten-foot pole. RAID 10, on the other hand, is usually very
simple and straightforward, very stable, and very fault tolerant,
especially if you have S.M.A.R.T. monitoring working. With cheap
consumer drives you can bet that all your drives will fail at almost
the same time, so it's very common for two drives in a cheap array to
fail one right after the other.
Furthermore, if the application is random-write intensive (e.g. RDBMS
storage), RAID 5 will kill the performance of the DB unless it's a
very large SAN implementation with *very* professional hardware. Disk
drives are relatively cheap nowadays, and RAID 10 is preferred across
the board for many reasons, especially over slow and unstable software
RAID 5.
My real-world $0.02: if your budget is too limited for real hardware
RAID, it's usually best to stick to RAID 1 or RAID 10 with a small
number of spindles. For example, with a relatively cheap, good-quality
1000-watt power supply (e.g. Agiler) you can easily fit 5 SATA drives:
1 for boot and 4 for a RAID 10 array. Some boards still have an IDE
port, which is ideal for the boot disk.
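As a rough sketch, a 4-drive software RAID 10 like the one above can
be created with mdadm along these lines (the device names /dev/sdb
through /dev/sde and the config file path are assumptions -- adjust
them to your hardware and distro):

```shell
#!/bin/sh
# Create a 4-drive RAID 10 array. Device names are assumptions;
# check yours before running, this destroys data on those disks.
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Persist the array definition so it assembles at boot
# (on Debian-based systems the file is /etc/mdadm/mdadm.conf).
mdadm --detail --scan >> /etc/mdadm.conf

# Make a filesystem and watch the initial sync progress.
mkfs.ext4 /dev/md0
cat /proc/mdstat
```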
As to having a separate boot disk versus booting directly off the
array, I would almost always use a separate boot disk, but it's a
question of choice. A single point of failure for the OS is OK IMHO,
but it's never OK for your data!
Hence the 3-layer separation is usually good practice: (a) base
system, (b) application software, (c) data.
The boot drive will only be used for that: booting. You can even
enable and set up spin-down on that particular drive.
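For instance, something like this would spin the boot drive down after
an hour of idle (the device name is an assumption; check yours with a
tool like lsblk first):

```shell
#!/bin/sh
# -S sets the standby (spin-down) timeout: values 241-251 mean
# (value - 240) * 30 minutes, so 242 = 1 hour of idle time.
hdparm -S 242 /dev/sda
```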
Cheers,
--
Alejandro Imass
--
The Toronto Linux Users Group. Meetings: http://gtalug.org/
TLUG requests: Linux topics, No HTML, wrap text below 80 columns
How to UNSUBSCRIBE: http://gtalug.org/wiki/Mailing_lists