Nice looking 'disk array'

Byron Sonne blsonne-bJEeYj9oJeDQT0dZR+AlfA at public.gmane.org
Mon Jan 29 22:36:02 UTC 2007


> So what happens in software raid when someone pulls the plug on the
> machine ???
> and if you want fast cheap hardware RAID controllers check out Areca.
> 1GB optional battery backed up write cache means the answer to the above
> question is nothing bad.

That's one of the things I loved about the cpqarray (or wtf they were
called) type cards. Battery-backed cache like you mentioned saved our
arses a couple of times.
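
To make the idea concrete, here's a toy sketch (my own invention,
nothing like real controller firmware) of why a battery-backed write
cache means "nothing bad" on a power cut: writes are acked once they
hit storage that survives a crash, and get replayed to the disks on
the way back up. The journal file stands in for the battery-backed
DRAM:

    import json
    import os

    class BatteryBackedCache:
        """Toy write-back cache; the journal survives a crash, so
        acked writes are never lost."""

        def __init__(self, journal_path):
            self.journal_path = journal_path
            self.pending = {}
            self._replay()  # on power-up, recover writes in flight

        def _replay(self):
            if os.path.exists(self.journal_path):
                with open(self.journal_path) as f:
                    for line in f:
                        block, data = json.loads(line)
                        self.pending[block] = data

        def write(self, block, data):
            # Ack as soon as the write is journalled, long before
            # it reaches the (slow) disks.
            with open(self.journal_path, "a") as f:
                f.write(json.dumps([block, data]) + "\n")
            self.pending[block] = data

        def destage(self, disk):
            # Push cached writes out to the array, then empty the
            # journal.
            for block, data in sorted(self.pending.items()):
                disk[block] = data
            self.pending.clear()
            open(self.journal_path, "w").close()

    disk = {}
    cache = BatteryBackedCache("/tmp/wb.journal")
    cache.write(0, "hello")
    cache.destage(disk)  # later, at the controller's leisure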

I'm also fundamentally opposed to any device that cheaps out by using
the host CPU. To me the idea reeks just like a winmodem. I don't care if
there are cycles to spare; the idea just flat-out bothers me. It's wrong
like Windows is wrong. IMO it kinda goes against part of the Unix
philosophy too: do one thing and do it well.
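
To be concrete about what those spare cycles get spent on: RAID-5
parity is a byte-wise XOR across the data chunks of a stripe. A
hardware card does this on its own XOR engine; software RAID does
something like the following sketch on the host, for every single
write:

    # Byte-wise XOR parity across the data chunks of one stripe.
    def xor_parity(chunks):
        parity = bytearray(len(chunks[0]))
        for chunk in chunks:
            for i, byte in enumerate(chunk):
                parity[i] ^= byte
        return bytes(parity)

    # Three 64 KiB data chunks -> one parity chunk, all host CPU.
    stripe = [bytes([n]) * (64 * 1024) for n in (1, 2, 3)]
    parity = xor_parity(stripe)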

And when your box is tapped out serving data to tons of people, and
you've got a volume rebuild on your hands after replacing 2 failed
drives out of 7... it's nice that the RAID card has a CPU to do its own
work.
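
A rebuild is the same arithmetic in bulk: with single parity, every
chunk of the dead drive is the XOR of the parity and the surviving
chunks. A minimal sketch; a real rebuild grinds through this for every
stripe on the disk, which is exactly the work you'd rather offload to
the card:

    # A lost chunk is the XOR of the parity chunk and every
    # surviving data chunk in the stripe.
    def rebuild_chunk(parity, survivors):
        lost = bytearray(parity)
        for chunk in survivors:
            for i, byte in enumerate(chunk):
                lost[i] ^= byte
        return bytes(lost)

    # Tiny two-disk-plus-parity stripe; pretend the drive with b died.
    a, b = b"\x01" * 8, b"\x02" * 8
    parity = bytes(x ^ y for x, y in zip(a, b))
    assert rebuild_chunk(parity, [a]) == b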

The comparison I think we need between the two is this: serve out data
to something like 8000 users while being backed up to tape
simultaneously, as well as doing a volume rebuild. Sadly, this is a more
realistic scenario than I wish it were ;) I picked 8000 because I'm
familiar with the performance issues and the gear for handling that kind
of load. I will wager that software RAID falls over way before hardware
RAID in an enterprise environment.
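
If anyone wants to try the software side of that test, Linux md
reports rebuild progress in /proc/mdstat, and the resync rate can be
floored/capped via /proc/sys/dev/raid/speed_limit_min and
speed_limit_max so the rebuild and the users fight it out on your
terms. A rough monitoring sketch (the mdstat parsing is a guess at
the usual format, adjust to taste):

    import re
    import time

    def rebuild_progress():
        # md prints e.g. "recovery =  8.5% (...) speed=9000K/sec"
        with open("/proc/mdstat") as f:
            m = re.search(
                r"(recovery|resync)\s*=\s*([\d.]+)%.*?speed=(\S+)",
                f.read())
        return (m.group(2), m.group(3)) if m else None

    while True:
        progress = rebuild_progress()
        if progress is None:
            break  # no rebuild running
        pct, speed = progress
        print("rebuild at %s%% (%s)" % (pct, speed))
        time.sleep(60)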

You also have the added advantage that a lot (most?) of the enterprise
RAID gear uses SCSI for the disks. SCSI drives tend to be better built
and hold up better than ATA (IDE) under the 8000-user scenario
envisioned above. There's a reason SCSI drives cost more ;)
--
The Toronto Linux Users Group.      Meetings: http://gtalug.org/
TLUG requests: Linux topics, No HTML, wrap text below 80 columns
How to UNSUBSCRIBE: http://gtalug.org/wiki/Mailing_lists




