Nice looking 'disk array'

Lennart Sorensen lsorense-1wCw9BSqJbv44Nm34jS7GywD8/FfD2ys at public.gmane.org
Mon Jan 29 23:15:34 UTC 2007


On Mon, Jan 29, 2007 at 05:36:02PM -0500, Byron Sonne wrote:
> > So what happens in software raid when someone pulls the plug on the
> > machine ???
> > and if you want fast cheap hardware RAID controllers check out Areca.
> > 1GB optional battery backed up write cache means the answer to the above
> > question is nothing bad.
> 
> That's one of the things I loved about the cpqarray (or wtf they were
> called) type cards. Battery-backed cache like you mentioned saved our
> arses a couple of times.
> 
> I'm also fundamentally opposed to any device that cheaps out by using
> the host CPU. To me the idea reeks just like a winmodem. I don't care if
> there's cycles to spare; the idea just flat out bothers me. It's wrong
> like Windows is wrong. IMO it kinda goes against part of the Unix
> philosophy too: do one thing and do it well.
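
On the pulled-plug question quoted above: with software raid and no
battery backed write cache, data is only safe once it has actually been
flushed to the disks, and an application asks for that with fsync().  A
minimal sketch in C; the path /data/journal is made up for illustration:

/* Without a battery backed write cache, data is only known to have
 * survived a power cut after fsync() completes.  The path below is
 * hypothetical. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    const char buf[] = "committed record\n";
    int fd = open("/data/journal", O_WRONLY | O_CREAT | O_APPEND, 0644);

    if (fd < 0) {
        perror("open");
        return 1;
    }
    if (write(fd, buf, sizeof(buf) - 1) < 0) {
        perror("write");
        close(fd);
        return 1;
    }
    if (fsync(fd) < 0) {   /* block until the kernel has pushed it out */
        perror("fsync");
        close(fd);
        return 1;
    }
    close(fd);
    return 0;
}

With a battery backed controller cache the same fsync() can return as
soon as the data is sitting in the protected cache, which is why the
answer to the question above is "nothing bad".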

But what if your 3GHz CPU does XOR better than the dedicated chip on the
card?  If you are running a database server that has plenty of other work
for the CPU, then sure, offload the XOR to a dedicated chip; but if the
box is only a fileserver, you may well get better performance doing the
XOR on the host.
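
For what it is worth, the Linux md code even benchmarks its xor routines
when the raid module loads and picks the fastest one.  The parity math
itself is nothing more than xor across the stripe; a rough sketch in C,
with the block size and disk count made up for illustration:

/* RAID-5 parity is just the XOR of the data blocks in a stripe.
 * BLOCK_SIZE and DATA_DISKS are arbitrary values for illustration. */
#include <stdint.h>
#include <string.h>

#define BLOCK_SIZE 4096
#define DATA_DISKS 6                  /* e.g. a 7-disk RAID-5 */

static void compute_parity(const uint8_t data[DATA_DISKS][BLOCK_SIZE],
                           uint8_t parity[BLOCK_SIZE])
{
    memset(parity, 0, BLOCK_SIZE);
    for (int d = 0; d < DATA_DISKS; d++)
        for (int i = 0; i < BLOCK_SIZE; i++)
            parity[i] ^= data[d][i];
}

int main(void)
{
    static uint8_t data[DATA_DISKS][BLOCK_SIZE];  /* zero-initialized */
    static uint8_t parity[BLOCK_SIZE];

    data[0][0] = 0xAB;                 /* toy contents */
    compute_parity(data, parity);
    return parity[0] == 0xAB ? 0 : 1;  /* parity of one block is itself */
}

The same property is what lets a rebuild recreate a missing block: xor
the parity with the surviving data blocks.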

> And when your box is tapped serving out data to tons of people, and
> you've got a volume rebuild on your hands after replacing 2 failed
> drives out of 7... it's nice that the raid card has a CPU to do its own
> work.

You let two disks fail at once?

> The comparison I think we need between the two is this : serve out data
> to something like 8000 users while being backed up to tape
> simultaneously, as well as doing a volume rebuild. Sadly, this is a more
> realistic scenario than I wish it was ;) I picked 8000 because I'm
> familiar with the performance issues and gear for handling those kinds
> of resources. I will wager that the software raid falls over way before
> hardware raid in an enterprise environment.
> 
> You also have the added advantage that a lot (most?) of the enterprise
> raid uses scsi for the disks. SCSI drives tend to be better built and hold
> up better than ATA (IDE) under the 8000 user type scenario envisioned
> above. There's a reason scsi drives cost more ;)

SCSI tends to have faster seek times, higher rotation speeds, and command
queueing, along with lower capacity per disk and higher cost.

SATA generation 2 has command queueing too, which helps a lot under
multi-user load.  3ware controllers also emulate command queueing on any
disk, with the full benefit of that; I suspect some other cards do too.
Parallel ATA is simply of no interest anymore, and I can't imagine anyone
willingly buying those drives these days.
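
If you want to see whether queueing is actually in effect on a Linux box,
at least on recent kernels the scsi layer exposes the depth in sysfs; a
depth of 1 means no queueing.  A small sketch in C (the device name sda
is just an example, adjust it for your system):

/* Read the command queue depth the kernel reports for a disk.  A value
 * of 1 means no command queueing.  "sda" is just an example name. */
#include <stdio.h>

int main(void)
{
    const char *path = "/sys/block/sda/device/queue_depth";
    FILE *f = fopen(path, "r");
    int depth;

    if (f == NULL) {
        perror(path);
        return 1;
    }
    if (fscanf(f, "%d", &depth) != 1) {
        fprintf(stderr, "could not parse %s\n", path);
        fclose(f);
        return 1;
    }
    fclose(f);
    printf("queue depth: %d\n", depth);
    return 0;
}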

As for being better built, well, I am no longer convinced of that.  I have
seen too many failures in IBM hot-swap SCSI drives to believe it anymore.

If you really care about the enterprise though, you want SAS.  Dual-ported
drives make a lot of sense for reliability, and so does not having a shared
SCSI bus as a single point of failure for the whole raid.  The SCSI cable
is often far more likely to fail than the controller, so one link per drive
just makes sense, and two links per drive makes even more sense.

--
Len Sorensen