IBM serveraid and linux

Dave Stubbs dave.stubbs-H217xnMUJC0sA/PxXw9srA at public.gmane.org
Wed Jan 5 18:53:01 UTC 2005


> With the advent of reliable software RAID, I have never seen an
> application in which a RAID card was a good idea. You can _always_ do
> RAID 10 faster and safer in the kernel. If you really need high
> performance or high availability, then look at a SAN / disk array
> solution. Otherwise, dozens of 10kRPM SATA disks hung off an inexpensive
> 3ware or highpoint controller doing JBOD is obviously the way to go.
>
Uh - that might be oversimplifying it a bit.  With the IBM ServeRAID 
cards, you can add drives and extend your arrays, and convert from RAID 
1 to RAID 5 and back again, seamlessly, while the system is running.  
You can't do this with software RAID.  Maybe you could kludge it with 
LVM, but that's quite a risky approach if you read the details.  And 
not all SATA setups support hot-swap properly either.

The general rule of thumb with RAID is that hardware RAID is *always* 
faster than software RAID.  The IBM ServeRAID has its own CPU for 
pity's sake - just for processing RAID parity calculations and managing 
the cache.  It would make no sense for software RAID to be able to keep 
up with this. 
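The parity work that dedicated CPU offloads is, at its core, just XOR
across the blocks of a stripe.  A minimal sketch of the idea in plain
Python (illustrative only - not how md or the ServeRAID firmware
actually implements it):

```python
def raid5_parity(blocks):
    """XOR parity across the data blocks of one stripe."""
    parity = bytes(len(blocks[0]))
    for b in blocks:
        parity = bytes(x ^ y for x, y in zip(parity, b))
    return parity

def reconstruct(surviving_blocks, parity):
    """Rebuild the one lost block from parity plus the survivors."""
    missing = parity
    for b in surviving_blocks:
        missing = bytes(x ^ y for x, y in zip(missing, b))
    return missing

# One stripe across three data disks plus parity:
stripe = [b"aaaa", b"bbbb", b"cccc"]
p = raid5_parity(stripe)
# Lose the first disk; recover its block from the other two plus parity:
recovered = reconstruct(stripe[1:], p)
```

The point of contention is simply who runs this loop on every write:
the card's processor, or the host CPU.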

Now, I must admit that in my *Linux* experience I have had the same 
results as you - software RAID has been way faster than hardware.  But 
that is only under Linux.  Under BSD, other UNIXes, and Windows I have 
seen that hardware RAID is generally faster.  That would tend to indicate 
to me one of two things -

    1.  The Linux Software RAID guys have come up with some super cool
    whiz-bang way of making software RAID work really well, and no one
    else has figured it out, even though the RAID subsystem is open
    source (unlikely)...   or...
    2. The Linux drivers for the IBM ServeRAID adapter are crap (much
    more likely). 

It wasn't too long ago that the ServeRAID driver was still marked 
"Experimental" in the kernel.  And the fact that there seems to be only 
one driver in Linux that is supposed to support all ServeRAID adapters 
from the ServeRAID 1 up to the 6 or whatever they're up to now, while 
there are individual specialized drivers for each variation of each 
model of the ServeRAID on every other supported operating system - 
would tend to indicate that some kind of generalization or short-cutting 
is going on.

Bottom line:  I would not be comfortable generalizing about RAID 
performance based on Linux experience with the IBM ServeRAID.

> Before some idiot suggests using RAID3/4/5 to get more storage out of
> your disk, I should point out that they are over complicated, processor
> intensive solutions to a problem that doesn't exist anymore. You'd think
> they would just go away in this age of super cheap disk. It is pretty
> stupid to believe that it is more economic to waste hours of sysadmin
> time, degrade performance and risk data to save a couple of thousand
> bucks on disk.
>
A couple of thousand?  That's small fry - talk to people who are trying to 
save tens of thousands on disk.  And your SAN recommendation is going to 
cost much more than that also.  Not to mention that the SAN is a huge 
SPOF itself. 
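The disk savings being argued over are easy to quantify.  A quick sketch 
of usable capacity under each layout (simplified model: equal-size disks, 
no hot spares, no metadata overhead; the 12-disk/300 GB figures are 
hypothetical, just for illustration):

```python
def usable_fraction(n_disks, level):
    """Fraction of raw capacity that holds data (simplified model)."""
    if level == "raid10":
        return 0.5                      # every block is mirrored once
    if level == "raid5":
        return (n_disks - 1) / n_disks  # one disk's worth of parity
    raise ValueError(level)

raw_gb = 12 * 300  # twelve 300 GB disks
raid5_gb = raw_gb * usable_fraction(12, "raid5")    # parity costs one disk
raid10_gb = raw_gb * usable_fraction(12, "raid10")  # mirroring costs half
```

At a dozen disks the gap is already half a shelf of drives, which is why 
"just buy more disk" reads differently depending on the scale you run at.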

My $0.02

Dave
--
The Toronto Linux Users Group.      Meetings: http://tlug.ss.org
TLUG requests: Linux topics, No HTML, wrap text below 80 columns
How to UNSUBSCRIBE: http://tlug.ss.org/subscribe.shtml




