IBM ServeRAID and Linux

Dave Stubbs dave.stubbs-H217xnMUJC0sA/PxXw9srA at public.gmane.org
Thu Jan 6 05:40:38 UTC 2005


Tim Writer wrote:

>I disagree.  Modern systems are usually I/O constrained meaning there are
>plenty of available CPU cycles to handle the simple parity calculations.  To
>put it another way, if the CPU is idle while waiting for I/O, why not use
>that time to do the parity calculations?  You give up a few (spare) CPU
>cycles in return for retaining a more complete picture of the block I/O which
>allows the kernel to make better scheduling decisions, gaining better overall
>I/O throughput.
>
>  
>
But imagine another machine with the complete picture of the block I/O,
as you say above, BUT one that can pass the parity calculation on to a
coprocessor.  Wouldn't it be faster?  Unless (I guess) modern CPUs are
just so powerful that they can run the calculation faster than the
dedicated coprocessor on the RAID card.
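
For anyone following along, the parity work being argued over really is
this simple.  Here is a rough sketch in C (purely illustrative, NOT the
kernel md driver's actual code; the stripe layout and constants are made
up) of the per-stripe calculation that either the host CPU or the card's
coprocessor ends up doing:

#include <stdio.h>
#include <string.h>

#define NDATA 3    /* data blocks per stripe (hypothetical layout) */
#define BSIZE 16   /* tiny block size, just for the demo */

/* RAID-5 parity: byte-wise XOR of all data blocks in the stripe. */
static void compute_parity(const unsigned char d[NDATA][BSIZE],
                           unsigned char parity[BSIZE])
{
    memset(parity, 0, BSIZE);
    for (int b = 0; b < NDATA; b++)
        for (int i = 0; i < BSIZE; i++)
            parity[i] ^= d[b][i];
}

int main(void)
{
    unsigned char d[NDATA][BSIZE], parity[BSIZE], rebuilt[BSIZE];

    /* Fill the stripe with recognizable bytes. */
    for (int b = 0; b < NDATA; b++)
        for (int i = 0; i < BSIZE; i++)
            d[b][i] = (unsigned char)(b * 16 + i);

    compute_parity(d, parity);

    /* Pretend block 1 is lost; rebuild it from the rest plus parity. */
    memcpy(rebuilt, parity, BSIZE);
    for (int b = 0; b < NDATA; b++)
        if (b != 1)
            for (int i = 0; i < BSIZE; i++)
                rebuilt[i] ^= d[b][i];

    printf("block 1 rebuilt correctly: %s\n",
           memcmp(rebuilt, d[1], BSIZE) == 0 ? "yes" : "no");
    return 0;
}

The only real question is whether the card's coprocessor can XOR those
buffers faster than the host CPU can, and whether shipping the data over
to the card costs more than it saves.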

>>>Now, I must admit that in my *Linux* experience I have had the same 
>>>results as you - software RAID has been way faster than hardware.  But 
>>>that is only under Linux.  I have seen under BSD, UNIX, and Windows that 
>>>Hardware RAID is generally faster.  That would tend to indicate to me 
>>>one of two things -
>>>
>>>   1.  The Linux Software RAID guys have come up with some super cool
>>>   whiz-bang way of making software RAID work really well, and no one
>>>   else has figured it out, even though the RAID subsystem is open
>>>   source (unlikely)...   or...
>>>   2. The Linux drivers for the IBM ServeRAID adapter are crap (much
>>>   more likely). 
>>>      
>>>
>
>3. The block I/O subsystems of other systems suck so bad they make ServeRAID
>look good.
>
>I'm not saying this is the case, just throwing it out as another possibility.
>Years ago, I ran some benchmarks to compare I/O performance of Linux (ext2)
>running on a 486/33 with 16MB RAM and a single (narrow) SCSI-II disk against
>Solaris 2.4 running on a 167MHz UltraSPARC with 128MB RAM and a single wide
>SCSI-II disk.  Linux blew Slowaris away!  I don't know if that Sun system
>would have benefited from hardware RAID but it sure needed something.
>
>  
>
Well, a lot of the claims of better performance with hardware RAID are
based on experience with Windows systems, and when it comes to Windows
block I/O, you are probably on to something.  I wouldn't be surprised to
see Linux come out much faster.  I have run rough benchmarks in the past
that seem to support this as well.
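
For what it's worth, the kind of crude test I have in mind is just
timing big sequential reads, something like the sketch below
(illustrative only; point it at a file or device you care about, and
use something much larger than RAM so the page cache doesn't flatter
the numbers):

#include <fcntl.h>
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

#define CHUNK (1024 * 1024)   /* read in 1 MB chunks */

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file-or-device>\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    static char buf[CHUNK];
    long long total = 0;
    ssize_t n;
    struct timeval t0, t1;

    gettimeofday(&t0, NULL);
    while ((n = read(fd, buf, CHUNK)) > 0)
        total += n;
    gettimeofday(&t1, NULL);
    close(fd);

    double secs = (t1.tv_sec - t0.tv_sec) +
                  (t1.tv_usec - t0.tv_usec) / 1e6;
    printf("%lld bytes in %.2f s = %.1f MB/s\n",
           total, secs, total / 1e6 / secs);
    return 0;
}

Run it against the same sized file on each box and you get a crude MB/s
number to compare.  Something like bonnie gives a much fuller picture,
but even this is enough for a rough ranking.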

>Yes, this is anecdotal and no, I haven't done an extensive study.  But I've
>generally had good experiences with software RAID and poor experiences with
>hardware RAID.  In my opinion, hardware RAID doesn't live up to its promises.
>
>  
>
I've been responsible for a few Dell shops.  The Dell PERC RAID adapter
is usually a variant of the same LSI Logic RAID card that IBM customizes
for some of its ServeRAID cards, and believe me, the Dells are much
worse than the IBMs.

To get a bit off topic:  Have you compared Linux software RAID 
performance between 2.4 and 2.6 kernels?  I'm finding 2.4 to still be a 
LOT faster than 2.6.  Have you seen this also?

Dave...