IBM ServeRAID and Linux

Lennart Sorensen lsorense-1wCw9BSqJbv44Nm34jS7GywD8/FfD2ys at public.gmane.org
Wed Jan 5 19:02:37 UTC 2005


On Wed, Jan 05, 2005 at 01:53:01PM -0500, Dave Stubbs wrote:
> Uh - that might be oversimplifying it a bit.  With the IBM ServeRAID 
> cards, you can add drives and extend your arrays, and convert from RAID 
> 1 to RAID 5 and back again, seamlessly, while the system is running.  
> You can't do this with Software RAID.  Maybe you could kludge it with 
> LVM, but that's quite a risky approach, if you read the details.  And 
> not all SATA setups support hot-swap properly either.

The on-the-fly RAID conversion is a very nice feature, which I think
3ware also supports, though I could be wrong.  It's certainly not
something every hardware RAID card can do.  And yes, not all SATA
hardware does hot-swap.  3ware explicitly does; I'm not so sure about
others.
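
For the curious, what such a RAID-1 to RAID-5 migration does logically
can be modelled in a few lines of Python.  This is a toy block-level
sketch only -- the function name and the left-symmetric-ish layout are
illustrative, not how any controller actually performs it (real cards
do it in place, block by block, while still serving I/O):

```python
# Toy model: re-stripe the logical blocks of a RAID-1 mirror
# across n_disks as RAID-5 with rotating parity.

def xor_blocks(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-sized blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

def raid1_to_raid5(mirror, n_disks=3):
    """mirror: list of equal-sized data blocks (one mirror copy).
    Returns a list of n_disks lists of blocks (data + parity)."""
    data_per_stripe = n_disks - 1
    disks = [[] for _ in range(n_disks)]
    blocks = list(mirror)
    # pad with zero blocks to a whole number of stripes
    while len(blocks) % data_per_stripe:
        blocks.append(b"\x00" * len(blocks[0]))
    for s in range(0, len(blocks), data_per_stripe):
        stripe = blocks[s:s + data_per_stripe]
        parity = stripe[0]
        for blk in stripe[1:]:
            parity = xor_blocks(parity, blk)
        # rotate which disk holds parity, stripe by stripe
        pdisk = (s // data_per_stripe) % n_disks
        di = 0
        for d in range(n_disks):
            if d == pdisk:
                disks[d].append(parity)
            else:
                disks[d].append(stripe[di])
                di += 1
    return disks
```

The point of the layout: lose any one disk and its blocks are the XOR
of the surviving ones, which is exactly the invariant the card's
parity CPU maintains on every write.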

> The general rule of thumb with RAID is that hardware RAID is *always* 
> faster than software RAID.  The IBM ServeRAID has its own CPU for 
> pity's sake - just for processing RAID parity calculations and managing 
> the cache.  It would make no sense for software RAID to be able to keep 
> up with this. 

Too bad the ServeRAID cards (the 4 series, at least) are so
pathetically slow.  I saw a serious slowdown going from Linux software
RAID (md1) to a ServeRAID 4M with the same drives in the same machine
(a single P3 733).  I was very disappointed.  I sure hope it wasn't the
15k RPM IBM SCSI drives that were too slow for the card (they were 50%
faster without the ServeRAID, using the aic7xxx).
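
That sort of before/after comparison can be roughed out with a small
sequential-write timer.  A minimal sketch -- the path is a placeholder
for a file on the array under test, and for trustworthy numbers you'd
want a real tool like bonnie++ rather than this:

```python
import os
import time

def write_throughput(path, mb=64):
    """Time writing `mb` megabytes sequentially, fsync included.
    Returns approximate throughput in MB/s."""
    buf = b"\x00" * (1 << 20)  # 1 MiB of zeros
    t0 = time.time()
    with open(path, "wb") as f:
        for _ in range(mb):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())  # force it to the platters
    elapsed = time.time() - t0
    os.unlink(path)
    return mb / elapsed

# e.g. print(write_throughput("/mnt/array/testfile"))
```

Run it on a filesystem on each array with everything else quiet; the
fsync matters, since otherwise you mostly measure the page cache.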

It is, however, much simpler to manage RAID rebuilds, boot loader
setup, and the like when using hardware RAID.  Personally, though, I
wouldn't use SCSI anymore when 3ware is an option.

> Now, I must admit that in my *Linux* experience I have had the same 
> results as you - software RAID has been way faster than hardware.  But 
> that is only under Linux.  I have seen under BSD, UNIX, and Windows that 
> Hardware RAID is generally faster.  That would tend to indicate to me 
> one of two things -
> 
>    1.  The Linux Software RAID guys have come up with some super cool
>    whiz-bang way of making software RAID work really well, and no one
>    else has figured it out, even though the RAID subsystem is open
>    source (unlikely)...   or...
>    2. The Linux drivers for the IBM ServeRAID adapter are crap (much
>    more likely). 
> 
> It wasn't too long ago that the ServeRAID driver was still marked 
> "Experimental" in the kernel.  And the fact that there seems to be only 
> one driver in Linux that is supposed to support all ServeRAID adapters 
> from the ServeRAID 1 up to the 6 or whatever they're up to now, while 
> there are individual specialized drivers for each variation of each 
> model of  the ServeRAID for every other operating system supported - 
> would tend to indicate that some kind of generalization or short-cutting 
> is going on.
> 
> Bottom line:  I would not be comfortable generalizing about RAID 
> performance based on Linux experience with the IBM ServeRAID.

One would think that if the drivers were the problem, IBM would have a
serious interest in fixing them.

Lennart Sorensen
--
The Toronto Linux Users Group.      Meetings: http://tlug.ss.org
TLUG requests: Linux topics, No HTML, wrap text below 80 columns
How to UNSUBSCRIBE: http://tlug.ss.org/subscribe.shtml




