Nice looking 'disk array'

Lennart Sorensen lsorense-1wCw9BSqJbv44Nm34jS7GywD8/FfD2ys at public.gmane.org
Tue Jan 30 15:05:31 UTC 2007


On Mon, Jan 29, 2007 at 06:30:49PM -0500, ted leslie wrote:
> Good timing for me for this thread,
> 
> I have just finished setting up 4 or so servers with 3ware PCI-X 8- and
> 12-channel sata raid controllers.
> 
> I was thinking of setting up another involving about 16 drives,
> and perhaps a read throughput (not buffered) of about 1GB/second;
> the most I have got with an 8-channel (8 drives) is about 500-600 MB/sec,
> so I am hoping a 16-channel (16 drives) could push me over 1GB/sec
> transfer.

Hmm, the theoretical bandwidth of PCI-X (the 133MHz 64-bit kind) is only
a little over 1GB/s.  Of course, with overhead there is no way you could
get transfers that high.  Maybe 800MB/s, but even that could be pushing
it.  500-600MB/s is rather amazing already.  If you want more you need
PCI Express.
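
Just to put rough numbers on that, here is a quick sketch in Python;
the ~70MB/s per-drive streaming rate and the 25% bus overhead are
assumptions for illustration, not measurements:

    # Back-of-the-envelope only; per-drive rate and overhead assumed.
    bus_mhz   = 133                      # PCI-X clock
    bus_bytes = 8                        # 64-bit wide bus
    peak      = bus_mhz * bus_bytes      # ~1064 MB/s theoretical
    usable    = peak * 0.75              # guess: roughly 800MB/s

    per_drive = 70                       # MB/s sustained, assumed
    drives    = 16
    disk_rate = per_drive * drives       # ~1120 MB/s off the platters

    print(peak, usable, disk_rate)
    # The drives can feed more than the slot can carry, so the bus,
    # not the disks, is the likely bottleneck at 16 drives.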

> Now if I take the Linux software approach, I'd have to get
> about 12 sata channels as a PCI-X expansion card (+4 on board).
> I can get the 3ware 16-channel for about $1000.
> 
> I wonder how Linux software raid with straight PCI-X sata expansion
> cards is going to compare in cost and performance to a
> 3ware 16-channel in a single PCI-X slot?

I doubt there are any PCI-X sata cards that aren't raid cards.  Most
simple sata cards are just 2 or 4 ports on plain PCI.

> Also, having a UPS on a system doing software raid isn't the equivalent
> of a battery backup on a 3ware card; you could have a failure between
> the UPS and the CPU, or a kernel panic (granted, a KP is pretty rare
> these days).
> 
> If I can get 1GB/s transfer from a Linux software raid solution,
> and it's faster and cheaper than the 3ware .... man, I am going to
> consider it!!
> Anyone tried?

Well, getting controllers with that many channels pretty much means 3ware
anyhow. :)  I also don't know what the CPU load is for running 16 drives
in raid5 or raid6, even with SSE-based raid code.

Of course, if you have independent busses for the controllers, software
raid would be able to span multiple controllers and potentially gain
speed that way.  Software raid does, however, have the overhead of
transferring all the data including parity, and of reading parity for
doing updates, whereas the hardware raid only has to transfer the data,
with the parity handled on the card.  So if you are bus-bandwidth
starved, the hardware raid does have an advantage, except that it can't
span controllers (well, maybe some controllers can, though I don't think
I have seen any that could).
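
The parity point can be sketched with some made-up numbers (Python;
the chunk size and drive count are assumptions, and this ignores any
caching on either side):

    # Illustrative bus-traffic comparison, not a benchmark.
    chunk  = 64 * 1024        # bytes per chunk, assumed
    n_data = 15               # data disks in a 16-drive raid5

    # Full-stripe write:
    hw_full = n_data * chunk            # card computes parity itself
    sw_full = (n_data + 1) * chunk      # host sends the parity too

    # Small read-modify-write of a single chunk:
    hw_rmw = 1 * chunk                  # only new data crosses the bus
    sw_rmw = 4 * chunk                  # read old data + old parity,
                                        # write new data + new parity

    print(sw_full / hw_full)            # about 1.07x for full stripes
    print(sw_rmw / hw_rmw)              # 4x for small updates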

> I am going to hazard a guess that for an array rebuild, the HW raid has
> got to be preferable?  But hopefully one isn't rebuilding too often.

The hardware raid is probably a bit easier to rebuild (usually you just
stick in the drive, whereas with software raid you have to tell it to add
the new drive before it will rebuild).
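
For reference, the software raid side of that rebuild looks roughly
like the following sketch (Python; /dev/md0 and /dev/sdq are made-up
names, and you would normally just run the mdadm command by hand):

    # Sketch: add a replacement disk to a Linux md array and watch
    # the rebuild.  Array and device names are assumptions.
    import subprocess, time

    # The "tell it to add the new drive" step:
    subprocess.run(["mdadm", "--manage", "/dev/md0",
                    "--add", "/dev/sdq"], check=True)

    # Poll the kernel's status until the recovery/resync finishes.
    while True:
        with open("/proc/mdstat") as f:
            status = f.read()
        if "recovery" not in status and "resync" not in status:
            break
        time.sleep(60)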

--
Len Sorensen
--
The Toronto Linux Users Group.      Meetings: http://gtalug.org/
TLUG requests: Linux topics, No HTML, wrap text below 80 columns
How to UNSUBSCRIBE: http://gtalug.org/wiki/Mailing_lists




