enabling DMA on hard drives

Anton Markov anton-F0u+EriZ6ihBDgjK7y7TUQ at public.gmane.org
Wed Jan 19 02:33:35 UTC 2005


Lennart Sorensen wrote:
> On Tue, Jan 18, 2005 at 10:41:27PM +0200, Peter L. Peres wrote:
> 
>>Unfortunately unless one is using high end SCSI drives the throughput 
>>will remain very low, limited by the c**p controller and r/w head 
>>throughput (normal ide/eide/ata/sata will not go beyond 10-12MBps and 
>>that is very fast by the average standards of off the shelf hdds in my 
>>experience). hdparm will tell you if you can gain from DMA. hdparm -Tt 
>>will determine the cache and the real disk r/w speeds. The latter is 
>>usually 10 times less than the cache r/w speed and is the real 
>>bottleneck. Enabling DMA should not (and does not as far as I could 
>>test) improve this. The initialisation code in the kernels puts the hdds 
>>into a 'best match' mode with the available capabilities. And if one 
>>does use SCSI then it's the SCSI controller's DMA that must be turned 
>>on.
> 
> 
> The kernel MIGHT enable DMA by default, but then again it might be
> configured not to.  Without DMA most systems I have seen give about 2 or
> 3MB/s transfer rate, while with DMA they give anywhere from 10 to 60MB/s
> (as per hdparm -t measurements).  The hdparm -T measurement seems to be
> a linux memory cache benchmark, which depends entirely on the cpu and
> memory subsystem of the machine and has nothing to do with the actual
> disk or controller.  DMA of course also significantly reduces the cpu
> load required to operate the disk.

Actually the 'hdparm -T' command tests the speed at which data can be 
read from the disk controller's / hard drive's cache. It represents the 
maximum rate at which data can be transferred from the controller or 
the hard drive itself to the CPU (I don't remember which one). Actual 
disk reads are usually slower by at least an order of magnitude (a 
factor of 10), due to the limitations of the hardware. I would imagine 
the '-T' test is still useful, because certain (database) servers may 
benefit from reading data directly from the controller's cache.
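
For what it's worth, the two numbers are easy to compare side by side; 
a minimal sketch, assuming the drive is /dev/hda (substitute your own 
device, and run as root on an otherwise idle system):

```shell
# -T reads repeatedly from the same region, so the data is served from
#    cache; -t reads sequentially through the device, defeating the cache.
hdparm -T /dev/hda   # cached read speed (controller/drive cache path)
hdparm -t /dev/hda   # buffered (physical) disk read speed
```

The gap between the two figures is the "order of magnitude" mentioned 
above.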

However, you are right that memory management and the effectiveness of 
the Linux I/O subsystem probably have an impact on these test results, 
so the test could serve as a good benchmark for those subsystems. And 
yes, DMA significantly improves both the physical and cache read 
results.
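
In case it saves anyone a man-page lookup, checking and toggling DMA is 
a one-liner with hdparm. A sketch, again assuming an IDE drive at 
/dev/hda:

```shell
hdparm -d /dev/hda    # query the current DMA setting (0 = off, 1 = on)
hdparm -d1 /dev/hda   # enable DMA (use -d0 to disable it again)
hdparm -Tt /dev/hda   # re-run the benchmarks to see the difference
```

To make the setting persist across reboots, Debian-family systems let 
you put options in /etc/hdparm.conf (or an init script on other 
distributions); check your distribution's documentation for the exact 
mechanism.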

-- 
Anton Markov <("anton" + "@" + "truxtar" + "." + "com")>

GnuPG Key fingerprint =
5546 A6E2 1FFB 9BB8 15C3  CE34 46B7 8D93 3AD1 44B4

*** LINUX - MAY THE SOURCE BE WITH YOU! ***

