Linux Benchmarking

Christopher Browne cbbrowne-Re5JQEeQqe8AvxtiuMwx3w at public.gmane.org
Fri Aug 19 12:19:28 UTC 2005


On 8/19/05, Walter Dnes <waltdnes-SLHPyeZ9y/tg9hUCZPvPmw at public.gmane.org> wrote:
> 
> On Thu, Aug 18, 2005 at 12:29:01AM -0400, Jason Carson wrote
> > Greetings,
> >
> > I am thinking about doing a Linux comparison by benchmarking various
> > distros then posting the results on my website.
> >
> > Does anyone have any recommendations as to what software I can use to do
> > the benchmarking. I found this page with a bunch of tests
> > (http://lbs.sourceforge.net/)
> 
> I'm old enough to remember the NEC V20 chip getting 20% better results
> on Norton SI (System Info) than the stock Intel 8088. Real-world tests
> showed *AT MOST* 5-to-8 percent improvement. Eventually, NEC got out of
> the 8088-clone-chip business. And of course, RISC-chip manufacturers
> just *LOVE* comparing their chips doing no-op loops versus X86 CPUs
> doing no-op loops. That's because no-op is almost the only instruction
> where it doesn't take umpteen RISC instructions to emulate one CISC
> instruction.


A couple of weeks ago, I was visiting OSDL and had the amusement of reading one 
of the spec sheets for a TPC-D benchmark.

OSDL has been running some database performance benchmarks of late; the 
point being to try to find any bottlenecks in the Linux kernel that might 
prevent it from performing as well as it ought to on such workloads.

I was there with a group of PostgreSQL folk; the discussions validated that 
the core PostgreSQL guys were using all the APIs sanely, and that there isn't 
some "magic bullet" out there, some extra system call that might speed 
things up radically.

Anyways, back to that spec sheet. It was documenting a system used for some 
Itanium/Linux-based database benchmark that got some pretty good numbers. 
The machine was pretty expensive, and the specs seemed sensible, until you 
got to the fine print at the end where they mentioned the number of 15K RPM 
SCSI disks required to accomplish the benchmark. If memory serves, it was 
112 disks.

It was not a number of disks that I would normally expect to be able to hook 
up to a database server. It was an outrageous number that would require more 
SCSI controllers than you have PCI slots on anything short of a seriously 
mutant server.

In effect, the TPC benchmarks have led vendors to construct the computing 
equivalents of Formula 1 race cars: things that, on a suitably restricted 
track, and when driven by suitably trained operators, provide speed you 
can't really comprehend, but that are not of much use in evaluating the 
merits of anything that travels on "more pedestrian roads."

> A meaningful test would be the same application doing the same
> real-world tasks on the same machine running the same WM. You'd
> probably need some serious scripting to ensure repeatability. Also,
> remember to create separate tests for first access after bootup and
> repeated accesses thereafter.
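Walter's suggestion of scripting for repeatability, with separate cold and
warm runs, could be sketched roughly like this. It is a hypothetical harness,
not from the original post: CMD and RUNS are placeholder values, GNU date
(with the %N nanosecond specifier) is assumed for timing, and a genuinely
cold page cache needs root plus a kernel exposing /proc/sys/vm/drop_caches.

```shell
# Hypothetical repeatability harness; CMD and RUNS are placeholders.
CMD="cat /etc/hostname"
RUNS=3

# Time one invocation of $CMD, in milliseconds (assumes GNU date %N).
time_once() {
    start=$(date +%s%N)
    sh -c "$CMD" > /dev/null 2>&1
    end=$(date +%s%N)
    echo $(( (end - start) / 1000000 ))
}

# Cold run: flush dirty pages; truly dropping the page cache needs root
# and a kernel that provides /proc/sys/vm/drop_caches.
sync
if [ "$(id -u)" = "0" ] && [ -w /proc/sys/vm/drop_caches ]; then
    echo 3 > /proc/sys/vm/drop_caches
fi
echo "cold: $(time_once) ms"

# Warm runs: repeated accesses after the first, per Walter's advice.
i=1
while [ "$i" -le "$RUNS" ]; do
    echo "warm run $i: $(time_once) ms"
    i=$((i + 1))
done
```

Run enough warm iterations that you can see the variance, not just a single
number; a result you can't reproduce within a few percent isn't a result.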


And it's common that the bottlenecks fall into particular places:
a) Disk I/O, in which case the distribution can be quite irrelevant;
b) Human I/O, in which case the distribution is sure to be irrelevant;
c) Memory usage, in which case, having a bit more or less RAM will mess with 
things a *LOT*, and where having extra daemons running or having more 
languages configured in GLIBC can hurt...
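As a rough illustration (my sketch, not from the original post), a few reads
from /proc can hint at which of these categories a Linux box is hitting; the
files are Linux-specific and the interpretation is only a heuristic.

```shell
# Crude, illustrative check of where a bottleneck may lie (Linux /proc).

# Category (c), memory: compare free + cached against total RAM.
mem_total=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
mem_free=$(awk '/^MemFree:/ {print $2}' /proc/meminfo)
cached=$(awk '/^Cached:/ {print $2}' /proc/meminfo)
avail=$((mem_free + cached))
echo "RAM: ${avail} kB of ${mem_total} kB effectively available"

# Swap traffic is a strong hint of memory pressure.
swap_total=$(awk '/^SwapTotal:/ {print $2}' /proc/meminfo)
swap_free=$(awk '/^SwapFree:/ {print $2}' /proc/meminfo)
echo "swap in use: $((swap_total - swap_free)) kB"

# Category (a), disk: a high 'wa' (I/O wait) column in vmstat output
# suggests the disks, not the distribution, are the limit.
if command -v vmstat >/dev/null 2>&1; then
    vmstat 1 2 | tail -1
fi
```

Category (b), human I/O, needs no tooling at all: if the user is the slow
part, no distribution choice will change the outcome.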
-- 
http://www3.sympatico.ca/cbbrowne/linux.html
"The true measure of a man is how he treats someone who can do him
absolutely no good." -- Samuel Johnson, lexicographer (1709-1784)