Linux Benchmarking

Francois Ouellette fouellet-cpI+UMyWUv9BDgjK7y7TUQ at public.gmane.org
Fri Aug 19 12:45:49 UTC 2005


----- Original Message -----
From: Christopher Browne
To: tlug-lxSQFCZeNF4 at public.gmane.org
Sent: Friday, August 19, 2005 8:19 AM
Subject: Re: [TLUG]: Linux Benchmarking

On 8/19/05, Walter Dnes <waltdnes-SLHPyeZ9y/tg9hUCZPvPmw at public.gmane.org> wrote:
On Thu, Aug 18, 2005 at 12:29:01AM -0400, Jason Carson wrote
> Greetings,
>
> I am thinking about doing a Linux comparison by benchmarking various
> distros then posting the results on my website.
>
> Does anyone have any recommendations as to what software I can use to do
> the benchmarking. I found this page with a bunch of tests
> (http://lbs.sourceforge.net/ )
>
>  I'm old enough to remember the NEC V20 chip getting 20% better results
>on Norton SI (System Info) than the stock Intel 8088.  Real-world tests
>showed *AT MOST* 5-to-8 percent improvement.  Eventually, NEC got out of
>the 8088-clone-chip business.  And of course, RISC-chip manufacturers
>just *LOVE* comparing their chips doing no-op loops versus X86 CPUs
>doing no-op loops.  That's because no-op is almost the only instruction
>where it doesn't take umpteen RISC instructions to emulate one CISC
>instruction.
>
>A couple weeks ago, I was visiting OSDL and had the amusement of reading
>one of the spec sheets on a TPCD benchmark.
>
>OSDL has been running some database performance benchmarks, of late; the
>point being to try to find any bottlenecks in the Linux kernel that might
>prevent it from being as fast as it ought to be at such.
><snip>
>  A meaningful test would be the same application doing the same
>real-world tasks on the same machine running the same WM.  You'd
>probably need some serious scripting to ensure repeatability.  Also,
>remember to create separate tests for first access after bootup and
>repeated accesses thereafter.
>
>And it's common that the bottlenecks fall into particular places, whether:
>a) Disk I/O, in which case the distribution can be quite irrelevant;
>b) Human I/O, in which case the distribution is sure to be irrelevant;
>c) Memory usage, in which case, having a bit more or less RAM will mess
>with things a *LOT*, and where having extra daemons running or having
>more languages configured in GLIBC can hurt...
=====================
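
Walter's point about scripting for repeatability, and about timing the
first access after bootup separately from the repeated accesses, is the
one part of this that is easy to automate. A rough Python sketch follows;
the use of /proc/sys/vm/drop_caches (which needs root), the run count and
the wall-clock metric are my own assumptions, so treat it as a starting
point rather than a finished harness.

#!/usr/bin/env python3
# Rough sketch of a "first access vs. repeated access" timing harness.
# Assumptions (mine): the command under test is given on the command
# line, the script runs as root so it can drop the page cache, and the
# wall-clock time of the whole process is a good-enough metric.

import subprocess
import sys
import time

DROP_CACHES = "/proc/sys/vm/drop_caches"

def drop_page_cache():
    # Flush dirty pages, then ask the kernel to drop clean caches (root).
    subprocess.run(["sync"], check=True)
    with open(DROP_CACHES, "w") as f:
        f.write("3\n")

def time_once(cmd):
    # Run cmd once and return elapsed wall-clock seconds.
    start = time.monotonic()
    subprocess.run(cmd, check=True)
    return time.monotonic() - start

def main():
    if len(sys.argv) < 2:
        sys.exit("usage: bench.py <command> [args ...]")
    cmd = sys.argv[1:]

    drop_page_cache()
    cold = time_once(cmd)                      # first access after "bootup"

    warm = [time_once(cmd) for _ in range(5)]  # repeated accesses

    mean = sum(warm) / len(warm)
    print("cold start : %.3f s" % cold)
    print("warm starts: %.3f s min, %.3f s mean" % (min(warm), mean))

if __name__ == "__main__":
    main()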

Having worked for well-known vendors of hardware and software during my long
career, I can guarantee you that ALL the benchmarks I have seen them put
together were made using outrageous tweaks to the hardware and/or software.
Even today Big Blue publishes "benchmarks" on its web site where there are
always two or three of its machines in the top five of, say, an Oracle on
Linux "benchmark". That only proves that one can make things run faster
given enough time and $$$ to build a fast performer, which is usually NOT
the type of configuration that a normal customer will buy.

Benchmarking what? Trying 5 different flavours of a Linux kernel on the same
hardware will only measure that specific hardware's performance running
those specific kernels and applications with a very specific set of
parameters in place, which may not be the ones that a real client will use.
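
If numbers do get published, the least one can do is record exactly which
configuration produced them, so readers know what kernel, boot parameters
and hardware the results actually apply to. A small Python sketch (the
/proc paths are standard on Linux; everything else is just illustration):

#!/usr/bin/env python3
# Sketch: snapshot the environment a benchmark ran in, so published
# numbers can be tied to a specific kernel, CPU and boot parameter set.

import json
import platform

def read(path):
    with open(path) as f:
        return f.read().strip()

def snapshot():
    return {
        "kernel": platform.release(),      # running kernel version
        "machine": platform.machine(),     # e.g. i686, x86_64
        "cmdline": read("/proc/cmdline"),  # boot parameters
        "cpuinfo": read("/proc/cpuinfo"),
        "meminfo": read("/proc/meminfo"),
    }

if __name__ == "__main__":
    print(json.dumps(snapshot(), indent=2))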

A faster car does not make a bad driver drive better!

  François Ouellette
<fouellet-cpI+UMyWUv9BDgjK7y7TUQ at public.gmane.org>

