Charting server load

Lennart Sorensen lsorense-1wCw9BSqJbv44Nm34jS7GywD8/FfD2ys at public.gmane.org
Thu Feb 1 22:45:08 UTC 2007


On Thu, Feb 01, 2007 at 05:09:32PM -0500, Ian Petersen wrote:
> OK, that makes a certain amount of sense, but I still feel unconvinced.
> 
> The load average is exactly that: an average.  A load of 50 says that,
> over the last minute, every time the run queue was sampled (how often
> does that happen anyway?) there were 50 processes "ready to go".
> 
> Now, if all of our 50 hypothetical processes need 2.01% of the CPU to
> get all their work done, shouldn't each one show up in roughly 2.01%
> of the samples?  If that's true, then each sample would see, on
> average, one process in the ready state and the load would be one
> (well, 1.005 since we're talking 2.01% not 2%).

Depends on when it is sampled, and how the load is distributed among the
processes.
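As to how often the sampling happens: to the best of my knowledge the Linux
kernel recomputes the load averages roughly every 5 seconds (LOAD_FREQ), and
the numbers are exponentially damped moving averages of the run queue length,
not simple per-minute averages.  A rough sketch of the 1-minute average (my
own approximation of the kernel's fixed-point arithmetic, using a hypothetical
update_loadavg function):

```python
import math

# Decay factor for the 1-minute average with a 5-second tick: e^(-5/60).
# (The kernel uses a fixed-point constant approximating this value.)
EXP_1 = math.exp(-5.0 / 60.0)

def update_loadavg(load, nrun):
    # load(t) = load(t-1) * e^(-5/60) + nrun * (1 - e^(-5/60))
    # where nrun is the current number of runnable (and running) tasks.
    return load * EXP_1 + nrun * (1.0 - EXP_1)

# With 50 tasks runnable at every tick, the 1-minute load
# climbs toward 50 over a few minutes:
load = 0.0
for _ in range(12 * 5):        # five minutes of 5-second ticks
    load = update_loadavg(load, 50)
print(round(load, 2))          # close to, but still below, 50
```

The damping is why a sudden burst of runnable processes shows up gradually in
the 1-minute figure rather than as an instant jump.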

> On the other hand, if the sampling mechanism samples N times for a
> minute and the average sample sees 50 processes in the ready state,
> then the load is 50 because the kernel "always" had 50 choices of
> things to do during the last minute.  I don't see how you can arrive
> at that state with 50 processes that need ~2% of the CPU each.  Such a
> state sounds to me like 50 CPU-bound processes.

They may be processing data coming in on a serial or network port.  Not
everything is CPU-bound.  Or every xxx ms they wake up, do a certain
amount of work, and go back to sleep.  Certainly most of the time a
higher load means a busier system, but it does depend on the type of
processes the system is running.
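The timing point can be made concrete with a toy single-CPU simulation (a
hypothetical avg_runqueue helper of my own, using 1 ms time slots): 50
processes, each needing 2 ms of CPU out of every 100 ms, produce very
different average run-queue lengths depending on whether their wakeups are
staggered or synchronized, even though total CPU use is identical.

```python
def avg_runqueue(phases, period=100, work=2, total=10000):
    """Average run-queue length over `total` 1 ms slots.

    Each process i wakes at slot t when t % period == phases[i],
    and then needs `work` ms of CPU before sleeping again.
    """
    remaining = [0] * len(phases)   # ms of CPU work still owed to each process
    queue_sum = 0
    for t in range(total):
        for i, p in enumerate(phases):
            if t % period == p:
                remaining[i] += work        # process wakes with fresh work
        runnable = [i for i in range(len(phases)) if remaining[i] > 0]
        queue_sum += len(runnable)          # running + ready, like the load count
        if runnable:
            remaining[runnable[0]] -= 1     # CPU serves one process for 1 ms
    return queue_sum / total

# Staggered wakeups: there is always exactly one runnable process.
print(avg_runqueue([2 * i for i in range(50)]))   # 1.0

# Synchronized wakeups: all 50 pile onto the run queue at once.
print(avg_runqueue([0] * 50))                     # 25.5
```

Same 100% CPU utilization in both cases, but the synchronized schedule keeps
dozens of processes waiting in the queue at each sample, which is exactly why
50 light processes can still show a high load.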

--
Len Sorensen
--
The Toronto Linux Users Group.      Meetings: http://gtalug.org/
TLUG requests: Linux topics, No HTML, wrap text below 80 columns
How to UNSUBSCRIBE: http://gtalug.org/wiki/Mailing_lists