Charting server load
Ian Petersen
ispeters-Re5JQEeQqe8AvxtiuMwx3w at public.gmane.org
Thu Feb 1 22:09:32 UTC 2007
> If the 50 processes each needed 2.01% of the cpu to get all their work
> done all the time, then you are only overloaded by 0.5%, so really you
> are keeping up with the load. If even one of the processes was to be
> stopped, your load should actually drop quickly to less than 1 since it
> would suddenly be able to keep up with all the work. Load averages are
> rather weird that way.
[snip]
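(Spelling out the arithmetic in that paragraph, as I read it: 50
processes x 2.01% = 100.5% of one CPU, so there's 0.5% more work than
the CPU can get through in any given interval.)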
OK, that makes a certain amount of sense, but I still feel unconvinced.
The load average is exactly that: an average. A load of 50 says that,
over the last minute, every time the run queue was sampled (how often
does that happen, anyway?) there were, on average, 50 processes "ready
to go".
Now, if all of our 50 hypothetical processes need 2.01% of the CPU to
get all their work done, shouldn't each one show up in roughly 2.01%
of the samples? If that's true, then each sample would see, on
average, about one process in the ready state and the load would be
about one (1.005, to be exact, since 50 x 2.01% = 100.5%).
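Here's a quick, back-of-the-envelope simulation of the model I have in
mind. It assumes (and these are my assumptions, not anything the
kernel promises) that run-queue samples are independent and that a
process needing 2.01% of the CPU is runnable in 2.01% of samples:

  #!/usr/bin/env python
  # Toy model: sample a run queue of 50 mostly-idle processes many
  # times and see how many are runnable per sample, on average.
  import random

  PROCESSES = 50        # hypothetical processes
  P_RUNNABLE = 0.0201   # chance a process is runnable at any sample
  SAMPLES = 100000      # number of run-queue samples to simulate

  total_runnable = 0
  for _ in range(SAMPLES):
      # count how many processes are "ready to go" at this sample
      ready = sum(1 for _ in range(PROCESSES)
                  if random.random() < P_RUNNABLE)
      total_runnable += ready

  print("average runnable per sample: %.3f"
        % (total_runnable / float(SAMPLES)))

That prints something very close to 50 x 0.0201 = 1.005, nowhere near
50.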
On the other hand, if the sampling mechanism samples N times over a
minute and the average sample sees 50 processes in the ready state,
then the load is 50 because the kernel "always" had 50 choices of
things to do during the last minute. I don't see how you can arrive
at that state with 50 processes that need ~2% of the CPU each. Such a
state sounds to me like 50 CPU-bound processes.
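In other words, under my (possibly naive) sampling model, the only way
the average sample can see 50 runnable processes is if each process is
runnable in essentially every sample: 50 x 100% = 50, versus
50 x 2.01% = 1.005 for the mostly-idle case. That's what I mean by
CPU-bound.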
Ian
--
Tired of pop-ups, security holes, and spyware?
Try Firefox: http://www.getfirefox.com