Charting server load
Alex Beamish
talexb-Re5JQEeQqe8AvxtiuMwx3w at public.gmane.org
Thu Feb 1 19:00:00 UTC 2007
On 2/1/07, Ian Petersen <ispeters-Re5JQEeQqe8AvxtiuMwx3w at public.gmane.org> wrote:
>
> > That's quite helpful, thank you. All of the machines that I run have
> > loads under 1, and so I have little experience with this type of thing.
>
> I believe, and someone can correct me if I'm wrong, that "maximum
> load" is the same as the number of CPUs in the system. In other
> words, if you have a uniprocessor, and the load is 1, then the CPU is
> always busy, but you haven't overloaded the system. In a
> dual-processor setup, you can run with a load of 2 and it's the same
> as a uniprocessor running at 1--both processors are maxed, but the
> system's not overloaded.
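That rule of thumb can be checked directly from the shell. This is a minimal sketch assuming Linux's /proc interface (/proc/loadavg and /proc/cpuinfo); it compares the one-minute load average against the number of logical processors:

```shell
#!/bin/sh
# Count logical processors (one "processor : N" stanza each in /proc/cpuinfo)
cpus=$(grep -c '^processor' /proc/cpuinfo)

# First field of /proc/loadavg is the 1-minute load average
load=$(cut -d ' ' -f 1 /proc/loadavg)

echo "CPUs: $cpus, 1-min load: $load"

# awk does the floating-point comparison the shell can't
if awk -v l="$load" -v c="$cpus" 'BEGIN { exit !(l > c) }'; then
    echo "load exceeds CPU count -- runnable processes are queueing"
else
    echo "load is within CPU capacity"
fi
```

By this measure a load of 1.0 saturates a uniprocessor but only half-loads a dual-processor box, which is exactly Ian's point.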
Ian's quite right .. depending on the version of top that's available, you
may get a line starting with 'CPU states' for each processor, or you may
just get a single line starting with 'Cpu(s):'.
A single processor machine might look like this:
 13:52:55  up 2 days,  7:28,  7 users,  load average: 0.07, 0.06, 0.09
89 processes: 88 sleeping, 1 running, 0 zombie, 0 stopped
CPU states:  2.8% user  0.4% system  0.0% nice  0.0% iowait  96.7% idle
Mem:   513772k av,  504120k used,    9652k free,       0k shrd,   82244k buff
                     389064k actv,       0k in_d,   10468k in_c
Swap: 1044184k av,   13608k used, 1030576k free                  274352k cached

  PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME CPU COMMAND
15726 alex      20   0  1144 1144   864 R     0.9  0.2   0:00   0 top
    1 root      15   0   480  452   424 S     0.0  0.0   0:04   0 init
    2 root      15   0     0    0     0 SW    0.0  0.0   0:00   0 keventd
    3 root      15   0     0    0     0 SW    0.0  0.0   0:01   0 kapmd
    4 root      34  19     0    0     0 SWN   0.0  0.0   0:00   0 ksoftirqd_CPU
    9 root      25   0     0    0     0 SW    0.0  0.0   0:00   0 bdflush
    5 root      15   0     0    0     0 SW    0.0  0.0   0:00   0 kswapd
    6 root      15   0     0    0     0 SW    0.0  0.0   0:00   0 kscand/DMA
...
and a multi-processor machine might look like this:
top - 13:53:14 up 389 days,  8:03,  1 user,  load average: 0.03, 0.01, 0.00
Tasks: 106 total,   1 running, 105 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0% us,  0.0% sy,  0.0% ni, 99.9% id,  0.0% wa,  0.0% hi,  0.0% si
Mem:   2074712k total,  1084732k used,   989980k free,   166684k buffers
Swap:  4194296k total,        0k used,  4194296k free,   359816k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
12858 openvpn   15   0  7464 4536 1144 S  2.0  0.2  11:03.81 openvpn
    1 root      16   0  1744  568  492 S  0.0  0.0   0:16.26 init
    2 root      RT   0     0    0    0 S  0.0  0.0   0:15.90 migration/0
    3 root      34  19     0    0    0 S  0.0  0.0   0:00.00 ksoftirqd/0
    4 root      RT   0     0    0    0 S  0.0  0.0   0:09.78 migration/1
    5 root      34  19     0    0    0 S  0.0  0.0   0:00.00 ksoftirqd/1
    6 root      RT   0     0    0    0 S  0.0  0.0   0:07.55 migration/2
    7 root      39  19     0    0    0 S  0.0  0.0   0:00.00 ksoftirqd/2
    8 root      RT   0     0    0    0 S  0.0  0.0   0:16.08 migration/3
    9 root      39  19     0    0    0 S  0.0  0.0   0:00.00 ksoftirqd/3
   10 root      10  -5     0    0    0 S  0.0  0.0   0:14.95 events/0
   11 root      10  -5     0    0    0 S  0.0  0.0   0:02.35 events/1
   12 root      10  -5     0    0    0 S  0.0  0.0   0:01.45 events/2
   13 root      10  -5     0    0    0 S  0.0  0.0   0:00.44 events/3
   14 root      10  -5     0    0    0 S  0.0  0.0   0:00.01 khelper
...
Note that the multi-processor machine has one event queue per processor.
> So, assuming my understanding of the load number is correct, your web
> servers would need to be 40-processor machines to be able to run at a
> load of 40 without being overloaded. Or, turning that on its head,
> your hosting company has apparently oversold the server by a factor of
> 10 to 40, depending on how many cores are in the machine, which is
> unreasonable in my eyes, unless it's costing you pennies.
Right -- it may be that it's a dual or quad machine that's just keeping
busy, but my guess is that it's a single processor machine that the web
provider is making plenty of money from. Try the command
$ cat /proc/cpuinfo
and see what it says -- it should list the available processors on the
system. The first system shown above has a single 1GHz processor with 256K
cache; the second one has four 2.4GHz processors, each with 512K cache.
Bogomips numbers are 2005 and 4799, respectively.
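A small awk one-liner can summarize those same fields (processor count, model, cache size, bogomips) instead of reading the whole file by eye. This is just a sketch against the standard Linux /proc/cpuinfo layout; field names vary a little between architectures:

```shell
#!/bin/sh
# Summarize /proc/cpuinfo: processor count, plus the model, cache size,
# and bogomips from the last stanza seen.
awk -F': *' '
    /^processor/  { n++ }          # one stanza per logical CPU
    /^model name/ { model = $2 }
    /^cache size/ { cache = $2 }
    /^bogomips/   { bogo  = $2 }
    END { printf "%d processor(s): %s, cache %s, %s bogomips\n",
                 n, model, cache, bogo }
' /proc/cpuinfo
```

On the two machines above this would report one processor and four processors respectively, which settles the overselling question without any guesswork.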
--
Alex Beamish
Toronto, Ontario
aka talexb