Limits on number of ssh users? X users?
Lennart Sorensen
lsorense-1wCw9BSqJbv44Nm34jS7GywD8/FfD2ys at public.gmane.org
Thu Feb 16 14:32:02 UTC 2006
On Wed, Feb 15, 2006 at 09:00:50PM -0500, Christopher Browne wrote:
> It depends *heavily* on what processes they are running.
>
> - A bunch of users running Mozilla and Emacs will chew up memory Purty Fast.
>
> - If they are running things with small memory footprints, the number
> of users could grow Purty High before things fall down.
>
> I'd expect memory to be the primary constraint. That's what we used
> to chew up when running a dozen users on a MicroVAX with 32MB of
> memory... Eight Megabytes And Constantly Swapping was a pretty big
> memory consumer...
Well, at least on a modern Linux kernel, processes share the memory pages
for the executable code. Only the modified data segments are specific to
each user, and I suspect emacs doesn't generally have that much modified
data. On older unix systems where memory pages could not be shared
between users, you would quickly run out of memory with usage like that.
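(As an illustrative aside, not part of the original mail: on Linux you can
see how much of a process's resident memory is shared versus how big its
text segment is by reading /proc/&lt;pid&gt;/statm. A minimal sketch, assuming a
Linux system with /proc mounted; the helper name `memory_pages` is made up
for this example.)

```python
# Sketch: report resident, shared, and text-segment memory for a process
# by parsing /proc/<pid>/statm (Linux-specific; values are in pages).
import os

def memory_pages(pid="self"):
    # statm fields: size resident shared text lib data dirty
    with open(f"/proc/{pid}/statm") as f:
        size, resident, shared, text, *_ = map(int, f.read().split())
    page_kb = os.sysconf("SC_PAGE_SIZE") // 1024
    return {
        "resident_kb": resident * page_kb,  # total resident set size
        "shared_kb": shared * page_kb,      # resident pages shared with others
        "text_kb": text * page_kb,          # executable code segment
    }

print(memory_pages())
```

A second copy of the same binary would show roughly the same text_kb, but
those code pages are counted once system-wide, which is why many users
running the same editor is far cheaper than the per-process numbers suggest.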
I remember at university sending messages to people saying that if they
were going to run emacs, they should at least use 3 buffers in one emacs
rather than run 3 copies of emacs at the same time. :) SunOS 4 did not
deal too well with memory as far as I can tell. It would also swap
itself to death under load when too many people were compiling code at
the same time. It made the mistake of counting swap-in time as runtime,
so the swap-in for a task often consumed its full time slice; the moment
a task finished swapping in, the system switched to the next one, and no
actual work ever got done anymore, until the next morning when the
admins showed up and rebooted the systems. A load average of 130+ meant
the system was going to be dead soon. :)
Len Sorensen
--
The Toronto Linux Users Group. Meetings: http://tlug.ss.org
TLUG requests: Linux topics, No HTML, wrap text below 80 columns
How to UNSUBSCRIBE: http://tlug.ss.org/subscribe.shtml