Bandwidth monitor for a server with many virtual hosts
Madison Kelly
linux-5ZoueyuiTZhBDgjK7y7TUQ at public.gmane.org
Wed Aug 15 18:17:51 UTC 2007
John Van Ostrand wrote:
> On Wed, 2007-08-15 at 13:46 -0400, Madison Kelly wrote:
>> Until now, I have not worried about how much bandwidth various
>> customers whom we host use. Now that our number of clients grows though,
>> I need to start tracking who uses how much bandwidth.
>>
>> What do you guys here use to log bandwidth usage? Our clients use
>> mail, web and can copy data to/from the server via scp/rsync. I suspect
>> I could hack up something using TCP dump, but I really doubt I could
>> come up with something elegant and efficient. :)
>
> Do all of your virtualhosts use separate access_log and error_log files?
> That would make it easy to parse those for bytes transferred. It won't
> take into account TCP overhead, but it would allow you to single out a
> busy host.
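The log-parsing approach John describes could be sketched roughly like this (a hedged Python example, not anything Apache ships; it assumes the Combined Log Format, where the byte count is the field right after the 3-digit status code and is "-" for bodiless responses):

```python
import re

# Combined Log Format: host ident user [time] "request" status bytes "ref" "ua"
# We only care about the bytes field after the status code.
LOG_RE = re.compile(r'\S+ \S+ \S+ \[[^\]]+\] "[^"]*" \d{3} (\d+|-)')

def bytes_in_log(lines):
    """Sum the response-body bytes recorded in one vhost's access_log."""
    total = 0
    for line in lines:
        m = LOG_RE.match(line)
        if m and m.group(1) != '-':
            total += int(m.group(1))
    return total

# Hypothetical sample lines for illustration:
sample = [
    '127.0.0.1 - - [15/Aug/2007:13:46:00 -0400] "GET / HTTP/1.1" 200 5120 "-" "-"',
    '127.0.0.1 - - [15/Aug/2007:13:46:01 -0400] "GET /x HTTP/1.1" 304 - "-" "-"',
]
print(bytes_in_log(sample))  # 5120
```

Run once per vhost's access_log and you get per-customer HTTP bytes, minus headers and TCP overhead.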
All VHs do have their own log directories, though I find that data is
sometimes written to Apache2's main log file for a reason I've never
resolved. Thankfully, I think this mainly happens for 'error', not 'access'.
If I know the packet size and the per-packet overhead, shouldn't I be
able to say 'fileX' is 'Y' bytes, which would need 'Z packets + TCP
overhead', and come up with a reasonable number?
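That back-of-the-envelope calculation might look like this (a rough sketch only; it assumes a 1500-byte Ethernet MTU with 20-byte IP and TCP headers, and ignores ACKs, retransmits, and connection setup):

```python
MTU = 1500                           # typical Ethernet MTU (assumed)
IP_HEADER = 20                       # bytes, no IP options
TCP_HEADER = 20                      # bytes, no TCP options
MSS = MTU - IP_HEADER - TCP_HEADER   # 1460 bytes of payload per segment

def estimated_wire_bytes(file_size):
    """Estimate bytes on the wire: payload plus per-segment headers."""
    segments = -(-file_size // MSS)  # ceiling division: packets needed
    return file_size + segments * (IP_HEADER + TCP_HEADER)

# e.g. a 1 MiB file needs 719 segments, adding ~28 KB of header overhead:
print(estimated_wire_bytes(1024 * 1024))  # 1077336
```

So the overhead for bulk transfers works out to roughly 2.7% on top of the file size, which supports the "close enough" accounting goal.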
I am not concerned with accounting for every last byte, but I do like
to get things as close to accurate as possible. :)
> Have you taken a look at ntop and apachetop? Ntop can classify traffic
> by service (http vs. smtp vs pop vs https, etc) and by IP address.
> Apachetop shows you top-like stats on web page hits by tailing
> access_logs. Again not really what you are looking for but I thought I
> would offer it.
Those sound neat for other reasons, and no I haven't played with them.
Thanks! However, I don't think they'd work quite so well for me. Writing
a script to read all the files in each VH container, tail the access
logs, periodically update the file list, and write the data to a DB
would work. It just seems like such a kludge. I would be surprised if
there isn't "a better way" out there...
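For what it's worth, the tail-and-record idea sketched above might look something like this (everything here is hypothetical: the table schema, the vhost name, and the sample lines; actually tailing the files and scheduling the updates is left out):

```python
import re
import sqlite3

# Bytes field sits right after the 3-digit status code in the combined format.
BYTES_RE = re.compile(r'" \d{3} (\d+)')

def record_usage(db, vhost, lines):
    """Add the bytes from newly tailed log lines to a per-vhost total."""
    total = sum(int(m.group(1)) for m in map(BYTES_RE.search, lines) if m)
    db.execute(
        "INSERT INTO usage(vhost, bytes) VALUES (?, ?) "
        "ON CONFLICT(vhost) DO UPDATE SET bytes = bytes + ?",
        (vhost, total, total))

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE usage (vhost TEXT PRIMARY KEY, bytes INTEGER)")

# Pretend these arrived from tailing one vhost's access_log:
lines = ['1.2.3.4 - - [...] "GET / HTTP/1.1" 200 1000 "-" "-"',
         '1.2.3.4 - - [...] "GET /a HTTP/1.1" 200 2500 "-" "-"']
record_usage(db, "example.com", lines)
record_usage(db, "example.com", lines)
print(db.execute("SELECT bytes FROM usage").fetchone()[0])  # 7000
```

It is a kludge as noted, but a small one: each batch of tailed lines becomes a single accumulating UPDATE per vhost.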
Madi
--
The Toronto Linux Users Group. Meetings: http://gtalug.org/
TLUG requests: Linux topics, No HTML, wrap text below 80 columns
How to UNSUBSCRIBE: http://gtalug.org/wiki/Mailing_lists