Postgres/Perl performance help needed

Madison Kelly linux-5ZoueyuiTZhBDgjK7y7TUQ at public.gmane.org
Wed May 26 03:23:14 UTC 2004


cbbrowne-HInyCGIudOg at public.gmane.org wrote:
>>Hi all,
>>
>>   The intrepid "Asker of questions" is back (so sorry!).
>>
>>   I am hoping I can pick the brains of those of you here who play with 
>>postgresql a lot. Maybe you can help me configure the server to be more 
>>efficient or suggest a different way to approach my code.
>>
>>   I have run into a problem where I need to write data into a 
>>postgresql database and it is unbearably slow. I have been reading 
>>(and will continue to read after sending this) what I can find on the 
>>postgresql.org site and wherever else Google points me, but so far my 
>>attempts to resolve the problem have in fact made it worse.
>>
>>   Specs: I have Fedora Core 1 on a Pentium3 650MHz CPU with 448MB RAM 
>>(it's a box made of parts) running in runlevel 3 with no X up.
>>
>>   I have tried increasing the amount of swap space by passing:
>>
>>echo 128000000 > /proc/sys/kernel/shmmax
>>
>>   to the kernel and editing /var/lib/pgsql/data/postgresql.conf to have:
>>
>>shared_buffers = 15200
>>sort_mem = 32168
> 
> 
> Those changes are unlikely to have been helpful; they probably made
> things WAY worse.
> 
> You just asked to allocate about 121MB of shared memory (NOT swap: 
> 15200 buffers x 8KB = 121,600KB), and for roughly 31MB (sort_mem is 
> measured in kilobytes, so 32168KB) to be allocated every time you 
> initiate a sort.  Drop shared_buffers down to a couple thousand 8K 
> blocks, at most.  The default sort_mem was probably OK.
> 
> You're probably going to do well enough with:
>    shared_buffers = 2000
>    sort_mem = 4096
> 
> 
>>   Before I started messing with things, I recorded a directory with 
>>2,490 files and folders (just the names, obviously!) in 23 seconds, 
>>which was not reasonable. When I tried to record a filesystem with 
>>175,000 records it took 32 minutes... Since I started tweaking, 
>>recording the same directory takes 35 seconds.
> 
> 
> Some _BIG_ wins come from grouping changes together into a single
> transaction.
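> 
> In Perl with DBI that pattern looks roughly like this (a sketch only; 
> the connect string and the @files structure are made up, and the table 
> and column names are taken from your index below):
> 
>    use DBI;
> 
>    # connect with AutoCommit off so nothing commits until we say so
>    my $dbh = DBI->connect('dbi:Pg:dbname=mydb', 'user', 'password',
>                           { AutoCommit => 0, RaiseError => 1 });
> 
>    my $sth = $dbh->prepare(
>       'INSERT INTO file_dir (file_name, file_parent_dir)
>        VALUES (?, ?)');
> 
>    # one transaction around the whole batch instead of one per INSERT
>    for my $file (@files) {
>       $sth->execute($file->{name}, $file->{parent});
>    }
>    $dbh->commit();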
> 
> Another big win comes from having an index on file_dir's file_src_uuid 
> column...
> 
>    create index fduuid on file_dir (file_src_uuid);
> 
> It's probably good to have the following index:
> 
>    create index fd_name_parent on file_dir (file_name, file_parent_dir);
> 
> If you're running a modern version of PostgreSQL, running pg_autovacuum
> would solve the need to periodically run ANALYZE to keep table
> statistics up to date, and to clean out dead tuples via VACUUM.  If
> that's not readily available, it's a good idea to run VACUUM ANALYZE on
> this table every few thousand updates, as your processing will be
> generating lots of garbage, as well as changing the statistics.
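> 
> In DBI terms that might look like the following (again only a sketch; 
> it assumes the same $dbh and $sth as above, with AutoCommit off):
> 
>    my $count = 0;
>    for my $file (@files) {
>       $sth->execute($file->{name}, $file->{parent});
>       if (++$count % 5000 == 0) {
>          $dbh->commit();
>          # VACUUM cannot run inside a transaction block, so flip
>          # AutoCommit on for it and back off again afterwards
>          $dbh->{AutoCommit} = 1;
>          $dbh->do('VACUUM ANALYZE file_dir');
>          $dbh->{AutoCommit} = 0;
>       }
>    }
>    $dbh->commit();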

Hi,

   Wow, progress finally!!

   I edited shared_buffers and sort_mem down to the values you 
recommended, and that shaved the time from 29 seconds to 27 seconds. 
After that I tried again to set 'autocommit = false', and this time it 
worked, shaving the time further to 21 seconds! It is still far too 
slow, but at least I am finally making headway, thanks to all of you!!

   Madison

--
The Toronto Linux Users Group.      Meetings: http://tlug.ss.org
TLUG requests: Linux topics, No HTML, wrap text below 80 columns
How to UNSUBSCRIBE: http://tlug.ss.org/subscribe.shtml