Postgres/Perl performance help needed

Stewart C. Russell scruss-rieW9WUcm8FFJ04o6PK0Fg at public.gmane.org
Tue May 25 23:21:44 UTC 2004


Madison Kelly wrote:
>
> ... or suggest a different way to approach my code.

I'm sure this question has already been optimized to death on Perl 
Monks, <http://perlmonks.org/>. There's deeper wisdom there.

Some suggestions:

* are you sure your bottleneck is actually in the DB server? Recursive 
directory traversal is best done with find(1), or better yet, with 
Perl's own directory-traversal routines. Have a look at:

	perldoc -f opendir
	perldoc -f readdir
	perldoc -f stat
	perldoc -f -X
	perldoc File::Find

Any of these has to be better than recursively shelling out to `sudo ls ...`.
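As a minimal sketch of the in-process approach, File::Find can walk a
tree and collect paths and sizes in one pass (the starting directory
here is just '.' for illustration):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Find;

# Walk the tree in-process instead of spawning ls/find per directory.
my @files;
find(
    sub {
        return unless -f $_;    # regular files only; skips dirs, etc.
        # -s _ reuses the stat buffer from the -f test above.
        push @files, { path => $File::Find::name, size => -s _ };
    },
    '.'                         # starting directory -- substitute yours
);

printf "%s\t%d bytes\n", $_->{path}, $_->{size} for @files;
```

The callback runs once per entry, so per-file work (stat, filtering)
happens without any extra process creation.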

* Placeholders and Bind Values -- if you can restructure your code to 
use them effectively -- can speed up your DB access incredibly. Please 
see the section "Placeholders and Bind Values" in `perldoc DBI` for more 
details. What you want to do is prepare your SQL statement with 
placeholders outside the loop, then execute the precompiled statement 
with bound values inside the loop. I've seen 100x speed improvements 
just by moving the 'prepare' outside the loop. Also, with bound values, 
you get argument quoting for free. This is good.
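A sketch of that prepare-once/execute-many pattern (the table and column
names are hypothetical, and an in-memory SQLite handle is used just to
keep the example self-contained -- swap the DSN for 'dbi:Pg:dbname=...'
in real use):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Hypothetical schema; SQLite in-memory so the sketch runs standalone.
my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
                       { RaiseError => 1, AutoCommit => 0 });
$dbh->do('CREATE TABLE file_info (file_path TEXT, file_size INTEGER)');

# Prepare ONCE, outside the loop, with '?' placeholders ...
my $sth = $dbh->prepare(
    'INSERT INTO file_info (file_path, file_size) VALUES (?, ?)');

# ... then execute the precompiled statement inside the loop.
# The driver quotes bound values, so odd filenames are handled safely.
my @files = ( ['/etc/passwd', 1024], ["/tmp/it's odd", 0] );
$sth->execute(@$_) for @files;
$dbh->commit;

my ($count) = $dbh->selectrow_array('SELECT count(*) FROM file_info');
print "$count rows inserted\n";
```

The win is that the statement is parsed and planned once instead of on
every iteration, and the quoting comes from the driver rather than from
string interpolation.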

* You don't want to muck around splitting up paths by '/'. Use 
File::Basename; it will do path parsing for you, reliably and portably.
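For instance, fileparse() hands back the directory, name, and suffix in
one call, with no split-on-'/' fiddling (the path here is just an
example):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Basename;

# Split a path into name, directory, and suffix; the second argument
# is a pattern for the suffix to strip off the name.
my ($name, $dir, $ext) = fileparse('/var/log/syslog.1', qr/\.[^.]*/);
print "dir=$dir name=$name ext=$ext\n";
# dir=/var/log/ name=syslog ext=.1
```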

I can't comment on the server side, 'cept to say no-one ever ran 
PostgreSQL for speed ;-) (ducks...)

  Stewart

--
The Toronto Linux Users Group.      Meetings: http://tlug.ss.org
TLUG requests: Linux topics, No HTML, wrap text below 80 columns
How to UNSUBSCRIBE: http://tlug.ss.org/subscribe.shtml




