dump a perl array into a psql DB via 'copy'; help?

Madison Kelly linux-5ZoueyuiTZhBDgjK7y7TUQ at public.gmane.org
Wed Dec 15 06:38:56 UTC 2004


Hi all (again, I know)

   Quick update first; a while back I was asking for advice on how to 
speed up database performance, and several people suggested avoiding 
calling and reading 'ls' and instead using 'readdir'. I avoided it at 
the time because I needed all of each file's information. Well, with 
'stat' and '$size = -s $file' I can get that now. With that and other 
improvements my performance has increased more than fivefold.
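
   For illustration, here is roughly what the readdir/stat change looks 
like (the directory name and the fields pulled out are placeholders, 
not my real script):

  use strict;
  use warnings;

  my $dir = '/path/to/scan';             # placeholder directory
  opendir my $dh, $dir or die "Can't open $dir: $!";
  while ( defined( my $name = readdir $dh ) ) {
      next if $name eq '.' or $name eq '..';
      my $path = "$dir/$name";
      my $size = -s $path;               # size in bytes
      # reuse the stat buffer from -s via the special '_' handle
      my ( $mode, $uid, $gid, $mtime ) = ( stat _ )[ 2, 4, 5, 9 ];
      # ... build the row of data for this file here ...
  }
  closedir $dh;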

   But I want more. :)

   What I'm doing currently is opening a file, starting it with the 
'psql' copy command, then writing a line of data for each file being 
processed, and finally capping off the file with '\.'. Writing this 
file currently takes 11 seconds for 22,000 files on my machine. I then 
call 'psql' to read in its contents. That works, but the read alone 
takes another 31 seconds. I know this sounds somewhat trivial, but I 
need it to be faster.
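
   Roughly, the current pass looks like this (the table name, columns, 
and the temp file name are placeholders, not my real schema):

  use strict;
  use warnings;

  my @rows;   # one tab-separated line per file, built while scanning

  open my $out, '>', 'data.txt' or die "Can't write data.txt: $!";
  print $out "COPY file_info (name, size, mtime) FROM stdin;\n";
  print $out "$_\n" for @rows;
  print $out "\\.\n";                    # caps off the copy block
  close $out;

  # second pass: psql reads the file back in
  system( 'psql', 'dbname', '-f', 'data.txt' ) == 0
      or die "psql failed: $?";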

   My idea, and what I need help with, is this:

   Instead of writing the lines to a file, I would rather write each 
line into an array so that I avoid the disk access of writing the file 
out. Next I want to dump the contents of the array into 'psql' one line 
at a time, but not commit the changes until the whole array is in 
(which is how I believe 'copy' works from a text file). That way I 
would avoid a second disk I/O hit by removing the need to have 'psql' 
read the file.
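
   Something like this is what I am picturing: keep the rows in the 
array and feed them straight to 'psql' on a pipe (again, the table, 
columns and database name below are made up):

  use strict;
  use warnings;

  my @rows;   # tab-separated lines built in memory while scanning

  open my $psql, '|-', 'psql', 'dbname' or die "Can't start psql: $!";
  print $psql "COPY file_info (name, size, mtime) FROM stdin;\n";
  print $psql "$_\n" for @rows;
  print $psql "\\.\n";                   # end of the copy block
  close $psql or die "psql exited with status $?";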

   My question is: how can I get the same functionality as 
'psql dbname -f data.txt', but with 'psql' taking the values from the 
array instead of a file?
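
   (In case it helps frame the question, here is a rough, untested 
sketch of one route I have wondered about: streaming the 'copy' data 
from the script itself through DBD::Pg, which would skip 'psql' and the 
temp file entirely. This assumes a DBD::Pg recent enough to have the 
pg_putcopydata/pg_putcopyend calls; the table, columns and connection 
details are placeholders.)

  use strict;
  use warnings;
  use DBI;

  my @rows;   # tab-separated lines built in memory

  my $dbh = DBI->connect( 'dbi:Pg:dbname=dbname', '', '',
                          { AutoCommit => 0, RaiseError => 1 } );
  $dbh->do('COPY file_info (name, size, mtime) FROM STDIN');
  $dbh->pg_putcopydata("$_\n") for @rows;
  $dbh->pg_putcopyend();                 # finishes the copy
  $dbh->commit;                          # nothing is committed until here
  $dbh->disconnect;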

   Thanks yet again!

Madison
--
The Toronto Linux Users Group.      Meetings: http://tlug.ss.org
TLUG requests: Linux topics, No HTML, wrap text below 80 columns
How to UNSUBSCRIBE: http://tlug.ss.org/subscribe.shtml
