perl, forking and broken file handles

billt-lxSQFCZeNF4 at public.gmane.org
Wed May 4 16:58:37 UTC 2005


I don't know about the file handles, but DBI handles generally don't survive a forked child process exiting.

The reason is as follows:
After a fork, the child and parent share the same database connection (the same underlying socket). That connection is torn down when the child exits, because DBI closes all of its connections to the database during its cleanup, so the parent's handle dies along with the child.
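
As an aside, DBI documents an InactiveDestroy attribute for exactly this situation: set it in the child so the child's exit-time cleanup leaves the socket the parent is still using alone. A minimal sketch, assuming a made-up Postgres DSN, table name, and rsync call standing in for the real work:

-=-=-[ code snippet ]-=-=-
use strict;
use warnings;
use DBI;

# Hypothetical DSN and credentials, for illustration only.
my $dbh = DBI->connect("dbi:Pg:dbname=backup", "user", "pass",
                       { RaiseError => 1 });

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid == 0)
{
  # Child: mark the handle so DBI's cleanup at exit does not close
  # the connection the parent is still using.
  $dbh->{InactiveDestroy} = 1;
  system("/path/to/rsync", "-a", "/src/mount/dir/", "/dst/mount/dir/");
  exit 0;
}

waitpid($pid, 0);

# The parent's $dbh is still usable here.
# (backup_jobs is a made-up table name.)
$dbh->do("UPDATE backup_jobs SET finished = 1");
-=-=-[ End code snippet ]-=-=-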

Also, if you really want to screw up your program, try inserting into or selecting from the database in both the child and the parent at the same time over that shared connection.

File handles may be the same way: IO::File may explicitly close the files on exit, and since the two processes share the same underlying (Unix) file descriptor, the child's cleanup can leave the parent's handle dead.

What you want to do is pass the names of the files you want to open to the child process (as parameters or global variables), then open the database connection and the files after the fork.
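
Roughly, that looks like the sketch below. The file names, DSN, log path, and rsync switches are placeholders, not Madison's actual values:

-=-=-[ code snippet ]-=-=-
use strict;
use warnings;
use DBI;

# Hypothetical per-stream list files; in the real script these would
# come from the %file_handle hash.
my @copy_files = ("/path/to/copy.1", "/path/to/copy.2");
my @children;

foreach my $copy_file (@copy_files)
{
  my $pid = fork();
  die "fork failed: $!" unless defined $pid;

  if ($pid == 0)
  {
    # Child: open its own database connection and log file *after*
    # the fork, so nothing is shared with the parent.
    my $dbh = DBI->connect("dbi:Pg:dbname=backup", "user", "pass",
                           { RaiseError => 1 });
    open (my $log, '>>', "/var/log/backup-child.log")
      or die "open: $!";
    print $log "Copying the files listed in: [$copy_file]\n";
    system("/path/to/rsync", "--files-from=$copy_file",
           "/src/mount/dir", "/dst/mount/dir");
    $dbh->disconnect;
    exit 0;
  }
  push @children, $pid;
}

# Parent: reap every child before carrying on.  Its own file handles
# and database connection were never touched by the children, so they
# keep working afterwards.
waitpid($_, 0) for @children;
-=-=-[ End code snippet ]-=-=-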

Note that sharing file handles between threads in a threaded program is generally frowned upon as well.

Bill

On Wed, May 04, 2005 at 12:35:15PM -0400, Madison Kelly wrote:
> Hi all,
> 
>    I need to call one or more external perl script(s) from within my 
> main perl script then wait for them all to finish before proceeding. 
> I've read the perl 'fork' and 'IPC' docs but I must be missing 
> something... Once the program finishes waiting for the children and 
> tries to proceed all file handles and open databases are lost. I am 
> pretty sure from reading that this is because the filehandles and DB 
> connections are shared with the child(ren) and die with them. What I 
> don't know is how to keep these connections alive in the parent script.
> 
>    Here is what I am doing that currently breaks:
> 
> -=-=-[ code snippet ]-=-=-
> {
> ...
>    ## At this point file handles and database connections still work...
> 
>    ## This is inside another loop so this might be called multiple times
>    ## at once if the user has asked to run multiple backup streams at
>    ## once or run each stream in series otherwise.
> 
>    ## These are files that have lists of files I want to copy. I will
>    ## start one 'rsync' stream for each file after closing the FH
>    while ( my $keys = ( each %file_handle ) )
>    {
>      print LOG "Closing the file handle: [$keys]\n";
>      close ($file_handle{$keys});
> 		
>      print LOG "Copying the files listed in: [<path/to/copy.file>] to: [</dst/mount/dir/src_name>]\n";
> 
>      $SIG{CHLD}="IGNORE";
>      my $cpid=fork();
>      if ($cpid == 0)
>      {
>        print "I will now call 'rsync' stream. Copying selected data from source: [#<src_id>] to destination: [#<dst_id>]\n";
> 
>        open (RSYNC, "/path/to/rsync --<switches> --files-from=\"/path/to/copy.file\" </src/mount/dir> </dst/mount/dir/src_name> 2>&1 |");
>        close (RSYNC);
>        exit 0;
>    }
> 
>    # I was running into a race condition...
>    sleep 1;
> 		
>    if ( $parallel_streams == 0 )
>    {
>      print "You have asked to run each stream one at a time. I will now wait for this stream to end.\n";
>      wait;
>      print "This stream is finished. Thank you for waiting.\n";
>    }
> 
> }
> if ( $parallel_streams == 1 )
> {
>    # I was running into a race condition...
>    sleep 1;
>    print "I am waiting for the last stream to finish before proceeding:\n";
>    wait;
>    print "The last stream is finished. Thank you for waiting.\n";
> }
> 
> ## At this point file handles and database connections are dead...
> -=-=-[ End code snippet ]-=-=-
> 
>    Any help or insight would be very much appreciated! Alternative ways 
> of doing this are also very much welcome, too!
> 
> Madison
> 
> -- 
> -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
> Madison Kelly (Digimer)
> TLE-BU, The Linux Experience; Back Up
> http://tle-bu.thelinuxexperience.com
> -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
--
The Toronto Linux Users Group.      Meetings: http://tlug.ss.org
TLUG requests: Linux topics, No HTML, wrap text below 80 columns
How to UNSUBSCRIBE: http://tlug.ss.org/subscribe.shtml




