Perl optimisation help
Lennart Sorensen
lsorense-1wCw9BSqJbv44Nm34jS7GywD8/FfD2ys at public.gmane.org
Fri Jun 9 14:35:31 UTC 2006
On Fri, Jun 09, 2006 at 11:45:19AM +0300, Peter wrote:
>
> Thanks to all who have responded. I did not have time to check the
> perlmonger links suggested, but I think I have solved my problem.
> Conclusions:
>
> Solutions by speed:
>
> 5.   $var .= $line;
> 13.  $hash{$idx++} = $line;  later collect this into $var using .= or join
> 180. $var = $var . $line;
> n/a. @var = (@var, $line);
>
> The first number is the runtime in seconds with my test dataset (~8 MB).
> In all cases there is a 'collect' phase and a 'print' phase. The
> 'collect' phase runs the $var .= $line; statement in a certain state of
> a state machine that analyzes <INPUT> lines. It can collect up to
> 100,000 lines of about 80 characters (~8 MB). In all cases the modified
> collect instruction was placed in exactly the same place in the code,
> and no loop rewriting was done. Note that there is some distance, in
> terms of code length, between the calls to the collect statements.
So .= appends efficiently, while $var = $var.$new rebuilds the entire
string each time (which makes sense, since that is what you are telling
it to do). Good to know that string append is rather efficient in Perl,
then.
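
For anyone who wants to reproduce the comparison, here is a minimal
sketch (my own, not Peter's program) using the core Benchmark module.
The line count is scaled down so the slow string-copy variant finishes
quickly, and the @var = (@var, $line) case is left out because it
copies the whole array on every pass:

  #!/usr/bin/perl
  # Sketch only: compare the collect strategies from the thread with
  # the core Benchmark module.  Line count is scaled down from the
  # ~100,000 lines in the original test so the copy variant finishes.
  use strict;
  use warnings;
  use Benchmark qw(cmpthese);

  my $line  = 'x' x 80;     # roughly 80-character line, as in the post
  my $count = 10_000;       # scaled down from ~100,000 lines

  cmpthese( -2, {
      'append (.=)'    => sub {
          my $var = '';
          $var .= $line for 1 .. $count;
      },
      'hash then join' => sub {
          my %hash;
          my $idx = 0;
          $hash{ $idx++ } = $line for 1 .. $count;
          my $var = join '', @hash{ sort { $a <=> $b } keys %hash };
      },
      'string copy'    => sub {
          my $var = '';
          $var = $var . $line for 1 .. $count;
      },
  } );

The exact numbers will of course differ from machine to machine, but
the ordering should match the 5 / 13 / 180 result quoted above.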
Len Sorensen
--
The Toronto Linux Users Group. Meetings: http://tlug.ss.org
TLUG requests: Linux topics, No HTML, wrap text below 80 columns
How to UNSUBSCRIBE: http://tlug.ss.org/subscribe.shtml