Berkeley CSUA MOTD:Entry 42187
2006/3/10-13 [Computer/SW/Languages/Perl] UID:42187 Activity:low
3/10    I wrote a little perl, that had a little curl
        right in the middle of its call stack.
        And when it crashed, it crashed very very fast,
        but when it was slow, it was working
        \_ LWP::UserAgent!
           \_ Can I use LWP::UserAgent to do multiple concurrent requests,
              and time them?  Will it deal with a redirect automatically?
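
A minimal sketch of what the LWP::UserAgent suggestion looks like (the URL is made up): redirects are followed automatically for GET/HEAD up to max_redirect, and Time::HiRes can time a single request. Note that a plain LWP::UserAgent runs one request at a time; concurrency is what the replies below address.

```perl
use strict;
use warnings;
use LWP::UserAgent;
use Time::HiRes qw(gettimeofday tv_interval);

my $ua = LWP::UserAgent->new(
    timeout      => 10,   # give up on slow servers
    max_redirect => 7,    # GET/HEAD redirects are followed automatically
);

my $t0       = [gettimeofday];
my $response = $ua->get('http://csua.com/');    # hypothetical URL
my $elapsed  = tv_interval($t0);                # seconds, sub-ms resolution

printf "%s -> %s in %.3fs\n",
    $response->request->uri,    # final URI after any redirects
    $response->status_line,
    $elapsed;
```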
        \_ There's also a parallel version of LWP::UserAgent.  It's a
           bit clunkier to use, but if you need it, I found it better than
           forking off multiple child procs.
            \_ Eh, using the default one with Parallel::ForkManager's pretty
               easy.  I used that for the remote-link checking part of a
               linkchecker I wrote.
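
A sketch of the Parallel::ForkManager approach described above — the URLs and the max-children count are made up for illustration:

```perl
use strict;
use warnings;
use LWP::UserAgent;
use Parallel::ForkManager;

my @urls = qw(http://csua.com/ http://example.com/);    # hypothetical
my $pm   = Parallel::ForkManager->new(10);  # at most 10 children at once

for my $url (@urls) {
    $pm->start and next;                    # parent: spawn child, move on
    my $ua  = LWP::UserAgent->new(timeout => 10);
    my $res = $ua->get($url);
    printf "%s: %s\n", $url, $res->status_line;
    $pm->finish;                            # child exits here
}
$pm->wait_all_children;
```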
               \_ I haven't used P::FM.  I assume it's forking off procs?
                  I'll take a look at it, thanks.  I wonder if P::FM or
                  the parallel version of LWP::UA is faster....
           \_ How would you time the parallel requests in perl?  (how long
              one takes to complete).
               \_ Since I was running dozens or even hundreds at a time, I just
                  did a simple $end - $start timer for entire runs.  Timing a
                  single connection is a tiny bit harder because the start time
                  is just a "go do them all now" call, but you can time any
                  particular connection if you want: you know when you fired
                  off the batch, you control whether the batch runs in order or
                  as fast as possible in parallel, you know for each connection
                  when you get the first bits back, and you can of course tell
                  when a particular connection is done.  But as I said, I just
                  fired off however many, noted the start and end times, and
                  that was it.  Dividing # requests by total time was good
                  enough for me.
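
The whole-run timing described above amounts to something like this sketch (the request count is a placeholder), with Time::HiRes giving a finer-grained $end - $start than plain time():

```perl
use strict;
use warnings;
use Time::HiRes qw(gettimeofday tv_interval);

my $n_requests = 100;                 # however many you fired off
my $start      = [gettimeofday];

# ... fire off the whole batch here (forked children, parallel UA, etc.) ...

my $total = tv_interval($start);      # $end - $start, in seconds
printf "%d requests in %.2fs (%.1f req/s)\n",
    $n_requests, $total, $n_requests / $total;
```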