Statically Typed

because Hindley-Milner rocks

Revisiting My Wrench and Timing It Too


I had mentioned that in working with Python 2.4 I had come up with a rather rigid solution to a problem (chaining a bunch of iterators together to traverse a 2D array) which, given a greater context and new requirements, failed to be adaptable.  My original solution can be seen in the “Cog meet wrench” posting.  Finally I’ve figured out how to do it, and wouldn’t you just know it, it’s been staring me in the face the whole time.

from itertools import chain, imap, islice

def map2D( values, col_start, col_end, row_start, row_end ):
   # Slice each inner iterable, keep only the wanted outer slice, chain the rest together.
   partial_apply = islice( imap( lambda x: islice( x, row_start, row_end+1 ), values ), col_start, col_end+1 )
   return chain( *partial_apply )
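
A quick usage sketch (the grid and the window bounds here are made up purely for illustration):

grid = [ range( i*10, i*10 + 10 ) for i in xrange( 0, 10 ) ]   # a 10 x 10 grid
window = list( map2D( grid, 2, 4, 5, 7 ) )
# -> [25, 26, 27, 35, 36, 37, 45, 46, 47], row by row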

All I had to do was replace the call to iter with one to islice and then wrap the iterator returned from imap in another call to islice.  For later versions of Python you could condense it even further using chain.from_iterable and turn it into a one-liner (a longish one-liner), as sketched below.
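
Something like this, I'd imagine (assuming Python 2.6 or later, where chain.from_iterable is available):

from itertools import chain, imap, islice

def map2D( values, col_start, col_end, row_start, row_end ):
   # Same slicing as before, but chain.from_iterable consumes the outer iterator directly.
   return chain.from_iterable( islice( imap( lambda x: islice( x, row_start, row_end+1 ), values ), col_start, col_end+1 ) )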

To me this is much easier to follow than using some convoluted list-comprehension.  Technically speaking I could have solved the original problem via a generator comprehension:

iter_values = ( item for x in values for item in x )
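
The windowed version, by comparison, would come out as something like this rough sketch (assuming values is an indexable list of lists and the same inclusive bounds as map2D):

iter_window = ( item for x in values[col_start:col_end+1]
                     for item in x[row_start:row_end+1] )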

Even the simple form can be a little difficult for a novice at comprehensions to grok, and the windowed one more so.  However, there’s another downside to using generator comprehensions: the iterators created by the itertools module outperform them.  By how much?  I did a little unscientific experiment:

from time import time
from itertools import chain

def timer( it ):
   # Exhaust the iterable and report how long it took.
   start = time()
   [i for i in it]
   return time() - start

# A 100 x 100 basis, flattened 20 times with each approach.
basis = [ range( 0, 100 ) for i in xrange( 0, 100 ) ]
generator_results = [ timer( ( item for x in basis for item in x ) ) for i in xrange( 0, 20 ) ]
iterator_results = [ timer( chain( *basis ) ) for i in xrange( 0, 20 ) ]
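
Summarizing each list of runs is just a matter of the average and the extremes, along these lines:

for name, results in ( ( "generator", generator_results ), ( "itertools", iterator_results ) ):
   print "%s: avg %.4f  min %.4f  max %.4f" % ( name, sum( results ) / len( results ), min( results ), max( results ) )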

Not surprisingly, the generator comprehension took on average 0.0057 seconds to complete while the itertools version took on average 0.0025 seconds: more than twice as long.  More than that, the ratio persisted when comparing the max and min timing results.

Then I tried timing tests for larger sets (1,000 x 1,000 and 10,000 x 1,000).  I experimented with the generator comprehension, a list-comprehension, and the itertools iterator.  The average, min, and max results surprised me.  Edging out ahead was the list-comprehension (1.355 seconds), followed by the itertools solution (1.424 seconds) and then the generator comprehension (2.635 seconds).

Absolutely stunning that the list-comprehension would come out on top; I had thought the garbage collector would be working overtime there.  Despite this setback I think I’d still opt for the itertools approach.  I just love them too much to give them up.


This entry was posted on July 11, 2010 in Python.