Well, my first step was to set the two tests up independently, to make sure the difference isn't an artifact of, e.g., the order in which the two versions are run.
>python -mtimeit "x=[34534534, 23423523, 77645645, 345346]" "[e for e in x]"
1000000 loops, best of 3: 0.638 usec per loop
>python -mtimeit "x=[34534534, 23423523, 77645645, 345346]" "list(e for e in x)"
1000000 loops, best of 3: 1.72 usec per loop
Sure enough, I can replicate this. OK, next step is to have a look at the bytecode to see what's actually going on:
>>> import dis
>>> x=[34534534, 23423523, 77645645, 345346]
>>> dis.dis(lambda: [e for e in x])
1 0 LOAD_CONST 0 (<code object <listcomp> at 0x0000000001F8B330, file "<stdin>", line 1>)
3 MAKE_FUNCTION 0
6 LOAD_GLOBAL 0 (x)
9 GET_ITER
10 CALL_FUNCTION 1
13 RETURN_VALUE
>>> dis.dis(lambda: list(e for e in x))
1 0 LOAD_GLOBAL 0 (list)
3 LOAD_CONST 0 (<code object <genexpr> at 0x0000000001F8B9B0, file "<stdin>", line 1>)
6 MAKE_FUNCTION 0
9 LOAD_GLOBAL 1 (x)
12 GET_ITER
13 CALL_FUNCTION 1
16 CALL_FUNCTION 1
19 RETURN_VALUE
Notice that the first method creates the list directly, whereas the second method creates a genexpr object and passes that to the global list(). That extra call is probably where the overhead lies.
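To dig a little further, you can also disassemble the nested code objects themselves. The sketch below is my own (the inner_code helper is hypothetical), and it assumes a CPython version, like the one above, where the comprehension still compiles to a separate code object; from 3.12 onward list comprehensions are inlined, so the first lookup would fail there:

import dis
import types

def inner_code(fn):
    # The comprehension/genexpr body is compiled into a nested code object
    # stored in co_consts; its index varies between versions, so search for it.
    return next(c for c in fn.__code__.co_consts if isinstance(c, types.CodeType))

x = [34534534, 23423523, 77645645, 345346]

# The listcomp's inner loop appends with LIST_APPEND, while the genexpr's
# inner loop suspends with YIELD_VALUE on every element and list() has to
# resume it each time; that per-element suspend/resume is extra work.
dis.dis(inner_code(lambda: [e for e in x]))
dis.dis(inner_code(lambda: list(e for e in x)))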
Note also that the difference is only about a microsecond per call, i.e. utterly trivial in most contexts.
Other interesting data
This still holds for non-trivial lists:
>python -mtimeit "x=range(100000)" "[e for e in x]"
100 loops, best of 3: 8.51 msec per loop
>python -mtimeit "x=range(100000)" "list(e for e in x)"
100 loops, best of 3: 11.8 msec per loop
and for less trivial map functions:
>python -mtimeit "x=range(100000)" "[2*e for e in x]"
100 loops, best of 3: 12.8 msec per loop
>python -mtimeit "x=range(100000)" "list(2*e for e in x)"
100 loops, best of 3: 16.8 msec per loop
and (though less strongly) if we filter the list:
>python -mtimeit "x=range(100000)" "[e for e in x if e%2]"
100 loops, best of 3: 14 msec per loop
>python -mtimeit "x=range(100000)" "list(e for e in x if e%2)"
100 loops, best of 3: 16.5 msec per loop
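For what it's worth, here is a small sketch (mine, not part of the measurements above) that reruns the comparison from within Python across a few sizes. The absolute numbers will differ by machine and version, but the list(genexpr) spelling should come out consistently slower, with the fixed per-call overhead mattering most for small inputs:

import timeit

# Time 1000 runs of each spelling for several input sizes.
for n in (4, 1000, 100000):
    setup = "x = list(range(%d))" % n
    lc = timeit.timeit("[e for e in x]", setup=setup, number=1000)
    ge = timeit.timeit("list(e for e in x)", setup=setup, number=1000)
    print("n=%-7d listcomp: %.4fs   list(genexpr): %.4fs" % (n, lc, ge))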