"Base" meaning without just using lru_cache. All of these are "fast enough" -- I'm not looking for the fastest algorithm -- but the timings surprised me so I was hoping I could learn something about how Python "works".
Simple loop (the iterative equivalent of tail recursion):
def fibonacci(n):
    a, b = 0, 1
    if n in (a, b): return n
    for _ in range(n - 1):
        a, b = b, a + b
    return b
Simple memoized:
def fibonacci(n, memo={0: 0, 1: 1}):
    if len(memo) <= n:
        memo[n] = fibonacci(n - 1) + fibonacci(n - 2)
    return memo[n]
Using a generator:
def fib_seq():
    a, b = 0, 1
    yield a
    yield b
    while True:
        a, b = b, a + b
        yield b

def fibonacci(n):
    return next(x for (i, x) in enumerate(fib_seq()) if i == n)
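(As an aside that isn't part of my timings: itertools.islice would do the same "give me the element at index n" job more directly. A sketch, not benchmarked:)

from itertools import islice

def fibonacci(n):
    # islice discards the first n values, so next() returns the
    # element at index n of the infinite sequence
    return next(islice(fib_seq(), n, None))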
I expected the first, being dead simple, to be the fastest. It's not. The second is by far the fastest, despite the recursion and all the function calls. The third is cool and uses "modern" features, but it's the slowest of the three, which is disappointing. (I was tempted to think of generators as, in some ways, an alternative to memoization -- since they remember their state -- and since they're implemented in C I was hoping they'd be faster.)
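In hindsight, a generator only remembers state within a single generator object, and my fibonacci(n) builds a fresh fib_seq() on every call, so nothing carries over between calls. Actually reusing one generator would look something like this (my own sketch, with the hypothetical names _seq and _cache, not benchmarked):

_seq = fib_seq()   # one shared generator, advanced lazily
_cache = []        # values pulled so far; _cache[i] == fib(i)

def fibonacci(n):
    # pull more values from the shared generator only if we
    # haven't reached index n yet; earlier results persist
    while len(_cache) <= n:
        _cache.append(next(_seq))
    return _cache[n]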
Typical results:
loop: about 140 μs
memo: about 430 ns
genr: about 250 μs
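(For context, the numbers came from a harness along these lines -- this is a reconstruction, not my exact script; the renamed functions fib_loop/fib_memo/fib_genr and the input n=30 are illustrative assumptions:)

import timeit

for name, fn in [("loop", "fib_loop"), ("memo", "fib_memo"), ("genr", "fib_genr")]:
    # average seconds per call over 10,000 repetitions
    per_call = timeit.timeit(f"{fn}(30)", globals=globals(), number=10_000) / 10_000
    print(f"{name}: {per_call * 1e6:.2f} µs")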
So can anyone explain, in particular, why the memoized version is hundreds of times faster than the simple loop?
EDIT:
It's clear now that I have (like many before me) simply stumbled on Python's mutable default arguments: the memo dict is created once, when the function is defined, and persists across calls. That behavior explains both the real and the apparent gains in execution speed.
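For anyone else who lands here, a minimal demonstration of the effect (my own illustration):

def fibonacci(n, memo={0: 0, 1: 1}):
    if len(memo) <= n:
        memo[n] = fibonacci(n - 1) + fibonacci(n - 2)
    return memo[n]

# The default dict is created exactly once, when `def` executes,
# and is shared by every call that doesn't pass its own memo.
fibonacci(30)
print(len(fibonacci.__defaults__[0]))  # 31 -- entries for 0..30 persist
fibonacci(30)  # now just a dict lookup, hence the ~430 ns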