That's not the Sieve of Eratosthenes, even though it looks like it is. It is in fact much worse. The Sieve is one of the fastest simple algorithms for finding all primes up to a limit.
See http://en.wikipedia.org/wiki/Sieve_of_Eratosthenes
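(The snippet under discussion isn't reproduced here, but the usual culprit is some variant of trial division, sketched below as an assumed illustration. Note how it divides each candidate by earlier primes instead of striking out multiples, which is what makes it much slower than a true sieve.)

# Assumed reconstruction of the typical "fake sieve": trial division in disguise.
def primes_by_trial_division(N):
    primes = []
    for x in range(2, N):
        if all(x % p for p in primes):  # divides x by every earlier prime
            primes.append(x)
    return primes

# A true Sieve of Eratosthenes never divides; it only marks off multiples.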
edit: I've modified https://stackoverflow.com/a/9302299/711085 to be a one-liner (originally it was not the real Sieve, but now it is... probably...):
# Python 3 note: reduce must first be imported (from functools import reduce)
reduce( (lambda r,x: r-set(range(x**2,N,x)) if (x in r) else r),
        range(2,N), set(range(2,N)))
Demo:
>>> primesUpTo = lambda N: reduce(...)  # the reduce expression above
>>> primesUpTo(30)
{2, 3, 5, 7, 11, 13, 17, 19, 23, 29}
Sadly I think that while this would be efficient in a functional programming language, it is not as efficient in Python, because Python's sets are not persistent (immutable, structure-sharing) data structures: each r - set(...) step allocates a brand-new set and copies every surviving element, so any sieve in Python needs to use mutation to achieve comparable performance. To make the copying cost concrete, here is a minimal sketch (the helper names are mine, not part of the answer):
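from functools import reduce

N = 10**4

def sieve_copying(N):
    # r - set(...) allocates a brand-new set on every prime step
    return reduce(lambda r, x: r - set(range(x*x, N, x)) if x in r else r,
                  range(2, N), set(range(2, N)))

def sieve_inplace(N):
    # difference_update mutates r in place: no per-step copy of the survivors
    r = set(range(2, N))
    for x in range(2, N):
        if x in r:
            r.difference_update(range(x*x, N, x))
    return r

assert sieve_copying(N) == sieve_inplace(N)

Timing the two (e.g. with timeit) shows the in-place version pulling further ahead as N grows. We can still cram the mutating approach into a one-liner if we desperately wanted to. But first...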
Normal sieve:
>>> N = 100
>>> table = list(range(N))                  # table[i] == i while i is still a candidate
>>> for i in range(2,int(N**0.5)+1):        # only need to sieve up to sqrt(N)
...     if table[i]:
...         for mult in range(i**2,N,i):    # strike out multiples, starting at i*i
...             table[mult] = False
...
>>> primes = [p for p in table if p][1:]    # 0 is already falsy; [1:] drops the 1
>>> primes
[2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97]
We can now define and call anonymous functions on the same line, as well as use the hack of [...].__setitem__ to do inline mutation, and the hack of "... and foo" to evaluate "..." while returning "foo". A tiny isolated demo of these tricks first (the throwaway names are mine):
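>>> t = [10, 20, 30]
>>> # the comprehension evaluates to [None, None, None] -- non-empty, hence
>>> # truthy -- so "and" hands back lst itself, which has been mutated
>>> (lambda lst: [lst.__setitem__(i, 0) for i in range(len(lst))] and lst)(t)
[0, 0, 0]

Putting it all together: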
>>> primesUpTo = lambda N: (lambda table: [[table.__setitem__(mult,False) for mult in range(i**2,N,i)] for i in range(2,int(N**0.5)+1) if table[i]] and [p for p in table if p][1:])(list(range(N)))
>>> primesUpTo(30)
[2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
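And it agrees with the straightforward sieve (assuming the primes list from the N = 100 session above is still around):

>>> primesUpTo(100) == primes
True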
Proceed to cringe in horror: here is the one-liner expanded (oddly beautiful, because you can almost directly translate the control flow back out of it, yet it is a terrible abuse of everything):
lambda N:
(lambda table:
[[table.__setitem__(mult,False) for mult in range(i**2,N,i)]
for i in range(2,int(N**0.5)+1) if table[i]]
and [p for p in table if p][1:]
)(list(range(N)))
This one-liner mutating version gave up at around 10^8 on my machine, while the original mutating version gave up at around 10^9, running out of memory (oddly). The original reduce version gave up at 10^7. So perhaps it is not that inefficient after all (at least for numbers you can deal with on your computer).
edit2: It seems you can abuse side effects more concisely as:
reduce( (lambda r,x: (r.difference_update(range(x**2,N,x)) or r)  # update returns None; "or r" passes the set along
                     if (x in r) else r),
        range(2,N), set(range(2,N)))
It gives up at around 10^8, the same as the one-liner mutating version.
edit3: This runs at O(N) empirical complexity, whereas without the difference_update it ran at O(N^2.2).
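(If you want to reproduce this kind of measurement, here is a rough sketch of my own scaffolding, not the original benchmark: time the function at a few sizes and estimate the exponent from consecutive log-log slopes.)

import math, timeit
from functools import reduce

def primes_upto(N):
    return reduce(lambda r, x: (r.difference_update(range(x*x, N, x)) or r)
                               if x in r else r,
                  range(2, N), set(range(2, N)))

sizes = [10**4, 10**5, 10**6]
times = [timeit.timeit(lambda: primes_upto(n), number=3) / 3 for n in sizes]

# slope of log(time) against log(N) estimates the exponent k in O(N^k)
pts = list(zip(sizes, times))
for (n1, t1), (n2, t2) in zip(pts, pts[1:]):
    print(f"{n1} -> {n2}: exponent ~ {math.log(t2 / t1) / math.log(n2 / n1):.2f}")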
Limiting the range that is reduced over to the square root of the upper limit, and working with odd numbers only, both result in additional speed-ups (2x and 1.6x respectively):
reduce( (lambda r,x: (r.difference_update(range(x*x,N,2*x)) or r)  # step 2*x skips even multiples
                     if (x in r) else r),
        range(3, int((N+1)**0.5+1), 2),   # only odd candidates up to sqrt(N)
        {2} | set(range(3,N,2)))          # seed with 2 plus the odds; works on Python 3, where range isn't a list
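Wrapped as a callable for convenience (the sorted wrapper is mine; the raw reduce returns a set):

>>> from functools import reduce
>>> primesUpTo = lambda N: sorted(reduce(
...     (lambda r,x: (r.difference_update(range(x*x,N,2*x)) or r) if (x in r) else r),
...     range(3, int((N+1)**0.5+1), 2),
...     {2} | set(range(3,N,2))))
>>> primesUpTo(30)
[2, 3, 5, 7, 11, 13, 17, 19, 23, 29]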