To remove duplicates, use set(a).
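For example (converting to a set does not preserve the original order):
a = [1, 2, 3, 2, 1, 5, 6, 5, 5, 5]
print(list(set(a)))  # [1, 2, 3, 5, 6] -- order is not guaranteed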
To print duplicates, something like:
a = [1,2,3,2,1,5,6,5,5,5]
import collections
print([item for item, count in collections.Counter(a).items() if count > 1])
## [1, 2, 5]
Note that Counter is not particularly efficient (timings) and probably overkill here. set will perform better. This code computes a list of unique elements in the source order:
seen = set()
uniq = []
for x in a:
    if x not in seen:
        uniq.append(x)
        seen.add(x)
or, more concisely:
seen = set()
uniq = [x for x in a if x not in seen and not seen.add(x)]
I don't recommend the latter style, because it is not obvious what not seen.add(x) is doing (the set add() method always returns None, hence the need for not).
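If you want to check the performance claim on your own machine, here is a rough timeit sketch; the helper names, list size and repeat count are arbitrary choices, and exact numbers will vary with your Python version and how many duplicates the data contains:
import collections
import timeit

data = list(range(10_000)) * 2   # every value appears exactly twice

def dupes_counter(a):
    # Counter-based duplicate listing, as shown above
    return [x for x, c in collections.Counter(a).items() if c > 1]

def dupes_set(a):
    # simplified set-based variant of the dict loop shown further below
    seen, dupes = set(), []
    for x in a:
        if x in seen:
            dupes.append(x)
        else:
            seen.add(x)
    return dupes

print(timeit.timeit(lambda: dupes_counter(data), number=200))
print(timeit.timeit(lambda: dupes_set(data), number=200))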
To compute the list of duplicated elements without libraries:
seen = {}
dupes = []
for x in a:
    if x not in seen:
        seen[x] = 1
    else:
        if seen[x] == 1:
            dupes.append(x)
        seen[x] += 1
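With the example list above, this leaves dupes == [2, 1, 5]: each duplicated value is reported once, in the order in which its first repeat is encountered.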
If list elements are not hashable, you cannot use sets/dicts and have to resort to a quadratic time solution (compare each with each). For example:
a = [[1], [2], [3], [1], [5], [3]]
no_dupes = [x for n, x in enumerate(a) if x not in a[:n]]
print(no_dupes)  # [[1], [2], [3], [5]]
dupes = [x for n, x in enumerate(a) if x in a[:n]]
print(dupes)  # [[1], [3]]
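If the unhashable elements can be mapped to something hashable (here the inner lists can be turned into tuples; that is an assumption about your data, not something the quadratic solution requires), the linear-time set approach still works. A minimal sketch under that assumption:
a = [[1], [2], [3], [1], [5], [3]]
seen = set()
no_dupes = []
for x in a:
    key = tuple(x)        # assumes each element is a list of hashable items
    if key not in seen:
        no_dupes.append(x)
        seen.add(key)
print(no_dupes)  # [[1], [2], [3], [5]]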