
python - numpy: split 1D array of chunks separated by nans into a list of the chunks

I have a NumPy array in which only some values are valid and the rest are NaN. For example:

[nan, nan, 1, 2, 3, nan, nan, 10, 11, nan, nan, nan, 23, 1, nan, 7, 8]

I would like to split it into a list of chunks, each holding one consecutive run of valid data. The result would be:

[[1, 2, 3], [10, 11], [23, 1], [7, 8]]

I managed to get it done by iterating over the array, checking np.isfinite() and producing (start, stop) indexes, along the lines of the sketch below.
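A minimal sketch of that approach (not my exact code, but the same idea):

import numpy as np

def using_loop(a):
    chunks, start = [], None
    for i, v in enumerate(a):
        if np.isfinite(v):
            if start is None:
                start = i                    # a valid run begins here
        elif start is not None:
            chunks.append(list(a[start:i]))  # the run ended just before i
            start = None
    if start is not None:
        chunks.append(list(a[start:]))       # trailing run
    return chunks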

However, it is painfully slow.

Do you perhaps have a better idea?


1 Answer


Here is another possibility:

import numpy as np
nan = np.nan

def using_clump(a):
    # mask NaN/inf entries, then gather each contiguous unmasked run
    a = np.ma.masked_invalid(a)
    return [a.data[s] for s in np.ma.clump_unmasked(a)]

x = [nan, nan, 1, 2, 3, nan, nan, 10, 11, nan, nan, nan, 23, 1, nan, 7, 8]

In [56]: using_clump(x)
Out[56]: 
[array([ 1.,  2.,  3.]),
 array([ 10.,  11.]),
 array([ 23.,   1.]),
 array([ 7.,  8.])]
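masked_invalid masks the NaN (and inf) entries, and clump_unmasked then returns the slices covering each unmasked run. For the x above (shown here as an illustration, not from the original session):

np.ma.clump_unmasked(np.ma.masked_invalid(x))
# -> [slice(2, 5, None), slice(7, 9, None), slice(12, 14, None), slice(15, 17, None)]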

Some benchmarks comparing using_clump and using_groupby:

import itertools as IT

def using_groupby(a):
    # group consecutive elements by finiteness; keep only the finite groups
    return [list(v) for k, v in IT.groupby(a, np.isfinite) if k]

In [58]: %timeit using_clump(x)
10000 loops, best of 3: 37.3 us per loop

In [59]: %timeit using_groupby(x)
10000 loops, best of 3: 53.1 us per loop

The advantage grows for larger arrays:

In [9]: x = x*1000   # x is a list, so this repeats it (17,000 elements)
In [12]: %timeit using_clump(x)
100 loops, best of 3: 5.69 ms per loop

In [13]: %timeit using_groupby(x)
10 loops, best of 3: 60 ms per loop
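As a further point of comparison (my addition, not part of the original answer), a pure-NumPy variant that splits wherever finiteness flips should also avoid the Python-level loop; a rough sketch:

def using_split(a):
    a = np.asarray(a, dtype=float)
    finite = np.isfinite(a)
    # indices where the finite/non-finite state changes
    # (cast to int8: np.diff on booleans is not supported)
    boundaries = np.flatnonzero(np.diff(finite.astype(np.int8))) + 1
    # split at every state change and keep the chunks that start finite
    return [c for c in np.split(a, boundaries) if c.size and np.isfinite(c[0])]

For the example x it returns the same four chunks.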
