I have a function which processes a DataFrame, largely to bucket the data and create a binary matrix of features from a particular column using pd.get_dummies(df[col]).
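For reference, the function looks roughly like this minimal sketch (the column names "category" and "label" and the exact return shape are placeholders, not my real data):

import pandas as pd

def preprocess_data(df, col="category"):
    # Placeholder sketch: one-hot encode the bucketed column into a
    # binary feature matrix and return it alongside the target column.
    x = pd.get_dummies(df[col])
    y = df["label"]
    return x, y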
To avoid processing all of my data using this function at once (which runs out of memory and causes IPython to crash), I have broken the large DataFrame into chunks using:
chunks = (len(df) / 10000) + 1
df_list = np.array_split(df, chunks)
pd.get_dummies(df[col]) will automatically create new columns based on the contents of df[col], and these are likely to differ for each df in df_list.
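To illustrate the point (values are purely hypothetical), two chunks whose contents only partly overlap produce different dummy columns:

import pandas as pd

# Illustrative chunks whose 'col' values only partly overlap.
chunk_a = pd.DataFrame({"col": ["red", "blue"]})
chunk_b = pd.DataFrame({"col": ["blue", "green"]})

print(pd.get_dummies(chunk_a["col"]).columns.tolist())  # ['blue', 'red']
print(pd.get_dummies(chunk_b["col"]).columns.tolist())  # ['blue', 'green']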
After processing, I am concatenating the DataFrames back together using:
for i, df_chunk in enumerate(df_list):
    print "chunk", i
    [x, y] = preprocess_data(df_chunk)
    super_x = pd.concat([super_x, x], axis=0)
    super_y = pd.concat([super_y, y], axis=0)
    print datetime.datetime.utcnow()
The processing time of the first chunk is perfectly acceptable, but it grows with each subsequent chunk! This has nothing to do with preprocess_data(df_chunk), since there is no reason for its runtime to increase. Is this increase in time caused by the call to pd.concat()?
Please see log below:
chunks 6
chunk 0
2016-04-08 00:22:17.728849
chunk 1
2016-04-08 00:22:42.387693
chunk 2
2016-04-08 00:23:43.124381
chunk 3
2016-04-08 00:25:30.249369
chunk 4
2016-04-08 00:28:11.922305
chunk 5
2016-04-08 00:32:00.357365
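My suspicion is that concatenating inside the loop re-copies the ever-growing super_x on every iteration, since pd.concat builds a brand-new DataFrame each time. A rough sketch with synthetic data (sizes chosen arbitrarily) of how I could check this against a single concatenation at the end:

import datetime
import numpy as np
import pandas as pd

# Synthetic stand-ins for the processed chunks (sizes are arbitrary).
chunks = [pd.DataFrame(np.random.rand(10000, 50)) for _ in range(20)]

# Pattern from my loop: concat on every iteration, so each call copies
# everything accumulated so far.
start = datetime.datetime.utcnow()
super_x = pd.DataFrame()
for x in chunks:
    super_x = pd.concat([super_x, x], axis=0)
print("concat per chunk: %s" % (datetime.datetime.utcnow() - start))

# Single concat of all chunks at the end, for comparison.
start = datetime.datetime.utcnow()
super_x = pd.concat(chunks, axis=0)
print("concat once: %s" % (datetime.datetime.utcnow() - start))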
Is there a workaround to speed this up? I have 2900 chunks to process so any help is appreciated!
Open to any other suggestions in Python!