`Total size of serialized results of tasks is bigger than spark.driver.maxResultSize` means that when an executor tries to send its result to the driver, the result exceeds `spark.driver.maxResultSize`. One possible solution, as @mayank agrawal mentioned above, is to keep increasing that limit until the job works (not a recommended solution if an executor really is trying to send too much data).
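If you do decide to raise the limit, it can be set when building the session or passed to `spark-submit`. A minimal PySpark sketch, where the value `4g` is only an illustrative guess, not a recommendation:

```python
from pyspark.sql import SparkSession

# Sketch only: "4g" is an arbitrary illustrative value; tune it for your job.
# The default for spark.driver.maxResultSize is 1g; setting it to 0 removes
# the limit entirely, which can instead crash the driver with an OOM.
spark = (
    SparkSession.builder
    .appName("raise-max-result-size")
    .config("spark.driver.maxResultSize", "4g")
    .getOrCreate()
)
```

The same setting can be passed on the command line with `--conf spark.driver.maxResultSize=4g`.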
I would also suggest looking into your code to check whether the data is skewed, i.e. whether one executor ends up doing most of the work and therefore produces a lot of data in/out. If the data is skewed, you could try repartitioning it.
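A minimal sketch of that repartitioning, with a hypothetical input path and a purely illustrative partition count:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("repartition-skewed-data").getOrCreate()

# Hypothetical input path, for illustration only.
df = spark.read.parquet("/path/to/input")

# Check how many partitions the data currently has.
print(df.rdd.getNumPartitions())

# repartition(n) redistributes rows round-robin across n partitions,
# which evens out partition sizes when the existing split is skewed.
# 200 is only a placeholder; choose a count based on your data volume.
df = df.repartition(200)
```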
For the "too many open files" issue, a possible cause is that Spark creates a large number of intermediate files before the shuffle. This can happen when an executor uses too many cores, when parallelism is very high, or when there are many unique keys (a likely cause in your case: the huge number of input files). One solution to look into is consolidating those intermediate files with the flag `--conf spark.shuffle.consolidateFiles=true` when you run `spark-submit`.
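For example (the script name and resource options are placeholders; note that this flag belongs to the older hash-based shuffle and, as far as I know, is not available in recent Spark releases):

```bash
# Placeholders: replace the script name and resource options with your own.
spark-submit \
  --conf spark.shuffle.consolidateFiles=true \
  --executor-cores 2 \
  your_job.py
```

Fewer cores per executor also means fewer concurrent tasks, which reduces the number of shuffle files open at the same time.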
One more thing to check is this thread, in case it resembles your use case: https://issues.apache.org/jira/browse/SPARK-12837