
scala - Apache Spark: When not to use mapPartitions and foreachPartition?

I know that when we want to initialize a resource once per partition of an RDD, rather than once per element, we should ideally use mapPartitions and foreachPartition, for example to open a single JDBC connection for each partition of data (see the sketch below). But are there scenarios where we should not use either of them, and should instead use the plain map() transformation and foreach() action?
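For concreteness, here is a minimal sketch of the per-partition pattern referred to above; the JDBC URL, credentials, table, and INSERT statement are hypothetical placeholders:

```scala
import java.sql.DriverManager

import org.apache.spark.sql.SparkSession

object PartitionWriteSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("partition-write-sketch").getOrCreate()
    val rdd = spark.sparkContext.parallelize(1 to 100)

    // foreachPartition: one connection per partition instead of one per element.
    rdd.foreachPartition { partition =>
      // Hypothetical JDBC endpoint; replace with a real URL, user, and password.
      val conn = DriverManager.getConnection("jdbc:postgresql://host:5432/db", "user", "pass")
      val stmt = conn.prepareStatement("INSERT INTO numbers (n) VALUES (?)")
      try {
        partition.foreach { n =>
          stmt.setInt(1, n)
          stmt.executeUpdate()
        }
      } finally {
        stmt.close()
        conn.close()
      }
    }

    spark.stop()
  }
}
```

Opening the connection inside foreachPartition means the expensive, non-serializable connection is created once per partition on the executor, rather than shipped from the driver or recreated for every element.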



1 Answer


When you write Spark jobs that use mapPartitions or foreachPartition, you can only transform or iterate over the data of the partition itself. The function you pass in is executed on the executors, so there is no viable way to invoke a cluster-wide operation such as reduceByKey from inside it; such operations can only be issued from the driver. Likewise, DataFrames, Datasets, and the SparkSession are accessible only from driver code, never from inside a partition-level closure.
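To make the boundary concrete, here is a minimal sketch (all names are illustrative) of what is and is not allowed inside a partition-level closure:

```scala
import org.apache.spark.sql.SparkSession

object DriverVsExecutorSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("driver-vs-executor").getOrCreate()
    val pairs = spark.sparkContext.parallelize(Seq(("a", 1), ("b", 2), ("a", 3)))

    // OK: the closure only touches its own partition's iterator.
    val incremented = pairs.mapPartitions { iter =>
      iter.map { case (k, v) => (k, v + 1) }
    }

    // OK: a cluster-wide operation invoked from the driver.
    val totals = incremented.reduceByKey(_ + _)

    // NOT OK: using an RDD (or the SparkSession) inside executor-side code.
    // Uncommenting this typically fails with an error saying that RDD
    // transformations and actions can only be invoked by the driver:
    // pairs.foreachPartition { _ => totals.count() }

    totals.collect().foreach(println)
    spark.stop()
  }
}
```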

Please see here for a detailed discussion of this issue and possible solutions.


