scala - spark apply function to columns in parallel

Spark will process the data in parallel, but not the operations. In my DAG I want to call a function per column, as in Spark processing columns in parallel; the values for each column can be calculated independently of the other columns. Is there any way to achieve such parallelism via the Spark SQL API? Utilizing window functions (see Spark dynamic DAG is a lot slower and different from hard coded DAG) helped to optimize the DAG a lot, but it only executes in a serial fashion.

An example which contains a little bit more information can be found at https://github.com/geoHeil/sparkContrastCoding

A minimal example is below:

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._
import spark.implicits._ // assuming a SparkSession available as `spark`, as in spark-shell

val df = Seq(
    (0, "A", "B", "C", "D"),
    (1, "A", "B", "C", "D"),
    (0, "d", "a", "jkl", "d"),
    (0, "d", "g", "C", "D"),
    (1, "A", "d", "t", "k"),
    (1, "d", "c", "C", "D"),
    (1, "c", "B", "C", "D")
  ).toDF("TARGET", "col1", "col2", "col3TooMany", "col4")

val inputToDrop = Seq("col3TooMany")
val inputToBias = Seq("col1", "col2")

val targetCounts = df.filter(df("TARGET") === 1).groupBy("TARGET").agg(count("TARGET").as("cnt_foo_eq_1"))
val newDF = df.toDF.join(broadcast(targetCounts), Seq("TARGET"), "left")
newDF.cache

def handleBias(df: DataFrame, colName: String, target: String = "TARGET") = {
    val w1 = Window.partitionBy(colName)
    val w2 = Window.partitionBy(colName, target)

    // pre2_: mean of the target per level of colName
    // pre_: smallest per-(level, target) group count divided by the total number of TARGET == 1 rows
    df.withColumn("cnt_group", count("*").over(w2))
      .withColumn("pre2_" + colName, mean(target).over(w1))
      .withColumn("pre_" + colName, coalesce(min(col("cnt_group") / col("cnt_foo_eq_1")).over(w1), lit(0D)))
      .drop("cnt_group")
  }

// Look up the precomputed statistics for a (column, value) pair: codingVariant 0 returns the pre_ value, otherwise pre2_
val joinUDF = udf((newColumn: String, newValue: String, codingVariant: Int, results: Map[String, Map[String, Seq[Double]]]) => {
    results.get(newColumn) match {
      case Some(tt) => {
        val nestedArray = tt.getOrElse(newValue, Seq(0.0))
        if (codingVariant == 0) {
          nestedArray.head
        } else {
          nestedArray.last
        }
      }
      case None => throw new Exception("Column not contained in initial data frame")
    }
  })

Now I want to apply my handleBias function to all of the columns; unfortunately, this is not executed in parallel.

val res = (inputToDrop ++ inputToBias).toSet.foldLeft(newDF) {
    (currentDF, colName) =>
      {
        println("using col " + colName) // or logger.info in the full project
        handleBias(currentDF, colName)
      }
  }
    .drop("cnt_foo_eq_1")

val combined = ((inputToDrop ++ inputToBias).toSet).foldLeft(res) {
    (currentDF, colName) =>
      {
        currentDF
          .withColumn("combined_" + colName, map(col(colName), array(col("pre_" + colName), col("pre2_" + colName))))
      }
  }

val columnsToUse = combined
    .select(combined.columns
      .filter(_.startsWith("combined_"))
      .map(combined(_)): _*)

val newNames = columnsToUse.columns.map(_.split("combined_").last)
val renamed = columnsToUse.toDF(newNames: _*)

val cols = renamed.columns
val localData = renamed.collect

// column name -> (value -> Seq(pre, pre2))
val columnsMap = cols.map { colName =>
    colName -> localData.flatMap(_.getAs[Map[String, Seq[Double]]](colName)).toMap
}.toMap
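
For completeness, the collected columnsMap can then be passed back into joinUDF as a map literal to encode the columns; a rough sketch, assuming Spark 2.2+ for typedLit (the encoded_ prefix is just an illustrative name, not from the original project):

import org.apache.spark.sql.functions.{col, lit, typedLit}

// Hypothetical usage of joinUDF: encode every biased column with its precomputed
// statistics; codingVariant 0 selects the pre_ value.
val encoded = (inputToDrop ++ inputToBias).toSet.foldLeft(df) { (currentDF, colName) =>
  currentDF.withColumn(
    "encoded_" + colName,
    joinUDF(lit(colName), col(colName), lit(0), typedLit(columnsMap)))
}
encoded.show(false)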


1 Answer


"values for each column could be calculated independently from other columns"

While this is true, it doesn't really help your case. You can generate a number of independent DataFrames, each with its own additions, but that doesn't mean you can automatically combine them into a single execution plan.

Each application of handleBias shuffles your data twice, and the output DataFrames don't have the same data distribution as the parent DataFrame. This is why, when you fold over the list of columns, each addition has to be performed separately.
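
You can see those two shuffles in the physical plan of a single call, for example:

// The two Window.partitionBy clauses each introduce an Exchange (shuffle) node,
// one keyed by colName and one by (colName, target).
handleBias(newDF, "col1").explain()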

Theoretically you could design a pipeline which can be expressed (in pseudocode) like this; a concrete Scala sketch is given after the list:

  • add unique id:

    df_with_id = df.withColumn("id", unique_id())
    
  • compute each df independently and convert to wide format:

    dfs = for (c in columns) 
      yield handle_bias(df, c).withColumn(
        "pres", explode([(pre_name, pre_value), (pre2_name, pre2_value)])
      )
    
  • union all partial results:

    combined = dfs.reduce(union)
    
  • pivot to convert from long to wide format:

    combined.groupBy("id").pivot("pres._1").agg(first("pres._2"))
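
A rough Scala version of that pipeline, reusing handleBias and newDF from the question (dfWithId, partials, longFormat, wide and result are made-up names for illustration):

import org.apache.spark.sql.functions._

// 1. add a unique id so the per-column results can be stitched back together
val dfWithId = newDF.withColumn("id", monotonically_increasing_id())

// 2. compute each column independently and melt the two statistics into long format
val partials = (inputToDrop ++ inputToBias).toSeq.map { c =>
  handleBias(dfWithId, c).select(
    col("id"),
    explode(array(
      struct(lit("pre_" + c).as("name"), col("pre_" + c).as("value")),
      struct(lit("pre2_" + c).as("name"), col("pre2_" + c).as("value"))
    )).as("pres"))
}

// 3. union all partial results
val longFormat = partials.reduce(_ union _)
  .select(col("id"), col("pres.name").as("name"), col("pres.value").as("value"))

// 4. pivot back to wide format and join onto the original rows
val wide = longFormat.groupBy("id").pivot("name").agg(first("value"))
val result = dfWithId.join(wide, Seq("id"), "left")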
    

but I doubt it is worth all the fuss. The process you use is extremely heavy as it is and requires significant network and disk IO.

If the total number of levels (sum(count(distinct x)) for x in columns) is relatively low, you can try to compute all statistics with a single pass, using for example aggregateByKey with Map[Tuple2[_, _], StatCounter]; otherwise consider downsampling to a level where you can compute the statistics locally.
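
A rough sketch of such a single pass, here keyed by the (column, value) tuple rather than accumulating one big Map (featureCols and stats below are made-up names for illustration):

import org.apache.spark.util.StatCounter

val featureCols = Seq("col1", "col2", "col3TooMany", "col4")
val targetIdx = df.columns.indexOf("TARGET")

val stats = df.rdd
  .flatMap { row =>
    // emit ((column, value), TARGET) once per feature column
    val target = row.getInt(targetIdx).toDouble
    featureCols.map(c => ((c, row.getAs[String](c)), target))
  }
  .aggregateByKey(new StatCounter())(
    (acc, v) => acc.merge(v),          // fold one TARGET value into the accumulator
    (acc1, acc2) => acc1.merge(acc2))  // combine partial accumulators across partitions
  .collectAsMap()                      // Map[(column, value), StatCounter]

// stats((c, v)).mean ~ mean of TARGET per level (pre2_)
// stats((c, v)).sum  ~ number of TARGET == 1 rows per level, since TARGET is 0/1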

