This article collects typical usage examples of the Scala class org.apache.spark.sql.execution.streaming.MemoryStream. If you have been wondering what MemoryStream is for, or how to use it, the selected class example here may help.
One code example of the MemoryStream class is shown below.
Example 1: CustomSinkSuite
// Package declaration and imported dependencies
package com.highperformancespark.examples.structuredstreaming

import scala.collection.mutable.ListBuffer

import com.holdenkarau.spark.testing.DataFrameSuiteBase
import org.scalatest.FunSuite

import org.apache.spark._
import org.apache.spark.sql.{Dataset, DataFrame, Encoder, SQLContext}
import org.apache.spark.sql.execution.streaming.MemoryStream

class CustomSinkSuite extends FunSuite with DataFrameSuiteBase {

  test("really simple test of the custom sink") {
    import spark.implicits._

    // MemoryStream lets the test push data into a streaming query in-memory.
    val input = MemoryStream[String]
    val doubled = input.toDS().map(x => x + " " + x)

    // Fully qualified name of the custom sink provider below
    // (note the "." between the package segments).
    val formatName = ("com.highperformancespark.examples." +
      "structuredstreaming.CustomSinkCollectorProvider")

    val query = doubled.writeStream
      .queryName("testCustomSinkBasic")
      .format(formatName)
      .start()

    val inputData = List("hi", "holden", "bye", "pandas")
    input.addData(inputData)
    assert(query.isActive === true)
    query.processAllAvailable()
    assert(query.exception === None)
    assert(Pandas.results(0) === inputData.map(x => x + " " + x))
  }
}

// Shared collector the custom sink writes its results into.
object Pandas {
  val results = new ListBuffer[Seq[String]]()
}

class CustomSinkCollectorProvider extends ForeachDatasetSinkProvider {
  override def func(df: DataFrame): Unit = {
    val spark = df.sparkSession
    import spark.implicits._
    Pandas.results += df.as[String].rdd.collect()
  }
}
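The test above extends ForeachDatasetSinkProvider, which is defined elsewhere in the same project and not shown here. As an assumption-labeled sketch, such a provider could be built on Spark's StreamSinkProvider and Sink APIs, handing each micro-batch DataFrame to a user-supplied function; the exact implementation in the project may differ:

```scala
// Hypothetical sketch (not the project's exact source): a sink provider
// that forwards each micro-batch DataFrame to a user-supplied function.
// Assumes the Spark 2.x-era StreamSinkProvider / Sink APIs used above.
package com.highperformancespark.examples.structuredstreaming

import org.apache.spark.sql.{DataFrame, SQLContext}
import org.apache.spark.sql.execution.streaming.Sink
import org.apache.spark.sql.sources.StreamSinkProvider
import org.apache.spark.sql.streaming.OutputMode

abstract class ForeachDatasetSinkProvider extends StreamSinkProvider {
  // Subclasses implement this to process each micro-batch.
  def func(df: DataFrame): Unit

  override def createSink(
      sqlContext: SQLContext,
      parameters: Map[String, String],
      partitionColumns: Seq[String],
      outputMode: OutputMode): Sink = {
    new ForeachDatasetSink(func)
  }
}

// A Sink that simply forwards every batch to the provided function.
case class ForeachDatasetSink(func: DataFrame => Unit) extends Sink {
  override def addBatch(batchId: Long, data: DataFrame): Unit = {
    func(data)
  }
}
```

Because `createSink` is resolved by class name via `.format(...)`, the provider must have a public no-argument constructor, which is why the test registers it by its fully qualified class name.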
Author: holdenk | Project: spark-structured-streaming-ml | Lines of code: 45 | Source file: CustomSinkSuite.scala
Note: the org.apache.spark.sql.execution.streaming.MemoryStream example in this article was collected from source-code and documentation hosting platforms such as GitHub and MSDocs, and the snippet was selected from open-source projects contributed by their developers. Copyright in the source code remains with the original authors; consult the corresponding project's license before distributing or reusing it. Please do not republish without permission.