
Java StreamingQuery Class Code Examples


This article collects typical usage examples of the Java class org.apache.spark.sql.streaming.StreamingQuery. If you have been wondering what the StreamingQuery class is for and how to use it in practice, the curated class code examples below should help.



The StreamingQuery class belongs to the org.apache.spark.sql.streaming package. Nine code examples of the class are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your ratings help the system recommend better Java code examples.
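Before the examples, a minimal end-to-end sketch of the StreamingQuery lifecycle (start, awaitTermination, stop) may help orient readers. It assumes Spark 2.x, as in the examples below; the rate source and the class name are illustrative only.

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;
import org.apache.spark.sql.streaming.StreamingQueryException;

public class StreamingQueryLifecycle {
    public static void main(String[] args) throws StreamingQueryException {
        SparkSession spark = SparkSession.builder()
                .appName("StreamingQueryLifecycle").master("local[*]").getOrCreate();

        // the built-in rate source emits rows continuously, keeping the demo self-contained
        Dataset<Row> df = spark.readStream().format("rate")
                .option("rowsPerSecond", 1).load();

        // start() returns a handle to the running query
        StreamingQuery query = df.writeStream()
                .outputMode("append").format("console").start();

        // awaitTermination(timeoutMs) returns true if the query stopped within the timeout
        if (!query.awaitTermination(10000)) {
            query.stop(); // still running: request a stop
        }
        spark.stop();
    }
}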

Example 1: start

import org.apache.spark.sql.streaming.StreamingQuery; // import the required package/class
private void start() {
	log.debug("-> start()");

	SparkSession spark = SparkSession.builder().appName("Read lines over a file stream").master("local")
			.getOrCreate();

	// @formatter:off
	Dataset<Row> df = spark
			.readStream()
			.format("text")
			.load(StreamingUtils.getInputDirectory());
	// @formatter:on

	StreamingQuery query = df.writeStream().outputMode(OutputMode.Update()).format("console").start();

	try {
		query.awaitTermination();
	} catch (StreamingQueryException e) {
		log.error("Exception while waiting for query to end {}.", e.getMessage(), e);
	}

	// Reached only if the query terminates; with the text source every column is a string
	df.show();
	df.printSchema();
}
 
Developer: jgperrin, Project: net.jgp.labs.spark, Lines: 26, Source: ReadLinesFromMultipleFileStreams.java


Example 2: start

import org.apache.spark.sql.streaming.StreamingQuery; // import the required package/class
private void start() {
    log.debug("-> start()");

    SparkSession spark = SparkSession.builder()
            .appName("Read lines over a file stream").master("local")
            .getOrCreate();

    Dataset<Row> df = spark
            .readStream()
            .format("text")
            .load(StreamingUtils.getInputDirectory());

    StreamingQuery query = df.writeStream().outputMode(OutputMode.Update())
            .format("console").start();

    try {
        query.awaitTermination();
    } catch (StreamingQueryException e) {
        log.error("Exception while waiting for query to end {}.", e.getMessage(), e);
    }

    // Never executed
    df.show();
    df.printSchema();
}
 
Developer: jgperrin, Project: net.jgp.labs.spark, Lines: 26, Source: ReadLinesFromFileStream.java


Example 3: shutdownGracefully

import org.apache.spark.sql.streaming.StreamingQuery; // import the required package/class
/**
 * Gracefully shut down a streaming Spark job, polling at a fixed interval
 * until the query has terminated.
 *
 * @param query the streaming query to shut down
 * @param checkIntervalMillis how long, in milliseconds, to wait between checks
 *                            for query termination
 * @throws InterruptedException
 * @throws StreamingQueryException
 */
public static void shutdownGracefully(StreamingQuery query, long checkIntervalMillis) throws InterruptedException,
    StreamingQueryException {
  boolean isStopped = false;
  while (!isStopped) {
    isStopped = query.awaitTermination(checkIntervalMillis);
    if (!isStopped && sparkInfo.isShutdownRequested()) {
      LOG.info("Marker file has been removed, will attempt to stop gracefully the spark structured streaming query");
      query.stop();
    }
  }
}
 
Developer: hopshadoop, Project: hops-util, Lines: 20, Source: HopsUtil.java
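A hypothetical caller sketch (not part of hops-util): start a console query and let the helper poll for termination every 10 seconds, assuming df is a streaming Dataset defined elsewhere.

StreamingQuery query = df.writeStream()
    .outputMode("append")
    .format("console")
    .start();
// loops on awaitTermination(10000) and stops the query once the
// marker file disappears (see isShutdownRequested above)
HopsUtil.shutdownGracefully(query, 10000);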


Example 4: main

import org.apache.spark.sql.streaming.StreamingQuery; // import the required package/class
public static void main(String[] args) throws StreamingQueryException {
    SparkSession spark = SparkSession
            .builder()
            .appName("JavaStructuredNetworkWordCount")
            .master("local")
            .config("spark.sql.shuffle.partitions", 8)
            .getOrCreate();

    // Create DataFrame representing the stream of input lines from connection to localhost:9999
    Dataset<Row> lines = spark
            .readStream()
            .format("socket")
            .option("host", "localhost")
            .option("port", 9999)
            .load();

    // Split the lines into words
    Dataset<String> words = lines
            .as(Encoders.STRING())
            .flatMap(
                    new FlatMapFunction<String, String>() {
                        @Override
                        public Iterator<String> call(String x) {
                            return Arrays.asList(x.split(" ")).iterator();
                        }
                    }, Encoders.STRING());

    // Generate running word count
    Dataset<Row> wordCounts = words.groupBy("value").count();

    // Start running the query that prints the running counts to the console
    StreamingQuery query = wordCounts.writeStream()
            .outputMode("complete")
            .format("console")
            .start();

    query.awaitTermination();
}
 
Developer: knoldus, Project: Sparkathon, Lines: 39, Source: JavaStructuredNetworkWordCount.java
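As an aside, the anonymous FlatMapFunction in Example 4 can be written as a Java 8 lambda; the cast is needed to pick the Java overload of flatMap over the Scala one. A sketch with the same behavior:

Dataset<String> words = lines
        .as(Encoders.STRING())
        .flatMap((FlatMapFunction<String, String>) x ->
                Arrays.asList(x.split(" ")).iterator(),
                Encoders.STRING());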


Example 5: start

import org.apache.spark.sql.streaming.StreamingQuery; // import the required package/class
public StreamingQuery start(final DataStreamWriter<?> writer, final String path) {
    Function0<StreamingQuery> runFunction = new AbstractFunction0<StreamingQuery>() {
        @Override
        public StreamingQuery apply() {
            return writer.start(path);
        }
    };
    return harness.startTest(runFunction);
}
 
Developer: elastic, Project: elasticsearch-hadoop, Lines: 10, Source: JavaStreamingQueryTestHarness.java


Example 6: run

import org.apache.spark.sql.streaming.StreamingQuery; // import the required package/class
public void run(final DataStreamWriter<?> writer, final String path) {
    Function0<StreamingQuery> runFunction = new AbstractFunction0<StreamingQuery>() {
        @Override
        public StreamingQuery apply() {
            return writer.start(path);
        }
    };
    harness.runTest(runFunction);
}
 
Developer: elastic, Project: elasticsearch-hadoop, Lines: 10, Source: JavaStreamingQueryTestHarness.java
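Examples 5 and 6 exist to bridge Java callers to the scala.Function0 expected by the underlying Scala harness; AbstractFunction0 supplies the trait plumbing so only apply() needs overriding. If the harness were built against Scala 2.12+, where Function0 accepts a Java lambda, the adapter could in principle shrink to one line; a sketch under that assumption, not the project's actual code:

public StreamingQuery start(final DataStreamWriter<?> writer, final String path) {
    // assumes scala.Function0 is lambda-compatible (Scala 2.12+)
    return harness.startTest(() -> writer.start(path));
}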


Example 7: main

import org.apache.spark.sql.streaming.StreamingQuery; // import the required package/class
public static void main(String[] args) throws Exception {
	//read properties
	Properties prop = PropertyFileReader.readPropertyFile();

	//SparkSession
	SparkSession spark = SparkSession
			.builder()
			.appName("VideoStreamProcessor")
			.master(prop.getProperty("spark.master.url"))
			.getOrCreate();

	//directory to save image files with motion detected
	final String processedImageDir = prop.getProperty("processed.output.dir");
	logger.warn("Output directory for saving processed images is set to " + processedImageDir + ". This is configured in the processed.output.dir key of the property file.");

	//create schema for the JSON message
	StructType schema = DataTypes.createStructType(new StructField[] {
			DataTypes.createStructField("cameraId", DataTypes.StringType, true),
			DataTypes.createStructField("timestamp", DataTypes.TimestampType, true),
			DataTypes.createStructField("rows", DataTypes.IntegerType, true),
			DataTypes.createStructField("cols", DataTypes.IntegerType, true),
			DataTypes.createStructField("type", DataTypes.IntegerType, true),
			DataTypes.createStructField("data", DataTypes.StringType, true)
	});

	//create Dataset from Kafka stream messages
	Dataset<VideoEventData> ds = spark
			.readStream()
			.format("kafka")
			.option("kafka.bootstrap.servers", prop.getProperty("kafka.bootstrap.servers"))
			.option("subscribe", prop.getProperty("kafka.topic"))
			.option("kafka.max.partition.fetch.bytes", prop.getProperty("kafka.max.partition.fetch.bytes"))
			.option("kafka.max.poll.records", prop.getProperty("kafka.max.poll.records"))
			.load()
			.selectExpr("CAST(value AS STRING) as message")
			.select(functions.from_json(functions.col("message"), schema).as("json"))
			.select("json.*")
			.as(Encoders.bean(VideoEventData.class));

	//key-value pairs of cameraId-VideoEventData
	KeyValueGroupedDataset<String, VideoEventData> kvDataset = ds.groupByKey(new MapFunction<VideoEventData, String>() {
		@Override
		public String call(VideoEventData value) throws Exception {
			return value.getCameraId();
		}
	}, Encoders.STRING());

	//process each camera's events, carrying state between micro-batches
	Dataset<VideoEventData> processedDataset = kvDataset.mapGroupsWithState(new MapGroupsWithStateFunction<String, VideoEventData, VideoEventData, VideoEventData>() {
		@Override
		public VideoEventData call(String key, Iterator<VideoEventData> values, GroupState<VideoEventData> state) throws Exception {
			logger.warn("CameraId=" + key + " PartitionId=" + TaskContext.getPartitionId());
			VideoEventData existing = null;
			//check previous state
			if (state.exists()) {
				existing = state.get();
			}
			//detect motion
			VideoEventData processed = VideoMotionDetector.detectMotion(key, values, processedImageDir, existing);

			//update last processed
			if (processed != null) {
				state.update(processed);
			}
			return processed;
		}
	}, Encoders.bean(VideoEventData.class), Encoders.bean(VideoEventData.class));

	//start
	StreamingQuery query = processedDataset.writeStream()
			.outputMode("update")
			.format("console")
			.start();

	//await
	query.awaitTermination();
}
 
Developer: baghelamit, Project: video-stream-analytics, Lines: 78, Source: VideoStreamProcessor.java
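One caveat for Example 7: stateful operators such as mapGroupsWithState need a checkpoint location on fault-tolerant storage to recover state across restarts. The console sink tolerates running without one, but a production sink would not. A sketch with a hypothetical path:

StreamingQuery query = processedDataset.writeStream()
        .outputMode("update")
        .format("console")
        .option("checkpointLocation", "/tmp/video-stream-checkpoint") // hypothetical path
        .start();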


Example 8: main

import org.apache.spark.sql.streaming.StreamingQuery; // import the required package/class
public static void main(String[] args) throws StreamingQueryException {
    //set log4j programmatically
    LogManager.getLogger("org.apache.spark").setLevel(Level.WARN);
    LogManager.getLogger("akka").setLevel(Level.ERROR);

    //configure Spark
    SparkConf conf = new SparkConf()
            .setAppName("kafka-structured")
            .setMaster("local[*]");

    //initialize spark session
    SparkSession sparkSession = SparkSession
            .builder()
            .config(conf)
            .getOrCreate();

    //reduce the number of shuffle partitions
    sparkSession.sqlContext().setConf("spark.sql.shuffle.partitions", "3");

    //data stream from kafka
    Dataset<Row> ds1 = sparkSession
            .readStream()
            .format("kafka")
            .option("kafka.bootstrap.servers", "localhost:9092")
            .option("subscribe", "mytopic")
            .option("startingOffsets", "earliest")
            .load();

    //register a UDF that deserializes the Avro payload into a Row
    sparkSession.udf().register("deserialize", (byte[] data) -> {
        GenericRecord record = recordInjection.invert(data).get();
        return RowFactory.create(record.get("str1").toString(), record.get("str2").toString(), record.get("int1"));

    }, DataTypes.createStructType(type.fields()));
    ds1.printSchema();
    Dataset<Row> ds2 = ds1
            .select("value").as(Encoders.BINARY())
            .selectExpr("deserialize(value) as rows")
            .select("rows.*");

    ds2.printSchema();

    StreamingQuery query1 = ds2
            .groupBy("str1")
            .count()
            .writeStream()
            .queryName("Test query")
            .outputMode("complete")
            .format("console")
            .start();

    query1.awaitTermination();

}
 
Developer: Neuw84, Project: structured-streaming-avro-demo, Lines: 55, Source: StructuredDemo.java
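Example 8 references two fields, recordInjection and type, that are defined elsewhere in the demo class. A plausible reconstruction using Twitter's bijection-avro plus a hand-built matching Spark schema (a hedged sketch, not verified against the project's source):

// Avro schema matching the str1/str2/int1 fields used above
private static final String USER_SCHEMA = "{"
        + "\"type\":\"record\",\"name\":\"myrecord\",\"fields\":["
        + "{\"name\":\"str1\",\"type\":\"string\"},"
        + "{\"name\":\"str2\",\"type\":\"string\"},"
        + "{\"name\":\"int1\",\"type\":\"int\"}]}";

private static final Schema avroSchema = new Schema.Parser().parse(USER_SCHEMA);

// bijection-avro: GenericRecord <-> binary Avro
private static final Injection<GenericRecord, byte[]> recordInjection =
        GenericAvroCodecs.toBinary(avroSchema);

// the equivalent Spark SQL schema, built by hand for clarity
private static final StructType type = DataTypes.createStructType(new StructField[] {
        DataTypes.createStructField("str1", DataTypes.StringType, true),
        DataTypes.createStructField("str2", DataTypes.StringType, true),
        DataTypes.createStructField("int1", DataTypes.IntegerType, true)
});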


Example 9: main

import org.apache.spark.sql.streaming.StreamingQuery; // import the required package/class
public static void main(String[] args) throws StreamingQueryException {
	System.setProperty("hadoop.home.dir", "C:\\softwares\\Winutils");
	SparkSession sparkSession = SparkSession.builder().master("local[*]").appName("structured Streaming Example")
			.config("spark.sql.warehouse.dir", "file:////C:/Users/sgulati/spark-warehouse").getOrCreate();

	Dataset<Row> inStream = sparkSession.readStream().format("socket").option("host", "10.204.136.223")
			.option("port", 9999).load();

	Dataset<FlightDetails> dsFlightDetails = inStream.as(Encoders.STRING()).map(x -> {
		ObjectMapper mapper = new ObjectMapper();
		return mapper.readValue(x, FlightDetails.class);
	}, Encoders.bean(FlightDetails.class));

	dsFlightDetails.createOrReplaceTempView("flight_details");

	Dataset<Row> avgFlightDetails = sparkSession.sql("select flightId, avg(temperature) from flight_details group by flightId");

	StreamingQuery query = avgFlightDetails.writeStream()
			.outputMode("complete")
			.format("console")
			.start();

	query.awaitTermination();
}
 
Developer: PacktPublishing, Project: Apache-Spark-2x-for-Java-Developers, Lines: 29, Source: StructuredStreamingExample.java
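The temp-view-plus-SQL step in Example 9 has a direct DataFrame API equivalent that avoids the query string entirely (a sketch performing the same aggregation):

Dataset<Row> avgFlightDetails = dsFlightDetails
        .groupBy("flightId")
        .avg("temperature");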



Note: The org.apache.spark.sql.streaming.StreamingQuery class examples in this article were collected from GitHub/MSDocs and other source-code and documentation platforms. The snippets are taken from open-source projects contributed by many developers; copyright of the source code belongs to the original authors. Refer to each project's License before distributing or using the code; do not reproduce without permission.

