
Java AssignerWithPeriodicWatermarks Class Code Examples


This article collects typical usage examples of the Java class org.apache.flink.streaming.api.functions.AssignerWithPeriodicWatermarks. If you are wondering what AssignerWithPeriodicWatermarks does, how to use it, or where to find usage examples, the curated class examples below should help.



The AssignerWithPeriodicWatermarks class belongs to the org.apache.flink.streaming.api.functions package. Twenty code examples of the class are shown below, sorted by popularity by default.
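Before the examples below, here is a minimal, self-contained sketch of the bounded-out-of-orderness logic that a typical AssignerWithPeriodicWatermarks implementation encapsulates. This is not Flink code: the class below mirrors the interface's two-method contract (extractTimestamp / getCurrentWatermark) with plain longs so it runs without Flink on the classpath, and the class name is illustrative.

```java
// Illustrative stand-in for Flink's AssignerWithPeriodicWatermarks contract;
// the real interface lives in org.apache.flink.streaming.api.functions.
public class BoundedLatenessAssigner {
    private final long maxOutOfOrdernessMs;
    private long maxSeenTimestamp = Long.MIN_VALUE;

    public BoundedLatenessAssigner(long maxOutOfOrdernessMs) {
        this.maxOutOfOrdernessMs = maxOutOfOrdernessMs;
    }

    // Called for every record: track the highest event timestamp seen so far.
    public long extractTimestamp(long eventTimestamp) {
        maxSeenTimestamp = Math.max(maxSeenTimestamp, eventTimestamp);
        return eventTimestamp;
    }

    // Called periodically (every autoWatermarkInterval ms): the watermark
    // trails the highest seen timestamp by the allowed lateness.
    public long getCurrentWatermark() {
        return maxSeenTimestamp == Long.MIN_VALUE
                ? Long.MIN_VALUE
                : maxSeenTimestamp - maxOutOfOrdernessMs;
    }

    public static void main(String[] args) {
        BoundedLatenessAssigner assigner = new BoundedLatenessAssigner(100);
        assigner.extractTimestamp(1_000);
        assigner.extractTimestamp(950); // a late record does not move the max
        System.out.println(assigner.getCurrentWatermark()); // prints 900
    }
}
```

In the fetchers below, this assigner travels as a SerializedValue and is deserialized per partition, which is why every constructor threads a watermarksPeriodic argument through to its superclass.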

Example 1: Kafka08Fetcher

import org.apache.flink.streaming.api.functions.AssignerWithPeriodicWatermarks; // import the required package/class
public Kafka08Fetcher(
		SourceContext<T> sourceContext,
		Map<KafkaTopicPartition, Long> seedPartitionsWithInitialOffsets,
		SerializedValue<AssignerWithPeriodicWatermarks<T>> watermarksPeriodic,
		SerializedValue<AssignerWithPunctuatedWatermarks<T>> watermarksPunctuated,
		StreamingRuntimeContext runtimeContext,
		KeyedDeserializationSchema<T> deserializer,
		Properties kafkaProperties,
		long autoCommitInterval,
		boolean useMetrics) throws Exception {
	super(
			sourceContext,
			seedPartitionsWithInitialOffsets,
			watermarksPeriodic,
			watermarksPunctuated,
			runtimeContext.getProcessingTimeService(),
			runtimeContext.getExecutionConfig().getAutoWatermarkInterval(),
			runtimeContext.getUserCodeClassLoader(),
			useMetrics);

	this.deserializer = checkNotNull(deserializer);
	this.kafkaConfig = checkNotNull(kafkaProperties);
	this.runtimeContext = runtimeContext;
	this.invalidOffsetBehavior = getInvalidOffsetBehavior(kafkaProperties);
	this.autoCommitInterval = autoCommitInterval;
}
 
Contributor: axbaretto | Project: flink | Lines: 27 | Source: Kafka08Fetcher.java


Example 2: createFetcher

import org.apache.flink.streaming.api.functions.AssignerWithPeriodicWatermarks; // import the required package/class
@Override
protected AbstractFetcher<T, ?> createFetcher(
		SourceContext<T> sourceContext,
		Map<KafkaTopicPartition, Long> assignedPartitionsWithInitialOffsets,
		SerializedValue<AssignerWithPeriodicWatermarks<T>> watermarksPeriodic,
		SerializedValue<AssignerWithPunctuatedWatermarks<T>> watermarksPunctuated,
		StreamingRuntimeContext runtimeContext,
		OffsetCommitMode offsetCommitMode) throws Exception {

	boolean useMetrics = !PropertiesUtil.getBoolean(kafkaProperties, KEY_DISABLE_METRICS, false);

	long autoCommitInterval = (offsetCommitMode == OffsetCommitMode.KAFKA_PERIODIC)
			? PropertiesUtil.getLong(kafkaProperties, "auto.commit.interval.ms", 60000)
			: -1; // this disables the periodic offset committer thread in the fetcher

	return new Kafka08Fetcher<>(
			sourceContext,
			assignedPartitionsWithInitialOffsets,
			watermarksPeriodic,
			watermarksPunctuated,
			runtimeContext,
			deserializer,
			kafkaProperties,
			autoCommitInterval,
			useMetrics);
}
 
Contributor: axbaretto | Project: flink | Lines: 27 | Source: FlinkKafkaConsumer08.java


Example 3: createPartitionStateHolders

import org.apache.flink.streaming.api.functions.AssignerWithPeriodicWatermarks; // import the required package/class
/**
 * Shortcut variant of {@link #createPartitionStateHolders(Map, int, SerializedValue, SerializedValue, ClassLoader)}
 * that uses the same offset for all partitions when creating their state holders.
 */
private List<KafkaTopicPartitionState<KPH>> createPartitionStateHolders(
	List<KafkaTopicPartition> partitions,
	long initialOffset,
	int timestampWatermarkMode,
	SerializedValue<AssignerWithPeriodicWatermarks<T>> watermarksPeriodic,
	SerializedValue<AssignerWithPunctuatedWatermarks<T>> watermarksPunctuated,
	ClassLoader userCodeClassLoader) throws IOException, ClassNotFoundException {

	Map<KafkaTopicPartition, Long> partitionsToInitialOffset = new HashMap<>(partitions.size());
	for (KafkaTopicPartition partition : partitions) {
		partitionsToInitialOffset.put(partition, initialOffset);
	}

	return createPartitionStateHolders(
			partitionsToInitialOffset,
			timestampWatermarkMode,
			watermarksPeriodic,
			watermarksPunctuated,
			userCodeClassLoader);
}
 
Contributor: axbaretto | Project: flink | Lines: 25 | Source: AbstractFetcher.java


Example 4: TestFetcher

import org.apache.flink.streaming.api.functions.AssignerWithPeriodicWatermarks; // import the required package/class
protected TestFetcher(
		SourceContext<T> sourceContext,
		Map<KafkaTopicPartition, Long> assignedPartitionsWithStartOffsets,
		SerializedValue<AssignerWithPeriodicWatermarks<T>> watermarksPeriodic,
		SerializedValue<AssignerWithPunctuatedWatermarks<T>> watermarksPunctuated,
		ProcessingTimeService processingTimeProvider,
		long autoWatermarkInterval) throws Exception {
	super(
		sourceContext,
		assignedPartitionsWithStartOffsets,
		watermarksPeriodic,
		watermarksPunctuated,
		processingTimeProvider,
		autoWatermarkInterval,
		TestFetcher.class.getClassLoader(),
		false);
}
 
Contributor: axbaretto | Project: flink | Lines: 18 | Source: AbstractFetcherTest.java


Example 5: TestFetcher

import org.apache.flink.streaming.api.functions.AssignerWithPeriodicWatermarks; // import the required package/class
protected TestFetcher(
		SourceContext<T> sourceContext,
		Map<KafkaTopicPartition, Long> assignedPartitionsWithStartOffsets,
		SerializedValue<AssignerWithPeriodicWatermarks<T>> watermarksPeriodic,
		SerializedValue<AssignerWithPunctuatedWatermarks<T>> watermarksPunctuated,
		ProcessingTimeService processingTimeProvider,
		long autoWatermarkInterval) throws Exception
{
	super(
		sourceContext,
		assignedPartitionsWithStartOffsets,
		watermarksPeriodic,
		watermarksPunctuated,
		processingTimeProvider,
		autoWatermarkInterval,
		TestFetcher.class.getClassLoader(),
		false);
}
 
Contributor: axbaretto | Project: flink | Lines: 19 | Source: AbstractFetcherTimestampsTest.java


Example 6: createFetcher

import org.apache.flink.streaming.api.functions.AssignerWithPeriodicWatermarks; // import the required package/class
@Override
protected AbstractFetcher<T, ?> createFetcher(
		SourceContext<T> sourceContext,
		List<KafkaTopicPartition> thisSubtaskPartitions,
		SerializedValue<AssignerWithPeriodicWatermarks<T>> watermarksPeriodic,
		SerializedValue<AssignerWithPunctuatedWatermarks<T>> watermarksPunctuated,
		StreamingRuntimeContext runtimeContext) throws Exception {

	boolean useMetrics = !Boolean.valueOf(properties.getProperty(KEY_DISABLE_METRICS, "false"));

	return new Kafka010Fetcher<>(
			sourceContext,
			thisSubtaskPartitions,
			watermarksPeriodic,
			watermarksPunctuated,
			runtimeContext.getProcessingTimeService(),
			runtimeContext.getExecutionConfig().getAutoWatermarkInterval(),
			runtimeContext.getUserCodeClassLoader(),
			runtimeContext.isCheckpointingEnabled(),
			runtimeContext.getTaskNameWithSubtasks(),
			runtimeContext.getMetricGroup(),
			deserializer,
			properties,
			pollTimeout,
			useMetrics);
}
 
Contributor: axbaretto | Project: flink | Lines: 27 | Source: FlinkKafkaConsumer010.java


Example 7: createFetcher

import org.apache.flink.streaming.api.functions.AssignerWithPeriodicWatermarks; // import the required package/class
@Override
protected AbstractFetcher<T, ?> createFetcher(
		SourceContext<T> sourceContext,
		Map<KafkaTopicPartition, Long> assignedPartitionsWithInitialOffsets,
		SerializedValue<AssignerWithPeriodicWatermarks<T>> watermarksPeriodic,
		SerializedValue<AssignerWithPunctuatedWatermarks<T>> watermarksPunctuated,
		StreamingRuntimeContext runtimeContext,
		OffsetCommitMode offsetCommitMode) throws Exception {

	boolean useMetrics = !PropertiesUtil.getBoolean(properties, KEY_DISABLE_METRICS, false);

	// make sure that auto commit is disabled when our offset commit mode is ON_CHECKPOINTS;
	// this overwrites whatever setting the user configured in the properties
	if (offsetCommitMode == OffsetCommitMode.ON_CHECKPOINTS || offsetCommitMode == OffsetCommitMode.DISABLED) {
		properties.setProperty(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
	}

	return new Kafka09Fetcher<>(
			sourceContext,
			assignedPartitionsWithInitialOffsets,
			watermarksPeriodic,
			watermarksPunctuated,
			runtimeContext.getProcessingTimeService(),
			runtimeContext.getExecutionConfig().getAutoWatermarkInterval(),
			runtimeContext.getUserCodeClassLoader(),
			runtimeContext.getTaskNameWithSubtasks(),
			runtimeContext.getMetricGroup(),
			deserializer,
			properties,
			pollTimeout,
			useMetrics);
}
 
Contributor: axbaretto | Project: flink | Lines: 33 | Source: FlinkKafkaConsumer09.java


Example 8: Kafka010Fetcher

import org.apache.flink.streaming.api.functions.AssignerWithPeriodicWatermarks; // import the required package/class
public Kafka010Fetcher(
		SourceContext<T> sourceContext,
		Map<KafkaTopicPartition, Long> assignedPartitionsWithInitialOffsets,
		SerializedValue<AssignerWithPeriodicWatermarks<T>> watermarksPeriodic,
		SerializedValue<AssignerWithPunctuatedWatermarks<T>> watermarksPunctuated,
		ProcessingTimeService processingTimeProvider,
		long autoWatermarkInterval,
		ClassLoader userCodeClassLoader,
		String taskNameWithSubtasks,
		MetricGroup metricGroup,
		KeyedDeserializationSchema<T> deserializer,
		Properties kafkaProperties,
		long pollTimeout,
		boolean useMetrics) throws Exception {
	super(
			sourceContext,
			assignedPartitionsWithInitialOffsets,
			watermarksPeriodic,
			watermarksPunctuated,
			processingTimeProvider,
			autoWatermarkInterval,
			userCodeClassLoader,
			taskNameWithSubtasks,
			metricGroup,
			deserializer,
			kafkaProperties,
			pollTimeout,
			useMetrics);
}
 
Contributor: axbaretto | Project: flink | Lines: 30 | Source: Kafka010Fetcher.java


Example 9: createFetcher

import org.apache.flink.streaming.api.functions.AssignerWithPeriodicWatermarks; // import the required package/class
@Override
protected AbstractFetcher<T, ?> createFetcher(
		SourceContext<T> sourceContext,
		Map<KafkaTopicPartition, Long> assignedPartitionsWithInitialOffsets,
		SerializedValue<AssignerWithPeriodicWatermarks<T>> watermarksPeriodic,
		SerializedValue<AssignerWithPunctuatedWatermarks<T>> watermarksPunctuated,
		StreamingRuntimeContext runtimeContext,
		OffsetCommitMode offsetCommitMode) throws Exception {

	boolean useMetrics = !PropertiesUtil.getBoolean(properties, KEY_DISABLE_METRICS, false);

	// make sure that auto commit is disabled when our offset commit mode is ON_CHECKPOINTS;
	// this overwrites whatever setting the user configured in the properties
	if (offsetCommitMode == OffsetCommitMode.ON_CHECKPOINTS || offsetCommitMode == OffsetCommitMode.DISABLED) {
		properties.setProperty(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
	}

	return new Kafka010Fetcher<>(
			sourceContext,
			assignedPartitionsWithInitialOffsets,
			watermarksPeriodic,
			watermarksPunctuated,
			runtimeContext.getProcessingTimeService(),
			runtimeContext.getExecutionConfig().getAutoWatermarkInterval(),
			runtimeContext.getUserCodeClassLoader(),
			runtimeContext.getTaskNameWithSubtasks(),
			runtimeContext.getMetricGroup(),
			deserializer,
			properties,
			pollTimeout,
			useMetrics);
}
 
Contributor: axbaretto | Project: flink | Lines: 33 | Source: FlinkKafkaConsumer010.java


Example 10: KafkaTopicPartitionStateWithPeriodicWatermarks

import org.apache.flink.streaming.api.functions.AssignerWithPeriodicWatermarks; // import the required package/class
public KafkaTopicPartitionStateWithPeriodicWatermarks(
		KafkaTopicPartition partition, KPH kafkaPartitionHandle,
		AssignerWithPeriodicWatermarks<T> timestampsAndWatermarks) {
	super(partition, kafkaPartitionHandle);

	this.timestampsAndWatermarks = timestampsAndWatermarks;
	this.partitionWatermark = Long.MIN_VALUE;
}
 
Contributor: axbaretto | Project: flink | Lines: 9 | Source: KafkaTopicPartitionStateWithPeriodicWatermarks.java


Example 11: testPeriodicWatermarksWithNoSubscribedPartitionsShouldYieldNoWatermarks

import org.apache.flink.streaming.api.functions.AssignerWithPeriodicWatermarks; // import the required package/class
@Test
public void testPeriodicWatermarksWithNoSubscribedPartitionsShouldYieldNoWatermarks() throws Exception {
	final String testTopic = "test topic name";
	Map<KafkaTopicPartition, Long> originalPartitions = new HashMap<>();

	TestSourceContext<Long> sourceContext = new TestSourceContext<>();

	TestProcessingTimeService processingTimeProvider = new TestProcessingTimeService();

	TestFetcher<Long> fetcher = new TestFetcher<>(
		sourceContext,
		originalPartitions,
		new SerializedValue<AssignerWithPeriodicWatermarks<Long>>(new PeriodicTestExtractor()),
		null, /* punctuated watermarks assigner*/
		processingTimeProvider,
		10);

	processingTimeProvider.setCurrentTime(10);
	// no partitions; when the periodic watermark emitter fires, no watermark should be emitted
	assertFalse(sourceContext.hasWatermark());

	// counter-test that when the fetcher does actually have partitions,
	// when the periodic watermark emitter fires again, a watermark really is emitted
	fetcher.addDiscoveredPartitions(Collections.singletonList(new KafkaTopicPartition(testTopic, 0)));
	fetcher.emitRecord(100L, fetcher.subscribedPartitionStates().get(0), 3L);
	processingTimeProvider.setCurrentTime(20);
	assertEquals(100, sourceContext.getLatestWatermark().getTimestamp());
}
 
Contributor: axbaretto | Project: flink | Lines: 29 | Source: AbstractFetcherTest.java
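The two behaviors exercised by Example 11 — no watermark is emitted while no partitions are subscribed, and once partitions exist the emitted watermark is the minimum across per-partition watermarks — can be sketched standalone. The class and method names below are illustrative, not Flink's; the real logic sits in AbstractFetcher's periodic watermark emitter.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.OptionalLong;

// Illustrative sketch of the cross-partition minimum that the periodic
// watermark emitter computes; names here are hypothetical, not Flink's.
public class PeriodicWatermarkAggregator {
    private final List<Long> partitionWatermarks = new ArrayList<>();

    // A newly discovered partition starts at Long.MIN_VALUE.
    public void addPartition(long initialWatermark) {
        partitionWatermarks.add(initialWatermark);
    }

    // A record advances (never regresses) its partition's watermark.
    public void onRecord(int partitionIndex, long watermark) {
        partitionWatermarks.set(partitionIndex,
                Math.max(partitionWatermarks.get(partitionIndex), watermark));
    }

    // With no subscribed partitions there is nothing to emit -- matching
    // the first assertion in the test above.
    public OptionalLong currentWatermark() {
        return partitionWatermarks.stream().mapToLong(Long::longValue).min();
    }

    public static void main(String[] args) {
        PeriodicWatermarkAggregator agg = new PeriodicWatermarkAggregator();
        System.out.println(agg.currentWatermark().isPresent()); // prints false
        agg.addPartition(Long.MIN_VALUE);
        agg.onRecord(0, 100L);
        System.out.println(agg.currentWatermark().getAsLong()); // prints 100
    }
}
```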


Example 12: createFetcher

import org.apache.flink.streaming.api.functions.AssignerWithPeriodicWatermarks; // import the required package/class
@Override
protected AbstractFetcher<T, ?> createFetcher(
		SourceContext<T> sourceContext,
		Map<KafkaTopicPartition, Long> thisSubtaskPartitionsWithStartOffsets,
		SerializedValue<AssignerWithPeriodicWatermarks<T>> watermarksPeriodic,
		SerializedValue<AssignerWithPunctuatedWatermarks<T>> watermarksPunctuated,
		StreamingRuntimeContext runtimeContext,
		OffsetCommitMode offsetCommitMode) throws Exception {
	return fetcher;
}
 
Contributor: axbaretto | Project: flink | Lines: 11 | Source: FlinkKafkaConsumerBaseMigrationTest.java


Example 13: createFetcher

import org.apache.flink.streaming.api.functions.AssignerWithPeriodicWatermarks; // import the required package/class
@Override
@SuppressWarnings("unchecked")
protected AbstractFetcher<T, ?> createFetcher(
		SourceContext<T> sourceContext,
		Map<KafkaTopicPartition, Long> thisSubtaskPartitionsWithStartOffsets,
		SerializedValue<AssignerWithPeriodicWatermarks<T>> watermarksPeriodic,
		SerializedValue<AssignerWithPunctuatedWatermarks<T>> watermarksPunctuated,
		StreamingRuntimeContext runtimeContext,
		OffsetCommitMode offsetCommitMode) throws Exception {
	return this.testFetcher;
}
 
Contributor: axbaretto | Project: flink | Lines: 12 | Source: FlinkKafkaConsumerBaseTest.java


Example 14: createFetcher

import org.apache.flink.streaming.api.functions.AssignerWithPeriodicWatermarks; // import the required package/class
@Override
protected AbstractFetcher<T, ?> createFetcher(
		SourceContext<T> sourceContext,
		Map<KafkaTopicPartition, Long> thisSubtaskPartitionsWithStartOffsets,
		SerializedValue<AssignerWithPeriodicWatermarks<T>> watermarksPeriodic,
		SerializedValue<AssignerWithPunctuatedWatermarks<T>> watermarksPunctuated,
		StreamingRuntimeContext runtimeContext,
		OffsetCommitMode offsetCommitMode) throws Exception {
	return mock(AbstractFetcher.class);
}
 
Contributor: axbaretto | Project: flink | Lines: 11 | Source: FlinkKafkaConsumerBaseFrom11MigrationTest.java


Example 15: createFetcher

import org.apache.flink.streaming.api.functions.AssignerWithPeriodicWatermarks; // import the required package/class
@Override
protected AbstractFetcher<T, ?> createFetcher(
		SourceContext<T> sourceContext,
		List<KafkaTopicPartition> thisSubtaskPartitions,
		SerializedValue<AssignerWithPeriodicWatermarks<T>> watermarksPeriodic,
		SerializedValue<AssignerWithPunctuatedWatermarks<T>> watermarksPunctuated,
		StreamingRuntimeContext runtimeContext) throws Exception {

	boolean useMetrics = !Boolean.valueOf(properties.getProperty(KEY_DISABLE_METRICS, "false"));

	return new Kafka09Fetcher<>(
			sourceContext,
			thisSubtaskPartitions,
			watermarksPeriodic,
			watermarksPunctuated,
			runtimeContext.getProcessingTimeService(),
			runtimeContext.getExecutionConfig().getAutoWatermarkInterval(),
			runtimeContext.getUserCodeClassLoader(),
			runtimeContext.isCheckpointingEnabled(),
			runtimeContext.getTaskNameWithSubtasks(),
			runtimeContext.getMetricGroup(),
			deserializer,
			properties,
			pollTimeout,
			useMetrics);
}
 
Contributor: axbaretto | Project: flink | Lines: 28 | Source: FlinkKafkaConsumer09.java


Example 16: Kafka08Fetcher

import org.apache.flink.streaming.api.functions.AssignerWithPeriodicWatermarks; // import the required package/class
public Kafka08Fetcher(
		SourceContext<T> sourceContext,
		List<KafkaTopicPartition> assignedPartitions,
		SerializedValue<AssignerWithPeriodicWatermarks<T>> watermarksPeriodic,
		SerializedValue<AssignerWithPunctuatedWatermarks<T>> watermarksPunctuated,
		StreamingRuntimeContext runtimeContext,
		KeyedDeserializationSchema<T> deserializer,
		Properties kafkaProperties,
		long invalidOffsetBehavior,
		long autoCommitInterval,
		boolean useMetrics) throws Exception
{
	super(
			sourceContext,
			assignedPartitions,
			watermarksPeriodic,
			watermarksPunctuated,
			runtimeContext.getProcessingTimeService(),
			runtimeContext.getExecutionConfig().getAutoWatermarkInterval(),
			runtimeContext.getUserCodeClassLoader(),
			useMetrics);

	this.deserializer = checkNotNull(deserializer);
	this.kafkaConfig = checkNotNull(kafkaProperties);
	this.runtimeContext = runtimeContext;
	this.invalidOffsetBehavior = invalidOffsetBehavior;
	this.autoCommitInterval = autoCommitInterval;
	this.unassignedPartitionsQueue = new ClosableBlockingQueue<>();

	// initially, all these partitions are not assigned to a specific broker connection
	for (KafkaTopicPartitionState<TopicAndPartition> partition : subscribedPartitions()) {
		unassignedPartitionsQueue.add(partition);
	}
}
 
Contributor: axbaretto | Project: flink | Lines: 35 | Source: Kafka08Fetcher.java


Example 17: createFetcher

import org.apache.flink.streaming.api.functions.AssignerWithPeriodicWatermarks; // import the required package/class
@Override
protected AbstractFetcher<T, ?> createFetcher(
		SourceContext<T> sourceContext,
		List<KafkaTopicPartition> thisSubtaskPartitions,
		SerializedValue<AssignerWithPeriodicWatermarks<T>> watermarksPeriodic,
		SerializedValue<AssignerWithPunctuatedWatermarks<T>> watermarksPunctuated,
		StreamingRuntimeContext runtimeContext) throws Exception {

	boolean useMetrics = !Boolean.valueOf(kafkaProperties.getProperty(KEY_DISABLE_METRICS, "false"));

	return new Kafka08Fetcher<>(sourceContext, thisSubtaskPartitions,
			watermarksPeriodic, watermarksPunctuated,
			runtimeContext, deserializer, kafkaProperties,
			invalidOffsetBehavior, autoCommitInterval, useMetrics);
}
 
Contributor: axbaretto | Project: flink | Lines: 16 | Source: FlinkKafkaConsumer08.java


Example 18: Kafka010Fetcher

import org.apache.flink.streaming.api.functions.AssignerWithPeriodicWatermarks; // import the required package/class
public Kafka010Fetcher(
		SourceContext<T> sourceContext,
		List<KafkaTopicPartition> assignedPartitions,
		SerializedValue<AssignerWithPeriodicWatermarks<T>> watermarksPeriodic,
		SerializedValue<AssignerWithPunctuatedWatermarks<T>> watermarksPunctuated,
		ProcessingTimeService processingTimeProvider,
		long autoWatermarkInterval,
		ClassLoader userCodeClassLoader,
		boolean enableCheckpointing,
		String taskNameWithSubtasks,
		MetricGroup metricGroup,
		KeyedDeserializationSchema<T> deserializer,
		Properties kafkaProperties,
		long pollTimeout,
		boolean useMetrics) throws Exception
{
	super(
			sourceContext,
			assignedPartitions,
			watermarksPeriodic,
			watermarksPunctuated,
			processingTimeProvider,
			autoWatermarkInterval,
			userCodeClassLoader,
			enableCheckpointing,
			taskNameWithSubtasks,
			metricGroup,
			deserializer,
			kafkaProperties,
			pollTimeout,
			useMetrics);
}
 
Contributor: axbaretto | Project: flink | Lines: 33 | Source: Kafka010Fetcher.java


Example 19: KafkaTopicPartitionStateWithPeriodicWatermarks

import org.apache.flink.streaming.api.functions.AssignerWithPeriodicWatermarks; // import the required package/class
public KafkaTopicPartitionStateWithPeriodicWatermarks(
		KafkaTopicPartition partition, KPH kafkaPartitionHandle,
		AssignerWithPeriodicWatermarks<T> timestampsAndWatermarks)
{
	super(partition, kafkaPartitionHandle);
	
	this.timestampsAndWatermarks = timestampsAndWatermarks;
	this.partitionWatermark = Long.MIN_VALUE;
}
 
Contributor: axbaretto | Project: flink | Lines: 10 | Source: KafkaTopicPartitionStateWithPeriodicWatermarks.java


Example 20: TestFetcher

import org.apache.flink.streaming.api.functions.AssignerWithPeriodicWatermarks; // import the required package/class
protected TestFetcher(
		SourceContext<T> sourceContext,
		List<KafkaTopicPartition> assignedPartitions,
		SerializedValue<AssignerWithPeriodicWatermarks<T>> watermarksPeriodic,
		SerializedValue<AssignerWithPunctuatedWatermarks<T>> watermarksPunctuated,
		ProcessingTimeService processingTimeProvider,
		long autoWatermarkInterval) throws Exception
{
	super(sourceContext, assignedPartitions, watermarksPeriodic, watermarksPunctuated, processingTimeProvider, autoWatermarkInterval, TestFetcher.class.getClassLoader(), false);
}
 
Contributor: axbaretto | Project: flink | Lines: 11 | Source: AbstractFetcherTimestampsTest.java



Note: The org.apache.flink.streaming.api.functions.AssignerWithPeriodicWatermarks examples in this article were collected from open-source projects hosted on GitHub, MSDocs, and similar platforms. The snippets were selected from contributors' open-source code; copyright remains with the original authors. Consult each project's license before reusing or redistributing, and do not republish without permission.

