Java AvroSource Class Code Examples


This article collects typical usage examples of the Java class AvroSource from org.apache.flume.source. If you are wondering what the AvroSource class does and how to use it, the curated code examples below should help.



The AvroSource class belongs to the org.apache.flume.source package. Fourteen code examples of the class are shown below, ordered by popularity.
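
Before the individual examples, here is the common pattern most of them share, distilled into one self-contained sketch: configure an AvroSource with a bind address and port, attach it to a MemoryChannel through a ReplicatingChannelSelector, start it, then push an event to it with Flume's Avro RPC client and read the event back from the channel. This is a hedged sketch rather than code from any example below; the class name AvroSourceQuickStart and the port 41414 are illustrative choices.

import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

import org.apache.flume.Channel;
import org.apache.flume.ChannelSelector;
import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.Transaction;
import org.apache.flume.api.RpcClient;
import org.apache.flume.api.RpcClientFactory;
import org.apache.flume.channel.ChannelProcessor;
import org.apache.flume.channel.MemoryChannel;
import org.apache.flume.channel.ReplicatingChannelSelector;
import org.apache.flume.conf.Configurables;
import org.apache.flume.event.EventBuilder;
import org.apache.flume.source.AvroSource;

public class AvroSourceQuickStart {

  public static void main(String[] args) throws Exception {
    // Channel that buffers events received by the source.
    Channel channel = new MemoryChannel();
    Configurables.configure(channel, new Context());

    // Bind the Avro RPC server to localhost:41414 (port chosen for illustration).
    AvroSource source = new AvroSource();
    Context context = new Context();
    context.put("bind", "localhost");
    context.put("port", "41414");
    Configurables.configure(source, context);

    // Replicate every incoming event to the single channel.
    List<Channel> channels = new ArrayList<>();
    channels.add(channel);
    ChannelSelector selector = new ReplicatingChannelSelector();
    selector.setChannels(channels);
    source.setChannelProcessor(new ChannelProcessor(selector));
    source.start();

    // Send one event over Avro RPC, exactly as a remote Flume agent or appender would.
    RpcClient client = RpcClientFactory.getDefaultInstance("localhost", 41414);
    client.append(EventBuilder.withBody("hello avro source", StandardCharsets.UTF_8));
    client.close();

    // Read the event back out of the channel inside a transaction.
    Transaction tx = channel.getTransaction();
    tx.begin();
    Event event = channel.take();
    System.out.println(new String(event.getBody(), StandardCharsets.UTF_8));
    tx.commit();
    tx.close();

    source.stop();
  }
}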

Example 1: initiate

import org.apache.flume.source.AvroSource; // import the required package/class
@Before
public void initiate() throws Exception {
  int port = 25430;
  source = new AvroSource();
  ch = new MemoryChannel();
  Configurables.configure(ch, new Context());

  Context context = new Context();
  context.put("port", String.valueOf(port));
  context.put("bind", "localhost");
  Configurables.configure(source, context);

  File TESTFILE = new File(
      TestLog4jAppender.class.getClassLoader()
          .getResource("flume-log4jtest.properties").getFile());
  FileReader reader = new FileReader(TESTFILE);
  props = new Properties();
  props.load(reader);
  reader.close();
}
 
Developer: moueimei | Project: flume-release-1.7.0 | Lines: 21 | Source: TestLog4jAppender.java
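
The setup above only loads flume-log4jtest.properties; it does not show how those properties drive the AvroSource. As a hedged sketch (not part of the original test), the loaded properties would typically be passed to Log4j's PropertyConfigurator so that the Flume Log4jAppender declared in the file forwards log events to the port the source listens on. The appender name "flume" below is hypothetical; use whatever name the properties file actually declares.

import java.util.Properties;

import org.apache.log4j.Logger;
import org.apache.log4j.PropertyConfigurator;

public class Log4jAppenderSketch {

  // `props` is the Properties object loaded in the example above.
  // The appender name "flume" is hypothetical; match the name declared in flume-log4jtest.properties.
  public static void sendViaLog4j(Properties props, int port) {
    // Point the Flume Log4jAppender at the port the AvroSource is bound to.
    props.setProperty("log4j.appender.flume.Port", String.valueOf(port));
    PropertyConfigurator.configure(props);

    // Anything logged now travels over Avro RPC to the AvroSource.
    Logger.getLogger(Log4jAppenderSketch.class).info("test message routed to the AvroSource");
  }
}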


Example 2: setUp

import org.apache.flume.source.AvroSource; // import the required package/class
@Before
@Override
public void setUp() throws Exception {
  super.setUp();
  //setup flume to write to
  source = new AvroSource();
  ch = new MemoryChannel();
  Configurables.configure(ch, new Context());

  Context context = new Context();
  //This should match whats present in the pipeline.json file
  context.put("port", String.valueOf(9050));
  context.put("bind", "localhost");
  Configurables.configure(source, context);

  List<Channel> channels = new ArrayList<>();
  channels.add(ch);
  ChannelSelector rcs = new ReplicatingChannelSelector();
  rcs.setChannels(channels);
  source.setChannelProcessor(new ChannelProcessor(rcs));
  source.start();
}
 
Developer: streamsets | Project: datacollector | Lines: 23 | Source: FlumeDestinationPipelineRunIT.java


Example 3: setUp

import org.apache.flume.source.AvroSource; // import the required package/class
@Before
public void setUp() throws Exception {
  port = NetworkUtils.getRandomPort();
  source = new AvroSource();
  ch = new MemoryChannel();
  Configurables.configure(ch, new Context());

  Context context = new Context();
  context.put("port", String.valueOf(port));
  context.put("bind", "localhost");
  Configurables.configure(source, context);

  List<Channel> channels = new ArrayList<>();
  channels.add(ch);
  ChannelSelector rcs = new ReplicatingChannelSelector();
  rcs.setChannels(channels);
  source.setChannelProcessor(new ChannelProcessor(rcs));
  source.start();
}
 
Developer: streamsets | Project: datacollector | Lines: 20 | Source: TestFlumeFailoverTarget.java
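
Examples 2 and 3 only start the source; the tests then verify delivery by taking events back out of the MemoryChannel inside a transaction (example 7 below shows the same idea as a drain loop). A minimal helper sketch of that read path; the class and method names are illustrative, not taken from the tests.

import org.apache.flume.Channel;
import org.apache.flume.Event;
import org.apache.flume.Transaction;

public final class ChannelReads {

  // Takes one event from the channel inside a transaction; returns null if the channel is empty.
  public static Event takeOne(Channel ch) {
    Transaction tx = ch.getTransaction();
    tx.begin();
    try {
      Event event = ch.take();
      tx.commit();
      return event;
    } catch (RuntimeException e) {
      tx.rollback();
      throw e;
    } finally {
      tx.close();
    }
  }
}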


Example 4: setUp

import org.apache.flume.source.AvroSource; // import the required package/class
@BeforeClass
public static void setUp() throws IOException {
  int dataNodes = 1;
  int port = 29999;
  JobConf conf = new JobConf();
  channel = new MemoryChannel();
  conf.set("dfs.block.access.token.enable", "false");
  conf.set("dfs.permissions", "true");
  conf.set("hadoop.security.authentication", "simple");
  conf.set("fs.default.name", "hdfs://localhost:29999");
  dfsCluster = new MiniDFSCluster(port, conf, dataNodes, true, true, null, null);
  fileSystem = dfsCluster.getFileSystem();
  fileSystem.delete(new Path("/logs"), true);
  source = new AvroSource();
  sink = new KaaHdfsSink();
  logSchemasRootDir = new File("schemas");
  if (logSchemasRootDir.exists()) {
    logSchemasRootDir.delete();
  }
  prepareSchema(logSchemasRootDir);
}
 
Developer: kaaproject | Project: kaa | Lines: 22 | Source: TestKaaHdfsSink.java


Example 5: setUp

import org.apache.flume.source.AvroSource; // import the required package/class
@Before
public void setUp() throws Exception {
  URL schemaUrl = getClass().getClassLoader().getResource("myrecord.avsc");
  Files.copy(Resources.newInputStreamSupplier(schemaUrl),
      new File("/tmp/myrecord.avsc"));

  int port = 25430;
  source = new AvroSource();
  ch = new MemoryChannel();
  Configurables.configure(ch, new Context());

  Context context = new Context();
  context.put("port", String.valueOf(port));
  context.put("bind", "localhost");
  Configurables.configure(source, context);

  List<Channel> channels = new ArrayList<Channel>();
  channels.add(ch);

  ChannelSelector rcs = new ReplicatingChannelSelector();
  rcs.setChannels(channels);

  source.setChannelProcessor(new ChannelProcessor(rcs));

  source.start();
}
 
Developer: cloudera | Project: cdk | Lines: 27 | Source: TestLog4jAppenderWithAvro.java


Example 6: beforeClass

import org.apache.flume.source.AvroSource; // import the required package/class
@BeforeClass
public static void beforeClass() throws Exception {
  //setup kafka to read from
  KafkaTestUtil.startZookeeper();
  KafkaTestUtil.startKafkaBrokers(1);
  KafkaTestUtil.createTopic(TOPIC, 1, 1);
  producer = KafkaTestUtil.createProducer(KafkaTestUtil.getMetadataBrokerURI(), true);
  produceRecords(RECORDS_PRODUCED);

  //setup flume to write to
  source = new AvroSource();
  ch = new MemoryChannel();
  Configurables.configure(ch, new Context());

  Context context = new Context();
  //This should match whats present in the pipeline.json file
  flumePort = TestUtil.getFreePort();
  context.put("port", String.valueOf(flumePort));
  context.put("bind", "localhost");
  Configurables.configure(source, context);

  List<Channel> channels = new ArrayList<>();
  channels.add(ch);
  ChannelSelector rcs = new ReplicatingChannelSelector();
  rcs.setChannels(channels);
  source.setChannelProcessor(new ChannelProcessor(rcs));
  source.start();

  //setup Cluster and start pipeline
  ClusterUtil.setupCluster(TEST_NAME, getPipelineJson(), new YarnConfiguration());
  serverURI = ClusterUtil.getServerURI();
  miniSDC = ClusterUtil.getMiniSDC();
}
 
Developer: streamsets | Project: datacollector | Lines: 34 | Source: NoBasicLibIT.java


Example 7: beforeClass

import org.apache.flume.source.AvroSource; // import the required package/class
@BeforeClass
public static void beforeClass() throws Exception {
  //setup flume to write to
  source = new AvroSource();
  ch = new MemoryChannel();
  Configurables.configure(ch, new Context());

  Context context = new Context();
  //This should match whats present in the pipeline.json file
  context.put("port", String.valueOf(9050));
  context.put("bind", "localhost");
  Configurables.configure(source, context);

  List<Channel> channels = new ArrayList<>();
  channels.add(ch);
  ChannelSelector rcs = new ReplicatingChannelSelector();
  rcs.setChannels(channels);
  source.setChannelProcessor(new ChannelProcessor(rcs));
  source.start();
  PipelineOperationsStandaloneIT.beforeClass(getPipelineJson());

  //read from flume memory channel every second, otherwise the channel fills up and there is no more data coming in
  executorService = Executors.newSingleThreadExecutor();
  executorService.submit(new Runnable() {
    @Override
    public void run() {
      Transaction transaction;
      while (true) {
        transaction = ch.getTransaction();
        transaction.begin();
        ch.take();
        transaction.commit();
        transaction.close();
      }
    }
  });
}
 
Developer: streamsets | Project: datacollector | Lines: 38 | Source: FlumeDestinationPipelineOperationsIT.java


Example 8: setUp

import org.apache.flume.source.AvroSource; // import the required package/class
@Before
public void setUp() {
  sources = new ArrayList<>(NUM_HOSTS);
  chs = new ArrayList<>(NUM_HOSTS);

  for(int i = 0; i < NUM_HOSTS; i++) {
    AvroSource source = new AvroSource();
    Channel channel = new MemoryChannel();
    Configurables.configure(channel, new Context());

    Context context = new Context();
    context.put("port", String.valueOf(ports.get(i)));
    context.put("bind", "localhost");
    Configurables.configure(source, context);

    List<Channel> channels = new ArrayList<>();
    channels.add(channel);
    ChannelSelector rcs = new ReplicatingChannelSelector();
    rcs.setChannels(channels);
    source.setChannelProcessor(new ChannelProcessor(rcs));

    sources.add(source);
    chs.add(channel);

    source.start();
  }

}
 
Developer: streamsets | Project: datacollector | Lines: 29 | Source: TestFlumeLoadBalancingTarget.java


Example 9: setUp

import org.apache.flume.source.AvroSource; // import the required package/class
@Before
public void setUp() throws Exception {
    eventSource = new AvroSource();
    channel = new MemoryChannel();

    Configurables.configure(channel, new Context());

    avroLogger = (Logger) LogManager.getLogger("avrologger");
    /*
     * Clear out all other appenders associated with this logger to ensure
     * we're only hitting the Avro appender.
     */
    removeAppenders(avroLogger);
    final Context context = new Context();
    testPort = String.valueOf(testServerPort);
    context.put("port", testPort);
    context.put("bind", "0.0.0.0");
    Configurables.configure(eventSource, context);

    final List<Channel> channels = new ArrayList<Channel>();
    channels.add(channel);

    final ChannelSelector cs = new ReplicatingChannelSelector();
    cs.setChannels(channels);

    eventSource.setChannelProcessor(new ChannelProcessor(cs));

    eventSource.start();

    Assert.assertTrue("Reached start or error", LifecycleController
            .waitForOneOf(eventSource, LifecycleState.START_OR_ERROR));
    Assert.assertEquals("Server is started", LifecycleState.START,
            eventSource.getLifecycleState());
}
 
Developer: OuZhencong | Project: log4j2 | Lines: 35 | Source: FlumeAppenderTest.java


Example 10: initiate

import org.apache.flume.source.AvroSource; // import the required package/class
@Before
public void initiate() throws Exception{
  int port = 25430;
  source = new AvroSource();
  ch = new MemoryChannel();
  Configurables.configure(ch, new Context());

  Context context = new Context();
  context.put("port", String.valueOf(port));
  context.put("bind", "localhost");
  Configurables.configure(source, context);

  List<Channel> channels = new ArrayList<Channel>();
  channels.add(ch);

  ChannelSelector rcs = new ReplicatingChannelSelector();
  rcs.setChannels(channels);

  source.setChannelProcessor(new ChannelProcessor(rcs));

  source.start();
  File TESTFILE = new File(
      TestLog4jAppender.class.getClassLoader()
          .getResource("flume-log4jtest.properties").getFile());
  FileReader reader = new FileReader(TESTFILE);
  props = new Properties();
  props.load(reader);
  reader.close();
}
 
Developer: cloudera | Project: cdk | Lines: 30 | Source: TestLog4jAppender.java


Example 11: beforeEachTest

import org.apache.flume.source.AvroSource; // import the required package/class
@Before
public void beforeEachTest() throws Exception {
  eventSource = new AvroSource();
  channel = new MemoryChannel();
  Configurables.configure(channel, new Context());
  avroLogger = ctx.getLogger("avrologger");
  /*
   * Clear out all other appenders associated with this logger to ensure we're
   * only hitting the Avro appender.
   */
  avroLogger.detachAndStopAllAppenders();
  final Context context = new Context();
  testPort = String.valueOf(testServerPort);
  context.put("port", testPort);
  context.put("bind", "0.0.0.0");
  Configurables.configure(eventSource, context);
  final List<Channel> channels = new ArrayList<Channel>();
  channels.add(channel);
  final ChannelSelector cs = new ReplicatingChannelSelector();
  cs.setChannels(channels);
  eventSource.setChannelProcessor(new ChannelProcessor(cs));
  eventSource.start();
  assertThat(
    "Reached start or error",
    LifecycleController.waitForOneOf(eventSource, LifecycleState.START_OR_ERROR),
    is(true)
  );
  assertThat("Server is started", eventSource.getLifecycleState(), equalTo(LifecycleState.START));
}
 
Developer: jopecko | Project: logback-flume-ng | Lines: 30 | Source: FlumeAppenderTest.java


Example 12: setUp

import org.apache.flume.source.AvroSource; // import the required package/class
@Before
public void setUp() throws Exception {
    eventSource = new AvroSource();
    channel = new MemoryChannel();

    Configurables.configure(channel, new Context());

    avroLogger = (Logger) LogManager.getLogger("avrologger");
    /*
     * Clear out all other appenders associated with this logger to ensure
     * we're only hitting the Avro appender.
     */
    removeAppenders(avroLogger);
    final Context context = new Context();
    testPort = String.valueOf(AvailablePortFinder.getNextAvailable());
    context.put("port", testPort);
    context.put("bind", "0.0.0.0");
    Configurables.configure(eventSource, context);

    final List<Channel> channels = new ArrayList<>();
    channels.add(channel);

    final ChannelSelector cs = new ReplicatingChannelSelector();
    cs.setChannels(channels);

    eventSource.setChannelProcessor(new ChannelProcessor(cs));

    eventSource.start();

    Assert.assertTrue("Reached start or error", LifecycleController
            .waitForOneOf(eventSource, LifecycleState.START_OR_ERROR));
    Assert.assertEquals("Server is started", LifecycleState.START,
            eventSource.getLifecycleState());
}
 
Developer: apache | Project: logging-log4j2 | Lines: 35 | Source: FlumeAppenderTest.java


Example 13: createAvroSourceWithSelectorHDFSAndESSinks

import org.apache.flume.source.AvroSource; // import the required package/class
private void createAvroSourceWithSelectorHDFSAndESSinks() {
	Channel ESChannel = flumeESSinkService.getChannel();
	Channel HDFSChannel = flumeHDFSSinkService.getChannel();
	Channel HbaseChannel = flumeHbaseSinkService.getChannel();

	final Map<String, String> properties = new HashMap<String, String>();
	properties.put("type", "avro");
	properties.put("bind", "localhost");
	properties.put("port", "44444");

	avroSource = new AvroSource();
	avroSource.setName("AvroSource-" + UUID.randomUUID());
	Context sourceContext = new Context(properties);
	avroSource.configure(sourceContext);
	ChannelSelector selector = new MultiplexingChannelSelector();
	List<Channel> channels = new ArrayList<>();
	channels.add(ESChannel);
	channels.add(HDFSChannel);
	channels.add(sparkAvroChannel);
	channels.add(HbaseChannel);
	selector.setChannels(channels);
	final Map<String, String> selectorProperties = new HashMap<String, String>();
	selectorProperties.put("type", "multiplexing");
	selectorProperties.put("header", "State");
	// selectorProperties.put("mapping.VIEWED", HDFSChannel.getName() + " "
	// + ESChannel.getName());
	// selectorProperties.put("mapping.FAVOURITE", HDFSChannel.getName() +
	// " "
	// + ESChannel.getName());
	// selectorProperties.put("default", HDFSChannel.getName());
	// In case spark avro sink is used.
	selectorProperties.put("mapping.VIEWED", HDFSChannel.getName() + " "
			+ ESChannel.getName() + " " + sparkAvroChannel.getName() + " "
			+ HbaseChannel.getName());
	selectorProperties.put("mapping.FAVOURITE", HDFSChannel.getName() + " "
			+ ESChannel.getName() + " " + sparkAvroChannel.getName() + " "
			+ HbaseChannel.getName());
	selectorProperties.put("default", HDFSChannel.getName() + " "
			+ sparkAvroChannel.getName() + " " + HbaseChannel.getName());
	Context selectorContext = new Context(selectorProperties);
	selector.configure(selectorContext);
	ChannelProcessor cp = new ChannelProcessor(selector);
	avroSource.setChannelProcessor(cp);

	avroSource.start();
}
 
Developer: jaibeermalik | Project: searchanalytics-bigdata | Lines: 47 | Source: FlumeAgentServiceImpl.java


Example 14: createAvroSourceWithLocalFileRollingSink

import org.apache.flume.source.AvroSource; // import the required package/class
@SuppressWarnings("unused")
private void createAvroSourceWithLocalFileRollingSink() {
	channel = new MemoryChannel();
	String channelName = "AvroSourceMemoryChannel-" + UUID.randomUUID();
	channel.setName(channelName);

	sink = new RollingFileSink();
	sink.setName("RollingFileSink-" + UUID.randomUUID());
	Map<String, String> paramters = new HashMap<>();
	paramters.put("type", "file_roll");
	paramters.put("sink.directory", "target/flumefilelog");
	Context sinkContext = new Context(paramters);
	sink.configure(sinkContext);
	Configurables.configure(channel, sinkContext);
	sink.setChannel(channel);

	final Map<String, String> properties = new HashMap<String, String>();
	properties.put("type", "avro");
	properties.put("bind", "localhost");
	properties.put("port", "44444");
	properties.put("selector.type", "multiplexing");
	properties.put("selector.header", "State");
	properties.put("selector.mapping.VIEWED", channelName);
	properties.put("selector.mapping.default", channelName);

	avroSource = new AvroSource();
	avroSource.setName("AvroSource-" + UUID.randomUUID());
	Context sourceContext = new Context(properties);
	avroSource.configure(sourceContext);
	ChannelSelector selector = new MultiplexingChannelSelector();
	List<Channel> channels = new ArrayList<>();
	channels.add(channel);
	selector.setChannels(channels);
	final Map<String, String> selectorProperties = new HashMap<String, String>();
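	// Note: the next put() targets `properties` (the source config), not `selectorProperties`;
	// as written, the selector below is configured from an empty map.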
	properties.put("default", channelName);
	Context selectorContext = new Context(selectorProperties);
	selector.configure(selectorContext);
	ChannelProcessor cp = new ChannelProcessor(selector);
	avroSource.setChannelProcessor(cp);

	sink.start();
	channel.start();
	avroSource.start();
}
 
Developer: jaibeermalik | Project: searchanalytics-bigdata | Lines: 45 | Source: FlumeAgentServiceImpl.java



Note: The org.apache.flume.source.AvroSource examples in this article were collected from open-source projects hosted on GitHub, MSDocs, and similar source and documentation platforms. The code snippets come from projects contributed by open-source developers; copyright remains with the original authors, so consult each project's license before redistributing or reusing the code. Do not reproduce this article without permission.

