
Java LoggingMetricsConsumer Class Code Examples


This article collects typical usage examples of the Java class backtype.storm.metric.LoggingMetricsConsumer. If you are unsure what LoggingMetricsConsumer does or how to use it in practice, the class code examples selected below should help.



LoggingMetricsConsumer belongs to the backtype.storm.metric package. Six code examples of the class are shown below, ordered by popularity by default.
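Before the individual examples, here is a minimal, self-contained sketch of the typical wiring. It is an illustration written for this article, not taken from any of the projects below, and it assumes a Storm 0.9.x/0.10.x classpath where backtype.storm.testing.TestWordSpout is available:

import backtype.storm.Config;
import backtype.storm.LocalCluster;
import backtype.storm.metric.LoggingMetricsConsumer;
import backtype.storm.testing.TestWordSpout;
import backtype.storm.topology.TopologyBuilder;

public class LoggingMetricsConsumerDemo {
    public static void main(String[] args) {
        TopologyBuilder builder = new TopologyBuilder();
        // Any component will do; TestWordSpout ships with storm-core's test utilities.
        builder.setSpout("words", new TestWordSpout());

        Config conf = new Config();
        // Register the built-in consumer; Storm adds a metrics-consumer bolt
        // (parallelism 1 here) that writes every metrics data point it receives
        // to the worker's metrics log.
        conf.registerMetricsConsumer(LoggingMetricsConsumer.class, 1);

        LocalCluster cluster = new LocalCluster();
        cluster.submitTopology("metrics-demo", conf, builder.createTopology());
    }
}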

Example 1: run

import backtype.storm.metric.LoggingMetricsConsumer; // import the required package/class
@Override
protected int run(String[] args) {
    TopologyBuilder builder = new TopologyBuilder();

    // use a SQL table for storing the status of the URLs

    builder.setSpout("spout", new SQLSpout());

    builder.setBolt("partitioner", new URLPartitionerBolt())
            .shuffleGrouping("spout");

    builder.setBolt("fetch", new SimpleFetcherBolt()).fieldsGrouping(
            "partitioner", new Fields("key"));

    builder.setBolt("sitemap", new SiteMapParserBolt())
            .localOrShuffleGrouping("fetch");

    builder.setBolt("parse", new ParserBolt()).localOrShuffleGrouping(
            "sitemap");

    builder.setBolt("switch", new StatusStreamBolt())
            .localOrShuffleGrouping("parse");

    builder.setBolt("index", new CloudSearchIndexerBolt())
            .localOrShuffleGrouping("parse");

    builder.setBolt("status", new StatusUpdaterBolt())
            .localOrShuffleGrouping("fetch", Constants.StatusStreamName)
            .localOrShuffleGrouping("sitemap", Constants.StatusStreamName)
            .localOrShuffleGrouping("switch", Constants.StatusStreamName)
            .localOrShuffleGrouping("parse", Constants.StatusStreamName);

    conf.registerMetricsConsumer(LoggingMetricsConsumer.class);

    return submit("crawl", conf, builder);
}
 
Developer: DigitalPebble, Project: tescobank, Lines of code: 37, Source file: CrawlTopology.java
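Example 1 registers the consumer but does not show any component producing custom metrics. As a hedged sketch (not part of the DigitalPebble project above), a bolt can register a CountMetric through the TopologyContext; at each report interval, the registered LoggingMetricsConsumer receives the count and writes it to the metrics log:

import java.util.Map;

import backtype.storm.metric.api.CountMetric;
import backtype.storm.task.OutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichBolt;
import backtype.storm.tuple.Tuple;

// Hypothetical bolt used only for illustration: it registers a CountMetric
// so that a registered LoggingMetricsConsumer has a data point to log.
public class CountingBolt extends BaseRichBolt {
    private transient CountMetric processedCounter;
    private OutputCollector collector;

    @Override
    public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
        // Report the counter every 60 seconds; each report is what the
        // metrics consumer receives and writes out.
        processedCounter = context.registerMetric("processed_tuples", new CountMetric(), 60);
    }

    @Override
    public void execute(Tuple tuple) {
        processedCounter.incr();
        collector.ack(tuple);
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // no output stream
    }
}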


Example 2: configureStorm

import backtype.storm.metric.LoggingMetricsConsumer; // import the required package/class
protected void configureStorm(Configuration conf, Config stormConf) throws IllegalAccessException {
  stormConf.registerSerialization(LogRecord.class);
  //stormConf.registerSerialization(Entity.class);
  stormConf.registerMetricsConsumer(LoggingMetricsConsumer.class);

  for (Iterator<String> iter = conf.getKeys(); iter.hasNext(); ) {
    String key = iter.next();

    String keyString = key.toString();
    String cleanedKey = keyString.replaceAll("\\.\\.", ".");

    String schemaFieldName = cleanedKey.replaceAll("\\.", "_").toUpperCase() + "_SCHEMA";
    Field field = FieldUtils.getField(Config.class, schemaFieldName);
    Object fieldObject = field.get(null);

    if (fieldObject == Boolean.class)
      stormConf.put(cleanedKey, conf.getBoolean(keyString));
    else if (fieldObject == String.class)
      stormConf.put(cleanedKey, conf.getString(keyString));
    else if (fieldObject == ConfigValidation.DoubleValidator)
      stormConf.put(cleanedKey, conf.getDouble(keyString));
    else if (fieldObject == ConfigValidation.IntegerValidator)
      stormConf.put(cleanedKey, conf.getInt(keyString));
    else if (fieldObject == ConfigValidation.PowerOf2Validator)
      stormConf.put(cleanedKey, conf.getLong(keyString));
    else if (fieldObject == ConfigValidation.StringOrStringListValidator)
      stormConf.put(cleanedKey, Arrays.asList(conf.getStringArray(keyString)));
    else if (fieldObject == ConfigValidation.StringsValidator)
      stormConf.put(cleanedKey, Arrays.asList(conf.getStringArray(keyString)));
    else {
      logger.error("{} cannot be configured from XML. Consider configuring it in the native Storm configuration.", cleanedKey);
      throw new UnsupportedOperationException(cleanedKey + " cannot be configured from XML");
    }
  }
}
 
Developer: boozallen, Project: cognition, Lines of code: 36, Source file: ConfigurableIngestTopology.java


Example 3: createConfig

import backtype.storm.metric.LoggingMetricsConsumer; // import the required package/class
private static Config createConfig(Boolean debug) {
  int NUM_WORKERS = 1;
  int NUM_ACKERS = NUM_WORKERS;
  int TIMEOUT = 30;

  Config config = new Config();
  config.setDebug(debug);
  config.setMessageTimeoutSecs(TIMEOUT);
  config.setNumWorkers(NUM_WORKERS);
  config.setNumAckers(NUM_ACKERS);
  config.registerMetricsConsumer(LoggingMetricsConsumer.class, 1);

  return config;
}
 
Developer: Storm-Applied, Project: C6-Flash-sale-recommender, Lines of code: 15, Source file: RemoteTopologyRunner.java
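Note on Example 3: the second argument to registerMetricsConsumer (here 1) is the parallelism hint for the metrics-consumer bolt that Storm adds to the topology; with a single worker configured, one consumer task is enough.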


Example 4: main

import backtype.storm.metric.LoggingMetricsConsumer; // import the required package/class
public static void main(String[] args) {
  Config config = new Config();
  config.setDebug(true);
  config.registerMetricsConsumer(LoggingMetricsConsumer.class, 1);

  LocalCluster localCluster = new LocalCluster();
  localCluster.submitTopology("flash-sale-recommender", config, FlashSaleTopologyBuilder.build());
}
 
Developer: Storm-Applied, Project: C6-Flash-sale-recommender, Lines of code: 9, Source file: LocalTopologyRunner.java
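Note on Example 4: with LocalCluster the consumer runs inside the same JVM, so its output appears in the local process log. In a realistic local test you would typically sleep for a while and then call localCluster.killTopology("flash-sale-recommender") and localCluster.shutdown() so the JVM exits cleanly.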


Example 5: main

import backtype.storm.metric.LoggingMetricsConsumer; // import the required package/class
public static void main(String... argv) throws Exception {
    CommandLine cli = OutlierOptions.parse(new PosixParser(), argv);
    DataPointExtractorConfig extractorConfig = JSONUtil.INSTANCE.load(new FileInputStream(new File(OutlierOptions.EXTRACTOR_CONFIG.get(cli)))
                                                                     , DataPointExtractorConfig.class
                                                                     );
    com.caseystella.analytics.outlier.streaming.OutlierConfig streamingOutlierConfig = JSONUtil.INSTANCE.load(new FileInputStream(new File(OutlierOptions.STREAM_OUTLIER_CONFIG.get(cli)))
                                                                     , com.caseystella.analytics.outlier.streaming.OutlierConfig.class
                                                                     );

    PersistenceConfig persistenceConfig = JSONUtil.INSTANCE.load(new FileInputStream(new File(OutlierOptions.TIMESERIES_DB_CONFIG.get(cli)))
                                                                     , PersistenceConfig.class
                                                                     );
    int numSpouts = 1;
    int numWorkers = 10;
    if(OutlierOptions.NUM_WORKERS.has(cli)) {
        numWorkers = Integer.parseInt(OutlierOptions.NUM_WORKERS.get(cli));
    }
    if(OutlierOptions.NUM_SPOUTS.has(cli)) {
        numSpouts = Integer.parseInt(OutlierOptions.NUM_SPOUTS.get(cli));
    }
    Map clusterConf = Utils.readStormConfig();
    clusterConf.put("topology.max.spout.pending", 100);
    Config config = new Config();
    config.put("topology.max.spout.pending", 100);
    config.setNumWorkers(numWorkers);
    config.registerMetricsConsumer(LoggingMetricsConsumer.class);

    String topicName = OutlierOptions.TOPIC.get(cli);
    String topologyName = "streaming_outliers_" + topicName;
    String zkConnectString = OutlierOptions.ZK_QUORUM.get(cli);
    /*DataPointExtractorConfig extractorConfig
                                            , com.caseystella.analytics.outlier.streaming.OutlierConfig streamingOutlierConfig
                                            , com.caseystella.analytics.outlier.batch.OutlierConfig batchOutlierConfig
                                            , PersistenceConfig persistenceConfig
                                            , String kafkaTopic
                                            , String zkQuorum
                                            , int numWorkers*/
    boolean startAtBeginning = OutlierOptions.FROM_BEGINNING.has(cli);
    TopologyBuilder topology = createTopology( extractorConfig
                                             , streamingOutlierConfig
                                             , persistenceConfig
                                             , topicName
                                             , zkConnectString
                                             , OutlierOptions.ES_NODE.get(cli)
                                             , numWorkers
                                             , numSpouts
                                             , OutlierOptions.NUM_INDEXING_WORKERS.has(cli)?
                                               Integer.parseInt(OutlierOptions.NUM_INDEXING_WORKERS.get(cli)):
                                               5
                                             , OutlierOptions.INDEX.has(cli)?
                                               OutlierOptions.INDEX.get(cli):
                                               "{source}/outlier"
                                             , startAtBeginning
                                             );
    StormSubmitter.submitTopologyWithProgressBar( topologyName, clusterConf, topology.createTopology());
    //Nimbus.Client client = NimbusClient.getConfiguredClient(clusterConf).getClient();
}
 
Developer: cestella, Project: streaming_outliers, Lines of code: 58, Source file: Topology.java


Example 6: run

import backtype.storm.metric.LoggingMetricsConsumer; // import the required package/class
@Override
protected int run(String[] args) {
    TopologyBuilder builder = new TopologyBuilder();

    builder.setSpout("spout", new RandomURLSpout());

    builder.setSpout("spoutAlfresco", new AlfrescoSpout());

    builder.setBolt("process", new ProcessNodes())
            .shuffleGrouping("spoutAlfresco");

    builder.setBolt("partitioner", new URLPartitionerBolt())
            .shuffleGrouping("spout");

    builder.setBolt("fetch", new FetcherBolt()).fieldsGrouping(
            "partitioner", new Fields("key"));

    builder.setBolt("sitemap", new SiteMapParserBolt())
            .localOrShuffleGrouping("fetch");

    builder.setBolt("parse", new JSoupParserBolt()).localOrShuffleGrouping(
            "sitemap");

    builder.setBolt("switch", new StatusStreamBolt())
            .localOrShuffleGrouping("parse");

    builder.setBolt("index", new IndexerBolt()).localOrShuffleGrouping(
            "parse");


    builder.setBolt("status", new PrinterBolt())
            .localOrShuffleGrouping("fetch", Constants.StatusStreamName)
            .localOrShuffleGrouping("sitemap", Constants.StatusStreamName)
            .localOrShuffleGrouping("process", Constants.StatusStreamName)
            .localOrShuffleGrouping("switch", Constants.StatusStreamName)
            .localOrShuffleGrouping("parse", Constants.StatusStreamName);

    conf.registerMetricsConsumer(LoggingMetricsConsumer.class);

    return submit("crawl", conf, builder);
}
 
Developer: zaizi, Project: alfresco-apache-storm-demo, Lines of code: 42, Source file: CrawlTopology.java



Note: The backtype.storm.metric.LoggingMetricsConsumer examples in this article were collected from open-source projects hosted on GitHub, MSDocs, and similar code and documentation platforms. The snippets were contributed by the authors of those projects, and copyright remains with them; please consult each project's license before redistributing or reusing the code. Do not republish without permission.

