
Java Split Class Code Examples


This article collects and organizes typical usage examples of the Java class storm.trident.testing.Split. If you have been wondering what the Split class does, how to use it, or where to find usage examples, the curated code samples below should help.



The Split class belongs to the storm.trident.testing package. A total of 16 code examples of the Split class are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Java code samples.
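
For reference, Split is a small helper function shipped with Storm's Trident test utilities: it tokenizes a string field on spaces and emits one tuple per word. Below is a minimal sketch of the class, matching the behavior every example in this article relies on (the exact source may vary slightly between Storm versions):

import backtype.storm.tuple.Values;
import storm.trident.operation.BaseFunction;
import storm.trident.operation.TridentCollector;
import storm.trident.tuple.TridentTuple;

public class Split extends BaseFunction {
    @Override
    public void execute(TridentTuple tuple, TridentCollector collector) {
        // Split the first input field on spaces and emit each non-empty
        // token as a one-field tuple (typically declared as "word").
        for (String word : tuple.getString(0).split(" ")) {
            if (word.length() > 0) {
                collector.emit(new Values(word));
            }
        }
    }
}

In the examples it is wired in with each(new Fields("sentence"), new Split(), new Fields("word")), turning a stream of sentences into a stream of words.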

Example 1: buildTopology

import storm.trident.testing.Split; // import the required package/class
public StormTopology buildTopology(LocalDRPC drpc) {
    TridentKafkaConfig kafkaConfig = new TridentKafkaConfig(brokerHosts, "storm-sentence", "storm");
    kafkaConfig.scheme = new SchemeAsMultiScheme(new StringScheme());
    TransactionalTridentKafkaSpout kafkaSpout = new TransactionalTridentKafkaSpout(kafkaConfig);
    TridentTopology topology = new TridentTopology();

    TridentState wordCounts = topology.newStream("kafka", kafkaSpout).shuffle()
            .each(new Fields("str"), new WordSplit(), new Fields("word"))
            .groupBy(new Fields("word"))
            .persistentAggregate(new HazelCastStateFactory(), new Count(), new Fields("aggregates_words"))
            .parallelismHint(2);


    topology.newDRPCStream("words", drpc)
            .each(new Fields("args"), new Split(), new Fields("word"))
            .groupBy(new Fields("word"))
            .stateQuery(wordCounts, new Fields("word"), new MapGet(), new Fields("count"))
            .each(new Fields("count"), new FilterNull())
            .aggregate(new Fields("count"), new Sum(), new Fields("sum"));

    return topology.build();
}
 
Developer: wurstmeister | Project: storm-kafka-0.8-plus-test | Lines: 22 | Source: SentenceAggregationTopology.java
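
Example 1 only builds the topology. For completeness, here is a minimal sketch of how such a DRPC-queryable topology is typically driven in local mode; the runner class below and the no-argument construction of SentenceAggregationTopology are illustrative assumptions (the real class needs its Kafka broker hosts, and the spout needs a reachable broker), not part of the original project:

import backtype.storm.Config;
import backtype.storm.LocalCluster;
import backtype.storm.LocalDRPC;
import backtype.storm.utils.Utils;

public class LocalRunner {
    public static void main(String[] args) throws Exception {
        LocalDRPC drpc = new LocalDRPC();
        LocalCluster cluster = new LocalCluster();
        // Hypothetical construction; the original class reads brokerHosts from a field.
        SentenceAggregationTopology builder = new SentenceAggregationTopology();
        cluster.submitTopology("wordCounter", new Config(), builder.buildTopology(drpc));

        Utils.sleep(10000); // let a few batches flow through the Kafka spout

        // Query the "words" DRPC stream declared in buildTopology; the result
        // is the summed count of each word in the argument string.
        System.out.println(drpc.execute("words", "the cow moon"));

        cluster.killTopology("wordCounter");
        cluster.shutdown();
        drpc.shutdown();
    }
}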


Example 2: buildTopology

import storm.trident.testing.Split; // import the required package/class
public static StormTopology buildTopology(WindowsStoreFactory windowStore, WindowConfig windowConfig)
        throws Exception {
    FixedBatchSpout spout = new FixedBatchSpout(new Fields("sentence"), 3,
            new Values("the cow jumped over the moon"),
            new Values("the man went to the store and bought some candy"),
            new Values("four score and seven years ago"), new Values("how many apples can you eat"),
            new Values("to be or not to be the person"));
    spout.setCycle(true);
    
    TridentTopology topology = new TridentTopology();
    
    Stream stream = topology.newStream("spout1", spout).parallelismHint(16)
            .each(new Fields("sentence"), new Split(), new Fields("word"))
            .window(windowConfig, windowStore, new Fields("word"), new CountAsAggregator(), new Fields("count"))
            .peek(new Consumer() {
                @Override
                public void accept(TridentTuple input) {
                    LOG.info("Received tuple: [{}]", input);
                }
            });
            
    return topology.build();
}
 
Developer: alibaba | Project: jstorm | Lines: 24 | Source: TridentWindowingInmemoryStoreTopology.java


Example 3: testTridentSlidingCountWindow

import storm.trident.testing.Split; // import the required package/class
@Test
public void testTridentSlidingCountWindow()
{
    WindowsStoreFactory windowsStoreFactory = new InMemoryWindowsStoreFactory();
    FixedLimitBatchSpout spout = new FixedLimitBatchSpout(SPOUT_LIMIT, new Fields("sentence"), SPOUT_BATCH_SIZE,
                new Values("the cow jumped over the moon"),
                new Values("the man went to the store and bought some candy"),
                new Values("four score and seven years ago"), new Values("how many apples can you eat"),
                new Values("to be or not to be the person"));

    TridentTopology tridentTopology = new TridentTopology();

    Stream stream = tridentTopology.newStream("spout1", spout).parallelismHint(16)
                .each(new Fields("sentence"), new Split(), new Fields("word"))
                .window(windowConfig, windowsStoreFactory, new Fields("word"), new CountAsAggregator(), new Fields("count"))
                .peek(new ValidateConsumer());

    Map<String, Object> config = new HashMap<>();
    config.put(Config.TOPOLOGY_NAME, "TridentSlidingCountWindowTest");

    // pass the config so the topology name set above takes effect
    JStormUnitTestRunner.submitTopology(tridentTopology.build(), config, 120, null);
}
 
Developer: alibaba | Project: jstorm | Lines: 23 | Source: TridentSlidingCountWindowTest.java


Example 4: testTridentTumblingCountWindow

import storm.trident.testing.Split; // import the required package/class
@Test
public void testTridentTumblingCountWindow()
{
    WindowsStoreFactory windowsStoreFactory = new InMemoryWindowsStoreFactory();
    FixedLimitBatchSpout spout = new FixedLimitBatchSpout(SPOUT_LIMIT, new Fields("sentence"), SPOUT_BATCH_SIZE,
                new Values("the cow jumped over the moon"),
                new Values("the man went to the store and bought some candy"),
                new Values("four score and seven years ago"), new Values("how many apples can you eat"),
                new Values("to be or not to be the person"));

    TridentTopology tridentTopology = new TridentTopology();

    Stream stream = tridentTopology.newStream("spout1", spout).parallelismHint(16)
                .each(new Fields("sentence"), new Split(), new Fields("word"))
                .window(windowConfig, windowsStoreFactory, new Fields("word"), new CountAsAggregator(), new Fields("count"))
                .peek(new ValidateConsumer());

    Map<String, Object> config = new HashMap<>();
    config.put(Config.TOPOLOGY_NAME, "TridentTumblingCountWindowTest");

    // pass the config so the topology name set above takes effect
    JStormUnitTestRunner.submitTopology(tridentTopology.build(), config, 120, null);
}
 
Developer: alibaba | Project: jstorm | Lines: 23 | Source: TridentTumblingCountWindowTest.java


Example 5: testTridentTumblingDurationWindow

import storm.trident.testing.Split; // import the required package/class
@Test
public void testTridentTumblingDurationWindow()
{
        WindowsStoreFactory windowsStoreFactory = new InMemoryWindowsStoreFactory();
        FixedLimitBatchSpout spout = new FixedLimitBatchSpout(SPOUT_LIMIT, new Fields("sentence"), SPOUT_BATCH_SIZE,
                new Values("the cow jumped over the moon"),
                new Values("the man went to the store and bought some candy"),
                new Values("four score and seven years ago"), new Values("how many apples can you eat"),
                new Values("to be or not to be the person"));

        TridentTopology tridentTopology = new TridentTopology();

        Stream stream = tridentTopology.newStream("spout1", spout).parallelismHint(16)
                .each(new Fields("sentence"), new Split(), new Fields("word"))
                .window(windowConfig, windowsStoreFactory, new Fields("word"), new CountAsAggregator(), new Fields("count"))
                .peek(new ValidateConsumer());

        Map<String, Object> config = new HashMap<>();
        config.put(Config.TOPOLOGY_NAME, "TridentTumblingDurationWindowTest");

        // pass the config so the topology name set above takes effect
        JStormUnitTestRunner.submitTopology(tridentTopology.build(), config, 120, null);

}
 
Developer: alibaba | Project: jstorm | Lines: 24 | Source: TridentTumblingDurationWindowTest.java


Example 6: testTridentSlidingDurationWindow

import storm.trident.testing.Split; // import the required package/class
@Test
public void testTridentSlidingDurationWindow()
{
    WindowsStoreFactory windowsStoreFactory = new InMemoryWindowsStoreFactory();
    FixedLimitBatchSpout spout = new FixedLimitBatchSpout(SPOUT_LIMIT, new Fields("sentence"), SPOUT_BATCH_SIZE,
                new Values("the cow jumped over the moon"),
                new Values("the man went to the store and bought some candy"),
                new Values("four score and seven years ago"), new Values("how many apples can you eat"),
                new Values("to be or not to be the person"));

    TridentTopology tridentTopology = new TridentTopology();

    Stream stream = tridentTopology.newStream("spout1", spout).parallelismHint(16)
                .each(new Fields("sentence"), new Split(), new Fields("word"))
                .window(windowConfig, windowsStoreFactory, new Fields("word"), new CountAsAggregator(), new Fields("count"))
                .peek(new ValidateConsumer());

    Map<String, Object> config = new HashMap<>();
    config.put(Config.TOPOLOGY_NAME, "TridentSlidingDurationWindowTest");

    // pass the config so the topology name set above takes effect
    JStormUnitTestRunner.submitTopology(tridentTopology.build(), config, 120, null);

}
 
Developer: alibaba | Project: jstorm | Lines: 24 | Source: TridentSlidingDurationWindowTest.java


Example 7: buildWordCountAndSourceTopology

import storm.trident.testing.Split; // import the required package/class
@SuppressWarnings("unchecked")
public static StormTopology buildWordCountAndSourceTopology(LocalDRPC drpc) {
    LOG.info("Building topology.");
    TridentTopology topology = new TridentTopology();

    String source1 = "spout1";
    String source2 = "spout2";
    FixedBatchSpout spout1 = new FixedBatchSpout(new Fields("sentence", "source"), 3,
            new Values("the cow jumped over the moon", source1),
            new Values("the man went to the store and bought some candy", source1),
            new Values("four score and four years ago", source2),
            new Values("how much wood can a wood chuck chuck", source2));
    spout1.setCycle(true);

    TridentState wordCounts =
            topology.newStream("spout1", spout1)
                    .each(new Fields("sentence"), new Split(), new Fields("word"))
                    .groupBy(new Fields("word", "source"))
                    .persistentAggregate(CassandraCqlMapState.nonTransactional(new WordCountAndSourceMapper()),
                            new IntegerCount(), new Fields("count"))
                    .parallelismHint(6);

    topology.newDRPCStream("words", drpc)
            .each(new Fields("args"), new Split(), new Fields("word"))
            .groupBy(new Fields("word"))
            .stateQuery(wordCounts, new Fields("word"), new MapGet(), new Fields("count"))
            .each(new Fields("count"), new FilterNull())
            .aggregate(new Fields("count"), new Sum(), new Fields("sum"));

    return topology.build();
}
 
Developer: hpcc-systems | Project: storm-cassandra-cql | Lines: 32 | Source: WordCountTopology.java


Example 8: buildTopology1

import storm.trident.testing.Split; // import the required package/class
private static StormTopology buildTopology1(LocalDRPC drpc) {
    FixedBatchSpout spout = new FixedBatchSpout(new Fields("sentence"), 100,
            new Values("the cow jumped over the moon"),
            new Values("the man went to the store and bought some candy"),
            new Values("to be or not to be the person"));
    spout.setCycle(true);

    AerospikeOptions options = new AerospikeOptions();
    options.set = "words";
    options.keyType = AerospikeOptions.AerospikeKeyType.STRING;

    TridentTopology topology = new TridentTopology();
    TridentState wordCounts =
            topology.newStream("spout1", spout)
                    .parallelismHint(4)
                    .each(new Fields("sentence"), new Split(), new Fields("word"))
                    .groupBy(new Fields("word"))
                    .persistentAggregate(AerospikeSingleBinMapState.getTransactional("count", options),
                            new CountAggregator(), new Fields("count"))
                    .parallelismHint(4);

    topology.newDRPCStream("words", drpc)
            .each(new Fields("args"), new Split(), new Fields("word"))
            .groupBy(new Fields("word"))
            .stateQuery(wordCounts, new Fields("word"), new MapGet(), new Fields("count"))
            .each(new Fields("count"), new FilterNull())
            .aggregate(new Fields("count"), new Sum(), new Fields("sum"));

    return topology.build();
}
 
Developer: adform | Project: trident-aerospike | Lines: 31 | Source: StormTridentAerospikeTopology.java


Example 9: addDRPCStream

import storm.trident.testing.Split; // import the required package/class
private Stream addDRPCStream(TridentTopology tridentTopology, TridentState state, LocalDRPC drpc) {
    return tridentTopology.newDRPCStream("words", drpc)
            .each(new Fields("args"), new Split(), new Fields("word"))
            .groupBy(new Fields("word"))
            .stateQuery(state, new Fields("word"), new MapGet(), new Fields("count"))
            .each(new Fields("count"), new FilterNull())
            .project(new Fields("word", "count"));
}
 
Developer: desp0916 | Project: LearnStorm | Lines: 9 | Source: TridentKafkaWordCount.java


Example 10: addTridentState

import storm.trident.testing.Split; // import the required package/class
private TridentState addTridentState(TridentTopology tridentTopology) {
    return tridentTopology.newStream("spout1", createKafkaSpout()).parallelismHint(1)
            .each(new Fields("str"), new Split(), new Fields("word"))
            .groupBy(new Fields("word"))
            .persistentAggregate(new MemoryMapState.Factory(), new Count(), new Fields("count"))
            .parallelismHint(1);
}
 
Developer: desp0916 | Project: LearnStorm | Lines: 8 | Source: TridentKafkaWordCount.java


Example 11: buildTopology

import storm.trident.testing.Split; // import the required package/class
public StormTopology buildTopology() {
    TridentTopology topology = new TridentTopology();

    SamevalGenerator dataGen = new SamevalGenerator();

    StateFactory mapState = new MemoryMapState.Factory();

    TridentState counterState = topology.newStream("CounterGen", dataGen)
            .groupBy(new Fields(Names.TIME_STAMP_FLD))
            .persistentAggregate(mapState, new Fields(Names.USER_ID_FLD),
                    new HLLAggregator(Names.USER_ID_FLD),
                    new Fields("ItemCounter"));

    topology.newDRPCStream("CountItemStream", localDRPC)

            .each(new Fields("args"), new Split(), new Fields("FLD"))
            .each(new Fields("FLD"), new DataTypeConvert(new Integer(1)), new Fields(Names.MIN_OF_DAY_FLD))
            .each(new Fields(Names.MIN_OF_DAY_FLD), new Debug())
            .stateQuery(counterState, new Fields(Names.MIN_OF_DAY_FLD), new MapGet(), new Fields(Names.COUNTER_VALS_FLD))
            .each(new Fields(Names.COUNTER_VALS_FLD), new FilterNull())

                    //.each(new Fields("CounterVals"), new HLLToStrConverter("CounterVals"), new Fields("UniqueItems"));
            .each(new Fields(Names.COUNTER_VALS_FLD), new HLLToStrConverter(Names.COUNTER_VALS_FLD), new Fields("UniqueItems"))
            .project(new Fields("UniqueItems"));

    return topology.build();
}
 
Developer: sumanthn | Project: SketchOnStorm | Lines: 28 | Source: UniqueUserIdTestTopology.java


Example 12: buildTopology

import storm.trident.testing.Split; // import the required package/class
public static StormTopology buildTopology(LocalDRPC drpc, StateFactory stateFactory) {
    FixedBatchSpout spout = new FixedBatchSpout(new Fields("sentence"), 3,
            new Values("the cow jumped over the moon"),
            new Values("the man went to the store and bought some candy"),
            new Values("four score and seven years ago"),
            new Values("how many apples can you eat"),
            new Values("to be or not to be the person"));
    spout.setCycle(true);

    TridentTopology topology = new TridentTopology();
    TridentState wordCounts =
            topology.newStream("spout1", spout)
                    .parallelismHint(16)
                    .each(new Fields("sentence"), new Split(), new Fields("word"))
                    .groupBy(new Fields("word"))
                    .persistentAggregate(stateFactory, new Fields("word"), new WordCount(), new Fields("count"))
                    .parallelismHint(16);

    topology.newDRPCStream("words", drpc)
            .each(new Fields("args"), new Split(), new Fields("word"))
            .groupBy(new Fields("word"))
            .stateQuery(wordCounts, new Fields("word"), new MapGet(), new Fields("count"))
            .each(new Fields("count"), new FilterNull())
            .aggregate(new Fields("count"), new SumWord(), new Fields("sum"))
    ;

    return topology.build();
}
 
Developer: duolaieimeng | Project: trident-mongodb | Lines: 30 | Source: MongoStateTest.java


Example 13: main

import storm.trident.testing.Split; // import the required package/class
public static void main(String[] args) throws Exception {

		TridentTopology topology = new TridentTopology();
		Config conf = new Config();

		@SuppressWarnings("unchecked")
		FixedBatchSpout spout = new FixedBatchSpout(
				new Fields("sentence"), 3,
				new Values("the cow jumped over the moon"),
				new Values("the man went to the store and bought some candy"),
				new Values("four score and seven years ago"),
				new Values("how many apples can you eat"));
		spout.setCycle(true);

		TridentState wordCounts = topology.newStream("spout1", spout)
				.each(new Fields("sentence"), new Split(), new Fields("word"))
				.groupBy(new Fields("word"))
				.persistentAggregate(new MemoryMapState.Factory(), new Count(), new Fields("count"))
				.parallelismHint(6);

		// MapGet() : gets the count for each word
		topology.newDRPCStream("words")
				.each(new Fields("args"), new Split(), new Fields("word"))
				.groupBy(new Fields("word"))
				.stateQuery(wordCounts, new Fields("word"), new MapGet(), new Fields("count"))
				.each(new Fields("count"), new FilterNull())
				.aggregate(new Fields("count"), new Sum(), new Fields("sum"));

		conf.setDebug(true);

		conf.put("storm.thrift.transport", "backtype.storm.security.auth.SimpleTransportPlugin");
		conf.put(Config.STORM_NIMBUS_RETRY_TIMES, 3);
		conf.put(Config.STORM_NIMBUS_RETRY_INTERVAL, 10);
		conf.put(Config.STORM_NIMBUS_RETRY_INTERVAL_CEILING, 20);
		conf.put(Config.DRPC_MAX_BUFFER_SIZE, 1048576);

		LocalCluster cluster = new LocalCluster();
		cluster.submitTopology("test", conf, topology.build());

		// give the topology a moment to start before issuing the DRPC query
		Utils.sleep(1000);

		DRPCClient client = new DRPCClient(conf, "hdp02.localdomain", 3772);
		System.out.println(client.execute("words", "cat dog the man"));

		cluster.killTopology("test");
		cluster.shutdown();

	}
 
Developer: desp0916 | Project: LearnStorm | Lines: 50 | Source: TestTridentTopology.java


Example 14: buildTopology

import storm.trident.testing.Split; // import the required package/class
public static final StormTopology buildTopology() {

        //first init topology

        TridentTopology topology = new TridentTopology();

        Fields fields = new Fields(Names.HOSTNAME_FLD, Names.BYTES_FLD, Names.HOUR_0F_DAY_FLD, Names.DAY_0F_YEAR_FLD);
        //bring in the spout
        HostTrafficGenerator dataGen = new HostTrafficGenerator(fields, 10,
                TimeMeasures.ONESECOND_MILLIS);

        //dataGen.turnOnTestMode();
        // FlowGenTest dataGen = new FlowGenTest();

        //attach the state factory
        //here it is simple HashMap backed in memory store
        StateFactory dataVolumeMapStore = new MemoryMapState.Factory();

        //define the stream
        Stream hostTrafficStream = topology
                .newStream(STREAM_NAME, dataGen)
                .parallelismHint(1);

        //define the state
        TridentState counterState =

                hostTrafficStream
                        //group by hourly bucket
                        .groupBy(new Fields(Names.HOUR_0F_DAY_FLD))
                                //aggregate by host
                        .persistentAggregate(dataVolumeMapStore, new Fields(Names.HOSTNAME_FLD, Names.BYTES_FLD),
                                new DataVolumeAggregator(),
                                new Fields("DataVolumes"));

        //now define DRPC stream on which the queries are executed

        //attach to local instance of DRPC
        topology.newDRPCStream(DATA_VOLUME_BY_HOSTS, StormClusterStore.getInstance().getLocalDRPC())

                //takes in string args the hour of day (bucket used for unique counts)
                .each(new Fields(Names.ARGS_FLD), new Split(), new Fields("FLD"))
                        //MIN_OF_DAY_FLD is the key for the map
                .each(new Fields("FLD"), new DataTypeConvert(new Integer(1)),
                        new Fields(Names.HOUR_0F_DAY_FLD))
                        //now get the fields
                .stateQuery(counterState, new Fields(Names.HOUR_0F_DAY_FLD),
                        new MapGet(), new Fields(Names.DATA_VOLUME_FLD))

                        //filter out the NULLs
                .each(new Fields(Names.DATA_VOLUME_FLD), new FilterNull())

                        //convert the HLL sketch to Base64 encoded String
                        //since drpc.execute results can only be strings
                        //TODO: if possible define another combiner to combine multiple results from DRPC
                .each(new Fields(Names.DATA_VOLUME_FLD), new SketchToStrConverter(SketchType.CMS,
                        Names.DATA_VOLUME_FLD),
                        new Fields(Names.DATA_VOLUME_SKETCH_FLD))
                .project(new Fields(Names.DATA_VOLUME_SKETCH_FLD));

        return topology.build();
    }
 
Developer: sumanthn | Project: SketchOnStorm | Lines: 62 | Source: DataVolumeAnalysisTopologyBuilder.java


Example 15: buildTopology

import storm.trident.testing.Split; // import the required package/class
/**
 * Builds the topology for Unique UserId Counter
 */
public static StormTopology buildTopology() {

    //first init topology
    TridentTopology topology = new TridentTopology();

    //bring in the spout
    UserIdStreamGenerator dataGen = new UserIdStreamGenerator();

    //attach the state factory
    //here it is simple HashMap backed in memory store
    StateFactory mapState = new MemoryMapState.Factory();

    //define the counter state
    //use HLL as sketch to keep track of unique userids
    TridentState counterState =
            topology.newStream(STREAM_NAME, dataGen)
                    //group by minutely bucket
                    .groupBy(new Fields(Names.TIME_STAMP_FLD))
                            //store the HLL based sketch for every minute
                            //this should give the unique user id count
                    .persistentAggregate(mapState, new Fields(Names.USER_ID_FLD),
                            new HLLAggregator(Names.USER_ID_FLD),
                            new Fields("ItemCounter"));

    //now define DRPC stream on which the queries are executed

    //attach to local instance of DRPC
    topology.newDRPCStream(DRPC_STREAM_NAME, StormClusterStore.getInstance().getLocalDRPC())

            //takes in string args the minute of day (bucket used for unique counts)
            .each(new Fields("args"), new Split(), new Fields("FLD"))
                    //convert Str to integer each fld
                    //USER_ID_FLD is the key for the map
            .each(new Fields("FLD"), new DataTypeConvert(new Integer(1)), new Fields(Names.TIME_STAMP_FLD))

                    //.each(new Fields(Names.USER_ID_FLD), new Debug())
                    //now get the fields
            .stateQuery(counterState, new Fields(Names.TIME_STAMP_FLD), new MapGet(), new Fields(Names.COUNTER_VALS_FLD))

                    //filter out the NULLs
            .each(new Fields(Names.COUNTER_VALS_FLD), new FilterNull())

                    //convert the HLL sketch to Base64 encoded String
                    //since drpc.execute results can only be strings
                    //TODO: if possible define another combiner to combine multiple results from DRPC
            .each(new Fields(Names.COUNTER_VALS_FLD), new HLLToStrConverter(Names.COUNTER_VALS_FLD),
                    new Fields(Names.UNIQUE_USER_SKETCH))
            .project(new Fields(Names.UNIQUE_USER_SKETCH));

    return topology.build();
}
 
Developer: sumanthn | Project: SketchOnStorm | Lines: 55 | Source: UniqueUserCounterTopologyBuilder.java


Example 16: buildTopology

import storm.trident.testing.Split; // import the required package/class
/**
 * Builds the topology for Ip Flow Analysis
 */
public static final StormTopology buildTopology() {

    //first init topology
    TridentTopology topology = new TridentTopology();

    //bring in the spout
    FlowGenerator dataGen = new FlowGenerator();
    //attach the state factory
    //here it is simple HashMap backed in memory store
    StateFactory mapState = new MemoryMapState.Factory();
    StateFactory conversationMapState = new MemoryMapState.Factory();

    //the stream
    Stream ipFlowStream = topology
            .newStream(STREAM_NAME, dataGen)
            .parallelismHint(4);

    //define the counter state
    TridentState counterState =

            ipFlowStream
                    //group by minutely bucket
                    .groupBy(new Fields(Names.MIN_OF_DAY_FLD))
                            //Source + IP DEST fld makes a conversation
                    .persistentAggregate(mapState, new Fields(Names.SOURCE_IP_FLD, Names.DEST_IP_FLD),
                            new IpConversationSketch(),
                            new Fields("ConversationsCount"));

    //track the count of all conversations
    TridentState globalCountPerMin =
            ipFlowStream
                    //group by minutely bucket
                    .groupBy(new Fields(Names.MIN_OF_DAY_FLD))
                    .persistentAggregate(conversationMapState, new Fields(Names.MIN_OF_DAY_FLD),
                            new Count(),
                            new Fields("ConversationCountPerMin"));

    //now define DRPC stream on which the queries are executed

    //attach to local instance of DRPC
    topology.newDRPCStream(UNIQUE_CONVERSATION_COUNT, StormClusterStore.getInstance().getLocalDRPC())

            //takes in string args the minute of day (bucket used for unique counts)
            .each(new Fields("args"), new Split(), new Fields("FLD"))
                    //MIN_OF_DAY_FLD is the key for the map
            .each(new Fields("FLD"), new DataTypeConvert(new Integer(1)), new Fields(Names.MIN_OF_DAY_FLD))
                    //now get the fields
            .stateQuery(counterState, new Fields(Names.MIN_OF_DAY_FLD),
                    new MapGet(), new Fields(Names.COUNTER_VALS_FLD))

                    //filter out the NULLs
            .each(new Fields(Names.COUNTER_VALS_FLD), new FilterNull())

                    //convert the HLL sketch to Base64 encoded String
                    //since drpc.execute results can only be strings
                    //TODO: if possible define another combiner to combine multiple results from DRPC
            .each(new Fields(Names.COUNTER_VALS_FLD), new HLLToStrConverter(Names.COUNTER_VALS_FLD),
                    new Fields(Names.CONVERSATION_COUNT_FLD))
            .project(new Fields(Names.CONVERSATION_COUNT_FLD));

    topology.newDRPCStream(CONVERSATION_COUNT, StormClusterStore.getInstance().getLocalDRPC())

            //takes in string args the minute of day (bucket used for unique counts)
            .each(new Fields("args"), new Split(), new Fields("FLD"))
                    //MIN_OF_DAY_FLD is the key for the map
            .each(new Fields("FLD"), new DataTypeConvert(new Integer(1)), new Fields(Names.MIN_OF_DAY_FLD))

                    //now get the fields
            .stateQuery(globalCountPerMin, new Fields(Names.MIN_OF_DAY_FLD),
                    new MapGet(), new Fields(Names.COUNTER_VALS_FLD))

                    //filter out the NULLs
            .each(new Fields(Names.COUNTER_VALS_FLD), new FilterNull());

    return topology.build();
}
 
Developer: sumanthn | Project: SketchOnStorm | Lines: 80 | Source: FlowAnalysisTopologyBuilder.java



Note: The storm.trident.testing.Split class examples in this article were collected from open-source projects and documentation platforms such as GitHub and MSDocs. The snippets were selected from projects contributed by open-source developers, and copyright remains with the original authors; consult the corresponding project's license before distributing or using the code. Please do not reproduce this article without permission.

