
Java AuthorizationException Class Code Examples


This article collects typical usage examples of the Java class backtype.storm.generated.AuthorizationException. If you are unsure what AuthorizationException is for, or how and where to use it, the curated class examples below should help.



The AuthorizationException class belongs to the backtype.storm.generated package. Eighteen code examples of the class are shown below, sorted by popularity.
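Most of the examples below share one submit-and-catch pattern: StormSubmitter.submitTopology declares AlreadyAliveException, InvalidTopologyException, and AuthorizationException, and callers either declare all three or handle them in a single multi-catch. The following is a minimal, dependency-free sketch of that pattern; the nested exception classes and the submit/trySubmit helpers are stand-ins invented here, not part of the Storm API.

```java
// Dependency-free sketch of the multi-catch submit pattern used throughout
// the examples below. The three nested exception classes are stand-ins for
// backtype.storm.generated.{AlreadyAliveException, InvalidTopologyException,
// AuthorizationException}; submit() simulates StormSubmitter.submitTopology.
public class SubmitSketch {
    static class AlreadyAliveException extends Exception {}
    static class InvalidTopologyException extends Exception {}
    static class AuthorizationException extends Exception {}

    // Simulated submission: fails when the caller is not authorized.
    static void submit(String name, boolean authorized)
            throws AlreadyAliveException, InvalidTopologyException, AuthorizationException {
        if (!authorized) {
            throw new AuthorizationException();
        }
    }

    // Returns "submitted" on success, otherwise the failure kind.
    public static String trySubmit(String name, boolean authorized) {
        try {
            submit(name, authorized);
            return "submitted";
        } catch (InvalidTopologyException | AlreadyAliveException | AuthorizationException e) {
            // In the real examples this is e.printStackTrace() or LOG.error(...).
            return e.getClass().getSimpleName();
        }
    }

    public static void main(String[] args) {
        System.out.println(trySubmit("demo", true));
        System.out.println(trySubmit("demo", false));
    }
}
```

A failed authorization surfaces as AuthorizationException rather than a silent drop, which is why every submitting example below must handle it.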

Example 1: main

import backtype.storm.generated.AuthorizationException; // import the required package/class
public static void main(String[] args) {
    Config config = new Config();

    HdfsBolt hdfsBolt = makeHdfsBolt();
    KafkaSpout kafkaSpout = makeKafkaSpout(TOPIC, TOPOLOGY_NAME);

    LOG.info("Topology name is {}", TOPOLOGY_NAME);

    TopologyBuilder topologyBuilder = new TopologyBuilder();
    topologyBuilder.setSpout(KAFKA_SPOUT_ID, kafkaSpout, 10);
    topologyBuilder.setBolt(CROP_BOLT_ID, new CropBolt(), 10).shuffleGrouping(KAFKA_SPOUT_ID);
    topologyBuilder.setBolt(SPLIT_FIELDS_BOLT_ID, new SplitFieldsBolt(), 10).shuffleGrouping(CROP_BOLT_ID);
    topologyBuilder.setBolt(STORM_HDFS_BOLT_ID, hdfsBolt, 4).fieldsGrouping(SPLIT_FIELDS_BOLT_ID, new Fields("timestamp", "fieldvalues"));

    if (args != null && args.length > 0) {
        config.setDebug(false);
        config.setNumWorkers(3);

        try {
            StormSubmitter.submitTopology(args[0], config, topologyBuilder.createTopology());
        } catch (InvalidTopologyException | AlreadyAliveException | AuthorizationException e) {
            e.printStackTrace();
        }
    }
}
 
Developer: lovelock, Project: storm-demo, Lines: 26, Source: LogStatisticsTopology.java


Example 2: fail

import backtype.storm.generated.AuthorizationException; // import the required package/class
@Override
public void fail(Object msgId) {
    DRPCMessageId did = (DRPCMessageId) msgId;
    DistributedRPCInvocations.Iface client;

    if (_local_drpc_id == null) {
        client = _clients.get(did.index);
    } else {
        client = (DistributedRPCInvocations.Iface) ServiceRegistry.getService(_local_drpc_id);
    }
    try {
        client.failRequest(did.id);
    } catch (AuthorizationException aze) {
        LOG.error("Not authorized to failRequest from DRPC server", aze);
    } catch (TException e) {
        LOG.error("Failed to fail request", e);
    }
}
 
Developer: kkllwww007, Project: jstrom, Lines: 19, Source: DRPCSpout.java


Example 3: buildAndSubmit

import backtype.storm.generated.AuthorizationException; // import the required package/class
private void buildAndSubmit() throws AlreadyAliveException, InvalidTopologyException, AuthorizationException {
		final int numWorkers = Integer.valueOf(topologyConfig.getProperty("num.workers"));
		Config config = new Config();
		config.setDebug(DEBUG);
		config.setNumWorkers(numWorkers);
		config.setMaxSpoutPending(1000000);
		// https://github.com/apache/storm/tree/v0.10.0/external/storm-kafka
		config.setMessageTimeoutSecs(600);	// This value (30 secs by default) must
							// be larger than retryDelayMaxMs
							// (60 secs by default) in
							// KafkaSpout.

		TopologyBuilder builder = new TopologyBuilder();
		configureKafkaSpout(builder, config);
		configureESBolts(builder, config);

//		LocalCluster cluster = new LocalCluster();
		StormSubmitter.submitTopology("LogAnalyzerV1", config, builder.createTopology());
	}
 
Developer: desp0916, Project: LearnStorm, Lines: 20, Source: LogAnalyzer.java


Example 4: buildAndSubmit

import backtype.storm.generated.AuthorizationException; // import the required package/class
private void buildAndSubmit() throws AlreadyAliveException, InvalidTopologyException, AuthorizationException {
		final int numWorkers = Integer.valueOf(topologyConfig.getProperty("num.workers"));
		Config config = new Config();
		config.setDebug(DEBUG);
		config.setNumWorkers(numWorkers);
		config.setMaxSpoutPending(1000000);
		// https://github.com/apache/storm/tree/v0.10.0/external/storm-kafka
		config.setMessageTimeoutSecs(600);	// This value (30 secs by default) must
							// be larger than retryDelayMaxMs
							// (60 secs by default) in
							// KafkaSpout.
		TopologyBuilder builder = new TopologyBuilder();
		configureKafkaSpout(builder, config);
		configureESBolts(builder, config);
//		configureHBaseBolts(builder, config);

//		conf.put(Config.NIMBUS_HOST, "hdp01.localdomain");
//		System.setProperty("storm.jar", "/root/workspace//LearnStorm/target/LearnStorm-0.0.1-SNAPSHOT.jar");
//		System.setProperty("hadoop.home.dir", "/tmp");
//		LocalCluster cluster = new LocalCluster();
		StormSubmitter.submitTopology("ApLogAnalyzerV1", config, builder.createTopology());
	}
 
Developer: desp0916, Project: LearnStorm, Lines: 23, Source: ApLogAnalyzer.java


Example 5: assertStoreHasExactly

import backtype.storm.generated.AuthorizationException; // import the required package/class
public static void assertStoreHasExactly(BlobStore store, Subject who, String ... keys)
        throws IOException, KeyNotFoundException, AuthorizationException {
    Set<String> expected = new HashSet<String>(Arrays.asList(keys));
    Set<String> found = new HashSet<String>();
    Iterator<String> c = store.listKeys();
    while (c.hasNext()) {
        String keyName = c.next();
        found.add(keyName);
    }
    Set<String> extra = new HashSet<String>(found);
    extra.removeAll(expected);
    assertTrue("Found extra keys in the blob store "+extra, extra.isEmpty());
    Set<String> missing = new HashSet<String>(expected);
    missing.removeAll(found);
    assertTrue("Found keys missing from the blob store "+missing, missing.isEmpty());
}
 
Developer: alibaba, Project: jstorm, Lines: 17, Source: BlobStoreTest.java
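The assertion in Example 5 rests on two set differences: keys that are extra (found in the store but not expected) and keys that are missing (expected but not found); the store matches exactly only when both are empty. The sketch below isolates that check with illustrative names (SetDiffSketch and diff are not from the original project):

```java
import java.util.*;

// Sketch of the set-difference check used by assertStoreHasExactly above.
// Returns [extra, missing]: keys found but not expected, and keys expected
// but not found. The store holds exactly the expected keys iff both are empty.
public class SetDiffSketch {
    public static List<Set<String>> diff(Set<String> expected, Set<String> found) {
        Set<String> extra = new HashSet<>(found);
        extra.removeAll(expected);           // in the store but not expected
        Set<String> missing = new HashSet<>(expected);
        missing.removeAll(found);            // expected but absent from the store
        return Arrays.asList(extra, missing);
    }

    public static void main(String[] args) {
        Set<String> expected = new HashSet<>(Arrays.asList("a", "b"));
        Set<String> found = new HashSet<>(Arrays.asList("b", "c"));
        // "c" is extra, "a" is missing
        System.out.println(diff(expected, found));
    }
}
```

Computing both directions separately lets the test name exactly which keys are wrong, rather than only reporting that the sets differ.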


Example 6: complete

import backtype.storm.generated.AuthorizationException; // import the required package/class
@Override
public void complete(ReturnResultsState state, TridentCollector collector) {
    // only one of the multireducers will receive the tuples
    if (state.returnInfo != null) {
        String result = JSONValue.toJSONString(state.results);
        Map retMap = (Map) JSONValue.parse(state.returnInfo);
        final String host = (String) retMap.get("host");
        final int port = Utils.getInt(retMap.get("port"));
        String id = (String) retMap.get("id");
        DistributedRPCInvocations.Iface client;
        if (local) {
            client = (DistributedRPCInvocations.Iface) ServiceRegistry.getService(host);
        } else {
            List server = new ArrayList() {
                {
                    add(host);
                    add(port);
                }
            };

            if (!_clients.containsKey(server)) {
                try {
                    _clients.put(server, new DRPCInvocationsClient(conf, host, port));
                } catch (TTransportException ex) {
                    throw new RuntimeException(ex);
                }
            }
            client = _clients.get(server);
        }

        try {
            client.result(id, result);
        } catch (AuthorizationException aze) {
            collector.reportError(aze);
        } catch (TException e) {
            collector.reportError(e);
        }
    }
}
 
Developer: kkllwww007, Project: jstrom, Lines: 40, Source: ReturnResultsReducer.java


Example 7: fetchRequest

import backtype.storm.generated.AuthorizationException; // import the required package/class
public DRPCRequest fetchRequest(String func) throws TException, AuthorizationException {
    DistributedRPCInvocations.Client c = client.get();
    try {
        if (c == null) {
            throw new TException("Client is not connected...");
        }
        return c.fetchRequest(func);
    } catch (AuthorizationException aze) {
        throw aze;
    } catch (TException e) {
        client.compareAndSet(c, null);
        throw e;
    }
}
 
Developer: kkllwww007, Project: jstrom, Lines: 15, Source: DRPCInvocationsClient.java
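Examples 7, 8, and 12 share one error-handling split: an AuthorizationException is rethrown untouched (the connection itself is fine, so dropping it would not help), while any other TException invalidates the cached client via compareAndSet so the next call can reconnect. A dependency-free sketch of that split, with stand-in names and a plain Exception in place of Thrift's TException:

```java
import java.util.concurrent.atomic.AtomicReference;

// Sketch of the error-handling split in DRPCInvocationsClient.fetchRequest:
// auth failures propagate and keep the cached client; transport failures
// null out the cached client so a later call reconnects. compareAndSet is
// used so a concurrent caller that already replaced the client is not clobbered.
public class ClientResetSketch {
    static class AuthorizationException extends Exception {}

    final AtomicReference<String> client = new AtomicReference<>("connected");

    String call(boolean authFailure, boolean transportFailure) throws Exception {
        String c = client.get();
        try {
            if (c == null) throw new Exception("Client is not connected...");
            if (authFailure) throw new AuthorizationException();
            if (transportFailure) throw new Exception("transport error");
            return "ok";
        } catch (AuthorizationException aze) {
            throw aze;                     // auth failure: rethrow, keep client
        } catch (Exception e) {
            client.compareAndSet(c, null); // transport failure: drop the client
            throw e;
        }
    }

    public static void main(String[] args) {
        ClientResetSketch s = new ClientResetSketch();
        try { s.call(false, true); } catch (Exception e) { /* expected */ }
        System.out.println("client after transport failure: " + s.client.get());
    }
}
```

The distinction matters operationally: reconnecting cannot fix a missing credential, but it can fix a dead socket.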


Example 8: failRequest

import backtype.storm.generated.AuthorizationException; // import the required package/class
public void failRequest(String id) throws TException {
    DistributedRPCInvocations.Client c = client.get();
    try {
        if (c == null) {
            throw new TException("Client is not connected...");
        }
        c.failRequest(id);
    } catch (AuthorizationException aze) {
        throw aze;
    } catch (TException e) {
        client.compareAndSet(c, null);
        throw e;
    }
}
 
Developer: kkllwww007, Project: jstrom, Lines: 15, Source: DRPCInvocationsClient.java


Example 9: buildAndSubmit

import backtype.storm.generated.AuthorizationException; // import the required package/class
private void buildAndSubmit() throws AlreadyAliveException, InvalidTopologyException, AuthorizationException {
		Config config = new Config();
		config.setDebug(DEBUG);
		config.setNumWorkers(1);

		TopologyBuilder builder = new TopologyBuilder();
		configureRandomLogSpout(builder, config);
		configureKafkaBolt(builder, config);

//		LocalCluster cluster = new LocalCluster();
		StormSubmitter.submitTopology("ApLogGeneratorV1", config, builder.createTopology());
	}
 
Developer: desp0916, Project: LearnStorm, Lines: 13, Source: ApLogGenerator.java


Example 10: main

import backtype.storm.generated.AuthorizationException; // import the required package/class
/**
 * 
 * This example is very dangerous to the consistency of your bank accounts. Guess why, or read the
 * tutorial.
 * 
 * @throws AlreadyAliveException
 * @throws InvalidTopologyException
 * @throws AuthorizationException
 */
public static void main(String... args) throws AlreadyAliveException, InvalidTopologyException, AuthorizationException {

	// starting to build topology
	TopologyBuilder builder = new TopologyBuilder();

	// Kafka as a spout
	builder.setSpout(IDs.kafkaSpout, new KafkaSpoutBuilder(Conf.zookeeper, Conf.inputTopic).build());

	// bolt to println data
	builder.setBolt(IDs.printlnBolt, new PrintlnBolt()).shuffleGrouping(IDs.kafkaSpout);

	// bolt to perform transactions and simulate bank accounts
	builder.setBolt(IDs.userAccountBolt, new BankAccountBolt()).shuffleGrouping(IDs.kafkaSpout);

	// Kafka as a bolt -- sending messages to the output topic
	KafkaBolt<Object, Object> bolt = new KafkaBolt<>().withTopicSelector(new DefaultTopicSelector(Conf.outputTopic))
			.withTupleToKafkaMapper(new TransactionTupleToKafkaMapper());
	builder.setBolt(IDs.kafkaBolt, bolt).shuffleGrouping(IDs.userAccountBolt);

	// submit topology to local cluster
	new LocalCluster().submitTopology(IDs.kafkaAccountsTopology, topologyConfig(), builder.createTopology());

	// wait a while, then simulate random transaction stream to Kafka
	Sleep.seconds(5);
	KafkaProduceExample.start(2000);

}
 
Developer: dzikowski, Project: simple-kafka-storm-java, Lines: 37, Source: KafkaStormExample.java


Example 11: main

import backtype.storm.generated.AuthorizationException; // import the required package/class
public static void main(String... args) throws AlreadyAliveException, InvalidTopologyException, AuthorizationException {

		// starting to build topology
		TridentTopology topology = new TridentTopology();

		// Kafka as an opaque trident spout
		OpaqueTridentKafkaSpout spout = new OpaqueTridentKafkaSpoutBuilder(Conf.zookeeper, Conf.inputTopic).build();
		Stream stream = topology.newStream(kafkaSpout, spout);

		// mapping transaction messages to pairs: (person,amount)
		Stream atomicTransactions = stream.each(strF, Functions.mapToPersonAmount, personAmountF);

		// bolt to println data
		atomicTransactions.each(personAmountF, Functions.printlnFunction, emptyF);

		// aggregating transactions and mapping to Kafka messages
		Stream transactionsGroupped = atomicTransactions.groupBy(personF)
				.persistentAggregate(new MemoryMapState.Factory(), amountF, new Sum(), sumF).newValuesStream()
				.each(personSumF, Functions.mapToKafkaMessage, keyMessageF);

		// Kafka as a bolt -- producing to outputTopic
		TridentKafkaStateFactory stateFactory = new TridentKafkaStateFactory() //
				.withKafkaTopicSelector(new DefaultTopicSelector(Conf.outputTopic)) //
				.withTridentTupleToKafkaMapper(new FieldNameBasedTupleToKafkaMapper<String, String>(key, message));
		transactionsGroupped.partitionPersist(stateFactory, keyMessageF, new TridentKafkaUpdater(), emptyF);

		// submitting topology to local cluster
		new LocalCluster().submitTopology(kafkaAccountsTopology, topologyConfig(), topology.build());

		// waiting a while, then running Kafka producer
		Sleep.seconds(5);
		KafkaProduceExample.start(20);

	}
 
Developer: dzikowski, Project: simple-kafka-storm-java, Lines: 35, Source: KafkaStormTridentExample.java


Example 12: fetchRequest

import backtype.storm.generated.AuthorizationException; // import the required package/class
public DRPCRequest fetchRequest(String func) throws TException {
    DistributedRPCInvocations.Client c = client.get();
    try {
        if (c == null) {
            throw new TException("Client is not connected...");
        }
        return c.fetchRequest(func);
    } catch (AuthorizationException aze) {
        throw aze;
    } catch (TException e) {
        client.compareAndSet(c, null);
        throw e;
    }
}
 
Developer: alibaba, Project: jstorm, Lines: 15, Source: DRPCInvocationsClient.java


Example 13: readInt

import backtype.storm.generated.AuthorizationException; // import the required package/class
public static int readInt(BlobStore store, Subject who, String key)
        throws IOException, KeyNotFoundException, AuthorizationException {
    InputStream in = store.getBlob(key);
    try {
        return in.read();
    } finally {
        in.close();
    }
}
 
Developer: alibaba, Project: jstorm, Lines: 10, Source: BlobStoreTest.java


Example 14: testGetFileLength

import backtype.storm.generated.AuthorizationException; // import the required package/class
@Test
public void testGetFileLength()
        throws AuthorizationException, KeyNotFoundException, KeyAlreadyExistsException, IOException {
    LocalFsBlobStore store = initLocalFs();
    AtomicOutputStream out = store.createBlob("test", new SettableBlobMeta());
    out.write(1);
    out.close();
    assertEquals(1, store.getBlob("test").getFileLength());
}
 
Developer: alibaba, Project: jstorm, Lines: 10, Source: BlobStoreTest.java


Example 15: execute

import backtype.storm.generated.AuthorizationException; // import the required package/class
public String execute(String func, String args) throws TException, DRPCExecutionException, AuthorizationException {
    return client.execute(func, args);
}
 
Developer: kkllwww007, Project: jstrom, Lines: 4, Source: DRPCClient.java


Example 16: run

import backtype.storm.generated.AuthorizationException; // import the required package/class
public void run(String... args){
    String kafkaTopic = "stock_topic";

    SpoutConfig spoutConfig = new SpoutConfig(new ZkHosts("127.0.0.1"),
            kafkaTopic, "/kafka_storm", "StormSpout");
    spoutConfig.useStartOffsetTimeIfOffsetOutOfRange = true;
    spoutConfig.startOffsetTime = System.currentTimeMillis();

    KafkaSpout kafkaSpout = new KafkaSpout(spoutConfig);
    
    // Hive connection configuration
    String metaStoreURI = "thrift://one.hdp:9083";
    String dbName = "default";
    String tblName = "stock_prices";
    // Fields for possible partition
    String[] partNames = {"name"};
    // Fields for possible column data
    String[] colNames = {"day", "open", "high", "low", "close", "volume","adj_close"};
    // Record Writer configuration
    DelimitedRecordHiveMapper mapper = new DelimitedRecordHiveMapper()
            .withColumnFields(new Fields(colNames))
            .withPartitionFields(new Fields(partNames));

    HiveOptions hiveOptions;
    hiveOptions = new HiveOptions(metaStoreURI, dbName, tblName, mapper)
            .withTxnsPerBatch(2)
            .withBatchSize(100)
            .withIdleTimeout(10)
            .withCallTimeout(10000000);
            //.withKerberosKeytab(path_to_keytab)
            //.withKerberosPrincipal(krb_principal);

    TopologyBuilder builder = new TopologyBuilder();

    builder.setSpout(KAFKA_SPOUT_ID, kafkaSpout);
    builder.setBolt(STOCK_PROCESS_BOLT_ID, new StockDataBolt()).shuffleGrouping(KAFKA_SPOUT_ID);
    builder.setBolt(HIVE_BOLT_ID, new HiveBolt(hiveOptions)).shuffleGrouping(STOCK_PROCESS_BOLT_ID);
    
    String topologyName = "StormHiveStreamingTopo";
    Config config = new Config();
    config.setNumWorkers(1);
    config.setMessageTimeoutSecs(60);
    try {
        StormSubmitter.submitTopology(topologyName, config, builder.createTopology());
    } catch (AlreadyAliveException | InvalidTopologyException | AuthorizationException ex) {
        Logger.getLogger(Topology.class.getName()).log(Level.SEVERE, null, ex);
    }
}
 
Developer: hkropp, Project: storm-hive-streaming-example, Lines: 49, Source: Topology.java


Example 17: readAssertEquals

import backtype.storm.generated.AuthorizationException; // import the required package/class
public static void readAssertEquals(BlobStore store, String key, int value)
        throws IOException, KeyNotFoundException, AuthorizationException {
    assertEquals(value, readInt(store, key));
}
 
Developer: alibaba, Project: jstorm, Lines: 5, Source: BlobStoreTest.java


Example 18: readAssertEqualsWithAuth

import backtype.storm.generated.AuthorizationException; // import the required package/class
public void readAssertEqualsWithAuth(BlobStore store, Subject who, String key, int value)
        throws IOException, KeyNotFoundException, AuthorizationException {
    assertEquals(value, readInt(store, who, key));
}
 
Developer: alibaba, Project: jstorm, Lines: 5, Source: BlobStoreTest.java



Note: The backtype.storm.generated.AuthorizationException examples in this article were collected from open-source projects hosted on GitHub and documentation platforms such as MSDocs. The code snippets were selected from projects contributed by open-source developers, and copyright remains with the original authors. Consult each project's license before redistributing or reusing the code; do not repost without permission.

