This article collects typical usage examples of the Java class kafka.cluster.BrokerEndPoint. If you are unsure what BrokerEndPoint does or how to use it, the curated class examples below should help.
The BrokerEndPoint class belongs to the kafka.cluster package. Six code examples are shown below, sorted by popularity by default. You can upvote the examples you find useful; your feedback helps the system recommend better Java code examples.
Example 1: lookupBootstrap
import kafka.cluster.BrokerEndPoint; // import the required package/class

/**
 * Generates the Kafka bootstrap connection string from the metadata stored in Zookeeper.
 * Allows for backwards compatibility of the zookeeperConnect configuration.
 */
private String lookupBootstrap(String zookeeperConnect, SecurityProtocol securityProtocol) {
  ZkUtils zkUtils = ZkUtils.apply(zookeeperConnect, ZK_SESSION_TIMEOUT, ZK_CONNECTION_TIMEOUT,
      JaasUtils.isZkSecurityEnabled());
  try {
    List<BrokerEndPoint> endPoints =
        asJavaListConverter(zkUtils.getAllBrokerEndPointsForChannel(securityProtocol)).asJava();
    List<String> connections = new ArrayList<>();
    for (BrokerEndPoint endPoint : endPoints) {
      connections.add(endPoint.connectionString());
    }
    return StringUtils.join(connections, ',');
  } finally {
    zkUtils.close();
  }
}
Author: moueimei, Project: flume-release-1.7.0, Lines: 20, Source: KafkaSource.java
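The heart of lookupBootstrap is simply joining each broker's host:port pair with commas. A minimal, dependency-free sketch of that assembly step (the `Endpoint` stand-in class, `BootstrapJoin`, and `bootstrapString` are hypothetical names used for illustration, not Kafka API):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for kafka.cluster.BrokerEndPoint, for illustration only.
class Endpoint {
    final String host;
    final int port;

    Endpoint(String host, int port) {
        this.host = host;
        this.port = port;
    }

    // Mirrors BrokerEndPoint.connectionString(): "host:port"
    String connectionString() {
        return host + ":" + port;
    }
}

class BootstrapJoin {
    // Mirrors the join logic in lookupBootstrap, using String.join
    // in place of Commons Lang's StringUtils.join.
    static String bootstrapString(List<Endpoint> endPoints) {
        List<String> connections = new ArrayList<>();
        for (Endpoint e : endPoints) {
            connections.add(e.connectionString());
        }
        return String.join(",", connections);
    }
}
```

For two brokers broker1:9092 and broker2:9093, this produces the familiar bootstrap string "broker1:9092,broker2:9093".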
Example 2: MockKafkaSimpleConsumerFactory
import kafka.cluster.BrokerEndPoint; // import the required package/class

public MockKafkaSimpleConsumerFactory(String[] hosts, int[] ports, long[] partitionStartOffsets,
    long[] partitionEndOffsets, int[] partitionLeaderIndices, String topicName) {
  Preconditions.checkArgument(hosts.length == ports.length);
  this.hosts = hosts;
  this.ports = ports;
  brokerCount = hosts.length;

  brokerArray = new BrokerEndPoint[brokerCount];
  for (int i = 0; i < brokerCount; i++) {
    brokerArray[i] = new BrokerEndPoint(i, hosts[i], ports[i]);
  }

  Preconditions.checkArgument(partitionStartOffsets.length == partitionEndOffsets.length);
  Preconditions.checkArgument(partitionStartOffsets.length == partitionLeaderIndices.length);
  this.partitionStartOffsets = partitionStartOffsets;
  this.partitionEndOffsets = partitionEndOffsets;
  this.partitionLeaderIndices = partitionLeaderIndices;
  partitionCount = partitionStartOffsets.length;
  this.topicName = topicName;
}
Author: linkedin, Project: pinot, Lines: 22, Source: SimpleConsumerWrapperTest.java
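The mock factory above wires partitions to leader brokers through parallel arrays: partitionLeaderIndices[j] is an index into brokerArray. A small self-contained sketch of that mapping (the `LeaderMap` class and `leaderFor` method are hypothetical names, not part of the test above):

```java
// Hypothetical helper illustrating the parallel-array layout used by the mock
// factory: partitionLeaderIndices maps a partition to an index into the
// hosts/ports broker arrays.
class LeaderMap {
    // Returns the "host:port" of the leader broker for the given partition.
    static String leaderFor(String[] hosts, int[] ports, int[] partitionLeaderIndices, int partition) {
        int broker = partitionLeaderIndices[partition];
        return hosts[broker] + ":" + ports[broker];
    }
}
```

For example, with hosts {"a", "b"}, ports {1, 2}, and leader indices {1, 0}, partition 0 is led by broker 1 ("b:2") and partition 1 by broker 0 ("a:1").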
Example 3: getTopicPartitionLogSize
import kafka.cluster.BrokerEndPoint; // import the required package/class

/**
 * Gets the log size (latest offset) for the given topic and partition.
 * @param stat
 */
public void getTopicPartitionLogSize(TopicPartitionInfo stat) {
  BrokerEndPoint leader = findLeader(stat.getTopic(), stat.getPartition()).leader();
  SimpleConsumer consumer = getConsumerClient(leader.host(), leader.port());
  try {
    long logsize = getLastOffset(consumer, stat.getTopic(), stat.getPartition(),
        kafka.api.OffsetRequest.LatestTime());
    stat.setLogSize(logsize);
  } finally {
    consumer.close();
  }
}
Author: warlock-china, Project: azeroth, Lines: 17, Source: ZkConsumerCommand.java
Example 4: getTopicPartitionLogSize
import kafka.cluster.BrokerEndPoint; // import the required package/class

/**
 * Gets the log size (latest offset) for the given topic and partition.
 * @param stat
 */
public void getTopicPartitionLogSize(TopicPartitionInfo stat) {
  BrokerEndPoint leader = findLeader(stat.getTopic(), stat.getPartition()).leader();
  SimpleConsumer consumer = getConsumerClient(leader.host(), leader.port());
  try {
    long logsize = getLastOffset(consumer, stat.getTopic(), stat.getPartition(),
        kafka.api.OffsetRequest.LatestTime());
    stat.setLogSize(logsize);
  } finally {
    consumer.close();
  }
}
Author: vakinge, Project: jeesuite-libs, Lines: 16, Source: ZkConsumerCommand.java
Example 5: send
import kafka.cluster.BrokerEndPoint; // import the required package/class

@Override
public TopicMetadataResponse send(TopicMetadataRequest request) {
  java.util.List<String> topics = request.topics();
  TopicMetadata[] topicMetadataArray = new TopicMetadata[topics.size()];

  for (int i = 0; i < topicMetadataArray.length; i++) {
    String topic = topics.get(i);
    if (!topic.equals(topicName)) {
      topicMetadataArray[i] = new TopicMetadata(topic, null, Errors.UNKNOWN_TOPIC_OR_PARTITION.code());
    } else {
      PartitionMetadata[] partitionMetadataArray = new PartitionMetadata[partitionCount];
      for (int j = 0; j < partitionCount; j++) {
        java.util.List<BrokerEndPoint> emptyJavaList = Collections.emptyList();
        List<BrokerEndPoint> emptyScalaList = JavaConversions.asScalaBuffer(emptyJavaList).toList();
        partitionMetadataArray[j] = new PartitionMetadata(j, Some.apply(brokerArray[partitionLeaderIndices[j]]),
            emptyScalaList, emptyScalaList, Errors.NONE.code());
      }
      Seq<PartitionMetadata> partitionsMetadata = List.fromArray(partitionMetadataArray);
      topicMetadataArray[i] = new TopicMetadata(topic, partitionsMetadata, Errors.NONE.code());
    }
  }

  Seq<BrokerEndPoint> brokers = List.fromArray(brokerArray);
  Seq<TopicMetadata> topicsMetadata = List.fromArray(topicMetadataArray);
  return new TopicMetadataResponse(new kafka.api.TopicMetadataResponse(brokers, topicsMetadata, -1));
}
Author: linkedin, Project: pinot, Lines: 29, Source: SimpleConsumerWrapperTest.java
Example 6: getHostPort
import kafka.cluster.BrokerEndPoint; // import the required package/class

private static String getHostPort(BrokerEndPoint leader) {
  if (leader != null) {
    return leader.host() + ":" + leader.port();
  }
  return null;
}
Author: uber, Project: chaperone, Lines: 7, Source: KafkaMonitor.java
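getHostPort is a null-safe formatter: a partition without a known leader yields null rather than a broken address. A dependency-free sketch of the same behavior, taking the host directly instead of a BrokerEndPoint (the `HostPort` class name and `format` method are hypothetical, for illustration only):

```java
// Hypothetical helper mirroring getHostPort: a null host stands in for
// a null BrokerEndPoint leader (e.g. during a leader election).
class HostPort {
    static String format(String host, int port) {
        if (host != null) {
            return host + ":" + port;
        }
        return null; // no leader known for this partition
    }
}
```

Callers are expected to check for null before using the result, just as code calling getHostPort would.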
Note: the kafka.cluster.BrokerEndPoint examples in this article were collected from source code and documentation platforms such as GitHub and MSDocs; the snippets were selected from open-source projects contributed by various developers. Copyright of the source code remains with the original authors; consult the corresponding project's License before redistributing or using it. Do not reproduce without permission.