Java ClientOperationHeaderProto Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.ClientOperationHeaderProto. If you are wondering what ClientOperationHeaderProto is for, how to use it, or what working code looks like, the curated examples below should help.



The ClientOperationHeaderProto class belongs to the org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos package. Twelve representative code examples of the class are shown below, ordered by popularity by default.
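
Before the examples, here is a minimal, self-contained sketch of how such a header is assembled with the generated protobuf builders, mirroring the builder chains used in the snippets below. It assumes the generated HDFS protobuf stubs (hadoop-hdfs) and protobuf-java are on the classpath; the pool id, block id, generation stamp, and client name are made-up placeholder values, not real cluster state.

import org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.BaseHeaderProto;
import org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.ClientOperationHeaderProto;
import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.ExtendedBlockProto;
import org.apache.hadoop.security.proto.SecurityProtos.TokenProto;

import com.google.protobuf.ByteString;

public class ClientHeaderSketch {
  public static void main(String[] args) {
    // The block this operation targets (placeholder values).
    ExtendedBlockProto block = ExtendedBlockProto.newBuilder()
        .setPoolId("BP-example-pool")
        .setBlockId(1073741825L)
        .setGenerationStamp(1001L)
        .build();

    // An empty block access token; a secured cluster would carry real bytes.
    TokenProto token = TokenProto.newBuilder()
        .setIdentifier(ByteString.EMPTY)
        .setPassword(ByteString.EMPTY)
        .setKind("")
        .setService("")
        .build();

    // The client operation header: base header (block + token) plus client name.
    ClientOperationHeaderProto header = ClientOperationHeaderProto.newBuilder()
        .setBaseHeader(BaseHeaderProto.newBuilder()
            .setBlock(block)
            .setToken(token))
        .setClientName("DFSClient_example")
        .build();

    System.out.println(header);
  }
}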

Example 1: buildClientHeader

import org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.ClientOperationHeaderProto; // import the required package/class
static ClientOperationHeaderProto buildClientHeader(ExtendedBlock blk,
    String client, Token<BlockTokenIdentifier> blockToken) {
  ClientOperationHeaderProto header =
    ClientOperationHeaderProto.newBuilder()
      .setBaseHeader(buildBaseHeader(blk, blockToken))
      .setClientName(client)
      .build();
  return header;
}
 
Developer: naver, Project: hadoop, Lines: 10, Source: DataTransferProtoUtil.java


Example 2: buildClientHeader

import org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.ClientOperationHeaderProto; // import the required package/class
static ClientOperationHeaderProto buildClientHeader(ExtendedBlock blk,
    String client, Token<BlockTokenIdentifier> blockToken) {
  return ClientOperationHeaderProto.newBuilder()
    .setBaseHeader(buildBaseHeader(blk, blockToken))
    .setClientName(client)
    .build();
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 8, Source: DataTransferProtoUtil.java


Example 3: writeBlock

import org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.ClientOperationHeaderProto; // import the required package/class
@Override
public void writeBlock(final ExtendedBlock blk,
    final Token<BlockTokenIdentifier> blockToken,
    final String clientName,
    final DatanodeInfo[] targets,
    final DatanodeInfo source,
    final BlockConstructionStage stage,
    final int pipelineSize,
    final long minBytesRcvd,
    final long maxBytesRcvd,
    final long latestGenerationStamp,
    DataChecksum requestedChecksum) throws IOException {
  ClientOperationHeaderProto header = DataTransferProtoUtil.buildClientHeader(
      blk, clientName, blockToken);
  
  ChecksumProto checksumProto =
    DataTransferProtoUtil.toProto(requestedChecksum);

  OpWriteBlockProto.Builder proto = OpWriteBlockProto.newBuilder()
    .setHeader(header)
    .addAllTargets(PBHelper.convert(targets, 1))
    .setStage(toProto(stage))
    .setPipelineSize(pipelineSize)
    .setMinBytesRcvd(minBytesRcvd)
    .setMaxBytesRcvd(maxBytesRcvd)
    .setLatestGenerationStamp(latestGenerationStamp)
    .setRequestedChecksum(checksumProto);
  
  if (source != null) {
    proto.setSource(PBHelper.convertDatanodeInfo(source));
  }

  send(out, Op.WRITE_BLOCK, proto.build());
}
 
Developer: ict-carch, Project: hadoop-plus, Lines: 35, Source: Sender.java


Example 4: writeBlock

import org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.ClientOperationHeaderProto; // import the required package/class
@Override
public void writeBlock(final ExtendedBlock blk,
    final Token<BlockTokenIdentifier> blockToken, final String clientName,
    final DatanodeInfo[] targets, final DatanodeInfo source,
    final BlockConstructionStage stage, final int pipelineSize,
    final long minBytesRcvd, final long maxBytesRcvd,
    final long latestGenerationStamp, DataChecksum requestedChecksum)
    throws IOException {
  ClientOperationHeaderProto header =
      DataTransferProtoUtil.buildClientHeader(blk, clientName, blockToken);
  
  ChecksumProto checksumProto =
      DataTransferProtoUtil.toProto(requestedChecksum);

  OpWriteBlockProto.Builder proto =
      OpWriteBlockProto.newBuilder().setHeader(header)
          .addAllTargets(PBHelper.convert(targets, 1))
          .setStage(toProto(stage)).setPipelineSize(pipelineSize)
          .setMinBytesRcvd(minBytesRcvd).setMaxBytesRcvd(maxBytesRcvd)
          .setLatestGenerationStamp(latestGenerationStamp)
          .setRequestedChecksum(checksumProto);
  
  if (source != null) {
    proto.setSource(PBHelper.convertDatanodeInfo(source));
  }

  send(out, Op.WRITE_BLOCK, proto.build());
}
 
Developer: hopshadoop, Project: hops, Lines: 29, Source: Sender.java


Example 5: buildClientHeader

import org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.ClientOperationHeaderProto; // import the required package/class
static ClientOperationHeaderProto buildClientHeader(ExtendedBlock blk,
    String client, Token<BlockTokenIdentifier> blockToken) {
  ClientOperationHeaderProto header = ClientOperationHeaderProto.newBuilder()
      .setBaseHeader(buildBaseHeader(blk, blockToken)).setClientName(client)
      .build();
  return header;
}
 
Developer: hopshadoop, Project: hops, Lines: 8, Source: DataTransferProtoUtil.java


Example 6: writeBlock

import org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.ClientOperationHeaderProto; // import the required package/class
@Override
public void writeBlock(final ExtendedBlock blk,
    final Token<BlockTokenIdentifier> blockToken,
    final String clientName,
    final DatanodeInfo[] targets,
    final DatanodeInfo source,
    final BlockConstructionStage stage,
    final int pipelineSize,
    final long minBytesRcvd,
    final long maxBytesRcvd,
    final long latestGenerationStamp,
    DataChecksum requestedChecksum,
    final CachingStrategy cachingStrategy) throws IOException {
  ClientOperationHeaderProto header = DataTransferProtoUtil.buildClientHeader(
      blk, clientName, blockToken);
  
  ChecksumProto checksumProto =
    DataTransferProtoUtil.toProto(requestedChecksum);

  OpWriteBlockProto.Builder proto = OpWriteBlockProto.newBuilder()
    .setHeader(header)
    .addAllTargets(PBHelper.convert(targets, 1))
    .setStage(toProto(stage))
    .setPipelineSize(pipelineSize)
    .setMinBytesRcvd(minBytesRcvd)
    .setMaxBytesRcvd(maxBytesRcvd)
    .setLatestGenerationStamp(latestGenerationStamp)
    .setRequestedChecksum(checksumProto)
    .setCachingStrategy(getCachingStrategy(cachingStrategy));
  
  if (source != null) {
    proto.setSource(PBHelper.convertDatanodeInfo(source));
  }

  send(out, Op.WRITE_BLOCK, proto.build());
}
 
Developer: chendave, Project: hadoop-TCP, Lines: 37, Source: Sender.java


Example 7: writeBlock

import org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.ClientOperationHeaderProto; // import the required package/class
@Override
public void writeBlock(final ExtendedBlock blk,
    final StorageType storageType, 
    final Token<BlockTokenIdentifier> blockToken,
    final String clientName,
    final DatanodeInfo[] targets,
    final StorageType[] targetStorageTypes, 
    final DatanodeInfo source,
    final BlockConstructionStage stage,
    final int pipelineSize,
    final long minBytesRcvd,
    final long maxBytesRcvd,
    final long latestGenerationStamp,
    DataChecksum requestedChecksum,
    final CachingStrategy cachingStrategy,
    final boolean allowLazyPersist,
    final boolean pinning,
    final boolean[] targetPinnings) throws IOException {
  ClientOperationHeaderProto header = DataTransferProtoUtil.buildClientHeader(
      blk, clientName, blockToken);
  
  ChecksumProto checksumProto =
    DataTransferProtoUtil.toProto(requestedChecksum);

  OpWriteBlockProto.Builder proto = OpWriteBlockProto.newBuilder()
    .setHeader(header)
    .setStorageType(PBHelper.convertStorageType(storageType))
    .addAllTargets(PBHelper.convert(targets, 1))
    .addAllTargetStorageTypes(PBHelper.convertStorageTypes(targetStorageTypes, 1))
    .setStage(toProto(stage))
    .setPipelineSize(pipelineSize)
    .setMinBytesRcvd(minBytesRcvd)
    .setMaxBytesRcvd(maxBytesRcvd)
    .setLatestGenerationStamp(latestGenerationStamp)
    .setRequestedChecksum(checksumProto)
    .setCachingStrategy(getCachingStrategy(cachingStrategy))
    .setAllowLazyPersist(allowLazyPersist)
    .setPinning(pinning)
    .addAllTargetPinnings(PBHelper.convert(targetPinnings, 1));
  
  if (source != null) {
    proto.setSource(PBHelper.convertDatanodeInfo(source));
  }

  send(out, Op.WRITE_BLOCK, proto.build());
}
 
Developer: naver, Project: hadoop, Lines: 47, Source: Sender.java


Example 8: continueTraceSpan

import org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.ClientOperationHeaderProto; // import the required package/class
public static TraceScope continueTraceSpan(ClientOperationHeaderProto header,
    String description) {
  return continueTraceSpan(header.getBaseHeader(), description);
}
 
Developer: naver, Project: hadoop, Lines: 5, Source: DataTransferProtoUtil.java
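
For context, continueTraceSpan is typically called on the receiving side to resume the span the client attached to the request header, and the returned scope is closed in a finally block so the span always ends. The sketch below is self-contained and illustrative only: this TraceScope interface is a hypothetical stand-in for HTrace's class of the same name, not the real API.

import java.io.Closeable;

public class TraceScopeSketch {
  // Hypothetical stand-in for HTrace's TraceScope, for illustration only.
  interface TraceScope extends Closeable {
    @Override
    void close(); // narrowed to not throw, so no checked-exception handling
  }

  // Stand-in for the helper above: real code rebuilds the span from the
  // header's trace info and may return null when tracing is disabled.
  static TraceScope continueTraceSpan(String description) {
    System.out.println("resuming span: " + description);
    return () -> System.out.println("closing span: " + description);
  }

  static void handleWriteBlock() {
    TraceScope traceScope = continueTraceSpan("OpWriteBlockProto");
    try {
      // dispatch the actual operation here, e.g. writeBlock(...)
    } finally {
      if (traceScope != null) {
        traceScope.close();
      }
    }
  }

  public static void main(String[] args) {
    handleWriteBlock();
  }
}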


Example 9: writeBlock

import org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.ClientOperationHeaderProto; // import the required package/class
@Override
public void writeBlock(final ExtendedBlock blk,
    final StorageType storageType,
    final Token<BlockTokenIdentifier> blockToken,
    final String clientName,
    final DatanodeInfo[] targets,
    final StorageType[] targetStorageTypes,
    final DatanodeInfo source,
    final BlockConstructionStage stage,
    final int pipelineSize,
    final long minBytesRcvd,
    final long maxBytesRcvd,
    final long latestGenerationStamp,
    DataChecksum requestedChecksum,
    final CachingStrategy cachingStrategy,
    final boolean allowLazyPersist,
    final boolean pinning,
    final boolean[] targetPinnings) throws IOException {
  ClientOperationHeaderProto header = DataTransferProtoUtil.buildClientHeader(
      blk, clientName, blockToken);

  ChecksumProto checksumProto =
      DataTransferProtoUtil.toProto(requestedChecksum);

  OpWriteBlockProto.Builder proto = OpWriteBlockProto.newBuilder()
      .setHeader(header)
      .setStorageType(PBHelperClient.convertStorageType(storageType))
      .addAllTargets(PBHelperClient.convert(targets, 1))
      .addAllTargetStorageTypes(
          PBHelperClient.convertStorageTypes(targetStorageTypes, 1))
      .setStage(toProto(stage))
      .setPipelineSize(pipelineSize)
      .setMinBytesRcvd(minBytesRcvd)
      .setMaxBytesRcvd(maxBytesRcvd)
      .setLatestGenerationStamp(latestGenerationStamp)
      .setRequestedChecksum(checksumProto)
      .setCachingStrategy(getCachingStrategy(cachingStrategy))
      .setAllowLazyPersist(allowLazyPersist)
      .setPinning(pinning)
      .addAllTargetPinnings(PBHelperClient.convert(targetPinnings, 1));

  if (source != null) {
    proto.setSource(PBHelperClient.convertDatanodeInfo(source));
  }

  send(out, Op.WRITE_BLOCK, proto.build());
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 48, Source: Sender.java


Example 10: continueTraceSpan

import org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.ClientOperationHeaderProto; // import the required package/class
private TraceScope continueTraceSpan(ClientOperationHeaderProto header,
                                           String description) {
  return continueTraceSpan(header.getBaseHeader(), description);
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 5, Source: Receiver.java


Example 11: writeBlock

import org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.ClientOperationHeaderProto; // import the required package/class
@Override
public void writeBlock(final ExtendedBlock blk,
    final StorageType storageType, 
    final Token<BlockTokenIdentifier> blockToken,
    final String clientName,
    final DatanodeInfo[] targets,
    final StorageType[] targetStorageTypes, 
    final DatanodeInfo source,
    final BlockConstructionStage stage,
    final int pipelineSize,
    final long minBytesRcvd,
    final long maxBytesRcvd,
    final long latestGenerationStamp,
    DataChecksum requestedChecksum,
    final CachingStrategy cachingStrategy,
    final boolean allowLazyPersist) throws IOException {
  ClientOperationHeaderProto header = DataTransferProtoUtil.buildClientHeader(
      blk, clientName, blockToken);
  
  ChecksumProto checksumProto =
    DataTransferProtoUtil.toProto(requestedChecksum);

  OpWriteBlockProto.Builder proto = OpWriteBlockProto.newBuilder()
    .setHeader(header)
    .setStorageType(PBHelper.convertStorageType(storageType))
    .addAllTargets(PBHelper.convert(targets, 1))
    .addAllTargetStorageTypes(PBHelper.convertStorageTypes(targetStorageTypes, 1))
    .setStage(toProto(stage))
    .setPipelineSize(pipelineSize)
    .setMinBytesRcvd(minBytesRcvd)
    .setMaxBytesRcvd(maxBytesRcvd)
    .setLatestGenerationStamp(latestGenerationStamp)
    .setRequestedChecksum(checksumProto)
    .setCachingStrategy(getCachingStrategy(cachingStrategy))
    .setAllowLazyPersist(allowLazyPersist);
  
  if (source != null) {
    proto.setSource(PBHelper.convertDatanodeInfo(source));
  }

  send(out, Op.WRITE_BLOCK, proto.build());
}
 
Developer: Nextzero, Project: hadoop-2.6.0-cdh5.4.3, Lines: 43, Source: Sender.java


Example 12: connectToDataNodes

import org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.ClientOperationHeaderProto; // import the required package/class
private static List<Future<Channel>> connectToDataNodes(Configuration conf, DFSClient client,
    String clientName, LocatedBlock locatedBlock, long maxBytesRcvd, long latestGS,
    BlockConstructionStage stage, DataChecksum summer, EventLoopGroup eventLoopGroup,
    Class<? extends Channel> channelClass) {
  Enum<?>[] storageTypes = locatedBlock.getStorageTypes();
  DatanodeInfo[] datanodeInfos = locatedBlock.getLocations();
  boolean connectToDnViaHostname =
      conf.getBoolean(DFS_CLIENT_USE_DN_HOSTNAME, DFS_CLIENT_USE_DN_HOSTNAME_DEFAULT);
  int timeoutMs = conf.getInt(DFS_CLIENT_SOCKET_TIMEOUT_KEY, READ_TIMEOUT);
  ExtendedBlock blockCopy = new ExtendedBlock(locatedBlock.getBlock());
  blockCopy.setNumBytes(locatedBlock.getBlockSize());
  ClientOperationHeaderProto header = ClientOperationHeaderProto.newBuilder()
      .setBaseHeader(BaseHeaderProto.newBuilder().setBlock(PB_HELPER.convert(blockCopy))
          .setToken(PB_HELPER.convert(locatedBlock.getBlockToken())))
      .setClientName(clientName).build();
  ChecksumProto checksumProto = DataTransferProtoUtil.toProto(summer);
  OpWriteBlockProto.Builder writeBlockProtoBuilder = OpWriteBlockProto.newBuilder()
      .setHeader(header).setStage(OpWriteBlockProto.BlockConstructionStage.valueOf(stage.name()))
      .setPipelineSize(1).setMinBytesRcvd(locatedBlock.getBlock().getNumBytes())
      .setMaxBytesRcvd(maxBytesRcvd).setLatestGenerationStamp(latestGS)
      .setRequestedChecksum(checksumProto)
      .setCachingStrategy(CachingStrategyProto.newBuilder().setDropBehind(true).build());
  List<Future<Channel>> futureList = new ArrayList<>(datanodeInfos.length);
  for (int i = 0; i < datanodeInfos.length; i++) {
    DatanodeInfo dnInfo = datanodeInfos[i];
    Enum<?> storageType = storageTypes[i];
    Promise<Channel> promise = eventLoopGroup.next().newPromise();
    futureList.add(promise);
    String dnAddr = dnInfo.getXferAddr(connectToDnViaHostname);
    new Bootstrap().group(eventLoopGroup).channel(channelClass)
        .option(CONNECT_TIMEOUT_MILLIS, timeoutMs).handler(new ChannelInitializer<Channel>() {

          @Override
          protected void initChannel(Channel ch) throws Exception {
            // we need to get the remote address of the channel so we can only move on after
            // channel connected. Leave an empty implementation here because netty does not allow
            // a null handler.
          }
        }).connect(NetUtils.createSocketAddr(dnAddr)).addListener(new ChannelFutureListener() {

          @Override
          public void operationComplete(ChannelFuture future) throws Exception {
            if (future.isSuccess()) {
              initialize(conf, future.channel(), dnInfo, storageType, writeBlockProtoBuilder,
                timeoutMs, client, locatedBlock.getBlockToken(), promise);
            } else {
              promise.tryFailure(future.cause());
            }
          }
        });
  }
  return futureList;
}
 
Developer: apache, Project: hbase, Lines: 54, Source: FanOutOneBlockAsyncDFSOutputHelper.java



Note: The org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.ClientOperationHeaderProto examples in this article were collected from source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by various developers; copyright remains with the original authors, and distribution and use should follow each project's license. Do not republish without permission.

