
Java BlockTargetPair Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor.BlockTargetPair. If you have been wondering what BlockTargetPair is for, how to use it, or where to find real-world examples, the curated class examples below should help.



BlockTargetPair is a nested class of org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor. Twelve code examples of the class are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Java code examples.
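For orientation before the examples: BlockTargetPair is a small value pair associating a block with its transfer destinations. The sketch below is reconstructed from how the examples use it, not copied from any one branch; in newer Hadoop branches targets is a DatanodeStorageInfo[] (Examples 1-5), while older branches use DatanodeInfo[]/DatanodeDescriptor[] (Examples 6-10).

public static class BlockTargetPair {
  // Reconstructed sketch (assumption): matches the field accesses p.block and
  // p.targets in the examples below; the real declaration is nested inside
  // DatanodeDescriptor and the element type of targets varies by Hadoop version.
  public final Block block;                   // block to be transferred
  public final DatanodeStorageInfo[] targets; // destination storages

  BlockTargetPair(Block block, DatanodeStorageInfo[] targets) {
    this.block = block;
    this.targets = targets;
  }
}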

Example 1: BlockCommand

import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor.BlockTargetPair; //import the required package/class
/**
 * Create BlockCommand for transferring blocks to another datanode
 * @param blocktargetlist    blocks to be transferred 
 */
public BlockCommand(int action, String poolId,
    List<BlockTargetPair> blocktargetlist) {
  super(action);
  this.poolId = poolId;
  blocks = new Block[blocktargetlist.size()]; 
  targets = new DatanodeInfo[blocks.length][];
  targetStorageTypes = new StorageType[blocks.length][];
  targetStorageIDs = new String[blocks.length][];

  for(int i = 0; i < blocks.length; i++) {
    BlockTargetPair p = blocktargetlist.get(i);
    blocks[i] = p.block;
    targets[i] = DatanodeStorageInfo.toDatanodeInfos(p.targets);
    targetStorageTypes[i] = DatanodeStorageInfo.toStorageTypes(p.targets);
    targetStorageIDs[i] = DatanodeStorageInfo.toStorageIDs(p.targets);
  }
}
 
Developer ID: naver, Project: hadoop, Lines: 22, Source: BlockCommand.java
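The constructor above flattens the list into parallel arrays, so blocks[i] is paired with targets[i], targetStorageTypes[i], and targetStorageIDs[i], which keeps the wire format simple. A hedged sketch of reading those arrays back on the consumer side; cmd is an assumed BlockCommand instance, and the accessor names getBlocks()/getTargets() should be verified against your Hadoop version:

// Sketch: walk the parallel arrays built by the constructor above.
// cmd is assumed to be the BlockCommand created there; the accessor names
// are believed to match BlockCommand but are not taken from this article.
Block[] blks = cmd.getBlocks();
DatanodeInfo[][] tgts = cmd.getTargets();
for (int i = 0; i < blks.length; i++) {
  System.out.println(blks[i] + " -> " + java.util.Arrays.toString(tgts[i]));
}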


Example 2: BlockCommand

import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor.BlockTargetPair; //import the required package/class
/**
 * Create BlockCommand for transferring blocks to another datanode
 * @param blocktargetlist    blocks to be transferred 
 */
public BlockCommand(int action, String poolId,
    List<BlockTargetPair> blocktargetlist) {
  super(action);
  this.poolId = poolId;
  blocks = new Block[blocktargetlist.size()]; 
  targets = new DatanodeInfo[blocks.length][];
  targetStorageIDs = new String[blocks.length][];

  for(int i = 0; i < blocks.length; i++) {
    BlockTargetPair p = blocktargetlist.get(i);
    blocks[i] = p.block;
    targets[i] = DatanodeStorageInfo.toDatanodeInfos(p.targets);
    targetStorageIDs[i] = DatanodeStorageInfo.toStorageIDs(p.targets);
  }
}
 
Developer ID: Seagate, Project: hadoop-on-lustre2, Lines: 20, Source: BlockCommand.java


Example 3: scheduleSingleReplication

import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor.BlockTargetPair; //import the required package/class
private DatanodeStorageInfo[] scheduleSingleReplication(Block block) {
  // list for priority 1
  List<Block> list_p1 = new ArrayList<Block>();
  list_p1.add(block);

  // list of lists for each priority
  List<List<Block>> list_all = new ArrayList<List<Block>>();
  list_all.add(new ArrayList<Block>()); // for priority 0
  list_all.add(list_p1); // for priority 1

  assertEquals("Block not initially pending replication", 0,
      bm.pendingReplications.getNumReplicas(block));
  assertEquals(
      "computeReplicationWork should indicate replication is needed", 1,
      bm.computeReplicationWorkForBlocks(list_all));
  assertTrue("replication is pending after work is computed",
      bm.pendingReplications.getNumReplicas(block) > 0);

  LinkedListMultimap<DatanodeStorageInfo, BlockTargetPair> repls = getAllPendingReplications();
  assertEquals(1, repls.size());
  Entry<DatanodeStorageInfo, BlockTargetPair> repl =
    repls.entries().iterator().next();
      
  DatanodeStorageInfo[] targets = repl.getValue().targets;

  DatanodeStorageInfo[] pipeline = new DatanodeStorageInfo[1 + targets.length];
  pipeline[0] = repl.getKey();
  System.arraycopy(targets, 0, pipeline, 1, targets.length);

  return pipeline;
}
 
Developer ID: naver, Project: hadoop, Lines: 32, Source: TestBlockManager.java
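The returned array models a replication pipeline: element 0 is the source storage the block manager picked, and the remaining elements are the transfer targets. A hypothetical follow-up in a test might split it like this (variable names are illustrative, not from TestBlockManager):

// Hypothetical split of the pipeline returned above (illustrative only).
DatanodeStorageInfo[] pipeline = scheduleSingleReplication(block);
DatanodeStorageInfo source = pipeline[0];                        // chosen source
DatanodeStorageInfo[] dests =
    java.util.Arrays.copyOfRange(pipeline, 1, pipeline.length);  // transfer targets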


Example 4: getAllPendingReplications

import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor.BlockTargetPair; //import the required package/class
private LinkedListMultimap<DatanodeStorageInfo, BlockTargetPair> getAllPendingReplications() {
  LinkedListMultimap<DatanodeStorageInfo, BlockTargetPair> repls =
    LinkedListMultimap.create();
  for (DatanodeDescriptor dn : nodes) {
    List<BlockTargetPair> thisRepls = dn.getReplicationCommand(10);
    if (thisRepls != null) {
      for(DatanodeStorageInfo storage : dn.getStorageInfos()) {
        repls.putAll(storage, thisRepls);
      }
    }
  }
  return repls;
}
 
Developer ID: naver, Project: hadoop, Lines: 14, Source: TestBlockManager.java
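Guava's LinkedListMultimap fits here because one storage can carry several pending transfers and the test cares about insertion order; note that size() counts key-value entries rather than keys, which is what lets Example 3 assert repls.size() == 1. A self-contained illustration of that behavior (standalone demo, not Hadoop code):

import com.google.common.collect.LinkedListMultimap;
import java.util.Arrays;

// Standalone demo of the multimap semantics relied on above.
public class MultimapDemo {
  public static void main(String[] args) {
    LinkedListMultimap<String, Integer> m = LinkedListMultimap.create();
    m.putAll("storage-1", Arrays.asList(1, 2)); // two pending transfers
    m.put("storage-2", 3);
    System.out.println(m.size());           // 3: entries, not distinct keys
    System.out.println(m.get("storage-1")); // [1, 2] in insertion order
  }
}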


Example 5: scheduleSingleReplication

import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor.BlockTargetPair; //import the required package/class
private DatanodeStorageInfo[] scheduleSingleReplication(BlockInfo block) {
  // list for priority 1
  List<BlockInfo> list_p1 = new ArrayList<>();
  list_p1.add(block);

  // list of lists for each priority
  List<List<BlockInfo>> list_all = new ArrayList<>();
  list_all.add(new ArrayList<BlockInfo>()); // for priority 0
  list_all.add(list_p1); // for priority 1

  assertEquals("Block not initially pending replication", 0,
      bm.pendingReplications.getNumReplicas(block));
  assertEquals(
      "computeBlockRecoveryWork should indicate replication is needed", 1,
      bm.computeRecoveryWorkForBlocks(list_all));
  assertTrue("replication is pending after work is computed",
      bm.pendingReplications.getNumReplicas(block) > 0);

  LinkedListMultimap<DatanodeStorageInfo, BlockTargetPair> repls = getAllPendingReplications();
  assertEquals(1, repls.size());
  Entry<DatanodeStorageInfo, BlockTargetPair> repl =
    repls.entries().iterator().next();
      
  DatanodeStorageInfo[] targets = repl.getValue().targets;

  DatanodeStorageInfo[] pipeline = new DatanodeStorageInfo[1 + targets.length];
  pipeline[0] = repl.getKey();
  System.arraycopy(targets, 0, pipeline, 1, targets.length);

  return pipeline;
}
 
Developer ID: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 32, Source: TestBlockManager.java


Example 6: BlockCommand

import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor.BlockTargetPair; //import the required package/class
/**
 * Create BlockCommand for transferring blocks to another datanode
 * @param blocktargetlist    blocks to be transferred 
 */
public BlockCommand(int action, String poolId,
    List<BlockTargetPair> blocktargetlist) {
  super(action);
  this.poolId = poolId;
  blocks = new Block[blocktargetlist.size()]; 
  targets = new DatanodeInfo[blocks.length][];
  for(int i = 0; i < blocks.length; i++) {
    BlockTargetPair p = blocktargetlist.get(i);
    blocks[i] = p.block;
    targets[i] = p.targets;
  }
}
 
Developer ID: ict-carch, Project: hadoop-plus, Lines: 17, Source: BlockCommand.java


Example 7: scheduleSingleReplication

import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor.BlockTargetPair; //import the required package/class
private DatanodeDescriptor[] scheduleSingleReplication(Block block) {
  // list for priority 1
  List<Block> list_p1 = new ArrayList<Block>();
  list_p1.add(block);

  // list of lists for each priority
  List<List<Block>> list_all = new ArrayList<List<Block>>();
  list_all.add(new ArrayList<Block>()); // for priority 0
  list_all.add(list_p1); // for priority 1

  assertEquals("Block not initially pending replication", 0,
      bm.pendingReplications.getNumReplicas(block));
  assertEquals(
      "computeReplicationWork should indicate replication is needed", 1,
      bm.computeReplicationWorkForBlocks(list_all));
  assertTrue("replication is pending after work is computed",
      bm.pendingReplications.getNumReplicas(block) > 0);

  LinkedListMultimap<DatanodeDescriptor, BlockTargetPair> repls = getAllPendingReplications();
  assertEquals(1, repls.size());
  Entry<DatanodeDescriptor, BlockTargetPair> repl =
    repls.entries().iterator().next();
      
  DatanodeDescriptor[] targets = repl.getValue().targets;

  DatanodeDescriptor[] pipeline = new DatanodeDescriptor[1 + targets.length];
  pipeline[0] = repl.getKey();
  System.arraycopy(targets, 0, pipeline, 1, targets.length);

  return pipeline;
}
 
Developer ID: ict-carch, Project: hadoop-plus, Lines: 32, Source: TestBlockManager.java


Example 8: getAllPendingReplications

import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor.BlockTargetPair; //import the required package/class
private LinkedListMultimap<DatanodeDescriptor, BlockTargetPair> getAllPendingReplications() {
  LinkedListMultimap<DatanodeDescriptor, BlockTargetPair> repls =
    LinkedListMultimap.create();
  for (DatanodeDescriptor dn : nodes) {
    List<BlockTargetPair> thisRepls = dn.getReplicationCommand(10);
    if (thisRepls != null) {
      repls.putAll(dn, thisRepls);
    }
  }
  return repls;
}
 
Developer ID: ict-carch, Project: hadoop-plus, Lines: 12, Source: TestBlockManager.java


Example 9: BlockCommand

import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor.BlockTargetPair; //import the required package/class
/**
 * Create BlockCommand for transferring blocks to another datanode
 *
 * @param blocktargetlist
 *     blocks to be transferred
 */
public BlockCommand(int action, String poolId,
    List<BlockTargetPair> blocktargetlist) {
  super(action);
  this.poolId = poolId;
  blocks = new Block[blocktargetlist.size()];
  targets = new DatanodeInfo[blocks.length][];
  for (int i = 0; i < blocks.length; i++) {
    BlockTargetPair p = blocktargetlist.get(i);
    blocks[i] = p.block;
    targets[i] = p.targets;
  }
}
 
Developer ID: hopshadoop, Project: hops, Lines: 19, Source: BlockCommand.java


Example 10: getAllPendingReplications

import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor.BlockTargetPair; //import the required package/class
private LinkedListMultimap<DatanodeDescriptor, BlockTargetPair> getAllPendingReplications() {
  LinkedListMultimap<DatanodeDescriptor, BlockTargetPair> repls =
      LinkedListMultimap.create();
  for (DatanodeDescriptor dn : nodes) {
    List<BlockTargetPair> thisRepls = dn.getReplicationCommand(10);
    if (thisRepls != null) {
      repls.putAll(dn, thisRepls);
    }
  }
  return repls;
}
 
Developer ID: hopshadoop, Project: hops, Lines: 12, Source: TestBlockManager.java


Example 11: handleHeartbeat

import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor.BlockTargetPair; //import the required package/class
/** Handle heartbeat from datanodes. */
public DatanodeCommand[] handleHeartbeat(DatanodeRegistration nodeReg,
    StorageReport[] reports, final String blockPoolId,
    long cacheCapacity, long cacheUsed, int xceiverCount, 
    int maxTransfers, int failedVolumes,
    VolumeFailureSummary volumeFailureSummary) throws IOException {
  final DatanodeDescriptor nodeinfo;
  try {
    nodeinfo = getDatanode(nodeReg);
  } catch (UnregisteredNodeException e) {
    return new DatanodeCommand[]{RegisterCommand.REGISTER};
  }

  // Check if this datanode should actually be shutdown instead.
  if (nodeinfo != null && nodeinfo.isDisallowed()) {
    setDatanodeDead(nodeinfo);
    throw new DisallowedDatanodeException(nodeinfo);
  }

  if (nodeinfo == null || !nodeinfo.isRegistered()) {
    return new DatanodeCommand[]{RegisterCommand.REGISTER};
  }
  heartbeatManager.updateHeartbeat(nodeinfo, reports, cacheCapacity,
      cacheUsed, xceiverCount, failedVolumes, volumeFailureSummary);

  // If we are in safemode, do not send back any recovery / replication
  // requests. Don't even drain the existing queue of work.
  if (namesystem.isInSafeMode()) {
    return new DatanodeCommand[0];
  }

  // block recovery command
  final BlockRecoveryCommand brCommand = getBlockRecoveryCommand(blockPoolId,
      nodeinfo);
  if (brCommand != null) {
    return new DatanodeCommand[]{brCommand};
  }

  final List<DatanodeCommand> cmds = new ArrayList<>();
  // check pending replication
  List<BlockTargetPair> pendingList = nodeinfo.getReplicationCommand(
      maxTransfers);
  if (pendingList != null) {
    cmds.add(new BlockCommand(DatanodeProtocol.DNA_TRANSFER, blockPoolId,
        pendingList));
  }
  // check pending erasure coding tasks
  List<BlockECRecoveryInfo> pendingECList = nodeinfo.getErasureCodeCommand(
      maxTransfers);
  if (pendingECList != null) {
    cmds.add(new BlockECRecoveryCommand(DNA_ERASURE_CODING_RECOVERY,
        pendingECList));
  }
  // check block invalidation
  Block[] blks = nodeinfo.getInvalidateBlocks(blockInvalidateLimit);
  if (blks != null) {
    cmds.add(new BlockCommand(DatanodeProtocol.DNA_INVALIDATE, blockPoolId,
        blks));
  }
  // cache commands
  addCacheCommands(blockPoolId, nodeinfo, cmds);
  // key update command
  blockManager.addKeyUpdateCommand(cmds, nodeinfo);

  // check for balancer bandwidth update
  if (nodeinfo.getBalancerBandwidth() > 0) {
    cmds.add(new BalancerBandwidthCommand(nodeinfo.getBalancerBandwidth()));
    // set back to 0 to indicate that datanode has been sent the new value
    nodeinfo.setBalancerBandwidth(0);
  }

  if (!cmds.isEmpty()) {
    return cmds.toArray(new DatanodeCommand[cmds.size()]);
  }

  return new DatanodeCommand[0];
}
 
Developer ID: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 78, Source: DatanodeManager.java


Example 12: handleHeartbeat

import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor.BlockTargetPair; //import the required package/class
/**
 * Handle heartbeat from datanodes.
 */
public DatanodeCommand[] handleHeartbeat(DatanodeRegistration nodeReg,
    final String blockPoolId, long capacity, long dfsUsed, long remaining,
    long blockPoolUsed, int xceiverCount, int maxTransfers, int failedVolumes)
    throws IOException {
  synchronized (heartbeatManager) {
    synchronized (datanodeMap) {
      DatanodeDescriptor nodeinfo = null;
      try {
        nodeinfo = getDatanode(nodeReg);
      } catch (UnregisteredNodeException e) {
        return new DatanodeCommand[]{RegisterCommand.REGISTER};
      }
      
      // Check if this datanode should actually be shutdown instead. 
      if (nodeinfo != null && nodeinfo.isDisallowed()) {
        setDatanodeDead(nodeinfo);
        throw new DisallowedDatanodeException(nodeinfo);
      }

      if (nodeinfo == null || !nodeinfo.isAlive) {
        return new DatanodeCommand[]{RegisterCommand.REGISTER};
      }

      heartbeatManager.updateHeartbeat(nodeinfo, capacity, dfsUsed, remaining,
          blockPoolUsed, xceiverCount, failedVolumes);
      
      //check lease recovery
      BlockInfoUnderConstruction[] blocks =
          nodeinfo.getLeaseRecoveryCommand(Integer.MAX_VALUE);
      if (blocks != null) {
        BlockRecoveryCommand brCommand =
            new BlockRecoveryCommand(blocks.length);
        for (BlockInfoUnderConstruction b : blocks) {
          brCommand.add(new RecoveringBlock(new ExtendedBlock(blockPoolId, b),
              getDataNodeDescriptorsTx(b), b.getBlockRecoveryId()));
        }
        return new DatanodeCommand[]{brCommand};
      }

      final List<DatanodeCommand> cmds = new ArrayList<>();
      //check pending replication
      List<BlockTargetPair> pendingList =
          nodeinfo.getReplicationCommand(maxTransfers);
      if (pendingList != null) {
        cmds.add(new BlockCommand(DatanodeProtocol.DNA_TRANSFER, blockPoolId,
            pendingList));
      }
      //check block invalidation
      Block[] blks = nodeinfo.getInvalidateBlocks(blockInvalidateLimit);
      if (blks != null) {
        cmds.add(
            new BlockCommand(DatanodeProtocol.DNA_INVALIDATE, blockPoolId,
                blks));
      }
      
      blockManager.addKeyUpdateCommand(cmds, nodeinfo);

      // check for balancer bandwidth update
      if (nodeinfo.getBalancerBandwidth() > 0) {
        cmds.add(
            new BalancerBandwidthCommand(nodeinfo.getBalancerBandwidth()));
        // set back to 0 to indicate that datanode has been sent the new value
        nodeinfo.setBalancerBandwidth(0);
      }

      if (!cmds.isEmpty()) {
        return cmds.toArray(new DatanodeCommand[cmds.size()]);
      }
    }
  }

  return new DatanodeCommand[0];
}
 
Developer ID: hopshadoop, Project: hops, Lines: 77, Source: DatanodeManager.java



Note: The org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor.BlockTargetPair examples in this article were collected from GitHub, MSDocs, and other source-code and documentation platforms. The snippets were selected from open-source projects; copyright remains with the original authors, and distribution and use should follow the corresponding project's License. Do not repost without permission.

