
Java UnregisteredNodeException Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.hdfs.protocol.UnregisteredNodeException. If you are wondering what the UnregisteredNodeException class is for, how to use it, or are looking for concrete examples, the curated class code examples below may help.



The UnregisteredNodeException class belongs to the org.apache.hadoop.hdfs.protocol package. 18 code examples of the class are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your ratings help our system recommend better Java code examples.
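Before diving into the examples, note the pattern they all share: the NameNode compares a node's registration or namespace identifier against its own expected value and throws UnregisteredNodeException on a mismatch. Below is a minimal, self-contained sketch of that check; the nested exception class here is a local stand-in, not the real Hadoop class, and the IDs are made up for illustration:

```java
// Simplified sketch of the NameNode's registration-check pattern.
// UnregisteredNodeException below is a local placeholder for
// org.apache.hadoop.hdfs.protocol.UnregisteredNodeException.
public class RegistrationCheckDemo {
    static class UnregisteredNodeException extends Exception {
        UnregisteredNodeException(String msg) { super(msg); }
    }

    // Throws when the node's registration ID does not match the expected one.
    static void verifyRequest(String nodeRegistrationId, String expectedId)
            throws UnregisteredNodeException {
        if (!expectedId.equals(nodeRegistrationId)) {
            throw new UnregisteredNodeException(
                "Registration IDs mismatched: got " + nodeRegistrationId
                + " but expected " + expectedId);
        }
    }

    public static void main(String[] args) {
        try {
            verifyRequest("NS-1;cid-42", "NS-1;cid-42"); // matches: no exception
            verifyRequest("NS-1;cid-42", "NS-2;cid-99"); // mismatched: throws
        } catch (UnregisteredNodeException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```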

Example 1: verifyJournalRequest

import org.apache.hadoop.hdfs.protocol.UnregisteredNodeException; // import the required package/class
/** 
 * Verifies a journal request
 */
private void verifyJournalRequest(JournalInfo journalInfo)
    throws IOException {
  verifyLayoutVersion(journalInfo.getLayoutVersion());
  String errorMsg = null;
  int expectedNamespaceID = namesystem.getNamespaceInfo().getNamespaceID();
  if (journalInfo.getNamespaceId() != expectedNamespaceID) {
    errorMsg = "Invalid namespaceID in journal request - expected " + expectedNamespaceID
        + " actual " + journalInfo.getNamespaceId();
    LOG.warn(errorMsg);
    throw new UnregisteredNodeException(journalInfo);
  } 
  if (!journalInfo.getClusterId().equals(namesystem.getClusterId())) {
    errorMsg = "Invalid clusterId in journal request - expected "
        + namesystem.getClusterId() + " actual " + journalInfo.getClusterId();
    LOG.warn(errorMsg);
    throw new UnregisteredNodeException(journalInfo);
  }
}
 
Developer: naver, Project: hadoop, Lines: 22, Source: BackupNode.java


Example 2: removeDatanode

import org.apache.hadoop.hdfs.protocol.UnregisteredNodeException; // import the required package/class
/**
 * Remove a datanode
 * @throws UnregisteredNodeException 
 */
public void removeDatanode(final DatanodeID node
    ) throws UnregisteredNodeException {
  namesystem.writeLock();
  try {
    final DatanodeDescriptor descriptor = getDatanode(node);
    if (descriptor != null) {
      removeDatanode(descriptor);
    } else {
      NameNode.stateChangeLog.warn("BLOCK* removeDatanode: "
                                   + node + " does not exist");
    }
  } finally {
    namesystem.writeUnlock();
  }
}
 
Developer: ict-carch, Project: hadoop-plus, Lines: 20, Source: DatanodeManager.java


Example 3: getDatanode

import org.apache.hadoop.hdfs.protocol.UnregisteredNodeException; // import the required package/class
/**
 * Get data node by storage ID.
 *
 * @param nodeID
 * @return DatanodeDescriptor or null if the node is not found.
 * @throws UnregisteredNodeException
 */
public DatanodeDescriptor getDatanode(DatanodeID nodeID)
    throws UnregisteredNodeException {
  DatanodeDescriptor node = null;
  if (nodeID != null && nodeID.getStorageID() != null &&
      !nodeID.getStorageID().equals("")) {
    node = getDatanode(nodeID.getStorageID());
  }
  if (node == null) {
    return null;
  }
  if (!node.getXferAddr().equals(nodeID.getXferAddr())) {
    final UnregisteredNodeException e =
        new UnregisteredNodeException(nodeID, node);
    NameNode.stateChangeLog
        .fatal("BLOCK* NameSystem.getDatanode: " + e.getLocalizedMessage());
    throw e;
  }
  return node;
}
 
Developer: hopshadoop, Project: hops, Lines: 27, Source: DatanodeManager.java
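The lookup above has two layers: find the descriptor by storage ID, then cross-check the transfer address to detect a different node reporting under a registered ID. A self-contained sketch of that pattern, using stand-in types and a plain IllegalStateException in place of UnregisteredNodeException:

```java
import java.util.HashMap;
import java.util.Map;

public class DatanodeLookupDemo {
    record Node(String storageId, String xferAddr) {}

    static final Map<String, Node> byStorageId = new HashMap<>();

    // Look up by storage ID, then verify the transfer address matches;
    // a mismatch means a different node is reporting under that storage ID.
    static Node getDatanode(String storageId, String expectedXferAddr) {
        if (storageId == null || storageId.isEmpty()) {
            return null; // no usable ID, nothing to look up
        }
        Node node = byStorageId.get(storageId);
        if (node == null) {
            return null; // node not found
        }
        if (!node.xferAddr().equals(expectedXferAddr)) {
            // the real code throws UnregisteredNodeException here
            throw new IllegalStateException("storage ID " + storageId
                + " is registered to " + node.xferAddr()
                + ", not " + expectedXferAddr);
        }
        return node;
    }

    public static void main(String[] args) {
        byStorageId.put("DS-1", new Node("DS-1", "10.0.0.1:50010"));
        System.out.println(getDatanode("DS-1", "10.0.0.1:50010")); // found
        System.out.println(getDatanode("DS-9", "10.0.0.2:50010")); // null
    }
}
```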


Example 4: verifyRequest

import org.apache.hadoop.hdfs.protocol.UnregisteredNodeException; // import the required package/class
/** 
 * Verifies the given registration.
 * 
 * @param nodeReg node registration
 * @throws UnregisteredNodeException if the registration is invalid
 */
private void verifyRequest(NodeRegistration nodeReg) throws IOException {
  // verify registration ID
  final String id = nodeReg.getRegistrationID();
  final String expectedID = namesystem.getRegistrationID();
  if (!expectedID.equals(id)) {
    LOG.warn("Registration IDs mismatched: the "
        + nodeReg.getClass().getSimpleName() + " ID is " + id
        + " but the expected ID is " + expectedID);
    throw new UnregisteredNodeException(nodeReg);
  }
}
 
Developer: naver, Project: hadoop, Lines: 18, Source: NameNodeRpcServer.java


Example 5: invalidateWorkForOneNode

import org.apache.hadoop.hdfs.protocol.UnregisteredNodeException; // import the required package/class
/**
 * Get blocks to invalidate for <i>nodeId</i>
 * in {@link #invalidateBlocks}.
 *
 * @return number of blocks scheduled for removal during this iteration.
 */
private int invalidateWorkForOneNode(DatanodeInfo dn) {
  final List<Block> toInvalidate;
  
  namesystem.writeLock();
  try {
    // blocks should not be replicated or removed if safe mode is on
    if (namesystem.isInSafeMode()) {
      LOG.debug("In safemode, not computing replication work");
      return 0;
    }
    try {
      DatanodeDescriptor dnDescriptor = datanodeManager.getDatanode(dn);
      if (dnDescriptor == null) {
        LOG.warn("DataNode " + dn + " cannot be found with UUID " +
            dn.getDatanodeUuid() + ", removing block invalidation work.");
        invalidateBlocks.remove(dn);
        return 0;
      }
      toInvalidate = invalidateBlocks.invalidateWork(dnDescriptor);
      
      if (toInvalidate == null) {
        return 0;
      }
    } catch(UnregisteredNodeException une) {
      return 0;
    }
  } finally {
    namesystem.writeUnlock();
  }
  blockLog.info("BLOCK* {}: ask {} to delete {}", getClass().getSimpleName(),
      dn, toInvalidate);
  return toInvalidate.size();
}
 
Developer: naver, Project: hadoop, Lines: 40, Source: BlockManager.java


Example 6: invalidateWorkForOneNode

import org.apache.hadoop.hdfs.protocol.UnregisteredNodeException; // import the required package/class
/**
 * Get blocks to invalidate for <i>nodeId</i>
 * in {@link #invalidateBlocks}.
 *
 * @return number of blocks scheduled for removal during this iteration.
 */
private int invalidateWorkForOneNode(DatanodeInfo dn) {
  final List<Block> toInvalidate;
  
  namesystem.writeLock();
  try {
    // blocks should not be replicated or removed if safe mode is on
    if (namesystem.isInSafeMode()) {
      LOG.debug("In safemode, not computing replication work");
      return 0;
    }
    try {
      DatanodeDescriptor dnDescriptor = datanodeManager.getDatanode(dn);
      if (dnDescriptor == null) {
        LOG.warn("DataNode " + dn + " cannot be found with UUID " +
            dn.getDatanodeUuid() + ", removing block invalidation work.");
        invalidateBlocks.remove(dn);
        return 0;
      }
      toInvalidate = invalidateBlocks.invalidateWork(dnDescriptor);
      
      if (toInvalidate == null) {
        return 0;
      }
    } catch(UnregisteredNodeException une) {
      return 0;
    }
  } finally {
    namesystem.writeUnlock();
  }
  blockLog.debug("BLOCK* {}: ask {} to delete {}", getClass().getSimpleName(),
      dn, toInvalidate);
  return toInvalidate.size();
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 40, Source: BlockManager.java


Example 7: verifyRequest

import org.apache.hadoop.hdfs.protocol.UnregisteredNodeException; // import the required package/class
/** 
 * Verifies the given registration.
 * 
 * @param nodeReg node registration
 * @throws UnregisteredNodeException if the registration is invalid
 */
void verifyRequest(NodeRegistration nodeReg) throws IOException {
  verifyLayoutVersion(nodeReg.getVersion());
  if (!namesystem.getRegistrationID().equals(nodeReg.getRegistrationID())) {
    LOG.warn("Invalid registrationID - expected: "
        + namesystem.getRegistrationID() + " received: "
        + nodeReg.getRegistrationID());
    throw new UnregisteredNodeException(nodeReg);
  }
}
 
Developer: ict-carch, Project: hadoop-plus, Lines: 16, Source: NameNodeRpcServer.java


Example 8: getDatanode

import org.apache.hadoop.hdfs.protocol.UnregisteredNodeException; // import the required package/class
/**
 * Get data node by storage ID.
 * 
 * @param nodeID
 * @return DatanodeDescriptor or null if the node is not found.
 * @throws UnregisteredNodeException
 */
public DatanodeDescriptor getDatanode(DatanodeID nodeID
    ) throws UnregisteredNodeException {
  final DatanodeDescriptor node = getDatanode(nodeID.getStorageID());
  if (node == null) 
    return null;
  if (!node.getXferAddr().equals(nodeID.getXferAddr())) {
    final UnregisteredNodeException e = new UnregisteredNodeException(
        nodeID, node);
    NameNode.stateChangeLog.fatal("BLOCK* NameSystem.getDatanode: "
                                  + e.getLocalizedMessage());
    throw e;
  }
  return node;
}
 
Developer: ict-carch, Project: hadoop-plus, Lines: 22, Source: DatanodeManager.java


Example 9: invalidateWorkForOneNode

import org.apache.hadoop.hdfs.protocol.UnregisteredNodeException; // import the required package/class
/**
 * Get blocks to invalidate for <i>nodeId</i>
 * in {@link #invalidateBlocks}.
 *
 * @return number of blocks scheduled for removal during this iteration.
 */
private int invalidateWorkForOneNode(DatanodeInfo dn) {
  final List<Block> toInvalidate;
  
  namesystem.writeLock();
  try {
    // blocks should not be replicated or removed if safe mode is on
    if (namesystem.isInSafeMode()) {
      LOG.debug("In safemode, not computing replication work");
      return 0;
    }
    try {
      toInvalidate = invalidateBlocks.invalidateWork(datanodeManager.getDatanode(dn));
      
      if (toInvalidate == null) {
        return 0;
      }
    } catch(UnregisteredNodeException une) {
      return 0;
    }
  } finally {
    namesystem.writeUnlock();
  }
  if (NameNode.stateChangeLog.isInfoEnabled()) {
    NameNode.stateChangeLog.info("BLOCK* " + getClass().getSimpleName()
        + ": ask " + dn + " to delete " + toInvalidate);
  }
  return toInvalidate.size();
}
 
Developer: yncxcw, Project: FlexMap, Lines: 35, Source: BlockManager.java


Example 10: verifyRequest

import org.apache.hadoop.hdfs.protocol.UnregisteredNodeException; // import the required package/class
/**
 * Verifies the given registration.
 *
 * @param nodeReg
 *     node registration
 * @throws UnregisteredNodeException
 *     if the registration is invalid
 */
void verifyRequest(NodeRegistration nodeReg) throws IOException {
  verifyLayoutVersion(nodeReg.getVersion());
  if (!namesystem.getRegistrationID().equals(nodeReg.getRegistrationID())) {
    LOG.warn("Invalid registrationID - expected: " +
        namesystem.getRegistrationID() + " received: " +
        nodeReg.getRegistrationID());
    throw new UnregisteredNodeException(nodeReg);
  }
}
 
Developer: hopshadoop, Project: hops, Lines: 18, Source: NameNodeRpcServer.java


Example 11: removeDatanode

import org.apache.hadoop.hdfs.protocol.UnregisteredNodeException; // import the required package/class
/**
 * Remove a datanode
 *
 * @throws UnregisteredNodeException
 */
public void removeDatanode(final DatanodeID node
    // Called by NameNodeRpcServer
) throws UnregisteredNodeException, IOException {
  final DatanodeDescriptor descriptor = getDatanode(node);
  if (descriptor != null) {
    removeDatanode(descriptor);
  } else {
    NameNode.stateChangeLog
        .warn("BLOCK* removeDatanode: " + node + " does not exist");
  }
}
 
Developer: hopshadoop, Project: hops, Lines: 17, Source: DatanodeManager.java


Example 12: getBlocksWithLocations

import org.apache.hadoop.hdfs.protocol.UnregisteredNodeException; // import the required package/class
/** Get all blocks with location information from a datanode. */
private BlocksWithLocations getBlocksWithLocations(final DatanodeID datanode,
    final long size) throws UnregisteredNodeException {
  final DatanodeDescriptor node = getDatanodeManager().getDatanode(datanode);
  if (node == null) {
    blockLog.warn("BLOCK* getBlocks: Asking for blocks from an" +
        " unrecorded node {}", datanode);
    throw new HadoopIllegalArgumentException(
        "Datanode " + datanode + " not found.");
  }

  int numBlocks = node.numBlocks();
  if(numBlocks == 0) {
    return new BlocksWithLocations(new BlockWithLocations[0]);
  }
  Iterator<BlockInfoContiguous> iter = node.getBlockIterator();
  int startBlock = DFSUtil.getRandom().nextInt(numBlocks); // starting from a random block
  // skip blocks
  for(int i=0; i<startBlock; i++) {
    iter.next();
  }
  List<BlockWithLocations> results = new ArrayList<BlockWithLocations>();
  long totalSize = 0;
  BlockInfoContiguous curBlock;
  while(totalSize<size && iter.hasNext()) {
    curBlock = iter.next();
    if(!curBlock.isComplete())  continue;
    totalSize += addBlock(curBlock, results);
  }
  if(totalSize<size) {
    iter = node.getBlockIterator(); // start from the beginning
    for(int i=0; i<startBlock&&totalSize<size; i++) {
      curBlock = iter.next();
      if(!curBlock.isComplete())  continue;
      totalSize += addBlock(curBlock, results);
    }
  }

  return new BlocksWithLocations(
      results.toArray(new BlockWithLocations[results.size()]));
}
 
Developer: naver, Project: hadoop, Lines: 42, Source: BlockManager.java
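The block-collection logic above starts at a random offset, scans forward until the size budget is met, then wraps back to the beginning to cover the skipped prefix. Here is a self-contained sketch of that traversal pattern, with a plain list of int sizes standing in for the node's block iterator:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class RandomStartScanDemo {
    // Collect items starting at a random index, wrapping around,
    // until their total size reaches the requested budget.
    static List<Integer> collect(List<Integer> sizes, long budget, Random rnd) {
        List<Integer> results = new ArrayList<>();
        if (sizes.isEmpty()) {
            return results;
        }
        int start = rnd.nextInt(sizes.size()); // starting from a random item
        long total = 0;
        // forward pass from the random start
        for (int i = start; i < sizes.size() && total < budget; i++) {
            results.add(sizes.get(i));
            total += sizes.get(i);
        }
        // wrap around to the skipped prefix if the budget is not yet met
        for (int i = 0; i < start && total < budget; i++) {
            results.add(sizes.get(i));
            total += sizes.get(i);
        }
        return results;
    }

    public static void main(String[] args) {
        List<Integer> sizes = List.of(10, 20, 30, 40);
        System.out.println(collect(sizes, 50, new Random()));
    }
}
```

The random start spreads balancer load across a node's blocks instead of always hitting the first ones; the wrap-around guarantees the whole list is considered when the forward pass alone cannot satisfy the budget.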


Example 13: getBlocksWithLocations

import org.apache.hadoop.hdfs.protocol.UnregisteredNodeException; // import the required package/class
/** Get all blocks with location information from a datanode. */
private BlocksWithLocations getBlocksWithLocations(final DatanodeID datanode,
    final long size) throws UnregisteredNodeException {
  final DatanodeDescriptor node = getDatanodeManager().getDatanode(datanode);
  if (node == null) {
    blockLog.warn("BLOCK* getBlocks: Asking for blocks from an" +
        " unrecorded node {}", datanode);
    throw new HadoopIllegalArgumentException(
        "Datanode " + datanode + " not found.");
  }

  int numBlocks = node.numBlocks();
  if(numBlocks == 0) {
    return new BlocksWithLocations(new BlockWithLocations[0]);
  }
  Iterator<BlockInfo> iter = node.getBlockIterator();
  // starting from a random block
  int startBlock = ThreadLocalRandom.current().nextInt(numBlocks);
  // skip blocks
  for(int i=0; i<startBlock; i++) {
    iter.next();
  }
  List<BlockWithLocations> results = new ArrayList<BlockWithLocations>();
  long totalSize = 0;
  BlockInfo curBlock;
  while(totalSize<size && iter.hasNext()) {
    curBlock = iter.next();
    if(!curBlock.isComplete())  continue;
    totalSize += addBlock(curBlock, results);
  }
  if(totalSize<size) {
    iter = node.getBlockIterator(); // start from the beginning
    for(int i=0; i<startBlock&&totalSize<size; i++) {
      curBlock = iter.next();
      if(!curBlock.isComplete())  continue;
      totalSize += addBlock(curBlock, results);
    }
  }

  return new BlocksWithLocations(
      results.toArray(new BlockWithLocations[results.size()]));
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 43, Source: BlockManager.java


Example 14: getBlocksWithLocations

import org.apache.hadoop.hdfs.protocol.UnregisteredNodeException; // import the required package/class
/** Get all blocks with location information from a datanode. */
private BlocksWithLocations getBlocksWithLocations(final DatanodeID datanode,
    final long size) throws UnregisteredNodeException {
  final DatanodeDescriptor node = getDatanodeManager().getDatanode(datanode);
  if (node == null) {
    blockLog.warn("BLOCK* getBlocks: Asking for blocks from an" +
        " unrecorded node {}", datanode);
    throw new HadoopIllegalArgumentException(
        "Datanode " + datanode + " not found.");
  }

  int numBlocks = node.numBlocks();
  if(numBlocks == 0) {
    return new BlocksWithLocations(new BlockWithLocations[0]);
  }
  Iterator<BlockInfo> iter = node.getBlockIterator();
  int startBlock = DFSUtil.getRandom().nextInt(numBlocks); // starting from a random block
  // skip blocks
  for(int i=0; i<startBlock; i++) {
    iter.next();
  }
  List<BlockWithLocations> results = new ArrayList<BlockWithLocations>();
  long totalSize = 0;
  BlockInfo curBlock;
  while(totalSize<size && iter.hasNext()) {
    curBlock = iter.next();
    if(!curBlock.isComplete())  continue;
    totalSize += addBlock(curBlock, results);
  }
  if(totalSize<size) {
    iter = node.getBlockIterator(); // start from the beginning
    for(int i=0; i<startBlock&&totalSize<size; i++) {
      curBlock = iter.next();
      if(!curBlock.isComplete())  continue;
      totalSize += addBlock(curBlock, results);
    }
  }

  return new BlocksWithLocations(
      results.toArray(new BlockWithLocations[results.size()]));
}
 
Developer: Nextzero, Project: hadoop-2.6.0-cdh5.4.3, Lines: 42, Source: BlockManager.java


Example 15: getBlocksWithLocations

import org.apache.hadoop.hdfs.protocol.UnregisteredNodeException; // import the required package/class
/** Get all blocks with location information from a datanode. */
private BlocksWithLocations getBlocksWithLocations(final DatanodeID datanode,
    final long size) throws UnregisteredNodeException {
  final DatanodeDescriptor node = getDatanodeManager().getDatanode(datanode);
  if (node == null) {
    blockLog.warn("BLOCK* getBlocks: "
        + "Asking for blocks from an unrecorded node " + datanode);
    throw new HadoopIllegalArgumentException(
        "Datanode " + datanode + " not found.");
  }

  int numBlocks = node.numBlocks();
  if(numBlocks == 0) {
    return new BlocksWithLocations(new BlockWithLocations[0]);
  }
  Iterator<BlockInfo> iter = node.getBlockIterator();
  int startBlock = DFSUtil.getRandom().nextInt(numBlocks); // starting from a random block
  // skip blocks
  for(int i=0; i<startBlock; i++) {
    iter.next();
  }
  List<BlockWithLocations> results = new ArrayList<BlockWithLocations>();
  long totalSize = 0;
  BlockInfo curBlock;
  while(totalSize<size && iter.hasNext()) {
    curBlock = iter.next();
    if(!curBlock.isComplete())  continue;
    totalSize += addBlock(curBlock, results);
  }
  if(totalSize<size) {
    iter = node.getBlockIterator(); // start from the beginning
    for(int i=0; i<startBlock&&totalSize<size; i++) {
      curBlock = iter.next();
      if(!curBlock.isComplete())  continue;
      totalSize += addBlock(curBlock, results);
    }
  }

  return new BlocksWithLocations(
      results.toArray(new BlockWithLocations[results.size()]));
}
 
Developer: ict-carch, Project: hadoop-plus, Lines: 42, Source: BlockManager.java


Example 16: getBlocksWithLocations

import org.apache.hadoop.hdfs.protocol.UnregisteredNodeException; // import the required package/class
/**
 * Get all blocks with location information from a datanode.
 */
private BlocksWithLocations getBlocksWithLocations(final DatanodeID datanode,
    final long size) throws UnregisteredNodeException, IOException {
  final DatanodeDescriptor node = getDatanodeManager().getDatanode(datanode);
  if (node == null) {
    blockLog.warn(
        "BLOCK* getBlocks: " + "Asking for blocks from an unrecorded node " +
            datanode);
    throw new HadoopIllegalArgumentException(
        "Datanode " + datanode + " not found.");
  }

  int numBlocks = node.numBlocks();
  if (numBlocks == 0) {
    return new BlocksWithLocations(new BlockWithLocations[0]);
  }
  Iterator<BlockInfo> iter = node.getBlockIterator();
  int startBlock =
      DFSUtil.getRandom().nextInt(numBlocks); // starting from a random block
  // skip blocks
  for (int i = 0; i < startBlock; i++) {
    iter.next();
  }
  List<BlockWithLocations> results = new ArrayList<>();
  long totalSize = 0;
  BlockInfo curBlock;
  while (totalSize < size && iter.hasNext()) {
    curBlock = iter.next();
    if (!curBlock.isComplete()) {
      continue;
    }
    totalSize += addBlock(curBlock, results);
  }
  if (totalSize < size) {
    iter = node.getBlockIterator(); // start from the beginning
    for (int i = 0; i < startBlock && totalSize < size; i++) {
      curBlock = iter.next();
      if (!curBlock.isComplete()) {
        continue;
      }
      totalSize += addBlock(curBlock, results);
    }
  }

  return new BlocksWithLocations(
      results.toArray(new BlockWithLocations[results.size()]));
}
 
Developer: hopshadoop, Project: hops, Lines: 50, Source: BlockManager.java


Example 17: handleHeartbeat

import org.apache.hadoop.hdfs.protocol.UnregisteredNodeException; // import the required package/class
/**
 * Handle heartbeat from datanodes.
 */
public DatanodeCommand[] handleHeartbeat(DatanodeRegistration nodeReg,
    final String blockPoolId, long capacity, long dfsUsed, long remaining,
    long blockPoolUsed, int xceiverCount, int maxTransfers, int failedVolumes)
    throws IOException {
  synchronized (heartbeatManager) {
    synchronized (datanodeMap) {
      DatanodeDescriptor nodeinfo = null;
      try {
        nodeinfo = getDatanode(nodeReg);
      } catch (UnregisteredNodeException e) {
        return new DatanodeCommand[]{RegisterCommand.REGISTER};
      }
      
      // Check if this datanode should actually be shutdown instead. 
      if (nodeinfo != null && nodeinfo.isDisallowed()) {
        setDatanodeDead(nodeinfo);
        throw new DisallowedDatanodeException(nodeinfo);
      }

      if (nodeinfo == null || !nodeinfo.isAlive) {
        return new DatanodeCommand[]{RegisterCommand.REGISTER};
      }

      heartbeatManager.updateHeartbeat(nodeinfo, capacity, dfsUsed, remaining,
          blockPoolUsed, xceiverCount, failedVolumes);
      
      //check lease recovery
      BlockInfoUnderConstruction[] blocks =
          nodeinfo.getLeaseRecoveryCommand(Integer.MAX_VALUE);
      if (blocks != null) {
        BlockRecoveryCommand brCommand =
            new BlockRecoveryCommand(blocks.length);
        for (BlockInfoUnderConstruction b : blocks) {
          brCommand.add(new RecoveringBlock(new ExtendedBlock(blockPoolId, b),
              getDataNodeDescriptorsTx(b), b.getBlockRecoveryId()));
        }
        return new DatanodeCommand[]{brCommand};
      }

      final List<DatanodeCommand> cmds = new ArrayList<>();
      //check pending replication
      List<BlockTargetPair> pendingList =
          nodeinfo.getReplicationCommand(maxTransfers);
      if (pendingList != null) {
        cmds.add(new BlockCommand(DatanodeProtocol.DNA_TRANSFER, blockPoolId,
            pendingList));
      }
      //check block invalidation
      Block[] blks = nodeinfo.getInvalidateBlocks(blockInvalidateLimit);
      if (blks != null) {
        cmds.add(
            new BlockCommand(DatanodeProtocol.DNA_INVALIDATE, blockPoolId,
                blks));
      }
      
      blockManager.addKeyUpdateCommand(cmds, nodeinfo);

      // check for balancer bandwidth update
      if (nodeinfo.getBalancerBandwidth() > 0) {
        cmds.add(
            new BalancerBandwidthCommand(nodeinfo.getBalancerBandwidth()));
        // set back to 0 to indicate that datanode has been sent the new value
        nodeinfo.setBalancerBandwidth(0);
      }

      if (!cmds.isEmpty()) {
        return cmds.toArray(new DatanodeCommand[cmds.size()]);
      }
    }
  }

  return new DatanodeCommand[0];
}
 
Developer: hopshadoop, Project: hops, Lines: 77, Source: DatanodeManager.java
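The heartbeat handler above encodes a priority scheme: an unknown or dead node gets a re-register command (UnregisteredNodeException is translated into RegisterCommand.REGISTER rather than propagated), lease-recovery work preempts everything else, and otherwise replication and invalidation commands are batched into one response. A stripped-down sketch of that decision flow, with boolean flags and local command names standing in for the real Hadoop types:

```java
import java.util.ArrayList;
import java.util.List;

public class HeartbeatDemo {
    enum Command { REGISTER, BLOCK_RECOVERY, TRANSFER, INVALIDATE }

    // Mirrors the branching in DatanodeManager.handleHeartbeat:
    // unknown/dead node -> REGISTER; recovery work preempts; else batch the rest.
    static List<Command> handleHeartbeat(boolean registered, boolean alive,
            boolean hasRecoveryWork, boolean hasPendingReplication,
            boolean hasInvalidateWork) {
        if (!registered || !alive) {
            // the real code catches UnregisteredNodeException and responds
            // with RegisterCommand.REGISTER instead of failing the RPC
            return List.of(Command.REGISTER);
        }
        if (hasRecoveryWork) {
            // lease recovery is returned alone, ahead of all other work
            return List.of(Command.BLOCK_RECOVERY);
        }
        List<Command> cmds = new ArrayList<>();
        if (hasPendingReplication) cmds.add(Command.TRANSFER);
        if (hasInvalidateWork) cmds.add(Command.INVALIDATE);
        return cmds;
    }

    public static void main(String[] args) {
        System.out.println(handleHeartbeat(false, false, false, false, false));
        System.out.println(handleHeartbeat(true, true, true, true, true));
        System.out.println(handleHeartbeat(true, true, false, true, true));
    }
}
```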


Example 18: verifyRequest

import org.apache.hadoop.hdfs.protocol.UnregisteredNodeException; // import the required package/class
/** 
 * Verify request.
 * 
 * Verifies correctness of the datanode version, registration ID, and 
 * if the datanode does not need to be shutdown.
 * 
 * @param nodeReg data node registration
 * @throws IOException
 */
public void verifyRequest(NodeRegistration nodeReg) throws IOException {
  verifyVersion(nodeReg.getVersion());
  if (!namesystem.getRegistrationID().equals(nodeReg.getRegistrationID()))
    throw new UnregisteredNodeException(nodeReg);
}
 
Developer: cumulusyebl, Project: cumulus, Lines: 15, Source: NameNode.java



Note: The org.apache.hadoop.hdfs.protocol.UnregisteredNodeException examples in this article were collected from open-source projects hosted on GitHub, MSDocs, and similar code and documentation platforms. The snippets were selected from open-source projects contributed by their authors; copyright remains with the original authors, and distribution and use should follow the corresponding project's license. Do not republish without permission.

