
Java BlockCollection Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.hdfs.server.blockmanagement.BlockCollection, gathered from open-source projects. If you are unsure what BlockCollection is for or how it is used in practice, the curated examples below should help.



The BlockCollection class belongs to the org.apache.hadoop.hdfs.server.blockmanagement package. Seven code examples of the class are presented below, ordered by popularity.
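
Before working through the examples, the sketch below summarizes the slice of BlockCollection that these snippets actually exercise. It is an illustrative fragment reconstructed from the calls made in the examples, not the complete upstream interface, and exact signatures vary across Hadoop releases.

// Illustrative fragment only: the subset of BlockCollection used by the
// examples in this article; the real interface declares additional members.
public interface BlockCollection {
  String getName();                  // full path of the file backing this collection
  byte getStoragePolicyID();         // storage policy id, e.g. LAZY_PERSIST (Examples 1, 3)
  short getBlockReplication();       // expected replication factor (Example 5)
  BlockInfoContiguous[] getBlocks(); // the file's blocks (Example 6; BlockInfo[] on newer branches)
}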

Example 1: clearCorruptLazyPersistFiles

import org.apache.hadoop.hdfs.server.blockmanagement.BlockCollection; // import the required package/class
/**
 * Periodically go over the list of lazyPersist files with missing
 * blocks and unlink them from the namespace.
 */
private void clearCorruptLazyPersistFiles()
    throws IOException {

  BlockStoragePolicy lpPolicy = blockManager.getStoragePolicy("LAZY_PERSIST");

  List<BlockCollection> filesToDelete = new ArrayList<>();
  boolean changed = false;
  writeLock();
  try {
    final Iterator<Block> it = blockManager.getCorruptReplicaBlockIterator();

    while (it.hasNext()) {
      Block b = it.next();
      BlockInfoContiguous blockInfo = blockManager.getStoredBlock(b);
      if (blockInfo.getBlockCollection().getStoragePolicyID()
          == lpPolicy.getId()) {
        filesToDelete.add(blockInfo.getBlockCollection());
      }
    }

    for (BlockCollection bc : filesToDelete) {
      LOG.warn("Removing lazyPersist file " + bc.getName() + " with no replicas.");
      BlocksMapUpdateInfo toRemoveBlocks =
          FSDirDeleteOp.deleteInternal(
              FSNamesystem.this, bc.getName(),
              INodesInPath.fromINode((INodeFile) bc), false);
      changed |= toRemoveBlocks != null;
      if (toRemoveBlocks != null) {
        removeBlocks(toRemoveBlocks); // Incremental deletion of blocks
      }
    }
  } finally {
    writeUnlock();
  }
  if (changed) {
    getEditLog().logSync();
  }
}
 
Developer: naver | Project: hadoop | Lines of code: 43 | Source: FSNamesystem.java
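
In the upstream namenode this method is not called directly but driven on a schedule. The sketch below is a minimal, hypothetical illustration of such a driver loop; the class name, the fsRunning flag, and the scrubIntervalSec field are assumptions made for the sketch, not the exact FSNamesystem internals.

// A minimal sketch, assuming a scrub interval in seconds. All names here
// (LazyPersistFileScrubber, fsRunning, scrubIntervalSec) are illustrative.
class LazyPersistFileScrubber implements Runnable {
  private final int scrubIntervalSec; // hypothetical: how often to scan, in seconds

  LazyPersistFileScrubber(int scrubIntervalSec) {
    this.scrubIntervalSec = scrubIntervalSec;
  }

  @Override
  public void run() {
    while (fsRunning) { // hypothetical shutdown flag on the enclosing namesystem
      try {
        clearCorruptLazyPersistFiles(); // the method shown in Example 1
        Thread.sleep(scrubIntervalSec * 1000L);
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt(); // preserve the interrupt and stop
        break;
      } catch (IOException e) {
        LOG.error("Ignoring exception in LazyPersistFileScrubber", e);
      }
    }
  }
}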


Example 2: chooseReplicaToDelete

import org.apache.hadoop.hdfs.server.blockmanagement.BlockCollection; // import the required package/class
@Override
public DatanodeStorageInfo chooseReplicaToDelete(BlockCollection inode,
    Block block, short replicationFactor,
    Collection<DatanodeStorageInfo> first,
    Collection<DatanodeStorageInfo> second,
    List<StorageType> excessTypes) {
  
  Collection<DatanodeStorageInfo> chooseFrom = !first.isEmpty() ? first : second;

  List<DatanodeStorageInfo> l = Lists.newArrayList(chooseFrom);
  return l.get(DFSUtil.getRandom().nextInt(l.size()));
}
 
Developer: naver | Project: hadoop | Lines of code: 13 | Source: TestDNFencing.java
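
For context, a test-only policy like this is normally plugged into the namenode through the block placement configuration key before the mini cluster starts. The snippet below is a hedged sketch of that wiring; RandomDeleterPolicy is the assumed name of the test class containing the override above, so verify both the key and the class against your Hadoop version.

// Hedged sketch: registering a custom BlockPlacementPolicy for a test.
Configuration conf = new Configuration();
conf.setClass(DFSConfigKeys.DFS_BLOCK_REPLICATOR_CLASSNAME_KEY,
    RandomDeleterPolicy.class, BlockPlacementPolicy.class);
// Any MiniDFSCluster built from this conf will then delete excess replicas
// at random instead of following the default rack-aware heuristics.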


Example 3: clearCorruptLazyPersistFiles

import org.apache.hadoop.hdfs.server.blockmanagement.BlockCollection; // import the required package/class
/**
 * Periodically go over the list of lazyPersist files with missing
 * blocks and unlink them from the namespace.
 */
private void clearCorruptLazyPersistFiles()
    throws SafeModeException, AccessControlException,
    UnresolvedLinkException, IOException {

  BlockStoragePolicy lpPolicy = blockManager.getStoragePolicy("LAZY_PERSIST");

  List<BlockCollection> filesToDelete = new ArrayList<BlockCollection>();

  writeLock();

  try {
    final Iterator<Block> it = blockManager.getCorruptReplicaBlockIterator();

    while (it.hasNext()) {
      Block b = it.next();
      BlockInfo blockInfo = blockManager.getStoredBlock(b);
      if (blockInfo.getBlockCollection().getStoragePolicyID() == lpPolicy.getId()) {
        filesToDelete.add(blockInfo.getBlockCollection());
      }
    }

    for (BlockCollection bc : filesToDelete) {
      LOG.warn("Removing lazyPersist file " + bc.getName() + " with no replicas.");
      deleteInternal(bc.getName(), false, false, false);
    }
  } finally {
    writeUnlock();
  }
}
 
Developer: Nextzero | Project: hadoop-2.6.0-cdh5.4.3 | Lines of code: 34 | Source: FSNamesystem.java


Example 4: chooseReplicaToDelete

import org.apache.hadoop.hdfs.server.blockmanagement.BlockCollection; // import the required package/class
@Override
public DatanodeDescriptor chooseReplicaToDelete(BlockCollection inode,
    Block block, short replicationFactor,
    Collection<DatanodeDescriptor> first,
    Collection<DatanodeDescriptor> second) {
  
  Collection<DatanodeDescriptor> chooseFrom =
    !first.isEmpty() ? first : second;

  List<DatanodeDescriptor> l = Lists.newArrayList(chooseFrom);
  return l.get(DFSUtil.getRandom().nextInt(l.size()));
}
 
Developer: ict-carch | Project: hadoop-plus | Lines of code: 13 | Source: TestDNFencing.java


Example 5: blockIdCK

import org.apache.hadoop.hdfs.server.blockmanagement.BlockCollection; // import the required package/class
/**
 * Check block information given a blockId number.
 */
public void blockIdCK(String blockId) {

  if(blockId == null) {
    out.println("Please provide valid blockId!");
    return;
  }

  BlockManager bm = namenode.getNamesystem().getBlockManager();
  try {
    //get blockInfo
    Block block = new Block(Block.getBlockId(blockId));
    //find which file this block belongs to
    BlockInfoContiguous blockInfo = bm.getStoredBlock(block);
    if(blockInfo == null) {
      out.println("Block "+ blockId +" " + NONEXISTENT_STATUS);
      LOG.warn("Block "+ blockId + " " + NONEXISTENT_STATUS);
      return;
    }
    BlockCollection bc = bm.getBlockCollection(blockInfo);
    INode iNode = (INode) bc;
    NumberReplicas numberReplicas = bm.countNodes(block);
    out.println("Block Id: " + blockId);
    out.println("Block belongs to: "+iNode.getFullPathName());
    out.println("No. of Expected Replica: " + bc.getBlockReplication());
    out.println("No. of live Replica: " + numberReplicas.liveReplicas());
    out.println("No. of excess Replica: " + numberReplicas.excessReplicas());
    out.println("No. of stale Replica: " + numberReplicas.replicasOnStaleNodes());
    out.println("No. of decommission Replica: "
        + numberReplicas.decommissionedReplicas());
    out.println("No. of corrupted Replica: " + numberReplicas.corruptReplicas());
    //record datanodes that have corrupted block replica
    Collection<DatanodeDescriptor> corruptionRecord = null;
    if (bm.getCorruptReplicas(block) != null) {
      corruptionRecord = bm.getCorruptReplicas(block);
    }

    //report block replicas status on datanodes
    for(int idx = (blockInfo.numNodes()-1); idx >= 0; idx--) {
      DatanodeDescriptor dn = blockInfo.getDatanode(idx);
      out.print("Block replica on datanode/rack: " + dn.getHostName() +
          dn.getNetworkLocation() + " ");
      if (corruptionRecord != null && corruptionRecord.contains(dn)) {
        out.print(CORRUPT_STATUS+"\t ReasonCode: "+
          bm.getCorruptReason(block,dn));
      } else if (dn.isDecommissioned()) {
        out.print(DECOMMISSIONED_STATUS);
      } else if (dn.isDecommissionInProgress()) {
        out.print(DECOMMISSIONING_STATUS);
      } else {
        out.print(HEALTHY_STATUS);
      }
      out.print("\n");
    }
  } catch (Exception e) {
    String errMsg = "Fsck on blockId '" + blockId + "' failed";
    LOG.warn(errMsg, e);
    out.println(e.getMessage());
    out.print("\n\n" + errMsg);
  }
}
 
Developer: naver | Project: hadoop | Lines of code: 66 | Source: NamenodeFsck.java
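
blockIdCK backs the -blockId option of fsck. Assuming a Hadoop release where fsck supports -blockId (2.7.0 and later), the sketch below shows one way to trigger it programmatically through the DFSck tool; the block id shown is a placeholder used only for illustration.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.tools.DFSck;
import org.apache.hadoop.util.ToolRunner;

public class BlockIdFsckExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // "blk_1073741825" is a hypothetical block id used only for illustration.
    int exitCode = ToolRunner.run(new DFSck(conf),
        new String[] { "-blockId", "blk_1073741825" });
    System.exit(exitCode);
  }
}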


Example 6: testFsckReplicaDetails

import org.apache.hadoop.hdfs.server.blockmanagement.BlockCollection; // import the required package/class
@Test(timeout = 60000)
public void testFsckReplicaDetails() throws Exception {

  final short REPL_FACTOR = 1;
  short NUM_DN = 1;
  final long blockSize = 512;
  final long fileSize = 1024;
  boolean checkDecommissionInProgress = false;
  String[] racks = { "/rack1" };
  String[] hosts = { "host1" };

  Configuration conf = new Configuration();
  conf.setLong(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, blockSize);
  conf.setInt(DFSConfigKeys.DFS_REPLICATION_KEY, 1);

  MiniDFSCluster cluster;
  DistributedFileSystem dfs;
  cluster =
      new MiniDFSCluster.Builder(conf).numDataNodes(NUM_DN).hosts(hosts).racks(racks).build();
  cluster.waitClusterUp();
  dfs = cluster.getFileSystem();

  // create files
  final String testFile = "/testfile";
  final Path path = new Path(testFile);
  DFSTestUtil.createFile(dfs, path, fileSize, REPL_FACTOR, 1000L);
  DFSTestUtil.waitReplication(dfs, path, REPL_FACTOR);
  try {
    // make sure datanode that has replica is fine before decommission
    String fsckOut = runFsck(conf, 0, true, testFile, "-files", "-blocks", "-replicaDetails");
    assertTrue(fsckOut.contains(NamenodeFsck.HEALTHY_STATUS));
    assertTrue(fsckOut.contains("(LIVE)"));

    // decommission datanode
    ExtendedBlock eb = DFSTestUtil.getFirstBlock(dfs, path);
    FSNamesystem fsn = cluster.getNameNode().getNamesystem();
    BlockManager bm = fsn.getBlockManager();
    BlockCollection bc = null;
    try {
      fsn.writeLock();
      BlockInfo bi = bm.getStoredBlock(eb.getLocalBlock());
      bc = fsn.getBlockCollection(bi);
    } finally {
      fsn.writeUnlock();
    }
    DatanodeDescriptor dn = bc.getBlocks()[0]
        .getDatanode(0);
    bm.getDatanodeManager().getDecomManager().startDecommission(dn);
    String dnName = dn.getXferAddr();

    // check the replica status while decommissioning
    fsckOut = runFsck(conf, 0, true, testFile, "-files", "-blocks", "-replicaDetails");
    assertTrue(fsckOut.contains("(DECOMMISSIONING)"));

    // Start 2nd Datanode and wait for decommission to start
    cluster.startDataNodes(conf, 1, true, null, null, null);
    DatanodeInfo datanodeInfo = null;
    do {
      Thread.sleep(2000);
      for (DatanodeInfo info : dfs.getDataNodeStats()) {
        if (dnName.equals(info.getXferAddr())) {
          datanodeInfo = info;
        }
      }
      if (!checkDecommissionInProgress && datanodeInfo != null
          && datanodeInfo.isDecommissionInProgress()) {
        checkDecommissionInProgress = true;
      }
    } while (datanodeInfo != null && !datanodeInfo.isDecommissioned());

    // check the replica status after decommission is done
    fsckOut = runFsck(conf, 0, true, testFile, "-files", "-blocks", "-replicaDetails");
    assertTrue(fsckOut.contains("(DECOMMISSIONED)"));
  } finally {
    if (cluster != null) {
      cluster.shutdown();
    }
  }
}
 
Developer: aliyun-beta | Project: aliyun-oss-hadoop-fs | Lines of code: 80 | Source: TestFsck.java


Example 7: getBlockCollection

import org.apache.hadoop.hdfs.server.blockmanagement.BlockCollection; // import the required package/class
BlockCollection getBlockCollection(long id); 
Developer: aliyun-beta | Project: aliyun-oss-hadoop-fs | Lines of code: 2 | Source: Namesystem.java
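
This is only the declaration on the Namesystem interface. On this branch a block stores just the id of its owning BlockCollection, so callers resolve the file through the namesystem, much as Example 6 does via the BlockManager. The snippet below is a hedged sketch of that resolution under the write lock; getBlockCollectionId() is assumed to exist on BlockInfo in this branch, and the variable names follow Example 6.

// Hedged sketch: mapping a block back to the file (BlockCollection) that owns it.
fsn.writeLock();
try {
  BlockInfo bi = bm.getStoredBlock(block);               // block -> stored block info
  BlockCollection bc =
      fsn.getBlockCollection(bi.getBlockCollectionId()); // id -> owning file
} finally {
  fsn.writeUnlock();
}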



Note: the org.apache.hadoop.hdfs.server.blockmanagement.BlockCollection examples in this article were collected from source-code and documentation platforms such as GitHub and MSDocs, with snippets selected from community open-source projects. Copyright in the source code remains with the original authors; consult each project's license before redistributing or reusing it. Do not repost without permission.

