
Java State Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.hdfs.server.protocol.DatanodeStorage.State. If you have been wondering what the State class is for, how to use it, or what working code that uses it looks like, the curated examples below should help.



The State class belongs to the org.apache.hadoop.hdfs.server.protocol.DatanodeStorage package. Sixteen code examples of the State class are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your ratings help the system recommend better Java code examples.
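
Before working through the examples, the following minimal sketch (not taken from any of the projects listed below; the countUsable helper and its Iterable input are hypothetical) illustrates the kind of branching on DatanodeStorage.State that the examples share. It assumes only the three State values that actually appear in this article: NORMAL, READ_ONLY_SHARED, and FAILED.

import org.apache.hadoop.hdfs.server.protocol.DatanodeStorage.State; // State is a nested enum of DatanodeStorage

public class StateUsageSketch {
  // Hypothetical helper: count storages whose contents can still be read,
  // i.e. every state except FAILED. (Sketch only; the real code in the
  // examples below iterates over DatanodeStorageInfo objects and calls
  // getState() instead of receiving raw State values.)
  static int countUsable(Iterable<State> storageStates) {
    int usable = 0;
    for (State s : storageStates) {
      switch (s) {
        case NORMAL:            // healthy, writable storage
        case READ_ONLY_SHARED:  // readable, but not a write target
          usable++;
          break;
        case FAILED:            // reported as failed; not counted
        default:
          break;
      }
    }
    return usable;
  }
}

For instance, countUsable(java.util.Arrays.asList(State.NORMAL, State.FAILED, State.READ_ONLY_SHARED)) would return 2 under this sketch.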

Example 1: convert

import org.apache.hadoop.hdfs.server.protocol.DatanodeStorage.State; // import the required package/class
public static NNHAStatusHeartbeatProto convert(NNHAStatusHeartbeat hb) {
  if (hb == null) return null;
  NNHAStatusHeartbeatProto.Builder builder =
    NNHAStatusHeartbeatProto.newBuilder();
  switch (hb.getState()) {
    case ACTIVE:
      builder.setState(NNHAStatusHeartbeatProto.State.ACTIVE);
      break;
    case STANDBY:
      builder.setState(NNHAStatusHeartbeatProto.State.STANDBY);
      break;
    default:
      throw new IllegalArgumentException("Unexpected NNHAStatusHeartbeat.State:" +
          hb.getState());
  }
  builder.setTxid(hb.getTxId());
  return builder.build();
}
 
Developer: naver, Project: hadoop, Lines of code: 19, Source: PBHelper.java


Example 2: addToInvalidates

import org.apache.hadoop.hdfs.server.protocol.DatanodeStorage.State; // import the required package/class
/**
 * Adds block to list of blocks which will be invalidated on all its
 * datanodes.
 */
private void addToInvalidates(BlockInfo storedBlock) {
  if (!isPopulatingReplQueues()) {
    return;
  }
  StringBuilder datanodes = new StringBuilder();
  for(DatanodeStorageInfo storage : blocksMap.getStorages(storedBlock,
      State.NORMAL)) {
    final DatanodeDescriptor node = storage.getDatanodeDescriptor();
    final Block b = getBlockOnStorage(storedBlock, storage);
    if (b != null) {
      invalidateBlocks.add(b, node, false);
      datanodes.append(node).append(" ");
    }
  }
  if (datanodes.length() != 0) {
    blockLog.debug("BLOCK* addToInvalidates: {} {}", storedBlock, datanodes);
  }
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines of code: 23, Source: BlockManager.java


Example 3: chooseStorage4Block

import org.apache.hadoop.hdfs.server.protocol.DatanodeStorage.State; // import the required package/class
/**
 * Find whether the datanode contains good storage of given type to
 * place block of size <code>blockSize</code>.
 *
 * <p>Currently datanode only cares about the storage type, in this
 * method, the first storage of given type we see is returned.
 *
 * @param t requested storage type
 * @param blockSize requested block size
 */
public DatanodeStorageInfo chooseStorage4Block(StorageType t,
    long blockSize) {
  final long requiredSize =
      blockSize * HdfsServerConstants.MIN_BLOCKS_FOR_WRITE;
  final long scheduledSize = blockSize * getBlocksScheduled(t);
  long remaining = 0;
  DatanodeStorageInfo storage = null;
  for (DatanodeStorageInfo s : getStorageInfos()) {
    if (s.getState() == State.NORMAL && s.getStorageType() == t) {
      if (storage == null) {
        storage = s;
      }
      long r = s.getRemaining();
      if (r >= requiredSize) {
        remaining += r;
      }
    }
  }
  if (requiredSize > remaining - scheduledSize) {
    return null;
  }
  return storage;
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines of code: 34, Source: DatanodeDescriptor.java


Example 4: addToInvalidates

import org.apache.hadoop.hdfs.server.protocol.DatanodeStorage.State; // import the required package/class
/**
 * Adds block to list of blocks which will be invalidated on all its
 * datanodes.
 */
private void addToInvalidates(Block b) {
  if (!namesystem.isPopulatingReplQueues()) {
    return;
  }
  StringBuilder datanodes = new StringBuilder();
  for(DatanodeStorageInfo storage : blocksMap.getStorages(b, State.NORMAL)) {
    final DatanodeDescriptor node = storage.getDatanodeDescriptor();
    invalidateBlocks.add(b, node, false);
    datanodes.append(node).append(" ");
  }
  if (datanodes.length() != 0) {
    blockLog.info("BLOCK* addToInvalidates: " + b + " "
        + datanodes);
  }
}
 
Developer: yncxcw, Project: FlexMap, Lines of code: 20, Source: BlockManager.java


Example 5: addToInvalidates

import org.apache.hadoop.hdfs.server.protocol.DatanodeStorage.State; // import the required package/class
/**
 * Adds block to list of blocks which will be invalidated on all its
 * datanodes.
 */
private void addToInvalidates(Block b) {
  if (!namesystem.isPopulatingReplQueues()) {
    return;
  }
  StringBuilder datanodes = new StringBuilder();
  for(DatanodeStorageInfo storage : blocksMap.getStorages(b, State.NORMAL)) {
    final DatanodeDescriptor node = storage.getDatanodeDescriptor();
    invalidateBlocks.add(b, node, false);
    datanodes.append(node).append(" ");
  }
  if (datanodes.length() != 0) {
    blockLog.info("BLOCK* addToInvalidates: {} {}", b, datanodes.toString());
  }
}
 
Developer: naver, Project: hadoop, Lines of code: 19, Source: BlockManager.java


Example 6: countNodes

import org.apache.hadoop.hdfs.server.protocol.DatanodeStorage.State; // import the required package/class
/**
 * Return the number of nodes hosting a given block, grouped
 * by the state of those replicas.
 */
public NumberReplicas countNodes(Block b) {
  int decommissioned = 0;
  int live = 0;
  int corrupt = 0;
  int excess = 0;
  int stale = 0;
  Collection<DatanodeDescriptor> nodesCorrupt = corruptReplicas.getNodes(b);
  for(DatanodeStorageInfo storage : blocksMap.getStorages(b, State.NORMAL)) {
    final DatanodeDescriptor node = storage.getDatanodeDescriptor();
    if ((nodesCorrupt != null) && (nodesCorrupt.contains(node))) {
      corrupt++;
    } else if (node.isDecommissionInProgress() || node.isDecommissioned()) {
      decommissioned++;
    } else {
      LightWeightLinkedSet<Block> blocksExcess = excessReplicateMap.get(node
          .getDatanodeUuid());
      if (blocksExcess != null && blocksExcess.contains(b)) {
        excess++;
      } else {
        live++;
      }
    }
    if (storage.areBlockContentsStale()) {
      stale++;
    }
  }
  return new NumberReplicas(live, decommissioned, corrupt, excess, stale);
}
 
Developer: naver, Project: hadoop, Lines of code: 33, Source: BlockManager.java


Example 7: countLiveNodes

import org.apache.hadoop.hdfs.server.protocol.DatanodeStorage.State; // import the required package/class
/** 
 * Simpler, faster form of {@link #countNodes(Block)} that only returns the number
 * of live nodes.  If in startup safemode (or its 30-sec extension period),
 * then it gains speed by ignoring issues of excess replicas or nodes
 * that are decommissioned or in process of becoming decommissioned.
 * If not in startup, then it calls {@link #countNodes(Block)} instead.
 * 
 * @param b - the block being tested
 * @return count of live nodes for this block
 */
int countLiveNodes(BlockInfoContiguous b) {
  if (!namesystem.isInStartupSafeMode()) {
    return countNodes(b).liveReplicas();
  }
  // else proceed with fast case
  int live = 0;
  Collection<DatanodeDescriptor> nodesCorrupt = corruptReplicas.getNodes(b);
  for(DatanodeStorageInfo storage : blocksMap.getStorages(b, State.NORMAL)) {
    final DatanodeDescriptor node = storage.getDatanodeDescriptor();
    if ((nodesCorrupt == null) || (!nodesCorrupt.contains(node)))
      live++;
  }
  return live;
}
 
Developer: naver, Project: hadoop, Lines of code: 25, Source: BlockManager.java


Example 8: updateFailedStorage

import org.apache.hadoop.hdfs.server.protocol.DatanodeStorage.State; // import the required package/class
private void updateFailedStorage(
    Set<DatanodeStorageInfo> failedStorageInfos) {
  for (DatanodeStorageInfo storageInfo : failedStorageInfos) {
    if (storageInfo.getState() != DatanodeStorage.State.FAILED) {
      LOG.info(storageInfo + " failed.");
      storageInfo.setState(DatanodeStorage.State.FAILED);
    }
  }
}
 
Developer: naver, Project: hadoop, Lines of code: 10, Source: DatanodeDescriptor.java


Example 9: getRemaining

import org.apache.hadoop.hdfs.server.protocol.DatanodeStorage.State; // import the required package/class
/**
 * Return the sum of remaining spaces of the specified type. If the remaining
 * space of a storage is less than minSize, it won't be counted toward the
 * sum.
 *
 * @param t The storage type. If null, the type is ignored.
 * @param minSize The minimum free space required.
 * @return the sum of remaining spaces that are bigger than minSize.
 */
public long getRemaining(StorageType t, long minSize) {
  long remaining = 0;
  for (DatanodeStorageInfo s : getStorageInfos()) {
    if (s.getState() == State.NORMAL &&
        (t == null || s.getStorageType() == t)) {
      long r = s.getRemaining();
      if (r >= minSize) {
        remaining += r;
      }
    }
  }
  return remaining;
}
 
Developer: naver, Project: hadoop, Lines of code: 23, Source: DatanodeDescriptor.java


Example 10: convertState

import org.apache.hadoop.hdfs.server.protocol.DatanodeStorage.State; // import the required package/class
private static StorageState convertState(State state) {
  switch(state) {
  case READ_ONLY_SHARED:
    return StorageState.READ_ONLY_SHARED;
  case NORMAL:
  default:
    return StorageState.NORMAL;
  }
}
 
Developer: naver, Project: hadoop, Lines of code: 10, Source: PBHelper.java


Example 11: convertState

import org.apache.hadoop.hdfs.server.protocol.DatanodeStorage.State; // import the required package/class
private static State convertState(StorageState state) {
  switch(state) {
  case READ_ONLY_SHARED:
    return State.READ_ONLY_SHARED;
  case NORMAL:
  default:
    return State.NORMAL;
  }
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines of code: 10, Source: PBHelperClient.java


Example 12: countNodes

import org.apache.hadoop.hdfs.server.protocol.DatanodeStorage.State; // import the required package/class
/**
 * Return the number of nodes hosting a given block, grouped
 * by the state of those replicas.
 * For a striped block, this includes nodes storing blocks belonging to the
 * striped block group.
 */
public NumberReplicas countNodes(Block b) {
  int decommissioned = 0;
  int decommissioning = 0;
  int live = 0;
  int readonly = 0;
  int corrupt = 0;
  int excess = 0;
  int stale = 0;
  Collection<DatanodeDescriptor> nodesCorrupt = corruptReplicas.getNodes(b);
  for(DatanodeStorageInfo storage : blocksMap.getStorages(b)) {
    if (storage.getState() == State.FAILED) {
      continue;
    } else if (storage.getState() == State.READ_ONLY_SHARED) {
      readonly++;
      continue;
    }
    final DatanodeDescriptor node = storage.getDatanodeDescriptor();
    if ((nodesCorrupt != null) && (nodesCorrupt.contains(node))) {
      corrupt++;
    } else if (node.isDecommissionInProgress()) {
      decommissioning++;
    } else if (node.isDecommissioned()) {
      decommissioned++;
    } else {
      LightWeightHashSet<BlockInfo> blocksExcess = excessReplicateMap.get(
          node.getDatanodeUuid());
      if (blocksExcess != null && blocksExcess.contains(b)) {
        excess++;
      } else {
        live++;
      }
    }
    if (storage.areBlockContentsStale()) {
      stale++;
    }
  }
  return new NumberReplicas(live, readonly, decommissioned, decommissioning,
      corrupt, excess, stale);
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines of code: 46, Source: BlockManager.java


Example 13: countLiveNodes

import org.apache.hadoop.hdfs.server.protocol.DatanodeStorage.State; // import the required package/class
/** 
 * Simpler, faster form of {@link #countNodes} that only returns the number
 * of live nodes.  If in startup safemode (or its 30-sec extension period),
 * then it gains speed by ignoring issues of excess replicas or nodes
 * that are decommissioned or in process of becoming decommissioned.
 * If not in startup, then it calls {@link #countNodes} instead.
 *
 * @param b - the block being tested
 * @return count of live nodes for this block
 */
int countLiveNodes(BlockInfo b) {
  if (!namesystem.isInStartupSafeMode()) {
    return countNodes(b).liveReplicas();
  }
  // else proceed with fast case
  int live = 0;
  Collection<DatanodeDescriptor> nodesCorrupt = corruptReplicas.getNodes(b);
  for(DatanodeStorageInfo storage : blocksMap.getStorages(b, State.NORMAL)) {
    final DatanodeDescriptor node = storage.getDatanodeDescriptor();
    if ((nodesCorrupt == null) || (!nodesCorrupt.contains(node)))
      live++;
  }
  return live;
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines of code: 25, Source: BlockManager.java


Example 14: updateFailedStorage

import org.apache.hadoop.hdfs.server.protocol.DatanodeStorage.State; // import the required package/class
private void updateFailedStorage(
    Set<DatanodeStorageInfo> failedStorageInfos) {
  for (DatanodeStorageInfo storageInfo : failedStorageInfos) {
    if (storageInfo.getState() != DatanodeStorage.State.FAILED) {
      LOG.info("{} failed.", storageInfo);
      storageInfo.setState(DatanodeStorage.State.FAILED);
    }
  }
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines of code: 10, Source: DatanodeDescriptor.java


Example 15: countLiveNodes

import org.apache.hadoop.hdfs.server.protocol.DatanodeStorage.State; // import the required package/class
/** 
 * Simpler, faster form of {@link #countNodes(Block)} that only returns the number
 * of live nodes.  If in startup safemode (or its 30-sec extension period),
 * then it gains speed by ignoring issues of excess replicas or nodes
 * that are decommissioned or in process of becoming decommissioned.
 * If not in startup, then it calls {@link #countNodes(Block)} instead.
 * 
 * @param b - the block being tested
 * @return count of live nodes for this block
 */
int countLiveNodes(BlockInfo b) {
  if (!namesystem.isInStartupSafeMode()) {
    return countNodes(b).liveReplicas();
  }
  // else proceed with fast case
  int live = 0;
  Collection<DatanodeDescriptor> nodesCorrupt = corruptReplicas.getNodes(b);
  for(DatanodeStorageInfo storage : blocksMap.getStorages(b, State.NORMAL)) {
    final DatanodeDescriptor node = storage.getDatanodeDescriptor();
    if ((nodesCorrupt == null) || (!nodesCorrupt.contains(node)))
      live++;
  }
  return live;
}
 
Developer: Nextzero, Project: hadoop-2.6.0-cdh5.4.3, Lines of code: 25, Source: BlockManager.java


Example 16: convert

import org.apache.hadoop.hdfs.server.protocol.DatanodeStorage.State; // import the required package/class
private static StorageState convert(State state) {
  switch (state) {
    case READ_ONLY:
      return StorageState.READ_ONLY;
    case NORMAL:
    default:
      return StorageState.NORMAL;
  }
}
 
Developer: hopshadoop, Project: hops, Lines of code: 10, Source: PBHelper.java



Note: The org.apache.hadoop.hdfs.server.protocol.DatanodeStorage.State examples in this article were collected from open-source projects hosted on GitHub, MSDocs, and similar source-code and documentation platforms. The snippets were selected from projects contributed by open-source developers, and copyright remains with the original authors. Please consult the license of the corresponding project before distributing or using the code; do not reproduce without permission.

