Java DatanodeStorageReport Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport. If you are wondering what DatanodeStorageReport is for, or how to use it in practice, the curated class examples below may help.



The DatanodeStorageReport class belongs to the org.apache.hadoop.hdfs.server.protocol package. A total of 19 code examples of the class are shown below, sorted by popularity by default.

Example 1: getDatanodeStorageReport

import org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport; // import the required package/class
DatanodeStorageReport[] getDatanodeStorageReport(final DatanodeReportType type
    ) throws AccessControlException, StandbyException {
  checkSuperuserPrivilege();
  checkOperation(OperationCategory.UNCHECKED);
  readLock();
  try {
    checkOperation(OperationCategory.UNCHECKED);
    final DatanodeManager dm = getBlockManager().getDatanodeManager();      
    final List<DatanodeDescriptor> datanodes = dm.getDatanodeListForReport(type);

    DatanodeStorageReport[] reports = new DatanodeStorageReport[datanodes.size()];
    for (int i = 0; i < reports.length; i++) {
      final DatanodeDescriptor d = datanodes.get(i);
      reports[i] = new DatanodeStorageReport(new DatanodeInfo(d),
          d.getStorageReports());
    }
    return reports;
  } finally {
    readUnlock();
  }
}
 
Developer: naver | Project: hadoop | Lines: 22 | Source: FSNamesystem.java
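Example 1 shows the NameNode's usual discipline for read-only queries: take the read lock, re-check the operation category once inside the lock, and release the lock in a finally block. A minimal standalone sketch of that locking pattern, using a plain ReentrantReadWriteLock instead of the FSNamesystem internals (the `readUnderLock` helper is hypothetical, for illustration only):

```java
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;
import java.util.function.Supplier;

public class ReadLockPatternSketch {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

  /** Run a read-only query under the read lock, releasing it in finally. */
  <T> T readUnderLock(Supplier<T> query) {
    lock.readLock().lock();
    try {
      // In FSNamesystem the operation category is re-checked here, after
      // the lock is held, before the actual work runs.
      return query.get();
    } finally {
      lock.readLock().unlock();
    }
  }

  public static void main(String[] args) {
    ReadLockPatternSketch ns = new ReadLockPatternSketch();
    List<String> reports = ns.readUnderLock(() -> List.of("dn1", "dn2"));
    System.out.println(reports); // [dn1, dn2]
  }
}
```

The finally block guarantees the lock is released even if the query throws, which is why the pattern appears verbatim in most FSNamesystem read paths.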


Example 2: init

import org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport; // import the required package/class
/** Get live datanode storage reports and then build the network topology. */
public List<DatanodeStorageReport> init() throws IOException {
  final DatanodeStorageReport[] reports = nnc.getLiveDatanodeStorageReport();
  final List<DatanodeStorageReport> trimmed = new ArrayList<DatanodeStorageReport>(); 
  // create network topology and classify utilization collections:
  // over-utilized, above-average, below-average and under-utilized.
  for (DatanodeStorageReport r : DFSUtil.shuffle(reports)) {
    final DatanodeInfo datanode = r.getDatanodeInfo();
    if (shouldIgnore(datanode)) {
      continue;
    }
    trimmed.add(r);
    cluster.add(datanode);
  }
  return trimmed;
}
 
Developer: naver | Project: hadoop | Lines: 17 | Source: Dispatcher.java
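The comment in init() mentions classifying datanodes into over-utilized, above-average, below-average, and under-utilized groups. A standalone sketch of one plausible classification against the cluster's average utilization; the `Node` record and the exact threshold-band semantics here are assumptions for illustration, not the Balancer's actual code:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class BalancerGroupsSketch {
  // Hypothetical node: name plus utilization percentage (dfsUsed/capacity*100).
  record Node(String name, double utilization) {}

  /**
   * Classify nodes relative to the cluster-average utilization, with a
   * threshold band on either side of the average (assumed semantics).
   */
  static Map<String, List<String>> classify(List<Node> nodes, double threshold) {
    double avg = nodes.stream().mapToDouble(Node::utilization).average().orElse(0.0);
    Map<String, List<String>> groups = new LinkedHashMap<>();
    for (String g : List.of("over-utilized", "above-average", "below-average", "under-utilized")) {
      groups.put(g, new ArrayList<>());
    }
    for (Node n : nodes) {
      String g = n.utilization() > avg + threshold ? "over-utilized"
          : n.utilization() > avg ? "above-average"
          : n.utilization() >= avg - threshold ? "below-average"
          : "under-utilized";
      groups.get(g).add(n.name());
    }
    return groups;
  }

  public static void main(String[] args) {
    // Average utilization is 50%; with a 10% band, dn1 and dn4 fall outside it.
    List<Node> nodes = List.of(new Node("dn1", 90.0), new Node("dn2", 55.0),
        new Node("dn3", 45.0), new Node("dn4", 10.0));
    System.out.println(classify(nodes, 10.0));
  }
}
```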


Example 3: assertReports

import org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport; // import the required package/class
static void assertReports(int numDatanodes, DatanodeReportType type,
    DFSClient client, List<DataNode> datanodes, String bpid) throws IOException {
  final DatanodeInfo[] infos = client.datanodeReport(type);
  assertEquals(numDatanodes, infos.length);
  final DatanodeStorageReport[] reports = client.getDatanodeStorageReport(type);
  assertEquals(numDatanodes, reports.length);
  
  for(int i = 0; i < infos.length; i++) {
    assertEquals(infos[i], reports[i].getDatanodeInfo());
    
    final DataNode d = findDatanode(infos[i].getDatanodeUuid(), datanodes);
    if (bpid != null) {
      //check storage
      final StorageReport[] computed = reports[i].getStorageReports();
      Arrays.sort(computed, CMP);
      final StorageReport[] expected = d.getFSDataset().getStorageReports(bpid);
      Arrays.sort(expected, CMP);

      assertEquals(expected.length, computed.length);
      for(int j = 0; j < expected.length; j++) {
        assertEquals(expected[j].getStorage().getStorageID(),
                     computed[j].getStorage().getStorageID());
      }
    }
  }
}
 
Developer: naver | Project: hadoop | Lines: 27 | Source: TestDatanodeReport.java


Example 4: compareTotalPoolUsage

import org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport; // import the required package/class
/**
 * Compare the total blockpool usage on each datanode to ensure that nothing
 * was balanced.
 *
 * @param preReports storage reports from pre balancer run
 * @param postReports storage reports from post balancer run
 */
private static void compareTotalPoolUsage(DatanodeStorageReport[] preReports,
    DatanodeStorageReport[] postReports) {
  Assert.assertNotNull(preReports);
  Assert.assertNotNull(postReports);
  Assert.assertEquals(preReports.length, postReports.length);
  for (DatanodeStorageReport preReport : preReports) {
    String dnUuid = preReport.getDatanodeInfo().getDatanodeUuid();
    for(DatanodeStorageReport postReport : postReports) {
      if(postReport.getDatanodeInfo().getDatanodeUuid().equals(dnUuid)) {
        Assert.assertEquals(getTotalPoolUsage(preReport),
            getTotalPoolUsage(postReport));
        LOG.info("Comparison of datanode pool usage pre/post balancer run. "
            + "PrePoolUsage: " + getTotalPoolUsage(preReport)
            + ", PostPoolUsage: " + getTotalPoolUsage(postReport));
        break;
      }
    }
  }
}
 
Developer: aliyun-beta | Project: aliyun-oss-hadoop-fs | Lines: 27 | Source: TestBalancerWithMultipleNameNodes.java
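compareTotalPoolUsage matches pre- and post-run reports by datanode UUID with a nested loop. The same matching can be sketched with a lookup map instead; the `Report` record here is a hypothetical stand-in for DatanodeStorageReport, not the Hadoop type:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PoolUsageCompareSketch {
  // Hypothetical report: datanode UUID and total blockpool usage.
  record Report(String uuid, long poolUsage) {}

  /** Match pre/post reports by UUID and check that usage is unchanged. */
  static boolean nothingBalanced(List<Report> pre, List<Report> post) {
    // Index the post-run reports by UUID so each pre-run report is a
    // single map lookup rather than an inner loop.
    Map<String, Long> postByUuid = new HashMap<>();
    for (Report r : post) {
      postByUuid.put(r.uuid(), r.poolUsage());
    }
    for (Report r : pre) {
      Long after = postByUuid.get(r.uuid());
      if (after == null || after != r.poolUsage()) {
        return false;
      }
    }
    return true;
  }

  public static void main(String[] args) {
    List<Report> pre = List.of(new Report("a", 5L), new Report("b", 7L));
    List<Report> post = List.of(new Report("b", 7L), new Report("a", 5L));
    System.out.println(nothingBalanced(pre, post)); // true: order differs, usage unchanged
  }
}
```

For the handful of datanodes in a test cluster the nested loop in the original is harmless; the map form mainly documents that UUID is the join key.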


Example 5: getNumberOfDataDirsPerHost

import org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport; // import the required package/class
public HashMap<String, Integer> getNumberOfDataDirsPerHost() {
  HashMap<String, Integer> disksPerHost = new HashMap<>();

  try {
    @SuppressWarnings("resource")
    DFSClient dfsClient = new DFSClient(NameNode.getAddress(getConf()), getConf());

    DatanodeStorageReport[] datanodeStorageReports = dfsClient.getDatanodeStorageReport(DatanodeReportType.ALL);

    for (DatanodeStorageReport datanodeStorageReport : datanodeStorageReports) {
      disksPerHost.put(
          datanodeStorageReport.getDatanodeInfo().getHostName(),
          datanodeStorageReport.getStorageReports().length);
    }
  } catch (IOException e) {
    LOG.warn("Number of data directories (disks) per node could not be collected (requires higher privileges).");
  }

  return disksPerHost;
}
 
Developer: cerndb | Project: hdfs-metadata | Lines: 22 | Source: DistributedFileSystemMetadata.java


Example 6: getDatanodeStorageReport

import org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport; // import the required package/class
public DatanodeStorageReport[] getDatanodeStorageReport(
    DatanodeReportType type) throws IOException {
  checkOpen();
  TraceScope scope =
      Trace.startSpan("datanodeStorageReport", traceSampler);
  try {
    return namenode.getDatanodeStorageReport(type);
  } finally {
    scope.close();
  }
}
 
Developer: naver | Project: hadoop | Lines: 12 | Source: DFSClient.java


Example 7: init

import org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport; // import the required package/class
void init() throws IOException {
  initStoragePolicies();
  final List<DatanodeStorageReport> reports = dispatcher.init();
  for(DatanodeStorageReport r : reports) {
    final DDatanode dn = dispatcher.newDatanode(r.getDatanodeInfo());
    for(StorageType t : StorageType.getMovableTypes()) {
      final Source source = dn.addSource(t, Long.MAX_VALUE, dispatcher);
      final long maxRemaining = getMaxRemaining(r, t);
      final StorageGroup target = maxRemaining > 0L ? dn.addTarget(t,
          maxRemaining) : null;
      storages.add(source, target);
    }
  }
}
 
Developer: naver | Project: hadoop | Lines: 15 | Source: Mover.java


Example 8: getMaxRemaining

import org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport; // import the required package/class
private static long getMaxRemaining(DatanodeStorageReport report, StorageType t) {
  long max = 0L;
  for(StorageReport r : report.getStorageReports()) {
    if (r.getStorage().getStorageType() == t) {
      if (r.getRemaining() > max) {
        max = r.getRemaining();
      }
    }
  }
  return max;
}
 
Developer: naver | Project: hadoop | Lines: 12 | Source: Mover.java


Example 9: getDatanodeStorageReport

import org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport; // import the required package/class
@Override // ClientProtocol
public DatanodeStorageReport[] getDatanodeStorageReport(
    DatanodeReportType type) throws IOException {
  checkNNStartup();
  final DatanodeStorageReport[] reports = namesystem.getDatanodeStorageReport(type);
  return reports;
}
 
Developer: naver | Project: hadoop | Lines: 8 | Source: NameNodeRpcServer.java


Example 10: getCapacity

import org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport; // import the required package/class
private static long getCapacity(DatanodeStorageReport report, StorageType t) {
  long capacity = 0L;
  for(StorageReport r : report.getStorageReports()) {
    if (r.getStorage().getStorageType() == t) {
      capacity += r.getCapacity();
    }
  }
  return capacity;
}
 
Developer: naver | Project: hadoop | Lines: 10 | Source: Balancer.java


Example 11: getRemaining

import org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport; // import the required package/class
private static long getRemaining(DatanodeStorageReport report, StorageType t) {
  long remaining = 0L;
  for(StorageReport r : report.getStorageReports()) {
    if (r.getStorage().getStorageType() == t) {
      remaining += r.getRemaining();
    }
  }
  return remaining;
}
 
Developer: naver | Project: hadoop | Lines: 10 | Source: Balancer.java


Example 12: accumulateSpaces

import org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport; // import the required package/class
@Override
void accumulateSpaces(DatanodeStorageReport r) {
  for(StorageReport s : r.getStorageReports()) {
    final StorageType t = s.getStorage().getStorageType();
    totalCapacities.add(t, s.getCapacity());
    totalUsedSpaces.add(t, s.getDfsUsed());
  }
}
 
Developer: naver | Project: hadoop | Lines: 9 | Source: BalancingPolicy.java


Example 13: getUtilization

import org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport; // import the required package/class
@Override
Double getUtilization(DatanodeStorageReport r, final StorageType t) {
  long capacity = 0L;
  long dfsUsed = 0L;
  for(StorageReport s : r.getStorageReports()) {
    if (s.getStorage().getStorageType() == t) {
      capacity += s.getCapacity();
      dfsUsed += s.getDfsUsed();
    }
  }
  return capacity == 0L? null: dfsUsed*100.0/capacity;
}
 
Developer: naver | Project: hadoop | Lines: 13 | Source: BalancingPolicy.java
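getUtilization reduces to a simple formula: DFS-used space as a percentage of capacity for one storage type, with null meaning the datanode has no storage of that type. A standalone sketch of the same computation; the `SimpleStorage` record is a hypothetical stand-in for Hadoop's StorageReport, using a plain String where the real code uses StorageType:

```java
import java.util.List;

public class UtilizationSketch {
  // Hypothetical stand-in for Hadoop's StorageReport: type, capacity, dfsUsed.
  record SimpleStorage(String type, long capacity, long dfsUsed) {}

  /** Percent of capacity used for one storage type; null if no such storage. */
  static Double utilization(List<SimpleStorage> storages, String type) {
    long capacity = 0L;
    long dfsUsed = 0L;
    for (SimpleStorage s : storages) {
      if (s.type().equals(type)) {
        capacity += s.capacity();
        dfsUsed += s.dfsUsed();
      }
    }
    // Same guard as the original: avoid division by zero, signal "absent" with null.
    return capacity == 0L ? null : dfsUsed * 100.0 / capacity;
  }

  public static void main(String[] args) {
    List<SimpleStorage> storages = List.of(
        new SimpleStorage("DISK", 1000L, 250L),
        new SimpleStorage("DISK", 1000L, 750L),
        new SimpleStorage("SSD", 500L, 100L));
    System.out.println(utilization(storages, "DISK"));    // 50.0
    System.out.println(utilization(storages, "ARCHIVE")); // null
  }
}
```

Returning a boxed Double rather than a primitive is what lets the caller distinguish "0% used" from "no storage of this type at all".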


Example 14: convertDatanodeStorageReport

import org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport; // import the required package/class
public static DatanodeStorageReportProto convertDatanodeStorageReport(
    DatanodeStorageReport report) {
  return DatanodeStorageReportProto.newBuilder()
      .setDatanodeInfo(convert(report.getDatanodeInfo()))
      .addAllStorageReports(convertStorageReports(report.getStorageReports()))
      .build();
}
 
Developer: naver | Project: hadoop | Lines: 8 | Source: PBHelper.java


Example 15: convertDatanodeStorageReports

import org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport; // import the required package/class
public static List<DatanodeStorageReportProto> convertDatanodeStorageReports(
    DatanodeStorageReport[] reports) {
  final List<DatanodeStorageReportProto> protos
      = new ArrayList<DatanodeStorageReportProto>(reports.length);
  for(int i = 0; i < reports.length; i++) {
    protos.add(convertDatanodeStorageReport(reports[i]));
  }
  return protos;
}
 
Developer: naver | Project: hadoop | Lines: 10 | Source: PBHelper.java


Example 16: getDatanodeStorageReport

import org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport; // import the required package/class
@Override
public DatanodeStorageReport[] getDatanodeStorageReport(DatanodeReportType type)
    throws IOException {
  final GetDatanodeStorageReportRequestProto req
      = GetDatanodeStorageReportRequestProto.newBuilder()
          .setType(PBHelper.convert(type)).build();
  try {
    return PBHelper.convertDatanodeStorageReports(
        rpcProxy.getDatanodeStorageReport(null, req).getDatanodeStorageReportsList());
  } catch (ServiceException e) {
    throw ProtobufHelper.getRemoteException(e);
  }
}
 
Developer: naver | Project: hadoop | Lines: 14 | Source: ClientNamenodeProtocolTranslatorPB.java


Example 17: getDatanodeStorageReport

import org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport; // import the required package/class
public DatanodeStorageReport[] getDatanodeStorageReport(
    DatanodeReportType type) throws IOException {
  checkOpen();
  try (TraceScope ignored = tracer.newScope("datanodeStorageReport")) {
    return namenode.getDatanodeStorageReport(type);
  }
}
 
Developer: aliyun-beta | Project: aliyun-oss-hadoop-fs | Lines: 8 | Source: DFSClient.java


Example 18: convertDatanodeStorageReports

import org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport; // import the required package/class
public static DatanodeStorageReport[] convertDatanodeStorageReports(
    List<DatanodeStorageReportProto> protos) {
  final DatanodeStorageReport[] reports
      = new DatanodeStorageReport[protos.size()];
  for(int i = 0; i < reports.length; i++) {
    reports[i] = convertDatanodeStorageReport(protos.get(i));
  }
  return reports;
}
 
Developer: aliyun-beta | Project: aliyun-oss-hadoop-fs | Lines: 10 | Source: PBHelperClient.java


Example 19: getDatanodeStorageReport

import org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport; // import the required package/class
@Override
public DatanodeStorageReport[] getDatanodeStorageReport(
    DatanodeReportType type) throws IOException {
  final GetDatanodeStorageReportRequestProto req
      = GetDatanodeStorageReportRequestProto.newBuilder()
      .setType(PBHelperClient.convert(type)).build();
  try {
    return PBHelperClient.convertDatanodeStorageReports(
        rpcProxy.getDatanodeStorageReport(null, req)
            .getDatanodeStorageReportsList());
  } catch (ServiceException e) {
    throw ProtobufHelper.getRemoteException(e);
  }
}
 
Developer: aliyun-beta | Project: aliyun-oss-hadoop-fs | Lines: 15 | Source: ClientNamenodeProtocolTranslatorPB.java



Note: the org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport examples in this article were collected from open-source projects hosted on platforms such as GitHub and MSDocs. Copyright of the source code remains with the original authors; consult each project's license before redistributing or reusing it. Do not republish without permission.

