
Java DatanodeReportType Class Code Examples


This article compiles typical usage examples of the Java class org.apache.hadoop.hdfs.protocol.FSConstants.DatanodeReportType. If you have been wondering what DatanodeReportType is for, or how to use it in practice, the curated class examples below should help.



The DatanodeReportType class belongs to the org.apache.hadoop.hdfs.protocol.FSConstants package. Twenty code examples of the class are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Java code examples.
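Before the examples, here is a minimal, self-contained sketch of the idea behind DatanodeReportType: the enum selects which subset of datanodes (ALL, LIVE, or DEAD) a report should contain. The enum and the `Datanode` record below are simplified stand-ins for illustration only, not the real Hadoop classes:

```java
import java.util.ArrayList;
import java.util.List;

public class DatanodeReportSketch {

    // Stand-in mirroring FSConstants.DatanodeReportType's three values.
    enum DatanodeReportType { ALL, LIVE, DEAD }

    // Simplified stand-in for DatanodeInfo: just a name and a liveness flag.
    record Datanode(String name, boolean live) {}

    // Returns the datanodes matching the requested report type, the way a
    // NameNode filters its node list when serving getDatanodeReport(type).
    static List<Datanode> report(List<Datanode> nodes, DatanodeReportType type) {
        List<Datanode> out = new ArrayList<>();
        for (Datanode dn : nodes) {
            if (type == DatanodeReportType.ALL
                    || (type == DatanodeReportType.LIVE && dn.live())
                    || (type == DatanodeReportType.DEAD && !dn.live())) {
                out.add(dn);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<Datanode> nodes = List.of(
            new Datanode("dn1:50010", true),
            new Datanode("dn2:50010", true),
            new Datanode("dn3:50010", false));
        System.out.println(report(nodes, DatanodeReportType.LIVE).size()); // 2
        System.out.println(report(nodes, DatanodeReportType.DEAD).size()); // 1
        System.out.println(report(nodes, DatanodeReportType.ALL).size());  // 3
    }
}
```

In the real API, the filtering happens server-side: clients pass the enum value to `datanodeReport(...)` / `getDatanodeReport(...)`, as every example below does.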

Example 1: waitForDNHeartbeat

import org.apache.hadoop.hdfs.protocol.FSConstants.DatanodeReportType; // import the required package/class
private void waitForDNHeartbeat(DataNode dn, long timeoutMillis, int nnIndex)
  throws IOException, InterruptedException {
  InetSocketAddress addr = new InetSocketAddress("localhost",
                                                 getNameNodePort(nnIndex));
  DFSClient client = new DFSClient(addr, nameNodes[nnIndex].conf);
  int namespaceId = getNameNode(nnIndex).getNamespaceID();
  long startTime = System.currentTimeMillis();
  while (System.currentTimeMillis() < startTime + timeoutMillis) {
    DatanodeInfo report[] = client.datanodeReport(DatanodeReportType.LIVE);
 
    for (DatanodeInfo thisReport : report) {
      if (thisReport.getStorageID().equals(
            dn.getDNRegistrationForNS(namespaceId).getStorageID())) {
        if (thisReport.getLastUpdate() > startTime)
          return;
      }
    }

    Thread.sleep(500);
  }
}
 
Developer: rhli, Project: hadoop-EAR, Lines: 22, Source: MiniDFSCluster.java
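Example 1 and several of the examples that follow share the same poll-until-timeout idiom: repeatedly check a condition, sleeping between checks, until it holds or a deadline passes. A minimal, self-contained sketch of that idiom (the class and method names here are illustrative, not Hadoop APIs):

```java
import java.util.function.BooleanSupplier;

public class WaitUtil {
    // Polls condition every pollMillis until it holds or timeoutMillis
    // elapses. Returns true if the condition became true in time.
    static boolean waitFor(BooleanSupplier condition, long timeoutMillis,
                           long pollMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true;
            }
            Thread.sleep(pollMillis);
        }
        return condition.getAsBoolean(); // one last check at the deadline
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // Condition becomes true after ~200 ms; poll every 50 ms.
        boolean ok = waitFor(
            () -> System.currentTimeMillis() - start >= 200, 1000, 50);
        System.out.println(ok); // true
    }
}
```

waitForDNHeartbeat above is this pattern with the condition "the datanode's last-update timestamp is newer than when we started waiting".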


Example 2: testReportingNodesDNShutdown

import org.apache.hadoop.hdfs.protocol.FSConstants.DatanodeReportType; // import the required package/class
@Test
public void testReportingNodesDNShutdown() throws Exception {
  FSNamesystem namesystem = cluster.getNameNode().namesystem;
  waitForNodesReporting(3, namesystem);

  cluster.shutdownDataNode(0, false);

  int live = namesystem.getDatanodeListForReport(DatanodeReportType.LIVE)
      .size();
  long start = System.currentTimeMillis();
  while (live != 2 && System.currentTimeMillis() - start < 30000) {
    live = namesystem.getDatanodeListForReport(DatanodeReportType.LIVE)
        .size();
    System.out.println("Waiting for live : " + live);
    Thread.sleep(1000);
  }
  assertEquals(2, live);

  waitForNodesReporting(2, namesystem);

  cluster.restartDataNode(0);
  waitForNodesReporting(3, namesystem);
}
 
Developer: rhli, Project: hadoop-EAR, Lines: 24, Source: TestReportingNodes.java


Example 3: assertBalanced

import org.apache.hadoop.hdfs.protocol.FSConstants.DatanodeReportType; // import the required package/class
/** When this function exits, the cluster is balanced (no other guarantees; it might loop forever). */
private void assertBalanced(long totalUsedSpace, long totalCapacity) throws Exception {
  waitForHeartBeat(totalUsedSpace, totalCapacity);
  boolean balanced;
  do {
    DatanodeInfo[] datanodeReport = 
      client.getDatanodeReport(DatanodeReportType.ALL);
    assertEquals(datanodeReport.length, cluster.getDataNodes().size());
    balanced = true;
    double avgUtilization = ((double)totalUsedSpace)/totalCapacity*100;
    for(DatanodeInfo datanode:datanodeReport) {
      double util = ((double) datanode.getDfsUsed()) / datanode.getCapacity()
          * 100;
      if (Math.abs(avgUtilization - util) > 10 || util > 99) {
        balanced = false;
        DFSTestUtil.waitNMilliSecond(100);
        break;
      }
    }
  } while(!balanced);
}
 
Developer: rhli, Project: hadoop-EAR, Lines: 22, Source: TestBalancer.java


Example 4: assertNotBalanced

import org.apache.hadoop.hdfs.protocol.FSConstants.DatanodeReportType; // import the required package/class
private void assertNotBalanced(long totalUsedSpace, long totalCapacity,
      long[] expectedUtilizations) throws Exception {
  waitForHeartBeat(totalUsedSpace, totalCapacity);
  DatanodeInfo[] datanodeReport = client.getDatanodeReport(DatanodeReportType.ALL);
  long[] utilizations = new long[expectedUtilizations.length];
  int i = 0;
  for (DatanodeInfo datanode : datanodeReport) {
    totalUsedSpace -= datanode.getDfsUsed();
    totalCapacity -= datanode.getCapacity();
    utilizations[i++] = datanode.getDfsUsed();
  }
  assertEquals(0, totalUsedSpace);
  assertEquals(0, totalCapacity);
  assertEquals(expectedUtilizations.length, utilizations.length);
  Arrays.sort(expectedUtilizations);
  Arrays.sort(utilizations);
  assertTrue(Arrays.equals(expectedUtilizations, utilizations));
}
 
Developer: rhli, Project: hadoop-EAR, Lines: 19, Source: TestBalancer.java


Example 5: setupCluster

import org.apache.hadoop.hdfs.protocol.FSConstants.DatanodeReportType; // import the required package/class
private void setupCluster(Configuration conf,
    String[] racks, String[] hosts) throws IOException, InterruptedException {
  // start the cluster with one datanode
  this.conf = conf;
  cluster = new MiniDFSCluster(conf, hosts.length, true, racks, hosts);
  cluster.waitActive();
  fs = cluster.getFileSystem();
  placementMonitor = new PlacementMonitor(conf);
  placementMonitor.start();
  blockMover = placementMonitor.blockMover;
  namenode = cluster.getNameNode();
  datanodes = namenode.getDatanodeReport(DatanodeReportType.LIVE);
  // Wait for Livenodes in clusterInfo to be non-null
  long sTime = System.currentTimeMillis();
  while (System.currentTimeMillis() - sTime < 120000 && blockMover.cluster.liveNodes == null) {
    LOG.info("Waiting for cluster info to add all liveNodes");
    Thread.sleep(1000);
  }
}
 
Developer: rhli, Project: hadoop-EAR, Lines: 20, Source: TestPlacementMonitor.java


Example 6: decommissionOneNode

import org.apache.hadoop.hdfs.protocol.FSConstants.DatanodeReportType; // import the required package/class
private String decommissionOneNode() throws IOException {
  
  DFSClient client = ((DistributedFileSystem)fileSys).getClient();
  DatanodeInfo[] info = client.datanodeReport(DatanodeReportType.LIVE);

  int index = 0;
  boolean found = false;
  while (!found) {
    index = rand.nextInt(info.length);
    if (!info[index].isDecommissioned() && !info[index].isDecommissionInProgress()) {
      found = true;
    }
  }
  String nodename = info[index].getName();
  System.out.println("Decommissioning node: " + nodename);

  // write nodename into the exclude file.
  decommissionedNodes.add(nodename);
  writeExcludesFileAndRefresh(decommissionedNodes);
  return nodename;
}
 
Developer: rhli, Project: hadoop-EAR, Lines: 22, Source: TestBlockCopier.java


Example 7: testDatanodeStartupDuringFailover

import org.apache.hadoop.hdfs.protocol.FSConstants.DatanodeReportType; // import the required package/class
@Test
public void testDatanodeStartupDuringFailover() throws Exception {
  setUp(false, "testDatanodeStartupDuringFailover");
  cluster.killPrimary();
  cluster.restartDataNodes(false);
  long start = System.currentTimeMillis();
  int live = 0;
  int total = 3;
  while (System.currentTimeMillis() - start < 30000 && live != total) {
    live = cluster.getStandbyAvatar(0).avatar
        .getDatanodeReport(DatanodeReportType.LIVE).length;
    total = cluster.getStandbyAvatar(0).avatar
        .getDatanodeReport(DatanodeReportType.ALL).length;
  }
  assertEquals(total, live);
}
 
Developer: rhli, Project: hadoop-EAR, Lines: 17, Source: TestAvatarFailover.java


Example 8: testDeadDatanodeFailover

import org.apache.hadoop.hdfs.protocol.FSConstants.DatanodeReportType; // import the required package/class
@Test
public void testDeadDatanodeFailover() throws Exception {
  setUp(false, "testDeadDatanodeFailover");
  h.setIgnoreDatanodes(false);
  // Create test files.
  createTestFiles("/testDeadDatanodeFailover");
  cluster.shutDownDataNode(0);
  FSNamesystem ns = cluster.getStandbyAvatar(0).avatar.namesystem;
  StandbySafeMode safeMode = cluster.getStandbyAvatar(0).avatar.getStandbySafeMode();
  new ExitSafeMode(safeMode, ns).start();
  cluster.failOver();
  // One datanode should be removed after failover
  assertEquals(2,
      cluster.getPrimaryAvatar(0).avatar.namesystem
          .datanodeReport(DatanodeReportType.LIVE).length);
  assertTrue(pass);
}
 
Developer: rhli, Project: hadoop-EAR, Lines: 18, Source: TestStandbySafeMode.java


Example 9: waitForHeartbeats

import org.apache.hadoop.hdfs.protocol.FSConstants.DatanodeReportType; // import the required package/class
public void waitForHeartbeats() throws Exception {
  DatanodeInfo[] dns = cluster.getPrimaryAvatar(0).avatar
      .getDatanodeReport(DatanodeReportType.ALL);
  while (true) {
    int count = 0;
    for (DatanodeInfo dn : dns) {
      if (dn.getRemaining() == 5 * MAX_FILE_SIZE || dn.getRemaining() == 0) {
        LOG.info("Bad dn : " + dn.getName() + " remaining : "
            + dn.getRemaining());
        count++;
      }
    }
    dns = cluster.getPrimaryAvatar(0).avatar
      .getDatanodeReport(DatanodeReportType.ALL);
    if (count == 1)
      break;
    LOG.info("Waiting for heartbeats");
    Thread.sleep(1000);
  }
}
 
Developer: rhli, Project: hadoop-EAR, Lines: 21, Source: TestAvatarBalancer.java


Example 10: testDatanodeNoService

import org.apache.hadoop.hdfs.protocol.FSConstants.DatanodeReportType; // import the required package/class
@Test
public void testDatanodeNoService() throws Exception {
  cluster.shutDownDataNodes();
  cluster.killStandby();
  cluster.restartStandby();
  InjectionHandler.set(new TestHandler());
  cluster.restartDataNodes(false);
  // Wait for trigger.
  while (!done) {
    System.out.println("Waiting for trigger");
    Thread.sleep(1000);
  }
  int dnReports = cluster.getStandbyAvatar(0).avatar
      .getDatanodeReport(DatanodeReportType.LIVE).length;
  long start = System.currentTimeMillis();
  while (System.currentTimeMillis() - start < 30000 && dnReports != 1) {
    System.out.println("Waiting for dn report");
    Thread.sleep(1000);
    dnReports = cluster.getStandbyAvatar(0).avatar
        .getDatanodeReport(DatanodeReportType.LIVE).length;
  }
  assertEquals(1, dnReports);
  assertTrue(pass);
  assertTrue(done);
}
 
Developer: rhli, Project: hadoop-EAR, Lines: 26, Source: TestAvatarDatanodeNoService.java


Example 11: testDatanodeVersionStandby

import org.apache.hadoop.hdfs.protocol.FSConstants.DatanodeReportType; // import the required package/class
/** Test when standby registration throws IncorrectVersion */
@Test
public void testDatanodeVersionStandby() throws Exception {
  InjectionHandler.set(new TestHandler(2));
  cluster.startDataNodes(1, null, null, conf);
  waitForDone();
  int dnReports = cluster.getPrimaryAvatar(0).avatar
      .getDatanodeReport(DatanodeReportType.LIVE).length;
  int dnStandbyReports = cluster.getStandbyAvatar(0).avatar
      .getDatanodeReport(DatanodeReportType.LIVE).length;
  long start = System.currentTimeMillis();
  while (System.currentTimeMillis() - start < 10000 && dnReports != 1) {
    System.out.println("Waiting for dn report");
    DFSTestUtil.waitSecond();
    dnReports = cluster.getPrimaryAvatar(0).avatar
        .getDatanodeReport(DatanodeReportType.LIVE).length;
    dnStandbyReports = cluster.getStandbyAvatar(0).avatar
    .getDatanodeReport(DatanodeReportType.LIVE).length;
  }
  assertEquals(1, dnReports);
  assertEquals(0, dnStandbyReports);
  assertEquals(1, cluster.getDataNodes().size());
  assertTrue(cluster.getDataNodes().get(0).isDatanodeUp());
}
 
Developer: rhli, Project: hadoop-EAR, Lines: 25, Source: TestAvatarDatanodeVersion.java


Example 12: testDatanodeVersionPrimary

import org.apache.hadoop.hdfs.protocol.FSConstants.DatanodeReportType; // import the required package/class
@Test
public void testDatanodeVersionPrimary() throws Exception {
  InjectionHandler.set(new TestHandler(1));
  cluster.startDataNodes(1, null, null, conf);
  waitForDone();

  int dnReports = cluster.getPrimaryAvatar(0).avatar
      .getDatanodeReport(DatanodeReportType.LIVE).length;
  int dnStandbyReports = cluster.getStandbyAvatar(0).avatar
      .getDatanodeReport(DatanodeReportType.LIVE).length;
  long start = System.currentTimeMillis();
  while (System.currentTimeMillis() - start < 10000) {
    System.out.println("Waiting for dn report");
    DFSTestUtil.waitSecond();
    dnReports = cluster.getPrimaryAvatar(0).avatar
        .getDatanodeReport(DatanodeReportType.LIVE).length;
    dnStandbyReports = cluster.getStandbyAvatar(0).avatar
        .getDatanodeReport(DatanodeReportType.LIVE).length;
  }
  assertEquals(0, dnReports);
  assertEquals(1, dnStandbyReports);
  assertEquals(1, cluster.getDataNodes().size());
  assertFalse(cluster.getDataNodes().get(0).isDatanodeUp());
}
 
Developer: rhli, Project: hadoop-EAR, Lines: 25, Source: TestAvatarDatanodeVersion.java


Example 13: waitActive

import org.apache.hadoop.hdfs.protocol.FSConstants.DatanodeReportType; // import the required package/class
/**
 * Wait until the cluster is active and running.
 */
public void waitActive() throws IOException {
  if (nameNode == null) {
    return;
  }
  InetSocketAddress addr = NetUtils.makeSocketAddr("localhost",
                                                 getNameNodePort());
  DFSClient client = new DFSClient(addr, conf);

  // make sure all datanodes have registered and sent heartbeat
  while (shouldWait(client.datanodeReport(DatanodeReportType.LIVE))) {
    try {
      Thread.sleep(100);
    } catch (InterruptedException e) {
    }
  }

  client.close();
  System.out.println("Cluster is active");
}
 
Developer: Seagate, Project: hadoop-on-lustre, Lines: 23, Source: MiniDFSCluster.java


Example 14: waitForDNHeartbeat

import org.apache.hadoop.hdfs.protocol.FSConstants.DatanodeReportType; // import the required package/class
/**
 * Wait for the given datanode to heartbeat once.
 */
public void waitForDNHeartbeat(int dnIndex, long timeoutMillis)
  throws IOException, InterruptedException {
  DataNode dn = getDataNodes().get(dnIndex);
  InetSocketAddress addr = new InetSocketAddress("localhost",
                                                 getNameNodePort());
  DFSClient client = new DFSClient(addr, conf);

  long startTime = System.currentTimeMillis();
  while (System.currentTimeMillis() < startTime + timeoutMillis) {
    DatanodeInfo report[] = client.datanodeReport(DatanodeReportType.LIVE);

    for (DatanodeInfo thisReport : report) {
      if (thisReport.getStorageID().equals(
            dn.dnRegistration.getStorageID())) {
        if (thisReport.getLastUpdate() > startTime)
          return;
      }
    }

    Thread.sleep(500);
  }
}
 
Developer: Seagate, Project: hadoop-on-lustre, Lines: 26, Source: MiniDFSCluster.java


Example 15: waitActive

import org.apache.hadoop.hdfs.protocol.FSConstants.DatanodeReportType; // import the required package/class
/**
 * Wait until the cluster is active and running.
 */
public void waitActive() throws IOException {
  if (nameNode == null) {
    return;
  }
  InetSocketAddress addr = new InetSocketAddress("localhost",
                                                 getNameNodePort());
  DFSClient client = new DFSClient(addr, conf);

  // make sure all datanodes have registered and sent heartbeat
  while (shouldWait(client.datanodeReport(DatanodeReportType.LIVE))) {
    try {
      Thread.sleep(100);
    } catch (InterruptedException e) {
    }
  }

  client.close();
}
 
Developer: cumulusyebl, Project: cumulus, Lines: 22, Source: MiniDFSCluster.java


Example 16: chooseDatanode

import org.apache.hadoop.hdfs.protocol.FSConstants.DatanodeReportType; // import the required package/class
/**
 * Choose a datanode (hostname:portnumber). The datanode is chosen at random
 * from the live datanodes.
 * 
 * @param locationsToAvoid
 *            locations to avoid.
 * @return A string in the format name:port.
 * @throws IOException
 */
private String chooseDatanode(DatanodeInfo[] locationsToAvoid)
		throws IOException {
	DistributedFileSystem dfs = getDFS(new Path("/"));
	DatanodeInfo[] live = dfs.getClient().datanodeReport(
			DatanodeReportType.LIVE);

	Random rand = new Random();
	String chosen = null;
	int maxAttempts = 1000;
	for (int i = 0; i < maxAttempts && chosen == null; i++) {
		int idx = rand.nextInt(live.length);
		chosen = live[idx].name;
		for (DatanodeInfo avoid : locationsToAvoid) {
			if (chosen.equals(avoid.name)) {
				//LOG.info("Avoiding " + avoid.name);
				chosen = null;
				break;
			}
		}
	}
	if (chosen == null) {
		throw new IOException("Could not choose datanode");
	}
	return chosen;
}
 
Developer: iVCE, Project: RDFS, Lines: 35, Source: BlockReconstructor.java


Example 17: chooseDatanodeInfo

import org.apache.hadoop.hdfs.protocol.FSConstants.DatanodeReportType; // import the required package/class
private DatanodeInfo chooseDatanodeInfo(DatanodeInfo[] locationsToAvoid)
		throws IOException {
	DistributedFileSystem dfs = getDFS(new Path("/"));
	DatanodeInfo[] live = dfs.getClient().datanodeReport(
			DatanodeReportType.LIVE);

	Random rand = new Random();
	DatanodeInfo chosen = null;
	int maxAttempts = 1000;
	for (int i = 0; i < maxAttempts && chosen == null; i++) {
		int idx = rand.nextInt(live.length);
		chosen = live[idx];
		for (DatanodeInfo avoid : locationsToAvoid) {
			if (chosen.name.equals(avoid.name)) {
				chosen = null;
				break;
			}
		}
	}
	if (chosen == null) {
		throw new IOException("Could not choose datanode");
	}
	return chosen;
}
 
Developer: iVCE, Project: RDFS, Lines: 25, Source: BlockReconstructor.java


Example 18: triggerFailover

import org.apache.hadoop.hdfs.protocol.FSConstants.DatanodeReportType; // import the required package/class
/**
 * Triggers failover processing for safe mode and blocks until we have left
 * safe mode.
 * 
 * @throws IOException
 */
protected void triggerFailover() throws IOException {
  clearDataStructures();
  for (DatanodeInfo node : namesystem.datanodeReport(DatanodeReportType.LIVE)) {
    liveDatanodes.add(node);
    outStandingHeartbeats.add(node);
  }
  safeModeState = SafeModeState.FAILOVER_IN_PROGRESS;
  safeModeMonitor = new Daemon(new SafeModeMonitor(namesystem, this));
  safeModeMonitor.start();
  try {
    safeModeMonitor.join();
  } catch (InterruptedException ie) {
    throw new IOException("triggerFailover() interrupted");
  }
  if (safeModeState != SafeModeState.AFTER_FAILOVER) {
    throw new RuntimeException("safeModeState is : " + safeModeState +
        " which does not indicate a successful exit of safemode");
  }
}
 
Developer: iVCE, Project: RDFS, Lines: 26, Source: StandbySafeMode.java


Example 19: testDatanodeStartupDuringFailover

import org.apache.hadoop.hdfs.protocol.FSConstants.DatanodeReportType; // import the required package/class
@Test
public void testDatanodeStartupDuringFailover() throws Exception {
  setUp(false);
  cluster.killPrimary();
  cluster.restartDataNodes(false);
  long start = System.currentTimeMillis();
  int live = 0;
  int total = 3;
  while (System.currentTimeMillis() - start < 30000 && live != total) {
    live = cluster.getStandbyAvatar(0).avatar
        .getDatanodeReport(DatanodeReportType.LIVE).length;
    total = cluster.getStandbyAvatar(0).avatar
        .getDatanodeReport(DatanodeReportType.ALL).length;
  }
  assertEquals(total, live);
}
 
Developer: iVCE, Project: RDFS, Lines: 17, Source: TestAvatarFailover.java


Example 20: testDeadDatanodeFailover

import org.apache.hadoop.hdfs.protocol.FSConstants.DatanodeReportType; // import the required package/class
@Test
public void testDeadDatanodeFailover() throws Exception {
  setUp(false);
  h.setIgnoreDatanodes(false);
  // Create test files.
  createTestFiles("/testDeadDatanodeFailover");
  cluster.shutDownDataNode(0);
  FSNamesystem ns = cluster.getStandbyAvatar(0).avatar.namesystem;
  StandbySafeMode safeMode = cluster.getStandbyAvatar(0).avatar.getStandbySafeMode();
  new ExitSafeMode(safeMode, ns).start();
  cluster.failOver();
  // One datanode should be removed after failover
  assertEquals(2,
      cluster.getPrimaryAvatar(0).avatar.namesystem
          .datanodeReport(DatanodeReportType.LIVE).length);
  assertTrue(pass);
}
 
Developer: iVCE, Project: RDFS, Lines: 18, Source: TestStandbySafeMode.java



Note: The org.apache.hadoop.hdfs.protocol.FSConstants.DatanodeReportType examples in this article were compiled from source code and documentation platforms such as GitHub/MSDocs. The snippets were selected from open-source projects contributed by various developers; copyright remains with the original authors. Consult each project's license before redistributing or using the code. Do not reproduce without permission.

