This article collects typical usage examples of the Java class org.apache.hadoop.hdfs.tools.offlineImageViewer.SpotCheckImageVisitor.ImageInfo. If you have been wondering what ImageInfo is for and how to use it, the curated class examples below may help.
The ImageInfo class belongs to the org.apache.hadoop.hdfs.tools.offlineImageViewer.SpotCheckImageVisitor package. Four code examples of the class are shown below, sorted by popularity by default.
Example 1: spotCheck
import org.apache.hadoop.hdfs.tools.offlineImageViewer.SpotCheckImageVisitor.ImageInfo; // import the required package/class
private void spotCheck(String hadoopVersion, String input,
    ImageInfo inodes, ImageInfo INUCs) {
  SpotCheckImageVisitor v = new SpotCheckImageVisitor();
  OfflineImageViewer oiv = new OfflineImageViewer(input, v, false);
  try {
    oiv.go();
  } catch (IOException e) {
    fail("Error processing file: " + input);
  }
  compareSpotCheck(hadoopVersion, v.getINodesInfo(), inodes);
  compareSpotCheck(hadoopVersion, v.getINUCsInfo(), INUCs);
  System.out.println("Successfully processed fsimage file from Hadoop version " +
      hadoopVersion);
}
Author: rhli | Project: hadoop-EAR | Lines: 16 | Source: TestOIVCanReadOldVersions.java
Example 2: compareSpotCheck
import org.apache.hadoop.hdfs.tools.offlineImageViewer.SpotCheckImageVisitor.ImageInfo; // import the required package/class
private void compareSpotCheck(String hadoopVersion,
    ImageInfo generated, ImageInfo expected) {
  assertEquals("Version " + hadoopVersion + ": Same number of total blocks",
      expected.totalNumBlocks, generated.totalNumBlocks);
  assertEquals("Version " + hadoopVersion + ": Same total file size",
      expected.totalFileSize, generated.totalFileSize);
  assertEquals("Version " + hadoopVersion + ": Same total replication factor",
      expected.totalReplications, generated.totalReplications);
  assertEquals("Version " + hadoopVersion + ": One-to-one matching of path names",
      expected.pathNames, generated.pathNames);
}
Author: rhli | Project: hadoop-EAR | Lines: 12 | Source: TestOIVCanReadOldVersions.java
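For readers without the Hadoop test sources at hand, the comparison pattern in Examples 1 and 2 can be sketched with a minimal stand-in class. Note that `ImageInfoSketch` and `matches` below are simplified assumptions for illustration, not Hadoop's actual `ImageInfo` or `compareSpotCheck`; only the field names mirror the examples above:

```java
import java.util.HashSet;
import java.util.Set;

// Simplified stand-in for SpotCheckImageVisitor.ImageInfo (an assumption,
// not the Hadoop class): a bag of summary statistics about an fsimage.
class ImageInfoSketch {
  long totalNumBlocks;
  long totalFileSize;
  long totalReplications;
  Set<String> pathNames = new HashSet<>();
}

public class SpotCheckSketch {
  // Mirrors the idea of compareSpotCheck: a generated summary matches the
  // expected one only if every statistic and the path set agree exactly.
  static boolean matches(ImageInfoSketch expected, ImageInfoSketch generated) {
    return expected.totalNumBlocks == generated.totalNumBlocks
        && expected.totalFileSize == generated.totalFileSize
        && expected.totalReplications == generated.totalReplications
        && expected.pathNames.equals(generated.pathNames);
  }

  public static void main(String[] args) {
    ImageInfoSketch expected = new ImageInfoSketch();
    expected.totalNumBlocks = 12;
    expected.totalFileSize = 1069548540L;
    expected.totalReplications = 14;
    expected.pathNames.add("/bar");

    ImageInfoSketch generated = new ImageInfoSketch();
    generated.totalNumBlocks = 12;
    generated.totalFileSize = 1069548540L;
    generated.totalReplications = 14;
    generated.pathNames.add("/bar");

    System.out.println(matches(expected, generated)); // prints "true"
  }
}
```

The real test uses JUnit's `assertEquals` instead of a boolean, so the first mismatching statistic fails the test with a message naming the Hadoop version under check.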
Example 3: testOldFSImages
import org.apache.hadoop.hdfs.tools.offlineImageViewer.SpotCheckImageVisitor.ImageInfo; // import the required package/class
public void testOldFSImages() {
  // Define the expected values from the prior versions, as they were created
  // and verified at time of creation
  Set<String> pathNames = new HashSet<String>();
  Collections.addAll(pathNames, "", /* root */
      "/bar",
      "/bar/dir0",
      "/bar/dir0/file0",
      "/bar/dir0/file1",
      "/bar/dir1",
      "/bar/dir1/file0",
      "/bar/dir1/file1",
      "/bar/dir2",
      "/bar/dir2/file0",
      "/bar/dir2/file1",
      "/foo",
      "/foo/dir0",
      "/foo/dir0/file0",
      "/foo/dir0/file1",
      "/foo/dir0/file2",
      "/foo/dir0/file3",
      "/foo/dir1",
      "/foo/dir1/file0",
      "/foo/dir1/file1",
      "/foo/dir1/file2",
      "/foo/dir1/file3");
  Set<String> INUCpaths = new HashSet<String>();
  Collections.addAll(INUCpaths, "/bar/dir0/file0",
      "/bar/dir0/file1",
      "/bar/dir1/file0",
      "/bar/dir1/file1",
      "/bar/dir2/file0",
      "/bar/dir2/file1");
  ImageInfo v18Inodes = new ImageInfo(); // Hadoop version 18 inodes
  v18Inodes.totalNumBlocks = 12;
  v18Inodes.totalFileSize = 1069548540L;
  v18Inodes.pathNames = pathNames;
  v18Inodes.totalReplications = 14;
  ImageInfo v18INUCs = new ImageInfo(); // Hadoop version 18 inodes under construction
  v18INUCs.totalNumBlocks = 0;
  v18INUCs.totalFileSize = 0;
  v18INUCs.pathNames = INUCpaths;
  v18INUCs.totalReplications = 6;
  ImageInfo v19Inodes = new ImageInfo(); // Hadoop version 19 inodes
  v19Inodes.totalNumBlocks = 12;
  v19Inodes.totalFileSize = 1069548540L;
  v19Inodes.pathNames = pathNames;
  v19Inodes.totalReplications = 14;
  ImageInfo v19INUCs = new ImageInfo(); // Hadoop version 19 inodes under construction
  v19INUCs.totalNumBlocks = 0;
  v19INUCs.totalFileSize = 0;
  v19INUCs.pathNames = INUCpaths;
  v19INUCs.totalReplications = 6;
  spotCheck("18", TEST_CACHE_DATA_DIR + "/fsimageV18", v18Inodes, v18INUCs);
  spotCheck("19", TEST_CACHE_DATA_DIR + "/fsimageV19", v19Inodes, v19INUCs);
}
Author: rhli | Project: hadoop-EAR | Lines: 64 | Source: TestOIVCanReadOldVersions.java
Example 4: testOldFSImages
import org.apache.hadoop.hdfs.tools.offlineImageViewer.SpotCheckImageVisitor.ImageInfo; // import the required package/class
@Test
public void testOldFSImages() {
  // Define the expected values from the prior versions, as they were created
  // and verified at time of creation
  Set<String> pathNames = new HashSet<String>();
  Collections.addAll(pathNames, "", /* root */
      "/bar",
      "/bar/dir0",
      "/bar/dir0/file0",
      "/bar/dir0/file1",
      "/bar/dir1",
      "/bar/dir1/file0",
      "/bar/dir1/file1",
      "/bar/dir2",
      "/bar/dir2/file0",
      "/bar/dir2/file1",
      "/foo",
      "/foo/dir0",
      "/foo/dir0/file0",
      "/foo/dir0/file1",
      "/foo/dir0/file2",
      "/foo/dir0/file3",
      "/foo/dir1",
      "/foo/dir1/file0",
      "/foo/dir1/file1",
      "/foo/dir1/file2",
      "/foo/dir1/file3");
  Set<String> INUCpaths = new HashSet<String>();
  Collections.addAll(INUCpaths, "/bar/dir0/file0",
      "/bar/dir0/file1",
      "/bar/dir1/file0",
      "/bar/dir1/file1",
      "/bar/dir2/file0",
      "/bar/dir2/file1");
  ImageInfo v18Inodes = new ImageInfo(); // Hadoop version 18 inodes
  v18Inodes.totalNumBlocks = 12;
  v18Inodes.totalFileSize = 1069548540L;
  v18Inodes.pathNames = pathNames;
  v18Inodes.totalReplications = 14;
  ImageInfo v18INUCs = new ImageInfo(); // Hadoop version 18 inodes under construction
  v18INUCs.totalNumBlocks = 0;
  v18INUCs.totalFileSize = 0;
  v18INUCs.pathNames = INUCpaths;
  v18INUCs.totalReplications = 6;
  ImageInfo v19Inodes = new ImageInfo(); // Hadoop version 19 inodes
  v19Inodes.totalNumBlocks = 12;
  v19Inodes.totalFileSize = 1069548540L;
  v19Inodes.pathNames = pathNames;
  v19Inodes.totalReplications = 14;
  ImageInfo v19INUCs = new ImageInfo(); // Hadoop version 19 inodes under construction
  v19INUCs.totalNumBlocks = 0;
  v19INUCs.totalFileSize = 0;
  v19INUCs.pathNames = INUCpaths;
  v19INUCs.totalReplications = 6;
  spotCheck("18", TEST_CACHE_DATA_DIR + "/fsimageV18", v18Inodes, v18INUCs);
  spotCheck("19", TEST_CACHE_DATA_DIR + "/fsimageV19", v19Inodes, v19INUCs);
}
Author: ict-carch | Project: hadoop-plus | Lines: 65 | Source: TestOIVCanReadOldVersions.java
Note: the org.apache.hadoop.hdfs.tools.offlineImageViewer.SpotCheckImageVisitor.ImageInfo examples in this article were collected from source code and documentation hosted on platforms such as GitHub and MSDocs; the snippets were selected from open-source projects contributed by their respective authors. Copyright of the source code remains with the original authors; consult each project's license before distributing or reusing the code. Do not reproduce this article without permission.