This article collects typical usage examples of the Java class org.apache.hadoop.hbase.snapshot.ExportSnapshot. If you are wondering what the ExportSnapshot class does, how to use it, or want to see it in context, the curated examples below may help.
The ExportSnapshot class belongs to the org.apache.hadoop.hbase.snapshot package. Four code examples are shown below, sorted by popularity by default.
Example 1: main
import org.apache.hadoop.hbase.snapshot.ExportSnapshot; // required import
/**
* @param args
* @throws Throwable
*/
public static void main(String[] args) throws Throwable {
  ProgramDriver pgd = new ProgramDriver();
  pgd.addClass(RowCounter.NAME, RowCounter.class,
      "Count rows in HBase table.");
  pgd.addClass(CellCounter.NAME, CellCounter.class,
      "Count cells in HBase table.");
  pgd.addClass(Export.NAME, Export.class, "Write table data to HDFS.");
  pgd.addClass(Import.NAME, Import.class, "Import data written by Export.");
  pgd.addClass(ImportTsv.NAME, ImportTsv.class, "Import data in TSV format.");
  pgd.addClass(LoadIncrementalHFiles.NAME, LoadIncrementalHFiles.class,
      "Complete a bulk data load.");
  pgd.addClass(CopyTable.NAME, CopyTable.class,
      "Export a table from local cluster to peer cluster.");
  pgd.addClass(VerifyReplication.NAME, VerifyReplication.class, "Compare" +
      " the data from tables in two different clusters. WARNING: It" +
      " doesn't work for incrementColumnValues'd cells since the" +
      " timestamp is changed after being appended to the log.");
  pgd.addClass(WALPlayer.NAME, WALPlayer.class, "Replay WAL files.");
  pgd.addClass(ExportSnapshot.NAME, ExportSnapshot.class, "Export" +
      " the specific snapshot to a given FileSystem.");
  ProgramDriver.class.getMethod("driver", new Class[] { String[].class })
      .invoke(pgd, new Object[] { args });
}
Developer: fengchen8086, Project: ditb, Code lines: 30, Source: Driver.java
Example 2: main
import org.apache.hadoop.hbase.snapshot.ExportSnapshot; // required import
/**
* @param args
* @throws Throwable
*/
public static void main(String[] args) throws Throwable {
  ProgramDriver pgd = new ProgramDriver();
  pgd.addClass(RowCounter.NAME, RowCounter.class,
      "Count rows in HBase table.");
  pgd.addClass(CellCounter.NAME, CellCounter.class,
      "Count cells in HBase table.");
  pgd.addClass(Export.NAME, Export.class, "Write table data to HDFS.");
  pgd.addClass(Import.NAME, Import.class, "Import data written by Export.");
  pgd.addClass(ImportTsv.NAME, ImportTsv.class, "Import data in TSV format.");
  pgd.addClass(LoadIncrementalHFiles.NAME, LoadIncrementalHFiles.class,
      "Complete a bulk data load.");
  pgd.addClass(CopyTable.NAME, CopyTable.class,
      "Export a table from local cluster to peer cluster.");
  pgd.addClass(VerifyReplication.NAME, VerifyReplication.class, "Compare" +
      " data from tables in two different clusters. It" +
      " doesn't work for incrementColumnValues'd cells since" +
      " timestamp is changed after appending to WAL.");
  pgd.addClass(WALPlayer.NAME, WALPlayer.class, "Replay WAL files.");
  pgd.addClass(ExportSnapshot.NAME, ExportSnapshot.class, "Export" +
      " the specific snapshot to a given FileSystem.");
  ProgramDriver.class.getMethod("driver", new Class[] { String[].class })
      .invoke(pgd, new Object[] { args });
}
Developer: apache, Project: hbase, Code lines: 30, Source: Driver.java
Example 3: testBalanceSplit
import org.apache.hadoop.hbase.snapshot.ExportSnapshot; // required import
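The two Driver variants above follow the same pattern: each MapReduce tool is registered under a name with ProgramDriver, and the first command-line argument selects which tool runs (so `hbase ... exportsnapshot -snapshot ...` reaches ExportSnapshot). The stand-in class below (a hypothetical MiniProgramDriver, not Hadoop's actual org.apache.hadoop.util.ProgramDriver) sketches that dispatch pattern in isolation:

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Consumer;

// Minimal sketch of the name -> program dispatch used by the Driver above.
// Illustrative only; Hadoop's real ProgramDriver invokes each class's main()
// reflectively and prints a usage menu on an unknown program name.
public class MiniProgramDriver {
  private final Map<String, Consumer<String[]>> programs = new LinkedHashMap<>();
  private final Map<String, String> descriptions = new LinkedHashMap<>();

  void addProgram(String name, Consumer<String[]> entry, String description) {
    programs.put(name, entry);
    descriptions.put(name, description);
  }

  int run(String[] args) {
    if (args.length == 0 || !programs.containsKey(args[0])) {
      // Unknown program: print the menu of registered tools, like ProgramDriver does.
      descriptions.forEach((name, desc) -> System.out.println(name + ": " + desc));
      return -1;
    }
    // Strip the program name and hand the remaining args to the selected tool.
    programs.get(args[0]).accept(Arrays.copyOfRange(args, 1, args.length));
    return 0;
  }

  public static void main(String[] args) {
    MiniProgramDriver pgd = new MiniProgramDriver();
    pgd.addProgram("exportsnapshot",
        a -> System.out.println("would export with " + Arrays.toString(a)),
        "Export the specific snapshot to a given FileSystem.");
    pgd.run(new String[] { "exportsnapshot", "-snapshot", "s1" });
  }
}
```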
/**
 * Verify the result of the getBalancedSplits() method.
 * The result is a set of file groups, used as the input lists for the "export" mappers.
 * All the groups should contain a similar amount of data.
 *
 * The input list is a list of (file path, length) pairs.
 * getBalancedSplits() sorts it by length and assigns one file at a time
 * to each group, going back and forth across the groups.
 */
@Test
public void testBalanceSplit() throws Exception {
  // Create a list of files
  List<Pair<Path, Long>> files = new ArrayList<Pair<Path, Long>>();
  for (long i = 0; i <= 20; i++) {
    files.add(new Pair<Path, Long>(new Path("file-" + i), i));
  }
  // Create 5 groups (total size 210)
  // group 0: 20, 11, 10, 1, 0 (total size: 42)
  // group 1: 19, 12, 9, 2 (total size: 42)
  // group 2: 18, 13, 8, 3 (total size: 42)
  // group 3: 17, 14, 7, 4 (total size: 42)
  // group 4: 16, 15, 6, 5 (total size: 42)
  List<List<Path>> splits = ExportSnapshot.getBalancedSplits(files, 5);
  assertEquals(5, splits.size());
  assertEquals(Arrays.asList(new Path("file-20"), new Path("file-11"),
      new Path("file-10"), new Path("file-1"), new Path("file-0")), splits.get(0));
  assertEquals(Arrays.asList(new Path("file-19"), new Path("file-12"),
      new Path("file-9"), new Path("file-2")), splits.get(1));
  assertEquals(Arrays.asList(new Path("file-18"), new Path("file-13"),
      new Path("file-8"), new Path("file-3")), splits.get(2));
  assertEquals(Arrays.asList(new Path("file-17"), new Path("file-14"),
      new Path("file-7"), new Path("file-4")), splits.get(3));
  assertEquals(Arrays.asList(new Path("file-16"), new Path("file-15"),
      new Path("file-6"), new Path("file-5")), splits.get(4));
}
Developer: fengchen8086, Project: LCIndex-HBase-0.94.16, Code lines: 37, Source: TestExportSnapshot.java
Example 4: testExportFileSystemState
import org.apache.hadoop.hbase.snapshot.ExportSnapshot; // required import
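The balancing strategy this test verifies can be sketched without any Hadoop dependencies: sort the files by size, largest first, then deal them out to the groups back and forth (group 0..n-1, then n-1..0, and so on), so every group ends up with a similar total. The class below is an illustrative re-implementation of that idea, not HBase's actual ExportSnapshot.getBalancedSplits():

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Sketch of the "deal back and forth" balancing described in the test above.
public class BalancedSplitSketch {
  /** files maps file name -> length; returns ngroups groups of file names. */
  static List<List<String>> balance(Map<String, Long> files, int ngroups) {
    // Sort by length, largest first.
    List<Map.Entry<String, Long>> sorted = new ArrayList<>(files.entrySet());
    sorted.sort((a, b) -> Long.compare(b.getValue(), a.getValue()));

    List<List<String>> groups = new ArrayList<>();
    for (int i = 0; i < ngroups; i++) {
      groups.add(new ArrayList<>());
    }

    // Deal one file per group, reversing direction at each end
    // (0,1,...,n-1,n-1,...,1,0,0,1,...) so sizes stay balanced.
    int g = 0, dir = 1;
    for (Map.Entry<String, Long> e : sorted) {
      groups.get(g).add(e.getKey());
      if (g + dir < 0 || g + dir >= ngroups) {
        dir = -dir; // bounce: the edge group receives two files in a row
      } else {
        g += dir;
      }
    }
    return groups;
  }

  public static void main(String[] args) {
    Map<String, Long> files = new TreeMap<>();
    for (long i = 0; i <= 20; i++) {
      files.put("file-" + i, i); // same input as the test: sizes 0..20
    }
    for (List<String> group : balance(files, 5)) {
      long total = group.stream().mapToLong(n -> Long.parseLong(n.substring(5))).sum();
      System.out.println(group + " total=" + total); // each group totals 42
    }
  }
}
```

Running this reproduces the grouping the test asserts: group 0 gets file-20, file-11, file-10, file-1, file-0 and every group's sizes sum to 42.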
/**
* Test ExportSnapshot
*/
private void testExportFileSystemState(final byte[] tableName, final byte[] snapshotName,
    int filesExpected) throws Exception {
  Path copyDir = TEST_UTIL.getDataTestDir("export-" + System.currentTimeMillis());
  URI hdfsUri = FileSystem.get(TEST_UTIL.getConfiguration()).getUri();
  FileSystem fs = FileSystem.get(copyDir.toUri(), new Configuration());
  copyDir = copyDir.makeQualified(fs);
  // Export Snapshot
  int res = ExportSnapshot.innerMain(TEST_UTIL.getConfiguration(), new String[] {
    "-snapshot", Bytes.toString(snapshotName),
    "-copy-to", copyDir.toString()
  });
  assertEquals(0, res);
  // Verify File-System state
  FileStatus[] rootFiles = fs.listStatus(copyDir);
  assertEquals(filesExpected, rootFiles.length);
  for (FileStatus fileStatus: rootFiles) {
    String name = fileStatus.getPath().getName();
    assertTrue(fileStatus.isDir());
    assertTrue(name.equals(HConstants.SNAPSHOT_DIR_NAME) || name.equals(".archive"));
  }
  // Compare the snapshot metadata and verify the hfiles
  final FileSystem hdfs = FileSystem.get(hdfsUri, TEST_UTIL.getConfiguration());
  final Path snapshotDir = new Path(HConstants.SNAPSHOT_DIR_NAME, Bytes.toString(snapshotName));
  verifySnapshot(hdfs, new Path(TEST_UTIL.getDefaultRootDirPath(), snapshotDir),
      fs, new Path(copyDir, snapshotDir));
  verifyArchive(fs, copyDir, tableName, Bytes.toString(snapshotName));
  FSUtils.logFileSystemState(hdfs, snapshotDir, LOG);
  // Remove the exported dir
  fs.delete(copyDir, true);
}
Developer: fengchen8086, Project: LCIndex-HBase-0.94.16, Code lines: 38, Source: TestExportSnapshot.java
Note: The org.apache.hadoop.hbase.snapshot.ExportSnapshot examples in this article are collected from open-source projects hosted on GitHub, MSDocs, and similar source-code and documentation platforms. Copyright of the source code belongs to the original authors; refer to each project's License before distributing or using the code, and do not reproduce without permission.
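The verification step in Example 4 checks that the export target contains only directories, and only the snapshot metadata directory and the archive directory. That layout check can be sketched with plain java.nio instead of Hadoop's FileSystem API. Note the directory names are assumptions here: the actual value behind HConstants.SNAPSHOT_DIR_NAME depends on the HBase version, and ".hbase-snapshot"/".archive" below are illustrative stand-ins.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Sketch of the post-export layout check from the test above, using java.nio.
public class ExportLayoutCheck {
  // Assumed names; the real test compares against HConstants.SNAPSHOT_DIR_NAME.
  static final Set<String> EXPECTED =
      new HashSet<>(Arrays.asList(".hbase-snapshot", ".archive"));

  /** True if every entry under copyDir is a directory with an expected name. */
  static boolean hasExpectedLayout(Path copyDir) throws IOException {
    try (DirectoryStream<Path> entries = Files.newDirectoryStream(copyDir)) {
      for (Path entry : entries) {
        if (!Files.isDirectory(entry)) {
          return false; // exports should contain only directories at the root
        }
        if (!EXPECTED.contains(entry.getFileName().toString())) {
          return false; // unexpected entry name
        }
      }
    }
    return true;
  }

  public static void main(String[] args) throws IOException {
    Path copyDir = Files.createTempDirectory("export-check");
    Files.createDirectory(copyDir.resolve(".hbase-snapshot"));
    Files.createDirectory(copyDir.resolve(".archive"));
    System.out.println(hasExpectedLayout(copyDir));
  }
}
```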