This article collects typical usage examples of the Java class org.apache.hadoop.hbase.regionserver.metrics.SchemaMetrics.BlockMetricType. If you are wondering what the BlockMetricType class is for, how to use it, or simply want to see it in context, the curated code samples below may help.
The BlockMetricType class belongs to the org.apache.hadoop.hbase.regionserver.metrics.SchemaMetrics package. Four code examples are shown below, ordered by popularity by default.
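As the examples below illustrate, BlockMetricType is an enum of per-block-category metric kinds (for example READ_COUNT and CACHE_HIT) that is combined with a SchemaMetrics instance to build a metric name and then read that metric's value from a metrics snapshot. The following is a minimal sketch of that pattern, not taken from the examples below; it assumes an HBase 0.94-era test setup, and TABLE and FAMILY are hypothetical placeholder constants.
// Minimal sketch: given a metrics snapshot taken before a read workload,
// compute how many DATA blocks were read since then. TABLE and FAMILY are
// hypothetical placeholders.
private long countDataBlocksRead(Map<String, Long> before) {
  Map<String, Long> diffs =
      SchemaMetrics.diffMetrics(before, SchemaMetrics.getMetricsSnapshot());
  SchemaMetrics schemaMetrics = SchemaMetrics.getInstance(TABLE, FAMILY);
  return SchemaMetrics.getLong(diffs,
      schemaMetrics.getBlockMetricName(BlockCategory.DATA, false,
          BlockMetricType.READ_COUNT));
}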
Example 1: verifyDataAndIndexBlockRead
import org.apache.hadoop.hbase.regionserver.metrics.SchemaMetrics.BlockMetricType; // import the required package/class
private void verifyDataAndIndexBlockRead(Map<String, Long> previousMetricSnapshot,
    SchemaMetrics schemaMetrics, long expectDataBlockRead, long expectedIndexBlockRead) {
  Map<String, Long> currentMetricsSnapshot = SchemaMetrics.getMetricsSnapshot();
  Map<String, Long> diffs =
      SchemaMetrics.diffMetrics(previousMetricSnapshot, currentMetricsSnapshot);
  long dataBlockRead = SchemaMetrics.getLong(diffs,
      schemaMetrics.getBlockMetricName(BlockCategory.DATA, false, BlockMetricType.READ_COUNT));
  long indexBlockRead = SchemaMetrics.getLong(diffs,
      schemaMetrics.getBlockMetricName(BlockCategory.INDEX, false, BlockMetricType.READ_COUNT));
  Assert.assertEquals(expectDataBlockRead, dataBlockRead);
  Assert.assertEquals(expectedIndexBlockRead, indexBlockRead);
}
Contributor: fengchen8086, Project: LCIndex-HBase-0.94.16, Lines of code: 15, Source file: TestBlocksScanned.java
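In the enclosing test, this helper is typically used by taking a metrics snapshot before a read workload and passing that snapshot in afterwards. A hedged sketch of such a call site follows; the scan in the middle and the expected counts are illustrative placeholders, not part of the quoted test.
// Snapshot, run reads, then verify the block-read deltas. TABLE, FAMILY,
// expectedDataBlocks and expectedIndexBlocks are hypothetical placeholders.
Map<String, Long> before = SchemaMetrics.getMetricsSnapshot();
// ... scan the table so that data and index blocks are actually read ...
verifyDataAndIndexBlockRead(before,
    SchemaMetrics.getInstance(TABLE, FAMILY),
    expectedDataBlocks, expectedIndexBlocks);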
Example 2: testScannerSelection
import org.apache.hadoop.hbase.regionserver.metrics.SchemaMetrics.BlockMetricType; // import the required package/class
@Test
public void testScannerSelection() throws IOException {
  Configuration conf = TEST_UTIL.getConfiguration();
  conf.setBoolean("hbase.store.delete.expired.storefile", false);
  HColumnDescriptor hcd =
      new HColumnDescriptor(FAMILY_BYTES)
          .setMaxVersions(Integer.MAX_VALUE)
          .setTimeToLive(TTL_SECONDS);
  HTableDescriptor htd = new HTableDescriptor(TABLE);
  htd.addFamily(hcd);
  HRegionInfo info = new HRegionInfo(Bytes.toBytes(TABLE));
  HRegion region =
      HRegion.createHRegion(info, TEST_UTIL.getClusterTestDir(),
          conf, htd);

  for (int iFile = 0; iFile < totalNumFiles; ++iFile) {
    if (iFile == NUM_EXPIRED_FILES) {
      Threads.sleepWithoutInterrupt(TTL_MS);
    }
    for (int iRow = 0; iRow < NUM_ROWS; ++iRow) {
      Put put = new Put(Bytes.toBytes("row" + iRow));
      for (int iCol = 0; iCol < NUM_COLS_PER_ROW; ++iCol) {
        put.add(FAMILY_BYTES, Bytes.toBytes("col" + iCol),
            Bytes.toBytes("value" + iFile + "_" + iRow + "_" + iCol));
      }
      region.put(put);
    }
    region.flushcache();
  }

  Scan scan = new Scan();
  scan.setMaxVersions(Integer.MAX_VALUE);
  CacheConfig cacheConf = new CacheConfig(conf);
  LruBlockCache cache = (LruBlockCache) cacheConf.getBlockCache();
  cache.clearCache();
  InternalScanner scanner = region.getScanner(scan);
  List<KeyValue> results = new ArrayList<KeyValue>();
  final int expectedKVsPerRow = numFreshFiles * NUM_COLS_PER_ROW;
  int numReturnedRows = 0;
  LOG.info("Scanning the entire table");
  while (scanner.next(results) || results.size() > 0) {
    assertEquals(expectedKVsPerRow, results.size());
    ++numReturnedRows;
    results.clear();
  }
  assertEquals(NUM_ROWS, numReturnedRows);
  Set<String> accessedFiles = cache.getCachedFileNamesForTest();
  LOG.debug("Files accessed during scan: " + accessedFiles);

  Map<String, Long> metricsBeforeCompaction =
      SchemaMetrics.getMetricsSnapshot();

  // Exercise both compaction codepaths.
  if (explicitCompaction) {
    region.getStore(FAMILY_BYTES).compactRecentForTesting(totalNumFiles);
  } else {
    region.compactStores();
  }

  SchemaMetrics.validateMetricChanges(metricsBeforeCompaction);
  Map<String, Long> compactionMetrics =
      SchemaMetrics.diffMetrics(metricsBeforeCompaction,
          SchemaMetrics.getMetricsSnapshot());
  long compactionDataBlocksRead = SchemaMetrics.getLong(
      compactionMetrics,
      SchemaMetrics.getInstance(TABLE, FAMILY).getBlockMetricName(
          BlockCategory.DATA, true, BlockMetricType.READ_COUNT));
  assertEquals("Invalid number of blocks accessed during compaction. " +
      "We only expect non-expired files to be accessed.",
      numFreshFiles, compactionDataBlocksRead);
  region.close();
}
Contributor: fengchen8086, Project: LCIndex-HBase-0.94.16, Lines of code: 74, Source file: TestScannerSelectionUsingTTL.java
Example 3: testScannerSelection
import org.apache.hadoop.hbase.regionserver.metrics.SchemaMetrics.BlockMetricType; // import the required package/class
@Test
public void testScannerSelection() throws IOException {
  Configuration conf = TEST_UTIL.getConfiguration();
  conf.setInt("hbase.hstore.compactionThreshold", 10000);
  HColumnDescriptor hcd = new HColumnDescriptor(FAMILY_BYTES).setBlockCacheEnabled(true)
      .setBloomFilterType(bloomType);
  HTableDescriptor htd = new HTableDescriptor(TABLE);
  htd.addFamily(hcd);
  HRegionInfo info = new HRegionInfo(Bytes.toBytes(TABLE));
  HRegion region = HRegion.createHRegion(info, TEST_UTIL.getClusterTestDir(), conf, htd);

  for (int iFile = 0; iFile < NUM_FILES; ++iFile) {
    for (int iRow = 0; iRow < NUM_ROWS; ++iRow) {
      Put put = new Put(Bytes.toBytes("row" + iRow));
      for (int iCol = 0; iCol < NUM_COLS_PER_ROW; ++iCol) {
        put.add(FAMILY_BYTES, Bytes.toBytes("col" + iCol),
            Bytes.toBytes("value" + iFile + "_" + iRow + "_" + iCol));
      }
      region.put(put);
    }
    region.flushcache();
  }

  Scan scan = new Scan(Bytes.toBytes("aaa"), Bytes.toBytes("aaz"));
  CacheConfig cacheConf = new CacheConfig(conf);
  LruBlockCache cache = (LruBlockCache) cacheConf.getBlockCache();
  cache.clearCache();
  Map<String, Long> metricsBefore = SchemaMetrics.getMetricsSnapshot();
  SchemaMetrics.validateMetricChanges(metricsBefore);
  InternalScanner scanner = region.getScanner(scan);
  List<KeyValue> results = new ArrayList<KeyValue>();
  while (scanner.next(results)) {
  }
  scanner.close();
  assertEquals(0, results.size());
  Set<String> accessedFiles = cache.getCachedFileNamesForTest();
  assertEquals(accessedFiles.size(), 0);
  //assertEquals(cache.getBlockCount(), 0);
  Map<String, Long> diffMetrics = SchemaMetrics.diffMetrics(metricsBefore,
      SchemaMetrics.getMetricsSnapshot());
  SchemaMetrics schemaMetrics = SchemaMetrics.getInstance(TABLE, FAMILY);
  long dataBlockRead = SchemaMetrics.getLong(diffMetrics,
      schemaMetrics.getBlockMetricName(BlockCategory.DATA, false, BlockMetricType.READ_COUNT));
  assertEquals(dataBlockRead, 0);
  region.close();
}
Contributor: fengchen8086, Project: LCIndex-HBase-0.94.16, Lines of code: 47, Source file: TestScannerSelectionUsingKeyRange.java
Example 4: getMetricName
import org.apache.hadoop.hbase.regionserver.metrics.SchemaMetrics.BlockMetricType; // import the required package/class
private String getMetricName(SchemaMetrics metrics, BlockCategory category) {
  String hitsMetricName =
      metrics.getBlockMetricName(category, SchemaMetrics.NO_COMPACTION,
          BlockMetricType.CACHE_HIT);
  return hitsMetricName;
}
Contributor: fengchen8086, Project: LCIndex-HBase-0.94.16, Lines of code: 7, Source file: TestForceCacheImportantBlocks.java
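The returned metric name can then be looked up in a metrics diff, in the same way the earlier examples read READ_COUNT deltas. The following is a hedged sketch of that follow-up step; the before snapshot and the TABLE/FAMILY constants are placeholders, not part of the quoted test.
// Build the CACHE_HIT metric name and read its delta from a metrics diff.
// TABLE, FAMILY and the earlier "before" snapshot are hypothetical placeholders.
String hitsMetricName = getMetricName(
    SchemaMetrics.getInstance(TABLE, FAMILY), BlockCategory.DATA);
Map<String, Long> diffs =
    SchemaMetrics.diffMetrics(before, SchemaMetrics.getMetricsSnapshot());
long cacheHits = SchemaMetrics.getLong(diffs, hitsMetricName);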
Note: The org.apache.hadoop.hbase.regionserver.metrics.SchemaMetrics.BlockMetricType examples in this article were collected from source-code and documentation platforms such as GitHub and MSDocs. The snippets are taken from open-source projects; copyright remains with the original authors, and distribution and use are subject to each project's license. Do not reproduce without permission.