
Java SchemaMetrics Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.hbase.regionserver.metrics.SchemaMetrics. If you have been wondering what the SchemaMetrics class does, or how and where to use it, the curated code examples below should help.



The SchemaMetrics class belongs to the org.apache.hadoop.hbase.regionserver.metrics package. 18 code examples of the class are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your ratings help the system recommend better Java code examples.

Example 1: HFileWriterV2

import org.apache.hadoop.hbase.regionserver.metrics.SchemaMetrics; // import the required package/class
/** Constructor that takes a path, creates and closes the output stream. */
public HFileWriterV2(Configuration conf, CacheConfig cacheConf,
    FileSystem fs, Path path, FSDataOutputStream ostream, int blockSize,
    Compression.Algorithm compressAlgo, HFileDataBlockEncoder blockEncoder,
    final KeyComparator comparator, final ChecksumType checksumType,
    final int bytesPerChecksum, boolean includeMVCCReadpoint) throws IOException {
  super(cacheConf,
      ostream == null ? createOutputStream(conf, fs, path) : ostream,
      path, blockSize, compressAlgo, blockEncoder, comparator);
  SchemaMetrics.configureGlobally(conf);
  this.checksumType = checksumType;
  this.bytesPerChecksum = bytesPerChecksum;
  this.includeMemstoreTS = includeMVCCReadpoint;
  if (!conf.getBoolean(HConstants.HBASE_CHECKSUM_VERIFICATION, false)) {
    this.minorVersion = 0;
  }
  finishInit(conf);
}
 
Developer: fengchen8086 | Project: LCIndex-HBase-0.94.16 | Lines of code: 19 | Source: HFileWriterV2.java


Example 2: assertTimeVaryingMetricCount

import org.apache.hadoop.hbase.regionserver.metrics.SchemaMetrics; // import the required package/class
private void assertTimeVaryingMetricCount(int expectedCount, String table, String cf,
    String regionName, String metricPrefix) {

  Integer expectedCountInteger = Integer.valueOf(expectedCount);

  if (cf != null) {
    String cfKey =
        SchemaMetrics.TABLE_PREFIX + table + "." + SchemaMetrics.CF_PREFIX + cf + "."
            + metricPrefix;
    Pair<Long, Integer> cfPair = RegionMetricsStorage.getTimeVaryingMetric(cfKey);
    assertEquals(expectedCountInteger, cfPair.getSecond());
  }

  if (regionName != null) {
    String rKey =
        SchemaMetrics.TABLE_PREFIX + table + "." + SchemaMetrics.REGION_PREFIX + regionName + "."
            + metricPrefix;

    Pair<Long, Integer> regionPair = RegionMetricsStorage.getTimeVaryingMetric(rKey);
    assertEquals(expectedCountInteger, regionPair.getSecond());
  }
}
 
Developer: fengchen8086 | Project: LCIndex-HBase-0.94.16 | Lines of code: 23 | Source: TestRegionServerMetrics.java
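Example 2 builds the per-CF and per-region metric keys by plain string concatenation. The following self-contained sketch mirrors that key scheme; the prefix literals (`tbl.`, `cf.`, `region.`) are assumptions inferred from these examples, not values verified against the HBase 0.94 source:

```java
// Sketch of the metric-key scheme used in assertTimeVaryingMetricCount.
// The prefix literals below are assumptions for illustration only.
public class MetricKeySketch {
    static final String TABLE_PREFIX = "tbl.";     // assumed value of SchemaMetrics.TABLE_PREFIX
    static final String CF_PREFIX = "cf.";         // assumed value of SchemaMetrics.CF_PREFIX
    static final String REGION_PREFIX = "region."; // assumed value of SchemaMetrics.REGION_PREFIX

    // Per-column-family key: tbl.<table>.cf.<cf>.<metric>
    static String cfKey(String table, String cf, String metric) {
        return TABLE_PREFIX + table + "." + CF_PREFIX + cf + "." + metric;
    }

    // Per-region key: tbl.<table>.region.<region>.<metric>
    static String regionKey(String table, String region, String metric) {
        return TABLE_PREFIX + table + "." + REGION_PREFIX + region + "." + metric;
    }

    public static void main(String[] args) {
        System.out.println(cfKey("t1", "f1", "flushTime"));
        System.out.println(regionKey("t1", "abc123", "flushTime"));
    }
}
```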


Example 3: assertSizeMetric

import org.apache.hadoop.hbase.regionserver.metrics.SchemaMetrics; // import the required package/class
private void assertSizeMetric(String table, String[] cfs, int[] metrics) {
  // we have getsize & nextsize for each column family
  assertEquals(cfs.length * 2, metrics.length);

  for (int i = 0; i < cfs.length; ++i) {
    String prefix = SchemaMetrics.generateSchemaMetricsPrefix(table, cfs[i]);
    String getMetric = prefix + SchemaMetrics.METRIC_GETSIZE;
    String nextMetric = prefix + SchemaMetrics.METRIC_NEXTSIZE;

    // verify getsize and nextsize matches
    int getSize = RegionMetricsStorage.getNumericMetrics().containsKey(getMetric) ?
        RegionMetricsStorage.getNumericMetrics().get(getMetric).intValue() : 0;
    int nextSize = RegionMetricsStorage.getNumericMetrics().containsKey(nextMetric) ?
        RegionMetricsStorage.getNumericMetrics().get(nextMetric).intValue() : 0;

    assertEquals(metrics[i], getSize);
    assertEquals(metrics[cfs.length + i], nextSize);
  }
}
 
Developer: fengchen8086 | Project: LCIndex-HBase-0.94.16 | Lines of code: 20 | Source: TestRegionServerMetrics.java


Example 4: StoreFile

import org.apache.hadoop.hbase.regionserver.metrics.SchemaMetrics; // import the required package/class
/**
 * Constructor, loads a reader and its indices, etc. May allocate a substantial amount of RAM
 * depending on the underlying files (10-20MB?).
 * @param fs The current file system to use.
 * @param p The path of the file.
 * @param blockcache <code>true</code> if the block cache is enabled.
 * @param conf The current configuration.
 * @param cacheConf The cache configuration and block cache reference.
 * @param cfBloomType The bloom type to use for this store file as specified by column family
 *          configuration. This may or may not be the same as the Bloom filter type actually
 *          present in the HFile, because column family configuration might change. If this is
 *          {@link BloomType#NONE}, the existing Bloom filter is ignored.
 * @param dataBlockEncoder data block encoding algorithm.
 * @throws IOException When opening the reader fails.
 */
public StoreFile(final FileSystem fs, final Path p, final Configuration conf,
    final CacheConfig cacheConf, final BloomType cfBloomType,
    final HFileDataBlockEncoder dataBlockEncoder) throws IOException {
  this.fs = fs;
  this.path = p;
  this.cacheConf = cacheConf;
  this.dataBlockEncoder =
      dataBlockEncoder == null ? NoOpDataBlockEncoder.INSTANCE : dataBlockEncoder;
  if (BloomFilterFactory.isGeneralBloomEnabled(conf)) {
    this.cfBloomType = cfBloomType;
  } else {
    LOG.info("Ignoring bloom filter check for file " + path + ": " + "cfBloomType=" + cfBloomType
        + " (disabled in config)");
    this.cfBloomType = BloomType.NONE;
  }

  // cache the modification time stamp of this store file
  FileStatus[] stats = FSUtils.listStatus(fs, p, null);
  if (stats != null && stats.length == 1) {
    this.modificationTimeStamp = stats[0].getModificationTime();
  } else {
    this.modificationTimeStamp = 0;
  }
  SchemaMetrics.configureGlobally(conf);
  initPossibleIndexesAndReference(fs, p, conf);
}
 
Developer: fengchen8086 | Project: LCIndex-HBase-0.94.16 | Lines of code: 42 | Source: StoreFile.java


Example 5: initializeMetricNames

import org.apache.hadoop.hbase.regionserver.metrics.SchemaMetrics; // import the required package/class
/**
 * Method used internally to initialize metric names throughout the constructors. To be called
 * after the store variable has been initialized!
 */
private void initializeMetricNames() {
  String tableName = SchemaMetrics.UNKNOWN;
  String family = SchemaMetrics.UNKNOWN;
  if (store != null) {
    tableName = store.getTableName();
    family = Bytes.toString(store.getFamily().getName());
  }
  this.metricNamePrefix = SchemaMetrics.generateSchemaMetricsPrefix(tableName, family);
}
 
Developer: fengchen8086 | Project: LCIndex-HBase-0.94.16 | Lines of code: 14 | Source: StoreScanner.java


Example 6: HFileWriterV1

import org.apache.hadoop.hbase.regionserver.metrics.SchemaMetrics; // import the required package/class
/** Constructor that takes a path, creates and closes the output stream. */
public HFileWriterV1(Configuration conf, CacheConfig cacheConf,
    FileSystem fs, Path path, FSDataOutputStream ostream,
    int blockSize, Compression.Algorithm compress,
    HFileDataBlockEncoder blockEncoder,
    final KeyComparator comparator) throws IOException {
  super(cacheConf, ostream == null ? createOutputStream(conf, fs, path) : ostream, path,
      blockSize, compress, blockEncoder, comparator);
  SchemaMetrics.configureGlobally(conf);
}
 
Developer: fengchen8086 | Project: LCIndex-HBase-0.94.16 | Lines of code: 11 | Source: HFileWriterV1.java


Example 7: getWriterFactory

import org.apache.hadoop.hbase.regionserver.metrics.SchemaMetrics; // import the required package/class
/**
 * Returns the factory to be used to create {@link HFile} writers
 */
public static final WriterFactory getWriterFactory(Configuration conf, CacheConfig cacheConf) {
  SchemaMetrics.configureGlobally(conf);
  int version = getFormatVersion(conf);
  switch (version) {
  case 1:
    return new HFileWriterV1.WriterFactoryV1(conf, cacheConf);
  case 2:
    return new HFileWriterV2.WriterFactoryV2(conf, cacheConf);
  default:
    throw new IllegalArgumentException("Cannot create writer for HFile " + "format version "
        + version);
  }
}
 
Developer: fengchen8086 | Project: LCIndex-HBase-0.94.16 | Lines of code: 17 | Source: HFile.java
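The factory method above dispatches on the configured HFile format version and rejects anything it does not recognize. A stripped-down sketch of that dispatch (placeholder strings stand in for the real WriterFactoryV1/WriterFactoryV2 instances, which require a live Configuration and CacheConfig):

```java
// Version-dispatch sketch mirroring the switch in HFile.getWriterFactory;
// the returned strings are placeholders for the real factory instances.
public class WriterFactorySketch {
    static String getWriterFactory(int version) {
        switch (version) {
        case 1:
            return "WriterFactoryV1"; // stands in for new HFileWriterV1.WriterFactoryV1(conf, cacheConf)
        case 2:
            return "WriterFactoryV2"; // stands in for new HFileWriterV2.WriterFactoryV2(conf, cacheConf)
        default:
            throw new IllegalArgumentException(
                "Cannot create writer for HFile format version " + version);
        }
    }

    public static void main(String[] args) {
        System.out.println(getWriterFactory(2)); // prints "WriterFactoryV2"
    }
}
```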


Example 8: updateSizeMetrics

import org.apache.hadoop.hbase.regionserver.metrics.SchemaMetrics; // import the required package/class
/**
 * Helper function that updates the local size counter and also updates any
 * per-CF or per-blocktype metrics it can discern from the given
 * {@link CachedBlock}.
 *
 * @param cb the cached block whose heap size should be counted
 * @param evict true if the block is being evicted, false if it is being cached
 */
protected long updateSizeMetrics(CachedBlock cb, boolean evict) {
  long heapsize = cb.heapSize();
  if (evict) {
    heapsize *= -1;
  }
  Cacheable cachedBlock = cb.getBuffer();
  SchemaMetrics schemaMetrics = cachedBlock.getSchemaMetrics();
  if (schemaMetrics != null) {
    schemaMetrics.updateOnCachePutOrEvict(
        cachedBlock.getBlockType().getCategory(), heapsize, evict);
  }
  return size.addAndGet(heapsize);
}
 
Developer: fengchen8086 | Project: LCIndex-HBase-0.94.16 | Lines of code: 22 | Source: LruBlockCache.java
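The helper negates the block's heap size on eviction so that a single atomic counter tracks the net bytes held by the cache. A self-contained sketch of that accounting, with hypothetical names in place of the HBase types:

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of the cache-size accounting in updateSizeMetrics: caching a block
// adds its heap size to the counter, evicting it subtracts the same amount.
public class CacheSizeSketch {
    private final AtomicLong size = new AtomicLong();

    // Returns the cache's net heap size after applying the update.
    long update(long heapSize, boolean evict) {
        long delta = evict ? -heapSize : heapSize;
        return size.addAndGet(delta);
    }

    public static void main(String[] args) {
        CacheSizeSketch cache = new CacheSizeSketch();
        cache.update(1024, false);              // cache a 1 KB block
        long after = cache.update(1024, true);  // evict it again
        System.out.println(after);              // prints 0
    }
}
```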


Example 9: testStatusSettingToAbortIfAnyExceptionDuringRegionInitilization

import org.apache.hadoop.hbase.regionserver.metrics.SchemaMetrics; // import the required package/class
/**
 * Test case verifying that the region initialization task state is set to ABORTED
 * if any exception occurs during initialization
 * 
 * @throws Exception
 */
@Test
public void testStatusSettingToAbortIfAnyExceptionDuringRegionInitilization() throws Exception {
  HRegionInfo info = null;
  try {
    FileSystem fs = Mockito.mock(FileSystem.class);
    Mockito.when(fs.exists((Path) Mockito.anyObject())).thenThrow(new IOException());
    HTableDescriptor htd = new HTableDescriptor(tableName);
    htd.addFamily(new HColumnDescriptor("cf"));
    info = new HRegionInfo(htd.getName(), HConstants.EMPTY_BYTE_ARRAY,
        HConstants.EMPTY_BYTE_ARRAY, false);
    Path path = new Path(DIR + "testStatusSettingToAbortIfAnyExceptionDuringRegionInitilization");
    // no where we are instantiating HStore in this test case so useTableNameGlobally is null. To
    // avoid NullPointerException we are setting useTableNameGlobally to false.
    SchemaMetrics.setUseTableNameInTest(false);
    region = HRegion.newHRegion(path, null, fs, conf, info, htd, null);
    // region initialization throws IOException and set task state to ABORTED.
    region.initialize();
    fail("Region initialization should fail due to IOException");
  } catch (IOException io) {
    List<MonitoredTask> tasks = TaskMonitor.get().getTasks();
    for (MonitoredTask monitoredTask : tasks) {
      if (!(monitoredTask instanceof MonitoredRPCHandler)
          && monitoredTask.getDescription().contains(region.toString())) {
        assertTrue("Region state should be ABORTED.",
            monitoredTask.getState().equals(MonitoredTask.State.ABORTED));
        break;
      }
    }
  } finally {
    HRegion.closeHRegion(region);
  }
}
 
Developer: fengchen8086 | Project: LCIndex-HBase-0.94.16 | Lines of code: 39 | Source: TestHRegion.java


Example 10: assertStoreMetricEquals

import org.apache.hadoop.hbase.regionserver.metrics.SchemaMetrics; // import the required package/class
private void assertStoreMetricEquals(long expected,
    SchemaMetrics schemaMetrics, StoreMetricType storeMetricType) {
  final String storeMetricName =
      schemaMetrics.getStoreMetricName(storeMetricType);
  Long startValue = startingMetrics.get(storeMetricName);
  assertEquals("Invalid value for store metric " + storeMetricName
      + " (type " + storeMetricType + ")", expected,
      RegionMetricsStorage.getNumericMetric(storeMetricName)
          - (startValue != null ? startValue : 0));
}
 
Developer: fengchen8086 | Project: LCIndex-HBase-0.94.16 | Lines of code: 11 | Source: TestRegionServerMetrics.java


Example 11: testMultipleRegions

import org.apache.hadoop.hbase.regionserver.metrics.SchemaMetrics; // import the required package/class
@Test
public void testMultipleRegions() throws IOException, InterruptedException {

  TEST_UTIL.createRandomTable(
      TABLE_NAME,
      Arrays.asList(FAMILIES),
      MAX_VERSIONS, NUM_COLS_PER_ROW, NUM_FLUSHES, NUM_REGIONS, 1000);

  final HRegionServer rs =
      TEST_UTIL.getMiniHBaseCluster().getRegionServer(0);

  assertEquals(NUM_REGIONS + META_AND_ROOT, rs.getOnlineRegions().size());

  rs.doMetrics();
  for (HRegion r : TEST_UTIL.getMiniHBaseCluster().getRegions(
      Bytes.toBytes(TABLE_NAME))) {
    for (Map.Entry<byte[], Store> storeEntry : r.getStores().entrySet()) {
      LOG.info("For region " + r.getRegionNameAsString() + ", CF " +
          Bytes.toStringBinary(storeEntry.getKey()) + " found store files " +
          ": " + storeEntry.getValue().getStorefiles());
    }
  }

  assertStoreMetricEquals(NUM_FLUSHES * NUM_REGIONS * FAMILIES.length
      + META_AND_ROOT, ALL_METRICS, StoreMetricType.STORE_FILE_COUNT);

  for (String cf : FAMILIES) {
    SchemaMetrics schemaMetrics = SchemaMetrics.getInstance(TABLE_NAME, cf);
    assertStoreMetricEquals(NUM_FLUSHES * NUM_REGIONS, schemaMetrics,
        StoreMetricType.STORE_FILE_COUNT);
  }

  // ensure that the max value is also maintained
  final String storeMetricName = ALL_METRICS
      .getStoreMetricNameMax(StoreMetricType.STORE_FILE_COUNT);
  assertEquals("Invalid value for store metric " + storeMetricName,
      NUM_FLUSHES, RegionMetricsStorage.getNumericMetric(storeMetricName));
}
 
Developer: fengchen8086 | Project: LCIndex-HBase-0.94.16 | Lines of code: 39 | Source: TestRegionServerMetrics.java


Example 12: _testBlocksScanned

import org.apache.hadoop.hbase.regionserver.metrics.SchemaMetrics; // import the required package/class
private void _testBlocksScanned(HTableDescriptor table) throws Exception {
  HRegion r = createNewHRegion(table, START_KEY, END_KEY,
      TEST_UTIL.getConfiguration());
  addContent(r, FAMILY, COL);
  r.flushcache();

  // Get the per-cf metrics
  SchemaMetrics schemaMetrics =
    SchemaMetrics.getInstance(Bytes.toString(table.getName()), Bytes.toString(FAMILY));
  Map<String, Long> schemaMetricSnapshot = SchemaMetrics.getMetricsSnapshot();

  // Do simple test of getting one row only first.
  Scan scan = new Scan(Bytes.toBytes("aaa"), Bytes.toBytes("aaz"));
  scan.addColumn(FAMILY, COL);
  scan.setMaxVersions(1);

  InternalScanner s = r.getScanner(scan);
  List<KeyValue> results = new ArrayList<KeyValue>();
  while (s.next(results));
  s.close();

  int expectResultSize = 'z' - 'a';
  Assert.assertEquals(expectResultSize, results.size());

  int kvPerBlock = (int) Math.ceil(BLOCK_SIZE / (double) results.get(0).getLength());
  Assert.assertEquals(2, kvPerBlock);

  long expectDataBlockRead = (long) Math.ceil(expectResultSize / (double) kvPerBlock);
  long expectIndexBlockRead = expectDataBlockRead;

  verifyDataAndIndexBlockRead(schemaMetricSnapshot, schemaMetrics,
      expectDataBlockRead, expectIndexBlockRead);
}
 
Developer: fengchen8086 | Project: LCIndex-HBase-0.94.16 | Lines of code: 34 | Source: TestBlocksScanned.java
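The expected block counts in this test come from simple ceiling arithmetic: KVs per block from the block size and KV length, and data blocks from the result count. A self-contained check of that arithmetic, using the values the test implies (26 results, 2 KVs per block):

```java
// Arithmetic behind the expectations in _testBlocksScanned.
public class BlocksScannedMath {
    // How many KeyValues fit in one block, rounded up.
    static int kvPerBlock(int blockSize, int kvLength) {
        return (int) Math.ceil(blockSize / (double) kvLength);
    }

    // How many data blocks must be read to cover all results, rounded up.
    static long expectedDataBlocks(int resultSize, int kvPerBlock) {
        return (long) Math.ceil(resultSize / (double) kvPerBlock);
    }

    public static void main(String[] args) {
        int results = 'z' - 'a';  // 26, as in the test
        int perBlock = 2;         // the test asserts 2 KVs per block
        System.out.println(expectedDataBlocks(results, perBlock)); // prints 13
    }
}
```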


Example 13: verifyDataAndIndexBlockRead

import org.apache.hadoop.hbase.regionserver.metrics.SchemaMetrics; // import the required package/class
private void verifyDataAndIndexBlockRead(Map<String, Long> previousMetricSnapshot,
    SchemaMetrics schemaMetrics, long expectDataBlockRead, long expectedIndexBlockRead){
  Map<String, Long> currentMetricsSnapshot = SchemaMetrics.getMetricsSnapshot();
  Map<String, Long> diffs =
    SchemaMetrics.diffMetrics(previousMetricSnapshot, currentMetricsSnapshot);

  long dataBlockRead = SchemaMetrics.getLong(diffs,
      schemaMetrics.getBlockMetricName(BlockCategory.DATA, false, BlockMetricType.READ_COUNT));
  long indexBlockRead = SchemaMetrics.getLong(diffs,
      schemaMetrics.getBlockMetricName(BlockCategory.INDEX, false, BlockMetricType.READ_COUNT));

  Assert.assertEquals(expectDataBlockRead, dataBlockRead);
  Assert.assertEquals(expectedIndexBlockRead, indexBlockRead);
}
 
Developer: fengchen8086 | Project: LCIndex-HBase-0.94.16 | Lines of code: 15 | Source: TestBlocksScanned.java
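Example 13 depends on SchemaMetrics.diffMetrics to compare two metric snapshots. The following is a hypothetical stand-in over plain maps, not the HBase implementation, showing the kind of delta computation involved:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of diffing two metric snapshots: for each key in the current
// snapshot, report how much the value grew since the previous snapshot.
public class SnapshotDiffSketch {
    static Map<String, Long> diff(Map<String, Long> before, Map<String, Long> after) {
        Map<String, Long> out = new HashMap<>();
        for (Map.Entry<String, Long> e : after.entrySet()) {
            long prev = before.getOrDefault(e.getKey(), 0L);
            long delta = e.getValue() - prev;
            if (delta != 0) {
                out.put(e.getKey(), delta);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, Long> before = new HashMap<>();
        before.put("data.readCount", 10L);
        Map<String, Long> after = new HashMap<>();
        after.put("data.readCount", 23L);
        after.put("index.readCount", 13L);
        System.out.println(diff(before, after)); // deltas: data +13, index +13
    }
}
```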


Example 14: setUp

import org.apache.hadoop.hbase.regionserver.metrics.SchemaMetrics; // import the required package/class
@Override
public void setUp() throws Exception {
  super.setUp();
  this.mvcc = new MultiVersionConsistencyControl();
  this.memstore = new MemStore();
  SchemaMetrics.setUseTableNameInTest(false);
}
 
Developer: fengchen8086 | Project: LCIndex-HBase-0.94.16 | Lines of code: 8 | Source: TestMemStore.java


Example 15: setUp

import org.apache.hadoop.hbase.regionserver.metrics.SchemaMetrics; // import the required package/class
@Before
public void setUp() throws IOException {
  startingMetrics = SchemaMetrics.getMetricsSnapshot();
  conf = TEST_UTIL.getConfiguration();
  fs = FileSystem.get(conf);
  SchemaMetrics.configureGlobally(conf);
}
 
Developer: fengchen8086 | Project: LCIndex-HBase-0.94.16 | Lines of code: 8 | Source: TestHFileReaderV1.java


Example 16: CachedBlock

import org.apache.hadoop.hbase.regionserver.metrics.SchemaMetrics; // import the required package/class
public CachedBlock(final long heapSize, String name, long accessTime) {
  super(new BlockCacheKey(name, 0),
      new Cacheable() {
        @Override
        public long heapSize() {
          return ((int)(heapSize - CachedBlock.PER_BLOCK_OVERHEAD));
        }

        @Override
        public int getSerializedLength() {
          return 0;
        }

        @Override
        public void serialize(ByteBuffer destination) {
        }

        @Override
        public CacheableDeserializer<Cacheable> getDeserializer() {
          // TODO Auto-generated method stub
          return null;
        }

        @Override
        public BlockType getBlockType() {
          return BlockType.DATA;
        }

        @Override
        public SchemaMetrics getSchemaMetrics() {
          return SchemaMetrics.ALL_SCHEMA_METRICS;
        }
      }, accessTime, false);
}
 
Developer: fengchen8086 | Project: LCIndex-HBase-0.94.16 | Lines of code: 35 | Source: TestCachedBlockQueue.java


Example 17: initializeMetricNames

import org.apache.hadoop.hbase.regionserver.metrics.SchemaMetrics; // import the required package/class
/**
 * Method used internally to initialize metric names throughout the
 * constructors.
 *
 * To be called after the store variable has been initialized!
 */
private void initializeMetricNames() {
  String tableName = SchemaMetrics.UNKNOWN;
  String family = SchemaMetrics.UNKNOWN;
  if (store != null) {
    tableName = store.getTableName();
    family = Bytes.toString(store.getFamily().getName());
  }
  this.metricNamePrefix =
      SchemaMetrics.generateSchemaMetricsPrefix(tableName, family);
}
 
Developer: wanhao | Project: IRIndex | Lines of code: 17 | Source: StoreScanner.java


Example 18: getWriterFactory

import org.apache.hadoop.hbase.regionserver.metrics.SchemaMetrics; // import the required package/class
/**
 * Returns the factory to be used to create {@link HFile} writers
 */
public static final WriterFactory getWriterFactory(Configuration conf,
    CacheConfig cacheConf) {
  SchemaMetrics.configureGlobally(conf);
  int version = getFormatVersion(conf);
  switch (version) {
  case 1:
    return new HFileWriterV1.WriterFactoryV1(conf, cacheConf);
  case 2:
    return new HFileWriterV2.WriterFactoryV2(conf, cacheConf);
  default:
    throw new IllegalArgumentException("Cannot create writer for HFile " +
        "format version " + version);
  }
}
 
Developer: wanhao | Project: IRIndex | Lines of code: 18 | Source: HFile.java



Note: The org.apache.hadoop.hbase.regionserver.metrics.SchemaMetrics class examples in this article are collected from open-source projects hosted on GitHub, MSDocs, and similar source-code and documentation platforms. The snippets were selected from open-source projects contributed by various developers; copyright remains with the original authors. Please consult each project's license before distributing or using the code, and do not republish without permission.

