
Java BucketEntry Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.BucketEntry. If you are wondering what the BucketEntry class is for, how to use it, or where to find usage examples, the selected code examples below may help.



The BucketEntry class belongs to the org.apache.hadoop.hbase.io.hfile.bucket.BucketCache package. Twelve code examples of the class are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your votes help the system recommend better Java code examples.

Example 1: CachedEntryQueue

import org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.BucketEntry; // import the dependent package/class
/**
 * @param maxSize the target size of elements in the queue
 * @param blockSize expected average size of blocks
 */
public CachedEntryQueue(long maxSize, long blockSize) {
  int initialSize = (int) (maxSize / blockSize);
  if (initialSize == 0) {
    initialSize++;
  }
  queue = MinMaxPriorityQueue.orderedBy(new Comparator<Map.Entry<BlockCacheKey, BucketEntry>>() {

    public int compare(Entry<BlockCacheKey, BucketEntry> entry1,
        Entry<BlockCacheKey, BucketEntry> entry2) {
      return BucketEntry.COMPARATOR.compare(entry1.getValue(), entry2.getValue());
    }

  }).expectedSize(initialSize).create();
  cacheSize = 0;
  this.maxSize = maxSize;
}
 
Developer: fengchen8086, Project: ditb, Lines of code: 21, Source file: CachedEntryQueue.java
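
The constructor above derives the queue's expected size from the byte budget divided by the average block size, bumping the result to at least 1. A quick worked example with assumed figures (illustrative values, not defaults from the source):

public class InitialSizeSketch {
  public static void main(String[] args) {
    // Assumed figures for illustration only.
    long maxSize = 1L * 1024 * 1024;   // collect roughly 1 MB worth of entries
    long blockSize = 64 * 1024;        // ~64 KB average block

    int initialSize = (int) (maxSize / blockSize);  // 16 expected entries
    if (initialSize == 0) {
      initialSize++;                   // guard for maxSize smaller than one block
    }
    System.out.println("expectedSize passed to the Guava builder: " + initialSize);
  }
}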


Example 2: add

import org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.BucketEntry; // import the dependent package/class
/**
 * Attempt to add the specified entry to this queue.
 * <p>
 * If the queue is smaller than the max size, or if the specified element is
 * ordered after the smallest element in the queue, the element will be added
 * to the queue. Otherwise, there is no side effect of this call.
 * @param entry a bucket entry with key to try to add to the queue
 */
public void add(Map.Entry<BlockCacheKey, BucketEntry> entry) {
  if (cacheSize < maxSize) {
    queue.add(entry);
    cacheSize += entry.getValue().getLength();
  } else {
    BucketEntry head = queue.peek().getValue();
    if (BucketEntry.COMPARATOR.compare(entry.getValue(), head) > 0) {
      cacheSize += entry.getValue().getLength();
      cacheSize -= head.getLength();
      if (cacheSize > maxSize) {
        queue.poll();
      } else {
        cacheSize += head.getLength();
      }
      queue.add(entry);
    }
  }
}
 
Developer: fengchen8086, Project: ditb, Lines of code: 27, Source file: CachedEntryQueue.java
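
The bookkeeping in add() above is easy to misread: once the byte budget is reached, the new entry's length is charged and the current head's length is credited tentatively, and the head is only really evicted if the budget would still be exceeded; otherwise both entries stay. A standalone sketch of the same pattern with a plain JDK PriorityQueue and made-up priority/length fields (not HBase classes) follows.

import java.util.PriorityQueue;

// Illustrative sketch only (plain JDK types, invented fields); mirrors the
// size bookkeeping of CachedEntryQueue.add() shown above.
public class SizeBoundedQueueSketch {

  static class Entry {
    final long priority;  // stands in for the BucketEntry ordering (e.g. access time)
    final long length;    // stands in for BucketEntry.getLength()
    Entry(long priority, long length) { this.priority = priority; this.length = length; }
  }

  private final PriorityQueue<Entry> queue =
      new PriorityQueue<>((a, b) -> Long.compare(a.priority, b.priority));
  private long cacheSize = 0;
  private final long maxSize;

  SizeBoundedQueueSketch(long maxSize) { this.maxSize = maxSize; }

  void add(Entry entry) {
    if (cacheSize < maxSize) {
      queue.add(entry);
      cacheSize += entry.length;
    } else {
      Entry head = queue.peek();            // smallest element kept so far
      if (entry.priority > head.priority) { // new entry orders after the head
        cacheSize += entry.length;          // charge the new entry...
        cacheSize -= head.length;           // ...and tentatively credit the head
        if (cacheSize > maxSize) {
          queue.poll();                     // still over budget: really drop the head
        } else {
          cacheSize += head.length;         // fits anyway: keep the head too
        }
        queue.add(entry);
      }
    }
  }

  public static void main(String[] args) {
    SizeBoundedQueueSketch q = new SizeBoundedQueueSketch(100);
    q.add(new Entry(1, 60));  // under budget: cacheSize = 60
    q.add(new Entry(2, 50));  // 60 < 100, so added: cacheSize = 110
    q.add(new Entry(3, 30));  // over budget; 110 + 30 - 60 = 80 <= 100, head kept: cacheSize = 140
    System.out.println("bytes tracked: " + q.cacheSize);
  }
}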


Example 3: testCacheFullException

import org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.BucketEntry; // import the dependent package/class
/**
 * Do Cache full exception
 * @throws IOException
 * @throws InterruptedException
 */
@Test (timeout=30000)
public void testCacheFullException()
    throws IOException, InterruptedException {
  this.bc.cacheBlock(this.plainKey, plainCacheable);
  RAMQueueEntry rqe = q.remove();
  RAMQueueEntry spiedRqe = Mockito.spy(rqe);
  final CacheFullException cfe = new CacheFullException(0, 0);
  BucketEntry mockedBucketEntry = Mockito.mock(BucketEntry.class);
  Mockito.doThrow(cfe).
    doReturn(mockedBucketEntry).
    when(spiedRqe).writeToCache((IOEngine)Mockito.any(), (BucketAllocator)Mockito.any(),
      (UniqueIndexMap<Integer>)Mockito.any(), (AtomicLong)Mockito.any());
  this.q.add(spiedRqe);
  doDrainOfOneEntry(bc, wt, q);
}
 
Developer: fengchen8086, Project: ditb, Lines of code: 21, Source file: TestBucketWriterThread.java
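
The key device in this test is Mockito's chained doThrow(...).doReturn(...) stubbing: the first call to writeToCache throws CacheFullException, the second returns the mocked BucketEntry, so the writer thread is forced through exactly one failure before succeeding. A minimal, generic illustration of that stubbing pattern (assumes mockito-core on the classpath; the list and values are invented for demonstration):

import static org.mockito.Mockito.anyInt;
import static org.mockito.Mockito.doThrow;
import static org.mockito.Mockito.spy;

import java.util.ArrayList;
import java.util.List;

// First invocation of the stubbed method throws, the second returns a value.
public class ChainedStubbingSketch {
  public static void main(String[] args) {
    List<String> spied = spy(new ArrayList<String>());

    doThrow(new RuntimeException("first call fails"))
        .doReturn("recovered")
        .when(spied).get(anyInt());

    try {
      spied.get(0);                      // first invocation: throws the stubbed exception
    } catch (RuntimeException expected) {
      System.out.println("caught: " + expected.getMessage());
    }
    System.out.println(spied.get(0));    // second invocation: returns "recovered"
  }
}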


Example 4: CachedEntryQueue

import org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.BucketEntry; // import the dependent package/class
/**
 * @param maxSize the target size of elements in the queue
 * @param blockSize expected average size of blocks
 */
public CachedEntryQueue(long maxSize, long blockSize) {
  int initialSize = (int) (maxSize / blockSize);
  if (initialSize == 0)
    initialSize++;
  queue = MinMaxPriorityQueue
      .orderedBy(new Comparator<Map.Entry<BlockCacheKey, BucketEntry>>() {
        public int compare(Entry<BlockCacheKey, BucketEntry> entry1,
            Entry<BlockCacheKey, BucketEntry> entry2) {
          return entry1.getValue().compareTo(entry2.getValue());
        }

      }).expectedSize(initialSize).create();
  cacheSize = 0;
  this.maxSize = maxSize;
}
 
Developer: grokcoder, Project: pbase, Lines of code: 20, Source file: CachedEntryQueue.java


Example 5: add

import org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.BucketEntry; // import the dependent package/class
/**
 * Attempt to add the specified entry to this queue.
 * 
 * <p>
 * If the queue is smaller than the max size, or if the specified element is
 * ordered after the smallest element in the queue, the element will be added
 * to the queue. Otherwise, there is no side effect of this call.
 * @param entry a bucket entry with key to try to add to the queue
 */
public void add(Map.Entry<BlockCacheKey, BucketEntry> entry) {
  if (cacheSize < maxSize) {
    queue.add(entry);
    cacheSize += entry.getValue().getLength();
  } else {
    BucketEntry head = queue.peek().getValue();
    if (entry.getValue().compareTo(head) > 0) {
      cacheSize += entry.getValue().getLength();
      cacheSize -= head.getLength();
      if (cacheSize > maxSize) {
        queue.poll();
      } else {
        cacheSize += head.getLength();
      }
      queue.add(entry);
    }
  }
}
 
Developer: grokcoder, Project: pbase, Lines of code: 28, Source file: CachedEntryQueue.java


Example 6: CachedEntryQueue

import org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.BucketEntry; // import the dependent package/class
/**
 * @param maxSize the target size of elements in the queue
 * @param blockSize expected average size of blocks
 */
public CachedEntryQueue(long maxSize, long blockSize) {
  int initialSize = (int) (maxSize / blockSize);
  if (initialSize == 0) {
    initialSize++;
  }
  queue = MinMaxPriorityQueue.orderedBy(new Comparator<Map.Entry<BlockCacheKey, BucketEntry>>() {

    @Override
    public int compare(Entry<BlockCacheKey, BucketEntry> entry1,
        Entry<BlockCacheKey, BucketEntry> entry2) {
      return BucketEntry.COMPARATOR.compare(entry1.getValue(), entry2.getValue());
    }

  }).expectedSize(initialSize).create();
  cacheSize = 0;
  this.maxSize = maxSize;
}
 
Developer: apache, Project: hbase, Lines of code: 22, Source file: CachedEntryQueue.java


Example 7: testCacheFullException

import org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.BucketEntry; // import the dependent package/class
/**
 * Do Cache full exception
 * @throws IOException
 * @throws InterruptedException
 */
@Test (timeout=30000)
public void testCacheFullException()
    throws IOException, InterruptedException {
  this.bc.cacheBlock(this.plainKey, plainCacheable);
  RAMQueueEntry rqe = q.remove();
  RAMQueueEntry spiedRqe = Mockito.spy(rqe);
  final CacheFullException cfe = new CacheFullException(0, 0);
  BucketEntry mockedBucketEntry = Mockito.mock(BucketEntry.class);
  Mockito.doThrow(cfe).
    doReturn(mockedBucketEntry).
    when(spiedRqe).writeToCache((IOEngine)Mockito.any(), (BucketAllocator)Mockito.any(),
      (UniqueIndexMap<Integer>)Mockito.any(), (LongAdder) Mockito.any());
  this.q.add(spiedRqe);
  doDrainOfOneEntry(bc, wt, q);
}
 
Developer: apache, Project: hbase, Lines of code: 21, Source file: TestBucketWriterThread.java


Example 8: BucketAllocator

import org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.BucketEntry; // import the dependent package/class
/**
 * Rebuild the allocator's data structures from a persisted map.
 * @param availableSpace capacity of cache
 * @param map A map stores the block key and BucketEntry(block's meta data
 *          like offset, length)
 * @param realCacheSize cached data size statistics for bucket cache
 * @throws BucketAllocatorException
 */
BucketAllocator(long availableSpace, int[] bucketSizes, Map<BlockCacheKey, BucketEntry> map,
    AtomicLong realCacheSize) throws BucketAllocatorException {
  this(availableSpace, bucketSizes);

  // each bucket has an offset, sizeindex. probably the buckets are too big
  // in our default state. so what we do is reconfigure them according to what
  // we've found. we can only reconfigure each bucket once; if more than once,
  // we know there's a bug, so we just log the info, throw, and start again...
  boolean[] reconfigured = new boolean[buckets.length];
  for (Map.Entry<BlockCacheKey, BucketEntry> entry : map.entrySet()) {
    long foundOffset = entry.getValue().offset();
    int foundLen = entry.getValue().getLength();
    int bucketSizeIndex = -1;
    for (int i = 0; i < bucketSizes.length; ++i) {
      if (foundLen <= bucketSizes[i]) {
        bucketSizeIndex = i;
        break;
      }
    }
    if (bucketSizeIndex == -1) {
      throw new BucketAllocatorException(
          "Can't match bucket size for the block with size " + foundLen);
    }
    int bucketNo = (int) (foundOffset / bucketCapacity);
    if (bucketNo < 0 || bucketNo >= buckets.length)
      throw new BucketAllocatorException("Can't find bucket " + bucketNo
          + ", total buckets=" + buckets.length
          + "; did you shrink the cache?");
    Bucket b = buckets[bucketNo];
    if (reconfigured[bucketNo]) {
      if (b.sizeIndex() != bucketSizeIndex)
        throw new BucketAllocatorException(
            "Inconsistent allocation in bucket map;");
    } else {
      if (!b.isCompletelyFree())
        throw new BucketAllocatorException("Reconfiguring bucket "
            + bucketNo + " but it's already allocated; corrupt data");
      // Need to remove the bucket from whichever list it's currently in at
      // the moment...
      BucketSizeInfo bsi = bucketSizeInfos[bucketSizeIndex];
      BucketSizeInfo oldbsi = bucketSizeInfos[b.sizeIndex()];
      oldbsi.removeBucket(b);
      bsi.instantiateBucket(b);
      reconfigured[bucketNo] = true;
    }
    realCacheSize.addAndGet(foundLen);
    buckets[bucketNo].addAllocation(foundOffset);
    usedSize += buckets[bucketNo].getItemAllocationSize();
    bucketSizeInfos[bucketSizeIndex].blockAllocated(b);
  }
}
 
Developer: fengchen8086, Project: ditb, Lines of code: 60, Source file: BucketAllocator.java
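
Two computations drive the rebuild loop above: the bucket number is the persisted offset divided by the per-bucket capacity, and the size index is the first configured bucket size large enough to hold the block. A short worked example with assumed numbers (the capacity and size list below are illustrative, not HBase defaults):

public class BucketIndexSketch {
  public static void main(String[] args) {
    long bucketCapacity = 4L * 1024 * 1024;                // assumed capacity of one bucket
    int[] bucketSizes = {8 * 1024, 16 * 1024, 64 * 1024};  // assumed configured sizes

    long foundOffset = 9_000_000L;   // offset persisted in a BucketEntry
    int foundLen = 10_000;           // length persisted in a BucketEntry

    int bucketNo = (int) (foundOffset / bucketCapacity);   // 9,000,000 / 4 MB -> bucket 2
    int bucketSizeIndex = -1;
    for (int i = 0; i < bucketSizes.length; i++) {
      if (foundLen <= bucketSizes[i]) {                    // first size that fits the block
        bucketSizeIndex = i;                               // 10,000 <= 16 KB -> index 1
        break;
      }
    }
    System.out.println("bucketNo=" + bucketNo + ", sizeIndex=" + bucketSizeIndex);
  }
}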


Example 9: BucketAllocator

import org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.BucketEntry; // import the dependent package/class
/**
 * Rebuild the allocator's data structures from a persisted map.
 * @param availableSpace capacity of cache
 * @param map A map stores the block key and BucketEntry(block's meta data
 *          like offset, length)
 * @param realCacheSize cached data size statistics for bucket cache
 * @throws BucketAllocatorException
 */
BucketAllocator(long availableSpace, Map<BlockCacheKey, BucketEntry> map,
    AtomicLong realCacheSize) throws BucketAllocatorException {
  this(availableSpace);

  // each bucket has an offset, sizeindex. probably the buckets are too big
  // in our default state. so what we do is reconfigure them according to what
  // we've found. we can only reconfigure each bucket once; if more than once,
  // we know there's a bug, so we just log the info, throw, and start again...
  boolean[] reconfigured = new boolean[buckets.length];
  for (Map.Entry<BlockCacheKey, BucketEntry> entry : map.entrySet()) {
    long foundOffset = entry.getValue().offset();
    int foundLen = entry.getValue().getLength();
    int bucketSizeIndex = -1;
    for (int i = 0; i < BUCKET_SIZES.length; ++i) {
      if (foundLen <= BUCKET_SIZES[i]) {
        bucketSizeIndex = i;
        break;
      }
    }
    if (bucketSizeIndex == -1) {
      throw new BucketAllocatorException(
          "Can't match bucket size for the block with size " + foundLen);
    }
    int bucketNo = (int) (foundOffset / (long) BUCKET_CAPACITY);
    if (bucketNo < 0 || bucketNo >= buckets.length)
      throw new BucketAllocatorException("Can't find bucket " + bucketNo
          + ", total buckets=" + buckets.length
          + "; did you shrink the cache?");
    Bucket b = buckets[bucketNo];
    if (reconfigured[bucketNo]) {
      if (b.sizeIndex() != bucketSizeIndex)
        throw new BucketAllocatorException(
            "Inconsistent allocation in bucket map;");
    } else {
      if (!b.isCompletelyFree())
        throw new BucketAllocatorException("Reconfiguring bucket "
            + bucketNo + " but it's already allocated; corrupt data");
      // Need to remove the bucket from whichever list it's currently in at
      // the moment...
      BucketSizeInfo bsi = bucketSizeInfos[bucketSizeIndex];
      BucketSizeInfo oldbsi = bucketSizeInfos[b.sizeIndex()];
      oldbsi.removeBucket(b);
      bsi.instantiateBucket(b);
      reconfigured[bucketNo] = true;
    }
    realCacheSize.addAndGet(foundLen);
    buckets[bucketNo].addAllocation(foundOffset);
    usedSize += buckets[bucketNo].itemAllocationSize();
    bucketSizeInfos[bucketSizeIndex].blockAllocated(b);
  }
}
 
Developer: tenggyut, Project: HIndex, Lines of code: 60, Source file: BucketAllocator.java


Example 10: BucketAllocator

import org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.BucketEntry; // import the dependent package/class
/**
 * Rebuild the allocator's data structures from a persisted map.
 * @param availableSpace capacity of cache
 * @param map A map stores the block key and BucketEntry(block's meta data
 *          like offset, length)
 * @param realCacheSize cached data size statistics for bucket cache
 * @throws BucketAllocatorException
 */
BucketAllocator(long availableSpace, int[] bucketSizes, Map<BlockCacheKey, BucketEntry> map,
    LongAdder realCacheSize) throws BucketAllocatorException {
  this(availableSpace, bucketSizes);

  // each bucket has an offset, sizeindex. probably the buckets are too big
  // in our default state. so what we do is reconfigure them according to what
  // we've found. we can only reconfigure each bucket once; if more than once,
  // we know there's a bug, so we just log the info, throw, and start again...
  boolean[] reconfigured = new boolean[buckets.length];
  int sizeNotMatchedCount = 0;
  int insufficientCapacityCount = 0;
  Iterator<Map.Entry<BlockCacheKey, BucketEntry>> iterator = map.entrySet().iterator();
  while (iterator.hasNext()) {
    Map.Entry<BlockCacheKey, BucketEntry> entry = iterator.next();
    long foundOffset = entry.getValue().offset();
    int foundLen = entry.getValue().getLength();
    int bucketSizeIndex = -1;
    for (int i = 0; i < this.bucketSizes.length; ++i) {
      if (foundLen <= this.bucketSizes[i]) {
        bucketSizeIndex = i;
        break;
      }
    }
    if (bucketSizeIndex == -1) {
      sizeNotMatchedCount++;
      iterator.remove();
      continue;
    }
    int bucketNo = (int) (foundOffset / bucketCapacity);
    if (bucketNo < 0 || bucketNo >= buckets.length) {
      insufficientCapacityCount++;
      iterator.remove();
      continue;
    }
    Bucket b = buckets[bucketNo];
    if (reconfigured[bucketNo]) {
      if (b.sizeIndex() != bucketSizeIndex) {
        throw new BucketAllocatorException("Inconsistent allocation in bucket map;");
      }
    } else {
      if (!b.isCompletelyFree()) {
        throw new BucketAllocatorException(
            "Reconfiguring bucket " + bucketNo + " but it's already allocated; corrupt data");
      }
      // Need to remove the bucket from whichever list it's currently in at
      // the moment...
      BucketSizeInfo bsi = bucketSizeInfos[bucketSizeIndex];
      BucketSizeInfo oldbsi = bucketSizeInfos[b.sizeIndex()];
      oldbsi.removeBucket(b);
      bsi.instantiateBucket(b);
      reconfigured[bucketNo] = true;
    }
    realCacheSize.add(foundLen);
    buckets[bucketNo].addAllocation(foundOffset);
    usedSize += buckets[bucketNo].getItemAllocationSize();
    bucketSizeInfos[bucketSizeIndex].blockAllocated(b);
  }

  if (sizeNotMatchedCount > 0) {
    LOG.warn("There are " + sizeNotMatchedCount + " blocks which can't be rebuilt because " +
      "there is no matching bucket size for these blocks");
  }
  if (insufficientCapacityCount > 0) {
    LOG.warn("There are " + insufficientCapacityCount + " blocks which can't be rebuilt - "
      + "did you shrink the cache?");
  }
}
 
Developer: apache, Project: hbase, Lines of code: 76, Source file: BucketAllocator.java
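
Compared with examples 8 and 9, this Apache HBase version skips and counts blocks it cannot place (logging warnings afterwards) instead of throwing, and it tracks the cached-data size with a LongAdder rather than an AtomicLong. A minimal sketch of that counter difference, with illustrative values:

import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.LongAdder;

// AtomicLong returns the updated value on every add; LongAdder only
// accumulates and is summed on demand, which is cheaper under write contention.
public class CacheSizeCounterSketch {
  public static void main(String[] args) {
    AtomicLong atomic = new AtomicLong();
    long afterAdd = atomic.addAndGet(4096);   // 4096, returned immediately

    LongAdder adder = new LongAdder();
    adder.add(4096);                          // no return value
    adder.add(8192);
    long total = adder.sum();                 // 12288, computed when asked

    System.out.println(afterAdd + " / " + total);
  }
}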


Example 11: poll

import org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.BucketEntry; // import the dependent package/class
/**
 * @return The next element in this queue, or {@code null} if the queue is
 *         empty.
 */
public Map.Entry<BlockCacheKey, BucketEntry> poll() {
  return queue.poll();
}
 
Developer: fengchen8086, Project: ditb, Lines of code: 8, Source file: CachedEntryQueue.java


Example 12: pollLast

import org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.BucketEntry; // import the dependent package/class
/**
 * @return The last element in this queue, or {@code null} if the queue is
 *         empty.
 */
public Map.Entry<BlockCacheKey, BucketEntry> pollLast() {
  return queue.pollLast();
}
 
Developer: fengchen8086, Project: ditb, Lines of code: 8, Source file: CachedEntryQueue.java
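
Both poll() and pollLast() above delegate to the backing Guava MinMaxPriorityQueue, which keeps the least and greatest elements of the ordering accessible at the same time. A self-contained sketch of that behavior (assumes Guava on the classpath; plain integers stand in for cache entries):

import com.google.common.collect.MinMaxPriorityQueue;
import java.util.Comparator;

// poll() removes the least element, pollLast() the greatest,
// and both return null once the queue is empty.
public class MinMaxQueueSketch {
  public static void main(String[] args) {
    MinMaxPriorityQueue<Integer> queue = MinMaxPriorityQueue
        .orderedBy(Comparator.<Integer>naturalOrder())
        .expectedSize(4)
        .create();

    queue.add(30);
    queue.add(10);
    queue.add(20);

    System.out.println(queue.poll());      // 10 (smallest)
    System.out.println(queue.pollLast());  // 30 (largest)
    System.out.println(queue.poll());      // 20
    System.out.println(queue.poll());      // null (queue is empty)
  }
}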



Note: the org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.BucketEntry class examples in this article were collected from open-source projects hosted on GitHub, MSDocs, and similar code/documentation platforms. The snippets were selected from projects contributed by their original authors, who retain copyright to the source code; consult each project's license before using or redistributing the code. Do not repost without permission.

