
Java ByteRange Class Code Examples


This article collects and summarizes typical usage examples of the Java class org.apache.hadoop.hbase.util.ByteRange. If you are wondering what the ByteRange class is for, how to use it, or what real ByteRange code looks like, the curated examples below should help.



The ByteRange class belongs to the org.apache.hadoop.hbase.util package. Twenty code examples of the ByteRange class are shown below, sorted by popularity by default.
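Before working through the examples, here is a minimal sketch of the core ByteRange idea: a ByteRange is a lightweight view (byte array, offset, length) over an existing buffer. The sketch uses only classes and methods that appear in the examples on this page; treat it as an illustration rather than authoritative API documentation.

import org.apache.hadoop.hbase.util.ByteRange;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.SimpleMutableByteRange;

public class ByteRangeQuickStart {
  public static void main(String[] args) {
    byte[] backing = Bytes.toBytes("hello-hbase");

    // View the last 5 bytes ("hbase") without copying the array.
    ByteRange range = new SimpleMutableByteRange(backing, 6, 5);
    System.out.println(range.getLength());                          // 5
    System.out.println(Bytes.toString(range.deepCopyToNewArray())); // hbase

    // Repoint the same ByteRange at a different slice, still no copy.
    range.set(backing, 0, 5);
    System.out.println(Bytes.toString(range.deepCopyToNewArray())); // hello
  }
}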

Example 1: testLABRandomAllocation

import org.apache.hadoop.hbase.util.ByteRange; // import the required package/class
/**
 * Test a bunch of random allocations
 */
@Test
public void testLABRandomAllocation() {
  Random rand = new Random();
  MemStoreLAB mslab = new HeapMemStoreLAB();
  int expectedOff = 0;
  byte[] lastBuffer = null;
  // 100K iterations by 0-1K alloc -> 50MB expected
  // should be reasonable for unit test and also cover wraparound
  // behavior
  for (int i = 0; i < 100000; i++) {
    int size = rand.nextInt(1000);
    ByteRange alloc = mslab.allocateBytes(size);
    
    if (alloc.getBytes() != lastBuffer) {
      expectedOff = 0;
      lastBuffer = alloc.getBytes();
    }
    assertEquals(expectedOff, alloc.getOffset());
    assertTrue("Allocation overruns buffer",
        alloc.getOffset() + size <= alloc.getBytes().length);
    expectedOff += size;
  }
}
 
Developer ID: fengchen8086, Project: ditb, Lines: 27, Source: TestMemStoreLAB.java


Example 2: split

import org.apache.hadoop.hbase.util.ByteRange; // import the required package/class
/**
 * Called when we need to convert a leaf node into a branch with 2 leaves. Comments inside the
 * method assume we have token BAA starting at tokenStartOffset=0 and are adding BOO. The output
 * will be 3 nodes:<br>
 * <ul>
 * <li>1: B &lt;- branch
 * <li>2: AA &lt;- leaf
 * <li>3: OO &lt;- leaf
 * </ul>
 *
 * @param numTokenBytesToRetain =&gt; 1 (the B)
 * @param bytes =&gt; BOO
 */
protected void split(int numTokenBytesToRetain, final ByteRange bytes) {
  int childNodeDepth = nodeDepth;
  int childTokenStartOffset = tokenStartOffset + numTokenBytesToRetain;

  //create leaf AA
  TokenizerNode firstChild = builder.addNode(this, childNodeDepth, childTokenStartOffset,
    token, numTokenBytesToRetain);
  firstChild.setNumOccurrences(numOccurrences);// do before clearing this node's numOccurrences
  token.setLength(numTokenBytesToRetain);//shorten current token from BAA to B
  numOccurrences = 0;//current node is now a branch

  moveChildrenToDifferentParent(firstChild);//point the new leaf (AA) to the new branch (B)
  addChild(firstChild);//add the new leaf (AA) to the branch's (B's) children

  //create leaf OO
  TokenizerNode secondChild = builder.addNode(this, childNodeDepth, childTokenStartOffset,
    bytes, tokenStartOffset + numTokenBytesToRetain);
  addChild(secondChild);//add the new leaf (OO) to the branch's (B's) children

  // we inserted branch node B as a new level above/before the two children, so increment the
  // depths of the children below
  firstChild.incrementNodeDepthRecursively();
  secondChild.incrementNodeDepthRecursively();
}
 
Developer ID: fengchen8086, Project: ditb, Lines: 38, Source: TokenizerNode.java
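To make the trigger for split concrete, the hedged sketch below feeds the Javadoc's two tokens into a Tokenizer, constructed the same way as in Example 20 further down; addSorted is shown in Example 15. Only the entry point is illustrated here, since the split itself happens inside TokenizerNode.

import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.SimpleMutableByteRange;
// Tokenizer's package path is assumed from the hbase-prefix-tree module.
import org.apache.hadoop.hbase.codec.prefixtree.encode.tokenize.Tokenizer;

public class SplitIllustration {
  public static void main(String[] args) {
    Tokenizer builder = new Tokenizer();
    // Adding BAA creates a single leaf whose token is BAA.
    builder.addSorted(new SimpleMutableByteRange(Bytes.toBytes("BAA")));
    // BOO shares only the prefix B, so the BAA leaf is split into branch B
    // with leaves AA and OO -- the three nodes described in the Javadoc.
    builder.addSorted(new SimpleMutableByteRange(Bytes.toBytes("BOO")));
  }
}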


Example 3: addNode

import org.apache.hadoop.hbase.util.ByteRange; // import the required package/class
protected TokenizerNode addNode(TokenizerNode parent, int nodeDepth, int tokenStartOffset,
    final ByteRange token, int inputTokenOffset) {
  int inputTokenLength = token.getLength() - inputTokenOffset;
  int tokenOffset = appendTokenAndRepointByteRange(token, inputTokenOffset);
  TokenizerNode node = null;
  if (nodes.size() <= numNodes) {
    node = new TokenizerNode(this, parent, nodeDepth, tokenStartOffset, tokenOffset,
        inputTokenLength);
    nodes.add(node);
  } else {
    node = nodes.get(numNodes);
    node.reset();
    node.reconstruct(this, parent, nodeDepth, tokenStartOffset, tokenOffset, inputTokenLength);
  }
  ++numNodes;
  return node;
}
 
Developer ID: fengchen8086, Project: ditb, Lines: 18, Source: Tokenizer.java


Example 4: store

import org.apache.hadoop.hbase.util.ByteRange; // import the required package/class
protected int store(ByteRange bytes) {
  int indexOfNewElement = numUniqueRanges;
  if (uniqueRanges.size() <= numUniqueRanges) {
    uniqueRanges.add(new SimpleMutableByteRange());
  }
  ByteRange storedRange = uniqueRanges.get(numUniqueRanges);
  int neededBytes = numBytes + bytes.getLength();
  byteAppender = ArrayUtils.growIfNecessary(byteAppender, neededBytes, 2 * neededBytes);
  bytes.deepCopyTo(byteAppender, numBytes);
  storedRange.set(byteAppender, numBytes, bytes.getLength());// this isn't valid yet
  numBytes += bytes.getLength();
  uniqueIndexByUniqueRange.put(storedRange, indexOfNewElement);
  int newestUniqueIndex = numUniqueRanges;
  ++numUniqueRanges;
  return newestUniqueIndex;
}
 
Developer ID: fengchen8086, Project: ditb, Lines: 17, Source: ByteRangeSet.java
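store depends on ArrayUtils.growIfNecessary to amortize growth of the shared byteAppender buffer. The helper below is a hedged reimplementation inferred purely from the call sites on this page (grow to roughly double once minLength no longer fits); it is not the actual HBase ArrayUtils source.

import java.util.Arrays;

public class GrowIfNecessarySketch {
  // Inferred contract: return the array unchanged when it already holds
  // minLength bytes; otherwise reallocate to the suggested larger size.
  static byte[] growIfNecessary(byte[] array, int minLength, int newLength) {
    if (array.length >= minLength) {
      return array;
    }
    return Arrays.copyOf(array, Math.max(minLength, newLength));
  }
}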


Example 5: toString

import org.apache.hadoop.hbase.util.ByteRange; // import the required package/class
/***************** standard methods ************************/

  @Override
  public String toString() {
    StringBuilder sb = new StringBuilder();
    int i = 0;
    for (ByteRange r : sortedRanges) {
      if (i > 0) {
        sb.append("\n");
      }
      sb.append(i + " " + Bytes.toStringBinary(r.deepCopyToNewArray()));
      ++i;
    }
    sb.append("\ntotalSize:" + numBytes);
    sb.append("\navgSize:" + getAvgSize());
    return sb.toString();
  }
 
Developer ID: fengchen8086, Project: ditb, Lines: 18, Source: ByteRangeSet.java


Example 6: split

import org.apache.hadoop.hbase.util.ByteRange; // import the required package/class
/**
 * Called when we need to convert a leaf node into a branch with 2 leaves. Comments inside the
 * method assume we have token BAA starting at tokenStartOffset=0 and are adding BOO. The output
 * will be 3 nodes:<br>
 * <ul>
 * <li>1: B &lt;- branch
 * <li>2: AA &lt;- leaf
 * <li>3: OO &lt;- leaf
 * </ul>
 *
 * @param numTokenBytesToRetain =&gt; 1 (the B)
 * @param bytes =&gt; BOO
 */
protected void split(int numTokenBytesToRetain, final ByteRange bytes) {
  int childNodeDepth = nodeDepth;
  int childTokenStartOffset = tokenStartOffset + numTokenBytesToRetain;

  //create leaf AA
  TokenizerNode firstChild = builder.addNode(this, childNodeDepth, childTokenStartOffset,
    token, numTokenBytesToRetain);
  firstChild.setNumOccurrences(numOccurrences);// do before clearing this node's numOccurrences
  token.setLength(numTokenBytesToRetain);//shorten current token from BAA to B
  numOccurrences = 0;//current node is now a branch

  moveChildrenToDifferentParent(firstChild);//point the new leaf (AA) to the new branch (B)
  addChild(firstChild);//add the new leaf (AA) to the branch's (B's) children

  //create leaf OO
  TokenizerNode secondChild = builder.addNode(this, childNodeDepth, childTokenStartOffset,
    bytes, tokenStartOffset + numTokenBytesToRetain);
  addChild(secondChild);//add the new leaf (OO) to the branch's (B's) children

  // we inserted branch node B as a new level above/before the two children, so increment the
  // depths of the children below
  firstChild.incrementNodeDepthRecursively();
  secondChild.incrementNodeDepthRecursively();
}
 
Developer ID: grokcoder, Project: pbase, Lines: 36, Source: TokenizerNode.java


Example 7: store

import org.apache.hadoop.hbase.util.ByteRange; // import the required package/class
protected int store(ByteRange bytes) {
  int indexOfNewElement = numUniqueRanges;
  if (uniqueRanges.size() <= numUniqueRanges) {
    uniqueRanges.add(new SimpleByteRange());
  }
  ByteRange storedRange = uniqueRanges.get(numUniqueRanges);
  int neededBytes = numBytes + bytes.getLength();
  byteAppender = ArrayUtils.growIfNecessary(byteAppender, neededBytes, 2 * neededBytes);
  bytes.deepCopyTo(byteAppender, numBytes);
  storedRange.set(byteAppender, numBytes, bytes.getLength());// this isn't valid yet
  numBytes += bytes.getLength();
  uniqueIndexByUniqueRange.put(storedRange, indexOfNewElement);
  int newestUniqueIndex = numUniqueRanges;
  ++numUniqueRanges;
  return newestUniqueIndex;
}
 
Developer ID: tenggyut, Project: HIndex, Lines: 17, Source: ByteRangeSet.java


Example 8: VisibilityLabelFilter

import org.apache.hadoop.hbase.util.ByteRange; // import the required package/class
public VisibilityLabelFilter(VisibilityExpEvaluator expEvaluator,
    Map<ByteRange, Integer> cfVsMaxVersions) {
  this.expEvaluator = expEvaluator;
  this.cfVsMaxVersions = cfVsMaxVersions;
  this.curFamily = new SimpleMutableByteRange();
  this.curQualifier = new SimpleMutableByteRange();
}
 
Developer ID: fengchen8086, Project: ditb, Lines: 8, Source: VisibilityLabelFilter.java


Example 9: createVisibilityLabelFilter

import org.apache.hadoop.hbase.util.ByteRange; // import the required package/class
public static Filter createVisibilityLabelFilter(Region region, Authorizations authorizations)
    throws IOException {
  Map<ByteRange, Integer> cfVsMaxVersions = new HashMap<ByteRange, Integer>();
  for (HColumnDescriptor hcd : region.getTableDesc().getFamilies()) {
    cfVsMaxVersions.put(new SimpleMutableByteRange(hcd.getName()), hcd.getMaxVersions());
  }
  VisibilityLabelService vls = VisibilityLabelServiceManager.getInstance()
      .getVisibilityLabelService();
  Filter visibilityLabelFilter = new VisibilityLabelFilter(
      vls.getVisibilityExpEvaluator(authorizations), cfVsMaxVersions);
  return visibilityLabelFilter;
}
 
Developer ID: fengchen8086, Project: ditb, Lines: 13, Source: VisibilityUtils.java
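Note why the cfVsMaxVersions map is keyed by ByteRange instead of byte[]: raw Java arrays use identity-based equals/hashCode, so a HashMap keyed by byte[] cannot be probed with a freshly read family array, while ByteRange implementations compare by content. A minimal sketch of the difference, using only classes already shown above:

import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.hbase.util.ByteRange;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.SimpleMutableByteRange;

public class ByteRangeAsMapKey {
  public static void main(String[] args) {
    // byte[] keys compare by identity: a second array with identical
    // content misses the entry.
    Map<byte[], Integer> byArray = new HashMap<>();
    byArray.put(Bytes.toBytes("cf"), 3);
    System.out.println(byArray.get(Bytes.toBytes("cf"))); // null

    // ByteRange keys compare by content, so the filter can look up the
    // family of each incoming cell without interning arrays.
    Map<ByteRange, Integer> byRange = new HashMap<>();
    byRange.put(new SimpleMutableByteRange(Bytes.toBytes("cf")), 3);
    System.out.println(
        byRange.get(new SimpleMutableByteRange(Bytes.toBytes("cf")))); // 3
  }
}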


Example 10: AccessControlFilter

import org.apache.hadoop.hbase.util.ByteRange; // import the required package/class
AccessControlFilter(TableAuthManager mgr, User ugi, TableName tableName,
    Strategy strategy, Map<ByteRange, Integer> cfVsMaxVersions) {
  authManager = mgr;
  table = tableName;
  user = ugi;
  isSystemTable = tableName.isSystemTable();
  this.strategy = strategy;
  this.cfVsMaxVersions = cfVsMaxVersions;
  this.prevFam = new SimpleMutableByteRange();
  this.prevQual = new SimpleMutableByteRange();
}
 
Developer ID: fengchen8086, Project: ditb, Lines: 12, Source: AccessControlFilter.java


Example 11: allocateBytes

import org.apache.hadoop.hbase.util.ByteRange; // import the required package/class
/**
 * Allocate a slice of the given length.
 *
 * If the size is larger than the maximum size specified for this
 * allocator, returns null.
 */
@Override
public ByteRange allocateBytes(int size) {
  Preconditions.checkArgument(size >= 0, "negative size");

  // Callers should satisfy large allocations directly from JVM since they
  // don't cause fragmentation as badly.
  if (size > maxAlloc) {
    return null;
  }

  while (true) {
    Chunk c = getOrMakeChunk();

    // Try to allocate from this chunk
    int allocOffset = c.alloc(size);
    if (allocOffset != -1) {
      // We succeeded - this is the common case - small alloc
      // from a big buffer
      return new SimpleMutableByteRange(c.data, allocOffset, size);
    }

    // not enough space!
    // try to retire this chunk
    tryRetireChunk(c);
  }
}
 
Developer ID: fengchen8086, Project: ditb, Lines: 33, Source: HeapMemStoreLAB.java
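The usual caller-side pattern pairs allocateBytes with a copy into the returned slice, falling back to a plain JVM allocation when the LAB declines. A hedged sketch of that pattern; the method name and surrounding class are illustrative, not from the HBase source:

import org.apache.hadoop.hbase.util.ByteRange;
// MemStoreLAB's package path is assumed from the regionserver module.
import org.apache.hadoop.hbase.regionserver.MemStoreLAB;

public class MemStoreLabCopy {
  static byte[] copyIntoLab(MemStoreLAB mslab, byte[] src, int off, int len) {
    ByteRange alloc = mslab.allocateBytes(len);
    if (alloc == null) {
      // Larger than maxAlloc: satisfy it directly from the JVM heap.
      byte[] copy = new byte[len];
      System.arraycopy(src, off, copy, 0, len);
      return copy;
    }
    // Copy into the slice the LAB handed back inside its current chunk.
    System.arraycopy(src, off, alloc.getBytes(), alloc.getOffset(), len);
    return alloc.getBytes();
  }
}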


Example 12: testLABLargeAllocation

import org.apache.hadoop.hbase.util.ByteRange; // import the required package/class
@Test
public void testLABLargeAllocation() {
  MemStoreLAB mslab = new HeapMemStoreLAB();
  ByteRange alloc = mslab.allocateBytes(2*1024*1024);
  assertNull("2MB allocation shouldn't be satisfied by LAB.",
    alloc);
}
 
Developer ID: fengchen8086, Project: ditb, Lines: 8, Source: TestMemStoreLAB.java


Example 13: testReusingChunks

import org.apache.hadoop.hbase.util.ByteRange; // import the required package/class
@Test
public void testReusingChunks() {
  Random rand = new Random();
  MemStoreLAB mslab = new HeapMemStoreLAB(conf);
  int expectedOff = 0;
  byte[] lastBuffer = null;
  // Randomly allocate some bytes
  for (int i = 0; i < 100; i++) {
    int size = rand.nextInt(1000);
    ByteRange alloc = mslab.allocateBytes(size);

    if (alloc.getBytes() != lastBuffer) {
      expectedOff = 0;
      lastBuffer = alloc.getBytes();
    }
    assertEquals(expectedOff, alloc.getOffset());
    assertTrue("Allocation overruns buffer", alloc.getOffset()
        + size <= alloc.getBytes().length);
    expectedOff += size;
  }
  // chunks will be put back to pool after close
  mslab.close();
  int chunkCount = chunkPool.getPoolSize();
  assertTrue(chunkCount > 0);
  // reconstruct mslab
  mslab = new HeapMemStoreLAB(conf);
  // chunk should be got from the pool, so we can reuse it.
  mslab.allocateBytes(1000);
  assertEquals(chunkCount - 1, chunkPool.getPoolSize());
}
 
Developer ID: fengchen8086, Project: ditb, Lines: 31, Source: TestMemStoreChunkPool.java


Example 14: addAll

import org.apache.hadoop.hbase.util.ByteRange; // import the required package/class
/***************** building *************************/

  public void addAll(ArrayList<ByteRange> sortedByteRanges) {
    for (int i = 0; i < sortedByteRanges.size(); ++i) {
      ByteRange byteRange = sortedByteRanges.get(i);
      addSorted(byteRange);
    }
  }
 
Developer ID: fengchen8086, Project: ditb, Lines: 9, Source: Tokenizer.java


Example 15: addSorted

import org.apache.hadoop.hbase.util.ByteRange; // import the required package/class
public void addSorted(final ByteRange bytes) {
  ++numArraysAdded;
  if (bytes.getLength() > maxElementLength) {
    maxElementLength = bytes.getLength();
  }
  if (root == null) {
    // nodeDepth of firstNode (non-root) is 1
    root = addNode(null, 1, 0, bytes, 0);
  } else {
    root.addSorted(bytes);
  }
}
 
Developer ID: fengchen8086, Project: ditb, Lines: 13, Source: Tokenizer.java


Example 16: appendTokenAndRepointByteRange

import org.apache.hadoop.hbase.util.ByteRange; // import the required package/class
protected int appendTokenAndRepointByteRange(final ByteRange token, int inputTokenOffset) {
  int newOffset = tokensLength;
  int inputTokenLength = token.getLength() - inputTokenOffset;
  int newMinimum = tokensLength + inputTokenLength;
  tokens = ArrayUtils.growIfNecessary(tokens, newMinimum, 2 * newMinimum);
  token.deepCopySubRangeTo(inputTokenOffset, inputTokenLength, tokens, tokensLength);
  tokensLength += inputTokenLength;
  return newOffset;
}
 
Developer ID: fengchen8086, Project: ditb, Lines: 10, Source: Tokenizer.java


Example 17: add

import org.apache.hadoop.hbase.util.ByteRange; // import the required package/class
/**
 * Check if the incoming byte range exists.  If not, add it to the backing byteAppender[] and
 * insert it into the tracking Map uniqueIndexByUniqueRange.
 */
public void add(ByteRange bytes) {
  Integer index = uniqueIndexByUniqueRange.get(bytes);
  if (index == null) {
    index = store(bytes);
  }
  int minLength = numInputs + 1;
  uniqueRangeIndexByInsertionId = ArrayUtils.growIfNecessary(uniqueRangeIndexByInsertionId,
      minLength, 2 * minLength);
  uniqueRangeIndexByInsertionId[numInputs] = index;
  ++numInputs;
}
 
Developer ID: fengchen8086, Project: ditb, Lines: 16, Source: ByteRangeSet.java


Example 18: getInputs

import org.apache.hadoop.hbase.util.ByteRange; // import the required package/class
@Override
public List<ByteRange> getInputs() {
  List<String> d = Lists.newArrayList();
  d.add("abc");
  d.add("abcde");
  d.add("abc");
  d.add("bbc");
  d.add("abc");
  return ByteRangeUtils.fromArrays(Bytes.getUtf8ByteArrays(d));
}
 
Developer ID: fengchen8086, Project: ditb, Lines: 11, Source: TestColumnDataSimple.java


Example 19: getOutputs

import org.apache.hadoop.hbase.util.ByteRange; // import the required package/class
@Override
public List<ByteRange> getOutputs() {
  List<String> d = Lists.newArrayList();
  d.add("abc");
  d.add("abcde");
  d.add("bbc");
  return ByteRangeUtils.fromArrays(Bytes.getUtf8ByteArrays(d));
}
 
Developer ID: fengchen8086, Project: ditb, Lines: 9, Source: TestColumnDataSimple.java
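Examples 18 and 19 together spell out the contract of the deduplicating set from Example 17: five inputs containing duplicates compile down to three sorted unique ranges. The hedged sketch below drives that pipeline directly, reusing the ByteRangeTreeSet construction and compile()/getSortedRanges() calls shown in Example 20; the import path for ByteRangeTreeSet is assumed.

import java.util.Arrays;
import java.util.List;
import org.apache.hadoop.hbase.util.ByteRange;
import org.apache.hadoop.hbase.util.ByteRangeUtils;
import org.apache.hadoop.hbase.util.Bytes;
// Import path assumed from the hbase-prefix-tree module.
import org.apache.hadoop.hbase.util.byterange.impl.ByteRangeTreeSet;

public class DedupIllustration {
  public static void main(String[] args) {
    // Same five inputs as Example 18, duplicates included.
    List<ByteRange> inputs = ByteRangeUtils.fromArrays(
        Bytes.getUtf8ByteArrays(Arrays.asList("abc", "abcde", "abc", "bbc", "abc")));

    ByteRangeTreeSet set = new ByteRangeTreeSet(inputs);
    // Prints the three sorted unique ranges expected by Example 19.
    for (ByteRange r : set.compile().getSortedRanges()) {
      System.out.println(Bytes.toString(r.deepCopyToNewArray())); // abc, abcde, bbc
    }
  }
}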


Example 20: TestColumnBuilder

import org.apache.hadoop.hbase.util.ByteRange; // import the required package/class
/*************** construct ****************************/

  public TestColumnBuilder(TestColumnData columns) {
    this.columns = columns;
    List<ByteRange> inputs = columns.getInputs();
    this.columnSorter = new ByteRangeTreeSet(inputs);
    this.sortedUniqueColumns = columnSorter.compile().getSortedRanges();
    List<byte[]> copies = ByteRangeUtils.copyToNewArrays(sortedUniqueColumns);
    Assert.assertTrue(Bytes.isSorted(copies));
    this.blockMeta = new PrefixTreeBlockMeta();
    this.blockMeta.setNumMetaBytes(0);
    this.blockMeta.setNumRowBytes(0);
    this.builder = new Tokenizer();
  }
 
Developer ID: fengchen8086, Project: ditb, Lines: 15, Source: TestColumnBuilder.java



Note: The org.apache.hadoop.hbase.util.ByteRange examples in this article were collected from GitHub, MSDocs, and similar source-code and documentation platforms. The snippets were selected from open-source projects contributed by many developers; copyright remains with the original authors. Consult each project's license before using or redistributing the code, and do not repost without permission.

