
Java HFileBlockDecodingContext Class Code Examples


This article collects and summarizes typical usage examples of the Java class org.apache.hadoop.hbase.io.encoding.HFileBlockDecodingContext. If you have been wondering what HFileBlockDecodingContext is for, or how to use it, the curated class code examples below should help.



The HFileBlockDecodingContext class belongs to the org.apache.hadoop.hbase.io.encoding package. Ten code examples of the class are shown below, ordered by popularity.
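Before the examples, a quick orientation: a decoding context pairs an HFileContext (compression, encryption, and MVCC settings) with the logic needed to decode a block's bytes. The sketch below is a minimal illustration written against the HBase 1.x-era API that Examples 2 and 7 use; the FAST_DIFF encoding choice and the MVCC flag are illustrative assumptions, not taken from the examples.

import org.apache.hadoop.hbase.io.encoding.DataBlockEncoder;
import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
import org.apache.hadoop.hbase.io.encoding.HFileBlockDecodingContext;
import org.apache.hadoop.hbase.io.encoding.HFileBlockDefaultDecodingContext;
import org.apache.hadoop.hbase.io.hfile.HFileContext;
import org.apache.hadoop.hbase.io.hfile.HFileContextBuilder;

public class DecodingContextSketch {
  public static void main(String[] args) {
    // Describe the file: here, cells carry MVCC timestamps and are otherwise plain.
    HFileContext fileContext = new HFileContextBuilder()
        .withIncludesMvcc(true)
        .build();

    // FAST_DIFF has a real encoder; NONE has none and would take the fallback branch,
    // mirroring the null check in Example 2.
    DataBlockEncoding encoding = DataBlockEncoding.FAST_DIFF;
    DataBlockEncoder encoder = encoding.getEncoder();

    HFileBlockDecodingContext ctx = (encoder != null)
        ? encoder.newDataBlockDecodingContext(fileContext)
        : new HFileBlockDefaultDecodingContext(fileContext);

    System.out.println("decoding context: " + ctx.getClass().getSimpleName());
  }
}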

Example 1: unpack

import org.apache.hadoop.hbase.io.encoding.HFileBlockDecodingContext; // import the required package/class
/**
 * Retrieves the decompressed/decrypted view of this block. An encoded block remains in its
 * encoded structure. Internal structures are shared between instances where applicable.
 */
HFileBlock unpack(HFileContext fileContext, FSReader reader) throws IOException {
  if (!fileContext.isCompressedOrEncrypted()) {
    // TODO: cannot use our own fileContext here because HFileBlock(ByteBuffer, boolean),
    // which is used for block serialization to L2 cache, does not preserve encoding and
    // encryption details.
    return this;
  }

  HFileBlock unpacked = new HFileBlock(this);
  unpacked.allocateBuffer(); // allocates space for the decompressed block

  HFileBlockDecodingContext ctx = blockType == BlockType.ENCODED_DATA ?
    reader.getBlockDecodingContext() : reader.getDefaultBlockDecodingContext();

  ByteBuff dup = this.buf.duplicate();
  dup.position(this.headerSize());
  dup = dup.slice();
  ctx.prepareDecoding(unpacked.getOnDiskSizeWithoutHeader(),
    unpacked.getUncompressedSizeWithoutHeader(), unpacked.getBufferWithoutHeader(),
    dup);
  return unpacked;
}
 
Developer: apache, Project: hbase, Lines: 27, Source: HFileBlock.java


Example 2: newDataBlockDecodingContext

import org.apache.hadoop.hbase.io.encoding.HFileBlockDecodingContext; // import the required package/class
@Override
public HFileBlockDecodingContext newDataBlockDecodingContext(HFileContext fileContext) {
  DataBlockEncoder encoder = encoding.getEncoder();
  if (encoder != null) {
    return encoder.newDataBlockDecodingContext(fileContext);
  }
  return new HFileBlockDefaultDecodingContext(fileContext);
}
 
Developer: fengchen8086, Project: ditb, Lines: 9, Source: HFileDataBlockEncoderImpl.java


Example 3: decodeKeyValues

import org.apache.hadoop.hbase.io.encoding.HFileBlockDecodingContext; // import the required package/class
/**
 * I don't think this method is called during normal HBase operation, so efficiency is not
 * important.
 */
public ByteBuffer decodeKeyValues(DataInputStream source, int allocateHeaderLength,
    int skipLastBytes, HFileBlockDecodingContext decodingCtx) throws IOException {
  ByteBuffer sourceAsBuffer = ByteBufferUtils.drainInputStreamToBuffer(source);// waste
  sourceAsBuffer.mark();
  PrefixTreeBlockMeta blockMeta = new PrefixTreeBlockMeta(sourceAsBuffer);
  sourceAsBuffer.rewind();
  int numV1BytesWithHeader = allocateHeaderLength + blockMeta.getNumKeyValueBytes();
  byte[] keyValueBytesWithHeader = new byte[numV1BytesWithHeader];
  ByteBuffer result = ByteBuffer.wrap(keyValueBytesWithHeader);
  result.rewind();
  CellSearcher searcher = null;
  try {
    boolean includesMvcc = decodingCtx.getHFileContext().isIncludesMvcc();
    searcher = DecoderFactory.checkOut(sourceAsBuffer, includesMvcc);
    while (searcher.advance()) {
      KeyValue currentCell = KeyValueUtil.copyToNewKeyValue(searcher.current());
      // needs to be modified for DirectByteBuffers. no existing methods to
      // write VLongs to byte[]
      int offset = result.arrayOffset() + result.position();
      System.arraycopy(currentCell.getBuffer(), currentCell.getOffset(), result.array(), offset,
          currentCell.getLength());
      int keyValueLength = KeyValueUtil.length(currentCell);
      ByteBufferUtils.skip(result, keyValueLength);
      offset += keyValueLength;
      if (includesMvcc) {
        ByteBufferUtils.writeVLong(result, currentCell.getMvccVersion());
      }
    }
    result.position(result.limit());//make it appear as if we were appending
    return result;
  } finally {
    DecoderFactory.checkIn(searcher);
  }
}
 
Developer: fengchen8086, Project: ditb, Lines: 39, Source: PrefixTreeCodec.java
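A note on the MVCC branch above: when isIncludesMvcc() is true, each copied KeyValue is followed by its MVCC version written as a variable-length long. The stand-alone snippet below demonstrates the ByteBufferUtils vLong round trip this relies on, assuming the HBase 1.x ByteBuffer-based overloads that match the example.

import java.nio.ByteBuffer;
import org.apache.hadoop.hbase.util.ByteBufferUtils;

public class VLongDemo {
  public static void main(String[] args) {
    ByteBuffer buf = ByteBuffer.allocate(16);
    // Variable-length encoding: small values take one byte, larger ones up to nine.
    ByteBufferUtils.writeVLong(buf, 300L);
    buf.flip();
    System.out.println("encoded length: " + buf.remaining() + " bytes");
    System.out.println("decoded value: " + ByteBufferUtils.readVLong(buf));
  }
}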


Example 4: createSeeker

import org.apache.hadoop.hbase.io.encoding.HFileBlockDecodingContext; // import the required package/class
/**
 * Is this the correct handling of an illegal comparator?  How to prevent that from getting all
 * the way to this point.
 */
@Override
public EncodedSeeker createSeeker(KVComparator comparator, HFileBlockDecodingContext decodingCtx) {
  if (comparator instanceof RawBytesComparator){
    throw new IllegalArgumentException("comparator must be KeyValue.KeyComparator");
  } else if (comparator instanceof MetaComparator){
    throw new IllegalArgumentException("DataBlockEncoding.PREFIX_TREE not compatible with hbase:meta "
        +"table");
  }

  return new PrefixTreeSeeker(decodingCtx.getHFileContext().isIncludesMvcc());
}
 
Developer: fengchen8086, Project: ditb, Lines: 16, Source: PrefixTreeCodec.java


Example 5: newDataBlockDecodingContext

import org.apache.hadoop.hbase.io.encoding.HFileBlockDecodingContext; // import the required package/class
@Override
public HFileBlockDecodingContext newDataBlockDecodingContext(
    Algorithm compressionAlgorithm) {
  DataBlockEncoder encoder = encoding.getEncoder();
  if (encoder != null) {
    return encoder.newDataBlockDecodingContext(compressionAlgorithm);
  }
  return new HFileBlockDefaultDecodingContext(compressionAlgorithm);
}
 
Developer: cloud-software-foundation, Project: c5, Lines: 10, Source: HFileDataBlockEncoderImpl.java


Example 6: newOnDiskDataBlockDecodingContext

import org.apache.hadoop.hbase.io.encoding.HFileBlockDecodingContext; // import the required package/class
@Override
public HFileBlockDecodingContext newOnDiskDataBlockDecodingContext(
    Algorithm compressionAlgorithm) {
  if (onDisk != null) {
    DataBlockEncoder encoder = onDisk.getEncoder();
    if (encoder != null) {
      return encoder.newDataBlockDecodingContext(
          compressionAlgorithm);
    }
  }
  return new HFileBlockDefaultDecodingContext(compressionAlgorithm);
}
 
Developer: daidong, Project: DominoHBase, Lines: 13, Source: HFileDataBlockEncoderImpl.java


Example 7: newDataBlockDecodingContext

import org.apache.hadoop.hbase.io.encoding.HFileBlockDecodingContext; // import the required package/class
@Override
public HFileBlockDecodingContext newDataBlockDecodingContext(HFileContext meta) {
  return new HFileBlockDefaultDecodingContext(meta);
}
 
Developer: fengchen8086, Project: ditb, Lines: 5, Source: NoOpDataBlockEncoder.java


Example 8: unpack

import org.apache.hadoop.hbase.io.encoding.HFileBlockDecodingContext; // import the required package/class
/**
 * Retrieves the decompressed/decrypted view of this block. An encoded block remains in its
 * encoded structure. Internal structures are shared between instances where applicable.
 */
HFileBlock unpack(HFileContext fileContext, FSReader reader) throws IOException {
  if (!fileContext.isCompressedOrEncrypted()) {
    // TODO: cannot use our own fileContext here because HFileBlock(ByteBuffer, boolean),
    // which is used for block serialization to L2 cache, does not preserve encoding and
    // encryption details.
    return this;
  }

  HFileBlock unpacked = new HFileBlock(this);
  unpacked.allocateBuffer(); // allocates space for the decompressed block

  HFileBlockDecodingContext ctx = blockType == BlockType.ENCODED_DATA ?
    reader.getBlockDecodingContext() : reader.getDefaultBlockDecodingContext();

  ByteBuffer dup = this.buf.duplicate();
  dup.position(this.headerSize());
  dup = dup.slice();
  ctx.prepareDecoding(unpacked.getOnDiskSizeWithoutHeader(),
    unpacked.getUncompressedSizeWithoutHeader(), unpacked.getBufferWithoutHeader(),
    dup);

  // Preserve the next block's header bytes in the new block if we have them.
  if (unpacked.hasNextBlockHeader()) {
    // Both the buffers are limited till checksum bytes and avoid the next block's header.
    // Below call to copyFromBufferToBuffer() will try positional read/write from/to buffers when
    // any of the buffer is DBB. So we change the limit on a dup buffer. No copying just create
    // new BB objects
    ByteBuffer inDup = this.buf.duplicate();
    inDup.limit(inDup.limit() + headerSize());
    ByteBuffer outDup = unpacked.buf.duplicate();
    outDup.limit(outDup.limit() + unpacked.headerSize());
    ByteBufferUtils.copyFromBufferToBuffer(
        outDup,
        inDup,
        this.onDiskDataSizeWithHeader,
        unpacked.headerSize() + unpacked.uncompressedSizeWithoutHeader
            + unpacked.totalChecksumBytes(), unpacked.headerSize());
  }
  return unpacked;
}
 
Developer: fengchen8086, Project: ditb, Lines: 45, Source: HFileBlock.java


Example 9: getBlockDecodingContext

import org.apache.hadoop.hbase.io.encoding.HFileBlockDecodingContext; // import the required package/class
/** Get a decoder for {@link BlockType#ENCODED_DATA} blocks from this file. */
HFileBlockDecodingContext getBlockDecodingContext();
 
Developer: fengchen8086, Project: ditb, Lines: 3, Source: HFileBlock.java


Example 10: getDefaultBlockDecodingContext

import org.apache.hadoop.hbase.io.encoding.HFileBlockDecodingContext; // import the required package/class
/** Get the default decoder for blocks from this file. */
HFileBlockDecodingContext getDefaultBlockDecodingContext();
 
Developer: fengchen8086, Project: ditb, Lines: 3, Source: HFileBlock.java
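Examples 9 and 10 are the two accessor declarations on HFileBlock's internal FSReader interface; Example 1's unpack() picks between them based on whether the block type is ENCODED_DATA. FSReader itself is not public API, so the sketch below uses a hypothetical stand-in interface (all names illustrative) just to show the contract, reusing the default context for both getters to stay self-contained.

import org.apache.hadoop.hbase.io.encoding.HFileBlockDecodingContext;
import org.apache.hadoop.hbase.io.encoding.HFileBlockDefaultDecodingContext;
import org.apache.hadoop.hbase.io.hfile.HFileContext;
import org.apache.hadoop.hbase.io.hfile.HFileContextBuilder;

public class ReaderContextsSketch {
  // Hypothetical stand-in for HFileBlock.FSReader, which is package-private.
  interface BlockReader {
    HFileBlockDecodingContext getBlockDecodingContext();
    HFileBlockDecodingContext getDefaultBlockDecodingContext();
  }

  public static void main(String[] args) {
    HFileContext fileContext = new HFileContextBuilder().build();
    // A real reader would hold a separate encoder-specific context for
    // ENCODED_DATA blocks; this mock returns the default context from both.
    HFileBlockDecodingContext defaultCtx =
        new HFileBlockDefaultDecodingContext(fileContext);

    BlockReader reader = new BlockReader() {
      @Override public HFileBlockDecodingContext getBlockDecodingContext() {
        return defaultCtx;
      }
      @Override public HFileBlockDecodingContext getDefaultBlockDecodingContext() {
        return defaultCtx;
      }
    };

    System.out.println("includes MVCC: " + reader.getDefaultBlockDecodingContext()
        .getHFileContext().isIncludesMvcc());
  }
}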



Note: the org.apache.hadoop.hbase.io.encoding.HFileBlockDecodingContext examples in this article were compiled from source code and documentation hosted on GitHub, MSDocs, and similar platforms, with snippets selected from contributors' open-source projects. Copyright in the source code remains with its original authors; consult each project's license before redistributing or reusing it, and do not republish this compilation without permission.

