This article collects and organizes typical usage examples of the Java class org.apache.hadoop.hbase.io.encoding.HFileBlockDecodingContext. If you have been wondering what HFileBlockDecodingContext does, how to use it, or where to find it in real code, the curated class examples below may help.
The HFileBlockDecodingContext class belongs to the org.apache.hadoop.hbase.io.encoding package. Ten code examples of the class are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your votes help the system recommend better Java code examples.
Example 1: unpack
import org.apache.hadoop.hbase.io.encoding.HFileBlockDecodingContext; // import the required package/class
/**
 * Retrieves the decompressed/decrypted view of this block. An encoded block remains in its
 * encoded structure. Internal structures are shared between instances where applicable.
 */
HFileBlock unpack(HFileContext fileContext, FSReader reader) throws IOException {
  if (!fileContext.isCompressedOrEncrypted()) {
    // TODO: cannot use our own fileContext here because HFileBlock(ByteBuffer, boolean),
    // which is used for block serialization to L2 cache, does not preserve encoding and
    // encryption details.
    return this;
  }

  HFileBlock unpacked = new HFileBlock(this);
  unpacked.allocateBuffer(); // allocates space for the decompressed block

  HFileBlockDecodingContext ctx = blockType == BlockType.ENCODED_DATA
      ? reader.getBlockDecodingContext()
      : reader.getDefaultBlockDecodingContext();

  ByteBuff dup = this.buf.duplicate();
  dup.position(this.headerSize());
  dup = dup.slice();
  ctx.prepareDecoding(unpacked.getOnDiskSizeWithoutHeader(),
      unpacked.getUncompressedSizeWithoutHeader(), unpacked.getBufferWithoutHeader(), dup);
  return unpacked;
}
Author: apache, Project: hbase, Lines: 27, Source: HFileBlock.java
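A hedged caller-side sketch of Example 1 (not one of the collected snippets): since unpack() and FSReader are package-private, a helper like the one below would have to live in the org.apache.hadoop.hbase.io.hfile package; the name toUsableBlock and the parameter names are illustrative only.
// Hypothetical helper; assumes hbase-server on the classpath and that this class
// sits in org.apache.hadoop.hbase.io.hfile (unpack() is not public).
static HFileBlock toUsableBlock(HFileBlock onDiskBlock, HFileContext fileContext,
    HFileBlock.FSReader reader) throws IOException {
  // Returns the same instance when there is nothing to undo, otherwise a
  // decompressed/decrypted copy suitable for scanning and caching.
  return onDiskBlock.unpack(fileContext, reader);
}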
Example 2: newDataBlockDecodingContext
import org.apache.hadoop.hbase.io.encoding.HFileBlockDecodingContext; // import the required package/class
@Override
public HFileBlockDecodingContext newDataBlockDecodingContext(HFileContext fileContext) {
  DataBlockEncoder encoder = encoding.getEncoder();
  if (encoder != null) {
    return encoder.newDataBlockDecodingContext(fileContext);
  }
  return new HFileBlockDefaultDecodingContext(fileContext);
}
Author: fengchen8086, Project: ditb, Lines: 9, Source: HFileDataBlockEncoderImpl.java
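A hedged usage sketch for Example 2 (not one of the collected snippets; fileContext is an HFileContext assumed to be built elsewhere): the reader asks the encoder implementation for a decoding context once, then reuses it for every encoded data block in the file.
// Hypothetical usage; FAST_DIFF is just an example encoding.
HFileDataBlockEncoder encoderImpl = new HFileDataBlockEncoderImpl(DataBlockEncoding.FAST_DIFF);
HFileBlockDecodingContext ctx = encoderImpl.newDataBlockDecodingContext(fileContext);
// With an encoding configured this is the encoder's own context; with
// DataBlockEncoding.NONE it falls back to HFileBlockDefaultDecodingContext.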
Example 3: decodeKeyValues
import org.apache.hadoop.hbase.io.encoding.HFileBlockDecodingContext; // import the required package/class
/**
 * I don't think this method is called during normal HBase operation, so efficiency is not
 * important.
 */
public ByteBuffer decodeKeyValues(DataInputStream source, int allocateHeaderLength,
    int skipLastBytes, HFileBlockDecodingContext decodingCtx) throws IOException {
  ByteBuffer sourceAsBuffer = ByteBufferUtils.drainInputStreamToBuffer(source); // waste
  sourceAsBuffer.mark();
  PrefixTreeBlockMeta blockMeta = new PrefixTreeBlockMeta(sourceAsBuffer);
  sourceAsBuffer.rewind();
  int numV1BytesWithHeader = allocateHeaderLength + blockMeta.getNumKeyValueBytes();
  byte[] keyValueBytesWithHeader = new byte[numV1BytesWithHeader];
  ByteBuffer result = ByteBuffer.wrap(keyValueBytesWithHeader);
  result.rewind();
  CellSearcher searcher = null;
  try {
    boolean includesMvcc = decodingCtx.getHFileContext().isIncludesMvcc();
    searcher = DecoderFactory.checkOut(sourceAsBuffer, includesMvcc);
    while (searcher.advance()) {
      KeyValue currentCell = KeyValueUtil.copyToNewKeyValue(searcher.current());
      // needs to be modified for DirectByteBuffers. no existing methods to
      // write VLongs to byte[]
      int offset = result.arrayOffset() + result.position();
      System.arraycopy(currentCell.getBuffer(), currentCell.getOffset(), result.array(), offset,
          currentCell.getLength());
      int keyValueLength = KeyValueUtil.length(currentCell);
      ByteBufferUtils.skip(result, keyValueLength);
      offset += keyValueLength; // dead store: offset is recomputed at the top of each iteration
      if (includesMvcc) {
        ByteBufferUtils.writeVLong(result, currentCell.getMvccVersion());
      }
    }
    result.position(result.limit()); // make it appear as if we were appending
    return result;
  } finally {
    DecoderFactory.checkIn(searcher);
  }
}
Author: fengchen8086, Project: ditb, Lines: 39, Source: PrefixTreeCodec.java
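The mark()/rewind() dance at the top of decodeKeyValues is easy to miss: the block metadata is parsed from the buffer first, then the position is reset so the decoder sees the whole block again. A tiny self-contained java.nio illustration of the same pattern (not from the collected snippets):
import java.nio.ByteBuffer;

public class MarkRewindDemo {
  public static void main(String[] args) {
    ByteBuffer buf = ByteBuffer.wrap(new byte[] { 10, 20, 30, 40 });
    buf.mark();                     // remember the current position (0)
    byte metaByte = buf.get();      // a "metadata parse" consumes some bytes
    System.out.println("meta = " + metaByte + ", position = " + buf.position());
    buf.rewind();                   // back to position 0 for the real decode pass
    System.out.println("after rewind, position = " + buf.position());
  }
}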
Example 4: createSeeker
import org.apache.hadoop.hbase.io.encoding.HFileBlockDecodingContext; // import the required package/class
/**
 * Is this the correct handling of an illegal comparator? How can we prevent that from getting
 * all the way to this point?
 */
@Override
public EncodedSeeker createSeeker(KVComparator comparator,
    HFileBlockDecodingContext decodingCtx) {
  if (comparator instanceof RawBytesComparator) {
    throw new IllegalArgumentException("comparator must be KeyValue.KeyComparator");
  } else if (comparator instanceof MetaComparator) {
    throw new IllegalArgumentException(
        "DataBlockEncoding.PREFIX_TREE not compatible with hbase:meta table");
  }
  return new PrefixTreeSeeker(decodingCtx.getHFileContext().isIncludesMvcc());
}
Author: fengchen8086, Project: ditb, Lines: 16, Source: PrefixTreeCodec.java
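A hedged illustration of the guard in Example 4 (not one of the collected snippets; decodingCtx is assumed to exist, and KeyValue.META_COMPARATOR / PrefixTreeCodec belong to the HBase 1.x era this snippet comes from): passing the meta-table comparator is rejected up front.
// Hypothetical; exercises only the rejection path shown above.
PrefixTreeCodec codec = new PrefixTreeCodec();
try {
  codec.createSeeker(KeyValue.META_COMPARATOR, decodingCtx);
} catch (IllegalArgumentException expected) {
  // PREFIX_TREE cannot be used for the hbase:meta table
}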
Example 5: newDataBlockDecodingContext
import org.apache.hadoop.hbase.io.encoding.HFileBlockDecodingContext; // import the required package/class
@Override
public HFileBlockDecodingContext newDataBlockDecodingContext(
    Algorithm compressionAlgorithm) {
  DataBlockEncoder encoder = encoding.getEncoder();
  if (encoder != null) {
    return encoder.newDataBlockDecodingContext(compressionAlgorithm);
  }
  return new HFileBlockDefaultDecodingContext(compressionAlgorithm);
}
Author: cloud-software-foundation, Project: c5, Lines: 10, Source: HFileDataBlockEncoderImpl.java
Example 6: newOnDiskDataBlockDecodingContext
import org.apache.hadoop.hbase.io.encoding.HFileBlockDecodingContext; // import the required package/class
@Override
public HFileBlockDecodingContext newOnDiskDataBlockDecodingContext(
    Algorithm compressionAlgorithm) {
  if (onDisk != null) {
    DataBlockEncoder encoder = onDisk.getEncoder();
    if (encoder != null) {
      return encoder.newDataBlockDecodingContext(compressionAlgorithm);
    }
  }
  return new HFileBlockDefaultDecodingContext(compressionAlgorithm);
}
Author: daidong, Project: DominoHBase, Lines: 13, Source: HFileDataBlockEncoderImpl.java
Example 7: newDataBlockDecodingContext
import org.apache.hadoop.hbase.io.encoding.HFileBlockDecodingContext; // import the required package/class
@Override
public HFileBlockDecodingContext newDataBlockDecodingContext(HFileContext meta) {
  return new HFileBlockDefaultDecodingContext(meta);
}
Author: fengchen8086, Project: ditb, Lines: 5, Source: NoOpDataBlockEncoder.java
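A hedged contrast with the encoder-backed variants above (not one of the collected snippets; meta is an HFileContext assumed to exist): the no-op encoder has no block encoding to undo, so it always hands back the default context.
// Hypothetical usage; NoOpDataBlockEncoder exposes a shared INSTANCE singleton.
HFileBlockDecodingContext ctx =
    NoOpDataBlockEncoder.INSTANCE.newDataBlockDecodingContext(meta);
// ctx is an HFileBlockDefaultDecodingContext: decompression/decryption only.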
Example 8: unpack
import org.apache.hadoop.hbase.io.encoding.HFileBlockDecodingContext; // import the required package/class
/**
 * Retrieves the decompressed/decrypted view of this block. An encoded block remains in its
 * encoded structure. Internal structures are shared between instances where applicable.
 */
HFileBlock unpack(HFileContext fileContext, FSReader reader) throws IOException {
  if (!fileContext.isCompressedOrEncrypted()) {
    // TODO: cannot use our own fileContext here because HFileBlock(ByteBuffer, boolean),
    // which is used for block serialization to L2 cache, does not preserve encoding and
    // encryption details.
    return this;
  }

  HFileBlock unpacked = new HFileBlock(this);
  unpacked.allocateBuffer(); // allocates space for the decompressed block

  HFileBlockDecodingContext ctx = blockType == BlockType.ENCODED_DATA
      ? reader.getBlockDecodingContext()
      : reader.getDefaultBlockDecodingContext();

  ByteBuffer dup = this.buf.duplicate();
  dup.position(this.headerSize());
  dup = dup.slice();
  ctx.prepareDecoding(unpacked.getOnDiskSizeWithoutHeader(),
      unpacked.getUncompressedSizeWithoutHeader(), unpacked.getBufferWithoutHeader(), dup);

  // Preserve the next block's header bytes in the new block if we have them.
  if (unpacked.hasNextBlockHeader()) {
    // Both buffers are limited up to the checksum bytes, excluding the next block's header.
    // The copyFromBufferToBuffer() call below uses positional reads/writes when either
    // buffer is a DirectByteBuffer, so we only change the limit on duplicates: no bytes
    // are copied here, only new ByteBuffer views are created.
    ByteBuffer inDup = this.buf.duplicate();
    inDup.limit(inDup.limit() + headerSize());
    ByteBuffer outDup = unpacked.buf.duplicate();
    outDup.limit(outDup.limit() + unpacked.headerSize());
    ByteBufferUtils.copyFromBufferToBuffer(
        outDup,
        inDup,
        this.onDiskDataSizeWithHeader,
        unpacked.headerSize() + unpacked.uncompressedSizeWithoutHeader
            + unpacked.totalChecksumBytes(),
        unpacked.headerSize());
  }
  return unpacked;
}
Author: fengchen8086, Project: ditb, Lines: 45, Source: HFileBlock.java
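The limit-widening trick in Example 8 deserves a closer look: duplicate() shares the backing bytes but keeps an independent position and limit, so the code can temporarily expose the trailing header region on a copy without disturbing the original view. A self-contained java.nio illustration (not from the collected snippets):
import java.nio.ByteBuffer;

public class DuplicateLimitDemo {
  public static void main(String[] args) {
    ByteBuffer original = ByteBuffer.allocate(16);
    original.limit(8);                     // callers see only the first 8 bytes

    ByteBuffer dup = original.duplicate(); // shared bytes, independent position/limit
    dup.limit(dup.limit() + 4);            // widen the view to 12 bytes on the dup only

    System.out.println("original limit = " + original.limit()); // still 8
    System.out.println("dup limit      = " + dup.limit());      // 12
  }
}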
Example 9: getBlockDecodingContext
import org.apache.hadoop.hbase.io.encoding.HFileBlockDecodingContext; // import the required package/class
/** Get a decoder for {@link BlockType#ENCODED_DATA} blocks from this file. */
HFileBlockDecodingContext getBlockDecodingContext();
Author: fengchen8086, Project: ditb, Lines: 3, Source: HFileBlock.java
Example 10: getDefaultBlockDecodingContext
import org.apache.hadoop.hbase.io.encoding.HFileBlockDecodingContext; // import the required package/class
/** Get the default decoder for blocks from this file. */
HFileBlockDecodingContext getDefaultBlockDecodingContext();
Author: fengchen8086, Project: ditb, Lines: 3, Source: HFileBlock.java
Note: the org.apache.hadoop.hbase.io.encoding.HFileBlockDecodingContext class examples in this article were collected from GitHub, MSDocs, and other source-code and documentation hosting platforms, and the snippets were selected from open-source projects contributed by their authors. Copyright in the source code remains with the original authors; consult the corresponding project's License before redistributing or reusing it.