This article collects typical usage examples of the Java class org.apache.hadoop.hbase.util.ByteBufferUtils. If you are unsure what ByteBufferUtils does or how to use it, the curated examples below should help.
ByteBufferUtils belongs to the org.apache.hadoop.hbase.util package. Twenty code examples of the class are shown below, sorted by popularity by default. You can upvote the examples you find useful; your feedback helps the system recommend better Java samples.
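Before the examples, a minimal sketch of the utility's core read/write helpers may be useful. It assumes an hbase-common dependency on the classpath; the demo class itself is made up, but putCompressedInt, putInt, readCompressedInt, and toLong are real ByteBufferUtils methods:

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.nio.ByteBuffer;

import org.apache.hadoop.hbase.util.ByteBufferUtils;

public class ByteBufferUtilsDemo {
  public static void main(String[] args) throws Exception {
    // Write a variable-length (compressed) int followed by a fixed-width int.
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    DataOutputStream out = new DataOutputStream(baos);
    ByteBufferUtils.putCompressedInt(out, 300); // variable-length encoding
    ByteBufferUtils.putInt(out, 42);            // plain 4-byte big-endian int
    out.flush();

    // Read both values back from a ByteBuffer view of the written bytes.
    ByteBuffer buf = ByteBuffer.wrap(baos.toByteArray());
    System.out.println(ByteBufferUtils.readCompressedInt(buf)); // 300
    System.out.println(buf.getInt());                           // 42

    // Absolute, position-preserving reads are also available.
    ByteBuffer longBuf = ByteBuffer.allocate(8).putLong(0, 1234L);
    System.out.println(ByteBufferUtils.toLong(longBuf, 0));     // 1234
  }
}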
Example 1: midkey
import org.apache.hadoop.hbase.util.ByteBufferUtils; // import the required package/class
/**
* An approximation to the {@link HFile}'s mid-key. Operates on block
* boundaries, and does not go inside blocks. In other words, returns the
* first key of the middle block of the file.
*
* @return the first key of the middle block
*/
public byte[] midkey() throws IOException {
if (rootCount == 0)
throw new IOException("HFile empty");
byte[] targetMidKey = this.midKey.get();
if (targetMidKey != null) {
return targetMidKey;
}
if (midLeafBlockOffset >= 0) {
if (cachingBlockReader == null) {
throw new IOException("Have to read the middle leaf block but " +
"no block reader available");
}
// Caching, using pread, assuming this is not a compaction.
HFileBlock midLeafBlock = cachingBlockReader.readBlock(
midLeafBlockOffset, midLeafBlockOnDiskSize, true, true, false, true,
BlockType.LEAF_INDEX, null);
ByteBuffer b = midLeafBlock.getBufferWithoutHeader();
int numDataBlocks = b.getInt();
int keyRelOffset = b.getInt(Bytes.SIZEOF_INT * (midKeyEntry + 1));
int keyLen = b.getInt(Bytes.SIZEOF_INT * (midKeyEntry + 2)) -
keyRelOffset;
int keyOffset = Bytes.SIZEOF_INT * (numDataBlocks + 2) + keyRelOffset
+ SECONDARY_INDEX_ENTRY_OVERHEAD;
targetMidKey = ByteBufferUtils.toBytes(b, keyOffset, keyLen);
} else {
// The middle of the root-level index.
targetMidKey = blockKeys[rootCount / 2];
}
this.midKey.set(targetMidKey);
return targetMidKey;
}
Author: fengchen8086 | Project: ditb | Lines: 44 | Source: HFileBlockIndex.java
Example 2: getNonRootIndexedKey
import org.apache.hadoop.hbase.util.ByteBufferUtils; // import the required package/class
/**
* Returns the indexed key at the i-th position in the given non-root index block.
* Positions start at 0.
* @param nonRootIndex buffer holding the non-root index block
* @param i position of the key, starting at 0
* @return the indexed key at position i, or null if i is out of range
*/
private byte[] getNonRootIndexedKey(ByteBuffer nonRootIndex, int i) {
int numEntries = nonRootIndex.getInt(0);
if (i < 0 || i >= numEntries) {
return null;
}
// Entries start after the number of entries and the secondary index.
// The secondary index takes numEntries + 1 ints.
int entriesOffset = Bytes.SIZEOF_INT * (numEntries + 2);
// The target key's offset relative to the end of the secondary index
int targetKeyRelOffset = nonRootIndex.getInt(
Bytes.SIZEOF_INT * (i + 1));
// The offset of the target key in the blockIndex buffer
int targetKeyOffset = entriesOffset // Skip secondary index
+ targetKeyRelOffset // Skip all entries until mid
+ SECONDARY_INDEX_ENTRY_OVERHEAD; // Skip offset and on-disk-size
// We subtract the two consecutive secondary index elements, which
// gives us the size of the whole (offset, onDiskSize, key) tuple. We
// then need to subtract the overhead of offset and onDiskSize.
int targetKeyLength = nonRootIndex.getInt(Bytes.SIZEOF_INT * (i + 2)) -
targetKeyRelOffset - SECONDARY_INDEX_ENTRY_OVERHEAD;
return ByteBufferUtils.toBytes(nonRootIndex, targetKeyOffset, targetKeyLength);
}
Author: fengchen8086 | Project: ditb | Lines: 33 | Source: HFileBlockIndex.java
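To make the offset arithmetic above concrete, here is a self-contained sketch (a hypothetical demo class using plain java.nio, not HBase API) that builds a tiny buffer in the same layout — an entry count, a secondary index of numEntries + 1 relative offsets, then (offset, onDiskSize, key) entries — and recovers the middle key with the same formulas:

import java.nio.ByteBuffer;

public class NonRootIndexLayoutDemo {
  // Overhead of the (offset:long, onDiskSize:int) prefix before each key.
  static final int SECONDARY_INDEX_ENTRY_OVERHEAD = 8 + 4;

  public static void main(String[] args) {
    byte[][] keys = { "aaa".getBytes(), "bbbb".getBytes(), "cc".getBytes() };
    int numEntries = keys.length;

    // Relative offsets of each entry from the end of the secondary index,
    // plus one trailing offset so entry sizes can be computed by subtraction.
    int[] relOffsets = new int[numEntries + 1];
    for (int i = 0; i < numEntries; i++) {
      relOffsets[i + 1] =
          relOffsets[i] + SECONDARY_INDEX_ENTRY_OVERHEAD + keys[i].length;
    }

    ByteBuffer b = ByteBuffer.allocate(4 * (numEntries + 2) + relOffsets[numEntries]);
    b.putInt(numEntries);
    for (int off : relOffsets) {
      b.putInt(off);
    }
    for (byte[] key : keys) {
      b.putLong(0L); // block offset (dummy)
      b.putInt(0);   // on-disk size (dummy)
      b.put(key);
    }

    // Same arithmetic as getNonRootIndexedKey, for i = 1:
    int i = 1;
    int entriesOffset = 4 * (numEntries + 2);
    int keyRelOffset = b.getInt(4 * (i + 1));
    int keyLen = b.getInt(4 * (i + 2)) - keyRelOffset - SECONDARY_INDEX_ENTRY_OVERHEAD;
    int keyOffset = entriesOffset + keyRelOffset + SECONDARY_INDEX_ENTRY_OVERHEAD;
    byte[] key = new byte[keyLen];
    for (int j = 0; j < keyLen; j++) {
      key[j] = b.get(keyOffset + j);
    }
    System.out.println(new String(key)); // prints "bbbb"
  }
}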
Example 3: decodeTags
import org.apache.hadoop.hbase.util.ByteBufferUtils; // import the required package/class
protected void decodeTags() {
current.tagsLength = ByteBufferUtils.readCompressedInt(currentBuffer);
if (tagCompressionContext != null) {
if (current.uncompressTags) {
// Tag compression is in use; uncompress the tags into tagsBuffer
current.ensureSpaceForTags();
try {
current.tagsCompressedLength = tagCompressionContext.uncompressTags(currentBuffer,
current.tagsBuffer, 0, current.tagsLength);
} catch (IOException e) {
throw new RuntimeException("Exception while uncompressing tags", e);
}
} else {
ByteBufferUtils.skip(currentBuffer, current.tagsCompressedLength);
current.uncompressTags = true;// Reset this.
}
current.tagsOffset = -1;
} else {
// When tag compression is not used, avoid copying the tag bytes into tagsBuffer.
// Just record the tags offset so the KV buffer can be built later in getKeyValueBuffer()
current.tagsOffset = currentBuffer.position();
ByteBufferUtils.skip(currentBuffer, current.tagsLength);
}
}
Author: fengchen8086 | Project: ditb | Lines: 25 | Source: BufferedDataBlockEncoder.java
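The uncompressed-tags branch above boils down to a compressed-int length followed by the tag bytes, which the decoder skips rather than copies. A minimal sketch of that wire pattern (hypothetical demo class; the ByteBufferUtils calls are real API):

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.nio.ByteBuffer;

import org.apache.hadoop.hbase.util.ByteBufferUtils;

public class TagsLengthDemo {
  public static void main(String[] args) throws Exception {
    byte[] tags = new byte[] { 1, 2, 3, 4, 5 };
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    DataOutputStream out = new DataOutputStream(baos);
    ByteBufferUtils.putCompressedInt(out, tags.length); // tags length prefix
    out.write(tags);
    out.flush();

    ByteBuffer buf = ByteBuffer.wrap(baos.toByteArray());
    int tagsLength = ByteBufferUtils.readCompressedInt(buf);
    int tagsOffset = buf.position();        // mark where the tag bytes start
    ByteBufferUtils.skip(buf, tagsLength);  // jump over them without copying
    System.out.println(tagsLength + " tag bytes at offset " + tagsOffset);
  }
}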
Example 4: getFirstKeyInBlock
import org.apache.hadoop.hbase.util.ByteBufferUtils; // import the required package/class
@Override
public ByteBuffer getFirstKeyInBlock(ByteBuffer block) {
block.mark();
block.position(Bytes.SIZEOF_INT);
int keyLength = ByteBufferUtils.readCompressedInt(block);
ByteBufferUtils.readCompressedInt(block);
int commonLength = ByteBufferUtils.readCompressedInt(block);
if (commonLength != 0) {
throw new AssertionError("Nonzero common length in the first key in "
+ "block: " + commonLength);
}
int pos = block.position();
block.reset();
ByteBuffer dup = block.duplicate();
dup.position(pos);
dup.limit(pos + keyLength);
return dup.slice();
}
Author: fengchen8086 | Project: ditb | Lines: 19 | Source: PrefixKeyDeltaEncoder.java
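The method assumes a block that starts with a 4-byte size header followed by three compressed ints (key length, value length, common-prefix length) and then the first key's bytes. A self-contained sketch that fabricates such a header and replays the same reads (hypothetical demo class; the size-header value 123 is a stand-in):

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.nio.ByteBuffer;

import org.apache.hadoop.hbase.util.ByteBufferUtils;
import org.apache.hadoop.hbase.util.Bytes;

public class FirstKeyHeaderDemo {
  public static void main(String[] args) throws Exception {
    byte[] key = Bytes.toBytes("rowkey-0");
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    DataOutputStream out = new DataOutputStream(baos);
    out.writeInt(123);                                 // stand-in for the 4-byte size header
    ByteBufferUtils.putCompressedInt(out, key.length); // key length
    ByteBufferUtils.putCompressedInt(out, 0);          // value length
    ByteBufferUtils.putCompressedInt(out, 0);          // common prefix is 0 for the first key
    out.write(key);
    out.flush();

    ByteBuffer block = ByteBuffer.wrap(baos.toByteArray());
    block.position(Bytes.SIZEOF_INT);                  // skip the size header
    int keyLength = ByteBufferUtils.readCompressedInt(block);
    ByteBufferUtils.readCompressedInt(block);          // value length, unused here
    int commonLength = ByteBufferUtils.readCompressedInt(block); // 0
    ByteBuffer dup = block.duplicate();
    dup.limit(dup.position() + keyLength);
    byte[] first = new byte[keyLength];
    dup.slice().get(first);
    System.out.println(Bytes.toString(first) + " common=" + commonLength); // rowkey-0 common=0
  }
}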
Example 5: compressKeyValues
import org.apache.hadoop.hbase.util.ByteBufferUtils; // import the required package/class
@Override
public void compressKeyValues(DataOutputStream out,
ByteBuffer in, boolean includesMemstoreTS) throws IOException {
in.rewind();
ByteBufferUtils.putInt(out, in.limit());
DiffCompressionState previousState = new DiffCompressionState();
DiffCompressionState currentState = new DiffCompressionState();
while (in.hasRemaining()) {
compressSingleKeyValue(previousState, currentState,
out, in);
afterEncodingKeyValue(in, out, includesMemstoreTS);
// swap previousState <-> currentState
DiffCompressionState tmp = previousState;
previousState = currentState;
currentState = tmp;
}
}
Author: fengchen8086 | Project: LCIndex-HBase-0.94.16 | Lines: 19 | Source: DiffKeyDeltaEncoder.java
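The loop above allocates its two state objects once and swaps them after every key-value instead of allocating per iteration. A minimal sketch of that ping-pong pattern in isolation (hypothetical State class standing in for DiffCompressionState, not the HBase encoder):

public class PingPongStateDemo {
  // Hypothetical mutable state standing in for DiffCompressionState.
  static class State {
    int lastValue;
  }

  public static void main(String[] args) {
    int[] input = { 3, 7, 12 };
    State previous = new State();
    State current = new State();
    for (int value : input) {
      current.lastValue = value;
      // Encode the delta against the previous entry instead of the full value.
      System.out.println("delta=" + (current.lastValue - previous.lastValue));
      // Swap previous <-> current: two allocations total for the whole loop.
      State tmp = previous;
      previous = current;
      current = tmp;
    }
  }
}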
Example 6: compressKeyValues
import org.apache.hadoop.hbase.util.ByteBufferUtils; // import the required package/class
@Override
public void compressKeyValues(DataOutputStream out,
ByteBuffer in, boolean includesMemstoreTS) throws IOException {
in.rewind();
ByteBufferUtils.putInt(out, in.limit());
FastDiffCompressionState previousState = new FastDiffCompressionState();
FastDiffCompressionState currentState = new FastDiffCompressionState();
while (in.hasRemaining()) {
compressSingleKeyValue(previousState, currentState,
out, in);
afterEncodingKeyValue(in, out, includesMemstoreTS);
// swap previousState <-> currentState
FastDiffCompressionState tmp = previousState;
previousState = currentState;
currentState = tmp;
}
}
Author: fengchen8086 | Project: LCIndex-HBase-0.94.16 | Lines: 19 | Source: FastDiffDeltaEncoder.java
Example 7: readKeyValueLen
import org.apache.hadoop.hbase.util.ByteBufferUtils; // import the required package/class
protected void readKeyValueLen() {
blockBuffer.mark();
currKeyLen = blockBuffer.getInt();
currValueLen = blockBuffer.getInt();
if (currKeyLen < 0 || currValueLen < 0 || currKeyLen > blockBuffer.limit()
|| currValueLen > blockBuffer.limit()) {
throw new IllegalStateException("Invalid currKeyLen " + currKeyLen + " or currValueLen "
+ currValueLen + ". Block offset: "
+ block.getOffset() + ", block length: " + blockBuffer.limit() + ", position: "
+ blockBuffer.position() + " (without header).");
}
ByteBufferUtils.skip(blockBuffer, currKeyLen + currValueLen);
if (reader.hfileContext.isIncludesTags()) {
// Read short as unsigned, high byte first
currTagsLen = ((blockBuffer.get() & 0xff) << 8) ^ (blockBuffer.get() & 0xff);
if (currTagsLen < 0 || currTagsLen > blockBuffer.limit()) {
throw new IllegalStateException("Invalid currTagsLen " + currTagsLen + ". Block offset: "
+ block.getOffset() + ", block length: " + blockBuffer.limit() + ", position: "
+ blockBuffer.position() + " (without header).");
}
ByteBufferUtils.skip(blockBuffer, currTagsLen);
}
readMvccVersion();
blockBuffer.reset();
}
Author: grokcoder | Project: pbase | Lines: 26 | Source: HFileReaderV3.java
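The tags-length read in this version assembles the two bytes by hand to get an unsigned value; compare Example 10 below, which uses the signed getShort(). A small sketch of the difference (hypothetical demo class):

import java.nio.ByteBuffer;

public class UnsignedShortDemo {
  public static void main(String[] args) {
    ByteBuffer bb = ByteBuffer.wrap(new byte[] { (byte) 0xFF, (byte) 0xFE });
    // Same expression as the currTagsLen read above: high byte first, unsigned.
    int unsigned = ((bb.get() & 0xff) << 8) ^ (bb.get() & 0xff);
    bb.rewind();
    short signed = bb.getShort();
    System.out.println(unsigned); // 65534
    System.out.println(signed);   // -2: sign-extended, would fail the < 0 check
  }
}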
Example 8: readKeyValueLen
import org.apache.hadoop.hbase.util.ByteBufferUtils; // import the required package/class
protected void readKeyValueLen() {
blockBuffer.mark();
currKeyLen = blockBuffer.getInt();
currValueLen = blockBuffer.getInt();
ByteBufferUtils.skip(blockBuffer, currKeyLen + currValueLen);
readMvccVersion();
if (currKeyLen < 0 || currValueLen < 0
|| currKeyLen > blockBuffer.limit()
|| currValueLen > blockBuffer.limit()) {
throw new IllegalStateException("Invalid currKeyLen " + currKeyLen
+ " or currValueLen " + currValueLen + ". Block offset: "
+ block.getOffset() + ", block length: " + blockBuffer.limit()
+ ", position: " + blockBuffer.position() + " (without header).");
}
blockBuffer.reset();
}
Author: grokcoder | Project: pbase | Lines: 17 | Source: HFileReaderV2.java
Example 9: getKeyValueBuffer
import org.apache.hadoop.hbase.util.ByteBufferUtils; // import the required package/class
@Override
public ByteBuffer getKeyValueBuffer() {
ByteBuffer kvBuffer = createKVBuffer();
kvBuffer.putInt(current.keyLength);
kvBuffer.putInt(current.valueLength);
kvBuffer.put(current.keyBuffer, 0, current.keyLength);
ByteBufferUtils.copyFromBufferToBuffer(kvBuffer, currentBuffer, current.valueOffset,
current.valueLength);
if (current.tagsLength > 0) {
// Put short as unsigned
kvBuffer.put((byte) (current.tagsLength >> 8 & 0xff));
kvBuffer.put((byte) (current.tagsLength & 0xff));
if (current.tagsOffset != -1) {
// The offset of the tag bytes in the underlying buffer is marked, so the temp
// buffer (tagsBuffer) was not used.
ByteBufferUtils.copyFromBufferToBuffer(kvBuffer, currentBuffer, current.tagsOffset,
current.tagsLength);
} else {
// When tagsOffset is -1, tag compression was in use and the tags were
// uncompressed into the temp buffer, tagsBuffer; copy them from there.
kvBuffer.put(current.tagsBuffer, 0, current.tagsLength);
}
}
return kvBuffer;
}
Author: grokcoder | Project: pbase | Lines: 26 | Source: BufferedDataBlockEncoder.java
Example 10: readKeyValueLen
import org.apache.hadoop.hbase.util.ByteBufferUtils; // import the required package/class
protected void readKeyValueLen() {
blockBuffer.mark();
currKeyLen = blockBuffer.getInt();
currValueLen = blockBuffer.getInt();
if (currKeyLen < 0 || currValueLen < 0 || currKeyLen > blockBuffer.limit()
|| currValueLen > blockBuffer.limit()) {
throw new IllegalStateException("Invalid currKeyLen " + currKeyLen + " or currValueLen "
+ currValueLen + ". Block offset: "
+ block.getOffset() + ", block length: " + blockBuffer.limit() + ", position: "
+ blockBuffer.position() + " (without header).");
}
ByteBufferUtils.skip(blockBuffer, currKeyLen + currValueLen);
if (reader.hfileContext.isIncludesTags()) {
currTagsLen = blockBuffer.getShort();
if (currTagsLen < 0 || currTagsLen > blockBuffer.limit()) {
throw new IllegalStateException("Invalid currTagsLen " + currTagsLen + ". Block offset: "
+ block.getOffset() + ", block length: " + blockBuffer.limit() + ", position: "
+ blockBuffer.position() + " (without header).");
}
ByteBufferUtils.skip(blockBuffer, currTagsLen);
}
readMvccVersion();
blockBuffer.reset();
}
Author: tenggyut | Project: HIndex | Lines: 25 | Source: HFileReaderV3.java
Example 11: internalEncodeKeyValues
import org.apache.hadoop.hbase.util.ByteBufferUtils; // import the required package/class
@Override
public void internalEncodeKeyValues(DataOutputStream out,
ByteBuffer in, HFileBlockDefaultEncodingContext encodingCtx) throws IOException {
in.rewind();
ByteBufferUtils.putInt(out, in.limit());
DiffCompressionState previousState = new DiffCompressionState();
DiffCompressionState currentState = new DiffCompressionState();
while (in.hasRemaining()) {
compressSingleKeyValue(previousState, currentState,
out, in);
afterEncodingKeyValue(in, out, encodingCtx);
// swap previousState <-> currentState
DiffCompressionState tmp = previousState;
previousState = currentState;
currentState = tmp;
}
}
Author: tenggyut | Project: HIndex | Lines: 19 | Source: DiffKeyDeltaEncoder.java
Example 12: internalEncodeKeyValues
import org.apache.hadoop.hbase.util.ByteBufferUtils; // import the required package/class
@Override
public void internalEncodeKeyValues(DataOutputStream out, ByteBuffer in,
HFileBlockDefaultEncodingContext encodingCtx) throws IOException {
in.rewind();
ByteBufferUtils.putInt(out, in.limit());
FastDiffCompressionState previousState = new FastDiffCompressionState();
FastDiffCompressionState currentState = new FastDiffCompressionState();
while (in.hasRemaining()) {
compressSingleKeyValue(previousState, currentState,
out, in);
afterEncodingKeyValue(in, out, encodingCtx);
// swap previousState <-> currentState
FastDiffCompressionState tmp = previousState;
previousState = currentState;
currentState = tmp;
}
}
Author: tenggyut | Project: HIndex | Lines: 19 | Source: FastDiffDeltaEncoder.java
Example 13: decodeTags
import org.apache.hadoop.hbase.util.ByteBufferUtils; // import the required package/class
protected void decodeTags() {
current.tagsLength = ByteBufferUtils.readCompressedInt(currentBuffer);
if (tagCompressionContext != null) {
if (current.uncompressTags) {
// Tag compression is in use; uncompress the tags into tagsBuffer
current.ensureSpaceForTags();
try {
current.tagsCompressedLength = tagCompressionContext.uncompressTags(currentBuffer,
current.tagsBuffer, 0, current.tagsLength);
} catch (IOException e) {
throw new RuntimeException("Exception while uncompressing tags", e);
}
} else {
ByteBufferUtils.skip(currentBuffer, current.tagsCompressedLength);
current.uncompressTags = true;// Reset this.
}
current.tagsOffset = -1;
} else {
// When tag compression is not used, avoid copying the tag bytes into tagsBuffer.
// Just record the tags offset so the KV buffer can be built later in getKeyValueBuffer()
current.tagsOffset = currentBuffer.position();
ByteBufferUtils.skip(currentBuffer, current.tagsLength);
}
}
Author: tenggyut | Project: HIndex | Lines: 25 | Source: BufferedDataBlockEncoder.java
Example 14: getFirstKeyInBlock
import org.apache.hadoop.hbase.util.ByteBufferUtils; // import the required package/class
@Override
public ByteBuffer getFirstKeyInBlock(ByteBuffer block) {
block.mark();
block.position(Bytes.SIZEOF_INT);
int keyLength = ByteBufferUtils.readCompressedInt(block);
ByteBufferUtils.readCompressedInt(block);
int commonLength = ByteBufferUtils.readCompressedInt(block);
if (commonLength != 0) {
throw new AssertionError("Nonzero common length in the first key in "
+ "block: " + commonLength);
}
int pos = block.position();
block.reset();
return ByteBuffer.wrap(block.array(), block.arrayOffset() + pos, keyLength)
.slice();
}
Author: tenggyut | Project: HIndex | Lines: 17 | Source: PrefixKeyDeltaEncoder.java
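Note the design difference from Example 4: this version wraps block.array(), which requires a heap (array-backed) buffer, while Example 4's duplicate()/slice() approach also works for direct buffers. A quick illustration of why (hypothetical demo class):

import java.nio.ByteBuffer;

public class HeapVsDirectDemo {
  public static void main(String[] args) {
    ByteBuffer heap = ByteBuffer.allocate(16);
    System.out.println(heap.hasArray());   // true: array()/arrayOffset() are usable
    ByteBuffer direct = ByteBuffer.allocateDirect(16);
    System.out.println(direct.hasArray()); // false: direct.array() would throw
                                           // UnsupportedOperationException
  }
}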
Example 15: testSubBuffer
import org.apache.hadoop.hbase.util.ByteBufferUtils; // import the required package/class
@Test
public void testSubBuffer() {
ByteBuffer bb1 = ByteBuffer.allocateDirect(10);
ByteBuffer bb2 = ByteBuffer.allocateDirect(10);
MultiByteBuff multi = new MultiByteBuff(bb1, bb2);
long l1 = 1234L, l2 = 100L;
multi.putLong(l1);
multi.putLong(l2);
multi.rewind();
ByteBuffer sub = multi.asSubByteBuffer(Bytes.SIZEOF_LONG);
assertEquals(bb1, sub);
assertEquals(l1, ByteBufferUtils.toLong(sub, sub.position()));
multi.skip(Bytes.SIZEOF_LONG);
sub = multi.asSubByteBuffer(Bytes.SIZEOF_LONG);
assertNotEquals(bb1, sub);
assertNotEquals(bb2, sub);
assertEquals(l2, ByteBufferUtils.toLong(sub, sub.position()));
multi.rewind();
ObjectIntPair<ByteBuffer> p = new ObjectIntPair<>();
multi.asSubByteBuffer(8, Bytes.SIZEOF_LONG, p);
assertNotEquals(bb1, p.getFirst());
assertNotEquals(bb2, p.getFirst());
assertEquals(0, p.getSecond());
assertEquals(l2, ByteBufferUtils.toLong(p.getFirst(), p.getSecond()));
}
Author: apache | Project: hbase | Lines: 26 | Source: TestMultiByteBuff.java
Example 16: matchingQualifier
import org.apache.hadoop.hbase.util.ByteBufferUtils; // import the required package/class
public static boolean matchingQualifier(final Cell left, final Cell right) {
int lqlength = left.getQualifierLength();
int rqlength = right.getQualifierLength();
if (left instanceof ByteBufferExtendedCell && right instanceof ByteBufferExtendedCell) {
return ByteBufferUtils.equals(((ByteBufferExtendedCell) left).getQualifierByteBuffer(),
((ByteBufferExtendedCell) left).getQualifierPosition(), lqlength,
((ByteBufferExtendedCell) right).getQualifierByteBuffer(),
((ByteBufferExtendedCell) right).getQualifierPosition(), rqlength);
}
if (left instanceof ByteBufferExtendedCell) {
return ByteBufferUtils.equals(((ByteBufferExtendedCell) left).getQualifierByteBuffer(),
((ByteBufferExtendedCell) left).getQualifierPosition(), lqlength,
right.getQualifierArray(), right.getQualifierOffset(), rqlength);
}
if (right instanceof ByteBufferExtendedCell) {
return ByteBufferUtils.equals(((ByteBufferExtendedCell) right).getQualifierByteBuffer(),
((ByteBufferExtendedCell) right).getQualifierPosition(), rqlength,
left.getQualifierArray(), left.getQualifierOffset(), lqlength);
}
return Bytes.equals(left.getQualifierArray(), left.getQualifierOffset(),
lqlength, right.getQualifierArray(), right.getQualifierOffset(),
rqlength);
}
Author: apache | Project: hbase | Lines: 24 | Source: CellUtil.java
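A minimal usage sketch (hypothetical demo class): both cells below are array-backed KeyValue instances, so the comparison falls through to the final Bytes.equals branch rather than the ByteBufferExtendedCell branches:

import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.util.Bytes;

public class MatchingQualifierDemo {
  public static void main(String[] args) {
    Cell left = new KeyValue(Bytes.toBytes("row1"), Bytes.toBytes("f"),
        Bytes.toBytes("q1"), Bytes.toBytes("v1"));
    Cell right = new KeyValue(Bytes.toBytes("row2"), Bytes.toBytes("f"),
        Bytes.toBytes("q1"), Bytes.toBytes("v2"));
    // Rows and values differ, but the qualifiers match byte-for-byte.
    System.out.println(CellUtil.matchingQualifier(left, right)); // true
  }
}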
Example 17: internalEncodeKeyValues
import org.apache.hadoop.hbase.util.ByteBufferUtils; // import the required package/class
@Override
public void internalEncodeKeyValues(DataOutputStream out,
ByteBuffer in, boolean includesMemstoreTS) throws IOException {
in.rewind();
ByteBufferUtils.putInt(out, in.limit());
DiffCompressionState previousState = new DiffCompressionState();
DiffCompressionState currentState = new DiffCompressionState();
while (in.hasRemaining()) {
compressSingleKeyValue(previousState, currentState,
out, in);
afterEncodingKeyValue(in, out, includesMemstoreTS);
// swap previousState <-> currentState
DiffCompressionState tmp = previousState;
previousState = currentState;
currentState = tmp;
}
}
Author: cloud-software-foundation | Project: c5 | Lines: 19 | Source: DiffKeyDeltaEncoder.java
Example 18: matchingFamily
import org.apache.hadoop.hbase.util.ByteBufferUtils; // import the required package/class
public static boolean matchingFamily(final Cell left, final Cell right) {
byte lfamlength = left.getFamilyLength();
byte rfamlength = right.getFamilyLength();
if (left instanceof ByteBufferExtendedCell && right instanceof ByteBufferExtendedCell) {
return ByteBufferUtils.equals(((ByteBufferExtendedCell) left).getFamilyByteBuffer(),
((ByteBufferExtendedCell) left).getFamilyPosition(), lfamlength,
((ByteBufferExtendedCell) right).getFamilyByteBuffer(),
((ByteBufferExtendedCell) right).getFamilyPosition(), rfamlength);
}
if (left instanceof ByteBufferExtendedCell) {
return ByteBufferUtils.equals(((ByteBufferExtendedCell) left).getFamilyByteBuffer(),
((ByteBufferExtendedCell) left).getFamilyPosition(), lfamlength,
right.getFamilyArray(), right.getFamilyOffset(), rfamlength);
}
if (right instanceof ByteBufferExtendedCell) {
return ByteBufferUtils.equals(((ByteBufferExtendedCell) right).getFamilyByteBuffer(),
((ByteBufferExtendedCell) right).getFamilyPosition(), rfamlength,
left.getFamilyArray(), left.getFamilyOffset(), lfamlength);
}
return Bytes.equals(left.getFamilyArray(), left.getFamilyOffset(), lfamlength,
right.getFamilyArray(), right.getFamilyOffset(), rfamlength);
}
Author: apache | Project: hbase | Lines: 23 | Source: CellUtil.java
Example 19: getShort
import org.apache.hadoop.hbase.util.ByteBufferUtils; // import the required package/class
private short getShort(int index, int itemIndex) {
ByteBuffer item = items[itemIndex];
int offsetInItem = index - this.itemBeginPos[itemIndex];
int remainingLen = item.limit() - offsetInItem;
if (remainingLen >= Bytes.SIZEOF_SHORT) {
return ByteBufferUtils.toShort(item, offsetInItem);
}
if (items.length - 1 == itemIndex) {
// The current item is the last one and a full short cannot be read. Throw exception
throw new BufferUnderflowException();
}
ByteBuffer nextItem = items[itemIndex + 1];
// Get available bytes from this item and remaining from next
short l = 0;
for (int i = offsetInItem; i < item.capacity(); i++) {
l = (short) (l << 8);
l = (short) (l ^ (ByteBufferUtils.toByte(item, i) & 0xFF));
}
for (int i = 0; i < Bytes.SIZEOF_SHORT - remainingLen; i++) {
l = (short) (l << 8);
l = (short) (l ^ (ByteBufferUtils.toByte(nextItem, i) & 0xFF));
}
return l;
}
Author: apache | Project: hbase | Lines: 25 | Source: MultiByteBuff.java
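A minimal sketch of the cross-item case this method handles, where the two bytes of a short straddle the boundary between backing buffers (hypothetical demo class; MultiByteBuff and its getShort(int) are real hbase 2.x API):

import java.nio.ByteBuffer;

import org.apache.hadoop.hbase.nio.MultiByteBuff;

public class CrossItemShortDemo {
  public static void main(String[] args) {
    // 0x12 is the last byte of the first buffer, 0x34 the first of the second.
    ByteBuffer bb1 = ByteBuffer.wrap(new byte[] { 0x00, 0x00, 0x12 });
    ByteBuffer bb2 = ByteBuffer.wrap(new byte[] { 0x34, 0x00, 0x00 });
    MultiByteBuff multi = new MultiByteBuff(bb1, bb2);
    // The short at absolute index 2 spans both items, exercising the
    // byte-stitching path shown above.
    short s = multi.getShort(2);
    System.out.println(Integer.toHexString(s & 0xffff)); // 1234
  }
}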
Example 20: put
import org.apache.hadoop.hbase.util.ByteBufferUtils; // import the required package/class
@Override
public SingleByteBuff put(int offset, ByteBuff src, int srcOffset, int length) {
if (src instanceof SingleByteBuff) {
ByteBufferUtils.copyFromBufferToBuffer(((SingleByteBuff) src).buf, this.buf, srcOffset,
offset, length);
} else {
// TODO we can do some optimization here? Call to asSubByteBuffer might
// create a copy.
ObjectIntPair<ByteBuffer> pair = new ObjectIntPair<>();
src.asSubByteBuffer(srcOffset, length, pair);
if (pair.getFirst() != null) {
ByteBufferUtils.copyFromBufferToBuffer(pair.getFirst(), this.buf, pair.getSecond(), offset,
length);
}
}
return this;
}
Author: apache | Project: hbase | Lines: 18 | Source: SingleByteBuff.java
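A small usage sketch of the five-argument copy used above, which works with absolute offsets in both buffers and leaves their positions untouched (hypothetical demo class; the copyFromBufferToBuffer overload is real hbase 2.x API):

import java.nio.ByteBuffer;

import org.apache.hadoop.hbase.util.ByteBufferUtils;

public class CopyBufferDemo {
  public static void main(String[] args) {
    ByteBuffer src = ByteBuffer.allocate(8);
    ByteBuffer dst = ByteBuffer.allocate(8);
    src.putLong(0, 0x1122334455667788L);
    // Copy 4 bytes from src offset 2 into dst offset 0: bytes 0x33445566.
    ByteBufferUtils.copyFromBufferToBuffer(src, dst, 2, 0, 4);
    System.out.println(Integer.toHexString(dst.getInt(0))); // 33445566
  }
}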
Note: the org.apache.hadoop.hbase.util.ByteBufferUtils examples in this article were collected from source-code and documentation platforms such as GitHub and MSDocs. The snippets are drawn from open-source projects contributed by their original authors, who retain copyright over the source code; consult each project's License before redistributing or reusing it. Do not reproduce this article without permission.