Java DataBlockEncoder Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.hbase.io.encoding.DataBlockEncoder. If you are looking for concrete examples of how the DataBlockEncoder class is used in practice, the selected code samples below should help.



The DataBlockEncoder class belongs to the org.apache.hadoop.hbase.io.encoding package. A total of 16 code examples of the class are shown below, sorted by popularity by default.
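
Most of the examples follow the same basic pattern: obtain a DataBlockEncoder from a DataBlockEncoding value, build an encoding context, and pass it a buffer of serialized KeyValues. The short sketch below illustrates that pattern. It is not one of the collected examples; it simply mirrors the calls seen in Examples 4, 6, and 16 (encoding.getEncoder(), newDataBlockEncodingContext, encodeKeyValues, getUncompressedBytesWithHeader), and the exact method signatures differ between HBase releases, so treat it as a version-dependent outline rather than canonical code.

// A minimal sketch of the common pattern (not one of the collected examples).
// Signatures follow the HBase versions used in Examples 4, 6, and 16; other
// releases expose different overloads, so adjust to your version.
import java.io.IOException;
import java.nio.ByteBuffer;

import org.apache.hadoop.hbase.io.compress.Compression;
import org.apache.hadoop.hbase.io.encoding.DataBlockEncoder;
import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
import org.apache.hadoop.hbase.io.encoding.HFileBlockEncodingContext;
import org.apache.hadoop.hbase.io.hfile.HFileContext;
import org.apache.hadoop.hbase.io.hfile.HFileContextBuilder;

public class DataBlockEncoderSketch {
  /**
   * Encodes a raw buffer of serialized KeyValues with the given encoding.
   * rawKeyValues and dummyHeader are hypothetical inputs supplied by the caller.
   */
  static byte[] encode(ByteBuffer rawKeyValues, byte[] dummyHeader,
      DataBlockEncoding encoding) throws IOException {
    DataBlockEncoder encoder = encoding.getEncoder();   // e.g. DataBlockEncoding.FAST_DIFF
    HFileContext meta = new HFileContextBuilder()
        .withCompression(Compression.Algorithm.NONE)
        .withIncludesMvcc(false)
        .withIncludesTags(false)
        .build();
    HFileBlockEncodingContext ctx =
        encoder.newDataBlockEncodingContext(encoding, dummyHeader, meta);
    encoder.encodeKeyValues(rawKeyValues, ctx);          // encode into the context
    return ctx.getUncompressedBytesWithHeader();         // encoded block, header included
  }
}

In HBase itself this wiring lives in HFileDataBlockEncoderImpl, which several of the examples below are taken from.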

Example 1: encodeBufferToHFileBlockBuffer

import org.apache.hadoop.hbase.io.encoding.DataBlockEncoder; // import the required package/class
private ByteBuffer encodeBufferToHFileBlockBuffer(ByteBuffer in,
    DataBlockEncoding algo, boolean includesMemstoreTS,
    byte[] dummyHeader) {
  ByteArrayOutputStream encodedStream = new ByteArrayOutputStream();
  DataOutputStream dataOut = new DataOutputStream(encodedStream);
  DataBlockEncoder encoder = algo.getEncoder();
  try {
    encodedStream.write(dummyHeader);
    algo.writeIdInBytes(dataOut);
    encoder.compressKeyValues(dataOut, in,
        includesMemstoreTS);
  } catch (IOException e) {
    throw new RuntimeException(String.format("Bug in data block encoder " +
        "'%s', it probably requested too much data", algo.toString()), e);
  }
  return ByteBuffer.wrap(encodedStream.toByteArray());
}
 
Developer: fengchen8086 | Project: LCIndex-HBase-0.94.16 | Lines: 18 | Source: HFileDataBlockEncoderImpl.java


Example 2: updateCurrentBlock

import org.apache.hadoop.hbase.io.encoding.DataBlockEncoder; // import the required package/class
/**
 * Updates the current block to be the given {@link HFileBlock}. Seeks to
 * the first key/value pair.
 *
 * @param newBlock the block to make current
 */
private void updateCurrentBlock(HFileBlock newBlock) {
  block = newBlock;

  // sanity checks
  if (block.getBlockType() != BlockType.ENCODED_DATA) {
    throw new IllegalStateException(
        "EncodedScannerV2 works only on encoded data blocks");
  }

  short dataBlockEncoderId = block.getDataBlockEncodingId();
  if (dataBlockEncoder == null ||
      !DataBlockEncoding.isCorrectEncoder(dataBlockEncoder,
          dataBlockEncoderId)) {
    DataBlockEncoder encoder =
        DataBlockEncoding.getDataBlockEncoderById(dataBlockEncoderId);
    setDataBlockEncoder(encoder);
  }

  seeker.setCurrentBuffer(getEncodedBuffer(newBlock));
  blockFetches++;

  // Reset the next indexed key
  this.nextIndexedKey = null;
}
 
Developer: fengchen8086 | Project: LCIndex-HBase-0.94.16 | Lines: 31 | Source: HFileReaderV2.java


Example 3: updateCurrentBlock

import org.apache.hadoop.hbase.io.encoding.DataBlockEncoder; // import the required package/class
/**
 * Updates the current block to be the given {@link HFileBlock}. Seeks to
 * the first key/value pair.
 *
 * @param newBlock the block to make current
 */
private void updateCurrentBlock(HFileBlock newBlock) {
  block = newBlock;

  // sanity checks
  if (block.getBlockType() != BlockType.ENCODED_DATA) {
    throw new IllegalStateException(
        "EncodedScannerV2 works only on encoded data blocks");
  }

  short dataBlockEncoderId = block.getDataBlockEncodingId();
  if (dataBlockEncoder == null ||
      !DataBlockEncoding.isCorrectEncoder(dataBlockEncoder,
          dataBlockEncoderId)) {
    DataBlockEncoder encoder =
        DataBlockEncoding.getDataBlockEncoderById(dataBlockEncoderId);
    setDataBlockEncoder(encoder);
  }

  seeker.setCurrentBuffer(getEncodedBuffer(newBlock));
  blockFetches++;
}
 
Developer: zwqjsj0404 | Project: HBase-Research | Lines: 28 | Source: HFileReaderV2.java


Example 4: newDataBlockEncodingContext

import org.apache.hadoop.hbase.io.encoding.DataBlockEncoder; // import the required package/class
@Override
public HFileBlockEncodingContext newDataBlockEncodingContext(
    byte[] dummyHeader, HFileContext fileContext) {
  DataBlockEncoder encoder = encoding.getEncoder();
  if (encoder != null) {
    return encoder.newDataBlockEncodingContext(encoding, dummyHeader, fileContext);
  }
  return new HFileBlockDefaultEncodingContext(null, dummyHeader, fileContext);
}
 
Developer: fengchen8086 | Project: ditb | Lines: 10 | Source: HFileDataBlockEncoderImpl.java


Example 5: newDataBlockDecodingContext

import org.apache.hadoop.hbase.io.encoding.DataBlockEncoder; // import the required package/class
@Override
public HFileBlockDecodingContext newDataBlockDecodingContext(HFileContext fileContext) {
  DataBlockEncoder encoder = encoding.getEncoder();
  if (encoder != null) {
    return encoder.newDataBlockDecodingContext(fileContext);
  }
  return new HFileBlockDefaultDecodingContext(fileContext);
}
 
Developer: fengchen8086 | Project: ditb | Lines: 9 | Source: HFileDataBlockEncoderImpl.java


Example 6: encodeBufferToHFileBlockBuffer

import org.apache.hadoop.hbase.io.encoding.DataBlockEncoder; // import the required package/class
/**
 * Encode a block of key value pairs.
 *
 * @param in input data to encode
 * @param algo encoding algorithm
 * @param encodeCtx where the output data will be stored
 */
private void encodeBufferToHFileBlockBuffer(ByteBuffer in, DataBlockEncoding algo,
    HFileBlockEncodingContext encodeCtx) {
  DataBlockEncoder encoder = algo.getEncoder();
  try {
    encoder.encodeKeyValues(in, encodeCtx);
  } catch (IOException e) {
    throw new RuntimeException(String.format(
        "Bug in data block encoder "
            + "'%s', it probably requested too much data, " +
            "exception message: %s.",
            algo.toString(), e.getMessage()), e);
  }
}
 
Developer: tenggyut | Project: HIndex | Lines: 21 | Source: HFileDataBlockEncoderImpl.java


Example 7: encodeBufferToHFileBlockBuffer

import org.apache.hadoop.hbase.io.encoding.DataBlockEncoder; // import the required package/class
/**
 * Encode a block of key value pairs.
 *
 * @param in input data to encode
 * @param algo encoding algorithm
 * @param includesMemstoreTS includes memstore timestamp or not
 * @param encodeCtx where the output data will be stored
 */
private void encodeBufferToHFileBlockBuffer(ByteBuffer in,
    DataBlockEncoding algo, boolean includesMemstoreTS,
    HFileBlockEncodingContext encodeCtx) {
  DataBlockEncoder encoder = algo.getEncoder();
  try {
    encoder.encodeKeyValues(in, includesMemstoreTS, encodeCtx);
  } catch (IOException e) {
    throw new RuntimeException(String.format(
        "Bug in data block encoder "
            + "'%s', it probably requested too much data, " +
            "exception message: %s.",
            algo.toString(), e.getMessage()), e);
  }
}
 
Developer: cloud-software-foundation | Project: c5 | Lines: 23 | Source: HFileDataBlockEncoderImpl.java


Example 8: newDataBlockEncodingContext

import org.apache.hadoop.hbase.io.encoding.DataBlockEncoder; // import the required package/class
@Override
public HFileBlockEncodingContext newDataBlockEncodingContext(
    Algorithm compressionAlgorithm,  byte[] dummyHeader) {
  DataBlockEncoder encoder = encoding.getEncoder();
  if (encoder != null) {
    return encoder.newDataBlockEncodingContext(
      compressionAlgorithm, encoding, dummyHeader);
  }
  return new HFileBlockDefaultEncodingContext(
    compressionAlgorithm, null, dummyHeader);
}
 
Developer: cloud-software-foundation | Project: c5 | Lines: 12 | Source: HFileDataBlockEncoderImpl.java


Example 9: newDataBlockDecodingContext

import org.apache.hadoop.hbase.io.encoding.DataBlockEncoder; // import the required package/class
@Override
public HFileBlockDecodingContext newDataBlockDecodingContext(
    Algorithm compressionAlgorithm) {
  DataBlockEncoder encoder = encoding.getEncoder();
  if (encoder != null) {
    return encoder.newDataBlockDecodingContext(compressionAlgorithm);
  }
  return new HFileBlockDefaultDecodingContext(compressionAlgorithm);
}
 
Developer: cloud-software-foundation | Project: c5 | Lines: 10 | Source: HFileDataBlockEncoderImpl.java


Example 10: newOnDiskDataBlockEncodingContext

import org.apache.hadoop.hbase.io.encoding.DataBlockEncoder; // import the required package/class
@Override
public HFileBlockEncodingContext newOnDiskDataBlockEncodingContext(
    Algorithm compressionAlgorithm,  byte[] dummyHeader) {
  if (onDisk != null) {
    DataBlockEncoder encoder = onDisk.getEncoder();
    if (encoder != null) {
      return encoder.newDataBlockEncodingContext(
          compressionAlgorithm, onDisk, dummyHeader);
    }
  }
  return new HFileBlockDefaultEncodingContext(compressionAlgorithm,
      null, dummyHeader);
}
 
Developer: daidong | Project: DominoHBase | Lines: 14 | Source: HFileDataBlockEncoderImpl.java


Example 11: newOnDiskDataBlockDecodingContext

import org.apache.hadoop.hbase.io.encoding.DataBlockEncoder; // import the required package/class
@Override
public HFileBlockDecodingContext newOnDiskDataBlockDecodingContext(
    Algorithm compressionAlgorithm) {
  if (onDisk != null) {
    DataBlockEncoder encoder = onDisk.getEncoder();
    if (encoder != null) {
      return encoder.newDataBlockDecodingContext(
          compressionAlgorithm);
    }
  }
  return new HFileBlockDefaultDecodingContext(compressionAlgorithm);
}
 
Developer: daidong | Project: DominoHBase | Lines: 13 | Source: HFileDataBlockEncoderImpl.java


Example 12: checkStatistics

import org.apache.hadoop.hbase.io.encoding.DataBlockEncoder; // import the required package/class
/**
 * Check statistics for given HFile for different data block encoders.
 * @param scanner scanner over the file to be compressed.
 * @param kvLimit maximum number of KeyValues to process.
 * @throws IOException thrown if scanner is invalid
 */
public void checkStatistics(final KeyValueScanner scanner, final int kvLimit)
    throws IOException {
  scanner.seek(KeyValue.LOWESTKEY);

  KeyValue currentKV;

  byte[] previousKey = null;
  byte[] currentKey;

  DataBlockEncoding[] encodings = DataBlockEncoding.values();

  ByteArrayOutputStream uncompressedOutputStream =
      new ByteArrayOutputStream();

  int j = 0;
  while ((currentKV = KeyValueUtil.ensureKeyValue(scanner.next())) != null && j < kvLimit) {
    // Iterates through key/value pairs
    j++;
    currentKey = currentKV.getKey();
    if (previousKey != null) {
      for (int i = 0; i < previousKey.length && i < currentKey.length &&
          previousKey[i] == currentKey[i]; ++i) {
        totalKeyRedundancyLength++;
      }
    }

    uncompressedOutputStream.write(currentKV.getBuffer(),
        currentKV.getOffset(), currentKV.getLength());

    previousKey = currentKey;

    int kLen = currentKV.getKeyLength();
    int vLen = currentKV.getValueLength();
    int cfLen = currentKV.getFamilyLength(currentKV.getFamilyOffset());
    int restLen = currentKV.getLength() - kLen - vLen;

    totalKeyLength += kLen;
    totalValueLength += vLen;
    totalPrefixLength += restLen;
    totalCFLength += cfLen;
  }

  rawKVs = uncompressedOutputStream.toByteArray();
  boolean useTag = (currentKV.getTagsLength() > 0);
  for (DataBlockEncoding encoding : encodings) {
    if (encoding == DataBlockEncoding.NONE) {
      continue;
    }
    DataBlockEncoder d = encoding.getEncoder();
    HFileContext meta = new HFileContextBuilder()
                        .withCompression(Compression.Algorithm.NONE)
                        .withIncludesMvcc(includesMemstoreTS)
                        .withIncludesTags(useTag).build();
    codecs.add(new EncodedDataBlock(d, encoding, rawKVs, meta ));
  }
}
 
Developer: fengchen8086 | Project: ditb | Lines: 63 | Source: DataBlockEncodingTool.java


Example 13: setDataBlockEncoder

import org.apache.hadoop.hbase.io.encoding.DataBlockEncoder; // import the required package/class
private void setDataBlockEncoder(DataBlockEncoder dataBlockEncoder) {
  this.dataBlockEncoder = dataBlockEncoder;
  seeker = dataBlockEncoder.createSeeker(reader.getComparator(),
      includesMemstoreTS);
}
 
Developer: fengchen8086 | Project: LCIndex-HBase-0.94.16 | Lines: 6 | Source: HFileReaderV2.java


Example 14: checkStatistics

import org.apache.hadoop.hbase.io.encoding.DataBlockEncoder; // import the required package/class
/**
 * Check statistics for given HFile for different data block encoders.
 * @param scanner scanner over the file to be compressed.
 * @param kvLimit maximum number of KeyValues to process.
 * @throws IOException thrown if scanner is invalid
 */
public void checkStatistics(final KeyValueScanner scanner, final int kvLimit)
    throws IOException {
  scanner.seek(KeyValue.LOWESTKEY);

  KeyValue currentKv;

  byte[] previousKey = null;
  byte[] currentKey;

  List<DataBlockEncoder> dataBlockEncoders =
      DataBlockEncoding.getAllEncoders();

  for (DataBlockEncoder d : dataBlockEncoders) {
    codecs.add(new EncodedDataBlock(d, includesMemstoreTS));
  }

  int j = 0;
  while ((currentKv = scanner.next()) != null && j < kvLimit) {
    // Iterates through key/value pairs
    j++;
    currentKey = currentKv.getKey();
    if (previousKey != null) {
      for (int i = 0; i < previousKey.length && i < currentKey.length &&
          previousKey[i] == currentKey[i]; ++i) {
        totalKeyRedundancyLength++;
      }
    }

    for (EncodedDataBlock codec : codecs) {
      codec.addKv(currentKv);
    }

    previousKey = currentKey;

    totalPrefixLength += currentKv.getLength() - currentKv.getKeyLength() -
        currentKv.getValueLength();
    totalKeyLength += currentKv.getKeyLength();
    totalValueLength += currentKv.getValueLength();
  }
}
 
Developer: fengchen8086 | Project: LCIndex-HBase-0.94.16 | Lines: 47 | Source: DataBlockEncodingTool.java


Example 15: checkStatistics

import org.apache.hadoop.hbase.io.encoding.DataBlockEncoder; // import the required package/class
/**
 * Check statistics for given HFile for different data block encoders.
 * @param scanner scanner over the file to be compressed.
 * @param kvLimit maximum number of KeyValues to process.
 * @throws IOException thrown if scanner is invalid
 */
public void checkStatistics(final KeyValueScanner scanner, final int kvLimit)
    throws IOException {
  scanner.seek(KeyValue.LOWESTKEY);

  KeyValue currentKV;

  byte[] previousKey = null;
  byte[] currentKey;

  DataBlockEncoding[] encodings = DataBlockEncoding.values();

  ByteArrayOutputStream uncompressedOutputStream =
      new ByteArrayOutputStream();

  int j = 0;
  while ((currentKV = scanner.next()) != null && j < kvLimit) {
    // Iterates through key/value pairs
    j++;
    currentKey = currentKV.getKey();
    if (previousKey != null) {
      for (int i = 0; i < previousKey.length && i < currentKey.length &&
          previousKey[i] == currentKey[i]; ++i) {
        totalKeyRedundancyLength++;
      }
    }

    uncompressedOutputStream.write(currentKV.getBuffer(),
        currentKV.getOffset(), currentKV.getLength());

    previousKey = currentKey;

    int kLen = currentKV.getKeyLength();
    int vLen = currentKV.getValueLength();
    int cfLen = currentKV.getFamilyLength(currentKV.getFamilyOffset());
    int restLen = currentKV.getLength() - kLen - vLen;

    totalKeyLength += kLen;
    totalValueLength += vLen;
    totalPrefixLength += restLen;
    totalCFLength += cfLen;
  }

  rawKVs = uncompressedOutputStream.toByteArray();
  boolean useTag = (currentKV.getTagsLength() > 0);
  for (DataBlockEncoding encoding : encodings) {
    if (encoding == DataBlockEncoding.NONE) {
      continue;
    }
    DataBlockEncoder d = encoding.getEncoder();
    HFileContext meta = new HFileContextBuilder()
                        .withCompression(Compression.Algorithm.NONE)
                        .withIncludesMvcc(includesMemstoreTS)
                        .withIncludesTags(useTag).build();
    codecs.add(new EncodedDataBlock(d, encoding, rawKVs, meta ));
  }
}
 
Developer: tenggyut | Project: HIndex | Lines: 63 | Source: DataBlockEncodingTool.java


Example 16: writeEncodedBlock

import org.apache.hadoop.hbase.io.encoding.DataBlockEncoder; // import the required package/class
static void writeEncodedBlock(Algorithm algo, DataBlockEncoding encoding,
     DataOutputStream dos, final List<Integer> encodedSizes,
    final List<ByteBuffer> encodedBlocks, int blockId, 
    boolean includesMemstoreTS, byte[] dummyHeader, boolean useTag) throws IOException {
  ByteArrayOutputStream baos = new ByteArrayOutputStream();
  DoubleOutputStream doubleOutputStream =
      new DoubleOutputStream(dos, baos);
  writeTestKeyValues(doubleOutputStream, blockId, includesMemstoreTS, useTag);
  ByteBuffer rawBuf = ByteBuffer.wrap(baos.toByteArray());
  rawBuf.rewind();

  DataBlockEncoder encoder = encoding.getEncoder();
  int headerLen = dummyHeader.length;
  byte[] encodedResultWithHeader = null;
  HFileContext meta = new HFileContextBuilder()
                      .withCompression(algo)
                      .withIncludesMvcc(includesMemstoreTS)
                      .withIncludesTags(useTag)
                      .build();
  if (encoder != null) {
    HFileBlockEncodingContext encodingCtx = encoder.newDataBlockEncodingContext(encoding,
        dummyHeader, meta);
    encoder.encodeKeyValues(rawBuf, encodingCtx);
    encodedResultWithHeader =
        encodingCtx.getUncompressedBytesWithHeader();
  } else {
    HFileBlockDefaultEncodingContext defaultEncodingCtx = new HFileBlockDefaultEncodingContext(
        encoding, dummyHeader, meta);
    byte[] rawBufWithHeader =
        new byte[rawBuf.array().length + headerLen];
    System.arraycopy(rawBuf.array(), 0, rawBufWithHeader,
        headerLen, rawBuf.array().length);
    defaultEncodingCtx.compressAfterEncodingWithBlockType(rawBufWithHeader,
        BlockType.DATA);
    encodedResultWithHeader =
      defaultEncodingCtx.getUncompressedBytesWithHeader();
  }
  final int encodedSize =
      encodedResultWithHeader.length - headerLen;
  if (encoder != null) {
    // We need to account for the two-byte encoding algorithm ID that
    // comes after the 24-byte block header but before encoded KVs.
    headerLen += DataBlockEncoding.ID_SIZE;
  }
  byte[] encodedDataSection =
      new byte[encodedResultWithHeader.length - headerLen];
  System.arraycopy(encodedResultWithHeader, headerLen,
      encodedDataSection, 0, encodedDataSection.length);
  final ByteBuffer encodedBuf =
      ByteBuffer.wrap(encodedDataSection);
  encodedSizes.add(encodedSize);
  encodedBlocks.add(encodedBuf);
}
 
Developer: tenggyut | Project: HIndex | Lines: 54 | Source: TestHFileBlock.java



Note: The org.apache.hadoop.hbase.io.encoding.DataBlockEncoder examples in this article were collected from GitHub and other source-code and documentation platforms. The snippets come from open-source projects; copyright remains with their original authors, and distribution and use should follow each project's license. Please do not republish without permission.

