
Java MetadataCollector Class Code Examples


This article collects typical usage examples of the Java class org.apache.cassandra.io.sstable.metadata.MetadataCollector. If you have been wondering what MetadataCollector is for, or how to use it, the curated class code examples below should help.



The MetadataCollector class belongs to the org.apache.cassandra.io.sstable.metadata package. Twenty code examples are listed below, sorted by popularity by default. You can upvote the examples you find useful; your ratings help the site recommend better Java code examples.

Example 1: CompressedSequentialWriter

import org.apache.cassandra.io.sstable.metadata.MetadataCollector; // import the required package/class
public CompressedSequentialWriter(File file,
                                  String offsetsPath,
                                  CompressionParameters parameters,
                                  MetadataCollector sstableMetadataCollector)
{
    super(file, parameters.chunkLength());
    this.compressor = parameters.sstableCompressor;

    // buffer for compression should be the same size as buffer itself
    compressed = new ICompressor.WrappedArray(new byte[compressor.initialCompressedBufferLength(buffer.length)]);

    /* Index File (-CompressionInfo.db component) and its header */
    metadataWriter = CompressionMetadata.Writer.open(parameters, offsetsPath);

    this.sstableMetadataCollector = sstableMetadataCollector;
    crcMetadata = new DataIntegrityMetadata.ChecksumWriter(out);
}
 
Developer: vcostet | Project: cassandra-kmean | Lines: 18 | Source: CompressedSequentialWriter.java


Example 2: writeFile

import org.apache.cassandra.io.sstable.metadata.MetadataCollector; // import the required package/class
private SSTableReader writeFile(ColumnFamilyStore cfs, int count)
{
    ArrayBackedSortedColumns cf = ArrayBackedSortedColumns.factory.create(cfs.metadata);
    for (int i = 0; i < count; i++)
        cf.addColumn(Util.column(String.valueOf(i), "a", 1));
    File dir = cfs.directories.getDirectoryForNewSSTables();
    String filename = cfs.getTempSSTablePath(dir);

    SSTableWriter writer = new SSTableWriter(filename,
            0,
            0,
            cfs.metadata,
            StorageService.getPartitioner(),
            new MetadataCollector(cfs.metadata.comparator));

    for (int i = 0; i < count * 5; i++)
        writer.append(StorageService.getPartitioner().decorateKey(ByteBufferUtil.bytes(i)), cf);
    return writer.closeAndOpenReader();
}
 
Developer: vcostet | Project: cassandra-kmean | Lines: 20 | Source: AntiCompactionTest.java


Example 3: writeFile

import org.apache.cassandra.io.sstable.metadata.MetadataCollector; // import the required package/class
private SSTableReader writeFile(ColumnFamilyStore cfs, int count)
{
    ArrayBackedSortedColumns cf = ArrayBackedSortedColumns.factory.create(cfs.metadata);
    for (int i = 0; i < count / 100; i++)
        cf.addColumn(Util.cellname(i), random(0, 1000), 1);
    File dir = cfs.directories.getDirectoryForNewSSTables();
    String filename = cfs.getTempSSTablePath(dir);

    SSTableWriter writer = new SSTableWriter(filename,
            0,
            0,
            cfs.metadata,
            StorageService.getPartitioner(),
            new MetadataCollector(cfs.metadata.comparator));

    for (int i = 0; i < count * 5; i++)
        writer.append(StorageService.getPartitioner().decorateKey(ByteBufferUtil.bytes(i)), cf);
    return writer.closeAndOpenReader();
}
 
Developer: vcostet | Project: cassandra-kmean | Lines: 20 | Source: SSTableRewriterTest.java


Example 4: BigTableWriter

import org.apache.cassandra.io.sstable.metadata.MetadataCollector; // import the required package/class
public BigTableWriter(Descriptor descriptor, 
                      Long keyCount, 
                      Long repairedAt, 
                      CFMetaData metadata, 
                      MetadataCollector metadataCollector, 
                      SerializationHeader header,
                      LifecycleTransaction txn)
{
    super(descriptor, keyCount, repairedAt, metadata, metadataCollector, header);
    txn.trackNew(this); // must track before any files are created

    if (compression)
    {
        dataFile = SequentialWriter.open(getFilename(),
                                         descriptor.filenameFor(Component.COMPRESSION_INFO),
                                         metadata.params.compression,
                                         metadataCollector);
        dbuilder = SegmentedFile.getCompressedBuilder((CompressedSequentialWriter) dataFile);
    }
    else
    {
        dataFile = SequentialWriter.open(new File(getFilename()), new File(descriptor.filenameFor(Component.CRC)));
        dbuilder = SegmentedFile.getBuilder(DatabaseDescriptor.getDiskAccessMode(), false);
    }
    iwriter = new IndexWriter(keyCount, dataFile);
}
 
Developer: scylladb | Project: scylla-tools-java | Lines: 27 | Source: BigTableWriter.java


Example 5: CompressedSequentialWriter

import org.apache.cassandra.io.sstable.metadata.MetadataCollector; // import the required package/class
public CompressedSequentialWriter(File file,
                                  String offsetsPath,
                                  CompressionParams parameters,
                                  MetadataCollector sstableMetadataCollector)
{
    super(file, parameters.chunkLength(), parameters.getSstableCompressor().preferredBufferType());
    this.compressor = parameters.getSstableCompressor();

    // buffer for compression should be the same size as buffer itself
    compressed = compressor.preferredBufferType().allocate(compressor.initialCompressedBufferLength(buffer.capacity()));

    /* Index File (-CompressionInfo.db component) and its header */
    metadataWriter = CompressionMetadata.Writer.open(parameters, offsetsPath);

    this.sstableMetadataCollector = sstableMetadataCollector;
    crcMetadata = new DataIntegrityMetadata.ChecksumWriter(new DataOutputStream(Channels.newOutputStream(channel)));
}
 
Developer: scylladb | Project: scylla-tools-java | Lines: 18 | Source: CompressedSequentialWriter.java


Example 6: createCompactionWriter

import org.apache.cassandra.io.sstable.metadata.MetadataCollector; // import the required package/class
private SSTableWriter createCompactionWriter(long repairedAt)
{
    MetadataCollector sstableMetadataCollector = new MetadataCollector(cfs.getComparator());

    // Get the max timestamp of the precompacted sstables
    // and add the generations of live ancestors
    // -- note that we always only have one SSTable in toUpgrade here:
    for (SSTableReader sstable : toUpgrade)
    {
        sstableMetadataCollector.addAncestor(sstable.descriptor.generation);
        for (Integer i : sstable.getAncestors())
        {
            if (new File(sstable.descriptor.withGeneration(i).filenameFor(Component.DATA)).exists())
                sstableMetadataCollector.addAncestor(i);
        }
        sstableMetadataCollector.sstableLevel(sstable.getSSTableLevel());
    }

    return new SSTableWriter(cfs.getTempSSTablePath(directory), estimatedRows, repairedAt, cfs.metadata, cfs.partitioner, sstableMetadataCollector);
}
 
Developer: daidong | Project: GraphTrek | Lines: 21 | Source: Upgrader.java


Example 7: createCompactionWriter

import org.apache.cassandra.io.sstable.metadata.MetadataCollector; // import the required package/class
private SSTableWriter createCompactionWriter()
{
    MetadataCollector sstableMetadataCollector = new MetadataCollector(cfs.getComparator());

    // Get the max timestamp of the precompacted sstables
    // and add the generations of live ancestors
    for (SSTableReader sstable : toUpgrade)
    {
        sstableMetadataCollector.addAncestor(sstable.descriptor.generation);
        for (Integer i : sstable.getAncestors())
        {
            if (new File(sstable.descriptor.withGeneration(i).filenameFor(Component.DATA)).exists())
                sstableMetadataCollector.addAncestor(i);
        }
    }

    return new SSTableWriter(cfs.getTempSSTablePath(directory), estimatedRows, cfs.metadata, cfs.partitioner, sstableMetadataCollector);
}
 
Developer: mafernandez-stratio | Project: cassandra-cqlMod | Lines: 19 | Source: Upgrader.java


Example 8: CompressedSequentialWriter

import org.apache.cassandra.io.sstable.metadata.MetadataCollector; // import the required package/class
public CompressedSequentialWriter(File file,
                                  String indexFilePath,
                                  boolean skipIOCache,
                                  CompressionParameters parameters,
                                  MetadataCollector sstableMetadataCollector)
{
    super(file, parameters.chunkLength(), skipIOCache);
    this.compressor = parameters.sstableCompressor;

    // buffer for compression should be the same size as buffer itself
    compressed = new ICompressor.WrappedArray(new byte[compressor.initialCompressedBufferLength(buffer.length)]);

    /* Index File (-CompressionInfo.db component) and its header */
    metadataWriter = CompressionMetadata.Writer.open(indexFilePath);
    metadataWriter.writeHeader(parameters);

    this.sstableMetadataCollector = sstableMetadataCollector;
}
 
Developer: mafernandez-stratio | Project: cassandra-cqlMod | Lines: 19 | Source: CompressedSequentialWriter.java


Example 9: CompressedSequentialWriter

import org.apache.cassandra.io.sstable.metadata.MetadataCollector; // import the required package/class
public CompressedSequentialWriter(File file,
                                  String offsetsPath,
                                  boolean skipIOCache,
                                  CompressionParameters parameters,
                                  MetadataCollector sstableMetadataCollector)
{
    super(file, parameters.chunkLength(), skipIOCache);
    this.compressor = parameters.sstableCompressor;

    // buffer for compression should be the same size as buffer itself
    compressed = new ICompressor.WrappedArray(new byte[compressor.initialCompressedBufferLength(buffer.length)]);

    /* Index File (-CompressionInfo.db component) and its header */
    metadataWriter = CompressionMetadata.Writer.open(offsetsPath);
    metadataWriter.writeHeader(parameters);

    this.sstableMetadataCollector = sstableMetadataCollector;
    crcMetadata = new DataIntegrityMetadata.ChecksumWriter(out);
}
 
Developer: rajath26 | Project: cassandra-trunk | Lines: 20 | Source: CompressedSequentialWriter.java


Example 10: create

import org.apache.cassandra.io.sstable.metadata.MetadataCollector; // import the required package/class
@SuppressWarnings("resource") // SimpleSSTableMultiWriter closes writer
public static SSTableMultiWriter create(Descriptor descriptor,
                                        long keyCount,
                                        long repairedAt,
                                        CFMetaData cfm,
                                        MetadataCollector metadataCollector,
                                        SerializationHeader header,
                                        LifecycleTransaction txn)
{
    SSTableWriter writer = SSTableWriter.create(descriptor, keyCount, repairedAt, cfm, metadataCollector, header, txn);
    return new SimpleSSTableMultiWriter(writer, txn);
}
 
Developer: Netflix | Project: sstable-adaptor | Lines: 13 | Source: SimpleSSTableMultiWriter.java


Example 11: open

import org.apache.cassandra.io.sstable.metadata.MetadataCollector; // import the required package/class
@Override
public SSTableWriter open(Descriptor descriptor,
                          long keyCount,
                          long repairedAt,
                          CFMetaData metadata,
                          MetadataCollector metadataCollector,
                          SerializationHeader header,
                          LifecycleTransaction txn)
{
    return new BigTableWriter(descriptor, keyCount, repairedAt, metadata, metadataCollector, header);
}
 
Developer: Netflix | Project: sstable-adaptor | Lines: 12 | Source: BigFormat.java


Example 12: BigTableWriter

import org.apache.cassandra.io.sstable.metadata.MetadataCollector; // import the required package/class
public BigTableWriter(Descriptor descriptor,
                      long keyCount,
                      long repairedAt,
                      CFMetaData metadata,
                      MetadataCollector metadataCollector,
                      SerializationHeader header)
{
    super(descriptor, keyCount, repairedAt, metadata, metadataCollector, header);
    //txn.trackNew(this); // must track before any files are created

    if (compression)
    {
        dataFile = new CompressedSequentialWriter(getFilename(),
                                         descriptor.filenameFor(Component.COMPRESSION_INFO),
                                         descriptor.filenameFor(descriptor.digestComponent),
                                         writerOption,
                                         metadata.params.compression,
                                         metadataCollector, descriptor.getConfiguration());
    }
    else
    {
        dataFile = new ChecksummedSequentialWriter(getFilename(),
                descriptor.filenameFor(Component.CRC),
                descriptor.filenameFor(descriptor.digestComponent),
                writerOption,
                descriptor.getConfiguration());
    }
    dbuilder = new FileHandle.Builder(descriptor.filenameFor(Component.DATA))
                             .withConfiguration(descriptor.getConfiguration())
                             .compressed(compression);
    //chunkCache.ifPresent(dbuilder::withChunkCache);
    iwriter = new IndexWriter(keyCount);

    columnIndexWriter = new ColumnIndex(this.header, dataFile, descriptor.version, this.observers,
                                        getRowIndexEntrySerializer().indexInfoSerializer());
}
 
Developer: Netflix | Project: sstable-adaptor | Lines: 37 | Source: BigTableWriter.java


Example 13: SSTableWriter

import org.apache.cassandra.io.sstable.metadata.MetadataCollector; // import the required package/class
protected SSTableWriter(Descriptor descriptor,
                        long keyCount,
                        long repairedAt,
                        CFMetaData metadata,
                        MetadataCollector metadataCollector,
                        SerializationHeader header)
{
    super(descriptor, components(metadata), metadata, DatabaseDescriptor.getDiskOptimizationStrategy());
    this.keyCount = keyCount;
    this.repairedAt = repairedAt;
    this.metadataCollector = metadataCollector;
    this.header = header != null ? header : SerializationHeader.makeWithoutStats(metadata); //null header indicates streaming from pre-3.0 sstable
    this.rowIndexEntrySerializer = descriptor.version.getSSTableFormat().getIndexSerializer(metadata, descriptor.version, header);
    this.observers = Collections.emptySet();
}
 
Developer: Netflix | Project: sstable-adaptor | Lines: 16 | Source: SSTableWriter.java


Example 14: create

import org.apache.cassandra.io.sstable.metadata.MetadataCollector; // import the required package/class
public static SSTableWriter create(Descriptor descriptor,
                                   Long keyCount,
                                   Long repairedAt,
                                   CFMetaData metadata,
                                   MetadataCollector metadataCollector,
                                   SerializationHeader header,
                                   LifecycleTransaction txn)
{
    Factory writerFactory = descriptor.getFormat().getWriterFactory();
    return writerFactory.open(descriptor, keyCount, repairedAt, metadata, metadataCollector, header, txn);
}
 
Developer: Netflix | Project: sstable-adaptor | Lines: 12 | Source: SSTableWriter.java


Example 15: open

import org.apache.cassandra.io.sstable.metadata.MetadataCollector; // import the required package/class
public abstract SSTableWriter open(Descriptor descriptor,
                                   long keyCount,
                                   long repairedAt,
                                   CFMetaData metadata,
                                   MetadataCollector metadataCollector,
                                   SerializationHeader header,
                                   LifecycleTransaction txn);
 
Developer: Netflix | Project: sstable-adaptor | Lines: 8 | Source: SSTableWriter.java


Example 16: create

import org.apache.cassandra.io.sstable.metadata.MetadataCollector; // import the required package/class
@SuppressWarnings("resource") // log and writer closed during doPostCleanup
public static SSTableTxnWriter create(CFMetaData cfm,
                                      Descriptor descriptor,
                                      long keyCount,
                                      long repairedAt,
                                      int sstableLevel,
                                      SerializationHeader header)
{
    // if the column family store does not exist, we create a new default SSTableMultiWriter to use:
    LifecycleTransaction txn = LifecycleTransaction.offline(OperationType.WRITE);
    MetadataCollector collector = new MetadataCollector(cfm.comparator).sstableLevel(sstableLevel);
    SSTableMultiWriter writer = SimpleSSTableMultiWriter.create(descriptor, keyCount, repairedAt, cfm, collector, header, txn);
    return new SSTableTxnWriter(writer);
}
 
Developer: Netflix | Project: sstable-adaptor | Lines: 15 | Source: SSTableTxnWriter.java


Example 17: createWithNoLogging

import org.apache.cassandra.io.sstable.metadata.MetadataCollector; // import the required package/class
public static SSTableTxnWriter createWithNoLogging(CFMetaData cfm,
                                                   Descriptor descriptor,
                                                   long keyCount,
                                                   long repairedAt,
                                                   int sstableLevel,
                                                   SerializationHeader header)
{
    // if the column family store does not exist, we create a new default SSTableMultiWriter to use:
    LifecycleTransaction txn = LifecycleTransaction.offline(OperationType.CLEANUP);
    MetadataCollector collector = new MetadataCollector(cfm.comparator).sstableLevel(sstableLevel);
    SSTableMultiWriter writer = SimpleSSTableMultiWriter.create(descriptor, keyCount, repairedAt, cfm, collector, header, txn);
    return new SSTableTxnWriter(writer);
}
 
Developer: Netflix | Project: sstable-adaptor | Lines: 14 | Source: SSTableTxnWriter.java


Example 18: CompressedSequentialWriter

import org.apache.cassandra.io.sstable.metadata.MetadataCollector; // import the required package/class
/**
 * Create a CompressedSequentialWriter, optionally writing a digest file.
 *
 * @param file File to write
 * @param offsetsPath File name to write compression metadata to
 * @param digestFile File to write the digest to (may be null)
 * @param option Write option (buffer size and type will be set to match the compression params)
 * @param parameters Compression parameters
 * @param sstableMetadataCollector Metadata collector
 * @param conf Configuration for the underlying writer
 */
public CompressedSequentialWriter(String file,
                                  String offsetsPath,
                                  String digestFile,
                                  SequentialWriterOption option,
                                  CompressionParams parameters,
                                  MetadataCollector sstableMetadataCollector,
                                  Configuration conf)
{
    super(file,
            SequentialWriterOption.newBuilder()
                        .bufferSize(option.bufferSize())
                        .bufferType(option.bufferType())
                        // the compression params' chunk length and buffer type override the values set above
                        .bufferSize(parameters.chunkLength())
                        .bufferType(parameters.getSstableCompressor().preferredBufferType())
                        .finishOnClose(option.finishOnClose())
                        .build(),
            conf);
    this.compressor = parameters.getSstableCompressor();
    this.digestFile = Optional.ofNullable(digestFile);

    // buffer for compression should be the same size as buffer itself
    compressed = compressor.preferredBufferType().allocate(compressor.initialCompressedBufferLength(buffer.capacity()));

    /* Index File (-CompressionInfo.db component) and its header */
    metadataWriter = CompressionMetadata.Writer.open(parameters, offsetsPath, conf);

    this.sstableMetadataCollector = sstableMetadataCollector;
    crcMetadata = new ChecksumWriter(new DataOutputStream(Channels.newOutputStream(channel)), conf);
}
 
Developer: Netflix | Project: sstable-adaptor | Lines: 40 | Source: CompressedSequentialWriter.java


Example 19: createSSTableWriter

import org.apache.cassandra.io.sstable.metadata.MetadataCollector; // import the required package/class
public static SSTableWriter createSSTableWriter(final Descriptor inputSSTableDescriptor,
                                                final CFMetaData outCfmMetaData,
                                                final SSTableReader inputSSTable) {
    final String sstableDirectory = System.getProperty("user.dir") + "/cassandra/compresseddata";
    LOGGER.info("Output directory: " + sstableDirectory);

    final File outputDirectory = new File(sstableDirectory + File.separatorChar
            + inputSSTableDescriptor.ksname
            + File.separatorChar + inputSSTableDescriptor.cfname);

    if (!outputDirectory.exists() && !outputDirectory.mkdirs()) {
        throw new FSWriteError(new IOException("failed to create tmp directory"),
                outputDirectory.getAbsolutePath());
    }

    final SSTableFormat.Type sstableFormat = SSTableFormat.Type.BIG;

    final BigTableWriter writer = new BigTableWriter(
            new Descriptor(
                    sstableFormat.info.getLatestVersion().getVersion(),
                    outputDirectory.getAbsolutePath(),
                    inputSSTableDescriptor.ksname, inputSSTableDescriptor.cfname,
                    inputSSTableDescriptor.generation,
                    sstableFormat,
                    inputSSTableDescriptor.getConfiguration()),
            inputSSTable.getTotalRows(), 0L, outCfmMetaData,
            new MetadataCollector(outCfmMetaData.comparator)
                    .sstableLevel(inputSSTable.getSSTableMetadata().sstableLevel),
            new SerializationHeader(true,
                    outCfmMetaData, outCfmMetaData.partitionColumns(),
                    org.apache.cassandra.db.rows.EncodingStats.NO_STATS));

    return writer;
}
 
Developer: Netflix | Project: sstable-adaptor | Lines: 35 | Source: SSTableUtils.java


Example 20: createFlushWriter

import org.apache.cassandra.io.sstable.metadata.MetadataCollector; // import the required package/class
public SSTableWriter createFlushWriter(String filename) throws ExecutionException, InterruptedException
{
    MetadataCollector sstableMetadataCollector = new MetadataCollector(cfs.metadata.comparator).replayPosition(context);
    return new SSTableWriter(filename,
                             rows.size(),
                             ActiveRepairService.UNREPAIRED_SSTABLE,
                             cfs.metadata,
                             cfs.partitioner,
                             sstableMetadataCollector);
}
 
Developer: vcostet | Project: cassandra-kmean | Lines: 11 | Source: Memtable.java



Note: the org.apache.cassandra.io.sstable.metadata.MetadataCollector examples in this article were collected from open-source projects and documentation platforms such as GitHub and MSDocs. Copyright of the source code remains with the original authors; consult each project's license before distributing or using the code. Do not reproduce without permission.

