Java SegmentedFile Class Code Examples


This article collects typical usage examples of the Java class org.apache.cassandra.io.util.SegmentedFile. If you are wondering what SegmentedFile does, how to use it, or what real-world usage looks like, the curated examples below may help.



The SegmentedFile class belongs to the org.apache.cassandra.io.util package. Twenty code examples of the class are shown below, ordered by popularity.
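
Before the individual examples, here is a minimal sketch of the builder pattern that runs through all of them: obtain a SegmentedFile.Builder for the configured access mode, record candidate segment boundaries while scanning the index, and call complete() to obtain the readable SegmentedFile. The class name SegmentedFileUsageSketch and the method openIndexFile are hypothetical, introduced only for illustration; the calls themselves (getBuilder, addPotentialBoundary, complete, filenameFor) are taken from the examples below, and exact signatures differ between the Cassandra versions quoted.

import org.apache.cassandra.config.DatabaseDescriptor;
import org.apache.cassandra.io.sstable.Component;
import org.apache.cassandra.io.sstable.Descriptor;
import org.apache.cassandra.io.util.SegmentedFile;

// Hypothetical illustration class; not part of Cassandra itself.
public class SegmentedFileUsageSketch
{
    // Build the SegmentedFile for an SSTable's primary index, mirroring the examples below.
    static SegmentedFile openIndexFile(Descriptor descriptor)
    {
        // Pick a builder that matches the configured index access mode (mmap vs. standard I/O).
        SegmentedFile.Builder ibuilder = SegmentedFile.getBuilder(DatabaseDescriptor.getIndexAccessMode());

        // Code that scans the index records candidate segment boundaries as it goes,
        // e.g. ibuilder.addPotentialBoundary(indexPosition); (see buildSummary in Example 19).

        // complete() finalizes the segment boundaries and returns the readable SegmentedFile.
        return ibuilder.complete(descriptor.filenameFor(Component.PRIMARY_INDEX));
    }
}

For compressed data files the examples switch to SegmentedFile.getCompressedBuilder(), and a writer that opens a reader before it is finished (Example 20) calls openEarly(...) on the builder instead of complete(...).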

Example 1: internalOpen

import org.apache.cassandra.io.util.SegmentedFile; // import the required package/class
/**
 * Open a RowIndexedReader which already has its state initialized (by SSTableWriter).
 */
static SSTableReader internalOpen(Descriptor desc,
                                  Set<Component> components,
                                  CFMetaData metadata,
                                  IPartitioner partitioner,
                                  SegmentedFile ifile,
                                  SegmentedFile dfile,
                                  IndexSummary isummary,
                                  IFilter bf,
                                  long maxDataAge,
                                  StatsMetadata sstableMetadata,
                                  OpenReason openReason)
{
    assert desc != null && partitioner != null && ifile != null && dfile != null && isummary != null && bf != null && sstableMetadata != null;
    return new SSTableReader(desc,
                             components,
                             metadata,
                             partitioner,
                             ifile, dfile,
                             isummary,
                             bf,
                             maxDataAge,
                             sstableMetadata,
                             openReason);
}
 
Developer: vcostet, Project: cassandra-kmean, Lines: 28, Source: SSTableReader.java


Example 2: SSTableReader

import org.apache.cassandra.io.util.SegmentedFile; // import the required package/class
private SSTableReader(Descriptor desc,
                      Set<Component> components,
                      CFMetaData metadata,
                      IPartitioner partitioner,
                      SegmentedFile ifile,
                      SegmentedFile dfile,
                      IndexSummary indexSummary,
                      IFilter bloomFilter,
                      long maxDataAge,
                      StatsMetadata sstableMetadata,
                      OpenReason openReason)
{
    this(desc, components, metadata, partitioner, maxDataAge, sstableMetadata, openReason);
    this.ifile = ifile;
    this.dfile = dfile;
    this.indexSummary = indexSummary;
    this.bf = bloomFilter;
    this.setup(false);
}
 
Developer: daidong, Project: GraphTrek, Lines: 20, Source: SSTableReader.java


Example 3: internalOpen

import org.apache.cassandra.io.util.SegmentedFile; // import the required package/class
/**
 * Open a RowIndexedReader which already has its state initialized (by SSTableWriter).
 */
static SSTableReader internalOpen(Descriptor desc,
                                  Set<Component> components,
                                  CFMetaData metadata,
                                  IPartitioner partitioner,
                                  SegmentedFile ifile,
                                  SegmentedFile dfile,
                                  IndexSummary isummary,
                                  IFilter bf,
                                  long maxDataAge,
                                  StatsMetadata sstableMetadata,
                                  boolean isOpenEarly)
{
    assert desc != null && partitioner != null && ifile != null && dfile != null && isummary != null && bf != null && sstableMetadata != null;
    return new SSTableReader(desc,
                             components,
                             metadata,
                             partitioner,
                             ifile, dfile,
                             isummary,
                             bf,
                             maxDataAge,
                             sstableMetadata,
                             isOpenEarly);
}
 
Developer: daidong, Project: GraphTrek, Lines: 28, Source: SSTableReader.java


Example 4: SSTableReader

import org.apache.cassandra.io.util.SegmentedFile; // import the required package/class
private SSTableReader(Descriptor desc,
                      Set<Component> components,
                      CFMetaData metadata,
                      IPartitioner partitioner,
                      SegmentedFile ifile,
                      SegmentedFile dfile,
                      IndexSummary indexSummary,
                      IFilter bloomFilter,
                      long maxDataAge,
                      StatsMetadata sstableMetadata,
                      boolean isOpenEarly)
{
    this(desc, components, metadata, partitioner, maxDataAge, sstableMetadata, isOpenEarly);

    this.ifile = ifile;
    this.dfile = dfile;
    this.indexSummary = indexSummary;
    this.bf = bloomFilter;
}
 
Developer: daidong, Project: GraphTrek, Lines: 20, Source: SSTableReader.java


Example 5: load

import org.apache.cassandra.io.util.SegmentedFile; // import the required package/class
/**
 * Loads ifile, dfile and indexSummary, and optionally recreates the bloom filter.
 * @param saveSummaryIfCreated for bulk loading purposes, if the summary was absent and needed to be built, you can
 *                             avoid persisting it to disk by setting this to false
 */
private void load(boolean recreateBloomFilter, boolean saveSummaryIfCreated) throws IOException
{
    SegmentedFile.Builder ibuilder = SegmentedFile.getBuilder(DatabaseDescriptor.getIndexAccessMode());
    SegmentedFile.Builder dbuilder = compression
                                     ? SegmentedFile.getCompressedBuilder()
                                     : SegmentedFile.getBuilder(DatabaseDescriptor.getDiskAccessMode());

    boolean summaryLoaded = loadSummary(ibuilder, dbuilder);
    if (recreateBloomFilter || !summaryLoaded)
        buildSummary(recreateBloomFilter, ibuilder, dbuilder, summaryLoaded, Downsampling.BASE_SAMPLING_LEVEL);

    ifile = ibuilder.complete(descriptor.filenameFor(Component.PRIMARY_INDEX));
    dfile = dbuilder.complete(descriptor.filenameFor(Component.DATA));
    if (saveSummaryIfCreated && (recreateBloomFilter || !summaryLoaded)) // save summary information to disk
        saveSummary(ibuilder, dbuilder);
}
 
Developer: daidong, Project: GraphTrek, Lines: 22, Source: SSTableReader.java


Example 6: internalOpen

import org.apache.cassandra.io.util.SegmentedFile; // import the required package/class
/**
 * Open a RowIndexedReader which already has its state initialized (by SSTableWriter).
 */
static SSTableReader internalOpen(Descriptor desc,
                                  Set<Component> components,
                                  CFMetaData metadata,
                                  IPartitioner partitioner,
                                  SegmentedFile ifile,
                                  SegmentedFile dfile,
                                  IndexSummary isummary,
                                  IFilter bf,
                                  long maxDataAge,
                                  StatsMetadata sstableMetadata)
{
    assert desc != null && partitioner != null && ifile != null && dfile != null && isummary != null && bf != null && sstableMetadata != null;
    return new SSTableReader(desc,
                             components,
                             metadata,
                             partitioner,
                             ifile, dfile,
                             isummary,
                             bf,
                             maxDataAge,
                             sstableMetadata);
}
 
Developer: rajath26, Project: cassandra-trunk, Lines: 26, Source: SSTableReader.java


Example 7: SSTableReader

import org.apache.cassandra.io.util.SegmentedFile; // import the required package/class
private SSTableReader(Descriptor desc,
                      Set<Component> components,
                      CFMetaData metadata,
                      IPartitioner partitioner,
                      SegmentedFile ifile,
                      SegmentedFile dfile,
                      IndexSummary indexSummary,
                      IFilter bloomFilter,
                      long maxDataAge,
                      StatsMetadata sstableMetadata)
{
    this(desc, components, metadata, partitioner, maxDataAge, sstableMetadata);

    this.ifile = ifile;
    this.dfile = dfile;
    this.indexSummary = indexSummary;
    this.bf = bloomFilter;
}
 
Developer: rajath26, Project: cassandra-trunk, Lines: 19, Source: SSTableReader.java


Example 8: closeAndOpenReader

import org.apache.cassandra.io.util.SegmentedFile; // import the required package/class
public SSTableReader closeAndOpenReader(long maxDataAge) throws IOException
{
    // index and filter
    iwriter.close();

    // main data
    long position = dataFile.getFilePointer();
    dataFile.close(); // calls force
    FileUtils.truncate(dataFile.getPath(), position);

    // write sstable statistics
    writeMetadata(descriptor, estimatedRowSize, estimatedColumnCount, replayPosition);

    // remove the 'tmp' marker from all components
    final Descriptor newdesc = rename(descriptor, components);

    // finalize in-memory state for the reader
    SegmentedFile ifile = iwriter.builder.complete(newdesc.filenameFor(SSTable.COMPONENT_INDEX));
    SegmentedFile dfile = dbuilder.complete(newdesc.filenameFor(SSTable.COMPONENT_DATA));
    SSTableReader sstable = SSTableReader.internalOpen(newdesc, components, metadata, replayPosition, partitioner, ifile, dfile, iwriter.summary, iwriter.bf, maxDataAge, estimatedRowSize, estimatedColumnCount);
    iwriter = null;
    dbuilder = null;
    return sstable;
}
 
Developer: devdattakulkarni, Project: Cassandra-KVPM, Lines: 25, Source: SSTableWriter.java


Example 9: SSTableReader

import org.apache.cassandra.io.util.SegmentedFile; // import the required package/class
private SSTableReader(Descriptor desc,
                      Set<Component> components,
                      CFMetaData metadata,
                      ReplayPosition replayPosition,
                      IPartitioner partitioner,
                      SegmentedFile ifile,
                      SegmentedFile dfile,
                      IndexSummary indexSummary,
                      Filter bloomFilter,
                      long maxDataAge,
                      EstimatedHistogram rowSizes,
                      EstimatedHistogram columnCounts)
throws IOException
{
    super(desc, components, metadata, replayPosition, partitioner, rowSizes, columnCounts);
    this.maxDataAge = maxDataAge;

    this.ifile = ifile;
    this.dfile = dfile;
    this.indexSummary = indexSummary;
    this.bf = bloomFilter;
}
 
Developer: devdattakulkarni, Project: Cassandra-KVPM, Lines: 23, Source: SSTableReader.java


Example 10: loadSummary

import org.apache.cassandra.io.util.SegmentedFile; // import the required package/class
/**
 * Load index summary from Summary.db file if it exists.
 *
 * If the loaded index summary has a different index interval from the current value stored in the schema,
 * the Summary.db file is deleted and this method returns false so the summary can be rebuilt.
 *
 * @param ibuilder
 * @param dbuilder
 * @return true if index summary is loaded successfully from Summary.db file.
 */
public boolean loadSummary(SegmentedFile.Builder ibuilder, SegmentedFile.Builder dbuilder)
{
    File summariesFile = new File(descriptor.filenameFor(Component.SUMMARY));
    if (!summariesFile.exists())
        return false;

    DataInputStream iStream = null;
    try
    {
        iStream = new DataInputStream(new FileInputStream(summariesFile));
        indexSummary = IndexSummary.serializer.deserialize(
                iStream, partitioner, descriptor.version.hasSamplingLevel,
                metadata.getMinIndexInterval(), metadata.getMaxIndexInterval());
        first = partitioner.decorateKey(ByteBufferUtil.readWithLength(iStream));
        last = partitioner.decorateKey(ByteBufferUtil.readWithLength(iStream));
        ibuilder.deserializeBounds(iStream);
        dbuilder.deserializeBounds(iStream);
    }
    catch (IOException e)
    {
        if (indexSummary != null)
            indexSummary.close();
        logger.debug("Cannot deserialize SSTable Summary File {}: {}", summariesFile.getPath(), e.getMessage());
        // corrupted; delete it and fall back to creating a new summary
        FileUtils.closeQuietly(iStream);
        // delete it and fall back to creating a new summary
        FileUtils.deleteWithConfirm(summariesFile);
        return false;
    }
    finally
    {
        FileUtils.closeQuietly(iStream);
    }

    return true;
}
 
Developer: vcostet, Project: cassandra-kmean, Lines: 47, Source: SSTableReader.java


Example 11: SSTableWriter

import org.apache.cassandra.io.util.SegmentedFile; // import the required package/class
public SSTableWriter(String filename,
                     long keyCount,
                     long repairedAt,
                     CFMetaData metadata,
                     IPartitioner<?> partitioner,
                     MetadataCollector sstableMetadataCollector)
{
    super(Descriptor.fromFilename(filename),
          components(metadata),
          metadata,
          partitioner);
    this.repairedAt = repairedAt;
    iwriter = new IndexWriter(keyCount);

    if (compression)
    {
        dataFile = SequentialWriter.open(getFilename(),
                                         descriptor.filenameFor(Component.COMPRESSION_INFO),
                                         metadata.compressionParameters(),
                                         sstableMetadataCollector);
        dbuilder = SegmentedFile.getCompressedBuilder((CompressedSequentialWriter) dataFile);
    }
    else
    {
        dataFile = SequentialWriter.open(new File(getFilename()), new File(descriptor.filenameFor(Component.CRC)));
        dbuilder = SegmentedFile.getBuilder(DatabaseDescriptor.getDiskAccessMode());
    }

    this.sstableMetadataCollector = sstableMetadataCollector;
}
 
Developer: daidong, Project: GraphTrek, Lines: 31, Source: SSTableWriter.java


Example 12: closeAndOpenReader

import org.apache.cassandra.io.util.SegmentedFile; // import the required package/class
public SSTableReader closeAndOpenReader(long maxDataAge, long repairedAt)
{
    Pair<Descriptor, StatsMetadata> p = close(repairedAt);
    Descriptor newdesc = p.left;
    StatsMetadata sstableMetadata = p.right;

    // finalize in-memory state for the reader
    SegmentedFile ifile = iwriter.builder.complete(newdesc.filenameFor(Component.PRIMARY_INDEX));
    SegmentedFile dfile = dbuilder.complete(newdesc.filenameFor(Component.DATA));
    SSTableReader sstable = SSTableReader.internalOpen(newdesc,
                                                       components,
                                                       metadata,
                                                       partitioner,
                                                       ifile,
                                                       dfile,
                                                       iwriter.summary.build(partitioner),
                                                       iwriter.bf,
                                                       maxDataAge,
                                                       sstableMetadata,
                                                       false);
    sstable.first = getMinimalKey(first);
    sstable.last = getMinimalKey(last);
    // try to save the summaries to disk
    sstable.saveSummary(iwriter.builder, dbuilder);
    iwriter = null;
    dbuilder = null;
    return sstable;
}
 
Developer: daidong, Project: GraphTrek, Lines: 29, Source: SSTableWriter.java


Example 13: IndexWriter

import org.apache.cassandra.io.util.SegmentedFile; // import the required package/class
IndexWriter(long keyCount)
{
    indexFile = SequentialWriter.open(new File(descriptor.filenameFor(Component.PRIMARY_INDEX)));
    builder = SegmentedFile.getBuilder(DatabaseDescriptor.getIndexAccessMode());
    summary = new IndexSummaryBuilder(keyCount, metadata.getMinIndexInterval(), Downsampling.BASE_SAMPLING_LEVEL);
    bf = FilterFactory.getFilter(keyCount, metadata.getBloomFilterFpChance(), true);
}
 
Developer: daidong, Project: GraphTrek, Lines: 8, Source: SSTableWriter.java


Example 14: loadSummary

import org.apache.cassandra.io.util.SegmentedFile; // import the required package/class
/**
 * Load index summary from Summary.db file if it exists.
 *
 * If the loaded index summary has a different index interval from the current value stored in the schema,
 * the Summary.db file is deleted and this method returns false so the summary can be rebuilt.
 *
 * @param ibuilder
 * @param dbuilder
 * @return true if index summary is loaded successfully from Summary.db file.
 */
public boolean loadSummary(SegmentedFile.Builder ibuilder, SegmentedFile.Builder dbuilder)
{
    File summariesFile = new File(descriptor.filenameFor(Component.SUMMARY));
    if (!summariesFile.exists())
        return false;

    DataInputStream iStream = null;
    try
    {
        iStream = new DataInputStream(new FileInputStream(summariesFile));
        indexSummary = IndexSummary.serializer.deserialize(iStream, partitioner, descriptor.version.hasSamplingLevel, metadata.getMinIndexInterval(), metadata.getMaxIndexInterval());
        first = partitioner.decorateKey(ByteBufferUtil.readWithLength(iStream));
        last = partitioner.decorateKey(ByteBufferUtil.readWithLength(iStream));
        ibuilder.deserializeBounds(iStream);
        dbuilder.deserializeBounds(iStream);
    }
    catch (IOException e)
    {
        logger.debug("Cannot deserialize SSTable Summary File {}: {}", summariesFile.getPath(), e.getMessage());
        // corrupted; delete it and fall back to creating a new summary
        FileUtils.closeQuietly(iStream);
        // delete it and fall back to creating a new summary
        FileUtils.deleteWithConfirm(summariesFile);
        return false;
    }
    finally
    {
        FileUtils.closeQuietly(iStream);
    }

    return true;
}
 
Developer: daidong, Project: GraphTrek, Lines: 43, Source: SSTableReader.java


Example 15: SSTableWriter

import org.apache.cassandra.io.util.SegmentedFile; // import the required package/class
public SSTableWriter(String filename, long keyCount, CFMetaData metadata, IPartitioner partitioner, ReplayPosition replayPosition) throws IOException
{
    super(Descriptor.fromFilename(filename),
          new HashSet<Component>(Arrays.asList(Component.DATA, Component.FILTER, Component.PRIMARY_INDEX, Component.STATS)),
          metadata,
          replayPosition,
          partitioner,
          SSTable.defaultRowHistogram(),
          SSTable.defaultColumnHistogram());
    iwriter = new IndexWriter(descriptor, partitioner, keyCount);
    dbuilder = SegmentedFile.getBuilder(DatabaseDescriptor.getDiskAccessMode());
    dataFile = new BufferedRandomAccessFile(new File(getFilename()), "rw", BufferedRandomAccessFile.DEFAULT_BUFFER_SIZE, true);
}
 
Developer: devdattakulkarni, Project: Cassandra-KVPM, Lines: 14, Source: SSTableWriter.java


Example 16: IndexWriter

import org.apache.cassandra.io.util.SegmentedFile; // import the required package/class
IndexWriter(Descriptor desc, IPartitioner part, long keyCount) throws IOException
{
    this.desc = desc;
    this.partitioner = part;
    indexFile = new BufferedRandomAccessFile(new File(desc.filenameFor(SSTable.COMPONENT_INDEX)), "rw", 8 * 1024 * 1024, true);
    builder = SegmentedFile.getBuilder(DatabaseDescriptor.getIndexAccessMode());
    summary = new IndexSummary(keyCount);
    bf = BloomFilter.getFilter(keyCount, 15);
}
 
Developer: devdattakulkarni, Project: Cassandra-KVPM, Lines: 10, Source: SSTableWriter.java


Example 17: internalOpen

import org.apache.cassandra.io.util.SegmentedFile; // import the required package/class
/**
 * Open a RowIndexedReader which already has its state initialized (by SSTableWriter).
 */
static SSTableReader internalOpen(Descriptor desc, Set<Component> components, CFMetaData metadata, ReplayPosition replayPosition, IPartitioner partitioner, SegmentedFile ifile, SegmentedFile dfile, IndexSummary isummary, Filter bf, long maxDataAge, EstimatedHistogram rowsize,
                                  EstimatedHistogram columncount) throws IOException
{
    assert desc != null && partitioner != null && ifile != null && dfile != null && isummary != null && bf != null;
    return new SSTableReader(desc, components, metadata, replayPosition, partitioner, ifile, dfile, isummary, bf, maxDataAge, rowsize, columncount);
}
 
Developer: devdattakulkarni, Project: Cassandra-KVPM, Lines: 10, Source: SSTableReader.java


Example 18: load

import org.apache.cassandra.io.util.SegmentedFile; // import the required package/class
/**
 * Loads ifile, dfile and indexSummary, and optionally recreates the bloom filter.
 * @param saveSummaryIfCreated for bulk loading purposes, if the summary was absent and needed to be built, you can
 *                             avoid persisting it to disk by setting this to false
 */
private void load(boolean recreateBloomFilter, boolean saveSummaryIfCreated) throws IOException
{
    SegmentedFile.Builder ibuilder = SegmentedFile.getBuilder(DatabaseDescriptor.getIndexAccessMode());
    SegmentedFile.Builder dbuilder = compression
                                     ? SegmentedFile.getCompressedBuilder()
                                     : SegmentedFile.getBuilder(DatabaseDescriptor.getDiskAccessMode());

    boolean summaryLoaded = loadSummary(ibuilder, dbuilder);
    boolean builtSummary = false;
    if (recreateBloomFilter || !summaryLoaded)
    {
        buildSummary(recreateBloomFilter, ibuilder, dbuilder, summaryLoaded, Downsampling.BASE_SAMPLING_LEVEL);
        builtSummary = true;
    }

    ifile = ibuilder.complete(descriptor.filenameFor(Component.PRIMARY_INDEX));
    dfile = dbuilder.complete(descriptor.filenameFor(Component.DATA));

    // Check for an index summary that was downsampled even though the serialization format doesn't support
    // that.  If it was downsampled, rebuild it.  See CASSANDRA-8993 for details.
    if (!descriptor.version.hasSamplingLevel && !builtSummary && !validateSummarySamplingLevel())
    {
        indexSummary.close();
        ifile.close();
        dfile.close();

        logger.info("Detected erroneously downsampled index summary; will rebuild summary at full sampling");
        FileUtils.deleteWithConfirm(new File(descriptor.filenameFor(Component.SUMMARY)));
        ibuilder = SegmentedFile.getBuilder(DatabaseDescriptor.getIndexAccessMode());
        dbuilder = compression
                   ? SegmentedFile.getCompressedBuilder()
                   : SegmentedFile.getBuilder(DatabaseDescriptor.getDiskAccessMode());
        buildSummary(false, ibuilder, dbuilder, false, Downsampling.BASE_SAMPLING_LEVEL);
        ifile = ibuilder.complete(descriptor.filenameFor(Component.PRIMARY_INDEX));
        dfile = dbuilder.complete(descriptor.filenameFor(Component.DATA));
        saveSummary(ibuilder, dbuilder);
    }
    else if (saveSummaryIfCreated && builtSummary)
    {
        saveSummary(ibuilder, dbuilder);
    }
}
 
Developer: vcostet, Project: cassandra-kmean, Lines: 48, Source: SSTableReader.java


Example 19: buildSummary

import org.apache.cassandra.io.util.SegmentedFile; // import the required package/class
/**
 * Build the index summary (and optionally the bloom filter) by reading through the Index.db file.
 *
 * @param recreateBloomFilter true to recreate the bloom filter
 * @param ibuilder
 * @param dbuilder
 * @param summaryLoaded true if the index summary is already loaded and does not need to be built again
 * @throws IOException
 */
private void buildSummary(boolean recreateBloomFilter, SegmentedFile.Builder ibuilder, SegmentedFile.Builder dbuilder, boolean summaryLoaded, int samplingLevel) throws IOException
{
    // we read the positions in a BRAF so we don't have to worry about an entry spanning a mmap boundary.
    RandomAccessReader primaryIndex = RandomAccessReader.open(new File(descriptor.filenameFor(Component.PRIMARY_INDEX)));

    try
    {
        long indexSize = primaryIndex.length();
        long histogramCount = sstableMetadata.estimatedRowSize.count();
        long estimatedKeys = histogramCount > 0 && !sstableMetadata.estimatedRowSize.isOverflowed()
                             ? histogramCount
                             : estimateRowsFromIndex(primaryIndex); // statistics is supposed to be optional

        try(IndexSummaryBuilder summaryBuilder = summaryLoaded ? null : new IndexSummaryBuilder(estimatedKeys, metadata.getMinIndexInterval(), samplingLevel))
        {

            if (recreateBloomFilter)
                bf = FilterFactory.getFilter(estimatedKeys, metadata.getBloomFilterFpChance(), true);

            long indexPosition;
            while ((indexPosition = primaryIndex.getFilePointer()) != indexSize)
            {
                ByteBuffer key = ByteBufferUtil.readWithShortLength(primaryIndex);
                RowIndexEntry indexEntry = metadata.comparator.rowIndexEntrySerializer().deserialize(primaryIndex, descriptor.version);
                DecoratedKey decoratedKey = partitioner.decorateKey(key);
                if (first == null)
                    first = decoratedKey;
                last = decoratedKey;

                if (recreateBloomFilter)
                    bf.add(decoratedKey.getKey());

                // if summary was already read from disk we don't want to re-populate it using primary index
                if (!summaryLoaded)
                {
                    summaryBuilder.maybeAddEntry(decoratedKey, indexPosition);
                    ibuilder.addPotentialBoundary(indexPosition);
                    dbuilder.addPotentialBoundary(indexEntry.position);
                }
            }

            if (!summaryLoaded)
                indexSummary = summaryBuilder.build(partitioner);
        }
    }
    finally
    {
        FileUtils.closeQuietly(primaryIndex);
    }

    first = getMinimalKey(first);
    last = getMinimalKey(last);
}
 
Developer: vcostet, Project: cassandra-kmean, Lines: 63, Source: SSTableReader.java


Example 20: openEarly

import org.apache.cassandra.io.util.SegmentedFile; // import the required package/class
public SSTableReader openEarly(long maxDataAge)
{
    StatsMetadata sstableMetadata = (StatsMetadata) sstableMetadataCollector.finalizeMetadata(partitioner.getClass().getCanonicalName(),
                                              metadata.getBloomFilterFpChance(),
                                              repairedAt).get(MetadataType.STATS);

    // find the max (exclusive) readable key
    DecoratedKey exclusiveUpperBoundOfReadableIndex = iwriter.getMaxReadableKey(0);
    if (exclusiveUpperBoundOfReadableIndex == null)
        return null;

    // create temp links if they don't already exist
    Descriptor link = descriptor.asType(Descriptor.Type.TEMPLINK);
    if (!new File(link.filenameFor(Component.PRIMARY_INDEX)).exists())
    {
        FileUtils.createHardLink(new File(descriptor.filenameFor(Component.PRIMARY_INDEX)), new File(link.filenameFor(Component.PRIMARY_INDEX)));
        FileUtils.createHardLink(new File(descriptor.filenameFor(Component.DATA)), new File(link.filenameFor(Component.DATA)));
    }

    // open the reader early, giving it a FINAL descriptor type so that it is indistinguishable for other consumers
    SegmentedFile ifile = iwriter.builder.openEarly(link.filenameFor(Component.PRIMARY_INDEX));
    SegmentedFile dfile = dbuilder.openEarly(link.filenameFor(Component.DATA));
    SSTableReader sstable = SSTableReader.internalOpen(descriptor.asType(Descriptor.Type.FINAL),
                                                       components, metadata,
                                                       partitioner, ifile,
                                                       dfile, iwriter.summary.build(partitioner, exclusiveUpperBoundOfReadableIndex),
                                                       iwriter.bf, maxDataAge, sstableMetadata, true);

    // now it's open, find the ACTUAL last readable key (i.e. for which the data file has also been flushed)
    sstable.first = getMinimalKey(first);
    sstable.last = getMinimalKey(exclusiveUpperBoundOfReadableIndex);
    DecoratedKey inclusiveUpperBoundOfReadableData = iwriter.getMaxReadableKey(1);
    if (inclusiveUpperBoundOfReadableData == null)
        return null;
    int offset = 2;
    while (true)
    {
        RowIndexEntry indexEntry = sstable.getPosition(inclusiveUpperBoundOfReadableData, SSTableReader.Operator.GT);
        if (indexEntry != null && indexEntry.position <= dataFile.getLastFlushOffset())
            break;
        inclusiveUpperBoundOfReadableData = iwriter.getMaxReadableKey(offset++);
        if (inclusiveUpperBoundOfReadableData == null)
            return null;
    }
    sstable.last = getMinimalKey(inclusiveUpperBoundOfReadableData);
    return sstable;
}
 
Developer: daidong, Project: GraphTrek, Lines: 48, Source: SSTableWriter.java



Note: The org.apache.cassandra.io.util.SegmentedFile examples in this article were collected from GitHub, MSDocs, and similar source-code and documentation platforms. The snippets are taken from open-source projects contributed by their respective developers; copyright of the source code remains with the original authors, and distribution and use should follow the license of the corresponding project. Do not reproduce without permission.

