
Java LogMergePolicy Class Code Examples


This article collects typical usage examples of the Java class org.apache.lucene.index.LogMergePolicy. If you have been wondering what exactly LogMergePolicy is for, how to use it, or where to find working examples, the curated code samples below should help.



The LogMergePolicy class belongs to the org.apache.lucene.index package. Nineteen code examples of the class are shown below, sorted by popularity by default.
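
Before diving into the examples, here is a minimal, self-contained sketch of the pattern most of them share: pick a concrete LogMergePolicy subclass (LogMergePolicy itself is abstract; LogDocMergePolicy and LogByteSizeMergePolicy are the two implementations), tune its merge factor, and install it on an IndexWriterConfig. This sketch assumes a Lucene 5.x-style API, which matches most of the examples below; the StandardAnalyzer and RAMDirectory choices are illustrative assumptions, not taken from any of the examples.

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.LogDocMergePolicy;
import org.apache.lucene.index.LogMergePolicy;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;

public class LogMergePolicyDemo {
  public static void main(String[] args) throws Exception {
    // LogMergePolicy is abstract; instantiate a concrete subclass.
    LogMergePolicy mp = new LogDocMergePolicy();
    mp.setMergeFactor(10);               // merge this many segments at a time
    mp.setCalibrateSizeByDeletes(true);  // size segments net of deleted docs

    IndexWriterConfig iwc = new IndexWriterConfig(new StandardAnalyzer());
    iwc.setMergePolicy(mp);              // install the policy on the writer config

    Directory dir = new RAMDirectory();  // in-memory directory, for illustration only
    try (IndexWriter writer = new IndexWriter(dir, iwc)) {
      // add documents here; segment merges now follow the configured policy
    }
  }
}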

Example 1: newSortingMergePolicy

import org.apache.lucene.index.LogMergePolicy; // import the required package/class
static MergePolicy newSortingMergePolicy(Sort sort) {
  // usually create a MP with a low merge factor so that many merges happen
  MergePolicy mp;
  int thingToDo = random().nextInt(3);
  if (thingToDo == 0) {
    TieredMergePolicy tmp = newTieredMergePolicy(random());
    final int numSegs = TestUtil.nextInt(random(), 3, 5);
    tmp.setSegmentsPerTier(numSegs);
    tmp.setMaxMergeAtOnce(TestUtil.nextInt(random(), 2, numSegs));
    mp = tmp;
  } else if (thingToDo == 1) {
    LogMergePolicy lmp = newLogMergePolicy(random());
    lmp.setMergeFactor(TestUtil.nextInt(random(), 3, 5));
    mp = lmp;
  } else {
    // just a regular random one from LTC (could be alcoholic etc)
    mp = newMergePolicy();
  }
  // wrap it with a sorting mp
  return new SortingMergePolicy(mp, sort);
}
 
Developer: europeana, Project: search, Lines: 22, Source: TestSortingMergePolicy.java


Example 2: reduceOpenFiles

import org.apache.lucene.index.LogMergePolicy; // import the required package/class
/** just tries to configure things to keep the open file
 * count lowish */
public static void reduceOpenFiles(IndexWriter w) {
  // keep number of open files lowish
  MergePolicy mp = w.getConfig().getMergePolicy();
  if (mp instanceof LogMergePolicy) {
    LogMergePolicy lmp = (LogMergePolicy) mp;
    lmp.setMergeFactor(Math.min(5, lmp.getMergeFactor()));
    lmp.setNoCFSRatio(1.0);
  } else if (mp instanceof TieredMergePolicy) {
    TieredMergePolicy tmp = (TieredMergePolicy) mp;
    tmp.setMaxMergeAtOnce(Math.min(5, tmp.getMaxMergeAtOnce()));
    tmp.setSegmentsPerTier(Math.min(5, tmp.getSegmentsPerTier()));
    tmp.setNoCFSRatio(1.0);
  }
  MergeScheduler ms = w.getConfig().getMergeScheduler();
  if (ms instanceof ConcurrentMergeScheduler) {
    // wtf... shouldnt it be even lower since its 1 by default?!?!
    ((ConcurrentMergeScheduler) ms).setMaxMergesAndThreads(3, 2);
  }
}
 
Developer: europeana, Project: search, Lines: 22, Source: TestUtil.java


Example 3: testSubclassConcurrentMergeScheduler

import org.apache.lucene.index.LogMergePolicy; // import the required package/class
public void testSubclassConcurrentMergeScheduler() throws IOException {
  MockDirectoryWrapper dir = newMockDirectory();
  dir.failOn(new FailOnlyOnMerge());

  Document doc = new Document();
  Field idField = newStringField("id", "", Field.Store.YES);
  doc.add(idField);
  
  IndexWriter writer = new IndexWriter(dir, newIndexWriterConfig(new MockAnalyzer(random()))
      .setMergeScheduler(new MyMergeScheduler())
      .setMaxBufferedDocs(2).setRAMBufferSizeMB(IndexWriterConfig.DISABLE_AUTO_FLUSH)
      .setMergePolicy(newLogMergePolicy()));
  LogMergePolicy logMP = (LogMergePolicy) writer.getConfig().getMergePolicy();
  logMP.setMergeFactor(10);
  for(int i=0;i<20;i++)
    writer.addDocument(doc);

  ((MyMergeScheduler) writer.getConfig().getMergeScheduler()).sync();
  writer.close();
  
  assertTrue(mergeThreadCreated);
  assertTrue(mergeCalled);
  assertTrue(excCalled);
  dir.close();
}
 
Developer: europeana, Project: search, Lines: 26, Source: TestMergeSchedulerExternal.java


Example 4: reduceOpenFiles

import org.apache.lucene.index.LogMergePolicy; // import the required package/class
/** just tries to configure things to keep the open file
 * count lowish */
public static void reduceOpenFiles(IndexWriter w) {
  // keep number of open files lowish
  MergePolicy mp = w.getConfig().getMergePolicy();
  if (mp instanceof LogMergePolicy) {
    LogMergePolicy lmp = (LogMergePolicy) mp;
    lmp.setMergeFactor(Math.min(5, lmp.getMergeFactor()));
    lmp.setUseCompoundFile(true);
  } else if (mp instanceof TieredMergePolicy) {
    TieredMergePolicy tmp = (TieredMergePolicy) mp;
    tmp.setMaxMergeAtOnce(Math.min(5, tmp.getMaxMergeAtOnce()));
    tmp.setSegmentsPerTier(Math.min(5, tmp.getSegmentsPerTier()));
    tmp.setUseCompoundFile(true);
  }
  MergeScheduler ms = w.getConfig().getMergeScheduler();
  if (ms instanceof ConcurrentMergeScheduler) {
    ((ConcurrentMergeScheduler) ms).setMaxThreadCount(2);
    ((ConcurrentMergeScheduler) ms).setMaxMergeCount(3);
  }
}
 
Developer: pkarmstr, Project: NYBC, Lines: 22, Source: _TestUtil.java


Example 5: testSubclassConcurrentMergeScheduler

import org.apache.lucene.index.LogMergePolicy; // import the required package/class
public void testSubclassConcurrentMergeScheduler() throws IOException {
  MockDirectoryWrapper dir = newMockDirectory();
  dir.failOn(new FailOnlyOnMerge());

  Document doc = new Document();
  Field idField = newStringField("id", "", Field.Store.YES);
  doc.add(idField);
  
  IndexWriter writer = new IndexWriter(dir, newIndexWriterConfig(
      TEST_VERSION_CURRENT, new MockAnalyzer(random())).setMergeScheduler(new MyMergeScheduler())
      .setMaxBufferedDocs(2).setRAMBufferSizeMB(IndexWriterConfig.DISABLE_AUTO_FLUSH)
      .setMergePolicy(newLogMergePolicy()));
  LogMergePolicy logMP = (LogMergePolicy) writer.getConfig().getMergePolicy();
  logMP.setMergeFactor(10);
  for(int i=0;i<20;i++)
    writer.addDocument(doc);

  ((MyMergeScheduler) writer.getConfig().getMergeScheduler()).sync();
  writer.close();
  
  assertTrue(mergeThreadCreated);
  assertTrue(mergeCalled);
  assertTrue(excCalled);
  dir.close();
}
 
Developer: pkarmstr, Project: NYBC, Lines: 26, Source: TestMergeSchedulerExternal.java


Example 6: newSortingMergePolicy

import org.apache.lucene.index.LogMergePolicy; // import the required package/class
static MergePolicy newSortingMergePolicy(Sorter sorter) {
  // create a MP with a low merge factor so that many merges happen
  MergePolicy mp;
  if (random().nextBoolean()) {
    TieredMergePolicy tmp = newTieredMergePolicy(random());
    final int numSegs = _TestUtil.nextInt(random(), 3, 5);
    tmp.setSegmentsPerTier(numSegs);
    tmp.setMaxMergeAtOnce(_TestUtil.nextInt(random(), 2, numSegs));
    mp = tmp;
  } else {
    LogMergePolicy lmp = newLogMergePolicy(random());
    lmp.setMergeFactor(_TestUtil.nextInt(random(), 3, 5));
    mp = lmp;
  }
  // wrap it with a sorting mp
  return new SortingMergePolicy(mp, sorter);
}
 
Developer: jimaguere, Project: Maskana-Gestor-de-Conocimiento, Lines: 18, Source: TestSortingMergePolicy.java


Example 7: getIndexWriter

import org.apache.lucene.index.LogMergePolicy; // import the required package/class
public static IndexWriter getIndexWriter(String indexPath, boolean create) throws IOException {
    Directory dir = FSDirectory.open(Paths.get(indexPath));
    Analyzer analyzer = new SmartChineseAnalyzer();
    IndexWriterConfig iwc = new IndexWriterConfig(analyzer);
    LogMergePolicy mergePolicy = new LogByteSizeMergePolicy();
    mergePolicy.setMergeFactor(50);
    mergePolicy.setMaxMergeDocs(5000);
    if (create) {
        iwc.setOpenMode(IndexWriterConfig.OpenMode.CREATE);
    } else {
        iwc.setOpenMode(IndexWriterConfig.OpenMode.CREATE_OR_APPEND);
    }
    iwc.setMergePolicy(mergePolicy); // attach the configured policy; without this it is never applied
    return new IndexWriter(dir, iwc);
}
 
Developer: neal1991, Project: everywhere, Lines: 15, Source: IndexUtil.java
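
A hypothetical call site for the helper above; the index path and create flag here are illustrative assumptions, not taken from the original project:

// Hypothetical usage; "/tmp/lucene-index" is an assumed path.
try (IndexWriter writer = IndexUtil.getIndexWriter("/tmp/lucene-index", true)) {
    // add or update documents; merges follow the LogByteSizeMergePolicy configured inside the helper
}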


Example 8: newLogMergePolicy

import org.apache.lucene.index.LogMergePolicy; // import the required package/class
public static LogMergePolicy newLogMergePolicy(Random r) {
  LogMergePolicy logmp = r.nextBoolean() ? new LogDocMergePolicy() : new LogByteSizeMergePolicy();
  logmp.setCalibrateSizeByDeletes(r.nextBoolean());
  if (rarely(r)) {
    logmp.setMergeFactor(TestUtil.nextInt(r, 2, 9));
  } else {
    logmp.setMergeFactor(TestUtil.nextInt(r, 10, 50));
  }
  configureRandom(r, logmp);
  return logmp;
}
 
Developer: europeana, Project: search, Lines: 12, Source: LuceneTestCase.java


Example 9: testLogMergePolicyConfig

import org.apache.lucene.index.LogMergePolicy; // import the required package/class
public void testLogMergePolicyConfig() throws Exception {
  
  final Class<? extends LogMergePolicy> mpClass = random().nextBoolean()
    ? LogByteSizeMergePolicy.class : LogDocMergePolicy.class;

  System.setProperty("solr.test.log.merge.policy", mpClass.getName());

  initCore("solrconfig-logmergepolicy.xml","schema-minimal.xml");
  IndexWriterConfig iwc = solrConfig.indexConfig.toIndexWriterConfig(h.getCore().getLatestSchema());

  // verify some props set to -1 get lucene internal defaults
  assertEquals(-1, solrConfig.indexConfig.maxBufferedDocs);
  assertEquals(IndexWriterConfig.DISABLE_AUTO_FLUSH, 
               iwc.getMaxBufferedDocs());
  assertEquals(-1, solrConfig.indexConfig.maxIndexingThreads);
  assertEquals(IndexWriterConfig.DEFAULT_MAX_THREAD_STATES, 
               iwc.getMaxThreadStates());
  assertEquals(-1, solrConfig.indexConfig.ramBufferSizeMB, 0.0D);
  assertEquals(IndexWriterConfig.DEFAULT_RAM_BUFFER_SIZE_MB, 
               iwc.getRAMBufferSizeMB(), 0.0D);


  LogMergePolicy logMP = assertAndCast(mpClass, iwc.getMergePolicy());

  // set by legacy <mergeFactor> setting
  assertEquals(11, logMP.getMergeFactor());
  // set by legacy <maxMergeDocs> setting
  assertEquals(456, logMP.getMaxMergeDocs());

}
 
Developer: europeana, Project: search, Lines: 31, Source: TestMergePolicyConfig.java


Example 10: setUseCompoundFile

import org.apache.lucene.index.LogMergePolicy; // import the required package/class
public static void setUseCompoundFile(MergePolicy mp, boolean v) {
  if (mp instanceof TieredMergePolicy) {
    ((TieredMergePolicy) mp).setUseCompoundFile(v);
  } else if (mp instanceof LogMergePolicy) {
    ((LogMergePolicy) mp).setUseCompoundFile(v);
  } else {
    throw new IllegalArgumentException("cannot set compound file for MergePolicy " + mp);
  }
}
 
Developer: pkarmstr, Project: NYBC, Lines: 10, Source: _TestUtil.java


Example 11: clearWorkflowInstances

import org.apache.lucene.index.LogMergePolicy; // import the required package/class
@Override
public synchronized boolean clearWorkflowInstances() throws InstanceRepositoryException {
  IndexWriter writer = null;
  try {
      IndexWriterConfig config = new IndexWriterConfig(new StandardAnalyzer());
      config.setOpenMode(IndexWriterConfig.OpenMode.CREATE_OR_APPEND);
      LogMergePolicy lmp = new LogDocMergePolicy();
      lmp.setMergeFactor(mergeFactor);
      config.setMergePolicy(lmp);

      writer = new IndexWriter(indexDir, config);
      LOG.log(Level.FINE,
              "LuceneWorkflowEngine: remove all workflow instances");
      writer.deleteDocuments(new Term("myfield", "myvalue"));
  } catch (IOException e) {
      LOG.log(Level.SEVERE, e.getMessage());
      LOG.log(Level.WARNING,
              "Exception removing workflow instances from index: Message: "
                      + e.getMessage());
      throw new InstanceRepositoryException(e.getMessage());
  } finally {
    if (writer != null){
      try{
        writer.close();
      }
      catch(Exception ignore){}
      
      writer = null;
    }

  }
  
  return true;
}
 
Developer: apache, Project: oodt, Lines: 36, Source: LuceneWorkflowInstanceRepository.java


Example 12: asJson

import org.apache.lucene.index.LogMergePolicy; // import the required package/class
public JsonObject asJson() {
    JsonArrayBuilder strategiesJsonBuilder = Json.createArrayBuilder();
    for (ClusterStrategy strategy : this.clusterConfig.strategies) {
        strategiesJsonBuilder.add(Json.createObjectBuilder()
                .add("clusteringEps", strategy.clusteringEps)
                .add("clusteringMinPoints", strategy.clusteringMinPoints));
    }

    JsonObject json = Json.createObjectBuilder()
            .add("similarity", similarity.toString())
            .add("mergePolicy", this.mergePolicy instanceof TieredMergePolicy
                    ? Json.createObjectBuilder()
                            .add("type", "TieredMergePolicy")
                            .add("maxMergeAtOnce", ((TieredMergePolicy) this.mergePolicy).getMaxMergeAtOnce())
                            .add("segmentsPerTier", ((TieredMergePolicy) this.mergePolicy).getSegmentsPerTier())
                    : Json.createObjectBuilder()
                            .add("type", "LogDocMergePolicy")
                            .add("maxMergeDocs", ((LogMergePolicy) this.mergePolicy).getMaxMergeDocs())
                            .add("mergeFactor", ((LogMergePolicy) this.mergePolicy).getMergeFactor()))
            .add("lruTaxonomyWriterCacheSize", lruTaxonomyWriterCacheSize)
            .add("numberOfConcurrentTasks", numberOfConcurrentTasks)
            .add("commitCount", commitCount)
            .add("commitTimeout", commitTimeout)
            .add("cacheFacetOrdinals", this.cacheFacetOrdinals)
            .add("clustering", Json.createObjectBuilder()
                    .add("clusterMoreRecords", clusterConfig.clusterMoreRecords)
                    .add("strategies", strategiesJsonBuilder))
            .build();
    return json;
}
 
Developer: seecr, Project: meresco-lucene, Lines: 31, Source: LuceneSettings.java


Example 13: testIndexWriterSettings

import org.apache.lucene.index.LogMergePolicy; // import the required package/class
/**
 * Test that IndexWriter settings stick.
 */
public void testIndexWriterSettings() throws Exception {
  // 1. alg definition (required in every "logic" test)
  String algLines[] = {
      "# ----- properties ",
      "content.source=org.apache.lucene.benchmark.byTask.feeds.LineDocSource",
      "docs.file=" + getReuters20LinesFile(),
      "content.source.log.step=3",
      "ram.flush.mb=-1",
      "max.buffered=2",
      "compound=cmpnd:true:false",
      "doc.term.vector=vector:false:true",
      "content.source.forever=false",
      "directory=RAMDirectory",
      "doc.stored=false",
      "merge.factor=3",
      "doc.tokenized=false",
      "debug.level=1",
      "# ----- alg ",
      "{ \"Rounds\"",
      "  ResetSystemErase",
      "  CreateIndex",
      "  { \"AddDocs\"  AddDoc > : * ",
      "  NewRound",
      "} : 2",
  };

  // 2. execute the algorithm  (required in every "logic" test)
  Benchmark benchmark = execBenchmark(algLines);
  final IndexWriter writer = benchmark.getRunData().getIndexWriter();
  assertEquals(2, writer.getConfig().getMaxBufferedDocs());
  assertEquals(IndexWriterConfig.DISABLE_AUTO_FLUSH, (int) writer.getConfig().getRAMBufferSizeMB());
  assertEquals(3, ((LogMergePolicy) writer.getConfig().getMergePolicy()).getMergeFactor());
  assertEquals(0.0d, writer.getConfig().getMergePolicy().getNoCFSRatio(), 0.0);
  writer.close();
  Directory dir = benchmark.getRunData().getDirectory();
  IndexReader reader = DirectoryReader.open(dir);
  Fields tfv = reader.getTermVectors(0);
  assertNotNull(tfv);
  assertTrue(tfv.size() > 0);
  reader.close();
}
 
Developer: europeana, Project: search, Lines: 45, Source: TestPerfTasksLogic.java


Example 14: testOpenIfChangedManySegments

import org.apache.lucene.index.LogMergePolicy; // import the required package/class
@Test
public void testOpenIfChangedManySegments() throws Exception {
  // test openIfChanged() when the taxonomy contains many segments
  Directory dir = newDirectory();
  
  DirectoryTaxonomyWriter writer = new DirectoryTaxonomyWriter(dir) {
    @Override
    protected IndexWriterConfig createIndexWriterConfig(OpenMode openMode) {
      IndexWriterConfig conf = super.createIndexWriterConfig(openMode);
      LogMergePolicy lmp = (LogMergePolicy) conf.getMergePolicy();
      lmp.setMergeFactor(2);
      return conf;
    }
  };
  TaxonomyReader reader = new DirectoryTaxonomyReader(writer);
  
  int numRounds = random().nextInt(10) + 10;
  int numCategories = 1; // one for root
  for (int i = 0; i < numRounds; i++) {
    int numCats = random().nextInt(4) + 1;
    for (int j = 0; j < numCats; j++) {
      writer.addCategory(new FacetLabel(Integer.toString(i), Integer.toString(j)));
    }
    numCategories += numCats + 1 /* one for round-parent */;
    TaxonomyReader newtr = TaxonomyReader.openIfChanged(reader);
    assertNotNull(newtr);
    reader.close();
    reader = newtr;
    
    // assert categories
    assertEquals(numCategories, reader.getSize());
    int roundOrdinal = reader.getOrdinal(new FacetLabel(Integer.toString(i)));
    int[] parents = reader.getParallelTaxonomyArrays().parents();
    assertEquals(0, parents[roundOrdinal]); // round's parent is root
    for (int j = 0; j < numCats; j++) {
      int ord = reader.getOrdinal(new FacetLabel(Integer.toString(i), Integer.toString(j)));
      assertEquals(roundOrdinal, parents[ord]); // round's parent is root
    }
  }

  reader.close();
  writer.close();
  dir.close();
}
 
Developer: europeana, Project: search, Lines: 45, Source: TestDirectoryTaxonomyReader.java


Example 15: beforeClass

import org.apache.lucene.index.LogMergePolicy; // import the required package/class
/** we will manually instantiate preflex-rw here */
@BeforeClass
public static void beforeClass() throws Exception {
  // NOTE: turn off compound file, this test will open some index files directly.
  LuceneTestCase.OLD_FORMAT_IMPERSONATION_IS_ACTIVE = true;
  IndexWriterConfig config = newIndexWriterConfig(new MockAnalyzer(random(), MockTokenizer.KEYWORD, false))
                               .setUseCompoundFile(false);
  
  termIndexInterval = config.getTermIndexInterval();
  indexDivisor = TestUtil.nextInt(random(), 1, 10);
  NUMBER_OF_DOCUMENTS = atLeast(100);
  NUMBER_OF_FIELDS = atLeast(Math.max(10, 3*termIndexInterval*indexDivisor/NUMBER_OF_DOCUMENTS));
  
  directory = newDirectory();

  config.setCodec(new PreFlexRWCodec());
  LogMergePolicy mp = newLogMergePolicy();
  // NOTE: turn off compound file, this test will open some index files directly.
  mp.setNoCFSRatio(0.0);
  config.setMergePolicy(mp);

  
  populate(directory, config);

  DirectoryReader r0 = IndexReader.open(directory);
  SegmentReader r = LuceneTestCase.getOnlySegmentReader(r0);
  String segment = r.getSegmentName();
  r.close();

  FieldInfosReader infosReader = new PreFlexRWCodec().fieldInfosFormat().getFieldInfosReader();
  FieldInfos fieldInfos = infosReader.read(directory, segment, "", IOContext.READONCE);
  String segmentFileName = IndexFileNames.segmentFileName(segment, "", Lucene3xPostingsFormat.TERMS_INDEX_EXTENSION);
  long tiiFileLength = directory.fileLength(segmentFileName);
  IndexInput input = directory.openInput(segmentFileName, newIOContext(random()));
  termEnum = new PreflexRWSegmentTermEnum(directory.openInput(IndexFileNames.segmentFileName(segment, "", Lucene3xPostingsFormat.TERMS_EXTENSION), newIOContext(random())), fieldInfos, false);
  int totalIndexInterval = termEnum.indexInterval * indexDivisor;
  
  SegmentTermEnum indexEnum = new PreflexRWSegmentTermEnum(input, fieldInfos, true);
  index = new TermInfosReaderIndex(indexEnum, indexDivisor, tiiFileLength, totalIndexInterval);
  indexEnum.close();
  input.close();
  
  reader = IndexReader.open(directory);
  sampleTerms = sample(random(),reader,1000);
}
 
Developer: europeana, Project: search, Lines: 46, Source: TestTermInfosReaderIndex.java


Example 16: testIndexWriterSettings

import org.apache.lucene.index.LogMergePolicy; // import the required package/class
/**
 * Test that IndexWriter settings stick.
 */
public void testIndexWriterSettings() throws Exception {
  // 1. alg definition (required in every "logic" test)
  String algLines[] = {
      "# ----- properties ",
      "content.source=org.apache.lucene.benchmark.byTask.feeds.LineDocSource",
      "docs.file=" + getReuters20LinesFile(),
      "content.source.log.step=3",
      "ram.flush.mb=-1",
      "max.buffered=2",
      "compound=cmpnd:true:false",
      "doc.term.vector=vector:false:true",
      "content.source.forever=false",
      "directory=RAMDirectory",
      "doc.stored=false",
      "merge.factor=3",
      "doc.tokenized=false",
      "debug.level=1",
      "# ----- alg ",
      "{ \"Rounds\"",
      "  ResetSystemErase",
      "  CreateIndex",
      "  { \"AddDocs\"  AddDoc > : * ",
      "  NewRound",
      "} : 2",
  };

  // 2. execute the algorithm  (required in every "logic" test)
  Benchmark benchmark = execBenchmark(algLines);
  final IndexWriter writer = benchmark.getRunData().getIndexWriter();
  assertEquals(2, writer.getConfig().getMaxBufferedDocs());
  assertEquals(IndexWriterConfig.DISABLE_AUTO_FLUSH, (int) writer.getConfig().getRAMBufferSizeMB());
  assertEquals(3, ((LogMergePolicy) writer.getConfig().getMergePolicy()).getMergeFactor());
  assertFalse(((LogMergePolicy) writer.getConfig().getMergePolicy()).getUseCompoundFile());
  writer.close();
  Directory dir = benchmark.getRunData().getDirectory();
  IndexReader reader = DirectoryReader.open(dir);
  Fields tfv = reader.getTermVectors(0);
  assertNotNull(tfv);
  assertTrue(tfv.size() > 0);
  reader.close();
}
 
Developer: pkarmstr, Project: NYBC, Lines: 45, Source: TestPerfTasksLogic.java


Example 17: testOpenIfChangedManySegments

import org.apache.lucene.index.LogMergePolicy; // import the required package/class
@Test
public void testOpenIfChangedManySegments() throws Exception {
  // test openIfChanged() when the taxonomy contains many segments
  Directory dir = newDirectory();
  
  DirectoryTaxonomyWriter writer = new DirectoryTaxonomyWriter(dir) {
    @Override
    protected IndexWriterConfig createIndexWriterConfig(OpenMode openMode) {
      IndexWriterConfig conf = super.createIndexWriterConfig(openMode);
      LogMergePolicy lmp = (LogMergePolicy) conf.getMergePolicy();
      lmp.setMergeFactor(2);
      return conf;
    }
  };
  TaxonomyReader reader = new DirectoryTaxonomyReader(writer);
  
  int numRounds = random().nextInt(10) + 10;
  int numCategories = 1; // one for root
  for (int i = 0; i < numRounds; i++) {
    int numCats = random().nextInt(4) + 1;
    for (int j = 0; j < numCats; j++) {
      writer.addCategory(new CategoryPath(Integer.toString(i), Integer.toString(j)));
    }
    numCategories += numCats + 1 /* one for round-parent */;
    TaxonomyReader newtr = TaxonomyReader.openIfChanged(reader);
    assertNotNull(newtr);
    reader.close();
    reader = newtr;
    
    // assert categories
    assertEquals(numCategories, reader.getSize());
    int roundOrdinal = reader.getOrdinal(new CategoryPath(Integer.toString(i)));
    int[] parents = reader.getParallelTaxonomyArrays().parents();
    assertEquals(0, parents[roundOrdinal]); // round's parent is root
    for (int j = 0; j < numCats; j++) {
      int ord = reader.getOrdinal(new CategoryPath(Integer.toString(i), Integer.toString(j)));
      assertEquals(roundOrdinal, parents[ord]); // round's parent is root
    }
  }

  reader.close();
  writer.close();
  dir.close();
}
 
Developer: pkarmstr, Project: NYBC, Lines: 45, Source: TestDirectoryTaxonomyReader.java


Example 18: beforeClass

import org.apache.lucene.index.LogMergePolicy; // import the required package/class
/** we will manually instantiate preflex-rw here */
@BeforeClass
public static void beforeClass() throws Exception {
  LuceneTestCase.PREFLEX_IMPERSONATION_IS_ACTIVE = true;
  IndexWriterConfig config = newIndexWriterConfig(TEST_VERSION_CURRENT, 
      new MockAnalyzer(random(), MockTokenizer.KEYWORD, false));
  
  termIndexInterval = config.getTermIndexInterval();
  indexDivisor = _TestUtil.nextInt(random(), 1, 10);
  NUMBER_OF_DOCUMENTS = atLeast(100);
  NUMBER_OF_FIELDS = atLeast(Math.max(10, 3*termIndexInterval*indexDivisor/NUMBER_OF_DOCUMENTS));
  
  directory = newDirectory();

  config.setCodec(new PreFlexRWCodec());
  LogMergePolicy mp = newLogMergePolicy();
  // turn off compound file, this test will open some index files directly.
  mp.setUseCompoundFile(false);
  config.setMergePolicy(mp);

  
  populate(directory, config);

  DirectoryReader r0 = IndexReader.open(directory);
  SegmentReader r = LuceneTestCase.getOnlySegmentReader(r0);
  String segment = r.getSegmentName();
  r.close();

  FieldInfosReader infosReader = new PreFlexRWCodec().fieldInfosFormat().getFieldInfosReader();
  FieldInfos fieldInfos = infosReader.read(directory, segment, IOContext.READONCE);
  String segmentFileName = IndexFileNames.segmentFileName(segment, "", Lucene3xPostingsFormat.TERMS_INDEX_EXTENSION);
  long tiiFileLength = directory.fileLength(segmentFileName);
  IndexInput input = directory.openInput(segmentFileName, newIOContext(random()));
  termEnum = new SegmentTermEnum(directory.openInput(IndexFileNames.segmentFileName(segment, "", Lucene3xPostingsFormat.TERMS_EXTENSION), newIOContext(random())), fieldInfos, false);
  int totalIndexInterval = termEnum.indexInterval * indexDivisor;
  
  SegmentTermEnum indexEnum = new SegmentTermEnum(input, fieldInfos, true);
  index = new TermInfosReaderIndex(indexEnum, indexDivisor, tiiFileLength, totalIndexInterval);
  indexEnum.close();
  input.close();
  
  reader = IndexReader.open(directory);
  sampleTerms = sample(random(),reader,1000);
}
 
Developer: pkarmstr, Project: NYBC, Lines: 45, Source: TestTermInfosReaderIndex.java


Example 19: beforeClass

import org.apache.lucene.index.LogMergePolicy; // import the required package/class
/** we will manually instantiate preflex-rw here */
@BeforeClass
public static void beforeClass() throws Exception {
  // NOTE: turn off compound file, this test will open some index files directly.
  LuceneTestCase.PREFLEX_IMPERSONATION_IS_ACTIVE = true;
  IndexWriterConfig config = newIndexWriterConfig(TEST_VERSION_CURRENT, 
      new MockAnalyzer(random(), MockTokenizer.KEYWORD, false)).setUseCompoundFile(false);
  
  termIndexInterval = config.getTermIndexInterval();
  indexDivisor = _TestUtil.nextInt(random(), 1, 10);
  NUMBER_OF_DOCUMENTS = atLeast(100);
  NUMBER_OF_FIELDS = atLeast(Math.max(10, 3*termIndexInterval*indexDivisor/NUMBER_OF_DOCUMENTS));
  
  directory = newDirectory();

  config.setCodec(new PreFlexRWCodec());
  LogMergePolicy mp = newLogMergePolicy();
  // NOTE: turn off compound file, this test will open some index files directly.
  mp.setNoCFSRatio(0.0);
  config.setMergePolicy(mp);

  
  populate(directory, config);

  DirectoryReader r0 = IndexReader.open(directory);
  SegmentReader r = LuceneTestCase.getOnlySegmentReader(r0);
  String segment = r.getSegmentName();
  r.close();

  FieldInfosReader infosReader = new PreFlexRWCodec().fieldInfosFormat().getFieldInfosReader();
  FieldInfos fieldInfos = infosReader.read(directory, segment, "", IOContext.READONCE);
  String segmentFileName = IndexFileNames.segmentFileName(segment, "", Lucene3xPostingsFormat.TERMS_INDEX_EXTENSION);
  long tiiFileLength = directory.fileLength(segmentFileName);
  IndexInput input = directory.openInput(segmentFileName, newIOContext(random()));
  termEnum = new SegmentTermEnum(directory.openInput(IndexFileNames.segmentFileName(segment, "", Lucene3xPostingsFormat.TERMS_EXTENSION), newIOContext(random())), fieldInfos, false);
  int totalIndexInterval = termEnum.indexInterval * indexDivisor;
  
  SegmentTermEnum indexEnum = new SegmentTermEnum(input, fieldInfos, true);
  index = new TermInfosReaderIndex(indexEnum, indexDivisor, tiiFileLength, totalIndexInterval);
  indexEnum.close();
  input.close();
  
  reader = IndexReader.open(directory);
  sampleTerms = sample(random(),reader,1000);
}
 
Developer: jimaguere, Project: Maskana-Gestor-de-Conocimiento, Lines: 46, Source: TestTermInfosReaderIndex.java



Note: The org.apache.lucene.index.LogMergePolicy examples in this article were collected from GitHub, MSDocs, and similar source-code and documentation platforms. The snippets come from open-source projects contributed by their respective authors, who retain copyright over the source code; consult the corresponding project's license before redistributing or reusing it. Please do not republish without permission.

