Java BaseTokenStreamTestCase Class Code Examples


This article compiles typical usage examples of the Java class org.apache.lucene.analysis.BaseTokenStreamTestCase. If you have been wondering what BaseTokenStreamTestCase is for, or how to use it in practice, the curated class examples below should help.



The BaseTokenStreamTestCase class belongs to the org.apache.lucene.analysis package. 19 code examples of the class are shown below, sorted by popularity by default. You can upvote the examples you find useful; your feedback helps the system recommend better Java code examples.
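Before the examples below, it helps to see the core assertion pattern they all share: feed an analyzer some text, then assert on the exact token sequence it produces. Lucene's test framework is a heavyweight dependency, so here is a simplified, standalone sketch of that pattern. Note that the `analyze` and `assertAnalyzesTo` methods here are illustrative stand-ins, not Lucene APIs; Lucene's real `BaseTokenStreamTestCase.assertAnalyzesTo` additionally checks character offsets, token types, and position increments.

```java
import java.util.Arrays;
import java.util.List;

// A simplified standalone analogue of BaseTokenStreamTestCase.assertAnalyzesTo:
// tokenize the input, then compare the token sequence against expectations.
public class TokenAssertionSketch {

    // Stand-in "analyzer": lowercases and splits on whitespace, roughly like a
    // WhitespaceTokenizer + LowerCaseFilter chain. Real analyzers produce a
    // TokenStream with per-token attributes, not just strings.
    static List<String> analyze(String text) {
        return Arrays.asList(text.toLowerCase().trim().split("\\s+"));
    }

    // Mirrors the shape of assertAnalyzesTo(analyzer, input, expectedTokens).
    static void assertAnalyzesTo(String input, String[] expected) {
        List<String> actual = analyze(input);
        if (!actual.equals(Arrays.asList(expected))) {
            throw new AssertionError(
                "expected " + Arrays.toString(expected) + " but got " + actual);
        }
    }

    public static void main(String[] args) {
        // Passes: lowercased, whitespace-split tokens match the expectation.
        assertAnalyzesTo("Foo Bar BAZ", new String[] { "foo", "bar", "baz" });
        System.out.println("ok");
    }
}
```

In the real examples below, the analyzer under test (usually bound to a field named `a`) replaces the hard-coded `analyze` step, and the same "input in, expected token array out" shape recurs throughout.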

Example 1: testMailtoSchemeEmails

import org.apache.lucene.analysis.BaseTokenStreamTestCase; // import the required package/class
public void testMailtoSchemeEmails () throws Exception {
  // See LUCENE-3880
  BaseTokenStreamTestCase.assertAnalyzesTo(a, "mailto:[email protected]",
      new String[] {"mailto", "[email protected]"},
      new String[] { "<ALPHANUM>", "<EMAIL>" });

  // TODO: Support full mailto: scheme URIs. See RFC 6068: http://tools.ietf.org/html/rfc6068
  BaseTokenStreamTestCase.assertAnalyzesTo
      (a,  "mailto:[email protected],[email protected][email protected]"
         + "&subject=Subjectivity&body=Corpusivity%20or%20something%20like%20that",
       new String[] { "mailto",
                      "[email protected]",
                      // TODO: recognize ',' address delimiter. Also, see examples of ';' delimiter use at: http://www.mailto.co.uk/
                      ",[email protected]",
                      "[email protected]", // TODO: split field keys/values
                      "subject", "Subjectivity",
                      "body", "Corpusivity", "20or", "20something","20like", "20that" }, // TODO: Hex decoding + re-tokenization
       new String[] { "<ALPHANUM>",
                      "<EMAIL>",
                      "<EMAIL>",
                      "<EMAIL>",
                      "<ALPHANUM>", "<ALPHANUM>",
                      "<ALPHANUM>", "<ALPHANUM>", "<ALPHANUM>", "<ALPHANUM>", "<ALPHANUM>", "<ALPHANUM>" });
}
 
Developer: europeana, Project: search, Lines: 25, Source: TestUAX29URLEmailTokenizer.java


Example 2: testMailtoSchemeEmails

import org.apache.lucene.analysis.BaseTokenStreamTestCase; // import the required package/class
public void testMailtoSchemeEmails () throws Exception {
  // See LUCENE-3880
  BaseTokenStreamTestCase.assertAnalyzesTo(a, "MAILTO:[email protected]",
      new String[] {"mailto", "[email protected]"},
      new String[] { "<ALPHANUM>", "<EMAIL>" });

  // TODO: Support full mailto: scheme URIs. See RFC 6068: http://tools.ietf.org/html/rfc6068
  BaseTokenStreamTestCase.assertAnalyzesTo
      (a,  "mailto:[email protected],[email protected][email protected]"
          + "&subject=Subjectivity&body=Corpusivity%20or%20something%20like%20that",
          new String[] { "mailto",
              "[email protected]",
              // TODO: recognize ',' address delimiter. Also, see examples of ';' delimiter use at: http://www.mailto.co.uk/
              ",[email protected]",
              "[email protected]", // TODO: split field keys/values
              "subject", "subjectivity",
              "body", "corpusivity", "20or", "20something","20like", "20that" }, // TODO: Hex decoding + re-tokenization
          new String[] { "<ALPHANUM>",
              "<EMAIL>",
              "<EMAIL>",
              "<EMAIL>",
              "<ALPHANUM>", "<ALPHANUM>",
              "<ALPHANUM>", "<ALPHANUM>", "<ALPHANUM>", "<ALPHANUM>", "<ALPHANUM>", "<ALPHANUM>" });
}
 
Developer: europeana, Project: search, Lines: 25, Source: TestUAX29URLEmailAnalyzer.java


Example 3: testThreadSafety

import org.apache.lucene.analysis.BaseTokenStreamTestCase; // import the required package/class
private void testThreadSafety(TokenFilterFactory factory) throws IOException {
    final Analyzer analyzer = new Analyzer() {
        @Override
        protected TokenStreamComponents createComponents(String fieldName) {
            Tokenizer tokenizer = new MockTokenizer();
            return new TokenStreamComponents(tokenizer, factory.create(tokenizer));
        }
    };
    BaseTokenStreamTestCase.checkRandomData(random(), analyzer, 100);
}
 
Developer: justor, Project: elasticsearch_my, Lines: 11, Source: AnalysisPolishFactoryTests.java


Example 4: testStandardAnalyzer

import org.apache.lucene.analysis.BaseTokenStreamTestCase; // import the required package/class
public void testStandardAnalyzer() throws IOException {
    Analyzer analyzer = new JiebaAnalyzer();

    checkRandomData(new Random(0), analyzer, 1);

    System.out.println(BaseTokenStreamTestCase.toString(analyzer, "工信处女干事每月经过下属科室都要亲口交代24口交换机等技术性器件的安装工作"));
    System.out.println("==============");
    System.out.println(BaseTokenStreamTestCase.toString(analyzer, "hello  world,this is my first program"));
    System.out.println("==============");
    System.out.println(BaseTokenStreamTestCase.toString(analyzer, "这是一个伸手不见五指的黑夜。我叫孙悟空,我爱北京,我爱Python和C++。"));

}
 
Developer: hongfuli, Project: elasticsearch-analysis-jieba, Lines: 13, Source: JiebaAnalyzerTest.java


Example 5: testAnalyzerFactory

import org.apache.lucene.analysis.BaseTokenStreamTestCase; // import the required package/class
public void testAnalyzerFactory() throws Exception {
  String text = "Fortieth, Quarantième, Cuadragésimo";
  Benchmark benchmark = execBenchmark(getAnalyzerFactoryConfig
      ("ascii folded, pattern replaced, standard tokenized, downcased, bigrammed.'analyzer'",
       "positionIncrementGap:100,offsetGap:1111,"
       +"MappingCharFilter(mapping:'test-mapping-ISOLatin1Accent-partial.txt'),"
       +"PatternReplaceCharFilterFactory(pattern:'e(\\\\\\\\S*)m',replacement:\"$1xxx$1\"),"
       +"StandardTokenizer,LowerCaseFilter,NGramTokenFilter(minGramSize:2,maxGramSize:2)"));
  BaseTokenStreamTestCase.assertAnalyzesTo(benchmark.getRunData().getAnalyzer(), text,
      new String[] { "fo", "or", "rt", "ti", "ie", "et", "th",
                     "qu", "ua", "ar", "ra", "an", "nt", "ti", "ix", "xx", "xx", "xe",
                     "cu", "ua", "ad", "dr", "ra", "ag", "gs", "si", "ix", "xx", "xx", "xs", "si", "io"});
}
 
Developer: europeana, Project: search, Lines: 14, Source: TestPerfTasksLogic.java


Example 6: testHugeDoc

import org.apache.lucene.analysis.BaseTokenStreamTestCase; // import the required package/class
public void testHugeDoc() throws IOException {
  StringBuilder sb = new StringBuilder();
  char whitespace[] = new char[4094];
  Arrays.fill(whitespace, ' ');
  sb.append(whitespace);
  sb.append("testing 1234");
  String input = sb.toString();
  UAX29URLEmailTokenizer tokenizer = new UAX29URLEmailTokenizer(newAttributeFactory(), new StringReader(input));
  BaseTokenStreamTestCase.assertTokenStreamContents(tokenizer, new String[] { "testing", "1234" });
}
 
Developer: europeana, Project: search, Lines: 11, Source: TestUAX29URLEmailTokenizer.java


Example 7: testLUCENE1545

import org.apache.lucene.analysis.BaseTokenStreamTestCase; // import the required package/class
public void testLUCENE1545() throws Exception {
  /*
   * The standard analyzer does not correctly tokenize the combining character U+0364 COMBINING LATIN SMALL LETTER E.
   * The word "moͤchte" is incorrectly tokenized into "mo" "chte"; the combining character is lost.
   * The expected result is a single token, "moͤchte".
   */
  BaseTokenStreamTestCase.assertAnalyzesTo(a, "moͤchte", new String[] { "moͤchte" }); 
}
 
Developer: europeana, Project: search, Lines: 9, Source: TestUAX29URLEmailTokenizer.java


Example 8: testApostrophesSA

import org.apache.lucene.analysis.BaseTokenStreamTestCase; // import the required package/class
public void testApostrophesSA() throws Exception {
  // internal apostrophes: O'Reilly, you're, O'Reilly's
  BaseTokenStreamTestCase.assertAnalyzesTo(a, "O'Reilly", new String[]{"O'Reilly"});
  BaseTokenStreamTestCase.assertAnalyzesTo(a, "you're", new String[]{"you're"});
  BaseTokenStreamTestCase.assertAnalyzesTo(a, "she's", new String[]{"she's"});
  BaseTokenStreamTestCase.assertAnalyzesTo(a, "Jim's", new String[]{"Jim's"});
  BaseTokenStreamTestCase.assertAnalyzesTo(a, "don't", new String[]{"don't"});
  BaseTokenStreamTestCase.assertAnalyzesTo(a, "O'Reilly's", new String[]{"O'Reilly's"});
}
 
Developer: europeana, Project: search, Lines: 10, Source: TestUAX29URLEmailTokenizer.java


Example 9: testVariousTextSA

import org.apache.lucene.analysis.BaseTokenStreamTestCase; // import the required package/class
public void testVariousTextSA() throws Exception {
  // various
  BaseTokenStreamTestCase.assertAnalyzesTo(a, "C embedded developers wanted", new String[]{"C", "embedded", "developers", "wanted"});
  BaseTokenStreamTestCase.assertAnalyzesTo(a, "foo bar FOO BAR", new String[]{"foo", "bar", "FOO", "BAR"});
  BaseTokenStreamTestCase.assertAnalyzesTo(a, "foo      bar .  FOO <> BAR", new String[]{"foo", "bar", "FOO", "BAR"});
  BaseTokenStreamTestCase.assertAnalyzesTo(a, "\"QUOTED\" word", new String[]{"QUOTED", "word"});
}
 
Developer: europeana, Project: search, Lines: 8, Source: TestUAX29URLEmailTokenizer.java


Example 10: testHugeDoc

import org.apache.lucene.analysis.BaseTokenStreamTestCase; // import the required package/class
public void testHugeDoc() throws IOException {
  StringBuilder sb = new StringBuilder();
  char whitespace[] = new char[4094];
  Arrays.fill(whitespace, ' ');
  sb.append(whitespace);
  sb.append("testing 1234");
  String input = sb.toString();
  BaseTokenStreamTestCase.assertAnalyzesTo(a, input, new String[]{"testing", "1234"});
}
 
Developer: europeana, Project: search, Lines: 10, Source: TestUAX29URLEmailAnalyzer.java


Example 11: testLUCENE1545

import org.apache.lucene.analysis.BaseTokenStreamTestCase; // import the required package/class
public void testLUCENE1545() throws Exception {
  /*
   * Standard analyzer does not correctly tokenize combining character U+0364 COMBINING LATIN SMALL LETTER E.
   * The word "moͤchte" is incorrectly tokenized into "mo" "chte", the combining character is lost.
   * Expected result is only one token "moͤchte".
   */
  BaseTokenStreamTestCase.assertAnalyzesTo(a, "moͤchte", new String[] { "moͤchte" }); 
}
 
Developer: europeana, Project: search, Lines: 9, Source: TestUAX29URLEmailAnalyzer.java


Example 12: testApostrophesSA

import org.apache.lucene.analysis.BaseTokenStreamTestCase; // import the required package/class
public void testApostrophesSA() throws Exception {
  // internal apostrophes: O'Reilly, you're, O'Reilly's
  BaseTokenStreamTestCase.assertAnalyzesTo(a, "O'Reilly", new String[]{"o'reilly"});
  BaseTokenStreamTestCase.assertAnalyzesTo(a, "you're", new String[]{"you're"});
  BaseTokenStreamTestCase.assertAnalyzesTo(a, "she's", new String[]{"she's"});
  BaseTokenStreamTestCase.assertAnalyzesTo(a, "Jim's", new String[]{"jim's"});
  BaseTokenStreamTestCase.assertAnalyzesTo(a, "don't", new String[]{"don't"});
  BaseTokenStreamTestCase.assertAnalyzesTo(a, "O'Reilly's", new String[]{"o'reilly's"});
}
 
Developer: europeana, Project: search, Lines: 10, Source: TestUAX29URLEmailAnalyzer.java


Example 13: testNumericSA

import org.apache.lucene.analysis.BaseTokenStreamTestCase; // import the required package/class
public void testNumericSA() throws Exception {
  // floating point, serial, model numbers, ip addresses, etc.
  BaseTokenStreamTestCase.assertAnalyzesTo(a, "21.35", new String[]{"21.35"});
  BaseTokenStreamTestCase.assertAnalyzesTo(a, "R2D2 C3PO", new String[]{"r2d2", "c3po"});
  BaseTokenStreamTestCase.assertAnalyzesTo(a, "216.239.63.104", new String[]{"216.239.63.104"});
  BaseTokenStreamTestCase.assertAnalyzesTo(a, "216.239.63.104", new String[]{"216.239.63.104"});
}
 
Developer: europeana, Project: search, Lines: 8, Source: TestUAX29URLEmailAnalyzer.java


Example 14: testVariousTextSA

import org.apache.lucene.analysis.BaseTokenStreamTestCase; // import the required package/class
public void testVariousTextSA() throws Exception {
  // various
  BaseTokenStreamTestCase.assertAnalyzesTo(a, "C embedded developers wanted", new String[]{"c", "embedded", "developers", "wanted"});
  BaseTokenStreamTestCase.assertAnalyzesTo(a, "foo bar FOO BAR", new String[]{"foo", "bar", "foo", "bar"});
  BaseTokenStreamTestCase.assertAnalyzesTo(a, "foo      bar .  FOO <> BAR", new String[]{"foo", "bar", "foo", "bar"});
  BaseTokenStreamTestCase.assertAnalyzesTo(a, "\"QUOTED\" word", new String[]{"quoted", "word"});
}
 
Developer: europeana, Project: search, Lines: 8, Source: TestUAX29URLEmailAnalyzer.java


Example 15: testHugeDoc

import org.apache.lucene.analysis.BaseTokenStreamTestCase; // import the required package/class
public void testHugeDoc() throws IOException {
  StringBuilder sb = new StringBuilder();
  char whitespace[] = new char[4094];
  Arrays.fill(whitespace, ' ');
  sb.append(whitespace);
  sb.append("testing 1234");
  String input = sb.toString();
  StandardTokenizer tokenizer = new StandardTokenizer(new StringReader(input));
  BaseTokenStreamTestCase.assertTokenStreamContents(tokenizer, new String[] { "testing", "1234" });
}
 
Developer: europeana, Project: search, Lines: 11, Source: TestStandardAnalyzer.java


Example 16: testNumericSA

import org.apache.lucene.analysis.BaseTokenStreamTestCase; // import the required package/class
public void testNumericSA() throws Exception {
  // floating point, serial, model numbers, ip addresses, etc.
  BaseTokenStreamTestCase.assertAnalyzesTo(a, "21.35", new String[]{"21.35"});
  BaseTokenStreamTestCase.assertAnalyzesTo(a, "R2D2 C3PO", new String[]{"R2D2", "C3PO"});
  BaseTokenStreamTestCase.assertAnalyzesTo(a, "216.239.63.104", new String[]{"216.239.63.104"});
  BaseTokenStreamTestCase.assertAnalyzesTo(a, "216.239.63.104", new String[]{"216.239.63.104"});
}
 
Developer: europeana, Project: search, Lines: 8, Source: TestStandardAnalyzer.java


Example 17: assertVocabulary

import org.apache.lucene.analysis.BaseTokenStreamTestCase; // import the required package/class
/** Run a vocabulary test against two data files. */
public static void assertVocabulary(Analyzer a, InputStream voc, InputStream out)
throws IOException {
  BufferedReader vocReader = new BufferedReader(
      new InputStreamReader(voc, StandardCharsets.UTF_8));
  BufferedReader outputReader = new BufferedReader(
      new InputStreamReader(out, StandardCharsets.UTF_8));
  String inputWord = null;
  while ((inputWord = vocReader.readLine()) != null) {
    String expectedWord = outputReader.readLine();
    Assert.assertNotNull(expectedWord);
    BaseTokenStreamTestCase.checkOneTerm(a, inputWord, expectedWord);
  }
}
 
Developer: europeana, Project: search, Lines: 15, Source: VocabularyAssert.java


Example 18: testHugeDoc

import org.apache.lucene.analysis.BaseTokenStreamTestCase; // import the required package/class
public void testHugeDoc() throws IOException {
  StringBuilder sb = new StringBuilder();
  char whitespace[] = new char[4094];
  Arrays.fill(whitespace, ' ');
  sb.append(whitespace);
  sb.append("testing 1234");
  String input = sb.toString();
  UAX29URLEmailTokenizer tokenizer = new UAX29URLEmailTokenizer(TEST_VERSION_CURRENT, new StringReader(input));
  BaseTokenStreamTestCase.assertTokenStreamContents(tokenizer, new String[] { "testing", "1234" });
}
 
Developer: pkarmstr, Project: NYBC, Lines: 11, Source: TestUAX29URLEmailTokenizer.java


Example 19: testHugeDoc

import org.apache.lucene.analysis.BaseTokenStreamTestCase; // import the required package/class
public void testHugeDoc() throws IOException {
  StringBuilder sb = new StringBuilder();
  char whitespace[] = new char[4094];
  Arrays.fill(whitespace, ' ');
  sb.append(whitespace);
  sb.append("testing 1234");
  String input = sb.toString();
  StandardTokenizer tokenizer = new StandardTokenizer(TEST_VERSION_CURRENT, new StringReader(input));
  BaseTokenStreamTestCase.assertTokenStreamContents(tokenizer, new String[] { "testing", "1234" });
}
 
Developer: pkarmstr, Project: NYBC, Lines: 11, Source: TestStandardAnalyzer.java



Note: The org.apache.lucene.analysis.BaseTokenStreamTestCase class examples in this article were compiled from source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by various developers; copyright remains with the original authors, and distribution or use should follow each project's License. Do not republish without permission.

