
Java LexicalizedParser Class Code Examples


This article collects typical usage examples of the Java class edu.stanford.nlp.parser.lexparser.LexicalizedParser. If you have been wondering what the LexicalizedParser class is for, or how to use it in practice, the selected examples below should help.



The LexicalizedParser class belongs to the edu.stanford.nlp.parser.lexparser package. Twenty code examples of the class are shown below, ordered by popularity.

Example 1: SentenceExtractThread

import edu.stanford.nlp.parser.lexparser.LexicalizedParser; // required import
public SentenceExtractThread(String resultDir,
        String filename_cluster_read, String extractedSentencesSaveDir,
        String textDir, LexicalizedParser lp, String dictPath) {
    super();
    this.clusterResultDir = resultDir;
    this.filename_cluster_read = filename_cluster_read;
    this.extractedSentencesSaveDir = extractedSentencesSaveDir;
    this.textDir = textDir;
    this.lp = lp;
    try {
        this.dict = WordNetUtil.openDictionary(dictPath);
    } catch (final IOException e) {
        this.log.error("Failed to open WordNet!", e);
        //e.printStackTrace();
    }
}
 
Author: procyon-lotor | Project: event-direct-mts | Lines: 17 | Source: SentenceExtractThread.java


Example 2: demoDP

import edu.stanford.nlp.parser.lexparser.LexicalizedParser; // required import
/**
 * demoDP demonstrates turning a file into tokens and then parse trees. Note
 * that the trees are printed by calling pennPrint on the Tree object. It is
 * also possible to pass a PrintWriter to pennPrint if you want to capture
 * the output.
 * 
 * file => tokens => parse trees
 */
public static void demoDP(LexicalizedParser lp, String filename) {
	// This option shows loading, sentence-segmenting and tokenizing
	// a file using DocumentPreprocessor.
	TreebankLanguagePack tlp = new PennTreebankLanguagePack();
	GrammaticalStructureFactory gsf = tlp.grammaticalStructureFactory();
	// You could also create a tokenizer here (as below) and pass it
	// to DocumentPreprocessor
	for (List<HasWord> sentence : new DocumentPreprocessor(filename)) {
		Tree parse = lp.apply(sentence);
		parse.pennPrint();
		System.out.println();

		GrammaticalStructure gs = gsf.newGrammaticalStructure(parse);
		Collection tdl = gs.typedDependenciesCCprocessed();
		System.out.println(tdl);
		System.out.println();
	}
}
 
Author: opinion-extraction-propagation | Project: TASC-Tuples | Lines: 27 | Source: ParserDemo.java


Example 3: run

import edu.stanford.nlp.parser.lexparser.LexicalizedParser; // required import
@Override
public void run() {
  try {
    parser = new edu.stanford.nlp.parser.lexparser.LexicalizedParser(filename);
  } catch (Exception ex) {
    JOptionPane.showMessageDialog(ParserPanel.this, "Error loading parser: " + filename, null, JOptionPane.ERROR_MESSAGE);
    setStatus("Error loading parser");
    parser = null;
  } catch (OutOfMemoryError e) {
    JOptionPane.showMessageDialog(ParserPanel.this, "Could not load parser. Out of memory.", null, JOptionPane.ERROR_MESSAGE);
    setStatus("Error loading parser");
    parser = null;
  }

  stopProgressMonitor();
  if (parser != null) {
    setStatus("Loaded parser.");
    parserFileLabel.setText("Parser: " + filename);
    parseButton.setEnabled(true);
    parseNextButton.setEnabled(true);
  }
}
 
Author: FabianFriedrich | Project: Text2Process | Lines: 23 | Source: ParserPanel.java


Example 4: main

import edu.stanford.nlp.parser.lexparser.LexicalizedParser; // required import
public static void main(String[] args) {
  LexicalizedParser lp = new LexicalizedParser("englishPCFG.ser.gz");
  lp.setOptionFlags(new String[]{"-maxLength", "80", "-retainTmpSubcategories"});

  String[] sent = { "This", "is", "an", "easy", "sentence", "." };
  Tree parse = (Tree) lp.apply(Arrays.asList(sent));
  parse.pennPrint();
  System.out.println();

  TreebankLanguagePack tlp = new PennTreebankLanguagePack();
  GrammaticalStructureFactory gsf = tlp.grammaticalStructureFactory();
  GrammaticalStructure gs = gsf.newGrammaticalStructure(parse);
  Collection tdl = gs.typedDependenciesCollapsed();
  System.out.println(tdl);
  System.out.println();

  TreePrint tp = new TreePrint("penn,typedDependenciesCollapsed");
  tp.printTree(parse);
}
 
Author: FabianFriedrich | Project: Text2Process | Lines: 20 | Source: ParserDemo.java


Example 5: T2PStanfordWrapper

import edu.stanford.nlp.parser.lexparser.LexicalizedParser; // required import
/**
 * Sets up the Stanford parser: loads englishFactored.ser.gz from the jar's
 * classpath, falling back to the local resources directory when run from an IDE.
 */
public T2PStanfordWrapper() {
	try {
		ObjectInputStream in;
	    InputStream is;
	    URL u = T2PStanfordWrapper.class.getResource("/englishFactored.ser.gz");
	    if(u == null){
	    	//opening from IDE
	    	is = new FileInputStream(new File("resources/englishFactored.ser.gz"));		    		    	
	    }else{
	    	//opening from jar
	    	URLConnection uc = u.openConnection();
		    is = uc.getInputStream(); 				    
	    }
	    in = new ObjectInputStream(new GZIPInputStream(new BufferedInputStream(is)));  
	    f_parser = new LexicalizedParser(in);
		f_tlp = new PennTreebankLanguagePack(); //new ChineseTreebankLanguagePack();
	    f_gsf = f_tlp.grammaticalStructureFactory();
	}catch(Exception ex) {
		ex.printStackTrace();
	}	    
	//option flags as in the Parser example, but without maxlength
	f_parser.setOptionFlags(new String[]{"-retainTmpSubcategories"});				
	//f_parser.setOptionFlags(new String[]{"-segmentMarkov"});				
	Test.MAX_ITEMS = 4000000; //enables parsing of long sentences
}
 
Author: FabianFriedrich | Project: Text2Process | Lines: 29 | Source: T2PStanfordWrapper.java


Example 6: main

import edu.stanford.nlp.parser.lexparser.LexicalizedParser; // required import
public static void main(String[] args) {
  LexicalizedParser lp = new LexicalizedParser("parsers/englishFactored.ser.gz");
  lp.setOptionFlags(new String[]{"-maxLength", "80", "-retainTmpSubcategories"});

  Tree parse = (Tree) lp.apply("Try this sentence, which is slightly longer.");

  TreebankLanguagePack tlp = new PennTreebankLanguagePack();
  GrammaticalStructureFactory gsf = tlp.grammaticalStructureFactory();
  GrammaticalStructure gs = gsf.newGrammaticalStructure(parse);
  Collection<TypedDependency> tdl = gs.typedDependenciesCollapsed();
  TypedDependency td = tdl.iterator().next();
  TreeGraphNode node = td.dep();
  node = (TreeGraphNode) node.parent();
  node.deepCopy();
}
 
Author: FabianFriedrich | Project: Text2Process | Lines: 19 | Source: ParserDemo.java


Example 7: signature

import edu.stanford.nlp.parser.lexparser.LexicalizedParser; // required import
public static String signature(String annotatorName, Properties props) {
  StringBuilder os = new StringBuilder();
  os.append(annotatorName + ".model:" +
          props.getProperty(annotatorName + ".model",
                  LexicalizedParser.DEFAULT_PARSER_LOC));
  os.append(annotatorName + ".debug:" +
          props.getProperty(annotatorName + ".debug", "false"));
  os.append(annotatorName + ".flags:" +
          props.getProperty(annotatorName + ".flags", ""));
  os.append(annotatorName + ".maxlen:" +
          props.getProperty(annotatorName + ".maxlen", "-1"));
  os.append(annotatorName + ".treemap:" +
          props.getProperty(annotatorName + ".treemap", ""));
  os.append(annotatorName + ".maxtime:" +
          props.getProperty(annotatorName + ".maxtime", "0"));
  os.append(annotatorName + ".buildgraphs:" +
          props.getProperty(annotatorName + ".buildgraphs", "true"));
  os.append(annotatorName + ".nthreads:" + 
            props.getProperty(annotatorName + ".nthreads", props.getProperty("nthreads", "")));
  return os.toString();
}
 
Author: benblamey | Project: stanford-nlp | Lines: 22 | Source: ParserAnnotator.java


Example 8: writeImage

import edu.stanford.nlp.parser.lexparser.LexicalizedParser; // required import
public static void writeImage(String sentence, String outFile, int scale) throws Exception {
    
    LexicalizedParser lp = null;
    try {
        lp = LexicalizedParser.loadModel("edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz");
    } catch (Exception e) {
        System.err.println("Could not load file englishPCFG.ser.gz. Try placing this file in the same directory as Dependencee.jar");
        return;
    }
    
    lp.setOptionFlags(new String[]{"-maxLength", "500", "-retainTmpSubcategories"});
    TokenizerFactory<CoreLabel> tokenizerFactory =
            PTBTokenizer.factory(new CoreLabelTokenFactory(), "");
    List<CoreLabel> wordList = tokenizerFactory.getTokenizer(new StringReader(sentence)).tokenize();
    Tree tree = lp.apply(wordList);
    writeImage(tree, outFile, scale);
    
}
 
Author: awaisathar | Project: dependensee | Lines: 19 | Source: Main.java


Example 9: testWriteImage

import edu.stanford.nlp.parser.lexparser.LexicalizedParser; // required import
/**
 * Test of writeImage method, of class Main.
 */

@Test
public void testWriteImage() throws Exception {
    String text = "A quick brown fox jumped over the lazy dog.";
    TreebankLanguagePack tlp = new PennTreebankLanguagePack();
    GrammaticalStructureFactory gsf = tlp.grammaticalStructureFactory();
    LexicalizedParser lp = LexicalizedParser.loadModel();
    lp.setOptionFlags(new String[]{"-maxLength", "500", "-retainTmpSubcategories"});
    TokenizerFactory<CoreLabel> tokenizerFactory =
            PTBTokenizer.factory(new CoreLabelTokenFactory(), "");
    List<CoreLabel> wordList = tokenizerFactory.getTokenizer(new StringReader(text)).tokenize();
    Tree tree = lp.apply(wordList);
    GrammaticalStructure gs = gsf.newGrammaticalStructure(tree);
    Collection<TypedDependency> tdl = gs.typedDependenciesCollapsed();
    Main.writeImage(tdl, "image.png", 3);
    assert (new File("image.png").exists());
}
 
Author: awaisathar | Project: dependensee | Lines: 21 | Source: MainTest.java


Example 10: demoDP

import edu.stanford.nlp.parser.lexparser.LexicalizedParser; // required import
public static void demoDP(LexicalizedParser lp, String filename) {
  // This option shows loading, sentence-segmenting and tokenizing
  // a file using DocumentPreprocessor
  TreebankLanguagePack tlp = new PennTreebankLanguagePack();
  GrammaticalStructureFactory gsf = tlp.grammaticalStructureFactory();
  // You could also create a tokenizer here (as below) and pass it
  // to DocumentPreprocessor
  for (List<HasWord> sentence : new DocumentPreprocessor(filename)) {
    Tree parse = lp.apply(sentence);
    parse.pennPrint();
    System.out.println();

    GrammaticalStructure gs = gsf.newGrammaticalStructure(parse);
    Collection tdl = gs.typedDependenciesCCprocessed(true);
    System.out.println(tdl);
    System.out.println();
  }
}
 
Author: amark-india | Project: eventspotter | Lines: 19 | Source: ParserDemo.java


Example 11: instantiateStanfordParser

import edu.stanford.nlp.parser.lexparser.LexicalizedParser; // required import
private void instantiateStanfordParser()
    throws ResourceInstantiationException {
  if(stanfordParser != null) return;
  try {
    // String filepath = Files.fileFromURL(parserFile).getAbsolutePath();
    stanfordParser =
        LexicalizedParser.getParserFromSerializedFile(parserFile
            .toExternalForm());
  } catch(Exception e) {
    throw new ResourceInstantiationException(e);
  }
}
 
Author: GateNLP | Project: gateplugin-Stanford_CoreNLP | Lines: 13 | Source: Parser.java


Example 12: initLexResources

import edu.stanford.nlp.parser.lexparser.LexicalizedParser; // required import
private void initLexResources() {
  try {
    options = new Options();
    options.testOptions.verbose = true;
    // Parser
    parser = LexicalizedParser.loadModel(_serializedGrammar);
    //parser = new LexicalizedParser(_serializedGrammar, options);
  } catch( Exception ex ) { ex.printStackTrace(); }

  // Dependency tree info
  TreebankLanguagePack tlp = new PennTreebankLanguagePack();
  gsf = tlp.grammaticalStructureFactory();
}
 
Author: nchambers | Project: schemas | Lines: 14 | Source: DirectoryParser.java


Example 13: main

import edu.stanford.nlp.parser.lexparser.LexicalizedParser; // required import
public static void main(String[] args) throws IOException {
    LexicalizedParser parser = LexicalizedParser.loadModel();
    File[] files = new File(fileDir).listFiles();
    int num = 0;
    double score = 0.0;
    for (File f : files) {
        if (f.isDirectory())
            continue;
        BufferedReader br = new BufferedReader(new FileReader(f.getAbsolutePath()));
        String line = "";
        while ((line = br.readLine()) != null) {
            StringTokenizer st = new StringTokenizer(line, "\\.");
            while (st.hasMoreTokens()) {
                score = score + parser.parse(st.nextToken()).score();
                num++;
            }
        }
        System.out.println(score + " for " + f.getName());
        br.close();
    }
    System.out.println(score + "/" + num);
    System.out.println(parser.parse(s).score());
}
 
Author: siddBanPsu | Project: WikiKreator | Lines: 34 | Source: ParseErrorChecker.java


Example 14: setParse

import edu.stanford.nlp.parser.lexparser.LexicalizedParser; // required import
private void setParse() {
    if (this.segtext == null || this.segtext.length() == 0) {
        StringBuffer sb = new StringBuffer();
        for (String w : seggedText) {
            sb.append(w + " ");
        }
        segtext = sb.toString();
    }
    LexicalizedParser lp = DicModel.loadParser();
    Tree t = lp.parse(segtext);
    ChineseGrammaticalStructure gs = new ChineseGrammaticalStructure(t);
    parseResult = gs.typedDependenciesCollapsed();
}
 
Author: intfloat | Project: weibo-emotion-analyzer | Lines: 15 | Source: Sentence.java


Example 15: main

import edu.stanford.nlp.parser.lexparser.LexicalizedParser; // required import
/**
 * The main method demonstrates the easiest way to load a parser. Simply
 * call loadModel and specify the path of a serialized grammar model, which
 * can be a file, a resource on the classpath, or even a URL. For example,
 * this demonstrates loading from the models jar file, which you therefore
 * need to include in the classpath for ParserDemo to work.
 */
public static void main(String[] args) {
	LexicalizedParser lp = LexicalizedParser
			.loadModel("edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz");
	if (args.length > 0) {
		demoDP(lp, args[0]);
	} else {
		demoAPI(lp);
	}
}
 
Author: opinion-extraction-propagation | Project: TASC-Tuples | Lines: 17 | Source: ParserDemo.java


Example 16: demoAPI

import edu.stanford.nlp.parser.lexparser.LexicalizedParser; // required import
/**
 * demoAPI demonstrates other ways of calling the parser with already
 * tokenized text, or in some cases, raw text that needs to be tokenized as
 * a single sentence. Output is handled with a TreePrint object. Note that
 * the options used when creating the TreePrint can determine what results
 * to print out. Once again, one can capture the output by passing a
 * PrintWriter to TreePrint.printTree.
 *
 * Difference from demoDP: the input here is already tokenized text.
 */
public static void demoAPI(LexicalizedParser lp) {
	// This option shows parsing a list of correctly tokenized words
	String[] sent = { "This", "is", "an", "easy", "sentence", "." };
	List<CoreLabel> rawWords = Sentence.toCoreLabelList(sent);
	Tree parse = lp.apply(rawWords);
	parse.pennPrint();
	System.out.println();

	// This option shows loading and using an explicit tokenizer
	String sent2 = "Hey @Apple, pretty much all your products are amazing. You blow minds every time you launch a new gizmo."
			+ " that said, your hold music is crap";
	TokenizerFactory<CoreLabel> tokenizerFactory = PTBTokenizer.factory(
			new CoreLabelTokenFactory(), "");
	Tokenizer<CoreLabel> tok = tokenizerFactory
			.getTokenizer(new StringReader(sent2));
	List<CoreLabel> rawWords2 = tok.tokenize();
	parse = lp.apply(rawWords2);

	TreebankLanguagePack tlp = new PennTreebankLanguagePack();
	GrammaticalStructureFactory gsf = tlp.grammaticalStructureFactory();
	GrammaticalStructure gs = gsf.newGrammaticalStructure(parse);
	List<TypedDependency> tdl = gs.typedDependenciesCCprocessed();
	System.out.println(tdl);
	System.out.println();

	// You can also use a TreePrint object to print trees and dependencies
	TreePrint tp = new TreePrint("penn,typedDependenciesCollapsed");
	tp.printTree(parse);
}
 
Author: opinion-extraction-propagation | Project: TASC-Tuples | Lines: 42 | Source: ParserDemo.java


Example 17: main

import edu.stanford.nlp.parser.lexparser.LexicalizedParser; // required import
public static void main(String[] args) {
	// TODO Auto-generated method stub
	LexicalizedParser lp = LexicalizedParser
			.loadModel("edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz");
	if (args.length > 0) {
		String contentfilename = args[0];
		String authorfilename = args[1];
		String dependencyfilename = args[2];
		DependencyParser dependencyParser = new DependencyParser();
		ArrayList<ArrayList<String>> ret = dependencyParser
				.getDependencyByLine(lp, contentfilename, authorfilename);
		try {
			BufferedWriter bw = new BufferedWriter(new FileWriter(
					dependencyfilename));
			for (ArrayList<String> arr : ret) {

				bw.write(arr.get(0) + "\t" + arr.get(1) + "\t" + arr.get(2)
						+ "\t" + arr.get(3) + "\t" + arr.get(4) + "\t"
						+ arr.get(5) + "\t" + arr.get(6) + "\n");
			}
			bw.flush();
			bw.close();
		} catch (Exception e) {
			// TODO Auto-generated catch block
			e.printStackTrace();
		}
	} else {
		System.out
				.println("java -jar GenerateDependency.jar contentfilename authorfilename dependencyfilename");
	}
}
 
Author: opinion-extraction-propagation | Project: TASC-Tuples | Lines: 32 | Source: GenerateDependency.java


Example 18: init

import edu.stanford.nlp.parser.lexparser.LexicalizedParser; // required import
public void init() {
  this.lp = LexicalizedParser.loadModel("edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz",
                                        "-maxLength", "80", "-retainTmpSubcategories");
  this.tlp = new PennTreebankLanguagePack();
  this.gsf = this.tlp.grammaticalStructureFactory();
  //this.parsedTree = new ArrayList<DependencyTree>();
  //this.trees = new ArrayList<Tree>();
}
 
Author: dkmfbk | Project: pikes | Lines: 9 | Source: DependenciesBuilder.java


Example 19: instantiateStanfordParser

import edu.stanford.nlp.parser.lexparser.LexicalizedParser; // required import
private void instantiateStanfordParser()
  throws ResourceInstantiationException {
  if(stanfordParser != null) return;
  
  try {
    String filepath = Files.fileFromURL(parserFile).getAbsolutePath();
    stanfordParser = LexicalizedParser.getParserFromSerializedFile(filepath);
  }
  catch(Exception e) {
    throw new ResourceInstantiationException(e);
  }
}
 
Author: vita-us | Project: ViTA | Lines: 13 | Source: Parser.java


Example 20: ParserAnnotator

import edu.stanford.nlp.parser.lexparser.LexicalizedParser; // required import
public ParserAnnotator(LexicalizedParser parser, boolean verbose, int maxSent, Function<Tree, Tree> treeMap) {
  VERBOSE = verbose;
  this.BUILD_GRAPHS = parser.getTLPParams().supportsBasicDependencies();
  this.parser = parser;
  this.maxSentenceLength = maxSent;
  this.treeMap = treeMap;
  this.maxParseTime = 0;
  if (this.BUILD_GRAPHS) {
    TreebankLanguagePack tlp = parser.getTLPParams().treebankLanguagePack();
    this.gsf = tlp.grammaticalStructureFactory(tlp.punctuationWordRejectFilter(), tlp.typedDependencyHeadFinder());
  } else {
    this.gsf = null;
  }
  this.nThreads = 1;
}
 
Author: benblamey | Project: stanford-nlp | Lines: 16 | Source: ParserAnnotator.java



Note: the edu.stanford.nlp.parser.lexparser.LexicalizedParser examples in this article were collected from GitHub, MSDocs, and other source-code hosting platforms, and the snippets were selected from open-source projects contributed by various developers. Copyright of the source code remains with the original authors; consult each project's License before distributing or using the code, and do not republish without permission.

