
Java Filter Class Code Examples


This article collects typical usage examples of the Java class edu.stanford.nlp.util.Filter. If you are wondering what the Filter class does, how to use it, or want to see it in real code, the curated examples below should help.



The Filter class belongs to the edu.stanford.nlp.util package. Twenty code examples of the class are shown below, ordered by popularity.
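Before diving into the examples, here is a minimal, self-contained sketch of the pattern they all share. The `SimpleFilter` interface and `keep` helper below are hypothetical stand-ins for Stanford's `Filter<T>`, shown only to illustrate the anonymous-class idiom used throughout:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class FilterSketch {
    // Hypothetical stand-in for edu.stanford.nlp.util.Filter<T>:
    // a single accept(T) method deciding whether an element is kept.
    public interface SimpleFilter<T> {
        boolean accept(T obj);
    }

    // Keep only the elements the filter accepts.
    public static <T> List<T> keep(List<T> items, SimpleFilter<T> f) {
        List<T> out = new ArrayList<T>();
        for (T item : items) {
            if (f.accept(item)) {
                out.add(item);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // Anonymous-class style used throughout the examples below.
        SimpleFilter<String> nonEmpty = new SimpleFilter<String>() {
            public boolean accept(String s) {
                return !s.isEmpty();
            }
        };
        List<String> kept = keep(Arrays.asList("NP", "", "VP"), nonEmpty);
        System.out.println(kept); // prints [NP, VP]
    }
}
```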

Example 1: spliceOutHelper

import edu.stanford.nlp.util.Filter; // import the required package/class
private List<Tree> spliceOutHelper(Filter<Tree> nodeFilter, TreeFactory tf) {
  // recurse over all children first
  Tree[] kids = children();
  List<Tree> l = new ArrayList<Tree>();
  for (int i = 0; i < kids.length; i++) {
    l.addAll(kids[i].spliceOutHelper(nodeFilter, tf));
  }
  // check if this node is being spliced out
  if (nodeFilter.accept(this)) {
    // no, so add our children and return
    Tree t;
    if ( ! l.isEmpty()) {
      t = tf.newTreeNode(label(), l);
    } else {
      t = tf.newLeaf(label());
    }
    l = new ArrayList<Tree>(1);
    l.add(t);
    return l;
  }
  // we're out, so return our children
  return l;
}
 
Developer: FabianFriedrich | Project: Text2Process | Lines: 24 | Source: Tree.java
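The splice-out behavior can be exercised without the Stanford classes. The tiny `Node`/`splice` sketch below is hypothetical, not Stanford's API: a node the filter rejects is removed and its already-processed children are promoted into its parent's child list, exactly as in spliceOutHelper above:

```java
import java.util.ArrayList;
import java.util.List;

public class SpliceDemo {
    public static class Node {
        public final String label;
        public final List<Node> kids = new ArrayList<Node>();
        public Node(String label, Node... children) {
            this.label = label;
            for (Node c : children) kids.add(c);
        }
    }

    public interface NodeFilter {
        boolean accept(Node n);
    }

    // Mirrors spliceOutHelper: return the node (rebuilt over its processed
    // children) if it is kept, otherwise return the processed children so
    // the caller promotes them in its place.
    public static List<Node> splice(Node n, NodeFilter f) {
        List<Node> processed = new ArrayList<Node>();
        for (Node kid : n.kids) {              // recurse over all children first
            processed.addAll(splice(kid, f));
        }
        if (f.accept(n)) {                     // node is kept
            Node copy = new Node(n.label);
            copy.kids.addAll(processed);
            List<Node> one = new ArrayList<Node>(1);
            one.add(copy);
            return one;
        }
        return processed;                      // spliced out: promote children
    }
}
```

For a tree `(S (X a b) c)` and a filter rejecting `X`, the result is `(S a b c)`: `X` vanishes and its children attach directly to `S`.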


Example 2: prune

import edu.stanford.nlp.util.Filter; // import the required package/class
/**
 * Creates a deep copy of the tree, where all nodes that the filter
 * does not accept and all children of such nodes are pruned.  If all
 * of a node's children are pruned, that node is cut as well.
 * A <code>Filter</code> can assume
 * that it will not be called with a <code>null</code> argument.
 *
 * @param filter the filter to be applied
 * @param tf     the TreeFactory to be used to make new Tree nodes if needed
 * @return a filtered copy of the tree, including the possibility of
 *         <code>null</code> if the root node of the tree is filtered
 */
public Tree prune(Filter<Tree> filter, TreeFactory tf) {
  // is the current node to be pruned?
  if ( ! filter.accept(this)) {
    return null;
  }
  // if not, recurse over all children
  List<Tree> l = new ArrayList<Tree>();
  Tree[] kids = children();
  for (int i = 0; i < kids.length; i++) {
    Tree prunedChild = kids[i].prune(filter, tf);
    if (prunedChild != null) {
      l.add(prunedChild);
    }
  }
  // and check if this node has lost all its children
  if (l.isEmpty() && !(kids.length == 0)) {
    return null;
  }
  // if we're still ok, copy the node
  if (isLeaf()) {
    return tf.newLeaf(label());
  }
  return tf.newTreeNode(label(), l);
}
 
Developer: FabianFriedrich | Project: Text2Process | Lines: 37 | Source: Tree.java
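The pruning logic can likewise be demonstrated on a minimal tree. The `Node` class and `prune` method below are hypothetical stand-ins for `Tree`/`TreeFactory`, used only to show that filter-rejected subtrees and interior nodes that lose all their children both disappear:

```java
import java.util.ArrayList;
import java.util.List;

public class PruneDemo {
    public static class Node {
        public final String label;
        public final List<Node> kids = new ArrayList<Node>();
        public Node(String label, Node... children) {
            this.label = label;
            for (Node c : children) kids.add(c);
        }
    }

    public interface NodeFilter {
        boolean accept(Node n);
    }

    // Mirrors prune() above: drop rejected nodes with their subtrees,
    // and cut interior nodes whose children were all pruned away.
    public static Node prune(Node n, NodeFilter f) {
        if (!f.accept(n)) {
            return null;                      // node itself is pruned
        }
        Node copy = new Node(n.label);
        for (Node kid : n.kids) {
            Node prunedKid = prune(kid, f);
            if (prunedKid != null) {
                copy.kids.add(prunedKid);
            }
        }
        if (copy.kids.isEmpty() && !n.kids.isEmpty()) {
            return null;                      // interior node lost all children
        }
        return copy;
    }
}
```

Pruning `(S (NP john) (-NONE- trace))` with a filter rejecting `-NONE-` yields `(S (NP john))`; if the root's only subtree is rejected, the whole result is `null`, matching the Javadoc above.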


Example 3: getDeps

import edu.stanford.nlp.util.Filter; // import the required package/class
/**
 * Builds a list of typed dependencies using
 * information from a <code>GrammaticalStructure</code>.
 *
 * @param getExtra If true, the list of typed dependencies will contain extra ones.
 *              If false, the list of typed dependencies will respect the tree structure.
 */
private List<TypedDependency> getDeps(boolean getExtra, Filter<TypedDependency> f) {
  List<TypedDependency> basicDep = Generics.newArrayList();

  for (Dependency<Label, Label, Object> d : dependencies()) {
    TreeGraphNode gov = (TreeGraphNode) d.governor();
    TreeGraphNode dep = (TreeGraphNode) d.dependent();
      //System.out.println("Gov: " + gov);
      //System.out.println("Dep: " + dep);
    GrammaticalRelation reln = getGrammaticalRelation(gov, dep);
      //System.out.println("Reln: " + reln);
    basicDep.add(new TypedDependency(reln, gov, dep));
  }
  if (getExtra) {
    TreeGraphNode rootTree = root();
    getDep(rootTree, basicDep, f); // adds stuff to basicDep
  }
  Collections.sort(basicDep);
  return basicDep;
}
 
Developer: FabianFriedrich | Project: Text2Process | Lines: 27 | Source: GrammaticalStructure.java


Example 4: constituents

import edu.stanford.nlp.util.Filter; // import the required package/class
/**
 * Adds the constituents derived from <code>this</code> tree to
 * the ordered <code>Constituent</code> <code>Set</code>, beginning
 * numbering from the second argument and returning the number of
 * the right edge.  The reason for the return of the right frontier
 * is in order to produce bracketings recursively by threading through
 * the daughters of a given tree.
 *
 * @param constituentsSet set of constituents to add results of bracketing
 *                        this tree to
 * @param left            left position to begin labeling the bracketings with
 * @param cf              ConstituentFactory used to build the Constituent objects
 * @param charLevel       If true, compute constituents without respect to whitespace. Otherwise, preserve whitespace boundaries.
 * @param filter          A filter to use to decide whether or not to add a tree as a constituent.
 * @return Index of right frontier of Constituent
 */
private int constituents(Set<Constituent> constituentsSet, int left, ConstituentFactory cf, boolean charLevel, Filter<Tree> filter) {

  if(isPreTerminal())
    return left + ((charLevel) ? firstChild().value().length() : 1);

  int position = left;

  // System.err.println("In bracketing trees left is " + left);
  // System.err.println("  label is " + label() +
  //                       "; num daughters: " + children().length);
  Tree[] kids = children();
  for (Tree kid : kids) {
    position = kid.constituents(constituentsSet, position, cf, charLevel, filter);
    // System.err.println("  position went to " + position);
  }

  if (filter == null || filter.accept(this)) {
    //Compute span of entire tree at the end of recursion
    constituentsSet.add(cf.newConstituent(left, position - 1, label(), score()));
  }
  // System.err.println("  added " + label());
  return position;
}
 
Developer: benblamey | Project: stanford-nlp | Lines: 40 | Source: Tree.java


Example 5: prune

import edu.stanford.nlp.util.Filter; // import the required package/class
/**
 * Creates a deep copy of the tree, where all nodes that the filter
 * does not accept and all children of such nodes are pruned.  If all
 * of a node's children are pruned, that node is cut as well.
 * A <code>Filter</code> can assume
 * that it will not be called with a <code>null</code> argument.
 *
 * @param filter the filter to be applied
 * @param tf     the TreeFactory to be used to make new Tree nodes if needed
 * @return a filtered copy of the tree, including the possibility of
 *         <code>null</code> if the root node of the tree is filtered
 */
public Tree prune(Filter<Tree> filter, TreeFactory tf) {
  // is the current node to be pruned?
  if ( ! filter.accept(this)) {
    return null;
  }
  // if not, recurse over all children
  List<Tree> l = new ArrayList<Tree>();
  Tree[] kids = children();
  for (int i = 0; i < kids.length; i++) {
    Tree prunedChild = kids[i].prune(filter, tf);
    if (prunedChild != null) {
      l.add(prunedChild);
    }
  }
  // and check if this node has lost all its children
  if (l.isEmpty() && !(kids.length == 0)) {
    return null;
  }
  // if we're still ok, copy the node
  if (isLeaf()) {
    return tf.newLeaf(label());
  }
  return tf.newTreeNode(label(), l);
}
 
Developer: chbrown | Project: stanford-parser | Lines: 37 | Source: Tree.java


Example 6: FrenchTreeNormalizer

import edu.stanford.nlp.util.Filter; // import the required package/class
public FrenchTreeNormalizer() {
  super(new FrenchTreebankLanguagePack());

  rootLabel = tlp.startSymbol();

  aOverAFilter = new FrenchAOverAFilter();

  emptyFilter = new Filter<Tree>() {
    private static final long serialVersionUID = -22673346831392110L;
    public boolean accept(Tree tree) {
      if(tree.isPreTerminal() && (tree.firstChild().value().equals("") || tree.firstChild().value().equals("-NONE-"))) {
        return false;
      }
      return true;
    }
  };
}
 
Developer: amark-india | Project: eventspotter | Lines: 18 | Source: FrenchTreeNormalizer.java


Example 7: spliceOutHelper

import edu.stanford.nlp.util.Filter; // import the required package/class
private List<Tree> spliceOutHelper(Filter<Tree> nodeFilter, TreeFactory tf) {
  // recurse over all children first
  Tree[] kids = children();
  List<Tree> l = new ArrayList<Tree>();
  for (Tree kid : kids) {
    l.addAll(kid.spliceOutHelper(nodeFilter, tf));
  }
  // check if this node is being spliced out
  if (nodeFilter.accept(this)) {
    // no, so add our children and return
    Tree t;
    if ( ! l.isEmpty()) {
      t = tf.newTreeNode(label(), l);
    } else {
      t = tf.newLeaf(label());
    }
    l = new ArrayList<Tree>(1);
    l.add(t);
    return l;
  }
  // we're out, so return our children
  return l;
}
 
Developer: paulirwin | Project: Stanford.NER.Net | Lines: 24 | Source: Tree.java


Example 8: NegraPennTreeNormalizer

import edu.stanford.nlp.util.Filter; // import the required package/class
public NegraPennTreeNormalizer(TreebankLanguagePack tlp, int nodeCleanup) {
  this.tlp = tlp;
  this.nodeCleanup = nodeCleanup;

  emptyFilter = new Filter<Tree>() {
    private static final long serialVersionUID = -606371737889816130L;
    public boolean accept(Tree t) {
      Tree[] kids = t.children();
      Label l = t.label();
      if ((l != null) && l.value() != null && (l.value().matches("^\\*T.*$")) && !t.isLeaf() && kids.length == 1 && kids[0].isLeaf())
        return false;
      return true;
    }
  };
  aOverAFilter = new Filter<Tree>() {
    private static final long serialVersionUID = -606371737889816130L;
    public boolean accept(Tree t) {
      if (t.isLeaf() || t.isPreTerminal() || t.children().length != 1)
        return true;
      if (t.label() != null && t.label().equals(t.children()[0].label()))
        return false;
      return true;
    }
  };
}
 
Developer: amark-india | Project: eventspotter | Lines: 26 | Source: NegraPennTreeNormalizer.java


Example 9: toTypedDependencies

import edu.stanford.nlp.util.Filter; // import the required package/class
/**
 * Transforms a parse tree into a list of TypedDependency instances.
 *
 * @param tree the parse tree to analyze
 * @return the typed dependencies derived from the tree
 */
public static List<TypedDependency> toTypedDependencies(Tree tree) {
	TreebankLanguagePack tlp = new PennTreebankLanguagePack();
	Filter<String> filter = Filters.acceptFilter();
	GrammaticalStructureFactory gsf = tlp.grammaticalStructureFactory(filter, tlp.typedDependencyHeadFinder());
	GrammaticalStructure gs = gsf.newGrammaticalStructure(tree);
	return (List<TypedDependency>) gs.typedDependencies();
}
 
Developer: hakchul77 | Project: irnlp_toolkit | Lines: 14 | Source: StanfordNlpWrapper.java


Example 10: transformTree

import edu.stanford.nlp.util.Filter; // import the required package/class
public Tree transformTree(Tree t) {
  if (tlp.isStartSymbol(t.value())) {
    t = t.firstChild();
  }
  Tree result = t.deepCopy();
  result = result.prune(new Filter<Tree>() {
    private static final long serialVersionUID = 1669994102700201499L;

    public boolean accept(Tree tree) {
      return collinizerPruneRegex == null || tree.label() == null || ! collinizerPruneRegex.matcher(tree.label().value()).matches();
    }
  });
  if (result == null) {
    return null;
  }
  for (Tree node : result) {
    // System.err.print("ATB collinizer: " + node.label().value()+" --> ");
    if (node.label() != null && ! node.isLeaf()) {
      node.label().setValue(tlp.basicCategory(node.label().value()));
    }
    if (node.label().value().equals("ADVP")) {
      node.label().setValue("PRT");
    }
    // System.err.println(node.label().value());
  }
  if (retainPunctuation) {
    return result;
  } else {
    return result.prune(punctuationRejecter);
  }
}
 
Developer: FabianFriedrich | Project: Text2Process | Lines: 32 | Source: ArabicTreebankParserParams.java


Example 11: totalIntCount

import edu.stanford.nlp.util.Filter; // import the required package/class
/**
 * Returns the total count for all objects in this Counter that pass the
 * given Filter. Passing in a filter that always returns true is equivalent
 * to calling {@link #totalCount()}.
 */
public int totalIntCount(Filter<E> filter) {
  int total = 0;
  for (E key : map.keySet()) {
    if (filter.accept(key)) {
      total += getIntCount(key);
    }
  }
  return (total);
}
 
Developer: FabianFriedrich | Project: Text2Process | Lines: 15 | Source: IntCounter.java
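The filtered-total pattern above works with any key-to-count map. The `totalWhere` helper below is a hypothetical analog of `totalIntCount`, not Stanford's `IntCounter` API, summing only the counts whose keys pass the filter:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class CountFilterDemo {
    public interface KeyFilter<E> {
        boolean accept(E key);
    }

    // Analogous to totalIntCount(filter): sum counts for accepted keys only.
    public static <E> int totalWhere(Map<E, Integer> counts, KeyFilter<E> filter) {
        int total = 0;
        for (Map.Entry<E, Integer> e : counts.entrySet()) {
            if (filter.accept(e.getKey())) {
                total += e.getValue();
            }
        }
        return total;
    }

    public static void main(String[] args) {
        Map<String, Integer> counts = new LinkedHashMap<String, Integer>();
        counts.put("the", 3);
        counts.put("cat", 2);
        counts.put(",", 5);
        // Count only alphabetic tokens, skipping punctuation keys.
        KeyFilter<String> alpha = new KeyFilter<String>() {
            public boolean accept(String s) { return s.matches("[a-zA-Z]+"); }
        };
        System.out.println(totalWhere(counts, alpha)); // prints 5
    }
}
```

An always-true filter reproduces the plain total, mirroring the `totalCount()` remark in the Javadoc above.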


Example 12: mapDependencies

import edu.stanford.nlp.util.Filter; // import the required package/class
/**
 * Return a set of Label-Label dependencies, represented as
 * Dependency objects, for the Tree.  The Labels are the ones of the leaf
 * nodes of the tree, without mucking with them.
 *
 * @param f  Dependencies are excluded for which the Dependency is not
 *           accepted by the Filter
 * @param hf The HeadFinder to use to identify the head of constituents.
 *           The code assumes
 *           that it can use <code>headPreTerminal(hf)</code> to find a
 *           tag and word to make a CyclicCoreLabel.
 * @return Set of dependencies (each a <code>Dependency</code> between two
 *           <code>CyclicCoreLabel</code>s, which each contain a tag(), word(),
 *           and value(), the last two of which are identical).
 */
public Set<Dependency<Label, Label, Object>> mapDependencies(Filter<Dependency<Label, Label, Object>> f, HeadFinder hf) {
  if (hf == null) {
    throw new IllegalArgumentException("mapDependencies: need headfinder");
  }
  Set<Dependency<Label, Label, Object>> deps = new HashSet<Dependency<Label, Label, Object>>();
  for (Tree node : this) {
    if (node.isLeaf() || node.children().length < 2) {
      continue;
    }
    // every child with a different head (or repeated) is an argument
    // Label l = node.label();
    // System.err.println("doing kids of label: " + l);
    //Tree hwt = node.headPreTerminal(hf);
    Tree hwt = node.headTerminal(hf);
    // System.err.println("have hf, found head preterm: " + hwt);
    if (hwt == null) {
      throw new IllegalStateException("mapDependencies: headFinder failed!");
    }

    for (Tree child : node.children()) {
      // Label dl = child.label();
      // Tree dwt = child.headPreTerminal(hf);
      Tree dwt = child.headTerminal(hf);
      if (dwt == null) {
        throw new IllegalStateException("mapDependencies: headFinder failed!");
      }
      //System.err.println("kid is " + dl);
       //System.err.println("transformed to " + dml.toString("value{map}"));
      if (dwt != hwt) {
        Dependency<Label, Label, Object> p = new UnnamedDependency(hwt.label(), dwt.label());
        if (f.accept(p)) {
          deps.add(p);
        }
      }
    }
  }
  return deps;
}
 
Developer: FabianFriedrich | Project: Text2Process | Lines: 54 | Source: Tree.java


Example 13: dependencies

import edu.stanford.nlp.util.Filter; // import the required package/class
/**
 * Return a set of node-node dependencies, represented as Dependency
 * objects, for the Tree.
 *
 * @param hf The HeadFinder to use to identify the head of constituents.
 *           If this is <code>null</code>, then nodes are assumed to already
 *           be marked with their heads.
 * @return Set of dependencies (each a <code>Dependency</code>)
 */
@Override
public Set<Dependency<Label, Label, Object>> dependencies(Filter<Dependency<Label, Label, Object>> f, HeadFinder hf) {
  Set<Dependency<Label, Label, Object>> deps = Generics.newHashSet();
  for (Tree t : this) {

    TreeGraphNode node = safeCast(t);
    if (node == null || node.isLeaf() || node.children().length < 2) {
      continue;
    }

    TreeGraphNode headWordNode;
    if (hf != null) {
      headWordNode = safeCast(node.headTerminal(hf));
    } else {
      headWordNode = node.headWordNode();
    }

    for (Tree k : node.children()) {
      TreeGraphNode kid = safeCast(k);
      if (kid == null) {
        continue;
      }
      TreeGraphNode kidHeadWordNode;
      if (hf != null) {
        kidHeadWordNode = safeCast(kid.headTerminal(hf));
      } else {
        kidHeadWordNode = kid.headWordNode();
      }

      if (headWordNode != null && headWordNode != kidHeadWordNode) {
        Dependency<Label, Label, Object> d = new UnnamedDependency(headWordNode, kidHeadWordNode);
        if (f.accept(d)) {
          deps.add(d);
        }
      }
    }
  }
  return deps;
}
 
Developer: FabianFriedrich | Project: Text2Process | Lines: 49 | Source: TreeGraphNode.java


Example 14: GrammaticalStructure

import edu.stanford.nlp.util.Filter; // import the required package/class
/**
 * Create a new GrammaticalStructure, analyzing the parse tree and
 * populate the GrammaticalStructure with as many labeled
 * grammatical relation arcs as possible.
 *
 * @param t             A Tree to analyze
 * @param relations     A set of GrammaticalRelations to consider
 * @param relationsLock Something needed to make this thread-safe
 * @param hf            A HeadFinder for analysis
 * @param puncFilter    A Filter to reject punctuation. To delete punctuation
 *                      dependencies, this filter should return false on
 *                      punctuation word strings, and true otherwise.
 *                      If punctuation dependencies should be kept, you
 *                      should pass in a Filters.&lt;String&gt;acceptFilter().
 */
public GrammaticalStructure(Tree t, Collection<GrammaticalRelation> relations,
                            Lock relationsLock, HeadFinder hf, Filter<String> puncFilter) {
  super(t); // makes a Tree with TreeGraphNode nodes
  // add head word and tag to phrase nodes
  root.percolateHeads(hf);
  // add dependencies, using heads
  NoPunctFilter puncDepFilter = new NoPunctFilter(puncFilter);
  NoPunctTypedDependencyFilter puncTypedDepFilter = new NoPunctTypedDependencyFilter(puncFilter);
  dependencies = root.dependencies(puncDepFilter);
  for (Dependency<Label, Label, Object> p : dependencies) {
    //System.out.println("first dep found " + p);
    TreeGraphNode gov = (TreeGraphNode) p.governor();
    TreeGraphNode dep = (TreeGraphNode) p.dependent();
    dep.addArc(GrammaticalRelation.getAnnotationClass(GOVERNOR), gov);
  }
  // analyze the root (and its descendants, recursively)
  if (relationsLock != null) {
    relationsLock.lock();
  }
  try {
    analyzeNode(root, root, relations);
  }
  finally {
    if (relationsLock != null) {
      relationsLock.unlock();
    }
  }
  // add typed dependencies
  typedDependencies = getDeps(false, puncTypedDepFilter);
  allTypedDependencies = getDeps(true, puncTypedDepFilter);
}
 
Developer: FabianFriedrich | Project: Text2Process | Lines: 47 | Source: GrammaticalStructure.java


Example 15: getDep

import edu.stanford.nlp.util.Filter; // import the required package/class
/** Looks through the tree t and adds to the List basicDep dependencies
 *  which aren't in it but which satisfy the filter f.
 *
 * @param t The tree to examine (not changed)
 * @param basicDep The list of dependencies which may be augmented
 * @param f Additional dependencies are added only if they pass this filter
 */
private static void getDep(TreeGraphNode t, List<TypedDependency> basicDep,
                           Filter<TypedDependency> f) {
  if (t.numChildren() > 0) {          // don't do leaves
    Map<Class<? extends CoreAnnotation>, Set<TreeGraphNode>> depMap = getAllDependents(t);
    for (Class<? extends CoreAnnotation> depName : depMap.keySet()) {
      for (TreeGraphNode depNode : depMap.get(depName)) {
        TreeGraphNode gov = t.headWordNode();
        TreeGraphNode dep = depNode.headWordNode();
        if (gov != dep) {
          List<GrammaticalRelation> rels = getListGrammaticalRelation(t, depNode);
          if (!rels.isEmpty()) {
            for (GrammaticalRelation rel : rels) {
              TypedDependency newDep = new TypedDependency(rel, gov, dep);
              if (!basicDep.contains(newDep) && f.accept(newDep)) {
                newDep.setExtra();
                basicDep.add(newDep);
              }
            }
          }
        }
      }
    }
    // now recurse into children
    for (Tree kid : t.children()) {
      getDep((TreeGraphNode) kid, basicDep, f);
    }
  }
}
 
Developer: FabianFriedrich | Project: Text2Process | Lines: 36 | Source: GrammaticalStructure.java


Example 16: GrammaticalStructure

import edu.stanford.nlp.util.Filter; // import the required package/class
/**
 * Create a new GrammaticalStructure, analyzing the parse tree and
 * populate the GrammaticalStructure with as many labeled
 * grammatical relation arcs as possible.
 *
 * @param t             A Tree to analyze
 * @param relations     A set of GrammaticalRelations to consider
 * @param relationsLock Something needed to make this thread-safe
 * @param hf            A HeadFinder for analysis
 * @param puncFilter    A Filter to reject punctuation. To delete punctuation
 *                      dependencies, this filter should return false on
 *                      punctuation word strings, and true otherwise.
 *                      If punctuation dependencies should be kept, you
 *                      should pass in a Filters.&lt;String&gt;acceptFilter().
 */
public GrammaticalStructure(Tree t, Collection<GrammaticalRelation> relations,
                            Lock relationsLock, HeadFinder hf, Filter<String> puncFilter) {
  super(t); // makes a Tree with TreeGraphNode nodes
  // add head word and tag to phrase nodes
  root.percolateHeads(hf);
  if (root.value() == null) {
    root.setValue("ROOT");  // todo: cdm: it doesn't seem like this line should be here
  }
  // add dependencies, using heads
  this.puncFilter = puncFilter;
  NoPunctFilter puncDepFilter = new NoPunctFilter(puncFilter);
  NoPunctTypedDependencyFilter puncTypedDepFilter = new NoPunctTypedDependencyFilter(puncFilter);
  dependencies = root.dependencies(puncDepFilter, null);
  for (Dependency<Label, Label, Object> p : dependencies) {
    //System.out.println("dep found " + p);
    TreeGraphNode gov = (TreeGraphNode) p.governor();
    TreeGraphNode dep = (TreeGraphNode) p.dependent();
    dep.addArc(GrammaticalRelation.getAnnotationClass(GOVERNOR), gov);
  }
  // analyze the root (and its descendants, recursively)
  if (relationsLock != null) {
    relationsLock.lock();
  }
  try {
    analyzeNode(root, root, relations);
  }
  finally {
    if (relationsLock != null) {
      relationsLock.unlock();
    }
  }
  // add typed dependencies
  typedDependencies = getDeps(false, puncTypedDepFilter);
  allTypedDependencies = getDeps(true, puncTypedDepFilter);
}
 
Developer: jaimeguzman | Project: data_mining | Lines: 51 | Source: GrammaticalStructure.java


Example 17: mapDependencies

import edu.stanford.nlp.util.Filter; // import the required package/class
/**
 * Return a set of Label-Label dependencies, represented as
 * Dependency objects, for the Tree.  The Labels are the ones of the leaf
 * nodes of the tree, without mucking with them.
 *
 * @param f  Dependencies are excluded for which the Dependency is not
 *           accepted by the Filter
 * @param hf The HeadFinder to use to identify the head of constituents.
 *           The code assumes
 *           that it can use <code>headPreTerminal(hf)</code> to find a
 *           tag and word to make a CoreLabel.
 * @return Set of dependencies (each a <code>Dependency</code> between two
 *           <code>CoreLabel</code>s, which each contain a tag(), word(),
 *           and value(), the last two of which are identical).
 */
public Set<Dependency<Label, Label, Object>> mapDependencies(Filter<Dependency<Label, Label, Object>> f, HeadFinder hf) {
  if (hf == null) {
    throw new IllegalArgumentException("mapDependencies: need headfinder");
  }
  Set<Dependency<Label, Label, Object>> deps = Generics.newHashSet();
  for (Tree node : this) {
    if (node.isLeaf() || node.children().length < 2) {
      continue;
    }
    // Label l = node.label();
    // System.err.println("doing kids of label: " + l);
    //Tree hwt = node.headPreTerminal(hf);
    Tree hwt = node.headTerminal(hf);
    // System.err.println("have hf, found head preterm: " + hwt);
    if (hwt == null) {
      throw new IllegalStateException("mapDependencies: headFinder failed!");
    }

    for (Tree child : node.children()) {
      // Label dl = child.label();
      // Tree dwt = child.headPreTerminal(hf);
      Tree dwt = child.headTerminal(hf);
      if (dwt == null) {
        throw new IllegalStateException("mapDependencies: headFinder failed!");
      }
      //System.err.println("kid is " + dl);
       //System.err.println("transformed to " + dml.toString("value{map}"));
      if (dwt != hwt) {
        Dependency<Label, Label, Object> p = new UnnamedDependency(hwt.label(), dwt.label());
        if (f.accept(p)) {
          deps.add(p);
        }
      }
    }
  }
  return deps;
}
 
Developer: paulirwin | Project: Stanford.NER.Net | Lines: 53 | Source: Tree.java


Example 18: mapDependencies

import edu.stanford.nlp.util.Filter; // import the required package/class
/**
 * Return a set of Label-Label dependencies, represented as
 * Dependency objects, for the Tree.  The Labels are the ones of the leaf
 * nodes of the tree, without mucking with them.
 *
 * @param f  Dependencies are excluded for which the Dependency is not
 *           accepted by the Filter
 * @param hf The HeadFinder to use to identify the head of constituents.
 *           The code assumes
 *           that it can use <code>headPreTerminal(hf)</code> to find a
 *           tag and word to make a CyclicCoreLabel.
 * @return Set of dependencies (each a <code>Dependency</code> between two
 *           <code>CyclicCoreLabel</code>s, which each contain a tag(), word(),
 *           and value(), the last two of which are identical).
 */
public Set<Dependency<Label, Label, Object>> mapDependencies(Filter<Dependency<Label, Label, Object>> f, HeadFinder hf) {
  if (hf == null) {
    throw new IllegalArgumentException("mapDependencies: need headfinder");
  }
  Set<Dependency<Label, Label, Object>> deps = new HashSet<Dependency<Label, Label, Object>>();
  for (Tree node : this) {
    if (node.isLeaf() || node.children().length < 2) {
      continue;
    }
    // Label l = node.label();
    // System.err.println("doing kids of label: " + l);
    //Tree hwt = node.headPreTerminal(hf);
    Tree hwt = node.headTerminal(hf);
    // System.err.println("have hf, found head preterm: " + hwt);
    if (hwt == null) {
      throw new IllegalStateException("mapDependencies: headFinder failed!");
    }

    for (Tree child : node.children()) {
      // Label dl = child.label();
      // Tree dwt = child.headPreTerminal(hf);
      Tree dwt = child.headTerminal(hf);
      if (dwt == null) {
        throw new IllegalStateException("mapDependencies: headFinder failed!");
      }
      //System.err.println("kid is " + dl);
       //System.err.println("transformed to " + dml.toString("value{map}"));
      if (dwt != hwt) {
        Dependency<Label, Label, Object> p = new UnnamedDependency(hwt.label(), dwt.label());
        if (f.accept(p)) {
          deps.add(p);
        }
      }
    }
  }
  return deps;
}
 
Developer: chbrown | Project: stanford-parser | Lines: 53 | Source: Tree.java


Example 19: getTreeDeps

import edu.stanford.nlp.util.Filter; // import the required package/class
/** Looks through the tree t and adds to the List basicDep dependencies
 *  which aren't in it but which satisfy both filters.
 *
 * @param t The tree to examine (not changed)
 * @param basicDep The list of dependencies which may be augmented
 * @param puncTypedDepFilter Additional dependencies are added only if they pass this filter
 * @param extraTreeDepFilter Additional dependencies must also pass this filter
 */
private static void getTreeDeps(TreeGraphNode t, List<TypedDependency> basicDep,
                                Filter<TypedDependency> puncTypedDepFilter,
                                Filter<TypedDependency> extraTreeDepFilter) {
  if (t.isPhrasal()) {          // don't do leaves of POS tags (chris changed this from numChildren > 0 in 2010)
    Map<Class<? extends GrammaticalRelationAnnotation>, Set<TreeGraphNode>> depMap = getAllDependents(t);
    for (Class<? extends GrammaticalRelationAnnotation> depName : depMap.keySet()) {
      for (TreeGraphNode depNode : depMap.get(depName)) {
        TreeGraphNode gov = t.headWordNode();
        TreeGraphNode dep = depNode.headWordNode();
        if (gov != dep) {
          List<GrammaticalRelation> rels = getListGrammaticalRelation(t, depNode);
          if (!rels.isEmpty()) {
            for (GrammaticalRelation rel : rels) {
              TypedDependency newDep = new TypedDependency(rel, gov, dep);
              if (!basicDep.contains(newDep) && puncTypedDepFilter.accept(newDep) && extraTreeDepFilter.accept(newDep)) {
                newDep.setExtra();
                basicDep.add(newDep);
              }
            }
          }
        }
      }
    }
    // now recurse into children
    for (Tree kid : t.children()) {
      getTreeDeps((TreeGraphNode) kid, basicDep, puncTypedDepFilter, extraTreeDepFilter);
    }
  }
}
 
Developer: paulirwin | Project: Stanford.NER.Net | Lines: 37 | Source: GrammaticalStructure.java


Example 20: DependencyEval

import edu.stanford.nlp.util.Filter; // import the required package/class
/** 
 * @param punctFilter A filter that accepts punctuation words.
 */
public DependencyEval(String str, boolean runningAverages, Filter<String> punctFilter) {
  super(str, runningAverages);
  this.punctFilter = punctFilter;
}
 
Developer: FabianFriedrich | Project: Text2Process | Lines: 8 | Source: DependencyEval.java



Note: The edu.stanford.nlp.util.Filter examples in this article were collected from source code and documentation hosted on platforms such as GitHub and MSDocs. The snippets come from open-source projects contributed by various developers; copyright remains with the original authors, and distribution and use should follow each project's license. Do not republish without permission.

