
Java Distribution Class Code Examples


This article collects typical usage examples of the Java class edu.stanford.nlp.stats.Distribution. If you are wondering what the Distribution class does, or how to use it in practice, the selected examples below may help.



The Distribution class belongs to the edu.stanford.nlp.stats package. Seven code examples are shown below, sorted by popularity by default. You can upvote the examples you find useful; your feedback helps the system recommend better Java code examples.

Example 1: getSegmentedWordLengthDistribution

import edu.stanford.nlp.stats.Distribution; // import the dependent package/class
private Distribution<Integer> getSegmentedWordLengthDistribution(Treebank tb) {
  // CharacterLevelTagExtender ext = new CharacterLevelTagExtender();
  ClassicCounter<Integer> c = new ClassicCounter<Integer>();
  for (Iterator iterator = tb.iterator(); iterator.hasNext();) {
    Tree gold = (Tree) iterator.next();
    StringBuilder goldChars = new StringBuilder();
    Sentence goldYield = gold.yield();
    for (Iterator wordIter = goldYield.iterator(); wordIter.hasNext();) {
      Word word = (Word) wordIter.next();
      goldChars.append(word);
    }
    Sentence ourWords = segmentWords(goldChars.toString());
    for (int i = 0; i < ourWords.size(); i++) {
      c.incrementCount(Integer.valueOf(ourWords.get(i).toString().length()));
    }
  }
  return Distribution.getDistribution(c);
}
 
Developer: FabianFriedrich, Project: Text2Process, Lines of code: 19, Source: ChineseMarkovWordSegmenter.java
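Example 1 hinges on Distribution.getDistribution, which turns a counter's raw counts into a probability distribution. The following dependency-free sketch shows the normalization it performs; the class name NormalizeSketch and the use of a plain HashMap in place of ClassicCounter are illustrative assumptions, not Stanford NLP's actual implementation.

```java
import java.util.HashMap;
import java.util.Map;

public class NormalizeSketch {
    // Normalize raw counts into probabilities: p(k) = count(k) / total,
    // mirroring what Distribution.getDistribution does for a counter.
    static Map<Integer, Double> normalize(Map<Integer, Integer> counts) {
        double total = 0;
        for (int c : counts.values()) total += c;
        Map<Integer, Double> dist = new HashMap<>();
        for (Map.Entry<Integer, Integer> e : counts.entrySet()) {
            dist.put(e.getKey(), e.getValue() / total);
        }
        return dist;
    }

    public static void main(String[] args) {
        // Word-length counts like those gathered over the segmenter's output
        Map<Integer, Integer> wordLengths = new HashMap<>();
        wordLengths.put(1, 6); // six one-character words
        wordLengths.put(2, 3); // three two-character words
        wordLengths.put(3, 1); // one three-character word
        Map<Integer, Double> d = normalize(wordLengths);
        System.out.println(d.get(1)); // 0.6
        System.out.println(d.get(2)); // 0.3
    }
}
```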


Example 2: finishTraining

import edu.stanford.nlp.stats.Distribution; // import the dependent package/class
@Override
public void finishTraining() {
  lex.finishTraining();

  int numTags = tagIndex.size();
  POSes = new HashSet<String>(tagIndex.objectsList());
  initialPOSDist = Distribution.laplaceSmoothedDistribution(initial, numTags, 0.5);
  markovPOSDists = new HashMap<String, Distribution>();
  Set entries = ruleCounter.lowestLevelCounterEntrySet();
  for (Iterator iter = entries.iterator(); iter.hasNext();) {
    Map.Entry entry = (Map.Entry) iter.next();
    //      Map.Entry<List<String>, Counter> entry = (Map.Entry<List<String>, Counter>) iter.next();
    Distribution d = Distribution.laplaceSmoothedDistribution((ClassicCounter) entry.getValue(), numTags, 0.5);
    markovPOSDists.put(((List<String>) entry.getKey()).get(0), d);
  }
}
 
Developer: amark-india, Project: eventspotter, Lines of code: 17, Source: ChineseMarkovWordSegmenter.java
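Examples 2 and 5 both call Distribution.laplaceSmoothedDistribution(counter, numTags, 0.5), which reserves probability mass for unseen tags. A minimal sketch of the standard add-lambda formula it applies, p(k) = (count(k) + λ) / (total + λ·N); the class LaplaceSketch and helper smooth are hypothetical names, and Stanford's internals may differ:

```java
import java.util.HashMap;
import java.util.Map;

public class LaplaceSketch {
    // Laplace (add-lambda) smoothing over numKeys possible outcomes:
    // p(k) = (count(k) + lambda) / (total + lambda * numKeys).
    static Map<String, Double> smooth(Map<String, Integer> counts, int numKeys, double lambda) {
        double total = 0;
        for (int c : counts.values()) total += c;
        double denom = total + lambda * numKeys;
        Map<String, Double> dist = new HashMap<>();
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            dist.put(e.getKey(), (e.getValue() + lambda) / denom);
        }
        return dist;
    }

    public static void main(String[] args) {
        Map<String, Integer> tagCounts = new HashMap<>();
        tagCounts.put("NN", 7);
        tagCounts.put("VV", 2);
        // 10 possible tags; each unseen tag implicitly gets lambda / denom mass
        Map<String, Double> d = smooth(tagCounts, 10, 0.5);
        System.out.println(d.get("NN")); // (7 + 0.5) / (9 + 5) ~= 0.536
    }
}
```

With lambda = 0.5 and numTags possible tags, no tag's probability is ever zero, which keeps the Markov tag model from assigning zero likelihood to unseen transitions.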


Example 3: getSegmentedWordLengthDistribution

import edu.stanford.nlp.stats.Distribution; // import the dependent package/class
private Distribution<Integer> getSegmentedWordLengthDistribution(Treebank tb) {
  // CharacterLevelTagExtender ext = new CharacterLevelTagExtender();
  ClassicCounter<Integer> c = new ClassicCounter<Integer>();
  for (Iterator iterator = tb.iterator(); iterator.hasNext();) {
    Tree gold = (Tree) iterator.next();
    StringBuilder goldChars = new StringBuilder();
    ArrayList goldYield = gold.yield();
    for (Iterator wordIter = goldYield.iterator(); wordIter.hasNext();) {
      Word word = (Word) wordIter.next();
      goldChars.append(word);
    }
    List<HasWord> ourWords = segment(goldChars.toString());
    for (int i = 0; i < ourWords.size(); i++) {
      c.incrementCount(Integer.valueOf(ourWords.get(i).word().length()));
    }
  }
  return Distribution.getDistribution(c);
}
 
Developer: amark-india, Project: eventspotter, Lines of code: 19, Source: ChineseMarkovWordSegmenter.java


Example 4: argVectorsDiffer

import edu.stanford.nlp.stats.Distribution; // import the dependent package/class
private boolean argVectorsDiffer(Counter<String> args1, Counter<String> args2) {
  System.out.println("argVectorsDiffer top!");
  Distribution<String> dist1 = Distribution.getDistribution(args1);
  Distribution<String> dist2 = Distribution.getDistribution(args2);

  Set<String> argdiffs = new HashSet<String>();

  for( String token : dist1.keySet() ) {
    double prob1 = dist1.getCount(token);
    if( prob1 > 0.02 ) {
      double prob2 = dist2.getCount(token);
      double ratio = (prob1 < prob2 ? prob1 / prob2 : prob2 / prob1);
      System.out.printf("- %s\t%.4f\t%.4f\tratio=%.4f\n", token, prob1, prob2, ratio);
      if( ratio < 0.2 ) {
        argdiffs.add(token);
        System.out.println("  arg differs: " + token);
      }
    }
  }

  if( argdiffs.size() >= 2 ) {
    System.out.println("Arg vectors differ!!");
    return true;
  }

  return false;
}
 
Developer: nchambers, Project: schemas, Lines of code: 28, Source: SlotInducer.java
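Example 4 compares two normalized distributions token by token, flagging a token when its probabilities differ by more than a factor of five (ratio below 0.2) and declaring the vectors different when at least two tokens are flagged. The comparison logic can be sketched without the Stanford dependency; ArgDiffSketch and vectorsDiffer are illustrative names, with plain maps standing in for Distribution objects:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class ArgDiffSketch {
    // Flag tokens whose probabilities differ by more than 5x (ratio < 0.2),
    // considering only tokens with p1 > 0.02 -- the same thresholds
    // hard-coded in argVectorsDiffer above.
    static boolean vectorsDiffer(Map<String, Double> dist1, Map<String, Double> dist2) {
        Set<String> diffs = new HashSet<>();
        for (Map.Entry<String, Double> e : dist1.entrySet()) {
            double p1 = e.getValue();
            if (p1 > 0.02) {
                double p2 = dist2.getOrDefault(e.getKey(), 0.0);
                double ratio = (p1 < p2) ? p1 / p2 : p2 / p1;
                if (ratio < 0.2) diffs.add(e.getKey());
            }
        }
        // "Differ" means at least two strongly mismatched tokens
        return diffs.size() >= 2;
    }

    public static void main(String[] args) {
        Map<String, Double> a = new HashMap<>();
        a.put("x", 0.5);
        a.put("y", 0.5);
        Map<String, Double> b = new HashMap<>();
        b.put("x", 0.05);
        b.put("y", 0.05);
        System.out.println(vectorsDiffer(a, b)); // true: both tokens mismatch by 10x
        System.out.println(vectorsDiffer(a, a)); // false: identical distributions
    }
}
```

Requiring two mismatched tokens rather than one makes the test robust to a single noisy probability estimate.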


Example 5: train

import edu.stanford.nlp.stats.Distribution; // import the dependent package/class
public void train(Collection<Tree> trees) {
  Numberer tagNumberer = Numberer.getGlobalNumberer("tags");
  lex.train(trees);
  ClassicCounter<String> initial = new ClassicCounter<String>();
  GeneralizedCounter ruleCounter = new GeneralizedCounter(2);
  for (Tree tree : trees) {
    List<Label> tags = tree.preTerminalYield();
    String last = null;
    for (Label tagLabel : tags) {
      String tag = tagLabel.value();
      tagNumberer.number(tag);
      if (last == null) {
        initial.incrementCount(tag);
      } else {
        ruleCounter.incrementCount2D(last, tag);
      }
      last = tag;
    }
  }
  int numTags = tagNumberer.total();
  POSes = new HashSet<String>(ErasureUtils.<Collection<String>>uncheckedCast(tagNumberer.objects()));
  initialPOSDist = Distribution.laplaceSmoothedDistribution(initial, numTags, 0.5);
  markovPOSDists = new HashMap<String, Distribution>();
  Set entries = ruleCounter.lowestLevelCounterEntrySet();
  for (Iterator iter = entries.iterator(); iter.hasNext();) {
    Map.Entry entry = (Map.Entry) iter.next();
    //      Map.Entry<List<String>, Counter> entry = (Map.Entry<List<String>, Counter>) iter.next();
    Distribution d = Distribution.laplaceSmoothedDistribution((ClassicCounter) entry.getValue(), numTags, 0.5);
    markovPOSDists.put(((List<String>) entry.getKey()).get(0), d);
  }
}
 
Developer: FabianFriedrich, Project: Text2Process, Lines of code: 32, Source: ChineseMarkovWordSegmenter.java


Example 6: computeInputPrior

import edu.stanford.nlp.stats.Distribution; // import the dependent package/class
protected Distribution<String> computeInputPrior(Map<String, List<List<String>>> allTrainPaths) {
  ClassicCounter<String> result = new ClassicCounter<String>();
  for (Iterator<List<List<String>>> catI = allTrainPaths.values().iterator(); catI.hasNext();) {
    List<List<String>> pathList = catI.next();
    for (List<String> path : pathList) {
      for (String input : path) {
        result.incrementCount(input);
      }
    }
  }
  return Distribution.laplaceSmoothedDistribution(result, result.size() * 2, 0.5);
}
 
Developer: FabianFriedrich, Project: Text2Process, Lines of code: 13, Source: GrammarCompactor.java


Example 7: computeInputPrior

import edu.stanford.nlp.stats.Distribution; // import the dependent package/class
protected static Distribution<String> computeInputPrior(Map<String, List<List<String>>> allTrainPaths) {
  ClassicCounter<String> result = new ClassicCounter<String>();
  for (List<List<String>> pathList : allTrainPaths.values()) {
    for (List<String> path : pathList) {
      for (String input : path) {
        result.incrementCount(input);
      }
    }
  }
  return Distribution.laplaceSmoothedDistribution(result, result.size() * 2, 0.5);
}
 
Developer: benblamey, Project: stanford-nlp, Lines of code: 12, Source: GrammarCompactor.java



Note: the edu.stanford.nlp.stats.Distribution examples in this article were collected from source-code and documentation platforms such as GitHub and MSDocs. The snippets are drawn from open-source projects, and copyright remains with their original authors; consult each project's License before redistributing or using the code. Do not reproduce without permission.

