
Java WeightedVectorWritable Class Code Examples


This article collects typical usage examples of the Java class org.apache.mahout.clustering.classify.WeightedVectorWritable. If you are wondering what WeightedVectorWritable is for, how to use it, or where to find sample code, the curated class examples below should help.



The WeightedVectorWritable class belongs to the org.apache.mahout.clustering.classify package. Six code examples of the class are presented below, ordered roughly by popularity.
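Before diving into the examples, here is a minimal standalone sketch (not taken from the projects below) of what a WeightedVectorWritable holds: a double weight paired with a Mahout Vector. The DenseVector values are illustrative sample data only.

import org.apache.mahout.clustering.classify.WeightedVectorWritable;
import org.apache.mahout.math.DenseVector;
import org.apache.mahout.math.Vector;

public class WeightedVectorWritableDemo {
  public static void main(String[] args) {
    // a toy 3-dimensional point; any Mahout Vector implementation works
    Vector point = new DenseVector(new double[] {1.0, 2.0, 3.0});

    // pair the point with a clustering weight (e.g. a membership weight)
    WeightedVectorWritable wvw = new WeightedVectorWritable(1.0, point);

    System.out.println("weight = " + wvw.getWeight());
    System.out.println("vector = " + wvw.getVector());
  }
}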

Example 1: process

import org.apache.mahout.clustering.classify.WeightedVectorWritable; // import the dependent package/class
/**
 * This method takes the clustered points output by the clustering algorithms as input and writes them into
 * their respective clusters.
 */
public void process() throws IOException {
  createPostProcessDirectory();
  for (Pair<?,WeightedVectorWritable> record : 
       new SequenceFileDirIterable<Writable,WeightedVectorWritable>(clusteredPoints,
                                                                    PathType.GLOB,
                                                                    PathFilters.partFilter(),
                                                                    null,
                                                                    false,
                                                                    conf)) {
    String clusterId = record.getFirst().toString().trim();
    putVectorInRespectiveCluster(clusterId, record.getSecond());
  }
  IOUtils.close(writersForClusters.values());
  writersForClusters.clear();
}
 
Author: saradelrio, Project: Chi-FRBCS-BigDataCS, Lines: 20, Source: ClusterOutputPostProcessor.java
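For context, the clusteredPoints directory consumed by process() contains SequenceFiles of (cluster id, WeightedVectorWritable) records. The following standalone sketch iterates them the same way the method above does and simply prints each record; the output path is a hypothetical example.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Writable;
import org.apache.mahout.clustering.classify.WeightedVectorWritable;
import org.apache.mahout.common.Pair;
import org.apache.mahout.common.iterator.sequencefile.PathFilters;
import org.apache.mahout.common.iterator.sequencefile.PathType;
import org.apache.mahout.common.iterator.sequencefile.SequenceFileDirIterable;

public class ClusteredPointsDump {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // hypothetical location of the clustering output's clusteredPoints directory
    Path clusteredPoints = new Path("output/clusteredPoints");

    for (Pair<Writable, WeightedVectorWritable> record :
         new SequenceFileDirIterable<Writable, WeightedVectorWritable>(
             clusteredPoints, PathType.GLOB, PathFilters.partFilter(), null, false, conf)) {
      String clusterId = record.getFirst().toString().trim();
      WeightedVectorWritable point = record.getSecond();
      System.out.println(clusterId + " -> weight=" + point.getWeight()
          + ", vector=" + point.getVector());
    }
  }
}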


Example 2: clusterDataMR

import org.apache.mahout.clustering.classify.WeightedVectorWritable; // import the dependent package/class
/**
 * Cluster the data using Hadoop
 */
private static void clusterDataMR(Path input, Path clustersIn, Path output)
    throws IOException, InterruptedException, ClassNotFoundException {
  Configuration conf = new Configuration();
  conf.set(STATE_IN_KEY, clustersIn.toString());
  Job job = new Job(conf,
      "Mean Shift Driver running clusterData over input: " + input);
  job.setOutputKeyClass(IntWritable.class);
  job.setOutputValueClass(WeightedVectorWritable.class);
  job.setMapperClass(MeanShiftCanopyClusterMapper.class);

  job.setInputFormatClass(SequenceFileInputFormat.class);
  job.setOutputFormatClass(SequenceFileOutputFormat.class);
  job.setNumReduceTasks(0);
  job.setJarByClass(MeanShiftCanopyDriver.class);

  FileInputFormat.setInputPaths(job, input);
  FileOutputFormat.setOutputPath(job, output);

  if (!job.waitForCompletion(true)) {
    throw new InterruptedException(
        "Mean Shift Clustering failed on clustersIn " + clustersIn);
  }
}
 
Author: saradelrio, Project: Chi-FRBCS-BigDataCS, Lines: 27, Source: MeanShiftCanopyDriver.java
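To inspect what clusterDataMR emits, the job's output part files can be read back with Hadoop's SequenceFile.Reader: each record pairs an IntWritable canopy id with a WeightedVectorWritable, matching the output key/value classes set above. This is only a sketch; the part-file path is hypothetical, and the older three-argument Reader constructor is used to stay close to the Hadoop API generation the example targets.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.mahout.clustering.classify.WeightedVectorWritable;

public class ClusterDataOutputReader {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // hypothetical part file produced by the map-only job above
    Path partFile = new Path("output/clusteredPoints/part-m-00000");

    FileSystem fs = FileSystem.get(conf);
    SequenceFile.Reader reader = new SequenceFile.Reader(fs, partFile, conf);
    try {
      IntWritable canopyId = new IntWritable();
      WeightedVectorWritable point = new WeightedVectorWritable();
      // next() deserializes the key and value in place and returns false at end of file
      while (reader.next(canopyId, point)) {
        System.out.println("canopy " + canopyId.get() + " -> " + point.getVector());
      }
    } finally {
      reader.close();
    }
  }
}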


Example 3: putVectorInRespectiveCluster

import org.apache.mahout.clustering.classify.WeightedVectorWritable; // import the dependent package/class
/**
 * Finds the cluster directory for the vector and writes the vector into that cluster.
 */
private void putVectorInRespectiveCluster(String clusterId, WeightedVectorWritable point) throws IOException {
  Writer writer = findWriterForVector(clusterId);
  postProcessedClusterDirectories.put(clusterId,
                                      PathDirectory.getClusterPathForClusterId(clusterPostProcessorOutput, clusterId));
  writeVectorToCluster(writer, point);
}
 
Author: saradelrio, Project: Chi-FRBCS-BigDataCS, Lines: 11, Source: ClusterOutputPostProcessor.java
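The post processor resolves a separate output directory per cluster (via PathDirectory.getClusterPathForClusterId above). A minimal sketch, assuming that per-cluster subdirectory layout and a hypothetical output path, that lists the resulting cluster directories:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PostProcessedClustersLister {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // hypothetical output directory of the cluster output post processor
    Path postProcessorOutput = new Path("output/clusteredPostProcessed");

    FileSystem fs = FileSystem.get(conf);
    for (FileStatus status : fs.listStatus(postProcessorOutput)) {
      if (status.isDirectory()) {
        // the directory name is assumed to be the cluster id used above
        System.out.println("cluster " + status.getPath().getName() + " -> " + status.getPath());
      }
    }
  }
}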


Example 4: map

import org.apache.mahout.clustering.classify.WeightedVectorWritable; // import the dependent package/class
@Override
protected void map(WritableComparable<?> key, ClusterWritable clusterWritable, Context context)
    throws IOException, InterruptedException {
  // canopies use canopyIds assigned when input vectors are processed as vectorIds too
  MeanShiftCanopy canopy = (MeanShiftCanopy) clusterWritable.getValue();
  int vectorId = canopy.getId();
  for (MeanShiftCanopy msc : canopies) {
    for (int containedId : msc.getBoundPoints().toList()) {
      if (vectorId == containedId) {
        context.write(new IntWritable(msc.getId()),
                      new WeightedVectorWritable(1, canopy.getCenter()));
      }
    }
  }
}
 
Author: saradelrio, Project: Chi-FRBCS-BigDataCS, Lines: 16, Source: MeanShiftCanopyClusterMapper.java


Example 5: map

import org.apache.mahout.clustering.classify.WeightedVectorWritable; // import the dependent package/class
/**
 * The key is the cluster id and the value is the vector.
 */
@Override
protected void map(IntWritable key, WeightedVectorWritable vector, Context context)
    throws IOException, InterruptedException {
  context.write(new Text(key.toString().trim()), new VectorWritable(vector.getVector()));
}
 
Author: saradelrio, Project: Chi-FRBCS-BigDataCS, Lines: 9, Source: ClusterOutputPostProcessorMapper.java


Example 6: writeVectorToCluster

import org.apache.mahout.clustering.classify.WeightedVectorWritable; // import the dependent package/class
/**
 * Writes vector to the cluster directory.
 */
private void writeVectorToCluster(Writer writer, WeightedVectorWritable point) throws IOException {
  writer.append(new LongWritable(uniqueVectorId++), new VectorWritable(point.getVector()));
  writer.sync();
}
 
Author: saradelrio, Project: Chi-FRBCS-BigDataCS, Lines: 8, Source: ClusterOutputPostProcessor.java
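The per-cluster files produced by writeVectorToCluster hold plain (LongWritable, VectorWritable) records, so they can be read back without the post processor. A minimal sketch, using a hypothetical path to one cluster's part file:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.mahout.common.Pair;
import org.apache.mahout.common.iterator.sequencefile.SequenceFileIterable;
import org.apache.mahout.math.VectorWritable;

public class ClusterFileReader {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // hypothetical file written for one cluster by the post processor
    Path clusterFile = new Path("output/clusteredPostProcessed/0/part-m-0");

    for (Pair<LongWritable, VectorWritable> record :
         new SequenceFileIterable<LongWritable, VectorWritable>(clusterFile, true, conf)) {
      // key is the unique vector id assigned above, value wraps the bare vector
      System.out.println(record.getFirst().get() + " -> " + record.getSecond().get());
    }
  }
}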



Note: The org.apache.mahout.clustering.classify.WeightedVectorWritable class examples in this article were collected from source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by their developers; copyright remains with the original authors. Please consult each project's license before distributing or using the code, and do not republish without permission.

