This article collects typical usage examples of the Scala class org.apache.spark.OneToOneDependency. If you are unsure what OneToOneDependency does or how to use it, the curated class examples below should help.
Three code examples of the OneToOneDependency class are shown, ordered by popularity.
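Before the examples, a quick refresher: OneToOneDependency is Spark's simplest narrow dependency, in which partition i of the child RDD depends on exactly partition i of the parent. Below is a minimal sketch illustrating this with stock Spark APIs (the demo object and app name are ours, not taken from the examples that follow):

import org.apache.spark.{OneToOneDependency, SparkContext}

object OneToOneDependencyDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext("local[2]", "one-to-one-dependency-demo")
    val parent = sc.parallelize(1 to 8, numSlices = 4)

    // Child partition i of an RDD declared with this dependency maps to parent partition i.
    val dep = new OneToOneDependency(parent)
    (0 until parent.getNumPartitions).foreach { i =>
      println(s"child partition $i depends on parent partitions ${dep.getParents(i)}")
    }

    // Narrow transformations such as map() produce exactly this dependency shape.
    println(parent.map(_ * 2).dependencies)

    sc.stop()
  }
}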
Example 1: ParallelCollectionLRDD
// Set up the package name and import the required classes
package org.apache.spark.lineage.rdd

import org.apache.spark.OneToOneDependency
import org.apache.spark.lineage.LineageContext
import org.apache.spark.rdd.ParallelCollectionRDD

import scala.collection.Map
import scala.reflect._

private[spark] class ParallelCollectionLRDD[T: ClassTag](
    @transient lc: LineageContext,
    @transient data: Seq[T],
    numSlices: Int,
    locationPrefs: Map[Int, Seq[String]])
  extends ParallelCollectionRDD[T](lc.sparkContext, data, numSlices, locationPrefs)
  with Lineage[T] {

  override def lineageContext = lc

  override def ttag: ClassTag[T] = classTag[T]

  // Attach a tap RDD downstream of this one via a one-to-one dependency,
  // so that lineage can be captured as records flow through.
  override def tapRight(): TapLRDD[T] = {
    val tap = new TapParallelCollectionLRDD[T](lineageContext, Seq(new OneToOneDependency(this)))
    setTap(tap)
    setCaptureLineage(true)
    tap
  }
}
Developer: lmd1993 | Project: bigsiftParallel | Lines: 29 | Source: ParallelCollectionLRDD.scala
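TapLRDD, TapParallelCollectionLRDD, and Lineage are internal to the bigsiftParallel project, but the wiring that involves OneToOneDependency is plain Spark: the child RDD passes Seq(new OneToOneDependency(parent)) to the RDD constructor and reuses the parent's partitions. Here is a minimal sketch of that pattern using only stock Spark APIs (PassThroughRDD is a hypothetical stand-in for the tap RDD, not part of the project above):

import org.apache.spark.{OneToOneDependency, Partition, TaskContext}
import org.apache.spark.rdd.RDD

import scala.reflect.ClassTag

// A pass-through wrapper in the spirit of a tap: it declares a one-to-one
// dependency on its parent and forwards the parent's records unchanged.
class PassThroughRDD[T: ClassTag](parent: RDD[T])
  extends RDD[T](parent.context, Seq(new OneToOneDependency(parent))) {

  // With a one-to-one dependency, the parent's partitions can be reused as-is.
  override protected def getPartitions: Array[Partition] = parent.partitions

  override def compute(split: Partition, context: TaskContext): Iterator[T] =
    parent.iterator(split, context) // a real tap would record lineage here
}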
Example 2: MyEdgeRDDImpl
// Set up the package name and import the required classes
package org.apache.spark.graphx

import org.apache.spark.{HashPartitioner, OneToOneDependency}
import org.apache.spark.rdd.RDD
import org.apache.spark.storage.StorageLevel

import scala.reflect.ClassTag

class MyEdgeRDDImpl[ED: ClassTag] private[graphx] (
    @transient override val partitionsRDD: RDD[(PartitionID, MyEdgePartition[ED])],
    val targetStorageLevel: StorageLevel = StorageLevel.MEMORY_ONLY)
  extends MyEdgeRDD[ED](partitionsRDD.context, List(new OneToOneDependency(partitionsRDD))) {

  // Keep the partitioner of partitionsRDD if it has one; otherwise fall
  // back to hash partitioning over the current number of partitions.
  override val partitioner =
    partitionsRDD.partitioner.orElse(Some(new HashPartitioner(partitions.length)))

  override def mapValues[ED2: ClassTag](f: Edge[ED] => ED2): MyEdgeRDDImpl[ED2] =
    mapEdgePartitions((pid, part) => part.map(f))

  // Apply f to each (partition id, edge partition) pair, preserving partitioning.
  def mapEdgePartitions[ED2: ClassTag](
      f: (PartitionID, MyEdgePartition[ED]) => MyEdgePartition[ED2]): MyEdgeRDDImpl[ED2] = {
    this.withPartitionsRDD[ED2](partitionsRDD.mapPartitions({ iter =>
      if (iter.hasNext) {
        val (pid, ep) = iter.next()
        Iterator(Tuple2(pid, f(pid, ep)))
      } else {
        Iterator.empty
      }
    }, preservesPartitioning = true))
  }

  private[graphx] def withPartitionsRDD[ED2: ClassTag](
      partitionsRDD: RDD[(PartitionID, MyEdgePartition[ED2])]): MyEdgeRDDImpl[ED2] = {
    new MyEdgeRDDImpl(partitionsRDD, this.targetStorageLevel)
  }

  override def withTargetStorageLevel(storageLevel: StorageLevel): MyEdgeRDD[ED] = {
    new MyEdgeRDDImpl(this.partitionsRDD, storageLevel)
  }
}
Developer: yuanqingsunny | Project: graph | Lines: 42 | Source: MyEdgeRDDImpl.scala
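One detail worth calling out is the partitioner override: the class keeps the partitioner of partitionsRDD when one exists and otherwise falls back to a HashPartitioner sized to the number of partitions. A small self-contained sketch of that orElse fallback with stock Spark APIs (the demo object and names are ours):

import org.apache.spark.rdd.RDD
import org.apache.spark.{HashPartitioner, SparkContext}

object PartitionerFallbackDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext("local[2]", "partitioner-fallback-demo")

    // Mirrors the partitioner override above: keep the RDD's own partitioner
    // if it has one, otherwise fall back to a HashPartitioner.
    def effectivePartitioner(rdd: RDD[(Int, String)]) =
      rdd.partitioner.orElse(Some(new HashPartitioner(rdd.getNumPartitions)))

    val unkeyed = sc.parallelize(Seq(1 -> "a", 2 -> "b"), numSlices = 2)
    val keyed = unkeyed.partitionBy(new HashPartitioner(2))

    println(effectivePartitioner(unkeyed)) // fallback HashPartitioner
    println(effectivePartitioner(keyed))   // the partitioner the RDD already has

    sc.stop()
  }
}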
Example 3: AnnotatedSuccinctRDDImpl
// Set up the package name and import the required classes
package edu.berkeley.cs.succinct.annot.impl

import edu.berkeley.cs.succinct.annot.AnnotatedSuccinctRDD
import org.apache.spark.OneToOneDependency
import org.apache.spark.rdd.RDD
import org.apache.spark.storage.StorageLevel
import org.apache.spark.succinct.annot.AnnotatedSuccinctPartition

class AnnotatedSuccinctRDDImpl private[succinct](val partitionsRDD: RDD[AnnotatedSuccinctPartition])
  extends AnnotatedSuccinctRDD(partitionsRDD.context, List(new OneToOneDependency(partitionsRDD))) {

  // Total number of records, computed once by summing per-partition counts.
  val recordCount: Long = partitionsRDD.map(_.count).aggregate(0L)(_ + _, _ + _)

  // cache() is a no-op here.
  override def cache(): this.type = {
    this
  }

  // Return the precomputed record count instead of rescanning the RDD.
  override def count(): Long = {
    recordCount
  }
}
Developer: anuragkh | Project: annotation-search | Lines: 23 | Source: AnnotatedSuccinctRDDImpl.scala
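The recordCount line is the interesting one: each AnnotatedSuccinctPartition already knows how many records it holds, so the total is computed once by mapping to the per-partition counts and summing them with aggregate, and count() then returns the cached value. A self-contained sketch of that counting idiom (FakePartition is a hypothetical stand-in for AnnotatedSuccinctPartition):

import org.apache.spark.SparkContext

object AggregateCountDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext("local[2]", "aggregate-count-demo")

    // Stand-in for AnnotatedSuccinctPartition: each element already knows
    // how many records it holds, so counting is a sum of per-partition counts.
    case class FakePartition(count: Long)
    val partitionsRDD = sc.parallelize(Seq(FakePartition(10), FakePartition(32), FakePartition(7)))

    // Same shape as recordCount above: map to the per-partition count,
    // then fold the Longs together within and across partitions.
    val recordCount = partitionsRDD.map(_.count).aggregate(0L)(_ + _, _ + _)
    println(recordCount) // 49

    sc.stop()
  }
}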
Note: the org.apache.spark.OneToOneDependency examples in this article were collected from GitHub, MSDocs, and other source-code and documentation platforms; the snippets are drawn from open-source projects contributed by their respective developers. Copyright remains with the original authors; consult each project's license before using or redistributing the code. Do not repost without permission.