This article collects typical usage examples of the Scala class org.apache.spark.input.PortableDataStream. If you have been wondering what PortableDataStream is for, or how to use it in Scala, the curated class examples below may help.
Two code examples of the PortableDataStream class are shown below, ordered by popularity by default. You can upvote the examples you like or find useful; your ratings help the system recommend better Scala code examples.
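For context before the examples: PortableDataStream is the value type produced by SparkContext.binaryFiles. Each record is a (path, stream) pair, and the stream itself is serializable so it can be opened lazily on executors rather than eagerly on the driver. Below is a minimal sketch of that pattern, not taken from the project; the input path and app name are placeholders:

import org.apache.spark.{SparkConf, SparkContext}

object BinaryFilesSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("binary-files-sketch"))
    // Each record is (file path, PortableDataStream); the bytes are read per record.
    val sizes = sc.binaryFiles("hdfs:///data/docs")
      .map { case (path, stream) => (path, stream.toArray().length) }
    sizes.collect().foreach { case (path, n) => println(s"$path: $n bytes") }
    sc.stop()
  }
}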
Example 1: MetadataExtractor
// Set the package name and import the required classes
package com.jasonfeist.spark.tika

import org.apache.spark.input.PortableDataStream
import org.apache.tika.io.TikaInputStream
import org.apache.tika.metadata.Metadata
import org.apache.tika.parser.{AutoDetectParser, ParseContext}
import org.apache.tika.sax.BodyContentHandler

import scala.collection.mutable

class MetadataExtractor extends Serializable {

  // Parses one (path, stream) record from SparkContext.binaryFiles with Tika and
  // returns the body content handler, the metadata, and a lowercase-name index.
  def extract(
      file: (String, PortableDataStream)
  ): (BodyContentHandler, Metadata, mutable.Map[String, String]) = {
    val tis = TikaInputStream.get(file._2.open())
    val parser = new AutoDetectParser()
    val handler = new BodyContentHandler(-1)  // -1 disables the write limit
    val metadata = new Metadata()
    parser.parse(tis, handler, metadata, new ParseContext())
    // Map lowercase metadata names back to their case-sensitive originals.
    val lowerCaseToCaseSensitive = mutable.Map[String, String]()
    for (name <- metadata.names()) {
      lowerCaseToCaseSensitive += (name.toLowerCase -> name)
    }
    (handler, metadata, lowerCaseToCaseSensitive)
  }
}
Author: jasonfeist | Project: tika-spark-datasource | Lines: 31 | Source: MetadataExtractor.scala
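A usage sketch for Example 1, not part of the project source: it pairs MetadataExtractor with SparkContext.binaryFiles and pulls one Tika metadata field per document. The input path and app name are placeholders.

import org.apache.spark.{SparkConf, SparkContext}
import com.jasonfeist.spark.tika.MetadataExtractor

object MetadataExtractorUsage {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("metadata-extractor-usage"))
    val extractor = new MetadataExtractor  // Serializable, so safe to ship to executors
    val contentTypes = sc.binaryFiles("hdfs:///data/docs")
      .map { file =>
        val (_, metadata, _) = extractor.extract(file)
        (file._1, Option(metadata.get("Content-Type")).getOrElse("unknown"))
      }
    contentTypes.collect().foreach(println)
    sc.stop()
  }
}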
Example 2: TikaMetadataRelation
// Set the package name and import the required classes
package com.jasonfeist.spark.tika

import org.apache.spark.input.PortableDataStream
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.sql.sources.{BaseRelation, TableScan}
import org.apache.spark.sql.types.StructType
import org.slf4j.LoggerFactory

class TikaMetadataRelation protected[tika] (path: String,
                                            userSchema: StructType,
                                            metadataExtractor: MetadataExtractor,
                                            fieldDataExtractor: FieldDataExtractor)
                                           (@transient val sqlContext: SQLContext)
  extends BaseRelation with TableScan with Serializable {

  val logger = LoggerFactory.getLogger(classOf[TikaMetadataRelation])

  override def schema: StructType = this.userSchema

  // Full scan: read every binary file under `path` and extract one Row each.
  override def buildScan(): RDD[Row] = {
    val rdd = sqlContext
      .sparkContext.binaryFiles(path)
    rdd.map(extractFunc(_))
  }

  // Converts one (path, PortableDataStream) record into a Row matching `schema`.
  def extractFunc(
      file: (String, PortableDataStream)
  ): Row = {
    val extractedData = metadataExtractor.extract(file)
    val rowArray = new Array[Any](schema.fields.length)
    var index = 0
    while (index < schema.fields.length) {
      val field = schema(index)
      val fieldData = fieldDataExtractor.matchedField(field.name,
        field.dataType, extractedData._1, file._1, extractedData._2,
        extractedData._3)
      rowArray(index) = fieldData
      index = index + 1
    }
    Row.fromSeq(rowArray)
  }
}
Author: jasonfeist | Project: tika-spark-datasource | Lines: 47 | Source: TikaMetadataRelation.scala
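A hedged sketch of how Example 2 might be driven. The constructor is protected[tika], so this sketch lives inside the com.jasonfeist.spark.tika package; in practice the project presumably wires the relation up through a data-source entry point instead. The path, the schema fields, and the assumption that FieldDataExtractor has a no-argument constructor are all illustrative, not confirmed by the source shown here.

package com.jasonfeist.spark.tika

import org.apache.spark.sql.{DataFrame, SQLContext}
import org.apache.spark.sql.types.{StringType, StructField, StructType}

object TikaRelationUsage {
  // sqlContext comes from the caller; path and schema are placeholders.
  def frame(sqlContext: SQLContext): DataFrame = {
    val schema = StructType(Seq(
      StructField("fileName", StringType),
      StructField("Content-Type", StringType)
    ))
    val relation = new TikaMetadataRelation(
      "hdfs:///data/docs", schema,
      new MetadataExtractor, new FieldDataExtractor)(sqlContext)
    // BaseRelation -> DataFrame; buildScan() runs when the frame is evaluated.
    sqlContext.baseRelationToDataFrame(relation)
  }
}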
Note: the org.apache.spark.input.PortableDataStream class examples in this article were collected from GitHub, MSDocs, and other source-code and documentation hosting platforms, and the snippets were selected from open-source projects contributed by various developers. Copyright of the source code belongs to the original authors; consult each project's license before distributing or reusing it, and do not repost without permission.