This article collects typical usage examples of the Scala class com.univocity.parsers.csv.CsvParser. If you are unsure what CsvParser does or how to use it, the curated class examples below should help.
Two code examples of the CsvParser class are shown below, ordered by popularity by default.
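As a quick orientation before the examples, here is a minimal, self-contained sketch of the basic CsvParser workflow: build a CsvParserSettings, construct the parser, and read a whole file with parseAll. The file name data.csv and the specific settings shown are placeholders chosen for illustration; they are not taken from the examples that follow.

import java.io.File
import scala.collection.JavaConverters._
import com.univocity.parsers.csv.{CsvParser, CsvParserSettings}

object CsvParserQuickStart extends App {
  val settings = new CsvParserSettings()
  settings.setLineSeparatorDetectionEnabled(true) // detect \n vs \r\n automatically
  settings.setHeaderExtractionEnabled(true)       // treat the first row as a header and skip it

  val parser = new CsvParser(settings)
  // parseAll loads the entire file into memory as a java.util.List[Array[String]];
  // this is fine for small inputs such as the iris dataset used in Example 1.
  val rows = parser.parseAll(new File("data.csv")).asScala
  rows.take(5).foreach(row => println(row.mkString(" | ")))
}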
Example 1: Iris
// Set the package name and import the required classes
package com.esri

import java.io.File

import breeze.linalg.{DenseVector => BDV, Vector => BV}
import com.univocity.parsers.csv.{CsvParser, CsvParserSettings}

import scala.collection.JavaConversions._
import scala.math._

case class Iris(label: String, vec: BV[Double])

object IrisApp extends App {

  val settings = new CsvParserSettings()
  val reader = new CsvParser(settings)

  // Parse the iris dataset and normalize each 4-dimensional feature vector to unit length.
  val irisArr = reader.parseAll(new File("iris.data.txt"))
    .map(row => {
      val dv = new BDV[Double](Array(row(0).toDouble, row(1).toDouble, row(2).toDouble, row(3).toDouble))
      val label = row(4)
      val vec = dv / sqrt(dv dot dv)
      Iris(label, vec)
    })

  val rnd = new java.security.SecureRandom()

  // Initialize a 7x7 grid of SOM nodes with randomly sampled iris vectors.
  val somSize = 7
  val nodes = for {
    q <- 0 until somSize
    r <- 0 until somSize
  } yield {
    val iris = irisArr(rnd.nextInt(irisArr.length))
    Node(q, r, iris.vec)
  }

  val data = irisArr.map(_.vec)

  // Train the SOM with exponentially decaying learning rate and neighborhood radius.
  val epochMax = 10000
  implicit val pb = TerminalProgressBar(epochMax)
  val som = SOM(nodes)
  val aDecay = ExpDecay(0.5, epochMax)
  val rDecay = ExpDecay(4, epochMax)
  som.trainDecay(data, epochMax, aDecay, rDecay)
  som.saveIris("/tmp/iris.png", 110, irisArr)
}
Developer: mraad, Project: spark-som-path, Lines of code: 45, Source file: IrisApp.scala
Example 2: CsvPublisher
// Set the package name and import the required classes
package io.eels.component.csv

import java.io.InputStream
import java.util.concurrent.atomic.AtomicBoolean

import com.sksamuel.exts.Logging
import com.sksamuel.exts.io.Using
import com.univocity.parsers.csv.CsvParser
import io.eels.Row
import io.eels.datastream.{DataStream, Publisher, Subscriber, Subscription}
import io.eels.schema.StructType

class CsvPublisher(createParser: () => CsvParser,
                   inputFn: () => InputStream,
                   header: Header,
                   skipBadRows: Boolean,
                   schema: StructType) extends Publisher[Seq[Row]] with Logging with Using {

  // Skip the first record when the CSV carries its header in the first row.
  val rowsToSkip: Int = header match {
    case Header.FirstRow => 1
    case _ => 0
  }

  override def subscribe(subscriber: Subscriber[Seq[Row]]): Unit = {
    val input = inputFn()
    val parser = createParser()
    try {
      parser.beginParsing(input)
      val running = new AtomicBoolean(true)
      subscriber.subscribed(Subscription.fromRunning(running))
      // Stream records until the parser is exhausted or the subscription is cancelled,
      // batching rows before handing them to the subscriber.
      Iterator.continually(parser.parseNext)
        .takeWhile(_ != null)
        .takeWhile(_ => running.get)
        .drop(rowsToSkip)
        .map { records => Row(schema, records.toVector) }
        .grouped(DataStream.DefaultBatchSize)
        .foreach(subscriber.next)
      subscriber.completed()
    } catch {
      case t: Throwable => subscriber.error(t)
    } finally {
      parser.stopParsing()
      input.close()
    }
  }
}
Developer: 51zero, Project: eel-sdk, Lines of code: 54, Source file: CsvPublisher.scala
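For reference, the streaming pattern at the core of Example 2 (beginParsing, parseNext until null, stopParsing) can be reduced to the standalone sketch below, stripped of the eel-sdk types. The file name large.csv is a placeholder chosen for illustration.

import java.io.FileInputStream
import com.univocity.parsers.csv.{CsvParser, CsvParserSettings}

object CsvStreamingSketch extends App {
  val parser = new CsvParser(new CsvParserSettings())
  val input = new FileInputStream("large.csv") // placeholder path
  try {
    parser.beginParsing(input)
    // parseNext returns null once the end of the input is reached
    Iterator.continually(parser.parseNext())
      .takeWhile(_ != null)
      .foreach(record => println(record.mkString(",")))
  } finally {
    parser.stopParsing()
    input.close()
  }
}

Because records are pulled one at a time, memory use stays roughly constant regardless of file size, which is why Example 2 uses this pattern instead of parseAll.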
Note: the com.univocity.parsers.csv.CsvParser examples in this article are collected from open-source projects and documentation hosted on GitHub, MSDocs, and similar platforms. The snippets were selected from projects contributed by the community; copyright remains with the original authors. Consult each project's license before redistributing or using the code, and do not reproduce without permission.