
Java ClassTag Class Code Examples


This article collects typical usage examples of the scala.reflect.ClassTag class as called from Java. If you have been wondering what ClassTag is for, how to use it, or what working examples look like, the curated snippets below should help.



The ClassTag class belongs to the scala.reflect package. The 20 code examples below show how it is typically used from Java and are ordered by popularity.
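Before turning to the examples, here is a minimal sketch (not drawn from any project below; the class name is invented for illustration) of how Java code obtains a ClassTag in the first place. Scala resolves ClassTags implicitly at compile time, but Java callers must construct them explicitly through the ClassTag$ companion object:

import scala.reflect.ClassTag;
import scala.reflect.ClassTag$;

public class ClassTagBasics {
    public static void main(String[] args) {
        // Build a ClassTag from a Class object -- the pattern used in most examples below.
        ClassTag<String> stringTag = ClassTag$.MODULE$.apply(String.class);

        // Array types work the same way.
        ClassTag<byte[]> bytesTag = ClassTag$.MODULE$.apply(byte[].class);

        // A ClassTag carries only the erased runtime class.
        System.out.println(stringTag.runtimeClass()); // class java.lang.String
        System.out.println(bytesTag.runtimeClass());  // class [B
    }
}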

Example 1: matrixObjectToRDDStringIJV

import scala.reflect.ClassTag; // import the required package/class
/**
 * Convert a {@code MatrixObject} to a {@code RDD<String>} in IJV format.
 *
 * @param matrixObject
 *            the {@code MatrixObject}
 * @return the {@code MatrixObject} converted to a {@code RDD<String>}
 */
public static RDD<String> matrixObjectToRDDStringIJV(MatrixObject matrixObject) {

	// NOTE: The following works when called from Java but does not
	// currently work when called from Spark Shell (when you call
	// collect() on the RDD<String>).
	//
	// JavaRDD<String> javaRDD = jsc.parallelize(list);
	// RDD<String> rdd = JavaRDD.toRDD(javaRDD);
	//
	// Therefore, we call parallelize() on the SparkContext rather than
	// the JavaSparkContext to produce the RDD<String> for Scala.

	List<String> list = matrixObjectToListStringIJV(matrixObject);

	ClassTag<String> tag = scala.reflect.ClassTag$.MODULE$.apply(String.class);
	return sc().parallelize(JavaConversions.asScalaBuffer(list), sc().defaultParallelism(), tag);
}
 
Developer: apache, Project: systemml, Lines: 25, Source: MLContextConversionUtil.java
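The explicit ClassTag is required because Scala declares parallelize roughly as parallelize[T](seq: Seq[T], numSlices: Int)(implicit evidence: ClassTag[T]); the Scala compiler supplies the implicit automatically, while Java callers must pass it as an extra trailing argument. A standalone sketch of the same call (my own class name; assumes a local Spark with Scala 2.11 on the classpath, where scala.collection.JavaConversions is still available):

import java.util.Arrays;
import java.util.List;
import org.apache.spark.SparkConf;
import org.apache.spark.SparkContext;
import org.apache.spark.rdd.RDD;
import scala.collection.JavaConversions;
import scala.reflect.ClassTag;

public class ParallelizeSketch {
    public static void main(String[] args) {
        SparkContext sc = new SparkContext(
                new SparkConf().setAppName("classtag-sketch").setMaster("local[*]"));
        List<String> list = Arrays.asList("1 1 1.0", "2 2 2.0"); // IJV-style lines

        // Scala's parallelize takes an implicit ClassTag[String]; pass it explicitly from Java.
        ClassTag<String> tag = scala.reflect.ClassTag$.MODULE$.apply(String.class);
        RDD<String> rdd = sc.parallelize(JavaConversions.asScalaBuffer(list),
                sc.defaultParallelism(), tag);

        System.out.println(rdd.count()); // 2
        sc.stop();
    }
}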


Example 2: frameObjectToRDDStringIJV

import scala.reflect.ClassTag; // import the required package/class
/**
 * Convert a {@code FrameObject} to a {@code RDD<String>} in IJV format.
 *
 * @param frameObject
 *            the {@code FrameObject}
 * @return the {@code FrameObject} converted to a {@code RDD<String>}
 */
public static RDD<String> frameObjectToRDDStringIJV(FrameObject frameObject) {

	// NOTE: The following works when called from Java but does not
	// currently work when called from Spark Shell (when you call
	// collect() on the RDD<String>).
	//
	// JavaRDD<String> javaRDD = jsc.parallelize(list);
	// RDD<String> rdd = JavaRDD.toRDD(javaRDD);
	//
	// Therefore, we call parallelize() on the SparkContext rather than
	// the JavaSparkContext to produce the RDD<String> for Scala.

	List<String> list = frameObjectToListStringIJV(frameObject);

	ClassTag<String> tag = scala.reflect.ClassTag$.MODULE$.apply(String.class);
	return sc().parallelize(JavaConversions.asScalaBuffer(list), sc().defaultParallelism(), tag);
}
 
Developer: apache, Project: systemml, Lines: 25, Source: MLContextConversionUtil.java


Example 3: matrixObjectToRDDStringCSV

import scala.reflect.ClassTag; // import the required package/class
/**
 * Convert a {@code MatrixObject} to a {@code RDD<String>} in CSV format.
 *
 * @param matrixObject
 *            the {@code MatrixObject}
 * @return the {@code MatrixObject} converted to a {@code RDD<String>}
 */
public static RDD<String> matrixObjectToRDDStringCSV(MatrixObject matrixObject) {

	// NOTE: The following works when called from Java but does not
	// currently work when called from Spark Shell (when you call
	// collect() on the RDD<String>).
	//
	// JavaRDD<String> javaRDD = jsc.parallelize(list);
	// RDD<String> rdd = JavaRDD.toRDD(javaRDD);
	//
	// Therefore, we call parallelize() on the SparkContext rather than
	// the JavaSparkContext to produce the RDD<String> for Scala.

	List<String> list = matrixObjectToListStringCSV(matrixObject);

	ClassTag<String> tag = scala.reflect.ClassTag$.MODULE$.apply(String.class);
	return sc().parallelize(JavaConversions.asScalaBuffer(list), sc().defaultParallelism(), tag);
}
 
Developer: apache, Project: systemml, Lines: 25, Source: MLContextConversionUtil.java


Example 4: frameObjectToRDDStringCSV

import scala.reflect.ClassTag; // import the required package/class
/**
 * Convert a {@code FrameObject} to a {@code RDD<String>} in CSV format.
 *
 * @param frameObject
 *            the {@code FrameObject}
 * @param delimiter
 *            the delimiter
 * @return the {@code FrameObject} converted to a {@code RDD<String>}
 */
public static RDD<String> frameObjectToRDDStringCSV(FrameObject frameObject, String delimiter) {

	// NOTE: The following works when called from Java but does not
	// currently work when called from Spark Shell (when you call
	// collect() on the RDD<String>).
	//
	// JavaRDD<String> javaRDD = jsc.parallelize(list);
	// RDD<String> rdd = JavaRDD.toRDD(javaRDD);
	//
	// Therefore, we call parallelize() on the SparkContext rather than
	// the JavaSparkContext to produce the RDD<String> for Scala.

	List<String> list = frameObjectToListStringCSV(frameObject, delimiter);

	ClassTag<String> tag = scala.reflect.ClassTag$.MODULE$.apply(String.class);
	return sc().parallelize(JavaConversions.asScalaBuffer(list), sc().defaultParallelism(), tag);
}
 
Developer: apache, Project: systemml, Lines: 27, Source: MLContextConversionUtil.java


Example 5: create

import scala.reflect.ClassTag; // import the required package/class
@Override
@SuppressWarnings("unchecked")
public JavaStreamingContext create() {
  sparkConf.set("spark.streaming.kafka.maxRatePerPartition", String.valueOf(maxRatePerPartition));
  JavaStreamingContext result = new JavaStreamingContext(sparkConf, new Duration(duration));
  Map<String, String> props = new HashMap<>();
  if (!autoOffsetValue.isEmpty()) {
    props.put(AbstractStreamingBinding.AUTO_OFFSET_RESET, autoOffsetValue);
  }
  logMessage("topic list " + topic, isRunningInMesos);
  logMessage("Auto offset reset is set to " + autoOffsetValue, isRunningInMesos);
  props.putAll(extraKafkaConfigs);
  for (Map.Entry<String, String> map : props.entrySet()) {
    logMessage(Utils.format("Adding extra kafka config, {}:{}", map.getKey(), map.getValue()), isRunningInMesos);
  }
  props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
  props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
  props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
  JavaPairInputDStream<byte[], byte[]> dStream;
  if (offsetHelper.isSDCCheckPointing()) {
    JavaInputDStream stream =
        KafkaUtils.createDirectStream(
            result,
            byte[].class,
            byte[].class,
            Tuple2.class,
            props,
            MaprStreamsOffsetManagerImpl.get().getOffsetForDStream(topic, numberOfPartitions),
            MESSAGE_HANDLER_FUNCTION
        );
    ClassTag<byte[]> byteClassTag = scala.reflect.ClassTag$.MODULE$.apply(byte[].class);
    dStream = JavaPairInputDStream.fromInputDStream(stream.inputDStream(), byteClassTag, byteClassTag);
  } else {
    dStream =
        KafkaUtils.createDirectStream(result, byte[].class, byte[].class,
            props, new HashSet<>(Arrays.asList(topic.split(","))));
  }
  Driver$.MODULE$.foreach(dStream.dstream(), MaprStreamsOffsetManagerImpl.get());
  return result;
}
 
Developer: streamsets, Project: datacollector, Lines: 41, Source: MapRStreamingBinding.java
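Design note: JavaPairInputDStream.fromInputDStream wraps a Scala DStream of Tuple2<K, V> in its Java-friendly counterpart, and the wrapper needs ClassTags for both the key and the value type, which is why the same byte[] tag is passed twice above.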


Example 6: convert

import scala.reflect.ClassTag; // import the required package/class
@Override
public RDD<Tuple> convert(List<RDD<Tuple>> predecessors,
        PODistinct poDistinct) throws IOException {
    SparkUtil.assertPredecessorSize(predecessors, poDistinct, 1);
    RDD<Tuple> rdd = predecessors.get(0);

    ClassTag<Tuple2<Tuple, Object>> tuple2ClassManifest = SparkUtil
            .<Tuple, Object> getTuple2Manifest();

    RDD<Tuple2<Tuple, Object>> rddPairs = rdd.map(TO_KEY_VALUE_FUNCTION,
            tuple2ClassManifest);
    PairRDDFunctions<Tuple, Object> pairRDDFunctions
      = new PairRDDFunctions<Tuple, Object>(
            rddPairs, SparkUtil.getManifest(Tuple.class),
            SparkUtil.getManifest(Object.class), null);
    int parallelism = SparkUtil.getParallelism(predecessors, poDistinct);
    return pairRDDFunctions.reduceByKey(MERGE_VALUES_FUNCTION, parallelism)
            .map(TO_VALUE_FUNCTION, SparkUtil.getManifest(Tuple.class));
}
 
Developer: sigmoidanalytics, Project: spork, Lines: 20, Source: DistinctConverter.java
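Note the trailing null in the PairRDDFunctions constructor: in Scala the class takes an implicit Ordering[K] alongside the two ClassTags, and a Java caller that does not need key ordering can pass null for it.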


Example 7: MizoRDD

import scala.reflect.ClassTag; // import the required package/class
public MizoRDD(SparkContext context, IMizoRDDConfig config, ClassTag<TReturn> classTag) {
    super(context, new ArrayBuffer<>(), classTag);

    if (!Strings.isNullOrEmpty(config.logConfigPath())) {
        PropertyConfigurator.configure(config.logConfigPath());
    }

    this.config = config;
    this.regionsPaths = getRegionsPaths(config.regionDirectoriesPath());
    this.relationTypes = loadRelationTypes(config.titanConfigPath());
}
 
Developer: imri, Project: mizo, Lines: 12, Source: MizoRDD.java
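When extending Scala's RDD from Java, the ClassTag must be threaded through to the superclass constructor, because RDD[T] declares it as an implicit constructor parameter. The same pattern appears in Examples 11-13 below, where the odd parameter name evidence$2 is simply the Scala compiler's synthesized name for that implicit evidence parameter.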


Example 8: deserialize

import scala.reflect.ClassTag; // import the required package/class
@Override
public <T> T deserialize(final ByteBuffer byteBuffer, final ClassLoader classLoader, final ClassTag<T> classTag) {
    this.input.setBuffer(byteBuffer.array());
    return this.gryoSerializer.getGryoPool().readWithKryo(kryo -> {
        kryo.setClassLoader(classLoader);
        return (T) kryo.readClassAndObject(this.input);
    });
}
 
Developer: PKUSilvester, Project: LiteGraph, Lines: 9, Source: GryoSerializerInstance.java


Example 9: getPartitionOffset

import scala.reflect.ClassTag; // import the required package/class
public static <T> DStream<Tuple2<Integer, Iterable<Long>>>  getPartitionOffset(
    DStream<MessageAndMetadata<T>> unionStreams, Properties props) {
  ClassTag<MessageAndMetadata<T>> messageMetaClassTag = 
      ScalaUtil.<T>getMessageAndMetadataClassTag();
  JavaDStream<MessageAndMetadata<T>> javaDStream = 
      new JavaDStream<MessageAndMetadata<T>>(unionStreams, messageMetaClassTag);
  JavaPairDStream<Integer, Iterable<Long>> partitonOffset = getPartitionOffset(javaDStream, props);
  return partitonOffset.dstream();
}
 
Developer: dibbhatt, Project: kafka-spark-consumer, Lines: 10, Source: ProcessedOffsetManager.java


Example 10: persists

import scala.reflect.ClassTag; // import the required package/class
@SuppressWarnings("deprecation")
public static void persists(DStream<Tuple2<Integer, Iterable<Long>>> partitonOffset, Properties props) {
  ClassTag<Tuple2<Integer, Iterable<Long>>> tuple2ClassTag = 
      ScalaUtil.<Integer, Iterable<Long>>getTuple2ClassTag();
  JavaDStream<Tuple2<Integer, Iterable<Long>>> jpartitonOffset = 
      new JavaDStream<Tuple2<Integer, Iterable<Long>>>(partitonOffset, tuple2ClassTag);
  jpartitonOffset.foreachRDD(new VoidFunction<JavaRDD<Tuple2<Integer, Iterable<Long>>>>() {
    @Override
    public void call(JavaRDD<Tuple2<Integer, Iterable<Long>>> po) throws Exception {
      List<Tuple2<Integer, Iterable<Long>>> poList = po.collect();
      doPersists(poList, props);
    }
  });
}
 
Developer: dibbhatt, Project: kafka-spark-consumer, Lines: 15, Source: ProcessedOffsetManager.java


Example 11: MatrixMultiplicationRDD

import scala.reflect.ClassTag; // import the required package/class
public MatrixMultiplicationRDD(final RDD<?> oneParent, final ClassTag<MatrixStore<N>> evidence$2) {
    super(oneParent, evidence$2);
}
 
Developer: optimatika, Project: ojAlgo-extensions, Lines: 4, Source: MatrixMultiplicationRDD.java


Example 12: NumberRDD

import scala.reflect.ClassTag; // import the required package/class
public NumberRDD(final RDD<?> oneParent, final ClassTag<N> evidence$2) {
    super(oneParent, evidence$2);
}
 
Developer: optimatika, Project: ojAlgo-extensions, Lines: 4, Source: NumberRDD.java


Example 13: OtherBlockMatrixRDD

import scala.reflect.ClassTag; // import the required package/class
public OtherBlockMatrixRDD(final RDD<?> oneParent, final ClassTag<MatrixStore<N>> evidence$2) {
    super(oneParent, evidence$2);
}
 
Developer: optimatika, Project: ojAlgo-extensions, Lines: 4, Source: OtherBlockMatrixRDD.java


Example 14: serialize

import scala.reflect.ClassTag; // import the required package/class
@Override
public <T> ByteBuffer serialize(final T t, final ClassTag<T> classTag) {
    this.gryoSerializer.getGryoPool().writeWithKryo(kryo -> kryo.writeClassAndObject(this.output, t));
    return ByteBuffer.wrap(this.output.getBuffer());
}
 
Developer: PKUSilvester, Project: LiteGraph, Lines: 6, Source: GryoSerializerInstance.java


Example 15: writeObject

import scala.reflect.ClassTag; // import the required package/class
@Override
public <T> SerializationStream writeObject(final T t, final ClassTag<T> classTag) {
    this.gryoSerializer.getGryoPool().writeWithKryo(kryo -> kryo.writeClassAndObject(this.output, t));
    return this;
}
 
Developer: PKUSilvester, Project: LiteGraph, Lines: 6, Source: GryoSerializationStream.java


Example 16: createGraph

import scala.reflect.ClassTag; // import the required package/class
public Graph<RyaTypeWritable, RyaTypeWritable> createGraph(SparkContext sc, Configuration conf) throws IOException, AccumuloSecurityException{
    StorageLevel storageLvl1 = StorageLevel.MEMORY_ONLY();
    StorageLevel storageLvl2 = StorageLevel.MEMORY_ONLY();
    ClassTag<RyaTypeWritable> RTWTag = ClassTag$.MODULE$.apply(RyaTypeWritable.class);
    RyaTypeWritable rtw = null;
    RDD<Tuple2<Object, RyaTypeWritable>> vertexRDD = getVertexRDD(sc, conf);

    RDD<Tuple2<Object, Edge>> edgeRDD = getEdgeRDD(sc, conf);
    JavaRDD<Tuple2<Object, Edge>> jrddTuple = edgeRDD.toJavaRDD();
    JavaRDD<Edge<RyaTypeWritable>> jrdd = jrddTuple.map(tuple -> tuple._2);

    RDD<Edge<RyaTypeWritable>> goodERDD = JavaRDD.toRDD(jrdd);

    return Graph.apply(vertexRDD, goodERDD, rtw, storageLvl1, storageLvl2, RTWTag, RTWTag);
}
 
Developer: apache, Project: incubator-rya, Lines: 16, Source: GraphXGraphGenerator.java
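Graph.apply is another Scala method with implicit ClassTag parameters, one for the vertex attribute type and one for the edge attribute type; since both are RyaTypeWritable here, the same tag is passed twice. The null rtw serves as the default attribute for vertices that appear in edges but are missing from vertexRDD.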


Example 17: create

import scala.reflect.ClassTag; // import the required package/class
@Override
@SuppressWarnings("unchecked")
public JavaStreamingContext create() {
  sparkConf.set("spark.streaming.kafka.maxRatePerPartition", String.valueOf(maxRatePerPartition));
  JavaStreamingContext result = new JavaStreamingContext(sparkConf, new Duration(duration));
  Map<String, String> props = new HashMap<>();
  props.putAll(extraKafkaConfigs);
  for (Map.Entry<String, String> map : props.entrySet()) {
    logMessage(Utils.format("Adding extra kafka config, {}:{}", map.getKey(), map.getValue()), isRunningInMesos);
  }
  props.put("metadata.broker.list", metaDataBrokerList);
  props.put(GROUP_ID_KEY, groupId);
  if (!autoOffsetValue.isEmpty()) {
    autoOffsetValue = getConfigurableAutoOffsetResetIfNonEmpty(autoOffsetValue);
    props.put(AUTO_OFFSET_RESET, autoOffsetValue);
  }
  logMessage("Meta data broker list " + metaDataBrokerList, isRunningInMesos);
  logMessage("Topic is " + topic, isRunningInMesos);
  logMessage("Auto offset reset is set to " + autoOffsetValue, isRunningInMesos);
  JavaPairInputDStream<byte[], byte[]> dStream;
  if (offsetHelper.isSDCCheckPointing()) {
    JavaInputDStream<Tuple2<byte[], byte[]>> stream =
        KafkaUtils.createDirectStream(
            result,
            byte[].class,
            byte[].class,
            DefaultDecoder.class,
            DefaultDecoder.class,
            (Class<Tuple2<byte[], byte[]>>) ((Class)(Tuple2.class)),
            props,
            KafkaOffsetManagerImpl.get().getOffsetForDStream(topic, numberOfPartitions),
            MESSAGE_HANDLER_FUNCTION
        );
    ClassTag<byte[]> byteClassTag = scala.reflect.ClassTag$.MODULE$.apply(byte[].class);
    dStream = JavaPairInputDStream.fromInputDStream(stream.inputDStream(), byteClassTag, byteClassTag);
  } else {
    dStream =
        KafkaUtils.createDirectStream(result, byte[].class, byte[].class, DefaultDecoder.class, DefaultDecoder.class,
            props, new HashSet<>(Arrays.asList(topic.split(","))));
  }
  Driver$.MODULE$.foreach(dStream.dstream(), KafkaOffsetManagerImpl.get());
  return result;
}
 
Developer: streamsets, Project: datacollector, Lines: 44, Source: SparkStreamingBinding.java


Example 18: classTag

import scala.reflect.ClassTag; // import the required package/class
@Override
public ClassTag<T> classTag() {
    return ClassTag$.MODULE$.<T>apply(((BaseConfig<T,BaseConfig>)((DeepRDD) this.rdd()).config.value())
            .getEntityClass());
}
 
Developer: Stratio, Project: deep-spark, Lines: 6, Source: DeepJavaRDD.java


Example 19: getManifest

import scala.reflect.ClassTag; // import the required package/class
public static <T> ClassTag<T> getManifest(Class<T> clazz) {
    return ClassTag$.MODULE$.apply(clazz);
}
 
Developer: sigmoidanalytics, Project: spork-streaming, Lines: 4, Source: SparkUtil.java


Example 20: getTuple2Manifest

import scala.reflect.ClassTag; // import the required package/class
@SuppressWarnings("unchecked")
public static <K,V> ClassTag<Tuple2<K, V>> getTuple2Manifest() {
    return (ClassTag<Tuple2<K, V>>)(Object)getManifest(Tuple2.class);
}
 
Developer: sigmoidanalytics, Project: spork-streaming, Lines: 5, Source: SparkUtil.java
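The double cast through Object is the standard Java workaround here: a ClassTag<Tuple2> cannot be cast directly to ClassTag<Tuple2<K, V>>, but going through Object compiles (with an unchecked warning), and it is safe at runtime because a ClassTag carries only the erased class. A hypothetical use of the two helpers above:

ClassTag<String> stringTag = SparkUtil.getManifest(String.class);
ClassTag<Tuple2<String, Integer>> pairTag = SparkUtil.<String, Integer>getTuple2Manifest();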



Note: The scala.reflect.ClassTag examples in this article were collected from GitHub, MSDocs, and other source-code and documentation platforms. The snippets come from open-source projects contributed by their respective developers, and copyright remains with the original authors. Consult each project's license before redistributing or reusing the code; do not repost without permission.

