
Java SinkCall Class Code Examples


This article collects typical usage examples of the Java class cascading.scheme.SinkCall. If you are wondering what SinkCall is for, how to use it, or simply want to see it in working code, the curated snippets below should help.



The SinkCall class belongs to the cascading.scheme package. Twenty code examples of the class are shown below, sorted by popularity by default.
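
A quick orientation before the individual examples. A SinkCall is the handle Cascading passes to a Scheme on the sink side: sinkPrepare() usually stashes reusable per-task state on it via setContext(), and sink() reads the outgoing TupleEntry and pushes a record to the wrapped output object. The fragment below is a minimal illustrative sketch of that pattern, not code from any of the projects cited here; the class name and method bodies are assumptions, and in a real Scheme subclass both methods would carry @Override and sit next to the source-side callbacks.

import java.io.IOException;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.OutputCollector;

import cascading.flow.FlowProcess;
import cascading.scheme.SinkCall;
import cascading.tuple.TupleEntry;

// Illustrative sketch only (hypothetical class name): shows how the two
// sink-side callbacks cooperate through the SinkCall.
public class SinkCallSketch {

    public void sinkPrepare(FlowProcess<JobConf> flowProcess,
                            SinkCall<Object[], OutputCollector> sinkCall) throws IOException {
        // Allocate reusable per-task state once and park it on the SinkCall;
        // sink() retrieves it again for every outgoing tuple.
        sinkCall.setContext(new Object[] { new Text() });
    }

    @SuppressWarnings("unchecked")
    public void sink(FlowProcess<JobConf> flowProcess,
                     SinkCall<Object[], OutputCollector> sinkCall) throws IOException {
        // The outgoing TupleEntry holds the tuple that is about to be written.
        TupleEntry entry = sinkCall.getOutgoingEntry();
        Text line = (Text) sinkCall.getContext()[0];
        line.set(entry.getTuple().toString());
        // Emit the record through the output object wrapped by the SinkCall.
        sinkCall.getOutput().collect(null, line);
    }
}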

Example 1: extractField

import cascading.scheme.SinkCall; // import the required package/class
@SuppressWarnings({ "rawtypes" })
@Override
protected Object extractField(Object target) {
    List<String> fieldNames = getFieldNames();
    for (int i = 0; i < fieldNames.size(); i++) {
        if (target instanceof SinkCall) {
            target = ((SinkCall) target).getOutgoingEntry().getObject(fieldNames.get(i));
            if (target == null) {
                return NOT_FOUND;
            }
        }
        else {
            return NOT_FOUND;
        }
    }
    return target;
}
 
Developer: xushjie1987, Project: es-hadoop-v2.2.0, Lines of code: 18, Source file: CascadingFieldExtractor.java


Example 2: write

import cascading.scheme.SinkCall; // import the required package/class
@SuppressWarnings("unchecked")
@Override
public Result write(SinkCall<Object[], ?> sinkCall, Generator generator) {
    Tuple tuple = CascadingUtils.coerceToString(sinkCall);
    // consider names (in case of aliases these are already applied)
    List<String> names = (List<String>) sinkCall.getContext()[0];

    generator.writeBeginObject();
    for (int i = 0; i < tuple.size(); i++) {
        String name = (i < names.size() ? names.get(i) : "tuple" + i);
        // filter out fields
        if (shouldKeep(generator.getParentPath(), name)) {
            generator.writeFieldName(name);
            Object object = tuple.getObject(i);
            Result result = jdkWriter.write(object, generator);
            if (!result.isSuccesful()) {
                if (object instanceof Writable) {
                    return writableWriter.write((Writable) object, generator);
                }
                return Result.FAILED(object);
            }
        }
    }
    generator.writeEndObject();
    return Result.SUCCESFUL();
}
 
Developer: xushjie1987, Project: es-hadoop-v2.2.0, Lines of code: 27, Source file: CascadingValueWriter.java


Example 3: coerceToString

import cascading.scheme.SinkCall; // import the required package/class
static Tuple coerceToString(SinkCall<?, ?> sinkCall) {
    TupleEntry entry = sinkCall.getOutgoingEntry();
    Fields fields = entry.getFields();
    Tuple tuple = entry.getTuple();

    if (fields.hasTypes()) {
        Type types[] = new Type[fields.size()];
        for (int index = 0; index < fields.size(); index++) {
            Type type = fields.getType(index);
            if (type instanceof CoercibleType<?>) {
                types[index] = String.class;
            }
            else {
                types[index] = type;
            }
        }

        tuple = entry.getCoercedTuple(types);
    }
    return tuple;
}
 
Developer: xushjie1987, Project: es-hadoop-v2.2.0, Lines of code: 22, Source file: CascadingUtils.java


Example 4: convert

import cascading.scheme.SinkCall; // import the required package/class
@Override
public void convert(Object from, BytesArray to) {
    // expect a tuple holding one field - chararray or bytearray
    Assert.isTrue(from instanceof SinkCall,
            String.format("Unexpected object type, expecting [%s], given [%s]", SinkCall.class, from.getClass()));

    // handle common cases
    SinkCall sinkCall = (SinkCall) from;
    Tuple rawTuple = sinkCall.getOutgoingEntry().getTuple();

    if (rawTuple == null || rawTuple.isEmpty()) {
        to.bytes("{}");
        return;
    }
    Assert.isTrue(rawTuple.size() == 1, "When using JSON input, only one field is expected");

    // postpone the coercion
    Tuple tuple = CascadingUtils.coerceToString(sinkCall);
    super.convert(tuple.getObject(0), to);
}
 
Developer: xushjie1987, Project: es-hadoop-v2.2.0, Lines of code: 21, Source file: CascadingLocalBytesConverter.java


Example 5: sinkPrepare

import cascading.scheme.SinkCall; // import the required package/class
@Override
public void sinkPrepare( FlowProcess<? extends Configuration> flowProcess, SinkCall<Object[], OutputCollector> sinkCall ) throws IOException {
	if( !( flowProcess instanceof FlowProcessWrapper ) ) {
		throw new RuntimeException( "not a flow process wrapper" );
	}

	if( !"process-default".equals( flowProcess.getProperty( "default" ) ) ) {
		throw new RuntimeException( "not default value" );
	}

	if( !"sink-replace".equals( flowProcess.getProperty( "replace" ) ) ) {
		throw new RuntimeException( "not replaced value" );
	}

	flowProcess = ( (FlowProcessWrapper) flowProcess ).getDelegate();

	if( !"process-default".equals( flowProcess.getProperty( "default" ) ) ) {
		throw new RuntimeException( "not default value" );
	}

	if( !"process-replace".equals( flowProcess.getProperty( "replace" ) ) ) {
		throw new RuntimeException( "not replaced value" );
	}

	super.sinkPrepare( flowProcess, sinkCall );
}
 
Developer: dataArtisans, Project: cascading-flink, Lines of code: 27, Source file: FlinkConfigDefScheme.java


Example 6: sinkPrepare

import cascading.scheme.SinkCall; // import the required package/class
@Override
public void sinkPrepare(final FlowProcess<JobConf> flowProcess, final SinkCall<Object[], OutputCollector> sinkCall) throws IOException {
    final StringWriter stringWriter = new StringWriter(4 * 1024);
    final CSVWriter csvWriter = createCsvWriter(stringWriter);
    sinkCall.setContext(new Object[5]);
    sinkCall.getContext()[0] = new Text();                          // reusable output record
    sinkCall.getContext()[1] = stringWriter;                        // in-memory buffer each CSV line is built in
    sinkCall.getContext()[2] = Charset.forName(charsetName);        // charset used to encode the line
    sinkCall.getContext()[3] = csvWriter;                           // CSVWriter that formats the fields
    sinkCall.getContext()[4] = new String[getSinkFields().size()];  // reusable per-row value array

    if (hasHeader) {
        final Fields fields = sinkCall.getOutgoingEntry().getFields();
        write(sinkCall, fields);
    }
}
 
Developer: tresata, Project: cascading-opencsv, Lines of code: 17, Source file: OpenCsvScheme.java


Example 7: write

import cascading.scheme.SinkCall; // import the required package/class
@SuppressWarnings("unchecked")
protected void write(final SinkCall<Object[], OutputCollector> sinkCall, final Iterable<? extends Object> value) throws IOException {
    final Text text = (Text) sinkCall.getContext()[0];
    final StringWriter stringWriter = (StringWriter) sinkCall.getContext()[1];
    final Charset charset = (Charset) sinkCall.getContext()[2];
    final CSVWriter csvWriter = (CSVWriter) sinkCall.getContext()[3];
    final String[] nextLine = (String[]) sinkCall.getContext()[4];
    stringWriter.getBuffer().setLength(0);
    int i = 0;
    for (Object item: value) {
        nextLine[i] = item == null ? "" : item.toString();
        i++;
    }
    csvWriter.writeNext(nextLine);
    final int l = stringWriter.getBuffer().length();
    stringWriter.getBuffer().setLength(l > 0 ? l - 1 : 0);
    text.set(stringWriter.getBuffer().toString().getBytes(charset));
    sinkCall.getOutput().collect(null, text);
}
 
Developer: tresata, Project: cascading-opencsv, Lines of code: 20, Source file: OpenCsvScheme.java


Example 8: write

import cascading.scheme.SinkCall; // import the required package/class
@SuppressWarnings("unchecked")
@Override
public Result write(SinkCall<Object[], ?> sinkCall, Generator generator) {
    Tuple tuple = CascadingUtils.coerceToString(sinkCall);
    // consider names (in case of aliases these are already applied)
    List<String> names = (List<String>) sinkCall.getContext()[SINK_CTX_ALIASES];

    generator.writeBeginObject();
    for (int i = 0; i < tuple.size(); i++) {
        String name = (i < names.size() ? names.get(i) : "tuple" + i);
        // filter out fields
        if (shouldKeep(generator.getParentPath(), name)) {
            generator.writeFieldName(name);
            Object object = tuple.getObject(i);
            Result result = jdkWriter.write(object, generator);
            if (!result.isSuccesful()) {
                if (object instanceof Writable) {
                    return writableWriter.write((Writable) object, generator);
                }
                return Result.FAILED(object);
            }
        }
    }
    generator.writeEndObject();
    return Result.SUCCESFUL();
}
 
Developer: elastic, Project: elasticsearch-hadoop, Lines of code: 27, Source file: CascadingValueWriter.java


Example 9: sink

import cascading.scheme.SinkCall; // import the required package/class
@Override
public void sink(FlowProcess<JobConf> process, SinkCall<Object[], OutputCollector> sinkCall)
    throws IOException {
  TupleEntry tuple = sinkCall.getOutgoingEntry();

  Object obj = tuple.getObject(0);
  String key;
  //a hack since byte[] isn't natively handled by hadoop
  if (getStructure() instanceof DefaultPailStructure) {
    key = getCategory(obj);
  } else {
    key = Utils.join(getStructure().getTarget(obj), "/") + getCategory(obj);
  }
  if (bw == null) { bw = new BytesWritable(); }
  if (keyW == null) { keyW = new Text(); }
  serialize(obj, bw);
  keyW.set(key);
  sinkCall.getOutput().collect(keyW, bw);
}
 
Developer: indix, Project: dfs-datastores, Lines of code: 20, Source file: PailTap.java


Example 10: sinkPrepare

import cascading.scheme.SinkCall; // import the required package/class
@Override
public void sinkPrepare(FlowProcess<Properties> flowProcess, SinkCall<Object[], Object> sinkCall) throws IOException {
    super.sinkPrepare(flowProcess, sinkCall);

    Object[] context = new Object[1];
    Settings settings = HadoopSettingsManager.loadFrom(flowProcess.getConfigCopy()).merge(props);
    context[0] = CascadingUtils.fieldToAlias(settings, getSinkFields());
    sinkCall.setContext(context);
}
 
Developer: xushjie1987, Project: es-hadoop-v2.2.0, Lines of code: 10, Source file: EsLocalScheme.java


Example 11: toString

import cascading.scheme.SinkCall; // import the required package/class
@Override
public String toString(Object field) {
    if (field instanceof SinkCall) {
        return ((SinkCall) field).getOutgoingEntry().toString();
    }
    return field.toString();
}
 
Developer: xushjie1987, Project: es-hadoop-v2.2.0, Lines of code: 8, Source file: CascadingFieldExtractor.java


Example 12: sinkPrepare

import cascading.scheme.SinkCall; // import the required package/class
@Override
public void sinkPrepare(FlowProcess<JobConf> flowProcess, SinkCall<Object[], OutputCollector> sinkCall) throws IOException {
    super.sinkPrepare(flowProcess, sinkCall);

    Object[] context = new Object[1];
    // the tuple is fixed, so we can just use a collection/index
    Settings settings = loadSettings(flowProcess.getConfigCopy(), false);
    context[0] = CascadingUtils.fieldToAlias(settings, getSinkFields());
    sinkCall.setContext(context);
    IS_ES_20 = SettingsUtils.isEs20(settings);
}
 
Developer: xushjie1987, Project: es-hadoop-v2.2.0, Lines of code: 12, Source file: EsHadoopScheme.java


Example 13: sink

import cascading.scheme.SinkCall; // import the required package/class
@Override
public void sink(FlowProcess<JobConf> flowProcess, SinkCall<Object, Object> sinkCall) throws IOException {
    Tuple tuple = sinkCall.getOutgoingEntry().getTuple();
    StringBuffer sb = new StringBuffer();
    for (Object object : tuple) {
        if (object instanceof Writable) {
            sb.append(WritableUtils.fromWritable((Writable) object));
        }
        else {
            sb.append(object);
        }
        sb.append(" ");
    }
    ((PrintStream) sinkCall.getOutput()).println(sb.toString());
}
 
Developer: xushjie1987, Project: es-hadoop-v2.2.0, Lines of code: 16, Source file: HadoopPrintStreamTap.java


Example 14: sinkPrepare

import cascading.scheme.SinkCall; // import the required package/class
@Override
public void sinkPrepare(FlowProcess<JobConf> flowProcess, SinkCall<Object[], OutputCollector> sinkCall) throws IOException {
  initIndices();
  sinkCall.setContext(new Object[2]);
  sinkCall.getContext()[0] = new LongWritable();
  sinkCall.getContext()[1] = new ListWritable<>(Text.class); // reusable record buffer, filled in sink() (see Example 15)
}
 
Developer: datascienceinc, Project: cascading.csv, Lines of code: 8, Source file: CsvScheme.java


Example 15: sink

import cascading.scheme.SinkCall; // import the required package/class
@Override
@SuppressWarnings("unchecked")
public void sink(FlowProcess<JobConf> flowProcess, SinkCall<Object[], OutputCollector> sinkCall) throws IOException {
  ListWritable<Text> record = (ListWritable<Text>) sinkCall.getContext()[1];
  record.clear();

  TupleEntry entry = sinkCall.getOutgoingEntry();
  Tuple tuple = entry.getTuple();

  Fields fields = getSinkFields();
  for (int i = 0; i < fields.size(); i++) {
    int index = indices != null ? indices.get(fields.get(i).toString()) : i;
    if (record.size() < index) {
      for (int j = record.size(); j < index; j++) {
        record.add(null);
      }
    }

    Object value = tuple.getObject(i);
    if (value != null) {
      record.add(index, new Text(value.toString()));
    } else {
      record.add(index, null);
    }
  }

  sinkCall.getOutput().collect(null, record);
}
 
Developer: datascienceinc, Project: cascading.csv, Lines of code: 29, Source file: CsvScheme.java


Example 16: sink

import cascading.scheme.SinkCall; // import the required package/class
/**
 * Copies the values from the outgoing {@link TupleEntry} to the {@link Corc}.
 */
@SuppressWarnings("unchecked")
@Override
public void sink(FlowProcess<? extends Configuration> flowProcess, SinkCall<Corc, OutputCollector> sinkCall)
    throws IOException {
  Corc corc = sinkCall.getContext();
  TupleEntry tupleEntry = sinkCall.getOutgoingEntry();
  for (Comparable<?> fieldName : tupleEntry.getFields()) {
    corc.set(fieldName.toString(), tupleEntry.getObject(fieldName));
  }
  sinkCall.getOutput().collect(null, corc);
}
 
Developer: HotelsDotCom, Project: corc, Lines of code: 15, Source file: OrcFile.java


Example 17: sink

import cascading.scheme.SinkCall; // import the required package/class
@SuppressWarnings("unchecked")
@Override
public void sink(FlowProcess<JobConf> fp, SinkCall<Object[], OutputCollector> sc)
    throws IOException {
  TupleEntry tuple = sc.getOutgoingEntry();

  if (tuple.size() != 1) {
    throw new RuntimeException("ParquetValueScheme expects tuples with an arity of exactly 1, but found " + tuple.getFields());
  }

  T value = (T) tuple.getObject(0);
  OutputCollector output = sc.getOutput();
  output.collect(null, value);
}
 
Developer: apache, Project: parquet-mr, Lines of code: 15, Source file: ParquetValueScheme.java


Example 18: sink

import cascading.scheme.SinkCall; // import the required package/class
@SuppressWarnings("unchecked")
@Override
public void sink(FlowProcess<? extends JobConf> fp, SinkCall<Object[], OutputCollector> sc)
    throws IOException {
  TupleEntry tuple = sc.getOutgoingEntry();

  if (tuple.size() != 1) {
    throw new RuntimeException("ParquetValueScheme expects tuples with an arity of exactly 1, but found " + tuple.getFields());
  }

  T value = (T) tuple.getObject(0);
  OutputCollector output = sc.getOutput();
  output.collect(null, value);
}
 
Developer: apache, Project: parquet-mr, Lines of code: 15, Source file: ParquetValueScheme.java


Example 19: sink

import cascading.scheme.SinkCall; // import the required package/class
@Override
public void sink(FlowProcess<Config> flowProcess, SinkCall<Void, RedisSchemeCollector> sinkCall) throws IOException {
    Intermediate entry = getIntermediate(sinkCall.getOutgoingEntry());
    String key = getKey(entry);
    Value value = getValue(entry);
    String command = getCommand();
    logger.info("what: {} {}", key, value);
    sinkCall.getOutput().collect(command, key, value);
}
 
Developer: screen6, Project: cascading.redis, Lines of code: 10, Source file: RedisBaseScheme.java


Example 20: sinkPrepare

import cascading.scheme.SinkCall; // import the required package/class
@Override
public void sinkPrepare(FlowProcess<Properties> flowProcess, SinkCall<Object[], Object> sinkCall) throws IOException {
    super.sinkPrepare(flowProcess, sinkCall);

    Object[] context = new Object[SINK_CTX_SIZE];
    Settings settings = HadoopSettingsManager.loadFrom(flowProcess.getConfigCopy()).merge(props);
    context[SINK_CTX_ALIASES] = CascadingUtils.fieldToAlias(settings, getSinkFields());
    sinkCall.setContext(context);
}
 
Developer: elastic, Project: elasticsearch-hadoop, Lines of code: 10, Source file: EsLocalScheme.java



Note: The cascading.scheme.SinkCall examples in this article were collected from source-code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by their respective developers, and copyright remains with the original authors. Consult each project's license before redistributing or reusing the code, and do not republish this article without permission.

