
Java Field Class Code Examples


This article collects typical usage examples of the Java class org.apache.kafka.connect.data.Field. If you have been wondering what the Field class is for, how to use it, or where to find examples of it in practice, the curated code samples below should help.



The Field class belongs to the org.apache.kafka.connect.data package. Twenty code examples of the class are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your ratings help the system recommend better Java code samples.
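
Before turning to the examples, here is a minimal standalone sketch (not taken from any of the projects below) of how Field fits together with Schema, SchemaBuilder, and Struct in the Kafka Connect data API:

import org.apache.kafka.connect.data.Field;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaBuilder;
import org.apache.kafka.connect.data.Struct;

public class FieldBasics {
    public static void main(String[] args) {
        // Each field(name, schema) call on the builder creates one Field entry.
        Schema schema = SchemaBuilder.struct().name("example.User")
                .field("id", Schema.INT64_SCHEMA)
                .field("name", Schema.OPTIONAL_STRING_SCHEMA)
                .build();

        Struct user = new Struct(schema)
                .put("id", 42L)
                .put("name", "alice");

        // A Field exposes the field's index, name, and schema.
        for (Field field : schema.fields()) {
            System.out.printf("%d: %s (%s) = %s%n",
                    field.index(), field.name(), field.schema().type(), user.get(field));
        }
    }
}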

Example 1: applyWithSchema

import org.apache.kafka.connect.data.Field; // import the required package/class
private R applyWithSchema(R record) {
    Schema valueSchema = operatingSchema(record);
    Schema updatedSchema = getOrBuildSchema(valueSchema);

    // Whole-record casting
    if (wholeValueCastType != null)
        return newRecord(record, updatedSchema, castValueToType(operatingValue(record), wholeValueCastType));

    // Casting within a struct
    final Struct value = requireStruct(operatingValue(record), PURPOSE);

    final Struct updatedValue = new Struct(updatedSchema);
    for (Field field : value.schema().fields()) {
        final Object origFieldValue = value.get(field);
        final Schema.Type targetType = casts.get(field.name());
        final Object newFieldValue = targetType != null ? castValueToType(origFieldValue, targetType) : origFieldValue;
        updatedValue.put(updatedSchema.field(field.name()), newFieldValue);
    }
    return newRecord(record, updatedSchema, updatedValue);
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 21, Source: Cast.java
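
A usage note: the Cast transform above is driven entirely by its "spec" configuration, which maps field names to target types (a bare type name casts the whole value instead). A minimal sketch; the field names and types here are illustrative:

import java.util.Collections;
import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.transforms.Cast;

// Cast the value-side fields "age" and "score" of each record.
Cast<SourceRecord> cast = new Cast.Value<>();
cast.configure(Collections.singletonMap("spec", "age:int32,score:float64"));
SourceRecord transformed = cast.apply(record); // 'record' is an existing SourceRecord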


Example 2: buildWithSchema

import org.apache.kafka.connect.data.Field; // import the required package/class
private void buildWithSchema(Struct record, String fieldNamePrefix, Struct newRecord) {
    for (Field field : record.schema().fields()) {
        final String fieldName = fieldName(fieldNamePrefix, field.name());
        switch (field.schema().type()) {
            case INT8:
            case INT16:
            case INT32:
            case INT64:
            case FLOAT32:
            case FLOAT64:
            case BOOLEAN:
            case STRING:
            case BYTES:
                newRecord.put(fieldName, record.get(field));
                break;
            case STRUCT:
                buildWithSchema(record.getStruct(field.name()), fieldName, newRecord);
                break;
            default:
                throw new DataException("Flatten transformation does not support " + field.schema().type()
                        + " for record without schemas (for field " + fieldName + ").");
        }
    }
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 25, Source: Flatten.java


Example 3: makeUpdatedSchema

import org.apache.kafka.connect.data.Field; // import the required package/class
private Schema makeUpdatedSchema(Schema schema) {
    final SchemaBuilder builder = SchemaUtil.copySchemaBasics(schema, SchemaBuilder.struct());

    for (Field field : schema.fields()) {
        builder.field(field.name(), field.schema());
    }

    if (topicField != null) {
        builder.field(topicField.name, topicField.optional ? Schema.OPTIONAL_STRING_SCHEMA : Schema.STRING_SCHEMA);
    }
    if (partitionField != null) {
        builder.field(partitionField.name, partitionField.optional ? Schema.OPTIONAL_INT32_SCHEMA : Schema.INT32_SCHEMA);
    }
    if (offsetField != null) {
        builder.field(offsetField.name, offsetField.optional ? Schema.OPTIONAL_INT64_SCHEMA : Schema.INT64_SCHEMA);
    }
    if (timestampField != null) {
        builder.field(timestampField.name, timestampField.optional ? OPTIONAL_TIMESTAMP_SCHEMA : Timestamp.SCHEMA);
    }
    if (staticField != null) {
        builder.field(staticField.name, staticField.optional ? Schema.OPTIONAL_STRING_SCHEMA : Schema.STRING_SCHEMA);
    }

    return builder.build();
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 26, Source: InsertField.java
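
A usage sketch for the transform above; the field names are illustrative. Per the InsertField configuration, a plain field name inserts an optional field, while a "!" suffix would make it required:

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.transforms.InsertField;

// Enrich each sink record's value with the topic and offset it came from.
InsertField<SinkRecord> insert = new InsertField.Value<>();
Map<String, String> config = new HashMap<>();
config.put("topic.field", "kafka_topic");
config.put("offset.field", "kafka_offset"); // offsets are only meaningful for sink records
insert.configure(config);
SinkRecord enriched = insert.apply(record); // 'record' is an existing SinkRecord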


Example 4: applyWithSchema

import org.apache.kafka.connect.data.Field; // import the required package/class
private R applyWithSchema(R record) {
    final Struct value = requireStruct(operatingValue(record), PURPOSE);

    Schema updatedSchema = schemaUpdateCache.get(value.schema());
    if (updatedSchema == null) {
        updatedSchema = makeUpdatedSchema(value.schema());
        schemaUpdateCache.put(value.schema(), updatedSchema);
    }

    final Struct updatedValue = new Struct(updatedSchema);

    for (Field field : updatedSchema.fields()) {
        final Object fieldValue = value.get(reverseRenamed(field.name()));
        updatedValue.put(field.name(), fieldValue);
    }

    return newRecord(record, updatedSchema, updatedValue);
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 19, Source: ReplaceField.java


Example 5: convert

import org.apache.kafka.connect.data.Field; // import the required package/class
@Override
public Object convert(Schema schema, JsonNode value) {
    if (!value.isObject())
        throw new DataException("Structs should be encoded as JSON objects, but found " + value.getNodeType());

    // We only have ISchema here but need Schema, so we need to materialize the actual schema. Using ISchema
    // avoids having to materialize the schema for non-Struct types but it cannot be avoided for Structs since
    // they require a schema to be provided at construction. However, the schema is only a SchemaBuilder during
    // translation of schemas to JSON; during the more common translation of data to JSON, the call to schema.schema()
    // just returns the schema Object and has no overhead.
    Struct result = new Struct(schema.schema());
    for (Field field : schema.fields())
        result.put(field, convertToConnect(field.schema(), value.get(field.name())));

    return result;
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 17, Source: JsonConverter.java
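
For context, convert is an internal step of JsonConverter; callers normally go through the public Converter interface instead. A round-trip sketch ('schema' and 'struct' are assumed to be an existing Schema and matching Struct):

import java.util.Collections;
import org.apache.kafka.connect.data.SchemaAndValue;
import org.apache.kafka.connect.json.JsonConverter;

JsonConverter converter = new JsonConverter();
converter.configure(Collections.singletonMap("schemas.enable", "true"), false); // false = value converter

byte[] json = converter.fromConnectData("my-topic", schema, struct); // Connect data -> JSON bytes
SchemaAndValue back = converter.toConnectData("my-topic", json);     // JSON bytes -> Connect data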


Example 6: getRowData

import org.apache.kafka.connect.data.Field; // import the required package/class
@Override
public List<List<String>> getRowData() {
  List<List<String>> rows = new ArrayList<>();

  if (Schema.Type.STRUCT != this.schema.type()) {
    return rows;
  }

  for (Field field : this.schema.fields()) {
    rows.add(
        ImmutableList.of(
            field.name(),
            type(field.schema()),
            String.format("%s", field.schema().isOptional()),
            null != field.schema().defaultValue() ? field.schema().defaultValue().toString() : "",
            !Strings.isNullOrEmpty(field.schema().doc()) ? field.schema().doc() : ""
        )
    );
  }


  return rows;
}
 
Developer: jcustenborder, Project: connect-utils, Lines: 24, Source: TemplateSchema.java


Example 7: serialize

import org.apache.kafka.connect.data.Field; // import the required package/class
@Override
public void serialize(Struct struct, JsonGenerator jsonGenerator, SerializerProvider serializerProvider) throws IOException, JsonProcessingException {
  struct.validate();
  Storage result = new Storage();
  result.schema = struct.schema();
  result.fieldValues = new ArrayList<>();
  for (Field field : struct.schema().fields()) {
    log.trace("serialize() - Processing field '{}'", field.name());
    FieldValue fieldValue = new FieldValue();
    fieldValue.name = field.name();
    fieldValue.schema = field.schema();
    fieldValue.value(struct.get(field));
    result.fieldValues.add(fieldValue);
  }
  jsonGenerator.writeObject(result);
}
 
Developer: jcustenborder, Project: connect-utils, Lines: 17, Source: StructSerializationModule.java
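
Assuming StructSerializationModule follows the standard Jackson SimpleModule pattern (which the serialize method above suggests) and lives in the package path shown here, registering it lets an ObjectMapper render Connect Structs as JSON:

import com.fasterxml.jackson.databind.ObjectMapper;
import com.github.jcustenborder.kafka.connect.utils.jackson.StructSerializationModule; // package path assumed

ObjectMapper mapper = new ObjectMapper();
mapper.registerModule(new StructSerializationModule()); // wires in the Struct serializer above
String json = mapper.writeValueAsString(struct); // 'struct' is an existing Struct; may throw JsonProcessingException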


Example 8: Storage

import org.apache.kafka.connect.data.Field; // import the required package/class
Storage(Schema schema) {
  this.name = schema.name();
  this.doc = schema.doc();
  this.type = schema.type();
  this.defaultValue = schema.defaultValue();
  this.version = schema.version();
  this.parameters = schema.parameters();
  this.isOptional = schema.isOptional();

  if (Schema.Type.MAP == this.type) {
    this.keySchema = schema.keySchema();
    this.valueSchema = schema.valueSchema();
  } else if (Schema.Type.ARRAY == this.type) {
    this.keySchema = null;
    this.valueSchema = schema.valueSchema();
  } else if (Schema.Type.STRUCT == this.type) {
    this.fieldSchemas = new LinkedHashMap<>();
    for (Field field : schema.fields()) {
      this.fieldSchemas.put(field.name(), field.schema());
    }
  }
}
 
Developer: jcustenborder, Project: connect-utils, Lines: 23, Source: SchemaSerializationModule.java


Example 9: getFieldIndexByName

import org.apache.kafka.connect.data.Field; // import the required package/class
public static int getFieldIndexByName(final Schema schema, final String fieldName) {
  if (schema.fields() == null) {
    return -1;
  }
  for (int i = 0; i < schema.fields().size(); i++) {
    Field field = schema.fields().get(i);
    int dotIndex = field.name().indexOf('.');
    if (dotIndex == -1) {
      if (field.name().equals(fieldName)) {
        return i;
      }
    } else {
      if (dotIndex < fieldName.length()) {
        String
            fieldNameWithDot =
            fieldName.substring(0, dotIndex) + "." + fieldName.substring(dotIndex + 1);
        if (field.name().equals(fieldNameWithDot)) {
          return i;
        }
      }
    }

  }
  return -1;
}
 
Developer: confluentinc, Project: ksql, Lines: 26, Source: SchemaUtil.java
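
A quick sketch of the lookup behaviour against a hypothetical schema, including the -1 miss case (imports of Schema and SchemaBuilder assumed):

Schema schema = SchemaBuilder.struct()
    .field("ORDERS.ITEMID", Schema.STRING_SCHEMA)
    .field("QUANTITY", Schema.INT32_SCHEMA)
    .build();

int itemId  = SchemaUtil.getFieldIndexByName(schema, "ORDERS.ITEMID"); // 0: dotted name matches
int qty     = SchemaUtil.getFieldIndexByName(schema, "QUANTITY");      // 1: exact match
int missing = SchemaUtil.getFieldIndexByName(schema, "PRICE");         // -1: not found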


Example 10: handleStruct

import org.apache.kafka.connect.data.Field; // import the required package/class
void handleStruct(Event event) {
  final Struct input = (Struct) event.event;
  List<Field> fields = input.schema().fields();
  final Map<Object, Object> result = new LinkedHashMap<>(fields.size());

  for (Field field : fields) {
    Object key = field.name();
    Object value = input.get(field);

    if (null == value) {
      continue;
    }

    if (!event.setValue(key, value)) {
      result.put(key, value);
    }
  }

  event.event = result.isEmpty() ? null : result;
}
 
Developer: jcustenborder, Project: kafka-connect-splunk, Lines: 21, Source: ObjectMapperFactory.java


Example 11: valueSchema

import org.apache.kafka.connect.data.Field; // import the required package/class
public static Schema valueSchema(SObjectDescriptor descriptor) {
  String name = String.format("%s.%s", SObjectHelper.class.getPackage().getName(), descriptor.name());
  SchemaBuilder builder = SchemaBuilder.struct();
  builder.name(name);

  for (SObjectDescriptor.Field field : descriptor.fields()) {
    if (isTextArea(field)) {
      continue;
    }
    Schema schema = schema(field);
    builder.field(field.name(), schema);
  }

  builder.field(FIELD_OBJECT_TYPE, Schema.OPTIONAL_STRING_SCHEMA);
  builder.field(FIELD_EVENT_TYPE, Schema.OPTIONAL_STRING_SCHEMA);

  return builder.build();
}
 
Developer: jcustenborder, Project: kafka-connect-salesforce, Lines: 19, Source: SObjectHelper.java


Example 12: convertStruct

import org.apache.kafka.connect.data.Field; // import the required package/class
public void convertStruct(JsonNode sObjectNode, Schema schema, Struct struct) {
  for (Field field : schema.fields()) {
    String fieldName = field.name();
    JsonNode valueNode = sObjectNode.findValue(fieldName);

    final Object value;
    if (ADDRESS_SCHEMA_NAME.equals(field.schema().name())) {
      Struct address = new Struct(field.schema());
      for (Field addressField : field.schema().fields()) {
        JsonNode fieldValueNode = valueNode.findValue(addressField.name());
        Object fieldValue = PARSER.parseJsonNode(addressField.schema(), fieldValueNode);
        address.put(addressField, fieldValue);
      }
      value = address;
    } else {
      value = PARSER.parseJsonNode(field.schema(), valueNode);
    }
    struct.put(field, value);
  }
}
 
Developer: jcustenborder, Project: kafka-connect-salesforce, Lines: 21, Source: SObjectHelper.java


Example 13: convertMap

import org.apache.kafka.connect.data.Field; // import the required package/class
private com.google.cloud.bigquery.Field.Builder convertMap(Schema kafkaConnectSchema,
                                                           String fieldName) {
  Schema keySchema = kafkaConnectSchema.keySchema();
  Schema valueSchema = kafkaConnectSchema.valueSchema();

  com.google.cloud.bigquery.Field.Builder keyFieldBuilder =
      convertField(keySchema, MAP_KEY_FIELD_NAME);
  com.google.cloud.bigquery.Field.Builder valueFieldBuilder =
      convertField(valueSchema, MAP_VALUE_FIELD_NAME);

  com.google.cloud.bigquery.Field keyField = keyFieldBuilder.build();
  com.google.cloud.bigquery.Field valueField = valueFieldBuilder.build();

  com.google.cloud.bigquery.Field.Type bigQueryMapEntryType =
      com.google.cloud.bigquery.Field.Type.record(keyField, valueField);

  com.google.cloud.bigquery.Field.Builder bigQueryRecordBuilder =
      com.google.cloud.bigquery.Field.newBuilder(fieldName, bigQueryMapEntryType);

  return bigQueryRecordBuilder.setMode(com.google.cloud.bigquery.Field.Mode.REPEATED);
}
 
Developer: wepay, Project: kafka-connect-bigquery, Lines: 22, Source: BigQuerySchemaConverter.java


Example 14: convertStruct

import org.apache.kafka.connect.data.Field; // import the required package/class
private Map<String, Object> convertStruct(Object kafkaConnectObject,
                                          Schema kafkaConnectSchema) {
  Map<String, Object> bigQueryRecord = new HashMap<>();
  List<Field> kafkaConnectSchemaFields = kafkaConnectSchema.fields();
  Struct kafkaConnectStruct = (Struct) kafkaConnectObject;
  for (Field kafkaConnectField : kafkaConnectSchemaFields) {
    Object bigQueryObject = convertObject(
        kafkaConnectStruct.get(kafkaConnectField.name()),
        kafkaConnectField.schema()
    );
    if (bigQueryObject != null) {
      bigQueryRecord.put(kafkaConnectField.name(), bigQueryObject);
    }
  }
  return bigQueryRecord;
}
 
Developer: wepay, Project: kafka-connect-bigquery, Lines: 17, Source: BigQueryRecordConverter.java


Example 15: KsqlStructuredDataOutputNode

import org.apache.kafka.connect.data.Field; // import the required package/class
@JsonCreator
public KsqlStructuredDataOutputNode(@JsonProperty("id") final PlanNodeId id,
                                    @JsonProperty("source") final PlanNode source,
                                    @JsonProperty("schema") final Schema schema,
                                    @JsonProperty("timestamp") final Field timestampField,
                                    @JsonProperty("key") final Field keyField,
                                    @JsonProperty("ksqlTopic") final KsqlTopic ksqlTopic,
                                    @JsonProperty("topicName") final String topicName,
                                    @JsonProperty("outputProperties") final Map<String, Object>
                                          outputProperties,
                                    @JsonProperty("limit") final Optional<Integer> limit) {
  super(id, source, schema, limit);
  this.kafkaTopicName = topicName;
  this.keyField = keyField;
  this.timestampField = timestampField;
  this.ksqlTopic = ksqlTopic;
  this.outputProperties = outputProperties;
}
 
Developer: confluentinc, Project: ksql, Lines: 19, Source: KsqlStructuredDataOutputNode.java


Example 16: SourceDescription

import org.apache.kafka.connect.data.Field; // import the required package/class
public SourceDescription(StructuredDataSource dataSource, boolean extended, String serdes, String topology, String executionPlan, List<String> readQueries, List<String> writeQueries, KafkaTopicClient topicClient) {

    this(
        "",
        dataSource.getName(),
        readQueries,
        writeQueries,
        dataSource.getSchema().fields().stream().map(
            field -> {
              return new FieldSchemaInfo(field.name(), SchemaUtil
                  .getSchemaFieldName(field));
            }).collect(Collectors.toList()),
        dataSource.getDataSourceType().getKqlType(),
        Optional.ofNullable(dataSource.getKeyField()).map(Field::name).orElse(""),
        Optional.ofNullable(dataSource.getTimestampField()).map(Field::name).orElse(""),
        (extended ? MetricCollectors.getStatsFor(dataSource.getTopicName(), false) : ""),
        (extended ? MetricCollectors.getStatsFor(dataSource.getTopicName(), true) : ""),
        extended,
        serdes,
        dataSource.getKsqlTopic().getKafkaTopicName(),
        topology,
        executionPlan,
        (extended && topicClient != null ? getPartitions(topicClient,  dataSource.getKsqlTopic().getKafkaTopicName()) : 0),
        (extended && topicClient != null ? getReplication(topicClient, dataSource.getKsqlTopic().getKafkaTopicName()) : 0)
    );
  }
 
Developer: confluentinc, Project: ksql, Lines: 27, Source: SourceDescription.java


Example 17: selectKey

import org.apache.kafka.connect.data.Field; // import the required package/class
@SuppressWarnings("unchecked")
public SchemaKStream selectKey(final Field newKeyField) {
  if (keyField != null &&
      keyField.name().equals(newKeyField.name())) {
    return this;
  }

  KStream keyedKStream = kstream.selectKey((key, value) -> {

    String
        newKey =
        value.getColumns().get(SchemaUtil.getFieldIndexByName(schema, newKeyField.name()))
            .toString();
    return newKey;
  }).map((KeyValueMapper<String, GenericRow, KeyValue<String, GenericRow>>) (key, row) -> {
    row.getColumns().set(SchemaUtil.ROWKEY_NAME_INDEX, key);
    return new KeyValue<>(key, row);
  });

  return new SchemaKStream(schema, keyedKStream, newKeyField, Collections.singletonList(this),
                           Type.REKEY, functionRegistry, schemaRegistryClient);
}
 
Developer: confluentinc, Project: ksql, Lines: 23, Source: SchemaKStream.java


Example 18: visitSubscriptExpression

import org.apache.kafka.connect.data.Field; // import the required package/class
@Override
protected Pair<String, Schema> visitSubscriptExpression(SubscriptExpression node,
                                                        Boolean unmangleNames) {
  String arrayBaseName = node.getBase().toString();
  Optional<Field> schemaField = SchemaUtil.getFieldByName(schema, arrayBaseName);
  if (!schemaField.isPresent()) {
    throw new KsqlException("Field not found: " + arrayBaseName);
  }
  if (schemaField.get().schema().type() == Schema.Type.ARRAY) {
    return new Pair<>(process(node.getBase(), unmangleNames).getLeft() + "[(int)("
                      + process(node.getIndex(), unmangleNames).getLeft() + ")]", schema);
  } else if (schemaField.get().schema().type() == Schema.Type.MAP) {
    return new Pair<>("("
                      + SchemaUtil.getJavaCastString(schemaField.get().schema().valueSchema())
                      + process(node.getBase(), unmangleNames).getLeft() + ".get"
                      + "(" + process(node.getIndex(), unmangleNames).getLeft() + "))", schema);
  }
  throw new UnsupportedOperationException();
}
 
Developer: confluentinc, Project: ksql, Lines: 20, Source: SqlToJavaVisitor.java


Example 19: convertSchema

import org.apache.kafka.connect.data.Field; // import the required package/class
public static List<FieldSchema> convertSchema(Schema schema) {
  List<FieldSchema> columns = new ArrayList<>();
  if (Schema.Type.STRUCT.equals(schema.type())) {
    for (Field field: schema.fields()) {
      columns.add(new FieldSchema(
          field.name(), convert(field.schema()).getTypeName(), field.schema().doc()));
    }
  }
  return columns;
}
 
Developer: jiangxiluning, Project: kafka-connect-hdfs, Lines: 11, Source: HiveSchemaConverter.java


Example 20: convertStruct

import org.apache.kafka.connect.data.Field; // import the required package/class
public static TypeInfo convertStruct(Schema schema) {
  final List<Field> fields = schema.fields();
  final List<String> names = new ArrayList<>(fields.size());
  final List<TypeInfo> types = new ArrayList<>(fields.size());
  for (Field field : fields) {
    names.add(field.name());
    types.add(convert(field.schema()));
  }
  return TypeInfoFactory.getStructTypeInfo(names, types);
}
 
Developer: jiangxiluning, Project: kafka-connect-hdfs, Lines: 11, Source: HiveSchemaConverter.java



Note: The org.apache.kafka.connect.data.Field examples in this article were collected from GitHub, MSDocs, and other source-code and documentation platforms. The snippets were selected from open-source projects contributed by many developers; copyright in the source code remains with the original authors, and distribution and use are subject to each project's license. Please do not republish without permission.

