Java StorageDescriptor Class Code Examples


This article compiles typical usage examples of the Java class org.apache.hadoop.hive.metastore.api.StorageDescriptor. If you are wondering what exactly StorageDescriptor does, or how to use it in your own code, the curated class examples below should help.



The StorageDescriptor class belongs to the org.apache.hadoop.hive.metastore.api package. Twenty code examples of the class are shown below, ordered by popularity by default. You can upvote the examples you like or find useful; your ratings help the system recommend better Java code examples.
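
Before the examples themselves, here is a minimal, self-contained sketch (not taken from any of the projects below) of how a StorageDescriptor is typically built and attached to a Table. The database name, table name, columns, location, and formats used here are illustrative assumptions, not values from the examples that follow.

import java.util.Arrays;
import java.util.HashMap;

import org.apache.hadoop.hive.metastore.api.FieldSchema;
import org.apache.hadoop.hive.metastore.api.SerDeInfo;
import org.apache.hadoop.hive.metastore.api.StorageDescriptor;
import org.apache.hadoop.hive.metastore.api.Table;

public class StorageDescriptorSketch {
  public static void main(String[] args) {
    // Describe the physical layout of the table's data.
    StorageDescriptor sd = new StorageDescriptor();
    sd.setCols(Arrays.asList(
        new FieldSchema("id", "int", "row id"),
        new FieldSchema("name", "string", "display name")));
    sd.setLocation("hdfs:///user/hive/warehouse/example_db.db/example_table"); // illustrative path
    sd.setInputFormat("org.apache.hadoop.mapred.TextInputFormat");
    sd.setOutputFormat("org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat");
    sd.setParameters(new HashMap<String, String>());

    // The SerDe tells Hive how rows at that location are (de)serialized.
    SerDeInfo serde = new SerDeInfo();
    serde.setSerializationLib("org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe");
    sd.setSerdeInfo(serde);

    // Attach the descriptor to a table definition.
    Table table = new Table();
    table.setDbName("example_db");
    table.setTableName("example_table");
    table.setSd(sd);

    System.out.println(table);
  }
}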

Example 1: createPartitionedTable

import org.apache.hadoop.hive.metastore.api.StorageDescriptor; // import the required package/class
private Table createPartitionedTable(String databaseName, String tableName) throws Exception {
  Table table = new Table();
  table.setDbName(DATABASE);
  table.setTableName(tableName);
  table.setPartitionKeys(Arrays.asList(new FieldSchema("partcol", "int", null)));
  table.setSd(new StorageDescriptor());
  table.getSd().setCols(Arrays.asList(new FieldSchema("id", "int", null), new FieldSchema("name", "string", null)));
  table.getSd().setInputFormat("org.apache.hadoop.mapred.TextInputFormat");
  table.getSd().setOutputFormat("org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat");
  table.getSd().setSerdeInfo(new SerDeInfo());
  table.getSd().getSerdeInfo().setSerializationLib("org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe");
  HiveMetaStoreClient client = server.newClient();
  client.createTable(table);
  client.close();
  return table;
}
 
Developer: HotelsDotCom, Project: beeju, Lines: 17, Source file: HiveServer2JUnitRuleTest.java
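
The table in example 1 declares a partition key ("partcol") but registers no partitions. As a hedged follow-up that is not part of the original beeju test, the sketch below shows one way a partition could be added for such a table through the same HiveMetaStoreClient; the partition value "1" is an illustrative assumption, the table's StorageDescriptor is copied via the Thrift-generated copy constructor, and the partition location is left unset so the metastore can derive it for a managed table.

import java.util.Arrays;

import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;
import org.apache.hadoop.hive.metastore.api.Partition;
import org.apache.hadoop.hive.metastore.api.StorageDescriptor;
import org.apache.hadoop.hive.metastore.api.Table;

private static void addExamplePartition(HiveMetaStoreClient client, Table table) throws Exception {
  Partition partition = new Partition();
  partition.setDbName(table.getDbName());
  partition.setTableName(table.getTableName());
  // One value per partition key; the table above has the single int partition column "partcol".
  partition.setValues(Arrays.asList("1"));
  // Copy the table's storage descriptor so the partition shares its columns, formats, and SerDe;
  // the location stays unset and the metastore derives it for a managed table.
  partition.setSd(new StorageDescriptor(table.getSd()));
  client.add_partition(partition);
}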


Example 2: createPartitionedTable

import org.apache.hadoop.hive.metastore.api.StorageDescriptor; // import the required package/class
static Table createPartitionedTable(HiveMetaStoreClient metaStoreClient, String database, String table, File location)
  throws Exception {

  Table hiveTable = new Table();
  hiveTable.setDbName(database);
  hiveTable.setTableName(table);
  hiveTable.setTableType(TableType.EXTERNAL_TABLE.name());
  hiveTable.putToParameters("EXTERNAL", "TRUE");

  hiveTable.setPartitionKeys(PARTITION_COLUMNS);

  StorageDescriptor sd = new StorageDescriptor();
  sd.setCols(DATA_COLUMNS);
  sd.setLocation(location.toURI().toString());
  sd.setParameters(new HashMap<String, String>());
  sd.setSerdeInfo(new SerDeInfo());

  hiveTable.setSd(sd);

  metaStoreClient.createTable(hiveTable);

  return hiveTable;
}
 
Developer: HotelsDotCom, Project: waggle-dance, Lines: 24, Source file: TestUtils.java


Example 3: setupHiveTables

import org.apache.hadoop.hive.metastore.api.StorageDescriptor; // import the required package/class
private void setupHiveTables() throws TException, IOException {
  List<FieldSchema> partitionKeys = Lists.newArrayList(newFieldSchema("p1"), newFieldSchema("p2"));

  File tableLocation = new File("db1", "table1");
  StorageDescriptor sd = newStorageDescriptor(tableLocation, "col0");
  table1 = newTable("table1", "db1", partitionKeys, sd);
  Partition partition1 = newPartition(table1, "value1", "value2");
  Partition partition2 = newPartition(table1, "value11", "value22");
  table1Partitions = Arrays.asList(partition1, partition2);
  table1PartitionNames = Arrays.asList(Warehouse.makePartName(partitionKeys, partition1.getValues()),
      Warehouse.makePartName(partitionKeys, partition2.getValues()));

  File tableLocation2 = new File("db2", "table2");
  StorageDescriptor sd2 = newStorageDescriptor(tableLocation2, "col0");
  table2 = newTable("table2", "db2", partitionKeys, sd2);
}
 
Developer: HotelsDotCom, Project: circus-train, Lines: 17, Source file: DiffGeneratedPartitionPredicateTest.java


Example 4: newTable

import org.apache.hadoop.hive.metastore.api.StorageDescriptor; // import the required package/class
private Table newTable() {
  Table table = new Table();
  table.setDbName(DB_NAME);
  table.setTableName(TABLE_NAME);
  table.setTableType(TableType.EXTERNAL_TABLE.name());

  StorageDescriptor sd = new StorageDescriptor();
  sd.setLocation(tableLocation);
  table.setSd(sd);

  HashMap<String, String> parameters = new HashMap<>();
  parameters.put(StatsSetupConst.ROW_COUNT, "1");
  table.setParameters(parameters);

  table.setPartitionKeys(PARTITIONS);
  return table;
}
 
Developer: HotelsDotCom, Project: circus-train, Lines: 18, Source file: ReplicaTest.java


Example 5: createView

import org.apache.hadoop.hive.metastore.api.StorageDescriptor; // import the required package/class
private static Table createView(
    HiveMetaStoreClient metaStoreClient,
    String database,
    String view,
    String table,
    List<FieldSchema> partitionCols)
  throws TException {
  Table hiveView = new Table();
  hiveView.setDbName(database);
  hiveView.setTableName(view);
  hiveView.setTableType(TableType.VIRTUAL_VIEW.name());
  hiveView.setViewOriginalText(hql(database, table));
  hiveView.setViewExpandedText(expandHql(database, table, DATA_COLUMNS, partitionCols));
  hiveView.setPartitionKeys(partitionCols);

  StorageDescriptor sd = new StorageDescriptor();
  sd.setCols(DATA_COLUMNS);
  sd.setParameters(new HashMap<String, String>());
  sd.setSerdeInfo(new SerDeInfo());
  hiveView.setSd(sd);

  metaStoreClient.createTable(hiveView);

  return hiveView;
}
 
Developer: HotelsDotCom, Project: circus-train, Lines: 26, Source file: TestUtils.java


Example 6: HivePartition

import org.apache.hadoop.hive.metastore.api.StorageDescriptor; // import the required package/class
@JsonCreator
public HivePartition(@JsonProperty("values") List<String> values, @JsonProperty("tableName") String tableName, @JsonProperty("dbName") String dbName, @JsonProperty("createTime") int createTime,
                     @JsonProperty("lastAccessTime") int lastAccessTime,  @JsonProperty("sd") StorageDescriptorWrapper sd,
                     @JsonProperty("parameters") Map<String, String> parameters
) {
  this.values = values;
  this.tableName = tableName;
  this.dbName = dbName;
  this.createTime = createTime;
  this.lastAccessTime = lastAccessTime;
  this.sd = sd;
  this.parameters = parameters;

  StorageDescriptor sdUnwrapped = sd.getSd();
  this.partition = new org.apache.hadoop.hive.metastore.api.Partition(values, tableName, dbName, createTime, lastAccessTime, sdUnwrapped, parameters);
}
 
Developer: skhalifa, Project: QDrill, Lines: 17, Source file: HiveTable.java


Example 7: StorageDescriptorWrapper

import org.apache.hadoop.hive.metastore.api.StorageDescriptor; // import the required package/class
public StorageDescriptorWrapper(StorageDescriptor sd) {
  this.sd = sd;
  this.cols = Lists.newArrayList();
  for (FieldSchema f : sd.getCols()) {
    this.cols.add(new FieldSchemaWrapper(f));
  }
  this.location = sd.getLocation();
  this.inputFormat = sd.getInputFormat();
  this.outputFormat = sd.getOutputFormat();
  this.compressed = sd.isCompressed();
  this.numBuckets = sd.getNumBuckets();
  this.serDeInfo = new SerDeInfoWrapper(sd.getSerdeInfo());
//  this.bucketCols = sd.getBucketCols();
  this.sortCols = Lists.newArrayList();
  for (Order o : sd.getSortCols()) {
    this.sortCols.add(new OrderWrapper(o));
  }
  this.parameters = sd.getParameters();
}
 
Developer: skhalifa, Project: QDrill, Lines: 20, Source file: HiveTable.java


Example 8: createUnpartitionedTable

import org.apache.hadoop.hive.metastore.api.StorageDescriptor; // import the required package/class
static Table createUnpartitionedTable(
    HiveMetaStoreClient metaStoreClient,
    String database,
    String table,
    File location)
  throws TException {
  Table hiveTable = new Table();
  hiveTable.setDbName(database);
  hiveTable.setTableName(table);
  hiveTable.setTableType(TableType.EXTERNAL_TABLE.name());
  hiveTable.putToParameters("EXTERNAL", "TRUE");

  StorageDescriptor sd = new StorageDescriptor();
  sd.setCols(DATA_COLUMNS);
  sd.setLocation(location.toURI().toString());
  sd.setParameters(new HashMap<String, String>());
  sd.setSerdeInfo(new SerDeInfo());

  hiveTable.setSd(sd);

  metaStoreClient.createTable(hiveTable);

  return hiveTable;
}
 
Developer: HotelsDotCom, Project: waggle-dance, Lines: 25, Source file: TestUtils.java


Example 9: extractHiveStorageFormat

import org.apache.hadoop.hive.metastore.api.StorageDescriptor; // import the required package/class
private HiveStorageFormat extractHiveStorageFormat(final Table table) throws MetaException {
    final StorageDescriptor descriptor = table.getSd();
    if (descriptor == null) {
        throw new MetaException("Table is missing storage descriptor");
    }
    final SerDeInfo serdeInfo = descriptor.getSerdeInfo();
    if (serdeInfo == null) {
        throw new MetaException(
            "Table storage descriptor is missing SerDe info");
    }
    final String outputFormat = descriptor.getOutputFormat();
    final String serializationLib = serdeInfo.getSerializationLib();

    for (HiveStorageFormat format : HiveStorageFormat.values()) {
        if (format.getOutputFormat().equals(outputFormat) && format.getSerde().equals(serializationLib)) {
            return format;
        }
    }
    throw new MetaException(
        String.format("Output format %s with SerDe %s is not supported", outputFormat, serializationLib));
}
 
Developer: Netflix, Project: metacat, Lines: 22, Source file: HiveConnectorTableService.java


Example 10: copyTableSdToPartitionInfoSd

import org.apache.hadoop.hive.metastore.api.StorageDescriptor; // import the required package/class
private void copyTableSdToPartitionInfoSd(final PartitionInfo partitionInfo, final Table table) {
    final StorageInfo sd = partitionInfo.getSerde();
    final StorageDescriptor tableSd = table.getSd();

    if (StringUtils.isBlank(sd.getInputFormat())) {
        sd.setInputFormat(tableSd.getInputFormat());
    }
    if (StringUtils.isBlank(sd.getOutputFormat())) {
        sd.setOutputFormat(tableSd.getOutputFormat());
    }
    if (sd.getParameters() == null || sd.getParameters().isEmpty()) {
        sd.setParameters(tableSd.getParameters());
    }
    final SerDeInfo tableSerde = tableSd.getSerdeInfo();
    if (tableSerde != null) {
        if (StringUtils.isBlank(sd.getSerializationLib())) {
            sd.setSerializationLib(tableSerde.getSerializationLib());
        }
        if (sd.getSerdeInfoParameters() == null || sd.getSerdeInfoParameters().isEmpty()) {
            sd.setSerdeInfoParameters(tableSerde.getParameters());
        }
    }
}
 
Developer: Netflix, Project: metacat, Lines: 24, Source file: HiveConnectorFastPartitionService.java


Example 11: toStorageInfo

import org.apache.hadoop.hive.metastore.api.StorageDescriptor; // import the required package/class
private StorageInfo toStorageInfo(final StorageDescriptor sd, final String owner) {
    if (sd == null) {
        return new StorageInfo();
    }
    if (sd.getSerdeInfo() != null) {
        return StorageInfo.builder().owner(owner)
            .uri(sd.getLocation())
            .inputFormat(sd.getInputFormat())
            .outputFormat(sd.getOutputFormat())
            .parameters(sd.getParameters())
            .serializationLib(sd.getSerdeInfo().getSerializationLib())
            .serdeInfoParameters(sd.getSerdeInfo().getParameters())
            .build();
    }
    return StorageInfo.builder().owner(owner).uri(sd.getLocation()).inputFormat(sd.getInputFormat())
        .outputFormat(sd.getOutputFormat()).parameters(sd.getParameters()).build();
}
 
Developer: Netflix, Project: metacat, Lines: 18, Source file: HiveConnectorInfoConverter.java


Example 12: toStorageDto

import org.apache.hadoop.hive.metastore.api.StorageDescriptor; // import the required package/class
private StorageDto toStorageDto(@Nullable final StorageDescriptor sd, final String owner) {
    final StorageDto result = new StorageDto();
    if (sd != null) {
        result.setOwner(owner);
        result.setUri(sd.getLocation());
        result.setInputFormat(sd.getInputFormat());
        result.setOutputFormat(sd.getOutputFormat());
        result.setParameters(sd.getParameters());
        final SerDeInfo serde = sd.getSerdeInfo();
        if (serde != null) {
            result.setSerializationLib(serde.getSerializationLib());
            result.setSerdeInfoParameters(serde.getParameters());
        }
    }
    return result;
}
 
Developer: Netflix, Project: metacat, Lines: 17, Source file: HiveConvertersImpl.java


Example 13: testCheckTableSchemaMappingMissingColumn

import org.apache.hadoop.hive.metastore.api.StorageDescriptor; // import the required package/class
@Test
public void testCheckTableSchemaMappingMissingColumn() throws MetaException {
  TableDescription description = getHashRangeTable();

  Table table = new Table();
  Map<String, String> parameters = Maps.newHashMap();
  parameters.put(DynamoDBConstants.DYNAMODB_COLUMN_MAPPING, "col1:dynamo_col1$,hashKey:hashKey");
  table.setParameters(parameters);
  StorageDescriptor sd = new StorageDescriptor();
  List<FieldSchema> cols = Lists.newArrayList();
  cols.add(new FieldSchema("col1", "string", ""));
  cols.add(new FieldSchema("col2", "tinyint", ""));
  cols.add(new FieldSchema("col3", "map<string,string>", ""));
  cols.add(new FieldSchema("hashMap", "string", ""));
  sd.setCols(cols);
  table.setSd(sd);

  exceptionRule.expect(MetaException.class);
  exceptionRule.expectMessage("Could not find column mapping for column: col2");
  storageHandler.checkTableSchemaMapping(description, table);
}
 
Developer: awslabs, Project: emr-dynamodb-connector, Lines: 22, Source file: DynamoDBStorageHandlerTest.java


Example 14: testCheckTableSchemaMappingValid

import org.apache.hadoop.hive.metastore.api.StorageDescriptor; // import the required package/class
@Test
public void testCheckTableSchemaMappingValid() throws MetaException {
  TableDescription description = getHashRangeTable();

  Table table = new Table();
  Map<String, String> parameters = Maps.newHashMap();
  parameters.put(DynamoDBConstants.DYNAMODB_COLUMN_MAPPING, "col1:dynamo_col1$," +
      "col2:dynamo_col2#,hashKey:hashKey");
  table.setParameters(parameters);
  StorageDescriptor sd = new StorageDescriptor();
  List<FieldSchema> cols = Lists.newArrayList();
  cols.add(new FieldSchema("col1", "string", ""));
  cols.add(new FieldSchema("col2", "bigint", ""));
  cols.add(new FieldSchema("hashKey", "string", ""));
  sd.setCols(cols);
  table.setSd(sd);
  storageHandler.checkTableSchemaMapping(description, table);
}
 
Developer: awslabs, Project: emr-dynamodb-connector, Lines: 19, Source file: DynamoDBStorageHandlerTest.java


Example 15: testCheckTableSchemaTypeInvalidType

import org.apache.hadoop.hive.metastore.api.StorageDescriptor; // import the required package/class
@Test
public void testCheckTableSchemaTypeInvalidType() throws MetaException {
  TableDescription description = getHashRangeTable();

  Table table = new Table();
  Map<String, String> parameters = Maps.newHashMap();
  parameters.put(DynamoDBConstants.DYNAMODB_COLUMN_MAPPING, "col1:dynamo_col1$," +
      "col2:dynamo_col2#,hashKey:hashKey");
  table.setParameters(parameters);
  StorageDescriptor sd = new StorageDescriptor();
  List<FieldSchema> cols = Lists.newArrayList();
  cols.add(new FieldSchema("col1", "string", ""));
  cols.add(new FieldSchema("col2", "tinyint", ""));
  cols.add(new FieldSchema("hashKey", "string", ""));
  sd.setCols(cols);
  table.setSd(sd);

  exceptionRule.expect(MetaException.class);
  exceptionRule.expectMessage("The hive type tinyint is not supported in DynamoDB");
  storageHandler.checkTableSchemaType(description, table);
}
 
Developer: awslabs, Project: emr-dynamodb-connector, Lines: 22, Source file: DynamoDBStorageHandlerTest.java


Example 16: testCheckTableSchemaTypeInvalidHashKeyType

import org.apache.hadoop.hive.metastore.api.StorageDescriptor; // import the required package/class
@Test
public void testCheckTableSchemaTypeInvalidHashKeyType() throws MetaException {
  TableDescription description = getHashRangeTable();

  Table table = new Table();
  Map<String, String> parameters = Maps.newHashMap();
  parameters.put(DynamoDBConstants.DYNAMODB_COLUMN_MAPPING, "col1:dynamo_col1$," +
      "col2:dynamo_col2#,hashKey:hashKey");
  table.setParameters(parameters);
  StorageDescriptor sd = new StorageDescriptor();
  List<FieldSchema> cols = Lists.newArrayList();
  cols.add(new FieldSchema("col1", "string", ""));
  cols.add(new FieldSchema("col2", "bigint", ""));
  cols.add(new FieldSchema("hashKey", "map<string,string>", ""));
  sd.setCols(cols);
  table.setSd(sd);

  exceptionRule.expect(MetaException.class);
  exceptionRule.expectMessage("The key element hashKey does not match type. DynamoDB Type: S " +
      "Hive type: " + "map<string,string>");
  storageHandler.checkTableSchemaType(description, table);
}
 
Developer: awslabs, Project: emr-dynamodb-connector, Lines: 23, Source file: DynamoDBStorageHandlerTest.java


Example 17: testCheckTableSchemaTypeValid

import org.apache.hadoop.hive.metastore.api.StorageDescriptor; // import the required package/class
@Test
public void testCheckTableSchemaTypeValid() throws MetaException {
  TableDescription description = getHashRangeTable();

  Table table = new Table();
  Map<String, String> parameters = Maps.newHashMap();
  parameters.put(DynamoDBConstants.DYNAMODB_COLUMN_MAPPING, "col1:dynamo_col1$," +
      "col2:dynamo_col2#,hashKey:hashKey");
  table.setParameters(parameters);
  StorageDescriptor sd = new StorageDescriptor();
  List<FieldSchema> cols = Lists.newArrayList();
  cols.add(new FieldSchema("col1", "string", ""));
  cols.add(new FieldSchema("col2", "bigint", ""));
  cols.add(new FieldSchema("hashKey", "string", ""));
  sd.setCols(cols);
  table.setSd(sd);
  // This check is expected to pass for the given input
  storageHandler.checkTableSchemaType(description, table);
}
 
Developer: awslabs, Project: emr-dynamodb-connector, Lines: 20, Source file: DynamoDBStorageHandlerTest.java


Example 18: getInputFormatFromSD

import org.apache.hadoop.hive.metastore.api.StorageDescriptor; // import the required package/class
/**
 * Get the input format from the given {@link StorageDescriptor}.
 * @param properties table or partition properties to apply to the job configuration
 * @param hiveReadEntry read entry providing the Hive table being scanned
 * @param sd storage descriptor of the table or partition being read
 * @param hiveConf Hive configuration used to build the {@link JobConf}
 * @return {@link InputFormat} class, or null if a failure occurred; failures are logged as warnings.
 */
private Class<? extends InputFormat<?, ?>> getInputFormatFromSD(final Properties properties,
    final HiveReadEntry hiveReadEntry, final StorageDescriptor sd, final HiveConf hiveConf) {
  final Table hiveTable = hiveReadEntry.getTable();
  try {
    final String inputFormatName = sd.getInputFormat();
    if (!Strings.isNullOrEmpty(inputFormatName)) {
      return (Class<? extends InputFormat<?, ?>>) Class.forName(inputFormatName);
    }

    final JobConf job = new JobConf(hiveConf);
    HiveUtilities.addConfToJob(job, properties);
    return HiveUtilities.getInputFormatClass(job, sd, hiveTable);
  } catch (final Exception e) {
    logger.warn("Failed to get InputFormat class from Hive table '{}.{}'. StorageDescriptor [{}]",
        hiveTable.getDbName(), hiveTable.getTableName(), sd.toString(), e);
    return null;
  }
}
 
Developer: axbaretto, Project: drill, Lines: 26, Source file: ConvertHiveParquetScanToDrillParquetScan.java


Example 19: getInputFormatClass

import org.apache.hadoop.hive.metastore.api.StorageDescriptor; // import the required package/class
/**
 * Utility method which gets the table or partition {@link InputFormat} class. It first
 * tries to get the class name from the given StorageDescriptor object; if that is not set, it tries the
 * StorageHandler class set in the table properties. If neither is found, it throws an exception.
 * @param job {@link JobConf} instance, needed in case the table is a StorageHandler-based table.
 * @param sd {@link StorageDescriptor} instance of the partition currently being read, or of the table itself for non-partitioned tables.
 * @param table Table object
 * @throws Exception if the InputFormat class cannot be determined or loaded
 */
public static Class<? extends InputFormat<?, ?>> getInputFormatClass(final JobConf job, final StorageDescriptor sd,
    final Table table) throws Exception {
  final String inputFormatName = sd.getInputFormat();
  if (Strings.isNullOrEmpty(inputFormatName)) {
    final String storageHandlerClass = table.getParameters().get(META_TABLE_STORAGE);
    if (Strings.isNullOrEmpty(storageHandlerClass)) {
      throw new ExecutionSetupException("Unable to get Hive table InputFormat class. There is neither " +
          "InputFormat class explicitly specified nor StorageHandler class");
    }
    final HiveStorageHandler storageHandler = HiveUtils.getStorageHandler(job, storageHandlerClass);
    return (Class<? extends InputFormat<?, ?>>) storageHandler.getInputFormatClass();
  } else {
    return (Class<? extends InputFormat<?, ?>>) Class.forName(inputFormatName);
  }
}
 
Developer: axbaretto, Project: drill, Lines: 25, Source file: HiveUtilities.java


Example 20: HiveTableWithColumnCache

import org.apache.hadoop.hive.metastore.api.StorageDescriptor; // import the required package/class
public HiveTableWithColumnCache(
  String tableName,
  String dbName,
  String owner,
  int createTime,
  int lastAccessTime,
  int retention,
  StorageDescriptor sd,
  List<FieldSchema> partitionKeys,
  Map<String,String> parameters,
  String viewOriginalText,
  String viewExpandedText,
  String tableType,
  ColumnListsCache columnListsCache) {
  super(tableName, dbName, owner, createTime, lastAccessTime, retention, sd,
    partitionKeys, parameters, viewOriginalText, viewExpandedText, tableType);
  this.columnListsCache = columnListsCache;
}
 
Developer: axbaretto, Project: drill, Lines: 19, Source file: HiveTableWithColumnCache.java



Note: The org.apache.hadoop.hive.metastore.api.StorageDescriptor examples in this article were collected from source code and documentation hosted on platforms such as GitHub and MSDocs. The snippets are taken from open-source projects contributed by their respective developers; copyright remains with the original authors, and any distribution or use must follow the corresponding project's license. Do not reproduce without permission.

