
Java Hive Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.hive.ql.metadata.Hive. If you are wondering what the Hive class is for and how to use it, the curated examples below may help.

The Hive class belongs to the org.apache.hadoop.hive.ql.metadata package. Twenty code examples are shown below, sorted by popularity.
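Nearly every example below follows the same pattern: obtain a thread-local Hive client via `Hive.get(conf)`, issue metastore operations on it, and release it with `Hive.closeCurrent()`. A minimal sketch of that lifecycle (it assumes the Hive QL jars are on the classpath and a metastore is reachable, so it is illustrative rather than standalone-runnable):

```java
import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.ql.metadata.Hive;
import org.apache.hadoop.hive.ql.metadata.HiveException;

public class HiveClientSketch {
  public static void main(String[] args) {
    HiveConf conf = new HiveConf(); // picks up hive-site.xml from the classpath
    try {
      // Hive.get(conf) returns a thread-local client bound to this configuration
      Hive client = Hive.get(conf);
      // Any cheap metastore call doubles as a connectivity check
      System.out.println("Databases: " + client.getAllDatabases());
    } catch (HiveException e) {
      System.err.println("Could not connect to the metastore: " + e.getMessage());
    } finally {
      Hive.closeCurrent(); // release the thread-local client
    }
  }
}
```

Because the client is thread-local, the examples call `Hive.get(...)` freely without passing an instance around; `Hive.closeCurrent()` in a `finally` block (as in Example 3) is the matching cleanup.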

Example 1: getHealthStatus

import org.apache.hadoop.hive.ql.metadata.Hive; // import the required package/class
/**
 * {@inheritDoc}
 */
@Override
public HealthStatus getHealthStatus() {
  boolean isHealthy = true;
  StringBuilder details = new StringBuilder();

  try {
    /** Try to issue command on hive **/
    Hive.get(LensServerConf.getHiveConf()).getAllDatabases();
  } catch (HiveException e) {
    isHealthy = false;
    details.append("Could not connect to Hive.");
    log.error("Could not connect to Hive.", e);
  }

  /** Check if service is up **/
  if (!this.getServiceState().equals(STATE.STARTED)) {
    isHealthy = false;
    details.append("Cube metastore service is down");
    log.error("Cube metastore service is down");
  }

  return isHealthy
    ? new HealthStatus(true, "Cube metastore service is healthy.")
    : new HealthStatus(false, details.toString());
}
 
Developer: apache | Project: lens | Lines: 29 | Source: CubeMetastoreServiceImpl.java


Example 2: createHiveTable

import org.apache.hadoop.hive.ql.metadata.Hive; // import the required package/class
/**
 * Creates the hive table.
 *
 * @param tableName  the table name
 * @param parameters additional table parameters to set (may be null or empty)
 * @throws HiveException the hive exception
 */
public static void createHiveTable(String tableName, Map<String, String> parameters) throws HiveException {
  List<FieldSchema> columns = new ArrayList<FieldSchema>();
  columns.add(new FieldSchema("col1", "string", ""));
  List<FieldSchema> partCols = new ArrayList<FieldSchema>();
  partCols.add(new FieldSchema("pcol1", "string", ""));
  Map<String, String> params = new HashMap<String, String>();
  params.put("test.hive.table.prop", "tvalue");
  if (null != parameters && !parameters.isEmpty()) {
    params.putAll(parameters);
  }
  Table tbl = Hive.get().newTable(tableName);
  tbl.setTableType(TableType.MANAGED_TABLE);
  tbl.getTTable().getSd().setCols(columns);
  tbl.setPartCols(partCols);
  tbl.getTTable().getParameters().putAll(params);
  Hive.get().createTable(tbl);
}
 
Developer: apache | Project: lens | Lines: 24 | Source: LensServerTestUtil.java


Example 3: testServicesStartOnMetastoreDown

import org.apache.hadoop.hive.ql.metadata.Hive; // import the required package/class
public void testServicesStartOnMetastoreDown() throws Exception {
  LensServices services = new LensServices(LensServices.LENS_SERVICES_NAME, logSegregationContext);
  HiveConf hiveConf = new HiveConf();

  // Set metastore uri to an invalid location
  hiveConf.setVar(HiveConf.ConfVars.METASTOREURIS, "thrift://localhost:49153");

  try {
    services.init(hiveConf);
    Assert.fail("Expected init to fail because of invalid metastore config");
  } catch (Throwable th) {
    Assert.assertTrue(th.getMessage().contains(
      "Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient"));
  } finally {
    try {
      services.stop();
    } catch (Exception exc) {
      log.error("Error stopping services", exc);
      Assert.fail("services.stop() got unexpected exception " + exc);
    }
    Hive.closeCurrent();
  }
}
 
Developer: apache | Project: lens | Lines: 24 | Source: TestStartupOnMetastoreDown.java


Example 4: createTable

import org.apache.hadoop.hive.ql.metadata.Hive; // import the required package/class
/**
 * Creates the table.
 *
 * @param conf   the hive conf
 * @param db     the db
 * @param table  the table
 * @param udb    the udb
 * @param utable the utable
 * @param setCustomSerde whether to set custom serde or not
 * @param columnMapping columnmapping for the table
 *
 * @throws Exception the exception
 */
void createTable(HiveConf conf, String db, String table, String udb, String utable, boolean setCustomSerde,
  Map<String, String> columnMapping) throws Exception {
  Table tbl1 = new Table(db, table);
  if (setCustomSerde) {
    tbl1.setSerializationLib("DatabaseJarSerde");
  }
  if (StringUtils.isNotBlank(udb)) {
    tbl1.setProperty(LensConfConstants.NATIVE_DB_NAME, udb);
  }
  if (StringUtils.isNotBlank(utable)) {
    tbl1.setProperty(LensConfConstants.NATIVE_TABLE_NAME, utable);
  }
  if (columnMapping != null && !columnMapping.isEmpty()) {
    tbl1.setProperty(LensConfConstants.NATIVE_TABLE_COLUMN_MAPPING, StringUtils.join(columnMapping.entrySet(), ","));
    log.info("columnMapping property:{}", tbl1.getProperty(LensConfConstants.NATIVE_TABLE_COLUMN_MAPPING));
  }

  List<FieldSchema> columns = new ArrayList<FieldSchema>();
  columns.add(new FieldSchema("id", "int", "col1"));
  columns.add(new FieldSchema("name", "string", "col2"));
  tbl1.setFields(columns);

  Hive.get(conf).createTable(tbl1);
  System.out.println("Created table " + table);
}
 
Developer: apache | Project: lens | Lines: 38 | Source: TestColumnarSQLRewriter.java


Example 5: createSources

import org.apache.hadoop.hive.ql.metadata.Hive; // import the required package/class
public void createSources(HiveConf conf, String dbName) throws Exception {
  try {
    Database database = new Database();
    database.setName(dbName);
    Hive.get(conf).dropDatabase(dbName, true, true, true);
    Hive.get(conf).createDatabase(database);
    SessionState.get().setCurrentDatabase(dbName);
    CubeMetastoreClient client = CubeMetastoreClient.getInstance(conf);
    createFromXML(client);
    assertTestFactTimelineClass(client);
    createCubeCheapFactPartitions(client);
    // commenting this as the week date format throws IllegalPatternException
    // createCubeFactWeekly(client);
    createTestFact2Partitions(client);
    createTestFact2RawPartitions(client);
    createBaseCubeFactPartitions(client);
    createSummaryPartitions(client);
    // dump(client);
  } catch (Exception exc) {
    log.error("Exception while creating sources.", exc);
    throw exc;
  }
}
 
Developer: apache | Project: lens | Lines: 24 | Source: CubeTestSetup.java


Example 6: addTableToTablesQueried

import org.apache.hadoop.hive.ql.metadata.Hive; // import the required package/class
private void addTableToTablesQueried(String table, Hive metastore) throws HiveException {
  if (!tablesQueried.contains(table)) {
    Table tbl = metastore.getTable(table, false);
    if (tbl == null) {
      // table not found, possible case if query is create table
      log.info("Table {} not found while extracting plan details", table);
      return;
    }
    tablesQueried.add(table);
    String costStr = tbl.getParameters().get(LensConfConstants.STORAGE_COST);

    Double weight = 1d;
    if (costStr != null) {
      weight = Double.parseDouble(costStr);
    }
    tableWeights.put(table, weight);
  }
}
 
Developer: apache | Project: lens | Lines: 19 | Source: HiveQueryPlan.java


Example 7: beforeTest

import org.apache.hadoop.hive.ql.metadata.Hive; // import the required package/class
/**
 * Before test.
 *
 * @throws Exception the exception
 */
@BeforeTest
public void beforeTest() throws Exception {
  // Check if hadoop property set
  System.out.println("###HADOOP_PATH " + System.getProperty("hadoop.bin.path"));
  assertNotNull(System.getProperty("hadoop.bin.path"));
  createDriver();
  ss = new SessionState(hiveConf, "testuser");
  SessionState.start(ss);
  Hive client = Hive.get(hiveConf);
  Database database = new Database();
  database.setName(dataBase);
  client.createDatabase(database, true);
  SessionState.get().setCurrentDatabase(dataBase);
  sessionid = SessionState.get().getSessionId();
  driverConf.setBoolean(LensConfConstants.QUERY_ADD_INSERT_OVEWRITE, false);
  QueryContext context = createContext("USE " + dataBase, this.queryConf);
  driver.execute(context);
  driverConf.setBoolean(LensConfConstants.QUERY_ADD_INSERT_OVEWRITE, true);
  driverConf.setBoolean(LensConfConstants.QUERY_PERSISTENT_RESULT_INDRIVER, true);
}
 
Developer: apache | Project: lens | Lines: 26 | Source: TestHiveDriver.java


Example 8: testTemptable

import org.apache.hadoop.hive.ql.metadata.Hive; // import the required package/class
/**
 * Test temptable.
 *
 * @throws Exception the exception
 */
@Test
public void testTemptable() throws Exception {
  int handleSize = getHandleSize();
  createTestTable("test_temp");
  queryConf.setBoolean(LensConfConstants.QUERY_PERSISTENT_RESULT_INDRIVER, false);
  Hive.get(hiveConf).dropTable("test_temp_output");
  String query = "CREATE TABLE test_temp_output AS SELECT ID FROM test_temp";
  QueryContext context = createContext(query, queryConf);
  LensResultSet resultSet = driver.execute(context);
  assertNull(resultSet);
  assertHandleSize(handleSize);

  // fetch results from temp table
  String select = "SELECT * FROM test_temp_output";
  context = createContext(select, queryConf);
  resultSet = driver.execute(context);
  assertHandleSize(handleSize);
  validateInMemoryResult(resultSet, "test_temp_output");
  assertHandleSize(handleSize);
}
 
Developer: apache | Project: lens | Lines: 26 | Source: TestHiveDriver.java


Example 9: setup

import org.apache.hadoop.hive.ql.metadata.Hive; // import the required package/class
@Before
public void setup() throws Exception {
  conf = new HiveConf();
  baseDir = Files.createTempDir();
  baseDir.setWritable(true, false);
  conf.setVar(HiveConf.ConfVars.SCRATCHDIR, baseDir.getAbsolutePath());
  SessionState.start(conf);
  conf.setVar(ConfVars.HIVE_AUTHORIZATION_TASK_FACTORY,
      SentryHiveAuthorizationTaskFactoryImpl.class.getName());

  db = Mockito.mock(Hive.class);
  table = new Table(DB, TABLE);
  partition = new Partition(table);
  context = new Context(conf);
  parseDriver = new ParseDriver();
  analyzer = new DDLSemanticAnalyzer(conf, db);
  SessionState.start(conf);
  Mockito.when(db.getTable(TABLE, false)).thenReturn(table);
  Mockito.when(db.getPartition(table, new HashMap<String, String>(), false))
      .thenReturn(partition);

  HadoopDefaultAuthenticator auth = new HadoopDefaultAuthenticator();
  auth.setConf(conf);
  currentUser = auth.getUserName();

}
 
Developer: apache | Project: incubator-sentry | Lines: 27 | Source: TestSentryHiveAuthorizationTaskFactory.java


Example 10: setupTables

import org.apache.hadoop.hive.ql.metadata.Hive; // import the required package/class
private List<Table> setupTables(Hive hiveClient, String databaseName, String... tableNames) throws HiveException {
  List<Table> tables = new ArrayList<>();
  when(hiveClient.getAllTables(databaseName)).thenReturn(Arrays.asList(tableNames));
  for (String tableName : tableNames) {
    Table testTable = createTestTable(databaseName, tableName);
    when(hiveClient.getTable(databaseName, tableName)).thenReturn(testTable);
    tables.add(testTable);
  }
  return tables;
}
 
Developer: apache | Project: incubator-atlas | Lines: 11 | Source: HiveMetaStoreBridgeTest.java


Example 11: initialize

import org.apache.hadoop.hive.ql.metadata.Hive; // import the required package/class
/**
 * Initialize.
 *
 * @param conf the conf
 */
public void initialize(HiveConf conf) {
  String temp = conf.get(LensConfConstants.STATISTICS_WAREHOUSE_KEY, LensConfConstants.DEFAULT_STATISTICS_WAREHOUSE);
  warehousePath = new Path(temp);
  database = conf.get(LensConfConstants.STATISTICS_DATABASE_KEY, LensConfConstants.DEFAULT_STATISTICS_DATABASE);
  this.conf = conf;
  try {
    client = Hive.get(conf);
  } catch (Exception e) {
    LOG.error("Unable to connect to hive metastore", e);
    throw new IllegalArgumentException("Unable to connect to hive metastore", e);
  }
}
 
Developer: apache | Project: lens | Lines: 18 | Source: StatisticsLogPartitionHandler.java


Example 12: setCurrentDatabase

import org.apache.hadoop.hive.ql.metadata.Hive; // import the required package/class
/**
 * Change the current database used by the CubeMetastoreClient
 *
 * @param database current database to set
 */
@Override
public void setCurrentDatabase(LensSessionHandle sessionid, String database) throws LensException {
  try (SessionContext ignored = new SessionContext(sessionid)) {
    if (!Hive.get(getSession(sessionid).getHiveConf()).databaseExists(database)) {
      throw new NotFoundException("Database " + database + " does not exist");
    }
    log.info("Set database " + database);
    getSession(sessionid).setCurrentDatabase(database);
  } catch (HiveException e) {
    throw new LensException(e);
  }
}
 
Developer: apache | Project: lens | Lines: 18 | Source: CubeMetastoreServiceImpl.java


Example 13: dropDatabase

import org.apache.hadoop.hive.ql.metadata.Hive; // import the required package/class
/**
 * Drop a database from cube metastore
 *
 * @param database database name
 * @param cascade  flag indicating if the tables in the database should be dropped as well
 */
@Override
public void dropDatabase(LensSessionHandle sessionid, String database, boolean cascade) throws LensException {
  try (SessionContext ignored = new SessionContext(sessionid)){
    Hive.get(getSession(sessionid).getHiveConf()).dropDatabase(database, false, true, cascade);
    log.info("Database dropped " + database + " cascade? " + cascade);
  } catch (HiveException | NoSuchObjectException e) {
    throw new LensException(e);
  }
}
 
Developer: apache | Project: lens | Lines: 16 | Source: CubeMetastoreServiceImpl.java


Example 14: createDatabase

import org.apache.hadoop.hive.ql.metadata.Hive; // import the required package/class
/**
 * Create a database in Hive metastore
 *
 * @param database database name
 * @param ignore   ignore if database already exists
 * @throws LensException
 */
@Override
public void createDatabase(LensSessionHandle sessionid, String database, boolean ignore) throws LensException {
  try (SessionContext ignored = new SessionContext(sessionid)){
    Database db = new Database();
    db.setName(database);
    Hive.get(getSession(sessionid).getHiveConf()).createDatabase(db, ignore);
  } catch (AlreadyExistsException | HiveException e) {
    throw new LensException(e);
  }
  log.info("Database created " + database);
}
 
Developer: apache | Project: lens | Lines: 19 | Source: CubeMetastoreServiceImpl.java


Example 15: getAllDatabases

import org.apache.hadoop.hive.ql.metadata.Hive; // import the required package/class
/**
 * @return get all database names
 */
@Override
public List<String> getAllDatabases(LensSessionHandle sessionid) throws LensException {
  try (SessionContext ignored = new SessionContext(sessionid)){
    return Hive.get(getSession(sessionid).getHiveConf()).getAllDatabases();
  } catch (HiveException e) {
    throw new LensException(e);
  }
}
 
Developer: apache | Project: lens | Lines: 12 | Source: CubeMetastoreServiceImpl.java


Example 16: getAllNativeTableNames

import org.apache.hadoop.hive.ql.metadata.Hive; // import the required package/class
@Override
public List<String> getAllNativeTableNames(LensSessionHandle sessionid,
  String dbOption, String dbName) throws LensException {
  try (SessionContext ignored = new SessionContext(sessionid)){
    if (!StringUtils.isBlank(dbName)) {
      if (!Hive.get(getSession(sessionid).getHiveConf()).databaseExists(dbName)) {
        throw new NotFoundException("Database " + dbName + " does not exist");
      }
    }
    if (StringUtils.isBlank(dbName)
      && (StringUtils.isBlank(dbOption)
      || dbOption.equalsIgnoreCase("current"))) {
      // use current db if no dbname/dboption is passed
      dbName = getSession(sessionid).getCurrentDatabase();
    }
    List<String> tables;
    if (!StringUtils.isBlank(dbName)) {
      tables = getNativeTablesFromDB(sessionid, dbName, false);
    } else {
      log.info("Getting tables from all dbs");
      tables = new ArrayList<>();
      for (String db : getAllDatabases(sessionid)) {
        tables.addAll(getNativeTablesFromDB(sessionid, db, true));
      }
    }
    return tables;
  } catch (HiveException e) {
    throw new LensException(e);
  }
}
 
Developer: apache | Project: lens | Lines: 31 | Source: CubeMetastoreServiceImpl.java


Example 17: tearDown

import org.apache.hadoop.hive.ql.metadata.Hive; // import the required package/class
@AfterClass
public void tearDown() throws Exception {
  Hive hive = Hive.get(conf);
  for (String db : testDatabases) {
    hive.dropDatabase(db, true, true);
  }
}
 
Developer: apache | Project: lens | Lines: 8 | Source: TestDatabaseResourceService.java


Example 18: clean

import org.apache.hadoop.hive.ql.metadata.Hive; // import the required package/class
/**
 * Clean.
 *
 * @throws HiveException the hive exception
 */
@AfterTest
public void clean() throws HiveException {
  try {
    Hive.get().dropTable("default.sales_fact");
    Hive.get().dropTable("default.time_dim");
    Hive.get().dropTable("default.item_dim");
    Hive.get().dropTable("default.branch_dim");
    Hive.get().dropTable("default.location_dim");
  } catch (HiveException e) {
    log.error("Encountered hive exception", e);
  }
}
 
Developer: apache | Project: lens | Lines: 18 | Source: TestColumnarSQLRewriter.java


Example 19: clean

import org.apache.hadoop.hive.ql.metadata.Hive; // import the required package/class
/**
 * Clean.
 *
 * @throws HiveException the hive exception
 */
@AfterTest
public void clean() throws HiveException {
  try {
    Hive.get().dropTable("default.sales_fact");
  } catch (HiveException e) {
    log.error("Encountered hive exception", e);
  }
}
 
Developer: apache | Project: lens | Lines: 14 | Source: TestDruidSQLRewriter.java


Example 20: createTable

import org.apache.hadoop.hive.ql.metadata.Hive; // import the required package/class
/**
 * Creates the table.
 *
 * @param conf           the hive conf
 * @param db             the db
 * @param table          the table
 * @param udb            the udb
 * @param utable         the utable
 * @param setCustomSerde whether to set custom serde or not
 * @param columnMapping  columnmapping for the table
 * @throws Exception the exception
 */
void createTable(
  HiveConf conf, String db, String table, String udb, String utable, boolean setCustomSerde,
  Map<String, String> columnMapping) throws Exception {
  Table tbl1 = new Table(db, table);

  if (StringUtils.isNotBlank(udb)) {
    tbl1.setProperty(LensConfConstants.NATIVE_DB_NAME, udb);
  }
  if (StringUtils.isNotBlank(utable)) {
    tbl1.setProperty(LensConfConstants.NATIVE_TABLE_NAME, utable);
  }
  if (columnMapping != null && !columnMapping.isEmpty()) {
    tbl1.setProperty(LensConfConstants.NATIVE_TABLE_COLUMN_MAPPING, StringUtils.join(columnMapping.entrySet(), ","));
    log.info("columnMapping property:{}", tbl1.getProperty(LensConfConstants.NATIVE_TABLE_COLUMN_MAPPING));
  }

  List<FieldSchema> columns = new ArrayList<FieldSchema>();
  columns.add(new FieldSchema("id", "int", "col1"));
  columns.add(new FieldSchema("name", "string", "col2"));
  columns.add(new FieldSchema("dollars_sold", "double", "col3"));
  columns.add(new FieldSchema("units_sold", "int", "col4"));

  tbl1.setFields(columns);

  Hive.get(conf).createTable(tbl1);
  System.out.println("Created table " + table);
}
 
Developer: apache | Project: lens | Lines: 39 | Source: TestDruidSQLRewriter.java



Note: The org.apache.hadoop.hive.ql.metadata.Hive examples in this article were collected from open-source projects hosted on GitHub and similar platforms. Copyright of each code snippet remains with its original authors; consult the corresponding project's license before redistributing or reusing the code.

