Java HiveMetaStore Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.hive.metastore.HiveMetaStore. If you are wondering what the HiveMetaStore class is for, how to use it, or are looking for concrete usage examples, the curated code samples below may help.



The HiveMetaStore class belongs to the org.apache.hadoop.hive.metastore package. Ten code examples of the HiveMetaStore class are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Java code examples.
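Before the individual examples, here is a minimal, self-contained sketch of the pattern most of them share: HiveMetaStore.startMetaStore() blocks the calling thread, so it is usually launched on a background thread, after which a HiveMetaStoreClient can connect over Thrift. The fixed port, localhost URI, and sleep-based wait below are simplifying assumptions for illustration; the examples that follow show more robust readiness checks.

import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.metastore.HiveMetaStore;
import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;
import org.apache.hadoop.hive.shims.ShimLoader;

public class EmbeddedMetastoreSketch {
  public static void main(String[] args) throws Exception {
    final int port = 9083; // assumed free port for the Thrift service
    final HiveConf conf = new HiveConf();
    conf.setVar(HiveConf.ConfVars.METASTOREURIS, "thrift://localhost:" + port);

    // startMetaStore() blocks, so run it on a background thread.
    new Thread(() -> {
      try {
        HiveMetaStore.startMetaStore(port, ShimLoader.getHadoopThriftAuthBridge(), conf);
      } catch (Throwable t) {
        t.printStackTrace();
      }
    }).start();

    // Crude wait for the server to come up; examples 3 and 9 below show
    // lock/condition and port-polling alternatives.
    Thread.sleep(10000L);

    // Connect a Thrift client and issue a simple call.
    HiveMetaStoreClient client = new HiveMetaStoreClient(conf);
    System.out.println(client.getAllDatabases());
    client.close();
  }
}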

Example 1: insertThriftRenamePartitionLogEntry

import org.apache.hadoop.hive.metastore.HiveMetaStore; // import the required package/class
/**
 * Insert a thrift audit log entry that represents renaming a partition.
 *
 * @param hmsHandler the HMSHandler for the event
 * @param oldPartition the partition before the rename
 * @param newPartition the partition after the rename
 * @param hiveConf Hive configuration
 * @throws Exception if there's an error inserting into the audit log
 */
public static void insertThriftRenamePartitionLogEntry(
    HiveMetaStore.HMSHandler hmsHandler,
    Partition oldPartition,
    Partition newPartition,
    HiveConf hiveConf) throws Exception {
  final MetastoreAuditLogListener metastoreAuditLogListener =
      new MetastoreAuditLogListener(hiveConf);

  AlterPartitionEvent event = new AlterPartitionEvent(
      oldPartition,
      newPartition,
      true,
      hmsHandler
  );

  metastoreAuditLogListener.onAlterPartition(event);
}
 
Developer: airbnb, Project: reair, Lines: 27, Source: AuditLogHookUtils.java


Example 2: init

import org.apache.hadoop.hive.metastore.HiveMetaStore; // import the required package/class
@Override
public void init() throws InitUnitException {
    try {
        hdfsUnit.getFileSystem().mkdirs(new Path(HIVE_HOME));
        hdfsUnit.getFileSystem().setOwner(new Path(HIVE_HOME), "hive", "hive");
    } catch (IOException e) {
        throw new InitUnitException("Failed to create hive home directory: " + HIVE_HOME, e);
    }
    metastorePort = PortProvider.nextPort();
    final HiveConf hiveConf = gatherConfigs();
    new Thread(new Runnable() {
        @Override
        public void run() {
            try {
                //TODO: remove static call
                HiveMetaStore.startMetaStore(metastorePort, null, hiveConf);
            } catch (Throwable throwable) {
                throwable.printStackTrace();
            }
        }
    }).start();
    hiveServer = new HiveServer2();
    hiveServer.init(hiveConf);
    hiveServer.start();
    jdbcUrl = String.format("jdbc:hive2://%s:%s/default", HIVE_HOST, port);
}
 
Developer: intropro, Project: prairie, Lines: 27, Source: Hive2Unit.java


Example 3: startThrift

import org.apache.hadoop.hive.metastore.HiveMetaStore; // import the required package/class
private void startThrift() throws Exception {
  final Lock startLock = new ReentrantLock();
  final Condition startCondition = startLock.newCondition();
  final AtomicBoolean startedServing = new AtomicBoolean();
  try (ServerSocket socket = new ServerSocket(0)) {
    thriftPort = socket.getLocalPort();
  }
  conf.setVar(ConfVars.METASTOREURIS, getThriftConnectionUri());
  final HiveConf hiveConf = new HiveConf(conf, HiveMetaStoreClient.class);
  thriftServer.execute(new Runnable() {
    @Override
    public void run() {
      try {
        HadoopThriftAuthBridge bridge = new HadoopThriftAuthBridge23();
        HiveMetaStore.startMetaStore(thriftPort, bridge, hiveConf, startLock, startCondition, startedServing);
      } catch (Throwable e) {
        LOG.error("Unable to start a Thrift server for Hive Metastore", e);
      }
    }
  });
  int i = 0;
  while (i++ < 3) {
    startLock.lock();
    try {
      if (startCondition.await(1, TimeUnit.MINUTES)) {
        break;
      }
    } finally {
      startLock.unlock();
    }
    if (i == 3) {
      throw new RuntimeException("Maximum number of tries reached whilst waiting for Thrift server to be ready");
    }
  }
}
 
Developer: HotelsDotCom, Project: beeju, Lines: 36, Source: ThriftHiveMetaStoreJUnitRule.java


Example 4: simulatedRenamePartition

import org.apache.hadoop.hive.metastore.HiveMetaStore; // import the required package/class
private void simulatedRenamePartition(String dbName,
    String tableName,
    String oldPartitionName,
    List<String> newPartitionValues) throws Exception {
  Partition oldPartition = srcMetastore.getPartition(dbName, tableName, oldPartitionName);
  Partition newPartition = new Partition(oldPartition);
  newPartition.setValues(newPartitionValues);

  HiveConf hiveConf = AuditLogHookUtils.getMetastoreHiveConf(
      embeddedMySqlDb,
      AUDIT_LOG_DB_NAME,
      AUDIT_LOG_TABLE_NAME,
      AUDIT_LOG_OBJECTS_TABLE_NAME
  );

  HiveMetaStore.HMSHandler handler = Mockito.mock(HiveMetaStore.HMSHandler.class);
  Mockito.when(
      handler.get_table(dbName, tableName)
  ).thenReturn(srcMetastore.getTable(dbName, tableName));

  AuditLogHookUtils.insertThriftRenamePartitionLogEntry(
      handler,
      oldPartition,
      newPartition,
      hiveConf
  );
}
 
Developer: airbnb, Project: reair, Lines: 28, Source: ReplicationServerTest.java


Example 5: run

import org.apache.hadoop.hive.metastore.HiveMetaStore; // import the required package/class
@Override
public void run() {
    try {
        HiveMetaStore.startMetaStore(hiveMetastorePort, 
                new HadoopThriftAuthBridge(), 
                hiveConf);
    } catch (Throwable t) {
        t.printStackTrace();
    }
}
 
Developer: sakserv, Project: hadoop-mini-clusters, Lines: 11, Source: HiveLocalMetaStore.java


Example 6: start

import org.apache.hadoop.hive.metastore.HiveMetaStore; // import the required package/class
@Override
public void start() throws IOException {
  final HiveConf serverConf = new HiveConf(new Configuration(), this.getClass());
  serverConf.set("hive.metastore.local", "false");
  serverConf.set(HiveConf.ConfVars.METASTORECONNECTURLKEY.varname, "jdbc:derby:target/metastore_db;create=true");
  //serverConf.set(HiveConf.ConfVars.METASTORE_EVENT_LISTENERS.varname, NotificationListener.class.getName());
  File derbyLogFile = new File("target/derby.log");
  derbyLogFile.createNewFile();
  setSystemProperty("derby.stream.error.file", derbyLogFile.getPath());
  serverThread = new Thread(new Runnable() {
    @Override
    public void run() {
      try {
        HiveMetaStore.startMetaStore(9083, ShimLoader.getHadoopThriftAuthBridge(),
            serverConf);
        //LOG.info("Started metastore server on port " + msPort);
      }
      catch (Throwable e) {
        //LOG.error("Metastore Thrift Server threw an exception...", e);
      }
    }
  });
  serverThread.setDaemon(true);
  serverThread.start();
  try {
    Thread.sleep(10000L);
  } catch (InterruptedException e) {
    // do nothing
  }
}
 
Developer: kite-sdk, Project: kite-examples-integration-tests, Lines: 31, Source: HiveMetastoreService.java


Example 7: startMetastore

import org.apache.hadoop.hive.metastore.HiveMetaStore; // import the required package/class
private void startMetastore() throws Exception {
  Callable<Void> metastoreService = new Callable<Void>() {
    public Void call() throws Exception {
      try {
        HiveMetaStore.startMetaStore(getMetastorePort(conf),
            ShimLoader.getHadoopThriftAuthBridge(), conf);
      } catch (Throwable e) {
        throw new Exception("Error starting metastore", e);
      }
      return null;
    }
  };
  metaStoreExecutor.submit(metastoreService);
}
 
Developer: apache, Project: incubator-sentry, Lines: 15, Source: InternalMetastoreServer.java


Example 8: start

import org.apache.hadoop.hive.metastore.HiveMetaStore; // import the required package/class
@Override
public void start() {
  // Fix for ACCESS-148. Resets a static field
  // so the default database is created even
  // though it has been created before in this JVM
  Reflection.staticField("createDefaultDB")
  .ofType(boolean.class)
  .in(HiveMetaStore.HMSHandler.class)
  .set(false);
}
 
Developer: apache, Project: incubator-sentry, Lines: 11, Source: EmbeddedHiveServer.java
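For comparison, the same reset can be written with plain java.lang.reflect instead of the fluent Reflection helper used above. This is a minimal sketch that assumes, as the example implies, that HMSHandler keeps a static boolean field named createDefaultDB in the Hive version in use.

import java.lang.reflect.Field;

import org.apache.hadoop.hive.metastore.HiveMetaStore;

public class HmsHandlerReset {
  // Resets HMSHandler's static createDefaultDB flag so a fresh embedded
  // metastore in the same JVM will recreate the default database.
  public static void resetCreateDefaultDB() throws ReflectiveOperationException {
    Field field = HiveMetaStore.HMSHandler.class.getDeclaredField("createDefaultDB");
    field.setAccessible(true);
    field.setBoolean(null, false); // static field, so no instance is needed
  }
}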


Example 9: setUpClass

import org.apache.hadoop.hive.metastore.HiveMetaStore; // import the required package/class
/**
 * Start all required mini clusters.
 */
@BeforeClass
public static void setUpClass() throws Exception {
  // Conf dir
  if (!new File(confDir).mkdirs()) {
    fail("Failed to create config directories.");
  }

  // HDFS
  File minidfsDir = new File("target/minidfs").getAbsoluteFile();
  if (!minidfsDir.exists()) {
    Assert.assertTrue(minidfsDir.mkdirs());
  }
  Set<PosixFilePermission> set = new HashSet<>();
  set.add(PosixFilePermission.OWNER_EXECUTE);
  set.add(PosixFilePermission.OWNER_READ);
  set.add(PosixFilePermission.OWNER_WRITE);
  set.add(PosixFilePermission.OTHERS_READ);
  java.nio.file.Files.setPosixFilePermissions(minidfsDir.toPath(), set);
  System.setProperty(MiniDFSCluster.PROP_TEST_BUILD_DATA, minidfsDir.getPath());
  final Configuration conf = new HdfsConfiguration();
  conf.set("hadoop.proxyuser." + System.getProperty("user.name") + ".hosts", "*");
  conf.set("hadoop.proxyuser." + System.getProperty("user.name") + ".groups", "*");
  miniDFS = new MiniDFSCluster.Builder(conf).build();
  miniDFS.getFileSystem().setPermission(new Path("/"), FsPermission.createImmutable((short)0777));
  miniMR = MiniMRClientClusterFactory.create(BaseHiveIT.class, 1, conf);
  writeConfiguration(miniMR.getConfig(), confDir + "/core-site.xml");
  writeConfiguration(miniMR.getConfig(), confDir + "/hdfs-site.xml");
  writeConfiguration(miniMR.getConfig(), confDir + "/mapred-site.xml");
  writeConfiguration(miniMR.getConfig(), confDir + "/yarn-site.xml");

  // Configuration for both HMS and HS2
  METASTORE_PORT = NetworkUtils.getRandomPort();
  HIVE_SERVER_PORT = NetworkUtils.getRandomPort();
  final HiveConf hiveConf = new HiveConf(miniDFS.getConfiguration(0), HiveConf.class);
  hiveConf.set(HiveConf.ConfVars.METASTORECONNECTURLKEY.varname, "jdbc:derby:;databaseName=target/metastore_db;create=true");
  hiveConf.set(HiveConf.ConfVars.METASTOREURIS.varname, Utils.format("thrift://{}:{}", HOSTNAME, METASTORE_PORT));
  hiveConf.set(HiveConf.ConfVars.HIVE_SERVER2_THRIFT_BIND_HOST.varname, "localhost");
  hiveConf.set("org.jpox.autoCreateSchema", "true");
  hiveConf.set("datanucleus.schema.autoCreateTables", "true");
  hiveConf.set("hive.metastore.schema.verification", "false");
  hiveConf.setInt(HiveConf.ConfVars.HIVE_SERVER2_THRIFT_PORT.varname, HIVE_SERVER_PORT);

  // Hive metastore
  Callable<Void> metastoreService = () -> {
    try {
      HiveMetaStore.startMetaStore(METASTORE_PORT, ShimLoader.getHadoopThriftAuthBridge(), hiveConf);
      while(true);
    } catch (Throwable e) {
      throw new Exception("Error starting metastore", e);
    }
  };
  hiveMetastoreExecutor.submit(metastoreService);
  NetworkUtils.waitForStartUp(HOSTNAME, METASTORE_PORT, MINICLUSTER_BOOT_RETRY, MINICLUSTER_BOOT_SLEEP);

  // HiveServer 2
  hiveServer2 = new HiveServer2();
  hiveServer2.init(hiveConf);
  hiveServer2.start();
  writeConfiguration(hiveServer2.getHiveConf(), confDir + "/hive-site.xml");
  NetworkUtils.waitForStartUp(HOSTNAME, HIVE_SERVER_PORT, MINICLUSTER_BOOT_RETRY, MINICLUSTER_BOOT_SLEEP);

  // JDBC Connection to Hive
  Class.forName(HIVE_JDBC_DRIVER);
  hiveConnection = HiveMetastoreUtil.getHiveConnection(
    getHiveJdbcUrl(),
    HadoopSecurityUtil.getLoginUser(conf),
    Collections.emptyList()
  );

  // And finally we're initialized
  isHiveInitialized = true;
}
 
Developer: streamsets, Project: datacollector, Lines: 76, Source: BaseHiveIT.java


Example 10: MetacatHMSHandler

import org.apache.hadoop.hive.metastore.HiveMetaStore; // import the required package/class
/**
 * Constructor.
 *
 * @param name client name
 * @throws MetaException exception
 */
public MetacatHMSHandler(final String name) throws MetaException {
    this(name, new HiveConf(HiveMetaStore.HMSHandler.class));
}
 
Developer: Netflix, Project: metacat, Lines: 10, Source: MetacatHMSHandler.java



Note: The org.apache.hadoop.hive.metastore.HiveMetaStore examples in this article are collected from source code and documentation platforms such as GitHub and MSDocs. The code snippets come from open-source projects contributed by various developers; copyright remains with the original authors, and distribution and use are subject to each project's license. Please do not republish without permission.

