
Java ResultSerialization Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.hbase.mapreduce.ResultSerialization. If you have been wondering what the ResultSerialization class does, how to use it, or where to find concrete examples of it, the curated samples below should help.



The ResultSerialization class belongs to the org.apache.hadoop.hbase.mapreduce package. Nine code examples of the class are shown below, sorted by popularity by default. You can upvote the examples you find useful; your feedback helps the system recommend better Java code samples.
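All nine examples share one pattern: ResultSerialization is appended to Hadoop's io.serializations setting so that org.apache.hadoop.hbase.client.Result values can be serialized through MapReduce shuffles and SequenceFiles. A minimal sketch of that registration, assuming only a standard HBase client configuration (the helper class name is illustrative):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.ResultSerialization;

public class SerializationSetup {
  // Returns a Configuration in which Hadoop can (de)serialize HBase Result values.
  public static Configuration withResultSerialization() {
    Configuration conf = HBaseConfiguration.create();
    // Append to, rather than replace, any serializations already configured.
    conf.setStrings("io.serializations", conf.get("io.serializations"),
        ResultSerialization.class.getName());
    return conf;
  }
}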

Example 1: initTableReduceJob

import org.apache.hadoop.hbase.mapreduce.ResultSerialization; // import the dependent package/class
/**
 * Use this before submitting a TableReduce job. It will
 * appropriately set up the JobConf.
 *
 * @param table  The output table.
 * @param reducer  The reducer class to use.
 * @param job  The current job configuration to adjust.
 * @param partitioner  Partitioner to use. Pass <code>null</code> to use
 * default partitioner.
 * @param addDependencyJars upload HBase jars and jars for any of the configured
 *           job classes via the distributed cache (tmpjars).
 * @throws IOException When determining the region count fails.
 */
public static void initTableReduceJob(String table,
  Class<? extends TableReduce> reducer, JobConf job, Class partitioner,
  boolean addDependencyJars) throws IOException {
  job.setOutputFormat(TableOutputFormat.class);
  job.setReducerClass(reducer);
  job.set(TableOutputFormat.OUTPUT_TABLE, table);
  job.setOutputKeyClass(ImmutableBytesWritable.class);
  job.setOutputValueClass(Put.class);
  job.setStrings("io.serializations", job.get("io.serializations"),
      MutationSerialization.class.getName(), ResultSerialization.class.getName());
  if (partitioner == HRegionPartitioner.class) {
    job.setPartitionerClass(HRegionPartitioner.class);
    int regions =
      MetaTableAccessor.getRegionCount(HBaseConfiguration.create(job), TableName.valueOf(table));
    if (job.getNumReduceTasks() > regions) {
      job.setNumReduceTasks(regions);
    }
  } else if (partitioner != null) {
    job.setPartitionerClass(partitioner);
  }
  if (addDependencyJars) {
    addDependencyJars(job);
  }
  initCredentials(job);
}
 
Developer: fengchen8086 | Project: ditb | Lines: 39 | Source: TableMapReduceUtil.java
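A hedged usage sketch for the method above; MyTableReducer is a hypothetical reducer implementing TableReduce, and the table name is illustrative:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapred.HRegionPartitioner;
import org.apache.hadoop.hbase.mapred.TableMapReduceUtil;
import org.apache.hadoop.mapred.JobConf;

JobConf job = new JobConf(HBaseConfiguration.create());
job.setJobName("write-to-hbase");
// Passing HRegionPartitioner.class caps reduce tasks at the table's region count;
// pass null to keep the default partitioner.
TableMapReduceUtil.initTableReduceJob("output_table", MyTableReducer.class,
    job, HRegionPartitioner.class, true);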


Example 2: initTableReduceJob

import org.apache.hadoop.hbase.mapreduce.ResultSerialization; // import the dependent package/class
/**
 * Use this before submitting a TableReduce job. It will
 * appropriately set up the JobConf.
 *
 * @param table  The output table.
 * @param reducer  The reducer class to use.
 * @param job  The current job configuration to adjust.
 * @param partitioner  Partitioner to use. Pass <code>null</code> to use
 * default partitioner.
 * @param addDependencyJars upload HBase jars and jars for any of the configured
 *           job classes via the distributed cache (tmpjars).
 * @throws IOException When determining the region count fails.
 */
public static void initTableReduceJob(String table,
  Class<? extends TableReduce> reducer, JobConf job, Class partitioner,
  boolean addDependencyJars) throws IOException {
  job.setOutputFormat(TableOutputFormat.class);
  job.setReducerClass(reducer);
  job.set(TableOutputFormat.OUTPUT_TABLE, table);
  job.setOutputKeyClass(ImmutableBytesWritable.class);
  job.setOutputValueClass(Put.class);
  job.setStrings("io.serializations", job.get("io.serializations"),
      MutationSerialization.class.getName(), ResultSerialization.class.getName());
  if (partitioner == HRegionPartitioner.class) {
    job.setPartitionerClass(HRegionPartitioner.class);
    int regions = MetaReader.getRegionCount(HBaseConfiguration.create(job), table);
    if (job.getNumReduceTasks() > regions) {
      job.setNumReduceTasks(regions);
    }
  } else if (partitioner != null) {
    job.setPartitionerClass(partitioner);
  }
  if (addDependencyJars) {
    addDependencyJars(job);
  }
  initCredentials(job);
}
 
Developer: tenggyut | Project: HIndex | Lines: 38 | Source: TableMapReduceUtil.java


Example 3: export

import org.apache.hadoop.hbase.mapreduce.ResultSerialization; // import the dependent package/class
@Override
public void export(RpcController controller, ExportProtos.ExportRequest request,
        RpcCallback<ExportProtos.ExportResponse> done) {
  Region region = env.getRegion();
  Configuration conf = HBaseConfiguration.create(env.getConfiguration());
  conf.setStrings("io.serializations", conf.get("io.serializations"), ResultSerialization.class.getName());
  try {
    Scan scan = validateKey(region.getRegionInfo(), request);
    Token userToken = null;
    if (userProvider.isHadoopSecurityEnabled() && !request.hasFsToken()) {
      LOG.warn("Hadoop security is enable, but no found of user token");
    } else if (userProvider.isHadoopSecurityEnabled()) {
      userToken = new Token(request.getFsToken().getIdentifier().toByteArray(),
              request.getFsToken().getPassword().toByteArray(),
              new Text(request.getFsToken().getKind()),
              new Text(request.getFsToken().getService()));
    }
    ExportProtos.ExportResponse response = processData(region, conf, userProvider,
      scan, userToken, getWriterOptions(conf, region.getRegionInfo(), request));
    done.run(response);
  } catch (IOException e) {
    CoprocessorRpcUtils.setControllerException(controller, e);
    LOG.error(e.toString(), e);
  }
}
 
Developer: apache | Project: hbase | Lines: 26 | Source: Export.java


Example 4: initTableMapJob

import org.apache.hadoop.hbase.mapreduce.ResultSerialization; // import the dependent package/class
/**
 * Use this before submitting a TableMap job. It will
 * appropriately set up the JobConf.
 *
 * @param table  The table name to read from.
 * @param columns  The columns to scan.
 * @param mapper  The mapper class to use.
 * @param outputKeyClass  The class of the output key.
 * @param outputValueClass  The class of the output value.
 * @param job  The current job configuration to adjust.
 * @param addDependencyJars upload HBase jars and jars for any of the configured
 *           job classes via the distributed cache (tmpjars).
 */
public static void initTableMapJob(String table, String columns,
  Class<? extends TableMap> mapper,
  Class<?> outputKeyClass,
  Class<?> outputValueClass, JobConf job, boolean addDependencyJars,
  Class<? extends InputFormat> inputFormat) {

  job.setInputFormat(inputFormat);
  job.setMapOutputValueClass(outputValueClass);
  job.setMapOutputKeyClass(outputKeyClass);
  job.setMapperClass(mapper);
  job.setStrings("io.serializations", job.get("io.serializations"),
      MutationSerialization.class.getName(), ResultSerialization.class.getName());
  FileInputFormat.addInputPaths(job, table);
  job.set(TableInputFormat.COLUMN_LIST, columns);
  if (addDependencyJars) {
    try {
      addDependencyJars(job);
    } catch (IOException e) {
      e.printStackTrace();
    }
  }
  try {
    initCredentials(job);
  } catch (IOException ioe) {
    // just spit out the stack trace?  really?
    ioe.printStackTrace();
  }
}
 
Developer: fengchen8086 | Project: ditb | Lines: 42 | Source: TableMapReduceUtil.java
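A similar hedged sketch for initTableMapJob; MyTableMap is a hypothetical mapper implementing TableMap, and the table, column, and key/value types are illustrative:

import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapred.TableInputFormat;
import org.apache.hadoop.hbase.mapred.TableMapReduceUtil;
import org.apache.hadoop.mapred.JobConf;

JobConf job = new JobConf();
// Scan column family "cf", qualifier "q"; emit row key -> Result pairs.
// Result values only work here because the method registers ResultSerialization.
TableMapReduceUtil.initTableMapJob("input_table", "cf:q", MyTableMap.class,
    ImmutableBytesWritable.class, Result.class, job, true, TableInputFormat.class);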


Example 5: run

import org.apache.hadoop.hbase.mapreduce.ResultSerialization; // import the dependent package/class
@Override
public int run(String[] args) throws Exception {
  String[] otherArgs = new GenericOptionsParser(getConf(), args).getRemainingArgs();
  if (otherArgs.length < 2) {
    usage("Wrong number of arguments: " + otherArgs.length);
    return -1;
  }
  String inputVersionString = System.getProperty(ResultSerialization.IMPORT_FORMAT_VER);
  if (inputVersionString != null) {
    getConf().set(ResultSerialization.IMPORT_FORMAT_VER, inputVersionString);
  }
  Job job = createSubmittableJob(getConf(), otherArgs);
  return (job.waitForCompletion(true) ? 0 : 1);
}
 
Developer: dmmcerlean | Project: cloud-bigtable-client | Lines: 15 | Source: Import.java
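Since the version string is read from a JVM system property, it is normally supplied when launching the tool; a hedged driver sketch, where the 0.94 value and the ToolRunner wiring are illustrative:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.ResultSerialization;
import org.apache.hadoop.util.ToolRunner;

// Declare the on-disk format version of the export being imported,
// e.g. via -D on the command line or programmatically before run() executes.
System.setProperty(ResultSerialization.IMPORT_FORMAT_VER, "0.94");
int exitCode = ToolRunner.run(HBaseConfiguration.create(), new Import(), args);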


Example 6: initTableMapJob

import org.apache.hadoop.hbase.mapreduce.ResultSerialization; // import the dependent package/class
/**
 * Use this before submitting a TableMap job. It will
 * appropriately set up the JobConf.
 *
 * @param table  The table name to read from.
 * @param columns  The columns to scan.
 * @param mapper  The mapper class to use.
 * @param outputKeyClass  The class of the output key.
 * @param outputValueClass  The class of the output value.
 * @param job  The current job configuration to adjust.
 * @param addDependencyJars upload HBase jars and jars for any of the configured
 *           job classes via the distributed cache (tmpjars).
 */
public static void initTableMapJob(String table, String columns,
  Class<? extends TableMap> mapper,
  Class<?> outputKeyClass,
  Class<?> outputValueClass, JobConf job, boolean addDependencyJars) {

  job.setInputFormat(TableInputFormat.class);
  job.setMapOutputValueClass(outputValueClass);
  job.setMapOutputKeyClass(outputKeyClass);
  job.setMapperClass(mapper);
  job.setStrings("io.serializations", job.get("io.serializations"),
      MutationSerialization.class.getName(), ResultSerialization.class.getName());
  FileInputFormat.addInputPaths(job, table);
  job.set(TableInputFormat.COLUMN_LIST, columns);
  if (addDependencyJars) {
    try {
      addDependencyJars(job);
    } catch (IOException e) {
      e.printStackTrace();
    }
  }
  try {
    initCredentials(job);
  } catch (IOException ioe) {
    // just spit out the stack trace?  really?
    ioe.printStackTrace();
  }
}
 
Developer: cloud-software-foundation | Project: c5 | Lines: 41 | Source: TableMapReduceUtil.java


Example 7: run

import org.apache.hadoop.hbase.mapreduce.ResultSerialization; // import the dependent package/class
/**
 * Read sequence file from HDFS created by HBase export.
 *
 * @param args the command-line arguments
 * @return the process exit code
 * @throws Exception if something goes wrong
 */
public int run(final String[] args) throws Exception {

  Cli cli = Cli.builder().setArgs(args).addOptions(CliCommonOpts.InputFileOption.values()).build();
  int result = cli.runCmd();

  if (result != 0) {
    return result;
  }

  Path inputFile = new Path(cli.getArgValueAsString(CliCommonOpts.InputFileOption.INPUT));

  Configuration conf = super.getConf();

  conf.setStrings("io.serializations", conf.get("io.serializations"),
      ResultSerialization.class.getName());

  SequenceFile.Reader reader =
      new SequenceFile.Reader(conf, SequenceFile.Reader.file(inputFile));

  HBaseScanAvroStock.AvroStockReader stockReader =
      new HBaseScanAvroStock.AvroStockReader();

  try {
    ImmutableBytesWritable key = new ImmutableBytesWritable();
    Result value = new Result();

    while (reader.next(key)) {
      value = (Result) reader.getCurrentValue(value);
      Stock stock = stockReader.decode(value.getValue(
          HBaseWriter.STOCK_DETAILS_COLUMN_FAMILY_AS_BYTES,
          HBaseWriter.STOCK_COLUMN_QUALIFIER_AS_BYTES));
      System.out.println(new String(key.get()) + ": " +
      ToStringBuilder
            .reflectionToString(stock, ToStringStyle.SIMPLE_STYLE));
    }
  } finally {
    reader.close();
  }
  return 0;
}
 
Developer: Hanmourang | Project: hiped2 | Lines: 48 | Source: ExportedReader.java


Example 8: configure

import org.apache.hadoop.hbase.mapreduce.ResultSerialization; // import the dependent package/class
private void configure(Configuration conf) {
    conf.setStrings("io.serializations", conf.get("io.serializations"),
        MutationSerialization.class.getName(), ResultSerialization.class.getName());
}
 
Developer: flipkart-incubator | Project: hbase-object-mapper | Lines: 4 | Source: AbstractMRTest.java


Example 9: configureIncrementalLoad

import org.apache.hadoop.hbase.mapreduce.ResultSerialization; // import the dependent package/class
static void configureIncrementalLoad(Job job, HTableDescriptor tableDescriptor,
                                     RegionLocator regionLocator, Class<? extends OutputFormat<?, ?>> cls) throws IOException,
        UnsupportedEncodingException {
    Configuration conf = job.getConfiguration();
    job.setOutputKeyClass(ImmutableBytesWritable.class);
    job.setOutputValueClass(KeyValue.class);
    job.setOutputFormatClass(cls);

    // Based on the configured map output class, set the correct reducer to properly
    // sort the incoming values.
    // TODO it would be nice to pick one or the other of these formats.
    if (KeyValue.class.equals(job.getMapOutputValueClass())) {
        job.setReducerClass(KeyValueSortReducer.class);
    } else if (Put.class.equals(job.getMapOutputValueClass())) {
        job.setReducerClass(PutSortReducer.class);
    } else if (Text.class.equals(job.getMapOutputValueClass())) {
        job.setReducerClass(TextSortReducer.class);
    } else {
        LOG.warn("Unknown map output value type:" + job.getMapOutputValueClass());
    }

    conf.setStrings("io.serializations", conf.get("io.serializations"),
            MutationSerialization.class.getName(), ResultSerialization.class.getName(),
            KeyValueSerialization.class.getName());

    // Use table's region boundaries for TOP split points.
    LOG.info("Looking up current regions for table " + tableDescriptor.getTableName());
    List<ImmutableBytesWritable> startKeys = getRegionStartKeys(regionLocator);
    LOG.info("Configuring " + startKeys.size() + " reduce partitions " +
            "to match current region count");
    job.setNumReduceTasks(startKeys.size());

    configurePartitioner(job, startKeys);
    // Set compression algorithms based on column families
    configureCompression(conf, tableDescriptor);
    configureBloomType(tableDescriptor, conf);
    configureBlockSize(tableDescriptor, conf);
    configureDataBlockEncoding(tableDescriptor, conf);

    TableMapReduceUtil.addDependencyJars(job);
    TableMapReduceUtil.initCredentials(job);
    LOG.info("Incremental table " + regionLocator.getName() + " output configured.");
}
 
Developer: apache | Project: kylin | Lines: 44 | Source: HFileOutputFormat3.java
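For comparison, a hedged sketch of the public HBase API this Kylin variant mirrors, HFileOutputFormat2.configureIncrementalLoad, which performs the same serialization and partitioner setup; the table name is illustrative:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2;
import org.apache.hadoop.mapreduce.Job;

Job job = Job.getInstance(conf, "bulk-load-prep");
TableName name = TableName.valueOf("mytable");
try (Connection conn = ConnectionFactory.createConnection(conf);
     Table table = conn.getTable(name);
     RegionLocator locator = conn.getRegionLocator(name)) {
  // Sets key/value classes, reducer, serializations, and one reduce task per region.
  HFileOutputFormat2.configureIncrementalLoad(job, table, locator);
}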



Note: The org.apache.hadoop.hbase.mapreduce.ResultSerialization examples in this article were collected from GitHub, MSDocs, and other source-code and documentation platforms. The snippets were selected from open-source projects contributed by various developers; copyright remains with the original authors, and distribution and use are subject to each project's license. Do not republish without permission.

