
Java MapReduceProtos Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.hbase.protobuf.generated.MapReduceProtos. If you are wondering what the MapReduceProtos class does or how to use it, the curated examples below should help.



The MapReduceProtos class belongs to the org.apache.hadoop.hbase.protobuf.generated package. Nine code examples of the class are shown below, sorted by popularity by default.
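All of the examples below revolve around the same round-trip: scan metrics are a set of name/counter pairs that get serialized to bytes (via the generated protobuf classes) and parsed back, with parse failures silently treated as "no counters". The following is a minimal, dependency-free sketch of that round-trip; it uses `DataOutputStream` framing in place of protobuf, and the names `ScanMetricsCodec`, `toBytes`, and `fromBytes` are illustrative, not part of the HBase API.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative stand-in for the MapReduceProtos.ScanMetrics round-trip:
// encode name/int64 counter pairs to bytes, then parse them back.
class ScanMetricsCodec {

    // Serialize counters: a count, then each (name, value) pair.
    public static byte[] toBytes(Map<String, Long> counters) {
        try {
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(baos);
            out.writeInt(counters.size());
            for (Map.Entry<String, Long> e : counters.entrySet()) {
                out.writeUTF(e.getKey());    // like NameInt64Pair.name
                out.writeLong(e.getValue()); // like NameInt64Pair.value
            }
            out.close();
            return baos.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e); // cannot happen on an in-memory stream
        }
    }

    // Parse bytes back into counters; on a decode error, return an empty
    // map instead of failing, as toScanMetrics does in the examples.
    public static Map<String, Long> fromBytes(byte[] bytes) {
        Map<String, Long> counters = new LinkedHashMap<>();
        try (DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes))) {
            int n = in.readInt();
            for (int i = 0; i < n; i++) {
                counters.put(in.readUTF(), in.readLong());
            }
        } catch (IOException e) {
            // Ignored: undecodable bytes just mean there are no counters to add.
        }
        return counters;
    }

    public static void main(String[] args) {
        Map<String, Long> m = new LinkedHashMap<>();
        m.put("RPC_CALLS", 3L);
        Map<String, Long> back = fromBytes(toBytes(m));
        if (!Long.valueOf(3L).equals(back.get("RPC_CALLS"))) throw new AssertionError();
    }
}
```

The swallow-the-exception decode mirrors a design choice visible in Examples 1 and 3: a scanner should never fail just because its optional metrics attribute is malformed.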

Example 1: toScanMetrics

import org.apache.hadoop.hbase.protobuf.generated.MapReduceProtos; // import the required package/class
public static ScanMetrics toScanMetrics(final byte[] bytes) {
  Parser<MapReduceProtos.ScanMetrics> parser = MapReduceProtos.ScanMetrics.PARSER;
  MapReduceProtos.ScanMetrics pScanMetrics = null;
  try {
    pScanMetrics = parser.parseFrom(bytes);
  } catch (InvalidProtocolBufferException e) {
    // Ignored: there are just no key values to add.
  }
  ScanMetrics scanMetrics = new ScanMetrics();
  if (pScanMetrics != null) {
    for (HBaseProtos.NameInt64Pair pair : pScanMetrics.getMetricsList()) {
      if (pair.hasName() && pair.hasValue()) {
        scanMetrics.setCounter(pair.getName(), pair.getValue());
      }
    }
  }
  return scanMetrics;
}
 
Developer: fengchen8086 | Project: ditb | Lines: 19 | Source: ProtobufUtil.java


Example 2: write

import org.apache.hadoop.hbase.protobuf.generated.MapReduceProtos; // import the required package/class
@Override
public void write(DataOutput out) throws IOException {
  MapReduceProtos.TableSnapshotRegionSplit.Builder builder =
    MapReduceProtos.TableSnapshotRegionSplit.newBuilder()
      .setRegion(HBaseProtos.RegionSpecifier.newBuilder()
        .setType(HBaseProtos.RegionSpecifier.RegionSpecifierType.ENCODED_REGION_NAME)
        .setValue(HBaseZeroCopyByteString.wrap(Bytes.toBytes(regionName))).build());

  for (String location : locations) {
    builder.addLocations(location);
  }

  MapReduceProtos.TableSnapshotRegionSplit split = builder.build();

  ByteArrayOutputStream baos = new ByteArrayOutputStream();
  split.writeTo(baos);
  baos.close();
  byte[] buf = baos.toByteArray();
  out.writeInt(buf.length);
  out.write(buf);
}
 
Developer: tenggyut | Project: HIndex | Lines: 22 | Source: TableSnapshotInputFormatImpl.java


Example 3: toScanMetrics

import org.apache.hadoop.hbase.protobuf.generated.MapReduceProtos; // import the required package/class
public static ScanMetrics toScanMetrics(final byte[] bytes) {
  MapReduceProtos.ScanMetrics.Builder builder = MapReduceProtos.ScanMetrics.newBuilder();
  try {
    builder.mergeFrom(bytes);
  } catch (InvalidProtocolBufferException e) {
    // Ignored: there are just no key values to add.
  }
  MapReduceProtos.ScanMetrics pScanMetrics = builder.build();
  ScanMetrics scanMetrics = new ScanMetrics();
  for (HBaseProtos.NameInt64Pair pair : pScanMetrics.getMetricsList()) {
    if (pair.hasName() && pair.hasValue()) {
      scanMetrics.setCounter(pair.getName(), pair.getValue());
    }
  }
  return scanMetrics;
}
 
Developer: daidong | Project: DominoHBase | Lines: 17 | Source: ProtobufUtil.java


Example 4: readFields

import org.apache.hadoop.hbase.protobuf.generated.MapReduceProtos; // import the required package/class
@Override
public void readFields(DataInput in) throws IOException {
  int len = in.readInt();
  byte[] buf = new byte[len];
  in.readFully(buf);
  MapReduceProtos.TableSnapshotRegionSplit split = MapReduceProtos.TableSnapshotRegionSplit.PARSER.parseFrom(buf);
  this.regionName = Bytes.toString(split.getRegion().getValue().toByteArray());
  List<String> locationsList = split.getLocationsList();
  this.locations = locationsList.toArray(new String[locationsList.size()]);
}
 
Developer: tenggyut | Project: HIndex | Lines: 11 | Source: TableSnapshotInputFormatImpl.java
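Examples 2 and 4 are the two halves of one Writable pattern: serialize the protobuf message, write its length as an int prefix, and on the read side reverse the steps with readInt followed by readFully (plain read() may return fewer bytes than requested; readFully loops until the buffer is full). A minimal sketch of that framing with an opaque byte payload; the `FramedCodec` name is illustrative, not part of HBase or Hadoop.

```java
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.DataOutput;
import java.io.DataOutputStream;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.Arrays;

// Length-prefixed framing as used by write()/readFields() above:
// writeInt(length) then the raw bytes; the reader reverses the steps.
class FramedCodec {

    public static void writeFrame(DataOutput out, byte[] payload) throws IOException {
        out.writeInt(payload.length); // 4-byte big-endian length prefix
        out.write(payload);
    }

    public static byte[] readFrame(DataInput in) throws IOException {
        int len = in.readInt();
        byte[] buf = new byte[len];
        in.readFully(buf); // blocks until all len bytes are read, or throws EOFException
        return buf;
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        writeFrame(new DataOutputStream(baos), new byte[]{1, 2, 3});
        byte[] back = readFrame(new DataInputStream(new ByteArrayInputStream(baos.toByteArray())));
        if (!Arrays.equals(back, new byte[]{1, 2, 3})) throw new AssertionError();
    }
}
```

In the HBase examples the payload bytes are a `MapReduceProtos.TableSnapshotRegionSplit` message, written via `writeTo` and parsed back with `PARSER.parseFrom`.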


Example 5: writeScanMetrics

import org.apache.hadoop.hbase.protobuf.generated.MapReduceProtos; // import the required package/class
/**
 * Publish the scan metrics. For now, we use scan.setAttribute to pass the metrics back to the
 * application or TableInputFormat. Later, we could push it to other systems. We don't use the
 * metrics framework because it doesn't support multiple instances of the same metric on the
 * same machine; for scan/MapReduce scenarios, we will have multiple scans running at the same
 * time.
 *
 * By default, scan metrics are disabled; if the application wants to collect them, this
 * behavior can be turned on by calling {@link Scan#setScanMetricsEnabled(boolean)}.
 *
 * <p>This invocation clears the scan metrics. Metrics are aggregated in the Scan instance.
 */
protected void writeScanMetrics() {
  if (this.scanMetrics == null || scanMetricsPublished) {
    return;
  }
  MapReduceProtos.ScanMetrics pScanMetrics = ProtobufUtil.toScanMetrics(scanMetrics);
  scan.setAttribute(Scan.SCAN_ATTRIBUTES_METRICS_DATA, pScanMetrics.toByteArray());
  scanMetricsPublished = true;
}
 
Developer: fengchen8086 | Project: ditb | Lines: 20 | Source: ClientScanner.java
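The javadoc above explains the mechanism: serialized metrics travel back to the application as a scan attribute, and a `scanMetricsPublished` flag makes the publish idempotent. A minimal sketch of that publish-once guard, with a plain `Map` standing in for the scan's attribute store; the class name `MetricsPublisher` and the attribute key are hypothetical.

```java
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

// Sketch of the publish-once pattern in writeScanMetrics(): serialize the
// metrics into a scan "attribute" and set a flag so repeat calls are no-ops.
class MetricsPublisher {
    // Stands in for Scan.SCAN_ATTRIBUTES_METRICS_DATA.
    static final String METRICS_DATA = "scan.metrics.data";

    private final Map<String, byte[]> scanAttributes = new HashMap<>();
    private final String scanMetrics; // stands in for the ScanMetrics object
    private boolean scanMetricsPublished;

    public MetricsPublisher(String scanMetrics) {
        this.scanMetrics = scanMetrics;
    }

    public void writeScanMetrics() {
        if (scanMetrics == null || scanMetricsPublished) {
            return; // nothing collected, or already published
        }
        scanAttributes.put(METRICS_DATA, scanMetrics.getBytes(StandardCharsets.UTF_8));
        scanMetricsPublished = true;
    }

    public byte[] published() {
        return scanAttributes.get(METRICS_DATA);
    }
}
```

In the real `ClientScanner`, the bytes come from `ProtobufUtil.toScanMetrics(scanMetrics).toByteArray()` and `TableInputFormat` reads them back off the Scan on the application side.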


Example 6: writeScanMetrics

import org.apache.hadoop.hbase.protobuf.generated.MapReduceProtos; // import the required package/class
/**
 * Publish the scan metrics. For now, we use scan.setAttribute to pass the metrics back to the
 * application or TableInputFormat. Later, we could push it to other systems. We don't use the
 * metrics framework because it doesn't support multiple instances of the same metric on the
 * same machine; for scan/MapReduce scenarios, we will have multiple scans running at the same
 * time.
 *
 * By default, scan metrics are disabled; if the application wants to collect them, this
 * behavior can be turned on by calling {@link Scan#setScanMetricsEnabled(boolean)}.
 *
 * <p>This invocation clears the scan metrics. Metrics are aggregated in the Scan instance.
 */
 */
protected void writeScanMetrics() {
  if (this.scanMetrics == null || scanMetricsPublished) {
    return;
  }
  MapReduceProtos.ScanMetrics pScanMetrics = ProtobufUtil.toScanMetrics(scanMetrics);
  scan.setAttribute(Scan.SCAN_ATTRIBUTES_METRICS_DATA, pScanMetrics.toByteArray());
  scanMetricsPublished = true;
}
 
Developer: grokcoder | Project: pbase | Lines: 20 | Source: ClientScanner.java


Example 7: writeScanMetrics

import org.apache.hadoop.hbase.protobuf.generated.MapReduceProtos; // import the required package/class
/**
 * Publish the scan metrics. For now, we use scan.setAttribute to pass the metrics back to the
 * application or TableInputFormat. Later, we could push it to other systems. We don't use the
 * metrics framework because it doesn't support multiple instances of the same metric on the
 * same machine; for scan/MapReduce scenarios, we will have multiple scans running at the same
 * time.
 *
 * By default, scan metrics are disabled; if the application wants to collect them, this behavior
 * can be turned on by calling:
 *
 * scan.setAttribute(SCAN_ATTRIBUTES_METRICS_ENABLE, Bytes.toBytes(Boolean.TRUE))
 */
protected void writeScanMetrics() {
  if (this.scanMetrics == null || scanMetricsPublished) {
    return;
  }
  MapReduceProtos.ScanMetrics pScanMetrics = ProtobufUtil.toScanMetrics(scanMetrics);
  scan.setAttribute(Scan.SCAN_ATTRIBUTES_METRICS_DATA, pScanMetrics.toByteArray());
  scanMetricsPublished = true;
}
 
Developer: tenggyut | Project: HIndex | Lines: 20 | Source: ClientScanner.java


Example 8: writeScanMetrics

import org.apache.hadoop.hbase.protobuf.generated.MapReduceProtos; // import the required package/class
/**
 * Publish the scan metrics. For now, we use scan.setAttribute to pass the metrics back to the
 * application or TableInputFormat. Later, we could push it to other systems. We don't use the
 * metrics framework because it doesn't support multiple instances of the same metric on the
 * same machine; for scan/MapReduce scenarios, we will have multiple scans running at the same
 * time.
 * <p/>
 * By default, scan metrics are disabled; if the application wants to collect them, this behavior
 * can be turned on by calling:
 * <p/>
 * scan.setAttribute(SCAN_ATTRIBUTES_METRICS_ENABLE, Bytes.toBytes(Boolean.TRUE))
 */
 */
protected void writeScanMetrics() {
  if (this.scanMetrics == null || scanMetricsPublished) {
    return;
  }
  MapReduceProtos.ScanMetrics pScanMetrics = ProtobufUtil.toScanMetrics(scanMetrics);
  scan.setAttribute(Scan.SCAN_ATTRIBUTES_METRICS_DATA, pScanMetrics.toByteArray());
  scanMetricsPublished = true;
}
 
Developer: jurmous | Project: async-hbase-client | Lines: 20 | Source: AsyncClientScanner.java


Example 9: writeScanMetrics

import org.apache.hadoop.hbase.protobuf.generated.MapReduceProtos; // import the required package/class
/**
 * Publish the scan metrics. For now, we use scan.setAttribute to pass the metrics back to the
 * application or TableInputFormat. Later, we could push it to other systems. We don't use the
 * metrics framework because it doesn't support multiple instances of the same metric on the
 * same machine; for scan/MapReduce scenarios, we will have multiple scans running at the same
 * time.
 *
 * By default, scan metrics are disabled; if the application wants to collect them, this behavior
 * can be turned on by calling:
 *
 * scan.setAttribute(SCAN_ATTRIBUTES_METRICS_ENABLE, Bytes.toBytes(Boolean.TRUE))
 */
 */
private void writeScanMetrics() throws IOException {
  if (this.scanMetrics == null) {
    return;
  }
  MapReduceProtos.ScanMetrics pScanMetrics = ProtobufUtil.toScanMetrics(scanMetrics);
  scan.setAttribute(Scan.SCAN_ATTRIBUTES_METRICS_DATA, pScanMetrics.toByteArray());
}
 
Developer: daidong | Project: DominoHBase | Lines: 20 | Source: ClientScanner.java



Note: The org.apache.hadoop.hbase.protobuf.generated.MapReduceProtos examples in this article were collected from open-source projects hosted on GitHub, MSDocs, and similar source/documentation platforms. Copyright of the source code belongs to the original authors; consult each project's license before redistributing or using the code. Do not reproduce this article without permission.

