This article collects typical usage examples of the Java class org.apache.hadoop.hbase.protobuf.generated.MapReduceProtos. If you are wondering what MapReduceProtos is for, how to use it, or where to find working examples, the curated samples below should help.
The MapReduceProtos class belongs to the org.apache.hadoop.hbase.protobuf.generated package. Nine code examples are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Java code examples.
Example 1: toScanMetrics

import org.apache.hadoop.hbase.protobuf.generated.MapReduceProtos; // import the required package/class

public static ScanMetrics toScanMetrics(final byte[] bytes) {
  Parser<MapReduceProtos.ScanMetrics> parser = MapReduceProtos.ScanMetrics.PARSER;
  MapReduceProtos.ScanMetrics pScanMetrics = null;
  try {
    pScanMetrics = parser.parseFrom(bytes);
  } catch (InvalidProtocolBufferException e) {
    // Ignored: there are simply no key/values to add.
  }
  ScanMetrics scanMetrics = new ScanMetrics();
  if (pScanMetrics != null) {
    for (HBaseProtos.NameInt64Pair pair : pScanMetrics.getMetricsList()) {
      if (pair.hasName() && pair.hasValue()) {
        scanMetrics.setCounter(pair.getName(), pair.getValue());
      }
    }
  }
  return scanMetrics;
}

Developer: fengchen8086, Project: ditb, Lines: 19, Source file: ProtobufUtil.java
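For orientation, here is a minimal sketch of how the decoded metrics might be consumed on the client side. It assumes an HBase 1.x-style client where ScanMetrics exposes getMetricsMap() (older clients expose individual AtomicLong counters instead); ProtobufUtil.toScanMetrics is the method shown above.

import java.util.Map;

import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.metrics.ScanMetrics;
import org.apache.hadoop.hbase.protobuf.ProtobufUtil;

public class ScanMetricsDecodeSketch {
  /** Decode and print whatever metrics the client attached to the Scan. */
  static void dumpMetrics(Scan scan) {
    byte[] serialized = scan.getAttribute(Scan.SCAN_ATTRIBUTES_METRICS_DATA);
    if (serialized == null) {
      return; // metrics were never enabled, or never published
    }
    ScanMetrics metrics = ProtobufUtil.toScanMetrics(serialized);
    for (Map.Entry<String, Long> entry : metrics.getMetricsMap().entrySet()) {
      System.out.println(entry.getKey() + " = " + entry.getValue());
    }
  }
}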
Example 2: write

import org.apache.hadoop.hbase.protobuf.generated.MapReduceProtos; // import the required package/class

@Override
public void write(DataOutput out) throws IOException {
  MapReduceProtos.TableSnapshotRegionSplit.Builder builder =
      MapReduceProtos.TableSnapshotRegionSplit.newBuilder()
          .setRegion(HBaseProtos.RegionSpecifier.newBuilder()
              .setType(HBaseProtos.RegionSpecifier.RegionSpecifierType.ENCODED_REGION_NAME)
              .setValue(HBaseZeroCopyByteString.wrap(Bytes.toBytes(regionName))).build());
  for (String location : locations) {
    builder.addLocations(location);
  }
  MapReduceProtos.TableSnapshotRegionSplit split = builder.build();
  ByteArrayOutputStream baos = new ByteArrayOutputStream();
  split.writeTo(baos);
  baos.close();
  byte[] buf = baos.toByteArray();
  out.writeInt(buf.length);
  out.write(buf);
}

Developer: tenggyut, Project: HIndex, Lines: 22, Source file: TableSnapshotInputFormatImpl.java
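Since the split is a Hadoop Writable, serializing it by hand follows the usual Writable pattern. A small helper sketch (the class and method names here are illustrative, not part of the HBase API):

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

import org.apache.hadoop.io.Writable;

public class WritableBytesSketch {
  /** Serialize any Writable (such as the split above) to a byte array. */
  static byte[] toBytes(Writable w) throws IOException {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    try (DataOutputStream out = new DataOutputStream(baos)) {
      w.write(out); // for the split above, this runs the write(DataOutput) just shown
    }
    return baos.toByteArray();
  }
}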
Example 3: toScanMetrics

import org.apache.hadoop.hbase.protobuf.generated.MapReduceProtos; // import the required package/class

public static ScanMetrics toScanMetrics(final byte[] bytes) {
  MapReduceProtos.ScanMetrics.Builder builder = MapReduceProtos.ScanMetrics.newBuilder();
  try {
    builder.mergeFrom(bytes);
  } catch (InvalidProtocolBufferException e) {
    // Ignored: there are simply no key/values to add.
  }
  MapReduceProtos.ScanMetrics pScanMetrics = builder.build();
  ScanMetrics scanMetrics = new ScanMetrics();
  for (HBaseProtos.NameInt64Pair pair : pScanMetrics.getMetricsList()) {
    if (pair.hasName() && pair.hasValue()) {
      scanMetrics.setCounter(pair.getName(), pair.getValue());
    }
  }
  return scanMetrics;
}

Developer: daidong, Project: DominoHBase, Lines: 17, Source file: ProtobufUtil.java
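Compared with Example 1, which goes through MapReduceProtos.ScanMetrics.PARSER, this variant decodes through a builder. The practical difference: the parser path can distinguish "failed to parse" (pScanMetrics stays null) from "parsed but empty", while the builder path silently yields an empty message on failure, which is why the null check disappears here. Both produce the same ScanMetrics for well-formed input.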
Example 4: readFields

import org.apache.hadoop.hbase.protobuf.generated.MapReduceProtos; // import the required package/class

@Override
public void readFields(DataInput in) throws IOException {
  int len = in.readInt();
  byte[] buf = new byte[len];
  in.readFully(buf);
  MapReduceProtos.TableSnapshotRegionSplit split =
      MapReduceProtos.TableSnapshotRegionSplit.PARSER.parseFrom(buf);
  this.regionName = Bytes.toString(split.getRegion().getValue().toByteArray());
  List<String> locationsList = split.getLocationsList();
  this.locations = locationsList.toArray(new String[locationsList.size()]);
}

Developer: tenggyut, Project: HIndex, Lines: 11, Source file: TableSnapshotInputFormatImpl.java
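Together with Example 2, this gives a full Writable round trip: write prefixes the protobuf bytes with their length, and readFields reads the length, then the bytes, then parses. A hedged sketch exercising both (WritableBytesSketch is the illustrative helper from the note after Example 2, not an HBase class):

import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

import org.apache.hadoop.io.Writable;

public class WritableRoundTripSketch {
  /** Populate 'copy' from the serialized form of 'original', as the MR framework would. */
  static void roundTrip(Writable original, Writable copy) throws IOException {
    byte[] bytes = WritableBytesSketch.toBytes(original); // runs write(DataOutput) from Example 2
    try (DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes))) {
      copy.readFields(in); // runs readFields(DataInput) from Example 4
    }
  }
}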
Example 5: writeScanMetrics

import org.apache.hadoop.hbase.protobuf.generated.MapReduceProtos; // import the required package/class

/**
 * Publish the scan metrics. For now, we use scan.setAttribute to pass the metrics back to the
 * application or to TableInputFormat. Later, we could push them to other systems. We don't use
 * the metrics framework because it doesn't support multiple instances of the same metric on
 * the same machine; for scan/map-reduce scenarios, we will have multiple scans running at the
 * same time.
 *
 * By default, scan metrics are disabled; if the application wants to collect them, this
 * behavior can be turned on by calling {@link Scan#setScanMetricsEnabled(boolean)}.
 *
 * <p>This invocation clears the scan metrics. Metrics are aggregated in the Scan instance.
 */
protected void writeScanMetrics() {
  if (this.scanMetrics == null || scanMetricsPublished) {
    return;
  }
  MapReduceProtos.ScanMetrics pScanMetrics = ProtobufUtil.toScanMetrics(scanMetrics);
  scan.setAttribute(Scan.SCAN_ATTRIBUTES_METRICS_DATA, pScanMetrics.toByteArray());
  scanMetricsPublished = true;
}

Developer: fengchen8086, Project: ditb, Lines: 20, Source file: ClientScanner.java
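End to end, a client might enable metrics, run the scan, and then decode what writeScanMetrics() published. A minimal sketch against the HBase 1.x client API ("my_table" and the connection setup are assumptions for illustration; countOfRPCcalls is one of the public counters on ScanMetrics in that line of releases):

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.client.metrics.ScanMetrics;
import org.apache.hadoop.hbase.protobuf.ProtobufUtil;

public class ScanMetricsEndToEndSketch {
  static void scanWithMetrics(Connection conn) throws Exception {
    Scan scan = new Scan();
    scan.setScanMetricsEnabled(true); // metrics are off by default
    try (Table table = conn.getTable(TableName.valueOf("my_table")); // hypothetical table
         ResultScanner scanner = table.getScanner(scan)) {
      for (Result r : scanner) {
        // consume results; the ClientScanner aggregates metrics as it scans
      }
    } // closing the scanner triggers writeScanMetrics() shown above
    byte[] bytes = scan.getAttribute(Scan.SCAN_ATTRIBUTES_METRICS_DATA);
    if (bytes != null) {
      ScanMetrics metrics = ProtobufUtil.toScanMetrics(bytes);
      System.out.println("RPC calls: " + metrics.countOfRPCcalls.get());
    }
  }
}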
Example 6: writeScanMetrics

import org.apache.hadoop.hbase.protobuf.generated.MapReduceProtos; // import the required package/class

/**
 * Publish the scan metrics. For now, we use scan.setAttribute to pass the metrics back to the
 * application or to TableInputFormat. Later, we could push them to other systems. We don't use
 * the metrics framework because it doesn't support multiple instances of the same metric on
 * the same machine; for scan/map-reduce scenarios, we will have multiple scans running at the
 * same time.
 *
 * By default, scan metrics are disabled; if the application wants to collect them, this
 * behavior can be turned on by calling {@link Scan#setScanMetricsEnabled(boolean)}.
 *
 * <p>This invocation clears the scan metrics. Metrics are aggregated in the Scan instance.
 */
protected void writeScanMetrics() {
  if (this.scanMetrics == null || scanMetricsPublished) {
    return;
  }
  MapReduceProtos.ScanMetrics pScanMetrics = ProtobufUtil.toScanMetrics(scanMetrics);
  scan.setAttribute(Scan.SCAN_ATTRIBUTES_METRICS_DATA, pScanMetrics.toByteArray());
  scanMetricsPublished = true;
}

Developer: grokcoder, Project: pbase, Lines: 20, Source file: ClientScanner.java
Example 7: writeScanMetrics

import org.apache.hadoop.hbase.protobuf.generated.MapReduceProtos; // import the required package/class

/**
 * Publish the scan metrics. For now, we use scan.setAttribute to pass the metrics back to the
 * application or to TableInputFormat. Later, we could push them to other systems. We don't use
 * the metrics framework because it doesn't support multiple instances of the same metric on
 * the same machine; for scan/map-reduce scenarios, we will have multiple scans running at the
 * same time.
 *
 * By default, scan metrics are disabled; if the application wants to collect them, this
 * behavior can be turned on by calling:
 *
 * scan.setAttribute(SCAN_ATTRIBUTES_METRICS_ENABLE, Bytes.toBytes(Boolean.TRUE))
 */
protected void writeScanMetrics() {
  if (this.scanMetrics == null || scanMetricsPublished) {
    return;
  }
  MapReduceProtos.ScanMetrics pScanMetrics = ProtobufUtil.toScanMetrics(scanMetrics);
  scan.setAttribute(Scan.SCAN_ATTRIBUTES_METRICS_DATA, pScanMetrics.toByteArray());
  scanMetricsPublished = true;
}

Developer: tenggyut, Project: HIndex, Lines: 20, Source file: ClientScanner.java
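Unlike Examples 5 and 6, this older variant's javadoc enables metrics through a raw Scan attribute rather than a setter. A hedged sketch of that opt-in (the constant lives on Scan in clients of this vintage; newer clients deprecate it in favor of setScanMetricsEnabled):

import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class LegacyMetricsEnableSketch {
  static Scan newMetricsEnabledScan() {
    Scan scan = new Scan();
    // Opt in exactly as the javadoc above describes.
    scan.setAttribute(Scan.SCAN_ATTRIBUTES_METRICS_ENABLE, Bytes.toBytes(Boolean.TRUE));
    return scan;
  }
}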
Example 8: writeScanMetrics

import org.apache.hadoop.hbase.protobuf.generated.MapReduceProtos; // import the required package/class

/**
 * Publish the scan metrics. For now, we use scan.setAttribute to pass the metrics back to the
 * application or to TableInputFormat. Later, we could push them to other systems. We don't use
 * the metrics framework because it doesn't support multiple instances of the same metric on
 * the same machine; for scan/map-reduce scenarios, we will have multiple scans running at the
 * same time.
 * <p/>
 * By default, scan metrics are disabled; if the application wants to collect them, this
 * behavior can be turned on by calling:
 * <p/>
 * scan.setAttribute(SCAN_ATTRIBUTES_METRICS_ENABLE, Bytes.toBytes(Boolean.TRUE))
 */
protected void writeScanMetrics() {
  if (this.scanMetrics == null || scanMetricsPublished) {
    return;
  }
  MapReduceProtos.ScanMetrics pScanMetrics = ProtobufUtil.toScanMetrics(scanMetrics);
  scan.setAttribute(Scan.SCAN_ATTRIBUTES_METRICS_DATA, pScanMetrics.toByteArray());
  scanMetricsPublished = true;
}

Developer: jurmous, Project: async-hbase-client, Lines: 20, Source file: AsyncClientScanner.java
Example 9: writeScanMetrics

import org.apache.hadoop.hbase.protobuf.generated.MapReduceProtos; // import the required package/class

/**
 * Publish the scan metrics. For now, we use scan.setAttribute to pass the metrics back to the
 * application or to TableInputFormat. Later, we could push them to other systems. We don't use
 * the metrics framework because it doesn't support multiple instances of the same metric on
 * the same machine; for scan/map-reduce scenarios, we will have multiple scans running at the
 * same time.
 *
 * By default, scan metrics are disabled; if the application wants to collect them, this
 * behavior can be turned on by calling:
 *
 * scan.setAttribute(SCAN_ATTRIBUTES_METRICS_ENABLE, Bytes.toBytes(Boolean.TRUE))
 */
private void writeScanMetrics() throws IOException {
  if (this.scanMetrics == null) {
    return;
  }
  final DataOutputBuffer d = new DataOutputBuffer(); // unused in the original source
  MapReduceProtos.ScanMetrics pScanMetrics = ProtobufUtil.toScanMetrics(scanMetrics);
  scan.setAttribute(Scan.SCAN_ATTRIBUTES_METRICS_DATA, pScanMetrics.toByteArray());
}

Developer: daidong, Project: DominoHBase, Lines: 20, Source file: ClientScanner.java
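Note the design difference from Examples 5-8: this older DominoHBase variant has no scanMetricsPublished guard, so calling it more than once simply publishes again; the later client versions shown above add the flag so metrics are published only once per scanner.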
Note: the org.apache.hadoop.hbase.protobuf.generated.MapReduceProtos examples in this article were collected from source and documentation platforms such as GitHub and MSDocs. The code snippets are drawn from open-source projects contributed by their respective authors; copyright remains with the original authors, and distribution and use are subject to each project's license. Do not repost without permission.