Java Tracer Class Code Examples


This article collects typical usage examples of the Java class org.apache.htrace.core.Tracer. If you are wondering how the Tracer class is used in practice, or are looking for concrete Tracer examples, the selected code samples below should help.



The Tracer class belongs to the org.apache.htrace.core package. Twenty code examples of the class are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your ratings help the system recommend better Java code examples.
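
Before turning to the project-specific examples, here is a minimal, self-contained sketch of the basic Tracer workflow: building a tracer from an HTraceConfiguration, opening and closing trace scopes, and attaching annotations. It uses the same org.apache.htrace.core API calls that appear in the examples below; the tracer name and configuration values are illustrative placeholders only, not a recommended production setup.

import java.util.HashMap;
import java.util.Map;

import org.apache.htrace.core.HTraceConfiguration;
import org.apache.htrace.core.TraceScope;
import org.apache.htrace.core.Tracer;

public class TracerSketch {
  public static void main(String[] args) {
    // Illustrative configuration: sample every span. A span receiver class
    // (Tracer.SPAN_RECEIVER_CLASSES_KEY) would normally be configured as well;
    // without one, the created spans are simply dropped.
    Map<String, String> conf = new HashMap<>();
    conf.put(Tracer.SAMPLER_CLASSES_KEY, "org.apache.htrace.core.AlwaysSampler");

    Tracer tracer = new Tracer.Builder("TracerSketch").
        conf(HTraceConfiguration.fromMap(conf)).
        build();

    // Each newScope() call starts a span; closing the scope ends it.
    try (TraceScope parent = tracer.newScope("parent")) {
      parent.addKVAnnotation("key", "value");
      try (TraceScope child = tracer.newScope("child")) {
        child.addTimelineAnnotation("doing some work");
      }
    }
    tracer.close();
  }
}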

Example 1: testTracing

import org.apache.htrace.core.Tracer; // import the required package/class
@Test
public void testTracing() throws Throwable {
  Configuration conf = new Configuration();
  String prefix = "fs.shell.htrace.";
  conf.set(prefix + Tracer.SPAN_RECEIVER_CLASSES_KEY,
      SetSpanReceiver.class.getName());
  conf.set(prefix + Tracer.SAMPLER_CLASSES_KEY,
      AlwaysSampler.class.getName());
  conf.setQuietMode(false);
  FsShell shell = new FsShell(conf);
  int res;
  try {
    res = ToolRunner.run(shell, new String[]{"-help", "ls", "cat"});
  } finally {
    shell.close();
  }
  SetSpanReceiver.assertSpanNamesFound(new String[]{"help"});
  Assert.assertEquals("-help ls cat",
      SetSpanReceiver.getMap()
          .get("help").get(0).getKVAnnotations().get("args"));
}
 
Developer: nucypher, Project: hadoop-oss, Lines: 22, Source: TestFsShell.java


Example 2: newDB

import org.apache.htrace.core.Tracer; // import the required package/class
public static DB newDB(String dbname, Properties properties, final Tracer tracer) throws UnknownDBException {
  ClassLoader classLoader = DBFactory.class.getClassLoader();

  DB ret;

  try {
    Class<?> dbclass = classLoader.loadClass(dbname);

    ret = (DB) dbclass.newInstance();
  } catch (Exception e) {
    e.printStackTrace();
    return null;
  }

  ret.setProperties(properties);

  return new DBWrapper(ret, tracer);
}
 
Developer: fengchen8086, Project: ditb, Lines: 19, Source: DBFactory.java


Example 3: getFromOneDataNode

import org.apache.htrace.core.Tracer; // import the required package/class
private Callable<ByteBuffer> getFromOneDataNode(final DNAddrPair datanode,
    final LocatedBlock block, final long start, final long end,
    final ByteBuffer bb,
    final Map<ExtendedBlock, Set<DatanodeInfo>> corruptedBlockMap,
    final int hedgedReadId) {
  final SpanId parentSpanId = Tracer.getCurrentSpanId();
  return new Callable<ByteBuffer>() {
    @Override
    public ByteBuffer call() throws Exception {
      byte[] buf = bb.array();
      int offset = bb.position();
      try (TraceScope ignored = dfsClient.getTracer().
          newScope("hedgedRead" + hedgedReadId, parentSpanId)) {
        actualGetFromOneDataNode(datanode, block, start, end, buf,
            offset, corruptedBlockMap);
        return bb;
      }
    }
  };
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 21, Source: DFSInputStream.java


Example 4: RemoteBlockReader2

import org.apache.htrace.core.Tracer; // import the required package/class
protected RemoteBlockReader2(String file, long blockId,
    DataChecksum checksum, boolean verifyChecksum,
    long startOffset, long firstChunkOffset, long bytesToRead, Peer peer,
    DatanodeID datanodeID, PeerCache peerCache, Tracer tracer) {
  this.isLocal = DFSUtilClient.isLocalAddress(NetUtils.
      createSocketAddr(datanodeID.getXferAddr()));
  // Path is used only for printing block and file information in debug
  this.peer = peer;
  this.datanodeID = datanodeID;
  this.in = peer.getInputStreamChannel();
  this.checksum = checksum;
  this.verifyChecksum = verifyChecksum;
  this.startOffset = Math.max( startOffset, 0 );
  this.filename = file;
  this.peerCache = peerCache;
  this.blockId = blockId;

  // The total number of bytes that we need to transfer from the DN is
  // the amount that the user wants (bytesToRead), plus the padding at
  // the beginning in order to chunk-align. Note that the DN may elect
  // to send more than this amount if the read starts/ends mid-chunk.
  this.bytesNeededToFinish = bytesToRead + (startOffset - firstChunkOffset);
  bytesPerChecksum = this.checksum.getBytesPerChecksum();
  checksumSize = this.checksum.getChecksumSize();
  this.tracer = tracer;
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 27, Source: RemoteBlockReader2.java


Example 5: testTracing

import org.apache.htrace.core.Tracer; // import the required package/class
@Test
public void testTracing() throws Exception {
  // write and read without tracing started
  String fileName = "testTracingDisabled.dat";
  writeTestFile(fileName);
  Assert.assertEquals(0, SetSpanReceiver.size());
  readTestFile(fileName);
  Assert.assertEquals(0, SetSpanReceiver.size());

  writeTestFile("testReadTraceHooks.dat");

  FsTracer.clear();
  Tracer tracer = FsTracer.get(TRACING_CONF);
  writeWithTracing(tracer);
  readWithTracing(tracer);
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 17, Source: TestTracing.java


Example 6: main

import org.apache.htrace.core.Tracer; // import the required package/class
/**
 * Run a basic test. Adds spans to an existing htrace table in an existing HBase setup.
 * Requires a running HBase instance to send the traces to, with an already created trace
 * table (the default table name is 'htrace' with column families 's' and 'i').
 *
 * @param args Default arguments passed to the main method
 * @throws InterruptedException if Thread.sleep() is interrupted in the current thread.
 */
public static void main(String[] args) throws Exception {
  Tracer tracer = new Tracer.Builder().
      conf(new HBaseHTraceConfiguration(HBaseConfiguration.create())).
      build();
  tracer.addSampler(Sampler.ALWAYS);
  TraceScope parent = tracer.newScope("HBaseSpanReceiver.main.parent");
  Thread.sleep(10);
  long traceid = parent.getSpan().getSpanId().getHigh();
  TraceScope child1 = tracer.newScope("HBaseSpanReceiver.main.child.1");
  Thread.sleep(10);
  child1.close();
  TraceScope child2 = tracer.newScope("HBaseSpanReceiver.main.child.2");
  Thread.sleep(10);
  TraceScope gchild = tracer.newScope("HBaseSpanReceiver.main.grandchild");
  gchild.addTimelineAnnotation("annotation 1.");
  Thread.sleep(10);
  gchild.addTimelineAnnotation("annotation 2.");
  gchild.close();
  Thread.sleep(10);
  child2.close();
  Thread.sleep(10);
  parent.close();
  tracer.close();
  System.out.println("trace id: " + traceid);
}
 
Developer: apache, Project: incubator-htrace, Lines: 34, Source: HBaseSpanReceiver.java


Example 7: testSimpleTraces

import org.apache.htrace.core.Tracer; // import the required package/class
@Test
public void testSimpleTraces() throws IOException, InterruptedException {
  FakeZipkinTransport transport = new FakeZipkinTransport();
  Tracer tracer = newTracer(transport);
  Span rootSpan = new MilliSpan.Builder().
      description("root").
      spanId(new SpanId(100, 100)).
      tracerId("test").
      begin(System.currentTimeMillis()).
      build();
  TraceScope rootScope = tracer.newScope("root");
  TraceScope innerOne = tracer.newScope("innerOne");
  TraceScope innerTwo = tracer.newScope("innerTwo");
  innerTwo.close();
  Assert.assertTrue(transport.nextMessageAsSpan().getName().contains("innerTwo"));
  innerOne.close();
  Assert.assertTrue(transport.nextMessageAsSpan().getName().contains("innerOne"));
  rootSpan.addKVAnnotation("foo", "bar");
  rootSpan.addTimelineAnnotation("timeline");
  rootScope.close();
  Assert.assertTrue(transport.nextMessageAsSpan().getName().contains("root"));
  tracer.close();
}
 
Developer: apache, Project: incubator-htrace, Lines: 24, Source: TestZipkinSpanReceiver.java


Example 8: testSimpleTraces

import org.apache.htrace.core.Tracer; // import the required package/class
@Test(timeout=120000)
public void testSimpleTraces() throws IOException, InterruptedException {
  Tracer tracer = newTracer();
  Span rootSpan = new MilliSpan.Builder().
      description("root").
      spanId(new SpanId(100, 100)).
      tracerId("test").
      begin(System.currentTimeMillis()).
      build();
  TraceScope rootScope = tracer.newScope("root");
  TraceScope innerOne = tracer.newScope("innerOne");
  TraceScope innerTwo = tracer.newScope("innerTwo");
  innerTwo.close();
  Assert.assertTrue(flumeServer.nextEventBodyAsString().contains("innerTwo"));
  innerOne.close();
  Assert.assertTrue(flumeServer.nextEventBodyAsString().contains("innerOne"));
  rootSpan.addKVAnnotation("foo", "bar");
  rootSpan.addTimelineAnnotation("timeline");
  rootScope.close();
  Assert.assertTrue(flumeServer.nextEventBodyAsString().contains("root"));
  tracer.close();
}
 
Developer: apache, Project: incubator-htrace, Lines: 23, Source: TestFlumeSpanReceiver.java


Example 9: load

import org.apache.htrace.core.Tracer; // import the required package/class
/**
 * This method will block if a cache entry doesn't exist, and
 * any subsequent requests for the same user will wait on this
 * request to return. If a user already exists in the cache,
 * this will be run in the background.
 * @param user key of cache
 * @return List of groups belonging to user
 * @throws IOException to prevent caching negative entries
 */
@Override
public List<String> load(String user) throws Exception {
  TraceScope scope = null;
  Tracer tracer = Tracer.curThreadTracer();
  if (tracer != null) {
    scope = tracer.newScope("Groups#fetchGroupList");
    scope.addKVAnnotation("user", user);
  }
  List<String> groups = null;
  try {
    groups = fetchGroupList(user);
  } finally {
    if (scope != null) {
      scope.close();
    }
  }

  if (groups.isEmpty()) {
    if (isNegativeCacheEnabled()) {
      negativeCache.add(user);
    }

    // We throw here to prevent Cache from retaining an empty group
    throw noGroupsForUser(user);
  }

  return groups;
}
 
Developer: nucypher, Project: hadoop-oss, Lines: 38, Source: Groups.java


Example 10: createFileSystem

import org.apache.htrace.core.Tracer; // import the required package/class
private static FileSystem createFileSystem(URI uri, Configuration conf
    ) throws IOException {
  Tracer tracer = FsTracer.get(conf);
  TraceScope scope = tracer.newScope("FileSystem#createFileSystem");
  scope.addKVAnnotation("scheme", uri.getScheme());
  try {
    Class<?> clazz = getFileSystemClass(uri.getScheme(), conf);
    FileSystem fs = (FileSystem)ReflectionUtils.newInstance(clazz, conf);
    fs.initialize(uri, conf);
    return fs;
  } finally {
    scope.close();
  }
}
 
Developer: nucypher, Project: hadoop-oss, Lines: 15, Source: FileSystem.java


Example 11: get

import org.apache.htrace.core.Tracer; // import the required package/class
public static synchronized Tracer get(Configuration conf) {
  if (instance == null) {
    instance = new Tracer.Builder("FSClient").
        conf(TraceUtils.wrapHadoopConf(CommonConfigurationKeys.
            FS_CLIENT_HTRACE_PREFIX, conf)).
        build();
  }
  return instance;
}
 
Developer: nucypher, Project: hadoop-oss, Lines: 10, Source: FsTracer.java


Example 12: makeRpcRequestHeader

import org.apache.htrace.core.Tracer; // import the required package/class
public static RpcRequestHeaderProto makeRpcRequestHeader(RPC.RpcKind rpcKind,
    RpcRequestHeaderProto.OperationProto operation, int callId,
    int retryCount, byte[] uuid) {
  RpcRequestHeaderProto.Builder result = RpcRequestHeaderProto.newBuilder();
  result.setRpcKind(convert(rpcKind)).setRpcOp(operation).setCallId(callId)
      .setRetryCount(retryCount).setClientId(ByteString.copyFrom(uuid));

  // Add tracing info if we are currently tracing.
  Span span = Tracer.getCurrentSpan();
  if (span != null) {
    result.setTraceInfo(RPCTraceInfoProto.newBuilder()
        .setTraceId(span.getSpanId().getHigh())
        .setParentId(span.getSpanId().getLow())
          .build());
  }

  // Add caller context if it is not null
  CallerContext callerContext = CallerContext.getCurrent();
  if (callerContext != null && callerContext.isContextValid()) {
    RPCCallerContextProto.Builder contextBuilder = RPCCallerContextProto
        .newBuilder().setContext(callerContext.getContext());
    if (callerContext.getSignature() != null) {
      contextBuilder.setSignature(
          ByteString.copyFrom(callerContext.getSignature()));
    }
    result.setCallerContext(contextBuilder);
  }

  return result.build();
}
 
Developer: nucypher, Project: hadoop-oss, Lines: 31, Source: ProtoUtil.java


Example 13: invoke

import org.apache.htrace.core.Tracer; // import the required package/class
@Override
public Object invoke(Object proxy, Method method, Object[] args)
  throws Throwable {
  long startTime = 0;
  if (LOG.isDebugEnabled()) {
    startTime = Time.now();
  }

  // if Tracing is on then start a new span for this rpc.
  // guard it in the if statement to make sure there isn't
  // any extra string manipulation.
  Tracer tracer = Tracer.curThreadTracer();
  TraceScope traceScope = null;
  if (tracer != null) {
    traceScope = tracer.newScope(RpcClientUtil.methodToTraceString(method));
  }
  ObjectWritable value;
  try {
    value = (ObjectWritable)
      client.call(RPC.RpcKind.RPC_WRITABLE, new Invocation(method, args),
        remoteId, fallbackToSimpleAuth);
  } finally {
    if (traceScope != null) traceScope.close();
  }
  if (LOG.isDebugEnabled()) {
    long callTime = Time.now() - startTime;
    LOG.debug("Call: " + method.getName() + " " + callTime);
  }
  return value.get();
}
 
Developer: nucypher, Project: hadoop-oss, Lines: 31, Source: WritableRpcEngine.java


Example 14: initWorkload

import org.apache.htrace.core.Tracer; // import the required package/class
private static void initWorkload(Properties props, Thread warningthread, Workload workload, Tracer tracer) {
  try {
    try (final TraceScope span = tracer.newScope(CLIENT_WORKLOAD_INIT_SPAN)) {
      workload.init(props);
      warningthread.interrupt();
    }
  } catch (WorkloadException e) {
    e.printStackTrace();
    e.printStackTrace(System.out);
    System.exit(0);
  }
}
 
Developer: fengchen8086, Project: ditb, Lines: 13, Source: Client.java


Example 15: DBWrapper

import org.apache.htrace.core.Tracer; // import the required package/class
public DBWrapper(final DB db, final Tracer tracer) {
  this.db = db;
  measurements = Measurements.getMeasurements();
  this.tracer = tracer;
  final String simple = db.getClass().getSimpleName();
  scopeStringCleanup = simple + "#cleanup";
  scopeStringDelete = simple + "#delete";
  scopeStringInit = simple + "#init";
  scopeStringInsert = simple + "#insert";
  scopeStringRead = simple + "#read";
  scopeStringScan = simple + "#scan";
  scopeStringUpdate = simple + "#update";
}
 
Developer: fengchen8086, Project: ditb, Lines: 14, Source: DBWrapper.java


Example 16: RemoteBlockReader

import org.apache.htrace.core.Tracer; // import the required package/class
private RemoteBlockReader(String file, String bpid, long blockId,
    DataInputStream in, DataChecksum checksum, boolean verifyChecksum,
    long startOffset, long firstChunkOffset, long bytesToRead, Peer peer,
    DatanodeID datanodeID, PeerCache peerCache, Tracer tracer) {
  // Path is used only for printing block and file information in debug
  super(new Path("/" + Block.BLOCK_FILE_PREFIX + blockId +
          ":" + bpid + ":of:"+ file)/*too non path-like?*/,
      1, verifyChecksum,
      checksum.getChecksumSize() > 0? checksum : null,
      checksum.getBytesPerChecksum(),
      checksum.getChecksumSize());

  this.isLocal = DFSUtilClient.isLocalAddress(NetUtils.
      createSocketAddr(datanodeID.getXferAddr()));

  this.peer = peer;
  this.datanodeID = datanodeID;
  this.in = in;
  this.checksum = checksum;
  this.startOffset = Math.max( startOffset, 0 );
  this.blockId = blockId;

  // The total number of bytes that we need to transfer from the DN is
  // the amount that the user wants (bytesToRead), plus the padding at
  // the beginning in order to chunk-align. Note that the DN may elect
  // to send more than this amount if the read starts/ends mid-chunk.
  this.bytesNeededToFinish = bytesToRead + (startOffset - firstChunkOffset);

  this.firstChunkOffset = firstChunkOffset;
  lastChunkOffset = firstChunkOffset;
  lastChunkLen = -1;

  bytesPerChecksum = this.checksum.getBytesPerChecksum();
  checksumSize = this.checksum.getChecksumSize();
  this.peerCache = peerCache;
  this.tracer = tracer;
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 38, Source: RemoteBlockReader.java


Example 17: BlockReaderLocalLegacy

import org.apache.htrace.core.Tracer; // import the required package/class
private BlockReaderLocalLegacy(ShortCircuitConf conf, String hdfsfile,
    ExtendedBlock block, long startOffset, FileInputStream dataIn,
    Tracer tracer) throws IOException {
  this(conf, hdfsfile, block, startOffset,
      DataChecksum.newDataChecksum(DataChecksum.Type.NULL, 4), false,
      dataIn, startOffset, null, tracer);
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 8, Source: BlockReaderLocalLegacy.java


Example 18: releaseShortCircuitFds

import org.apache.htrace.core.Tracer; // import the required package/class
@Override
public void releaseShortCircuitFds(SlotId slotId) throws IOException {
  ReleaseShortCircuitAccessRequestProto.Builder builder =
      ReleaseShortCircuitAccessRequestProto.newBuilder().
          setSlotId(PBHelperClient.convert(slotId));
  SpanId spanId = Tracer.getCurrentSpanId();
  if (spanId.isValid()) {
    builder.setTraceInfo(DataTransferTraceInfoProto.newBuilder().
        setTraceId(spanId.getHigh()).
        setParentId(spanId.getLow()));
  }
  ReleaseShortCircuitAccessRequestProto proto = builder.build();
  send(out, Op.RELEASE_SHORT_CIRCUIT_FDS, proto);
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 15, Source: Sender.java


Example 19: requestShortCircuitShm

import org.apache.htrace.core.Tracer; // import the required package/class
@Override
public void requestShortCircuitShm(String clientName) throws IOException {
  ShortCircuitShmRequestProto.Builder builder =
      ShortCircuitShmRequestProto.newBuilder().
          setClientName(clientName);
  SpanId spanId = Tracer.getCurrentSpanId();
  if (spanId.isValid()) {
    builder.setTraceInfo(DataTransferTraceInfoProto.newBuilder().
        setTraceId(spanId.getHigh()).
        setParentId(spanId.getLow()));
  }
  ShortCircuitShmRequestProto proto = builder.build();
  send(out, Op.REQUEST_SHORT_CIRCUIT_SHM, proto);
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 15, Source: Sender.java


Example 20: buildBaseHeader

import org.apache.htrace.core.Tracer; // import the required package/class
static BaseHeaderProto buildBaseHeader(ExtendedBlock blk,
    Token<BlockTokenIdentifier> blockToken) {
  BaseHeaderProto.Builder builder =  BaseHeaderProto.newBuilder()
      .setBlock(PBHelperClient.convert(blk))
      .setToken(PBHelperClient.convert(blockToken));
  SpanId spanId = Tracer.getCurrentSpanId();
  if (spanId.isValid()) {
    builder.setTraceInfo(DataTransferTraceInfoProto.newBuilder()
        .setTraceId(spanId.getHigh())
        .setParentId(spanId.getLow()));
  }
  return builder.build();
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 14, Source: DataTransferProtoUtil.java



Note: The org.apache.htrace.core.Tracer class examples in this article were compiled from source code and documentation platforms such as GitHub and MSDocs. The code snippets were selected from open-source projects contributed by various developers; copyright of the source code belongs to the original authors. When distributing or using the code, please refer to the license of the corresponding project; do not reproduce without permission.

