Java Logging Class Code Examples


This article compiles typical usage examples of the org.webrtc.Logging class in Java. If you are wondering what the Logging class does, how to use it, or where to find working examples, the curated snippets below should help.



The Logging class belongs to the org.webrtc package. Twenty code examples of the class are shown below, ordered by popularity by default.
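Before the examples, here is a minimal standalone sketch of the Logging API surface that the snippets below exercise: Logging.d/w/e for debug, warning, and error messages, plus the overload that takes a Throwable. The class name, TAG value, and the enableLogToDebugOutput(Severity) call are assumptions made for illustration rather than part of the examples that follow, so verify them against your WebRTC SDK version.

import org.webrtc.Logging;

// A minimal sketch, assuming it runs inside an Android app that has already
// loaded the WebRTC native library (e.g. after PeerConnectionFactory initialization).
public final class LoggingQuickStart {
  // Hypothetical tag, chosen only for this sketch.
  private static final String TAG = "LoggingQuickStart";

  public static void demo() {
    // Assumed API: raise native WebRTC logging to the debug output at INFO level and above.
    // Check that your SDK version exposes enableLogToDebugOutput(Severity) before relying on it.
    Logging.enableLogToDebugOutput(Logging.Severity.LS_INFO);

    // The three severities used throughout the examples below.
    Logging.d(TAG, "debug message");
    Logging.w(TAG, "warning message");
    try {
      throw new IllegalStateException("demo failure");
    } catch (IllegalStateException e) {
      // Overload that appends a Throwable, as in Example 12: Logging.e(TAG, "stop", e).
      Logging.e(TAG, "error with exception", e);
    }
  }
}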

Example 1: updateFrameDimensionsAndReportEvents

import org.webrtc.Logging; // import the required package/class
private void updateFrameDimensionsAndReportEvents(VideoRenderer.I420Frame frame) {
  synchronized (layoutLock) {
    if (frameWidth != frame.width || frameHeight != frame.height
        || frameRotation != frame.rotationDegree) {
      Logging.d(TAG, getResourceName() + "Reporting frame resolution changed to "
          + frame.width + "x" + frame.height + " with rotation " + frame.rotationDegree);
      if (rendererEvents != null) {
        rendererEvents.onFrameResolutionChanged(frame.width, frame.height, frame.rotationDegree);
      }
      frameWidth = frame.width;
      frameHeight = frame.height;
      frameRotation = frame.rotationDegree;
      post(new Runnable() {
        @Override public void run() {
          requestLayout();
        }
      });
    }
  }
}
 
Developer: angellsl10, Project: react-native-webrtc, Lines: 21, Source: SurfaceViewRenderer.java


Example 2: stopPlayout

import org.webrtc.Logging; // import the required package/class
private boolean stopPlayout() {
  Logging.d(TAG, "stopPlayout");
  assertTrue(audioThread != null);
  logUnderrunCount();
  audioThread.stopThread();

  final Thread aThread = audioThread;
  audioThread = null;
  if (aThread != null) {
    Logging.d(TAG, "Stopping the AudioTrackThread...");
    aThread.interrupt();
    if (!ThreadUtils.joinUninterruptibly(aThread, AUDIO_TRACK_THREAD_JOIN_TIMEOUT_MS)) {
      Logging.e(TAG, "Join of AudioTrackThread timed out.");
    }
    Logging.d(TAG, "AudioTrackThread has now been stopped.");
  }

  releaseAudioResources();
  return true;
}
 
Developer: Piasy, Project: AppRTC-Android, Lines: 21, Source: WebRtcAudioTrack.java


Example 3: getNativeOutputSampleRate

import org.webrtc.Logging; // import the required package/class
private int getNativeOutputSampleRate() {
  // Override this if we're running on an old emulator image which only
  // supports 8 kHz and doesn't support PROPERTY_OUTPUT_SAMPLE_RATE.
  if (WebRtcAudioUtils.runningOnEmulator()) {
    Logging.d(TAG, "Running emulator, overriding sample rate to 8 kHz.");
    return 8000;
  }
  // Default can be overridden by WebRtcAudioUtils.setDefaultSampleRateHz().
  // If so, use that value and return here.
  if (WebRtcAudioUtils.isDefaultSampleRateOverridden()) {
    Logging.d(TAG, "Default sample rate is overridden to "
            + WebRtcAudioUtils.getDefaultSampleRateHz() + " Hz");
    return WebRtcAudioUtils.getDefaultSampleRateHz();
  }
  // No overrides available. Deliver best possible estimate based on default
  // Android AudioManager APIs.
  final int sampleRateHz;
  if (WebRtcAudioUtils.runningOnJellyBeanMR1OrHigher()) {
    sampleRateHz = getSampleRateOnJellyBeanMR10OrHigher();
  } else {
    sampleRateHz = WebRtcAudioUtils.getDefaultSampleRateHz();
  }
  Logging.d(TAG, "Sample rate is set to " + sampleRateHz + " Hz");
  return sampleRateHz;
}
 
Developer: Piasy, Project: AppRTC-Android, Lines: 26, Source: WebRtcAudioManager.java


Example 4: startRecording

import org.webrtc.Logging; // import the required package/class
private boolean startRecording() {
  Logging.d(TAG, "startRecording");
  assertTrue(audioRecord != null);
  assertTrue(audioThread == null);
  try {
    audioRecord.startRecording();
  } catch (IllegalStateException e) {
    reportWebRtcAudioRecordStartError("AudioRecord.startRecording failed: " + e.getMessage());
    return false;
  }
  if (audioRecord.getRecordingState() != AudioRecord.RECORDSTATE_RECORDING) {
    reportWebRtcAudioRecordStartError("AudioRecord.startRecording failed - incorrect state :"
        + audioRecord.getRecordingState());
    return false;
  }
  audioThread = new AudioRecordThread("AudioRecordJavaThread");
  audioThread.start();
  return true;
}
 
Developer: lgyjg, Project: AndroidRTC, Lines: 20, Source: WebRtcAudioRecord.java


Example 5: surfaceCreated

import org.webrtc.Logging; // import the required package/class
@Override
public void surfaceCreated(final SurfaceHolder holder) {
  Logging.d(TAG, getResourceName() + "Surface created.");
  synchronized (layoutLock) {
    isSurfaceCreated = true;
  }
  tryCreateEglSurface();
}
 
Developer: angellsl10, Project: react-native-webrtc, Lines: 9, Source: SurfaceViewRenderer.java


Example 6: surfaceChanged

import org.webrtc.Logging; // import the required package/class
@Override
public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
  Logging.d(TAG, getResourceName() + "Surface changed: " + width + "x" + height);
  synchronized (layoutLock) {
    surfaceSize.x = width;
    surfaceSize.y = height;
  }
  // Might have a pending frame waiting for a surface of correct size.
  runOnRenderThread(renderFrameRunnable);
}
 
Developer: angellsl10, Project: react-native-webrtc, Lines: 11, Source: SurfaceViewRenderer.java


Example 7: renderFrame

import org.webrtc.Logging; // import the required package/class
@Override
synchronized public void renderFrame(VideoRenderer.I420Frame frame) {
  if (target == null) {
    Logging.d(TAG, "Dropping frame in proxy because target is null.");
    VideoRenderer.renderFrameDone(frame);
    return;
  }

  target.renderFrame(frame);
}
 
Developer: Piasy, Project: AppRTC-Android, Lines: 11, Source: CallActivity.java


Example 8: onFrame

import org.webrtc.Logging; // import the required package/class
@Override
synchronized public void onFrame(VideoFrame frame) {
  if (target == null) {
    Logging.d(TAG, "Dropping frame in proxy because target is null.");
    return;
  }

  target.onFrame(frame);
}
 
Developer: Piasy, Project: AppRTC-Android, Lines: 10, Source: CallActivity.java


Example 9: setSwappedFeeds

import org.webrtc.Logging; // import the required package/class
private void setSwappedFeeds(boolean isSwappedFeeds) {
  Logging.d(TAG, "setSwappedFeeds: " + isSwappedFeeds);
  this.isSwappedFeeds = isSwappedFeeds;
  localProxyVideoSink.setTarget(isSwappedFeeds ? fullscreenRenderer : pipRenderer);
  remoteProxyRenderer.setTarget(isSwappedFeeds ? pipRenderer : fullscreenRenderer);
  fullscreenRenderer.setMirror(isSwappedFeeds);
  pipRenderer.setMirror(!isSwappedFeeds);
}
 
Developer: Piasy, Project: AppRTC-Android, Lines: 9, Source: CallActivity.java


Example 10: WebRtcAudioTrack

import org.webrtc.Logging; // import the required package/class
WebRtcAudioTrack(long nativeAudioTrack) {
  Logging.d(TAG, "ctor" + WebRtcAudioUtils.getThreadInfo());
  this.nativeAudioTrack = nativeAudioTrack;
  audioManager =
      (AudioManager) ContextUtils.getApplicationContext().getSystemService(Context.AUDIO_SERVICE);
  if (DEBUG) {
    WebRtcAudioUtils.logDeviceInfo(TAG);
  }
}
 
Developer: Piasy, Project: AppRTC-Android, Lines: 10, Source: WebRtcAudioTrack.java


Example 11: logMainParametersExtended

import org.webrtc.Logging; // import the required package/class
@TargetApi(23)
private void logMainParametersExtended() {
  if (WebRtcAudioUtils.runningOnMarshmallowOrHigher()) {
    Logging.d(TAG, "AudioRecord: "
            // The frame count of the native AudioRecord buffer.
            + "buffer size in frames: " + audioRecord.getBufferSizeInFrames());
  }
}
 
Developer: lgyjg, Project: AndroidRTC, Lines: 9, Source: WebRtcAudioRecord.java


Example 12: stop

import org.webrtc.Logging; // import the required package/class
public void stop() {
    if (mVideoCapturer != null && !mVideoCapturerStopped) {
        Logging.d(TAG, "Stop video source.");
        try {
            mVideoCapturer.stopCapture();
        } catch (InterruptedException e) {
            Logging.e(TAG, "stop", e);
        }
        mVideoCapturerStopped = true;
    }
}
 
Developer: Piasy, Project: VideoCRE, Lines: 12, Source: VideoSource.java


Example 13: logMainParameters

import org.webrtc.Logging; // import the required package/class
private void logMainParameters() {
  Logging.d(TAG, "AudioTrack: "
          + "session ID: " + audioTrack.getAudioSessionId() + ", "
          + "channels: " + audioTrack.getChannelCount() + ", "
          + "sample rate: " + audioTrack.getSampleRate() + ", "
          // Gain (>=1.0) expressed as linear multiplier on sample values.
          + "max gain: " + audioTrack.getMaxVolume());
}
 
Developer: Piasy, Project: AppRTC-Android, Lines: 9, Source: WebRtcAudioTrack.java


Example 14: createAudioTrackOnLollipopOrHigher

import org.webrtc.Logging; // import the required package/class
@TargetApi(21)
private static AudioTrack createAudioTrackOnLollipopOrHigher(
    int sampleRateInHz, int channelConfig, int bufferSizeInBytes) {
  Logging.d(TAG, "createAudioTrackOnLollipopOrHigher");
  // TODO(henrika): use setPerformanceMode(int) with PERFORMANCE_MODE_LOW_LATENCY to control
  // performance when Android O is supported. Add some logging in the mean time.
  final int nativeOutputSampleRate =
      AudioTrack.getNativeOutputSampleRate(AudioManager.STREAM_VOICE_CALL);
  Logging.d(TAG, "nativeOutputSampleRate: " + nativeOutputSampleRate);
  if (sampleRateInHz != nativeOutputSampleRate) {
    Logging.w(TAG, "Unable to use fast mode since requested sample rate is not native");
  }
  if (usageAttribute != DEFAULT_USAGE) {
    Logging.w(TAG, "A non default usage attribute is used: " + usageAttribute);
  }
  // Create an audio track where the audio usage is for VoIP and the content type is speech.
  return new AudioTrack(
      new AudioAttributes.Builder()
          .setUsage(usageAttribute)
          .setContentType(AudioAttributes.CONTENT_TYPE_SPEECH)
          .build(),
      new AudioFormat.Builder()
          .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
          .setSampleRate(sampleRateInHz)
          .setChannelMask(channelConfig)
          .build(),
      bufferSizeInBytes,
      AudioTrack.MODE_STREAM,
      AudioManager.AUDIO_SESSION_ID_GENERATE);
}
 
Developer: Piasy, Project: AppRTC-Android, Lines: 31, Source: WebRtcAudioTrack.java


Example 15: releaseAudioResources

import org.webrtc.Logging; // import the required package/class
private void releaseAudioResources() {
  Logging.d(TAG, "releaseAudioResources");
  if (audioTrack != null) {
    audioTrack.release();
    audioTrack = null;
  }
}
 
Developer: Piasy, Project: AppRTC-Android, Lines: 8, Source: WebRtcAudioTrack.java


Example 16: isAcousticEchoCancelerBlacklisted

import org.webrtc.Logging; // import the required package/class
public static boolean isAcousticEchoCancelerBlacklisted() {
  List<String> blackListedModels = WebRtcAudioUtils.getBlackListedModelsForAecUsage();
  boolean isBlacklisted = blackListedModels.contains(Build.MODEL);
  if (isBlacklisted) {
    Logging.w(TAG, Build.MODEL + " is blacklisted for HW AEC usage!");
  }
  return isBlacklisted;
}
 
Developer: Piasy, Project: AppRTC-Android, Lines: 9, Source: WebRtcAudioEffects.java


Example 17: isNoiseSuppressorBlacklisted

import org.webrtc.Logging; // import the required package/class
public static boolean isNoiseSuppressorBlacklisted() {
  List<String> blackListedModels = WebRtcAudioUtils.getBlackListedModelsForNsUsage();
  boolean isBlacklisted = blackListedModels.contains(Build.MODEL);
  if (isBlacklisted) {
    Logging.w(TAG, Build.MODEL + " is blacklisted for HW NS usage!");
  }
  return isBlacklisted;
}
 
Developer: Piasy, Project: AppRTC-Android, Lines: 9, Source: WebRtcAudioEffects.java


Example 18: setAEC

import org.webrtc.Logging; // import the required package/class
public boolean setAEC(boolean enable) {
  Logging.d(TAG, "setAEC(" + enable + ")");
  if (!canUseAcousticEchoCanceler()) {
    Logging.w(TAG, "Platform AEC is not supported");
    shouldEnableAec = false;
    return false;
  }
  if (aec != null && (enable != shouldEnableAec)) {
    Logging.e(TAG, "Platform AEC state can't be modified while recording");
    return false;
  }
  shouldEnableAec = enable;
  return true;
}
 
Developer: Piasy, Project: AppRTC-Android, Lines: 15, Source: WebRtcAudioEffects.java


Example 19: setNS

import org.webrtc.Logging; // import the required package/class
public boolean setNS(boolean enable) {
  Logging.d(TAG, "setNS(" + enable + ")");
  if (!canUseNoiseSuppressor()) {
    Logging.w(TAG, "Platform NS is not supported");
    shouldEnableNs = false;
    return false;
  }
  if (ns != null && (enable != shouldEnableNs)) {
    Logging.e(TAG, "Platform NS state can't be modified while recording");
    return false;
  }
  shouldEnableNs = enable;
  return true;
}
 
Developer: Piasy, Project: AppRTC-Android, Lines: 15, Source: WebRtcAudioEffects.java


Example 20: release

import org.webrtc.Logging; // import the required package/class
public void release() {
  Logging.d(TAG, "release");
  if (aec != null) {
    aec.release();
    aec = null;
  }
  if (ns != null) {
    ns.release();
    ns = null;
  }
}
 
Developer: Piasy, Project: AppRTC-Android, Lines: 12, Source: WebRtcAudioEffects.java



Note: The org.webrtc.Logging examples in this article were compiled from source code and documentation platforms such as GitHub and MSDocs. The snippets are selected from open-source projects contributed by their respective developers; copyright of the source code remains with the original authors, and any distribution or use should follow the corresponding project's license. Please do not reproduce without permission.

