This article collects typical usage examples of the Java class com.xuggle.xuggler.IAudioSamples. If you are wondering what the IAudioSamples class is for, how to use it, or want to see it in real code, the curated examples below should help.
The IAudioSamples class belongs to the com.xuggle.xuggler package. Fourteen code examples of the class are shown below, sorted by popularity.
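Before the examples, here is a minimal, hedged sketch of the most common pattern: an IMediaReader decodes a media file and delivers each decoded chunk of PCM data to a listener as an IAudioSamplesEvent wrapping an IAudioSamples. The file name audio.mp3 and the class name DumpSamples are placeholders, not taken from any of the projects below.
import java.util.concurrent.TimeUnit;
import com.xuggle.mediatool.IMediaReader;
import com.xuggle.mediatool.MediaListenerAdapter;
import com.xuggle.mediatool.ToolFactory;
import com.xuggle.mediatool.event.IAudioSamplesEvent;
import com.xuggle.xuggler.IAudioSamples;

public class DumpSamples {
    public static void main(String[] args) {
        IMediaReader reader = ToolFactory.makeReader("audio.mp3"); // placeholder path
        reader.addListener(new MediaListenerAdapter() {
            @Override
            public void onAudioSamples(IAudioSamplesEvent event) {
                // each event carries one decoded chunk of PCM samples
                IAudioSamples samples = event.getAudioSamples();
                long ms = TimeUnit.MILLISECONDS.convert(event.getTimeStamp(), event.getTimeUnit());
                System.out.println(samples.getNumSamples() + " samples @ " + samples.getSampleRate()
                        + " Hz, " + samples.getChannels() + " channel(s), t=" + ms + " ms");
            }
        });
        // readPacket() returns null while decoding succeeds and a non-null IError at end of file
        while (reader.readPacket() == null) {
            // keep decoding
        }
    }
}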
Example 1: onAudioSamples
import com.xuggle.xuggler.IAudioSamples; // import the required package/class
@Override
public void onAudioSamples(IAudioSamplesEvent event) {
    IAudioSamples samples = event.getAudioSamples();
    if (audioResampler == null) {
        // resample to mono at 16 kHz, using the source channel count and sample rate as the input side
        audioResampler = IAudioResampler.make(1, samples.getChannels(), 16000, samples.getSampleRate());
    }
    if (event.getAudioSamples().getNumSamples() > 0) {
        IAudioSamples out = IAudioSamples.make(samples.getNumSamples(), samples.getChannels());
        audioResampler.resample(out, samples, samples.getNumSamples());
        AudioSamplesEvent asc = new AudioSamplesEvent(event.getSource(), out, event.getStreamIndex());
        super.onAudioSamples(asc);
        out.delete();
    }
}
Author: sumansaurabh, Project: AudioTranscoder, Lines: 18, Source: ConvertAudio.java
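As a hedged usage sketch (the ConvertAudio class name comes from the source file above, but the no-argument constructor and the input/output paths are assumptions), an adapter like this is typically chained between an IMediaReader and an IMediaWriter:
IMediaReader reader = ToolFactory.makeReader("input.mp4");          // placeholder input
IMediaWriter writer = ToolFactory.makeWriter("output.mp3", reader); // placeholder output
ConvertAudio resampleTool = new ConvertAudio();  // the adapter shown in Example 1
reader.addListener(resampleTool);                // decoded samples flow into the adapter
resampleTool.addListener(writer);                // resampled samples flow on to the writer
while (reader.readPacket() == null) {
    // keep reading until end of file or an error occurs
}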
Example 2: openJavaSound
import com.xuggle.xuggler.IAudioSamples; // import the required package/class
private static void openJavaSound(IStreamCoder aAudioCoder) {
    AudioFormat audioFormat = new AudioFormat(aAudioCoder.getSampleRate(),
            (int) IAudioSamples.findSampleBitDepth(aAudioCoder.getSampleFormat()),
            aAudioCoder.getChannels(),
            true, /* xuggler defaults to signed 16 bit samples */
            false);
    DataLine.Info info = new DataLine.Info(SourceDataLine.class, audioFormat);
    try {
        mLine = (SourceDataLine) AudioSystem.getLine(info);
        // if that succeeded, try opening the line
        mLine.open(audioFormat);
        // and if that succeeds, start the line
        mLine.start();
    } catch (LineUnavailableException e) {
        throw new RuntimeException("could not open audio line");
    }
}
Author: johnmans, Project: EnTax, Lines: 25, Source: HelpForm.java
Example 3: queueAudio
import com.xuggle.xuggler.IAudioSamples; // import the required package/class
/**
 * Queues the audio data received from Xuggler.
 *
 * @param samples audio data to queue
 * @param timeStamp time stamp of the samples
 * @param timeUnit unit of the time stamp
 */
public void queueAudio(IAudioSamples samples, long timeStamp, TimeUnit timeUnit) {
    log.trace("Queue audio");
    // convert from IAudioSamples to a short array
    ByteBuffer buf = samples.getByteBuffer();
    byte[] decoded = new byte[buf.limit()];
    buf.get(decoded);
    buf.flip();
    short[] isamples = BufferUtils.byteToShortArray(decoded, 0, decoded.length, true);
    // queue them up for writing
    dataQueue.add(new QueuedAudioData(isamples, timeStamp, timeUnit));
    // make a copy for the group mux, if one exists
    if (mux != null) {
        mux.pushData(streamName, isamples);
    }
}
Author: Red5, Project: red5-hls-plugin, Lines: 23, Source: SegmentFacade.java
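BufferUtils.byteToShortArray comes from the Red5 utilities, not from Xuggler. For readers without Red5 on the classpath, a minimal equivalent could look like the sketch below; the meaning of the final boolean (little-endian byte order) is an assumption based on how the plugin uses it.
// hypothetical helper mirroring what the plugin's BufferUtils call appears to do
public static short[] byteToShortArray(byte[] in, int offset, int length, boolean littleEndian) {
    short[] out = new short[length / 2];
    for (int i = 0; i < out.length; i++) {
        int lo = in[offset + 2 * i] & 0xff;
        int hi = in[offset + 2 * i + 1] & 0xff;
        // combine two bytes into one signed 16-bit sample
        out[i] = littleEndian ? (short) ((hi << 8) | lo) : (short) ((lo << 8) | hi);
    }
    return out;
}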
Example 4: initialize
import com.xuggle.xuggler.IAudioSamples; // import the required package/class
/**
 * Initializes the reader thread with the given media.
 * @param container the media container
 * @param videoCoder the media video decoder
 * @param audioCoder the media audio decoder
 * @param audioConversions the flag(s) for any audio conversions that must take place
 */
public void initialize(IContainer container, IStreamCoder videoCoder, IStreamCoder audioCoder, int audioConversions) {
    // assign the local variables
    this.outputWidth = 0;
    this.outputHeight = 0;
    this.videoConversionEnabled = false;
    this.scale = false;
    this.container = container;
    this.videoCoder = videoCoder;
    this.audioCoder = audioCoder;
    this.audioConversions = audioConversions;
    // create a packet for reading
    this.packet = IPacket.make();
    // create the image converter for the video
    if (videoCoder != null) {
        this.width = this.videoCoder.getWidth();
        this.height = this.videoCoder.getHeight();
        IPixelFormat.Type type = this.videoCoder.getPixelType();
        this.picture = IVideoPicture.make(type, this.width, this.height);
        BufferedImage target = new BufferedImage(this.width, this.height, BufferedImage.TYPE_3BYTE_BGR);
        this.videoConverter = ConverterFactory.createConverter(target, type);
    }
    // create a reusable container for the samples
    if (audioCoder != null) {
        this.samples = IAudioSamples.make(1024, this.audioCoder.getChannels());
    }
}
Author: wnbittle, Project: praisenter, Lines: 37, Source: XugglerMediaReaderThread.java
Example 5: getDataTypeForFormat
import com.xuggle.xuggler.IAudioSamples; // import the required package/class
/**
 * Returns the Java number type that holds one sample of the given format.
 * @param format the format
 * @return Class<? extends Number>
 * @since 2.0.1
 */
public static final Class<? extends Number> getDataTypeForFormat(IAudioSamples.Format format) {
    if (format == Format.FMT_DBL || format == Format.FMT_DBLP) {
        return Double.class;
    } else if (format == Format.FMT_FLT || format == Format.FMT_FLTP) {
        return Float.class;
    } else if (format == Format.FMT_S16 || format == Format.FMT_S16P) {
        return Short.class;
    } else if (format == Format.FMT_S32 || format == Format.FMT_S32P) {
        return Integer.class;
    } else if (format == Format.FMT_U8 || format == Format.FMT_U8P) {
        return Byte.class;
    }
    return null;
}
Author: wnbittle, Project: praisenter, Lines: 21, Source: XugglerAudioData.java
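A brief, hedged usage example of the helper above (the surrounding decode context is assumed): it makes it easy to branch on how wide each decoded sample is.
static void inspect(IAudioSamples samples) {
    Class<? extends Number> type = XugglerAudioData.getDataTypeForFormat(samples.getFormat());
    if (type == Short.class) {
        // 16-bit signed PCM: view the buffer as shorts
        ShortBuffer pcm = samples.getByteBuffer().asShortBuffer();
        System.out.println("16-bit samples: " + pcm.limit());
    } else if (type == Float.class) {
        // 32-bit float PCM: view the buffer as floats
        FloatBuffer pcm = samples.getByteBuffer().asFloatBuffer();
        System.out.println("float samples: " + pcm.limit());
    }
}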
Example 6: onAudioSamples
import com.xuggle.xuggler.IAudioSamples; // import the required package/class
/**
 * {@inheritDoc}
 *
 * @see com.xuggle.mediatool.MediaToolAdapter#onAudioSamples(com.xuggle.mediatool.event.IAudioSamplesEvent)
 */
@Override
public void onAudioSamples(final IAudioSamplesEvent event)
{
    // get the samples
    final IAudioSamples aSamples = event.getAudioSamples();
    final byte[] rawBytes = aSamples.getData().getByteArray(0, aSamples.getSize());
    XuggleAudio.this.currentSamples.setSamples(rawBytes);
    // set the timecode of these samples
    // double timestampMillisecs = rawBytes.length / format.getNumChannels() / format.getSampleRateKHz();
    final long timestampMillisecs = TimeUnit.MILLISECONDS.convert(
            event.getTimeStamp().longValue(), event.getTimeUnit());
    XuggleAudio.this.currentTimecode.setTimecodeInMilliseconds(timestampMillisecs);
    XuggleAudio.this.currentSamples.setStartTimecode(XuggleAudio.this.currentTimecode);
    XuggleAudio.this.currentSamples.getFormat().setNumChannels(XuggleAudio.this.getFormat().getNumChannels());
    XuggleAudio.this.currentSamples.getFormat().setSigned(XuggleAudio.this.getFormat().isSigned());
    XuggleAudio.this.currentSamples.getFormat().setBigEndian(XuggleAudio.this.getFormat().isBigEndian());
    XuggleAudio.this.currentSamples.getFormat().setSampleRateKHz(XuggleAudio.this.getFormat().getSampleRateKHz());
    XuggleAudio.this.chunkAvailable = true;
}
Author: openimaj, Project: openimaj, Lines: 42, Source: XuggleAudio.java
Example 7: playJavaSound
import com.xuggle.xuggler.IAudioSamples; // import the required package/class
private static void playJavaSound(IAudioSamples aSamples) {
    // dump all of the samples straight into the line
    byte[] rawBytes = aSamples.getData().getByteArray(0, aSamples.getSize());
    mLine.write(rawBytes, 0, aSamples.getSize());
}
Author: johnmans, Project: EnTax, Lines: 8, Source: HelpForm.java
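Examples 2 and 7 come from the same class; a hedged sketch of how they might fit together (the listener, the lazy-open logic, and the file path are assumptions; mLine is the static field the two methods share):
IMediaReader reader = ToolFactory.makeReader("music.mp3"); // placeholder path
reader.addListener(new MediaListenerAdapter() {
    private boolean opened = false;
    @Override
    public void onAudioSamples(IAudioSamplesEvent event) {
        if (!opened) {
            // look up the audio decoder on the reader and open the sound line once
            IStreamCoder coder = reader.getContainer()
                    .getStream(event.getStreamIndex()).getStreamCoder();
            openJavaSound(coder);                  // Example 2: open a matching SourceDataLine
            opened = true;
        }
        playJavaSound(event.getAudioSamples());    // Example 7: write the PCM to the line
    }
});
while (reader.readPacket() == null) {
    // decode and play until the file ends
}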
Example 8: readXugglerFlac
import com.xuggle.xuggler.IAudioSamples; // import the required package/class
private void readXugglerFlac(Binary binary, AudioFormat format, File file) {
    IMediaListener myListener = new MediaListenerAdapter() {
        public void onOpen(IMediaGenerator pipe) {
            log.info("opened: " + ((IMediaReader) pipe).getUrl());
        }

        @Override
        public void onAudioSamples(IAudioSamplesEvent event) {
            IAudioSamples samples = event.getAudioSamples();
            // log.info("onaudiosamples " + samples.getNumSamples() + " fs:" + getFS().length() + " " + samples);
            ShortBuffer sb = samples.getByteBuffer().asShortBuffer();
            // normalize each 16-bit sample to [-1, 1] until the expected sample count is reached
            for (int i = 0; i < sb.limit() && getFS().length() < audioinfo.getSampleCount(); i++) {
                short num = sb.get(i);
                getFS().add(1.0f * num / Short.MAX_VALUE);
            }
            super.onAudioSamples(event);
        }
    };
    IMediaReader r = ToolFactory.makeReader(file.getAbsolutePath());
    r.addListener(myListener);
    // readPacket() returns a non-null IError on failure or at end of stream
    while (true) {
        IError p = r.readPacket();
        if (p != null) {
            break;
        }
    }
    log.info("read flac done");
}
Author: jeukku, Project: waazdoh.music.common, Lines: 37, Source: MWave.java
Example 9: QueuedAudioData
import com.xuggle.xuggler.IAudioSamples; // import the required package/class
@SuppressWarnings("unused")
QueuedAudioData(IAudioSamples isamples, long timeStamp, TimeUnit timeUnit) {
    // copy the raw bytes out of the native buffer and convert them to 16-bit samples
    ByteBuffer buf = isamples.getByteBuffer();
    byte[] decoded = new byte[buf.limit()];
    buf.get(decoded);
    buf.flip();
    this.samples = BufferUtils.byteToShortArray(decoded, 0, decoded.length, true);
    this.timeStamp = timeStamp;
    this.timeUnit = timeUnit;
}
Author: Red5, Project: red5-hls-plugin, Lines: 11, Source: SegmentFacade.java
Example 10: onAudioSamples
import com.xuggle.xuggler.IAudioSamples; // import the required package/class
/** {@inheritDoc} */
@Override
public void onAudioSamples(IAudioSamplesEvent event) {
    IAudioSamples samples = event.getAudioSamples();
    if (samples.getSampleRate() != rate || samples.getChannels() != channels) {
        log.debug("SampleRateAdjustTool onAudioSamples");
        if (resampler == null) {
            // http://build.xuggle.com/view/Stable/job/xuggler_jdk5_stable/javadoc/java/api/com/xuggle/xuggler/IAudioResampler.html
            resampler = IAudioResampler.make(channels, samples.getChannels(), rate, samples.getSampleRate(), IAudioSamples.Format.FMT_S16, samples.getFormat());
            log.info("Resampled formats - input: {} output: {}", resampler.getInputFormat(), resampler.getOutputFormat());
        }
        long sampleCount = samples.getNumSamples();
        if (resampler != null && sampleCount > 0) {
            log.trace("In - samples: {} rate: {} channels: {}", sampleCount, samples.getSampleRate(), samples.getChannels());
            IAudioSamples out = IAudioSamples.make(sampleCount, channels);
            resampler.resample(out, samples, sampleCount);
            log.trace("Out - samples: {} rate: {} channels: {}", out.getNumSamples(), out.getSampleRate(), out.getChannels());
            // queue audio
            facade.queueAudio(out, event.getTimeStamp(), event.getTimeUnit());
            //out.delete();
            samples.delete();
        } else {
            facade.queueAudio(samples, event.getTimeStamp(), event.getTimeUnit());
        }
        log.debug("SampleRateAdjustTool onAudioSamples - end");
    } else {
        facade.queueAudio(samples, event.getTimeStamp(), event.getTimeUnit());
    }
}
Author: Red5, Project: red5-hls-plugin, Lines: 30, Source: SampleRateAdjustTool.java
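For reference, the six-argument factory used above lists the output parameters before the input ones. A minimal standalone sketch of the same kind of conversion (class, method name, and the target rate/layout are mine, not from the plugin):
import com.xuggle.xuggler.IAudioResampler;
import com.xuggle.xuggler.IAudioSamples;

public final class ResampleSketch {
    /** Converts a decoded chunk to 44.1 kHz stereo signed 16-bit PCM. */
    public static IAudioSamples toStereo44k(IAudioSamples in) {
        IAudioResampler r = IAudioResampler.make(
                2, in.getChannels(),                           // output channels, input channels
                44100, in.getSampleRate(),                     // output rate, input rate
                IAudioSamples.Format.FMT_S16, in.getFormat()); // output format, input format
        // rough upper bound on the number of output frames, plus a little headroom
        long outCount = in.getNumSamples() * 44100L / in.getSampleRate() + 64;
        IAudioSamples out = IAudioSamples.make(outCount, 2);
        r.resample(out, in, in.getNumSamples());
        return out;
    }
}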
Example 11: open
import com.xuggle.xuggler.IAudioSamples; // import the required package/class
@SuppressWarnings("deprecation")
private void open(URLVideoSource src) throws IOException {
    try {
        if ("file".equals(src.getURL().getProtocol())) {
            File f = new File(src.getURL().toURI());
            if (container.open(f.getAbsolutePath(), IContainer.Type.READ, null) < 0)
                throw new IOException("could not open " + f.getAbsolutePath());
        } else {
            String urlStr = TextUtilities.toString(src.getURL());
            if (container.open(urlStr, IContainer.Type.READ, null) < 0)
                throw new IOException("could not open " + urlStr);
        }
    } catch (URISyntaxException e) {
        throw new IOException(e);
    }
    // query how many streams the call to open found
    int numStreams = container.getNumStreams();
    // and iterate through the streams to find the first video and audio streams
    int videoStreamId = -1;
    int audioStreamId = -1;
    for (int i = 0; i < numStreams; i++) {
        // find the stream object
        IStream stream = container.getStream(i);
        // get the pre-configured decoder that can decode this stream
        IStreamCoder coder = stream.getStreamCoder();
        if (videoStreamId == -1 && coder.getCodecType() == ICodec.Type.CODEC_TYPE_VIDEO) {
            videoStreamId = i;
            videoStream = stream;
            videoCoder = coder;
        } else if (audioStreamId == -1 && coder.getCodecType() == ICodec.Type.CODEC_TYPE_AUDIO) {
            audioStreamId = i;
            audioStream = stream;
            audioCoder = coder;
            audioFormat = new AudioFormat(
                    audioCoder.getSampleRate(),
                    (int) IAudioSamples.findSampleBitDepth(audioCoder.getSampleFormat()),
                    audioCoder.getChannels(),
                    true, /* xuggler defaults to signed 16 bit samples */
                    false);
        }
    }
    if (videoStreamId == -1 && audioStreamId == -1)
        throw new IOException("could not find audio or video stream in container in " + src);
    /*
     * Check if we have a video stream in this file. If so, open the decoder so it can do work.
     */
    if (videoCoder != null) {
        if (videoCoder.open() < 0)
            throw new IOException("could not open video decoder for container " + src);
        if (videoCoder.getPixelType() != IPixelFormat.Type.RGB24) {
            resampler = IVideoResampler.make(
                    videoCoder.getWidth(), videoCoder.getHeight(),
                    IPixelFormat.Type.RGB24,
                    videoCoder.getWidth(), videoCoder.getHeight(),
                    videoCoder.getPixelType());
            if (resampler == null)
                throw new IOException("could not create color space resampler for " + src);
        }
    }
    if (audioCoder != null) {
        if (audioCoder.open() < 0)
            throw new IOException("could not open audio decoder for container: " + src);
    }
    decoderThread = new Thread(this, src.getURL().toString());
    decoderThread.setPriority(Thread.MIN_PRIORITY);
    decoderThread.setDaemon(true);
    doDecode.set(true);
    decoderThread.start();
}
Author: arisona, Project: ether, Lines: 76, Source: XuggleAccess.java
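After open() returns, a decode loop normally pulls packets from the container and feeds the audio packets to the opened audioCoder. The sketch below reuses the fields set up in the example (container, audioCoder, audioStream); the rest is an assumption about how such a loop might look, not the project's actual run() method.
IPacket packet = IPacket.make();
IAudioSamples samples = IAudioSamples.make(1024, audioCoder.getChannels());
while (container.readNextPacket(packet) >= 0) {
    if (packet.getStreamIndex() == audioStream.getIndex()) {
        int offset = 0;
        // a single packet may decode into more than one chunk of samples
        while (offset < packet.getSize()) {
            int bytesDecoded = audioCoder.decodeAudio(samples, packet, offset);
            if (bytesDecoded < 0) {
                break; // decoding error, skip the rest of this packet
            }
            offset += bytesDecoded;
            if (samples.isComplete()) {
                // hand the PCM data on, e.g. to a SourceDataLine or a queue
            }
        }
    }
}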
Example 12: initialize
import com.xuggle.xuggler.IAudioSamples; // import the required package/class
/**
 * Initializes this audio player thread with the given audio coder information.
 * <p>
 * Returns an integer representing any conversions that must be performed on the audio data.
 * @param audioCoder the audio coder
 * @return int
 */
public int initialize(IStreamCoder audioCoder) {
    if (this.line != null) {
        this.line.close();
        this.line = null;
    }
    // make sure the given audio coder is not null;
    // this can happen with media that has no audio track
    if (audioCoder != null) {
        int result = XugglerAudioData.CONVERSION_NONE;
        int sampleRate = audioCoder.getSampleRate();
        int bitDepth = (int) IAudioSamples.findSampleBitDepth(audioCoder.getSampleFormat());
        int channels = audioCoder.getChannels();
        // attempt to use the media's audio format
        AudioFormat format = new AudioFormat(sampleRate, bitDepth, channels, true, false);
        DataLine.Info info = new DataLine.Info(SourceDataLine.class, format);
        // see if it's supported
        if (!AudioSystem.isLineSupported(info)) {
            // if it's not supported, this is typically due to the number of playback channels,
            // so try the same format with just 2 channels (stereo)
            if (channels > 2) {
                format = new AudioFormat(sampleRate, bitDepth, 2, true, false);
                info = new DataLine.Info(SourceDataLine.class, format);
                // check if it's supported
                if (AudioSystem.isLineSupported(info)) {
                    // flag that downmixing must take place
                    result |= XugglerAudioData.CONVERSION_TO_STEREO;
                }
            }
            // if it's still not supported, check the bit depth
            if (!AudioSystem.isLineSupported(info)) {
                // otherwise it could be due to the bit depth;
                // use either the original audio format or the downmixed format
                AudioFormat source = format;
                // try to see if converting it to 16 bit will work
                AudioFormat target = new AudioFormat(sampleRate, 16, format.getChannels(), true, false);
                if (AudioSystem.isConversionSupported(target, source)) {
                    // set up the line
                    info = new DataLine.Info(SourceDataLine.class, target);
                    format = target;
                    // flag that a bit depth conversion must take place
                    result |= XugglerAudioData.CONVERSION_TO_BIT_DEPTH_16;
                } else {
                    // if we still can't get it supported, just give up and log a message
                    LOGGER.warn("The audio format is not supported by JavaSound and could not be converted: " + format);
                    this.line = null;
                    return -1;
                }
            }
        }
        try {
            // create and open JavaSound
            this.line = (SourceDataLine) AudioSystem.getLine(info);
            this.line.open(format);
            this.line.start();
            return result;
        } catch (LineUnavailableException e) {
            // if a line isn't available then don't play any sound and just continue normally
            LOGGER.error("Line not available for audio playback: ", e);
            this.line = null;
            return -1;
        }
    }
    return -1;
}
Author: wnbittle, Project: praisenter, Lines: 79, Source: XugglerAudioPlayerThread.java
Example 13: onAudioSamples
import com.xuggle.xuggler.IAudioSamples; // import the required package/class
public void onAudioSamples(IAudioSamplesEvent event) {
    IAudioSamples samples = event.getAudioSamples();
    // set the new time stamp to the original plus the offset established for this media file
    long newTimeStamp = samples.getTimeStamp() + mOffset;
    // keep track of the predicted time of the next audio samples; if the end of the media file
    // is encountered, the offset will be adjusted to this time
    mNextAudio = samples.getNextPts();
    // set the new timestamp on the audio samples
    samples.setTimeStamp(newTimeStamp);
    // create a new audio samples event with the one true audio stream index
    super.onAudioSamples(new AudioSamplesEvent(this, samples, mAudoStreamIndex));
}
Author: destiny1020, Project: java-learning-notes-cn, Lines: 25, Source: ConcatenateAudioAndVideo.java
Example 14: addAudioStream
import com.xuggle.xuggler.IAudioSamples; // import the required package/class
/**
 * Adds an audio stream. The time base defaults to {@link #DEFAULT_TIMEBASE} and the audio format defaults to
 * {@link #DEFAULT_SAMPLE_FORMAT}. The index of the new {@link IStream} is returned to provide an easy way to
 * further configure the stream.
 *
 * @param streamId a format-dependent id for this stream
 * @param codec the codec used to encode data; to establish the codec see {@link com.xuggle.xuggler.ICodec}
 * @param channelCount the number of audio channels for the stream
 * @param sampleRate sample rate in Hz (samples per second); common values are 44100, 22050, 11025, etc.
 *
 * @return audio stream index
 *
 * @throws IllegalArgumentException if the stream id < 0, the codec is null, or the container is already open
 * @throws IllegalArgumentException if channelCount or sampleRate are <= 0
 *
 * @see IContainer
 * @see IStream
 * @see IStreamCoder
 * @see ICodec
 */
@SuppressWarnings("deprecation")
public int addAudioStream(int streamId, ICodec codec, int channelCount, int sampleRate) {
    log.debug("addAudioStream {}", outputUrl);
    // validate parameters
    if (channelCount <= 0) {
        throw new IllegalArgumentException("Invalid channel count " + channelCount);
    }
    if (sampleRate <= 0) {
        throw new IllegalArgumentException("Invalid sample rate " + sampleRate);
    }
    // add the new stream at the correct index
    audioStream = container.addNewStream(streamId);
    if (audioStream == null) {
        throw new RuntimeException("Unable to create stream id " + streamId + ", codec " + codec);
    }
    // configure the stream coder
    audioCoder = audioStream.getStreamCoder();
    audioCoder.setStandardsCompliance(IStreamCoder.CodecStandardsCompliance.COMPLIANCE_EXPERIMENTAL);
    audioCoder.setCodec(codec);
    audioCoder.setTimeBase(IRational.make(1, sampleRate));
    audioCoder.setChannels(channelCount);
    audioCoder.setSampleRate(sampleRate);
    audioCoder.setSampleFormat(IAudioSamples.Format.FMT_S16);
    // pick a bit rate appropriate for the sample rate and channel layout
    switch (sampleRate) {
        case 44100:
            if (channelCount == 2) {
                audioCoder.setBitRate(128000);
            } else {
                audioCoder.setBitRate(64000);
            }
            break;
        case 22050:
            if (channelCount == 2) {
                audioCoder.setBitRate(96000);
            } else {
                audioCoder.setBitRate(48000);
            }
            break;
        default:
            audioCoder.setBitRate(32000);
            break;
    }
    audioCoder.setBitRateTolerance((int) (audioCoder.getBitRate() / 2));
    audioCoder.setGlobalQuality(0);
    log.trace("Bitrate: {} tolerance: {}", audioCoder.getBitRate(), audioCoder.getBitRateTolerance());
    log.trace("Time base: {} sample rate: {} stereo: {}", audioCoder.getTimeBase(), sampleRate, channelCount > 1);
    log.debug("Added:\n{}", audioStream);
    // return the new audio stream index
    return audioStream.getIndex();
}
Author: Red5, Project: red5-hls-plugin, Lines: 70, Source: HLSStreamWriter.java
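A hedged usage sketch of the method above (the HLSStreamWriter constructor and the codec choice are assumptions; only the addAudioStream signature is taken from the example):
// register a stereo 44.1 kHz AAC audio stream on the writer
HLSStreamWriter writer = new HLSStreamWriter(); // constructor arguments, if any, omitted here
ICodec aac = ICodec.findEncodingCodec(ICodec.ID.CODEC_ID_AAC);
int audioIndex = writer.addAudioStream(0, aac, 2, 44100);
System.out.println("audio stream registered at index " + audioIndex);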
Note: the com.xuggle.xuggler.IAudioSamples examples in this article were collected from source-code and documentation platforms such as GitHub and MSDocs. The snippets come from open-source projects contributed by their respective authors; copyright remains with the original authors, and distribution and use should follow each project's license. Do not reproduce without permission.