No audible audio in WebM files muxed with libwebm (VP8/Opus) -- syncing audio -- C++

I am trying to create a very simple WebM (VP8/Opus) encoder, but I cannot get the audio to work.
ffprobe does detect the file format and duration:
Stream #1:0(eng): Audio: opus, 48000 Hz, mono, fltp (default)
The video plays fine in VLC and Chrome, but with no audio; for some reason the reported audio input bitrate is always 0.
Most of the audio encoding code was adapted from
https://github.com/fnordware/AdobeWebM/blob/master/src/premiere/WebM_Premiere_Export.cpp
Here is the relevant code:
static const long long kTimeScale = 1000000000LL;
MkvWriter writer;
writer.Open("video.webm");
Segment mux_seg;
mux_seg.Init(&writer);
// VPX encoding...
int16_t pcm[SAMPLES];
uint64_t audio_track_id = mux_seg.AddAudioTrack(SAMPLE_RATE, 1, 0);
mkvmuxer::AudioTrack *audioTrack = (mkvmuxer::AudioTrack*)mux_seg.GetTrackByNumber(audio_track_id);
audioTrack->set_codec_id(mkvmuxer::Tracks::kOpusCodecId);
audioTrack->set_seek_pre_roll(80000000);
OpusEncoder *encoder = opus_encoder_create(SAMPLE_RATE, 1, OPUS_APPLICATION_AUDIO, NULL);
opus_encoder_ctl(encoder, OPUS_SET_BITRATE(64000));
opus_int32 skip = 0;
opus_encoder_ctl(encoder, OPUS_GET_LOOKAHEAD(&skip));
audioTrack->set_codec_delay(skip * kTimeScale / SAMPLE_RATE);
mux_seg.CuesTrack(audio_track_id);
uint64_t currentAudioSample = 0;
uint64_t opus_ts = 0;
while (has_frame) {
    int bytes = opus_encode(encoder, pcm, SAMPLES, out, SAMPLES * 8);
    opus_ts = currentAudioSample * kTimeScale / SAMPLE_RATE;
    mux_seg.AddFrame(out, bytes, audio_track_id, opus_ts, true);
    currentAudioSample += SAMPLES;
}
opus_encoder_destroy(encoder);
mux_seg.Finalize();
writer.Close();
Update #1:
It seems that the problem is that WebM requires the audio and video tracks to be interleaved.
However, I cannot figure out how to sync the audio.
Should I calculate the frame duration, then encode the equivalent number of audio samples?

The problem was that I was missing the OpusHead header data (the Ogg-style identification header that goes in the track's CodecPrivate), and the audio frame timestamps were not accurate.
To complete the answer, here is pseudocode for the encoder.
const int kTicksPerSecond = 1000000000; // WebM timescale (nanoseconds per second)
const int kTimeScale = kTicksPerSecond / FPS; // duration of one video frame
init_opus_encoder();
audioTrack->set_seek_pre_roll(80000000);
audioTrack->set_codec_delay(opus_preskip);
audioTrack->SetCodecPrivate(ogg_header_data, ogg_header_size);
while (has_video_frame) {
    encode_vpx_frame();
    video_pts = frame_index * kTimeScale;
    muxer_segment.AddFrame(frame_packet_data, packet_length, video_track_id, video_pts, packet_flags);
    // fill the gap until the next video frame with Opus audio samples
    while (audio_pts < video_pts + kTimeScale) {
        encode_opus_frame();
        muxer_segment.AddFrame(opus_frame_data, opus_frame_data_length, audio_track_id, audio_pts, true /* keyframe */);
        curr_audio_samples += 960; // one 20 ms Opus frame at 48 kHz
        audio_pts = curr_audio_samples * kTicksPerSecond / 48000;
    }
}
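For reference, the header data WebM needs for Opus is the 19-byte OpusHead structure from RFC 7845, stored in the track's CodecPrivate. A minimal builder might look like this (makeOpusHead is a hypothetical helper, not a libwebm or libopus function):

```cpp
#include <cstdint>
#include <vector>

// Build the 19-byte OpusHead structure that WebM expects in CodecPrivate.
// preSkip is in 48 kHz samples (the value reported by OPUS_GET_LOOKAHEAD).
std::vector<uint8_t> makeOpusHead(uint8_t channels, uint16_t preSkip,
                                  uint32_t inputSampleRate) {
    std::vector<uint8_t> h;
    const char magic[8] = {'O','p','u','s','H','e','a','d'};
    h.insert(h.end(), magic, magic + 8);
    h.push_back(1);                          // version
    h.push_back(channels);                   // channel count
    h.push_back(preSkip & 0xFF);             // pre-skip, little-endian
    h.push_back(preSkip >> 8);
    for (int i = 0; i < 4; ++i)              // original input sample rate, LE
        h.push_back((inputSampleRate >> (8 * i)) & 0xFF);
    h.push_back(0); h.push_back(0);          // output gain (Q7.8), 0 dB
    h.push_back(0);                          // channel mapping family 0
    return h;
}
```

The result would then be passed as `audioTrack->SetCodecPrivate(head.data(), head.size());`, with preSkip taken from OPUS_GET_LOOKAHEAD.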

Related

IMFSourceReader M4A Audio Accurate Frame Seek

I'm using IMFSourceReader to continuously buffer 1-second portions of audio files from disk. I'm unable to accurately seek M4A audio data (AAC encoded), and this results in a discontinuous audio stream.
I'm aware that the data returned by IMFSourceReader.Read() is usually offset by a few hundred frames into the past relative to the position set in IMFSourceReader.SetCurrentPosition(). However, even accounting for this offset, I'm unable to create a continuous, glitch-free stream (see the readCall == 0 condition).
I am able to accurately seek portions of WAV files (uncompressed) so my offset calculation appears to be correct.
My question is whether the Media Foundation library is able to accurately seek/read portions of AAC encoded M4A files (or any compressed audio for that matter)?
Here's the code. inStartFrame is the sample frame I'm trying to read. Output format is configured as 32bit floating point data (see final function). To trim it down a little I've removed some error checks and cleanup e.g. end of file.
bool WindowsM4AReader::read(float** outBuffer, int inNumChannels, int64_t inStartFrame, int64_t inNumFramesToRead)
{
    int64_t hnsToRequest = SampleFrameToHNS(inStartFrame);
    int64_t frameRequested = HNSToSampleFrame(hnsToRequest);
    PROPVARIANT positionProp;
    positionProp.vt = VT_I8;
    positionProp.hVal.QuadPart = hnsToRequest;
    HRESULT hr = mReader->SetCurrentPosition(GUID_NULL, positionProp);
    mReader->Flush(0);
    IMFSample* pSample = nullptr;
    int bytesPerFrame = sizeof(float) * mNumChannels;
    int64_t totalFramesWritten = 0;
    int64_t remainingFrames = inNumFramesToRead;
    int readCall = 0;
    bool quit = false;
    while (!quit) {
        DWORD streamIndex = 0;
        DWORD flags = 0;
        LONGLONG llTimeStamp = 0;
        hr = mReader->ReadSample(
            MF_SOURCE_READER_FIRST_AUDIO_STREAM, // Stream index.
            0,                                   // Flags.
            &streamIndex,                        // Receives the actual stream index.
            &flags,                              // Receives status flags.
            &llTimeStamp,                        // Receives the time stamp.
            &pSample                             // Receives the sample or NULL.
        );
        int64_t frameOffset = 0;
        if (readCall == 0) {
            int64_t hnsOffset = hnsToRequest - llTimeStamp;
            frameOffset = HNSToSampleFrame(hnsOffset);
        }
        ++readCall;
        if (pSample) {
            IMFMediaBuffer* decodedBuffer = nullptr;
            pSample->ConvertToContiguousBuffer(&decodedBuffer);
            BYTE* rawBuffer = nullptr;
            DWORD maxLength = 0;
            DWORD bufferLengthInBytes = 0;
            decodedBuffer->Lock(&rawBuffer, &maxLength, &bufferLengthInBytes);
            int64_t availableFrames = bufferLengthInBytes / bytesPerFrame;
            availableFrames -= frameOffset;
            int64_t framesToCopy = std::min(availableFrames, remainingFrames);
            // copy to outBuffer, de-interleaving into per-channel arrays
            float* floatBuffer = (float*)rawBuffer;
            float* offsetBuffer = &floatBuffer[frameOffset * mNumChannels];
            for (int channel = 0; channel < mNumChannels; ++channel) {
                for (int64_t frame = 0; frame < framesToCopy; ++frame) {
                    float sampleValue = offsetBuffer[frame * mNumChannels + channel];
                    outBuffer[channel][totalFramesWritten + frame] = sampleValue;
                }
            }
            decodedBuffer->Unlock();
            decodedBuffer->Release();
            pSample->Release();
            pSample = nullptr;
            totalFramesWritten += framesToCopy;
            remainingFrames -= framesToCopy;
            if (totalFramesWritten >= inNumFramesToRead)
                quit = true;
        }
    }
    return true; // error checks and end-of-file handling removed, as noted above
}
LONGLONG WindowsM4AReader::SampleFrameToHNS(int64_t inFrame)
{
    return inFrame * (10000000.0 / mSampleRate);
}
int64_t WindowsM4AReader::HNSToSampleFrame(LONGLONG inHNS)
{
    return inHNS / 10000000.0 * mSampleRate;
}
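Note that both helpers truncate through an implicit double-to-integer conversion; at 44.1 kHz the HNS-per-frame ratio is not an integer, so truncation can land one frame early at block boundaries. A rounded version, sketched as standalone functions (hypothetical names, not the class members above):

```cpp
#include <cmath>
#include <cstdint>

// Rounded frame <-> HNS (100-nanosecond unit) conversions.
int64_t frameToHNS(int64_t frame, double sampleRate) {
    return (int64_t)std::llround(frame * (10000000.0 / sampleRate));
}
int64_t hnsToFrame(int64_t hns, double sampleRate) {
    return (int64_t)std::llround(hns / 10000000.0 * sampleRate);
}
```

With rounding, a frame-to-HNS-to-frame round trip returns the original frame index, which keeps consecutive 1-second reads aligned.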
bool WindowsM4AReader::ConfigureAsFloatDecoder()
{
    IMFMediaType* outputType = nullptr;
    HRESULT hr = MFCreateMediaType(&outputType);
    UINT32 bitsPerSample = sizeof(float) * 8;
    UINT32 blockAlign = mNumChannels * (bitsPerSample / 8);
    UINT32 bytesPerSecond = blockAlign * (UINT32)mSampleRate;
    hr = outputType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Audio);
    hr = outputType->SetGUID(MF_MT_SUBTYPE, MFAudioFormat_Float);
    hr = outputType->SetUINT32(MF_MT_AUDIO_PREFER_WAVEFORMATEX, TRUE);
    hr = outputType->SetUINT32(MF_MT_AUDIO_NUM_CHANNELS, (UINT32)mNumChannels);
    hr = outputType->SetUINT32(MF_MT_AUDIO_SAMPLES_PER_SECOND, (UINT32)mSampleRate);
    hr = outputType->SetUINT32(MF_MT_AUDIO_BLOCK_ALIGNMENT, blockAlign);
    hr = outputType->SetUINT32(MF_MT_AUDIO_AVG_BYTES_PER_SECOND, bytesPerSecond);
    hr = outputType->SetUINT32(MF_MT_AUDIO_BITS_PER_SAMPLE, bitsPerSample);
    hr = outputType->SetUINT32(MF_MT_ALL_SAMPLES_INDEPENDENT, TRUE);
    DWORD streamIndex = 0;
    hr = mReader->SetCurrentMediaType(streamIndex, NULL, outputType);
    return true;
}
If you are using the AAC decoder provided by Microsoft (AAC Decoder) and the MPEG-4 File Source, then yes, I can confirm you can't seek audio frames with the same precision as with a WAV file.
I'll have to run more tests, but I think it's possible to find a workaround in your case.
EDIT
I've made a program to check the seek position with the SourceReader:
github mofo7777
Under Stackoverflow > AudioSourceReaderSeek
WAV is perfect at seeking, MP3 is good, and M4A not really good.
But that M4A file was encoded with VLC. I then encoded an M4A file using the Media Foundation encoder, and seeking within that file gives better results (similar to MP3).
So I would say that the encoder matters for seeking accuracy.
It would be interesting to test different audio formats with different encoders.
Also, there is the IMFSeekInfo interface.
I can't test this interface because I'm on Windows 7 and it requires Windows 8; it would be interesting for someone to test it.

Why do the x264/x265 codecs ignore the pts and dts of input frames?

I'm trying to encode images from a webcam with libx265 (I tried libx264 earlier). The webcam cannot capture at a stable FPS because of the varying amount of light reaching the sensor and, as a result, varying delays. So I measure the timing and dts of each incoming frame, set these values on the corresponding fields of the x265_picture object, and initialize the encoder with fpsNum = 1000 and fpsDenom = 1 (for a millisecond timebase).
The problem is that the encoder ignores the pts and dts of the input images and encodes at a constant 1000 fps! The same timebase trick produces a smooth recording with libvpx. Why does it not work with the x264/x265 codecs?
Here is parameters initialization:
...
error = (x265_param_default_preset(param, "fast", "zerolatency") != 0);
if (!error) {
    param->sourceWidth = width;
    param->sourceHeight = height;
    param->frameNumThreads = 1;
    param->fpsNum = 1000;
    param->fpsDenom = 1;
    // Intra refresh:
    param->keyframeMax = 15;
    param->intraRefine = 1;
    // Rate control:
    param->rc.rateControlMode = X265_RC_CQP;
    param->rc.rfConstant = 12;
    param->rc.rfConstantMax = 48;
    // For streaming:
    param->bRepeatHeaders = 1;
    param->bAnnexB = 1;
    encoder = x265_encoder_open(param);
    ...
}
...
Here is frame adding function:
bool hevc::Push(unsigned char *data) {
    if (!error) {
        std::lock_guard<std::mutex> lock(m_framestack);
        if (timer > 0) {
            framestack.back()->dts = clock() - timer;
            timer += framestack.back()->dts;
        }
        else { timer = clock(); }
        x265_picture *picture = x265_picture_alloc();
        if (picture) {
            x265_picture_init(param, picture);
            picture->height = param->sourceHeight;
            picture->stride[0] = param->sourceWidth;
            picture->stride[1] = picture->stride[2] = picture->stride[0] / 2;
            picture->planes[0] = new char[luma_size];
            picture->planes[1] = new char[chroma_size];
            picture->planes[2] = new char[chroma_size];
            colorspaces::BGRtoI420(param->sourceWidth, param->sourceHeight, data, (byte*)picture->planes[0], (byte*)picture->planes[1], (byte*)picture->planes[2]);
            picture->pts = picture->dts = 0;
            framestack.emplace_back(picture);
        }
        else { error = true; }
    }
    return !error;
}
The global pts is incremented right after each x265_encoder_encode call (pts += pic_in->dts;) and is assigned as the pts of the next image from the framestack queue when it reaches the encoder.
Can the x265/x264 codecs encode at variable fps? How to configure it if yes?
I don't know about x265, but in x264, to encode variable-frame-rate (VFR) video you should enable the x264_param_t.b_vfr_input option, which was disabled by your zerolatency tuning (VFR encoding needs one frame of latency). Also, at least in x264, the timebase should go in i_timebase_num/i_timebase_den, and i_fps_num/i_fps_den should be the average fps (or keep the default 25/1 if you don't know the fps), or you will break rate control.
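Concretely, the x264 setup described above might look like the sketch below. Field names are from x264.h; the numbers are illustrative, and whether VFR can be combined with a full zerolatency tune depends on your latency budget:

```cpp
// Sketch: x264 configured for VFR input with a millisecond timebase.
x264_param_t param;
x264_param_default_preset(&param, "fast", "");
param.b_vfr_input    = 1;    // accept variable frame durations
param.i_timebase_num = 1;    // pts/dts are in units of 1/1000 s
param.i_timebase_den = 1000;
param.i_fps_num      = 25;   // average fps hint for ratecontrol only
param.i_fps_den      = 1;
// per frame: pic.i_pts = capture_time_ms;  // strictly increasing, in timebase units
```

With this split, the timebase defines what one pts tick means, while the fps fields only give ratecontrol an average rate to plan around.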

How to record the microphone until there is no sound?

I've created 2 functions :
- One that records the microphone
- One that plays the sound of the microphone
It records the microphone for 3 seconds
#include <iostream>
#include <Windows.h>
#include <vector>
using namespace std;
#pragma comment(lib, "winmm.lib")
short int waveIn[44100 * 3];
void PlayRecord();
void StartRecord()
{
    const int NUMPTS = 44100 * 3; // 3 seconds
    int sampleRate = 44100;
    // 'short int' is a 16-bit type; I request 16-bit samples below
    // for 8-bit capture, you'd use 'unsigned char' or 'BYTE' 8-bit types
    HWAVEIN hWaveIn;
    MMRESULT result;
    WAVEFORMATEX pFormat;
    pFormat.wFormatTag = WAVE_FORMAT_PCM;     // simple, uncompressed format
    pFormat.nChannels = 1;                    // 1=mono, 2=stereo
    pFormat.nSamplesPerSec = sampleRate;      // 44100
    pFormat.nAvgBytesPerSec = sampleRate * 2; // = nSamplesPerSec * nChannels * wBitsPerSample/8
    pFormat.nBlockAlign = 2;                  // = nChannels * wBitsPerSample/8
    pFormat.wBitsPerSample = 16;              // 16 for high quality, 8 for telephone-grade
    pFormat.cbSize = 0;
    // Specify recording parameters
    result = waveInOpen(&hWaveIn, WAVE_MAPPER, &pFormat,
                        0L, 0L, WAVE_FORMAT_DIRECT);
    WAVEHDR WaveInHdr;
    // Set up and prepare header for input
    WaveInHdr.lpData = (LPSTR)waveIn;
    WaveInHdr.dwBufferLength = NUMPTS * 2;
    WaveInHdr.dwBytesRecorded = 0;
    WaveInHdr.dwUser = 0L;
    WaveInHdr.dwFlags = 0L;
    WaveInHdr.dwLoops = 0L;
    waveInPrepareHeader(hWaveIn, &WaveInHdr, sizeof(WAVEHDR));
    // Insert a wave input buffer
    result = waveInAddBuffer(hWaveIn, &WaveInHdr, sizeof(WAVEHDR));
    // Commence sampling input
    result = waveInStart(hWaveIn);
    cout << "recording..." << endl;
    Sleep(3 * 1000);
    // Wait until finished recording
    waveInClose(hWaveIn);
    PlayRecord();
}
void PlayRecord()
{
    const int NUMPTS = 44100 * 3; // 3 seconds
    int sampleRate = 44100;
    // 'short int' is a 16-bit type; I request 16-bit samples below
    // for 8-bit capture, you'd use 'unsigned char' or 'BYTE' 8-bit types
    HWAVEIN hWaveIn;
    WAVEFORMATEX pFormat;
    pFormat.wFormatTag = WAVE_FORMAT_PCM;     // simple, uncompressed format
    pFormat.nChannels = 1;                    // 1=mono, 2=stereo
    pFormat.nSamplesPerSec = sampleRate;      // 44100
    pFormat.nAvgBytesPerSec = sampleRate * 2; // = nSamplesPerSec * nChannels * wBitsPerSample/8
    pFormat.nBlockAlign = 2;                  // = nChannels * wBitsPerSample/8
    pFormat.wBitsPerSample = 16;              // 16 for high quality, 8 for telephone-grade
    pFormat.cbSize = 0;
    // Specify recording parameters
    waveInOpen(&hWaveIn, WAVE_MAPPER, &pFormat, 0L, 0L, WAVE_FORMAT_DIRECT);
    WAVEHDR WaveInHdr;
    // Set up and prepare header for input
    WaveInHdr.lpData = (LPSTR)waveIn;
    WaveInHdr.dwBufferLength = NUMPTS * 2;
    WaveInHdr.dwBytesRecorded = 0;
    WaveInHdr.dwUser = 0L;
    WaveInHdr.dwFlags = 0L;
    WaveInHdr.dwLoops = 0L;
    waveInPrepareHeader(hWaveIn, &WaveInHdr, sizeof(WAVEHDR));
    HWAVEOUT hWaveOut;
    cout << "playing..." << endl;
    waveOutOpen(&hWaveOut, WAVE_MAPPER, &pFormat, 0, 0, WAVE_FORMAT_DIRECT);
    waveOutWrite(hWaveOut, &WaveInHdr, sizeof(WaveInHdr)); // Playing the data
    Sleep(3 * 1000); // Sleep for as long as there was recorded
    waveInClose(hWaveIn);
    waveOutClose(hWaveOut);
}
int main()
{
    StartRecord();
    return 0;
}
How can I change my StartRecord function (and I guess my PlayRecord function as well) to make it record until there's no input from the microphone?
(So far, those 2 functions are working perfectly: record the microphone for 3 seconds, then play the recording.)
Thanks!
Edit: by "no sound" I mean the sound level is too low (meaning the person probably isn't speaking).
Because sound is a wave, it oscillates between high and low pressures. This waveform is usually recorded as positive and negative numbers, with zero being the neutral pressure. If you take the absolute value of the signal and keep a running average it should be sufficient.
The average should be taken over a long enough period that you account for the appropriate amount of silence. A very cheap way to keep an estimate of the running average is like this:
const double threshold = 50;   // Whatever threshold you need
const int max_samples = 10000; // The representative running average size
double average = 0;            // The running average
int sample_count = 0;          // When we are building the average
while (sample_count < max_samples || average > threshold) {
    // New sample arrives, stored in 'sample'
    // Adjust the running absolute average
    if (sample_count < max_samples) sample_count++;
    average *= double(sample_count - 1) / sample_count;
    average += std::abs(sample) / sample_count;
}
The larger max_samples, the slower average will respond to a signal. After the sound stops, it will slowly trail off. However, it will be slow to rise again too. This would be fine for reasonably continuous sound.
With something like speech, which can have short or long pauses, you may want to use an impulse-based approach. You can just define the number of samples of 'silence' that you expect, and reset it whenever you receive an impulse that exceeds the threshold. Using the running average above with a much shorter window size will give you a simple way of detecting an impulse. Then you just need to count...
const int max_samples = 100;           // Smaller window size for impulse
const int max_silence_samples = 10000; // Maximum samples below threshold
int silence = 0;                       // Number of samples below threshold
while (silence < max_silence_samples) {
    // Compute running average as before
    // ...
    // Check for silence. If there's a signal, reset the counter.
    if (average > threshold) silence = 0;
    else ++silence;
}
Adjusting threshold and max_samples will control the sensitivity to pops and clicks, while max_silence_samples gives you control over how much silence is allowed before you stop recording.
There are undoubtedly more technical ways to achieve your goals, but it's always good to try the simple one first. See how you go with this.
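Putting the two snippets together, a self-contained sketch of the stop condition might look like this (detectStop is an illustrative helper; the thresholds and window sizes are arbitrary and would be tuned to the microphone):

```cpp
#include <cmath>
#include <vector>

// Returns the sample index at which recording would stop, or -1 if it
// never stops. Combines the short-window running absolute average
// (impulse detector) with a silence hangover counter, as described above.
int detectStop(const std::vector<short>& samples,
               double threshold, int window, int maxSilence) {
    double average = 0;
    int count = 0, silence = 0;
    for (size_t i = 0; i < samples.size(); ++i) {
        if (count < window) ++count;           // grow the window at start
        average = average * (count - 1) / count
                + std::abs((double)samples[i]) / count;
        if (average > threshold) silence = 0;  // signal: reset the counter
        else ++silence;                        // silence: keep counting
        if (silence >= maxSilence) return (int)i;
    }
    return -1;
}
```

In a live recorder, the same logic would run inside the waveIn callback, with the return value triggering waveInStop.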
I suggest you do it via DirectShow. You should create instances of the microphone source, a SampleGrabber, an audio encoder, and a file writer. Your graph should look like this:
Microphone -> SampleGrabber -> Audio Encoder -> File Writer
Every sample passes through the SampleGrabber, so you can read all raw samples and decide whether to continue recording. This way you can both record and inspect the content at the same time.

Resample audio using libsamplerate on Windows Phone

I'm trying to resample captured 2-channel/48 kHz/32-bit audio to 1-channel/8 kHz/32-bit using libsamplerate in a Windows Phone project that uses WASAPI.
I need to get 160 frames from the 960 original frames by resampling. After capturing audio with the GetBuffer method, I send the captured BYTE array of 7680 bytes to the method below:
void BackEndAudio::ChangeSampleRate(BYTE* buf)
{
    int er2;
    st = src_new(2, 1, &er2);
    // SRC_DATA sd defined before
    sd = new SRC_DATA;
    BYTE *onechbuf = new BYTE[3840];
    int outputIndex = 0;
    // convert Stereo to Mono
    for (int n = 0; n < 7680; n += 8)
    {
        onechbuf[outputIndex++] = buf[n];
        onechbuf[outputIndex++] = buf[n+1];
        onechbuf[outputIndex++] = buf[n+2];
        onechbuf[outputIndex++] = buf[n+3];
    }
    float *res1 = new float[960];
    res1 = (float *)onechbuf;
    float *res2 = new float[160];
    // change samplerate
    sd->data_in = res1;
    sd->data_out = res2;
    sd->input_frames = 960;
    sd->output_frames = 160;
    sd->src_ratio = (double)1/6;
    sd->end_of_input = 1;
    int er = src_process(st, sd);
    transportController->WriteAudio((BYTE *)res2, 640);
    delete[] onechbuf;
    src_delete(st);
    delete sd;
}
src_process returns no error, sd->input_frames_used is set to 960, and sd->output_frames_gen is set to 159, but the rendered output is only noise.
I use this code in a real-time VoIP app.
What could be the source of the problem?
I found the problem. I shouldn't create a new SRC_STATE object and delete it on each call of my function (via st = src_new(2, 1, &er2); and src_delete(st);); calling them once is enough for the whole audio resampling session, since destroying the state between calls throws away the resampler's filter history. Also, there is no need to use a pointer for the SRC_DATA. I modified my code as below, and it works fine now.
void BackEndAudio::ChangeSampleRate(BYTE* buf)
{
    BYTE *onechbuf = new BYTE[3840];
    int outputIndex = 0;
    // convert Stereo to Mono
    for (int n = 0; n < 7680; n += 8)
    {
        onechbuf[outputIndex++] = buf[n];
        onechbuf[outputIndex++] = buf[n+1];
        onechbuf[outputIndex++] = buf[n+2];
        onechbuf[outputIndex++] = buf[n+3];
    }
    float *out = new float[160];
    // change samplerate
    sd.data_in = (float *)onechbuf;
    sd.data_out = out;
    sd.input_frames = 960;
    sd.output_frames = 160;
    sd.src_ratio = (double)1/6;
    sd.end_of_input = 0;
    int er = src_process(st, &sd);
}
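As a side note, the stereo-to-mono step in both versions keeps only the left channel by copying the first 4 bytes (one 32-bit float) of every 8-byte L/R pair. That byte-stride logic can be checked in isolation (leftChannel is an illustrative helper, not part of the original class):

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Keep only the left channel of interleaved stereo float samples by
// copying the first 4 bytes (one float) of every 8-byte L/R pair.
std::vector<float> leftChannel(const uint8_t* buf, size_t numBytes) {
    std::vector<float> out;
    for (size_t n = 0; n + 8 <= numBytes; n += 8) {
        float v;
        std::memcpy(&v, buf + n, sizeof(float));  // avoids aliasing issues
        out.push_back(v);
    }
    return out;
}
```

Averaging the two channels instead of dropping the right one would lose less information, but the pick-one approach is what the original code does.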

Playing a stream with the OpenAL library

I need to play a stream in OpenAL, but I don't understand what to do with the buffers and the source. My pseudocode:
FirstTime = true;
while (true)
{
    if (!FirstTime)
    {
        alSourceUnqueueBuffers(alSource, 1, &unbuf);
    }
    // get buffer to play in boost::array buf (882 elements) (MONO16).
    if (NumberOfSampleSet >= 3)
    {
        alBufferData(alSampleSet[NumberOfSampleSet], AL_FORMAT_MONO16, buf.data(), buf.size(), 44100);
        alSourceQueueBuffers(alSource, 1, &alSampleSet[NumberOfSampleSet++]);
        if (NumberOfSampleSet == 4)
        {
            FirstTime = false;
            NumberOfSampleSet = 0;
        }
    }
    alSourcePlay(alSource);
}
What am I doing wrong? I hear repeating clicks from the speakers. Please tell me what I need to do with the buffers to play my sound.
4 buffers (882 samples each) and a 44kHz source give only (4 * 882/ (2 * 44100) ) = 0.04 seconds of playback - that's just a "click".
To produce longer sounds you should load more data (though only two buffers is usually sufficient).
Imagine you have a 100 MB uncompressed .wav file. Just read, say, 22050 samples (that is 44100 bytes of data) and enqueue them to the OpenAL queue associated with the Source. Then read another 22050 samples into the second buffer and enqueue them as well. Then just switch buffers (like you do now at NumberOfSampleSet == 4) and repeat until the file is finished.
If you want a pure sine wave of e.g. 440Hz, then using the same 22050-sample buffers just fill them with the values of sine wave:
const int BufferSize = 22050;
const int NumSamples = 44100; // sample rate
// phase offset to avoid "clicks" between buffers
int LastOffset = 0;
const float Omega = 440.0f;
const float volume = 30000.0f; // 16-bit amplitude
for (int i = 0; i < BufferSize; i++)
{
    float t = (2.0f * PI * Omega * (i + LastOffset)) / static_cast<float>(NumSamples);
    short VV = (short)(volume * sin(t));
    // 16-bit sample: 2 bytes, little-endian
    buffers[CurrentBuffer][i * 2 + 0] = VV & 0xFF;
    buffers[CurrentBuffer][i * 2 + 1] = VV >> 8;
}
LastOffset += BufferSize;
LastOffset %= NumSamples; // a whole second advances the phase by a multiple of 2*pi
EDIT1:
To process something in real time (with noticeable latency, unfortunately), you have to create the buffers, push some initial data, and then check how much data OpenAL has already consumed:
void StreamBuffer(ALuint BufferID)
{
    // get sound into the buffer somehow - load from file, read from an input channel (queue), generate, etc.
    // do the custom sound processing here in buffers[CurrentBuffer]
    // submit more data to OpenAL
    alBufferData(BufferID, Format, buffers[CurrentBuffer].data(), buffers[CurrentBuffer].size(), SamplesPerSec);
}
int main()
{
    ....
    ALuint FBufferID[2];
    alGenBuffers(2, &FBufferID[0]);
    StreamBuffer(FBufferID[0]);
    StreamBuffer(FBufferID[1]);
    alSourceQueueBuffers(FSourceID, 2, &FBufferID[0]);
    while (true)
    {
        // Check how much data is processed in OpenAL's internal queue
        ALint Processed;
        alGetSourcei(FSourceID, AL_BUFFERS_PROCESSED, &Processed);
        // refill and requeue each processed buffer
        while (Processed--)
        {
            ALuint BufID;
            alSourceUnqueueBuffers(FSourceID, 1, &BufID);
            StreamBuffer(BufID);
            alSourceQueueBuffers(FSourceID, 1, &BufID);
        }
    }
    ....
}