Higher Than Expected Inherent Latency in OpenAL Playback (Windows, C++)

TL;DR: A simple echo program that records and immediately plays back audio is showing higher than expected latency.
I am working on a real-time audio broadcasting application. I have decided to use OpenAL to both capture and play back audio samples. I am planning on sending UDP packets with raw PCM data across a LAN network. My ideal latency between recording on one machine and playing back on another machine is 30 ms. (A lofty goal.)
As a test, I made a small program that records samples from a microphone and immediately plays them back to the host speakers. I did this to test the baseline latency.
However, I'm seeing an inherent latency of about 65 - 70 ms from simply recording the audio and playing it back.
I have reduced the buffer size that OpenAL uses to 100 samples at 44,100 samples per second. Ideally, this would yield a latency of 2-3 ms.
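For reference, the expected contribution of the capture buffer alone is easy to check with a quick back-of-the-envelope calculation (this is just arithmetic, not part of the program below):
// 100 samples at a 44,100 Hz sample rate:
// 100 / 44100 ≈ 0.00227 s, i.e. roughly 2.3 ms per capture block
constexpr double captureBlockLatencyMs = 100.0 / 44100.0 * 1000.0; // ≈ 2.27 ms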
I have yet to try this on another platform (MacOS / Linux) to determine if this is an OpenAL issue or a Windows issue.
Here's the code:
#include <AL/al.h>
#include <AL/alc.h>
#include <list>

using std::list;

#define FREQ 44100      // Sample rate
#define CAP_SIZE 100    // How much to capture at a time (affects latency)
#define NUM_BUFFERS 10

int main(int argC, char* argV[])
{
    list<ALuint> bufferQueue; // A quick and dirty queue of buffer objects

    ALuint helloBuffer[NUM_BUFFERS], helloSource;

    ALCdevice* audioDevice = alcOpenDevice(NULL);                   // Request default audio device
    ALCcontext* audioContext = alcCreateContext(audioDevice, NULL); // Create the audio context
    alcMakeContextCurrent(audioContext);

    // Request the default capture device with a ~2ms buffer
    ALCdevice* inputDevice = alcCaptureOpenDevice(NULL, FREQ, AL_FORMAT_MONO16, CAP_SIZE);

    alGenBuffers(NUM_BUFFERS, &helloBuffer[0]); // Create some buffer-objects

    // Queue our buffers onto an STL list
    for (int ii = 0; ii < NUM_BUFFERS; ++ii) {
        bufferQueue.push_back(helloBuffer[ii]);
    }

    alGenSources(1, &helloSource); // Create a sound source

    // A buffer to hold captured audio. Sized to a full second because
    // alcCaptureSamples() may hand back more than CAP_SIZE samples per poll.
    short* buffer = new short[FREQ];
    ALCint samplesIn = 0;           // How many samples are captured
    ALint availBuffers = 0;         // Buffers to be recovered
    ALuint myBuff;                  // The buffer we're using
    ALuint buffHolder[NUM_BUFFERS]; // An array to hold the unqueued buffers

    alcCaptureStart(inputDevice); // Begin capturing

    bool done = false;
    while (!done) { // Main loop
        // Poll for recoverable buffers
        alGetSourcei(helloSource, AL_BUFFERS_PROCESSED, &availBuffers);
        if (availBuffers > 0) {
            alSourceUnqueueBuffers(helloSource, availBuffers, buffHolder);
            for (int ii = 0; ii < availBuffers; ++ii) {
                // Push the recovered buffers back on the queue
                bufferQueue.push_back(buffHolder[ii]);
            }
        }
        // Poll for captured audio
        alcGetIntegerv(inputDevice, ALC_CAPTURE_SAMPLES, 1, &samplesIn);
        if (samplesIn > CAP_SIZE) {
            // Grab the sound
            alcCaptureSamples(inputDevice, buffer, samplesIn);
            // Stuff the captured data in a buffer-object
            if (!bufferQueue.empty()) { // We just drop the data if no buffers are available
                myBuff = bufferQueue.front();
                bufferQueue.pop_front();
                alBufferData(myBuff, AL_FORMAT_MONO16, buffer, samplesIn * sizeof(short), FREQ);
                // Queue the buffer
                alSourceQueueBuffers(helloSource, 1, &myBuff);
                // Restart the source if needed
                // (if we take too long and the queue dries up,
                // the source stops playing).
                ALint sState = 0;
                alGetSourcei(helloSource, AL_SOURCE_STATE, &sState);
                if (sState != AL_PLAYING) {
                    alSourcePlay(helloSource);
                }
            }
        }
    }

    // Stop capture
    alcCaptureStop(inputDevice);
    alcCaptureCloseDevice(inputDevice);

    // Stop the source and detach its buffers
    alSourceStopv(1, &helloSource);
    alSourcei(helloSource, AL_BUFFER, 0);

    // Clean-up
    delete[] buffer;
    alDeleteSources(1, &helloSource);
    alDeleteBuffers(NUM_BUFFERS, &helloBuffer[0]);
    alcMakeContextCurrent(NULL);
    alcDestroyContext(audioContext);
    alcCloseDevice(audioDevice);
    return 0;
}
Here is an image of the waveform showing the delay between the input sound and the resulting echo. This example shows a latency of about 70 ms.
(Image: waveform with ~70 ms echo delay.)
System specs:
Intel Core i7-9750H
24 GB Ram
Windows 10 Home: V 2004 - Build 19041.508
Sound driver: Realtek Audio (Driver version 10.0.19041.1)
Input device: Logitech G330 Headset
The issue is reproducible on other Windows systems.
EDIT:
I tried to use PortAudio to do a similar thing and got a similar result. I've determined that this is due to the Windows audio drivers.
I rebuilt PortAudio with ASIO support only and installed the ASIO4ALL audio driver. This achieved an acceptable latency of <10 ms.

I ultimately resolved this issue by ditching OpenAL in favor of PortAudio and Steinberg ASIO. I installed ASIO4ALL and rebuilt PortAudio to accept only ASIO device drivers; I needed the ASIO SDK from Steinberg to do this. (Followed guide here.) This has allowed me to achieve a latency of between 5 and 10 ms.
This post helped a lot: input delay with PortAudio callback and ASIO sdk
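For anyone following the same route, here is a minimal sketch of what a full-duplex pass-through stream on PortAudio's ASIO host API can look like. This is illustrative only, not the exact code from my project: error handling is omitted, and names such as echoCallback, kSampleRate and kFramesPerBuffer are my own placeholders.
#include <portaudio.h>
#include <cstring>

// Pass-through callback: copy captured input straight to the output buffer (mono 16-bit).
static int echoCallback(const void* input, void* output,
                        unsigned long frameCount,
                        const PaStreamCallbackTimeInfo*, PaStreamCallbackFlags,
                        void*)
{
    if (input)
        std::memcpy(output, input, frameCount * sizeof(short));
    else
        std::memset(output, 0, frameCount * sizeof(short));
    return paContinue;
}

int main()
{
    const double kSampleRate = 44100.0;
    const unsigned long kFramesPerBuffer = 128; // small block for low latency

    Pa_Initialize();

    // Pick the ASIO host API explicitly (requires PortAudio built with ASIO support;
    // a real program should check for paHostApiNotFound here).
    PaHostApiIndex asioApi = Pa_HostApiTypeIdToHostApiIndex(paASIO);
    const PaHostApiInfo* apiInfo = Pa_GetHostApiInfo(asioApi);

    PaStreamParameters in{}, out{};
    in.device = apiInfo->defaultInputDevice;
    in.channelCount = 1;
    in.sampleFormat = paInt16;
    in.suggestedLatency = Pa_GetDeviceInfo(in.device)->defaultLowInputLatency;

    out.device = apiInfo->defaultOutputDevice;
    out.channelCount = 1;
    out.sampleFormat = paInt16;
    out.suggestedLatency = Pa_GetDeviceInfo(out.device)->defaultLowOutputLatency;

    PaStream* stream = nullptr;
    Pa_OpenStream(&stream, &in, &out, kSampleRate, kFramesPerBuffer,
                  paNoFlag, echoCallback, nullptr);
    Pa_StartStream(stream);

    Pa_Sleep(10000); // echo for ten seconds

    Pa_StopStream(stream);
    Pa_CloseStream(stream);
    Pa_Terminate();
    return 0;
}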

Related

VIDIOC_DQBUF hangs on camera disconnection

My application is using v4l2 running in a separate thread. If a camera gets disconnected, the user is given an appropriate message before the thread is terminated cleanly. This works in the vast majority of cases. However, if execution is inside the VIDIOC_DQBUF ioctl when the camera is disconnected, the ioctl doesn't return, causing the entire thread to lock up.
My system is as follows:
Linux Kernel: 4.12.0
OS: Fedora 25
Compiler: gcc-7.1
The following is a simplified example of the problem function.
// Get Raw Buffer from the camera
void v4l2_Processor::get_Raw_Frame(void* buffer)
{
    struct v4l2_buffer buf;
    memset(&buf, 0, sizeof(buf));
    buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory = V4L2_MEMORY_MMAP;

    // Grab next frame
    if (ioctl(m_FD, VIDIOC_DQBUF, &buf) < 0)
    {   // If the camera becomes disconnected when the execution is
        // in the above ioctl, then the ioctl never returns.
        std::cerr << "Error in DQBUF\n";
    }

    // Queue for next frame
    if (ioctl(m_FD, VIDIOC_QBUF, &buf) < 0)
    {
        std::cerr << "Error in QBUF\n";
    }

    memcpy(buffer, m_Buffers[buf.index].buff,
           m_Buffers[buf.index].buf_length);
}
Can anybody shed any light on why this ioctl locks up and what I might do to solve this problem?
I appreciate any help offered.
Amanda
I am currently having the same issue. However, my entire thread doesn't lock up; the ioctl times out (15 s), but that's way too long.
Is there a way to query V4L2 (one that won't hang) to check whether video is streaming? Or at least a way to change the ioctl timeout?
UPDATE:
@Amanda, you can change the timeout of the dequeue in the v4l2_capture driver source and rebuild the kernel/kernel module.
Modify the timeout in the dequeue function:
if (!wait_event_interruptible_timeout(cam->enc_queue,
                                      cam->enc_counter != 0,
                                      50 * HZ)) // Modify this constant
Best of luck!
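For completeness, a user-space alternative that is often used to avoid blocking indefinitely in VIDIOC_DQBUF is to wait on the descriptor with select() and a timeout before dequeuing. This is only a rough sketch of that approach, not the driver fix above; m_FD is assumed to be the open V4L2 device descriptor.
#include <sys/select.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include <cstring>
#include <iostream>

// Returns true if a frame was dequeued into 'buf', false on timeout or error.
bool dequeue_with_timeout(int m_FD, v4l2_buffer& buf, int timeout_sec)
{
    fd_set fds;
    FD_ZERO(&fds);
    FD_SET(m_FD, &fds);

    timeval tv{};
    tv.tv_sec = timeout_sec; // give up after this long (e.g. camera unplugged)
    tv.tv_usec = 0;

    int r = select(m_FD + 1, &fds, nullptr, nullptr, &tv);
    if (r <= 0) {            // 0 = timeout, -1 = error
        std::cerr << "No frame available (timeout or error)\n";
        return false;
    }

    std::memset(&buf, 0, sizeof(buf));
    buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory = V4L2_MEMORY_MMAP;
    return ioctl(m_FD, VIDIOC_DQBUF, &buf) >= 0;
}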

Playing audio without freezing the draw loop in OpenGL

I'm working on a project in OpenGL and it needs to be able to play simple sounds (mp3) from file without interrupting the draw loop.
I've been playing around with a few different libraries (OpenAL, PortAudio) and eventually settled on mpg123 (to load the mp3) and libao to play the mp3 back.
The current playSound function works, but it blocks the OpenGL draw loop (i.e. freezes the game) until the audio has finished playing. I have tried messing around with std::thread, but it still blocked the draw loop.
Here is the audio playback function I've been testing with:
#include <ao/ao.h>
#include <mpg123.h>
#include <cstdlib>

void playSound() {
    mpg123_handle *mh;
    unsigned char *buffer;
    size_t buffer_size;
    size_t done;
    int err;
    int driver;
    ao_device *dev;
    ao_sample_format format;
    int channels, encoding;
    long rate;

    /* initializations */
    ao_initialize();
    driver = ao_default_driver_id();
    mpg123_init();
    mh = mpg123_new(NULL, &err);
    buffer_size = mpg123_outblock(mh);
    buffer = (unsigned char*) malloc(buffer_size * sizeof(unsigned char));

    /* open the file and get the decoding format */
    mpg123_open(mh, "sounds/door.mp3");
    mpg123_getformat(mh, &rate, &channels, &encoding);

    /* set the output format and open the output device */
    format.bits = mpg123_encsize(encoding) * 8;
    format.rate = rate;
    format.channels = channels;
    format.byte_format = AO_FMT_NATIVE;
    format.matrix = 0;
    dev = ao_open_live(driver, &format, NULL);

    /* decode and play (this loop blocks until the whole file has been played) */
    while (mpg123_read(mh, buffer, buffer_size, &done) == MPG123_OK)
        ao_play(dev, (char*)buffer, done);

    /* clean up */
    free(buffer);
    ao_close(dev);
    mpg123_close(mh);
    mpg123_delete(mh);
    mpg123_exit();
    ao_shutdown();
}
How would I go about fixing this so that my game continues to run smoothly and the audio plays in the background?
You should unpack a small amount of audio data and feed it to the audio device every frame.
The main trick is to find out how many samples have already been played by the device. I'm not sure how you can do this with libao, but it's pretty simple with OpenAL.
You can check details here Play stream in OpenAL library
Also, you can always use an additional thread. It's overkill, but it is very simple to do and can work fine for a small/demo project; a minimal sketch is shown below.
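As a rough illustration of that second suggestion (assuming the playSound() function shown above; detaching means the draw loop is never blocked, but you lose the ability to join or signal the thread):
#include <thread>

// Somewhere in the game code, e.g. when the door opens:
std::thread audioThread(playSound); // run the blocking decode/playback loop off the render thread
audioThread.detach();               // the draw loop keeps running; the thread exits when playback ends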

C++/C FFmpeg artifact build up across video frames

Context:
I am building a recorder for capturing video and audio in separate threads (using Boost thread groups) using FFmpeg 2.8.6 on Ubuntu 16.04. I followed the demuxing_decoding example here: https://www.ffmpeg.org/doxygen/2.8/demuxing_decoding_8c-example.html
Video capture specifics:
I am reading H264 off a Logitech C920 webcam and writing the video to a raw file. The issue I notice with the video is that there seems to be a build-up of artifacts across frames until a particular frame resets. Here are my frame-grabbing and decoding functions:
// Used for injecting decoding functions for different media types, allowing
// for a generic decode loop
typedef std::function<int(AVPacket*, int*, int)> PacketDecoder;

/**
 * Decodes a video packet.
 * If the decoding operation is successful, returns the number of bytes decoded,
 * else returns the result of the decoding process from ffmpeg
 */
int decode_video_packet(AVPacket *packet,
                        int *got_frame,
                        int cached){
    int ret = 0;
    int decoded = packet->size;
    *got_frame = 0;

    //Decode video frame
    ret = avcodec_decode_video2(video_decode_context,
                                video_frame, got_frame, packet);
    if (ret < 0) {
        //FFmpeg users should use av_err2str
        char errbuf[128];
        av_strerror(ret, errbuf, sizeof(errbuf));
        std::cerr << "Error decoding video frame " << errbuf << std::endl;
        decoded = ret;
    } else {
        if (*got_frame) {
            video_frame->pts = av_frame_get_best_effort_timestamp(video_frame);

            //Write to log file
            AVRational *time_base = &video_decode_context->time_base;
            log_frame(video_frame, time_base,
                      video_frame->coded_picture_number, video_log_stream);

#if( DEBUG )
            std::cout << "Video frame " << ( cached ? "(cached)" : "" )
                      << " coded:" << video_frame->coded_picture_number
                      << " pts:" << video_frame->pts << std::endl;
#endif

            /*Copy decoded frame to destination buffer:
             *This is required since rawvideo expects non aligned data*/
            av_image_copy(video_dest_attr.video_destination_data,
                          video_dest_attr.video_destination_linesize,
                          (const uint8_t **)(video_frame->data),
                          video_frame->linesize,
                          video_decode_context->pix_fmt,
                          video_decode_context->width,
                          video_decode_context->height);

            //Write to rawvideo file
            fwrite(video_dest_attr.video_destination_data[0],
                   1,
                   video_dest_attr.video_destination_bufsize,
                   video_out_file);

            //Unref the refcounted frame
            av_frame_unref(video_frame);
        }
    }
    return decoded;
}
/**
 * Grabs frames in a loop and decodes them using the specified decoding function
 */
int process_frames(AVFormatContext *context,
                   PacketDecoder packet_decoder) {
    int ret = 0;
    int got_frame;
    AVPacket packet;

    //Initialize packet, set data to NULL, let the demuxer fill it
    av_init_packet(&packet);
    packet.data = NULL;
    packet.size = 0;

    // read frames from the file
    for (;;) {
        ret = av_read_frame(context, &packet);
        if (ret < 0) {
            if (ret == AVERROR(EAGAIN)) {
                continue;
            } else {
                break;
            }
        }

        //Convert timing fields to the decoder timebase
        unsigned int stream_index = packet.stream_index;
        av_packet_rescale_ts(&packet,
                             context->streams[stream_index]->time_base,
                             context->streams[stream_index]->codec->time_base);

        AVPacket orig_packet = packet;
        do {
            ret = packet_decoder(&packet, &got_frame, 0);
            if (ret < 0) {
                break;
            }
            packet.data += ret;
            packet.size -= ret;
        } while (packet.size > 0);
        av_free_packet(&orig_packet);

        if(stop_recording == true) {
            break;
        }
    }

    //Flush cached frames
    std::cout << "Flushing frames" << std::endl;
    packet.data = NULL;
    packet.size = 0;
    do {
        packet_decoder(&packet, &got_frame, 1);
    } while (got_frame);

    av_log(0, AV_LOG_INFO, "Done processing frames\n");
    return ret;
}
Questions:
How do I go about debugging the underlying issue?
Is it possible that running the decoding code in a thread other than the one in which the decoding context was opened is causing the problem?
Am I doing something wrong in the decoding code?
Things I have tried/found:
I found this thread that is about the same problem here: FFMPEG decoding artifacts between keyframes
(I cannot post samples of my corrupted frames due to privacy issues, but the image linked to in that question depicts the same issue I have)
However, the answer to the question was posted by the OP without specific details about how the issue was fixed. The OP only mentions that he wasn't 'preserving the packets correctly', but says nothing about what was wrong or how to fix it. I do not have enough reputation to post a comment seeking clarification.
I was initially passing the packet into the decoding function by value, but switched to passing by pointer on the off chance that the packet freeing was being done incorrectly.
I found another question about debugging decoding issues, but couldn't find anything conclusive: How is video decoding corruption debugged?
I'd appreciate any insight. Thanks a lot!
[EDIT] In response to Ronald's answer, I am adding a little more information that wouldn't fit in a comment:
I am only calling decode_video_packet() from the thread processing video frames; the other thread processing audio frames calls a similar decode_audio_packet() function. So only one thread calls the function. I should mention that I have set the thread_count in the decoding context to 1, failing which I would get a segfault in malloc.c while flushing the cached frames.
I can see this being a problem if process_frames and the frame decoder function were run on separate threads, which is not the case. Is there a specific reason why it would matter if the freeing is done within the function, or after it returns? I believe the freeing function is passed a copy of the original packet because multiple decode calls would be required for an audio packet in case the decoder doesn't decode the entire audio packet.
A general problem is that the corruption does not occur all the time. I can debug better if it is deterministic. Otherwise, I can't even say if a solution works or not.
A few things to check:
are you running multiple threads that are calling decode_video_packet()? If you are: don't do that! FFmpeg has built-in support for multi-threaded decoding, and you should let FFmpeg do threading internally and transparently.
you are calling av_free_packet() right after calling the frame decoder function, but at that point it may not yet have had a chance to copy the contents. You should probably let decode_video_packet() free the packet instead, after calling avcodec_decode_video2().
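A minimal sketch of the first point, i.e. letting FFmpeg thread the decoder internally instead of driving it from several application threads (standard FFmpeg 2.8 API; video_decode_context is taken from the code above, while video_decoder is a hypothetical AVCodec pointer):
// Configure threading on the codec context *before* avcodec_open2().
video_decode_context->thread_count = 0;                                 // 0 = let FFmpeg pick a thread count
video_decode_context->thread_type  = FF_THREAD_FRAME | FF_THREAD_SLICE; // allow either threading model
if (avcodec_open2(video_decode_context, video_decoder, NULL) < 0) {
    std::cerr << "Failed to open video decoder" << std::endl;
}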
General debugging advice:
run it without any threading and see if that works;
if it does, and with threading it fails, use thread debuggers such as tsan or helgrind to help in finding race conditions that point to your code.
It can also help to know whether the output you're getting is reproducible (this suggests a non-threading-related bug in your code) or changes from one run to the other (this suggests a race condition in your code).
And yes, the periodic clean-ups are because of keyframes.

Windows MFT (Media Foundation Transform) decoder not returning proper sample time or duration

To decode an H.264 stream with a Windows Media Foundation Transform, the workflow is currently something like this:
IMFSample* sample;
sample->SetSampleTime(time_in_ns);          // note: MF sample times are in 100-ns units
sample->SetSampleDuration(duration_in_ns);
sample->AddBuffer(buffer);

// Feed the IMFSample to the decoder
mDecoder->ProcessInput(0, sample, 0);

// Get output from the decoder.
/* create outputsample that will receive content */ { ... }
MFT_OUTPUT_DATA_BUFFER output = {0};
output.pSample = outputsample;
DWORD status = 0;
HRESULT hr = mDecoder->ProcessOutput(0, 1, &output, &status);

if (output.pEvents) {
    // We must release this, as per the IMFTransform::ProcessOutput()
    // MSDN documentation.
    output.pEvents->Release();
    output.pEvents = nullptr;
}

if (hr == MF_E_TRANSFORM_STREAM_CHANGE) {
    // Type change, probably a geometric aperture change.
    // Reconfigure the decoder output type, so that GetOutputMediaType()
    // reflects the new format.
} else if (hr == MF_E_TRANSFORM_NEED_MORE_INPUT) {
    // Not enough input to produce output.
} else if (!output.pSample) {
    return S_OK;
} else {
    // Process output
}
When we have fed all data to the MFT decoder, we must drain it:
mDecoder->ProcessMessage(MFT_MESSAGE_COMMAND_DRAIN, 0);
Now, one thing about the WMF H.264 decoder is that it will typically not output anything before it has been fed over 30 compressed H.264 frames, regardless of the size of the H.264 sliding window. Latency is very high...
I'm encountering an issue that is very troublesome.
I have a video made only of keyframes, with just 15 frames, each 2 s long, and the first frame has a non-zero presentation time (the stream is from live content, so the first frame's timestamp is typically epoch-based).
So without draining the decoder, nothing will come out of the decoder as it hasn't received enough frames.
However, once the decoder is drained, the decoded frames do come out. HOWEVER, the MFT decoder has set all durations to just 33.6 ms, and the presentation time of the first sample coming out is always 0.
The original duration and presentation time have been lost.
If you provide over 30 frames to the h264 decoder, then both duration and pts are valid...
I haven't yet found a way to get the WMF decoder to output samples with the proper value.
It appears that if you have to drain the decoder before it has output any samples by itself, then it's totally broken...
Has anyone experienced such problems? How did you get around it?
Thank you in advance
Edit: a sample of the video is available on http://people.mozilla.org/~jyavenard/mediatest/fragmented/1301869.mp4
Playing this video with Firefox will cause it to play extremely quickly due to the problems described above.
I'm not sure that your work flow is correct. I think you should do something like this:
do
{
    ...
    hr = mDecoder->ProcessInput(0, sample, 0);
    if (FAILED(hr))
        break;
    ...
    hr = mDecoder->ProcessOutput(0, 1, &output, &status);
    if (FAILED(hr) && hr != MF_E_TRANSFORM_NEED_MORE_INPUT)
        break;
}
while (hr == MF_E_TRANSFORM_NEED_MORE_INPUT);

if (SUCCEEDED(hr))
{
    // You have a valid decoded frame here
}
The idea is to keep calling ProcessInput/ProcessOutput while ProcessOutput returns MF_E_TRANSFORM_NEED_MORE_INPUT, which means the decoder needs more input. I think that with this loop you won't need to drain the decoder.

My OpenAL C++ audio streaming buffer glitching

This is my first time coding sound generation with OpenAL in C++.
What I want to do is generate an endless sine wave using double buffering.
The problem is that the sound glitches/lags. I think it happens between buffers, and I don't know why.
My code:
void _OpenALEngine::play()
{
    if (!m_running && !m_threadRunning)
    {
        ALfloat sourcePos[] = {0,0,0};
        ALfloat sourceVel[] = {0,0,0};
        ALfloat sourceOri[] = {0,0,0,0,0,0};
        alGenSources(1, &FSourceID);
        alSourcefv(FSourceID, AL_POSITION, sourcePos);
        alSourcefv(FSourceID, AL_VELOCITY, sourceVel);
        alSourcefv(FSourceID, AL_DIRECTION, sourceOri);
        GetALError();

        ALuint FBufferID[2];
        alGenBuffers(2, &FBufferID[0]);
        GetALError();

        // Gain
        ALfloat listenerPos[] = {0,0,0};
        ALfloat listenerVel[] = {0,0,0};
        ALfloat listenerOri[] = {0,0,0,0,0,0};
        alListenerf(AL_GAIN, 1.0);
        alListenerfv(AL_POSITION, listenerPos);
        alListenerfv(AL_VELOCITY, listenerVel);
        alListenerfv(AL_ORIENTATION, listenerOri);
        GetALError();

        alSourceQueueBuffers(FSourceID, 2, &FBufferID[0]);
        GetALError();

        alSourcePlay(FSourceID);
        GetALError();

        m_running = true;
        m_threadRunning = true;
        Threading::Thread thread(Threading::ThreadStart(this, &_OpenALEngine::threadPlaying));
        thread.Start();
    }
}

Void _OpenALEngine::threadPlaying()
{
    while (m_running)
    {
        // Check how much data is processed in OpenAL's internal queue.
        ALint Processed;
        alGetSourcei(FSourceID, AL_BUFFERS_PROCESSED, &Processed);
        GetALError();

        // Add more buffers while we need them.
        while (Processed--)
        {
            alSourceUnqueueBuffers(FSourceID, 1, &BufID);
            runBuffer(); // <--- Generate the sine wave and submit the array to the submitBuffer method.
            alSourceQueueBuffers(FSourceID, 1, &BufID);

            ALint val;
            alGetSourcei(FSourceID, AL_SOURCE_STATE, &val);
            if (val != AL_PLAYING)
            {
                alSourcePlay(FSourceID);
            }
        }
        // Don't kill the CPU.
        Thread::Sleep(1);
    }
    m_threadRunning = false;
    return Void();
}

void _OpenALEngine::submitBuffer(byte* buffer, int length)
{
    // Submit more data to OpenAL
    alBufferData(BufID, AL_FORMAT_MONO8, buffer, length * sizeof(byte), 44100);
}
I generate the sine wave in the runBuffer() method. The sine generator is correct, because when I increase the buffer array from 4096 to 40960 samples the glitches occur at longer intervals. Thank you very much if someone knows the problem and will share it :)
Similar problems are all over the internet and I'm not 100% sure this is the solution to this one. But it probably is, and if not it might at least help others. Most other threads are on different forums and I'm not registering everywhere just to share my knowledge...
The code below is what I came up with after 2 days of experimenting. Most solutions I found did not work for me...
(It's not exactly my code; I stripped out some parts specific to my case, so I'm sorry if there are typos or similar that prevent it from being copied verbatim.)
My experiments were on an iPhone. Some of the things I found out might be iOS-specific.
The problem is that there is no guarantee at what point a processed buffer is marked as such and becomes available for unqueueing. Trying to build a version that sleeps until a buffer becomes available again, I saw that this might happen much later than expected (I use very small buffers). So I realised that the common idea of waiting until a buffer is available (which works for most frameworks, but not OpenAL) is wrong. Instead you should wait until the time you should enqueue another buffer.
With that you have to give up the idea of double-buffering. When the time comes you should check if a buffer exists and unqueue it. But if none is available, you need to create a third...
Waiting for when a buffer should be enqueued can be done by calculating times relative to the system clock, which worked fairly well for me, but I decided to go for a version where I rely on a time source that is definitely in sync with OpenAL. The best I came up with was waiting depending on what's left in the queue. Here, iOS seems not fully in accordance with the OpenAL spec, because AL_SAMPLE_OFFSET should be exact to one sample, but I never saw anything other than multiples of 2048. That's roughly 46 ms (46,000 µs) at 44,100 Hz, which is where the 50000 (microseconds) in the code comes from (a little more than the smallest unit iOS handles).
Depending on the block size this can easily be bigger. But with that code, alSourcePlay() was needed again only 3 times in the last ~hour (compared to up to 10 times per minute with other implementations that claimed to be the solution).
uint64 enqueued(0); // keep track of samples in queue
while (bKeepRunning)
{
    // check if enough is in the buffer and wait
    ALint off;
    alGetSourcei(m_Source, AL_SAMPLE_OFFSET, &off);
    uint32 left((enqueued - off) * 1000000 / SAMPLE_RATE);
    if (left > 50000)       // at least 50000 mic-secs in buffer
        usleep(left - 50000);

    // check for available buffer
    ALuint buffer;
    ALint processed;
    alGetSourcei(m_Source, AL_BUFFERS_PROCESSED, &processed);
    switch (processed)
    {
    case 0:  // no buffer to unqueue -> create new
        alGenBuffers(1, &buffer);
        break;
    case 1:  // one buffer to unqueue -> use that
        alSourceUnqueueBuffers(m_Source, 1, &buffer);
        enqueued -= BLOCK_SIZE_SAMPLES;
        break;
    default: // multiple buffers to unqueue -> take one, delete one
        {    // could also delete more if processed > 2,
             // but that doesn't happen often,
             // therefore simple implementation (will delete in next loop)
            ALuint bufs[2];
            alSourceUnqueueBuffers(m_Source, 2, bufs);
            alDeleteBuffers(1, bufs);
            buffer = bufs[1];
            enqueued -= 2 * BLOCK_SIZE_SAMPLES;
        }
        break;
    }

    // fill block
    alBufferData(buffer, AL_FORMAT_STEREO16, pData,
                 BLOCK_SIZE_SAMPLES * 4, SAMPLE_RATE);
    alSourceQueueBuffers(m_Source, 1, &buffer);

    // check state
    ALint state;
    alGetSourcei(m_Source, AL_SOURCE_STATE, &state);
    if (state != AL_PLAYING)
    {
        enqueued = BLOCK_SIZE_SAMPLES;
        alSourcePlay(m_Source);
    }
    else
        enqueued += BLOCK_SIZE_SAMPLES;
}
I have written OpenAL streaming servers so I know your pain. My instinct is to confirm you have spawned a separate thread for the I/O logic that makes your streaming audio data available, separate from the thread that holds your OpenAL code above. If not, this will cause your symptoms. Here is a simple launch of each logical chunk into its own thread:
#include <thread>
#include <chrono>

std::thread t1(launch_producer_streaming_io, chosen_file, another_input_parm);
std::this_thread::sleep_for(std::chrono::milliseconds(100));

std::thread t2(launch_consumer_openal, its_input_parm1, parm2);

// -------------------------

t1.join();
t2.join();
where launch_producer_streaming_io is a function, called with its input parameters, that services the input/output to continuously supply the audio data, and launch_consumer_openal is a function launched in its own thread where you instantiate your OpenAL class.