Figuring out a race condition - C++

I am building a screen recorder. I am using FFmpeg to build the video out of the frames I get from Google Chrome, but I get a green screen in the output video. I think there is a race condition between the threads, since I am not allowed to do the processing on the main thread. Here is what the code looks like.
The function below runs each time I get a new frame. I suspect the buffers filled by avpicture_fill & vpx_codec_get_cx_data are being overwritten before write_ivf_frame_header & WriteFile are done.
I am thinking of creating a queue into which this function pushes the pp::VideoFrame object, and another thread, synchronized with a mutex, dequeues it and does the processing below (see the sketch after the code).
What is the best solution for this problem, and what is the best way to debug it?
void EncoderInstance::OnGetFrame(int32_t result, pp::VideoFrame frame) {
    if (result != PP_OK)
        return;

    const uint8_t* data = static_cast<const uint8_t*>(frame.GetDataBuffer());
    pp::Size size;
    frame.GetSize(&size);
    uint32_t buffersize = frame.GetDataBufferSize();

    if (is_recording_) {
        vpx_codec_iter_t iter = NULL;
        const vpx_codec_cx_pkt_t *pkt;

        // copy the pixels into our "raw input" container.
        int bytes_filled = avpicture_fill(&pic_raw, data, AV_PIX_FMT_YUV420P, out_width, out_height);
        if (!bytes_filled) {
            Logger::Log("Cannot fill the raw input buffer");
            return;
        }

        if (vpx_codec_encode(&codec, &raw, frame_cnt, 1, flags, VPX_DL_REALTIME))
            die_codec(&codec, "Failed to encode frame");

        while ((pkt = vpx_codec_get_cx_data(&codec, &iter))) {
            switch (pkt->kind) {
                case VPX_CODEC_CX_FRAME_PKT:
                    glb_app_thread.message_loop().PostWork(callback_factory_.NewCallback(&EncoderInstance::write_ivf_frame_header, pkt));
                    glb_app_thread.message_loop().PostWork(callback_factory_.NewCallback(&EncoderInstance::WriteFile, pkt));
                    break;
                default:
                    break;
            }
        }
        frame_cnt++;
    }

    video_track_.RecycleFrame(frame);

    if (need_config_) {
        ConfigureTrack();
        need_config_ = false;
    } else {
        video_track_.GetFrame(
            callback_factory_.NewCallbackWithOutput(&EncoderInstance::OnGetFrame));
    }
}
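For the queue idea mentioned above, here is a minimal producer/consumer sketch, assuming a dedicated worker std::thread is acceptable in this setup. RawFrame, FrameQueue and the encodeOne callable are hypothetical names; the point is only the std::mutex / std::condition_variable hand-off, so the capture callback never blocks on encoding and the encoder never reads a buffer that is being refilled.
// Hypothetical producer/consumer queue (standard library only).
// OnGetFrame() would deep-copy the captured pixels into a RawFrame and Push()
// it before RecycleFrame(); a worker thread owns the encoder and the file
// writes, so the raw input buffer is never reused mid-encode.
#include <condition_variable>
#include <cstdint>
#include <deque>
#include <functional>
#include <mutex>
#include <vector>

struct RawFrame {                  // deep copy of one captured frame
    std::vector<uint8_t> pixels;
    int width = 0;
    int height = 0;
};

class FrameQueue {
public:
    void Push(RawFrame frame) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            frames_.push_back(std::move(frame));
        }
        cv_.notify_one();
    }
    RawFrame Pop() {               // blocks until a frame is available
        std::unique_lock<std::mutex> lock(mutex_);
        cv_.wait(lock, [this] { return !frames_.empty(); });
        RawFrame frame = std::move(frames_.front());
        frames_.pop_front();
        return frame;
    }
private:
    std::mutex mutex_;
    std::condition_variable cv_;
    std::deque<RawFrame> frames_;
};

// Worker loop, run on its own std::thread. encodeOne is a hypothetical callable
// wrapping the avpicture_fill / vpx_codec_encode / write body shown above.
void EncoderLoop(FrameQueue* queue, const std::function<void(const RawFrame&)>& encodeOne) {
    for (;;) {
        RawFrame frame = queue->Pop();
        encodeOne(frame);
    }
}
The key property is that the copy into RawFrame happens before video_track_.RecycleFrame(frame), so nothing the worker later reads can be overwritten by the next captured frame.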

Related

Sound playback using FFmpeg and libsoundio in C++

I am trying to make a video player desktop application in C++ using primarily FFmpeg and Qt6. As of now, I can decode and play video frames correctly at the right speed; that is not a problem. I am now trying to play back audio, which is much harder than I expected. I am using libsoundio for my audio library, but the documentation is really poor and there are not many examples/tutorials on it. I am also a beginner when it comes to audio programming, although I understand the basics. First off, if anyone can recommend an audio library for this type of job let me know, but I would like to use open-source libraries. Anyway, here is how I decode my audio data with FFmpeg. I'm not sure I am doing it correctly, as I could barely find documentation on that as well.
I have a struct that contains all the information which is initiated through a function:
struct VideoReader
{
bool valid;
int width, height;
int video_stream_index;
int audio_stream_index;
AVRational time_base;
AVFormatContext* av_format_ctx;
AVCodecContext* av_vi_codec_ctx;
AVCodecContext* av_au_codec_ctx;
AVPacket* packet;
AVFrame* frame;
SwsContext* sws_ctx;
SwrContext* swr_ctx;
};
The function that initializes it is quite long and not necessary to share, but it populates all those values except sws_ctx and swr_ctx.
Here is how I decode packets. This function is simplified; I left the video decoding out of it, and I'll take care of syncing once I can properly play back audio:
bool video_reader_read_au_frame(VideoReader *video_reader, unsigned char **frame_buffer)
{
// Unpack video_reader
auto& av_format_ctx = video_reader->av_format_ctx;
auto& av_codec_ctx = video_reader->av_au_codec_ctx;
auto& av_packet = video_reader->packet;
auto& av_frame = video_reader->frame;
auto& swr_ctx = video_reader->swr_ctx;
int& audio_stream_index = video_reader->audio_stream_index;
// Decode the video frame data
int response;
while (av_read_frame(av_format_ctx, av_packet) >= 0)
{
last_frame = false;
if (av_packet->stream_index != audio_stream_index)
{
av_packet_unref(av_packet);
continue;
}
response = avcodec_send_packet(av_codec_ctx, av_packet);
if (response < 0)
{
Logger::error("Could not decode packet.");
return false;
}
response = avcodec_receive_frame(av_codec_ctx, av_frame);
if (response == AVERROR(EAGAIN) || response == AVERROR_EOF)
{
av_packet_unref(av_packet);
continue;
}
else if (response < 0)
{
Logger::error("Could not decode packet.");
return false;
}
av_packet_unref(av_packet);
break;
}
// Initialize SwrContext
if (!swr_ctx) {
swr_ctx = swr_alloc_set_opts(nullptr,
av_codec_ctx->channel_layout, AV_SAMPLE_FMT_FLT,
av_codec_ctx->sample_rate, av_codec_ctx->channel_layout,
av_codec_ctx->sample_fmt, av_codec_ctx->sample_rate,
0, nullptr);
if (!swr_ctx)
{
Logger::error("Could not create SwrContext.");
return false;
}
if (swr_init(swr_ctx) < 0)
{
Logger::error("Could not initialize SwrContext.");
return false;
}
}
const int MAX_BUFFER_SIZE = av_samples_get_buffer_size(nullptr, av_frame->channels, av_frame->nb_samples, AV_SAMPLE_FMT_FLT, 1);
*frame_buffer = (unsigned char*)av_malloc(MAX_BUFFER_SIZE);
swr_convert(swr_ctx, frame_buffer, av_frame->nb_samples,
(const unsigned char**)av_frame->data, av_frame->nb_samples);
av_frame_unref(av_frame);
return true;
}
Here is how I would normally call this function:
VideoReader vr{};
if(!video_reader_open(&vr, "C:/Path/to/file.mp4"))
{
Logger::error("Could not initialize VideoReader.");
return 1;
}
unsigned char* buffer;
if(!video_reader_read_au_frame(&vr, &buffer))
{
Logger::error("Could not read audio data.");
return 1;
}
play_audio(&buffer); <-- Find a way to play audio once buffer has data in it
video_reader_close(&vr);
return 0;
Obviously I will loop over video_reader_read_au_frame(&vr, &buffer) to play back the whole video.
I believe my code puts the samples from the decoded frame into buffer, but I am really not sure. I am also unsure whether I need to convert to the AV_SAMPLE_FMT_FLT audio format, to something else, or just leave it as it is. For libsoundio, I kind of understand this example: http://libsound.io/, but I'm not sure I fully understand how this library works, especially the callback function. I know I have to pass buffer in outstream->userdata as a void pointer, but I don't know how to use it in the callback function. Any help or guidance would be greatly appreciated. Note that later on in this project I might want to send this data over a network to play the video on another computer in sync.
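For the callback part, here is a minimal sketch of what a libsoundio write_callback could look like, assuming the output stream is opened with SoundIoFormatFloat32 and that outstream->userdata points at a small struct wrapping the interleaved AV_SAMPLE_FMT_FLT samples produced above. PlaybackState and its fields are hypothetical; soundio_outstream_begin_write / soundio_outstream_end_write and the SoundIoChannelArea layout are the regular libsoundio API, as used in the sine example on libsound.io.
// Hypothetical playback state handed to libsoundio through outstream->userdata.
// Assumes the decoder produced interleaved AV_SAMPLE_FMT_FLT samples (as in
// video_reader_read_au_frame above) and that the stream format is
// SoundIoFormatFloat32 with the same sample rate and channel count.
#include <soundio/soundio.h>

struct PlaybackState {
    const float *samples;   // interleaved decoded samples
    int total_frames;       // number of frames available in `samples`
    int next_frame;         // current read position, in frames
};

static void write_callback(struct SoundIoOutStream *outstream,
                           int /*frame_count_min*/, int frame_count_max) {
    auto *state = static_cast<PlaybackState *>(outstream->userdata);
    const int channels = outstream->layout.channel_count;
    int frames_left = frame_count_max;

    while (frames_left > 0) {
        int frame_count = frames_left;
        struct SoundIoChannelArea *areas;
        if (soundio_outstream_begin_write(outstream, &areas, &frame_count))
            return;                          // a real player should report the error
        if (!frame_count)
            break;

        for (int frame = 0; frame < frame_count; ++frame) {
            const bool have_data = state->next_frame < state->total_frames;
            for (int ch = 0; ch < channels; ++ch) {
                // Write silence once the decoded buffer runs out.
                const float sample = have_data
                    ? state->samples[state->next_frame * channels + ch]
                    : 0.0f;
                *reinterpret_cast<float *>(areas[ch].ptr + areas[ch].step * frame) = sample;
            }
            if (have_data)
                ++state->next_frame;
        }

        if (soundio_outstream_end_write(outstream))
            return;
        frames_left -= frame_count;
    }
}
The struct would be assigned to outstream->userdata before soundio_outstream_open(), and a real player would refill it (or, better, a thread-safe ring buffer) from the decoding thread; the callback itself must never block on the decoder.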

Oboe Async Audio Extraction

I am trying to build an NDK-based C++ low-latency audio player which needs to support three operations for multiple audio sources:
Play from assets.
Stream from an online source.
Play from local device storage.
Starting from one of the Oboe samples provided by Google, I added another function to the NDKExtractor.cpp class to extract audio from a URL and render it to the audio device while reading from the source at the same time.
int32_t NDKExtractor::decode(char *file, uint8_t *targetData, AudioProperties targetProperties) {
LOGD("Using NDK decoder: %s",file);
// Extract the audio frames
AMediaExtractor *extractor = AMediaExtractor_new();
//using this method instead of AMediaExtractor_setDataSourceFd() as used for asset files in the rhythm game example
media_status_t amresult = AMediaExtractor_setDataSource(extractor, file);
if (amresult != AMEDIA_OK) {
LOGE("Error setting extractor data source, err %d", amresult);
return 0;
}
// Specify our desired output format by creating it from our source
AMediaFormat *format = AMediaExtractor_getTrackFormat(extractor, 0);
int32_t sampleRate;
if (AMediaFormat_getInt32(format, AMEDIAFORMAT_KEY_SAMPLE_RATE, &sampleRate)) {
LOGD("Source sample rate %d", sampleRate);
if (sampleRate != targetProperties.sampleRate) {
LOGE("Input (%d) and output (%d) sample rates do not match. "
"NDK decoder does not support resampling.",
sampleRate,
targetProperties.sampleRate);
return 0;
}
} else {
LOGE("Failed to get sample rate");
return 0;
};
int32_t channelCount;
if (AMediaFormat_getInt32(format, AMEDIAFORMAT_KEY_CHANNEL_COUNT, &channelCount)) {
LOGD("Got channel count %d", channelCount);
if (channelCount != targetProperties.channelCount) {
LOGE("NDK decoder does not support different "
"input (%d) and output (%d) channel counts",
channelCount,
targetProperties.channelCount);
}
} else {
LOGE("Failed to get channel count");
return 0;
}
const char *formatStr = AMediaFormat_toString(format);
LOGD("Output format %s", formatStr);
const char *mimeType;
if (AMediaFormat_getString(format, AMEDIAFORMAT_KEY_MIME, &mimeType)) {
LOGD("Got mime type %s", mimeType);
} else {
LOGE("Failed to get mime type");
return 0;
}
// Obtain the correct decoder
AMediaCodec *codec = nullptr;
AMediaExtractor_selectTrack(extractor, 0);
codec = AMediaCodec_createDecoderByType(mimeType);
AMediaCodec_configure(codec, format, nullptr, nullptr, 0);
AMediaCodec_start(codec);
// DECODE
bool isExtracting = true;
bool isDecoding = true;
int32_t bytesWritten = 0;
while (isExtracting || isDecoding) {
if (isExtracting) {
// Obtain the index of the next available input buffer
ssize_t inputIndex = AMediaCodec_dequeueInputBuffer(codec, 2000);
//LOGV("Got input buffer %d", inputIndex);
// The input index acts as a status if its negative
if (inputIndex < 0) {
if (inputIndex == AMEDIACODEC_INFO_TRY_AGAIN_LATER) {
// LOGV("Codec.dequeueInputBuffer try again later");
} else {
LOGE("Codec.dequeueInputBuffer unknown error status");
}
} else {
// Obtain the actual buffer and read the encoded data into it
size_t inputSize;
uint8_t *inputBuffer = AMediaCodec_getInputBuffer(codec, inputIndex,
&inputSize);
//LOGV("Sample size is: %d", inputSize);
ssize_t sampleSize = AMediaExtractor_readSampleData(extractor, inputBuffer,
inputSize);
auto presentationTimeUs = AMediaExtractor_getSampleTime(extractor);
if (sampleSize > 0) {
// Enqueue the encoded data
AMediaCodec_queueInputBuffer(codec, inputIndex, 0, sampleSize,
presentationTimeUs,
0);
AMediaExtractor_advance(extractor);
} else {
LOGD("End of extractor data stream");
isExtracting = false;
// We need to tell the codec that we've reached the end of the stream
AMediaCodec_queueInputBuffer(codec, inputIndex, 0, 0,
presentationTimeUs,
AMEDIACODEC_BUFFER_FLAG_END_OF_STREAM);
}
}
}
if (isDecoding) {
// Dequeue the decoded data
AMediaCodecBufferInfo info;
ssize_t outputIndex = AMediaCodec_dequeueOutputBuffer(codec, &info, 0);
if (outputIndex >= 0) {
// Check whether this is set earlier
if (info.flags & AMEDIACODEC_BUFFER_FLAG_END_OF_STREAM) {
LOGD("Reached end of decoding stream");
isDecoding = false;
} else {
// Valid index, acquire buffer
size_t outputSize;
uint8_t *outputBuffer = AMediaCodec_getOutputBuffer(codec, outputIndex,
&outputSize);
/*LOGV("Got output buffer index %d, buffer size: %d, info size: %d writing to pcm index %d",
outputIndex,
outputSize,
info.size,
m_writeIndex);*/
// copy the data out of the buffer
memcpy(targetData + bytesWritten, outputBuffer, info.size);
bytesWritten += info.size;
AMediaCodec_releaseOutputBuffer(codec, outputIndex, false);
}
} else {
// The outputIndex doubles as a status return if its value is < 0
switch (outputIndex) {
case AMEDIACODEC_INFO_TRY_AGAIN_LATER:
LOGD("dequeueOutputBuffer: try again later");
break;
case AMEDIACODEC_INFO_OUTPUT_BUFFERS_CHANGED:
LOGD("dequeueOutputBuffer: output buffers changed");
break;
case AMEDIACODEC_INFO_OUTPUT_FORMAT_CHANGED:
LOGD("dequeueOutputBuffer: output outputFormat changed");
format = AMediaCodec_getOutputFormat(codec);
LOGD("outputFormat changed to: %s", AMediaFormat_toString(format));
break;
}
}
}
}
// Clean up
AMediaFormat_delete(format);
AMediaCodec_delete(codec);
AMediaExtractor_delete(extractor);
return bytesWritten;
}
Now the problem I am facing is that this code first extracts all the audio data and saves it into a buffer, which then becomes part of AFileDataSource, which I derived from the DataSource class in the same sample.
Only after it is done extracting the whole file does playback start, through the onAudioReady() callback of the Oboe AudioStreamBuilder.
What I need is to play the chunks of the audio buffer while the source is still being read.
Optional query: Also, aside from the question above, this code blocks the UI even though I created a foreground service to communicate with the NDK functions that execute it. Any thoughts on this?
You probably solved this already, but for future readers...
You need a FIFO buffer to store the decoded audio. You can use Oboe's FIFO buffer, e.g. oboe::FifoBuffer.
You can have a low/high watermark for the buffer and a state machine, so you start decoding when the buffer is almost empty and you stop decoding when it's full (you'll figure out the other states that you need).
As a side note, I implemented such a player only to find out some time later that the AAC codec is broken on some devices (Xiaomi and Amazon come to mind), so I had to throw away the AMediaCodec/AMediaExtractor parts and use an AAC library instead.
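A rough sketch of the low/high watermark idea described above, assuming the decoder runs on its own thread. The FIFO itself is abstracted away: framesInFifo stands for whatever "frames currently buffered" counter your FIFO exposes, decodeOneChunk is a hypothetical callable that decodes the next chunk and pushes it into the FIFO, and the thresholds are arbitrary examples.
// Sketch of a two-state decode loop driven by low/high watermarks.
#include <atomic>
#include <chrono>
#include <cstdint>
#include <functional>
#include <thread>

constexpr int32_t kLowWatermarkFrames  = 8 * 1024;   // resume decoding below this
constexpr int32_t kHighWatermarkFrames = 32 * 1024;  // pause decoding above this

void DecodeLoop(std::atomic<int32_t>& framesInFifo,
                std::atomic<bool>& running,
                const std::function<void()>& decodeOneChunk) {
    bool filling = true;                              // the two states of the machine
    while (running.load()) {
        const int32_t available = framesInFifo.load();
        if (filling) {
            decodeOneChunk();                         // produces data and bumps framesInFifo
            if (available >= kHighWatermarkFrames) filling = false;
        } else {
            // FIFO is full enough: idle until the audio thread drains it.
            std::this_thread::sleep_for(std::chrono::milliseconds(5));
            if (available <= kLowWatermarkFrames) filling = true;
        }
    }
}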
You have to implement a ring buffer (or use the one implemented in the Oboe example, LockFreeQueue.h) and, from the extracting thread, copy the data into buffers that you push onto the ring buffer. On the other end of the ring buffer, the audio thread gets that data from the queue and copies it into the audio buffer. This happens in the onAudioReady(oboe::AudioStream *oboeStream, void *audioData, int32_t numFrames) callback that you have to implement in your class (see the Oboe docs; a minimal sketch follows below). Be sure to follow all the good practices on the audio thread (don't allocate/deallocate memory there, no mutexes, no file I/O, etc.).
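Here is a minimal sketch of the audio-thread side, assuming 16-bit PCM and a single-producer/single-consumer ring buffer that the extractor thread fills. The SpscRing class is a hypothetical stand-in for Oboe's FifoBuffer or the sample's LockFreeQueue; only the onAudioReady() signature and oboe::DataCallbackResult are the real Oboe API.
#include <oboe/Oboe.h>
#include <algorithm>
#include <atomic>
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical single-producer/single-consumer ring buffer of 16-bit samples.
class SpscRing {
public:
    explicit SpscRing(size_t capacity) : buf_(capacity + 1) {}

    // Called only from the extractor (producer) thread.
    size_t write(const int16_t* src, size_t n) {
        const size_t head = head_.load(std::memory_order_relaxed);
        const size_t tail = tail_.load(std::memory_order_acquire);
        const size_t free = (tail + buf_.size() - head - 1) % buf_.size();
        n = std::min(n, free);
        for (size_t i = 0; i < n; ++i)
            buf_[(head + i) % buf_.size()] = src[i];
        head_.store((head + n) % buf_.size(), std::memory_order_release);
        return n;
    }

    // Called only from the audio (consumer) thread.
    size_t read(int16_t* dst, size_t n) {
        const size_t tail = tail_.load(std::memory_order_relaxed);
        const size_t head = head_.load(std::memory_order_acquire);
        const size_t avail = (head + buf_.size() - tail) % buf_.size();
        n = std::min(n, avail);
        for (size_t i = 0; i < n; ++i)
            dst[i] = buf_[(tail + i) % buf_.size()];
        tail_.store((tail + n) % buf_.size(), std::memory_order_release);
        return n;
    }

private:
    std::vector<int16_t> buf_;
    std::atomic<size_t> head_{0};   // next write position
    std::atomic<size_t> tail_{0};   // next read position
};

// Audio callback: pull whatever is available, pad the rest with silence.
class StreamingCallback : public oboe::AudioStreamCallback {
public:
    StreamingCallback(SpscRing* ring, int channelCount)
        : ring_(ring), channelCount_(channelCount) {}

    oboe::DataCallbackResult onAudioReady(oboe::AudioStream* /*stream*/,
                                          void* audioData, int32_t numFrames) override {
        auto* out = static_cast<int16_t*>(audioData);
        const size_t wanted = static_cast<size_t>(numFrames) * channelCount_;
        const size_t got = ring_->read(out, wanted);
        if (got < wanted)  // underrun: never block the audio thread
            std::memset(out + got, 0, (wanted - got) * sizeof(int16_t));
        return oboe::DataCallbackResult::Continue;
    }

private:
    SpscRing* ring_;
    int channelCount_;
};
On an underrun the callback pads with silence rather than waiting, which keeps the audio thread free of locks and blocking calls, as recommended above.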
Optional query: A service does not run in a separate thread, so obviously if you call it from the UI thread it blocks the UI. Look at other types of services: you can use an IntentService, or a service with a Messenger that launches a separate thread on the Java side, or you can create threads on the C++ side using std::thread.

Repeating ffmpeg stream (libavcodec/libavformat)

I am using the various APIs from FFmpeg to draw videos in my application. So far this works very well. Since I also have GIFs, I want to loop them without having to load the file over and over again.
In my code the decoder loop looks like this:
AVPacket packet = {};
av_init_packet(&packet);
while (mIsRunning) {
int error = av_read_frame(mContext, &packet);
if (error == AVERROR_EOF) {
if(mRepeat) {
logger.info("EOF-repeat");
auto stream = mContext->streams[mVideoStream];
av_seek_frame(mContext, mVideoStream, 0, 0);
continue;
}
if (mReadVideo) {
avcodec_send_packet(mVideoCodec, nullptr);
}
if (mReadAudio) {
avcodec_send_packet(mAudioCodec, nullptr);
}
break;
}
if (error < 0) {
char err[AV_ERROR_MAX_STRING_SIZE];
av_make_error_string(err, AV_ERROR_MAX_STRING_SIZE, error);
logger.error("Failed to read next frame from stream: ", err);
throw std::runtime_error("Stream reading failed");
}
if (packet.stream_index == mVideoStream && mReadVideo) {
int32 err;
{
std::lock_guard<std::mutex> l(mVideoLock);
err = avcodec_send_packet(mVideoCodec, &packet);
}
mImageEvent.notify_all();
while (err == AVERROR(EAGAIN) && mIsRunning) {
{
std::unique_lock<std::mutex> l(mReaderLock);
mReaderEvent.wait(l);
}
{
std::lock_guard<std::mutex> l(mVideoLock);
err = avcodec_send_packet(mVideoCodec, &packet);
}
}
}
av_packet_unref(&packet);
}
Reading a video to the end works perfectly well, and if I don't set mRepeat to true it properly hits EOF and stops parsing. However, when I use looping the following happens:
The video ends
AVERROR_EOF happens at av_read_frame
EOF-repeat is printed
A random frame is read from the stream (and rendered)
AVERROR_EOF happens at av_read_frame
EOF-repeat is printed
A random frame is read from the stream (and rendered)
...
You can imagine it like this: I have a GIF of a spinning globe, and after one full turn it just starts jumping around randomly, sometimes playing correctly for a fraction of a second, sometimes going backwards, and sometimes jumping around everywhere.
I have also tried several variants with avformat_seek_file. What other way would there be to reset everything to the beginning and start from scratch again?
I figured out that I also need to reset my IO context to the beginning:
if (mRepeat) {
    auto stream = mContext->streams[mVideoStream];
    avio_seek(mContext->pb, 0, SEEK_SET);
    avformat_seek_file(mContext, mVideoStream, 0, 0, stream->duration, 0);
    continue;
}
Now the video properly loops forever :)
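If stale frames from before the seek ever show up again, a common companion step (not needed in the fix above, which already works here) is to flush the decoder state right after the seek:
// Inside the mRepeat branch, right after avformat_seek_file():
avcodec_flush_buffers(mVideoCodec);      // drop frames buffered in the video decoder
if (mReadAudio)
    avcodec_flush_buffers(mAudioCodec);  // and in the audio decoder, if one is open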

SDL2 & SMPEG2 - Empty sound buffer trying to read a MP3

I'm trying to load an MP3 into a buffer using the SMPEG2 library, which comes with SDL2. Every SMPEG function call returns without error, but when I'm done, the sound buffer is full of zeros.
Here's the code :
bool LoadMP3(char* filename)
{
bool success = false;
const Uint32 Mp3ChunkLen = 4096;
SMPEG* mp3;
SMPEG_Info infoMP3;
Uint8 * ChunkBuffer;
Uint32 MP3Length = 0;
// Allocate a chunk buffer
ChunkBuffer = (Uint8*)malloc(Mp3ChunkLen);
SDL_RWops *mp3File = SDL_RWFromFile(filename, "rb");
if (mp3File != NULL)
{
mp3 = SMPEG_new_rwops(mp3File, &infoMP3, 1, 0);
if(mp3 != NULL)
{
if(infoMP3.has_audio)
{
Uint32 readLen;
// Inform the MP3 of the output audio specifications
SMPEG_actualSpec(mp3, &asDeviceSpecs); // static SDL_AudioSpec asDeviceSpecs; containing valid values after a call to SDL_OpenAudioDevice
// Enable the audio and disable the video.
SMPEG_enableaudio(mp3, 1);
SMPEG_enablevideo(mp3, 0);
// Play the MP3 once to get the size of the needed finale buffer
SMPEG_play(mp3);
while ((readLen = SMPEG_playAudio(mp3, ChunkBuffer, Mp3ChunkLen)) > 0)
{
MP3Length += readLen;
}
SMPEG_stop(mp3);
if(MP3Length > 0)
{
// Reallocate the buffer with the new length (if needed)
if (MP3Length != Mp3ChunkLen)
{
ChunkBuffer = (Uint8*)realloc(ChunkBuffer, MP3Length);
}
// Replay the entire MP3 into the new ChunkBuffer.
SMPEG_rewind(mp3);
SMPEG_play(mp3);
bool readBackSuccess = (MP3Length == SMPEG_playAudio(mp3, ChunkBuffer, MP3Length));
SMPEG_stop(mp3);
if(readBackSuccess)
{
// !!! Here, ChunkBuffer contains only zeros !!!
success = true;
}
}
}
SMPEG_delete(mp3);
mp3 = NULL;
}
SDL_RWclose(mp3File);
mp3File = NULL;
}
free(ChunkBuffer);
return success;
}
The code is largely based on SDL_Mixer, which I cannot use for my project because of its limitations.
I know Ogg Vorbis would be a better choice of file format, but I'm porting a very old project, and it worked entirely with MP3s.
I'm sure the sound system is initialized correctly because I can play WAV files just fine. It's initialized with a frequency of 44100 Hz, 2 channels, 1024 samples, and the AUDIO_S16SYS format (the latter being, as I understood from the SMPEG source, mandatory).
I've calculated the anticipated buffer size, based on the bitrate, the amount of data in the MP3 and the OpenAudioDevice audio specs, and everything is consistent.
I cannot figure out why everything but the buffer data seems to be working.
UPDATE #1
Still trying to figure out what's wrong, I thought MP3 support might not be working, so I created the following test:
SMPEG *mpeg;
SMPEG_Info info;
mpeg = SMPEG_new(filename,&info, 1);
SMPEG_play(mpeg);
do { SDL_Delay(50); } while(SMPEG_status(mpeg) == SMPEG_PLAYING);
SMPEG_delete(mpeg);
The MP3 played. So, the decoding should actually be working. But that's not what I need ; I really need the sound buffer data so I can send it to my mixer.
After much tinkering, research, and digging through the SMPEG source code, I realized that I had to pass 1 as the sdl_audio parameter to the SMPEG_new_rwops function.
The comment found in smpeg.h is misleading:
The sdl_audio parameter indicates if SMPEG should initialize the SDL audio subsystem. If not, you will have to use the SMPEG_playaudio() function below to extract the decoded data.
Since the audio subsystem was already initialized and I was using the SMPEG_playaudio() function, I had no reason to think I needed this parameter to be non-zero. In SMPEG, this parameter triggers the audio decompression at opening time, but even though I called SMPEG_enableaudio(mp3, 1); the data is never reparsed. This might be a bug or a shady feature.
I had another problem with the freesrc parameter, which needed to be 0 since I freed the SDL_RWops object myself.
For future reference, once ChunkBuffer has the MP3 data, it needs to pass through SDL_BuildAudioCVT/SDL_ConvertAudio if it's to be played through an already opened audio device.
The final working code is :
// bool ReadMP3ToBuffer(char* filename)
bool success = false;
const Uint32 Mp3ChunkLen = 4096;
SDL_AudioSpec mp3Specs;
SMPEG* mp3;
SMPEG_Info infoMP3;
Uint8 * ChunkBuffer;
Uint32 MP3Length = 0;
// Allocate a chunk buffer
ChunkBuffer = (Uint8*)malloc(Mp3ChunkLen);
memset(ChunkBuffer, 0, Mp3ChunkLen);
SDL_RWops *mp3File = SDL_RWFromFile(filename, "rb"); // filename is a char* passed to the function.
if (mp3File != NULL)
{
mp3 = SMPEG_new_rwops(mp3File, &infoMP3, 0, 1);
if(mp3 != NULL)
{
if(infoMP3.has_audio)
{
Uint32 readLen;
// Get the MP3 audio specs for later conversion
SMPEG_wantedSpec(mp3, &mp3Specs);
SMPEG_enablevideo(mp3, 0);
// Play the MP3 once to get the size of the needed buffer in relation with the audio specs
SMPEG_play(mp3);
while ((readLen = SMPEG_playAudio(mp3, ChunkBuffer, Mp3ChunkLen)) > 0)
{
MP3Length += readLen;
}
SMPEG_stop(mp3);
if(MP3Length > 0)
{
// Reallocate the buffer with the new length (if needed)
if (MP3Length != Mp3ChunkLen)
{
ChunkBuffer = (Uint8*)realloc(ChunkBuffer, MP3Length);
memset(ChunkBuffer, 0, MP3Length);
}
// Replay the entire MP3 into the new ChunkBuffer.
SMPEG_rewind(mp3);
SMPEG_play(mp3);
bool readBackSuccess = (MP3Length == SMPEG_playAudio(mp3, ChunkBuffer, MP3Length));
SMPEG_stop(mp3);
if(readBackSuccess)
{
SDL_AudioCVT convertedSound;
// NOTE : static SDL_AudioSpec asDeviceSpecs; containing valid values after a call to SDL_OpenAudioDevice
if(SDL_BuildAudioCVT(&convertedSound, mp3Specs.format, mp3Specs.channels, mp3Specs.freq, asDeviceSpecs.format, asDeviceSpecs.channels, asDeviceSpecs.freq) >= 0)
{
Uint32 newBufferLen = MP3Length*convertedSound.len_mult;
// Make sure the audio length is a multiple of a sample size to avoid sound clicking
int sampleSize = ((asDeviceSpecs.format & 0xFF)/8)*asDeviceSpecs.channels;
newBufferLen &= ~(sampleSize-1);
// Allocate the new buffer and proceed with the actual conversion.
convertedSound.buf = (Uint8*)malloc(newBufferLen);
memcpy(convertedSound.buf, ChunkBuffer, MP3Length);
convertedSound.len = MP3Length;
if(SDL_ConvertAudio(&convertedSound) == 0)
{
// Save convertedSound.buf and convertedSound.len_cvt for future use in your mixer code.
// Dont forget to free convertedSound.buf once it's not used anymore.
success = true;
}
}
}
}
}
SMPEG_delete(mp3);
mp3 = NULL;
}
SDL_RWclose(mp3File);
mp3File = NULL;
}
free(ChunkBuffer);
return success;
NOTE: Some MP3 files I tried lost a few milliseconds and cut off too early during playback when I resampled them with this code. Some others didn't. I could reproduce the same behaviour in Audacity, so I'm not sure what's going on. There may still be a bug in my code, a bug in SMPEG, or it may be a known issue with the MP3 format itself. If someone can provide an explanation in the comments, that would be great!

How can I use the mutex lock functions appropriately for three threads in C++?

I have a question about threads, but I think it is difficult to explain, so be patient.
I have two pthreads in a Qt/C++ program and one signal. The signal fills a buffer, one thread copies the buffer, and one thread processes the buffer's data:
fill buffer1 ---- copy buffer1 to buffer2 ---- process buffer2's data
Signal's function:
void MainWindow::TcpData()
{
if(socket->bytesAvailable()>(DATA_LEN)) {
QByteArray array = socket ->readAll();
if(pthread_mutex_trylock(&data_mutex)==0)
{
if((p+array.size())<(MAX_TCP_BUFFER_SIZE+100))
{
memcpy(BUFFER+p,array.data(),array.size());
p+=array.size();
}
else {
p=0;
memcpy(BUFFER,array.data(),array.size());
p+=array.size();
}
pthread_mutex_unlock(&data_mutex);
}
}
}
Thread 1:
void *MainWindow::copyTCPdata() {
pthread_mutex_lock(&data_mutex);
while(1) {
if(data_ready) {
pthread_cond_wait(&data_cond,&data_mutex);
continue;
}
/* Move the last part of the previous buffer, that was not processed,
* on the start of the new buffer. */
memcpy(data, data+DATA_LEN, (FULL_LEN-1)*4);
/* Read the new data. */
memcpy(data+(FULL_LEN-1)*4, BUFFER,DATA_LEN);
memcpy(BUFFER,BUFFER+DATA_LEN,p);
if(p>DATA_LEN) p=p-DATA_LEN;
data_ready = 1;
pthread_cond_signal(&data_cond);
pthread_mutex_unlock(&data_mutex);
} }
Thread 2:
void *MainWindow::processData() {
while(1) {
if(!data_ready) {
pthread_cond_wait(&data_cond,&data_mutex);
continue;
}
data_ready = 0;
pthread_cond_signal(&data_cond);
pthread_mutex_unlock(&data_mutex);
detectSignal(data);
pthread_mutex_lock(&data_mutex);
}
}
I think I am losing data this way, but the program is more stable. Can someone suggest a better solution?
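For reference, here is a minimal sketch of the copy/process handshake the two threads above appear to be aiming for, written with C++11 primitives instead of raw pthreads; the buffer names are hypothetical, and the Qt signal handler would append incoming bytes to input under the same mutex.
// Minimal two-thread handshake sketch (standard library only).
#include <condition_variable>
#include <cstdint>
#include <mutex>
#include <vector>

std::mutex m;
std::condition_variable cv;
std::vector<uint8_t> input;        // filled by the TcpData() signal handler
std::vector<uint8_t> work;         // handed to the processing thread
bool work_ready = false;

void copyThread() {                // plays the role of copyTCPdata()
    for (;;) {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [] { return !input.empty() && !work_ready; });
        work.swap(input);          // hand the filled buffer over in O(1)
        input.clear();
        work_ready = true;
        lock.unlock();
        cv.notify_all();
    }
}

void processThread() {             // plays the role of processData()
    for (;;) {
        std::vector<uint8_t> local;
        {
            std::unique_lock<std::mutex> lock(m);
            cv.wait(lock, [] { return work_ready; });
            local.swap(work);
            work_ready = false;
        }
        cv.notify_all();
        // detectSignal(local);    // process outside the lock so TcpData() is never stalled
    }
}
The important details are that every wait uses a predicate (so spurious wakeups and missed signals are harmless) and that the heavy processing happens outside the critical section; the TcpData() side would then take a short blocking lock (or push into a queue) instead of trylock, so incoming data is not silently dropped whenever the lock happens to be held.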