How do I lock access to a bool with a mutex? - C++

Solved: I was copying the Map instance into the new thread instead of using a reference/pointer to the original.
I'm learning how to use multiple threads. For this I'm programming a little game where the game runs in the main thread and the next chunk of the level is loaded in another thread. I set up a mutex around a vector that tells the loading thread what to load next. Inside this mutex I also have a boolean that tells the thread when to terminate.
Initialising the thread in Map::Map():
pending_orders_mutex = SDL_CreateMutex();
can_process_order = SDL_CreateCond();
chunk_loader_thread = SDL_CreateThread(Map::chunk_loader,"chunk_loader_thread",(void*)this);
Loading thread:
int Map::chunk_loader(void * data)
{
    Map map = *(Map*)data;
    bool kill_this_thread = false;
    Chunk_Order actual_order;
    actual_order.load_graphics = false;
    actual_order.x = 0;
    actual_order.y = 0;
    while (!kill_this_thread)
    {
        SDL_LockMutex(map.pending_orders_mutex); // lock mutex
        printf("3-kill_chunk_loader_thread: %d\n", map.kill_chunk_loader_thread);
        kill_this_thread = map.kill_chunk_loader_thread;
        printf("4-kill_chunk_loader_thread: %d\n", map.kill_chunk_loader_thread);
        if (!kill_this_thread)
        {
            if (map.pending_orders.size())
            {
                actual_order = map.pending_orders.back();
                map.pending_orders.pop_back();
                printf("in thread processing order\n");
            }
            else
            {
                printf("in thread waiting for order\n");
                SDL_CondWait(map.can_process_order, map.pending_orders_mutex);
            }
        }
        SDL_UnlockMutex(map.pending_orders_mutex); // unlock mutex
        //load actual order
    }
    printf("thread got killed\n");
    return 0;
}
Killing the thread (from the main thread):
SDL_LockMutex(pending_orders_mutex); // lock mutex
printf("setting kill command\n");
printf("1-kill_chunk_loader_thread: %d\n", kill_chunk_loader_thread);
kill_chunk_loader_thread = true; // send kill command
printf("2-kill_chunk_loader_thread: %d\n", kill_chunk_loader_thread);
SDL_CondSignal(can_process_order); // signal that order was pushed
SDL_UnlockMutex(pending_orders_mutex); // unlock mutex
SDL_WaitThread(chunk_loader_thread, NULL);
Console output:
3-kill_chunk_loader_thread: 0
4-kill_chunk_loader_thread: 0
in thread waiting for order
setting kill command
1-kill_chunk_loader_thread: 0
2-kill_chunk_loader_thread: 1
3-kill_chunk_loader_thread: 0
4-kill_chunk_loader_thread: 0
in thread waiting for order
Why does the main thread's change to kill_chunk_loader_thread never show up in the loading thread?

First of all, you should try to include a minimal complete program in the question.
It looks like you set kill_chunk_loader_thread = true,
but you never set map.kill_chunk_loader_thread = true.
The declaration of map is missing from your question, but my guess is that you didn't take a reference (or pointer) to the original object; you made a struct copy, so changing one object doesn't affect the other at all.
EDIT:
Map map = *(Map*)data; copies the Map object (via the default copy constructor, I assume), so from then on changes to the original are never seen by the copy.
You should keep working through the pointer instead: Map* pMap = (Map*)data;
and read through that pointer, e.g. kill_this_thread = pMap->kill_chunk_loader_thread;, so you read from the original Map.
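A minimal sketch of what the loader loop could look like with the pointer fix applied (member names are taken from the question; the wait loop is reorganised slightly so the kill flag and the queue are always re-checked after waking):
// Sketch only -- assumes the Map members shown in the question.
int Map::chunk_loader(void *data)
{
    Map *map = (Map *)data;            // keep the pointer, do not copy the Map
    bool kill_this_thread = false;

    while (!kill_this_thread)
    {
        SDL_LockMutex(map->pending_orders_mutex);

        // Sleep until there is either an order or a kill request.
        while (!map->kill_chunk_loader_thread && map->pending_orders.empty())
            SDL_CondWait(map->can_process_order, map->pending_orders_mutex);

        kill_this_thread = map->kill_chunk_loader_thread;   // reads the shared flag

        Chunk_Order actual_order;
        bool have_order = false;
        if (!kill_this_thread)
        {
            actual_order = map->pending_orders.back();
            map->pending_orders.pop_back();
            have_order = true;
        }
        SDL_UnlockMutex(map->pending_orders_mutex);

        if (have_order)
        {
            // load the chunk described by actual_order, outside the lock
        }
    }
    printf("thread got killed\n");
    return 0;
}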

Related

Second thread is never triggered

I've been struggling with a multithreading issue for a bit. I've written some simple code to try to isolate the issue and I'm not finding it. What's happening is that the first thread is woken up when data is sent to it, but the second one never is. They each have their own condition_variable, yet it doesn't seem to matter. Ultimately, what I'm trying to do is have a few long-running threads that each do a single dedicated task when needed and stay in a wait state when not needed. Running them each in their own thread is important and a requirement.
Here's the code:
#include <glib.h>
#include <pthread.h>
#include <string>
#include <mutex>
#include <condition_variable>
#include <unistd.h>

#define NUM_THREADS 2

bool DEBUG = true;
pthread_t threads[NUM_THREADS];
std::mutex m_0;
std::mutex m_1;
std::condition_variable cov_0;
std::condition_variable cov_1;
bool dataReady_0 = false;
bool dataReady_1 = false;
bool keepRunning[NUM_THREADS] = { true };

// forward declarations so the snippet compiles top to bottom
void start_threads (int thread_count);
void *custom_thread_0 (void *pVoid);
void *custom_thread_1 (void *pVoid);

void date_update (guint source_id, const char *json_data) {
    if (DEBUG) {
        start_threads(2);
        sleep(2);
        DEBUG = false;
    }
    g_print("From source id=%d\n", source_id);
    switch (source_id) {
        case 0:
            dataReady_0 = true;
            cov_0.notify_one();
            break;
        case 1:
            dataReady_1 = true;
            cov_1.notify_one();
            break;
    }
}

void start_threads (int thread_count) {
    int rc;
    switch (thread_count) {
        case 2:
            rc = pthread_create(&threads[1], nullptr, custom_thread_1, nullptr);
            if (rc) {
                g_print("Error:unable to create thread(1), return code(%d)\n", rc);
            }
        case 1:
            rc = pthread_create(&threads[0], nullptr, custom_thread_0, nullptr);
            if (rc) {
                g_print("Error:unable to create thread(0), return code(%d)\n", rc);
            }
    }
}

void *custom_thread_0 (void *pVoid) {
    g_print("Created thread for source id=0\n");
    while (keepRunning[0]) {
        // Wait until date_update() sends data
        std::unique_lock<std::mutex> lck(m_0);
        cov_0.wait(lck, [&]{return dataReady_0;});
        dataReady_0 = false;
        g_print("THREAD=0, DATA RECEIVED\n");
        lck.unlock();
    }
    pthread_exit(nullptr);
}

void *custom_thread_1 (void *pVoid) {
    g_print("Created thread for source id=1\n");
    while (keepRunning[1]) {
        // Wait until date_update() sends data
        std::unique_lock<std::mutex> lck(m_1);
        cov_1.wait(lck, [&]{return dataReady_1;});
        dataReady_1 = false;
        g_print("THREAD=1, DATA RECEIVED\n");
        lck.unlock();
    }
    pthread_exit(nullptr);
}
Here's the output. As you can see, the date_update function gets the "data" from the calling function for both source 0 and source 1, but only thread 0 ever seems to process anything. I'm at a bit of a loss as to the source of the problem.
Sending data for source id=1
From source id=1
Sending data for source id=0
From source id=0
THREAD=0, DATA RECEIVED
Sending data for source id=1
From source id=1
Sending data for source id=0
From source id=0
THREAD=0, DATA RECEIVED
Sending data for source id=1
From source id=1
Sending data for source id=0
From source id=0
THREAD=0, DATA RECEIVED
I'm sure I'm just missing a minor detail somewhere, but I'm fully willing to accept that perhaps I do not understand C/C++ threading correctly.
The 2nd thread is exiting because its keepRunning state flag (keepRunning[1]) is false. It's usually a good first step in debugging threads to log the start and exit of all threads.
But you have a much less obvious problem.
It does not appear that the appropriate mutex is held when the value of the condition variable's predicate is changed in date_update().
I'll break that down a bit more.
When cov_0.wait() is called, the predicate used is [&]{return dataReady_0;} (*), and the unique_lock passed is holding the mutex m_0. This means that whenever the value of the predicate might change, the mutex m_0 must be held.
This predicate is quite simple and will change value whenever the global variable dataReady_0 changes value.
In date_update() there is code to change the value of dataReady_0 and the mutex m_0 is not held when doing this. There should be a scoped_lock or unique_lock in the block that changes the global variable's state.
It will still mostly work without this, but you have a race! It will fail eventually!
The condition variable may check and see that the predicate is false, then another thread changes the predicate's value and does a notify, and then the first thread waits on the condition variable. It misses the notify because it was not yet waiting when it was sent. The use of the mutex to prevent the predicate from changing in a way that races with the notification is a critical component of what makes this work.
(*) You don't need the capture [&] here. This lambda could be stateless.
You should initialize all elements of the built-in array:
bool keepRunning[2] = { true, true };
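Putting both points together, here is a hedged sketch of the corrected update path, using the question's globals. std::unique_lock is one of the two lock types mentioned above, and the DEBUG/start_threads bootstrap is omitted:
void date_update (guint source_id, const char *json_data) {
    g_print("From source id=%d\n", source_id);
    switch (source_id) {
        case 0: {
            {
                std::unique_lock<std::mutex> lk(m_0);  // hold m_0 while the predicate's value changes
                dataReady_0 = true;
            }
            cov_0.notify_one();   // notify after releasing (or while holding) the lock
            break;
        }
        case 1: {
            {
                std::unique_lock<std::mutex> lk(m_1);  // hold m_1 while the predicate's value changes
                dataReady_1 = true;
            }
            cov_1.notify_one();
            break;
        }
    }
}
With the array initialized as bool keepRunning[NUM_THREADS] = { true, true };, both worker loops keep running and neither notification can race past a thread that is about to wait.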

Thread synchronisation with SDL thread library

I am attempting to write a thread-safe task queue for multithreading in C++, using SDL2's threading library.
The thread function which runs on all threads is as follows:
int threadFunc(void * pData)
{
    ThreadData* data = (ThreadData*)pData;
    SDLTaskManager* pool = data->pool;
    Task* task = nullptr;
    while (true)
    {
        SDL_LockMutex(pool->mLock);
        while (!pool->mRunning && pool->mCurrentTasks.empty())
        {
            //mutex is unlocked, then locked again when signal received
            SDL_CondWait(pool->mConditionFlag, pool->mLock);
            if (pool->mShuttingDown)
                return 0;
        }
        //mutex is locked at this stage so no other threads can alter contents of deque
        //code inside if block should not be executed if deque is empty
        if (!pool->mCurrentTasks.empty())
        {
            /*out of range error here*/
            task = pool->mCurrentTasks.front();
            pool->mCurrentTasks.pop_front();
        }
        if (task != nullptr)
        {
            pool->notifyThreadWorking(true);
            data->taskCount++;
        }
        else
        {
            pool->stop();
            SDL_UnlockMutex(pool->mLock);
            continue;
        }
        SDL_UnlockMutex(pool->mLock);
        task->execute();
        SDL_LockMutex(pool->mLock);
        pool->notifyThreadWorking(false);
        pool->mCompleteTasks.push_back(task);
        SDL_UnlockMutex(pool->mLock);
        task = nullptr;
    }
    return 0;
}
As you can see from the comments in the code, an out-of-range error occurs inside the if block, where the deque turns out to be empty. However, there is a check there to make sure that the code is only executed if the deque is not empty, and the mutex is (re)locked by SDL_CondWait, so no other thread should be able to modify the deque until the mutex is unlocked again.
The producer code is as follows:
SDL_LockMutex(pool->mLock);
for (int i = 0; i < numTasks; i++)
{
    pool->mCurrentTasks.push_back(new Task());
}
pool->mRunning = true;
SDL_CondBroadcast(pool->mConditionFlag);
SDL_UnlockMutex(pool->mLock);
The fact that the code inside the if block is executed at all shows that at the time if (!pool->mCurrentTasks.empty()) is evaluated the deque has elements, but not when it reaches task = pool->mCurrentTasks.front();. By my understanding of mutexes this shouldn't be possible. How can this be?

using mutexes and condition_variables

I'm looking for the correct pattern for the interconnection between two threads using the Boost.Interprocess library. I don't think there is anything specific to this library compared with typical parallel programming using the standard library,
so I'm really looking for the basic technique and an understanding of how these sync primitives are used.
There are two threads, a writer and a reader, which use shared memory. A named mutex is used to synchronise access to the objects (a string and a vector) in shared memory. A condition variable is used to wait until the writer has written data into shared memory. So the scenario is:
- the reader starts, initialises the condition variable on the named mutex with the condition that the data vector should be non-empty, and waits...
- the writer locks the mutex and fills up the vector
- the writer "notifies one" that writing to the data vector has finished and unlocks the mutex
- the reader receives the notification, locks the mutex and processes the data in the vector.
After that, the reader should notify the writer that the reading has finished and the vector can be filled again with a new portion of the data.
I'm not sure how to set up all these waits and notifies correctly. It looks like my version deadlocks. Please advise.
Reader's thread code:
namespace bi = boost::interprocess;
using bi_char_vector = bi::vector<char, CharAllocator>;

bi::named_mutex mtx{bi::open_or_create, "mtx"};
bi::named_condition cnd{bi::open_or_create, "cnd"};
data = segment.find_or_construct<bi_char_vector>("data")(segment.get_segment_manager());

while (!done) {
    bi::scoped_lock<bi::named_mutex> lock{mtx};
    cnd.wait(lock, [data] {return !data->empty(); });
    // process the data...
    cnd.notify_one();
}
Writer's thread code:
bi::managed_shared_memory segment(bi::open_only, shm_name.c_str());
bi::named_mutex mtx{bi::open_only, "mtx"};
bi::named_condition cnd(bi::open_only, "cnd");
data = segment.find_or_construct<bi_char_vector>("data")(segment.get_segment_manager());

for(std::size_t chunk_num = 0; chunk_num < chunk_count; ++chunk_num) {
    bi::scoped_lock<bi::named_mutex> lock { mtx };
    cnd.wait(lock);
    data->clear();
    // fills the data
    cnd.notify_one();
}
If I keep the wait in the writer loop, it just stops there;
if I remove the wait, it looks like the reader only receives and processes the last loop iteration.
Found the problem.
On the reader's side I added data->clear():
while (!done) {
    bi::scoped_lock<bi::named_mutex> lock{mtx};
    cnd.wait(lock, [data] {return !data->empty(); });
    // process the data...
    data->clear();
    cnd.notify_one();
}
On the writer's side I added a predicate to the wait:
for(std::size_t chunk_num = 0; chunk_num < chunk_count; ++chunk_num) {
    bi::scoped_lock<bi::named_mutex> lock { mtx };
    cnd.wait(lock, [data] {return data->empty(); });
    //...
    cnd.notify_one();
}
Now it works as expected: each side waits on a predicate (the reader for a non-empty vector, the writer for an empty one), so neither side can miss a notification or run ahead of the other.
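For reference, here is a compact, self-contained sketch of the whole handshake as I read it from the question plus the fix above. The segment name, size, loop bounds and the payload byte are placeholders of mine, not values from the original code:
#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/containers/vector.hpp>
#include <boost/interprocess/allocators/allocator.hpp>
#include <boost/interprocess/sync/named_mutex.hpp>
#include <boost/interprocess/sync/named_condition.hpp>
#include <boost/interprocess/sync/scoped_lock.hpp>
#include <cstddef>

namespace bi = boost::interprocess;
using CharAllocator  = bi::allocator<char, bi::managed_shared_memory::segment_manager>;
using bi_char_vector = bi::vector<char, CharAllocator>;

// Reader: wait until there is data, consume it, clear the vector, notify the writer.
void reader()
{
    bi::managed_shared_memory segment(bi::open_or_create, "shm", 65536);
    bi::named_mutex mtx{bi::open_or_create, "mtx"};
    bi::named_condition cnd{bi::open_or_create, "cnd"};
    auto *data = segment.find_or_construct<bi_char_vector>("data")(segment.get_segment_manager());

    for (;;) {  // exit condition omitted; the question uses while (!done)
        bi::scoped_lock<bi::named_mutex> lock{mtx};
        cnd.wait(lock, [data] { return !data->empty(); });
        // ... process the data ...
        data->clear();      // signals "chunk consumed" to the writer
        cnd.notify_one();
    }
}

// Writer: wait until the vector is empty again, fill it, notify the reader.
void writer(std::size_t chunk_count)
{
    bi::managed_shared_memory segment(bi::open_or_create, "shm", 65536);
    bi::named_mutex mtx{bi::open_or_create, "mtx"};
    bi::named_condition cnd{bi::open_or_create, "cnd"};
    auto *data = segment.find_or_construct<bi_char_vector>("data")(segment.get_segment_manager());

    for (std::size_t chunk_num = 0; chunk_num < chunk_count; ++chunk_num) {
        bi::scoped_lock<bi::named_mutex> lock{mtx};
        cnd.wait(lock, [data] { return data->empty(); });
        data->push_back('x');   // ... fill the data ...
        cnd.notify_one();
    }
}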

Know if a pthread thread is Alive in a safe way

I made a multithreaded application that creates/destroys 100 threads continuously:
//Here is the per-thread control struct (one per thread)
struct s_control
{
    data_in[D_BUFFER_SIZE];   //data passed in to the thread
    data_out[D_BUFFER_SIZE];  //data generated by the thread
    //I use volatile so that the availability of data in and out of the thread is visible:
    volatile __int16 status;  //thread state: 0=empty, 1=full, 2=filling (thread running)
} *control;

//Here is the thread main function
static void* F_pull(void* vv)//=pull_one_curl()
{
    s_control* cc = (s_control*) vv;
    //use of cc->data_in and filling of cc->data_out
    cc->status = 1; //Here advises that the thread is finished and data_out is filled
    return NULL;
}

void main()
{
    initialization();
    control = new s_control[D_TAREAS];
    pthread_t *tid = new pthread_t[D_TAREAS];
    do
    {
        for (th = 0; th < D_TAREAS; th++)
        {
            //Access the status of the thread once at the beginning
            //(to avoid it changing in the middle):
            long status1 = control[th].status;
            if (status1 == 0) //Thread finished and data_out of thread is empty
            {
                control[th].status = 2; //Filling (thread started)
                error = pthread_create(&tid[th], NULL, F_pull, (void *) &control[th]);
            }
            else if (status1 == 1) //Thread finished and data_out of thread is full
            {
                //do things with control[th].data_out;
                //and fill in control[th].data_in with data to pass to the next thread
                control[th].status = 0; //Thread is finished and now its data_out is empty
            }
            else
            {
                //printf("\nThread#%li: filling", th);
            }
        }
    } while (!_kbhit());
    finish();
}
So, as you can see, at the end of the thread I use the volatile variable to signal that the thread is about to exit:
begin of thread{ ....
    cc->status = 1; //Here advises that the thread is finished and data_out is filled
    return NULL;
}//END OF THREAD
But after cc->status is set to 1 the thread has not finished yet (one more line still executes),
so I don't like setting the status inside the thread.
I tried pthread_kill, but it didn't work, because it does not work unless the thread is still alive, as can be seen at:
pthread_kill
I am not sure if this answers your question, but you can use pthread_join() to wait for a thread to terminate. In conjunction with some (properly synchronized) status variables, you should be able to achieve what you need.
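A hedged sketch of that suggestion: the struct and F_pull follow the question, while the mutex, the consume_slot() helper and its name are additions of mine (each lock would need to be initialised, e.g. with pthread_mutex_init(), before the thread is created):
#include <pthread.h>

struct s_control
{
    pthread_mutex_t lock;
    int             status;   // 0 = empty, 1 = full, 2 = filling (thread running)
    // ... data_in / data_out buffers as in the question ...
};

static void *F_pull(void *vv)
{
    s_control *cc = (s_control *)vv;
    // ... use cc->data_in and fill cc->data_out ...
    pthread_mutex_lock(&cc->lock);
    cc->status = 1;                  // publish "data_out is ready" under the lock
    pthread_mutex_unlock(&cc->lock);
    return NULL;
}

void consume_slot(pthread_t tid, s_control *cc)
{
    pthread_mutex_lock(&cc->lock);
    int status = cc->status;
    pthread_mutex_unlock(&cc->lock);

    if (status == 1)
    {
        pthread_join(tid, NULL);     // after this the thread has really terminated
        // ... use cc->data_out, refill cc->data_in, set status back to 0 ...
    }
}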

Closing a thread with select() system call statement?

I have a thread that monitors a serial port using the select system call. The run function of the thread is as follows:
void <ProtocolClass>::run()
{
    int fd = mPort->GetFileDescriptor();
    fd_set readfs;
    int maxfd = fd + 1;
    int res;
    struct timeval Timeout;
    Timeout.tv_usec = 0;
    Timeout.tv_sec = 3;
    //BYTE ack_message_frame[ACKNOWLEDGE_FRAME_SIZE];
    while(true)
    {
        usleep(10);
        FD_ZERO(&readfs);
        FD_SET(fd, &readfs);
        res = select(maxfd, &readfs, NULL, NULL, NULL);
        if(res < 0)
            perror("\nselect failed");
        else if(res == 0)
            puts("TIMEOUT");
        else if(FD_ISSET(fd, &readfs))
        {   //IF INPUT RECEIVED
            qDebug("************RECEIVED DATA****************");
            FlushBuf();
            qDebug("\nReading data into a read buffer");
            int bytes_read = mPort->ReadPort(mBuf, 1000);
            mFrameReceived = false;
            for(int i = 0; i < bytes_read; i++)
            {
                qDebug("%x", mBuf[i]);
            }
            //if complete frame has been received, write the acknowledge message frame to the port.
            if(bytes_read > 0)
            {
                qDebug("\nAbout to Process Received bytes");
                ProcessReceivedBytes(mBuf, bytes_read);
                qDebug("\n Processed Received bytes");
                if(mFrameReceived)
                {
                    int no_bytes = mPort->WritePort(mAcknowledgeMessage, ACKNOWLEDGE_FRAME_SIZE);
                }//if frame received
            }//if bytes read > 0
        } //if input received
    }//end while
}
The problem is that when I exit this thread, using
delete <protocolclass>::instance();
the program crashes with a glibc malloc memory-corruption error. Checking the core with gdb showed that the thread was still processing data while it was being torn down, hence the error. The destructor of the protocol class looks as follows:
<ProtocolClass>::~<ProtocolClass>()
{
    delete [] mpTrackInfo; //delete data
    wait();
    mPort->ClosePort();
    s_instance = NULL; //static instance of singleton
    delete mPort;
}
Is this due to select? Do the semantics for destroying objects change when select is involved? Can someone suggest a clean way to destroy a thread that uses a select call?
Thanks
I'm not sure what threading library you use, but you should probably signal the thread in one way or another that it should exit, rather than killing it.
The simplest way would be to keep a boolean that is set to true when the thread should exit, and use a timeout on the select() call so you can check it periodically.
ProtocolClass::StopThread ()
{
    kill_me = true;
    // Wait for thread to die
    Join();
}

ProtocolClass::run ()
{
    struct timeval tv;
    ...
    while (!kill_me) {
        ...
        tv.tv_sec = 1;
        tv.tv_usec = 0;
        res = select (maxfd, &readfds, NULL, NULL, &tv);
        if (res < 0) {
            // Handle error
        }
        else if (res != 0) {
            ...
        }
    }
}
You could also set up a pipe and include it in readfds, and then just write something to it from another thread. That would avoid waking up every second and bring down the thread without delay.
Also, you should of course never use a boolean variable like that without some kind of lock, ...
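A hedged sketch of that pipe idea: the names wake_pipe, kill_me, RequestStop and run_loop are mine, and using std::atomic<bool> for the flag is one way to address the locking caveat above.
#include <unistd.h>
#include <sys/select.h>
#include <algorithm>
#include <atomic>

std::atomic<bool> kill_me{false};
int wake_pipe[2];                       // created once at startup with pipe(wake_pipe)

void RequestStop()
{
    kill_me = true;
    char c = 'x';
    write(wake_pipe[1], &c, 1);         // wakes the blocked select() immediately
    // ... then Join()/wait() for the thread as before
}

void run_loop(int fd)                   // fd is the serial port descriptor
{
    while (!kill_me) {
        fd_set readfds;
        FD_ZERO(&readfds);
        FD_SET(fd, &readfds);
        FD_SET(wake_pipe[0], &readfds);
        int maxfd = std::max(fd, wake_pipe[0]) + 1;

        int res = select(maxfd, &readfds, NULL, NULL, NULL);
        if (res < 0) {
            // handle error
        }
        else if (res > 0 && FD_ISSET(wake_pipe[0], &readfds)) {
            char buf[16];
            read(wake_pipe[0], buf, sizeof buf);   // drain; loop exits via !kill_me
        }
        else if (res > 0 && FD_ISSET(fd, &readfds)) {
            // read and process serial data as in the original run()
        }
    }
}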
Are the threads still looking at mpTrackInfo after you delete it?
Without seeing the code it is hard to say,
but I would think that the first thing the destructor should do is wait for any threads to die (preferably with some form of join() to make sure they are all accounted for). Once they are dead you can start cleaning up the data.
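A hedged sketch of that ordering, based on the destructor shown in the question (StopThread() stands for whatever mechanism you use to make run() return, e.g. the flag or pipe from the other answer):
<ProtocolClass>::~<ProtocolClass>()
{
    StopThread();          // assumption: sets the exit flag / wakes select() so run() returns
    wait();                // join: after this the thread is guaranteed to be gone
    delete [] mpTrackInfo; // only now is it safe to free data the thread was using
    mPort->ClosePort();
    delete mPort;
    s_instance = NULL;     // static instance of the singleton
}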
Your thread is more than just memory with some members, so just deleting it and counting on the destructor is not enough. Since I don't know Qt threads, I think this link can put you on your way:
trolltech message
Two possible problems:
What is mpTrackInfo? You delete it before you wait for the thread to exit. Does the thread use this data somewhere, maybe even after it's been deleted?
How does the thread know it's supposed to exit? The loop in run() seems to run forever, which should cause wait() in the destructor to wait forever.