How to block a thread while other threads are waiting - C++

I have a very specific problem to solve. I'm pretty sure someone else in the world has already encountered and solved it, but I haven't found any solutions yet.
Here it is:
I have a thread that pops commands from a queue and executes them asynchronously.
I can call, from any other thread, a function to execute a command synchronously, bypassing the queue mechanism, returning a result, and taking priority of execution (after the current execution is over).
I have a mutex protecting command execution so only one command is executed at a time.
The problem is that with a simple mutex I have no guarantee that a synchronous call will get the mutex before the asynchronous thread when they conflict. In fact, our tests show that the allocation is very unfair and that the asynchronous thread always wins.
So I want to block the asynchronous thread while there is a synchronous call waiting. I don't know in advance how many synchronous calls can be made, and I don't control the threads that make the calls (so any solution using a pool of threads is not possible).
I'm using C++ and the Microsoft libraries. I know the basic synchronization objects, but maybe there is a more advanced object or method suitable for my problem that I don't know about.
I'm open to any idea!

OK, so I finally got the chance to close this. I tried some of the solutions proposed here and in the links posted.
In the end, I combined a mutex for the command execution with a counter of waiting sync calls (the counter is also protected by a mutex, of course).
The async thread checks the counter before trying to take the mutex and waits for the counter to be 0. Also, to avoid a loop with a sleep, I added an event that is set when the counter reaches 0. The async thread waits for this event before trying to take the mutex.
void incrementSyncCounter()
{
    DLGuardThread guard(_counterMutex);
    _synchCount++;
}

void decrementSyncCounter()
{
    DLGuardThread guard(_counterMutex);
    _synchCount--;

    // If the counter is 0, no other sync call is waiting, so we notify the main thread by setting the event
    if (_synchCount == 0)
    {
        _counterEvent.set();
    }
}

unsigned long getSyncCounter()
{
    DLGuardThread guard(_counterMutex);
    return _synchCount;
}

bool executeCommand(Command* command)
{
    // Increment the sync call counter so the main thread can be blocked while at least one sync call is waiting
    incrementSyncCounter();

    // Execute the command under mutex protection
    DLGuardThread guard(_theCommandMutex);
    bool res = command->execute();
    guard.release();

    // Decrement the sync call counter so the main thread can be unblocked if no sync call is waiting
    decrementSyncCounter();

    return res;
}
void main()
{
    [...]

    // Infinite loop
    while (!_bStop)
    {
        // While the synchronous call counter is not 0, this main thread is blocked
        // to give priority to the sync calls.
        // _counterEvent is set when the counter is decremented to 0; the thread then
        // checks the value once more to be sure no other call has arrived in between.
        while (getSyncCounter() > 0)
        {
            ::WaitForSingleObject(_counterEvent.hEvent(), INFINITE);
        }

        // Take the mutex
        DLGuardThread guard(_theCommandMutex);
        status = command->execute();

        // Release the mutex
        guard.release();
    }
}
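For comparison, the same "give priority to waiting sync calls" idea can be sketched with standard C++ primitives (std::mutex and std::condition_variable) instead of the Win32 event. This is only an illustration of the pattern above with made-up names, not the original DLGuardThread-based code:

#include <condition_variable>
#include <functional>
#include <mutex>

// Sketch only: mirrors the counter + event idea with standard primitives.
// A "command" here is any callable returning bool, like command->execute() above.
class PrioritizedExecutor
{
public:
    // Called from any thread; takes priority over the asynchronous loop.
    bool executeSync(const std::function<bool()>& command)
    {
        {
            std::lock_guard<std::mutex> lk(_counterMutex);
            ++_pendingSync;                       // announce a waiting synchronous call
        }

        bool res;
        {
            std::lock_guard<std::mutex> lk(_commandMutex);
            res = command();                      // one command at a time
        }

        {
            std::lock_guard<std::mutex> lk(_counterMutex);
            if (--_pendingSync == 0)
                _cv.notify_all();                 // let the asynchronous thread proceed
        }
        return res;
    }

    // Called only from the asynchronous (queue-draining) thread.
    bool executeAsync(const std::function<bool()>& command)
    {
        {
            std::unique_lock<std::mutex> lk(_counterMutex);
            // Back off while any synchronous call is pending; the predicate also
            // protects against spurious wakeups.
            _cv.wait(lk, [this] { return _pendingSync == 0; });
        }

        std::lock_guard<std::mutex> lk(_commandMutex);
        return command();
    }

private:
    std::mutex _counterMutex;
    std::mutex _commandMutex;
    std::condition_variable _cv;
    unsigned long _pendingSync = 0;
};

Like the original solution, this only guarantees that the asynchronous thread backs off while sync calls are pending; competing synchronous callers still contend for the command mutex among themselves.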

Related

Multithreading Implementation in C++

I am a beginner at multithreading in C++, so I'd appreciate it if you could give me some recommendations.
I have a function which receives the previous frame and the current frame from a video stream (let's call this function readFrames()). The task of that function is to compute motion estimation.
The idea when calling readFrames() would be:
Store the previous and current frame in a buffer.
I want to compute the value of motion between each pair of frames from the buffer, but without blocking readFrames(), because more frames can be received while computing that value. I suppose I have to write a function computeMotionValue() and, every time I want to execute it, create a new thread and launch it. This function should return some float motionValue.
Every time the motionValue returned by any thread is over a threshold, I want to increment a shared int variable, let's call it nValidMotion.
My problem is that I don't know how to "synchronize" the threads when accessing motionValue and nValidMotion.
Can you please explain to me in some pseudocode how I can do that?
and every time I want to execute it, create a new thread and launch it
That's usually a bad idea. Threads are usually fairly heavy-weight, and spawning one is usually slower than just passing a message to an existing thread pool.
Anyway, if you fall behind, you'll end up with more threads than processor cores and then you'll fall even further behind due to context-switching overhead and memory pressure. Eventually creating a new thread will fail.
My problem is that I don't know how to "synchronize" the threads when accessing motionValue and nValidMotion.
Synchronization of access to a shared resource is usually handled with std::mutex (mutex means "mutual exclusion", because only one thread can hold the lock at once).
If you need to wait for another thread to do something, use std::condition_variable to wait/signal. You're waiting-for/signalling a change in state of some shared resource, so you need a mutex for that as well.
The usual recommendation for this kind of processing is to have at most one thread per available core, all serving a thread pool. A thread pool has a work queue (protected by a mutex, and with the empty->non-empty transition signalled by a condvar).
For combining the results, you could have a global counter protected by a mutex (but this is relatively heavy-weight for a single integer), you could have each task added to the thread pool return a bool via the promise/future mechanism, or you could just make your counter atomic.
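If you go the atomic route, a minimal self-contained sketch could look like this (the Frame type, the computeMotionValue stub and the threshold value below are placeholders standing in for the question's code):

#include <atomic>
#include <functional>
#include <thread>
#include <vector>

// Placeholder types and values standing in for the question's code.
struct Frame { /* pixel data */ };
float computeMotionValue(const Frame&, const Frame&) { return 0.0f; }
const float kThreshold = 0.5f;

std::atomic<int> nValidMotion{0};       // shared counter; ++ needs no mutex

void processPair(const Frame& prev, const Frame& cur)
{
    float motionValue = computeMotionValue(prev, cur);
    if (motionValue > kThreshold)
        ++nValidMotion;                 // atomic increment, no data race
}

int main()
{
    Frame a, b;
    std::vector<std::thread> workers;
    for (int i = 0; i < 4; ++i)
        workers.emplace_back(processPair, std::cref(a), std::cref(b));
    for (auto& t : workers)
        t.join();
}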
Here is a sample pseudo code you may use:
// The following thread awaits notification from worker threads when motion is detected
nValidMotion_worker_Thread()
{
    while(true) { message_receive(msg_q); ++nValidMotion; }
}

// Worker thread, computing motion on 2 frames; if motion is detected, notify
// nValidMotion_worker_Thread using the message queue
WorkerThread(frame1, frame2)
{
    x = computeMotionValue(frame1, frame2);
    if x > THRESHOLD
        msg_q.send();
}

// main thread
main_thread()
{
    // 1. create a new message queue for inter-thread communication
    msg_q = new msg_q();

    // start the listening thread
    Thread a = new nValidMotion_worker_Thread();
    a.start();

    while(true)
    {
        // collect 2 frames
        frame1 = readFrames();
        frame2 = readFrames();

        // start a worker thread
        Thread b = new WorkerThread(frame1, frame2);
        b.start();
    }
}

Mutex blocking the thread in a producer-consumer problem

I have a fundamental question regarding the producer consumer problem. Please consider the below pseudo-code.
// Consumer. This is in the thread I created for asynchronous log writing
// so that I don't block the important threads with my silly log-writing
// operation
run()
{
    mutex.lock();   // Line A
    retrieveFromQueueAndWriteToFile();
    mutex.unlock();
}

// Producer. This function gets log messages from 'x' number of threads
add( string mylog )
{
    mutex.lock();   // Line B, consider internal call to pthread_mutex_lock
    Queue.push(mylog);
    mutex.unlock();
}
While the log-writing operation is in progress in the consumer function, the mutex lock is held there. So when a new log comes in at Line B, the mutex lock cannot be obtained in the add function. This blocks the important threads whenever the log-writing operation is happening.
Isn't this the same as writing the log to the file from the important threads themselves? I don't see the point in creating a new thread to write the logs to a file when the important threads are being blocked anyway.
Any help appreciated.
You can split retrieveFromQueueAndWriteToFile into retrieveFromQueue and writeLogToFile.
Something like this:
run()
{
    mutex.lock();   // Line A
    auto log = retrieveFromQueue();
    mutex.unlock();

    writeLogToFile(log);   // this operation does not need a lock
}
Note:
If run is called by only one thread in an infinite loop, only the push to and pop from the queue are locked; the write-to-file part of the operation is done by that thread without holding the lock.
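To make this concrete, here is a minimal self-contained sketch of such a logger using std::condition_variable so the consumer sleeps instead of polling; the class and file names are illustrative, not taken from the question:

#include <condition_variable>
#include <fstream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

class AsyncLogger
{
public:
    // Producer side: called from the "important" threads; only touches the queue.
    void add(std::string mylog)
    {
        {
            std::lock_guard<std::mutex> lk(_m);
            _q.push(std::move(mylog));
        }
        _cv.notify_one();
    }

    // Consumer side: runs in its own thread; file I/O happens outside the lock.
    void run()
    {
        std::ofstream file("log.txt");
        while (true)
        {
            std::string log;
            {
                std::unique_lock<std::mutex> lk(_m);
                _cv.wait(lk, [this] { return !_q.empty(); });
                log = std::move(_q.front());
                _q.pop();
            }                      // lock released here
            file << log << '\n';   // slow part, done without blocking producers
        }
    }

private:
    std::mutex _m;
    std::condition_variable _cv;
    std::queue<std::string> _q;
};

The consumer can be started with std::thread t(&AsyncLogger::run, &logger); the important threads simply call logger.add(...), which only takes the lock long enough to push onto the queue.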

C++: Thread synchronization scenario on Linux Platform

I am implementing a multithreaded C++ program for the Linux platform where I need functionality similar to WaitForMultipleObjects().
While searching for a solution, I found articles that describe how to achieve WaitForMultipleObjects() functionality on Linux with examples, but those examples do not satisfy the scenario that I have to support.
The scenario in my case is pretty simple. I have a daemon process in which the main thread exposes a method/callback to the outside world, for example to a DLL. The code of the DLL is not under my control. The same main thread creates a new thread, "Thread 1". Thread 1 has to execute a kind of infinite loop in which it waits for a shutdown event (daemon shutdown) OR for a data-available event signaled through the exposed method/callback mentioned above.
In short, the thread waits on the shutdown event and the data-available event; if the shutdown event is signaled, the wait is satisfied and the loop is broken, and if the data-available event is signaled, the wait is also satisfied and the thread does its business processing.
In Windows, this seems very straightforward. Below is the MS Windows-based pseudocode for my scenario.
//**Main thread**

//Load the DLL
LoadLibrary("some DLL");

//Create a new thread
hThread1 = _beginthreadex(..., &ThreadProc, ...);

//Callback in the main thread (mentioned in the description above) which will be called by the DLL
void Callbackfunc(data)
{
    qdata.push(data);
    SetEvent(s_hDataAvailableEvent);
}

void OnShutdown()
{
    SetEvent(g_hShutdownEvent);
    WaitForSingleObject(hThread1, INFINITE);
    //Cleanup here
}

//**Thread 1**
unsigned int WINAPI ThreadProc(void *pObject)
{
    while (true)
    {
        HANDLE hEvents[2];
        hEvents[0] = g_hShutdownEvent;
        hEvents[1] = s_hDataAvailableEvent;

        //3rd parameter is FALSE, so the wait is satisfied if any one of the objects is signaled.
        dwEvent = WaitForMultipleObjects(2, hEvents, FALSE, INFINITE);

        switch (dwEvent)
        {
        case WAIT_OBJECT_0 + 0:
            // Shutdown event is set, break the loop
            return 0;
        case WAIT_OBJECT_0 + 1:
            //do business processing here
            break;
        default:
            // error handling
        }
    }
}
I want to implement the same for Linux. According to my understanding, Linux has a totally different mechanism where we need to register for signals. If the termination signal arrives, the process knows that it is about to shut down, but before that the process must wait for the running thread to shut down gracefully.
The correct way to do this in Linux would be using condition variables. While this is not the same as WaitForMultipleObjects in Windows, you will get the same functionality.
Use two bools to determine whether there is data available or a shutdown must occur.
Then have the shutdown function and the data function both set the bools accordingly, and signal the condition variable.
#include <pthread.h>

pthread_cond_t cv = PTHREAD_COND_INITIALIZER;
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

pthread_t hThread1;  // this isn't a good name for it on linux, you'd be
                     // better with something like "tid1", but for
                     // comparison's sake I've kept it

bool shutdown_signalled;
bool data_available;

void OnShutdown()
{
    //...shutdown behavior...
    pthread_mutex_lock(&mutex);
    shutdown_signalled = true;
    pthread_mutex_unlock(&mutex);
    pthread_cond_signal(&cv);
}

void Callbackfunc(...)
{
    // ... whatever needs to be done ...
    pthread_mutex_lock(&mutex);
    data_available = true;
    pthread_mutex_unlock(&mutex);
    pthread_cond_signal(&cv);
}

void *ThreadProc(void *args)
{
    while (true) {
        pthread_mutex_lock(&mutex);
        while (!(shutdown_signalled || data_available)) {
            // wait as long as there is no data available and a shutdown
            // has not been signalled
            pthread_cond_wait(&cv, &mutex);
        }
        if (data_available) {
            //process data
            data_available = false;
        }
        if (shutdown_signalled) {
            //do the shutdown
            pthread_mutex_unlock(&mutex);
            return NULL;
        }
        pthread_mutex_unlock(&mutex);  // you might be able to put the unlock
                                       // before the ifs, idk the particulars of your code
    }
}

int main(void)
{
    shutdown_signalled = false;
    data_available = false;

    pthread_create(&hThread1, NULL, &ThreadProc, NULL);
    pthread_join(hThread1, NULL);
    //...
}
I know Windows has condition variables as well, so this shouldn't look too alien. I don't know what rules Windows has about them, but on a POSIX platform the wait needs to be inside a while loop because "spurious wakeups" can occur.
If you wish to write Unix- or Linux-specific code, you have different APIs available:
pthread: provides threads, mutexes, condition variables
IPC (inter-process communication) mechanisms: mutexes, semaphores, shared memory
signals
For threads, the first library is mandatory (there are lower-level syscalls on Linux, but they are more tedious). For events, all three may be used.
A system shutdown generates termination (SIGTERM) and kill (SIGKILL) signals, broadcast to all the relevant processes. Hence an individual daemon shutdown can also be initiated this way. The goal of the game is to catch the signals and initiate process shutdown. The important points are:
the signal mechanism is designed so that it is not necessary to wait for signals:
simply install a so-called handler using sigaction, and the system will do the rest;
the signal is sent to the process, and any thread may intercept it (the handler may execute in any context).
You therefore need to install a signal handler (see sigaction(2)) and somehow pass the information that the application must terminate on to the other threads.
The most convenient way is probably to have a global mutex-protected flag which all your threads consult regularly. The signal handler sets that flag to indicate shutdown. For the worker thread, this means:
telling the remote host that the server is closing down,
close its socket on read
process all the remaining received commands/data and send answers
close the socket
exit
For the main thread, this means initiating a join on the worker thread, then exiting.
This model should not interfere with the way data is normally processed: a blocking call to select or poll will return the error EINTR if a signal was caught, and in the non-blocking case the thread is checking the flag regularly anyway, so it works too.
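As a rough illustration of the handler part only: the sketch below uses a volatile sig_atomic_t flag instead of a mutex-protected bool (the async-signal-safe variant) and leaves the worker-thread logic out:

#include <signal.h>
#include <string.h>
#include <unistd.h>

// Async-signal-safe shutdown flag; worker threads poll it regularly.
static volatile sig_atomic_t g_shutdown = 0;

static void on_term(int /*signo*/)
{
    g_shutdown = 1;            // only set a flag; do no real work inside a handler
}

int main()
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = on_term;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;           // no SA_RESTART: blocking calls fail with EINTR
    sigaction(SIGTERM, &sa, NULL);
    sigaction(SIGINT, &sa, NULL);

    while (!g_shutdown)
    {
        // normal processing; a blocking select/poll returns EINTR when the
        // signal arrives, so the flag is rechecked promptly
        sleep(1);
    }

    // join worker threads, release resources, exit
    return 0;
}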

How can I use Boost condition variables in producer-consumer scenario?

EDIT: see the update below.
I have one thread responsible for streaming data from a device into buffers. In addition, I have N threads doing some processing on that data. In my setup, I would like the streamer thread to fetch data from the device and then wait until the N threads are done with the processing (or a timeout is reached) before fetching new data. The N threads should wait until new data has been fetched before continuing to process. I believe this framework should work, given that I don't want the N threads to repeat processing on a buffer and I want all buffers to be processed without skipping any.
After careful reading, I found that condition variables are what I need. I have followed tutorials and other Stack Overflow questions, and this is what I have:
global variables:
boost::condition_variable cond;
boost::mutex mut;
member variables:
std::vector<double> buffer;
std::vector<bool> data_ready; // Size equal to number of threads
data receiver loop (1 thread runs this):
while (!gotExitSignal())
{
    {
        boost::unique_lock<boost::mutex> ll(mut);
        while (any(data_ready))
            cond.wait(ll);
    }

    receive_data(buffer);

    {
        boost::lock_guard<boost::mutex> ll(mut);
        set_true(data_ready);
    }
    cond.notify_all();
}
data processing loop (N threads run this)
while (!gotExitSignal())
{
    {
        boost::unique_lock<boost::mutex> ll(mut);
        while (!data_ready[thread_id])
            cond.wait(ll);
    }

    process_data(buffer);

    {
        boost::lock_guard<boost::mutex> ll(mut);
        data_ready[thread_id] = false;
    }
    cond.notify_all();
}
These two loops are in their own member functions of the same class. The variable buffer is a member variable, so it can be shared across threads.
The receiver thread will be launched first. The data_ready variable is a vector of bools of size N. data_ready[i] is true if data is ready to be processed and false if thread i has already processed the data. The function any(data_ready) returns true if any of the elements of data_ready is true, and false otherwise. The set_true(data_ready) function sets all of the elements of data_ready to true. The receiver thread checks whether any processing thread is still processing. If not, it fetches data, sets the data_ready flags, notifies the threads, and continues with the loop, which will stop at the beginning until processing is done. Each processing thread checks its respective data_ready flag; once it is true, the thread does some computations, sets its data_ready flag back to false, and continues with the loop.
If I only have one processing thread, the program runs fine. Once I add more threads, I run into issues where the output of the processing is garbage. In addition, the order of the processing threads matters for some reason; in other words, the LAST thread I launch will output correct data whereas the previous threads will output garbage, no matter what the input parameters for the processing are (assuming valid parameters). I don't know if the problem is due to my threading code or if there is something wrong with my device or data-processing setup. I tried using couts at the processing and receiving steps, and with N processing threads I see the output as it should be:
receive data
process 1
process 2
...
process N
receive data
process 1
process 2
...
Is the usage of the condition variables correct? What could be the problem?
EDIT: I followed fork's suggestions and changed the code to:
data receiver loop (1 thread runs this):
while (!gotExitSignal())
{
    if (!any(data_ready))
    {
        receive_data(buffer);
        boost::lock_guard<boost::mutex> ll(mut);
        set_true(data_ready);
        cond.notify_all();
    }
}
data processing loop (N threads run this)
while (!gotExitSignal())
{
    // boost::unique_lock<boost::mutex> ll(mut);
    boost::mutex::scoped_lock ll(mut);
    cond.wait(ll);
    process_data(buffer);
    data_ready[thread_id] = false;
}
It works somewhat better. Am I using the correct locks?
I did not read your whole story, but if I look at the code quickly I see that you are using conditions wrong.
A condition is like a state: once you put a thread into a waiting condition, it gives away the CPU. So your thread will effectively stop running until some other process/thread notifies it.
In your code you have a while loop, and each time you check for data you wait. That is wrong; it should be an if instead of a while. But then again, it should not be there at all. The checking for data should be done somewhere else, and your worker thread should put itself into a waiting condition after it has done its work.
Your worker threads are the consumers, and the producers are the ones that deliver the data.
I think a better construction would be to have a thread check whether there is data and notify the worker(s).
PSEUDO CODE:
//producer
while (true) {
    1. lock mutex
    2. is data available
    3. unlock mutex
    if (dataAvailableVariable) {
        4. notify a worker
        5. set waiting condition
    }
}

//consumer
while (true) {
    1. lock mutex
    2. do some work
    3. unlock mutex
    4. notify producer that work is done
    5. set wait condition
}
You should also take care that at least one thread stays runnable in order to avoid a deadlock, that is, the situation where all threads are in a waiting condition.
I hope that helps you a little.
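For what it's worth, one way to express the question's buffer/data_ready scheme with correct predicate-based waits is sketched below; std::thread and std::condition_variable are used here, but the Boost equivalents are drop-in replacements, and all helper names are illustrative:

#include <algorithm>
#include <condition_variable>
#include <cstddef>
#include <mutex>
#include <thread>
#include <vector>

std::mutex mut;
std::condition_variable cond;
std::vector<double> buffer;
std::vector<bool> data_ready;   // one flag per processing thread

bool any_ready()
{
    return std::any_of(data_ready.begin(), data_ready.end(), [](bool b) { return b; });
}

// Receiver: waits until every worker has consumed the previous buffer.
void receiver_loop()
{
    for (int i = 0; i < 100; ++i)                     // bounded here instead of gotExitSignal()
    {
        std::unique_lock<std::mutex> lk(mut);
        cond.wait(lk, [] { return !any_ready(); });   // predicate guards spurious wakeups
        buffer.assign(1024, static_cast<double>(i));  // stand-in for receive_data(buffer)
        std::fill(data_ready.begin(), data_ready.end(), true);
        lk.unlock();
        cond.notify_all();
    }
}

// Worker: waits for its own flag, processes, then clears the flag.
void worker_loop(std::size_t thread_id)
{
    for (int i = 0; i < 100; ++i)
    {
        std::unique_lock<std::mutex> lk(mut);
        cond.wait(lk, [thread_id] { return static_cast<bool>(data_ready[thread_id]); });
        // process_data(buffer) would go here; workers only read the buffer
        data_ready[thread_id] = false;
        lk.unlock();
        cond.notify_all();
    }
}

int main()
{
    const std::size_t N = 4;
    data_ready.assign(N, false);

    std::vector<std::thread> workers;
    for (std::size_t id = 0; id < N; ++id)
        workers.emplace_back(worker_loop, id);

    std::thread rx(receiver_loop);
    rx.join();
    for (auto& t : workers)
        t.join();
}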

invoke methods in thread in c++

I have a class which reads from a message queue. This class also has a thread inside it. Depending on the type of the message in the queue, it needs to execute different functions inside that thread, since the main thread of the class always keeps waiting on the message queue. As soon as it reads a message from the queue, it checks its type, calls the appropriate method to be executed in the thread, and then goes back to reading again (reading in a while loop).
I am using a Boost message queue and Boost threads.
How can I do this?
It's something like this:
while (!quit)
{
    try
    {
        ptime now(boost::posix_time::microsec_clock::universal_time());
        ptime timeout = now + milliseconds(100);

        if (mq.timed_receive(&msg, sizeof(msg), recvd_size, priority, timeout))
        {
            switch (msg.type)
            {
            case collect:
            {
                // need to call the collect method in the thread
            }
            break;

            case query:
            {
                // need to call the query method in the thread
            }
            break;
and so on.
Can it be done?
If it can be done, then what happens in the case when the thread is, say, executing the collect method and the main thread gets a query message and wants to call it?
Thanks in advance.
Messages arriving while the receiving thread is executing a long operation will be stored for later (in the queue, waiting to be processed).
When the thread is done with its operation, it will come back, call the receive function again, immediately get the first of the messages that arrived while it was not looking, and process it.
If the main thread needs the result of the message-processing operation, it will block until the worker thread is done and delivers the result.
Make sure you do not do anything inside the worker thread that in turn waits on the main thread's actions; otherwise there is a risk of deadlock.
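If the goal is to have the message-reading loop hand work to a single worker thread (so a collect and a query never run concurrently), one possible shape is a small task queue owned by the worker. This is a generic sketch with invented names, not the question's actual classes:

#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

// A single worker thread that executes posted tasks one at a time, in order.
class Worker
{
public:
    Worker() : _thread([this] { run(); }) {}

    ~Worker()
    {
        post([this] { _stop = true; });   // the stop request is just another task
        _thread.join();
    }

    // Called from the message-reading loop; returns immediately.
    void post(std::function<void()> task)
    {
        {
            std::lock_guard<std::mutex> lk(_m);
            _tasks.push(std::move(task));
        }
        _cv.notify_one();
    }

private:
    void run()
    {
        while (!_stop)
        {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lk(_m);
                _cv.wait(lk, [this] { return !_tasks.empty(); });
                task = std::move(_tasks.front());
                _tasks.pop();
            }
            task();   // collect(), query(), ... run here, never concurrently
        }
    }

    std::mutex _m;
    std::condition_variable _cv;
    std::queue<std::function<void()>> _tasks;
    bool _stop = false;
    std::thread _thread;
};

The reading loop can then do something like worker.post([msg]{ collect(msg); }); in the collect case (and the equivalent for query) and immediately go back to timed_receive; a query arriving while collect is still running simply waits its turn in the task queue.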