I have a simple threading question - how should the following be synchronized?
I have a main thread and a secondary thread; the secondary thread does one thing only once and another thing more than once.
Basically:
Secondary thread:
{
    Do_Something_Once();
    while (not_important_condition) {
        Do_Something_Inside_Loop();
    }
}
I want to suspend my main thread until the Do_Something_Once action is done; right now I use a plain bool value (is_something_once_done = false;) to indicate whether the action is finished.
Hence, the code of my main thread looks like this:
{
    Launch_Secondary_Thread();
    while (!is_something_once_done) {
        boost::this_thread::sleep(milliseconds(25));
    }
}
which obviously isn't the best way to perform such kind of synchronization.
Any alternatives (preferably boost::thread-powered)?
Thank you
This is a job for condition variables.
Check out the Condition Variables section of the boost docs - the example there is almost exactly what you're doing.
Whatever you do, don't do a busy-wait loop with sleep.
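For reference, here is a minimal sketch of that pattern applied to the code in the question, assuming Boost.Thread; the mutex and condition variable names are illustrative:
#include <boost/thread.hpp>

boost::mutex              mtx;                   // illustrative names
boost::condition_variable cond;
bool is_something_once_done = false;             // now protected by mtx

Secondary thread:
{
    Do_Something_Once();
    {
        boost::lock_guard<boost::mutex> lock(mtx);
        is_something_once_done = true;
    }
    cond.notify_one();                           // wake the main thread
    while (not_important_condition) {
        Do_Something_Inside_Loop();
    }
}

Main thread:
{
    Launch_Secondary_Thread();
    boost::unique_lock<boost::mutex> lock(mtx);
    while (!is_something_once_done)              // the loop also guards against spurious wake-ups
        cond.wait(lock);                         // sleeps without burning CPU
}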
You could consider using boost's condition variable mechanism. It is designed for this scenario.
Insert code that is appropriate for your platform where I have added comments below:
{
    // Create an event visible to the second thread, to be signalled on completion
    Launch_Secondary_Thread();
    // Wait for the event to be signalled
}
{
    Do_Something_Once();
    // Set the event state to signalled so that the 1st thread knows to continue working
    while (not_important_condition) {
        Do_Something_Inside_Loop();
    }
}
Make sure that the event DOES get signalled, even if the 2nd thread exits abnormally after an exception or other error. If not, your 1st thread will never wake up - unless you can put a timeout on the wait.
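One hedged way to get that guarantee is a small RAII guard that signals in its destructor; the Notifier type and the names below are illustrative, not a library facility:
#include <boost/thread.hpp>

boost::mutex              mtx;
boost::condition_variable cond;
bool done = false;

struct Notifier {                                // signals on scope exit, even during stack unwinding
    ~Notifier() {
        boost::lock_guard<boost::mutex> lock(mtx);
        done = true;
        cond.notify_all();
    }
};

Secondary thread:
{
    {
        Notifier guard;                          // destructor fires even if Do_Something_Once throws
        Do_Something_Once();
    }
    while (not_important_condition) {
        Do_Something_Inside_Loop();
    }
}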
You're free to go with mutex locks!
Do_Something_Once()
{
    boost::mutex::scoped_lock lock(mutex);  // note: the lock must be named, otherwise it is destroyed immediately
    // ...
}
Update:
For your particular case I would go with a condition variable, as others suggested.
So I have the following working code for performing a clean exit when the user interrupts the program (i.e. Ctrl-C in a terminal), so that global destructors, etc. will be run.
The problem is that what you can do in the signal handler function is very limited. It took me a little while to figure out how to do this correctly (as simple as it looks now) - the trick is starting a thread to monitor a flag - but now I'm left wondering if there's a better way to avoid burning CPU (in particular now, as I plan to include this with some library code).
std::signal(SIGINT, [](int signal) { gSignalStatus() = signal; });
std::thread signal_handler_thread([] {
    using namespace std::chrono_literals;
    for (;;)
    {
        if (gSignalStatus() != 0)
        {
            std::exit(gSignalStatus());
        }
        std::this_thread::sleep_for(100ms);
    }
});
signal_handler_thread.detach();
Sleeping is a very crude solution - but is there any better way (for example, waking the thread from the signal handler - but we are not allowed to do that...)?
I have a situation where a notify() 'can' be called before a wait().
I am trying to make a simulator schedule its next event when I 'notify' it by sending it messages. So I have devised a wait->notify->schedule chain:
void Broker::pause()
{
    boost::unique_lock<boost::mutex> lock(m_pause_mutex);
    {
        std::cout << "pausing the simulation" << std::endl;
        m_cond_cnn.wait(lock);
        std::cout << "Simulation UNpaused" << std::endl;
        // the following line causes the current function to be called at
        // a later time, and a notify() can happen before the current function
        // is called again
        Simulator::Schedule(MilliSeconds(xxx), &Broker::pause, this);
    }
}

void Broker::messageReceiveCallback(std::string message) {
    boost::unique_lock<boost::mutex> lock(m_pause_mutex);
    {
        m_cond_cnn.notify_one();
    }
}
The problem here is that there can be situations where a notify() is called before its wait() is called.
Is there a solution for such a situation?
Thank you.
Condition variables can hardly be used alone, if only because, as you noticed, they only wake the currently waiting threads. There's also the matter of spurious wake-ups (ie. the condition variable can sometimes wake up a thread without any corresponding notify having been called). To work properly, condition variables usually need another variable to maintain a more reliable state.
To solve both those problems, in your case you just need to add a boolean flag:
// waiting side (e.g. in Broker::pause):
boost::unique_lock<boost::mutex> lock(m_pause_mutex);
while (!someFlag)
    m_cond_cnn.wait(lock);
someFlag = false;
//...

// notifying side (e.g. in Broker::messageReceiveCallback):
boost::unique_lock<boost::mutex> lock(m_pause_mutex);
someFlag = true;
m_cond_cnn.notify_one();
I think that syam's answer is fine in general but in your specific case where you seem to be using ns-3, I would suggest instead that you restructure your code to use the right primitives in ns-3:
I suspect that you use one of the ns-3 realtime simulator implementations. Good.
Schedule a keepalive event every 0.1s to make sure that the simulator keeps running (it will stop running when there are no events left); a small sketch of this appears below.
Optionally, use a boolean in this keepalive event to check if you should reschedule the keepalive event or call Simulator::Stop.
Create a thread to run the simulator mainloop with Simulator::Run(). The simulator will sleep until the next scheduled event is supposed to expire or until a new event is externally scheduled.
Use Simulator::ScheduleWithContext to schedule an event externally from another thread.
Keep in mind that the ns-3 API is not thread-safe in general. The only ns-3 API that is thread-safe is ns3::Simulator::ScheduleWithContext. I can't stress enough how important it is not to use any other API available in the ns3:: namespace from a thread that is not the main thread.
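A minimal sketch of the keepalive idea; Simulator::Schedule, Simulator::Stop and MilliSeconds are real ns-3 calls, while KeepAlive and g_keepRunning are illustrative:
#include "ns3/simulator.h"

static bool g_keepRunning = true;                // flipped by your shutdown logic

static void KeepAlive()
{
    if (g_keepRunning)
        ns3::Simulator::Schedule(ns3::MilliSeconds(100), &KeepAlive);  // reschedule ~0.1s later
    else
        ns3::Simulator::Stop();
}

// schedule the first keepalive before calling Simulator::Run() in the mainloop thread:
// ns3::Simulator::Schedule(ns3::MilliSeconds(100), &KeepAlive);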
I'm writing an application which has an event queue. My intention is to create this in such a way that multiple threads can write to the queue and one thread can read from it, handing the processing of a popped element over to another thread so that the subsequent pop will not be blocked. I used a lock and a condition variable for pushing and popping items from the queue:
void Publisher::popEvent(boost::shared_ptr<Event>& event) {
    boost::mutex::scoped_lock lock(queueMutex);
    while (eventQueue.empty())
    {
        queueConditionVariable.wait(lock);
    }
    event = eventQueue.front();
    eventQueue.pop();
    lock.unlock();
}

void Publisher::pushEvent(boost::shared_ptr<Event> event) {
    boost::mutex::scoped_lock lock(queueMutex);
    eventQueue.push(event);
    lock.unlock();
    queueConditionVariable.notify_one();
}
In the constructor of the Publisher class (only one instance is created), I start one thread which iterates through a loop until a notify_one() is captured, and then starts another thread to process the event popped from the queue:
In constructor:
publishthreadGroup = boost::shared_ptr<boost::thread_group> (new boost::thread_group());
publishthreadGroup->create_thread(boost::bind(queueProcessor, this));
queueProcessor method:
void queueProcessor(Publisher* agent) {
    while (true) {
        boost::shared_ptr<Event> event;
        agent->getEvent(event);
        agent->publishthreadGroup->create_thread(boost::bind(dispatcher, agent, event));
    }
}
and in the dispatcher method, the relevant processing is done and the processed information is published to a server via Thrift. In another method called before the program exits, which runs in the main thread, I call join_all() so that the main thread waits until the threads are done.
In this implementation, after the thread for the dispatcher is created, in the while loop above, I have experienced a deadlock/hang. The running code seems to be stuck. What is the issue in this implementation? And is there a much cleaner, better way of doing what I am trying to do? (Multiple producers and one consumer thread iterating through the queue and handing off the processing of an element to a different thread.)
Thank you!
It seems that the queueProcessor function will run forever and the thread running it will never exit. Any threads created by that function will do their work and exit, but this thread - the first one created in the publishthreadGroup - has a while(true) loop that has no way of stopping. Thus a call to join_all() will wait forever. Can you create some other flag variable that triggers that function to exit the loop and return? That should do the trick!
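A rough sketch of that suggestion against the code from the question; the stopped flag on Publisher is an assumption, and it only helps if getEvent() can also wake up at shutdown (for example via a timed wait or a sentinel event pushed before join_all()):
void queueProcessor(Publisher* agent) {
    while (!agent->stopped) {                    // illustrative flag, set by the main thread before join_all()
        boost::shared_ptr<Event> event;
        agent->getEvent(event);                  // must be able to return at shutdown
        if (event)                               // skip the sentinel/empty event
            agent->publishthreadGroup->create_thread(
                boost::bind(dispatcher, agent, event));
    }
}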
I'm doing some experiments with C++ multithreading and I have no idea how to solve one problem. Let's say we have a thread pool that processes user requests using an existing thread and creates a new thread when no free thread is available. I've created a thread-safe command_queue class, which has push and pop methods. pop waits while the queue is empty and returns only when a command is available or a timeout occurs. Now it's time to implement the thread pool. The idea is to make free threads sleep for some amount of time and kill the thread if there is nothing to do after that period of time. Here is the implementation:
command_queue::handler_t handler;
while (handler = tasks.pop(timeout))
{
    handler();
}
Here we exit the thread procedure if a timeout occurred. That is fine, but there is a problem with new thread creation. Let's say we already have 2 threads processing user requests; they are working at the moment, but we need to do some other operation asynchronously.
We call
thread_pool::start(some_operation);
which should start a new thread, because there are no free threads available. When a thread is free it calls timed_wait on the condition variable, so the idea is to check whether there are threads that are waiting:
if (there_are_free_threads) // ???
    condition.notify_one();
else
    create_thread(thread_proc);
But how do I check that? The documentation says that if there are no waiting threads, notify_one does nothing. If I could check whether or not it did anything, that would be a solution:
if (!condition.notify_one()) // nobody was notified
    create_thread(thread_proc);
As far as I see there is no way to check that.
Thanks for your answers.
You need to create another variable (perhaps a semaphore) which knows how many threads are running; then you can check that and create a new thread, if needed, before calling notify.
The other, better option is to just not have your threads exit when they time out. They should stay alive, waiting to be notified. Instead of exiting when the wait times out, check a variable to see if the program is still running or if it is "shutting down". If it's still running, then start waiting again.
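A sketch of that idea on top of the pop(timeout) interface from the question; the bool-returning overload and the shutting_down flag are assumptions:
void thread_pool::thread_function()
{
    command_queue::handler_t handler;
    for (;;)
    {
        if (tasks.pop(handler, timeout))         // hypothetical overload: returns false on timeout
        {
            handler();
        }
        else if (shutting_down)                  // timed out and the pool is closing down
        {
            break;
        }
        // timed out but still running: just go back and wait again
    }
}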
A more typical thread pool would look like this:
Pool::Pool()
{
    runningThreads = 0;
    actualThreads  = 0;
    finished       = false;
    jobQue.Init();
    mutex.Init();
    conditionVariable.Init();
    for (int loop = 0; loop < threadCount; ++loop) { startThread(threadroutine); }
}

Pool::threadroutine()
{
    {
        // Extra code to count threads so we can add more if required.
        RAIILocker doLock(mutex);
        ++actualThreads;
        ++runningThreads;
    }
    while (!finished)
    {
        Job job;
        {
            RAIILocker doLock(mutex);
            while (jobQue.empty())
            {
                // This is the key.
                // Here the thread is suspended (using zero resources)
                // until some other thread calls notify_one on the
                // conditionVariable. At that point exactly one thread is released
                // and it will start executing as soon as it re-acquires the lock
                // on the mutex.
                //
                --runningThreads;
                conditionVariable.wait(mutex);
                ++runningThreads;
            }
            job = jobQue.getJobAndRemoveFromQue();
        }
        job.execute();
    }
    {
        // Extra code to count threads so we can add more if required.
        RAIILocker doLock(mutex);
        --actualThreads;
        --runningThreads;
    }
}

Pool::AddJob(Job job)
{
    RAIILocker doLock(mutex);
    // This is where you would check to see if you need more threads.
    if (runningThreads == actualThreads) // Plus some other conditions.
    {
        // Increment both counts. When it waits we decrease the running count.
        startThread(threadroutine);
    }
    jobQue.push_back(job);
    conditionVariable.notify_one(); // This releases one worker thread
                                    // from the call to wait() above.
                                    // Note: The worker thread will not start
                                    // until this thread releases the mutex.
}
I think you need to rethink your design. In a simple model of a dealer thread handing out work to the player threads, the dealer places the job onto the message queue and lets one of the players pick up the job when it gets a chance.
In your case the dealer is actively managing the thread pool, in that it retains knowledge of which player threads are idle and which are busy. Since the dealer knows which player is idle, the dealer can actively pass the job to the idle player and signal it using a simple semaphore (or condition variable) - there being one semaphore per player. In such a case, it might make sense for the dealer to destroy idle threads actively by giving the thread a "kill yourself" job to do.
Now, I have found one solution, but it's not perfect.
I have a volatile member variable named free that stores the number of free threads in the pool.
void thread_pool::thread_function()
{
    free++;
    command_queue::handler_t handler;
    while (handler = tasks.pop(timeout))
    {
        free--;
        handler();
        free++;
    }
    free--;
}
When I assign a task to a thread I do something like this:
if (free == 0)
    threads.create_thread(boost::bind(&thread_pool::thread_function, this));
There is still an issue with synchronization, because if the context is switched right after free-- in thread_function we might create a new thread which we don't actually need; but as the task queue is thread-safe there is no problem with that, it's just unwanted overhead. Can you suggest a solution to that, and what do you think about it? Maybe it's better to leave it as it is than to have one more synchronization here?
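For what it's worth, here is a sketch of the same bookkeeping with an atomic counter instead of a plain volatile int; boost::atomic (or std::atomic) makes the updates safe across threads, though the check-then-create step can still race and occasionally start one spare thread:
#include <boost/atomic.hpp>

boost::atomic<int> free_threads(0);              // replaces the volatile member

void thread_pool::thread_function()
{
    ++free_threads;
    command_queue::handler_t handler;
    while (handler = tasks.pop(timeout))
    {
        --free_threads;
        handler();
        ++free_threads;
    }
    --free_threads;
}

// when assigning a task:
if (free_threads.load() == 0)
    threads.create_thread(boost::bind(&thread_pool::thread_function, this));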
Another idea. You can query the length of the Message Queue. If it gets too long, create a new worker.
Greetings, everyone!
I have a class (say, "Switcher") that executes some very long operation and notifies its listener that the operation is complete. The operation is long, so I isolate the actual switching into a separate thread:
class Switcher
{
public:
    // this is what other users call:
    void StartSwitching()
    {
        // another switch is initiated, I must terminate previous switching operation:
        if ( m_Thread != NULL )
        {
            if ( WaitForThread(m_Thread, 3000) != OK )
            {
                TerminateThread(m_Thread);
            }
        }
        // start new switching thread:
        m_Thread = StartNewThread( ThreadProc );
    }

    // this is a thread procedure:
    static void ThreadProc()
    {
        DoActualSwitching();
        NotifyListener();
    }

private:
    Thread m_Thread;
};
The logic is rather simple - if the user initiates new switching before the previous one is complete, I terminate the previous switching (I don't care what happens inside "DoActualSwitching()") and start the new one. The problem is that sometimes, when terminating the thread, I lose the "NotifyListener()" call.
I would like to introduce some improvements to ensure that NotifyListener() is called every time, even if the thread is terminated. Is there any pattern for doing this? I can only think of another thread that waits indefinitely for the switcher; when the switcher is done (correctly or by termination), it can emit the notification. But introducing another thread seems like overkill to me. Can you think of any other solution (P.S. the platform is Win32)?
Thank you!
First, you should never call TerminateThread. You cannot know which operation is being terminated when calling TerminateThread, and that could lead to memory leaks/resource leaks/state corruption.
To make your thread interruptible/cancelable, you supply a 'cancel' state which is checked by the thread itself. Then your notification at the end will always work.
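A sketch of that cancel flag on top of the question's pseudocode; the m_CancelRequested member and the checks inside DoActualSwitching() are assumptions:
void StartSwitching()
{
    if ( m_Thread != NULL )
    {
        m_CancelRequested = true;                // ask the old thread to stop instead of killing it
        WaitForThread(m_Thread, 3000);           // it notices the flag and returns well before the timeout
    }
    m_CancelRequested = false;
    m_Thread = StartNewThread( ThreadProc );
}

static void ThreadProc()
{
    DoActualSwitching();                         // checks m_CancelRequested between steps and bails out early
    NotifyListener();                            // now runs whether the work finished or was cancelled
}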
TerminateThread() here whacks the thread, and if it was inside DoActualSwitching(), that's where it'll die, and NotifyListener() will not be called on that thread. This is what TerminateThread() does, and there is no way to make it behave differently.
What you are looking for is a bit more graceful way to terminate the thread. Without more info about your application it's difficult to suggest an optimal approach, but if you can edit DoActualSwitching(), then I'd add
if (WAIT_OBJECT_0 == WaitForSingleObject(m_ExitThreadEvent, 0))
    break;
into the loop there, and call SetEvent(m_ExitThreadEvent) instead of TerminateThread(). Of course you'll need to create the event and add the handle to the class. If your model suggests that there is only one switching thread at a time, I'd use an auto-reset event here; otherwise some more code is needed.
Good luck!