Greetings, everyone!
I have a class (say, "Switcher") that executes a very long operation and notifies its listener when the operation is complete. Because the operation is long, I isolate the actual switching into a separate thread:
class Switcher
{
public:
// this is what other users call:
void StartSwitching()
{
// another switch is initiated, I must terminate previous switching operation:
if ( m_Thread != NULL )
{
if ( WaitForThread(m_Thread, 3000) != OK )
{
TerminateThread(m_Thread);
}
}
// start new switching thread:
m_Thread = StartNewThread( ThreadProc );
}
// this is a thread procedure:
static void ThreadProc()
{
DoActualSwitching();
NotifyListener();
}
private:
Thread m_Thread;
};
The logic is rather simple: if the user initiates a new switch before the previous one is complete, I terminate the previous switching operation (I don't care what happens inside "DoActualSwitching()") and start the new one. The problem is that sometimes, when terminating the thread, I lose the "NotifyListener()" call.
I would like to introduce some improvements to ensure that NotifyListener() is called every time, even if the thread is terminated. Is there any pattern for doing this? I can only think of another thread that waits indefinitely for the switcher and, once the switcher is done (correctly or by termination), emits the notification. But introducing another thread seems like overkill to me. Can you think of any other solution? (P.S. the platform is Win32.)
Thank you!
First, you should never call TerminateThread. You cannot know at which point the thread gets terminated when you call TerminateThread, so it can lead to memory leaks, resource leaks, and state corruption.
To make your thread interruptible/cancellable, you supply a 'cancel' state that is checked by the thread itself. Then the notification at the end will always run.
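As a rough sketch of that idea (assuming C++11 std::atomic is available; the chunking of the work and the helper names are made up for illustration):
#include <atomic>

std::atomic<bool> g_cancelRequested(false);

void NotifyListener();            // provided elsewhere, as in the question
void DoOneChunkOfSwitching();     // hypothetical: one small piece of the real work

void DoActualSwitching()
{
    const int kChunks = 100;      // placeholder for however the work is split up
    for (int i = 0; i < kChunks; ++i)
    {
        if (g_cancelRequested.load())   // cooperative cancellation point
            return;                     // unwind cleanly, no state corruption
        DoOneChunkOfSwitching();
    }
}

void ThreadProc()
{
    DoActualSwitching();
    NotifyListener();   // reached on both normal completion and cancellation
}

// In StartSwitching(), instead of TerminateThread(m_Thread):
//     g_cancelRequested = true;
//     WaitForThread(m_Thread, INFINITE);
//     g_cancelRequested = false;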
TerminateThread() here whacks the thread, and if it was inside DoActualSwitching(), that's where it'll die, and NotifyListener() will not be called on that thread. This is what TerminateThread() does, and there is no way to make it behave differently.
What you are looking for is a bit more graceful way to terminate the thread. Without more info about your application it's difficult to suggest an optimal approach, but if you can edit DoActualSwitching(), then I'd add
if (WAIT_OBJECT_0 == WaitForSingleObject(m_ExitThreadEvent, 0))
break;
into the loop there, and call SetEvent(m_ExitThreadEvent) instead of TerminateThread(). Of course you'll need to create the event and add the handle to the class. If your model suggests that there is only one switching thread at a time, I'd use an auto-reset event here; otherwise some more code is needed.
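Put together with the class from the question, a sketch could look like this (it keeps the question's pseudo-API such as StartNewThread and WaitForThread, and assumes DoActualSwitching() runs a loop that checks the event as shown above):
class Switcher
{
public:
    Switcher()
    {
        // auto-reset event, initially non-signalled
        m_ExitThreadEvent = CreateEvent(NULL, FALSE, FALSE, NULL);
    }

    void StartSwitching()
    {
        if ( m_Thread != NULL )
        {
            SetEvent(m_ExitThreadEvent);         // ask the thread to finish...
            WaitForThread(m_Thread, INFINITE);   // ...and let it exit on its own
        }
        m_Thread = StartNewThread( ThreadProc );
    }

private:
    static void ThreadProc()
    {
        DoActualSwitching();   // checks m_ExitThreadEvent inside its loop
        NotifyListener();      // now runs even when the switching was cancelled
    }

    Thread m_Thread;
    HANDLE m_ExitThreadEvent;
};
With an auto-reset event, the wait inside DoActualSwitching() consumes the signal automatically, so no explicit ResetEvent call is needed.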
Good luck!
I have this code:
mainwindow.h:
namespace Ui {
class MainWindow;
}
class MainWindow : public QMainWindow {
private:
QMutex mutex;
};
mainwindow.cpp:
void MainWindow::on_calculateBtn_clicked() {
QMutexLocker locker(&mutex);
qDebug() << "mutex has been locked" << endl;
ui->calculateBtn->setEnabled(false);
startProcess(); // huge calculations
ui->calculateBtn->setEnabled(true); // performed before startProcess() has finished (why?)
qDebug() << "mutex will be unlocked" << endl;
}
If I click calculateBtn again while startProcess() has not finished, my program crashes:
pure virtual method called
The program has unexpectedly finished.
I tried:
void MainWindow::on_calculateBtn_clicked() {
if (!processing) {
processing = true;
ui->calculateBtn->setEnabled(false);
startProcess();
ui->calculateBtn->setEnabled(true); // performed before startProcess() has finished (why?)
processing = false;
}
}
There is no shared data; I just want to make sure one startProcess() is not started before another startProcess() has finished.
Why does this happen? I thought the mutex would lock around startProcess() in on_calculateBtn_clicked() and nothing else should happen. It seems I am missing something important. Thanks in advance for any advice.
The same mutex is locked twice from the same thread (the main thread, which contains the event loop), which is invalid for a non-recursive mutex.
But even a recursive mutex will not solve the basic problem of your code; you need a flag to indicate that you are already doing the calculations, and return from all subsequent calls to your method while they are running, else you'll start them multiple times in the same thread, one interrupting the other, probably with bad results. Even better, disable the button while the method is running and take care it isn't called by other ways.
However, if calling startProcess() multiple times and running it simultaneously is intended, you'll have to start a thread for each button press and take care of access to shared data (most probably using mutexes) - that's where the real fun begins.
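For illustration, a hedged sketch that combines the advice above - a guard against re-entry, a disabled button, and the heavy work moved off the GUI thread with QtConcurrent. It assumes a QFutureWatcher<void> m_watcher member, "QT += concurrent" in the project file, and that startProcess() does not touch any widgets; none of this is shown in the question.
#include <QtConcurrent/QtConcurrent>

MainWindow::MainWindow(QWidget *parent)
    : QMainWindow(parent), ui(new Ui::MainWindow)
{
    ui->setupUi(this);
    // Re-enable the button only after the background work has actually finished.
    connect(&m_watcher, &QFutureWatcher<void>::finished,
            this, [this]() { ui->calculateBtn->setEnabled(true); });
}

void MainWindow::on_calculateBtn_clicked()
{
    if (m_watcher.isRunning())      // a calculation is already in flight: ignore the click
        return;

    ui->calculateBtn->setEnabled(false);
    m_watcher.setFuture(QtConcurrent::run([this] { startProcess(); }));  // off the GUI thread
}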
I think that you (by default) have a Qt::DirectConnection with this button press, right? i.e.:
connect(..., SIGNAL(...),
..., SLOT(on_calculateBtn_clicked()), <by-default-Qt::DirectConnection>);
The issue I see here is that the first button press will run the function void MainWindow::on_calculateBtn_clicked() immediately... which is all good so far: the mutex is locked and the huge calculations are running.
However, when you press the button again, void MainWindow::on_calculateBtn_clicked() is again run immediately (like an interrupt). The first thing it does is try to lock the mutex, and it must hang there.
If you make the connection to the slot void MainWindow::on_calculateBtn_clicked() a Qt::QueuedConnection, then the button press won't be handled until the event loop has cleared the other tasks in its queue, as in the snippet below.
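For reference, an explicit queued connection would look roughly like this (note that with the on_<widget>_<signal>() naming, setupUi's automatic connection would have to be removed or the slot renamed, otherwise the slot would fire twice):
connect(ui->calculateBtn, SIGNAL(clicked()),
        this, SLOT(on_calculateBtn_clicked()),
        Qt::QueuedConnection);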
But... whether or not your design here is good is questionable; I think you should re-think your strategy (as some comments have suggested).
EDIT
Oh yeah, I meant to add... to answer your question: I don't think the mutex is being locked twice as such... it's just the nature of the direct connection.
How can a process catch a signal and handle it in such a way that ongoing I/O output is not interrupted?
Can this be achieved by calling all registered handleExit() callbacks in exitSignalHandling() until one handleExit() returns a status saying that it handled the exit signal? The signal is handled in objectB if it has been marked to handle the exit, which is the case when the process is currently inside the relevant function that needs special care:
void exitSignalHandling() {
    /** call all registered callbacks */
}

while (1) {
    objectB.compute();
    objectA.write(some data); /* on entering write(): set the flag to handle the exit signal; objectB registered a callback objectB::handleExit() */
}
class objectA {
public:
    bool handleExit() {
        if (handlingExit) {            // a write is in progress: defer the exit
            exitAfterWrite = true;
            return true;
        }
        return false;
    }
    void write() {
        handlingExit = true;
        /* write data */
        if (exitAfterWrite) { exit(SUCCESS); }
    }
private:
    bool handlingExit = false;         // set while write() is running
    bool exitAfterWrite = false;
};
Well obviously, the problem is that by handling a signal, you're exiting the object context and are back to static C code.
What you need to do is re-enter the object context with e.g. a singleton class or a global variable. This class would act as the holder class for all the objects that are registered for signal-uninterruptible I/O.
There are different ways of doing this. You can employ an abstract class with a virtual bool handleExit() = 0; method. Another solution would be binding a handler method with std::bind(&Class::handler, this) and storing it in a container.
When you start/end signal-uninterruptible I/O, you need to register/unregister your handler object with the holder. I think that using dynamic polymorphism would be the easiest way here.
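A rough sketch of that holder with dynamic polymorphism might look like the following (all names are illustrative, and the usual caveats about doing non-trivial work inside a signal handler still apply):
#include <vector>
#include <algorithm>

// Interface implemented by every object that may be in the middle of
// signal-uninterruptible I/O.
struct ExitHandler {
    virtual bool handleExit() = 0;   // return true if the exit has been taken care of
    virtual ~ExitHandler() {}
};

// Global holder the static signal handler uses to get back into the object context.
class ExitHandlerRegistry {
public:
    static ExitHandlerRegistry& instance() {
        static ExitHandlerRegistry r;
        return r;
    }
    void add(ExitHandler* h)    { handlers.push_back(h); }   // on entering the I/O section
    void remove(ExitHandler* h) {                             // on leaving the I/O section
        handlers.erase(std::remove(handlers.begin(), handlers.end(), h), handlers.end());
    }
    bool dispatchExit() {            // called from exitSignalHandling()
        for (size_t i = 0; i < handlers.size(); ++i)
            if (handlers[i]->handleExit())
                return true;         // one of the objects deferred the exit
        return false;
    }
private:
    std::vector<ExitHandler*> handlers;
};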
I also have to state that your idea is not exactly thought through. If I call handleExit, I get back a value telling me whether the exit was already set before; I don't see any use for it. But that's a minor problem.
What intrigues me the most is the use of the exit call. Ending the application this way is not very bright. Imagine you had two objects doing uninterruptible I/O at the time an interrupting signal arrives: only the first one will finish; the second one will still get killed along the way by the exit call from the first object.
Generally, I think it would be a much better idea to create one class that is responsible for all the signal handling and decides to kill the application when no I/O is pending.
I have a situation where a notify() 'can' be called before a wait().
I am trying to make a simulator schedule its next event when I 'notify' it by sending it messages. So I have devised a wait -> notify -> schedule chain:
void Broker::pause()
{
boost::unique_lock<boost::mutex> lock(m_pause_mutex);
{
std::cout << "pausing the simulation" << std::endl;
m_cond_cnn.wait(lock);
std::cout << "Simulation UNpaused" << std::endl;
// the following line causes the current function to be called at
// a later time, and a notify() can happen before the current function
// is called again
Simulator::Schedule(MilliSeconds(xxx), &Broker::pause, this);
}
}
void Broker::messageReceiveCallback(std::string message) {
boost::unique_lock<boost::mutex> lock(m_pause_mutex);
{
m_cond_cnn.notify_one();
}
}
The problem here is that there can be situations where a notify() is called before its corresponding wait().
Is there a solution for such a situation?
thank you
Condition variables can hardly be used alone, if only because, as you noticed, they only wake the currently waiting threads. There's also the matter of spurious wake-ups (i.e. the condition variable can sometimes wake up a thread without any corresponding notify having been called). To work properly, condition variables usually need another variable to maintain a more reliable state.
To solve both those problems, in your case you just need to add a boolean flag:
boost::unique_lock<boost::mutex> lock(m_pause_mutex);
while (!someFlag)
m_cond_cnn.wait(lock);
someFlag = false;
//...
boost::unique_lock<boost::mutex> lock(m_pause_mutex);
someFlag = true;
m_cond_cnn.notify_one();
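Folded back into the code from the question, that could look like this (m_unpaused is a new bool member of Broker; everything else is unchanged from the question):
void Broker::pause()
{
    boost::unique_lock<boost::mutex> lock(m_pause_mutex);
    std::cout << "pausing the simulation" << std::endl;
    while (!m_unpaused)          // also protects against spurious wake-ups
        m_cond_cnn.wait(lock);
    m_unpaused = false;          // consume the notification
    std::cout << "Simulation UNpaused" << std::endl;
    Simulator::Schedule(MilliSeconds(xxx), &Broker::pause, this);
}

void Broker::messageReceiveCallback(std::string message)
{
    boost::unique_lock<boost::mutex> lock(m_pause_mutex);
    m_unpaused = true;           // remembered even if pause() is not waiting yet
    m_cond_cnn.notify_one();
}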
I think that syam's answer is fine in general but in your specific case where you seem to be using ns-3, I would suggest instead that you restructure your code to use the right primitives in ns-3:
I suspect that you use one of the ns-3 realtime simulator implementations. Good.
Schedule a keepalive event every 0.1s to make sure that the simulator keeps running (it will stop running when there are no events left).
Optionally, use a boolean in this keepalive event to check if you should reschedule the keepalive event or call Simulator::Stop.
Create a thread to run the simulator mainloop with Simulator::Run(). The simulator will sleep until the next scheduled event is supposed to expire or until a new event is externally scheduled.
Use Simulator::ScheduleWithContext to schedule an event externally from another thread.
Keep in mind that the ns-3 API is not thread-safe in general. The only ns-3 API that is thread-safe is ns3::Simulator::ScheduleWithContext. I can't stress enough how important it is not to use any other API available in the ns3:: namespace from a thread that is not the main thread.
I'm doing some experiments with C++ multithreading and I have no idea how to solve one problem. Let's say we have a thread pool that processes user requests using an existing thread and creates a new thread when no free thread is available. I've created a thread-safe command_queue class, which has push and pop methods. pop waits while the queue is empty and returns only when a command is available or a timeout has occurred. Now it's time to implement the thread pool. The idea is to make free threads sleep for some amount of time and kill the thread if there is nothing to do after that period of time. Here is the implementation:
command_queue::handler_t handler;
while (handler = tasks.pop(timeout))
{
handler();
}
Here we exit the thread procedure if a timeout occurred. That is fine, but there is a problem with new thread creation. Let's say we already have 2 threads processing user requests; they are busy at the moment, but we need to do some other operation asynchronously.
We call
thread_pool::start(some_operation);
which should start a new thread, because there are no free threads available. When a thread is free it calls timed_wait on a condition variable, so the idea is to check whether there are threads that are waiting:
if (there_are_free_threads) // ???
condition.notify_one();
else
create_thread(thread_proc);
But how do I check that? The documentation says that if there are no waiting threads, notify_one does nothing. If I could check whether or not it did nothing, that would be a solution:
if (!condition.notify_one()) // nobody was notified
create_thread(thread_proc);
As far as I see there is no way to check that.
Thanks for your answers.
You need to create another variable (perhaps a semaphore) which knows how many threads are running, then you can check that and create a new thread, if needed, before calling notify.
The other, better option is to just not have your threads exit when they time out. They should stay alive, waiting to be notified. Instead of exiting when the wait times out, check a variable to see if the program is still running or if it is "shutting down". If it's still running, then start waiting again.
A more typical thread pool would look like this:
Pool::Pool()
{
runningThreads = 0;
actualThreads = 0;
finished = false;
jobQue.Init();
mutex.Init();
conditionVariable.Init();
for(int loop=0; loop < threadCount; ++loop) { startThread(threadroutine); }
}
Pool::threadroutine()
{
{
// Extra code to count threads so we can add more if required.
RAIILocker doLock(mutex);
++ actualThreads;
++ runningThreads;
}
while(!finished)
{
Job job;
{
RAIILocker doLock(mutex);
while(jobQue.empty())
{
// This is the key.
// Here the thread is suspended (using zero resources)
// until some other thread calls the notify_one on the
// conditionVariable. At this point exactly one thread is release
// and it will start executing as soon as it re-acquires the lock
// on the mutex.
//
-- runningThreads;
conditionVariable.wait(mutex);
++ runningThreads;
}
job = jobQue.getJobAndRemoveFromQue();
}
job.execute();
}
{
// Extra code to count threads so we can add more if required.
RAIILocker doLock(mutex);
-- actualThreads;
-- runningThreads;
}
}
Pool::AddJob(Job job)
{
RAIILocker doLock(mutex);
// This is where you would check to see if you need more threads.
if (runningThreads == actualThreads) // Plus some other conditions.
{
// increment both counts. When it waits we decrease the running count.
startThread(threadroutine);
}
jobQue.push_back(job);
conditionVariable.notify_one(); // This releases one worker thread
// from the call to wait() above.
// Note: The worker thread will not start
// until this thread releases the mutex.
}
I think you need to rethink your design. In a simple model of a dealer thread handing out work to the player threads, the dealer places the job onto the message queue and lets one of the players pick up the job when it gets a chance.
In your case the dealer is actively managing the thread pool, in that it retains knowledge of which player threads are idle and which are busy. Since the dealer knows which player is idle, it can pass the job directly to that idle player and signal it using a simple semaphore (or condition variable), there being one semaphore per player. In such a case, it might make sense for the dealer to destroy idle threads actively by giving the thread a 'kill yourself' job to do.
For now I have found one solution, but it's not perfect.
I have a volatile member variable named free that stores the number of free threads in the pool.
void thread_pool::thread_function()
{
free++;
command_queue::handler_t handler;
while (handler = tasks.pop(timeout))
{
free--;
handler();
free++;
}
free--;
}
When I assign a task to the pool I do something like this:
if (free == 0)
threads.create_thread(boost::bind(&thread_pool::thread_function, this));
There is still an issue with synchronization, because if the context is switched right after free-- in thread_function, we might create a new thread that we don't actually need; but as the task queue is thread-safe there is no real problem with that, it's just unwanted overhead. Can you suggest a solution to that, and what do you think about it? Maybe it's better to leave it as it is than to add one more synchronization point here?
Another idea: you can query the length of the message queue. If it gets too long, create a new worker, as sketched below.
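Reusing the names from the Pool sketch above, that heuristic could go into AddJob like this (the growth threshold and the maxThreads cap are made-up values):
Pool::AddJob(Job job)
{
    RAIILocker doLock(mutex);
    jobQue.push_back(job);
    // Heuristic: if the queue is growing faster than the workers drain it,
    // add another worker (up to some cap).
    if (jobQue.size() > 2 * actualThreads && actualThreads < maxThreads)
    {
        startThread(threadroutine);
    }
    conditionVariable.notify_one();
}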
I have a simple threading question - how should the following be synchronized?
I have a main thread and a secondary thread that does one thing only once and another thing more than once.
Basically:
Secondary thread:
{
Do_Something_Once();
while (not_important_condition) {
Do_Something_Inside_Loop();
}
}
I want to suspend my main thread until the Do_Something_Once action is done, and right now I use a plain bool value is_something_once_done = false; to indicate whether the action has finished.
Hence, the code of my main thread looks like this:
{
Launch_Secondary_Thread();
while (!is_something_once_done) {
boost::this_thread::sleep(milliseconds(25));
}
}
which obviously isn't the best way to perform this kind of synchronization.
Any alternatives (better if boost::thread - powered)?
Thank you
This is a job for condition variables.
Check out the Condition Variables section of the boost docs - the example there is almost exactly what you're doing.
Whatever you do, don't do a busy-wait loop with sleep
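For what it's worth, a minimal sketch of that with boost::condition_variable, reusing the flag and function names from the question (the mutex and condition variable names are made up):
boost::mutex g_mutex;
boost::condition_variable g_cond;
bool is_something_once_done = false;

// Secondary thread:
{
    Do_Something_Once();
    {
        boost::lock_guard<boost::mutex> lock(g_mutex);
        is_something_once_done = true;
    }
    g_cond.notify_one();
    while (not_important_condition) {
        Do_Something_Inside_Loop();
    }
}

// Main thread:
{
    Launch_Secondary_Thread();
    boost::unique_lock<boost::mutex> lock(g_mutex);
    while (!is_something_once_done)     // the thread sleeps in wait(), no busy loop
        g_cond.wait(lock);
}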
You could consider using boost's condition variable mechanism. It is designed for this scenario.
Insert code that is appropriate for your platform where I have added comments below:
{
// Create event visible by second thread to be signalled on completion
Launch_Secondary_Thread();
// Wait for event to be signalled
}
{
Do_Something_Once();
// set the event state to signalled so that 1st thread knows to continue working
while (not_important_condition) {
Do_Something_Inside_Loop();
}
}
Make sure that the event DOES get signalled, even if the 2nd thread exits abnormally after an exception or other error. If not, your 1st thread will never wake up - unless you can put a timeout on the wait.
You're free to go with mutex locks!
Do_Something_Once()
{
boost::mutex::scoped_lock lock(mutex); // the lock must be a named object, or it is destroyed immediately
// ...
}
Update:
For your particular case I would go with condition variable, as others suggested.