I'm working on some software which makes use of detached threads to do some work. Unfortunately the threads don't clean up nicely; they just quit at the end of execution. This is fine for communication in one direction (i.e. main() can quit first), but it won't work the other way around: at the moment there is no way for main() to know when the threads have finished working so that it can exit gracefully.
I had three initial ideas about this:
Firstly, some kind of counter? (Maybe using a mutex?)
Some kind of semaphore? (I don't know much about these) OR perhaps a promise/future combination
Some other kind of signal/slot mechanism, similar to the signal raised by CTRL-C (SIGINT etc.)
To expand on those bullet points...
My initial idea was to have a protected region of variables - either a counter or an array of flags, one for each thread - accessed through a mutex. The mutex might not even be necessary if each detached thread has its own variable to signal that it has finished working, because main() would only "poll" these variables, which is a read-only operation; only the detached threads themselves need write access. If more than one detached thread shared the same counter/variable, then a mutex would be required.
The next idea I had was to use a semaphore (which is something I really know nothing about) or a promise/future combination, which I think could work as a possible option.
The final thought was some kind of signals mechanism, like possibly "stealing" a SIGxyz signal (like SIGINT) and using that to somehow communicate the end of a thread's execution. I'm not confident about this one, however.
My question is really - how is this supposed to be done? What would the typical engineering solution to this problem be?
(Final thought: Using a file, or a pipe? Seems a bit complicated though?)
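To make the promise/future idea above concrete, here is a rough, untested sketch of what I have in mind (the names are just for illustration):

#include <chrono>
#include <future>
#include <iostream>
#include <thread>

int main()
{
    std::promise<void> done;                       // set by the worker when it finishes
    std::future<void> done_future = done.get_future();

    std::thread t([&done]() {
        std::this_thread::sleep_for(std::chrono::seconds(2));  // the "work"
        done.set_value();                          // signal completion to main()
    });
    t.detach();

    done_future.wait();                            // main() blocks until the detached thread is done
    std::cout << "worker finished\n";
}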
Perhaps I overlooked the question, but I think you could use an atomic variable as a flag to be notified of the detached thread's termination.
Something like the following example:
#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

int main()
{
    // Flag used by the detached thread to signal its termination
    std::atomic_bool term_flag{ false };

    // Some function to run concurrently
    auto func = [&term_flag]() {
        std::this_thread::sleep_for(std::chrono::seconds(2));
        term_flag = true;
    };

    // Run and detach the thread
    std::thread t(func);
    t.detach();

    // Wait until the detached thread has terminated
    while (!term_flag)
        std::this_thread::yield();

    std::cout << "Detached Thread has terminated properly" << std::endl;
    return 0;
}
Output:
Detached Thread has terminated properly
EDIT:
As Hans Passant mentioned, you could also use a condition variable, associated with a mutex and a flag, to do it.
This would be a better solution (though a bit less readable, in my humble opinion) since we have more control over how long to wait.
The basic example above could then be rewritten as:
#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

int main()
{
    // Mutex, condition variable and flag used to signal the detached thread's termination
    std::mutex m;
    std::condition_variable cv;
    bool terminated = false;

    // Some function to run concurrently
    auto func = [&]() {
        std::this_thread::sleep_for(std::chrono::seconds(2));
        {
            std::lock_guard<std::mutex> lk(m);
            terminated = true;
        }
        cv.notify_one();
    };

    // Run and detach the thread
    std::thread t(func);
    t.detach();

    // Wait until the detached thread has terminated
    {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [&] { return terminated; });
    }

    std::cout << "Detached Thread has terminated properly" << std::endl;
    return 0;
}
I want the while loop in the thread to run, wait a second, then run again, and so on, but this doesn't seem to work. How would I fix it?
main(){
bool flag = true;
pthread = CreateThread(NULL, 0, ThreadFun, this, 0, &ThreadIP);
}
ThreadFun(){
while(flag == true)
WaitForSingleObject(pthread,1000);
}
This is one way to do it. I prefer condition variables over sleeps, since they are more responsive, and std::async over std::thread (mainly because std::async returns a future which can send information back to the starting thread, even though that feature is not used in this example).
#include <iostream>
#include <chrono>
#include <future>
#include <condition_variable>
// A very useful primitive for communicating between threads is the condition_variable.
// Despite its name it isn't a variable per se. It is more of an inter-thread signal
// saying: hey, wake up, thread, something may have changed that's interesting to you.
// They come with some conditions of their own
// - always use with a lock
// - never wait without a predicate
// (https://www.modernescpp.com/index.php/c-core-guidelines-be-aware-of-the-traps-of-condition-variables)
// - have some state to observe (in this case just a bool)
//
// Since these three things go together I usually pack them in a class,
// in this case signal_t, which will be used to let threads signal each other
class signal_t
{
public:
// wait for boolean to become true, or until a certain time period has passed
// then return the value of the boolean.
bool wait_for(const std::chrono::steady_clock::duration& duration)
{
std::unique_lock<std::mutex> lock{ m_mtx };
m_cv.wait_for(lock, duration, [&] { return m_signal; });
return m_signal;
}
// wait until the boolean becomes true; wait infinitely long if needed
void wait()
{
std::unique_lock<std::mutex> lock{ m_mtx };
m_cv.wait(lock, [&] {return m_signal; });
}
// set the signal
void set()
{
std::unique_lock<std::mutex> lock{ m_mtx };
m_signal = true;
m_cv.notify_all();
}
private:
bool m_signal { false };
std::mutex m_mtx;
std::condition_variable m_cv;
};
int main()
{
// create two signals to let mainthread and loopthread communicate
signal_t started; // indicates that loop has really started
signal_t stop; // lets mainthread communicate a stop signal to the loop thread.
// in this example I use a lambda to implement the loop
auto future = std::async(std::launch::async, [&]
{
// signal this thread has been scheduled and has started.
started.set();
do
{
std::cout << ".";
// stop.wait_for will either wait 500 ms and return false,
// or stop immediately when the stop signal is set and then return true.
// The wait with condition variables is much more responsive
// than implementing a loop with a sleep (which would only
// check the stop condition every 500 ms)
} while (!stop.wait_for(std::chrono::milliseconds(500)));
});
// wait for loop to have started
started.wait();
// give the thread some time to run
std::this_thread::sleep_for(std::chrono::seconds(3));
// then signal the loop to stop
stop.set();
// synchronize with thread stop
future.get();
return 0;
}
While the other answer shows a possible way to do it, my answer approaches the problem from a different angle, looking at what could be wrong with your code...
Well, if you don't mind waiting up to one second after flag is set to false, and you want a delay of at least 1000 ms between iterations, then a loop with Sleep could work, but to check the flag safely you need
an atomic variable (for ex. std::atomic)
or an interlocked function (for ex. InterlockedCompareExchange)
or a MemoryBarrier
or some other means of synchronisation.
Without proper synchronisation, there is no guarantee that the generated code would read the value from memory rather than from a cached copy in a register.
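As an illustration (not the asker's original code), here is a minimal sketch of such a loop using std::atomic and standard C++ threads; the names are invented for the example:

#include <atomic>
#include <chrono>
#include <thread>

std::atomic<bool> flag{ true };   // hypothetical stop flag shared between threads

void ThreadFun()
{
    while (flag)   // atomic read: guaranteed to see the update from the other thread
    {
        // ... do one unit of work here ...
        std::this_thread::sleep_for(std::chrono::seconds(1));   // run roughly once per second
    }
}

int main()
{
    std::thread t(ThreadFun);
    std::this_thread::sleep_for(std::chrono::seconds(5));   // let the loop run a few times
    flag = false;   // request a stop; seen within roughly one second
    t.join();       // wait for the worker thread to finish
}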
Using Sleep or a similar blocking function from a UI thread would also be suspicious.
For a console application, you could simply wait some time in the main thread if the purpose of your application really is to work for a given duration. But usually you want to wait until processing is completed; in most cases, you should wait until the threads you have started have completed.
Another problem with a Sleep-based loop is that the thread has to wake up at every interval even if there is nothing to do. This can be bad if you want to optimize battery usage. On the other hand, having a relatively long timeout on a function that waits on some signal (handle) might make your code a bit more robust against missed wakeups if your code has some bugs in it.
You also need a delay in some cases where you don't really have anything to wait on but you need to poll some data at a regular interval.
A large timeout could also be useful as a kind of watchdog timer. For example, if you expect to have something to do and receive nothing for an extended period, you could report a warning so that the user can check whether something is not working properly.
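A rough sketch of that watchdog idea using a condition variable timeout (my illustration; the names and the one-second/three-second timings are invented for the example):

#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

std::mutex m;
std::condition_variable data_arrived;   // notified whenever new data comes in
int pending = 0;                        // payload: number of unprocessed items
bool done = false;                      // payload: producer has finished

int main()
{
    // Producer: delivers two items slowly, then signals completion.
    std::thread producer([] {
        for (int i = 0; i < 2; ++i) {
            std::this_thread::sleep_for(std::chrono::seconds(3));
            { std::lock_guard<std::mutex> lk(m); ++pending; }
            data_arrived.notify_one();
        }
        { std::lock_guard<std::mutex> lk(m); done = true; }
        data_arrived.notify_one();
    });

    // Consumer with a one-second watchdog timeout.
    std::unique_lock<std::mutex> lk(m);
    while (!done) {
        if (!data_arrived.wait_for(lk, std::chrono::seconds(1),
                                   [] { return pending > 0 || done; })) {
            std::cerr << "warning: nothing received for 1 second\n";   // watchdog report
            continue;
        }
        while (pending > 0) { --pending; std::cout << "processed one item\n"; }
    }
    producer.join();
}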
I highly recommend you read a book on multithreading, like C++ Concurrency in Action, before writing multithreaded code.
Without a proper understanding of multithreading, it is almost certain that one's code will be buggy. You need to properly understand the C++ memory model (https://en.cppreference.com/w/cpp/language/memory_model) to write correct code.
A thread waiting on itself makes no sense. When you wait on a thread, you are waiting for it to terminate, and obviously if it has terminated, it cannot be executing your code. Your main thread should wait for the background thread to terminate.
I also usually recommend using the C++ threading facilities over the Win32 API, as they:
Make your code portable to other systems.
Are usually higher-level constructs (std::async, std::future, std::condition_variable...) than the corresponding Win32 API code.
I just read the doc about std::thread.detach() in C++11.
Here is my test:
#include <iostream>
#include <thread>
#include <chrono>
static int counter = 0;
void func()
{
while (true) {
std::cout<<"running..."<<std::endl;
std::cout<<counter++<<std::endl;
std::this_thread::sleep_for(std::chrono::milliseconds(1000));
}
}
int main()
{
{
std::thread t(func);
t.detach();
} // t is released after this line
// t has died, so who is holding the resources of the detached thread???
std::cin.get();
return 0;
}
This code works as expected. So it seems that the thread can keep running even if its destructor has been invoked. Is this true?
If it's true, who on earth holds the resources of the thread after the object t is released? Is there some mechanism to hold the resources, for example, a hidden anonymous object?
In C++, std::thread does not manage the thread of execution itself. C++ does not have controls for managing the thread of execution at all.
std::thread manages the thread handle - the identifier of a thread (pthread_t in the POSIX world, which was largely a model for std::thread). Such an identifier is used to communicate with (as in control) the thread, but in C++ the only standard ways of communicating through it are joining the thread (which is simply waiting for the thread's completion) or detaching from it.
When the std::thread destructor is called, the thread handle is also destroyed, and no further controlling of the thread is possible. But the thread of execution itself remains and continues to be managed by the implementation (or, more precisely, the operating system).
Please note that for non-detached threads, std::thread's destructor calls std::terminate if the thread has not been joined. This is simply a safeguard against developers accidentally losing the thread handle when they didn't intend to.
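A tiny sketch of that safeguard (my illustration, not part of the original answer):

#include <thread>

int main()
{
    {
        std::thread t([] {});
        t.detach();   // fine: the handle is released, the thread of execution keeps going
    }                 // t's destructor runs here without complaint

    {
        std::thread t([] {});
        // If we forgot to call t.join() or t.detach() here,
        // t's destructor would call std::terminate at the closing brace.
        t.join();
    }
}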
You are correct that, once detached, the thread keeps running even after the std::thread object's destructor has run.
No one on earth holds the resources (unless you make arrangements for someone to). However, when your application exits, the application shutdown process will end the thread.
One can still arrange to communicate with and "wait" for a detached thread. In essence, join() is a convenience API so that you don't have to do something like this:
#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>
static int counter = 0;
std::atomic<bool> time_to_quit{false};
std::atomic<bool> has_quit{false};
void func()
{
while (!time_to_quit) {
std::cout<<"running..."<<std::endl;
std::cout<<counter++<<std::endl;
std::this_thread::sleep_for(std::chrono::milliseconds(1000));
}
has_quit = true;
}
int main()
{
{
std::thread t(func);
t.detach();
} // t is released after this line
using namespace std::chrono_literals;
std::this_thread::sleep_for(3s);
time_to_quit = true;
while (!has_quit)
;
std::cout << "orderly shutdown\n";
}
Threads of execution exist independently from the thread objects that you use to manage them in C++. When you detach a thread object, the thread of execution continues running, but the implementation (usually in combination with the operating system) is responsible for it.
Why is the condition variable stuck waiting if it was notified in worker_thread? What am I missing here?
#include <thread>
#include <mutex>
#include <condition_variable>
#include <iostream>
std::mutex m;
std::condition_variable cv;
void worker_thread()
{
cv.notify_one();
}
int main()
{
std::thread worker(worker_thread);
std::cout << "Start waiting..." << std::endl;
std::unique_lock<std::mutex> lk(m);
cv.wait(lk);
std::cout << "Finished waiting..." << std::endl;
worker.join();
getchar();
}
Your problem is that cv.notify_one() only wakes threads that are currently waiting. cv doesn't remember that you notified it when someone comes along later and waits.
Your worker thread is outpacing your main thread, so the notify happens before the main thread starts waiting.
This is just a symptom of your real problem: you are using the condition variable wrong. Barring extremely advanced use, every use of a condition variable should come as a triple.
A std::condition_variable.
A std::mutex.
A payload.
Your code is missing the payload.
To signal, you:
std::unique_lock<std::mutex> l(m);
payload = /* move it to a 'set' or 'non-empty' state */;
cv.notify_one(); // or all
To listen, you:
std::unique_lock<std::mutex> l(m);
cv.wait(l, [&]{ return /* payload is in a set or non-empty state */; });
// while locked, consume one "unit" of payload from the payload.
with minor variations for wait_for and the like.
Following this cargo-cult pattern is important, as it avoids a number of pitfalls: it deals both with spurious wakeups and with the notification happening before the wait.
Your code is missing a payload, so it is vulnerable both to the signaling thread outrunning the waiting thread and to spurious wakeups.
Note that getting "clever" here is highly discouraged. For example, deciding that "I'll use an atomic variable to avoid using a mutex when signaling" actually doesn't work. Either follow the above recipe dogmatically, or go and spend a few months learning the threading and memory model of C++ well enough to improvise.
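For concreteness, here is one way the asker's example could be reworked to follow that recipe (a sketch, using a simple bool as the payload):

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

std::mutex m;
std::condition_variable cv;
bool ready = false;   // the payload

void worker_thread()
{
    {
        std::lock_guard<std::mutex> lk(m);
        ready = true;                       // set the payload under the lock
    }
    cv.notify_one();
}

int main()
{
    std::thread worker(worker_thread);
    std::cout << "Start waiting..." << std::endl;
    {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [] { return ready; });  // predicate handles notify-before-wait and spurious wakeups
    }
    std::cout << "Finished waiting..." << std::endl;
    worker.join();
}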
notify_one will unblock a waiting thread if there is one. If there are no waiting threads, nothing happens; a condition_variable has no state to remember that a notification occurred before anyone waited on it.
I'm new to multithreaded programming. I have a simple testing program:
#include <mutex>
#include <thread>
#include <iostream>
int main(){
std::mutex mtx;
std::thread t1([&](){
while (true){
mtx.lock();
std::cout << 1 << "Hello" << "\n";
mtx.unlock();
}
});
std::thread t2([&](){
while (true){
mtx.lock();
std::cout << 2 << "Hello" << "\n";
mtx.unlock();
}
});
t1.join();
t2.join();
}
This is a pretty simple program, and it prints "1Hello" and "2Hello" in a random pattern, which implies that the mutex is unlocked by one thread and then acquired by the other, in some random order.
Is this behavior specified by the standard? That is, will an implementation guarantee that it won't stick to t1? And if not, how do I avoid that?
There is no guarantee of which thread will be running. If you can set the priority of one thread higher than the other, then with this code you can even guarantee that only the highest-priority thread will ever run.
What is the actual problem? The problem is that this code uses multi-threading in the worst possible way. That is quite an achievement, and not really bad since it is an exercise: it asks the threads to run continuously, it holds the lock for the whole body of the loop and only releases it for an instant before the next iteration, so there is no real parallelism, only a battle for the mutex.
How can this be solved? Let the threads do some background action and then stop, or let the threads wait for a condition, or at least let the threads sleep once in a while, AND let the threads run as independently as possible, not blocking the others while doing a potentially long action. A sketch follows below.
Edit (small clarification): while this code uses multi-threading in the worst possible way, it is a nice and clean example of the mechanics involved.
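As an illustration of the "sleep once in a while" suggestion, here is a minimal reworking of the loop (a sketch, not the only possible fix; the 10 ms pause is an arbitrary choice):

#include <chrono>
#include <iostream>
#include <mutex>
#include <thread>

int main()
{
    std::mutex mtx;

    auto worker = [&](int id) {
        while (true) {
            {
                std::lock_guard<std::mutex> lock(mtx);   // hold the lock only for the short print
                std::cout << id << "Hello" << "\n";
            }
            // Give the other thread a real chance to acquire the mutex
            std::this_thread::sleep_for(std::chrono::milliseconds(10));
        }
    };

    std::thread t1(worker, 1);
    std::thread t2(worker, 2);
    t1.join();
    t2.join();
}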
Suppose we have two workers, with ids 0 and 1. Also suppose that jobs arrive all the time, and each job has an identifier, 0 or 1, which specifies which worker will have to do it.
I would like to create 2 threads that are initially locked, and then when two jobs arrive, unlock them, each of them does their job and then lock them again until other jobs arrive.
I have the following code:
#include <iostream>
#include <thread>
#include <mutex>
using namespace std;
struct job{
thread jobThread;
mutex jobMutex;
};
job jobs[2];
void executeJob(int worker){
while(true){
jobs[worker].jobMutex.lock();
//do some job
}
}
void initialize(){
int i;
for(i=0;i<2;i++){
jobs[i].jobThread = thread(executeJob, i);
}
}
int main(void){
//initialization
initialize();
int buffer[2];
int bufferSize = 0;
while(true){
//jobs arrive here constantly,
//once the buffer becomes full,
//we unlock the threads(workers) and they start working
bufferSize = 2;
if(bufferSize == 2){
for(int i = 0; i<2; i++){
jobs[i].jobMutex.unlock();
}
}
break;
}
}
I started using std::thread a few days ago and I'm not sure why but Visual Studio gives me an error saying abort() has been called. I believe there's something missing however due to my ignorance I can't figure out what.
I would expect this piece of code to actually
Initialize the two threads and then lock them
Inside the main function unlock the two threads, the two threads will do their job(in this case nothing) and then they will become locked again.
But it gives me an error instead. What am I doing wrong?
Thank you in advance!
For this purpose you can use boost's threadpool class.
It's an efficient and well-tested open-source library, so you don't have to write and stabilize one yourself.
http://threadpool.sourceforge.net/
#include "threadpool.hpp"   // header from the threadpool library linked above
using namespace boost::threadpool;

void first_task()
{
    //...
}

void second_task()
{
    //...
}

int main()
{
    pool tp(2);   // number of worker threads - currently it's 2

    // Add some tasks to the pool.
    tp.schedule(&first_task);
    tp.schedule(&second_task);
}
Note:
Suggestion for your example:
You don't need an individual mutex object for each thread; a single mutex object will handle the synchronisation between all the threads. As written, you lock one thread's mutex in executeJob and never unlock it, while the other thread calls lock on a different mutex object, which can lead to deadlock or undefined behaviour.
Also, since you call mutex.lock() inside the while loop without ever unlocking, the same thread keeps trying to lock the same mutex object again, leading to undefined behaviour.
If you do not need the threads to execute in parallel, you can have one global mutex object that is locked and unlocked inside executeJob:
mutex m;
void executeJob(int worker)
{
m.lock();
//do some job
m.unlock();
}
If you want to execute the jobs in parallel, use the boost threadpool as I suggested earlier.
In general you can write an algorithm similar to the following. It works with pthreads, and it would work with C++ threads as well; a C++ sketch follows the steps below.
Create the threads and make them wait on a condition variable, e.g. work_exists.
When work arrives, you notify all threads that are waiting on that condition variable. Then in the main thread you start waiting on another condition variable, work_done.
Upon receiving the work_exists notification, worker threads wake up, grab their assigned work from jobs[worker], execute it, send a notification on the work_done variable, and then go back to waiting on the work_exists condition variable.
When the main thread receives the work_done notification, it checks whether all threads are done. If not, it keeps waiting until the notification from the last-finishing thread arrives.
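A rough C++ sketch of that scheme (the names work_exists/work_done follow the steps above; details such as the job queue and the single dispatched batch are simplified assumptions):

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

std::mutex m;
std::condition_variable work_exists;   // signals workers that work has arrived
std::condition_variable work_done;     // signals main that a worker has finished
bool job_ready[2] = { false, false };  // one pending-job flag per worker
int  jobs_remaining = 0;               // how many workers still have to report back
bool shutting_down = false;

void worker(int id)
{
    while (true) {
        std::unique_lock<std::mutex> lk(m);
        work_exists.wait(lk, [&] { return job_ready[id] || shutting_down; });
        if (shutting_down) return;
        job_ready[id] = false;
        lk.unlock();

        // ... do the actual job for this worker here, outside the lock ...
        std::cout << "worker " << id << " did its job\n";

        lk.lock();
        --jobs_remaining;
        work_done.notify_one();
    }
}

int main()
{
    std::thread t0(worker, 0), t1(worker, 1);

    // Dispatch one batch of two jobs.
    {
        std::lock_guard<std::mutex> lk(m);
        job_ready[0] = job_ready[1] = true;
        jobs_remaining = 2;
    }
    work_exists.notify_all();

    // Wait until both workers report back.
    {
        std::unique_lock<std::mutex> lk(m);
        work_done.wait(lk, [] { return jobs_remaining == 0; });
    }

    // Tell the workers to exit and join them.
    {
        std::lock_guard<std::mutex> lk(m);
        shutting_down = true;
    }
    work_exists.notify_all();
    t0.join();
    t1.join();
}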
From cppreference's page on std::mutex::unlock:
The mutex must be unlocked by all threads that have successfully locked it before being destroyed. Otherwise, the behavior is undefined.
Your approach of having one thread unlock a mutex on behalf of another thread is incorrect.
The behavior you're attempting would normally be done using std::condition_variable. There are examples if you look at the links to the member functions.