Possible Duplicate:
Is there a way to cancel/detach a future in C++11?
There is a member function which runs asynchronously using std::future and std::async. In some cases I need to cancel it. (The function loads nearby objects consecutively, and sometimes an object gets out of range while it is being loaded.) I already read the answers to this question addressing the same issue, but I cannot get it to work.
This is simplified code with the same structure as my actual program. Calling Start() and then Kill() while the asynchronous function is running causes a crash due to an access violation on input.
In my eyes the code should work as follows. When Kill() is called, the running flag is cleared. The next call, get(), should wait for the thread to end, which it does shortly afterwards because it checks the running flag. After the thread has finished, the input pointer is deleted.
#include <vector>
#include <future>

using namespace std;

class Class
{
    future<void> task;
    bool running;
    int *input;
    vector<int> output;

    void Function()
    {
        for(int i = 0; i < *input; ++i)
        {
            if(!running) return;
            output.push_back(i);
        }
    }

    void Start()
    {
        input = new int(42534);
        running = true;
        task = async(launch::async, &Class::Function, this);
    }

    void Kill()
    {
        running = false;
        task.get();
        delete input;
    }
};
It seems like the thread doesn't notice toggling the running flag to false. What is my mistake?
Since no one has actually answered the question yet, I'll do so.
The writes and reads of the running variable are not atomic operations, so nothing in the code causes any synchronisation between the two threads, and nothing ever ensures that the async thread sees that the variable has changed.
One possible way that can happen is that the compiler analyzes the code of Function, determines that there are never any writes to the variable in that thread, and, since it's not an atomic object, writes by other threads are not required to be visible, so it's entirely legal to rearrange the code to this:
void Function()
{
    if(!running) return;
    for(int i = 0; i < *input; ++i)
    {
        output.push_back(i);
    }
}
Obviously, in this code, if running changes after the function has started, it won't cause the loop to stop.
There are two ways the C++ standard allows you to synchronize the two threads: either use a mutex and only read or write the running variable while the mutex is locked, or make the variable atomic. In your case, changing running from bool to atomic<bool> will ensure that writes to the variable synchronize with reads from it, and the async thread will terminate.
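For completeness, here is a minimal sketch of the question's class with that fix applied; apart from the extra include, only running changes (and gets an initial value), everything else is as in the question:

#include <vector>
#include <future>
#include <atomic>

using namespace std;

class Class
{
    future<void> task;
    atomic<bool> running{false}; // atomic: the store in Kill() now synchronizes with the load in Function()
    int *input;
    vector<int> output;

    void Function()
    {
        for(int i = 0; i < *input; ++i)
        {
            if(!running) return; // reads the atomic flag on every iteration
            output.push_back(i);
        }
    }

    void Start()
    {
        input = new int(42534);
        running = true;
        task = async(launch::async, &Class::Function, this);
    }

    void Kill()
    {
        running = false; // atomic store, visible to the async thread
        task.get();      // wait for the task to notice the flag and return
        delete input;    // safe: nobody reads *input any more
    }
};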
Related
I am trying to design an infinite (or a user-defined length) loop that would be independent of my GUI process. I know how to start that loop in a separate thread, so the GUI process is not blocked. However, I would like to have a possibility to interrupt the loop at a press of a button. The complete scenario may look like this:
GUI::startButton->myClass::runLoop... ---> starts a loop in a new thread
GUI::stopButton->myClass::terminateLoop ---> should be able to interrupt the started loop
The problem I have is figuring out how to provide the stop functionality. I am sure there is a way to achieve this in C++. I was looking at a number of multithreading related posts and articles, as well as some lectures on how to use async and futures. Most of the examples did not fit my intended use and/or were too complex for my current state of skills.
Example:
GUIClass.cpp
MyClass *myClass = new MyClass;

void MyWidget::on_pushButton_start_clicked()
{
    myClass->start().detach();
}

void MyWidget::on_pushButton_stop_clicked()
{
    myClass->stop(); // TBD: how to implement the stop functionality?
}
MyClass.cpp
std::thread MyClass::start()
{
    return std::thread(&MyClass::runLoop, this);
}

void MyClass::runLoop()
{
    for(int i = 0; i < 999999; i++)
    {
        // do some work
    }
}
As far as I know, there is no standard way to terminate an STL thread. And even if it were possible, it would not be advisable, since it can leave your application in an undefined state.
It would be better to add a check to your MyClass::runLoop method that stops execution in a controlled way as soon as an external condition is fulfilled. This might, for example, be a control variable like this:
void MyClass::start()
{
    _threadRunning = true;
    if(_thread.joinable() == true) // If the thread is joinable...
    {
        // Join before (re)starting the thread
        _thread.join();
    }
    _thread = std::thread(&MyClass::runLoop, this);
}

void MyClass::runLoop()
{
    for(int i = 0; i < MAX_ITERATION_COUNT; i++)
    {
        if(_threadRunning == false) { break; }
        // do some work
    }
}

(Note that start() now returns void: std::thread is move-only, and the class keeps ownership of _thread so it can join it itself, so the caller no longer needs to detach anything.)
Then you can end the thread with:
void MyClass::stopLoop()
{
    _threadRunning = false;
}
_threadRunning would here be a member variable of type std::atomic<bool>. A plain bool might appear to work on x86, x86_64, ARM and ARM64, but unsynchronized access to a non-atomic variable is a data race, so using std::atomic<bool> is advised; it also hints to the reader that the variable is used in a multithreading context.
Possible MyClass.h:
class MyClass
{
public:
    MyClass() : _threadRunning(false) {}
    void start();
    void runLoop();
    void stopLoop();
private:
    std::thread _thread;
    std::atomic<bool> _threadRunning;
};
It might be important to note that, depending on the code in your loop, it might take a while before the thread really stops.
Therefore it might be wise to std::thread::join the thread before restarting it, to make sure only one thread runs at a time.
Suppose you are given the following code:
class FooBar {
    public void foo() {
        for (int i = 0; i < n; i++) {
            print("foo");
        }
    }

    public void bar() {
        for (int i = 0; i < n; i++) {
            print("bar");
        }
    }
}
The same instance of FooBar will be passed to two different threads. Thread A will call foo() while thread B will call bar(). Modify the given program to output "foobar" n times.
For the following problem on leetcode we have to write two functions
void foo(function<void()> printFoo);
void bar(function<void()> printBar);
where printFoo and printBar are callables that print "foo" and "bar" respectively. The functions foo and bar are called in a multithreaded environment, and there is no ordering guarantee on how foo and bar are called.
My solution was
class FooBar {
private:
    int n;
    mutex m1;
    condition_variable cv;
    condition_variable cv2;
    bool flag;
public:
    FooBar(int n) {
        this->n = n;
        flag = false;
    }

    void foo(function<void()> printFoo) {
        for (int i = 0; i < n; i++) {
            unique_lock<mutex> lck(m1);
            cv.wait(lck, [&]{ return !flag; });
            printFoo();
            flag = true;
            lck.unlock();
            cv2.notify_one();
        }
    }

    void bar(function<void()> printBar) {
        for (int i = 0; i < n; i++) {
            unique_lock<mutex> lck(m1);
            cv2.wait(lck, [&]{ return flag; });
            printBar();
            flag = false;
            lck.unlock();
            cv.notify_one();
            // printBar() outputs "bar". Do not change or remove this line.
        }
    }
};
Let us assume that at time t = 0 bar is called, and then at time t = 10 foo is called and goes through the critical section protected by the mutex m1.
My questions are:
Does the C++ memory model, through the synchronization it guarantees, ensure that when the bar function resumes from waiting on cv2 the value of flag will be true?
Am I right in assuming that locks shared among threads enforce a before-and-after relationship, in the manner of Leslie Lamport's clocking system? The compiler and C++ guarantee that everything before the end of a critical section (here, the release of the lock) will be observed by any thread that re-acquires the lock, so common locks, atomics and semaphores can be visualised as enforcing before-and-after behaviour by establishing time in a multithreaded environment.
Can we solve this problem using just one condition variable?
Is there a way to do this without using locks and just atomics? What performance improvements do atomics give over locks?
What happens if I do cv.notify_one() and correspondingly cv2.notify_one() within the critical region? Is there a chance of a missed wakeup?
Original Problem
https://leetcode.com/problems/print-foobar-alternately/.
Leslie Lamport's Paper
https://lamport.azurewebsites.net/pubs/time-clocks.pdf
Does the C++ memory model, through the synchronization it guarantees, ensure that when the bar function resumes from waiting on cv2 the value of flag will be true?
By itself, a condition variable is prone to spurious wake-ups. A cv.wait(lck) call without a predicate clause can return for all kinds of reasons. That's why it's always important to check the predicate condition in a while loop before entering the wait. You should never assume that when wait(lck) returns, the thing you were waiting for has actually happened. But with the clause you added to the wait, cv2.wait(lck, [&]{ return flag; });, this check is taken care of for you. So yes, when wait(lck, predicate) returns, flag will be true.
Can we solve this problem using just one condition variable?
Absolutely. Just get rid of cv2 and have both threads wait (and notify) on the first cv.
Is there a way to do this without using locks and just atomics? What performance improvements do atomics give over locks?
Atomics are great when you can get away with polling on one thread instead of waiting. Imagine a UI thread that wants to show the current speed of your car and polls the speed variable on every frame refresh, while another thread, the "engine thread", sets that atomic<int> speed variable with every rotation of the tire. That's where atomics shine: when you already have a polling loop in place. On x86, atomics are mostly implemented with the LOCK opcode prefix (i.e. the concurrency is handled correctly by the CPU).
As for an implementation with just locks and atomics... well, it's late for me. An easy solution: both threads just sleep and poll on an atomic integer that increments with each thread's turn. Each thread waits for the value to reach "last + 2" and polls every few milliseconds. Not efficient, but it would work.
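A rough sketch of that polling idea (the class name, the counter encoding and the one-millisecond sleep are illustrative choices, not part of the original problem):

#include <atomic>
#include <chrono>
#include <functional>
#include <thread>

class FooBarPolling {
    int n;
    std::atomic<int> turn{0}; // even values: foo's turn, odd values: bar's turn

public:
    explicit FooBarPolling(int n) : n(n) {}

    void foo(std::function<void()> printFoo) {
        for (int i = 0; i < n; i++) {
            while (turn.load() != 2 * i)             // wait for my turn: 0, 2, 4, ...
                std::this_thread::sleep_for(std::chrono::milliseconds(1));
            printFoo();
            turn.store(2 * i + 1);                   // hand over to bar
        }
    }

    void bar(std::function<void()> printBar) {
        for (int i = 0; i < n; i++) {
            while (turn.load() != 2 * i + 1)         // wait for my turn: 1, 3, 5, ...
                std::this_thread::sleep_for(std::chrono::milliseconds(1));
            printBar();
            turn.store(2 * i + 2);                   // hand back to foo
        }
    }
};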
It's a bit late in the evening for me to think about how to do this with a single mutex or a pair of mutexes.
What happens if I do cv.notify_one() and correspondingly cv2.notify_one() within the critical region? Is there a chance of a missed wakeup?
No, you're fine, as long as all your threads hold the lock and check their predicate condition before entering the wait call. You can do the notify call inside or outside of the critical region. I always recommend notify_all over notify_one, but that might even be unnecessary here.
I have a class with some methods that should be thread safe, i.e. multiple threads should be able to operate on the class object's state. One of the methods spawns a new thread that updates a field every 10 seconds. Because this thread can be long-running, I'd like to be able to abort it properly.
I have implemented a solution that uses std::condition_variable::wait_for() to wait for an abort signal inside the thread, but I am not sure whether my solution is optimal, or even correct at all.
class A
{
    unsigned int value;     // A value that will be updated every 10 s in another thread
    bool is_being_updated;  // true while value is being updated in another thread
    std::thread t;
    bool aborted;           // true = thread should abort
    mutable std::mutex m1;
    mutable std::mutex m2;
    std::condition_variable cv;
public:
    A();
    ~A();
    void begin_update();    // Creates a thread that periodically updates value
    void abort();           // Aborts the updating thread
    unsigned int get_value() const;
    void set_value(unsigned int);
};
This is how I implemented the methods:
A::A() : value(0), is_being_updated(false), aborted(false) { }

A::~A()
{
    // Not sure if this is thread safe?
    if(t.joinable()) t.join();
}

// Updates this->value every 10 seconds
void A::begin_update()
{
    std::lock_guard<std::mutex> lck(m1);
    if (is_being_updated) return; // Don't allow begin_update() while updating
    is_being_updated = true;
    if (aborted) aborted = false;

    // Create a thread that will update value periodically
    t = std::thread([this] {
        std::unique_lock<std::mutex> update_lock(m2);
        for(int i = 0; i < 10; i++)
        {
            cv.wait_for(update_lock, std::chrono::seconds(10), [this]{ return aborted; });
            if (!aborted)
            {
                std::lock_guard<std::mutex> lck(m1);
                this->value++; // Update value
            }
            else
            {
                break; // Break on thread abort
            }
        }
        // Locking here would cause indefinite blocking ...
        // std::lock_guard<std::mutex> lck(m1);
        if(is_being_updated) is_being_updated = false;
    });
}

// Aborts the thread created in begin_update()
void A::abort()
{
    std::lock_guard<std::mutex> lck(m1);
    is_being_updated = false;
    this->value = 0; // Reset value
    {
        std::lock_guard<std::mutex> update_lock(m2);
        aborted = true;
    }
    cv.notify_one(); // Signal abort ...
    if(t.joinable()) t.join(); // Wait for the thread to finish
}

unsigned int A::get_value() const
{
    std::lock_guard<std::mutex> lck(m1);
    return this->value;
}

void A::set_value(unsigned int v)
{
    std::lock_guard<std::mutex> lck(m1);
    if (is_being_updated) return; // Cannot set value while thread is updating it
    this->value = v;
}
This seems to work fine, but I'm uncertain about it being correct. My concerns are the following:
1. Is my destructor safe? Suppose that the updating thread has not been aborted and is still doing its job while the A object goes out of scope. A switch to a different thread now happens while the dtor's t.join() still hasn't finished, and the switched-to thread calls begin_update() on the same object. Is something like this possible? Should I introduce e.g. an extra is_being_destructed flag that I set to true inside the destructor and that all other methods check is false before they proceed? Or can no such undesired scenario happen?
2. Inside the thread, at the end, I'm setting is_being_updated = false without a lock, despite the variable being shared state. This can mean that other threads won't see its correct value, e.g. even after the thread is done, some other thread may still see is_being_updated == true instead of false. I cannot lock the mutex, however, because abort() may have already locked it, meaning that the call would block indefinitely. I'm not sure about the best way to solve this, other than perhaps making is_being_updated atomic. Would that work?
3. I've read about spurious wakeups, but am not sure if the code should do anything extra to handle them. As far as I understand, the answer is no, and no problems are to be expected in this regard.
Is my thinking here correct? Did I miss anything else that I should have in mind?
This stuff is always hard to check, so don't be afraid to question me if you think I misunderstand.
Short answer, no, it's not thread safe.
As long as the thread that owns A is the one calling abort (and doesn't forget to call abort), you won't experience a race condition, as A::abort() will block until the thread is joined. Under these assumptions, the join in your destructor is pointless.
If abort is called by a thread that doesn't own A, then it's definitely possible for the thread to be joined twice, which is bad. Using .joinable() to decide whether or not to join a thread is a big red flag.
Please remove one of your if(t.joinable()) t.join(); (I'm leaning towards the one in the destructor) and change the other to just t.join().
As you said, you can make is_being_updated atomic. That's a great solution.
Here's another solution. You can signal without holding the lock. (It's actually better form in general, as it helps reduce lock contention, since the first thing the woken thread needs to do is reacquire its mutex.)
void A::abort()
{
    {
        std::lock(m1, m2); // deadlock-proof
        std::lock_guard<std::mutex> lck(m1, std::adopt_lock);
        std::lock_guard<std::mutex> update_lock(m2, std::adopt_lock);
        is_being_updated = false;
        this->value = 0; // Reset value
        aborted = true;
    }
    cv.notify_one(); // Signal abort ...
    t.join(); // Wait for the thread to finish
}
You're good. The way you wrote the wait, it will only return if aborted == true or 10 seconds have elapsed.
1) I think this problem is inherent in your design; as it is, a bool flag will not fix it. Maybe A shouldn't go out of scope until all the threads stop using it, in which case it should live in a managed pointer like shared_ptr.
2) You should be using atomics for your bools and also for value; this would avoid having to take the lock for increasing the value and for returning it.
3) As I said in the comments, the lambda in the cv wait handles the spurious wakeups.
The biggest bit of code smell is using a full thread to update a variable every 10 seconds: a heavyweight OS thread, with megabytes to gigabytes of address space, to do one task every 10 seconds.
What's more, it is updating a value without anyone being able to see the change.
You already have a get_value accessor. Simply store the start point when you want to start counting; when get_value is called, calculate the time since the start point, divide by 10 seconds and use that to compute the returned value.
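A minimal sketch of that idea (it drops set_value and the ten-iteration limit of the original for brevity, so it illustrates the technique rather than being a drop-in replacement):

#include <chrono>
#include <mutex>

class A
{
    using clock = std::chrono::steady_clock;
    clock::time_point start_point;
    bool counting = false;
    mutable std::mutex m;

public:
    void begin_update()            // just remember when counting started; no thread needed
    {
        std::lock_guard<std::mutex> lck(m);
        counting = true;
        start_point = clock::now();
    }

    void abort()                   // stop counting; nothing to join
    {
        std::lock_guard<std::mutex> lck(m);
        counting = false;
    }

    unsigned int get_value() const // derive the value from the elapsed time on demand
    {
        std::lock_guard<std::mutex> lck(m);
        if (!counting) return 0;
        auto elapsed = clock::now() - start_point;
        return static_cast<unsigned int>(
            std::chrono::duration_cast<std::chrono::seconds>(elapsed).count() / 10);
    }
};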
In a real application, you'd have a timer system that lets you trigger events (either in a thread pool or in a message pump) every period of time. You'd use that instead of a dedicated thread for something like this, and you'd make sure that modifying the value was observable (allowing people to subscribe to changes in it). Then your abort would consist of deregistering the timer instead of stopping a thread.
Your system is a horrible mixture of the two, using threads for no good reason.
I am trying to make a timer, so that after five minutes something happens. The catch is that, while the timer is being checked constantly, I need other code to be running. I have created a sample below of how the actual code looks; the function with the timer is in a class, so I did the same thing below. Here is the code:
This code assumes all necessary headers are included
Class.h:
class MyClass
{
public:
    void TimerFunc(int MSeconds);
};

void MyClass::TimerFunc(int MSeconds)
{
    Sleep(MSeconds); //Windows.h
    //Event code
    return;
}
Main.cpp:
int main()
{
    MyClass myClass;
    myClass.TimerFunc(300); //300 is 5 minutes

    //Here we do not want to wait for the five minutes to pass,
    //instead we want to continue the rest of the code and check
    //for user input as below

    std::cout << "This should print before the Event Code happens.";
}
The problem here is that the code waits for the five minutes to pass and then continues. I'm not sure if threading would be a good option here; I haven't done much with it before. If anyone could help me with that, or knows a better way to go about it, any help is appreciated.
If you don't mind your Event executing in a different thread-context, you could have your Timer class spawn a thread to do the waiting and then the event-execution; or (on POSIX OS's) set up a SIGALRM signal and have the signal handler do the Event. The downside of that is that if your event-code does anything non-trivial, you'll need to worry about race conditions with the concurrently executing main thread.
The other approach is to have your main thread check the clock every so often, and if the time-to-execute has passed, have your main thread call your Event routine at that time. That has the advantage of automatic thread-safety, but the disadvantage is that you'll have to add that code into your thread's main event loop; you can't easily hide it away inside a class like the one in your example.
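A rough sketch of that second approach (the DoEvent function and the loop body are placeholders, not code from the question):

#include <chrono>
#include <iostream>

void DoEvent() { std::cout << "Event code runs here\n"; } // placeholder for the real event code

int main()
{
    using clock = std::chrono::steady_clock;
    const auto deadline = clock::now() + std::chrono::minutes(5);
    bool eventFired = false;

    while (true) // the program's normal main loop
    {
        // ... handle user input / do the regular per-iteration work here ...

        if (!eventFired && clock::now() >= deadline)
        {
            DoEvent();        // runs on the main thread, so no race conditions
            eventFired = true;
        }
    }
}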
With C++11 threads, this would work like this:
int main()
{
    MyClass myClass;
    std::thread ti([](MyClass &m){ m.TimerFunc(300); }, std::ref(myClass)); // create and launch the thread

    // ... code executed concurrently to the threaded code ...

    ti.join(); // wait for the thread to end (or you'll crash !!)
}
Add a private member to your class:
std::atomic<bool> run = true; // designed to avoid race issues with concurrent access
Update its timer function to loop while this variable is true:
void MyClass::TimerFunc(int MSeconds)
{
    while (run) {
        std::this_thread::sleep_for(std::chrono::milliseconds(MSeconds)); // standard sleep instead of the Windows-specific Sleep()
        //Event code
    }
    return;
}
Provide a member function in the class to stop the threaded loop:

void Stop() {
    run = false;
}
Finally, update main() to call myClass.Stop() when the timer loop is no longer needed (i.e. before calling ti.join()).
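Putting those pieces together, main() might look roughly like this (assuming the MyClass members sketched above):

int main()
{
    MyClass myClass;
    std::thread ti([](MyClass &m){ m.TimerFunc(300); }, std::ref(myClass)); // launch the timer loop

    // ... code executed concurrently to the threaded code ...

    myClass.Stop(); // ask the loop to finish after its current iteration
    ti.join();      // wait for the thread to end before main() returns
}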
EDIT: attention, a nasty error to avoid: be careful to pass std::ref(myClass) in the thread constructor. If you forget this, the thread ti will work on a reference to a copy of myClass instead of the original object.
I previously inquired about synchronizing two threads without using pthread_join and I was able to resolve it using pthread_cond_wait and pthread_cond_signal.
I've written a small struct to bundle this functionality into a single place:
struct ConditionWait
{
    int i_ConditionPredicate;
    pthread_mutex_t lock_Var;
    pthread_cond_t cond_Var;
    int i_ValidResult;

    ConditionWait()
    {
        pthread_mutex_init(&lock_Var, NULL);
        pthread_cond_init(&cond_Var, NULL);
        i_ValidResult = 1;
        i_ConditionPredicate = 0;
    }

    void Signal()
    {
        pthread_mutex_lock(&lock_Var);
        i_ConditionPredicate = i_ValidResult;
        pthread_cond_signal(&cond_Var);
        pthread_mutex_unlock(&lock_Var);
    }

    void Wait()
    {
        pthread_mutex_lock(&lock_Var);
        while(i_ConditionPredicate != i_ValidResult)
        {
            pthread_cond_wait(&cond_Var, &lock_Var);
        }
        pthread_mutex_unlock(&lock_Var);
    }
};
Assuming that I call Wait() and Signal() from two different threads, would this be thread safe? Would taking the same lock in two member functions of the same object cause deadlocks or race conditions?
Edit: I'm using this now in my program and it works fine. I'm not too sure whether it's just luck
This will only work once: after you wake up the waiting thread, subsequent attempts to wait will all succeed immediately and never block, since you never "reset" the condition predicate. If this is what you want (or it doesn't matter in your situation), then yes, this is safe and is how condition variables are typically used.
PS: You should also call pthread_mutex_destroy() and pthread_cond_destroy() in the destructor of this thing.
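A sketch of that cleanup, written as members to add inside struct ConditionWait (the optional Reset() is an addition for the case where you do want the predicate re-armed; it is not part of the answer above):

// Destructor releasing the pthread objects, as suggested above.
~ConditionWait()
{
    pthread_cond_destroy(&cond_Var);
    pthread_mutex_destroy(&lock_Var);
}

// Optional: re-arm the predicate so a later Wait() blocks again after a Signal().
void Reset()
{
    pthread_mutex_lock(&lock_Var);
    i_ConditionPredicate = 0;
    pthread_mutex_unlock(&lock_Var);
}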