Terminate loop in Thread on Member Change within Class - c++

I have a class that runs a loop on a separate thread and I want it to break when I change the value of a member to false:
#include <iostream>
#include <thread>
#include <future>
#include <chrono>
#include <functional>
#include <atomic>
#include <mutex>

class A
{
public:
    void ChangeLoop(){
        loop = !loop;
        if(loop){
            std::future<void> fi = std::async(std::launch::async, &A::RunLoop, this, std::ref(loop));
        }
    }

    void RunLoop(std::atomic<bool> &loop_ref){
        while(loop_ref){
            emit(loop_ref);
            std::this_thread::sleep_for(std::chrono::milliseconds(50));
        }
    }

private:
    std::atomic<bool> loop {false};
    std::mutex emit_mutex;

    template<class...Ts> void emit(Ts&&...ts){
        auto lock = std::unique_lock<std::mutex>(emit_mutex);
        using expand = int[];
        void(expand{
            0,
            ((std::cout << ts << "\n"), 0)...
        });
    }
};

int main(){
    A a;
    a.ChangeLoop();
    std::this_thread::sleep_for(std::chrono::seconds(2));
    a.ChangeLoop();
    return 0;
}
When I change loop to false, the thread does not break as I would expect. Alternatively, I tried to have the threaded function look at the member variable without taking any arguments, but had the same issue:
#include <iostream>
#include <thread>
#include <future>
#include <chrono>
#include <functional>
#include <atomic>
#include <mutex>

class A
{
public:
    void ChangeLoop(){
        loop = !loop;
        if(loop){
            std::future<void> fi = std::async(std::launch::async, &A::RunLoop, this);
        }
    }

    void RunLoop(){
        while(loop){
            emit(loop);
            std::this_thread::sleep_for(std::chrono::milliseconds(50));
        }
    }

private:
    std::atomic<bool> loop {false};
    std::mutex emit_mutex;

    template<class...Ts> void emit(Ts&&...ts){
        auto lock = std::unique_lock<std::mutex>(emit_mutex);
        using expand = int[];
        void(expand{
            0,
            ((std::cout << ts << "\n"), 0)...
        });
    }
};

int main(){
    A a;
    a.ChangeLoop();
    std::this_thread::sleep_for(std::chrono::seconds(2));
    a.ChangeLoop();
    return 0;
}
How can I thread my RunLoop function separately, and have it break when I change the member variable loop?

As you have already found the working solution to your problem, I just want to point out why std::async is not the right choice here.
From the online reference on std::async:
If the std::future obtained from std::async is not moved from or bound to a reference, the destructor of the std::future will block at the end of the full expression until the asynchronous operation completes.
So what you have here is the destructor of the std::future blocking, because the thread running RunLoop never completes execution thanks to its while loop.
This is true even when the return value from std::async is ignored in ChangeLoop and not assigned to a std::future.
Some C++ experts say that an std::future produced by std::async should not block. Here is an article by Herb Sutter where he argues this.
But for now the solution proposed in the comment (launching a plain std::thread and calling detach() on it) is the way to go.
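For reference, here is a minimal sketch of that approach; the exact shape is up to you, the key point is that ChangeLoop no longer leaves a blocking std::future behind:
#include <iostream>
#include <thread>
#include <chrono>
#include <atomic>

class A
{
public:
    void ChangeLoop(){
        loop = !loop;
        if(loop){
            // detach so ChangeLoop returns immediately; no std::future
            // destructor is left behind to block on RunLoop
            std::thread(&A::RunLoop, this).detach();
        }
    }

    void RunLoop(){
        while(loop){
            std::cout << loop << "\n";
            std::this_thread::sleep_for(std::chrono::milliseconds(50));
        }
    }

private:
    std::atomic<bool> loop {false};
};

int main(){
    A a;
    a.ChangeLoop();
    std::this_thread::sleep_for(std::chrono::seconds(2));
    a.ChangeLoop();   // loop becomes false, so RunLoop's while condition fails
    // give the detached thread a moment to observe the flag before a is destroyed
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    return 0;
}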

Related

Is it safe to pass a lambda function that goes out of scope to a std::thread

Considering the following code:
#include <iostream>
#include <thread>
#include <chrono>

int main()
{
    std::thread t;
    {
        auto my_lambda = []{
            int idx = 0;
            while (true) {
                std::this_thread::sleep_for (std::chrono::seconds(1));
                std::cout << idx ++ << std::endl;
            }
        };
        t = std::thread(my_lambda);
    }
    t.join();
    return 0;
}
Is it safe that the thread runs a lambda function that goes out of scope?
I saw that the constructor of std::thread takes a universal reference for the input function Function&& f and that lambdas are translated into structs. So if the instance of the struct is instantiated inside the scope, the thread will be running the operator() of a dangling reference.
{
    struct lambda_translated { void operator()(){ ... } };
    lambda_translated instance;
    t = std::thread(instance);
}
However I'm not sure that my reasoning is correct.
Side question: does the behavior change if I declare the lambda as an R-value inside the std::thread constructor:
#include <iostream>
#include <thread>
#include <chrono>

int main()
{
    std::thread t;
    {
        t = std::thread([]{
            int idx = 0;
            while (true) {
                std::this_thread::sleep_for (std::chrono::seconds(1));
                std::cout << idx ++ << std::endl;
            }
        });
    }
    t.join();
    return 0;
}
As a summary of the comments:
The lambda is copied (or moved if declared in-place), so you won't have problems.
You do have to worry about the captures: do not capture by reference any object that can go out of scope, and be careful when you pass objects that can be deleted during thread execution (even if copied, think of a raw pointer to an object).
As an extension, the same applies if you use std::bind to pass a method and the object goes out of scope or is deleted.
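A contrived example of the capture pitfall (the variable names here are just for illustration):
#include <iostream>
#include <string>
#include <thread>

int main()
{
    std::thread t;
    {
        std::string msg = "hello";

        // BAD: msg is captured by reference and destroyed at the end of this
        // scope, so the thread may read through a dangling reference
        // t = std::thread([&msg]{ std::cout << msg << std::endl; });

        // OK: msg is captured by value; the lambda (moved into the thread)
        // owns its own copy of the string
        t = std::thread([msg]{ std::cout << msg << std::endl; });
    }
    t.join();
    return 0;
}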

How can I check if thread is done, when using thread::detach

I am trying to make a kind of "running check" to avoid running one function multiple times at once; it is for another project of mine. I have to use while() and detach(). The problem is that I don't really know how to check whether the thread is joinable(), because when I don't do this, this error comes out: Unhandled exception at 0x7632A842 in dasd.exe: Microsoft C++ exception: std::system_error at memory location 0x009BF614. But when I use the code below I get no errors, yet the loop won't work:
#include <future>
#include <thread>
#include <chrono>
#include <iostream>

using namespace std::chrono_literals;

void Thing()
{
    std::this_thread::sleep_for(3s);
    std::cout << "done\n";
}

int main()
{
    std::packaged_task<void()> task(Thing);
    auto future = task.get_future();
    std::thread ac(std::move(task));
    while (true)
    {
        std::cout << ac.joinable() << std::endl;
        if (future.wait_for(1ms) == std::future_status::ready && ac.joinable())
        {
            ac.detach();
            std::cout << "good\n";
        }
        std::this_thread::sleep_for(1s);
    }
}
the output is:
1
1
1
done
1
good
0
0
.......
The question is: how can I make the loop work without errors? I have been trying for a long time, and I think it is about something I just don't know...
Thank you in advance
Don't detach().
People use detach() far, far too often.
It should only be used in relatively rare circumstances. A thread running after the end of main is not a good idea, and without formal synchronization with the end of the thread, preventing that is basically impossible.
There are two ways to do this with a detach()ed thread -- the _at_thread_exit methods of std::promise, or using OS-specific APIs.
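For example, a rough sketch of the std::promise route (illustrative only, not a drop-in for your code; it assumes C++14 for the init-capture):
#include <future>
#include <iostream>
#include <thread>

int main()
{
    std::promise<void> done;
    std::future<void> done_future = done.get_future();

    std::thread([p = std::move(done)]() mutable {
        // ... do the actual work here ...
        // the shared state is made ready only after the thread has really
        // finished (all thread-local destructors have run)
        p.set_value_at_thread_exit();
    }).detach();

    done_future.wait();   // formal synchronization with the detached thread
    std::cout << "worker finished\n";
    return 0;
}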
A thread pool might be what you want.
#include <condition_variable>
#include <deque>
#include <future>
#include <mutex>
#include <optional>
#include <thread>
#include <type_traits>
#include <vector>

template<class T>
struct threadsafe_queue {
    std::optional<T> try_pop();
    T wait_and_pop();
    void push(T);
    std::deque<T> pop_all();
private:
    mutable std::mutex m;
    std::condition_variable cv;
    std::deque<T> data;
};

struct thread_pool {
    explicit thread_pool( std::size_t number_of_threads );
    std::size_t thread_count() const;
    void add_thread(std::size_t n=1);
    void abort_all_tasks_and_threads();
    void wait_for_empty_queue();
    ~thread_pool();

    template<class F>
    std::future<std::invoke_result_t<F>> add_task( F f );
private:
    using task = std::packaged_task<void()>; // or something custom
    std::vector<std::thread> threads;
    threadsafe_queue< task > tasks;
};
something vaguely like that.
Then make a 1-thread thread pool, and shove tasks into that.
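Hypothetical usage of such a pool, matching your original polling loop (the thread_pool above is only an interface sketch, so this is too):
int main()
{
    thread_pool pool(1);   // a single worker thread
    std::future<void> f = pool.add_task(Thing);

    // poll without joinable()/detach(); the pool owns the thread
    while (f.wait_for(std::chrono::milliseconds(1)) != std::future_status::ready)
    {
        std::cout << "still running\n";
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
    std::cout << "good\n";
}   // ~thread_pool() joins its worker, so nothing outlives main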

Condition variable should be used or not to reduce missed wakeups

I have two threads, one is the producer and the other is the consumer. My consumer is always late (due to some costly function call, simulated in the code below using sleeps), so I have used a ring buffer as I can afford to lose some events.
Questions:
I am wondering if it would be better to use a condition variable instead of what I currently have: continuous monitoring of the ring buffer size to see if events got generated. I know that the current while loop checking the ring buffer size is expensive, so I could probably add some yield calls to relax the tight loop. I want to reduce the chances of dropped events.
Can I get rid of the pointers? In my current code I am passing pointers to my ring buffer from the main function to the threads. I am wondering if there is any fancier or better way to do the same.
#include <iostream>
#include <thread>
#include <chrono>
#include <vector>
#include <atomic>
#include <mutex>
#include <boost/circular_buffer.hpp>
#include <condition_variable>
#include <functional>

std::atomic<bool> mRunning;
std::mutex m_mutex;
std::condition_variable m_condVar;
long int data = 0;

class Detacher {
public:
    template<typename Function, typename ... Args>
    void createTask(Function &&func, Args&& ... args) {
        m_threads.emplace_back(std::forward<Function>(func), std::forward<Args>(args)...);
    }

    Detacher() = default;
    Detacher(const Detacher&) = delete;
    Detacher & operator=(const Detacher&) = delete;
    Detacher(Detacher&&) = default;
    Detacher& operator=(Detacher&&) = default;

    ~Detacher() {
        for (auto& thread : m_threads) {
            thread.join();
        }
    }
private:
    std::vector<std::thread> m_threads;
};

void foo_1(boost::circular_buffer<int> *cb)
{
    while (mRunning) {
        std::unique_lock<std::mutex> mlock(m_mutex);
        if (!cb->size())
            continue;
        int data = cb[0][0];
        cb->pop_front();
        mlock.unlock();
        if (!mRunning) {
            break;
        }
        //simulate time consuming function call
        std::this_thread::sleep_for(std::chrono::milliseconds(16));
    }
}

void foo_2(boost::circular_buffer<int> *cb)
{
    while (mRunning) {
        std::unique_lock<std::mutex> mlock(m_mutex);
        cb->push_back(data);
        data++;
        mlock.unlock();
        //simulate time consuming function call
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
}

int main()
{
    mRunning = true;
    boost::circular_buffer<int> cb(100);
    Detacher thread_1;
    thread_1.createTask(foo_1, &cb);
    Detacher thread_2;
    thread_2.createTask(foo_2, &cb);
    std::this_thread::sleep_for(std::chrono::milliseconds(20000));
    mRunning = false;
}
The producer is faster (16x) than the consumer, so ~93% of all events will always be discarded.
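To the first question: a condition variable would indeed remove the spinning. A rough sketch of what the consumer could look like with the same globals (this assumes the producer calls m_condVar.notify_one() after each push_back, and that main locks m_mutex, clears mRunning, and then calls m_condVar.notify_all() when shutting down):
void foo_1(boost::circular_buffer<int> *cb)
{
    while (true) {
        std::unique_lock<std::mutex> mlock(m_mutex);
        // sleep until there is data or we are asked to stop; no busy-wait
        m_condVar.wait(mlock, [&]{ return !cb->empty() || !mRunning; });
        if (!mRunning && cb->empty())
            break;
        int value = cb->front();   // consume one event
        cb->pop_front();
        mlock.unlock();
        //simulate time consuming function call
        std::this_thread::sleep_for(std::chrono::milliseconds(16));
    }
}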

Modify shared state and notify std::condition_variable if std::mutex::lock throws

I encountered some problem and I'm not sure how to deal with it.
#include <iostream>
#include <thread>
#include <mutex>
#include <condition_variable>
#include <chrono>

std::condition_variable CV;
std::mutex m;
std::size_t i{0};

void set_value() try
{
    std::this_thread::sleep_for(std::chrono::seconds{2});
    {
        std::lock_guard<std::mutex> lock{m};
        i = 20;
    }
    CV.notify_one();
}
catch(...){
    //what to do?
}

int main()
{
    std::thread t{set_value};
    t.detach();

    std::unique_lock<std::mutex> lock{m};
    CV.wait(lock, []{ return i != 0; });
    std::cout << "i has changed to " << i << std::endl;
}
This of course works fine, but how should I handle the case when the std::lock_guard constructor (i.e. std::mutex::lock) throws an exception?
At first I was thinking of creating a global std::atomic<bool> mutex_lock_throwed{ false }; that I could set to true inside the catch block. Then I could notify_one():
catch(...){
    mutex_lock_throwed.store(true);
    CV.notify_one();
}
and change the predicate for the wait function to
[]{ return i != 0 || mutex_lock_throwed.load(); }
This actually worked very well, but then I read this on cppreference:
Even if the shared variable is atomic, it must be modified under the mutex in order to correctly publish the modification to the waiting thread.
As you can see, that is not possible if the mutex throws. So what would be the correct way to handle this?

Threading inside class with atomics and mutex c++

I wrote this sample program to mimic what I'm trying to do in a larger program.
I have some data that will come from the user and be passed into a thread for some processing. I am using mutexes around the data and atomic flags to signal when there is data.
Using the lambda expression, is a pointer to *this sent to the thread? I seem to be getting the behavior I expect in the cout statement.
Are the mutexes used properly around the data?
Is putting the atomics and mutexes as a private member of the class a good move?
FooClass.h
#pragma once
#include <atomic>
#include <thread>
#include <vector>
#include <mutex>

class Foo
{
public:
    Foo();
    ~Foo();
    void StartThread();
    void StopThread();
    void SendData();
private:
    std::atomic<bool> dataFlag;
    std::atomic<bool> runBar;
    void bar();
    std::thread t1;
    std::vector<int> data;
    std::mutex mx;
};
FooClass.cpp
#include "FooClass.h"
#include <thread>
#include <string>
#include <iostream>
Foo::Foo()
{
dataFlag = false;
}
Foo::~Foo()
{
StopThread();
}
void Foo::StartThread()
{
runBar = true;
t1 = std::thread([=] {bar(); });
return;
}
void Foo::StopThread()
{
runBar = false;
if(t1.joinable())
t1.join();
return;
}
void Foo::SendData()
{
mx.lock();
for (int i = 0; i < 5; ++i) {
data.push_back(i);
}
mx.unlock();
dataFlag = true;
}
void Foo::bar()
{
while (runBar)
{
if(dataFlag)
{
mx.lock();
for(auto it = data.begin(); it < data.end(); ++it)
{
std::cout << *it << '\n';
}
mx.unlock();
dataFlag = false;
}
}
}
main.cpp
#include "FooClass.h"
#include <iostream>
#include <string>
int main()
{
Foo foo1;
std::cout << "Type anything to end thread" << std::endl;
foo1.StartThread();
foo1.SendData();
// type something to end threads
char a;
std::cin >> a;
foo1.StopThread();
return 0;
}
You ensure that the thread is joined using RAII techniques? Check.
All data access/modification is either protected through atomics or mutexes? Check.
Mutex locking uses std::lock_guard? Nope. Using std::lock_guard wraps your lock() and unlock() calls with RAII. This ensures that even if an exception occurs while the lock is held, the lock is released.
Is putting the atomics and mutexes as a private member of the class a good move?
It's neither good nor bad, but in this scenario, where Foo is a wrapper for a std::thread that does work and controls the synchronization, it makes sense.
Using the lambda expression, is a pointer to *this send to the thread?
Yes, you can also do t1 = std::thread([this]{bar();}); to make it more explicit.
As it stands, with your dataFlag assignments made after the locks are released, you may encounter problems. If you call SendData twice, such that bar processes the first call but is halted before setting dataFlag = false, the second call adds its data and sets the flag to true, only to have bar set it back to false. Then you'll have data that has been "sent" but bar doesn't think there's anything to process.
There may be other tricky situations, but this was just one example; moving the flag update into the lock clears up that problem.
For example, your SendData should look like:
void Foo::SendData()
{
    std::lock_guard<std::mutex> guard(mx);
    for (int i = 0; i < 5; ++i) {
        data.push_back(i);
    }
    dataFlag = true;
}
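and bar() would then test and clear dataFlag while holding the same lock (again just a sketch of the idea; clearing data is an extra assumption so the same elements are not printed twice):
void Foo::bar()
{
    while (runBar)
    {
        if (dataFlag)
        {
            std::lock_guard<std::mutex> guard(mx);
            for (auto it = data.begin(); it != data.end(); ++it)
            {
                std::cout << *it << '\n';
            }
            data.clear();       // assumption: consume the data once printed
            dataFlag = false;   // cleared under the same lock SendData holds
        }
    }
}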