C++ background thread in class - instance scope

I have this simple class:
struct Foo {
    void Run() {
        this->bgLoader = std::thread([this]() mutable {
            //do something
            this->onFinish_Thread();
        });
    }

    std::function<void()> onFinish_Thread;
    std::thread bgLoader;
};
It is called from a C API:
void CApiRunFoo(){
    Foo foo;
    foo.onFinish_Thread = []() {
        //do something at thread end
    };
    foo.Run();
}
I want to run CApiRunFoo and return from it, but keep the thread running until it is finished.
Now, the problem is that once CApiRunFoo ends, foo goes out of scope even if the background thread is still running. If I create foo via new instead, it will run, but it will cause a memory leak.
I was thinking to create destructor with:
~Foo(){
    if (bgLoader.joinable()){
        bgLoader.join();
    }
}
but I am not sure whether it can cause a deadlock, and it probably won't let CApiRunFoo return until the thread finishes.
Is there any solution/design pattern to this problem?

You could return the Foo instance to the C program:
struct Foo {
    ~Foo() {
        if (bgLoader.joinable()) {
            run = false;
            bgLoader.join();
        }
    }

    void Run() {
        run = true;
        this->bgLoader = std::thread([this]() mutable {
            while(run) {
                // do stuff
            }
            this->onFinish_Thread();
        });
    }

    std::atomic<bool> run;
    std::function<void()> onFinish_Thread;
    std::thread bgLoader;
};
The C interface:
extern "C" {
struct foo_t {
void* instance;
};
foo_t CApiRunFoo() {
Foo* ptr = new Foo;
ptr->onFinish_Thread = []() {
std::cout << "done\n";
};
ptr->Run();
return foo_t{ptr};
}
void CApiDestroyFoo(foo_t x) {
auto ptr = static_cast<Foo*>(x.instance);
delete ptr;
}
}
And a C program:
int main() {
    foo_t x = CApiRunFoo();
    CApiDestroyFoo(x);
}
As it seems you'd like the Foo objects to automatically self destruct when the thread finishes, you could run them detached and let them delete this; when done.
#include <atomic>
#include <condition_variable>
#include <cstdint>
#include <iostream>
#include <functional>
#include <mutex>
#include <thread>

// Counting detached threads and making sure they are all finished before
// exiting the destructor. Used as a `static` member of `Foo`.
struct InstanceCounter {
    ~InstanceCounter() {
        run = false;
        std::unique_lock lock(mtx);
        std::cout << "waiting for " << counter << std::endl;
        while(counter) cv.wait(lock);
        std::cout << "all done" << std::endl;
    }

    void operator++() {
        std::lock_guard lock(mtx);
        std::cout << "inc: " << ++counter << std::endl;
    }

    void operator--() {
        std::lock_guard lock(mtx);
        std::cout << "dec: " << --counter << std::endl;
        cv.notify_one(); // if the destructor is waiting
    }

    std::atomic<bool> run{true};
    std::mutex mtx;
    std::condition_variable cv;
    unsigned counter = 0;
};

struct Foo {
    bool Run() {
        try {
            ++ic; // increase number of threads in static counter
            bgLoader = std::thread([this]() mutable {
                while(ic.run) {
                    // do stuff
                }
                // if onFinish_Thread may throw - you may want to try-catch:
                onFinish_Thread();

                --ic;        // decrease number of threads in static counter
                delete this; // self destruct
            });
            bgLoader.detach();
            return true; // thread started successfully
        }
        catch(const std::system_error& ex) {
            // may actually happen if the system runs out of resources
            --ic;
            std::cout << ex.what() << ": ";
            delete this;
            return false; // thread not started
        }
    }

    std::function<void()> onFinish_Thread;

private:
    ~Foo() { // private: Only allowed to self destruct
        std::cout << "deleting myself" << std::endl;
    }

    std::thread bgLoader;
    static InstanceCounter ic;
};

InstanceCounter Foo::ic{};
Now the C interface becomes more like what you had in the question.
#include <stdbool.h>

extern "C" {
    bool CApiRunFoo() {
        Foo* ptr = new Foo;
        ptr->onFinish_Thread = []() {
            std::cout << "done" << std::endl;
        };
        return ptr->Run();
        // it looks like `ptr` is leaked here, but it self destructs later
    }
}

Your program should call join and finish the new thread at some point in the future (see also this related question and its answer). To do that, it should hold a reference (in a wide sense) to the thread object. In your current design, your foo is such a reference, so you are not allowed to lose it.
You should think about a place in your code where it makes sense to call join. That same place should hold your foo. If you do that, there is no problem, because foo also contains the onFinish_Thread object.
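For example, a minimal sketch of that idea, assuming CApiRunFoo is called only once; FooHolder and the static holder object are names introduced here purely for illustration:
// keeps foo alive after CApiRunFoo returns and joins the thread when the holder is destroyed
struct FooHolder {
    Foo foo;
    ~FooHolder() {
        if (foo.bgLoader.joinable())
            foo.bgLoader.join();   // CApiRunFoo returned long ago; we only block here
    }
};

static FooHolder holder;           // lives until the library/program shuts down

void CApiRunFoo() {
    holder.foo.onFinish_Thread = []() {
        // do something at thread end
    };
    holder.foo.Run();              // returns immediately, the thread keeps running
}
Any other long-lived owner (a registry keyed by a handle, as in the first answer) works the same way; the essential point is that whoever owns foo is also the one who eventually calls join.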

Related

How to start a variable number of threads in C++?

I am searching for a way to start multiple threads whose exact number can only be determined at runtime. The threads are not dependent on each other, so it's a fire-and-forget kind of problem.
The threads do need some context which is stored as internal variables of a class (Foo). Some of these variables are references. The class also holds a method that should be executed as the thread function (bar).
#include <iostream>
#include <string>
#include <vector>
#include <thread>

class Foo
{
public:
    Foo(int a){
        std::cout << "Created" << std::endl;
        m_a = new int(a);
    }

    ~Foo(){
        std::cout << "Destroyed" << std::endl;
        delete m_a;
    }

    void bar() {
        std::cout << "Internal var: " << *m_a << std::endl;
    }

private:
    int* m_a;
};

int main() {
    for(int i = 0; i < 5; i++) {
        std::thread t(&Foo::bar, std::ref(Foo(i)));
        // the threads will be joined at a later point, this is for demo purposes
    }
    return 0;
}
I get a compile error at this point:
error: use of deleted function ‘void std::ref(const _Tp&&) [with _Tp = Foo]’
I understand that this error is caused by the temporary nature of the object created in the for loop. But if I remove the std::ref call, I get a crash at runtime: double free or corruption (fasttop)
I am sure that there must be a way of doing this, but I am unaware of it. I would expect output like this (probably in this order, but not guaranteed):
Created
Internal var: 0
Destroyed
Created
Internal var: 1
Destroyed
...
Thanks!
Problem 1: Foo is missing a copy/move constructor. See The rule of three/five/zero.
Add a copy constructor:
Foo(Foo const& that) : m_a(new int(*that.m_a)) {}
And/or a move constructor:
Foo(Foo && that) : m_a(that.m_a) { that.m_a = nullptr; }
Problem 2: Foo(i) is a temporary instance of Foo; it lives only until the end of the full-expression (the ;).
std::thread t(&Foo::bar, std::ref(Foo(i)));
// ^
// Foo(i) is dead at this point while the thread is starting!
You want it to live longer than that, in order to be usable inside the thread.
For example, like this (also answers your question about creating threads in a loop):
int main() {
    std::vector<Foo> inputs;
    std::vector<std::thread> threads;

    // reserve up front so later emplace_back calls never reallocate, which would
    // invalidate the pointers already handed to running threads
    inputs.reserve(5);

    for(int i = 0; i < 5; i++) {
        inputs.emplace_back(i);
        threads.emplace_back(&Foo::bar, &inputs.back());
    }

    for (auto& t : threads) {
        t.join();
    }
}
Note: std::ref(Foo(i)) doesn't compile because it has protection against returning references to temporaries (precisely to prevent issues like these).
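For completeness, std::ref does work once the referenced object is a named variable that outlives the thread; a minimal sketch using the Foo from the question:
int main() {
    Foo f(42);                              // named object, not a temporary
    std::thread t(&Foo::bar, std::ref(f));  // the thread uses f by reference, no copy
    t.join();                               // join before f goes out of scope
    return 0;
}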
Here is a minimally fixed version of your code:
it adds a move ctor to the Foo class (and explicitly deletes the copy ctor)
it moves the threads into a vector
it joins the threads
Code:
#include <string>
#include <vector>
#include <thread>
#include <iostream>

class Foo
{
public:
    Foo(int a) {
        std::cout << "Created" << std::endl;
        m_a = new int(a);
    }

    ~Foo() {
        if (m_a != NULL) {
            std::cout << "Destroyed" << std::endl;
            delete m_a;
        }
    }

    Foo(const Foo& other) = delete; //not used here

    Foo(Foo&& other) {
        std::cout << "Move ctor" << '\n';
        m_a = other.m_a;
        other.m_a = nullptr;
    }

    void bar() {
        std::cout << "Internal var: " << *m_a << std::endl;
    }

private:
    int* m_a;
};

int main() {
    std::vector<std::thread> vec;

    for (int i = 0; i < 5; i++) {
        std::thread t(&Foo::bar, Foo(i));
        vec.push_back(std::move(t));
    }

    for (auto& t : vec) {
        t.join();
    }

    return 0;
}
The chief design failure seems to be that t is a variable inside the loop. That means it is destroyed at the end of each iteration - you never have 5 std::thread instances at the same time. Also, you fail to call join on those threads.
The std::ref apparently hides this problem and replaces it with another one, but your original thread creation was correct: std::thread t(&Foo::bar, Foo(i)).
You probably want a std::list<std::thread>, and std::list::emplace_back to create a variable number of threads. std::list<std::thread> also allows you to remove threads from the list in any order, as sketched below.
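A minimal sketch of that suggestion, reusing the movable Foo from the previous answer and a hard-coded count of 5 for brevity (any runtime count works the same way):
#include <list>
#include <thread>

int main() {
    std::list<std::thread> threads;

    for (int i = 0; i < 5; i++) {
        threads.emplace_back(&Foo::bar, Foo(i));  // Foo(i) is moved into the thread
    }

    // threads can be joined and removed from the list in any order;
    // here simply front to back
    while (!threads.empty()) {
        threads.front().join();
        threads.pop_front();
    }
    return 0;
}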

Pass class method as void function pointer (C++11)

I have an object which needs to interface with an existing C API to register an interrupt (a void function taking no arguments). I can attach the interrupt to the free function function(). However, I want to be able to pass arguments to the function, but that would change the function signature. I thought a way around that would be to create an object to store the parameters (and modify them as necessary), and then pass in a method (or something similar). However, I haven't been able to figure out how to do that.
I've tried passing in a lambda as [=](){ std::cout << "a: " << a << "\n"; }, but it turns out lambdas with a capture can't be converted to function pointers. I've also tried a templated method (since it would get instantiated at compile time), but couldn't get it to work. I've seen some posts on SO talking about std::bind and std::function, but they often warn about virtual function overhead, which I'd like to avoid on an embedded platform for an ISR.
What is the best way to convert a parameterized function to a void(*)()?
#include <iostream>

void function() {
    std::cout << "Hello World!\n";
}

void attach_interrupt(void(*fn)()) {
    fn();
}

class A {
    int a;
public:
    A(int a) : a(a) {
        attach_interrupt(function); // This works as expected
        // attach_interrupt(method); // How do I make this work?
        // attach_interrupt(method2<a>);
    }

    void method() {
        // something requiring a and b
        std::cout << "a: " << a << "\n";
    }

    template<int a>
    void method2() {
        std::cout << "a: " << a << "\n";
    }
};

int main()
{
    const int PIN_1 = 0;
    const int PIN_2 = 1;
    const int PIN_3 = 2;
    A foo(PIN_1);
    A bar(PIN_2);
    A baz(PIN_3);
    return 0;
}
EDIT: My solution, inspired by the selected answer:
#include <iostream>

void attach_interrupt(int pin, void(*fn)()) {
    fn();
}

// Free function, which works as an ISR
template <unsigned int IRQ, unsigned int IRQ2>
static void irqHandler()
{
    std::cout << "IRQ: " << IRQ << "\n";
    std::cout << "IRQ2: " << IRQ2 << "\n";
};

template <unsigned int PIN_1, unsigned int PIN_2>
class Handler {
private:
public:
    Handler() {
        void(*irq)() = &irqHandler<PIN_1, PIN_2>;
        attach_interrupt(0, irq);
        attach_interrupt(0, &handler_2);
    }

    // static member function can have its address taken, also works as ISR
    static void handler_2() {
        std::cout << "PIN_1: " << PIN_1 << "\n";
        std::cout << "PIN_2: " << PIN_2 << "\n";
    }
};

Handler<1, 2> a;
Handler<2, 3> b;

int main()
{
    return 0;
}
So you want to register one and the same interrupt handler for different interrupts, each with data of the same kind but its own individual values...
What about a free-standing template function with static data?
template <unsigned int IRQ>
void irqHandler()
{
    static A a(IRQ);
    a.doSomething();
};

void(*interruptVectorTable[12])() =
{
    // ...
    &irqHandler<7>,
    // ...
    &irqHandler<10>,
};
Well, here is a convoluted way to do this. It requires some boilerplate code, so I wrapped that up in a couple of MACROS (yuck). For C++11 the locking is somewhat limited (read: less efficient), but that can be improved upon if you have access to C++14 or above:
// ## Header Library Code
#include <cstddef>
#include <iostream>
#include <mutex>
#include <string>
#include <vector>

namespace static_dispatch {

// std::unique_lock is movable, so it can be returned from these helpers
// (std::lock_guard cannot be returned before C++17)
inline std::mutex& mutex()
    { static std::mutex mtx; return mtx; }

inline std::unique_lock<std::mutex> lock_for_reading()
    { return std::unique_lock<std::mutex>(mutex()); }

inline std::unique_lock<std::mutex> lock_for_updates()
    { return std::unique_lock<std::mutex>(mutex()); }

inline std::vector<void*>& cbdb()
{
    static std::vector<void*> vps;
    return vps;
}

inline void register_cb(void(*cb)(), void* user_data)
{
    auto lock = lock_for_updates();
    cbdb().push_back(user_data);
    cb(); // assign id under lock
}

inline void* retreive_cb(std::size_t id)
{
    auto lock = lock_for_reading();
    return cbdb()[id];
}

} // namespace static_dispatch

#define CALLBACK_BOILERPLATE(id) \
    static auto id = std::size_t(-1); \
    if(id == std::size_t(-1)) { id = static_dispatch::cbdb().size() - 1; return; }

#define CALLBACK_RETREIVE_DATA(id, T) \
    reinterpret_cast<T*>(static_dispatch::retreive_cb(id))

// ## Application Code
class A
{
public:
    void member_callback_1() const
    {
        std::cout << s << '\n';
    }

private:
    std::string s = "hello";
};

void callback_1()
{
    CALLBACK_BOILERPLATE(id);

    auto a = CALLBACK_RETREIVE_DATA(id, A);
    a->member_callback_1();
}

// The framework that you need to register your
// callbacks with
void framework_register(void(*cb)()) { cb(); }

int main()
{
    A a;

    // register callback with data structure
    static_dispatch::register_cb(&callback_1, &a);

    // Now register callback with framework because subsequent calls
    // will invoke the real callback.
    framework_register(&callback_1);

    // etc...
}
As noted above, if you have C++14 you can replace the mutex and locking code with these more efficient functions:
inline std::shared_timed_mutex& mutex()
    { static std::shared_timed_mutex mtx; return mtx; }

inline std::shared_lock<std::shared_timed_mutex> lock_for_reading()
    { return std::shared_lock<std::shared_timed_mutex>(mutex()); }

inline std::unique_lock<std::shared_timed_mutex> lock_for_updates()
    { return std::unique_lock<std::shared_timed_mutex>(mutex()); }

use std::thread::id in a map to get thread-safe instances

The idea is to have an instance for each thread, so I create a new instance for every new thread::id like this:
struct doSomething{
    void test(int toto) {}
};

void test(int toto)
{
    static std::map<std::thread::id, doSomething *> maps;

    std::map<std::thread::id, doSomething *>::iterator it = maps.find(std::this_thread::get_id());
    if (it == maps.end())
    {
        // mutex.lock() ?
        maps[std::this_thread::get_id()] = new doSomething();
        it = maps.find(std::this_thread::get_id());
        // mutex.unlock() ?
    }
    it->second->test(toto);
}
Is it a good idea?
Having a mutex lock after you've accessed the map would not be enough. You can't go anywhere near the map without a mutex because another thread might take the mutex to modify the map while you are reading from it.
{
    std::unique_lock<std::mutex> lock(my_mutex);

    std::map<std::thread::id, doSomething *>::iterator it = maps.find(std::this_thread::get_id());
    if (it != maps.end())
        return it->second; // return the pointer stored for this thread, not the whole map entry

    auto ptr = std::make_unique<doSomething>();
    maps[std::this_thread::get_id()] = ptr.get();
    return ptr.release();
}
But unless you have some special/unique use case, this is an already-solved problem through thread-local storage, and since you have C++11 you have the thread_local storage specifier.
Note that I'm using a mutex here because cout is a shared resource and yield just to encourage a little more interleaving of the workflow.
#include <iostream>
#include <memory>
#include <thread>
#include <mutex>
static std::mutex cout_mutex;
struct CoutGuard : public std::unique_lock<std::mutex> {
CoutGuard() : unique_lock(cout_mutex) {}
};
struct doSomething {
void fn() {
CoutGuard guard;
std::cout << std::this_thread::get_id() << " running doSomething "
<< (void*)this << "\n";
}
};
thread_local std::unique_ptr<doSomething> tls_dsptr; // DoSomethingPoinTeR
void testFn() {
doSomething* dsp = tls_dsptr.get();
if (dsp == nullptr) {
tls_dsptr = std::make_unique<doSomething>();
dsp = tls_dsptr.get();
CoutGuard guard;
std::cout << std::this_thread::get_id() << " allocated "
<< (void*)dsp << "\n";
} else {
CoutGuard guard;
std::cout << std::this_thread::get_id() << " re-use\n";
}
dsp->fn();
std::this_thread::yield();
}
void thread_fn() {
testFn();
testFn();
testFn();
}
int main() {
std::thread t1(thread_fn);
std::thread t2(thread_fn);
t2.join();
t1.join();
}
Live demo: http://coliru.stacked-crooked.com/a/3dec7efcb0018549
g++ -std=c++14 -O2 -Wall -pedantic -pthread main.cpp && ./a.out
140551597459200 allocated 0x7fd4a80008e0
140551597459200 running doSomething 0x7fd4a80008e0
140551605851904 allocated 0x7fd4b00008e0
140551605851904 running doSomething 0x7fd4b00008e0
140551605851904 re-use
140551605851904 running doSomething 0x7fd4b00008e0
140551597459200 re-use
140551605851904 re-use
140551597459200 running doSomething 0x7fd4a80008e0
140551605851904 running doSomething 0x7fd4b00008e0
140551597459200 re-use
140551597459200 running doSomething 0x7fd4a80008e0
It's a little hard to spot but thread '9200 allocated ..4a80.. whereas thread '1904 allocated ..4b00..
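Stripped to its essence, the thread_local suggestion can be this small; a hedged sketch without the mutex and logging from the demo above:
#include <thread>

struct doSomething {
    void test(int toto) {}
};

void test(int toto)
{
    // exactly one instance per thread, created on first use in that thread
    thread_local doSomething instance;
    instance.test(toto);
}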
No, not a good idea.
std::map's methods themselves are not thread safe.
In order to really make it a "good idea", you must also make all access to your std::map thread-safe, by using a mutex, or an equivalent.
This includes not just the parts you have commented out, but also all other methods you're using, like find().
Everything that touches your std::map must be mutex-protected.
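A minimal sketch of what that means for the code in the question, assuming one mutex that guards every access to the map (the mutex is added here for illustration):
#include <map>
#include <mutex>
#include <thread>

struct doSomething {
    void test(int toto) {}
};

void test(int toto)
{
    static std::map<std::thread::id, doSomething*> maps;
    static std::mutex maps_mutex;                       // guards all access to maps

    doSomething* instance = nullptr;
    {
        std::lock_guard<std::mutex> lock(maps_mutex);   // held for find() and insertion alike
        auto it = maps.find(std::this_thread::get_id());
        if (it == maps.end())
            it = maps.emplace(std::this_thread::get_id(), new doSomething()).first;
        instance = it->second;
    }
    instance->test(toto);  // each thread only touches its own instance, so no lock needed here
}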

c++ how to use std::mutex and std::lock_guard with functor?

I am learning how to use std::thread in standard C++, and I can't solve one problem with std::mutex.
I am running 2 threads with simple functions that show a message in CMD. I want to use a std::mutex so that one thread will wait until the other thread stops using the buffer.
When I use the functions everything works fine, but with the functors I have a problem:
error C2280: 'std::mutex::mutex(const std::mutex &)' : attempting to reference a deleted function
What am I doing wrong?
#include <iostream>
#include <thread>
#include <mutex>

class thread_guard
{
private:
    std::thread m_thread;

public:
    thread_guard(std::thread t)
    {
        m_thread = std::move(t);
        if (!m_thread.joinable())
            std::cout << "Brak watku!!!" << std::endl;
    }

    ~thread_guard()
    {
        m_thread.join();
    }
};

class func
{
private:
    std::mutex mut;

public:
    void operator()()
    {
        for (int i = 0; i < 11000; i++)
        {
            std::lock_guard<std::mutex> guard(mut);
            std::cout << "watek dziala 1" << std::endl;
        }
    }
};

class func2
{
private:
    std::mutex mut;

public:
    void operator()()
    {
        for (int i = 0; i < 11000; i++)
        {
            std::lock_guard<std::mutex> guard(mut);
            std::cout << "watek dziala 2" << std::endl;
        }
    }
};

std::mutex mut2;

void fun()
{
    for (int i = 0; i < 11000; i++)
    {
        std::lock_guard<std::mutex> guard(mut2);
        std::cout << "watek dziala 1" << std::endl;
    }
}

void fun2()
{
    for (int i = 0; i < 11000; i++)
    {
        std::lock_guard<std::mutex> guard(mut2);
        std::cout << "watek dziala 2" << std::endl;
    }
}

int main(void)
{
    thread_guard t1( (std::thread( func() )) );
    thread_guard t2( (std::thread( func2() )) );
    //thread_guard t1((std::thread(fun)));
    //thread_guard t2((std::thread(fun2)));
}
You actually have two problems. The compilation error occurs because the function objects are copied into the std::thread, but the embedded mutex has no copy constructor, so the copy fails. One workaround is to create a named instance of your object and pass the member function together with a pointer to that object:
func f1;
thread_guard t1(std::thread(&func::operator(), &f1));
Note that this doesn't make the functor particularly useful in this case.
The other problem is that each functor object has its own mutex, so the two threads will run completely independently of each other.
If you, for example, make the mutex global, then you also solve the first problem and can use the functors without problems, as sketched below.
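A minimal sketch of the global-mutex variant, so the functor no longer carries a mutex and stays copyable (thread_guard from the question is left out for brevity):
#include <iostream>
#include <mutex>
#include <thread>

std::mutex cout_mut;  // one mutex shared by all functors, protecting std::cout

struct func
{
    void operator()()
    {
        for (int i = 0; i < 11000; i++)
        {
            std::lock_guard<std::mutex> guard(cout_mut);
            std::cout << "watek dziala 1" << std::endl;
        }
    }
};

int main()
{
    std::thread t1{ func() };  // copying func is fine now: it holds no mutex
    std::thread t2{ func() };
    t1.join();
    t2.join();
}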
In your code each functor owns a mutex. These are different mutexes, and really they don't guard anything.
The problem is that a functor needs to be copyable and mutexes are not. If the functor needs to lock a mutex, it is usually to protect some shared resource, and you would pass that mutex by reference into your functor.
Create the mutex outside, e.g. in main(), then
class func
{
    std::mutex * mutex;

public:
    explicit func( std::mutex & m ) : mutex( &m )
    {
    }

    void operator()()
    {
        for (int i = 0; i < 11000; i++)
        {
            std::lock_guard<std::mutex> guard(*mutex);
            std::cout << "watek dziala 1" << std::endl;
        }
    }
};
similar for func2
int main(void)
{
    std::mutex mutex;

    thread_guard t1( (std::thread( func( mutex ) )) );
    thread_guard t2( (std::thread( func2( mutex ) )) );
}

Why is boost::recursive_mutex not working as expected?

I have a custom class that uses boost mutexes and locks like this (only relevant parts):
template<class T> class FFTBuf
{
public:
    FFTBuf();

    [...]

    void lock();
    void unlock();

private:
    T *_dst;
    int _siglen;
    int _processed_sums;
    int _expected_sums;
    int _assigned_sources;
    bool _written;

    boost::recursive_mutex _mut;
    boost::unique_lock<boost::recursive_mutex> _lock;
};

template<class T> FFTBuf<T>::FFTBuf() : _dst(NULL), _siglen(0),
    _expected_sums(1), _processed_sums(0), _assigned_sources(0),
    _written(false), _lock(_mut, boost::defer_lock_t())
{
}

template<class T> void FFTBuf<T>::lock()
{
    std::cerr << "Locking" << std::endl;
    _lock.lock();
    std::cerr << "Locked" << std::endl;
}

template<class T> void FFTBuf<T>::unlock()
{
    std::cerr << "Unlocking" << std::endl;
    _lock.unlock();
}
If I try to lock the object more than once from the same thread, I get an exception (lock_error):
#include "fft_buf.hpp"
int main( void ) {
FFTBuf<int> b( 256 );
b.lock();
b.lock();
b.unlock();
b.unlock();
return 0;
}
This is the output:
sb#dex $ ./src/test
Locking
Locked
Locking
terminate called after throwing an instance of 'boost::lock_error'
what(): boost::lock_error
zsh: abort ./src/test
Why is this happening? Am I understanding some concept incorrectly?
As the name implies, the Mutex is recursive but the Lock is not.
That said, you have here a design problem. The locking operations would be better off not being accessible from the outside.
class SynchronizedInt
{
public:
    explicit SynchronizedInt(int i = 0): mData(i) {}

    int get() const
    {
        lock_type lock(mMutex);
        toolbox::ignore_unused_variable_warning(lock);

        return mData;
    }

    void set(int i)
    {
        lock_type lock(mMutex);
        toolbox::ignore_unused_variable_warning(lock);

        mData = i;
    }

private:
    typedef boost::recursive_mutex mutex_type;
    typedef boost::unique_lock<mutex_type> lock_type;

    int mData;
    mutable mutex_type mMutex;
};
The main point of the recursive_mutex is to allow chained locking within a given thread, which may occur if you have complex operations that call each other in some cases.
For example, let's tweak get:
int SynchronizedInt::UnitializedValue = -1;

int SynchronizedInt::get() const
{
    lock_type lock(mMutex);
    if (mData == UnitializedValue) this->fetchFromCache();
    return mData;
}

void SynchronizedInt::fetchFromCache()
{
    this->set(this->fetchFromCacheImpl());
}
Where is the problem here?
get acquires the lock on mMutex
it calls fetchFromCache which calls set
set attempts to acquire the lock...
If we did not have a recursive_mutex, this would fail.
The lock should not be part of the protected resource but of the caller, as you have one caller per thread. Each caller must use its own unique_lock.
The purpose of unique_lock is to lock and release the mutex with RAII, so you don't have to call unlock explicitly.
When the unique_lock is declared inside a method body, it will belong to the calling thread stack.
So a more correct use is :
#include <boost/thread/recursive_mutex.hpp>
#include <iostream>

template<class T>
class FFTBuf
{
public:
    FFTBuf()
    {
    }

    // this can be called by any thread
    void exemple() const
    {
        boost::recursive_mutex::scoped_lock lock( mut );
        std::cerr << "Locked" << std::endl;

        // we are safe here
        std::cout << "exemple" << std::endl;

        std::cerr << "Unlocking ( by RAII)" << std::endl;
    }

    // this is mutable to allow lock of const FFTBuf
    mutable boost::recursive_mutex mut;
};

int main( void )
{
    FFTBuf< int > b;

    {
        boost::recursive_mutex::scoped_lock lock1( b.mut );
        std::cerr << "Locking 1" << std::endl;

        // here the mutex is locked 1 times
        {
            boost::recursive_mutex::scoped_lock lock2( b.mut );
            std::cerr << "Locking 2" << std::endl;

            // here the mutex is locked 2 times

            std::cerr << "Auto UnLocking 2 ( by RAII) " << std::endl;
        }

        b.exemple();

        // here the mutex is locked 1 times

        std::cerr << "Auto UnLocking 1 ( by RAII) " << std::endl;
    }

    return 0;
}
Note the mutable on the mutex, which allows locking in const methods.
Also, the boost mutex types have a scoped_lock typedef, which is the right unique_lock type to use with them.
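If you are using the standard library instead of Boost, a roughly equivalent sketch uses std::recursive_mutex with std::lock_guard (the standard mutex types have no scoped_lock typedef; C++17 adds the separate std::scoped_lock class):
#include <iostream>
#include <mutex>

template<class T>
class FFTBuf
{
public:
    // safe to call from any thread, even on const objects
    void exemple() const
    {
        std::lock_guard<std::recursive_mutex> lock(mut);
        std::cout << "exemple" << std::endl;
    }  // unlocked here by RAII

    // mutable so const methods can lock it
    mutable std::recursive_mutex mut;
};

int main()
{
    FFTBuf<int> b;

    std::lock_guard<std::recursive_mutex> lock1(b.mut);  // locked once
    std::lock_guard<std::recursive_mutex> lock2(b.mut);  // locked twice - fine, it is recursive
    b.exemple();                                         // locks a third time internally
    return 0;
}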
Try this:
template<class T> void FFTBuf<T>::lock()
{
    std::cerr << "Locking" << std::endl;
    _mut.lock();
    std::cerr << "Locked" << std::endl;
}

template<class T> void FFTBuf<T>::unlock()
{
    std::cerr << "Unlocking" << std::endl;
    _mut.unlock();
}
You use the same instance of unique_lock, _lock, twice, and that is the problem.
You either have to use the lock() and unlock() methods of the recursive mutex directly, or use two different instances of unique_lock, for example _lock and _lock_2.
Update
I would like to add that your class has public lock() and unlock() methods, and from my point of view that is a bad idea in a real program. Also, holding a unique_lock as a class member is often a bad idea in a real program.