I have a custom class that uses boost mutexes and locks like this (only relevant parts):
template<class T> class FFTBuf
{
public:
FFTBuf();
[...]
void lock();
void unlock();
private:
T *_dst;
int _siglen;
int _processed_sums;
int _expected_sums;
int _assigned_sources;
bool _written;
boost::recursive_mutex _mut;
boost::unique_lock<boost::recursive_mutex> _lock;
};
template<class T> FFTBuf<T>::FFTBuf() : _dst(NULL), _siglen(0),
_expected_sums(1), _processed_sums(0), _assigned_sources(0),
_written(false), _lock(_mut, boost::defer_lock_t())
{
}
template<class T> void FFTBuf<T>::lock()
{
std::cerr << "Locking" << std::endl;
_lock.lock();
std::cerr << "Locked" << std::endl;
}
template<class T> void FFTBuf<T>::unlock()
{
std::cerr << "Unlocking" << std::endl;
_lock.unlock();
}
If I try to lock the object more than once from the same thread, I get an exception (lock_error):
#include "fft_buf.hpp"
int main( void ) {
FFTBuf<int> b( 256 );
b.lock();
b.lock();
b.unlock();
b.unlock();
return 0;
}
This is the output:
sb#dex $ ./src/test
Locking
Locked
Locking
terminate called after throwing an instance of 'boost::lock_error'
what(): boost::lock_error
zsh: abort ./src/test
Why is this happening? Am I understanding some concept incorrectly?
As the name implies, the Mutex is recursive but the Lock is not.
That said, you have a design problem here: the locking operations would be better off not being accessible from the outside.
class SynchronizedInt
{
public:
explicit SynchronizedInt(int i = 0): mData(i) {}
int get() const
{
lock_type lock(mMutex);
toolbox::ignore_unused_variable_warning(lock);
return mData;
}
void set(int i)
{
lock_type lock(mMutex);
toolbox::ignore_unused_variable_warning(lock);
mData = i;
}
private:
typedef boost::recursive_mutex mutex_type;
typedef boost::unique_lock<mutex_type> lock_type;
int mData;
mutable mutex_type mMutex;
};
The main point of the recursive_mutex is to allow chained locking within a given thread, which may occur if you have complex operations that call each other in some cases.
For example, let's tweak get:
int SynchronizedInt::UninitializedValue = -1;
int SynchronizedInt::get() const
{
lock_type lock(mMutex);
if (mData == UninitializedValue) this->fetchFromCache();
return mData;
}
void SynchronizedInt::fetchFromCache()
{
this->set(this->fetchFromCacheImpl());
}
Where is the problem here?
1. get acquires the lock on mMutex
2. it calls fetchFromCache, which calls set
3. set attempts to acquire the lock...
If we did not have a recursive_mutex, this would fail.
The lock should not be part of the protected resource but of the caller, since you have one caller per thread; each caller must use its own unique_lock.
The purpose of unique_lock is to lock and release the mutex with RAII, so you don't have to call unlock explicitly.
When the unique_lock is declared inside a method body, it will belong to the calling thread stack.
So a more correct use is:
#include <boost/thread/recursive_mutex.hpp>
#include <iostream>
template<class T>
class FFTBuf
{
public :
FFTBuf()
{
}
// this can be called by any thread
void exemple() const
{
boost::recursive_mutex::scoped_lock lock( mut );
std::cerr << "Locked" << std::endl;
// we are safe here
std::cout << "exemple" << std::endl ;
std::cerr << "Unlocking ( by RAII)" << std::endl;
}
// this is mutable to allow lock of const FFTBuf
mutable boost::recursive_mutex mut;
};
int main( void )
{
FFTBuf< int > b ;
{
boost::recursive_mutex::scoped_lock lock1( b.mut );
std::cerr << "Locking 1" << std::endl;
// here the mutex is locked 1 times
{
boost::recursive_mutex::scoped_lock lock2( b.mut );
std::cerr << "Locking 2" << std::endl;
// here the mutex is locked 2 times
std::cerr << "Auto UnLocking 2 ( by RAII) " << std::endl;
}
b.exemple();
// here the mutex is locked 1 times
std::cerr << "Auto UnLocking 1 ( by RAII) " << std::endl;
}
return 0;
}
Note the mutable on the mutex for const methods.
And the Boost mutex types have a scoped_lock typedef, which is the right unique_lock type to use.
Try this:
template<class T> void FFTBuf<T>::lock()
{
std::cerr << "Locking" << std::endl;
_mut.lock();
std::cerr << "Locked" << std::endl;
}
template<class T> void FFTBuf<T>::unlock()
{
std::cerr << "Unlocking" << std::endl;
_mut.unlock();
}
You use the same instance of unique_lock (_lock) twice, and this is the problem.
You either have to use the lock() and unlock() methods of the recursive mutex directly, or use two different instances of unique_lock, for example _lock and _lock_2 (see the sketch below).
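Here is a minimal sketch of that second option, assuming the same Boost headers and FFTBuf skeleton as in the question; it only illustrates the mechanics for the single-threaded test above and, as the update below explains, it is still poor design:
template<class T> class FFTBuf
{
public:
    FFTBuf() : _lock(_mut, boost::defer_lock_t()),
               _lock_2(_mut, boost::defer_lock_t())
    {
    }
    void lock()
    {
        if (!_lock.owns_lock())
            _lock.lock();      // first level
        else
            _lock_2.lock();    // second level; the recursive mutex allows it
    }
    void unlock()
    {
        if (_lock_2.owns_lock())
            _lock_2.unlock();  // release the inner level first
        else
            _lock.unlock();
    }
private:
    boost::recursive_mutex _mut;
    boost::unique_lock<boost::recursive_mutex> _lock;
    boost::unique_lock<boost::recursive_mutex> _lock_2;
};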
Update
I would like to add that your class has public methods lock() and unlock(), which in my view is a bad idea in a real program. Holding a unique_lock as a class member is also often a bad idea in a real program.
Related
I already asked this question in another post, but it came out poorly, so I want to rephrase it better.
I have to start a series of threads doing different tasks; they should only return when an exit signal is sent, otherwise (if they run into exceptions or anything else) they just restart their code from the beginning.
To make my intent clear, here's some code:
class thread_wrapper
{
public:
template<typename _Callable, typename... _Args>
thread_wrapper();
void signal_exit() {exit_requested_ = true;}
void join() {th_.join();}
private:
std::thread th_;
bool exit_requested_{false};
void execute()
{
while(!exit_requested_)
{
try
{
// Do thread processing
}
catch (const std::exception& e)
{
std::cout << e.what() << std::endl;
}
}
return;
}
};
What I want to achieve is to use this class as if it were a normal std::thread, passing a function and its arguments at construction, but I want the inner std::thread to run the execute function, and only inside the try block should it run the behaviour passed to the constructor.
How could I achieve this? Thanks in advance.
EDIT: I found a solution, but I can only build it with C++17 (because of the class template argument deduction on the lambda), and it is not really that elegant in my opinion.
template<typename Lambda>
class thread_wrapper
{
public:
explicit thread_wrapper(Lambda&& lambda) : lambda_{std::move(lambda)}, th_(&thread_wrapper::execute, this){};
void signal_exit() {exit_requested_ = true;}
void join() {th_.join();}
private:
Lambda lambda_;
bool exit_requested_{false};
// th_ is declared last so it is initialized after lambda_; otherwise the
// thread could call lambda_() before lambda_ has been constructed
std::thread th_;
void execute()
{
while(!exit_requested_)
{
try
{
lambda_();
}
catch (const std::exception& e)
{
std::cout << e.what() << std::endl;
}
}
return;
}
};
And here is a sample main:
class Foo
{
public:
void say_hello() { std::cout << "Hello!" << std::endl;}
};
int main()
{
Foo foo;
thread_wrapper th([&foo](){foo.say_hello(); std::this_thread::sleep_for(2s);});
std::this_thread::sleep_for(10s);
th.signal_exit();
th.join();
}
What do you think?
I'd say the solution you found is fine. You might want to avoid the thread_wrapper itself being a templated class and only template the constructor:
// no template
class thread_wrapper {
public:
template<typename Lambda, typename... Args>
explicit thread_wrapper(Lambda lambda, Args&&... args)
    : lambda_(std::bind(lambda, std::forward<Args>(args)...))
{
}
// ...
private:
std::function<void()> lambda_;
// ...
};
(I didn't try to compile this - small syntax errors etc are to be expected. It's more to show the concept)
Important: if you do call signal_exit, it will not abort the execution of lambda_. It will only exit once the lambda has returned/thrown.
Two little naming things to consider:
thread_wrapper is not a great name. It doesn't tell us anything about the purpose, or about what it does differently from a regular thread. Maybe robust_thread (to signify the automatic exception recovery) or something similar.
The method signal_exit could just be named exit. There is no reason to make the interface of this class specific to signals. You could use this class for any thread that should auto-restart until it is told to stop by some other part of the code.
Edit: One more thing I forgot: exit_requested_ must be either atomic or protected by a mutex to avoid undefined behavior. I'd suggest a std::atomic<bool>; that should be enough in your case.
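A minimal sketch of that change, keeping the member names from the question (the rest of the class stays as posted):
#include <atomic>
#include <thread>
class thread_wrapper {
public:
    void signal_exit() { exit_requested_ = true; }   // atomic store
private:
    std::atomic<bool> exit_requested_{false};        // atomic<bool> instead of plain bool
    std::thread th_;
    void execute()
    {
        while (!exit_requested_)                     // atomic load
        {
            // try { lambda_(); } catch (const std::exception& e) { ... } as in the question
        }
    }
    // ... remaining members as in the question ...
};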
I would use std::async and a condition variable construction for this.
I wrapped all the condition variable logic in one class so it can easily be reused.
More info on condition variables here: https://www.modernescpp.com/index.php/c-core-guidelines-be-aware-of-the-traps-of-condition-variables
Don't hesitate to ask for more information if you need it.
#include <chrono>
#include <future>
#include <condition_variable>
#include <mutex>
#include <iostream>
#include <thread>
//-----------------------------------------------------------------------------
// synchronization signal between two threads.
// by using a condition variable, the waiting thread
// can react even within the "sleep" time of your example
class signal_t
{
public:
void set()
{
std::unique_lock<std::mutex> lock{m_mtx};
m_signalled = true;
// notify waiting threads that something worth waking up for has happened
m_cv.notify_all();
}
bool wait_for(const std::chrono::steady_clock::duration& duration)
{
std::unique_lock<std::mutex> lock{ m_mtx };
// a condition variable wait is better than using sleep:
// it can detect the signal almost immediately
m_cv.wait_for(lock, duration, [this]
{
return m_signalled;
});
if ( m_signalled ) std::cout << "signal set detected\n";
return m_signalled;
}
private:
std::mutex m_mtx;
std::condition_variable m_cv;
bool m_signalled = false;
};
//-----------------------------------------------------------------------------
class Foo
{
public:
void say_hello() { std::cout << "Hello!" << std::endl; }
};
//-----------------------------------------------------------------------------
int main()
{
Foo foo;
signal_t stop_signal;
// no need to create a threadwrapper object
// all the logic fits within the lambda
// also std::async is a better abstraction than
// using std::thread. Through the future
// information on the asynchronous process can
// be fed back into the calling thread.
auto ft = std::async(std::launch::async, [&foo, &stop_signal]
{
while (!stop_signal.wait_for(std::chrono::seconds(2)))
{
foo.say_hello();
}
});
std::this_thread::sleep_for(std::chrono::seconds(10));
std::cout << "setting stop signal\n";
stop_signal.set();
std::cout << "stop signal set\n";
// synchronize with stopping of the asynchronous process.
ft.get();
std::cout << "async process stopped\n";
}
I have this simple class:
struct Foo {
void Run() {
this->bgLoader = std::thread([this]() mutable {
//do something
this->onFinish_Thread();
});
}
std::function<void()> onFinish_Thread;
std::thread bgLoader;
};
That is called from C-API:
void CApiRunFoo(){
Foo foo;
foo.onFinish_Thread = []() {
//do something at thread end
};
foo.Run();
}
I want to run CApiRunFoo, return from it but keep the thread running until it is finished.
Now, the problem is that once CApiRunFoo ends, foo goes out of scope even if the background thread is still running. If I allocate foo with new instead, it will run, but it will cause a memory leak.
I was thinking of creating a destructor like this:
~Foo(){
if (bgLoader.joinable()){
bgLoader.join();
}
}
but I am not sure whether it can cause a deadlock, and it would probably keep CApiRunFoo from returning until the thread finishes anyway.
Is there any solution/design pattern to this problem?
You could return the Foo instance to the C program:
struct Foo {
~Foo() {
if (bgLoader.joinable()) {
run = false;
bgLoader.join();
}
}
void Run() {
run = true;
this->bgLoader = std::thread([this]() mutable {
while(run) {
// do stuff
}
this->onFinish_Thread();
});
}
std::atomic<bool> run;
std::function<void()> onFinish_Thread;
std::thread bgLoader;
};
The C interface:
extern "C" {
struct foo_t {
void* instance;
};
foo_t CApiRunFoo() {
Foo* ptr = new Foo;
ptr->onFinish_Thread = []() {
std::cout << "done\n";
};
ptr->Run();
return foo_t{ptr};
}
void CApiDestroyFoo(foo_t x) {
auto ptr = static_cast<Foo*>(x.instance);
delete ptr;
}
}
And a C program:
int main() {
foo_t x = CApiRunFoo();
CApiDestroyFoo(x);
}
As it seems you'd like the Foo objects to automatically self destruct when the thread finishes, you could run them detached and let them delete this; when done.
#include <atomic>
#include <condition_variable>
#include <cstdint>
#include <iostream>
#include <functional>
#include <mutex>
#include <thread>
// Counting detached threads and making sure they are all finished before
// exiting the destructor. Used as a `static` member of `Foo`.
struct InstanceCounter {
~InstanceCounter() {
run = false;
std::unique_lock lock(mtx);
std::cout << "waiting for " << counter << std::endl;
while(counter) cv.wait(lock);
std::cout << "all done" << std::endl;
}
void operator++() {
std::lock_guard lock(mtx);
std::cout << "inc: " << ++counter << std::endl;
}
void operator--() {
std::lock_guard lock(mtx);
std::cout << "dec: " << --counter << std::endl;
cv.notify_one(); // if the destructor is waiting
}
std::atomic<bool> run{true};
std::mutex mtx;
std::condition_variable cv;
unsigned counter = 0;
};
struct Foo {
bool Run() {
try {
++ic; // increase number of threads in static counter
bgLoader = std::thread([this]() mutable {
while(ic.run) {
// do stuff
}
// if onFinish_Thread may throw - you may want to try-catch:
onFinish_Thread();
--ic; // decrease number of threads in static counter
delete this; // self destruct
});
bgLoader.detach();
return true; // thread started successfully
}
catch(const std::system_error& ex) {
// may actually happen if the system runs out of resources
--ic;
std::cout << ex.what() << ": ";
delete this;
return false; // thread not started
}
}
std::function<void()> onFinish_Thread;
private:
~Foo() { // private: Only allowed to self destruct
std::cout << "deleting myself" << std::endl;
}
std::thread bgLoader;
static InstanceCounter ic;
};
InstanceCounter Foo::ic{};
Now the C interface becomes more like what you had in the question.
#include <stdbool.h>
extern "C" {
bool CApiRunFoo() {
Foo* ptr = new Foo;
ptr->onFinish_Thread = []() {
std::cout << "done" << std::endl;
};
return ptr->Run();
// it looks like `ptr` is leaked here, but it self destructs later
}
}
Your program should call join and finish the new thread at some point in the future. To do that, it has to hold a reference (in a wide sense) to the thread object. In your current design, your foo is such a reference, so you are not allowed to lose it.
You should think about a place in your code where it makes sense to call join; that same place should hold your foo. If you do that, there is no problem, because foo also contains the onFinish_Thread object. A sketch of the idea follows.
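A minimal sketch of that idea, assuming the Foo from the question (the owning function here is made up purely for illustration):
void run_and_wait()
{
    Foo foo;                                   // owned here instead of inside CApiRunFoo
    foo.onFinish_Thread = [] { /* do something at thread end */ };
    foo.Run();
    // ... other work can happen here while the background thread runs ...
    if (foo.bgLoader.joinable())
        foo.bgLoader.join();                   // the owner joins before foo is destroyed
}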
I use std::cout to print a log to the console. Since the program is multi-threaded, the output gets interleaved if I chain more than one << operator after cout.
For example, if one thread executes cout << "A" << "B" << endl;, another thread might execute cout << "C"; between A and B, and the result would be "ACB".
Hence I'm going to write a new class that inherits from ostream (which is in fact basic_ostream<char, char_traits<char>>) and takes a lock when it is constructed, so the output comes out in the proper order.
One option would be to create a class that holds a reference to a stream, but holds a lock throughout its lifetime. Here's a simple example:
#include <iostream>
#include <mutex>
struct LockedOstream {
std::lock_guard<std::mutex> lg_;
std::ostream& os_;
LockedOstream(std::mutex& m, std::ostream& os)
: lg_{m}
, os_{os}
{ }
std::ostream& stream() const { return os_; }
};
int main()
{
std::mutex m;
LockedOstream(m, std::cout).stream() << "foo " << "bar\n";
// ^ locked now ^ unlocked now
}
This works as long as all the printing that forms a single "unit" of output occurs in the same statement; if it spans several statements, keep the lock alive in a named local, as in the sketch below.
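For instance, a unit of output spread over several statements could be written like this (reusing the LockedOstream class and the mutex m from the example above):
{
    LockedOstream locked(m, std::cout);
    locked.stream() << "foo ";
    locked.stream() << "bar\n";
}   // lock released here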
Edit: Actually, the inheritance version is a lot nicer than I originally expected:
#include <iostream>
#include <mutex>
class LockedOstream : public std::ostream {
static std::mutex& getCoutMutex()
// use a Meyers' singleton for the cout mutex to keep this header-only
{
static std::mutex m;
return m;
}
std::lock_guard<std::mutex> lg_;
public:
// Generic constructor
// You need to pass the right mutex to match the stream you want
LockedOstream(std::mutex& m, std::ostream& os)
: std::ostream(os.rdbuf())
, lg_{m}
{ }
// Cout specific constructor
// Uses a mutex singleton specific for cout
LockedOstream()
: LockedOstream(getCoutMutex(), std::cout)
{ }
};
int main()
{
LockedOstream() << "foo " << "bar\n";
// ^ locked now ^ unlocked now
}
As an aside:
using namespace std; is widely considered bad practice, and I'm not a big fan of std::endl either (though the latter is sometimes contentious, but at least it's a good idea to know the trade-off and make an informed choice).
You can define your own function
template<typename... Ts>
void locked_print(std::ostream& stream, Ts&&... ts)
{
static std::mutex mtx;
std::lock_guard<std::mutex> guard(mtx);
(stream << ... << std::forward<Ts>(ts));
}
And when you want to be sure it's exclusive, you can call it like locked_print(std::cout, 1, 2, "bar");
Since outstream << x1 << x2 << ... is a sequence of separate function calls, there is no way to make it atomic inside the stream itself, short of keeping the stream locked until it is destroyed.
You can simply enforce your constraint at the call site:
{
std::lock_guard<std::mutex> guard(global_mutex);
// your print here
}
The idea is to have an instance for each thread, so I create a new instance for every new std::thread::id, like this:
struct doSomething{
void test(int toto) {}
};
void test(int toto)
{
static std::map<std::thread::id, doSomething *> maps;
std::map<std::thread::id, doSomething *>::iterator it = maps.find(std::this_thread::get_id());
if (it == maps.end())
{
// mutex.lock() ?
maps[std::this_thread::get_id()] = new doSomething();
it = maps.find(std::this_thread::get_id());
// mutex.unlock() ?
}
it->second->test(toto);
}
Is it a good idea?
Locking a mutex only after you've accessed the map would not be enough. You can't go anywhere near the map without holding the mutex, because another thread might take the mutex and modify the map while you are reading from it.
{
std::unique_lock<std::mutex> lock(my_mutex);
std::map<std::thread::id, doSomething *>::iterator it = maps.find(std::this_thread::get_id());
if (it != maps.end())
return it->second;
auto ptr = std::make_unique<doSomething>();
maps[std::this_thread::get_id()] = ptr.get();
return ptr.release();
}
But unless you have some special/unique use case, this is an already-solved problem through thread-local storage, and since you have C++11 you have the thread_local storage specifier.
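A minimal sketch of that alternative, rewriting the question's test function (it assumes the doSomething type from the question): one instance per thread, with no map and no mutex needed for the lookup itself.
void test(int toto)
{
    thread_local doSomething instance;  // constructed once per thread, on first use
    instance.test(toto);
}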
Note that I'm using a mutex here because cout is a shared resource, and std::this_thread::yield() just to encourage a little more interleaving of the workflow.
#include <iostream>
#include <memory>
#include <thread>
#include <mutex>
static std::mutex cout_mutex;
struct CoutGuard : public std::unique_lock<std::mutex> {
CoutGuard() : unique_lock(cout_mutex) {}
};
struct doSomething {
void fn() {
CoutGuard guard;
std::cout << std::this_thread::get_id() << " running doSomething "
<< (void*)this << "\n";
}
};
thread_local std::unique_ptr<doSomething> tls_dsptr; // DoSomethingPoinTeR
void testFn() {
doSomething* dsp = tls_dsptr.get();
if (dsp == nullptr) {
tls_dsptr = std::make_unique<doSomething>();
dsp = tls_dsptr.get();
CoutGuard guard;
std::cout << std::this_thread::get_id() << " allocated "
<< (void*)dsp << "\n";
} else {
CoutGuard guard;
std::cout << std::this_thread::get_id() << " re-use\n";
}
dsp->fn();
std::this_thread::yield();
}
void thread_fn() {
testFn();
testFn();
testFn();
}
int main() {
std::thread t1(thread_fn);
std::thread t2(thread_fn);
t2.join();
t1.join();
}
Live demo: http://coliru.stacked-crooked.com/a/3dec7efcb0018549
g++ -std=c++14 -O2 -Wall -pedantic -pthread main.cpp && ./a.out
140551597459200 allocated 0x7fd4a80008e0
140551597459200 running doSomething 0x7fd4a80008e0
140551605851904 allocated 0x7fd4b00008e0
140551605851904 running doSomething 0x7fd4b00008e0
140551605851904 re-use
140551605851904 running doSomething 0x7fd4b00008e0
140551597459200 re-use
140551605851904 re-use
140551597459200 running doSomething 0x7fd4a80008e0
140551605851904 running doSomething 0x7fd4b00008e0
140551597459200 re-use
140551597459200 running doSomething 0x7fd4a80008e0
It's a little hard to spot but thread '9200 allocated ..4a80.. whereas thread '1904 allocated ..4b00..
No, not a good idea.
std::map's methods themselves are not thread safe.
In order to really make it a "good idea", you must also make all access to your std::map thread-safe, by using a mutex, or an equivalent.
This includes not just the parts you have commented out, but also all other methods you're using, like find().
Everything that touches your std::map must be mutex-protected.
I am learning how to use std::thread in standard C++, and I can't solve one problem with std::mutex.
I am running 2 threads with simple functions that show a message in CMD. I want to use a std::mutex, so that one thread will wait until the other thread stops using the buffer.
When I use the functions everything works fine, but with the functors I have a problem:
error C2280: 'std::mutex::mutex(const std::mutex &)' : attempting to reference a deleted function
What am I doing wrong?
#include <iostream>
#include <thread>
#include <mutex>
class thread_guard
{
private:
std::thread m_thread;
public:
thread_guard(std::thread t)
{
m_thread = std::move(t);
if (!m_thread.joinable())
std::cout << "Brak watku!!!" << std::endl;
}
~thread_guard()
{
m_thread.join();
}
};
class func
{
private:
std::mutex mut;
public:
void operator()()
{
for (int i = 0; i < 11000; i++)
{
std::lock_guard<std::mutex> guard(mut);
std::cout << "watek dziala 1" << std::endl;
}
}
};
class func2
{
private:
std::mutex mut;
public:
void operator()()
{
for (int i = 0; i < 11000; i++)
{
std::lock_guard<std::mutex> guard(mut);
std::cout << "watek dziala 2" << std::endl;
}
}
};
std::mutex mut2;
void fun()
{
for (int i = 0; i < 11000; i++)
{
std::lock_guard<std::mutex> guard(mut2);
std::cout << "watek dziala 1" << std::endl;
}
}
void fun2()
{
for (int i = 0; i < 11000; i++)
{
std::lock_guard<std::mutex> guard(mut2);
std::cout << "watek dziala 2" << std::endl;
}
}
int main(void)
{
thread_guard t1( (std::thread( func() )) );
thread_guard t2( (std::thread(func2() )) );
//thread_guard t1((std::thread(fun)));
//thread_guard t2((std::thread(fun2)));
}
You actually have two problems. The compilation error occurs because the function objects are copied, and the embedded mutex doesn't have a copy constructor, so you get an error. Instead you have to create an instance of your object and pass the member function and a pointer to the object:
func f1;
thread_guard t1(std::thread(&func::operator(), &f1));
Note that this doesn't really make the functor any more useful than a plain function in this case.
The other problem is that each functor object has its own mutex, so the two threads will run completely independently of each other.
If you, for example, make the mutex global, then you also solve the first problem and can use the functors without problems, as in the sketch below.
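A sketch of that variant (the shared mutex name is made up; the question's global mut2 would serve just as well):
std::mutex shared_mut;   // one mutex shared by func and func2
class func
{
public:
    void operator()()
    {
        for (int i = 0; i < 11000; i++)
        {
            std::lock_guard<std::mutex> guard(shared_mut);
            std::cout << "watek dziala 1" << std::endl;
        }
    }
};
// func2 is analogous; main can then keep the original
//   thread_guard t1( (std::thread( func() )) );
//   thread_guard t2( (std::thread( func2() )) );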
In your code each functor owns a mutex. These are different mutexes, so really they don't guard anything.
The problem is that the function object needs to be copyable and mutexes are not. When a functor needs to lock a mutex, it is usually protecting some shared resource, so you'd pass that mutex by reference to the functor.
Create the mutex outside, e.g. in main(), then:
class func
{
std::mutex * mutex;
public:
explicit func( std::mutex & m ) : mutex( &m )
{
}
void operator()()
{
for (int i = 0; i < 11000; i++)
{
std::lock_guard<std::mutex> guard(*mutex);
std::cout << "watek dziala 1" << std::endl;
}
}
};
and similarly for func2:
int main(void)
{
std::mutex mutex;
thread_guard t1( (std::thread( func( mutex) )) );
thread_guard t2( (std::thread( func2( mutex ) )) );
}