It looks like scoped_lock in C++17 gives the functionality I'm after; however, I'm presently tied to C++11.
At the moment I'm seeing deadlocks with lock_guard when we lock the same mutex more than once on the same thread. Does scoped_lock protect against multiple calls (i.e. is it reentrant)?
Is there a best practice for doing this in C++11 with lock_guard?
mutex lockingMutex;

void get(string s)
{
    lock_guard<mutex> lock(lockingMutex);
    if (isPresent(s))
    {
        //....
    }
}

bool isPresent(string s)
{
    bool ret = false;
    lock_guard<mutex> lock(lockingMutex);
    //....
    return ret;
}
To be able to lock the same mutex multiple times on the same thread, you need to use std::recursive_mutex. A recursive mutex is more expensive than a non-recursive one.
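Applied to your snippet, a minimal sketch (only the mutex and lock guard types change; isPresent is assumed to lock recursively, as in your code):

#include <mutex>
#include <string>

std::recursive_mutex lockingMutex;

bool isPresent(std::string s); // as in your code; also locks lockingMutex

void get(std::string s)
{
    std::lock_guard<std::recursive_mutex> lock(lockingMutex);
    if (isPresent(s)) // locking lockingMutex again on this thread is now OK
    {
        //....
    }
}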
Best practice, though, is to design your code in such a way that a thread does not lock the same mutex multiple times. For example, have your public functions lock the mutex first and then invoke an implementation function that expects the mutex to have been locked already. Implementation functions must not call the public API functions that lock the mutex. E.g.:
class A {
    std::mutex m_;
    int state_ = 0;

private: // These expect the mutex to have been locked.
    void foo_() {
        ++state_;
    }
    void bar_() {
        this->foo_();
    }

public: // Public functions lock the mutex first.
    void foo() {
        std::lock_guard<std::mutex> lock(m_);
        this->foo_();
    }
    void bar() {
        std::lock_guard<std::mutex> lock(m_);
        this->bar_();
    }
};
scoped_lock does not give the functionality you are looking for.
scoped_lock is just a variadic version of lock_guard; it only exists due to some ABI issues with changing lock_guard into a variadic template.
To have reentrant mutexes, you need to use a reentrant (recursive) mutex. But this is both more expensive at runtime and usually indicates a lack of care in your mutex state. While holding a mutex, you should have complete and total understanding of all other synchronization actions you are performing.
Once you have complete understanding of all the synchronization actions you are performing, it is easy to avoid recursive locking.
There are two patterns you can consider here. First, split the public locking API from a private non-locking API. Second, split synchronization from implementation.
class cache { // illustrative enclosing class; the opening was missing from the snippet
private:
    mutex lockingMutex;

    bool isPresent(string s, lock_guard<mutex> const& lock) {
        bool ret = false;
        //....
        return ret;
    }
    void get(string s, lock_guard<mutex> const& lock) {
        if (isPresent(s, lock))
        {
            //....
        }
    }

public:
    void get(string s) {
        return get( std::move(s), lock_guard<mutex>(lockingMutex) );
    }
    bool isPresent(string s) {
        return isPresent( std::move(s), lock_guard<mutex>(lockingMutex) );
    }
};
Here I use lock_guard<mutex> as "proof we have a lock".
An often better alternative is to write your class as non-thread-safe, then use a wrapper:
template<class T>
struct mutex_guarded {
    template<class T0, class...Ts,
        std::enable_if_t<!std::is_same<std::decay_t<T0>, mutex_guarded>{}, bool> =true
    >
    mutex_guarded(T0&&t0, Ts&&...ts):
        t( std::forward<T0>(t0), std::forward<Ts>(ts)... )
    {}
    mutex_guarded()=default;
    ~mutex_guarded()=default;

    template<class F>
    auto read( F&& f ) const {
        auto l = lock();
        return f(t);
    }
    template<class F>
    auto write( F&& f ) {
        auto l = lock();
        return f(t);
    }

private:
    auto lock() { return std::unique_lock<std::mutex>(m); }
    auto lock() const { return std::unique_lock<std::mutex>(m); }

    mutable std::mutex m; // mutable so the const overload can lock
    T t;
};
Now we can use it like this:
mutex_guarded<Foo> foo;
foo.write([&](auto&&foo){ foo.get("hello"); } );
You can write mutex_guarded, shared_mutex_guarded, not_mutex_guarded or even async_guarded (which returns futures and serializes actions in a worker thread).
So long as the class doesn't leave its own "zone of control" in its methods, this pattern makes writing mutex-guarded data much easier, and lets you compose related mutex-guarded data into one bundle without having to rewrite it.
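For instance, a shared_mutex_guarded variant might look like this (a sketch, assuming C++17's std::shared_mutex; with C++14 you could substitute std::shared_timed_mutex):

#include <shared_mutex>

template<class T>
struct shared_mutex_guarded {
    template<class F>
    auto read( F&& f ) const {
        std::shared_lock<std::shared_mutex> l(m); // shared: readers don't exclude each other
        return f(t);
    }
    template<class F>
    auto write( F&& f ) {
        std::unique_lock<std::shared_mutex> l(m); // exclusive: one writer at a time
        return f(t);
    }
private:
    mutable std::shared_mutex m;
    T t;
};

The forwarding constructors would be the same as in mutex_guarded above.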
We have a lot of legacy C++98 code that we are slowly upgrading to C++11, and we have an RAII implementation for a custom Mutex class:
class RaiiMutex
{
public:
    RaiiMutex() = delete;
    RaiiMutex(const RaiiMutex&) = delete;
    RaiiMutex& operator= (const RaiiMutex&) = delete;
    RaiiMutex(Mutex& mutex) : mMutex(mutex)
    {
        mMutex.Lock();
    }
    ~RaiiMutex()
    {
        mMutex.Unlock();
    }
private:
    Mutex& mMutex;
};
Is it OK to make a std::unique_ptr of this object? We would still benefit from the destructor being called automatically when the object dies (thus unlocking), and we would also gain the ability to unlock before non-critical operations.
Example legacy code:
RaiiMutex raiiMutex(mutex);
if (!condition)
{
    loggingfunction();
    return false;
}
After:
auto raiiMutex = std::unique_ptr<RaiiMutex>(new RaiiMutex(mutex));
if (!condition)
{
    raiiMutex = nullptr;
    loggingfunction(); // log without locking the mutex
    return false;
}
It would also remove the need for otherwise-unnecessary braces:
Example legacy code:
Data data;
{
    RaiiMutex raiiMutex(mutex);
    data = mQueue.front();
    mQueue.pop_front();
}
data.foo();
After:
auto raiiMutex = std::unique_ptr<RaiiMutex>(new RaiiMutex(mutex));
Data data = mQueue.front();
mQueue.pop_front();
raiiMutex = nullptr;
data.foo();
Does it make sense?
Edit:
We cannot use unique_lock due to the custom Mutex class:
class Mutex
{
public:
    Mutex();
    virtual ~Mutex();
    void Unlock(bool yield = false);
    void Lock();
    bool TryLock();
    bool TimedLock(uint64 pWaitIntervalUs);
private:
    sem_t mMutex;
};
Add Mutex::lock(), Mutex::unlock() and Mutex::try_lock() methods to Mutex. They just forward to the Lock etc. methods.
Then use std::unique_lock<Mutex>.
If you cannot modify Mutex, wrap it:
struct SaneMutex : Mutex {
    void lock() { Lock(); }
    // etc
    using Mutex::Mutex;
};
A SaneMutex replaces a Mutex everywhere you can.
Where you can't:
struct MutexRef {
    void lock() { m.Lock(); }
    // etc
    MutexRef( Mutex& m_in ) : m(m_in) {}
private:
    Mutex& m;
};
include an adapter.
These match the C++ standard Lockable requirements. If you want TimedLockable, you have to write a bit of glue code (see the sketch below).
auto l = std::unique_lock<MutexRef>( mref );
or
auto l = std::unique_lock<SaneMutex>( m );
you now have std::lock, std::unique_lock, std::scoped_lock support.
And your code is one step closer to using std::mutex.
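The timed glue might look like this (a sketch; it assumes TimedLock takes microseconds, as the uint64 pWaitIntervalUs parameter suggests, and it only provides try_lock_for; full TimedLockable would also need try_lock_until):

#include <chrono>

struct TimedMutexRef {
    void lock() { m.Lock(); }
    void unlock() { m.Unlock(); }
    bool try_lock() { return m.TryLock(); }
    // Glue: convert any chrono duration to the microsecond count TimedLock expects.
    template<class Rep, class Period>
    bool try_lock_for(std::chrono::duration<Rep, Period> const& d) {
        auto us = std::chrono::duration_cast<std::chrono::microseconds>(d);
        return m.TimedLock(static_cast<uint64>(us.count()));
    }
    TimedMutexRef( Mutex& m_in ) : m(m_in) {}
private:
    Mutex& m;
};

// try_lock_for is enough for the duration-taking unique_lock constructor:
// auto l = std::unique_lock<TimedMutexRef>( tref, std::chrono::milliseconds(10) );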
As for your unique_ptr solution: I wouldn't add the overhead of a memory allocation every time you lock a mutex.
There is a shared_mutex class planned for C++17, and shared_timed_mutex is already in C++14. (Who knows why they came in that order, but whatever.) Then there are recursive_mutex and recursive_timed_mutex since C++11. What I need is a shared_recursive_mutex. Did I miss something in the standard, or do I have to wait another three years for a standardized version of that?
If there is currently no such facility, what would be a simple (first priority) and efficient (second priority) implementation of such a feature using standard C++ only?
The recursive property of a mutex operates in terms of an "owner", which in the case of a shared_mutex is not well-defined: several threads may have .lock_shared() called at the same time.
Assuming the "owner" to be a thread which calls .lock() (not .lock_shared()!), an implementation of a recursive shared mutex can simply be derived from shared_mutex:
#include <atomic>
#include <shared_mutex>
#include <thread>

class shared_recursive_mutex : public std::shared_mutex
{
public:
    void lock(void) {
        std::thread::id this_id = std::this_thread::get_id();
        if(owner == this_id) {
            // recursive locking
            count++;
        }
        else {
            // normal locking
            std::shared_mutex::lock();
            owner = this_id;
            count = 1;
        }
    }
    void unlock(void) {
        if(count > 1) {
            // recursive unlocking
            count--;
        }
        else {
            // normal unlocking
            owner = std::thread::id();
            count = 0;
            std::shared_mutex::unlock();
        }
    }

private:
    std::atomic<std::thread::id> owner;
    int count = 0;
};
The field .owner needs to be declared atomic, because in the .lock() method it is checked without protection from concurrent access.
If you want to call .lock_shared() recursively, you need to maintain a map of owners, and accesses to that map should be protected with some additional mutex.
Allowing a thread with an active .lock() to call .lock_shared() makes the implementation more complex.
Finally, allowing a thread to upgrade from .lock_shared() to .lock() is a no-no, as it leads to a possible deadlock when two threads attempt that upgrade at the same time.
Again, the semantics of a recursive shared mutex are fragile enough that it is better not to use one at all.
If you are on a Linux / POSIX platform, you are in luck, because C++ mutexes are modelled after POSIX ones. The POSIX ones provide more features, including being recursive, process-shared, and more, and wrapping POSIX primitives into C++ classes is straightforward.
The POSIX threads documentation is a good entry point.
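For example, here is a sketch wrapping a recursive POSIX mutex so that it satisfies the C++ Lockable requirements (and so works with std::lock_guard and std::unique_lock):

#include <pthread.h>

class posix_recursive_mutex {
public:
    posix_recursive_mutex() {
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
        pthread_mutex_init(&m_, &attr);
        pthread_mutexattr_destroy(&attr);
    }
    ~posix_recursive_mutex() { pthread_mutex_destroy(&m_); }
    posix_recursive_mutex(const posix_recursive_mutex&) = delete;
    posix_recursive_mutex& operator=(const posix_recursive_mutex&) = delete;

    void lock() { pthread_mutex_lock(&m_); }
    void unlock() { pthread_mutex_unlock(&m_); }
    bool try_lock() { return pthread_mutex_trylock(&m_) == 0; }
private:
    pthread_mutex_t m_;
};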
Here is a quick thread-safety wrapper around a type T:
template<class T, class Lock>
struct lock_guarded {
    Lock l;
    T* t;

    T* operator->()&&{ return t; }
    template<class Arg>
    auto operator[](Arg&&arg)&&
    -> decltype(std::declval<T&>()[std::declval<Arg>()])
    {
        return (*t)[std::forward<Arg>(arg)];
    }
    T& operator*()&&{ return *t; }
};

constexpr struct emplace_t {} emplace {};
template<class T>
struct mutex_guarded {
    lock_guarded<T, std::unique_lock<std::mutex>>
    get_locked() {
        return {{m},&t};
    }
    lock_guarded<T const, std::unique_lock<std::mutex>>
    get_locked() const {
        return {{m},&t};
    }
    lock_guarded<T, std::unique_lock<std::mutex>>
    operator->() {
        return get_locked();
    }
    lock_guarded<T const, std::unique_lock<std::mutex>>
    operator->() const {
        return get_locked();
    }
    template<class F>
    std::result_of_t<F(T&)>
    operator->*(F&& f) {
        return std::forward<F>(f)(*get_locked());
    }
    template<class F>
    std::result_of_t<F(T const&)>
    operator->*(F&& f) const {
        return std::forward<F>(f)(*get_locked());
    }
    template<class...Args>
    mutex_guarded(emplace_t, Args&&...args):
        t(std::forward<Args>(args)...)
    {}
    mutex_guarded(mutex_guarded&& o):
        t( std::move(*o.get_locked()) )
    {}
    mutex_guarded(mutex_guarded const& o):
        t( *o.get_locked() )
    {}
    mutex_guarded() = default;
    ~mutex_guarded() = default;
    mutex_guarded& operator=(mutex_guarded&& o)
    {
        T tmp = std::move(*o.get_locked());
        *get_locked() = std::move(tmp);
        return *this;
    }
    mutex_guarded& operator=(mutex_guarded const& o)
    {
        T tmp = *o.get_locked();
        *get_locked() = std::move(tmp);
        return *this;
    }

private:
    mutable std::mutex m; // mutable so the const accessors can lock
    T t;
};
You can use it either way:
mutex_guarded<std::vector<int>> guarded;
auto s0 = guarded->size();
auto s1 = guarded->*[](auto&&e){return e.size();};
Both do roughly the same thing, and the guarded object is only accessed while the mutex is locked.
Stealing from @tsyvarev's answer (with some minor changes) we get:
class shared_recursive_mutex // uses the mutex_guarded wrapper from above
{
    std::shared_mutex m;
public:
    void lock(void) {
        std::thread::id this_id = std::this_thread::get_id();
        if(owner == this_id) {
            // recursive locking
            ++count;
        } else {
            // normal locking
            m.lock();
            owner = this_id;
            count = 1;
        }
    }
    void unlock(void) {
        if(count > 1) {
            // recursive unlocking
            count--;
        } else {
            // normal unlocking
            owner = std::thread::id();
            count = 0;
            m.unlock();
        }
    }
    void lock_shared() {
        std::thread::id this_id = std::this_thread::get_id();
        if (shared_counts->count(this_id)) {
            ++(shared_counts.get_locked()[this_id]);
        } else {
            m.lock_shared();
            shared_counts.get_locked()[this_id] = 1;
        }
    }
    void unlock_shared() {
        std::thread::id this_id = std::this_thread::get_id();
        auto it = shared_counts->find(this_id);
        if (it->second > 1) {
            --(it->second);
        } else {
            shared_counts->erase(it);
            m.unlock_shared();
        }
    }
private:
    std::atomic<std::thread::id> owner;
    std::atomic<std::size_t> count;
    mutex_guarded<std::map<std::thread::id, std::size_t>> shared_counts;
};
try_lock and try_lock_shared left as an exercise.
Both lock_shared and unlock_shared lock the map's mutex twice. This is safe, because the branches are really about "is this thread in control of the mutex", and another thread cannot change that answer from "no" to "yes" or vice versa. You could do it with one lock by using ->* instead of ->, which would make it faster at the cost of some complexity in the logic; a sketch follows.
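For example, lock_shared rewritten so the recursive path takes the map lock only once (a sketch; the first acquisition still has to call m.lock_shared() outside the map lock, so it never blocks while holding it):

void lock_shared() {
    std::thread::id this_id = std::this_thread::get_id();
    bool first = shared_counts->*[&](std::map<std::thread::id, std::size_t>& counts) {
        auto it = counts.find(this_id);
        if (it == counts.end())
            return true;   // first shared lock on this thread
        ++it->second;      // recursive shared lock, counted under a single lock
        return false;
    };
    if (first) {
        m.lock_shared();   // may block; the map mutex is not held here
        shared_counts->*[&](std::map<std::thread::id, std::size_t>& counts) {
            counts[this_id] = 1;
        };
    }
}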
The above does not support taking an exclusive lock, then a shared lock: that is tricky. It cannot support taking a shared lock, then upgrading to a unique lock, because it is basically impossible to stop two threads from deadlocking when they both try that.
That last issue may be why recursive shared mutexes are a bad idea.
It is possible to construct a shared recursive mutex using existing primitives. I don't recommend doing it, though.
It isn't simple, and wrapping the existing POSIX implementation (or whatever is native to your platform) is very likely to be more efficient.
If you do decide to write your own implementation, making it efficient still depends on platform-specific details, so you're either writing an interface with a different implementation for each platform, or you're selecting a platform and could just as easily use the native (POSIX or whatever) facilities instead.
I'm certainly not going to provide a sample recursive read/write lock implementation, because it's a wholly unreasonable amount of work for a Stack Overflow answer.
Sharing my implementation, no promises
recursive_shared_mutex.h
#ifndef RECURSIVE_SHARED_MUTEX_H
#define RECURSIVE_SHARED_MUTEX_H

#include <thread>
#include <mutex>
#include <condition_variable>
#include <map>
#include <stdexcept>
struct recursive_shared_mutex
{
public:
    recursive_shared_mutex() :
        m_mtx{}, m_exclusive_thread_id{}, m_exclusive_count{ 0 }, m_shared_locks{}
    {}

    void lock();
    bool try_lock();
    void unlock();
    void lock_shared();
    bool try_lock_shared();
    void unlock_shared();

    recursive_shared_mutex(const recursive_shared_mutex&) = delete;
    recursive_shared_mutex& operator=(const recursive_shared_mutex&) = delete;

private:
    inline bool is_exclusive_locked()
    {
        return m_exclusive_count > 0;
    }
    inline bool is_shared_locked()
    {
        return m_shared_locks.size() > 0;
    }
    inline bool can_exclusively_lock()
    {
        return can_start_exclusive_lock() || can_increment_exclusive_lock();
    }
    inline bool can_start_exclusive_lock()
    {
        return !is_exclusive_locked() && (!is_shared_locked() || is_shared_locked_only_on_this_thread());
    }
    inline bool can_increment_exclusive_lock()
    {
        return is_exclusive_locked_on_this_thread();
    }
    inline bool can_lock_shared()
    {
        return !is_exclusive_locked() || is_exclusive_locked_on_this_thread();
    }
    inline bool is_shared_locked_only_on_this_thread()
    {
        return is_shared_locked_only_on_thread(std::this_thread::get_id());
    }
    inline bool is_shared_locked_only_on_thread(std::thread::id id)
    {
        return m_shared_locks.size() == 1 && m_shared_locks.find(id) != m_shared_locks.end();
    }
    inline bool is_exclusive_locked_on_this_thread()
    {
        return is_exclusive_locked_on_thread(std::this_thread::get_id());
    }
    inline bool is_exclusive_locked_on_thread(std::thread::id id)
    {
        return m_exclusive_count > 0 && m_exclusive_thread_id == id;
    }
    inline void start_exclusive_lock()
    {
        m_exclusive_thread_id = std::this_thread::get_id();
        m_exclusive_count++;
    }
    inline void increment_exclusive_lock()
    {
        m_exclusive_count++;
    }
    inline void decrement_exclusive_lock()
    {
        if (m_exclusive_count == 0)
        {
            throw std::logic_error("Not exclusively locked, cannot exclusively unlock");
        }
        if (m_exclusive_thread_id == std::this_thread::get_id())
        {
            m_exclusive_count--;
        }
        else
        {
            throw std::logic_error("Calling exclusively unlock from the wrong thread");
        }
    }
    inline void increment_shared_lock()
    {
        increment_shared_lock(std::this_thread::get_id());
    }
    inline void increment_shared_lock(std::thread::id id)
    {
        if (m_shared_locks.find(id) == m_shared_locks.end())
        {
            m_shared_locks[id] = 1;
        }
        else
        {
            m_shared_locks[id] += 1;
        }
    }
    inline void decrement_shared_lock()
    {
        decrement_shared_lock(std::this_thread::get_id());
    }
    inline void decrement_shared_lock(std::thread::id id)
    {
        if (m_shared_locks.size() == 0)
        {
            throw std::logic_error("Not shared locked, cannot shared unlock");
        }
        if (m_shared_locks.find(id) == m_shared_locks.end())
        {
            throw std::logic_error("Calling shared unlock from the wrong thread");
        }
        else
        {
            if (m_shared_locks[id] == 1)
            {
                m_shared_locks.erase(id);
            }
            else
            {
                m_shared_locks[id] -= 1;
            }
        }
    }

    std::mutex m_mtx;
    std::thread::id m_exclusive_thread_id;
    size_t m_exclusive_count;
    std::map<std::thread::id, size_t> m_shared_locks;
    std::condition_variable m_cond_var;
};
#endif
recursive_shared_mutex.cpp
#include "recursive_shared_mutex.h"
#include <condition_variable>
void recursive_shared_mutex::lock()
{
std::unique_lock sync_lock(m_mtx);
m_cond_var.wait(sync_lock, [this] { return can_exclusively_lock(); });
if (is_exclusive_locked_on_this_thread())
{
increment_exclusive_lock();
}
else
{
start_exclusive_lock();
}
}
bool recursive_shared_mutex::try_lock()
{
std::unique_lock sync_lock(m_mtx);
if (can_increment_exclusive_lock())
{
increment_exclusive_lock();
return true;
}
if (can_start_exclusive_lock())
{
start_exclusive_lock();
return true;
}
return false;
}
void recursive_shared_mutex::unlock()
{
{
std::unique_lock sync_lock(m_mtx);
decrement_exclusive_lock();
}
m_cond_var.notify_all();
}
void recursive_shared_mutex::lock_shared()
{
std::unique_lock sync_lock(m_mtx);
m_cond_var.wait(sync_lock, [this] { return can_lock_shared(); });
increment_shared_lock();
}
bool recursive_shared_mutex::try_lock_shared()
{
std::unique_lock sync_lock(m_mtx);
if (can_lock_shared())
{
increment_shared_lock();
return true;
}
return false;
}
void recursive_shared_mutex::unlock_shared()
{
{
std::unique_lock sync_lock(m_mtx);
decrement_shared_lock();
}
m_cond_var.notify_all();
}
If a thread owns a shared lock, it may also obtain an exclusive lock without giving up its shared lock. (This of course requires that no other thread currently holds a shared or exclusive lock.)
Vice versa, a thread which owns an exclusive lock may obtain a shared lock.
Interestingly, these properties also allow locks to be upgradable/downgradable.
Temporarily upgrading a lock:

recursive_shared_mutex mtx;
foo bar;

mtx.lock_shared();
if (bar.read() == x)
{
    mtx.lock();
    bar.write(y);
    mtx.unlock();
}
mtx.unlock_shared();
Downgrading from an exclusive lock to a shared lock:

recursive_shared_mutex mtx;
foo bar;

mtx.lock();
bar.write(x);
mtx.lock_shared();
mtx.unlock();
while (bar.read() != y)
{
    // Something
}
mtx.unlock_shared();
I searched for a C++ read-write lock and came across this related question. We needed exactly such a shared_recursive_mutex to control access to our "database" class from multiple threads. So, for completeness: if you are looking for another implementation example (like I was), you may want to consider this link too: shared_recursive_mutex implementation using C++17 (on GitHub).
Features
C++17
Single Header
Dependency-free
It has a disadvantage, though: it relies on static thread_local members specialized on a PhantomType class via a template, so you can't really use this shared_recursive_mutex in multiple separate instances of the same (PhantomType) class. Try it if that is no restriction for you.
The following implementation supports first taking a unique_lock and then acquiring an additional shared_lock on the same thread:
#include <atomic>
#include <shared_mutex>
#include <thread>

class recursive_shared_mutex : public std::shared_mutex {
public:
    void lock() {
        if (owner_ != std::this_thread::get_id()) {
            std::shared_mutex::lock();
            owner_ = std::this_thread::get_id();
        }
        ++count_;
    }
    void unlock() {
        --count_;
        if (count_ == 0) {
            owner_ = std::thread::id();
            std::shared_mutex::unlock();
        }
    }
    void lock_shared() {
        if (owner_ != std::this_thread::get_id()) {
            std::shared_mutex::lock_shared();
        }
    }
    void unlock_shared() {
        if (owner_ != std::this_thread::get_id()) {
            std::shared_mutex::unlock_shared();
        }
    }
private:
    std::atomic<std::thread::id> owner_;
    std::atomic_uint32_t count_ = 0;
};
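A quick usage sketch (the function is hypothetical) showing the unique-then-shared pattern this class permits:

void update_then_read(recursive_shared_mutex& mtx) {
    mtx.lock();          // exclusive lock
    // ... write state ...
    mtx.lock_shared();   // no self-deadlock: owner_ is this thread, so this is a no-op
    // ... read state ...
    mtx.unlock_shared(); // also a no-op for the owning thread
    mtx.unlock();        // releases the exclusive lock
}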
High level
I want to call some functions with no return value in async mode, without waiting for them to finish. If I use std::async, the returned future object doesn't destruct until the task is over, and this makes the call effectively synchronous in my case.
Example
void sendMail(const std::string& address, const std::string& message)
{
    //sending the e-mail, which takes some time...
}

myResponseType processRequest(args...)
{
    //Do some processing and evaluate the address and the message...
    //Sending the e-mail async
    auto f = std::async(std::launch::async, sendMail, address, message);
    //returning the response ASAP to the client
    return myResponseType;
} //<-- I'm stuck here until the async call finishes, to allow f to be destructed,
  //    gaining no benefit from the async call.
My questions are:
1. Is there a way to overcome this limitation?
2. If (1) is no, should I implement a single thread that takes those "zombie" futures and waits on them?
3. If (1) and (2) are no, is there any other option besides building my own thread pool?
Note: I'd rather not use the thread + detach option (suggested by @galop1n), since creating a new thread has an overhead I wish to avoid, while std::async (at least on MSVC) uses an internal thread pool.
Thanks.
You can move the future into a global object, so when the local future's destructor runs it doesn't have to wait for the asynchronous thread to complete.
std::vector<std::future<void>> pending_futures;

myResponseType processRequest(args...)
{
    //Do some processing and evaluate the address and the message...
    //Sending the e-mail async
    auto f = std::async(std::launch::async, sendMail, address, message);
    // transfer the future's shared state to a longer-lived future
    pending_futures.push_back(std::move(f));
    //returning the response ASAP to the client
    return myResponseType;
}
N.B. This is not safe if the asynchronous thread refers to any local variables in the processRequest function.
While std::async (at least on MSVC) uses an internal thread pool.
That's actually non-conforming: the standard explicitly says tasks run with std::launch::async must run as if on a new thread, so any thread-local variables must not persist from one task to another. It doesn't usually matter, though.
Why not just start a thread and detach it, if you do not care about joining?

std::thread{ sendMail, address, message }.detach();

std::async is bound to the lifetime of the std::future it returns, and there is no alternative to that.
Putting the std::future in a waiting queue read by another thread would require the same safety mechanisms as a pool receiving new tasks, like a mutex around the container.
Your best option, then, is a thread pool consuming tasks pushed directly into a thread-safe queue, and it will not depend on a specific implementation.
Below is a thread pool implementation taking any callable and arguments. Its threads poll the queue; a better implementation would use condition variables (see the sketch after the code) (coliru):
#include <iostream>
#include <queue>
#include <memory>
#include <thread>
#include <mutex>
#include <functional>
#include <string>

struct ThreadPool {
    struct Task {
        virtual void Run() const = 0;
        virtual ~Task() {}
    };

    template < typename task_, typename... args_ >
    struct RealTask : public Task {
        RealTask( task_&& task, args_&&... args ) : fun_( std::bind( std::forward<task_>(task), std::forward<args_>(args)... ) ) {}
        void Run() const override {
            fun_();
        }
    private:
        decltype( std::bind(std::declval<task_>(), std::declval<args_>()... ) ) fun_;
    };

    template < typename task_, typename... args_ >
    void AddTask( task_&& task, args_&&... args ) {
        auto lock = std::unique_lock<std::mutex>{mtx_};
        using FinalTask = RealTask<task_, args_... >;
        q_.push( std::unique_ptr<Task>( new FinalTask( std::forward<task_>(task), std::forward<args_>(args)... ) ) );
    }

    ThreadPool() {
        for( auto & t : pool_ )
            t = std::thread( [=] {
                while ( true ) {
                    std::unique_ptr<Task> task;
                    {
                        auto lock = std::unique_lock<std::mutex>{mtx_};
                        if ( q_.empty() && stop_ )
                            break;
                        if ( q_.empty() )
                            continue;
                        task = std::move(q_.front());
                        q_.pop();
                    }
                    if (task)
                        task->Run();
                }
            } );
    }

    ~ThreadPool() {
        {
            auto lock = std::unique_lock<std::mutex>{mtx_};
            stop_ = true;
        }
        for( auto & t : pool_ )
            t.join();
    }

private:
    std::queue<std::unique_ptr<Task>> q_;
    std::thread pool_[8];
    std::mutex mtx_;
    volatile bool stop_ {};
};

void foo( int a, int b ) {
    std::cout << a << "." << b;
}

void bar( std::string const & s) {
    std::cout << s;
}

int main() {
    ThreadPool pool;
    for( int i{}; i!=42; ++i ) {
        pool.AddTask( foo, 3, 14 );
        pool.AddTask( bar, " - " );
    }
}
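As mentioned above, a condition variable removes the busy polling. A sketch of the changed pieces (add #include <condition_variable> and a std::condition_variable cv_ member next to mtx_; everything not shown stays as above):

struct ThreadPool {
    // ... Task, RealTask, q_, pool_, mtx_, stop_ as above, plus:
    std::condition_variable cv_;

    template < typename task_, typename... args_ >
    void AddTask( task_&& task, args_&&... args ) {
        {
            auto lock = std::unique_lock<std::mutex>{mtx_};
            q_.push( std::unique_ptr<Task>( new RealTask<task_, args_...>(
                std::forward<task_>(task), std::forward<args_>(args)... ) ) );
        }
        cv_.notify_one(); // wake one sleeping worker
    }

    ThreadPool() {
        for( auto & t : pool_ )
            t = std::thread( [this] {
                while ( true ) {
                    std::unique_ptr<Task> task;
                    {
                        auto lock = std::unique_lock<std::mutex>{mtx_};
                        // Sleep until there is work or we are shutting down.
                        cv_.wait( lock, [this] { return stop_ || !q_.empty(); } );
                        if ( q_.empty() )
                            break; // stop_ is set and no work is left
                        task = std::move(q_.front());
                        q_.pop();
                    }
                    task->Run();
                }
            } );
    }

    ~ThreadPool() {
        {
            auto lock = std::unique_lock<std::mutex>{mtx_};
            stop_ = true;
        }
        cv_.notify_all(); // wake every worker so it can exit
        for( auto & t : pool_ )
            t.join();
    }
};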
Rather than moving the future into a global object (and manually managing deletion of unused futures), you can actually move it into the local scope of the asynchronously called function.
"Let the async function take its own future", so to speak.
I have come up with this template wrapper which works for me (tested on Windows):
#include <future>

template<class Function, class... Args>
void async_wrapper(Function&& f, Args&&... args, std::future<void>& future,
                   std::future<void>&& is_valid, std::promise<void>&& is_moved) {
    is_valid.wait(); // Wait until the return value of std::async is written to "future"
    auto our_future = std::move(future); // Move "future" to a local variable
    is_moved.set_value(); // Only now can we leave void_async in the main thread

    // This is also used by std::async so that member function pointers work transparently
    auto functor = std::bind(f, std::forward<Args>(args)...);
    functor();
}
template<class Function, class... Args> // This is what you call instead of std::async
void void_async(Function&& f, Args&&... args) {
    std::future<void> future; // This is for std::async's return value
    // These are for our synchronization of moving "future" between threads
    std::promise<void> valid;
    std::promise<void> is_moved;
    auto valid_future = valid.get_future();
    auto moved_future = is_moved.get_future();

    // Here we pass "future" as a reference, so that async_wrapper
    // can later work with std::async's return value
    future = std::async(
        async_wrapper<Function, Args...>,
        std::forward<Function>(f), std::forward<Args>(args)...,
        std::ref(future), std::move(valid_future), std::move(is_moved)
    );
    valid.set_value(); // Unblock async_wrapper waiting for "future" to become valid
    moved_future.wait(); // Wait for "future" to actually be moved
}
I am a little surprised this works, because I thought the moved future's destructor would block until we leave async_wrapper: it has to wait for async_wrapper to return, yet it is waiting inside that very function. Logically that should be a deadlock, but it isn't.
I also tried adding a line at the end of async_wrapper to manually empty the future object:
our_future = std::future<void>();
This does not block either.
You can make your future a pointer and leak it, so its blocking destructor never runs. The following one-liner does exactly that:
std::make_unique<std::future<void>*>(new auto(std::async(std::launch::async, sendMail, address, message))).reset();
Live example
I have no idea what I'm doing, but this seems to work:
// :( http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n3451.pdf
template<typename T>
void noget(T&& in)
{
    static std::mutex vmut;
    static std::vector<T> vec;
    static std::thread getter;
    static std::mutex single_getter;
    if (single_getter.try_lock())
    {
        getter = std::thread([&]()->void
        {
            size_t size;
            for(;;)
            {
                do
                {
                    vmut.lock();
                    size = vec.size();
                    if(size > 0)
                    {
                        T target = std::move(vec[size-1]);
                        vec.pop_back();
                        vmut.unlock();
                        // cerr << "getting!" << endl;
                        target.get();
                    }
                    else
                    {
                        vmut.unlock();
                    }
                } while(size > 0);
                // ¯\_(ツ)_/¯
                std::this_thread::sleep_for(std::chrono::milliseconds(100));
            }
        });
        getter.detach();
    }
    vmut.lock();
    vec.push_back(std::move(in));
    vmut.unlock();
}
It creates a dedicated getter thread for each type of future you throw at it (e.g. if you give it a future<int> and a future<string>, you'll have 2 threads; if you then give it 100 more future<int>s, you'll still only have 2 threads). When there's a future you don't want to deal with, just do noget(fut); and noget(std::async([]()->void{...})); works just fine too, no block, it seems. Warning: do not try to get the value from a future after using noget() on it. That's probably UB and asking for trouble.
I understand the concept of thread safety. I am looking for advice to simplify thread safety when trying to protect a single variable.
Say I have a variable:
double aPass;
and I want to protect this variable, so I create a mutex:
pthread_mutex_t aPass_lock;
Now there are two good ways I can think of doing this, but they both have annoying disadvantages. The first is to make a thread-safe class to hold the variable:
class aPass {
public:
    aPass() {
        pthread_mutex_init(&aPass_lock, NULL);
        aPass_ = 0;
    }
    void get(double & setMe) {
        pthread_mutex_lock(&aPass_lock);
        setMe = aPass_;
        pthread_mutex_unlock(&aPass_lock);
    }
    void set(const double setThis) {
        pthread_mutex_lock(&aPass_lock);
        aPass_ = setThis;
        pthread_mutex_unlock(&aPass_lock);
    }
private:
    double aPass_;
    pthread_mutex_t aPass_lock;
};
Now this will keep aPass totally safe; nothing can mistakenly touch it, yay! However, look at all that mess, and imagine the confusion when accessing it. Gross.
The other way is to have both accessible, and to make sure you lock the mutex before you use aPass:
pthread_mutex_lock(&aPass_lock);
// do something with aPass
pthread_mutex_unlock(&aPass_lock);
But what if someone new comes onto the project? What if you forget just once to lock it? I don't like debugging thread problems; they are hard.
Is there a good way (using pthreads, because I have to use QNX, which has little Boost support) to lock single variables without needing a big class, and that is safer than just creating a mutex to go with each one?
std::atomic<double> aPass;
provided you have C++11.
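A minimal usage sketch (plain loads and stores on a std::atomic are race-free; note that on some platforms std::atomic<double> may be implemented with an internal lock):

#include <atomic>

std::atomic<double> aPass{0.0};

void writer() { aPass.store(3.14); }      // or simply: aPass = 3.14;
double reader() { return aPass.load(); }  // or simply: double x = aPass;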
To elaborate on my solution, it would be something like this:
template <typename ThreadSafeDataType>
class ThreadSafeData {
    //....
private:
    ThreadSafeDataType data;
    mutex mut;
};

class apass : public ThreadSafeData<int> { };
Additionally, to make it unique, it might be best to make all operators and members static. For this to work you need to use CRTP, i.e.:

template <typename ThreadSafeDataType, class DerivedDataClass>
class ThreadSafeData {
    //....
};

class apass : public ThreadSafeData<int, apass> { };
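A fleshed-out sketch of that idea (member names are illustrative, and std::mutex is used for brevity; a pthread mutex works the same way). Because the base is specialized on the derived class, each derived class gets its own static data and mutex:

#include <mutex>

template <typename T, class DerivedDataClass>
class ThreadSafeData {
public:
    static void set(const T& value) {
        std::lock_guard<std::mutex> lock(mut);
        data = value;
    }
    static T get() {
        std::lock_guard<std::mutex> lock(mut);
        return data;
    }
private:
    static T data;
    static std::mutex mut;
};

template <typename T, class D> T ThreadSafeData<T, D>::data{};
template <typename T, class D> std::mutex ThreadSafeData<T, D>::mut;

class apass : public ThreadSafeData<double, apass> {};
// usage: apass::set(0.5); double x = apass::get();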
You can easily make your own class that locks the mutex on construction, and unlocks it on destruction. This way, no matter what happens the mutex will be freed on return, even if an exception is thrown, or any path is taken.
class MutexGuard
{
    MutexType & m_Mutex;
public:
    inline MutexGuard(MutexType & mutex)
        : m_Mutex(mutex)
    {
        m_Mutex.lock();
    }
    inline ~MutexGuard()
    {
        m_Mutex.unlock();
    }
};

class TestClass
{
    MutexType m_Mutex;
    double m_SharedVar;
public:
    TestClass()
        : m_SharedVar(4.0)
    { }

    void Function1()
    {
        MutexGuard scopedLock(m_Mutex); //lock the mutex
        m_SharedVar += 2345;
        //mutex automatically unlocked
    }
    void Function2()
    {
        MutexGuard scopedLock(m_Mutex); //lock the mutex
        m_SharedVar *= 234;
        throw std::runtime_error("Mutex automatically unlocked");
    }
};
The variable m_SharedVar is protected by mutual exclusion between Function1() and Function2(), and the mutex will always be unlocked on return.
Boost has built-in types to accomplish this: boost::mutex::scoped_lock and boost::lock_guard.
You can create a class which acts as a generic wrapper around your variable, synchronising access to it.
Add operator overloading for assignment and you are done.
Consider using the RAII idiom. The code below is just the idea; it's not tested:

template<typename T, typename U>
struct APassHelper : boost::noncopyable
{
    APassHelper(T& v, U& mutex) : v_(v), mutex_(mutex) {
        pthread_mutex_lock(&mutex_);
    }
    ~APassHelper() {
        pthread_mutex_unlock(&mutex_);
    }
    void UpdateAPass(T t) {
        v_ = t;
    }
private:
    T& v_;
    U& mutex_;
};

double aPass;
pthread_mutex_t aPass_lock;

APassHelper<double, pthread_mutex_t> temp(aPass, aPass_lock);
temp.UpdateAPass(10);
You can modify your aPass class by using operators instead of get/set:
class aPass {
public:
    aPass() {
        pthread_mutex_init(&aPass_lock, NULL);
        aPass_ = 0;
    }
    operator double () const {
        double setMe;
        pthread_mutex_lock(&aPass_lock);
        setMe = aPass_;
        pthread_mutex_unlock(&aPass_lock);
        return setMe;
    }
    aPass& operator = (double setThis) {
        pthread_mutex_lock(&aPass_lock);
        aPass_ = setThis;
        pthread_mutex_unlock(&aPass_lock);
        return *this;
    }
private:
    double aPass_;
    mutable pthread_mutex_t aPass_lock; // mutable so the const conversion operator can lock
};
Usage:
aPass a;
a = 0.5;
double b = a;
This could of course be templated to support other types. Note however that a mutex is overkill in this case. Generally, memory barriers are enough when protecting loads and stores of small data-types. If possible you should use C++11 std::atomic<double>.
I'm trying to implement a blocking queue. The main parts are the following (it's a kind of educational task):
template <typename T>
class Blocking_queue
{
public:
    std::queue<T> _queue;
    boost::mutex _mutex;
    boost::condition_variable _cvar;

    void Put(T& object);
    T Get();
    void Disable();
};

template<typename T>
void Blocking_queue<T>::Put(T& object)
{
    boost::mutex::scoped_lock lock(_mutex);
    _queue.push(object);
    lock.unlock();
    _cvar.notify_one();
}

template<typename T>
T Blocking_queue<T>::Get()
{
    boost::mutex::scoped_lock lock(_mutex);
    while(_queue.empty())
    {
        _cvar.wait(lock);
    }
    T last_el = _queue.front();
    _queue.pop();
    return last_el;
}

template<typename T>
void Blocking_queue<T>::Disable()
{
}
And I need to implement a function Disable() "releasing" all waiting threads (as written in the task). The problem is that I don't fully understand what "releasing" means in these terms, and which methods I should apply. So my idea is the following: when Disable() is called, we should call some method for the current thread in this place (inside the loop):
while(_queue.empty())
{
    //here
    _cvar.wait(lock);
}
which will release the current thread. Am I right? Thanks.
"releasing all threads that are waiting" is an operation that is hardly useful. What do you want to do with this operation?
What is useful, is to shutdown the queue, thus every thread waiting on the queue will be unblocked and every thread that is going to call Get() will return immediately. To implement such a behaviour, simply add a shutdown flag to the queue and wait for "not empty or shutdown":
template<typename T>
void Blocking_queue<T>::Disable()
{
    boost::mutex::scoped_lock lock(_mutex);
    _shutdown = true;
    _cvar.notify_all();
}
To indicate to the caller of Get() that there is no data, you could return a pair with an additional bool, or throw a special exception. There is no way to return null, as not all types T have a null value.
template<typename T>
std::pair< bool, T > Blocking_queue<T>::Get()
{
    boost::mutex::scoped_lock lock(_mutex);
    while (_queue.empty() && !_shutdown)
        _cvar.wait(lock);
    if (_shutdown)
        return std::make_pair( false, T() );
    T last_el = _queue.front();
    _queue.pop();
    return std::make_pair( true, last_el );
}
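A consumer loop using this shape of Get() might look like the following (process() is a placeholder for whatever work you do per item):

template<typename T>
void consume(Blocking_queue<T>& q)
{
    for (;;)
    {
        std::pair<bool, T> item = q.Get();
        if (!item.first)      // queue was shut down via Disable()
            break;
        process(item.second); // placeholder for real work
    }
}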