I am working on a processing framework where callbacks are registered to events. To ensure that no callback is invoked on an object that has already been deleted, I would like to use weak capture rather than capture by reference. It was no problem to make this work using C++14 and shared_from_this(), but how is this correctly achieved using C++17 and weak_from_this()?
The example below prints nothing when C++17 is used. I am using g++ 6.3.0-18.
#define CXX17 // When this is defined, nothing is printed

#ifdef CXX17
# include <experimental/memory>
# include <experimental/functional>
template <typename T>
using enable_shared_from_this = std::experimental::enable_shared_from_this<T>;
#else
# include <memory>
# include <functional>
template <typename T>
using enable_shared_from_this = std::enable_shared_from_this<T>;
#endif

#include <thread>
#include <mutex>
#include <condition_variable>
#include <atomic>
#include <chrono> // for the std::chrono durations passed to sleep_for
#include <iostream>

struct A : enable_shared_from_this<A> {
    int a;
    A() : a(7) {}

    auto getptr() {
#ifdef CXX17
        return this->weak_from_this();
#else
        auto sptr = shared_from_this();
        auto wptr = std::weak_ptr<decltype(sptr)::element_type>(sptr);
        sptr.reset(); // Drop strong referencing
        return wptr;
#endif
    }
};

std::condition_variable condition;
std::mutex mutex;
std::atomic<bool> start0{false};
std::atomic<bool> start1{false};
std::shared_ptr<A> g_a;

static void thread_func0() {
    auto w_a = g_a->getptr();
    std::unique_lock<std::mutex> lock {mutex};
    condition.wait(lock, [&]() {
        return start0.load();
    });
    std::this_thread::sleep_for(std::chrono::microseconds(10));
    if (auto t = w_a.lock()) {
        std::cout << t->a << std::endl;
    }
}

static void thread_func1() {
    std::unique_lock<std::mutex> lock {mutex};
    condition.wait(lock, [&]() {
        return start1.load();
    });
    std::this_thread::sleep_for(std::chrono::microseconds(10000));
    g_a = nullptr;
}

int main() {
    g_a = std::make_shared<A>();
    std::thread thread0(thread_func0);
    std::thread thread1(thread_func1);
    start0 = true;
    start1 = true;
    condition.notify_all();
    thread0.join();
    thread1.join();
    return 0;
}
Here's a much more reduced example:
#include <experimental/memory>
#include <iostream>

template <typename T>
using enable_shared_from_this = std::experimental::enable_shared_from_this<T>;

struct A : enable_shared_from_this<A> {
    int a;
    A() : a(7) {}
};

int main() {
    auto sp = std::make_shared<A>();
    auto wp = sp->weak_from_this();
    if (auto s = wp.lock()) {
        std::cout << s->a << std::endl;
    }
}
This prints nothing. Why? The reason is ultimately the same reason it's std::enable_shared_from_this and not some other type that you could provide yourself: the shared_ptr class needs to opt in to this functionality. The new functionality is experimental, so std::shared_ptr does not opt in, and the underlying weak_ptr is never initialized; wp is always an "empty" weak_ptr here.
std::experimental::shared_ptr, on the other hand, does opt in to this functionality. You need to use the shared_ptr corresponding to your enable_shared_from_this, which is std::experimental::shared_ptr.
There's no std::experimental::make_shared (or at least none that I could find), but the opt-in mechanism isn't based on make_shared anyway; it's triggered by any shared_ptr construction. So if you change:
auto sp = std::make_shared<A>();
to:
auto sp = std::experimental::shared_ptr<A>(new A);
Then the opt-in mechanism matches the shared_ptr type and does the right thing: you get a valid weak_ptr (a std::experimental::weak_ptr), lock() gives you shared ownership of the underlying A, and the program prints 7.
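For completeness, here is the reduced example with that single change applied; as described above, it prints 7:
#include <experimental/memory>
#include <iostream>

template <typename T>
using enable_shared_from_this = std::experimental::enable_shared_from_this<T>;

struct A : enable_shared_from_this<A> {
    int a;
    A() : a(7) {}
};

int main() {
    // std::experimental::shared_ptr opts in to the experimental
    // enable_shared_from_this, so the embedded weak_ptr gets initialized.
    auto sp = std::experimental::shared_ptr<A>(new A);
    auto wp = sp->weak_from_this();
    if (auto s = wp.lock()) {
        std::cout << s->a << std::endl; // prints 7
    }
}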
When trying to learn threads, most examples suggest that I should make the std::mutex, std::condition_variable, and std::queue global when sharing data between two different threads, and that works perfectly fine for a simple scenario. However, in real-world, bigger applications this can soon get complicated, as I may lose track of the global variables, and since I am using C++ this does not seem to be an appropriate option (maybe I am wrong).
My question is: if I have a producer/consumer problem and I want to put both in separate classes, they will be sharing data, so I would need to pass them the same mutex and queue. How do I share these two variables between them without making them global, and what is the best practice for creating the threads?
Here is a working example of my basic code using global variables.
#include <iostream>
#include <thread>
#include <mutex>
#include <queue>
#include <chrono> // for std::chrono::seconds used with sleep_for
#include <condition_variable>

std::queue<int> buffer;
std::mutex mtx;
std::condition_variable cond;
const int MAX_BUFFER_SIZE = 50;

class Producer
{
public:
    void run(int val)
    {
        while (true) {
            std::unique_lock locker(mtx);
            cond.wait(locker, []() {
                return buffer.size() < MAX_BUFFER_SIZE;
            });
            buffer.push(val);
            std::cout << "Produced " << val << std::endl;
            val--;
            locker.unlock();
            // std::this_thread::sleep_for(std::chrono::seconds(2));
            cond.notify_one();
        }
    }
};

class Consumer
{
public:
    void run()
    {
        while (true) {
            std::unique_lock locker(mtx);
            cond.wait(locker, []() {
                return buffer.size() > 0;
            });
            int val = buffer.front();
            buffer.pop();
            std::cout << "Consumed " << val << std::endl;
            locker.unlock();
            std::this_thread::sleep_for(std::chrono::seconds(1));
            cond.notify_one();
        }
    }
};

int main()
{
    std::thread t1(&Producer::run, Producer(), MAX_BUFFER_SIZE);
    std::thread t2(&Consumer::run, Consumer());
    t1.join();
    t2.join();
    return 0;
}
Typically, you want to have synchronisation objects packaged alongside the resource(s) they are protecting.
A simple way to do that in your case would be a class that contains the buffer, the mutex, and the condition variable. All you really need then is to share a reference to one such object with both the Consumer and the Producer.
Here's one way to go about it while keeping most of your code as-is:
class Channel {
    std::queue<int> buffer;
    std::mutex mtx;
    std::condition_variable cond;

    // Since we know `Consumer` and `Producer` are the only entities
    // that will ever access buffer, mtx and cond, it's better to
    // not provide *any* public (direct or indirect) interface to
    // them, and use `friend` to grant access.
    friend class Producer;
    friend class Consumer;

public:
    // ...
};

class Producer {
    Channel* chan_;

public:
    explicit Producer(Channel* chan) : chan_(chan) {}
    // ...
};

class Consumer {
    Channel* chan_;

public:
    explicit Consumer(Channel* chan) : chan_(chan) {}
    // ...
};

int main() {
    Channel channel;

    std::thread t1(&Producer::run, Producer(&channel), MAX_BUFFER_SIZE);
    std::thread t2(&Consumer::run, Consumer(&channel));

    t1.join();
    t2.join();
}
However (thanks for the prompt, @Ext3h), a better way to go about this is to encapsulate access to the synchronisation objects as well, i.e. keep them hidden inside the class. At that point, Channel becomes what is commonly known as a synchronised queue.
Here's what I'd subjectively consider a nicer-looking implementation of your example code, with a few miscellaneous improvements thrown in as well:
#include <cassert>
#include <iostream>
#include <thread>
#include <mutex>
#include <queue>
#include <optional>
#include <condition_variable>

template<typename T>
class Channel {
    static constexpr std::size_t default_max_length = 10;

public:
    using value_type = T;

    explicit Channel(std::size_t max_length = default_max_length)
        : max_length_(max_length) {}

    std::optional<value_type> next() {
        std::unique_lock locker(mtx_);

        cond_.wait(locker, [this]() {
            return !buffer_.empty() || closed_;
        });

        if (buffer_.empty()) {
            assert(closed_);
            return std::nullopt;
        }

        value_type val = buffer_.front();
        buffer_.pop();
        cond_.notify_one();
        return val;
    }

    void put(value_type val) {
        std::unique_lock locker(mtx_);

        cond_.wait(locker, [this]() {
            return buffer_.size() < max_length_;
        });

        buffer_.push(std::move(val));
        cond_.notify_one();
    }

    void close() {
        std::scoped_lock locker(mtx_);
        closed_ = true;
        cond_.notify_all();
    }

private:
    std::size_t max_length_;
    std::queue<value_type> buffer_;
    bool closed_ = false;
    std::mutex mtx_;
    std::condition_variable cond_;
};
void producer_main(Channel<int>& chan, int val) {
    // Note: a loop with no observable side effects is undefined behavior,
    // so loop on a condition rather than spinning on while(true).
    while (val >= 0) {
        chan.put(val);
        std::cout << "Produced " << val << std::endl;
        val--;
    }
}
void consumer_main(Channel<int>& chan) {
    bool running = true;
    while (running) {
        auto val = chan.next();
        if (!val) {
            running = false;
            continue;
        }
        std::cout << "Consumed " << *val << std::endl;
    }
}

int main()
{
    // You are responsible for ensuring the channel outlives both threads.
    Channel<int> channel;

    std::thread producer_thread(producer_main, std::ref(channel), 13);
    std::thread consumer_thread(consumer_main, std::ref(channel));

    producer_thread.join();
    channel.close();
    consumer_thread.join();

    return 0;
}
What are the drawbacks of, or errors in, the following approach of using a non-owning std::unique_ptr with a custom deleter to arrange a critical section?
#include <memory>
#include <shared_mutex>
#include <optional>
#include <variant>
#include <utility> // for std::as_const
#include <cassert>

struct Data
{
    std::optional<int> i;
};

struct DataLocker
{
    std::variant<std::unique_lock<std::shared_mutex>, std::shared_lock<std::shared_mutex>> lock;

    void operator () (const Data *)
    {
        std::visit([] (auto & lock) { if (lock) lock.unlock(); }, lock);
    }
};

struct DataHolder
{
    std::unique_ptr<Data, DataLocker> getLockedData()
    {
        return {&data, {std::unique_lock<std::shared_mutex>{m}}};
    }

    std::unique_ptr<const Data, DataLocker> getLockedData() const
    {
        return {&data, {std::shared_lock<std::shared_mutex>{m}}};
    }

private:
    mutable std::shared_mutex m;
    Data data;
};

#include <iostream>
#include <thread>

int main()
{
    DataHolder d;

    auto producer = [&d]
    {
        d.getLockedData()->i = 123;
    };
    auto consumer = [&d = std::as_const(d)]
    {
        for (;;) {
            if (const auto i = d.getLockedData()->i) {
                std::cout << *i << std::endl;
                return;
            }
        }
    };

    std::thread p(producer);
    std::thread c(consumer);
    p.join();
    c.join();
}
One corner case, where the writer reset()s the pointer but never destroys the std::unique_ptr itself, is covered by the explicit unlock in the deleter's operator().
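For illustration, that corner case is the following usage pattern (a sketch reusing the DataHolder above; the variable names are made up):
int main()
{
    DataHolder d;

    auto locked = d.getLockedData(); // unique_lock acquired here
    locked->i = 42;
    locked.reset();                  // deleter runs: the mutex is unlocked now

    // `locked` itself may live on (e.g. as a long-lived member), but the
    // mutex is no longer held, because operator() unlocked it on reset().
}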
I have some tasks that need to be performed asynchronously, and the server can't close while there are still tasks running. So I'm trying to store the futures returned by std::async in a list, but I also don't want to get an infinitely growing list of those. So I want to remove the futures as they're completed.
Here's roughly what I'm trying to do:
// this is a member of the server class
std::list<std::future<void>> pending;

std::list<std::future<void>>::iterator iter = ???;
pending.push_back( std::async( std::launch::async, [iter]()
{
    doSomething();
    pending.remove( iter );
} ) );
Here, iter needs to point to the newly inserted element, but I can't get it before inserting the element (there is no iterator yet), nor after (since it has already been passed to the lambda by value). I could make a shared_ptr to store the iterator, but that seems to be way overkill.
Is there a better pattern for this?
Update: there seems to be another issue with this. When a future attempts to remove itself from the list, it is essentially waiting for itself to complete: erasing the element destroys the future, and the destructor of a future returned by std::async blocks until the task has finished. So everything locks up. Oops!
On top of that, the list's destructor empties the list before calling the element destructors.
It appears you can just append a default-constructed std::future to the list, get an iterator to it, and then move your future in.
Mind you, that non-mutex-protected remove(iter) looks awfully dangerous.
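A quick sketch of the append-then-move idea (illustrative names; as noted, real code must still guard the list with a mutex, and the self-removal deadlock from the update remains):
#include <future>
#include <iterator>
#include <list>

void doSomething();

std::list<std::future<void>> pending;

void add_task() {
    // Append a default-constructed future first, so that an iterator
    // to the new element exists before the task is created.
    pending.emplace_back();
    auto iter = std::prev(pending.end());

    // The lambda captures the iterator by value; the real future is
    // then moved into the slot that was reserved for it.
    *iter = std::async(std::launch::async, [iter]() {
        doSomething();
        // NOTE: erasing `iter` from in here still has the self-wait
        // and missing-mutex problems discussed above.
    });
}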
Here's one way. I don't think this one needs futures:
#include <unordered_set>
#include <condition_variable>
#include <mutex>
#include <thread>

struct server
{
    std::mutex pending_mutex;
    std::condition_variable pending_condition;
    std::unordered_set<unsigned> pending;
    unsigned next_id = 0;

    void add_task()
    {
        auto lock = std::unique_lock(pending_mutex);
        auto id = next_id++;
        auto t = std::thread([this, id]{
            this->doSomething();
            this->notify_complete(id);
        });
        t.detach(); // or we could store it somewhere, e.g. pending could be a map
        pending.insert(id);
    }

    void doSomething();

    void notify_complete(unsigned id)
    {
        auto lock = std::unique_lock(pending_mutex);
        pending.erase(id);
        if (pending.empty())
            pending_condition.notify_all();
    }

    void wait_all_complete()
    {
        auto none_left = [&] { return pending.empty(); };

        auto lock = std::unique_lock(pending_mutex);
        pending_condition.wait(lock, none_left);
    }
};

int main()
{
    auto s = server();
    s.add_task();
    s.add_task();
    s.add_task();
    s.wait_all_complete();
}
Here it is with futures, in case that's important:
#include <unordered_map>
#include <condition_variable>
#include <mutex>
#include <thread>
#include <future>

struct server
{
    std::mutex pending_mutex;
    std::condition_variable pending_condition;
    std::unordered_map<unsigned, std::future<void>> pending;
    unsigned next_id = 0;

    void add_task()
    {
        auto lock = std::unique_lock(pending_mutex);
        auto id = next_id++;
        auto f = std::async(std::launch::async, [this, id]{
            this->doSomething();
            this->notify_complete(id);
        });
        pending.emplace(id, std::move(f));
    }

    void doSomething();

    void notify_complete(unsigned id)
    {
        auto lock = std::unique_lock(pending_mutex);
        pending.erase(id);
        if (pending.empty())
            pending_condition.notify_all();
    }

    void wait_all_complete()
    {
        auto none_left = [&] { return pending.empty(); };

        auto lock = std::unique_lock(pending_mutex);
        pending_condition.wait(lock, none_left);
    }
};

int main()
{
    auto s = server();
    s.add_task();
    s.add_task();
    s.add_task();
    s.wait_all_complete();
}
Here's the list version:
#include <list>
#include <condition_variable>
#include <mutex>
#include <thread>
#include <future>

struct server
{
    using pending_list = std::list<std::future<void>>;
    using id_type = pending_list::const_iterator;

    std::mutex pending_mutex;
    std::condition_variable pending_condition;
    pending_list pending;

    void add_task()
    {
        auto lock = std::unique_lock(pending_mutex);
        // Reserve a slot with a (redundantly constructed) default future;
        // its iterator becomes the task's id.
        auto id = pending.emplace(pending.end());
        auto f = std::async(std::launch::async, [this, id]{
            this->doSomething();
            this->notify_complete(id);
        });
        *id = std::move(f);
    }

    void doSomething();

    void notify_complete(id_type id)
    {
        auto lock = std::unique_lock(pending_mutex);
        pending.erase(id);
        if (pending.empty())
            pending_condition.notify_all();
    }

    void wait_all_complete()
    {
        auto none_left = [&] { return pending.empty(); };

        auto lock = std::unique_lock(pending_mutex);
        pending_condition.wait(lock, none_left);
    }
};

int main()
{
    auto s = server();
    s.add_task();
    s.add_task();
    s.add_task();
    s.wait_all_complete();
}
I wrote this sample program to mimic what I'm trying to do in a larger program.
I have some data that will come from the user and be passed into a thread for some processing. I am using mutexes around the data and flags to signal when there is data.
Using the lambda expression, is a pointer to *this sent to the thread? I seem to be getting the behavior I expect in the cout statement.
Are the mutexes used properly around the data?
Is putting the atomics and mutexes as private members of the class a good move?
FooClass.h
#pragma once
#include <atomic>
#include <thread>
#include <vector>
#include <mutex>

class Foo
{
public:
    Foo();
    ~Foo();
    void StartThread();
    void StopThread();
    void SendData();

private:
    std::atomic<bool> dataFlag;
    std::atomic<bool> runBar;
    void bar();
    std::thread t1;
    std::vector<int> data;
    std::mutex mx;
};
FooClass.cpp
#include "FooClass.h"
#include <thread>
#include <string>
#include <iostream>
Foo::Foo()
{
dataFlag = false;
}
Foo::~Foo()
{
StopThread();
}
void Foo::StartThread()
{
runBar = true;
t1 = std::thread([=] {bar(); });
return;
}
void Foo::StopThread()
{
runBar = false;
if(t1.joinable())
t1.join();
return;
}
void Foo::SendData()
{
mx.lock();
for (int i = 0; i < 5; ++i) {
data.push_back(i);
}
mx.unlock();
dataFlag = true;
}
void Foo::bar()
{
while (runBar)
{
if(dataFlag)
{
mx.lock();
for(auto it = data.begin(); it < data.end(); ++it)
{
std::cout << *it << '\n';
}
mx.unlock();
dataFlag = false;
}
}
}
main.cpp
#include "FooClass.h"
#include <iostream>
#include <string>
int main()
{
Foo foo1;
std::cout << "Type anything to end thread" << std::endl;
foo1.StartThread();
foo1.SendData();
// type something to end threads
char a;
std::cin >> a;
foo1.StopThread();
return 0;
}
You ensure that the thread is joined using RAII techniques? Check.
All data access/modification is either protected through atomics or mutexes? Check.
Mutex locking uses std::lock_guard? Nope. std::lock_guard wraps your lock() and unlock() calls with RAII, which ensures that the lock is released even if an exception is thrown while it is held.
Is putting the atomics and mutexes as private members of the class a good move?
It's neither good nor bad, but in this scenario, where Foo is a wrapper around a std::thread that does work and controls the synchronization, it makes sense.
Using the lambda expression, is a pointer to *this sent to the thread?
Yes, you can also do t1 = std::thread([this]{bar();}); to make it more explicit.
As it stands, with your dataFlag assignments outside the locks, you may encounter problems. Suppose you call SendData twice: bar processes the first batch but is halted before setting dataFlag = false; the second call then adds its data and sets the flag to true, only to have bar set it back to false. You now have data that has been "sent", but bar doesn't think there is anything left to process.
There may be other tricky situations, but this is just one example; moving the flag assignment inside the lock clears up the problem.
For example, your SendData should look like:
void Foo::SendData()
{
    std::lock_guard<std::mutex> guard(mx);
    for (int i = 0; i < 5; ++i) {
        data.push_back(i);
    }
    dataFlag = true;
}
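By the same reasoning, bar() should clear the flag while still holding the lock. A sketch of the corresponding change (same class as above):
void Foo::bar()
{
    while (runBar)
    {
        if (dataFlag)
        {
            std::lock_guard<std::mutex> guard(mx);
            for (auto it = data.begin(); it != data.end(); ++it)
            {
                std::cout << *it << '\n';
            }
            // Clear the flag before releasing the lock, mirroring SendData.
            dataFlag = false;
        }
    }
}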
I am implementing a thread pool that has a push_back method taking a callable object. However, I am getting an error when moving a packaged_task into a function object using the lambda trick.
class Threadpool {
public:
    // ...
    ::std::deque<::std::function<void()>> _work_queue;
    ::std::mutex _work_queue_mutex;
    ::std::condition_variable _worker_signal;

    template <typename CallableT>
    ::std::future<::std::result_of_t<CallableT()>> push_back(CallableT&&);
};

template <typename CallableT>
::std::future<::std::result_of_t<CallableT()>> Threadpool::push_back(CallableT&& callable) {
    ::std::packaged_task<::std::result_of_t<CallableT()>()> task(::std::move(callable));
    auto fu = task.get_future();
    {
        ::std::unique_lock<::std::mutex> locker(_work_queue_mutex);
        // COMPILE ERROR
        _work_queue.emplace_back([task = ::std::move(task)]() { task(); });
    }
    _worker_signal.notify_one();
    return fu;
}

Threadpool pool;
pool.push_back( []() { ::std::cout << "hello\n"; } );
The compiler complains about the emplace_back with error: no match for call to '(const std::packaged_task<void()>) ()' on the line _work_queue.emplace_back([task=::std::move(task)]() { task(); });. I don't understand what's going wrong, since as far as I know packaged_task is only movable and I am capturing the task by move.
There are two issues with your example.
Indeed, std::packaged_task is only movable, so [task=std::move(task)] is correct. But on top of that, std::packaged_task::operator() requires a non-const object: https://en.cppreference.com/w/cpp/thread/packaged_task/operator()
So the lambda must be declared mutable to allow calling task():
[task=std::move(task)] () mutable { task(); };
But even so, the lambda object is only movable and not copyable, while std::function requires a copyable callable: https://en.cppreference.com/w/cpp/utility/functional/function
So one of the solutions is to wrap the packaged_task in a copyable smart pointer, as follows:
#include <mutex>
#include <deque>
#include <functional>
#include <condition_variable>
#include <future>
#include <iostream>
#include <memory> // for std::make_shared
#include <type_traits>

class Threadpool
{
public:
    // ...
    std::deque<std::function<void()>> _work_queue;
    std::mutex _work_queue_mutex;
    std::condition_variable _worker_signal;

    template <typename CallableT>
    std::future<std::result_of_t<CallableT()>> push_back(CallableT&&);
};

template <typename CallableT>
std::future<std::result_of_t<CallableT()>> Threadpool::push_back(CallableT&& callable)
{
    auto task = std::make_shared<std::packaged_task<std::result_of_t<CallableT()>()>>(std::move(callable));
    auto fu = task->get_future();
    {
        std::unique_lock<std::mutex> locker(_work_queue_mutex);
        // The shared_ptr is copyable, so the lambda (and the std::function
        // holding it) is copyable too, and (*task)() is a non-const call.
        _work_queue.emplace_back([task]() { (*task)(); });
    }
    _worker_signal.notify_one();
    return fu;
}

int main()
{
    Threadpool pool;
    pool.push_back( []() { std::cout << "hello\n"; } );
}
Demo: https://gcc.godbolt.org/z/aEfvo7Mhz
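As an aside (not part of the original question's constraints): if C++23 is available, std::move_only_function stores move-only callables directly, so the shared_ptr workaround is unnecessary. A minimal sketch, assuming a C++23 standard library:
#include <deque>
#include <functional> // std::move_only_function (C++23)
#include <future>
#include <type_traits>
#include <utility>

std::deque<std::move_only_function<void()>> work_queue;

template <typename CallableT>
auto push_back(CallableT&& callable)
{
    // std::result_of was removed in C++20, so use std::invoke_result_t.
    std::packaged_task<std::invoke_result_t<CallableT>()> task(
        std::forward<CallableT>(callable));
    auto fu = task.get_future();
    // move_only_function accepts the move-only mutable lambda directly.
    work_queue.emplace_back([task = std::move(task)]() mutable { task(); });
    return fu;
}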