Why is there no std::future::try_wait()?

Given that there are std::future::wait_for/until(), I don't see why there is no std::future::try_wait(). I'm currently writing a producer-consumer example, and I want to use std::future as a convenient way to signal the consumer threads to return. My consumer code is like
void consume(std::future<void>& stop) {
    while (!stop.try_wait()) { // alas, no such method
        // try consuming an item in queue
    }
}
I'm thinking of simulating try_wait() with a zero-duration wait_for(), which is really ugly. As a side question: are there any other convenient ways to signal the consumer threads to return?

std::experimental::future adds .is_ready() and .then( F ) methods.
is_ready is probably your try_wait (without a timeout).
wait_for, as noted, gives you the functionality of try_wait in practice.
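If you want a try_wait()-style check today, a tiny helper over a zero-duration wait_for() does the job (a minimal sketch; it assumes the future is valid(), and note that a deferred future reports future_status::deferred rather than ready):
#include <chrono>
#include <future>
// sketch: "is the result ready yet?" without blocking, via a zero-duration wait
template <class T>
bool is_ready(std::future<T> const& f) {
    return f.wait_for(std::chrono::seconds(0)) == std::future_status::ready;
}
// usage in the question's consumer:
// while (!is_ready(stop)) { /* try consuming an item in queue */ }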
std::future is not designed as a signaling mechanism, even if it can be used as one. If you want a signaling mechanism, create one using a condition variable, a mutex, and state that stores the state of the signals (possibly combining them).
struct state {
    bool stop = false;
    unsigned some_value = 7;
    friend auto as_tie( state const& s ) {
        return std::tie(s.stop, s.some_value);
    }
    friend bool operator==( state const& lhs, state const& rhs ) {
        return as_tie(lhs) == as_tie(rhs);
    }
};
template<class State, class Cmp=std::equal_to<State>>
struct condition_state {
    // gets a copy of the current state:
    State get_state() const {
        auto l = lock();
        return state;
    }
    // returns a state that is different from `in`:
    State next_state(State const& in) const {
        auto l = lock();
        cv.wait( l, [&]{ return !Cmp{}(in, state); } );
        return state;
    }
    // runs f on the state once it changes from old.
    // does this while holding the mutex, so be careful.
    template<class F>
    auto consume_state( F&& f, State old ) {
        auto l = lock();
        cv.wait( l, [&]{ return !Cmp{}(old, state); } );
        return std::forward<F>(f)( state );
    }
    // runs f on the state once it changes:
    template<class F>
    auto consume_state( F&& f ) {
        return consume_state( std::forward<F>(f), get_state() );
    }
    // calls f on the state, then notifies everyone to check if
    // it has changed:
    template<class F>
    void change_state( F&& f ) {
        {
            auto l = lock();
            std::forward<F>(f)( state );
        }
        cv.notify_all();
    }
    // sets the value of state to in:
    void set_state( State in ) {
        change_state( [&](State& state) {
            state = std::move(in);
        } );
    }
private:
    auto lock() const { return std::unique_lock<std::mutex>(m); }
    mutable std::mutex m;
    mutable std::condition_variable cv;
    State state;
};
For an example, suppose our State is a queue of ready tasks plus a bool saying "abort":
struct tasks_todo {
    std::deque< std::function<void()> > todo;
    bool abort = false;
    friend bool operator==( tasks_todo const& lhs, tasks_todo const& rhs ) {
        if (lhs.abort != rhs.abort) return false;
        if (lhs.todo.size() != rhs.todo.size()) return false;
        return true;
    }
};
then we can write our queue as follows:
struct task_queue {
    void add_task( std::function<void()> task ) {
        tasks.change_state( [&](auto& tasks) { tasks.todo.push_back(std::move(task)); } );
    }
    void shutdown() {
        tasks.change_state( [&](auto& tasks) { tasks.abort = true; } );
    }
    std::function<void()> pop_task() {
        return tasks.consume_state(
            [&](auto& tasks) -> std::function<void()> {
                if (tasks.abort) return {};
                if (tasks.todo.empty()) return {}; // should be impossible
                auto r = std::move(tasks.todo.front());
                tasks.todo.pop_front();
                return r;
            },
            {} // non-aborted empty queue
        );
    }
private:
    condition_state<tasks_todo> tasks;
};
or somesuch.
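A consumer loop over that queue might then look like this (a minimal sketch, assuming the task_queue above: pop_task() blocks until there is work or shutdown() is called, and returns an empty function on shutdown):
// sketch: worker threads drain the queue until shutdown() yields an empty task
void worker(task_queue& q) {
    while (auto task = q.pop_task()) {
        task();
    }
}
// e.g. std::thread t(worker, std::ref(q)); ... q.shutdown(); t.join();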

Since std::future has no try_wait(), one can write one's own timeout routine around a zero-duration wait_for(), as shown in the snippet:
#include <chrono>
#include <future>
#include <iostream>
#include <thread>
using namespace std;
using namespace std::chrono_literals;

void even(int n, promise<bool> p)
{
    this_thread::sleep_for(500ms); // set this to 10ms to see the "ready" branch below
    p.set_value(n % 2 == 0);
}
int main()
{
    promise<bool> p;
    future<bool> f = p.get_future();
    int n = 100;
    std::chrono::system_clock::time_point tp1 = std::chrono::system_clock::now();
    thread t([&]() { even(n, move(p)); });
    auto span = 200ms;
    std::future_status s;
    do
    {
        s = f.wait_for(std::chrono::seconds(0));
        // do something
    }
    while (std::chrono::system_clock::now() < (tp1 + span));
    if (s == future_status::ready)
        std::cout << "result is " << (f.get() ? "Even" : "Odd") << '\n';
    else
        std::cout << "timeout" << '\n';
    t.join();
}
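If there is no other work to interleave inside the loop, the same timeout can be expressed directly with wait_for/wait_until (a sketch using the same f, tp1 and span as above):
// sketch: let the future itself do the timed wait instead of polling
auto status = f.wait_until(tp1 + span); // or: f.wait_for(span)
if (status == std::future_status::ready)
    std::cout << "result is " << (f.get() ? "Even" : "Odd") << '\n';
else
    std::cout << "timeout" << '\n';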

Related

A workaround of the crash caused by calling destroy() from final_suspend()

Two days ago, in my previous post, I provided code that works with GCC but crashes with MSVC 2022, which calls the task destructor twice.
Today I made it work with both MSVC 2022 and GCC by replacing my former await_suspend implementation:
std::coroutine_handle<> await_suspend(std::coroutine_handle<UpdatePromise> h) noexcept
{
    // resume the awaiting coroutine, or if there is no coroutine to resume,
    // return a special coroutine that does nothing
    std::coroutine_handle<> val = awaiting_coroutine ? awaiting_coroutine : std::noop_coroutine();
    h.destroy();
    return val;
}
with the following:
void await_suspend(std::coroutine_handle<UpdatePromise> h) noexcept
{
    auto coro = awaiting_coroutine;
    h.destroy();
    if (coro)
    {
        coro.resume();
    }
}
What is the difference between these two implementations?
If they are identical, why does the compiler support different return types for await_suspend? What are they for? Is it just syntax sugar?
Now the full example looks like this:
#include <coroutine>
#include <optional>
#include <iostream>
#include <thread>
#include <chrono>
#include <queue>
#include <vector>
// simple timers
// stored timer tasks
struct timer_task
{
std::chrono::steady_clock::time_point target_time;
std::coroutine_handle<> handle;
};
// comparator
struct timer_task_before_cmp
{
bool operator()(const timer_task& left, const timer_task& right) const
{
return left.target_time > right.target_time;
}
};
std::priority_queue<timer_task, std::vector<timer_task>, timer_task_before_cmp> timers;
inline void submit_timer_task(std::coroutine_handle<> handle, std::chrono::nanoseconds timeout)
{
timers.push(timer_task{ std::chrono::steady_clock::now() + timeout, handle });
}
//template <bool owning>
struct UpdatePromise;
//template <bool owning>
struct UpdateTask
{
// declare promise type
using promise_type = UpdatePromise;
UpdateTask(std::coroutine_handle<promise_type> handle) :
handle(handle)
{
std::cout << "UpdateTask constructor." << std::endl;
}
UpdateTask(const UpdateTask&) = delete;
UpdateTask(UpdateTask&& other) : handle(other.handle)
{
std::cout << "UpdateTask move constructor." << std::endl;
}
UpdateTask& operator = (const UpdateTask&) = delete;
UpdateTask& operator = (UpdateTask&& other)
{
handle = other.handle;
std::cout << "UpdateTask move assignment." << std::endl;
return *this;
}
~UpdateTask()
{
std::cout << "UpdateTask destructor." << std::endl;
}
std::coroutine_handle<promise_type> handle;
};
struct UpdatePromise
{
std::coroutine_handle<> awaiting_coroutine;
UpdateTask get_return_object();
std::suspend_never initial_suspend()
{
return {};
}
void unhandled_exception()
{
std::terminate();
}
auto final_suspend() noexcept
{
// if there is a coroutine that is awaiting on this coroutine resume it
struct transfer_awaitable
{
std::coroutine_handle<> awaiting_coroutine;
// always stop at final suspend
bool await_ready() noexcept
{
return false;
}
//Results in a crash with MSVC2022, but not with GCC.
/*
std::coroutine_handle<> await_suspend(std::coroutine_handle<UpdatePromise> h) noexcept
{
// resume awaiting coroutine or if there is no coroutine to resume return special coroutine that do
// nothing
std::coroutine_handle<> val = awaiting_coroutine ? awaiting_coroutine : std::noop_coroutine();
h.destroy();
return val;
}*/
//Does not crash.
void await_suspend(std::coroutine_handle<UpdatePromise> h) noexcept
{
auto coro = awaiting_coroutine;
h.destroy();
if (coro)
{
coro.resume();
}
}
void await_resume() noexcept {}
};
return transfer_awaitable{ awaiting_coroutine };
}
void return_void() {}
// use `co_await std::chrono::seconds{n}` to wait specified amount of time
auto await_transform(std::chrono::milliseconds d)
{
struct timer_awaitable
{
std::chrono::milliseconds m_d;
// always suspend
bool await_ready()
{
return m_d <= std::chrono::milliseconds(0);
}
// h is a handle to the current coroutine, which is suspended
void await_suspend(std::coroutine_handle<> h)
{
// submit suspended coroutine to be resumed after timeout
submit_timer_task(h, m_d);
}
void await_resume() {}
};
return timer_awaitable{ d };
}
// also we can await other UpdateTask<T>
auto await_transform(UpdateTask& update_task)
{
if (!update_task.handle)
{
throw std::runtime_error("coroutine without promise awaited");
}
if (update_task.handle.promise().awaiting_coroutine)
{
throw std::runtime_error("coroutine already awaited");
}
struct task_awaitable
{
std::coroutine_handle<UpdatePromise> handle;
// check if this UpdateTask already has value computed
bool await_ready()
{
return handle.done();
}
// h - is a handle to coroutine that calls co_await
// store coroutine handle to be resumed after computing UpdateTask value
void await_suspend(std::coroutine_handle<> h)
{
handle.promise().awaiting_coroutine = h;
}
// when ready return value to a consumer
auto await_resume()
{
}
};
return task_awaitable{ update_task.handle };
}
};
inline UpdateTask UpdatePromise::get_return_object()
{
return { std::coroutine_handle<UpdatePromise>::from_promise(*this) };
}
// timer loop
void loop()
{
while (!timers.empty())
{
auto& timer = timers.top();
// if it is time to run a coroutine
if (timer.target_time < std::chrono::steady_clock::now())
{
auto handle = timer.handle;
timers.pop();
handle.resume();
}
else
{
std::this_thread::sleep_until(timer.target_time);
}
}
}
// example
using namespace std::chrono_literals;
UpdateTask TestTimerAwait()
{
using namespace std::chrono_literals;
std::cout << "testTimerAwait started." << std::endl;
co_await 1s;
std::cout << "testTimerAwait finished." << std::endl;
}
UpdateTask TestNestedTimerAwait()
{
using namespace std::chrono_literals;
std::cout << "testNestedTimerAwait started." << std::endl;
auto task = TestTimerAwait();
co_await 2s;
//We can't wait for a destroyed coroutine.
//co_await task;
std::cout << "testNestedTimerAwait finished." << std::endl;
}
// main can't be a coroutine and usually needs some sort of looper (io_service or the timer loop in this example)
int main()
{
auto task = TestNestedTimerAwait();
// execute deferred coroutines
loop();
}
I compile the example with Microsoft (R) C/C++ Optimizing Compiler Version 19.30.30709 for x86 using the following command:
cl /std:c++latest /EHsc a.cpp
EDIT1:
The code was a bit incorrect; I commented out co_await task:
//We can't wait for a destroyed coroutine.
//co_await task;

Concurrent program compiled with clang runs fine, but hangs with gcc

I wrote a class to share a limited number of resources (for instance network interfaces) between a larger number of threads. The resources are pooled and, if not in use, they are borrowed out to the requesting thread, which otherwise waits on a condition_variable.
Nothing really exotic: apart from the fancy scoped_lock, which requires C++17, it should be good old C++11.
Both gcc 10.2 and clang 11 compile the test main fine, but while the latter produces an executable that does pretty much what is expected, the former hangs without consuming CPU (deadlock?).
With the help of https://godbolt.org/ I tried older versions of gcc and also icc (passing options -O3 -std=c++17 -pthread), all reproducing the bad result, while even there clang confirms the proper behavior.
I wonder if I made a mistake or if the code triggers some compiler misbehavior, and in either case how to work around it.
#include <iostream>
#include <vector>
#include <stdexcept>
#include <mutex>
#include <condition_variable>
template <typename T>
class Pool {
///////////////////////////
class Borrowed {
friend class Pool<T>;
Pool<T>& pool;
const size_t id;
T * val;
public:
Borrowed(Pool & p, size_t i, T& v): pool(p), id(i), val(&v) {}
~Borrowed() { release(); }
T& get() const {
if (!val) throw std::runtime_error("Borrowed::get() this resource was collected back by the pool");
return *val;
}
void release() { pool.collect(*this); }
};
///////////////////////////
struct Resource {
T val;
bool available = true;
Resource(T v): val(std::move(v)) {}
};
///////////////////////////
std::vector<Resource> vres;
size_t hint = 0;
std::condition_variable cv;
std::mutex mtx;
size_t available_cnt;
public:
Pool(std::initializer_list<T> l): available_cnt(l.size()) {
vres.reserve(l.size());
for (T t: l) {
vres.emplace_back(std::move(t));
}
std::cout << "Pool has size " << vres.size() << std::endl;
}
~Pool() {
for ( auto & res: vres ) {
if ( ! res.available ) {
std::cerr << "WARNING Pool::~Pool resources are still in use\n";
}
}
}
Borrowed borrow() {
std::unique_lock<std::mutex> lk(mtx);
cv.wait(lk, [&](){return available_cnt > 0;});
if ( vres[hint].available ) {
// quick path, if hint points to an available resource
std::cout << "hint good" << std::endl;
vres[hint].available = false;
--available_cnt;
Borrowed b(*this, hint, vres[hint].val);
if ( hint + 1 < vres.size() ) ++hint;
return b; // <--- gcc seems to hang here
} else {
// full scan to find the available resource
std::cout << "hint bad" << std::endl;
for ( hint = 0; hint < vres.size(); ++hint ) {
if ( vres[hint].available ) {
vres[hint].available = false;
--available_cnt;
return Borrowed(*this, hint, vres[hint].val);
}
}
}
throw std::runtime_error("Pool::borrow() no resource is available - internal logic error");
}
void collect(Borrowed & b) {
if ( &(b.pool) != this )
throw std::runtime_error("Pool::collect() trying to collect resource owned by another pool!");
if ( b.val ) {
b.val = nullptr;
{
std::scoped_lock<std::mutex> lk(mtx);
hint = b.id;
vres[hint].available = true;
++available_cnt;
}
cv.notify_one();
}
}
};
///////////////////////////////////////////////////////////////////
#include <thread>
#include <chrono>
int main() {
Pool<std::string> pool{"hello","world"};
std::vector<std::thread> vt;
for (int i = 10; i > 0; --i) {
vt.emplace_back( [&pool, i]()
{
auto res = pool.borrow();
std::this_thread::sleep_for(std::chrono::milliseconds(i*300));
std::cout << res.get() << std::endl;
}
);
}
for (auto & t: vt) t.join();
return 0;
}
You're running into undefined behavior, since you effectively relock an already-acquired (non-recursive) mutex. With MSVC I obtained a helpful call stack that pinpoints this. Here is a fixed example (I suppose; it works for me now, see the changes within the borrow() method; it might be further redesigned, since locking inside a destructor is questionable):
#include <iostream>
#include <vector>
#include <stdexcept>
#include <mutex>
#include <condition_variable>
template <typename T>
class Pool {
///////////////////////////
class Borrowed {
friend class Pool<T>;
Pool<T>& pool;
const size_t id;
T * val;
public:
Borrowed(Pool & p, size_t i, T& v) : pool(p), id(i), val(&v) {}
~Borrowed() { release(); }
T& get() const {
if (!val) throw std::runtime_error("Borrowed::get() this resource was collected back by the pool");
return *val;
}
void release() { pool.collect(*this); }
};
///////////////////////////
struct Resource {
T val;
bool available = true;
Resource(T v) : val(std::move(v)) {}
};
///////////////////////////
std::vector<Resource> vres;
size_t hint = 0;
std::condition_variable cv;
std::mutex mtx;
size_t available_cnt;
public:
Pool(std::initializer_list<T> l) : available_cnt(l.size()) {
vres.reserve(l.size());
for (T t : l) {
vres.emplace_back(std::move(t));
}
std::cout << "Pool has size " << vres.size() << std::endl;
}
~Pool() {
for (auto & res : vres) {
if (!res.available) {
std::cerr << "WARNING Pool::~Pool resources are still in use\n";
}
}
}
Borrowed borrow() {
std::unique_lock<std::mutex> lk(mtx);
while (available_cnt == 0) cv.wait(lk);
if (vres[hint].available) {
// quick path, if hint points to an available resource
std::cout << "hint good" << std::endl;
vres[hint].available = false;
--available_cnt;
Borrowed b(*this, hint, vres[hint].val);
if (hint + 1 < vres.size()) ++hint;
lk.unlock();
return b; // <--- gcc seems to hang here
}
else {
// full scan to find the available resource
std::cout << "hint bad" << std::endl;
for (hint = 0; hint < vres.size(); ++hint) {
if (vres[hint].available) {
vres[hint].available = false;
--available_cnt;
lk.unlock();
return Borrowed(*this, hint, vres[hint].val);
}
}
}
throw std::runtime_error("Pool::borrow() no resource is available - internal logic error");
}
void collect(Borrowed & b) {
if (&(b.pool) != this)
throw std::runtime_error("Pool::collect() trying to collect resource owned by another pool!");
if (b.val) {
b.val = nullptr;
{
std::scoped_lock<std::mutex> lk(mtx);
hint = b.id;
vres[hint].available = true;
++available_cnt;
cv.notify_one();
}
}
}
};
///////////////////////////////////////////////////////////////////
#include <thread>
#include <chrono>
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
int main()
{
try
{
Pool<std::string> pool{ "hello","world" };
std::vector<std::thread> vt;
for (int i = 10; i > 0; --i) {
vt.emplace_back([&pool, i]()
{
auto res = pool.borrow();
std::this_thread::sleep_for(std::chrono::milliseconds(i * 300));
std::cout << res.get() << std::endl;
}
);
}
for (auto & t : vt) t.join();
return 0;
}
catch(const std::exception& e)
{
std::cout << "exception occurred: " << e.what();
}
return 0;
}
A locking destructor coupled with missed NRVO caused the issue (credit to Secundi for pointing this out in the comments).
If the compiler skips NRVO, the lines below the if end up calling the destructor of b when returning. That destructor tries to acquire the mutex before the unique_lock has released it, resulting in a deadlock.
Borrowed b(*this, hint, vres[hint].val);
if ( hint + 1 < vres.size() ) ++hint;
return b; // <--- gcc seems to hang here
It is of crucial importance here to avoid destroying b. In fact, even though manually releasing the unique_lock before returning would avoid the deadlock, the destructor of b would mark the pooled resource as available while it is just being borrowed out, making the code wrong.
A possible fix consists in replacing the lines above with:
const auto tmp = hint;
if ( hint + 1 < vres.size() ) ++hint;
return Borrowed(*this, tmp, vres[tmp].val);
Another possibility (which does not exclude the former) is to delete the (evil) copy ctor of Borrowed and only provide a move ctor:
Borrowed(const Borrowed &) = delete;
Borrowed(Borrowed && b): pool(b.pool), id(b.id), val(b.val) { b.val = nullptr; }
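Putting the two suggestions together, the quick path of borrow() could avoid any named Borrowed entirely (a sketch based on the fixes above, not a drop-in tested patch):
Borrowed borrow() {
    std::unique_lock<std::mutex> lk(mtx);
    cv.wait(lk, [&](){ return available_cnt > 0; });
    if ( vres[hint].available ) {
        vres[hint].available = false;
        --available_cnt;
        const auto tmp = hint;
        if ( hint + 1 < vres.size() ) ++hint;
        // prvalue return: guaranteed copy elision in C++17, so no Borrowed is
        // destroyed (and no mutex re-acquired) while lk is still held
        return Borrowed(*this, tmp, vres[tmp].val);
    }
    // ... full scan as in the original, also returning a prvalue ...
    throw std::runtime_error("Pool::borrow() no resource is available - internal logic error");
}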

Future with Coroutines co_await

Watching a C++ lecture (https://youtu.be/DLLt4anKXKU?t=1589), I tried to understand how std::future works with co_await. Example:
auto compute = []() -> std::future<int> {
    int fst = co_await std::async(get_first);
    int snd = co_await std::async(get_second);
    co_return fst + snd;
};
auto f = compute();
/* some heavy task */
f.get();
I can't understand how and when co_await std::async(get_first) returns control to compute, i.e. how std::future implements an awaitable interface (type).
how std::future implements an awaitable interface
Well as far as C++20 is concerned, it doesn't. C++20 provides co_await and its attendant language functionality, but it doesn't provide any actual awaitable types.
How std::future could implement the awaitable interface is basically the same as how std::experimental::future from the Concurrency TS implements future::then. then takes a function to be run when the future's value becomes available. The return value of then is a new future<U> (the old future<T> becomes non-functional), where U is the type that the given continuation function returns. That new future will only have a U available when the original value is available and when the continuation has processed it into the new value. In that order.
The exact details about how .then works depend entirely on how future is implemented. And it may depend on how the specific future was created, as futures from std::async have special properties that other futures don't.
co_await just makes this process much more digestible visually. A co_awaitable future would simply shove the coroutine handle into future::then, thereby altering the future.
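For illustration only, here is roughly what such an awaiter could look like, assuming the Concurrency TS std::experimental::future with .is_ready() and .then() (not standard C++20, just a sketch of the idea above):
#include <coroutine>
#include <optional>
#include <experimental/future> // Concurrency TS; may not ship with your standard library
template <class T>
struct then_awaiter {
    std::experimental::future<T> fut;
    std::optional<T> value;
    bool await_ready() { return fut.is_ready(); }
    void await_suspend(std::coroutine_handle<> h) {
        // "shove the coroutine handle into future::then": when the value
        // arrives, stash it and resume the suspended coroutine
        fut.then([this, h](auto f) {
            value = f.get();
            h.resume();
        });
    }
    T await_resume() {
        // either we never suspended (fut is still valid) or the continuation
        // above has already stored the value
        return value ? std::move(*value) : fut.get();
    }
};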
Here is a full program that can await futures with C++20 coroutines. I wrote it myself recently to learn.
#include <cassert>
#include <coroutine>
#include <future>
#include <iostream>
#include <optional>
#include <thread>
using namespace std::literals;
template <class T>
class FutureAwaitable {
public:
template <class U> struct BasicPromiseType {
auto get_return_object() {
return FutureAwaitable<T>(CoroHandle::from_promise(*this));
}
std::suspend_always initial_suspend() noexcept {
std::cout << "Initial suspend\n";
return {};
}
std::suspend_never final_suspend() noexcept {
std::cout << "Final suspend\n";
return {};
}
template <class V>
requires std::is_convertible_v<V, T>
void return_value(V v) { _value = v; }
void unhandled_exception() { throw; }
std::optional<T> _value;
};
using promise_type = BasicPromiseType<FutureAwaitable<T>>;
using CoroHandle = std::coroutine_handle<promise_type>;
explicit FutureAwaitable(CoroHandle h) : _parent(h) { }
~FutureAwaitable() {
}
bool is_ready() const {
    return _f && _f->wait_for(std::chrono::seconds(0)) == std::future_status::ready;
}
FutureAwaitable(std::future<T> && f) {
_f = &f;
}
T get() const { return promise()._value.value(); }
std::future<T> & std_future() const {
assert(_f->valid());
return *_f;
}
bool await_ready() {
    if (_f->wait_for(std::chrono::seconds(0)) == std::future_status::ready) {
        std::cout << "Await ready IS ready\n";
        return true;
    }
    std::cout << "Await ready NOT ready\n";
    return false;
}
auto await_resume() {
std::cout << "Await resume" << std::endl;
return std_future().get();
}
bool await_suspend(CoroHandle parent) {
_parent = parent;
std::cout << "Await suspend\n";
return true;
}
void resume() {
assert(_parent);
_parent.resume();
}
auto parent() const { return _parent; }
bool done() const noexcept {
return _parent.done();
}
private:
auto & promise() const noexcept { return _parent.promise(); }
CoroHandle _parent = nullptr;
std::future<T> * _f = nullptr;
};
template <class T> auto operator co_await(std::future<T> &&f) {
return FutureAwaitable<T>(std::forward<std::future<T>>(f));
}
template <class T> auto operator co_await(std::future<T> & f) {
return FutureAwaitable<T>(std::forward<std::future<T>>(f));
}
FutureAwaitable<int> coroutine() {
std::promise<int> p;
auto fut = p.get_future();
p.set_value(31);
std::cout << "Entered func()" << std::endl;
auto res = co_await std::move(fut);
std::cout << "Continue func(): " << res << std::endl;
auto computation = co_await std::async(std::launch::async, [] {
int j = 0;
for (int i = 0; i < 1000; ++i) {
j += i;
}
return j;
});
auto computation2 = std::async(std::launch::async, [] {
int j = 0;
std::this_thread::sleep_for(20s);
for (int i = 0; i < 1000; ++i) {
j += i;
}
return j;
});
auto computation3 = std::async(std::launch::async, [] {
int j = 0;
std::this_thread::sleep_for(20s);
for (int i = 0; i < 1000; ++i) {
j += i;
}
return j;
});
co_await computation2;
co_await computation3;
std::cout << "Computation result is " << computation << std::endl;
co_return computation;
}
#define ASYNC_MAIN(coro) \
int main() { \
FutureAwaitable<int> c = coro(); \
do { c.resume(); } while (!c.done()); \
std::cout << "The coroutine returned " << c.get(); \
return 0; \
}
ASYNC_MAIN(coroutine)

std::packaged_task with std::placeholders

MAJOR EDIT TO SIMPLIFY CODE (and solved)
I would like to be able to make a packaged task that has a free unbound argument, which I will then add at call time of the packaged task.
In this case, I want the first argument to the function (of type size_t) to be unbound.
Here is a working minimal example (this was the solution):
#include <vector>
#include <queue>
#include <memory>
#include <thread>
#include <mutex>
#include <condition_variable>
#include <future>
#include <functional>
#include <stdexcept>
#include <cstdlib>
#include <cstdio>
//REV: I'm trying to "trick" this so that, for double testfunct( size_t arg1, double arg2 ), a call to enqueue( testfunct, 1.0 ) internally executes
// return testfunct( internal_size_t, 1.0 )
template<typename F, typename... Args>
auto enqueue(F&& f, Args&&... args)
-> std::future<typename std::result_of<F(size_t, Args...)>::type>
{
using return_type = typename std::result_of<F(size_t, Args...)>::type;
//REV: this is where the error was. I was being stupid, thinking this task_contents, which would be pushed to the queue, should have the same (return?) type as the original function. Changed to auto and everything worked (taking Jans's return_type(size_t) advice into account).
//std::function<void(size_t)> task_contents = std::bind( std::forward<F>(f), std::placeholders::_1, std::forward<Args>(args)... );
auto task_contents = std::bind( std::forward<F>(f), std::placeholders::_1, std::forward<Args>(args)... );
std::packaged_task<return_type(size_t)> rawtask(
task_contents );
std::future<return_type> res = rawtask.get_future();
size_t arbitrary_i = 10;
rawtask(arbitrary_i);
return res;
}
double testfunct( size_t threadidx, double& arg1 )
{
fprintf(stdout, "Double %lf Executing on thread %ld\n", arg1, threadidx );
std::this_thread::sleep_for( std::chrono::milliseconds(1000) );
return 10; //true;
}
int main()
{
std::vector<std::future<double>> myfutures;
for(size_t x=0; x<100; ++x)
{
double a=x*10;
myfutures.push_back(
enqueue( testfunct, std::ref(a) )
);
}
for(size_t x=0; x<100; ++x)
{
double r = myfutures[x].get();
fprintf(stdout, "Got %ld: %f\n", x, r );
}
}
The main issues are on ThreadPool::enqueue:
std::function<void(size_t)> task1 = std::bind( std::forward<F>(f), std::placeholders::_1, std::forward<Args>(args)... );
Here, the type of task1 is std::function<void(std::size_t)>, but the result of the std::bind, when evaluated with funct, is convertible to std::function<bool(std::size_t)>. Even though, as @T.C. has pointed out, you can assign the result of the bind to task1, in order to pass task1 to std::make_shared you need to honor the return_type you've got.
Change the above to:
std::function<return_type(size_t)> task1 = std::bind( std::forward<F>(f), std::placeholders::_1, std::forward<Args>(args)... );
Now the same for:
auto task = std::make_shared< std::packaged_task<return_type()> >( task1 );
but in this case it is the parameter type that is missing. Change it to:
auto task = std::make_shared< std::packaged_task<return_type(std::size_t)> >( task1 );
ThreadPool::tasks stores function objects of type std::function<void(std::size_t)>, but you're storing lambdas that receive no arguments. Change the tasks.emplace(...) to:
tasks.emplace([task](std::size_t a){ (*task)(a); });
The code is not very well formatted, but here is a solution.
First, you should wrap the result in a lambda at creation time rather than passing functions that can return anything. But if you want to use a shared pointer to a task, this works.
In the prototype:
std::future<void> enqueue(std::function<void(size_t)> f);
using Task = std::function<void(size_t)>;
// the task queue
std::queue<Task> tasks;
std::optional<Task> pop_one();
Implementation becomes:
ThreadPool::ThreadPool(size_t threads)
: stop(false)
{
for(size_t i = 0;i<threads;++i)
workers.emplace_back(
[this,i]
{
for(;;)
{
auto task = pop_one();
if(task)
{
(*task)(i);
}
else break;
}
}
);
}
std::optional<ThreadPool::Task> ThreadPool::pop_one()
{
std::unique_lock<std::mutex> lock(this->queue_mutex);
this->condition.wait(lock,
[this]{ return this->stop || !this->tasks.empty(); });
if(this->stop && this->tasks.empty())
{
return std::optional<Task>();
}
auto task = std::move(this->tasks.front()); //REV: this moves into my thread the front of the tasks queue.
this->tasks.pop();
return task;
}
template<typename T>
std::future<T> ThreadPool::enqueue(std::function<T(size_t)> fun)
{
auto task = std::make_shared< std::packaged_task<T(size_t)> >([=](size_t size){return fun(size);});
auto res = task->get_future();
{
std::unique_lock<std::mutex> lock(queue_mutex);
// don't allow enqueueing after stopping the pool
if(stop)
{
throw std::runtime_error("enqueue on stopped ThreadPool");
}
tasks.emplace([=](size_t size){(*task)(size);});
}
condition.notify_one();
return res;
}
And now you can have your main:
int main()
{
size_t nthreads=3;
ThreadPool tp(nthreads);
std::vector<std::future<bool>> myfutures;
for(size_t x=0; x<100; ++x)
{
myfutures.push_back(
tp.enqueue<bool>([=](size_t threadidx) {return funct(threadidx, (double)x * 10.0);}));
}
for(size_t x=0; x<100; ++x)
{
bool r = myfutures[x].get();
std::cout << "Got " << r << "\n";
}
}
There is now an explicit return type when wrapping the lambda, as the return type is templated.

Reference to object pointed by list iterator not working

I have a container class Query_List :
template <typename Data>
class query_list
{
private:
std::mutex mx_lock;
// Underlying container for fast read, write and access
std::list<Data> m_DataArray;
// Index table used for fast access over the container
std::map<uint32_t, typename std::list<Data>::iterator> m_IndexTable;
public:
query_list() { }
void Push_Back(const uint32_t& ID, const Data& Val)
{
std::lock_guard<std::mutex> _l(mx_lock);
// Add data to the container
m_DataArray.push_back(Val);
// Get an iterator to the newly allocated data
auto iter = m_DataArray.end();
--iter;
// Associate the ID with the iterator into the list
m_IndexTable[ID] = iter;
}
bool AtID(const uint32_t& ID, Data &To_Get)
{
if (!Exists(ID))
return false;
std::lock_guard<std::mutex> _l(mx_lock);
To_Get = *m_IndexTable[ID];
return true;
}
void Remove(const uint32_t& ID)
{
// Data has already been freed!
if (!Exists(ID)) return;
std::lock_guard<std::mutex> _l(mx_lock);
m_DataArray.erase(m_IndexTable[ID]);
m_IndexTable[ID] = m_DataArray.end();
}
bool Exists(const uint32_t& ID)
{
std::lock_guard<std::mutex> _l(mx_lock);
if (m_IndexTable.find(ID) == m_IndexTable.end())
return false;
if (m_IndexTable[ID] == m_DataArray.end())
return false;
return true;
}
};
The problem appears when I try to extract data from the container that is pointed by an ID:
bool PacketManager::AppendPacket(const Packet& pk)
{
PacketQueue _queue;
// The queue is passed by reference
if (!l_ConnQueues.AtID(pk.ownerID, _queue))
return false;
// Append the packet
std::lock_guard<std::mutex> _l(_queue._mx);
size_t InitSize = _queue.OutPackets.size();
_queue.OutPackets.push(pk);
// If data is not appended to the queue
if (_queue.OutPackets.size() <= InitSize)
return false;
return true;
}
Debugging the function shows me that the data is appended to the temporary object from the queue, but not to the one in the container. I suspect the cause of this behaviour is the copy constructor of the PacketQueue class.
struct PacketQueue
{
PacketQueue() { }
uint32_t ID;
std::mutex _mx;
std::queue<Packet> OutPackets;
PacketQueue& operator=(const PacketQueue& q)
{
ID = q.ID;
OutPackets = q.OutPackets;
return *this;
}
PacketQueue(const PacketQueue& queue)
{
ID = queue.ID;
OutPackets = queue.OutPackets;
}
};
My questions are:
Why is this happening?
What can I do to fix this error?
Any suggestions on improving the design of the container class (Query_List)?
The problem is that your AtID() method is returning a copy of the PacketQueue that is stored in m_DataArray. If you want to access the original so that you can modify it, you need to change the To_Get output parameter to return a pointer:
bool AtID(uint32_t ID, Data* &To_Get)
{
std::lock_guard<std::mutex> l(mx_lock);
auto iter = m_IndexTable.find(ID);
if (iter == m_IndexTable.end())
return false;
To_Get = &*(iter->second);
return true;
}
bool PacketManager::AppendPacket(const Packet& pk)
{
PacketQueue *queue;
if (!l_ConnQueues.AtID(pk.ownerID, queue))
return false;
std::lock_guard<std::mutex> l(queue->_mx);
size_t InitSize = queue->OutPackets.size();
queue->OutPackets.push(pk);
return (queue->OutPackets.size() > InitSize);
}
Or, you can change AtID() to return the pointer as its return value instead of using an output parameter at all:
Data* AtID(uint32_t ID)
{
std::lock_guard<std::mutex> l(mx_lock);
auto iter = m_IndexTable.find(ID);
if (iter == m_IndexTable.end())
return nullptr;
return &*(iter->second);
}
bool PacketManager::AppendPacket(const Packet& pk)
{
PacketQueue *queue = l_ConnQueues.AtID(pk.ownerID);
if (!queue)
return false;
std::lock_guard<std::mutex> l(queue->_mx);
queue->OutPackets.push(pk);
return true;
}
Of course, that being said, since l_ConnQueues's mutex is unlocked after AtID() exits, any other thread could potentially Remove() the PacketQueue from the list while AppendPacket() is still trying to push a packet into it. So, it would be safer to have AppendPacket() keep the list's mutex locked until it is done updating the returned queue:
Data* AtID_NoLock(uint32_t ID)
{
auto iter = m_IndexTable.find(ID);
if (iter == m_IndexTable.end())
return nullptr;
return &*(iter->second);
}
Data* AtID(uint32_t ID)
{
std::lock_guard<std::mutex> l(mx_lock);
return AtID_NoLock(ID);
}
bool PacketManager::AppendPacket(const Packet& pk)
{
std::lock_guard<std::mutex> l(l_ConnQueues.mx_lock); // note: mx_lock must be made accessible for this (e.g. public or via a lock helper)
PacketQueue *queue = l_ConnQueues.AtID_NoLock(pk.ownerID);
if (!queue)
return false;
std::lock_guard<std::mutex> l2(queue->_mx);
queue->OutPackets.push(pk);
return true;
}
That being said, you will notice that I changed AtID() to not use Exists() anymore. There is a race condition (in Remove(), too): as soon as Exists() exits, another thread could come in, lock the mutex, and alter the list/map before the current thread has a chance to re-lock the mutex. As such, AtID() (and Remove()) should not be calling Exists() at all.
Also, I don't suggest having Remove() set a stored std::list iterator to end(); that just makes more work for AtID(). It would be better to simply erase the found ID from the map altogether:
void Remove(uint32_t ID)
{
std::lock_guard<std::mutex> l(mx_lock);
auto iter = m_IndexTable.find(ID);
if (iter != m_IndexTable.end())
{
m_DataArray.erase(iter->second);
m_IndexTable.erase(iter);
}
}
Update: That being said, there is really no good reason to have a std::map of std::list iterators at all. You can store your PacketQueue objects directly in the std::map and remove the std::list altogether:
template <typename Data>
class query_list {
private:
std::mutex mx_lock;
std::map<uint32_t, Data> m_Data;
public:
query_list() { }
void Push_Back(uint32_t ID, const Data& Val) {
std::lock_guard<std::mutex> l(mx_lock);
// Add data to the container
m_Data[ID] = Val;
}
Data* AtID_NoLock(uint32_t ID) {
auto iter = m_Data.find(ID);
return (iter != m_Data.end()) ? &(iter->second) : nullptr;
}
Data* AtID(uint32_t ID) {
std::lock_guard<std::mutex> l(mx_lock);
return AtID_NoLock(ID);
}
void Remove(uint32_t ID) {
std::lock_guard<std::mutex> l(mx_lock);
auto iter = m_Data.find(ID);
if (iter != m_Data.end())
m_Data.erase(iter);
}
bool Exists(uint32_t ID) {
std::lock_guard<std::mutex> l(mx_lock);
return (m_Data.find(ID) != m_Data.end());
}
};
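A minimal usage sketch of that map-based version (PacketQueue as in the question; the ID 42 and the default-constructed Packet are just illustrative):
query_list<PacketQueue> queues;
queues.Push_Back(42, PacketQueue{}); // copy-assigned into the map
if (PacketQueue* q = queues.AtID(42)) {
    std::lock_guard<std::mutex> lk(q->_mx); // lock the queue itself, as in AppendPacket()
    q->OutPackets.push(Packet{});           // assumes Packet is default-constructible
}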