I'm sorry if I'm getting the whole concept wrong, but I'm trying to make a tuple that actually owns its objects, so that they only go out of scope when the tuple itself is destroyed.
I currently have this:
class MiniThread {
public:
~MiniThread() {
if (m_thread) {
if (m_thread->joinable())
m_thread->join();
delete m_thread;
}
}
void join()
{
if (m_thread == nullptr)
return;
m_thread->join();
m_thread = nullptr;
}
template<typename F, typename... Args>
void run(F func, Args... args)
{
if (m_thread != nullptr)
join();
auto tuple = std::forward_as_tuple(args...);
m_thread = new std::thread([=]() {
__try
{
std::apply(func, tuple);
}
__except (CrashDump::GenerateDump(GetExceptionInformation()))
{
// TODO: log.
exit(1);
}
});
m_started = true;
}
bool started() const { return m_started; }
private:
std::thread *m_thread = nullptr;
bool m_started = false;
};
std::string getString()
{
return std::string("sono");
}
int main()
{
auto test = [&](std::string seila, const std::string& po, std::promise<int>* p)
{
std::cout << seila.c_str() << std::endl;
std::cout << po.c_str() << std::endl;
p->set_value(10);
};
std::promise<int> p;
std::future<int> f;
MiniThread thread;
std::string hello = "hello";
std::string seilapo = "seilapo";
f = p.get_future();
thread.run(test, getString(), "how are you", &p);
thread.join();
int ftest = f.get();
std::cout << ftest << std::endl;
}
By the time the thread runs, the args are no longer reliable; they have already been destroyed. So I was wondering whether there is a way to copy them by value into the thread call. I have made some attempts at moving the variadic arguments into tuples, but the tuples always end up holding references to the expired arguments and fail all the same.
This:
auto tuple = std::forward_as_tuple(args...);
Creates a tuple of references into args... That's forward_as_tuple's job. You're then capturing that tuple of references by value:
m_thread = new std::thread([=]{ /* ... */ });
So once your arguments go out of scope, you're only holding onto references to them... and that'll dangle.
But you don't actually... need to have a tuple at all. Just copy the arguments themselves:
m_thread = std::thread([=]() {
func(args...); // func and args, no tuple here
});
Also, don't write new thread: std::thread is already a handle type, so just create one directly.
The above copies the arguments. If you want to move them, then in C++17, yes you'll need to have a tuple and use std::apply. But not forward_as_tuple... just make_tuple:
m_thread = std::thread([func, args=std::make_tuple(std::move(args)...)]() mutable {
std::apply(func, std::move(args));
});
In C++20, you won't need the tuple anymore, and can write a pack expansion in the capture:
m_thread = std::thread([func, ...args=std::move(args)]() mutable {
func(std::move(args)...);
});
I am trying to work with Coroutines and multithreading together in C++.
In many coroutine examples, they create a new thread in the await_suspend of the co_await operator for the promise type. I want to submit to a thread pool in this function.
Here I define a co_await for future<int>.
void await_suspend(std::coroutine_handle<> handle) {
this->wait();
handle.resume();
}
I want to change this code to submit a lambda/function pointer to a thread pool. Potentially I could build a thread pool myself, using Alexander Krizhanovsky's ring buffer to communicate with it, or use Boost's thread pool.
My problem is NOT the thread pool. My problem is that I don't know how to get a reference to the thread pool inside this co_await operator.
How do I pass data from the outside environment, where the operator is defined, into this await_suspend function? Here is an example of what I want to do:
void await_suspend(std::coroutine_handle<> handle) {
// how do I get "pool"? from within this function
auto res = pool.enqueue([](int x) {
this->wait();
handle.resume();
});
}
I am not an expert at C++, so I'm not sure how I would get access to pool in this operator.
Here's the full code, inspired by the GitHub gist "A simple C++ coroutine example".
#include <future>
#include <iostream>
#include <coroutine>
#include <type_traits>
#include <list>
#include <thread>
using namespace std;
template <>
struct std::coroutine_traits<std::future<int>> {
struct promise_type : std::promise<int> {
future<int> get_return_object() { return this->get_future(); }
std::suspend_never initial_suspend() noexcept { return {}; }
std::suspend_never final_suspend() noexcept { return {}; }
void return_value(int value) { this->set_value(value); }
void unhandled_exception() {
this->set_exception(std::current_exception());
}
};
};
template <>
struct std::coroutine_traits<std::future<int>, int> {
struct promise_type : std::promise<int> {
future<int> get_return_object() { return this->get_future(); }
std::suspend_never initial_suspend() noexcept { return {}; }
std::suspend_never final_suspend() noexcept { return {}; }
void return_value(int value) { this->set_value(value); }
void unhandled_exception() {
this->set_exception(std::current_exception());
}
};
};
auto operator co_await(std::future<int> future) {
struct awaiter : std::future<int> {
bool await_ready() { return false; } // suspend always
void await_suspend(std::coroutine_handle<> handle) {
this->wait();
handle.resume();
}
int await_resume() { return this->get(); }
};
return awaiter{std::move(future)};
}
future<int> async_add(int a, int b)
{
auto fut = std::async([=]() {
int c = a + b;
return c;
});
return fut;
}
future<int> async_fib(int n)
{
if (n <= 2)
co_return 1;
int a = 1;
int b = 1;
// iterate computing fib(n)
for (int i = 0; i < n - 2; ++i)
{
int c = co_await async_add(a, b);
a = b;
b = c;
}
co_return b;
}
future<int> test_async_fib()
{
for (int i = 1; i < 10; ++i)
{
int ret = co_await async_fib(i);
cout << "async_fib(" << i << ") returns " << ret << endl;
}
}
int runfib(int arg) {
auto fut = test_async_fib();
fut.wait();
return 0;
}
int run_thread() {
printf("Running thread");
return 0;
}
int main()
{
std::list<shared_ptr<std::thread>> threads = { };
for (int i = 0 ; i < 10; i++) {
printf("Creating thread\n");
std::shared_ptr<std::thread> thread = std::make_shared<std::thread>(runfib, 5);
threads.push_back(thread);
}
std::list<shared_ptr<std::thread>>::iterator it;
for (it = threads.begin(); it != threads.end(); it++) {
(*it).get()->join();
printf("Joining thread");
}
fflush(stdout);
return 0;
}
You could have a thread pool, and let the coroutine promise schedule work on it.
I have this example around; it is not exactly simple, but it may do the job:
Make your coroutine return a task<T>.
task<int> async_add(int a, int b) { ... }
Let the task share a state with its coroutine_promise. The state is implemented as an executable, resuming the coroutine when executed, and holds the result of the operation (e.g. a std::promise<T>).
template <typename T>
class task<T>::state : public executable {
public:
void execute() noexcept override {
handle_.resume();
}
...
private:
handle_type handle_;
std::promise<T> result_;
};
The coroutine_promise returns a task_scheduler awaiter at initial_suspend:
template <typename T>
class task<T>::coroutine_promise {
public:
auto initial_suspend() {
return task_scheduler<task<T>>{};
}
The task_scheduler awaiter schedules the state:
template <is_task task_t>
struct task_scheduler : public std::suspend_always {
void await_suspend(task_t::handle_type handle) const noexcept {
thread_pool::get_instance().schedule(handle.promise().get_state());
}
};
Wrapping it all up: a call to a coroutine schedules its state on a thread, and whenever a thread executes that state, the coroutine is resumed. The caller can then wait for the task's result.
auto c{ async_add(a,b) };
b = c.get_result();
[Demo]
That example is from 2018 and was built for the Coroutines TS, so it's missing a lot of what ended up in the actual C++20 feature. It also assumes the presence of several things that didn't make it into C++20, the most notable being the idea that std::future is an awaitable type and that it has continuation support when coupled with std::async.
It's not, and it doesn't. So there's not much you can really learn from this example.
co_await is ultimately built on the ability to suspend execution of a function and schedule its resumption after some value has been successfully computed. The actual C++20 std::future has exactly none of the machinery needed to do that, nor does std::async give it the ability to do so.
As such, neither is an appropriate tool for this task.
You need to build your own future type (possibly using std::promise/future internally) which has a reference to your thread pool. When you co_await on this future, it is that new future which passes off the coroutine_handle to the thread pool, doing whatever is needed to ensure that this handle does not get executed until its current set of tasks is done.
Your pool (or whatever manages the work) needs a queue of tasks, so that it can insert new ones to be processed after the current ones and remove tasks once they've finished (as well as start the next one). Those operations need to be properly synchronized. This queue needs to be accessible by both the future type and your coroutine's promise type.
When a coroutine ends, the promise needs to tell the queue that its current task is over and to move to the next one, or suspend the thread if there is no next one. And the promise's value needs to be forwarded to the next task. When a coroutine co_awaits on a future from your system, it needs to add that handle to the queue of tasks to be performed, possibly starting up the thread again.
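As a rough sketch of the shape this takes, here is a minimal awaiter that carries a reference to the pool and hands it the coroutine handle. The pool type and its enqueue member are placeholders for whatever thread pool you build, not an existing library API:
#include <coroutine>
template <typename Pool>
struct resume_on_pool {
Pool& pool; // stored when the awaiter is created, e.g. co_await resume_on_pool{my_pool};
bool await_ready() const noexcept { return false; }
void await_suspend(std::coroutine_handle<> handle) const {
// hand the suspended coroutine to the pool; a worker thread resumes it later
pool.enqueue([handle] { handle.resume(); });
}
void await_resume() const noexcept {}
};
A result-carrying future type in your system would build on the same idea: store the pool reference (or make the pool reachable some other way), stash the result where await_resume can read it, and only then hand the handle to the pool.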
Based on How to implement timeout for function in c++, I wrote this wrapper:
template <typename t_time, typename t_function, typename... t_params>
inline std::conditional_t<
// if 't_function' return type is 'void'
std::is_void_v<std::invoke_result_t<t_function, const bool &, t_params...>>,
// the 'execute' wrapper will return 'bool', which will be 'true' if the
// 'p_function' executes in less 'p_max_time', or 'false' otherwise
bool,
// else it will result a 'std::optional' with the return type of
// 't_function', which will contain a value of that type, if the
// 'p_function' executes in less 'p_max_time', or empty otherwise
std::optional<std::invoke_result_t<t_function, const bool &, t_params...>>>
execute(t_time p_max_time, t_function &p_function, t_params &&... p_params) {
std::mutex _mutex;
std::condition_variable _cond;
bool _timeout{false};
typedef typename std::invoke_result_t<t_function, const bool &, t_params...>
t_ret;
if constexpr (std::is_void_v<t_ret>) {
std::thread _th([&]() -> void {
p_function(_timeout, std::forward<t_params>(p_params)...);
_cond.notify_one();
});
std::unique_lock<std::mutex> _lock{_mutex};
if (_cond.wait_for(_lock, p_max_time) != std::cv_status::timeout) {
_th.join();
return true;
}
_timeout = true;
_th.detach();
return false;
} else {
t_ret _ret;
std::thread _th([&]() -> void {
_ret = p_function(_timeout, std::forward<t_params>(p_params)...);
_cond.notify_one();
});
std::unique_lock<std::mutex> _lock{_mutex};
if (_cond.wait_for(_lock, p_max_time) != std::cv_status::timeout) {
_th.join();
return {std::move(_ret)};
}
_timeout = true;
_th.detach();
return {};
}
}
Unlike the code in the answers for the question I referenced, I would not like to throw an exception in the execute wrapper. If p_function returns void, the wrapper will return a bool: true if p_function executed within p_max_time, false otherwise. If p_function returns T, the wrapper will return std::optional<T>, which will have a value if p_function did not exceed p_max_time, or will be empty otherwise.
The const bool & parameter required for p_function is used to inform p_function that its execution exceeded p_max_time, so p_function may stop its execution, though execute will not count on it.
Here is an example:
auto _function = [](const bool &p_is_timeout, int &&p_i) -> void {
std::this_thread::sleep_for(1s);
if (p_is_timeout) {
std::cout << "timeout";
} else {
std::cout << "i = " << p_i << '\n';
}
};
int _i{4};
if (!async::execute(200ms, _function, std::move(_i))) {
std::cout << "TIMEOUT!!\n";
}
So, the problem is that _th.detach() causes a crash when I execute some test functions in a row. If I change it to _th.join(), the crash no longer occurs, but, obviously, the function that calls the wrapper then has to wait for p_function to end, which is not desired.
How can I make execute detach the _th thread without causing a crash?
Your lambda needs access to the local variables _ret and _cond; these don't exist after the end of execute, so your code has undefined behaviour. Lambdas that capture by reference should only be used when the lambda doesn't outlive the scope it's defined in.
You'd need to allocate your state variables on the heap so that they exist after the end of the function. For example, you could use a shared_ptr:
template <typename Result>
struct state
{
std::mutex _mutex;
std::condition_variable _cond;
Result _ret;
};
template <>
struct state<void>
{
std::mutex _mutex;
std::condition_variable _cond;
};
template <typename t_time, typename t_function, typename... t_params>
inline std::conditional_t<
// if 't_function' return type is 'void'
std::is_void_v<std::invoke_result_t<t_function, const bool &, t_params...>>,
// the 'execute' wrapper will return 'bool', which will be 'true' if the
// 'p_function' executes in less 'p_max_time', or 'false' otherwise
bool,
// else it will result a 'std::optional' with the return type of
// 't_function', which will contain a value of that type, if the
// 'p_function' executes in less 'p_max_time', or empty otherwise
std::optional<std::invoke_result_t<t_function, const bool &, t_params...>>>
execute(t_time p_max_time, t_function &p_function, t_params &&... p_params) {
typedef typename std::invoke_result_t<t_function, const bool &, t_params...>
t_ret;
auto _state = std::make_shared<state<t_ret>>();
bool _timeout{false};
if constexpr (std::is_void_v<t_ret>) {
std::thread _th([&, _state]() -> void {
p_function(_timeout, std::forward<t_params>(p_params)...);
_state->_cond.notify_one();
});
std::unique_lock<std::mutex> _lock{_state->_mutex};
if (_state->_cond.wait_for(_lock, p_max_time) != std::cv_status::timeout) {
_th.join();
return true;
}
_timeout = true;
_th.detach();
return false;
} else {
std::thread _th([&, _state]() -> void {
_state->_ret = p_function(_timeout, std::forward<t_params>(p_params)...);
_state->_cond.notify_one();
});
std::unique_lock<std::mutex> _lock{_state->_mutex};
if (_state->_cond.wait_for(_lock, p_max_time) != std::cv_status::timeout) {
_th.join();
return {std::move(_state->_ret)};
}
_timeout = true;
_th.detach();
return {};
}
}
Note that for simplicity I've kept the capture of the function and its arguments by reference, but you still need to ensure those references remain valid for as long as they're needed (e.g. with a short timeout, execute could return before the target function runs, and if the arguments are references, the invoked function can't use them after execute exits).
If you have C++20, you might want to look into std::jthread.
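As a side note, here is a minimal std::jthread sketch (a generic illustration, not a rewrite of the execute wrapper above): the thread receives a std::stop_token it can poll, and the destructor requests stop and joins, so no detach is needed.
#include <chrono>
#include <iostream>
#include <stop_token>
#include <thread>
int main() {
using namespace std::chrono_literals;
// jthread passes a stop_token as the first argument and joins in its destructor
std::jthread worker([](std::stop_token st) {
while (!st.stop_requested()) {
std::cout << "working...\n";
std::this_thread::sleep_for(100ms);
}
});
std::this_thread::sleep_for(350ms);
worker.request_stop(); // cooperative cancellation instead of detach()
} // ~jthread() joins automatically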
Based on answers and suggestions, I came up with:
template <typename t_time, typename t_function, typename... t_params>
inline std::conditional_t<
// if 't_function' return type is 'void'
std::is_void_v<
std::invoke_result_t<t_function, std::function<bool()>, t_params...>>,
// the 'execute' wrapper will return 'bool', which will be 'true' if the
// 'p_function' executes in less 'p_max_time', or 'false' otherwise
bool,
// else it will result a 'std::optional' with the return type of
// 't_function', which will contain a value of that type, if the
// 'p_function' executes in less 'p_max_time', or empty otherwise
std::optional<
std::invoke_result_t<t_function, std::function<bool()>, t_params...>>>
execute(t_time p_max_time, t_function &p_function, t_params &&... p_params) {
std::mutex _mutex;
std::condition_variable _cond;
auto _timeout = std::make_shared<bool>(false);
auto _is_timeout = [_timeout]() { return *_timeout; };
typedef typename std::invoke_result_t<t_function, std::function<bool()>,
t_params...>
t_ret;
if constexpr (std::is_void_v<t_ret>) {
std::thread _th([&]() -> void {
p_function(_is_timeout, std::forward<t_params>(p_params)...);
_cond.notify_one();
});
std::unique_lock<std::mutex> _lock{_mutex};
if (_cond.wait_for(_lock, p_max_time) != std::cv_status::timeout) {
_th.join();
return true;
}
*_timeout = true;
_th.join();
return false;
} else {
t_ret _ret;
std::thread _th([&]() -> void {
_ret = p_function(_is_timeout, std::forward<t_params>(p_params)...);
_cond.notify_one();
});
std::unique_lock<std::mutex> _lock{_mutex};
if (_cond.wait_for(_lock, p_max_time) != std::cv_status::timeout) {
_th.join();
return {std::move(_ret)};
}
*_timeout = true;
_th.join();
return {};
}
}
And the example becomes:
auto _function = [](std::function<bool()> p_timeout, int &&p_i) -> void {
std::this_thread::sleep_for(1s);
if (p_timeout()) {
std::cout << "timeout in work function\n";
} else {
std::cout << "i = " << p_i << '\n';
}
};
int _i{4};
if (!execute(200ms, _function, std::move(_i))) {
std::cout << "OK - timeout\n";
}
else {
std::cout << "NOT OK - no timeout\n";
}
I believe passing a std::function<bool()> to the work function (p_function) creates a good abstraction of how execute controls the timeout, and gives p_function an easy way to check for it.
I also removed the std::thread::detach() calls.
I have a class with a function that takes a std::function and stores it. This part seems to compile OK (but please point out any issues if there are any):
#include <functional>
#include <iostream>
struct worker
{
std::function<bool(std::string)> m_callback;
void do_work(std::function<bool(std::string)> callback)
{
m_callback = std::bind(callback, std::placeholders::_1);
callback("hello world\n");
}
};
// pretty boring class - a cut down of my actual class
struct helper
{
worker the_worker;
bool work_callback(std::string str)
{
std::cout << str << std::endl;
return true;
}
};
int main()
{
helper the_helper;
//the_helper.the_worker.do_work(std::bind(&helper::work_callback, the_helper, std::placeholders::_1)); // <---- SEGFAULT (but works in minimal example)
the_helper.the_worker.do_work(std::bind(&helper::work_callback, &the_helper, std::placeholders::_1)); // <---- SEEMS TO WORK
}
I get a segfault, but I am not sure why. I have used this pattern before; in fact, I copied this example from another place where I used it. The only real difference is that there the member function was part of the class I called it from (i.e. this instead of the_helper).
This is why I am also asking whether there is anything else I am doing wrong in general. For example, should I be passing the std::function as:
void do_work(std::function<bool(std::string)>&& callback)
or
void do_work(std::function<bool(std::string)>& callback)
As also noted by #Rakete1111 in the comments, the problem was probably in this code:
bool work_callback(std::string str)
{
std::cout << str << std::endl;
}
In C++, flowing off the end of a non-void function without returning a value is undefined behavior.
This example will crash with clang but pass with gcc.
If helper::work_callback returns a value (e.g. true), the code works just fine.
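For completeness, a minimal sketch of the fixed callback; compiling with -Wall (which enables -Wreturn-type on gcc and clang) would also have flagged the missing return:
struct helper
{
worker the_worker;
bool work_callback(std::string str)
{
std::cout << str << std::endl;
return true; // every path now returns a value, so no more undefined behaviour
}
};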
I don't know why your code segfaults, because I was spoiled and skipped std::bind, going straight to lambdas. Since you use C++11, you should really convert your code from std::bind to lambdas:
struct worker
{
std::function<bool(std::string)> m_callback;
void do_work(std::function<bool(std::string)> callback)
{
m_callback = callback;
callback("hello world\n");
}
};
Now, how work_callback is captured and how do_work is called need some analysis.
First version:
struct helper
{
worker the_worker;
bool work_callback(std::string)
{
return false;
}
};
int main()
{
helper the_helper;
the_helper.the_worker.do_work([&](std::string s) { return the_helper.work_callback(s); });
}
Now, this version works with your toy example. However, out in the wild you need to be careful. The lambda passed to do_work and then stored in the_worker captures the_helper by reference. This means the code is valid only if the helper object referenced by the lambda outlives the worker object that stores m_callback. In your example the worker object is a sub-object of the helper class, so this is true. However, if in your real code this is not the case, or you cannot prove it, then you need to capture by value.
First attempt to capture by value (does not compile):
struct helper
{
worker the_worker;
bool work_callback(std::string)
{
return false;
}
};
int main()
{
helper the_helper;
the_helper.the_worker.do_work([=](std::string s) { return the_helper.work_callback(s); });
}
This does not compile because the copy of the_helper stored in the lambda object is const by default, and as such you cannot call the non-const work_callback on it.
A questionable solution if you can't make work_callback const is to make the lambda mutable:
struct helper
{
worker the_worker;
bool work_callback(std::string)
{
return false;
}
};
int main()
{
helper the_helper;
the_helper.the_worker.do_work([=](std::string s) mutable { return the_helper.work_callback(s); });
}
But you need to think if this is what you intended.
What would make more sense is to make work_callback const:
struct helper
{
worker the_worker;
bool work_callback(std::string) const
{
return false;
}
};
int main()
{
helper the_helper;
the_helper.the_worker.do_work([=](std::string s) { return the_helper.work_callback(s); });
}
The reason for the segfault has already been mentioned in the comments.
However, I would like to point out that you need neither std::bind nor std::function in your case. Instead, a lambda and a function pointer are enough to handle what you intend to do.
struct worker
{
typedef bool(*fPtr)(const std::string&); // define fun ptr type
fPtr m_callback;
void do_work(const std::string& str)
{
// define a lambda
m_callback = [](const std::string& str)
{
/* do something with string*/
std::cout << "Call from worker: " << str << "\n";
return true;
};
bool flag = m_callback(str);// just call the lambda here
/* do some other stuff*/
}
};
struct helper
{
worker the_worker;
bool work_callback(const std::string& str)
{
std::cout << "Call from helper: ";
this->the_worker.do_work(str);
return true; // remember to keep the promise: return a value
}
};
And use case would be:
int main()
{
helper the_helper;
the_helper.work_callback(std::string("hello world"));
// or if you intend to use
the_helper.the_worker.do_work(std::string("hello world"));
return 0;
}
PS: In the above case, if worker does not require m_callback for later use (i.e. it is only needed inside do_work()), then you can remove this member, as a lambda can be created and called in the same place where it is declared.
struct worker
{
void do_work(const std::string& str)
{
bool flag = [](const std::string& str)->bool
{
/* do something with string*/
std::cout << "Call from worker: " << str << "\n";
return true;
}(str); // function call
/* do other stuff */
}
};
So I have this function, which behaves like the setInterval function in JS. I found it here.
I am currently trying to change it so it can be stopped. I do not fully understand the behavior of this code.
void setInterval(function<void(void)> func, unsigned int interval) {
thread([func, interval]() {
while (1) {
auto x = chrono::steady_clock::now() + chrono::milliseconds(interval);
func();
this_thread::sleep_until(x);
}
}).detach();
}
I tried it like this:
void setInterval(function<void(void)> func, unsigned int interval, bool &b) {
thread([func, interval, *b]() {
while (*b) {
auto x = chrono::steady_clock::now() + chrono::milliseconds(interval);
func();
this_thread::sleep_until(x);
}
}).detach();
}
(this won't compile), and calling it in main like this:
bool B;
setInterval(myFunction,1000,B);
I was expecting that if I changed the B variable to false, the thread in the setInterval function would stop, but I haven't managed to reach my goal this way. Any ideas/suggestions? Thank you in advance.
Sorry, but I didn't find a design simpler than this.
You could make a class that owns both a thread and a weak_ptr to itself, acting as a "holder" that the callable can observe safely, because the callable will still exist even if the object is destroyed. You don't want a dangling pointer.
template<typename T>
struct IntervalRepeater {
using CallableCopyable = T;
private:
weak_ptr<IntervalRepeater<CallableCopyable>> holder;
std::thread theThread;
IntervalRepeater(unsigned int interval,
CallableCopyable callable): callable(callable), interval(interval) {}
void thread() {
weak_ptr<IntervalRepeater<CallableCopyable>> holder = this->holder;
theThread = std::thread([holder](){
// Try to strongify the pointer, to make it survive this loop iteration,
// and ensure that this pointer is valid, if not valid, end the loop.
while (shared_ptr<IntervalRepeater<CallableCopyable>> ptr = holder.lock()) {
auto x = chrono::steady_clock::now() + chrono::milliseconds(ptr->interval);
ptr->callable();
this_thread::sleep_until(x);
}
});
}
public:
const CallableCopyable callable;
const unsigned int interval;
static shared_ptr<IntervalRepeater<T>> createIntervalRepeater(unsigned int interval,
CallableCopyable callable) {
std::shared_ptr<IntervalRepeater<CallableCopyable>> ret =
shared_ptr<IntervalRepeater<CallableCopyable>>(
new IntervalRepeater<CallableCopyable>(interval, callable));
ret->holder = ret;
ret->thread();
return ret;
}
~IntervalRepeater() {
// Detach the thread before it is released.
theThread.detach();
}
};
void beginItWaitThenDestruct() {
auto repeater = IntervalRepeater<function<void()>>::createIntervalRepeater(
1000, [](){ cout << "A second\n"; });
std::this_thread::sleep_for(std::chrono::milliseconds(3700));
}
int main() {
beginItWaitThenDestruct();
// Wait for another 2.5 seconds, to test whether there is still an effect of the object
// or no.
std::this_thread::sleep_for(std::chrono::milliseconds(2500));
return 0;
}
C++ is not JavaScript, but C++ can express most programming paradigms found in other languages.
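For comparison, here is a simpler (though less encapsulated) sketch of the stoppable setInterval the question asked for; it shares a std::atomic<bool> through a shared_ptr, so the detached thread never reads a destroyed variable. This is an alternative illustration, not part of the holder design above:
#include <atomic>
#include <chrono>
#include <functional>
#include <iostream>
#include <memory>
#include <thread>
// returns a shared flag; set it to false to stop the loop on the next tick
std::shared_ptr<std::atomic<bool>> setInterval(std::function<void()> func, unsigned int interval) {
auto running = std::make_shared<std::atomic<bool>>(true);
std::thread([func, interval, running]() {
while (*running) {
auto next = std::chrono::steady_clock::now() + std::chrono::milliseconds(interval);
func();
std::this_thread::sleep_until(next);
}
}).detach();
return running;
}
int main() {
auto flag = setInterval([] { std::cout << "tick\n"; }, 1000);
std::this_thread::sleep_for(std::chrono::milliseconds(3500));
*flag = false; // the detached thread exits after its current sleep
std::this_thread::sleep_for(std::chrono::milliseconds(1500));
}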
I am implementing a concurrent wrapper as introduced by Herb Sutter in his talk "C++ and Beyond 2012".
template <typename T>
class ConcurrentWrapper {
private:
std::deque<std::unique_ptr<std::function<void()>>> _tasks;
std::mutex _mutex;
std::condition_variable _cond;
T _object;
std::thread _worker;
std::atomic<bool> _done {false};
public:
template <typename... ArgsT>
ConcurrentWrapper(ArgsT&&... args) :
_object {std::forward<ArgsT>(args)...},
_worker {
[&]() {
typename decltype(_tasks)::value_type task;
while(!_done) {
{
std::unique_lock<std::mutex> lock(_mutex);
while(_tasks.empty()) {
_cond.wait(lock);
}
task = std::move(_tasks.front());
_tasks.pop_front();
}
(*task)();
}
}
} {
}
~ConcurrentWrapper() {
{
std::unique_lock<std::mutex> lock(_mutex);
_tasks.push_back(std::make_unique<std::function<void()>>(
[&](){_done = true;}
));
}
_cond.notify_one();
_worker.join();
}
template <typename F, typename R = std::result_of_t<F(T&)>>
std::future<R> operator()(F&& f) {
std::packaged_task<R(T&)> task(std::forward<F>(f));
auto fu = task.get_future();
{
std::unique_lock<std::mutex> lock(_mutex);
_tasks.push_back(std::make_unique<std::function<void()>>(
[this, task=MoveOnCopy<decltype(task)>(std::move(task))]() {
task.object(this->_object);
}
));
}
_cond.notify_one();
return fu;
}
};
Basically, the idea is to wrap an object and provide thread-safe access in FIFO order through operator(). However, in some runs (it doesn't always happen), the following program hangs:
ConcurrentWrapper<std::vector<int>> results;
results(
[&](std::vector<T>& data) {
std::cout << "sorting...\n";
std::sort(data.begin(), data.end());
std::cout << "done ...\n";
EXPECT_EQ(data, golden);
}
).get();
However, the program works correctly without explicitly calling the get() method.
results(
[&](std::vector<T>& data) {
std::cout << "sorting...\n";
std::sort(data.begin(), data.end());
std::cout << "done ...\n";
EXPECT_EQ(data, golden);
}
); // Functions correctly without calling get()
What could the problem be? Did I implement something wrong? I noticed a post here saying that "a packaged_task needs to be invoked before you call f.get(), otherwise your program will freeze as the future will never become ready." Is this true? If yes, how can I get this problem solved?
I was compiling the code using -std=c++1z -pthread with G++ 6.1