Can I use std::async without waiting for the future limitation? - c++

High level
I want to call some functions with no return value asynchronously, without waiting for them to finish. If I use std::async, the returned future's destructor blocks until the task is over, which makes the call effectively synchronous in my case.
Example
void sendMail(const std::string& address, const std::string& message)
{
    // sending the e-mail, which takes some time...
}

myResponseType processRequest(args...)
{
    // Do some processing and evaluate the address and the message...
    // Send the e-mail asynchronously
    auto f = std::async(std::launch::async, sendMail, address, message);
    // return the response ASAP to the client
    return myResponseType;
} //<-- I'm stuck here until the async call finishes, to allow f to be destructed,
  //    gaining no benefit from the async call.
My questions are:
1. Is there a way to overcome this limitation?
2. If (1) is no, should I implement a single thread that takes those "zombie" futures and waits on them?
3. If (1) and (2) are no, is there any other option besides building my own thread pool?

Note: I'd rather not use the thread+detach option (suggested by @galop1n), since creating a new thread has an overhead I wish to avoid, while std::async (at least on MSVC) uses an internal thread pool.
Thanks.

You can move the future into a global object, so when the local future's destructor runs it doesn't have to wait for the asynchronous thread to complete.
std::vector<std::future<void>> pending_futures;

myResponseType processRequest(args...)
{
    // Do some processing and evaluate the address and the message...
    // Send the e-mail asynchronously
    auto f = std::async(std::launch::async, sendMail, address, message);
    // transfer the future's shared state to a longer-lived future
    pending_futures.push_back(std::move(f));
    // return the response ASAP to the client
    return myResponseType;
}
N.B. This is not safe if the asynchronous thread refers to any local variables in the processRequest function.
std::async (at least on MSVC) uses an internal thread pool.
That's actually non-conforming; the standard explicitly says tasks run with std::launch::async must run as if on a new thread, so any thread-local variables must not persist from one task to the next. It doesn't usually matter in practice, though.
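If pending_futures is kept for the lifetime of the process it will grow without bound, so it is worth pruning futures whose tasks have finished. A minimal sketch, assuming all access to pending_futures happens on one thread (otherwise guard it with a mutex):

#include <algorithm>
#include <chrono>

void prune_finished_futures()
{
    pending_futures.erase(
        std::remove_if(pending_futures.begin(), pending_futures.end(),
            [](std::future<void>& f) {
                // wait_for with a zero timeout polls the task's state without blocking
                return f.wait_for(std::chrono::seconds(0)) == std::future_status::ready;
            }),
        pending_futures.end());
}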

Why not just start a thread and detach it if you do not care about joining?

std::thread{ sendMail, address, message }.detach();

std::async is bound to the lifetime of the std::future it returns, and there is no alternative to that.
Putting the std::future in a waiting queue read by another thread would require the same safety mechanism as a pool receiving new tasks, such as a mutex around the container.
Your best option, then, is a thread pool consuming tasks pushed directly into a thread-safe queue. And it will not depend on a specific implementation.
Below is a thread pool implementation that takes any callable and its arguments. The threads poll the queue; a better implementation would use condition variables (coliru):
#include <iostream>
#include <queue>
#include <memory>
#include <thread>
#include <mutex>
#include <functional>
#include <string>

struct ThreadPool {
    struct Task {
        virtual void Run() const = 0;
        virtual ~Task() {};
    };

    template < typename task_, typename... args_ >
    struct RealTask : public Task {
        RealTask( task_&& task, args_&&... args )
            : fun_( std::bind( std::forward<task_>(task), std::forward<args_>(args)... ) ) {}

        void Run() const override {
            fun_();
        }
    private:
        decltype( std::bind(std::declval<task_>(), std::declval<args_>()... ) ) fun_;
    };

    template < typename task_, typename... args_ >
    void AddTask( task_&& task, args_&&... args ) {
        auto lock = std::unique_lock<std::mutex>{mtx_};
        using FinalTask = RealTask<task_, args_... >;
        q_.push( std::unique_ptr<Task>( new FinalTask( std::forward<task_>(task), std::forward<args_>(args)... ) ) );
    }

    ThreadPool() {
        for( auto & t : pool_ )
            t = std::thread( [=] {
                while ( true ) {
                    std::unique_ptr<Task> task;
                    {
                        auto lock = std::unique_lock<std::mutex>{mtx_};
                        if ( q_.empty() && stop_ )
                            break;
                        if ( q_.empty() )
                            continue;
                        task = std::move(q_.front());
                        q_.pop();
                    }
                    if (task)
                        task->Run();
                }
            } );
    }

    ~ThreadPool() {
        {
            auto lock = std::unique_lock<std::mutex>{mtx_};
            stop_ = true;
        }
        for( auto & t : pool_ )
            t.join();
    }

private:
    std::queue<std::unique_ptr<Task>> q_;
    std::thread pool_[8];
    std::mutex mtx_;
    volatile bool stop_ {};
};

void foo( int a, int b ) {
    std::cout << a << "." << b;
}

void bar( std::string const & s) {
    std::cout << s;
}

int main() {
    ThreadPool pool;
    for( int i{}; i!=42; ++i ) {
        pool.AddTask( foo, 3, 14 );
        pool.AddTask( bar, " - " );
    }
}
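As noted above, the workers poll, repeatedly locking the mutex even when the queue is empty. A hedged sketch of the same worker loop using a condition variable instead; it assumes an added std::condition_variable cv_ member, which AddTask notifies after pushing and the destructor notifies (notify_all) after setting stop_:

while ( true ) {
    std::unique_ptr<Task> task;
    {
        auto lock = std::unique_lock<std::mutex>{mtx_};
        // Sleep until there is work to do or the pool is shutting down.
        cv_.wait( lock, [this] { return stop_ || !q_.empty(); } );
        if ( q_.empty() && stop_ )
            break;
        task = std::move(q_.front());
        q_.pop();
    }
    if (task)
        task->Run();
}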

Rather than moving the future into a global object (and manually managing deletion of unused futures), you can actually move it into the local scope of the asynchronously called function.
"Let the async function take its own future", so to speak.
I have come up with this template wrapper which works for me (tested on Windows):
#include <future>

template<class Function, class... Args>
void async_wrapper(Function&& f, Args&&... args, std::future<void>& future,
                   std::future<void>&& is_valid, std::promise<void>&& is_moved) {
    is_valid.wait(); // Wait until the return value of std::async is written to "future"
    auto our_future = std::move(future); // Move "future" to a local variable
    is_moved.set_value(); // Only now we can leave void_async in the main thread

    // This is also used by std::async so that member function pointers work transparently
    auto functor = std::bind(f, std::forward<Args>(args)...);
    functor();
}

template<class Function, class... Args> // This is what you call instead of std::async
void void_async(Function&& f, Args&&... args) {
    std::future<void> future; // This is for std::async's return value
    // This is for our synchronization of moving "future" between threads
    std::promise<void> valid;
    std::promise<void> is_moved;
    auto valid_future = valid.get_future();
    auto moved_future = is_moved.get_future();

    // Here we pass "future" as a reference, so that async_wrapper
    // can later work with std::async's return value
    future = std::async(
        async_wrapper<Function, Args...>,
        std::forward<Function>(f), std::forward<Args>(args)...,
        std::ref(future), std::move(valid_future), std::move(is_moved)
    );
    valid.set_value(); // Unblock async_wrapper waiting for "future" to become valid
    moved_future.wait(); // Wait for "future" to actually be moved
}
I am a little surprised this works, because I thought the moved future's destructor would block until we leave async_wrapper: it should wait for async_wrapper to return, yet it is being destroyed inside that very function. Logically it should deadlock, but it doesn't.
I also tried adding a line at the end of async_wrapper to manually empty the future object:

our_future = std::future<void>();

This does not block either.
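For the question's example, usage would presumably be just:

// hedged usage sketch with the question's sendMail
void_async(sendMail, address, message); // returns as soon as the future has been moved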

You can make the future a heap-allocated pointer and deliberately leak it, so its blocking destructor never runs. The one-liner below does exactly that:

std::make_unique<std::future<void>*>(new auto(std::async(std::launch::async, sendMail, address, message))).reset();

(The unique_ptr owns only the pointer to the future; reset() destroys that pointer, while the future itself is leaked.) Live example
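A more explicit, hedged equivalent of that one-liner; the trade-off is that the task may still be running when the program exits:

// Heap-allocate the future and deliberately never delete it,
// so its blocking destructor is never invoked (the future is leaked).
new std::future<void>(std::async(std::launch::async, sendMail, address, message));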

I have no idea what I'm doing, but this seems to work:
// :( http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n3451.pdf
template<typename T>
void noget(T&& in)
{
    static std::mutex vmut;
    static std::vector<T> vec;
    static std::thread getter;
    static std::mutex single_getter;
    if (single_getter.try_lock())
    {
        getter = std::thread([&]()->void
        {
            size_t size;
            for(;;)
            {
                do
                {
                    vmut.lock();
                    size = vec.size();
                    if(size > 0)
                    {
                        T target = std::move(vec[size-1]);
                        vec.pop_back();
                        vmut.unlock();
                        // cerr << "getting!" << endl;
                        target.get();
                    }
                    else
                    {
                        vmut.unlock();
                    }
                } while(size > 0);
                // ¯\_(ツ)_/¯
                std::this_thread::sleep_for(std::chrono::milliseconds(100));
            }
        });
        getter.detach();
    }
    vmut.lock();
    vec.push_back(std::move(in));
    vmut.unlock();
}
It creates a dedicated getter thread for each type of future you throw at it (e.g. if you give it a future<int> and a future<double>, you'll have 2 threads; if you give it 100x future<int>, you'll still only have 2 threads). When there's a future you don't want to deal with, just do noget(fut); noget(std::async([]()->void{...})); also works just fine, with no blocking, it seems. Warning: do not try to get the value from a future after using noget() on it. That's probably UB and asking for trouble.

How to implement scoped_lock functionality in c++11 using lock_guard

It looks like scoped_lock in C++17 gives the functionality I'm after; however, I'm presently tied to C++11.
At the moment I'm seeing deadlock issues with lock_guard when we lock the same mutex more than once. Does scoped_lock protect against multiple calls (i.e. is it reentrant)?
Is there a best practice for doing this in C++11 with lock_guard?
mutex lockingMutex;

void get(string s)
{
    lock_guard<mutex> lock(lockingMutex);
    if (isPresent(s))
    {
        //....
    }
}

bool isPresent(string s)
{
    bool ret = false;
    lock_guard<mutex> lock(lockingMutex);
    //....
    return ret;
}
To be able to lock the same mutex multiple times one needs to use std::recursive_mutex. A recursive mutex is more expensive than a non-recursive one; a minimal sketch of the question's code on top of it follows.
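A hedged sketch, assuming the question's get/isPresent shapes; only the mutex and lock types change:

bool isPresent(std::string s); // declared first so get() can call it

std::recursive_mutex lockingMutex;

void get(std::string s)
{
    std::lock_guard<std::recursive_mutex> lock(lockingMutex);
    if (isPresent(s)) // re-locking from the same thread is now permitted
    {
        //....
    }
}

bool isPresent(std::string s)
{
    bool ret = false;
    std::lock_guard<std::recursive_mutex> lock(lockingMutex);
    //....
    return ret;
}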
Best practice, though, is to design your code in such a way that a thread does not lock the same mutex multiple times. For example, have your public functions lock the mutex first and then invoke an implementation function that expects the mutex to have been locked already. Implementation functions must not call the public API functions that lock the mutex. E.g.:
class A {
    std::mutex m_;
    int state_ = 0;

private: // These expect the mutex to have been locked.
    void foo_() {
        ++state_;
    }
    void bar_() {
        this->foo_();
    }

public: // Public functions lock the mutex first.
    void foo() {
        std::lock_guard<std::mutex> lock(m_);
        this->foo_();
    }
    void bar() {
        std::lock_guard<std::mutex> lock(m_);
        this->bar_();
    }
};
scoped_lock does not give the functionality you are looking for.
scoped_lock is just a variadic version of lock_guard; it only exists due to some ABI issues with changing lock_guard into a variadic template.
To have reentrant mutexes, you need to use a reentrant mutex. But this is both more expensive at runtime and usually indicates a lack of care in your mutex state: while holding a mutex you should have complete and total understanding of all other synchronization actions you are performing.
Once you have complete understanding of all the synchronization actions you are performing, it is easy to avoid recursive locking.
There are two patterns you can consider here. First, split the public locking API from a private non-locking API. Second, split synchronization from implementation.
class StringStore { // hypothetical class name; the original snippet omitted the enclosing class
private:
    mutex lockingMutex;

    bool isPresent(string s, lock_guard<mutex> const& lock) {
        bool ret = false;
        //....
        return ret;
    }
    void get(string s, lock_guard<mutex> const& lock) {
        if (isPresent(s, lock))
        {
            //....
        }
    }

public:
    void get(string s) {
        return get( std::move(s), lock_guard<mutex>(lockingMutex) );
    }
    bool isPresent(string s) {
        return isPresent( std::move(s), lock_guard<mutex>(lockingMutex) );
    }
};
Here I use lock_guard<mutex> as "proof we have a lock".
An often better alternative is to write your class as non-thread-safe, then use a wrapper:
template<class T>
struct mutex_guarded {
    template<class T0, class...Ts,
        std::enable_if_t<!std::is_same<std::decay_t<T0>, mutex_guarded>{}, bool> = true
    >
    mutex_guarded(T0&& t0, Ts&&... ts):
        t( std::forward<T0>(t0), std::forward<Ts>(ts)... )
    {}
    mutex_guarded() = default;
    ~mutex_guarded() = default;

    template<class F>
    auto read( F&& f ) const {
        auto l = lock();
        return f(t);
    }
    template<class F>
    auto write( F&& f ) {
        auto l = lock();
        return f(t);
    }

private:
    auto lock() { return std::unique_lock<std::mutex>(m); }
    auto lock() const { return std::unique_lock<std::mutex>(m); }
    mutable std::mutex m;
    T t;
};
Now we can use it like this:

mutex_guarded<Foo> foo;
foo.write([&](auto&& foo){ foo.get("hello"); } );
You can write mutex_guarded, shared_mutex_guarded, not_mutex_guarded, or even async_guarded (which returns futures and serializes actions in a worker thread).
So long as the class doesn't leave its own "zone of control" in its methods, this pattern makes writing mutex-guarded data much easier, and lets you compose related mutex-guarded data into one bundle without having to rewrite it. A sketch of the shared_mutex_guarded variant follows.
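A hedged sketch of the shared_mutex_guarded variant mentioned above: readers share the lock, writers take it exclusively. It assumes C++14's std::shared_timed_mutex (std::shared_mutex in C++17):

#include <shared_mutex>

template<class T>
struct shared_mutex_guarded {
    template<class F>
    auto read( F&& f ) const {
        std::shared_lock<std::shared_timed_mutex> l(m); // many readers at once
        return f(t);
    }
    template<class F>
    auto write( F&& f ) {
        std::unique_lock<std::shared_timed_mutex> l(m); // exclusive for writers
        return f(t);
    }
private:
    mutable std::shared_timed_mutex m;
    T t;
};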

Thread-safe reference-counted queue C++

I'm struggling to implement a thread-safe reference-counted queue. The idea is that I have a number of tasks that each maintain a shared_ptr to a task manager that owns the queue. Here is a minimal implementation that should encounter that same issue:
#include <condition_variable>
#include <deque>
#include <functional>
#include <iostream>
#include <memory>
#include <mutex>
#include <thread>

namespace {

class TaskManager;

struct Task {
    std::function<void()> f;
    std::shared_ptr<TaskManager> manager;
};

class Queue {
public:
    Queue()
        : _queue()
        , _mutex()
        , _cv()
        , _running(true)
        , _thread([this]() { sweepQueue(); })
    {
    }

    ~Queue() { close(); }

    void close() noexcept
    {
        try {
            {
                std::lock_guard<std::mutex> lock(_mutex);
                if (!_running) {
                    return;
                }
                _running = false;
            }
            _cv.notify_one();
            _thread.join();
        } catch (...) {
            std::cerr << "An error occurred while closing the queue\n";
        }
    }

    void push(Task&& task)
    {
        std::unique_lock<std::mutex> lock(_mutex);
        _queue.emplace_back(std::move(task));
        lock.unlock();
        _cv.notify_one();
    }

private:
    void sweepQueue() noexcept
    {
        while (true) {
            try {
                std::unique_lock<std::mutex> lock(_mutex);
                _cv.wait(lock, [this] { return !_running || !_queue.empty(); });
                if (!_running && _queue.empty()) {
                    return;
                }
                if (!_queue.empty()) {
                    const auto task = _queue.front();
                    _queue.pop_front();
                    task.f();
                }
            } catch (...) {
                std::cerr << "An error occurred while sweeping the queue\n";
            }
        }
    }

    std::deque<Task> _queue;
    std::mutex _mutex;
    std::condition_variable _cv;
    bool _running;
    std::thread _thread;
};

class TaskManager : public std::enable_shared_from_this<TaskManager> {
public:
    void addTask(std::function<void()> f)
    {
        _queue.push({ f, shared_from_this() });
    }

private:
    Queue _queue;
};

} // anonymous namespace

int main(void)
{
    const auto manager = std::make_shared<TaskManager>();
    manager->addTask([]() { std::cout << "Hello world\n"; });
}
The problem I find is that on rare occasions, the queue will try to invoke its own destructor within the sweepQueue method. Upon further inspection, it seems that the reference count on the TaskManager hits zero once the last task is dequeued. How can I safely maintain the reference count without invoking the destructor?
Update: The example does not clarify the need for the std::shared_ptr<TaskManager> within Task. Here is an example use case that should illustrate the need for this seemingly unnecessary ownership cycle.
std::unique_ptr<Task> task;
{
    const auto manager = std::make_shared<TaskManager>();
    task = std::make_unique<Task>(someFunc, manager);
}
// Guarantees manager is not destroyed while task is still in scope.
The ownership hierarchy here is: TaskManager owns Queue, and Queue owns Tasks. Tasks maintaining a shared pointer to TaskManager creates an ownership cycle which does not seem to serve a useful purpose here.
This ownership is the root of the problem. A Queue is owned by TaskManager, so Queue can keep a plain pointer to TaskManager and pass that pointer to Task in sweepQueue; you do not need std::shared_ptr<TaskManager> in Task at all here (a sketch follows).
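A hedged sketch of that plain-pointer alternative; since Queue is a member of TaskManager, a non-owning back-pointer cannot outlive the manager:

struct Task {
    std::function<void()> f;
    TaskManager* manager; // non-owning; Queue (and its tasks) die before TaskManager
};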
I'd refactor the queue apart from the thread first. But to fix your immediate problem:
struct am_I_alive {
    explicit am_I_alive(std::weak_ptr<void> p) : m_ptr(std::move(p)) {}
    explicit operator bool() const { return static_cast<bool>(m_ptr.lock()); }
private:
    std::weak_ptr<void> m_ptr;
};

struct lifetime_tracker {
    am_I_alive track_lifetime() {
        if (!m_ptr) m_ptr = std::make_shared<bool>(true);
        return am_I_alive{m_ptr};
    }
    lifetime_tracker() = default;
    lifetime_tracker(lifetime_tracker const&) {} // do nothing, don't copy
    lifetime_tracker& operator=(lifetime_tracker const&) { return *this; }
private:
    std::shared_ptr<void> m_ptr;
};
This is a little utility to detect whether we have been deleted. It is useful in any code that calls an arbitrary callback whose side effects could include delete(this).
Privately inherit your Queue from it.
Then split popping a task from running it:
std::optional<Task> get_task() {
    std::unique_lock<std::mutex> lock(_mutex);
    _cv.wait(lock, [this] { return !_running || !_queue.empty(); });
    if (!_running && _queue.empty()) {
        return {}; // end
    }
    auto task = _queue.front();
    _queue.pop_front();
    return task;
}

void sweepQueue() noexcept
{
    while (true) {
        try {
            auto task = get_task();
            if (!task) return;
            // we are alive here
            auto alive = track_lifetime();
            try {
                (*task).f();
            } catch(...) {
                std::cerr << "An error occurred while running a task\n";
            }
            task = {};
            // we could be deleted here
            if (!alive)
                return; // this was deleted, get out of here
        } catch (...) {
            std::cerr << "An error occurred while sweeping the queue\n";
        }
    }
}
And now you are safe.
After that you need to deal with the thread problem.
The thread problem is that your code needs to destroy the thread from within the thread it is running on; at the same time, you also need to guarantee that the thread has terminated before main ends.
These are not compatible.
To fix that, create a thread-owning pool that doesn't have your "keep alive" semantics, and get your thread from there.
These threads don't delete themselves; instead, they return themselves to that pool for reuse by another client.
At shutdown, those threads are joined, so you don't have code running elsewhere that hasn't halted before the end of main.
To write such a pool without your inverted-dependency mess, split off the queue part of your code. This queue owns no threads.
template<class T>
struct threadsafe_queue {
    void push(T);
    std::optional<T> pop(); // returns empty if the queue was aborted
    void abort();
    ~threadsafe_queue();
private:
    std::mutex m;
    std::condition_variable v;
    std::deque<T> data;
    bool aborted = false;
};

Then a simple thread pool:
struct thread_pool {
    template<class F>
    std::future<std::result_of_t<F&()>> enqueue( F&& f );

    template<class F>
    std::future<std::result_of_t<F&()>> thread_off_now( F&& f ); // starts a thread if there aren't any free

    void abort();
    void start_thread( std::size_t n = 1 );
    std::size_t count_threads() const;
    ~thread_pool();
private:
    threadsafe_queue< std::function<void()> > tasks;
    std::vector< std::thread > threads;
    static void thread_loop( thread_pool* pool );
};
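A hedged sketch of the worker body this declaration implies, assuming pop() returns an empty optional once abort() has been called and the queue has drained:

void thread_pool::thread_loop( thread_pool* pool ) {
    // Each pool thread pops and runs tasks until the queue reports abort.
    while (auto task = pool->tasks.pop())
        (*task)();
}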
Make the thread pool a singleton. Get the threads for your queue from the thread_off_now method, which guarantees you a thread that can be recycled when you are done with it and whose lifetime is handled by someone else.
But really, you should instead be thinking with ownership in mind. The idea that tasks and task queues mutually own each other is a mess.
If someone disposes of a task queue, it is probably a good idea to abandon its tasks instead of persisting them magically and silently.
Which is what my simple thread pool does.

Signaling main thread when std::future is ready to be retrieved

I'm trying to understand the std::async, std::future system. What I don't quite understand is how you deal with running multiple async "tasks", and then, based on what returns first, second, etc, running some additional code.
Example: Let's say your main thread is in a simple loop. Now, based on user input, you run several functions via std::async, and save the futures in a std::list.
My issue is, how do I pass information back from the std::async function that can specify which future is complete?
My main thread is basically in a message loop, and what I need to do is have a function run by std::async be able to queue a message that somehow specifies which future is complete. The issue is that the function doesn't have access to the future.
Am I just missing something?
Here is some pseudo-code of what I'm trying to accomplish; extra points if there is also a way to "cancel" the request using a cancellation token.
class RequestA
{
public:
    int input1;
    int output1;
};

main()
{
    while(1)
    {
        // check for completion
        // i.e. pop next "message"
        if(auto *completed_task = get_next_completed_task())
        {
            completed_task->run_continuation();
        }
        // other code to handle user input
        if(userSaidRunA())
        {
            // note that I don't want to use a raw pointer but
            // am not sure how to use a future for this
            RequestA *a = new RequestA();
            run(a, OnRequestTypeAComplete);
        }
    }
}

void OnRequestTypeAComplete(RequestA &req)
{
    // Do stuff with req; want access to inputs and output
}
Unfortunately C++11 std::future doesn't provide continuations or cancellation. You can retrieve the result from a std::future only once. Moreover, a future returned from std::async blocks in its destructor. There is a group headed by Sean Parent from Adobe that implemented future, async, and task as they should be, along with continuation-style functions like when_all and when_any. It could be what you're looking for; in any case, have a look at that project. The code is of good quality and easy to read.
If platform-dependent solutions are also OK for you, you can check those. For Windows, I know of the PPL library, which also has primitives with cancellation and continuation.
You can create a struct containing a flag and pass a reference to that flag to your thread function.
Something a bit like this:
int stuff(std::atomic_bool& complete, std::size_t id)
{
    std::cout << "starting: " << id << '\n';
    // do stuff
    std::this_thread::sleep_for(std::chrono::milliseconds(hol::random_number(3000)));
    // generate value
    int value = hol::random_number(30);
    // signal end
    complete = true;
    std::cout << "ended: " << id << " -> " << value << '\n';
    return value;
}

struct task
{
    std::future<int> fut;
    std::atomic_bool complete;

    task() = default;
    task(task&& t): fut(std::move(t.fut)), complete(t.complete.load()) {}
};

int main()
{
    // list of tasks
    std::vector<task> tasks;

    // reserve enough space so that nothing gets reallocated,
    // as that would invalidate the references to the atomic_bools
    // needed to signal the end of a thread
    tasks.reserve(3);

    // create a new task
    tasks.emplace_back();
    // start it running
    tasks.back().fut = std::async(std::launch::async, stuff, std::ref(tasks.back().complete), tasks.size());

    tasks.emplace_back();
    tasks.back().fut = std::async(std::launch::async, stuff, std::ref(tasks.back().complete), tasks.size());

    tasks.emplace_back();
    tasks.back().fut = std::async(std::launch::async, stuff, std::ref(tasks.back().complete), tasks.size());

    // Keep going as long as any of the tasks is incomplete
    while(std::any_of(std::begin(tasks), std::end(tasks),
        [](auto& t){ return !t.complete.load(); }))
    {
        // do some parallel stuff
        std::this_thread::sleep_for(std::chrono::milliseconds(500));
    }

    // process the results
    int sum = 0;
    for(auto&& t: tasks)
        sum += t.fut.get();

    std::cout << "sum: " << sum << '\n';
}
Here is a solution using a std::unordered_map instead of a std::list, in which you don't need to modify your callables. Instead, you use a helper function that assigns an id to each task and notifies when it finishes:
class Tasks {
public:
    /*
     * Helper to create the tasks in a safe way.
     * lockTaskCreation is needed to guarantee newTask is (temporarily)
     * assigned before it is moved to the list of tasks
     */
    template <class R, class ...Args>
    void createNewTask(const std::function<R(Args...)>& f, Args... args) {
        std::unique_lock<std::mutex> lock(mutex);
        std::lock_guard<std::mutex> lockTaskCreation(mutexTaskCreation);
        newTask = std::async(std::launch::async, executeAndNotify<R, Args...>,
                             std::move(lock), f, std::forward<Args>(args)...);
    }

private:
    /*
     * Assign an id to the task, execute it, and notify when it finishes
     */
    template <class R, class ...Args>
    static R executeAndNotify(std::unique_lock<std::mutex> lock,
                              const std::function<R(Args...)>& f, Args... args)
    {
        {
            std::lock_guard<std::mutex> lockTaskCreation(mutexTaskCreation);
            tasks[std::this_thread::get_id()] = std::move(newTask);
        }
        lock.unlock();
        Notifier notifier;
        return f(std::forward<Args>(args)...);
    }

    /*
     * Class to notify when a task is completed (follows RAII)
     */
    class Notifier {
    public:
        ~Notifier() {
            std::lock_guard<std::mutex> lock(mutex);
            finishedTasks.push(std::this_thread::get_id());
            cv.notify_one();
        }
    };

    /*
     * Wait for a finished task.
     * This function needs to be called in an infinite loop
     */
    static void waitForFinishedTask() {
        std::unique_lock<std::mutex> lock(mutex);
        cv.wait(lock, [] { return finishedTasks.size() || finish; });
        if (finishedTasks.size()) {
            auto threadId = finishedTasks.front();
            finishedTasks.pop();
            auto result = tasks.at(threadId).get();
            tasks.erase(threadId);
            std::cout << "task " << threadId
                      << " returned: " << result << std::endl;
        }
    }

    static std::unordered_map<std::thread::id, std::future<int>> tasks;
    static std::mutex mutex;
    static std::mutex mutexTaskCreation;
    static std::queue<std::thread::id> finishedTasks;
    static std::condition_variable cv;
    static std::future<int> newTask;
    ...
};
...
Then, you can call an async task in this way:
int doSomething(int i) {
    std::this_thread::sleep_for(std::chrono::milliseconds(1));
    return i;
}

int main() {
    Tasks tasks;
    tasks.createNewTask(std::function<decltype(doSomething)>(doSomething), 10);
    return 0;
}
See a complete implementation run on Coliru

How to detect if handler is an ASIO strand wrap and call it through the strand?

If there's a generic method taking some handler:
template< typename HandlerType >
void Register( HandlerType && handler )
{
    m_handler = std::forward< HandlerType >( handler );
}

and that handler is going to be invoked through an io_service at some point in the future:

void SomeEvent( )
{
    // compute someParameter
    m_IOService.post( std::bind( m_handler, someParameter ) );
}
How can it be detected whether the caller of Register() has passed something wrapped by a strand, as in:
m_strand( m_IOService );
// ...
Register( m_strand.wrap( []( /*something*/ ){ /*...*/ } ) );
And how SomeEvent() should be changed in order to post the handler through the strand in such cases?
EDIT
When I asked this I hadn't taken the trouble to carefully read the io_service::strand::wrap docs, more specifically where they say:
(...) Given a function object with the signature:
R f(A1 a1, ... An an);
If this function object is passed to the wrap function like so:
strand.wrap(f);
then the return value is a function object with the signature
void g(A1 a1, ... An an);
that, when invoked, executes code equivalent to:
strand.dispatch(boost::bind(f, a1, ... an));
And this is indeed all I need - I can just declare m_handler as an appropriate std::function<> and simply post it through the io_service in SomeEvent() (a sketch follows below).
I realized this after reading the answer from @Arunmu, thus I'm accepting it. Nevertheless, @Richard Hodges' answer makes some good points about ASIO's executor logic and how it was improved in the standalone version.
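A hedged sketch of that conclusion, with an illustrative one-int-parameter signature:

// m_handler type-erases whatever Register() stored; if the caller passed
// strand.wrap(...), invoking m_handler dispatches through the strand.
std::function< void( int ) > m_handler;

void SomeEvent( )
{
    // compute someParameter...
    m_IOService.post( std::bind( m_handler, someParameter ) );
}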
If I understood your requirement correctly, you do not have to do anything out of the ordinary if it is implemented like below (read the comments in the code for an explanation):
#include <iostream>
#include <type_traits>
#include <thread>
#include <memory>
#include <asio.hpp>

template <typename Handler>
class GenHandler
{
public:
    GenHandler(Handler&& h): hndler_(std::forward<Handler>(h))
    {}

    template <typename... Args>
    void operator()(Args&&... args)
    {
        std::cout << "GenHandler called" << std::endl;
        hndler_();
    }

private:
    Handler hndler_;
};

template<typename HandlerType>
GenHandler<std::decay_t<HandlerType>> create_handler(HandlerType&& h)
{
    return {std::forward<HandlerType>(h)};
}

template <typename Handler>
void SomeEvent(asio::io_service& ios, Handler& h)
{
    ios.post([=] () mutable { h(); });
}

int main() {
    asio::io_service ios;
    asio::io_service::strand strand{ios};

    auto work = std::make_unique<asio::io_service::work>(ios);
    std::thread t([&]() { ios.run(); });

    // This creates a regular handler which, when called by the
    // io_service, would first execute GenHandler::operator()
    // and inside of it call the lambda passed below.
    auto hndl = create_handler([] {
        std::cout << "Regular Handler" << std::endl;
    });
    SomeEvent(ios, hndl);

    ///-------- Example 2 ---------///
    // This creates a handler just like above, but it instead wraps a
    // strand handler, i.e. when GenHandler::operator() gets called,
    // it will execute the lambda passed to wrap() in the execution
    // context of the strand.
    auto hndl2 = create_handler(
        strand.wrap([] {
            std::cout << "Strand handler-depth 2" << std::endl;
        }));

    // This is a regular strand wrap which is passed to the
    // io_service execution context. The lambda passed to strand::wrap
    // will be executed in the execution context of the strand.
    auto str_handler = strand.wrap([=]() mutable {
        std::cout << "strand\n";
        hndl2();
    });
    SomeEvent(ios, str_handler);

    work.reset();
    t.join();
    return 0;
}
In the second example the handlers are called in the following order:
1. io_service is passed the strand::wrapped_handler, so the handler held by the wrapped_handler is executed inside the strand.
2. hndl2, which is a GenHandler holding another strand::wrapped_handler, is also called inside the strand.
3. When GenHandler::operator() is called, it executes the held strand::wrapped_handler as well; this is done by dispatching the handler held inside the strand::wrapped_handler to the strand.
NOTE: For reasons quite unclear to me, strand::wrap is deprecated. The author wants people to use bind_executor instead.
For Boost.Asio, I think the answer lies in this template function:
namespace boost_asio_handler_cont_helpers {

template <typename Context>
inline bool is_continuation(Context& context)
{
#if !defined(BOOST_ASIO_HAS_HANDLER_HOOKS)
    return false;
#else
    using boost::asio::asio_handler_is_continuation;
    return asio_handler_is_continuation(
        boost::asio::detail::addressof(context));
#endif
}

} // namespace boost_asio_handler_cont_helpers
Which, if I read it correctly, is used to detect whether there is a "context" (i.e. a strand or io_service) in which the handler is to be executed.
The code in the reactor service then switches on the result, either executing within the already-existing context or not.
In standalone asio things have changed somewhat. There is now a function to detect the associated executor of a handler (if any). I wrote the following code after consulting the author; the relevant lines are:
auto ex = asio::get_associated_executor(handler, this->get_io_service().get_executor());
and:

asio::dispatch(ex, [handler = std::move(handler), future = std::move(future)]() mutable
{
    // call the user-supplied handler
});
This is production code from a "long running task" execution service:
template<class Task, class Handler>
void async_execute(implementation& impl, Task&& task, Handler&& handler)
{
    VALUE_DEBUG_TRACE(module) << method(__func__, this);
    using task_type = std::decay_t<Task>;
    static_assert(is_callable_t<task_type, long_running_task_context>(), "");
    using result_type = std::result_of_t<task_type(long_running_task_context)>;
    using promise_type = std::promise<result_type>;
    using future_type = std::future<result_type>;
    using handler_type = std::decay_t<Handler>;
    static_assert(is_callable_t<handler_type, future_type>(), "");
    using handler_result_type = std::result_of_t<handler_type(future_type)>;

    auto ex = asio::get_associated_executor(handler, this->get_io_service().get_executor());

    if (not impl)
    {
        post(ex, [handler = std::forward<Handler>(handler)]() mutable
        {
            promise_type promise;
            promise.set_exception(std::make_exception_ptr(system_error(errors::null_handle)));
            handler(promise.get_future());
        });
        return;
    }

    auto handler_work = make_work(ex);
    auto& ios = get_io_service();
    auto impl_ptr = impl.get();
    auto async_handler = [this,
                          &ios,
                          impl_ptr,
                          handler_work, ex,
                          handler = std::forward<Handler>(handler)]
                         (detail::long_running_task_op::identifier ident,
                          auto future) mutable
    {
        assert(impl_ptr);
        VALUE_DEBUG_TRACE(module) << method("async_execute::async_handler", this, ident);
        asio::dispatch(ex, [handler = std::move(handler), future = std::move(future)]() mutable
        {
            VALUE_DEBUG_TRACE(module) << method("async_execute::completion_handler");
            handler(std::move(future));
        });
        assert(impl_ptr);
        impl_ptr->remove_op(ident);
    };
    using async_handler_type = decltype(async_handler);
    static_assert(is_callable_t<async_handler_type, detail::long_running_task_op::identifier, future_type>(), "");

    auto op = detail::long_running_task_op(std::forward<Task>(task), std::move(async_handler));
    auto ident = op.get_identifier();
    impl->add_op(ident);

    auto lock = lock_type(this->_queue_mutex);
    _ops.emplace(ident, op);
    lock.unlock();

    this->post_execute();
}

Thread pool using boost asio

I am trying to create a limited thread pool class using boost::asio, but I am stuck at one point; can someone help me?
The only problem is where I should decrease the counter: the code does not work as expected.
The problem is that I don't know when my thread will finish executing, or how I will come to know that it has returned to the pool.
#include <boost/asio.hpp>
#include <iostream>
#include <boost/thread/thread.hpp>
#include <boost/bind.hpp>
#include <boost/thread/mutex.hpp>
#include <stack>

using namespace std;
using namespace boost;

class ThreadPool
{
    static int count;
    int NoOfThread;
    thread_group grp;
    mutex mutex_;
    asio::io_service io_service;
    int counter;
    stack<thread*> thStk;

public:
    ThreadPool(int num)
    {
        NoOfThread = num;
        counter = 0;
        mutex::scoped_lock lock(mutex_);
        if(count == 0)
            count++;
        else
            return;
        for(int i=0 ; i<num ; ++i)
        {
            thStk.push(grp.create_thread(boost::bind(&asio::io_service::run, &io_service)));
        }
    }

    ~ThreadPool()
    {
        io_service.stop();
        grp.join_all();
    }

    thread* getThread()
    {
        if(counter > NoOfThread)
        {
            cout << "run out of threads \n";
            return NULL;
        }
        counter++;
        thread* ptr = thStk.top();
        thStk.pop();
        return ptr;
    }
};

int ThreadPool::count = 0;

struct callable
{
    void operator()()
    {
        cout << "some task for thread \n";
    }
};

int main( int argc, char * argv[] )
{
    callable x;
    ThreadPool pool(10);
    thread* p = pool.getThread();
    cout << p->get_id();
    // how can I assign some function to the thread pointer?
    // how can I return the thread pointer after the work is done,
    // so I can add it back to the stack?
    return 0;
}
In short, you need to wrap the user-provided task with another function that will:
Invoke the user function or callable object.
Lock the mutex and decrement the counter.
I may not be understanding all the requirements for this thread pool. Thus, for clarity, here is an explicit list of what I believe the requirements to be:
The pool manages the lifetime of the threads. The user should not be able to delete threads that reside within the pool.
The user can assign a task to the pool in a non-intrusive way.
When a task is being assigned, if all threads in the pool are currently running other tasks, then the task is discarded.
Before I provide an implementation, there are a few key points I would like to stress:
Once a thread has been launched, it will run until completion, cancellation, or termination. The function the thread is executing cannot be reassigned. To allow a single thread to execute multiple functions over the course of its life, the thread will want to launch with a function that reads from a queue, such as io_service::run(), with callable types posted into the event queue, for instance via io_service::post().
io_service::run() returns if there is no work pending in the io_service, the io_service is stopped, or an exception is thrown from a handler that the thread was running. To prevent io_service::run() from returning when there is no unfinished work, the io_service::work class can be used.
Defining the task's type requirements (i.e. the task's type must be callable by object() syntax) instead of requiring a specific type (i.e. task must inherit from process) provides more flexibility to the user. It allows the user to supply a task as either a function pointer or a type providing a nullary operator().
Implementation using boost::asio:
#include <boost/asio.hpp>
#include <boost/thread.hpp>

class thread_pool
{
private:
    boost::asio::io_service io_service_;
    boost::asio::io_service::work work_;
    boost::thread_group threads_;
    std::size_t available_;
    boost::mutex mutex_;

public:
    /// @brief Constructor.
    thread_pool( std::size_t pool_size )
        : work_( io_service_ ),
          available_( pool_size )
    {
        for ( std::size_t i = 0; i < pool_size; ++i )
        {
            threads_.create_thread( boost::bind( &boost::asio::io_service::run,
                                                 &io_service_ ) );
        }
    }

    /// @brief Destructor.
    ~thread_pool()
    {
        // Force all threads to return from io_service::run().
        io_service_.stop();

        // Suppress all exceptions.
        try
        {
            threads_.join_all();
        }
        catch ( const std::exception& ) {}
    }

    /// @brief Adds a task to the thread pool if a thread is currently available.
    template < typename Task >
    void run_task( Task task )
    {
        boost::unique_lock< boost::mutex > lock( mutex_ );

        // If no threads are available, then return.
        if ( 0 == available_ ) return;

        // Decrement count, indicating thread is no longer available.
        --available_;

        // Post a wrapped task into the queue.
        io_service_.post( boost::bind( &thread_pool::wrap_task, this,
                                       boost::function< void() >( task ) ) );
    }

private:
    /// @brief Wrap a task so that the available count can be increased once
    ///        the user provided task has completed.
    void wrap_task( boost::function< void() > task )
    {
        // Run the user supplied task.
        try
        {
            task();
        }
        // Suppress all exceptions.
        catch ( const std::exception& ) {}

        // Task has finished, so increment count of available threads.
        boost::unique_lock< boost::mutex > lock( mutex_ );
        ++available_;
    }
};
A few comments about the implementation:
Exception handling needs to occur around the user's task. If the user's function or callable object throws an exception that is not of type boost::thread_interrupted, then std::terminate() is called. This is the result of Boost.Thread's exceptions-in-thread-functions behavior. It is also worth reading about Boost.Asio's effect of exceptions thrown from handlers.
If the user provides the task via boost::bind, then the nested boost::bind will fail to compile. One of the following options is required:
Not supporting tasks created by boost::bind.
Meta-programming to perform compile-time branching based on whether or not the user's type is the result of boost::bind, so that boost::protect could be used (boost::protect only functions properly on certain function objects).
Using another type to pass the task object indirectly. I opted to use boost::function for readability at the cost of losing the exact type. boost::tuple, while slightly less readable, could also be used to preserve the exact type, as seen in Boost.Asio's serialization example.
Application code can now use the thread_pool type non-intrusively:
void work() {};

struct worker
{
    void operator()() {};
};

void more_work( int ) {};

int main()
{
    thread_pool pool( 2 );
    pool.run_task( work );                        // Function pointer.
    pool.run_task( worker() );                    // Callable object.
    pool.run_task( boost::bind( more_work, 5 ) ); // Callable object.
}
The thread_pool could also be created without Boost.Asio, which may be slightly easier for maintainers, as they no longer need to know about Boost.Asio behaviors, such as when io_service::run() returns and what the io_service::work object does:
#include <queue>
#include <boost/bind.hpp>
#include <boost/thread.hpp>

class thread_pool
{
private:
    std::queue< boost::function< void() > > tasks_;
    boost::thread_group threads_;
    std::size_t available_;
    boost::mutex mutex_;
    boost::condition_variable condition_;
    bool running_;

public:
    /// @brief Constructor.
    thread_pool( std::size_t pool_size )
        : available_( pool_size ),
          running_( true )
    {
        for ( std::size_t i = 0; i < pool_size; ++i )
        {
            threads_.create_thread( boost::bind( &thread_pool::pool_main, this ) );
        }
    }

    /// @brief Destructor.
    ~thread_pool()
    {
        // Set running flag to false then notify all threads.
        {
            boost::unique_lock< boost::mutex > lock( mutex_ );
            running_ = false;
            condition_.notify_all();
        }

        try
        {
            threads_.join_all();
        }
        // Suppress all exceptions.
        catch ( const std::exception& ) {}
    }

    /// @brief Add a task to the thread pool if a thread is currently available.
    template < typename Task >
    void run_task( Task task )
    {
        boost::unique_lock< boost::mutex > lock( mutex_ );

        // If no threads are available, then return.
        if ( 0 == available_ ) return;

        // Decrement count, indicating thread is no longer available.
        --available_;

        // Set task and signal condition variable so that a worker thread will
        // wake up and use the task.
        tasks_.push( boost::function< void() >( task ) );
        condition_.notify_one();
    }

private:
    /// @brief Entry point for pool threads.
    void pool_main()
    {
        while( running_ )
        {
            // Wait on condition variable while the task queue is empty and
            // the pool is still running.
            boost::unique_lock< boost::mutex > lock( mutex_ );
            while ( tasks_.empty() && running_ )
            {
                condition_.wait( lock );
            }

            // If pool is no longer running, break out.
            if ( !running_ ) break;

            // Copy task locally and remove from the queue. This is done within
            // its own scope so that the task object is destructed immediately
            // after running the task. This is useful in the event that the
            // function contains shared_ptr arguments bound via bind.
            {
                boost::function< void() > task = tasks_.front();
                tasks_.pop();

                lock.unlock();

                // Run the task.
                try
                {
                    task();
                }
                // Suppress all exceptions.
                catch ( const std::exception& ) {}
            }

            // Task has finished, so increment count of available threads.
            lock.lock();
            ++available_;
        } // while running_
    }
};