Developing C++ concurrency library with "futures" or similar paradigm

I'm working on a C++ project that needs to run many jobs in a threadpool. The jobs are failure-prone, which means that I need to know how each job terminated after it completes. Being a Java programmer for the most part, I like the idea of using "futures" or a similar paradigm, akin to the various classes in Java's util.concurrent package.
I have two questions: first, does something like this already exist for C++ (I haven't found anything in Boost, but maybe I'm not looking hard enough); and second, is this even a sane idea for C++?
I found a brief example of what I'm trying to accomplish here:
http://www.boostcookbook.com/Recipe:/1234841
Does this approach make sense?

Futures are present both in the upcoming standard (C++0x) and in Boost. Note that while the main name, future, is the same, you will need to read the documentation to locate the other types and to understand the semantics. I don't know Java futures, so I cannot tell you where they differ, if they do.
The Boost library was written by Anthony Williams, who I believe was also involved in the definition of that part of the standard. He has also written C++ Concurrency in Action, which includes a good description of futures, tasks, promises and related objects. His company also sells a complete and up-to-date implementation of the C++0x threading libraries, if you are interested.
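If it helps to see the shape of the API, here is a minimal sketch using Boost.Thread (the value and thread body are arbitrary):

#include <boost/thread.hpp>
#include <iostream>

int main()
{
    boost::promise<int> p;
    boost::unique_future<int> f = p.get_future();

    // A worker thread fulfills the promise when its job completes.
    boost::thread worker([&p]{ p.set_value(42); });

    std::cout << f.get() << std::endl; // blocks until the result is ready
    worker.join();
}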

Boost has futures and other threading tools implemented.
Note that when you call the get() method on a boost::unique_future it will re-throw any exception that might have been stored inside it during asynchronous execution.
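For example, a minimal sketch (failing_job is a hypothetical task) of how get() re-throws:

#include <boost/thread.hpp>
#include <iostream>
#include <stdexcept>

int failing_job()
{
    throw std::runtime_error("job failed");
}

int main()
{
    boost::packaged_task<int> task(failing_job);
    boost::unique_future<int> f = task.get_future();

    task(); // the exception escapes the job and is stored in the future

    try
    {
        f.get(); // the stored exception is re-thrown here
    }
    catch(const std::exception& e)
    {
        std::cout << "caught: " << e.what() << std::endl;
    }
}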
I would suggest you do something like:
#pragma once

#include <tbb/concurrent_queue.h>

#include <boost/thread.hpp>
#include <boost/noncopyable.hpp>

#include <functional>

namespace internal
{
    // Adaptor that moves its payload on "copy". Required because the queue and
    // std::function need copyable payloads, while packaged_task is move-only.
    template<typename T>
    struct move_on_copy
    {
        move_on_copy(const move_on_copy<T>& other) : value(std::move(other.value)){}
        move_on_copy(T&& value) : value(std::move(value)){}

        mutable T value;
    };

    template<typename T>
    move_on_copy<T> make_move_on_copy(T&& value)
    {
        return move_on_copy<T>(std::move(value));
    }
}

class executor : boost::noncopyable
{
    boost::thread thread_;
    tbb::concurrent_bounded_queue<std::function<void()>> execution_queue_;

    template<typename Func>
    auto create_task(Func&& func) -> boost::packaged_task<decltype(func())> // noexcept
    {
        typedef boost::packaged_task<decltype(func())> task_type;

        auto task = task_type(std::forward<Func>(func));

        // The std::function wrapper is required in order to add ::result_type to the functor class.
        task.set_wait_callback(std::function<void(task_type&)>([=](task_type& my_task)
        {
            try
            {
                if(boost::this_thread::get_id() == thread_.get_id()) // Avoids potential deadlock.
                    my_task();
            }
            catch(boost::task_already_started&){}
        }));

        return std::move(task);
    }

public:
    explicit executor() // noexcept
    {
        thread_ = boost::thread([this]{run();});
    }

    ~executor() // noexcept
    {
        execution_queue_.push(nullptr); // Wake the execution thread.
        thread_.join();
    }

    template<typename Func>
    auto begin_invoke(Func&& func) -> boost::unique_future<decltype(func())> // noexcept
    {
        // Use a move-on-copy adaptor to avoid copying the task into the queue;
        // tbb::concurrent_queue does not support move semantics.
        auto task_adaptor = internal::make_move_on_copy(create_task(std::forward<Func>(func)));

        auto future = task_adaptor.value.get_future();

        execution_queue_.push([=]
        {
            try{task_adaptor.value();}
            catch(boost::task_already_started&){}
        });

        return std::move(future);
    }

    template<typename Func>
    auto invoke(Func&& func) -> decltype(func()) // noexcept
    {
        if(boost::this_thread::get_id() == thread_.get_id()) // Avoids potential deadlock.
            return func();

        return begin_invoke(std::forward<Func>(func)).get();
    }

private:
    void run() // noexcept
    {
        while(true)
        {
            std::function<void()> func;
            execution_queue_.pop(func);
            if(!func)
                break;
            func();
        }
    }
};
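A hypothetical usage of the executor above:

executor ex;

auto f = ex.begin_invoke([]{ return 42; });
int i = f.get(); // blocks until the task has run; re-throws if the task threw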

C++ templates are less restrictive than Java generics, so 'Future's could readily be ported using templates and thread synchronization primitives. As for existing libraries which support such a mechanism, hopefully someone else knows of one.

Related

Use boost phoenix lambda with io_service

I am using boost io_service to run methods asynchronously:
void my_class::completion_handler()
{
    ...
}

m_io_service.post(boost::bind(&my_class::completion_handler, this));
I would like to use a lambda expression instead of boost::bind (see below), in order to avoid creating a method for each handler, but I am using a C++ compiler that does not fully support C++11:
m_io_service.post([this](){ ... });
Is it possible to have the same behavior by using a Phoenix lambda?
Thank you.
Yes that's possible.
The most notable difference is the placeholders (don't use std::placeholders::_1, _2... but boost::phoenix::arg_names::arg1, arg2...).
However, simply replacing boost::bind with std::bind, boost::lambda::bind or boost::phoenix::bind is ultimately useless, of course.
Instead you could use Phoenix actors to compose "lambdas", like e.g.
namespace phx = boost::phoenix;

boost::mutex mx;
boost::condition_variable cv;

boost::unique_lock<boost::mutex> lk(mx);
cv.wait(lk, phx::ref(m_queue_size) > 0);
Member invocations are tricky in that respect.
The good news is that Phoenix comes with implementations of many STL operations like size(), empty(), push_back() etc.
Similar use of Phoenix in this queue implementation: Boost group_threads Maximal number of parallel thread, and e.g. asio::io_service and thread_group lifecycle issue.
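For example, a minimal sketch (vec and io_service are hypothetical here) posting a composed Phoenix actor to an io_service:

#include <boost/asio.hpp>
#include <boost/phoenix.hpp>
#include <vector>

namespace phx = boost::phoenix;

int main()
{
    std::vector<int> vec;
    boost::asio::io_service io_service;

    // The actor is built here but evaluated lazily when io_service runs it.
    io_service.post(phx::push_back(phx::ref(vec), 42));
    io_service.run();
}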
boost::phoenix::function<>
You can adapt free functions with BOOST_PHOENIX_ADAPT_FUNCTION and function objects with BOOST_PHOENIX_ADAPT_CALLABLE. However, in the latter case it's probably more elegant to use boost::phoenix::function<>:
struct MyType {
    MyType()
      : complete_(complete_f { this })
    { }

    void doSomething() { }

private:
    struct complete_f {
        MyType* _this;

        void operator()() const {
            // do something with _this, e.g.
            _this->doSomething();
        }
    };

    boost::phoenix::function<complete_f> complete_;
};
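A hedged usage sketch, from inside a MyType member function (m_io_service is hypothetical): calling complete_() produces a lazily-evaluated nullary actor, which can be posted directly:

// complete_() does not run complete_f yet; it builds an actor that
// io_service will invoke later, which then calls complete_f::operator()().
m_io_service.post(complete_());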

Emulate C# lock statement in C++

Intro: For synchronization, C# offers the System.Threading.Monitor class, offering thread synchronization routines such as Enter(), Exit(), TryEnter() and the like.
Furthermore, there is the lock statement that makes sure a lock gets destroyed when a critical code block is left, either by normal execution flow or by an exception:
private static readonly object obj = new Object();
lock(obj) {
...
}
Problem: In C++, for this purpose, we have the RAII wrappers std::lock_guard and std::unique_lock, which are applied not to Monitor classes but to types fulfilling the Lockable concept. However, I consider this approach syntactically weaker than the way C# implemented it, for several reasons:
You pollute the local scope with a variable name that cannot be reused. This can be countered by adding new scopes like
{
std::unique_lock<std::mutex> lck{ mtx };
...
}
But I find this notation rather awkward-looking. What troubles me even more is that this is valid C++:
std::unique_lock<std::mutex>(mtx); // note there is no name for the lock!
...
So by forgetting to give a proper name to the lock guard, this statement is interpreted as the declaration of a default-constructed variable named "mtx" of type std::unique_lock<std::mutex>, without anything being locked!
I want to implement something like the lock statement from C# in C++. In C++17, this can be accomplished very easily:
#define LOCK(mutex) if(std::lock_guard<decltype(mutex)> My_Lock_{ mutex }; true)
std::mutex mtx;
LOCK(mtx) {
...
}
Q: How can I implement this in C++11/14?
Putting aside the "should you do this", here's how:
While it's not quite the same, since it requires a semi-colon, it's near enough that I feel I may present it. This pure C++14 solution basically just defines the macro to start a lambda which is immediately executed:
#include <mutex>
#include <utility>

template<typename MTX>
struct my_lock_holder {
    MTX& mtx;
    my_lock_holder(MTX& m) : mtx{m} {}
};

template<typename MTX, typename F>
void operator+(my_lock_holder<MTX>&& h, F&& f) {
    std::lock_guard<MTX> guard{h.mtx};
    std::forward<F>(f)();
}

#define LOCK(mtx) my_lock_holder<decltype(mtx)>{mtx} + [&]
The my_lock_holder just nabs the mutex reference for later, and allows us to overload operator+. The idea is that the operator creates the guard and executes the lambda. As you can see, the macro ends with a default reference capture, so the lambda will be able to reference anything in the enclosing scope. Then it's pretty much straightforward:
std::mutex mtx;
LOCK(mtx) {
}; // Note the semi-colon
Inspired by StoryTeller's great idea, I think I found a viable solution myself, despite being somewhat a "hack":
template <typename T>
struct Weird_lock final : private std::lock_guard<T> {
    bool flip;

    Weird_lock(T& m) : std::lock_guard<T>{ m }, flip{ true } { }

    operator bool() noexcept {
        bool old = flip;
        flip = false;
        return old;
    }
};

#define LOCK(mutex) for(Weird_lock<decltype(mutex)> W__l__{ mutex }; W__l__;)
The good thing is that it doesn't need a semicolon at the end. The bad thing is the need for an additional bool, but from what I can see on godbolt.org, the compiler optimizes it out anyway.
I suggest you do:
// Two-level concatenation so that __COUNTER__ is expanded before pasting.
#define CONCAT_IMPL(a, b) a##b
#define CONCAT(a, b) CONCAT_IMPL(a, b)
#define UNIQUE_NAME(name) CONCAT(name, __COUNTER__)
#define LOCK(mutex) std::lock_guard<decltype(mutex)> UNIQUE_NAME(My_Lock){ mutex };
Using the __COUNTER__ preprocessor symbol will generate a unique variable name that you simply don't care about.
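A hypothetical usage; note that, unlike the C# statement, this guard lives until the end of the enclosing scope rather than a dedicated block:

std::mutex mtx;

void f()
{
    LOCK(mtx) // expands to something like: std::lock_guard<decltype(mtx)> My_Lock0{ mtx };
    // critical section: everything from here to the end of the scope
}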

Scope-based locking wrapper for a class

I'm not too experienced with multithreaded programming, but I've come up with the following and I'm wondering whether there are any obvious problems I've overlooked in my naivety.
I have a resource (in my case a drawing surface) that can only safely be used by one thread at a time. To enforce this, I have the following pair of classes:
#include <memory>
#include <mutex>
#include <type_traits>
#include <utility>

template <typename Surface>
class locked_surface; // forward declaration, needed by lockable

template <typename Surface>
class lockable
{
public:
    template <typename... Args,
              typename = std::enable_if_t<std::is_constructible<Surface, Args...>::value>>
    lockable(Args&&... args)
        : surface_(std::forward<Args>(args)...)
    {}

    locked_surface<Surface> lock()
    {
        return locked_surface<Surface>{*this};
    }

private:
    Surface surface_;
    // Heap-allocate the mutex to allow the class to be moveable
    std::unique_ptr<std::mutex> mutex_ = std::make_unique<std::mutex>();

    friend class locked_surface<Surface>;
};

template <typename Surface>
class locked_surface
{
public:
    // Construct wrapper, obtaining lock
    explicit locked_surface(lockable<Surface>& lockable_)
        : surface_(lockable_.surface_),
          lock_(*lockable_.mutex_)
    {}

    // Wrap the Surface API
    void move_to(point2f p) { surface_.move_to(p); }
    void line_to(point2f p) { surface_.line_to(p); }
    /* Other Surface API functions... */

private:
    Surface& surface_;
    std::unique_lock<std::mutex> lock_;
};
The idea is that you wrap a surface up in a lockable<>, and then call the lock() member function to obtain exclusive access to something which implements the Surface API, and forwards all its calls to the real surface. The duration of the lock is controlled by the lifetime of the returned wrapper using RAII. For example:
lockable<cairo_pixmap_surface> ls{/*args...*/};

auto draw_shape = [&ls] (auto&& shape) {
    auto surface = ls.lock(); // blocks until surface is available
    shape.draw(surface);      // lock is released even if shape.draw() throws
};

std::thread t1(draw_shape, triangle{});
std::thread t2(draw_shape, circle{});
t1.join();
t2.join();
This seems like a simple, elegant and C++-y solution to the problem. It works well in my tests, but testing multithreaded stuff is tricky: things happen in real life that are hard to simulate. Like I said, I'm a bit of a novice when it comes to multithreading in general so I'd appreciate any advice, specifically:
Are there any obvious problems with the above that I've overlooked?
Is this RAII-controlled locking wrapper idea a common pattern? If so, are there any good links to read up on it?

How to schedule member function for execution in boost::threadpool

I found a threadpool which doesn't seem to be in boost yet, but I may be able to use it for now (unless there is a better solution).
I have several million small tasks that I want to execute concurrently and I wanted to use a threadpool to schedule the execution of the tasks. The documentation of the threadpool provides (roughly) this example:
#include "threadpool.hpp"
using namespace boost::threadpool;
// A short task
void task()
{
// do some work
}
void execute_with_threadpool(int poolSize, int numTasks)
{
// Create a thread pool.
pool tp(poolSize);
for(int i = 0; i++; i < numTasks)
{
// Add some tasks to the pool.
tp.schedule(&task);
}
// Leave this function and wait until all tasks are finished.
}
However, the example only allows me to schedule non-member functions (or tasks). Is there a way that I can schedule a member function for execution?
Update:
OK, supposedly the library allows you to schedule a Runnable for execution, but I can't figure out where is the Runnable class that I'm supposed to inherit from.
template<typename Pool, typename Runnable>
bool schedule(Pool& pool, shared_ptr<Runnable> const & obj);
Update2:
I think I found out what I need to do: I have to make a runnable which will take any parameters that would be necessary (including a reference to the object that has a function which will be called), then I use the static schedule function to schedule the runnable on the given threadpool:
class Runnable
{
private:
    MyClass* _target;
    Data*    _data;

public:
    Runnable(MyClass* target, Data* data)
    {
        _target = target;
        _data   = data;
    }

    ~Runnable(){}

    void run()
    {
        _target->doWork(_data);
    }
};
Here is how I schedule it within MyClass:
void MyClass::doWork(Data* data)
{
    // do the work
}

void MyClass::produce()
{
    boost::threadpool::schedule(myThreadPool, boost::shared_ptr<Runnable>(new Runnable(myTarget, new Data())));
}
However, the adaptor from the library has a bug in it:
template<typename Pool, typename Runnable>
bool schedule(Pool& pool, shared_ptr<Runnable> const & obj)
{
    return pool->schedule(bind(&Runnable::run, obj));
}
Note that it takes a reference to a Pool but it tries to call it as if it was a pointer to a Pool, so I had to fix that too (just changing the -> to a .).
To schedule any function or member function, use Boost.Bind or Boost.Lambda (in this order). Also, you can consider special libraries for your situation. I can recommend Intel Threading Building Blocks or, in case you use VC2010, the Microsoft Parallel Patterns Library.
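For example, a sketch of scheduling a member function with Boost.Bind (MyClass is from the question; the pool size is arbitrary, and I'm assuming the pool's schedule() accepts any nullary functor):

MyClass myObject;
boost::threadpool::pool tp(4);

// bind packages the member function and its object pointer into the
// nullary functor that schedule() expects.
tp.schedule(boost::bind(&MyClass::produce, &myObject));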
EDIT:
I've never used this library nor heard anything bad about it, but it's been around long enough and still isn't included in Boost. I would check why.
EDIT 2:
Another option - Boost.Asio. It's primarily a networking library, but it has a scheduler that you can use. I would use this multithreading approach. Just instead of using asynchronous network operations schedule your tasks by boost::asio::io_service::post().
(This self-answer repeated the Runnable-based solution and the schedule() adaptor fix from Update 2 above.)
However, as it turns out, I can't use that boost thread pool because I am mixing native C++ (dll), C++/CLI (dll) and .NET code: I have a C++/CLI library that wraps a native C++ library which in turn uses boost::thread. Unfortunately, that results in a BadImageFormatException at runtime (which has previously been discussed by other people):
The problem is that the static boost thread library tries to hook the
native win32 PE TLS callbacks in order to ensure that the thread-local
data used by boost thread is cleaned up correctly. This is not
compatible with a C++/CLI executable.
This solution is what I was able to implement using the information from http://think-async.com/Asio/Recipes. I tried implementing this recipe and found that the code worked on Windows but not on Linux. I was unable to figure out the problem, but searching the internet turned up the key, which was to make the work object an auto pointer within a code block. I've included the void task() that the user wanted in my example; I was able to create a convenience function and pass pointers into the function that does the work. For my case, I create a thread pool that uses boost::thread::hardware_concurrency() to get the possible number of threads. I've used the recipe below with as many as 80 tasks on 15 threads.
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/thread.hpp>
#include <boost/scoped_ptr.hpp>

// A short task
void task()
{
    // do some work
}

void execute_with_threadpool( int numTasks,
                              int poolSize = boost::thread::hardware_concurrency() )
{
    boost::asio::io_service io_service;
    boost::thread_group threads;
    {
        // The work object keeps io_service::run() from returning while tasks are posted.
        boost::scoped_ptr< boost::asio::io_service::work > work( new boost::asio::io_service::work(io_service) );

        for(int t = 0; t < poolSize; t++)
        {
            threads.create_thread(boost::bind(&boost::asio::io_service::run, &io_service));
        }

        for(int t = 0; t < numTasks; t++)
        {
            io_service.post(boost::bind(task));
        }
    }
    // The work object is destroyed here, so run() returns once the queue drains.
    threads.join_all();
}
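For example, to run 80 tasks on the default number of hardware threads:

execute_with_threadpool(80);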
Figured it out: you must have a run() method defined; this is the easiest way:
class Command
{
public:
    Command() {}
    ~Command() {}
    void run() {}
};
In main(), tp is your threadpool:
shared_ptr<Command> pc(new Command());
tp.schedule(bind(&Command::run, pc));
Done.

Lightweight wrapper - is this a common problem and if yes, what is its name?

I have to use a library that makes database calls which are not thread-safe. Also I occasionally have to load larger amounts of data in a background thread.
It is hard to say which library functions actually access the DB, so I think the safest approach for me is to protect every library call with a lock.
Let's say I have a library object:
dbLib::SomeObject someObject;
Right now I can do something like this:
dbLib::ErrorCode errorCode = 0;
std::list<dbLib::Item> items;
{
DbLock dbLock;
errorCode = someObject.someFunction(&items);
} // dbLock goes out of scope
I would like to simplify that to something like this (or even simpler):
dbLib::ErrorCode errorCode =
protectedCall(someObject, &dbLib::SomeObject::someFunction(&items));
The main advantage of this would be that I won't have to duplicate the interface of dbLib::SomeObject in order to protect each call with a lock.
I'm pretty sure that this is a common pattern/idiom, but I don't know its name or what keywords to search for. (Looking at http://www.vincehuston.org/dp/gof_intents.html, I think it's more an idiom than a pattern.)
Where do I have to look for more information?
You could make protectedCall a template function that takes a functor without arguments (meaning you'd bind the arguments at the call-site), then creates a scoped lock, calls the functor, and returns its value. For example, something like:
template <typename Ret>
Ret protectedCall(boost::function<Ret ()> func)
{
    DbLock lock;
    return func();
}
You'd then call it like this:
dbLib::ErrorCode errorCode = protectedCall<dbLib::ErrorCode>(boost::bind(&dbLib::SomeObject::someFunction, &someObject, &items));
(Note that the object must be bound too, and the return type cannot be deduced from the bind expression, so it is given explicitly.)
EDIT. In case you're using C++0x, you can use std::function and std::bind instead of the boost equivalents.
In C++0x, you can implement some form of decorators:
template <typename F>
auto protect(F&& f) -> decltype(f())
{
    DbLock lock;
    return f();
}
usage:
dbLib::ErrorCode errorCode = protect([&]()
{
    return someObject.someFunction(&items);
});
From your description this would seem a job for the Decorator Pattern.
However, especially in the case of resources, I wouldn't recommend using it.
The reason is that, in general, these functions tend to scale badly, require higher-level (less fine-grained) locking for consistency, or return references to internal structures that require the lock to stay held until all the information is read.
Think, e.g., of a DB function that calls a stored procedure returning a BLOB (stream) or a ref cursor: the stream should not be read outside of the lock.
What to do?
I recommend instead to use the Facade Pattern. Instead of composing your operations directly in terms of DB calls, implement a facade that uses the DB layer; this layer can then manage the locking at exactly the required level (and optimize where needed: you could have the facade be implemented as a thread-local Singleton, using separate resources and obviating the need for locks, for example).
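A minimal sketch of that facade idea, assuming the DbLock and dbLib types from the question (db_facade, load_items and the member names are hypothetical):

#include <list>

// Callers use the facade instead of dbLib directly, so the locking policy
// lives in exactly one place and covers a whole logical operation.
class db_facade
{
public:
    std::list<dbLib::Item> load_items(dbLib::ErrorCode& errorCode)
    {
        DbLock dbLock; // one lock per logical operation, not per library call
        std::list<dbLib::Item> items;
        errorCode = someObject_.someFunction(&items);
        return items;
    }

private:
    dbLib::SomeObject someObject_;
};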
The simplest (and still straightforward) solution might be to write a function which returns a proxy for the object. The proxy does the locking and overloads -> to allow calling the object. Here is an example:
#include <cstdio>

template<class T>
class call_proxy
{
    T &item;
public:
    call_proxy(T &t) : item(t) { puts("LOCK"); }
    T *operator -> () { return &item; }
    ~call_proxy() { puts("UNLOCK"); }
};

template<class T>
call_proxy<T> protect(T &t)
{
    return call_proxy<T>(t);
}
Here's how to use it:
class Intf
{
public:
    void function()
    {
        puts("foo");
    }
};

int main()
{
    Intf a;
    protect(a)->function();
}
The output should be:
LOCK
foo
UNLOCK
If you want the lock to happen before the evaluation of the arguments, then you can use this macro:
#define PCALL(X,APPL) (protect(X), (X).APPL)
PCALL(x, function());
This evaluates x twice, though.
Andrei Alexandrescu has a pretty interesting article on how to create this kind of thin wrapper and combine it with the dreaded volatile keyword for thread safety.
Mutex locking is a similar problem. I asked for help with it here: Need some feedback on how to make a class "thread-safe"
The solution I came up with was a wrapper class that prevents access to the protected object. Access can be obtained via an "accessor" class. The accessor will lock the mutex in its constructor and unlock it on destruction. See the "ThreadSafe" and "Locker" classes in Threading.h for more details.