Throwing exception vs return code - c++

I'm implementing my own queue which blocks on .pop(). This function also accepts an additional argument, a timeout. So at the moment I have code like this:
#include <chrono>
#include <condition_variable>
#include <cstdint>
#include <mutex>
#include <queue>

template <class T>
class BlockingQueue {
private:
    std::queue<T> m_queue;
    std::mutex m_mutex;
    std::condition_variable m_condition;

public:
    T pop(uint64_t t_millis) {
        std::unique_lock<std::mutex> lock(m_mutex);
        auto status = m_condition.wait_for(
            lock,
            std::chrono::milliseconds(t_millis),
            [this] {
                return !m_queue.empty();
            }
        );
        if (!status) {
            throw exceptions::Timeout();
        }
        T next(std::move(m_queue.front()));
        m_queue.pop();
        return next;
    }
};
where exceptions::Timeout is my custom exception. Now I've been thinking about throwing this exception from a performance point of view. Would it be better to return some kind of return code from that function instead? How does that affect performance?
Also, since .pop() already returns something, how would you implement an additional return code? I suppose some new structure that holds both a T and a return code would be needed. Is that increase in complexity really worth it?

Throw exceptions when an expectation has not been met; return a status code when you're querying for status.
For example:
/// Pops an object from the stack.
/// @returns an object of type T
/// @pre there is an object on the stack
/// @exception std::logic_error if precondition not met
T pop();

/// Queries how many objects are on the stack.
/// @returns a count of objects on the stack
std::size_t object_count() const;

/// Queries the thing for the last transport error.
/// @returns the most recent error or an empty error_code
std::error_code last_error() const;
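A hypothetical call site for this split (the stack object and the surrounding names are illustrative assumptions, not part of the interface above):

    // Query first, then pop: on this path the precondition is known to
    // hold, so no exception is expected.
    if (stack.object_count() > 0) {
        T item = stack.pop();
        // use item
    } else if (auto ec = stack.last_error()) {
        // we queried for status, so we inspect the error_code instead
        // of catching anything
    }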
and then there's the asio-style reactor route coupled with executor-based futures:
/// Asynchronously wait for an event to be available on the stack.
/// The handler will be called exactly once.
/// To cancel the wait, call the cancel() method.
/// @param handler is the handler to call either on error or when
///        an item is available
/// @note Handler has the call signature void(const error_code&, T)
///
template<class Handler>
auto async_pop(Handler handler);
which could be called like this:
queue.async_pop(asio::use_future).then([](auto& f) {
    try {
        auto thing = f.get();
        // use the thing we just popped
    }
    catch (const system_error& e) {
        // e.code() indicates why the pop failed
    }
});

One way to signal an error in a situation like this, without throwing an exception, would be to use something like Andrei Alexandrescu's expected<T> template.
He gave a nice talk about it a while back. The idea is, expected<T> either contains a T, or it contains an exception / error code object describing why the T couldn't be produced.
You can use his implementation, or easily adapt the idea for your own purposes. For instance you can build such a class on top of boost::variant<T, error_code> quite easily.
This is just another style of error handling, distinct from C-style integer error codes and C++ exceptions. Using a variant type does not imply any extra dynamic allocations -- such code can be efficient and doesn't add much complexity.
This is actually pretty close to how error handling is done idiomatically in Rust (cf. its Result type).
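A minimal sketch of the idea, using C++17's std::variant in place of boost::variant (the class and member names here are illustrative, not Alexandrescu's actual interface):

    #include <stdexcept>
    #include <system_error>
    #include <utility>
    #include <variant>

    // Toy expected-like type: holds either a T or the error_code that
    // explains why no T could be produced. No dynamic allocation involved.
    template <class T>
    class expected {
        std::variant<T, std::error_code> m_value;
    public:
        expected(T value) : m_value(std::move(value)) {}
        expected(std::error_code ec) : m_value(ec) {}

        bool has_value() const { return m_value.index() == 0; }
        explicit operator bool() const { return has_value(); }

        T& value() {
            if (!has_value())
                throw std::system_error(std::get<1>(m_value));
            return std::get<0>(m_value);
        }
        std::error_code error() const {
            return has_value() ? std::error_code{} : std::get<1>(m_value);
        }
    };

pop() could then return expected&lt;T&gt; and signal a timeout without throwing: the caller tests the result and only pays for exception machinery if it chooses to call value() on an error.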

Also since .pop already returns something how would you implement additional return code? I suppose some new structure that holds both T and a return code would be needed.
Going with this approach would put an extra requirement on the types that can be used with your BlockingQueue: they must be default constructible. It can be avoided if pop() returns the result through a std::unique_ptr (signaling the timeout with a nullptr), but that will introduce noticeable overhead.
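For illustration, a sketch of the status-code route as an extra member of the question's BlockingQueue (try_pop_for is a made-up name); note how the caller has to default-construct a T up front:

    // Hypothetical companion to pop(): reports the timeout via the return
    // value and hands the item out through an out-parameter.
    bool try_pop_for(T& out, uint64_t t_millis) {
        std::unique_lock<std::mutex> lock(m_mutex);
        bool ready = m_condition.wait_for(
            lock, std::chrono::milliseconds(t_millis),
            [this] { return !m_queue.empty(); });
        if (!ready)
            return false;                  // timed out
        out = std::move(m_queue.front());  // T must also be move-assignable
        m_queue.pop();
        return true;
    }

    // Call site:
    // T item;                             // T must be default constructible
    // if (queue.try_pop_for(item, 100)) { /* use item */ }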
I see no disadvantage of using exceptions here. If you are measuring your timeouts in milliseconds, then handling an exception in case of a timeout should be negligible.

An exception is not necessary here. A "timeout" is just as expected an outcome as getting an item from the queue. Without a timeout, whether pop() ever returns reduces to the halting problem. Let's say the client specified that they want an indefinite timeout. Would the exception ever be thrown? How would you handle such an exception (assuming you're still alive in this post-apocalyptic scenario)?
Instead I find these two design choices more logical (though they're not the only ones):
Block until an item is available. Create a function named wait that polls and returns false if it times out, or true when an item is available. The rest of your pop() function can remain unchanged.
Don't block. Instead return a status:
If the operation would block, return "busy"
If the queue is empty, return "empty"
Otherwise, you can "pop" and return "success"
Since you have a mutex, either of these options seems preferable to a function that never waits at all; a rough sketch of both follows.
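A rough sketch of both options as members of the question's BlockingQueue (the status enum and function names are mine, purely for illustration):

    enum class PopStatus { Success, Empty, Busy };

    // Option 1: wait() reports the timeout; pop() itself stays unchanged.
    bool wait(uint64_t t_millis) {
        std::unique_lock<std::mutex> lock(m_mutex);
        return m_condition.wait_for(
            lock, std::chrono::milliseconds(t_millis),
            [this] { return !m_queue.empty(); });
    }

    // Option 2: never block; report a status instead.
    PopStatus try_pop(T& out) {
        std::unique_lock<std::mutex> lock(m_mutex, std::try_to_lock);
        if (!lock.owns_lock())
            return PopStatus::Busy;        // the operation would block
        if (m_queue.empty())
            return PopStatus::Empty;
        out = std::move(m_queue.front());
        m_queue.pop();
        return PopStatus::Success;
    }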

Related

C++20 coroutines using final_suspend for continuations

BACKGROUND
After being convinced that C++ stackless coroutines are pretty awesome, I have been implementing coroutines for my codebase, and noticed an oddity in final_suspend.
CONTEXT
Let’s say you have the following final_suspend function:
final_awaitable final_suspend() noexcept
{
    return {};
}
And, final_awaitable was implemented as follows:
struct final_awaitable
{
    bool await_ready() const noexcept
    {
        return false;
    }
    default_handle_t await_suspend( promise_handle_t h ) const noexcept
    {
        return h.promise().continuation();
    }
    void await_resume() const noexcept {}
};
If the continuation here was retrieved atomically from a task queue, and the task queue is potentially empty (which could occur at any time between await_ready and await_suspend), then await_suspend must be able to return a blank continuation.
It is my understanding that when await_suspend returns a handle, the returned handle is immediately resumed (5.1 in the N4775 draft). So, if no continuation is available here, the application crashes, as resume is called on an invalid coroutine handle after it is received from await_suspend.
The following is the execution order:
final_suspend Constructs final_awaitable.
final_awaitable::await_ready Returns false, triggering await_suspend.
final_awaitable::await_suspend Returns a continuation (or empty continuation).
continuation::resume This could be null if retrieved from an empty work queue.
No check appears to be specified for a valid handle (as there is when await_suspend returns bool).
QUESTION
How are you supposed to add a worker queue to await_suspend without a lock in this case? I'm looking for a scalable solution.
Why doesn't the underlying coroutine implementation check for a valid handle?
A contrived example causing the crash is here.
SOLUTION IDEAS
Using a dummy task that is an infinite loop of co_yield. This wastes cycles and I would prefer not to have to do it; also, I would need to create separate handles to the dummy task for every thread of execution, and that just seems silly.
Creating a specialisation of std::coroutine_handle where resume does nothing, and returning an instance of that handle. I'd prefer not to specialise the standard library. This also doesn't work, because coroutine_handle<> doesn't have done() and resume() as virtual functions.
EDIT 1 16/03/2020: Call continuation() to atomically retrieve a continuation and store the result in the final_awaitable structure; await_ready would return true if there wasn't a continuation available. If there was a continuation available, await_ready would return false, await_suspend would then be called, and the continuation returned (immediately resuming it).
This doesn't work because the value returned by a task is stored in the coroutine frame, and if the value is still needed, the coroutine frame must not be destroyed. In this case it is destroyed after await_resume is called on the final_awaitable.
This is only an issue if the task is the last in a chain of continuations.
EDIT 2 - 20/03/2020: Ignore the possibility of returning a usable coroutine handle from await_suspend, and only resume continuations from the top-level coroutine. This doesn't appear as efficient.
01/04/2020
I still haven't found a solution that doesn't have substantial disadvantages. I suppose the reason I'm caught up on this is that await_suspend appears to be designed to solve this exact problem (being able to return a coroutine_handle); I just cannot figure out the pattern that was intended.
You can use std::noop_coroutine as a blank continuation.
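For example, a sketch of the question's final_awaitable using it (promise_handle_t and continuation() are as defined in the question):

    #include <coroutine>

    struct final_awaitable
    {
        bool await_ready() const noexcept { return false; }

        std::coroutine_handle<> await_suspend( promise_handle_t h ) const noexcept
        {
            // continuation() is the question's atomic pop from the task queue
            if ( auto c = h.promise().continuation() )
                return c;
            // resuming a noop_coroutine is well-defined and does nothing,
            // so an empty queue no longer crashes the application
            return std::noop_coroutine();
        }

        void await_resume() const noexcept {}
    };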
What about the following? (Just a large comment, in fact.)
struct final_awaitable
{
    bool await_ready() const noexcept
    {
        return false;
    }
    bool await_suspend( promise_handle_t h ) const noexcept
    {
        auto continuation = h.promise().atomically_pop_a_continuation();
        if (continuation)
            continuation.handle().resume();
        return true; // or whatever is meaningful for your case
    }
    void await_resume() const noexcept {}
};

Implementing a simple, generic thread pool in C++11

I want to create a thread pool for experimental purposes (and for the fun factor). It should be able to process a wide variety of tasks (so I can possibly use it in later projects).
In my thread pool class I'm going to need some sort of task queue. Since the Standard Library provides std::packaged_task as of C++11, my queue will look like std::deque<std::packaged_task<?()>> task_queue, so the client can push std::packaged_tasks into the queue via some sort of public interface function (and then one of the threads in the pool will be notified with a condition variable to execute it, etc.).
My question is related to the template argument of the std::packaged_task<?()>s in the deque.
The function signature ?() should be able to deal with any type/number of parameters, because the client can do something like:
std::packaged_task<int()> t(std::bind(factorial, 342));
thread_pool.add_task(t);
So I don't have to deal with the type/number of parameters.
But what should the return value be? (hence the question mark)
If I make my whole thread pool class a template class, one instance of it will only be able to deal with tasks of a specific signature (like std::packaged_task<int()>). I want one thread pool object to be able to deal with any kind of task.
If I go with std::packaged_task<void()> and the function invoked returns an integer, or anything at all, then that's undefined behaviour.
So the hard part is that packaged_task<R()> is move-only; otherwise you could just toss it into a std::function<void()> and run those in your threads.
There are a few ways around this.
First, ridiculously, use a packaged_task<void()> to store a packaged_task<R()>. I'd advise against this, but it does work. ;) (what is the signature of operator() on packaged_task<R()>? What is the required signature for the objects you pass to packaged_task<void()>?)
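To spell out why the ridiculous version works (a sketch; factorial is the question's example function, declared but not defined here):

    #include <functional>
    #include <future>
    #include <utility>

    int factorial(int);  // as in the question, defined elsewhere

    void demo()
    {
        std::packaged_task<int()> inner(std::bind(factorial, 342));
        std::future<int> f = inner.get_future();  // grab the future before wrapping

        // inner's operator() has the signature void(), and unlike std::function,
        // packaged_task's constructor accepts move-only callables, so:
        std::packaged_task<void()> outer(std::move(inner));

        outer();   // runs the stored inner task, which fulfills f
        int result = f.get();
        (void)result;
    }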
Second, wrap your packaged_task<R()> in a shared_ptr, capture that in a lambda with signature void(), store that in a std::function<void()>, and done. This has overhead costs, but probably less than the first solution.
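The second option, sketched:

    #include <functional>
    #include <future>
    #include <memory>

    int factorial(int);  // from the question

    void demo2()
    {
        std::packaged_task<int()> pt(std::bind(factorial, 342));
        std::future<int> f = pt.get_future();

        // shared_ptr is copyable, so the lambda is copyable, so std::function
        // accepts it; the cost is one extra allocation plus reference counting.
        auto sp = std::make_shared<std::packaged_task<int()>>(std::move(pt));
        std::function<void()> fn = [sp] { (*sp)(); };
        fn();
        (void)f;
    }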
Finally, write your own move-only function wrapper. For the signature void() it is short:
#include <memory>
#include <type_traits>
#include <utility>

struct task {
    template<class F,
        class dF = std::decay_t<F>,
        class = decltype( std::declval<dF&>()() )
    >
    task( F&& f ):
        ptr(
            new dF(std::forward<F>(f)),
            [](void* p){ delete static_cast<dF*>(p); }
        ),
        invoke([](void* p){
            (*static_cast<dF*>(p))();
        })
    {}
    void operator()() const {
        invoke( ptr.get() );
    }
    task(task&&) = default;
    task& operator=(task&&) = default;
    task() = default;
    ~task() = default;
    explicit operator bool() const { return static_cast<bool>(ptr); }
private:
    // the deleter must be non-null even in the empty state: a unique_ptr
    // with a function-pointer deleter has no default constructor, so
    // task() would otherwise fail to compile
    std::unique_ptr<void, void(*)(void*)> ptr{nullptr, +[](void*){}};
    void (*invoke)(void*) = nullptr;
};
and simple. The above can store packaged_task<R()> for any type R, and invoke them later.
This has relatively minimal overhead -- it should be cheaper than std::function, at least in the implementations I've seen -- except it does not do SBO (small buffer optimization), where small function objects are stored internally instead of on the heap.
You can improve the unique_ptr<> ptr container with a small buffer optimization if you want.
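Usage might look like this (a sketch; task is the wrapper above, and factorial is the question's example function):

    #include <deque>
    #include <functional>
    #include <future>

    int factorial(int);  // from the question

    void demo()
    {
        std::deque<task> task_queue;

        std::packaged_task<int()> pt(std::bind(factorial, 342));
        std::future<int> result = pt.get_future();
        task_queue.emplace_back(std::move(pt));  // task stores the move-only callable

        // later, typically on a worker thread:
        task t = std::move(task_queue.front());
        task_queue.pop_front();
        t();                        // invokes the stored packaged_task
        int value = result.get();
        (void)value;
    }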
I happen to have an implementation which does exactly that. My way of doing things is to wrap the std::packaged_task objects in a struct which abstracts away the return type. The method which submits a task into the thread pool returns a future on the result.
This kind of works, but due to the memory allocations required for each task it is not suitable for tasks which are very short and very frequent (I tried to use it to parallelize chunks of a fluid simulation and the overhead was way too high, on the order of several milliseconds for 324 tasks).
The key part is this structure:
struct abstract_packaged_task
{
    template <typename R>
    abstract_packaged_task(std::packaged_task<R> &&task):
        m_task((void*)(new std::packaged_task<R>(std::move(task)))),
        m_call_exec([](abstract_packaged_task *instance)mutable{
            (*(std::packaged_task<R>*)instance->m_task)();
        }),
        m_call_delete([](abstract_packaged_task *instance)mutable{
            delete (std::packaged_task<R>*)(instance->m_task);
        })
    {
    }
    abstract_packaged_task(abstract_packaged_task &&other);
    ~abstract_packaged_task();
    void operator()();

    void *m_task;
    std::function<void(abstract_packaged_task*)> m_call_exec;
    std::function<void(abstract_packaged_task*)> m_call_delete;
};
As you can see, it hides away the type dependencies by using lambdas with std::function and a void*. If you know the maximum size of all possibly occurring std::packaged_task objects (I have not checked whether the size depends on R at all), you could try to further optimize this by removing the memory allocation.
The submission method into the thread pool then does this:
template <typename R>
std::future<R> submit_task(std::packaged_task<R()> &&task)
{
    assert(m_workers.size() > 0);
    std::future<R> result = task.get_future();
    {
        std::unique_lock<std::mutex> lock(m_queue_mutex);
        m_task_queue.emplace_back(std::move(task));
    }
    m_queue_wakeup.notify_one();
    return result;
}
where m_task_queue is a std::deque of abstract_packaged_task structs. m_queue_wakeup is a std::condition_variable used to wake up a worker thread to pick up the task. The worker thread implementation is as simple as:
void ThreadPool::worker_impl()
{
    std::unique_lock<std::mutex> lock(m_queue_mutex, std::defer_lock);
    while (!m_terminated) {
        lock.lock();
        while (m_task_queue.empty()) {
            m_queue_wakeup.wait(lock);
            if (m_terminated) {
                return;
            }
        }
        abstract_packaged_task task(std::move(m_task_queue.front()));
        m_task_queue.pop_front();
        lock.unlock();
        task();
    }
}
You can take a look at the full source code and the corresponding header on my github.

Pattern for future conversion

Currently we are using asynchronous values very heavily.
Assume that I have a function which does something like this:
int do_something(const boost::posix_time::time_duration& sleep_time)
{
    BOOST_MESSAGE("Sleeping a bit");
    boost::this_thread::sleep(sleep_time);
    BOOST_MESSAGE("Finished taking a nap");
    return 42;
}
At some point in the code we create a task which creates a future to such an int value, which will be set by a packaged_task, like this (working_queue is a boost::asio::io_service in this example):
boost::unique_future<int> createAsynchronousValue(const boost::posix_time::seconds& sleep)
{
    boost::shared_ptr< boost::packaged_task<int> > task(
        new boost::packaged_task<int>(boost::bind(do_something, sleep)));
    boost::unique_future<int> ret = task->get_future();
    // Trigger execution
    working_queue.post(boost::bind(&boost::packaged_task<int>::operator (), task));
    return boost::move(ret);
}
At another point in the code I want to wrap this function to return some higher-level object, which should also be a future. I need a conversion function which takes the first value and transforms it into another value. (In our actual code we have some layering and do asynchronous RPC which returns futures to responses; these responses should be converted into futures to real objects, PODs, or even a void future, so that one can wait on it or catch exceptions.) So this is the conversion function in this example:
float converter(boost::shared_future<int> value)
{
    BOOST_MESSAGE("Converting value " << value.get());
    return 1.0f * value.get();
}
Then I thought of creating a lazy future as described in the Boost docs to do this conversion only if wanted:
void invoke_lazy_task(boost::packaged_task<float>& task)
{
    try
    {
        task();
    }
    catch (boost::task_already_started&)
    {}
}
And then I have a function (might be a higher level API) to create a wrapped future:
boost::unique_future<float> createWrappedFuture(const boost::posix_time::seconds& sleep)
{
    boost::shared_future<int> int_future(createAsynchronousValue(sleep));
    BOOST_MESSAGE("Creating converter task");
    boost::packaged_task<float> wrapper(boost::bind(converter, int_future));
    BOOST_MESSAGE("Setting wait callback");
    wrapper.set_wait_callback(invoke_lazy_task);
    BOOST_MESSAGE("Creating future to converter task");
    boost::unique_future<float> future = wrapper.get_future();
    BOOST_MESSAGE("Returning the future");
    return boost::move(future);
}
At the end I want to be able to use it like this:
{
    boost::unique_future<float> future = createWrappedFuture(boost::posix_time::seconds(1));
    BOOST_MESSAGE("Waiting for the future");
    future.wait();
    BOOST_CHECK_EQUAL(future.get(), 42.0f);
}
But here I end up getting an exception about a broken promise. The reason seems pretty clear to me: the packaged_task which does the conversion goes out of scope.
So my question is: how do I deal with such situations? How can I prevent the task from being destroyed? Is there a pattern for this?
You need to manage the lifetime of the task object properly.
The most correct way is to return boost::packaged_task<float> instead of boost::unique_future<float> from createWrappedFuture(). The caller is then responsible for getting the future object and keeping the task alive until the future value is ready.
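A sketch of that first option (createWrappedTask is a hypothetical name, built from the question's own pieces):

    // The caller owns the task, so it stays alive as long as the future is needed.
    boost::packaged_task<float> createWrappedTask(const boost::posix_time::seconds& sleep)
    {
        boost::shared_future<int> int_future(createAsynchronousValue(sleep));
        boost::packaged_task<float> wrapper(boost::bind(converter, int_future));
        wrapper.set_wait_callback(invoke_lazy_task);
        return boost::move(wrapper);
    }

    // Call site:
    // boost::packaged_task<float> task = createWrappedTask(boost::posix_time::seconds(1));
    // boost::unique_future<float> future = task.get_future();
    // future.wait();  // no broken promise: the task is still alive in this scope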
Or you can place the task object into some 'pending' queue (global or a class member), the same way you did in createAsynchronousValue. But in that case you will need to explicitly manage the task's lifetime and remove it from the queue after completion. So I don't think this solution has advantages over returning the task object itself.

Is it safe to modify data of pointer in vector from another thread?

Things seem to be working but I'm unsure if this is the best way to go about it.
Basically I have an object which does asynchronous retrieval of data. This object has a vector of pointers which are allocated and de-allocated on the main thread. Using boost functions a process results callback is bound with one of the pointers in this vector. When it fires it will be running on some arbitrary thread and modify the data of the pointer.
Now I have critical sections around the parts that push into the vector and erase from it, in case the async retrieval object receives more requests, but I'm wondering if I need some kind of guard in the callback that modifies the pointer data as well.
Hopefully this slimmed down pseudo code makes things more clear:
class CAsyncRetriever
{
    // typedefs of boost functions

    class DataObject
    {
        // methods and members
    };

public:
    // Start single async retrieve with completion callback
    void Start(SomeArgs)
    {
        SetupRetrieve(SomeArgs);
        LaunchRetrieves();
    }

protected:
    void SetupRetrieve(SomeArgs)
    {
        // ...
        { // scope for data lock
            boost::lock_guard<boost::mutex> lock(m_dataMutex);
            m_inProgress.push_back(SmartPtr<DataObject>(new DataObject));
            m_callback = boost::bind(&CAsyncRetriever::ProcessResults, this, _1, m_inProgress.back());
        }
        // ...
    }

    void ProcessResults(DataObject* data)
    {
        // CALLED ON ANOTHER THREAD ... IS THIS SAFE?
        data->m_SomeMember.SomeMethod();
        data->m_SomeOtherMember = SomeStuff;
    }

    void Cleanup()
    {
        // ...
        { // scope for data lock
            boost::lock_guard<boost::mutex> lock(m_dataMutex);
            while (!m_inProgress.empty() && m_inProgress.front()->IsComplete())
                m_inProgress.erase(m_inProgress.begin());
        }
        // ...
    }

private:
    std::vector<SmartPtr<DataObject>> m_inProgress;
    boost::mutex m_dataMutex;
    // other members
};
Edit: This is the actual code for the ProcessResults callback (plus comments for your benefit)
void ProcessResults(CRetrieveResults* pRetrieveResults, CRetData* data)
{
    // pRetrieveResults is delayed binding that the server passes in when
    // invoking the callback in the thread pool
    // data is a raw pointer to the ref-counted object in the main thread's
    // vector (the DataObject* in question)

    // if there was an error, set the code on the atomic int in the object
    data->m_nErrorCode.Store_Release(pRetrieveResults->GetErrorCode());

    // generic iterator of results bindings for the generic storage class item
    TPackedDataIterator<GenItem::CBind> dataItr(&pRetrieveResults->m_DataIter);

    // namespace function which will iterate results and initialize generic
    // storage; this is potentially time consuming depending on the number of
    // results and the number of columns bound in the storage class definition
    // (about 8 seconds for a million equipment items in release)
    GenericStorage::InitializeItems<GenItem>(&data->m_items, dataItr,
                                             pRetrieveResults->m_nTotalResultsFound);

    // atomic uint32_t that is incremented when kicking off an async retrieve
    m_nStarted.Decrement(); // this one is done processing

    // boost function completion callback bound to the interface that requested results
    data->m_complete(data->m_items);
}
As it stands, it appears that the Cleanup code can destroy an object for which a callback to ProcessResults is in flight. That's going to cause problems when you deref the pointer in the callback.
My suggestion would be to extend the semantics of your m_dataMutex to encompass the callback, though if the callback is long-running, or can happen inline within SetupRetrieve (sometimes this does happen, though here you state the callback is on a different thread, in which case you are OK), then things are more complex. Currently m_dataMutex is a bit confused about whether it controls access to the vector, to its contents, or to both. With its scope clarified, ProcessResults could then verify the validity of the payload within the lock, as sketched below.
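In code, that suggestion amounts to something like this (a minimal sketch, reusing the question's members):

    void ProcessResults(DataObject* data)
    {
        // Take the same mutex that guards m_inProgress, so Cleanup() cannot
        // erase the object while the callback is touching it.
        boost::lock_guard<boost::mutex> lock(m_dataMutex);
        data->m_SomeMember.SomeMethod();
        data->m_SomeOtherMember = SomeStuff;
    }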
No, it isn't safe.
ProcessResults operates on the data structure passed to it through DataObject. It indicates that you have shared state between different threads, and if both threads operate on the data structure concurrently you might have some trouble coming your way.
Updating a pointer should be an atomic operation, but you can use InterlockedExchangePointer (on Windows) to be sure. I'm not sure what the Linux equivalent would be (a portable alternative is sketched at the end of this answer).
The only consideration then would be if one thread is using an obsolete pointer. Does the other thread delete the object pointed to by the original pointer? If so, you have a definite problem.
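As an aside (my addition, not part of the original answer): since C++11, std::atomic gives you the same atomic pointer exchange portably, so no platform-specific call is needed:

    #include <atomic>

    struct DataObject;  // as in the question

    std::atomic<DataObject*> g_current{nullptr};

    void publish(DataObject* fresh)
    {
        // atomically swap in the new pointer and get the old one back
        DataObject* old = g_current.exchange(fresh);
        // the caller still has to decide when it is safe to delete `old`
        (void)old;
    }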

Dispatching exceptions in C++

How should exceptions be dispatched so that error handling and diagnostics can be handled in a centralized, user-friendly manner?
For example:
A DataHW class handles communication with some data acquisition hardware.
The DataHW class may throw exceptions based on a number of possible errors: intermittent signal, no signal, CRC failure, driver error. Each type of error gets its own exception class.
The DataHW class is called by a number of different pieces of code that do different types of acquisition and analysis.
The proper error handling strategy depends on the type of exception and the operation being attempted. (On intermittent signal, retry X times then tell the user; on a driver error, log an error and restart the driver; etc.) How should this error handling strategy be invoked?
Coding error recovery into each exception class: This would result in exception classes that are rather large and contain high-level UI and system management code. This seems bad.
Providing a separate catch block for each type of exception: Since the DataHW class is called from many different places, each catch block would have to be duplicated at each call site. This seems bad.
Using a single catch block that calls some ExceptionDispatch function with a giant RTTI-based switch statement: RTTI and switch usually indicate a failure to apply OO design, but this seems the least bad alternative.
Avoid duplicating the catch blocks at each call site by catching (...) and calling a shared handler function which rethrows and dispatches:
void f()
{
    try
    {
        // something
    }
    catch (...)
    {
        handle();
    }
}

void handle()
{
    try
    {
        throw; // rethrow the exception currently being handled
    }
    catch (const Foo& e)
    {
        // handle Foo
    }
    catch (const Bar& e)
    {
        // handle Bar
    }
    // etc
}
An idea I keep running into is that exceptions should be caught by levels which can handle them. For example, a CRC error might be caught by the function that transmits the data, and upon catching this exception, it might try to retransmit, whereas a "no signal" exception might be caught in a higher level and drop or delay the whole operation.
But my guess is that most of these exceptions will be caught around the same function. It is a good idea to catch and handle them separately (as in solution #2), but you say this causes a lot of duplicated code (leading to solution #3).
My question is, if there is a lot of code to duplicate, why not make it into a function?
I'm thinking along the lines of...
void SendData(DataHW* data, Destination* dest)
{
    try {
        data->send(dest);
    } catch (const CRCError&) {
        // log error
        // retransmit:
        data->send(dest);
    } catch (const UnrecoverableError&) {
        throw GivingUp();
    }
}
I guess it would be like your ExceptionDispatch() function, only instead of switching on the exception type, it would wrap the exception-generating call itself and catch the exceptions.
Of course, this function is overly simplified - you might need a whole wrapper class around DataHW; but my point is, it would be a good idea to have a centralized point around which all DataHW exceptions are handled - if the way different users of the class would handle them is similar.
Perhaps you could write a wrapper class for the DataHW class?
The wrapper would offer the same functionality as the DataHW class, but also contain the needed error handling code. The benefit is that you have the error handling code in a single place (DRY principle), and all errors are handled uniformly. For example, you can translate all low-level I/O exceptions to higher-level exceptions in the wrapper.
Basically, this prevents low-level exceptions from being shown to the user.
As Butler Lampson said: All problems in computer science can be solved by another level of indirection
There are three ways I see to solve this.
Writing wrapper functions
Write a wrapper function for each function that can throw exceptions which would handle exceptions. That wrapper is then called by all the callers, instead of the original throwing function.
Using function objects
Another solution is to take a more generic approach and write one function that takes a function object and handles all exceptions. Here is some example:
class DataHW {
public:
    template<typename Function>
    bool executeAndHandle(Function f) {
        for (int tries = 0; ; tries++) {
            try {
                f(this);
                return true;
            }
            catch (CrcError& e) {
                // handle crc error
            }
            catch (IntermittentSignalError& e) {
                // handle intermittent signal
                if (tries < 3) {
                    continue;
                } else {
                    logError("Signal interruption after 3 tries.");
                }
            }
            catch (DriverError& e) {
                // restart
            }
            return false;
        }
    }

    void sendData(char const *data, std::size_t len);
    void readData(char *data, std::size_t len);
};
Now if you want to do something, you can just do it:
void doit() {
    char buf[] = "hello world";
    hw.executeAndHandle(boost::bind(&DataHW::sendData, _1, buf, sizeof buf));
}
Since you provide function objects, you can manage state too. Let's say sendData updates len so that it knows how many bytes were read. Then you can write function objects that read and write and maintain a count of how many characters have been read so far.
The downside of this second approach is that you can't access the result values of the throwing functions, since they are called from inside the function object wrappers. There is no easy way to get the result type of a function object binder. One workaround is to write a result function object that executeAndHandle calls after the wrapped function object succeeds. But if we put too much work into this second approach just to make the housekeeping work, it isn't worth the results anymore.
Combining the two
There is a third option too. We can combine the two solutions (wrapper and function objects).
class DataHW {
public:
    template<typename R, typename Function>
    R executeAndHandle(Function f) {
        for (int tries = 0; ; tries++) {
            try {
                return f(this);
            }
            catch (CrcError& e) {
                // handle crc error
            }
            catch (IntermittentSignalError& e) {
                // handle intermittent signal
                if (tries < 3) {
                    continue;
                } else {
                    logError("Signal interruption after 3 tries.");
                }
            }
            catch (DriverError& e) {
                // restart
            }
            // return a sensible default. for bool, that's false. for other
            // integer types, it's zero.
            return R();
        }
    }

    T sendData(char const *data, std::size_t len) {
        return executeAndHandle<T>(
            boost::bind(&DataHW::doSendData, _1, data, len));
    }

    // say it returns something for this example
    T doSendData(char const *data, std::size_t len);
    T doReadData(char *data, std::size_t len);
};
};
The trick is the return f(); pattern. We can return even when f returns void. This eventually would be my favorite, since it allows both to keep handle code central at one place, but also allows special handling in the wrapper functions. You can decide whether it's better to split this up and make an own class that has that error handler function and the wrappers. Probably that would be a cleaner solution (i think of Separation of Concerns here. One is the basic DataHW functionality and one is the error handling).