Asynchronous HTTP requests with co_await - C++

I have multiple std::functions that are called on the main thread (not on different threads) in my app, as the result of asynchronous HTTP requests, for example:
namespace model { struct Order{}; struct Trade{};}
std::function<void (std::string)> func1 = [](std::string http_answer)
{
    std::vector<model::Order> orders = ParseOrders(http_answer);
    std::cout << "Fetched " << orders.size() << " open/closed orders.\n";
};
std::function<void (std::string)> func2 = [](std::string http_answer)
{
    std::vector<model::Trade> trades = ParseTrades(http_answer);
    std::cout << "Fetched " << trades.size() << " trades.\n";
};
How can I call process_result once both func1 and func2 have parsed their HTTP answers?
auto process_result = [](std::vector<model::Order> orders, std::vector<model::Trade> trades)
{
    std::cout << "Matching orders and trades.";
};
Is there some solution with co_await or something like this?

You need some kind of synchronization point. I haven't used co_await so far, so this might not be what you are looking for, but in C++17 I'd go for a std::promise / std::future, maybe like this:
#include <iostream>
#include <functional>
#include <future>
std::promise<std::string> p1;
std::function<void (std::string)> func1 = [](std::string http_answer)
{
    // std::vector<model::Order> orders = ParseOrders(http_answer);
    // std::cout << "Fetched " << orders.size() << " open/closed orders.\n";
    p1.set_value(http_answer);
};

std::promise<std::string> p2;
std::function<void (std::string)> func2 = [](std::string http_answer)
{
    // std::vector<model::Trade> trades = ParseTrades(http_answer);
    // std::cout << "Fetched " << trades.size() << " trades.\n";
    p2.set_value(http_answer);
};
int main () {
    // whenever that happens...
    func1("foo");
    func2("bar");
    // synchronize on func1 and func2 finished
    auto answer1 = p1.get_future().get();
    auto answer2 = p2.get_future().get();
    auto process_result = [&](/* std::vector<model::Order> orders, std::vector<model::Trade> trades */)
    {
        std::cout << "Matching orders and trades... " << answer1 << answer2;
    };
    process_result();
    return 0;
}
http://coliru.stacked-crooked.com/a/3c74f00125999fb6
https://en.cppreference.com/w/cpp/thread/future

Related

Asio How to write a custom AsyncStream?

I have actually managed to write a working AsyncStream. However, I am not really sure if I did it the way it is supposed to be done.
My main question is: Which executor is the get_executor() function supposed to return?
While implementing it several questions arose. I tagged them with Q<index>:. (I will keep the index stable in case of edits.) I would appreciate answers to them.
I tried to shorten/simplify the example as much as possible. It does compile and execute correctly.
#include <iostream>
#include <syncstream>
#include <thread>
#include <coroutine>
#include <future>
#include <random>
#include <string>
#include <memory>
#include <boost/asio.hpp>
#include <boost/asio/experimental/as_tuple.hpp>
#include <fmt/format.h>

inline std::osyncstream tout(const std::string & tag = "") {
    auto hash = std::hash<std::thread::id>{}(std::this_thread::get_id());
    auto hashStr = fmt::format("T{:04X} ", hash >> (sizeof(hash) - 2) * 8); // only display 2 bytes
    auto stream = std::osyncstream(std::cout);
    stream << hashStr;
    if (not tag.empty())
        stream << tag << " ";
    return stream;
}

namespace asio = boost::asio;

template <typename Executor>
requires asio::is_executor<Executor>::value // Q1: Is this the correct way to require that Executor actually is an executor?
                                            // I can't replace typename as there is no concept for Executors.
class Service : public std::enable_shared_from_this<Service<Executor>> {
    template<typename CallerExecutor, typename ServiceExecutor>
    // requires asio::is_executor<CallerExecutor>::value && asio::is_executor<ServiceExecutor>::value
    friend class MyAsyncStream;

    /// Data sent to the service
    std::string bufferIn;
    /// Data produced by the service
    std::string bufferOut;
    /// The strand used to avoid concurrent execution if the passed executor is backed by multiple threads.
    asio::strand<Executor> strand;
    /// Used to slow the data consumption and generation
    asio::steady_timer timer;
    /// Used to generate data
    std::mt19937 gen;
    /// https://stackoverflow.com/a/69753502/4479969
    constexpr static const char charset[] =
        "0123456789"
        "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
        "abcdefghijklmnopqrstuvwxyz";

    template<typename URBG>
    static std::string gen_string(std::size_t length, URBG &&g) {
        std::string result;
        result.resize(length);
        std::sample(std::cbegin(charset),
                    std::cend(charset),
                    std::begin(result),
                    std::intptr_t(length),
                    std::forward<URBG>(g));
        return result;
    }

    static const constexpr auto MAX_OPS = 7;

    asio::awaitable<void> main(std::shared_ptr<Service> captured_self) {
        const constexpr auto TAG = "SrvCo";
        auto exe = co_await asio::this_coro::executor;
        auto use_awaitable = asio::bind_executor(exe, asio::use_awaitable);
        for (size_t ops = 0; ops < MAX_OPS; ops++) {
            timer.expires_after(std::chrono::milliseconds(1000));
            co_await timer.async_wait(use_awaitable);
            tout(TAG) << "Ops " << ops << std::endl;
            bufferOut += gen_string(8, gen);
            tout(TAG) << "Produced: " << bufferOut << std::endl;
            auto consumed = std::string_view(bufferIn).substr(0, 4);
            tout(TAG) << "Consumed: " << consumed << std::endl;
            bufferIn.erase(0, consumed.size());
        }
        tout(TAG) << "Done" << std::endl;
    }

    std::once_flag initOnce;
public:
    explicit Service(Executor && exe) : strand{asio::make_strand(exe)}, timer{exe.context()} {}

    void init() {
        std::call_once(initOnce, [this]() {
            asio::co_spawn(strand, main(this->shared_from_this()), asio::detached);
        });
    }
};
/// https://www.boost.org/doc/libs/1_66_0/doc/html/boost_asio/reference/AsyncReadStream.html
template<typename CallerExecutor, typename ServiceExecutor>
// requires asio::is_executor<CallerExecutor>::value && asio::is_executor<ServiceExecutor>::value // Q2: Q1 is working, why isn't this working with two types?
class MyAsyncStream {
    typedef void async_rw_handler(boost::system::error_code, size_t);
    /// Holds the caller's executor.
    /// Q3: Should this field even exist?
    CallerExecutor executor;
    /// Use a weak_ptr to behave like a file descriptor.
    std::weak_ptr<Service<ServiceExecutor>> serviceRef;
public:
    explicit MyAsyncStream(std::shared_ptr<Service<ServiceExecutor>> & service, CallerExecutor & exe) : executor{exe}, serviceRef{service} {}

    /// Needed by the stream specification.
    typedef CallerExecutor executor_type;

    /**
     * Q4: Which executor should this function return? The CallerExecutor or the ServiceExecutor or something different?
     * In this example it is never called. However it is needed by the stream specification. https://www.boost.org/doc/libs/1_79_0/doc/html/boost_asio/reference/AsyncReadStream.html
     * I really don't want to leak the ServiceExecutor to library users.
     * @return Returns the executor supplied in the constructor.
     */
    auto get_executor() {
        tout() << "GETTING EXE" << std::endl;
        return executor;
    }

    template<typename MutableBufferSequence,
             asio::completion_token_for<async_rw_handler>
             CompletionToken = typename asio::default_completion_token<CallerExecutor>::type>
    requires asio::is_mutable_buffer_sequence<MutableBufferSequence>::value
    auto async_read_some(const MutableBufferSequence &buffer,
                         CompletionToken &&token = typename asio::default_completion_token<CallerExecutor>::type()) {
        return asio::async_initiate<CompletionToken, async_rw_handler>([&](auto completion_handler) { // Q5: Can I avoid this async_initiate somehow?
            BOOST_ASIO_READ_HANDLER_CHECK(CompletionToken, completion_handler) type_check; // I tried using co_spawn directly without success.
            asio::co_spawn(
                asio::get_associated_executor(completion_handler), // Q6-1: should I use get_executor() here? Currently, I just get the caller's executor.
                [&, buffer = std::move(buffer), completion_handler = std::forward<CompletionToken>(completion_handler)]
                () mutable -> asio::awaitable<void> {
                    const constexpr auto TAG = "ARS";
                    auto callerExe = co_await asio::this_coro::executor;
                    auto to_caller = asio::bind_executor(callerExe, asio::use_awaitable);
                    auto service = serviceRef.lock();
                    if (service == nullptr) {
                        std::move(completion_handler)(asio::error::bad_descriptor, 0);
                        co_return;
                    }
                    auto to_service = asio::bind_executor(service->strand, asio::use_awaitable);
                    co_await asio::post(to_service);
                    tout(TAG) << "performing read" << std::endl;
                    auto buf_begin = asio::buffers_begin(buffer);
                    auto buf_end = asio::buffers_end(buffer);
                    boost::system::error_code err = asio::error::fault;
                    size_t it = 0;
                    while (!service->bufferOut.empty()) {
                        if (buf_begin == buf_end) {
                            // error: the buffer is smaller than the requested read amount
                            err = asio::error::no_buffer_space;
                            goto completion;
                        }
                        *buf_begin++ = service->bufferOut.at(0);
                        service->bufferOut.erase(0, 1);
                        it++;
                    }
                    err = asio::stream_errc::eof;
                completion:
                    co_await asio::post(to_caller); // without this call the function returns on the wrong thread
                    tout(TAG) << "read done returned" << std::endl;
                    std::move(completion_handler)(err, it);
                }, asio::detached);
        }, token);
    }

    template<typename ConstBufferSequence,
             asio::completion_token_for<async_rw_handler>
             CompletionToken = typename asio::default_completion_token<CallerExecutor>::type>
    requires asio::is_const_buffer_sequence<ConstBufferSequence>::value
    auto async_write_some(const ConstBufferSequence &buffer,
                          CompletionToken &&token = typename asio::default_completion_token<CallerExecutor>::type()) {
        return asio::async_initiate<CompletionToken, async_rw_handler>([&](auto completion_handler) {
            BOOST_ASIO_WRITE_HANDLER_CHECK(CompletionToken, completion_handler) type_check;
            asio::co_spawn(
                asio::get_associated_executor(completion_handler), // Q6-2: should I use get_executor() here? Currently, I just get the caller's executor.
                [&, buffer = std::move(buffer), completion_handler = std::forward<CompletionToken>(completion_handler)]
                () mutable -> asio::awaitable<void> {
                    const constexpr auto TAG = "AWS";
                    auto callerExe = co_await asio::this_coro::executor;
                    auto to_caller = asio::bind_executor(callerExe, asio::use_awaitable);
                    auto service = serviceRef.lock();
                    if (service == nullptr) {
                        std::move(completion_handler)(asio::error::bad_descriptor, 0);
                        co_return;
                    }
                    auto to_service = asio::bind_executor(service->strand, asio::use_awaitable);
                    co_await asio::post(to_service);
                    tout(TAG) << "performing write" << std::endl;
                    auto buf_begin = asio::buffers_begin(buffer);
                    auto buf_end = asio::buffers_end(buffer);
                    boost::system::error_code err = asio::error::fault;
                    size_t it = 0;
                    while (buf_begin != buf_end) {
                        service->bufferIn.push_back(static_cast<char>(*buf_begin++));
                        it++;
                    }
                    err = asio::stream_errc::eof;
                    co_await asio::post(to_caller); // without this call the function returns on the wrong thread
                    tout(TAG) << "write done returned" << std::endl;
                    std::move(completion_handler)(err, it);
                }, asio::detached);
        }, token);
    }
};
asio::awaitable<int> mainCo() {
    const constexpr auto TAG = "MainCo";
    auto exe = co_await asio::this_coro::executor;
    auto use_awaitable = asio::bind_executor(exe, asio::use_awaitable);
    auto as_tuple = asio::experimental::as_tuple(use_awaitable);
    auto use_future = asio::use_future;
    auto timer = asio::steady_timer(exe);
    asio::thread_pool servicePool{1};

    co_await asio::post(asio::bind_executor(servicePool, asio::use_awaitable));
    tout() << "ServiceThread run start" << std::endl;
    co_await asio::post(use_awaitable);

    auto service = std::make_shared<Service<boost::asio::thread_pool::basic_executor_type<std::allocator<void>, 0>>>(servicePool.get_executor());
    service->init();
    auto stream = MyAsyncStream{service, exe};

    for (size_t it = 0; it < 4; it++) {
        {
            std::vector<char> dataBackend;
            auto dynBuffer = asio::dynamic_buffer(dataBackend, 50);
            auto [ec, n] = co_await asio::async_read(stream, dynBuffer, as_tuple); // Q7-1: Can I avoid using as_tuple here?
            tout(TAG) << "read done: " << std::endl
                      << "n: " << n << std::endl
                      << "msg: " << std::string{dataBackend.begin(), dataBackend.end()} << std::endl
                      << "ec: " << ec.message()
                      << std::endl;
        }
        {
            auto const constexpr str = std::string_view{"HelloW"};
            std::vector<char> dataBackend{str.begin(), str.end()};
            auto dynBuffer = asio::dynamic_buffer(dataBackend, 50);
            auto [ec, n] = co_await asio::async_write(stream, dynBuffer, as_tuple); // Q7-2: Can I avoid using as_tuple here?
            tout(TAG) << "write done: " << std::endl
                      << "n: " << n << std::endl
                      << "msg: " << str << std::endl
                      << "ec: " << ec.message()
                      << std::endl;
        }
        timer.expires_after(std::chrono::milliseconds(2500));
        co_await timer.async_wait(use_awaitable);
    }
    servicePool.join();
    tout(TAG) << "Normal exit" << std::endl;
    co_return 0;
}

int main() {
    asio::io_context appCtx;
    auto fut = asio::co_spawn(asio::make_strand(appCtx), mainCo(), asio::use_future);
    tout() << "MainThread run start" << std::endl;
    appCtx.run();
    tout() << "MainThread run done" << std::endl;
    return fut.get();
}
Q1
Looks fine I guess. But, see Q2.
Q2
Looks like it kills CTAD for MyAsyncStream. If I had to guess, it's because ServiceExecutor is in a non-deduced context. Helping it along manually might work, but note how the second static_assert here fails:
using ServiceExecutor = asio::thread_pool::executor_type;
using CallerExecutor = asio::any_io_executor;
static_assert(asio::is_executor<ServiceExecutor>::value);
static_assert(asio::is_executor<CallerExecutor>::value);
That's because co_await this_coro::executor returns any_io_executor, which is a different "brand" of executor. You need to check with execution::is_executor<T>::value instead. In fact, you might want to throw in a compatibility check as happens in Asio implementation functions:
(is_executor<Executor>::value || execution::is_executor<Executor>::value)
&& is_convertible<Executor, AwaitableExecutor>::value
PS: It dawned on me that the non-deduced context is a symptom of overly-specific template arguments. Just make AsyncStream<Executor, Service> (why bother with the specific type arguments that are implementation details of Service?). That fixes the CTAD (Live On Compiler Explorer):
template <typename CallerExecutor, typename Service>
requires my_is_executor<CallerExecutor>::value //
class MyAsyncStream {
Q3: Should this field even exist?
CallerExecutor executor;
Yes, that's how the IO object remembers its bound executor.
Q4: That's the spot where you return the caller's executor.
It's not called in your application, but it might be. If you call any composed operation (like asio::async_read_until) against your IO object (MyAsyncStream), it will by default run any handlers on the associated executor. This may add behaviours (like handler serialization, work tracking etc.) that are required for correctness.
As always, the handler can be bound to another executor to override this.
Q5: I don't think so, unless you want to mandate use_awaitable (or compatible) completion tokens. The fact that you run a coro inside should be an implementation detail for the caller.
Q6: Yes, but not instead of. I'd assume you need to use the IO object's executor as the fallback:
asio::get_associated_executor(
completion_handler, this->get_executor())
Q7-1: Can I avoid using as_tuple here?
auto [ec, n] = co_await asio::async_read(stream, dynBuffer, as_tuple);
I suppose if you can "just" handle system_error exceptions:
auto n = co_await asio::async_read(stream, dynBuffer, use_awaitable);
Alternatively, I believe redirect_error may be applicable.

std::jthread runs a member function from another member function

Here is my code:
#include <iostream>
#include <zconf.h>
#include <thread>

class JT {
public:
    std::jthread j1;
    JT() {
        j1 = std::jthread(&JT::init, this, std::stop_token());
    }
    void init(std::stop_token st = {}) {
        while (!st.stop_requested()) {
            std::cout << "Hello" << std::endl;
            sleep(1);
        }
        std::cout << "Bye" << std::endl;
    }
};

void init_2(std::stop_token st = {}) {
    while (!st.stop_requested()) {
        std::cout << "Hello 2" << std::endl;
        sleep(1);
    }
    std::cout << "Bye 2" << std::endl;
}

int main() {
    std::cout << "Start" << std::endl;
    JT *jt = new JT();
    std::jthread j2(init_2);
    sleep(5);
    std::cout << "Finish" << std::endl;
}
Here is the output:
Start
Hello
Hello 2
Hello
Hello 2
Hello
Hello 2
Hello
Hello 2
Hello
Hello 2
Finish
Bye 2
Hello
The problem is that I get the Bye 2 message but not the Bye message.
I know the std::stop_token I passed causes this problem, but I do not know how to pass it correctly to a member function from another member function.
If I'm understanding the problem correctly (my understanding being that for std::jthread(&JT::init, this), jthread wants to call JT::init with a std::stop_token inserted before this, which isn't going to work), you probably want to use std::bind_front to give it a Callable that works.
e.g.
JT() {
    j1 = std::jthread(std::bind_front(&JT::init, this));
}
According to the useful comments, I have rewritten the class code as below:
class JT {
public:
    std::jthread j1;
    JT() {
        j1 = std::jthread(&JT::init, this);
    }
    void init() {
        auto st = j1.get_stop_token();
        while (!st.stop_requested()) {
            std::cout << "Hello" << std::endl;
            sleep(1);
        }
        std::cout << "Bye" << std::endl;
    }
};
You must get the stop_token on the fly through auto st = j1.get_stop_token();.
And the revised main function:
int main() {
    std::cout << "Start" << std::endl;
    JT *jt = new JT();
    // auto jt = std::make_unique<JT>();
    std::jthread j2(init_2);
    sleep(5);
    std::cout << "Finish" << std::endl;
    delete jt;
}
You need to delete the class object directly or use RAII (like smart pointers).
The std::stop_token must be received as a parameter by the JT::init function during thread construction. You can use either std::bind:
j1 = std::jthread{ std::bind(&JT::init, this, std::placeholders::_1) };
or, more simply, std::bind_front as in Hasturkun's answer.
Note
Obtaining the std::stop_token after the thread has been constructed can result in missing the stop request, as demonstrated below:
#include <thread>
#include <iostream>

using namespace std::chrono_literals;

class JT {
public:
    std::jthread j1;
    JT() {
        j1 = std::jthread(&JT::init, this);
    }
    ~JT() {
        j1.request_stop();
        j1.join();
    }
    void init() {
        auto st = j1.get_stop_token();
        while (!st.stop_requested()) {
            std::this_thread::sleep_for(1ms);
            std::cout << "Hello" << std::endl;
        }
        std::cout << "Bye" << std::endl;
    }
};

int main() {
    std::cout << "Start" << std::endl;
    for (int i = 0; i < 1000; i++) {
        JT jt;
        std::this_thread::sleep_for(5ms);
    }
}
Which results in:
Start
Hello
Bye
Hello
Bye
Hello
Hello
Hello
Hello
Hello
Hello
....
and the program never ends. I've tested release builds with gcc 12.1.0 and msvc (VS 2019 16.11.5).

How to prevent compilation of passed lambda, if arguments are not references

In one of my projects I'm using a small utility function which takes a Message struct and a lambda function that modifies this message struct.
Now, I unintentionally passed a lambda without the necessary reference &. It compiles perfectly, but doesn't give the desired output.
As I see it, one of the two following behaviors should apply:
Forgetting to write auto&, writing just auto, should lead to a compilation error
Writing just auto should be interpreted as auto&.
Is it possible to prevent compilation in case of a missing &, or even better, to interpret auto as auto& automatically?
#include <iostream>
#include <functional>
#include <boost/variant.hpp>

struct Message {
    int x;
    int y;
};

void changeMessage(Message& m, const std::function<void(Message&)>& messageModifier) {
    std::cout << "Message before:" << m.x << " " << m.y << "\n";
    messageModifier(m);
    std::cout << "Message after:" << m.x << " " << m.y << "\n";
}

int main(int, char**) {
    {
        std::function<void(int&)> f = [](int&) {};
        std::function<void(int)> g = [](int) {};
        f = g; // This compiles.
    }
    {
        std::function<void(int&)> f = [](int&) {};
        std::function<void(int)> g = [](int) {};
        //g = f; // This does not compile. Makes perfect sense.
    }
    Message m{ 10,20 };
    {
        changeMessage(m, [](auto m) { m.x++; m.y--; }); // User unintentionally forgot &! Can I prevent this from compiling?
        std::cout << "Message outside: " << m.x << " " << m.y << "\n";
    }
    {
        changeMessage(m, [](auto& m) { m.x++; m.y--; });
        std::cout << "Message outside: " << m.x << " " << m.y << "\n";
    }
}
One way to prevent passing Message by value (and auto itself is never a reference) is to disable copy construction:
struct Message {
    Message() = default;
    Message(const Message&) = delete;
    int x;
    int y;
};
Another solution, suggested by L. F., is to check that the lambda doesn't accept rvalues:
template<class Fn>
void change_message(Message& m, Fn fn) {
    static_assert(!std::is_invocable_v<Fn, Message&&>);
    fn(m);
}

Multiple threads passing parameter

I have a class CPU with a member function executable() that is executed by a thread:
class CPU {
public:
    void executable() {
        while (run) { // for thread
            cout << "Printing the memory:" << endl;
            for (auto& t : map) {
                cout << t.first << " " << t.second << "\n";
            }
        }
    }
};
I need to instantiate 5 threads that execute the executable() function:
for (int i = 0; i < 5; i++)
    threads.push_back(thread(&CPU::executable, this)); // creating threads
cout << "Synchronizing all threads...\n";
for (auto& th : threads) th.join(); // waits for all of them to finish
Now, I want to create:
void executable0() {
    while (run) {
        cout << "Printing the memory:" << endl;
        for (auto& t : map) {
            cout << t.first << " " << t.second << "\n";
        }
    }
}
void executable1() {....}
through executable4() {....} // using the five threads created above
How can I do this? Should I initialize them via the std::thread constructor?
Can someone give me an example to help me understand this process?
Thanks & regards!
Following Some programmer dude's comment, I would also advise using a standard container of std::function:
#include <iostream>
#include <thread>
#include <map>
#include <functional>
#include <vector>

class CPU {
    std::vector<std::function<void()>> executables{};
    std::vector<std::thread> threads{};
public:
    CPU() {
        executables.emplace_back([](){
            std::cout << "executable0\n";
        });
        executables.emplace_back([](){
            std::cout << "executable1\n";
        });
        executables.emplace_back([](){
            std::cout << "executable2\n";
        });
    }

    void create_and_exec_threads() {
        for (const auto& executable : executables) {
            threads.emplace_back([=](){ executable(); });
        }
        for (auto& thread : threads) {
            thread.join();
        }
    }
};
We create a vector holding three callbacks, which are used to initialise the threads and start them inside the create_and_exec_threads method.
Please note that, as opposed to the comment in your example, creating a std::thread with a callback passed to its constructor will not only construct the thread, but also start it immediately.
Additionally, the std::thread::join method does not start the thread; it waits for it to finish.

How to add timer to every method of class?

How can I 'implicitly' add some sort of timer to every method of a class, excluding the constructor and destructor?
What I'm doing now for every method of class:
void MyClass::SomeFunc()
{
    cout << __PRETTY_FUNCTION__ << endl;
    boost::timer::cpu_timer timer;
    //Some code
    boost::timer::cpu_times elapsed = timer.elapsed();
    cout << __PRETTY_FUNCTION__ << " : WALLCLOCK TIME: " << elapsed.wall / 1e9 << " seconds" << endl;
}
What I want:
void MyClass::SomeFunc()
{
    //Some code
}
assuming the behaviour of these two code fragments should be equivalent.
You can almost achieve this using RAII:
struct FunctionLogger {
    FunctionLogger(const char* func)
        : m_func(func)
    {
        cout << func << endl;
    }
    ~FunctionLogger() {
        boost::timer::cpu_times elapsed = timer.elapsed();
        cout << m_func << " : WALLCLOCK TIME: " << elapsed.wall / 1e9 << " seconds" << endl;
    }
    const char* m_func;
    boost::timer::cpu_timer timer;
};
Now:
void MyClass::SomeFunc()
{
    FunctionLogger _(__PRETTY_FUNCTION__);
    //Some code
}
And of course, if you like macros:
#define FL FunctionLogger _(__PRETTY_FUNCTION__)

void MyClass::SomeFunc()
{
    FL;
    //Some code
}
If you are looking for an industrial-grade solution for this sort of thing, the term of art is Aspect Oriented Programming. But it's not directly supported by C++.
What you are trying to do is known as profiling (getting the duration of each function call) and instrumentation (injecting code into the functions to get more detailed, but probably less accurate, timing information).
By far the best way of doing this is by not doing it yourself, but by running your code under a profiler (an off-the-shelf application which does the timing and, optionally, instrumentation automatically, all without polluting your source code.)
If you want to avoid modifying the code, and are willing to sacrifice the __PRETTY_FUNCTION__ output, you can achieve this by accessing the class through a timing handle.
First you define a RAII class for timing, kind of like in John Zwinck's answer:
template<typename T>
struct TimingDecorator {
    T *ptr_;
    boost::timer::cpu_timer timer;

    TimingDecorator(T* ptr) : ptr_(ptr) {}
    ~TimingDecorator() {
        boost::timer::cpu_times elapsed = timer.elapsed();
        GSU_LOG << " : WALLCLOCK TIME: " << elapsed.wall / 1e9 << " seconds" << endl;
    }

    T* operator->() { return ptr_; }
    T const * operator->() const { return ptr_; }
};
Then you define a handle that forces all access to the class through the decorator:
template<typename T>
struct TimingHandle {
    T &obj_;
    boost::timer::cpu_timer timer;

    TimingHandle(T& obj) : obj_(obj) {}

    TimingDecorator<T> operator->() { return &obj_; }
    TimingDecorator<T const> operator->() const { return &obj_; }
};
And then for timing you do all access through the handle:
MyClass obj;
TimingHandle<MyClass> obj_timing(obj);

GSU_LOG << "MyClass::SomeFunc" << endl;
obj_timing->SomeFunc();
I should point out that the last two lines can be wrapped in a macro (if you don't mind using one), to avoid repeating yourself.
#define MYCLASS_TIME_FUNC(handle, func) \
    GSU_LOG << "MyClass::" #func << endl; \
    (handle)->func
Which you ultimately can use as
MYCLASS_TIME_FUNC(obj_timing, SomeFunc2)(/* params for SomeFunc2 */);
Reversing the approach a bit, you can also use:
template <typename Caption, typename F>
auto timed(Caption const& task, F&& f) {
    return [f=std::forward<F>(f), task](auto&&... args) {
        using namespace std::chrono;
        struct measure {
            high_resolution_clock::time_point start;
            Caption task;
            ~measure() { GSU_LOG << " -- (" << task << " completed in " << duration_cast<microseconds>(high_resolution_clock::now() - start).count() << "µs)\n"; }
        } timing { high_resolution_clock::now(), task };
        return f(std::forward<decltype(args)>(args)...);
    };
}
Which you can use like this: Live On Coliru
auto timed_rand = timed("Generate a random number", &::rand);
for (int i = 0; i < 10; ++i)
    std::cout << timed_rand() << " ";
With a bit of macro help you can make it even more versatile to use:
Live On Coliru
#include <iostream>
#include <chrono>

using namespace std::literals::string_literals;

#define GSU_LOG std::clog

template <typename Caption, typename F>
auto timed(Caption const& task, F&& f) {
    return [f=std::forward<F>(f), task](auto&&... args) -> decltype(auto) {
        using namespace std::chrono;
        struct measure {
            high_resolution_clock::time_point start;
            Caption task;
            ~measure() { GSU_LOG << " -- (" << task << " completed in " << duration_cast<microseconds>(high_resolution_clock::now() - start).count() << "µs)\n"; }
        } timing { high_resolution_clock::now(), task };
        return f(std::forward<decltype(args)>(args)...);
    };
}

#define TIMED(expr) (timed(__FILE__ + (":" + std::to_string(__LINE__)) + " " #expr, [&]() -> decltype(auto) {return (expr);})())

int main() {
    std::string line;
    while (TIMED(std::getline(std::cin, line))) {
        std::cout << "Simple arithmetic: " << TIMED(42 * TIMED(line.length())) << "\n";
    }
}
Prints
$ clang++ -std=c++14 -O2 -Wall -pedantic -pthread main.cpp
$ for a in x xx xxx; do sleep 0.5; echo "$a"; done | ./a.out
-- (main.cpp:25 std::getline(std::cin, line) completed in 497455µs)
-- (main.cpp:26 line.length() completed in 36µs)
-- (main.cpp:26 42 * TIMED(line.length()) completed in 106µs)
Simple arithmetic: 42
-- (main.cpp:25 std::getline(std::cin, line) completed in 503516µs)
-- (main.cpp:26 line.length() completed in 14µs)
-- (main.cpp:26 42 * TIMED(line.length()) completed in 42µs)
Simple arithmetic: 84
-- (main.cpp:25 std::getline(std::cin, line) completed in 508554µs)
-- (main.cpp:26 line.length() completed in 14µs)
-- (main.cpp:26 42 * TIMED(line.length()) completed in 38µs)
Simple arithmetic: 126
-- (main.cpp:25 std::getline(std::cin, line) completed in 286µs)
Note you could also make the lambda accumulate data for different calls and report the totals/average.