Using asynchronous boost asio code for synchronous operation - c++

I have a server and a client written with Boost.Asio, and they work fine.
Since the synchronous and asynchronous Boost.Asio APIs are different, is it possible in any way for the code I have written for asynchronous communication to behave and work in a synchronous fashion instead?

You can run any asynchronous code on a dedicated io_service, and simply run that service, which blocks until the work is done:
Live On Coliru
#include <boost/asio.hpp>
#include <boost/asio/high_resolution_timer.hpp>
#include <iostream>

using namespace std::chrono_literals;
using namespace boost::asio;
using boost::system::error_code;

io_service svc;
high_resolution_timer deadline(svc, 3s);

void task_foo() {
    deadline.async_wait([](error_code) { std::cout << "task done\n"; });
}

int main() {
    task_foo();

    std::cout << "Before doing work\n";
    svc.run(); // blocks!
    std::cout << "After doing work\n";
}
Prints
Before doing work
task done
After doing work
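As a side note (going beyond the original answer): the same io_service can be reused for a second synchronous batch, but it must be reset first, because a normal return from run() leaves it in the stopped state. A rough sketch continuing the program above:
// Sketch: schedule more work and run the same io_service again.
deadline.expires_from_now(1s); // re-arm the (already expired) timer
task_foo();                    // queue another wait on it
svc.reset();                   // required before a subsequent run()
svc.run();                     // blocks again until the new work completes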
Alternatively:
You can always use futures, which you can then block on:
Live On Coliru
#include <boost/asio.hpp>
#include <boost/asio/high_resolution_timer.hpp>
#include <boost/make_shared.hpp>
#include <future>
#include <iostream>
#include <thread>

using namespace std::chrono_literals;
using namespace boost::asio;
using boost::system::error_code;

io_service svc;
high_resolution_timer deadline(svc, 3s);

std::future<int> task_foo() {
    auto p = boost::make_shared<std::promise<int> >();
    auto fut = p->get_future();

    deadline.async_wait([p](error_code) {
        std::cout << "task done\n";
        p->set_value(42);
    });

    return fut;
}

int main() {
    auto foo = task_foo();

    std::cout << "Before doing work\n";
    std::thread([] { svc.run(); }).detach(); // doesn't block!
    std::cout << "After starting work\n"; // happens before task completion

    auto result = foo.get(); // blocks again!
    std::cout << "Task result: " << result << "\n";
}
Prints
Before doing work
After starting work
task done
Task result: 42
This way the io_service keeps running concurrently, and it is not required to finish even though a particular task is awaited synchronously (foo.get()).
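For completeness: if all you need is a std::future, Asio can produce one directly via the asio::use_future completion token, so the hand-rolled promise is not strictly necessary. A minimal sketch of that variant (an alternative, not the original answer's code):
#include <boost/asio.hpp>
#include <boost/asio/high_resolution_timer.hpp>
#include <boost/asio/use_future.hpp>
#include <future>
#include <iostream>
#include <thread>

using namespace std::chrono_literals;
using namespace boost::asio;

int main() {
    io_service svc;
    high_resolution_timer deadline(svc, 3s);

    // use_future turns the async operation into a std::future<void>
    std::future<void> fut = deadline.async_wait(use_future);

    std::thread t([&] { svc.run(); });
    fut.get(); // blocks until the wait completes (rethrows its error, if any)
    std::cout << "task done\n";
    t.join();
}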

Related

Boost interprocess file_lock does not work between two processes

I am trying to use boost file_lock to control two processes. I have process 1 obtaining a lock and then sleeping:
#include <boost/interprocess/sync/file_lock.hpp>
#include <fstream>
#include <chrono>
#include <thread>

int main()
{
    std::string lock_path = "lockfile";
    std::ofstream stream(lock_path, std::ios::app);

    boost::interprocess::file_lock lock(lock_path.c_str());
    if (lock.try_lock())
    {
        std::this_thread::sleep_for(std::chrono::seconds(30));
    }
    return 0;
}
While this process is sleeping, I run a second process which tries to obtain the lock as well:
#include <boost/interprocess/sync/file_lock.hpp>
#include <iostream>

int main()
{
    boost::interprocess::file_lock lock("lockfile");
    if (lock.try_lock())
    {
        std::cout << "got here" << std::endl;
    }
    return 0;
}
I am expecting the cout statement in the second process not to print, because the file is already locked by another process, but it does print. What am I missing here? Is file_lock not supposed to be used this way?
The best explanation I can think of is when your processes accidentally refer to different files. This might occur when
the current working directories are not the same
the processes run in isolated environments altogether (e.g. dockerized)
the file has been deleted/recreated in the meantime (meaning the inode doesn't match, even though the filename does)
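To rule these out, a quick diagnostic (a sketch, assuming a POSIX system) is to have each process print the device/inode pair it sees for the lock file; if the numbers differ, the two processes are not locking the same file:
#include <sys/stat.h>
#include <iostream>

int main() {
    struct stat sb {};
    if (::stat("lockfile", &sb) == 0)
        std::cout << "dev=" << sb.st_dev << " ino=" << sb.st_ino << std::endl;
    else
        std::cout << "stat failed: lockfile does not exist here" << std::endl;
}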
Here's a simplified program that can serve as both parties:
Live On Coliru
#include <chrono>
#include <fstream>
#include <iostream>
#include <thread>
#include <boost/interprocess/sync/file_lock.hpp>

namespace bip = boost::interprocess;
using namespace std::chrono_literals;

static inline auto constexpr lock_path = "lockfile";

int main() {
    std::ofstream stream(lock_path, std::ios::app); // ensure the lock file exists before constructing the lock

    bip::file_lock lock(lock_path);
    if (lock.try_lock()) {
        std::cout << "got lock" << std::endl;
        std::this_thread::sleep_for(2s);
    }
    std::cout << "bye" << std::endl;
}
Local demo:

Do boost::asio c++20 coroutines support multithreading?

Do boost::asio c++20 coroutines support multithreading?
The boost::asio documentation examples are all single-threaded; are there any multithreaded examples?
Yes.
In Asio, if multiple threads run the execution context, you don't normally even control which thread resumes your coroutine.
You can look at some of these answers that ask about how to switch executors mid-stream (controlling which strand or execution context may resume the coro):
asio How to change the executor inside an awaitable?
Switch context in coroutine with boost::asio::post
Update in response to the comment:
To make the C++20 coroutine echo server sample multi-threaded you could change two lines:
boost::asio::io_context io_context(1);
// ...
io_context.run();
Into
boost::asio::thread_pool io_context;
// ...
io_context.join();
Since each coro is an implicit (or logical) strand, nothing else is needed. Notes:
Doing this is likely useless unless you're doing significant work inside the coroutines that would slow down IO multiplexing on a single thread.
In practice a single thread can easily handle 10k concurrent connections, especially with C++20 coroutines.
Note that it can be a significant performance gain to run asio::io_context(1), i.e. with a concurrency hint of one, because it can avoid synchronization overhead.
When you introduce e.g. asynchronous session control or full-duplex operation you will need an explicit strand. In the example below I show how you would make each "session" use a strand, and e.g. do graceful shutdown.
Live On Coliru
#include <boost/asio.hpp>
#include <boost/asio/co_spawn.hpp>
#include <boost/asio/experimental/awaitable_operators.hpp>
#include <cstring>    // ::strsignal
#include <functional> // std::mem_fn
#include <iostream>
#include <list>

namespace asio = boost::asio;
namespace this_coro = asio::this_coro;
using boost::system::error_code;
using asio::ip::tcp;
using asio::detached;
using executor_type = asio::any_io_executor;
using socket_type = asio::use_awaitable_t<>::as_default_on_t<tcp::socket>; // or tcp::socket
//
using session_state = std::shared_ptr<socket_type>; // or any additional state
using handle = std::weak_ptr<session_state::element_type>;

using namespace std::string_view_literals;
using namespace asio::experimental::awaitable_operators;

asio::awaitable<void> echo_session(session_state s) {
    try {
        for (std::array<char, 1024> data;;) {
            size_t n = co_await s->async_read_some(asio::buffer(data));
            co_await async_write(*s, asio::buffer(data, n));
        }
    } catch (boost::system::system_error const& se) {
        if (se.code() != asio::error::operation_aborted) // expecting cancellation
            throw;
    } catch (std::exception const& e) {
        std::cout << "echo Exception: " << e.what() << std::endl;
        co_return;
    }

    error_code ec;
    co_await async_write(*s, asio::buffer("Server is shutting down\n"sv),
                         redirect_error(asio::use_awaitable, ec));
    // std::cout << "echo shutdown: " << ec.message() << std::endl;
}

asio::awaitable<void> listener(std::list<handle>& sessions) {
    auto ex = co_await this_coro::executor;
    for (tcp::acceptor acceptor(ex, {tcp::v4(), 55555});;) {
        session_state s = std::make_shared<socket_type>(
            co_await acceptor.async_accept(make_strand(ex), asio::use_awaitable));

        sessions.remove_if(std::mem_fn(&handle::expired)); // "garbage collect", optional
        sessions.emplace_back(s);

        co_spawn(ex, echo_session(s), detached);
    }
}

int main() {
    std::list<handle> handles;

    asio::thread_pool io_context;
    asio::signal_set signals(io_context, SIGINT, SIGTERM);

    auto handler = [&handles](std::exception_ptr ep, auto result) {
        try {
            if (ep)
                std::rethrow_exception(ep);

            int signal = get<1>(result);
            std::cout << "Signal: " << ::strsignal(signal) << std::endl;

            for (auto h : handles)
                if (auto s = h.lock()) {
                    // more logic could be implemented via members on a session_state struct
                    std::cout << "Shutting down live session " << s->remote_endpoint() << std::endl;
                    post(s->get_executor(), [s] { s->cancel(); });
                }
        } catch (std::exception const& e) {
            std::cout << "Server: " << e.what() << std::endl;
        }
    };

    co_spawn(io_context, listener(handles) || signals.async_wait(asio::use_awaitable), handler);

    io_context.join();
}
Online demo, and local demo:
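For a quick manual test, any TCP client on port 55555 will do (e.g. netcat); a hypothetical minimal synchronous client, not part of the original answer, could look like this:
#include <boost/asio.hpp>
#include <iostream>
#include <string_view>
namespace asio = boost::asio;
using asio::ip::tcp;

int main() {
    asio::io_context io;
    tcp::socket s(io);
    s.connect({asio::ip::make_address("127.0.0.1"), 55555}); // port used by the server above

    asio::write(s, asio::buffer("hello\n", 6));

    char reply[64];
    size_t n = s.read_some(asio::buffer(reply));
    std::cout << "echoed: " << std::string_view(reply, n);
}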

How to specify `boost::asio::yield_context` with timeout?

I would like to learn how to attach a timeout to boost::asio::yield_context.
Let's say, in terms of Boost 1.80, there is something like the following:
#include <boost/asio/io_context.hpp>
#include <boost/asio/spawn.hpp>

void async_func_1(boost::asio::yield_context); // forward declaration so async_func_0 compiles

void async_func_0(boost::asio::yield_context yield) {
    async_func_1(yield);
}

void async_func_1(boost::asio::yield_context) {
}

int main() {
    boost::asio::io_context ioc;
    boost::asio::spawn(ioc.get_executor(), &async_func_0);
    ioc.run();
    return 0;
}
Let's imagine that async_func_1 is quite a burden: it is async by means of boost::coroutines (since boost::asio does not use boost::coroutines2, for some unknown reason) and it can run unpredictably long, mostly on IO operations.
A good idea would be to give the call to async_func_1 a timeout, so that if the time has passed it must return, whatever the result, with an error; say, at the nearest use of boost::asio::yield_context within async_func_1.
But I'm puzzled how this should be expressed in terms of boost::asio.
P.S. Just to exemplify, in Rust it would be something like the following:
use std::time::Duration;
use futures_time::FutureExt;

async fn func_0() {
    func_1().timeout(Duration::from_secs(60)).await;
}

async fn func_1() {
}

#[tokio::main]
async fn main() {
    tokio::task::spawn(func_0());
}
In Asio cancellation and executors are separate concerns.
That's flexible. It also means you have to code your own timeout.
One very rough idea:
#include <boost/asio.hpp>
#include <boost/asio/spawn.hpp>
#include <cstdlib> // atoi
#include <iostream>

namespace asio = boost::asio;
using boost::asio::yield_context;
using namespace std::chrono_literals;
using boost::system::error_code;

static std::chrono::steady_clock::duration s_timeout = 500ms;

template <typename Token>
void async_func_1(Token token) {
    error_code ec;

    // emulating a long IO bound task
    asio::steady_timer work(get_associated_executor(token), 1s);
    work.async_wait(redirect_error(token, ec));

    std::cout << "async_func_1 completion: " << ec.message() << std::endl;
}

void async_func_0(yield_context yield) {
    asio::cancellation_signal cancel;
    auto cyield = asio::bind_cancellation_slot(cancel.slot(), yield);

    std::cout << "async_func_0 deadline at " << s_timeout / 1.0s << "s" << std::endl;
    asio::steady_timer deadline(get_associated_executor(cyield), s_timeout);
    deadline.async_wait([&](error_code ec) {
        std::cout << "Timeout: " << ec.message() << std::endl;
        if (!ec)
            cancel.emit(asio::cancellation_type::terminal);
    });

    async_func_1(cyield);
    std::cout << "async_func_0 completion" << std::endl;
}

int main(int argc, char** argv) {
    if (argc > 1)
        s_timeout = 1ms * atoi(argv[1]);

    boost::asio::io_context ioc;
    spawn(ioc.get_executor(), async_func_0);
    ioc.run();
}
No online compilers that accept this¹ are able to run this currently. So here's local output:
for t in 150 1500; do time ./build/sotest "$t" 2>"$t.trace"; ~/custom/superboost/libs/asio/tools/handlerviz.pl < "$t.trace" | dot -T png -o trace_$t.png; done
async_func_0 deadline at 0.15s
Timeout: Success
async_func_1 completion: Operation canceled
async_func_0 completion
real 0m0,170s
user 0m0,009s
sys 0m0,011s
async_func_0 deadline at 1.5s
async_func_1 completion: Success
async_func_0 completion
Timeout: Operation canceled
real 0m1,021s
user 0m0,011s
sys 0m0,011s
And the handler visualizations:
¹ wandbox, coliru, CE
Road From Here
You'll probably say this is cumbersome. Compared to your Rust library feature, it is. To package this up as a library in Asio you could (a rough sketch of the wrapping approach follows below):
derive your own completion token type from yield_context, adding the behaviour you want
make a composed operation (e.g. using deferred)
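A sketch of such a wrapper, reusing the aliases from the listing above; the name with_timeout is made up for illustration, and it simply packages the timer-plus-cancellation_signal pattern shown in the answer:
// Sketch only: run f(cyield) with a deadline; on expiry, emit terminal cancellation.
template <typename F>
void with_timeout(asio::yield_context yield,
                  std::chrono::steady_clock::duration timeout, F f) {
    asio::cancellation_signal cancel;
    auto cyield = asio::bind_cancellation_slot(cancel.slot(), yield);

    asio::steady_timer deadline(get_associated_executor(cyield), timeout);
    deadline.async_wait([&](error_code ec) {
        if (!ec) // the deadline actually expired (it was not canceled below)
            cancel.emit(asio::cancellation_type::terminal);
    });

    f(cyield);         // the wrapped operation(s) see the bound cancellation slot
    deadline.cancel(); // disarm the watchdog if f finished in time
}

// Hypothetical usage inside async_func_0:
//   with_timeout(yield, s_timeout, [](auto cyield) { async_func_1(cyield); });
It has the same lifetime caveats as the answer above: the watchdog handler may still run, with operation_aborted, after the wrapper returns.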

read from keyboard using boost async_read and posix::stream_descriptor

I am trying to capture single keyboard inputs in a non-blocking way inside a while loop, using boost asio async_read. The handler is expected to display the read characters.
My code:
#include <boost/asio/io_service.hpp>
#include <boost/asio/posix/stream_descriptor.hpp>
#include <boost/asio/read.hpp>
#include <boost/system/error_code.hpp>
#include <iostream>
#include <unistd.h>
#include <termios.h>

using namespace boost::asio;

void read_handler(const boost::system::error_code&, std::size_t)
{
    char c;
    std::cin >> c;
    std::cout << "keyinput=" << c << std::endl;
}

int main()
{
    io_service ioservice;
    posix::stream_descriptor stream(ioservice, STDIN_FILENO);
    char buf[1];

    while (1)
    {
        async_read(stream, buffer(buf, sizeof(buf)), read_handler);
        ioservice.run();
    }
    return 0;
}
My output is not as expected (I expect lines of the form keyinput=<char>):
a
key input
b
c
d
e
Where am I going wrong?
Also, the program is very CPU intensive. How do I rectify that?
There's an important restriction on async IO with stdin: Strange exception throw - assign: Operation not permitted
Secondly, if you use async_read, do not use std::cin at the same time (you would just be doing two competing reads). (Do look at async_wait instead.)
That aside, you should be able to fix the high CPU load by using async IO properly:
#include <boost/asio.hpp>
#include <functional> // std::function
#include <iostream>
#include <unistd.h>   // STDIN_FILENO

using namespace boost::asio;

int main()
{
    io_service ioservice;
    posix::stream_descriptor stream(ioservice, STDIN_FILENO);
    char buf[1] = {};

    std::function<void(boost::system::error_code, size_t)> read_handler;

    read_handler = [&](boost::system::error_code ec, size_t len) {
        if (ec) {
            std::cerr << "exit with " << ec.message() << std::endl;
        } else {
            if (len == 1) {
                std::cout << "keyinput=" << buf[0] << std::endl;
            }
            async_read(stream, buffer(buf), read_handler);
        }
    };

    async_read(stream, buffer(buf), read_handler);
    ioservice.run();
}
As you can see the while loop has been replaced with a chain of async operations.
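As hinted above, another option is to async_wait until stdin is readable and only then read it synchronously. A minimal sketch of that route (an assumption beyond the original answer, for a line-buffered POSIX terminal):
#include <boost/asio.hpp>
#include <functional>
#include <iostream>
#include <string>
#include <unistd.h>
using namespace boost::asio;

int main() {
    io_context io;
    posix::stream_descriptor in(io, ::dup(STDIN_FILENO));

    std::function<void(boost::system::error_code)> wait_handler;
    wait_handler = [&](boost::system::error_code ec) {
        if (ec) return;
        std::string line;
        if (std::getline(std::cin, line)) // only read once the descriptor is readable
            std::cout << "keyinput=" << line << std::endl;
        else
            return; // EOF: stop re-arming
        in.async_wait(posix::stream_descriptor::wait_read, wait_handler);
    };

    in.async_wait(posix::stream_descriptor::wait_read, wait_handler);
    io.run();
}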

Resuming asio coroutine from another thread

I have a problem with resuming a boost::asio coroutine from another thread. Here is sample code:
#include <iostream>
#include <thread>
#include <boost/asio.hpp>
#include <boost/asio/steady_timer.hpp>
#include <boost/asio/spawn.hpp>

using namespace std;
using namespace boost;

void foo(asio::steady_timer& timer, asio::yield_context yield)
{
    cout << "Enter foo" << endl;
    timer.expires_from_now(asio::steady_timer::clock_type::duration::max());
    timer.async_wait(yield);
    cout << "Leave foo" << endl;
}

void bar(asio::steady_timer& timer)
{
    cout << "Enter bar" << endl;
    sleep(1); // wait a little for asio::io_service::run to be executed
    timer.cancel();
    cout << "Leave bar" << endl;
}

int main()
{
    asio::io_service ioService;
    asio::steady_timer timer(ioService);

    asio::spawn(ioService, bind(foo, std::ref(timer), placeholders::_1));
    thread t(bar, std::ref(timer));

    ioService.run();
    t.join();

    return 0;
}
The problem is that the asio::steady_timer object is not thread safe and the program crashes. But if I try to use a mutex to synchronize access to it, then I get a deadlock, because the scope of foo is never left.
#include <iostream>
#include <thread>
#include <mutex>
#include <boost/asio.hpp>
#include <boost/asio/steady_timer.hpp>
#include <boost/asio/spawn.hpp>

using namespace std;
using namespace boost;

void foo(asio::steady_timer& timer, mutex& mtx, asio::yield_context yield)
{
    cout << "Enter foo" << endl;
    {
        lock_guard<mutex> lock(mtx);
        timer.expires_from_now(
            asio::steady_timer::clock_type::duration::max());
        timer.async_wait(yield);
    }
    cout << "Leave foo" << endl;
}

void bar(asio::steady_timer& timer, mutex& mtx)
{
    cout << "Enter bar" << endl;
    sleep(1); // wait a little for asio::io_service::run to be executed
    {
        lock_guard<mutex> lock(mtx);
        timer.cancel();
    }
    cout << "Leave bar" << endl;
}

int main()
{
    asio::io_service ioService;
    asio::steady_timer timer(ioService);
    mutex mtx;

    asio::spawn(ioService, bind(foo, std::ref(timer), std::ref(mtx),
        placeholders::_1));
    thread t(bar, std::ref(timer), std::ref(mtx));

    ioService.run();
    t.join();

    return 0;
}
There is no such problem if I use a standard completion handler instead of coroutines.
#include <iostream>
#include <thread>
#include <mutex>
#include <boost/asio.hpp>
#include <boost/asio/steady_timer.hpp>

using namespace std;
using namespace boost;

void baz(system::error_code ec)
{
    cout << "Baz: " << ec.message() << endl;
}

void foo(asio::steady_timer& timer, mutex& mtx)
{
    cout << "Enter foo" << endl;
    {
        lock_guard<mutex> lock(mtx);
        timer.expires_from_now(
            asio::steady_timer::clock_type::duration::max());
        timer.async_wait(baz);
    }
    cout << "Leave foo" << endl;
}

void bar(asio::steady_timer& timer, mutex& mtx)
{
    cout << "Enter bar" << endl;
    sleep(1); // wait a little for asio::io_service::run to be executed
    {
        lock_guard<mutex> lock(mtx);
        timer.cancel();
    }
    cout << "Leave bar" << endl;
}

int main()
{
    asio::io_service ioService;
    asio::steady_timer timer(ioService);
    mutex mtx;

    foo(std::ref(timer), std::ref(mtx));
    thread t(bar, std::ref(timer), std::ref(mtx));

    ioService.run();
    t.join();

    return 0;
}
Is it possible to have behavior similar to the last example when coroutines are used?
A coroutine runs within the context of a strand. If one is not explicitly provided to spawn(), a new strand will be created for the coroutine. By explicitly providing a strand to spawn(), one can post work onto that strand which will be synchronized with the coroutine.
Also, as noted by sehe, undefined behavior may occur if the coroutine is running in one thread, acquires a mutex lock, then suspends, but resumes and runs in a different thread and releases the lock. To avoid this, ideally one should not hold locks while the coroutine suspends. However, if it is necessary, one must guarantee that the coroutine runs within the same thread when it is resumed, such as by only running the io_service from a single thread.
Here is a minimal complete example based on the original code, where bar() posts work into the strand to cancel the timer, causing the foo() coroutine to resume:
#include <chrono>     // std::chrono::seconds
#include <functional> // std::bind
#include <iostream>
#include <thread>
#include <boost/asio.hpp>
#include <boost/asio/spawn.hpp>
#include <boost/asio/steady_timer.hpp>

void foo(boost::asio::steady_timer& timer, boost::asio::yield_context yield)
{
    std::cout << "Enter foo" << std::endl;
    timer.expires_from_now(
        boost::asio::steady_timer::clock_type::duration::max());
    boost::system::error_code error;
    timer.async_wait(yield[error]);
    std::cout << "foo error: " << error.message() << std::endl;
    std::cout << "Leave foo" << std::endl;
}

void bar(
    boost::asio::io_service::strand& strand,
    boost::asio::steady_timer& timer
)
{
    std::cout << "Enter bar" << std::endl;

    // Wait a little for asio::io_service::run to be executed.
    std::this_thread::sleep_for(std::chrono::seconds(1));

    // Post timer cancellation into the strand.
    strand.post([&timer]()
    {
        timer.cancel();
    });

    std::cout << "Leave bar" << std::endl;
}

int main()
{
    boost::asio::io_service io_service;
    boost::asio::steady_timer timer(io_service);
    boost::asio::io_service::strand strand(io_service);

    // Use an explicit strand, rather than having one created implicitly.
    boost::asio::spawn(strand, std::bind(&foo,
        std::ref(timer), std::placeholders::_1));

    // Pass the same strand to the thread, so that the thread may post
    // handlers synchronized with the foo coroutine.
    std::thread t(&bar, std::ref(strand), std::ref(timer));

    io_service.run();
    t.join();
}
Which provides the following output:
Enter foo
Enter bar
foo error: Operation canceled
Leave foo
Leave bar
As covered in this answer, when the boost::asio::yield_context detects that the asynchronous operation has failed, such as when the operation is canceled, it converts the boost::system::error_code into a system_error exception and throws it. The above example uses yield_context::operator[] to have the yield_context populate the provided error_code on failure instead of throwing.
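For reference, the two styles look like this inside a coroutine that already has a timer and a yield_context in scope (illustrative fragment):
// Throwing form: a canceled or failed wait surfaces as boost::system::system_error.
try {
    timer.async_wait(yield);
} catch (boost::system::system_error const& e) {
    std::cout << "wait failed: " << e.code().message() << std::endl;
}

// Non-throwing form: the error is written into ec instead of being thrown.
boost::system::error_code ec;
timer.async_wait(yield[ec]);
if (ec)
    std::cout << "wait failed: " << ec.message() << std::endl;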