Boost asio io_context run non-blocking [duplicate] - c++

This question already has answers here:
boost::asio::read function hanging
(2 answers)
Closed 3 years ago.
Currently I am writing a C++ WebSocket client library which gets wrapped in a C# library.
I am using Boost.Beast for the WebSocket connection.
Now I am at the point where I start an async_read once the handshake completes, so the WebSocket doesn't disconnect.
The problem is that io_context::run() blocks the thread, so the C# program stops executing until the async_read times out and run() returns. But I want the C# program to keep executing.
I tried executing the connect function in a thread, but then there's the problem that the next function the C# program calls is a write operation, and it crashes because the connect function is still connecting...
library.cpp:
void OpenConnection(char* id)
{
    std::cout << "Opening new connection..." << std::endl;
    std::shared_ptr<Session> session = std::make_shared<Session>();
    session->connect();
    sessions.emplace(std::string(id), std::move(session));
}
session.cpp:
void Session::connect()
{
    resolver.async_resolve(
        "websocket_uri",
        "443",
        beast::bind_front_handler(
            &Session::on_resolve,
            shared_from_this()));
    ioc.run(); // blocks the calling thread until the io_context runs out of work
}
The on_resolve, on_connect, on_handshake... handlers are the same as here: https://www.boost.org/doc/libs/1_70_0/libs/beast/example/websocket/client/async-ssl/websocket_client_async_ssl.cpp
Except for the on_handshake function:
void Session::on_handshake(beast::error_code ec)
{
    if (ec)
        return fail(ec, "handshake");
    ws.async_read(buffer, beast::bind_front_handler(&Session::on_read, shared_from_this()));
}
And the on_read function:
void Session::on_read(beast::error_code ec, std::size_t bytes_transferred)
{
    boost::ignore_unused(bytes_transferred);
    if (ec)
        return fail(ec, "read");
    std::cout << "Got message" << std::endl;
    onMessage(Message::parseMessage(beast::buffers_to_string(buffer.data())));
    ws.async_read(buffer, beast::bind_front_handler(&Session::on_read, shared_from_this()));
}
And the on_write function:
void Session::on_write(beast::error_code ec, std::size_t bytes_transferred)
{
    boost::ignore_unused(bytes_transferred);
    if (ec)
        return fail(ec, "write");
    queue.erase(queue.begin());
    if (!queue.empty())
    {
        ws.async_write(net::buffer(queue.front()->toString()), beast::bind_front_handler(&Session::on_write, shared_from_this()));
    }
}
C# program (for testing):
[DllImport(@"/data/cpp_projects/whatsapp_web_library/build/Debug/libWhatsApp_Web_Library.so")]
public static extern void OpenConnection(string id);

[DllImport(@"/data/cpp_projects/whatsapp_web_library/build/Debug/libWhatsApp_Web_Library.so")]
public static extern void CloseConnection(string id);

[DllImport(@"/data/cpp_projects/whatsapp_web_library/build/Debug/libWhatsApp_Web_Library.so")]
public static extern void GenerateQRCode(string id);

static void Main(string[] args)
{
    string id = "test";
    OpenConnection(id);
    GenerateQRCode(id);
}
Now my question is: how can I implement this?
I have been stuck on this problem for three days now and am slowly despairing.
Thanks in advance :)

You need to use async_read_some instead of async_read.
From the Boost documentation:
The async_read_some function is used to asynchronously read data from the stream socket. The function call always returns immediately.
The read operation may not read all of the requested number of bytes. Consider using the async_read function if you need to ensure that the requested amount of data is read before the asynchronous operation completes.
Basically, a successful call to async_read_some may read just one byte, or it may fill the whole buffer, or anywhere in between. The asio::async_read function, on the other hand, can be used to ensure that the entire buffer is filled before the operation completes. The async_write_some and asio::async_write functions have the same relationship.
More about async_read_some
Here is a good example of how to use async_read_some
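For illustration, a minimal sketch of the difference at the Asio level (assuming a connected boost::asio::ip::tcp::socket named sock and a std::array<char, 1024> buf; both names are just for this example):

// May complete after reading anywhere from 1 byte up to buf.size() bytes.
sock.async_read_some(boost::asio::buffer(buf),
    [](boost::system::error_code ec, std::size_t n)
    {
        // On success, n can be anywhere from 1 to buf.size().
    });

// Completes only once the entire buffer has been filled (or on error).
boost::asio::async_read(sock, boost::asio::buffer(buf),
    [](boost::system::error_code ec, std::size_t n)
    {
        // On success, the whole buffer was filled: n == buf.size().
    });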

Related

Boost.Asio - when is explicit strand wrapping needed when using make_strand

I have been researching Boost.Asio and Boost.Beast and have some confusion around when explicit strand wrapping is needed with socket::async_* member function calls.
In Boost.Asio (1.78), there is a make_strand function. The examples provided with Boost.Beast show it being used like this:
server/chat-multi/listener.cpp
void
listener::
run()
{
    // The new connection gets its own strand
    acceptor_.async_accept(
        net::make_strand(ioc_),
        beast::bind_front_handler(
            &listener::on_accept,
            shared_from_this()));
}
//...
// Handle a connection
void
listener::
on_accept(beast::error_code ec, tcp::socket socket)
{
    if(ec)
        return fail(ec, "accept");
    else
        // Launch a new session for this connection
        boost::make_shared<http_session>(std::move(socket), state_)->run();

    // The new connection gets its own strand
    acceptor_.async_accept(
        net::make_strand(ioc_),
        beast::bind_front_handler(
            &listener::on_accept,
            shared_from_this()));
}
server/chat-multi/http_session.cpp
void
http_session::
run()
{
    do_read();
}

//...

void
http_session::
do_read()
{
    // Construct a new parser for each message
    parser_.emplace();

    // Apply a reasonable limit to the allowed size
    // of the body in bytes to prevent abuse.
    parser_->body_limit(10000);

    // Set the timeout.
    stream_.expires_after(std::chrono::seconds(30));

    // Read a request
    http::async_read(
        stream_,
        buffer_,
        parser_->get(),
        beast::bind_front_handler(
            &http_session::on_read,
            shared_from_this()));
}
void
http_session::
on_read(beast::error_code ec, std::size_t)
{
    // This means they closed the connection
    if(ec == http::error::end_of_stream)
    {
        stream_.socket().shutdown(tcp::socket::shutdown_send, ec);
        return;
    }

    // Handle the error, if any
    if(ec)
        return fail(ec, "read");

    // See if it is a WebSocket Upgrade
    if(websocket::is_upgrade(parser_->get()))
    {
        // Create a websocket session, transferring ownership
        // of both the socket and the HTTP request.
        boost::make_shared<websocket_session>(
            stream_.release_socket(),
            state_)->run(parser_->release());
        return;
    }
    //...
}
server/chat-multi/websocket_session.cpp
void
websocket_session::
on_read(beast::error_code ec, std::size_t)
{
    // Handle the error, if any
    if(ec)
        return fail(ec, "read");

    // Send to all connections
    state_->send(beast::buffers_to_string(buffer_.data()));

    // Clear the buffer
    buffer_.consume(buffer_.size());

    // Read another message
    ws_.async_read(
        buffer_,
        beast::bind_front_handler(
            &websocket_session::on_read,
            shared_from_this()));
}
In the same Boost.Beast example, subsequent calls to the socket's async_read member function are made without explicitly wrapping the work in a strand, whether via post, dispatch (with socket::get_executor) or by wrapping the completion handler with strand::wrap.
Based on the answer to this question, it seems that the make_strand function copies the executor into the socket object, and by default the socket object's completion handlers will be invoked on the same strand. Using socket::async_receive as an example, this to me says that there are two bits of work to be done:
A) The socket::async_receive I/O work itself
B) The work involved in calling the completion handler
My questions are:
1) According to the linked answer, when using make_strand, B is guaranteed to be called on the same strand, but not A. Is this correct, or have I misunderstood something?
2) If 1) is correct, why does the server/chat-multi example provided above not explicitly wrap the async_read work in a strand?
3) In Michael Caisse's CppCon 2016 talk, "Asynchronous IO with Boost.Asio", he also does not explicitly wrap async_read_until operations in a strand. He explains that write calls should be synchronised with a strand, since they can in theory be called from any thread in the application, but that read calls need not be, as he is controlling them himself. How does this fit into the picture?
Thanks in advance
If an executor is not specified or bound, the "associated executor" is used.
For member async initiation functions the default executor is the one from the I/O object. In your case that is the socket, which has been created "on" (with) the strand executor. In other words, socket.get_executor() already returns the strand<> executor.
Only when posting work yourself would you need to specify the strand executor (or bind the handler to it, so that it becomes the handler's implicit default):
When must you pass io_context to boost::asio::spawn? (C++)
Why is boost::asio::io service designed to be used as a parameter?
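To illustrate that last point, a minimal sketch (assuming net = boost::asio and a tcp::socket named sock that was accepted with net::make_strand, as in the example above):

// The socket's executor already is the strand, so posting through it
// serializes the posted work with the socket's completion handlers:
net::post(sock.get_executor(),
    []{ /* runs serialized on the socket's strand */ });

// Alternatively, bind the strand executor to a handler so it becomes
// that handler's "associated executor":
auto handler = net::bind_executor(sock.get_executor(),
    []{ /* also runs serialized on the socket's strand */ });
net::post(handler);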

How does boost::beast::bind_front_handler works?

While trying out the boost::beast examples, I came across this piece of code:
void on_write(beast::error_code ec, std::size_t bytes_transferred)
{
    if (ec) return fail(ec, "write");
    http::async_read(m_tcp_stream, m_buffer, m_response,
        beast::bind_front_handler(&Session::on_read, shared_from_this()));
}

void on_read(beast::error_code ec, std::size_t bytes_transferred)
{
    if (ec) return fail(ec, "read");
    //std::cout << m_response << std::endl;
    write_on_file(m_response);
    m_tcp_stream.socket().shutdown(tcp::socket::shutdown_both, ec);
    if (ec && ec != beast::errc::not_connected) return fail(ec, "shutdown");
}
Particularly this line: http::async_read(m_tcp_stream, m_buffer, m_response, beast::bind_front_handler(&Session::on_read, shared_from_this()));. I am not able to understand how it works. As far as I can tell, it returns a bind_front_wrapper which stores a handler and a tuple of arguments within itself. But I do not understand how it manages to supply the arguments of the passed handler even though we are not passing them; we are just passing a shared_ptr. In this case async_read ends up calling the on_read method, yet we pass no parameters and it still gets called with them. I wonder how?
You use asynchronous operations, so your job is to define callbacks which are called by the Beast core code when operations complete. When an operation started by async_read is ready, the handler passed to async_read is called with two arguments: the error code plus the number of transferred bytes.
You decided to wrap on_read into a callback with bind_front_handler. bind_front_handler generates a functor object whose implementation, in pseudocode, may look like this:
class Handler {
    void (Session::*onRead)(...); // pointer to the on_read member function of Session
    Session* session;             // pointer to the session, obtained via shared_from_this

public:
    Handler(/* pointer to on_read, pointer to session */) {}

    template<class... Args>
    void operator()(Args... args) {
        ((*session).*onRead)(args...);
    }
};
When the read operation is ready, the function call operator of the above handler is invoked with the two-argument pack: the error code and the number of read bytes.
Since C++20 there is std::bind_front; you may visit its reference page for more details on how this could be implemented in the Beast library.
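To get a feel for the mechanism, here is a minimal self-contained sketch using std::bind_front (C++20); this Session and the argument types are stand-ins for the Beast ones:

#include <cstddef>
#include <functional>
#include <iostream>
#include <memory>

struct Session : std::enable_shared_from_this<Session> {
    void on_read(int ec, std::size_t bytes_transferred) {
        std::cout << "ec=" << ec << " bytes=" << bytes_transferred << '\n';
    }
};

int main() {
    auto session = std::make_shared<Session>();
    // The member-function pointer and the shared_ptr are stored up front;
    // the remaining parameters stay open and are supplied at call time,
    // just as Beast supplies (error_code, bytes_transferred) on completion.
    auto handler = std::bind_front(&Session::on_read, session);
    handler(0, 42); // this is the call the I/O layer makes when the read completes
}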

multithreading problem in boost asio example

I'm developing a TCP service, and I took an example from Boost.Asio as a starting point (https://www.boost.org/doc/libs/1_73_0/doc/html/boost_asio/example/cpp11/chat/chat_server.cpp). I'm worried about something: as I understand it, any time you want to send something you have to use the deliver function, which checks the status of, and runs some operations over, the write_msgs_ queue (in my code write_msgs_ is a queue of std::byte-based structures):
void deliver(const chat_message& msg)
{
    bool write_in_progress = !write_msgs_.empty();
    write_msgs_.push_back(msg);
    if (!write_in_progress)
    {
        do_write();
    }
}
and inside the do_write() function you will see an asynchronous call wrapping a lambda function:
void do_write()
{
    auto self(shared_from_this());
    boost::asio::async_write(socket_,
        boost::asio::buffer(write_msgs_.front().data(),
            write_msgs_.front().length()),
        [this, self](boost::system::error_code ec, std::size_t /*length*/)
        {
            if (!ec)
            {
                write_msgs_.pop_front();
                if (!write_msgs_.empty())
                {
                    do_write();
                }
            }
            else
            {
                room_.leave(shared_from_this());
            }
        });
}
where the call keeps sending messages until the queue is empty.
Now, as I understand it, boost::asio::async_write makes the lambda function thread safe; but since write_msgs_ is also used in the deliver function, which is outside the isolation given by the io_context, a mutex is needed. So, should I take a mutex each time the write queue is used, or is it cheaper to use boost::asio::post() to call the deliver function, isolating write_msgs_ from the asynchronous calls?
something like this:
boost::asio::io_service srvc; // this somewhere

void deliver2(const chat_message &msg)
{
    srvc.post(std::bind(&chat_session::deliver, this, msg));
}
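As a sketch of that idea with a per-session strand (assuming chat_session stores a boost::asio::strand<boost::asio::io_context::executor_type> member named strand_; the name is illustrative), posting deliver through the strand serializes every touch of write_msgs_ without a mutex:

void deliver2(const chat_message& msg)
{
    // Every handler posted through strand_ runs serialized with the
    // async_write completion handler, so write_msgs_ is never accessed
    // from two threads at once.
    boost::asio::post(strand_,
        [self = shared_from_this(), msg] { self->deliver(msg); });
}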

ASIO async_read doesn't work while async_read_until works on server

Observation
I built a demo application according to this server example using ASIO, after using the C++11 standard library to replace everything originally from Boost. The server shows that the class member tcp_session::start() is called only after the client connects, which is a good indication that the server accepts the connection from the client.
However, I see nothing received by handle_read while the client sends a lot of data. I put some std::cout in handle_read and stop(). With the timeout now set to 6 seconds, I found this:
start() is called right after the client connects; then nothing indicates that handle_read is called, but after 6 seconds stop() is called, and then handle_read fires because of the timeout, with socket_.is_open() returning false.
Then I found that if I change async_read to the async_read_until call that I had originally commented out, handle_read is called and socket_.is_open() is true, so I can really see the packets.
Question:
The delimiter was there, but I don't want one. How do I asynchronously read a whole TCP string without a delimiter? Why doesn't async_read work? Should it work like this? Is there anything wrong in my code?
I am using VS2015 and testing on localhost.
Answer
TCP doesn't have message boundaries, so I decided to append a special character to each packet to indicate its end.
Here is some relevant code:
class tcp_session : public subscriber, public std::enable_shared_from_this<tcp_session> {
public:
    void start() {
        std::cout << "started" << std::endl;
        channel_.join(shared_from_this());
        start_read();
        input_deadline_.async_wait(
            std::bind(&tcp_session::check_deadline, shared_from_this(), &input_deadline_)
        );
        await_output();
        output_deadline_.async_wait(
            std::bind(&tcp_session::check_deadline, shared_from_this(), &output_deadline_)
        );
    }

private:
    bool stopped() const {
        return !socket_.is_open(); // weird that it is still not open
    }

    void start_read() {
        // Set a deadline for the read operation.
        input_deadline_.expires_from_now(timeout_); // was std::chrono::seconds(30) in the example
        char a = 0x7F;
        // Start an asynchronous operation to read a 0x7F-delimited message or read all
        //asio::async_read_until(socket_, input_buffer_, a, std::bind(&TCP_Session::handle_read, shared_from_this(), std::placeholders::_1));
        asio::async_read(socket_, input_buffer_,
            std::bind(&TCP_Session::handle_read, shared_from_this(), std::placeholders::_1));
    }

    void handle_read(const asio::error_code& ec) {
        if (stopped()) // it thinks it stopped and returns without processing
            return;
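For reference, a sketch of the delimiter-based read that the answer settles on, reusing the socket_, input_buffer_ and handle_read names from the code above:

// asio::async_read with a dynamic buffer and no completion condition only
// completes on error/EOF or when the buffer hits its maximum size, which is
// why handle_read appeared to never fire. Reading up to a delimiter
// completes as soon as the delimiter byte arrives:
asio::async_read_until(socket_, input_buffer_, '\x7F',
    std::bind(&tcp_session::handle_read, shared_from_this(),
        std::placeholders::_1));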

Persistent ASIO connections

I am working on a project where I need a few persistent connections to talk to different servers over long periods of time. This server will have a fairly high throughput. I am having trouble figuring out how to set up the persistent connections correctly. The best way I could think of is to create a persistent connection class. Ideally, I would connect to the server one time, do async_writes as information comes in to me, and read information as it comes back. I don't think I am structuring my class correctly, though.
Here is what I have built right now:
persistent_connection::persistent_connection(std::string ip, std::string port):
    io_service_(), socket_(io_service_), strand_(io_service_), is_setup_(false), outbox_()
{
    boost::asio::ip::tcp::resolver resolver(io_service_);
    boost::asio::ip::tcp::resolver::query query(ip, port);
    boost::asio::ip::tcp::resolver::iterator iterator = resolver.resolve(query);
    boost::asio::ip::tcp::endpoint endpoint = *iterator;
    socket_.async_connect(endpoint, boost::bind(&persistent_connection::handler_connect, this,
        boost::asio::placeholders::error, iterator));
    io_service_.poll();
}
void persistent_connection::handler_connect(const boost::system::error_code &ec,
    boost::asio::ip::tcp::resolver::iterator endpoint_iterator)
{
    if(ec)
    {
        std::cout << "Couldn't connect" << ec << std::endl;
        return;
    }
    else
    {
        boost::asio::socket_base::keep_alive option(true);
        socket_.set_option(option);
        boost::asio::async_read_until(socket_, buf_, "\r\n\r\n",
            boost::bind(&persistent_connection::handle_read_headers, this,
                boost::asio::placeholders::error));
    }
}
void persistent_connection::write(const std::string &message)
{
    write_impl(message);
    //strand_.post(boost::bind(&persistent_connection::write_impl, this, message));
}

void persistent_connection::write_impl(const std::string &message)
{
    outbox_.push_back(message);
    if(outbox_.size() > 1)
    {
        return;
    }
    this->write_to_socket();
}

void persistent_connection::write_to_socket()
{
    std::string message = "GET /" + outbox_[0] + " HTTP/1.0\r\n";
    message += "Host: 10.1.10.120\r\n";
    message += "Accept: */*\r\n";
    boost::asio::async_write(socket_, boost::asio::buffer(message.c_str(), message.size()),
        strand_.wrap(boost::bind(&persistent_connection::handle_write, this,
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred)));
}

void persistent_connection::handle_write(const boost::system::error_code& ec, std::size_t bytes_transferred)
{
    outbox_.pop_front();
    if(ec)
    {
        std::cout << "Send error" << boost::system::system_error(ec).what() << std::endl;
    }
    if(!outbox_.empty())
    {
        this->write_to_socket();
    }
    boost::asio::async_read_until(socket_, buf_, "\r\n\r\n",
        boost::bind(&persistent_connection::handle_read_headers, this,
            boost::asio::placeholders::error));
}
The first message I send from this seems to go out fine; the server gets it and responds with a valid response. Unfortunately, I see a few problems:
1) My handle_write is never called after the async_write command, and I have no clue why.
2) The program never reads the response. I am guessing this is related to #1, since async_read_until is not called until that function runs.
3) I was also wondering if someone could tell me why my commented-out strand_.post call would not work.
I am guessing most of this has to do with my lack of knowledge of how I should be using my io_service, so if somebody could give me any pointers, that would be greatly appreciated. And if you need any additional information, I would be glad to provide it.
Thank you
Edit: the calls to write:
int main()
{
    persistent_connection p("10.1.10.220", "80");
    p.write("100");
    p.write("200");
    barrier b(1, 30000); //Timed mutex, waits for 300 seconds.
    b.wait();
}
and
void persistent_connection::handle_read_headers(const boost::system::error_code &ec)
{
    std::istream is(&buf_);
    std::string read_stuff;
    std::getline(is, read_stuff);
    std::cout << read_stuff << std::endl;
}
The behavior described is the result of the io_service_'s event loop no longer being processed.
The constructor invokes io_service::poll(), which runs the handlers that are ready to run and does not block waiting for work to finish, whereas io_service::run() blocks until all work has finished. Thus, when polling, if the other side of the connection has not yet written any data, no handlers may be ready to run, and execution returns from poll().
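A minimal sketch of keeping the event loop alive instead (assuming the class gains boost::asio::io_service::work and std::thread members; the names work_ and io_thread_ are illustrative):

persistent_connection::persistent_connection(std::string ip, std::string port)
    : io_service_(), socket_(io_service_), strand_(io_service_),
      work_(io_service_)  // io_service::work keeps run() from returning early
{
    // ... resolve and async_connect as before ...
    // run(), not poll(): this thread keeps processing handlers for the
    // connection's whole lifetime.
    io_thread_ = std::thread([this] { io_service_.run(); });
}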
With regard to threading: if each connection will have its own thread, and the communication is a half-duplex protocol such as HTTP, then the application code may be simpler if it is written synchronously. On the other hand, if each connection will have its own thread but the code is written asynchronously, then consider handling exceptions thrown from within the event loop. It may be worth reading Boost.Asio's notes on the
effect of exceptions thrown from handlers.
Also, persistent_connection::write_to_socket() introduces undefined behavior. When invoking boost::asio::async_write(), it is documented that the caller retains ownership of the buffer and must guarantee that the buffer remains valid until the handler is called. In this case, the message buffer is an automatic variable, whose lifespan may end before the persistent_connection::handle_write handler is invoked. One solution could be to change the lifespan of message to match that of persistent_connection by making it a member variable.
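Following that suggestion, a sketch of write_to_socket() with the buffer as a member (assuming a std::string message_ member is added to persistent_connection):

void persistent_connection::write_to_socket()
{
    // message_ is a member, so the buffer outlives the async_write
    // instead of dying with the local scope.
    message_  = "GET /" + outbox_[0] + " HTTP/1.0\r\n";
    message_ += "Host: 10.1.10.120\r\n";
    message_ += "Accept: */*\r\n";
    boost::asio::async_write(socket_, boost::asio::buffer(message_),
        strand_.wrap(boost::bind(&persistent_connection::handle_write, this,
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred)));
}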