boost::asio::async_receive and 0 bytes in socket - c++

Pseudo-code:

boost::asio::streambuf my_buffer;
boost::asio::ip::tcp::socket my_socket;

auto read_handler = [this](const boost::system::error_code& ec, size_t bytes_transferred) {
    // my logic
};

my_socket.async_receive(my_buffer.prepare(512),
                        read_handler);
When using traditional recv with a non-blocking socket, it returns -1 when there is nothing to read from the socket.
But async_receive does not invoke read_handler when there is no data; it waits indefinitely.
How can I implement logic (asynchronously) that calls read_handler with bytes_transferred == 0 (possibly with an error code set) when there is nothing to read from the socket?
(async_read_some has the same behavior.)

In short: immediately after initiating the async_receive() operation, cancel it. If the completion handler is invoked with boost::asio::error::operation_aborted as the error, then the operation would have blocked. Otherwise, the read operation completed with success and has read from the socket, or it failed for some other reason, such as the remote peer closing the connection.
socket.async_receive(boost::asio::buffer(buffer), handler);
socket.cancel();
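In the completion handler, operation_aborted can then be interpreted as "there was nothing to read". A minimal sketch of such a handler:

auto handler = [](const boost::system::error_code& ec, std::size_t bytes_transferred)
{
    if (ec == boost::asio::error::operation_aborted)
    {
        // The operation would have blocked: there was nothing to read.
    }
    else if (!ec)
    {
        // Data was immediately available; bytes_transferred bytes were read.
    }
    else
    {
        // The operation completed immediately with an error,
        // e.g. the remote peer closed the connection.
    }
};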
Within the initiating function of an asynchronous operation, an attempt will be made to perform a non-blocking read. This behavior is subtly noted in the async_receive() documentation:
Regardless of whether the asynchronous operation completes immediately or not, [...]
Hence, if the operation completes immediately, whether with success or error, the completion handler will already be queued for invocation, and the operation is no longer cancelable. On the other hand, if the operation would block, then it is enqueued into the reactor for monitoring, where it becomes cancelable.
One can also obtain similar behavior with synchronous operations by enabling non-blocking mode on the socket. When the socket is set to non-blocking, synchronous operations that would block will instead fail with boost::asio::error::would_block.
socket.non_blocking(true);
auto bytes_transferred = socket.receive(
    boost::asio::buffer(buffer), 0 /* flags */, error);
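The caller can then treat would_block as "no data available right now" rather than as a fatal error (a sketch):

if (error == boost::asio::error::would_block)
{
    // No data was available; the socket is still usable, try again later.
}
else if (!error)
{
    // bytes_transferred bytes were read.
}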
Here is a complete example demonstrating these behaviors:
#include <array>
#include <cassert>
#include <iostream>
#include <boost/asio.hpp>
#include <boost/bind.hpp>

// This example is not interested in the handlers, so provide a noop function
// that will be passed to bind to meet the handler concept requirements.
void noop() {}

void print_status(
    const boost::system::error_code& error,
    std::size_t bytes_transferred)
{
    std::cout << "error = (" << error << ") " << error.message() << "; "
                 "bytes_transferred = " << bytes_transferred
              << std::endl;
}

int main()
{
    using boost::asio::ip::tcp;

    // Create all I/O objects.
    boost::asio::io_service io_service;
    tcp::acceptor acceptor(io_service, tcp::endpoint(tcp::v4(), 0));
    tcp::socket socket1(io_service);
    tcp::socket socket2(io_service);

    // Connect the sockets.
    acceptor.async_accept(socket1, boost::bind(&noop));
    socket2.async_connect(acceptor.local_endpoint(), boost::bind(&noop));
    io_service.run();
    io_service.reset();

    std::array<char, 512> buffer;

    // Scenario: async_receive when socket has no data.
    // Within the initiating asynchronous read function, an attempt to read
    // data will be made. If it would block, the operation is added to the
    // reactor for monitoring, where it can be cancelled.
    {
        std::cout << "Scenario: async_receive when socket has no data"
                  << std::endl;
        socket1.async_receive(boost::asio::buffer(buffer), &print_status);
        socket1.cancel();
        io_service.run();
        io_service.reset();
    }

    // Scenario: async_receive when socket has data.
    // The operation will complete within the initiating function, and is
    // not available for cancellation.
    {
        std::cout << "Scenario: async_receive when socket has data" << std::endl;
        boost::asio::write(socket2, boost::asio::buffer("hello"));
        socket1.async_receive(boost::asio::buffer(buffer), &print_status);
        socket1.cancel();
        io_service.run();
    }

    // One can also get the same behavior with synchronous operations by
    // enabling non_blocking mode.
    boost::system::error_code error;
    std::size_t bytes_transferred = 0;
    socket1.non_blocking(true);

    // Scenario: non-blocking synchronous read when socket has no data.
    {
        std::cout << "Scenario: non-blocking synchronous read when socket"
                     " has no data." << std::endl;
        bytes_transferred = socket1.receive(
            boost::asio::buffer(buffer), 0 /* flags */, error);
        assert(error == boost::asio::error::would_block);
        print_status(error, bytes_transferred);
    }

    // Scenario: non-blocking synchronous read when socket has data.
    {
        std::cout << "Scenario: non-blocking synchronous read when socket"
                     " has data." << std::endl;
        boost::asio::write(socket2, boost::asio::buffer("hello"));
        bytes_transferred = socket1.receive(
            boost::asio::buffer(buffer), 0 /* flags */, error);
        print_status(error, bytes_transferred);
    }
}
Output:
Scenario: async_receive when socket has no data
error = (system:125) Operation canceled; bytes_transferred = 0
Scenario: async_receive when socket has data
error = (system:0) Success; bytes_transferred = 6
Scenario: non-blocking synchronous read when socket has no data.
error = (system:11) Resource temporarily unavailable; bytes_transferred = 0
Scenario: non-blocking synchronous read when socket has data.
error = (system:0) Success; bytes_transferred = 6
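The same trick also works with the streambuf from the question; just remember to commit() whatever was read so that it becomes part of the input sequence. A sketch using the question's my_socket and my_buffer:

my_socket.async_receive(my_buffer.prepare(512),
    [&](const boost::system::error_code& ec, std::size_t bytes_transferred)
    {
        if (!ec)
            my_buffer.commit(bytes_transferred); // make the read data readable
        else if (ec == boost::asio::error::operation_aborted)
        {
            // There was nothing to read from the socket.
        }
    });
my_socket.cancel();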

Related

Boost-Beast async web socket Server-Client async read-write not writing output on console

I am trying out the Boost.Beast examples for an asynchronous websocket server and client.
I am running the server and client as below:
server.exe 127.0.0.1 4242 1
client.exe 127.0.0.1 4242 "Hello"
If everything works, I believe it should print "Hello" on the server command prompt.
Below is the code:
void
on_read(
    beast::error_code ec,
    std::size_t bytes_transferred)
{
    boost::ignore_unused(bytes_transferred);

    // This indicates that the session was closed
    if (ec == websocket::error::closed)
        return;

    if (ec)
        fail(ec, "read");

    // Echo the message
    ws_.text(ws_.got_text());
    std::cout << "writing received value " << std::endl;
    ws_.async_write(
        buffer_.data(),
        beast::bind_front_handler(
            &session::on_write,
            shared_from_this()));
    std::cout << buffer_.data().data() << std::endl;
}
The received message is not printed on the console; std::cout << buffer_.data().data() only renders a pointer value such as
00000163E044EE80
How do I make sure this is working fine? How do I retrieve the string value from the socket buffer?
The line printing the content of the sent message should be placed before the async_write call:

std::cout << buffer_.data().data() << std::endl;
ws_.async_write(
    buffer_.data(),
    beast::bind_front_handler(
        &session::on_write,
        shared_from_this()));
Why?
All functions in Boost.Asio/Beast whose names start with async_ ALWAYS return immediately. They initiate a task that is performed in the background by the Asio core, and when it completes, the corresponding handler is called.
Look at on_write handler:
void
on_write(
    beast::error_code ec,
    std::size_t bytes_transferred)
{
    boost::ignore_unused(bytes_transferred);

    if (ec)
        return fail(ec, "write");

    // Clear the buffer
    buffer_.consume(buffer_.size()); /// <---
consume removes a block of bytes of length buffer_.size() from the beginning of buffer_, i.e. it empties the buffer.
Your problem is that the buffer has probably already been cleared by the time it is printed:
thread 1              thread 2            steps
-----------------------------------------------
async_write                               [1]
                      consume             [2]
cout << buffer_                           [3]
In addition to using the buffer before it is consumed, in order to convert the buffer to a string I had to write a to_string_ function, which takes a flat buffer and returns the string:
std::string to_string_(beast::flat_buffer const& buffer)
{
    return std::string(boost::asio::buffer_cast<char const*>(
        beast::buffers_front(buffer.data())),
        boost::asio::buffer_size(buffer.data()));
}
I later found out this can also be done easily with beast::buffers_to_string(buffer_.data()).
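For example, assuming buffer_ is a beast::flat_buffer as in the Beast examples, the helper above reduces to:

std::string message = beast::buffers_to_string(buffer_.data());
std::cout << message << std::endl; // print before async_write consumes the buffer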
Reference : trying-to-understand-the-boostbeast-multibuffer

Handling multiple clients with async_accept

I'm writing a secure SSL echo server with Boost.Asio and coroutines. I'd like this server to be able to serve multiple concurrent clients. This is my code:
try {
    boost::asio::io_service io_service;
    boost::asio::spawn(io_service, [&io_service](boost::asio::yield_context yield) {
        auto ctx = boost::asio::ssl::context{ boost::asio::ssl::context::sslv23 };
        ctx.set_options(
            boost::asio::ssl::context::default_workarounds
            | boost::asio::ssl::context::no_sslv2
            | boost::asio::ssl::context::single_dh_use);
        ctx.use_private_key_file(..); // My data setup
        ctx.use_certificate_chain_file(...); // My data setup
        boost::asio::ip::tcp::acceptor acceptor(io_service,
            boost::asio::ip::tcp::endpoint(boost::asio::ip::tcp::v4(), port));
        for (;;) {
            boost::asio::ssl::stream<boost::asio::ip::tcp::socket> sock{ io_service, ctx };
            acceptor.async_accept(sock.next_layer(), yield);
            sock.async_handshake(boost::asio::ssl::stream_base::server, yield);
            auto ec = boost::system::error_code{};
            char data_[1024];
            auto nread = sock.async_read_some(boost::asio::buffer(data_, 1024), yield[ec]);
            if (ec == boost::asio::error::eof)
                return; // connection closed cleanly by peer
            else if (ec)
                throw boost::system::system_error(ec); // some other error, is this desirable?
            sock.async_write_some(boost::asio::buffer(data_, nread), yield[ec]);
            if (ec == boost::asio::error::eof)
                return; // connection closed cleanly by peer
            else if (ec)
                throw boost::system::system_error(ec); // some other error
            // Shutdown gracefully
            sock.async_shutdown(yield[ec]);
            if (ec && (ec.category() == boost::asio::error::get_ssl_category())
                && (SSL_R_PROTOCOL_IS_SHUTDOWN == ERR_GET_REASON(ec.value())))
            {
                sock.lowest_layer().close();
            }
        }
    });
    io_service.run();
}
catch (std::exception& e)
{
    std::cerr << "Exception: " << e.what() << "\n";
}
Anyway, I'm not sure the code above will do what I want: in theory, calling async_accept returns control to the io_service manager.
Will another connection be accepted if one has already been accepted, i.e. if execution is already past the async_accept line?
It's a bit hard to understand the specifics of your question, since the code is incomplete (e.g., there's a return in your block, but it's unclear what that block is part of).
Notwithstanding, the documentation contains an example of a TCP echo server using coroutines. It seems you basically need to add SSL support to it, to adapt it to your needs.
If you look at main, it has the following chunk:
boost::asio::spawn(io_service,
    [&](boost::asio::yield_context yield)
    {
        tcp::acceptor acceptor(io_service,
            tcp::endpoint(tcp::v4(), std::atoi(argv[1])));
        for (;;)
        {
            boost::system::error_code ec;
            tcp::socket socket(io_service);
            acceptor.async_accept(socket, yield[ec]);
            if (!ec) std::make_shared<session>(std::move(socket))->go();
        }
    });
This loops endlessly, and, following each (successful) call to async_accept, handles accepting the next connection (while this connection and others might still be active).
Again, I'm not sure about your code, but it contains exits from the loop like
return; //connection closed cleanly by peer
To illustrate the point, here are two applications.
The first is a Python multiprocessing echo client, adapted from PMOTW:
import socket
import sys
import multiprocessing

def session(i):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server_address = ('localhost', 5000)
    print 'connecting to %s port %s' % server_address
    sock.connect(server_address)
    print 'connected'
    for _ in range(300):
        try:
            # Send data
            message = 'client ' + str(i) + ' message'
            print 'sending "%s"' % message
            sock.sendall(message)
            # Look for the response
            amount_received = 0
            amount_expected = len(message)
            while amount_received < amount_expected:
                data = sock.recv(16)
                amount_received += len(data)
                print 'received "%s"' % data
        except:
            print >>sys.stderr, 'closing socket'
            sock.close()

if __name__ == '__main__':
    pool = multiprocessing.Pool(8)
    pool.map(session, range(8))
The details are not that important (although it's Python, and therefore easy to read), but the point is that it opens up 8 processes, and each engages the same asio echo server (below) with 300 messages.
When run, it outputs
...
received "client 1 message"
sending "client 1 message"
received "client 2 message"
sending "client 2 message"
received "client 3 message"
received "client 0 message"
sending "client 3 message"
sending "client 0 message"
...
showing that the echo sessions are indeed interleaved.
Now for the echo server. I've slightly adapted the example from the docs:
#include <cstdlib>
#include <iostream>
#include <memory>
#include <utility>
#include <boost/asio.hpp>

using boost::asio::ip::tcp;

class session :
    public std::enable_shared_from_this<session> {
public:
    session(tcp::socket socket) : socket_(std::move(socket)) {}

    void start() { do_read(); }

private:
    void do_read() {
        auto self(shared_from_this());
        socket_.async_read_some(
            boost::asio::buffer(data_, max_length),
            [this, self](boost::system::error_code ec, std::size_t length) {
                if (!ec)
                    do_write(length);
            });
    }

    void do_write(std::size_t length) {
        auto self(shared_from_this());
        socket_.async_write_some(
            boost::asio::buffer(data_, length),
            [this, self](boost::system::error_code ec, std::size_t /*length*/) {
                if (!ec)
                    do_read();
            });
    }

private:
    tcp::socket socket_;
    enum { max_length = 1024 };
    char data_[max_length];
};

class server {
public:
    server(boost::asio::io_service& io_service, short port) :
        acceptor_(io_service, tcp::endpoint(tcp::v4(), port)),
        socket_(io_service) {
        do_accept();
    }

private:
    void do_accept() {
        acceptor_.async_accept(
            socket_,
            [this](boost::system::error_code ec) {
                if (!ec)
                    std::make_shared<session>(std::move(socket_))->start();
                do_accept();
            });
    }

    tcp::acceptor acceptor_;
    tcp::socket socket_;
};

int main(int argc, char* argv[]) {
    const int port = 5000;
    try {
        boost::asio::io_service io_service;
        server s{io_service, port};
        io_service.run();
    }
    catch (std::exception& e) {
        std::cerr << "Exception: " << e.what() << "\n";
    }
}
This shows that this server indeed interleaves.
Note that this is not the coroutine version. While I once played with the coroutine version a bit, I just couldn't get it to build on my current box (also, as sehe notes in the comments below, you might anyway prefer this more mainstream version for now).
However, this is not a fundamental difference w.r.t. your question. The non-coroutine version has callbacks explicitly launching new operations and supplying the next callback; the coroutine version uses a more sequential-looking paradigm. In both versions, each call returns to asio's control loop, which monitors all the current operations that can proceed.
From the asio coroutine docs:
Coroutines let you create a structure that mirrors the actual program logic. Asynchronous operations don’t split functions, because there are no handlers to define what should happen when an asynchronous operation completes. Instead of having handlers call each other, the program can use a sequential structure.
It's not that the sequential structure makes all operations sequential - that would eradicate the entire need for asio.
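For completeness, here is a sketch of how the question's coroutine server could be restructured so that each accepted connection runs in its own coroutine, leaving the accept loop free to serve further clients. It assumes ctx and port are set up as in the question, and it deliberately omits the detailed error handling and SSL shutdown logic shown there:

boost::asio::spawn(io_service, [&](boost::asio::yield_context yield) {
    boost::asio::ip::tcp::acceptor acceptor(io_service,
        boost::asio::ip::tcp::endpoint(boost::asio::ip::tcp::v4(), port));
    for (;;) {
        boost::system::error_code ec;
        auto sock = std::make_shared<
            boost::asio::ssl::stream<boost::asio::ip::tcp::socket>>(io_service, ctx);
        acceptor.async_accept(sock->next_layer(), yield[ec]);
        if (ec) continue;
        // Hand the connection off to its own coroutine; the accept loop
        // continues immediately and can accept more clients.
        boost::asio::spawn(io_service, [sock](boost::asio::yield_context yield) {
            boost::system::error_code ec;
            sock->async_handshake(boost::asio::ssl::stream_base::server, yield[ec]);
            if (ec) return;
            char data[1024];
            auto nread = sock->async_read_some(boost::asio::buffer(data), yield[ec]);
            if (ec) return;
            boost::asio::async_write(*sock, boost::asio::buffer(data, nread), yield[ec]);
            sock->async_shutdown(yield[ec]);
        });
    }
});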

calling boost::asio::tcp::socket methods after async_read handler returned an error in server

For log output I am calling tcp::socket::remote_endpoint() from a shared_ptr Session object, both when the Session is created and when it is destroyed. Consider the case where an async_read is pending, the client sends a FIN before the server sends a reply, and then sends an RST packet after the server has sent the reply (the write doesn't return any errors). The async_read function then returns error code system:54 (not_connected, with a message of "Connection reset by peer"), and when I call the remote_endpoint method again (in the Session object's destructor) it throws an exception:
libc++abi.dylib: terminating with uncaught exception of type boost::exception_detail::clone_impl<boost::exception_detail::error_info_injector<boost::system::system_error> >: remote_endpoint: Invalid argument
Does the async_read error invalidate the socket or is there another cause of this? I can't see anything in the boost::asio 1.59.0 docs.
I should probably add that this socket is the socket underlying a boost::asio::ssl::stream<tcp::socket&>
An example of the above occurs in this code:
void read() {
    auto self(shared_from_this());
    boost::asio::async_read(ssl_stream_, boost::asio::buffer(buffer_),
        [this, self](const boost::system::error_code &ec, std::size_t) {
            if (!ec) {
                processBuffer();
            } else {
                /* system:54 error occurs see here */
                std::cout << "read ec: " << ec << " " << ec.message() << std::endl;
                /* This will throw an exception (Invalid argument) */
                auto endpoint = socket_.remote_endpoint();
            }
        });
}
Is boost::asio::ssl::stream<tcp::socket&> correct? I think I've only ever seen boost::asio::ssl::stream<tcp::socket> before
Also
I should probably add that this socket is the socket underlying a boost::asio::ssl::stream
You should be async_read-ing from the SSL stream (after the handshake), not from the underlying socket. If the stream is in an SSL session, reading from or writing to the underlying socket directly will cause the SSL session to fail, and it might be shut down.
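Regarding the remote_endpoint() exception itself: once the peer has reset the connection, the socket no longer has a connected peer to report, so the throwing overload fails. Two common workarounds (a sketch, not tied to any particular Asio version) are to use the error_code overload, or to cache the endpoint while the connection is known to be up:

// Option 1: the non-throwing overload.
boost::system::error_code ec;
auto endpoint = socket_.remote_endpoint(ec);
if (!ec)
    std::cout << endpoint << std::endl;

// Option 2: cache the endpoint when the session starts, log the copy later.
class Session {
public:
    explicit Session(boost::asio::ip::tcp::socket& socket)
        : remote_endpoint_(socket.remote_endpoint()) // still connected here
    {}
    ~Session()
    {
        // Safe even after the connection was reset:
        std::cout << "session closed: " << remote_endpoint_ << std::endl;
    }
private:
    boost::asio::ip::tcp::endpoint remote_endpoint_;
};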

boost socket comms are not working past one exchange

I am converting an app which had a very simple heartbeat / status monitoring connection between two services. It now needs to run on Linux in addition to Windows, so I thought I'd use Boost (v1.51; I cannot upgrade, since the Linux compilers are too old and the Windows compiler is Visual Studio 2005) to make the code platform agnostic. I really would prefer not to have two code files, one for each OS, or a littering of #defines throughout the code, when Boost offers the possibility of code that is pleasant to read six months after I've checked it in and forgotten it.
My problem now is that the connection is timing out. Actually, it's not really working at all.
First time through, the 'status' message is sent and received by the server end, which sends back an appropriate response. The server end then goes back to waiting on the socket for another message. The client end (this code) sends the 'status' message again... but this time the server never receives it, and the read_some() call blocks until the socket times out. I find this really strange, because the server end has not changed. The only thing that's changed is that I altered the client code from basic winsock2 sockets to this code. Previously, it connected and just looped through send / recv calls until the program was aborted or the 'lockdown' message was received.
Why would subsequent calls (to send) silently fail to send anything on the socket, and what do I need to adjust in order to restore the simple send / recv flow?
#include <boost/signals2/signal.hpp>
#include <boost/bind.hpp>
#include <iostream>
#include <boost/array.hpp>
#include <boost/asio.hpp>
#include <boost/thread.hpp>

using boost::asio::ip::tcp;
using namespace std;

boost::system::error_code ServiceMonitorThread::ConnectToPeer(
    tcp::socket &socket,
    tcp::resolver::iterator endpoint_iterator)
{
    boost::system::error_code error;
    int tries = 0;
    for (; tries < maxTriesBeforeAbort; tries++)
    {
        boost::asio::connect(socket, endpoint_iterator, error);
        if (!error)
        {
            break;
        }
        else if (error != make_error_code(boost::system::errc::success))
        {
            // Error connecting to service... may not be running?
            cerr << error.message() << endl;
            boost::this_thread::sleep_for(boost::chrono::milliseconds(200));
        }
    }
    if (tries == maxTriesBeforeAbort)
    {
        error = make_error_code(boost::system::errc::host_unreachable);
    }
    return error;
}

// Main thread-loop routine.
void ServiceMonitorThread::run()
{
    boost::system::error_code error;
    tcp::resolver resolver(io_service);
    tcp::resolver::query query(hostnameOrAddress, to_string(port));
    tcp::resolver::iterator endpoint_iterator = resolver.resolve(query);
    tcp::socket socket(io_service);

    error = ConnectToPeer(socket, endpoint_iterator);
    if (error && error == boost::system::errc::host_unreachable)
    {
        TerminateProgram();
    }

    boost::asio::streambuf command;
    std::ostream command_stream(&command);
    command_stream << "status\n";

    boost::array<char, 10> response;
    int retry = 0;
    while (retry < maxTriesBeforeAbort)
    {
        // A 1s request interval is more than sufficient for status checking.
        boost::this_thread::sleep_for(boost::chrono::seconds(1));

        // Send the command to the network monitor server service.
        boost::asio::write(socket, command, error);
        if (error)
        {
            // Error sending to socket
            cerr << error.message() << endl;
            retry++;
            continue;
        }

        // Clear the response buffer, then read the network monitor status.
        response.assign(0);
        /* size_t bytes_read = */ socket.read_some(boost::asio::buffer(response), error);
        if (error)
        {
            if (error == make_error_code(boost::asio::error::eof))
            {
                // Connection was dropped, re-connect to the service.
                error = ConnectToPeer(socket, endpoint_iterator);
                if (error && error == make_error_code(boost::system::errc::host_unreachable))
                {
                    TerminateProgram();
                }
                continue;
            }
            else
            {
                cerr << error.message() << endl;
                retry++;
                continue;
            }
        }

        // Examine the response message.
        if (strncmp(response.data(), "normal", 6) != 0)
        {
            retry++;
            // If we received the lockdown response, then terminate.
            if (strncmp(response.data(), "lockdown", 8) == 0)
            {
                break;
            }
            // Not an expected response, potential error, retry to see if it was merely an aberration.
            continue;
        }

        // If we arrived here, the exchange was successful; reset the retry count.
        if (retry > 0)
        {
            retry = 0;
        }
    }

    // If retry count was incremented, then we have likely encountered an issue; shut things down.
    if (retry != 0)
    {
        TerminateProgram();
    }
}
When a streambuf is provided directly to an I/O operation as the buffer, the I/O operation will manage the input sequence appropriately, either committing read data or consuming written data. Hence, in the following code, command is empty after the first iteration:
boost::asio::streambuf command;
std::ostream command_stream(&command);
command_stream << "status\n";
// `command`'s input sequence contains "status\n".

while (retry < maxTriesBeforeAbort)
{
    ...
    // Write all of `command`'s input sequence to the socket.
    boost::asio::write(socket, command, error);
    // `command.size()` is 0, as the write operation will consume the data.
    // Subsequent write operations with `command` will be no-ops.
    ...
}
One solution would be to use std::string as the buffer:
std::string command("status\n");
while (retry < maxTriesBeforeAbort)
{
    ...
    boost::asio::write(socket, boost::asio::buffer(command), error);
    ...
}
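Alternatively, if you want to keep the streambuf, refill its input sequence before each write, since each write consumes it (a sketch):

boost::asio::streambuf command;
std::ostream command_stream(&command);
while (retry < maxTriesBeforeAbort)
{
    ...
    command_stream << "status\n"; // re-append the command; write() will consume it
    boost::asio::write(socket, command, error);
    ...
}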
For more details on streambuf usage, consider reading this answer.

Stopping async_connect

I currently use Windows 7 64bit, MSVC2010 and Boost.Asio 1.57. I would like to connect to a TCP server with a timeout. If the timeout expires, I should close the connection as soon as possible as the IP address (chosen by a user) is probably wrong.
I know I should use async requests because sync requests have no timeouts options included. So I'm using async_connect with an external timeout. This is a solution I have found in many places, including stackoverflow.
The problem is that the following code does not behave as I wished. async_connect is not "cancelled" by the socket.close(). On my computer, closing the socket takes about 15 seconds to complete, which makes my program unresponsive for a while...
I would like to have a decent timeout (approx. 3 seconds) and close the socket after this time, so that the user can try to connect with another IP address (from the HMI)
#include <iostream>
#include <boost/asio.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/bind.hpp>

using boost::asio::ip::tcp;

class tcp_client
{
public:
    tcp_client(boost::asio::io_service& io_service, tcp::endpoint& endpoint, long long timeout = 3000000)
        : m_io_service(io_service),
          m_endpoint(endpoint),
          m_timer(io_service),
          m_timeout(timeout)
    {
        connect();
    }

    void stop()
    {
        m_socket->close();
    }

private:
    void connect()
    {
        m_socket.reset(new tcp::socket(m_io_service));
        std::cout << "TCP Connection in progress" << std::endl;
        m_socket->async_connect(m_endpoint,
            boost::bind(&tcp_client::handle_connect, this,
                m_socket,
                boost::asio::placeholders::error)
        );
        m_timer.expires_from_now(boost::posix_time::microseconds(m_timeout));
        m_timer.async_wait(boost::bind(&tcp_client::HandleWait, this, boost::asio::placeholders::error));
    }

    void handle_connect(boost::shared_ptr<tcp::socket> socket, const boost::system::error_code& error)
    {
        if (!error)
        {
            std::cout << "TCP Connection : connected !" << std::endl;
            m_timer.expires_at(boost::posix_time::pos_infin); // Stop the timer !
            // Read normally
        }
        else
        {
            std::cout << "TCP Connection failed" << std::endl;
        }
    }

public:
    void HandleWait(const boost::system::error_code& error)
    {
        if (!error)
        {
            std::cout << "Connection not established..." << std::endl;
            std::cout << "Trying to close socket..." << std::endl;
            stop();
            return;
        }
    }

    boost::asio::io_service& m_io_service;
    boost::shared_ptr<tcp::socket> m_socket;
    tcp::endpoint m_endpoint;
    boost::asio::deadline_timer m_timer;
    long long m_timeout;
};

int main()
{
    boost::asio::io_service io_service;
    tcp::endpoint endpoint(boost::asio::ip::address_v4::from_string("192.168.10.74"), 7171); // invalid address
    tcp_client tcpc(io_service, endpoint);
    io_service.run();
    system("pause");
}
The only solution I found is to run io_service::run() in many threads and to create a new socket for each connection. But that solution does not seem valid to me, as I would have to specify a number of threads in advance, and I don't know how many wrong addresses the user will enter in my HMI. Yes, some users are not as clever as others...
What's wrong with my code? How do I interrupt a TCP connection in a clean and fast way?
Best regards,
Poukill
There's nothing fundamentally wrong with the code, and it does exactly what you desire on my Linux box:
TCP Connection in progress
Connection not established...
Trying to close socket...
TCP Connection failed
real 0m3.003s
user 0m0.002s
sys 0m0.000s
Notes:
You may have success adding a cancel() call to the stop() function:
void stop()
{
    m_socket->cancel();
    m_socket->close();
}
You should check whether the timer itself was aborted, though:
void HandleWait(const boost::system::error_code& error)
{
    if (error && error != boost::asio::error::operation_aborted)
    {
        std::cout << "Connection not established..." << std::endl;
        std::cout << "Trying to close socket..." << std::endl;
        stop();
        return;
    }
}
Otherwise the implicit cancel of the timer after successful connect will still close() the socket :)
If you want to run (many) connection attempts in parallel, you don't need any more threads or even more than one io_service. This is the essence of Boost Asio: you can do asynchronous IO operations on a single thread.
This answer gives a pretty isolated picture of this (even though the connections are done using ZMQ there): boost asio deadline_timer async_wait(N seconds) twice within N seconds cause operation canceled
another example, this time about timing out many sessions independently on a single io_service: boost::asio::deadline_timer::async_wait not firing callback
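To make that concrete, here is a sketch of an independent connection attempt, each with its own socket and deadline timer, any number of which can be started on a single io_service. The function name try_connect is made up for illustration, and the sketch assumes C++11 lambdas (which MSVC2010 supports) and the same using boost::asio::ip::tcp declaration as the question:

void try_connect(boost::asio::io_service& io_service, const tcp::endpoint& endpoint)
{
    boost::shared_ptr<tcp::socket> socket(new tcp::socket(io_service));
    boost::shared_ptr<boost::asio::deadline_timer> timer(
        new boost::asio::deadline_timer(io_service));
    socket->async_connect(endpoint,
        [socket, timer](const boost::system::error_code& ec) {
            timer->cancel(); // outcome known: stop this attempt's deadline
            if (!ec)
                std::cout << "connected to " << socket->remote_endpoint() << std::endl;
        });
    timer->expires_from_now(boost::posix_time::seconds(3));
    timer->async_wait([socket](const boost::system::error_code& ec) {
        if (ec != boost::asio::error::operation_aborted)
            socket->close(); // deadline expired: abort only this attempt
    });
}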