async_send data not sent - c++

[disclaimer] I am new to boost.
I've been looking into boost::asio and tried to create a simple asynchronous TCP server with the following functionality:
Listen for connections on port 13
When connected, receive data
If data received == time, then return current datetime, else return a predefined string ("Something else was requested")
The problem:
I accept the connection and receive the data, but when transmitting data using async_send, even though I receive no error and the value of bytes_transferred is correct, the client receives empty data.
If I try to transmit the data from within handle_accept (instead of handle_read), this works fine.
The implementation:
I worked from the boost asio tutorial found here:
Instantiate a tcp_server object, which initializes the acceptor and starts listening, as shown below:
int main()
{
try
{
boost::asio::io_service io_service;
tcp_server server(io_service);
io_service.run();
}
catch (std::exception& e)
{
std::cerr << e.what() << std::endl;
}
return 0;
}
and in tcp_server:
class tcp_server
{
public:
tcp_server(boost::asio::io_service& io_service)
: acceptor_(io_service, tcp::endpoint(tcp::v4(), 13))
{
start_accept();
}
private:
void start_accept()
{
using std::cout;
tcp_connection::pointer new_connection =
tcp_connection::create(acceptor_.get_io_service());
acceptor_.async_accept(new_connection->socket(),
boost::bind(&tcp_server::handle_accept, this, new_connection,
boost::asio::placeholders::error));
cout << "Done";
}
...
};
Once a connection is accepted, I am handling it as shown below:
void handle_accept(tcp_connection::pointer new_connection,
const boost::system::error_code& error)
{
if (!error)
{
new_connection->start();
}
start_accept();
}
Below is the tcp_connection::start() method:
void start()
{
boost::asio::async_read(socket_, boost::asio::buffer(inputBuffer_),
boost::bind(&tcp_connection::handle_read, shared_from_this(),
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
/* the snippet below works here - but not in handle_read
outputBuffer_ = make_daytime_string();
boost::asio::async_write(socket_, boost::asio::buffer(outputBuffer_),
boost::bind(&tcp_connection::handle_write, shared_from_this(),
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));*/
}
and in handle_read:
void handle_read(const boost::system::error_code& error, size_t bytes_transferred)
{
outputBuffer_ = make_daytime_string();
if (strcmp(inputBuffer_, "time"))
{
/*this does not work - correct bytes_transferred but nothing shown on receiving end */
boost::asio::async_write(socket_, boost::asio::buffer(outputBuffer_),
boost::bind(&tcp_connection::handle_write, shared_from_this(),
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
}
else
{
outputBuffer_ = "Something else was requested";//, 128);
boost::asio::async_write(socket_, boost::asio::buffer(outputBuffer_),
boost::bind(&tcp_connection::handle_write, shared_from_this(),
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
}
}
The handle_write is shown below:
void handle_write(const boost::system::error_code& error,
size_t bytes_transferred)
{
if (!error)
{
std::cout << "Bytes transferred: " << bytes_transferred;
std::cout << "Message sent: " << outputBuffer_;
}
else
{
std::cout << "Error in writing: " << error.message();
}
}
Note the following regarding handle_write (and this is the really strange thing):
There is no error
The bytes_transferred variable has the correct value
outputBuffer_ has the correct value (as set in handle_read)
Nevertheless, the packet received on the client side (Packet Sender) is empty (as far as data is concerned).
The complete code is shared here.

Complete test program (C++14). Note the handling of asynchronous buffering when responding to a receive - there may already be a send in progress.
#include <boost/asio.hpp>
#include <thread>
#include <future>
#include <vector>
#include <array>
#include <memory>
#include <mutex>
#include <condition_variable>
#include <iterator>
#include <iostream>
namespace asio = boost::asio;
asio::io_service server_service;
asio::io_service::work server_work{server_service};
bool listening = false;
std::condition_variable cv_listening;
std::mutex management_mutex;
auto const shared_query = asio::ip::tcp::resolver::query(asio::ip::tcp::v4(), "localhost", "8082");
void client()
try
{
asio::io_service client_service;
asio::ip::tcp::socket socket(client_service);
auto lock = std::unique_lock<std::mutex>(management_mutex);
cv_listening.wait(lock, [] { return listening; });
lock.unlock();
asio::ip::tcp::resolver resolver(client_service);
asio::connect(socket, resolver.resolve(shared_query));
auto s = std::string("time\ntime\ntime\n");
asio::write(socket, asio::buffer(s));
socket.shutdown(asio::ip::tcp::socket::shutdown_send);
asio::streambuf sb;
boost::system::error_code sink;
asio::read(socket, sb, sink);
std::cout << std::addressof(sb);
socket.close();
server_service.stop();
}
catch(const boost::system::system_error& se)
{
std::cerr << "client: " << se.code().message() << std::endl;
}
struct connection
: std::enable_shared_from_this<connection>
{
connection(asio::io_service& ios)
: strand_(ios)
{
}
void run()
{
asio::async_read_until(socket_, buffer_, "\n",
strand_.wrap([self = shared_from_this()](auto const&ec, auto size)
{
if (size == 0 )
{
// error condition
boost::system::error_code sink;
self->socket_.shutdown(asio::ip::tcp::socket::shutdown_receive, sink);
}
else {
self->buffer_.commit(size);
std::istream is(std::addressof(self->buffer_));
std::string str;
while (std::getline(is, str))
{
if (str == "time") {
self->queue_send("eight o clock");
}
}
self->run();
}
}));
}
void queue_send(std::string s)
{
assert(strand_.running_in_this_thread());
s += '\n';
send_buffers_pending_.push_back(std::move(s));
nudge_send();
}
void nudge_send()
{
assert(strand_.running_in_this_thread());
if (send_buffers_sending_.empty() and not send_buffers_pending_.empty())
{
std::swap(send_buffers_pending_, send_buffers_sending_);
std::vector<asio::const_buffers_1> send_buffers;
send_buffers.reserve(send_buffers_sending_.size());
std::transform(send_buffers_sending_.begin(), send_buffers_sending_.end(),
std::back_inserter(send_buffers),
[](auto&& str) {
return asio::buffer(str);
});
asio::async_write(socket_, send_buffers,
strand_.wrap([self = shared_from_this()](auto const& ec, auto size)
{
// should check for errors here...
self->send_buffers_sending_.clear();
self->nudge_send();
}));
}
}
asio::io_service::strand strand_;
asio::ip::tcp::socket socket_{strand_.get_io_service()};
asio::streambuf buffer_;
std::vector<std::string> send_buffers_pending_;
std::vector<std::string> send_buffers_sending_;
};
void begin_accepting(asio::ip::tcp::acceptor& acceptor)
{
auto candidate = std::make_shared<connection>(acceptor.get_io_service());
acceptor.async_accept(candidate->socket_, [candidate, &acceptor](auto const& ec)
{
if (not ec) {
candidate->run();
begin_accepting(acceptor);
}
});
}
void server()
try
{
asio::ip::tcp::acceptor acceptor(server_service);
asio::ip::tcp::resolver resolver(server_service);
auto first = resolver.resolve(shared_query);
acceptor.open(first->endpoint().protocol());
acceptor.bind(first->endpoint());
acceptor.listen();
begin_accepting(acceptor);
auto lock = std::unique_lock<std::mutex>(management_mutex);
listening = true;
lock.unlock();
cv_listening.notify_all();
server_service.run();
}
catch(const boost::system::system_error& se)
{
std::cerr << "server: " << se.code().message() << std::endl;
}
int main()
{
using future_type = std::future<void>;
auto stuff = std::array<future_type, 2> {{std::async(std::launch::async, client),
std::async(std::launch::async, server)}};
for (auto& f : stuff) f.wait();
}

There are multiple issues in this code. Some of them may be responsible for your problem:
TCP has no notion of packets, so there is no guarantee that you will ever receive "time" in one piece in handle_read. You need a state machine for that and have to respect the bytes_transferred info: if you have only received part of the message, you need to continue reading at the correct offset. Alternatively, you can use asio's utility functions, like reading exactly a given number of bytes or reading a line (see the sketch after this list).
In addition to the last point, you shouldn't compare the received data with strcmp. That only works if the remote also sends a null terminator over the connection - does it?
You don't check whether an error happened, although that might manifest itself in the other problems you are seeing.
You are possibly issuing multiple concurrent async writes if you receive multiple data fragments in a short timespan. This is not valid in asio.
More importantly, you mutate the send buffer (outputBuffer_) while a send is in progress. This leads to undefined behavior: asio might try to write from a piece of memory that is no longer valid.
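A minimal sketch of the first three points, assuming the question's members plus an added std::string received_ accumulator (illustrative, not a drop-in fix):

void handle_read(const boost::system::error_code& error, size_t bytes_transferred)
{
    if (error)                                          // point 3: check for errors
        return;
    received_.append(inputBuffer_, bytes_transferred);  // points 1/2: use only the
                                                        // bytes actually received
    if (received_ == "time")
    {
        // the complete command has arrived: issue exactly one async_write here
    }
    else
    {
        // possibly a partial message: keep reading and keep accumulating
        socket_.async_read_some(boost::asio::buffer(inputBuffer_),
            boost::bind(&tcp_connection::handle_read, shared_from_this(),
                boost::asio::placeholders::error,
                boost::asio::placeholders::bytes_transferred));
    }
}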

I have solved the problem with the collective help of the comments provided on the question. The behavior I was experiencing was caused by the way async_read works. More specifically, the boost asio documentation reads:
This function is used to asynchronously read a certain number of bytes
of data from a stream. The function call always returns immediately.
The asynchronous operation will continue until one of the following
conditions is true:
The supplied buffers are full. That is, the bytes transferred is equal to the sum of the buffer sizes.
An error occurred.
The inputBuffer_ I was using to read the input was a 128-char array. The client I was using would only transfer the real data (without padding), so async_read would not return until the connection was closed by the client (or until 128 bytes of data had been transferred). Once the connection was closed by the client, there was no way to send back the requested data. This is also the reason it was working with @Arunmu's simple Python TCP client (because he was always sending 128 bytes of data).
To fix the issues, I made the following changes (the full working code is supplied here for reference):
In tcp_connection::start: I am now using async_read_until to read the incoming data (with \n as the delimiter). The input is stored in a boost::asio::streambuf. async_read_until is guaranteed to complete once the delimiter has been found or an error has occurred, so there is no chance of issuing multiple async_writes concurrently.
In handle_read: I have included error checking, which made it much simpler to debug.
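In condensed form, the two changes look roughly like this (a sketch assuming the members from the question, with inputBuffer_ now a boost::asio::streambuf):

void start()
{
    boost::asio::async_read_until(socket_, inputBuffer_, '\n',
        boost::bind(&tcp_connection::handle_read, shared_from_this(),
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred));
}

void handle_read(const boost::system::error_code& error, size_t /*bytes_transferred*/)
{
    if (error)   // change 2: check the error code before doing anything else
    {
        std::cerr << "Read failed: " << error.message() << std::endl;
        return;
    }
    std::istream is(&inputBuffer_);
    std::string request;
    std::getline(is, request);                     // consumes up to and including '\n'
    outputBuffer_ = (request == "time") ? make_daytime_string()
                                        : "Something else was requested";
    boost::asio::async_write(socket_, boost::asio::buffer(outputBuffer_),
        boost::bind(&tcp_connection::handle_write, shared_from_this(),
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred));
}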

Related

How to implement an IPC protocol using Boost ASIO?

I'm trying to implement a simple IPC protocol for a project that will be built using Boost ASIO. The idea is to have the communication done over TCP/IP, with a server hosting the backend and a client that uses the data received from the server to build the frontend. The whole session would go like this:
The connection is established
The client sends a 2 byte packet with some information that will be used by the server to build its response (this is stored as the struct propertiesPacket)
The server processes the data received and stores the output in a struct of variable size called processedData
The server sends a 2 byte unsigned integer that tells the client the size of the struct it will receive (let's say the struct is of size n bytes)
The server sends the struct data as a n byte packet
The connection is ended
I tried implementing this by myself, following the great tutorial available in Boost ASIO's documentation, as well as the examples included in the library and some repos I found on GitHub, but as this is my first time working with networking and IPC, I couldn't make it work; my client throws an exception saying the connection was reset by the peer.
What I have right now is this:
// File client.cpp
int main(int argc, char *argv[])
{
try {
propertiesPacket properties;
// ...
// We set the data inside the properties struct
// ...
boost::asio::io_context io;
boost::asio::ip::tcp::socket socket(io);
boost::asio::ip::tcp::resolver resolver(io);
boost::asio::connect(socket, resolver.resolve(argv[1], argv[2]));
boost::asio::write(socket, boost::asio::buffer(&properties, sizeof(propertiesPacket)));
unsigned short responseSize {};
boost::asio::read(socket, boost::asio::buffer(&responseSize, sizeof(short)));
processedData* response = reinterpret_cast<processedData*>(malloc(responseSize));
boost::asio::read(socket, boost::asio::buffer(response, responseSize));
// ...
// The client handles the data
// ...
return 0;
} catch (std::exception &e) {
std::cerr << e.what() << std::endl;
}
}
// File server.cpp
class ServerConnection
: public std::enable_shared_from_this<ServerConnection>
{
public:
using TCPSocket = boost::asio::ip::tcp::socket;
ServerConnection::ServerConnection(TCPSocket socket)
: socket_(std::move(socket)),
properties_(nullptr),
filePacket_(nullptr),
filePacketSize_(0)
{
}
void start() { doRead(); }
private:
void doRead()
{
auto self(shared_from_this());
socket_.async_read_some(boost::asio::buffer(properties_, sizeof(propertiesPacket)),
[this, self](boost::system::error_code ec, std::size_t /*length*/)
{
if (!ec) {
processData();
doWrite(&filePacketSize_, sizeof(short));
doWrite(filePacket_, sizeof(*filePacket_));
}
});
}
void doWrite(void* data, size_t length)
{
auto self(shared_from_this());
boost::asio::async_write(socket_, boost::asio::buffer(data, length),
[this, self](boost::system::error_code ec, std::size_t /*length*/)
{
if (!ec) { doRead(); }
});
}
void processData()
{ /* Data is processed */ }
TCPSocket socket_;
propertiesPacket* properties_;
processedData* filePacket_;
short filePacketSize_;
};
class Server
{
public:
using IOContext = boost::asio::io_context;
using TCPSocket = boost::asio::ip::tcp::socket;
using TCPAcceptor = boost::asio::ip::tcp::acceptor;
Server::Server(IOContext& io, short port)
: socket_(io),
acceptor_(io, boost::asio::ip::tcp::endpoint(boost::asio::ip::tcp::v4(), port))
{
doAccept();
}
private:
void doAccept()
{
acceptor_.async_accept(socket_,
[this](boost::system::error_code ec)
{
if (!ec) {
std::make_shared<ServerConnection>(std::move(socket_))->start();
}
doAccept();
});
}
TCPSocket socket_;
TCPAcceptor acceptor_;
};
What did I do wrong? My guess is that inside the doRead function, calling multiple times the doWrite function, when that function then also calls doRead is in part what's causing problems, but I don't know what the correct way of writing data asynchronously multiple times is. But I'm also sure that isn't the only part of my code that isn't behaving as I think it should.
Besides the problems with the code shown that I mentioned in the comments, there is indeed the problem that you suspected:
My guess is that inside the doRead function, calling multiple times the doWrite function, when that function then also calls doRead is in part what's causing problems
The fact that "doRead" is in the same function isn't necessarily a problem (that's just full-duplex socket IO). However "calling multiple times" is. See the docs:
This operation is implemented in terms of zero or more calls to the stream's async_write_some function, and is known as a composed operation. The program must ensure that the stream performs no other write operations (such as async_write, the stream's async_write_some function, or any other composed operations that perform writes) until this operation completes.
The usual way is to put the whole message in a single buffer, but if that would be "expensive" to copy, you can use a BufferSequence, which is known as scatter/gather buffers.
Specifically, you would replace
doWrite(&filePacketSize_, sizeof(short));
doWrite(filePacket_, sizeof(*filePacket_));
with something like
std::vector<boost::asio::const_buffer> msg{
boost::asio::buffer(&filePacketSize_, sizeof(short)),
boost::asio::buffer(filePacket_, sizeof(*filePacket_)),
};
doWrite(msg);
Note that this assumes that filePacketSize and filePacket have been assigned proper values!
You could of course modify do_write to accept the buffer sequence:
template <typename Buffers> void doWrite(Buffers msg)
{
auto self(shared_from_this());
boost::asio::async_write(
socket_, msg,
[this, self](boost::system::error_code ec, std::size_t /*length*/) {
if (!ec) {
doRead();
}
});
}
But in your case I'd simplify by inlining the body (now that you don't call it more than once anyway).
SIDE NOTES
Don't use new or delete. NEVER use malloc in C++. Never use reinterpret_cast<> (except in the very rarest of exceptions that the standard allows!). Instead of
processedData* response = reinterpret_cast<processedData*>(malloc(responseSize));
Just use
processedData response;
(optionally add {} for value-initialization of aggregates). If you need variable-length messages, consider putting a vector or an array<char, MAXLEN> inside the message. Of course, array is fixed-length, but it preserves POD-ness, so it might be easier to work with. If you use vector, you'd want a scatter/gather read into a buffer sequence like I showed above for the write side.
Instead of reinterpreting between inconsistent short and unsigned short types, perhaps just spell the type with the standard sizes: std::uint16_t everywhere.
Keep in mind that you are not taking into account byte order so your protocol will NOT be portable across compilers/architectures.
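Pulling the last two notes together, a minimal sketch (the struct layout, field names and the 512-byte cap are assumptions for illustration): encode the length prefix into explicit network (big-endian) byte order before writing and decode it the same way after reading, so the protocol no longer depends on host endianness.

#include <array>
#include <cstdint>

// Fixed-size payload keeps the message a POD (512 is an arbitrary cap).
struct processedData {
    std::uint16_t payloadLength;    // number of meaningful bytes in payload
    std::array<char, 512> payload;
};

// Explicit big-endian (network order) encoding, independent of host endianness.
inline void put_u16(unsigned char* out, std::uint16_t v)
{
    out[0] = static_cast<unsigned char>(v >> 8);
    out[1] = static_cast<unsigned char>(v & 0xff);
}

inline std::uint16_t get_u16(const unsigned char* in)
{
    return static_cast<std::uint16_t>((in[0] << 8) | in[1]);
}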
Provisional Fixes
This is the listing I ended up with after reviewing the code you shared.
Live On Coliru
#include <boost/asio.hpp>
#include <iostream>
namespace ba = boost::asio;
using boost::asio::ip::tcp;
using boost::system::error_code;
using TCPSocket = tcp::socket;
struct processedData { };
struct propertiesPacket { };
// File server.cpp
class ServerConnection : public std::enable_shared_from_this<ServerConnection> {
public:
ServerConnection(TCPSocket socket) : socket_(std::move(socket))
{ }
void start() {
std::clog << __PRETTY_FUNCTION__ << std::endl;
doRead();
}
private:
void doRead()
{
std::clog << __PRETTY_FUNCTION__ << std::endl;
auto self(shared_from_this());
socket_.async_read_some(
ba::buffer(&properties_, sizeof(properties_)),
[this, self](error_code ec, std::size_t length) {
std::clog << "received: " << length << std::endl;
if (!ec) {
processData();
std::vector<ba::const_buffer> msg{
ba::buffer(&filePacketSize_, sizeof(uint16_t)),
ba::buffer(&filePacket_, filePacketSize_),
};
ba::async_write(socket_, msg,
[this, self = shared_from_this()](
error_code ec, std::size_t length) {
std::clog << " written: " << length
<< std::endl;
if (!ec) {
doRead();
}
});
}
});
}
void processData() {
std::clog << __PRETTY_FUNCTION__ << std::endl;
/* Data is processed */
}
TCPSocket socket_;
propertiesPacket properties_{};
processedData filePacket_{};
uint16_t filePacketSize_ = sizeof(filePacket_);
};
class Server
{
public:
using IOContext = ba::io_context;
using TCPAcceptor = tcp::acceptor;
Server(IOContext& io, uint16_t port)
: socket_(io)
, acceptor_(io, {tcp::v4(), port})
{
doAccept();
}
private:
void doAccept()
{
std::clog << __PRETTY_FUNCTION__ << std::endl;
acceptor_.async_accept(socket_, [this](error_code ec) {
if (!ec) {
std::clog << "Accepted " << socket_.remote_endpoint()
<< std::endl;
std::make_shared<ServerConnection>(std::move(socket_))->start();
doAccept();
} else {
std::clog << "Accept " << ec.message() << std::endl;
}
});
}
TCPSocket socket_;
TCPAcceptor acceptor_;
};
// File client.cpp
int main(int argc, char *argv[])
{
ba::io_context io;
Server s{io, 6869};
std::thread server_thread{[&io] {
io.run();
}};
// always check argc!
std::vector<std::string> args(argv, argv + argc);
if (args.size() == 1)
args = {"demo", "127.0.0.1", "6869"};
// avoid race with server accept thread
post(io, [&io, args] {
try {
propertiesPacket properties;
// ...
// We set the data inside the properties struct
// ...
tcp::socket socket(io);
tcp::resolver resolver(io);
connect(socket, resolver.resolve(args.at(1), args.at(2)));
write(socket, ba::buffer(&properties, sizeof(properties)));
uint16_t responseSize{};
ba::read(socket, ba::buffer(&responseSize, sizeof(uint16_t)));
std::clog << "Client responseSize: " << responseSize << std::endl;
processedData response{};
assert(responseSize <= sizeof(response));
ba::read(socket, ba::buffer(&response, responseSize));
// ...
// The client handles the data
// ...
// for online demo:
io.stop();
} catch (std::exception const& e) {
std::clog << e.what() << std::endl;
}
});
io.run_one();
server_thread.join();
}
Printing something similar to
void Server::doAccept()
Server::doAccept()::<lambda(boost::system::error_code)> Success
void ServerConnection::start()
void ServerConnection::doRead()
void Server::doAccept()
received: 1
void ServerConnection::processData()
written: 3
void ServerConnection::doRead()
Client responseSize: 1

HTTP proxy example in C++

So I've been trying to write a proxy in C++ using boost.asio. My initial project includes a client that writes a string message into a socket, a server that receives this message and writes a string message into a socket, and a proxy that works with the two mentioned sockets.
The proxy code looks like this (the future intention is to handle multiple connections and to use the transferred data somehow, with the callbacks performing some actual work other than logging):
#include "commondata.h"
#include <boost/bind.hpp>
#include <boost/enable_shared_from_this.hpp>
using namespace boost::asio;
using ip::tcp;
using std::cout;
using std::endl;
class con_handler : public boost::enable_shared_from_this<con_handler> {
private:
tcp::socket client_socket;
tcp::socket server_socket;
enum { max_length = 1024 };
char client_data[max_length];
char server_data[max_length];
public:
typedef boost::shared_ptr<con_handler> pointer;
con_handler(boost::asio::io_service& io_service):
server_socket(io_service),
client_socket(io_service) {
memset(client_data, 0, max_length);
memset(server_data, 0, max_length);
server_socket.connect( tcp::endpoint( boost::asio::ip::address::from_string(SERVERIP), SERVERPORT ));
}
// creating the pointer
static pointer create(boost::asio::io_service& io_service) {
return pointer(new con_handler(io_service));
}
//socket creation
tcp::socket& socket() {
return client_socket;
}
void start() {
//read the data into the input buffer
client_socket.async_read_some(
boost::asio::buffer(client_data, max_length),
boost::bind(&con_handler::handle_read,
shared_from_this(),
client_data,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
server_socket.async_write_some(
boost::asio::buffer(client_data, max_length),
boost::bind(&con_handler::handle_write,
shared_from_this(),
client_data,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
server_socket.async_read_some(
boost::asio::buffer(server_data, max_length),
boost::bind(&con_handler::handle_read,
shared_from_this(),
server_data,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
client_socket.async_write_some(
boost::asio::buffer(server_data, max_length),
boost::bind(&con_handler::handle_write,
shared_from_this(),
server_data,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
}
void handle_read(const char* data, const boost::system::error_code& err, size_t bytes_transferred) {
if (!err) {
cout << "proxy handle_read" << endl;
cout << data << endl;
} else {
std::cerr << "error: " << err.message() << std::endl;
client_socket.close();
}
}
void handle_write(const char* data, const boost::system::error_code& err, size_t bytes_transferred) {
if (!err) {
cout << "proxy handle_write" << endl;
cout << data << endl;
} else {
std::cerr << "error: " << err.message() << endl;
client_socket.close();
}
}
};
class Server {
private:
boost::asio::io_service io_service;
tcp::acceptor acceptor_;
void start_accept() {
// socket
con_handler::pointer connection = con_handler::create(io_service);
// asynchronous accept operation and wait for a new connection.
acceptor_.async_accept(connection->socket(),
boost::bind(&Server::handle_accept, this, connection,
boost::asio::placeholders::error));
}
public:
//constructor for accepting connection from client
Server()
: acceptor_(io_service, tcp::endpoint(tcp::v4(), PROXYPORT)) {
start_accept();
}
void handle_accept(const con_handler::pointer& connection, const boost::system::error_code& err) {
if (!err) {
connection->start();
}
start_accept();
}
boost::asio::io_service& get_io_service() {
return io_service;
}
};
int main(int argc, char *argv[]) {
try {
Server server;
server.get_io_service().run();
} catch(std::exception& e) {
std::cerr << e.what() << endl;
}
return 0;
}
If the messages sent are strings (which I used initially to test whether my code works at all), then all of the callbacks are called the way I wanted them to be called, and the thing seems to be working.
Here's the stdout of the proxy for that case:
user#laptop:$ ./proxy
proxy handle_read
message from the client
proxy handle_write
message from the client
proxy handle_read
message from server
proxy handle_write
message from server
So the client sends the "message from the client" string, which is received and saved by the proxy; the same string is sent to the server, the server sends back the "message from server" string, which is also received and saved by the proxy, and then it is sent to the client.
The problem appears when I try to use the actual web server (Apache) and an application like JMeter to talk to each other. This is the stdout for this case:
user#laptop:$ ./proxy
proxy handle_write
proxy handle_write
proxy handle_read
GET / HTTP/1.1
Connection: keep-alive
Host: 127.0.0.1:1337
User-Agent: Apache-HttpClient/4.5.5 (Java/11.0.8)
error: End of file
The JMeter test then fails with a timeout (that is when the proxy gets the EOF error), and no data seems to be sent to the Apache web server. My questions for now are why the callbacks are called in a different order compared to the case where plain string messages are sent, and why the data is not being transferred to the server socket. Thanks in advance for any help!
Abbreviating from start():
client_socket.async_read_some (buffer(client_data), ...);
server_socket.async_write_some (buffer(client_data), ...);
server_socket.async_read_some (buffer(server_data), ...);
client_socket.async_write_some (buffer(server_data), ...);
//read the data into the input
client_socket.async_read_some (buffer(client_data), ...);
server_socket.async_write_some (buffer(client_data), ...);
server_socket.async_read_some (buffer(server_data), ...);
client_socket.async_write_some (buffer(server_data), ...);
That's... not how async operations work. They run asynchronously, meaning that they will all immediately return.
You're simultaneously reading and writing from the same buffers, without waiting for valid data. Also, you're always writing the full buffer, regardless of how much was received.
All of this spells Undefined Behaviour.
Start simple
Conceptually you just want to read:
void start() {
//read the data into the input buffer
client_socket.async_read_some(
boost::asio::buffer(client_data, max_length),
boost::bind(&con_handler::handle_read,
shared_from_this(),
client_data,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
}
Now, once you received data, you might want to relay that:
void handle_read(const char* data, const boost::system::error_code& err, size_t bytes_transferred) {
if (!err) {
std::cout << "proxy handle_read" << std::endl;
server_socket.async_write_some(
boost::asio::buffer(client_data, bytes_transferred),
boost::bind(&con_handler::handle_write,
shared_from_this(),
client_data,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
} else {
std::cerr << "error: " << err.message() << std::endl;
client_socket.close();
}
}
Note that it seems a bit arbitrary to only close one side of the connection on errors. You probably at least want to cancel() any async operations on both, optionally shutdown() and then just let the shared_ptr destruct your con_handler.
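A hypothetical stop() along those lines (the member names are from the question's con_handler; the function itself is a sketch, not part of the original code):

void stop() {
    boost::system::error_code ignored;
    // abort any outstanding reads/writes on both sockets
    client_socket.cancel(ignored);
    server_socket.cancel(ignored);
    // optionally signal both peers that we are done
    client_socket.shutdown(tcp::socket::shutdown_both, ignored);
    server_socket.shutdown(tcp::socket::shutdown_both, ignored);
    // no explicit delete: once the cancelled handlers have run and released
    // their shared_from_this() copies, the shared_ptr destroys the con_handler
}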
Full Duplex
Now, for full-duplex operation you can indeed start the reverse relay at the same time. It gets a little unwieldy to maintain the call chains in separate methods (after all, you don't just switch the buffers, but also the socket pairs).
It might be instructive to realize that you're doing the same thing twice:
client -> [...buffer...] -> server
server -> [...buffer...] -> client
You can encapsulate each side in a class, and avoid duplicating all the code:
struct relay {
tcp::socket &from, &to;
std::array<char, max_length> buf{};
void run_relay(pointer self) {
from.async_read_some(asio::buffer(buf),
[this, self](error_code ec, size_t n) {
if (ec) return handle(from, ec);
/*
*std::cout
* << "From " << from.remote_endpoint()
* << ": " << std::quoted(std::string_view(buf.data(), n))
* << std::endl;
*/
async_write(to, asio::buffer(buf, n), [this, self](error_code ec, size_t) {
if (ec) return handle(to, ec);
run_relay(self);
});
});
}
void handle(tcp::socket& which, error_code ec = {}) {
if (ec == asio::error::eof) {
// soft "error" - allow write to complete
std::cout << "EOF on " << which.remote_endpoint() << std::endl;
which.shutdown(tcp::socket::shutdown_receive, ec);
}
if (ec) {
from.cancel();
to.cancel();
std::string reason = ec.message();
auto fep = from.remote_endpoint(ec),
tep = to.remote_endpoint(ec);
std::cout << "Stopped relay " << fep << " -> " << tep << " due to " << reason << std::endl;
}
}
} c_to_s {client_socket, server_socket, {0}},
s_to_c {server_socket, client_socket, {0}};
Note
we sidestepped the bind mess by using lambdas
we cancel both ends of the relay on error
we use a std::array buffer - safer and easier to use
we only write as many bytes as were received, regardless of the size of the buffer
we don't schedule another read until the write has completed to avoid clobbering the data in buf
Let's implement con_handler start again
Using the relay from just above:
void start() {
c_to_s.run_relay(shared_from_this());
s_to_c.run_relay(shared_from_this());
}
That's all. We pass ourselves so the con_handler stays alive until all operations complete.
DEMO Live On Coliru
#define PROXYPORT 8899
#define SERVERIP "173.203.57.63" // coliru IP at the time
#define SERVERPORT 80
#include <boost/enable_shared_from_this.hpp>
#include <boost/asio.hpp>
#include <iostream>
#include <iomanip>
namespace asio = boost::asio;
using boost::asio::ip::tcp;
using boost::system::error_code;
using namespace std::chrono_literals;
class con_handler : public boost::enable_shared_from_this<con_handler> {
public:
con_handler(asio::io_service& io_service):
server_socket(io_service),
client_socket(io_service)
{
server_socket.connect({ asio::ip::address::from_string(SERVERIP), SERVERPORT });
}
// creating the pointer
using pointer = boost::shared_ptr<con_handler>;
static pointer create(asio::io_service& io_service) {
return pointer(new con_handler(io_service));
}
//socket creation
tcp::socket& socket() {
return client_socket;
}
void start() {
c_to_s.run_relay(shared_from_this());
s_to_c.run_relay(shared_from_this());
}
private:
tcp::socket server_socket;
tcp::socket client_socket;
enum { max_length = 1024 };
struct relay {
tcp::socket &from, &to;
std::array<char, max_length> buf{};
void run_relay(pointer self) {
from.async_read_some(asio::buffer(buf),
[this, self](error_code ec, size_t n) {
if (ec) return handle(from, ec);
/*
*std::cout
* << "From " << from.remote_endpoint()
* << ": " << std::quoted(std::string_view(buf.data(), n))
* << std::endl;
*/
async_write(to, asio::buffer(buf, n), [this, self](error_code ec, size_t) {
if (ec) return handle(to, ec);
run_relay(self);
});
});
}
void handle(tcp::socket& which, error_code ec = {}) {
if (ec == asio::error::eof) {
// soft "error" - allow write to complete
std::cout << "EOF on " << which.remote_endpoint() << std::endl;
which.shutdown(tcp::socket::shutdown_receive, ec);
}
if (ec) {
from.cancel();
to.cancel();
std::string reason = ec.message();
auto fep = from.remote_endpoint(ec),
tep = to.remote_endpoint(ec);
std::cout << "Stopped relay " << fep << " -> " << tep << " due to " << reason << std::endl;
}
}
} c_to_s {client_socket, server_socket, {0}},
s_to_c {server_socket, client_socket, {0}};
};
class Server {
asio::io_service io_service;
tcp::acceptor acceptor_;
void start_accept() {
// socket
auto connection = con_handler::create(io_service);
// asynchronous accept operation and wait for a new connection.
acceptor_.async_accept(
connection->socket(),
[connection, this](error_code ec) {
if (!ec) connection->start();
start_accept();
});
}
public:
Server() : acceptor_(io_service, {{}, PROXYPORT}) {
start_accept();
}
void run() {
io_service.run_for(5s); // .run();
}
};
int main() {
Server().run();
}
When run with
printf "GET / HTTP/1.1\r\nHost: coliru.stacked-crooked.com\r\n\r\n" | nc 127.0.0.1 8899
The server prints:
EOF on 127.0.0.1:36452
And the netcat receives reply:
HTTP/1.1 200 OK
Content-Type: text/html;charset=utf-8
Content-Length: 8616
Server: WEBrick/1.4.2 (Ruby/2.5.1/2018-03-29) OpenSSL/1.0.2g
Date: Sat, 01 Aug 2020 00:25:10 GMT
Connection: Keep-Alive
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Frameset//EN">
<html>
....
</html>
Summary
Thinking clearly about what you are trying to achieve avoids accidental complexity. It allowed us to come up with a good building block (relay), and the complexity evaporated.

How can I get around this "Network connection aborted by local system" using Boost Asio with a shared pointer?

I'm trying to write a TCP client using several different examples using Asio from Boost 1.60. The connection works properly for probably 30 seconds or so, but disconnects with the error:
The network connection was aborted by the local system
I've attempted to set up a "ping/pong" setup to keep the connection alive but it still terminates. The only previous Stack Overflow answers I've found suggested using Boost's shared_from_this and a shared pointer, which I've adapted my code to use. But the problem persists.
Setting up the Connection object and its thread:
boost::asio::io_service ios;
boost::asio::ip::tcp::resolver res(ios);
boost::shared_ptr<Connection> conn = boost::shared_ptr<Connection>(new Connection(ios));
conn->Start(res.resolve(boost::asio::ip::tcp::resolver::query("myserver", "10635")));
boost::thread t(boost::bind(&boost::asio::io_service::run, &ios));
Here are the relevant portions of the Connection class (I made sure to use shared_from_this() everywhere else, too):
class Connection : public boost::enable_shared_from_this<Connection>
{
public:
Connection(boost::asio::io_service &io_service)
: stopped_(false),
socket_(io_service),
deadline_(io_service),
heartbeat_timer_(io_service)
{
}
void Start(tcp::resolver::iterator endpoint_iter)
{
start_connect(endpoint_iter);
deadline_.async_wait(boost::bind(&Connection::check_deadline, shared_from_this()));
}
private:
void start_read()
{
deadline_.expires_from_now(boost::posix_time::seconds(30));
boost::asio::async_read_until(socket_, input_buffer_, 0x1f,
boost::bind(&Connection::handle_read, shared_from_this(), _1));
}
void handle_read(const boost::system::error_code& ec)
{
if (stopped_)
return;
if (!ec)
{
std::string line;
std::istream is(&input_buffer_);
std::getline(is, line);
if (!line.empty())
{
std::cout << "Received: " << line << "\n";
}
start_read();
}
else
{
// THIS IS WHERE THE ERROR IS LOGGED
std::cout << "Error on receive: " << ec.message() << "\n";
Stop();
}
}
void check_deadline()
{
if (stopped_)
return;
if (deadline_.expires_at() <= deadline_timer::traits_type::now())
{
socket_.close();
deadline_.expires_at(boost::posix_time::pos_infin);
}
deadline_.async_wait(boost::bind(&Connection::check_deadline, shared_from_this()));
}
};
The issue turned out to be on the server's end. The server wasn't sending the "pong" response to the client's ping properly, so the async_read_until() call never finished and consequently never reset the deadline timer.
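For reference, the Connection class appears to be modelled on Asio's async TCP client timeout example; in that pattern the heartbeat side (not shown in the excerpt above) looks roughly like the sketch below. The one-byte payload and the 10-second interval are illustrative values:

void start_write()
{
    if (stopped_)
        return;
    // Send a one-byte heartbeat so the connection never sits idle.
    boost::asio::async_write(socket_, boost::asio::buffer("\n", 1),
        boost::bind(&Connection::handle_write, shared_from_this(), _1));
}

void handle_write(const boost::system::error_code& ec)
{
    if (stopped_)
        return;
    if (!ec)
    {
        // Schedule the next heartbeat.
        heartbeat_timer_.expires_from_now(boost::posix_time::seconds(10));
        heartbeat_timer_.async_wait(
            boost::bind(&Connection::start_write, shared_from_this()));
    }
    else
    {
        Stop();
    }
}

Whatever the interval, the server must answer (or at least tolerate) these pings; as noted above, the actual bug here was the missing "pong" on the server's end.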

boost asio async reading and writing to socket using queue

I am working on a simple TCP server that reads and writes its messages through thread-safe queues. The application can then use those queues to safely read from and write to the socket, even from different threads.
The problem I am facing is that I cannot async_read. My queue has a pop operation that returns the next element to be processed, but it blocks if nothing is available. So once I call pop, the async_read callback of course never fires. Is there a way I can integrate such a queue into boost asio, or do I have to completely rewrite?
Below is a short example I made to show the problem I am having. Once a TCP connection is established, I create a new thread that runs the application for that tcp_connection. Afterwards I want to start async_read and async_write. I have been breaking my head over this for a couple of hours and I really don't know how to solve it.
class tcp_connection : public std::enable_shared_from_this<tcp_connection>
{
public:
static std::shared_ptr<tcp_connection> create(boost::asio::io_service &io_service) {
return std::shared_ptr<tcp_connection>(new tcp_connection(io_service));
}
boost::asio::ip::tcp::socket& get_socket()
{
return this->socket;
}
void app_start()
{
while(1)
{
// Pop is a blocking call.
auto inbound_message = this->inbound_messages.pop();
std::cout << "Got message in app thread: " << inbound_message << ". Sending it back to client." << std::endl;
this->outbound_messages.push(inbound_message);
}
}
void start() {
this->app_thread = std::thread(&tcp_connection::app_start, shared_from_this());
boost::asio::async_read_until(this->socket, this->input_stream, "\r\n",
strand.wrap(boost::bind(&tcp_connection::handle_read, shared_from_this(), boost::asio::placeholders::error, boost::asio::placeholders::bytes_transferred)));
// Start async writing here. The message to send are in the outbound_message queue. But a Pop operation blocks
// empty() is also available to check whether the queue is empty.
// So how can I async write without blocking the read.
// block...
auto message = this->outbound_messages.pop();
boost::asio::async_write(this->socket, boost::asio::buffer(message),
strand.wrap(boost::bind(&tcp_connection::handle_write, shared_from_this(), boost::asio::placeholders::error, boost::asio::placeholders::bytes_transferred)));
}
void handle_read(const boost::system::error_code& e, size_t bytes_read)
{
std::cout << "handle_read called" << std::endl;
if (e)
{
std::cout << "Error handle_read: " << e.message() << std::endl;
return;
}
if (bytes_read != 0)
{
std::istream istream(&this->input_stream);
std::string message;
message.resize(bytes_read);
istream.read(&message[0], bytes_read);
std::cout << "Got message: " << message << std::endl;
this->inbound_messages.push(message);
}
boost::asio::async_read_until(this->socket, this->input_stream, "\r\n",
strand.wrap(boost::bind(&tcp_connection::handle_read, shared_from_this(), boost::asio::placeholders::error, boost::asio::placeholders::bytes_transferred)));
}
void handle_write(const boost::system::error_code& e, size_t /*bytes_transferred*/)
{
if (e)
{
std::cout << "Error handle_write: " << e.message() << std::endl;
return;
}
// block...
auto message = this->outbound_messages.pop();
boost::asio::async_write(this->socket, boost::asio::buffer(message),
strand.wrap(boost::bind(&tcp_connection::handle_write, shared_from_this(), boost::asio::placeholders::error, boost::asio::placeholders::bytes_transferred)));
}
private:
tcp_connection(boost::asio::io_service& io_service) : socket(io_service), strand(io_service)
{
}
boost::asio::ip::tcp::socket socket;
boost::asio::strand strand;
boost::asio::streambuf input_stream;
std::thread app_thread;
concurrent_queue<std::string> inbound_messages;
concurrent_queue<std::string> outbound_messages;
};
class tcp_server
{
public:
tcp_server(boost::asio::io_service& io_service)
: acceptor(io_service, boost::asio::ip::tcp::endpoint(boost::asio::ip::tcp::v4(), 9001))
{
start_accept();
}
private:
void start_accept()
{
std::shared_ptr<tcp_connection> new_connection =
tcp_connection::create(acceptor.get_io_service());
acceptor.async_accept(new_connection->get_socket(),
boost::bind(&tcp_server::handle_accept, this, new_connection, boost::asio::placeholders::error));
}
void handle_accept(std::shared_ptr<tcp_connection> new_connection,
const boost::system::error_code& error)
{
if (!error)
{
new_connection->start();
}
start_accept();
}
boost::asio::ip::tcp::acceptor acceptor;
};
It looks to me as if you want an async_pop method which takes an error placeholder and a callback handler. When you receive a message, check whether there is an outstanding handler; if so, pop the message, deregister the handler and call it. Similarly, when registering the async_pop, if there is already a message waiting, pop the message and post a call to the handler without registering it.
You might want to derive the async_pop operation class from a polymorphic base of type pop_operation or similar.
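A sketch of what such a queue could look like (the interface and names are illustrative; push may be called from any thread, and completions are posted to the io_service so they run like ordinary asio handlers):

#include <boost/asio.hpp>
#include <deque>
#include <functional>
#include <mutex>

template <typename T>
class async_queue
{
public:
    explicit async_queue(boost::asio::io_service& ios) : ios_(ios) {}

    void push(T value)
    {
        std::lock_guard<std::mutex> lock(mutex_);
        if (!handlers_.empty())
        {
            // a pop was already registered: deregister it and complete it now
            auto handler = std::move(handlers_.front());
            handlers_.pop_front();
            ios_.post(std::bind(std::move(handler), std::move(value)));
        }
        else
        {
            items_.push_back(std::move(value));
        }
    }

    // Handler signature: void(T). If a message is already waiting, complete
    // immediately (still via post); otherwise register the handler.
    template <typename Handler>
    void async_pop(Handler handler)
    {
        std::lock_guard<std::mutex> lock(mutex_);
        if (!items_.empty())
        {
            ios_.post(std::bind(std::move(handler), std::move(items_.front())));
            items_.pop_front();
        }
        else
        {
            handlers_.push_back(std::move(handler));
        }
    }

private:
    boost::asio::io_service& ios_;
    std::mutex mutex_;
    std::deque<T> items_;
    std::deque<std::function<void(T)>> handlers_;
};

With this in place, the write path becomes: async_pop a message, async_write it, and call async_pop again from handle_write - nothing ever blocks an io_service thread.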

How to prevent ASIO based server from terminating

I have been reading some Boost ASIO tutorials. So far, my understanding is that the entire send and receive is a loop that can be iterated only once. Please have a look at the following simple code:
client.cpp:
#include <boost/asio.hpp>
#include <boost/array.hpp>
#include <iostream>
#include <string>
boost::asio::io_service io_service;
boost::asio::ip::tcp::resolver resolver(io_service);
boost::asio::ip::tcp::socket sock(io_service);
boost::array<char, 4096> buffer;
void read_handler(const boost::system::error_code &ec, std::size_t bytes_transferred)
{
if (!ec)
{
std::cout << std::string(buffer.data(), bytes_transferred) << std::endl;
sock.async_read_some(boost::asio::buffer(buffer), read_handler);
}
}
void connect_handler(const boost::system::error_code &ec)
{
if (!ec)
{
sock.async_read_some(boost::asio::buffer(buffer), read_handler);
}
}
void resolve_handler(const boost::system::error_code &ec, boost::asio::ip::tcp::resolver::iterator it)
{
if (!ec)
{
sock.async_connect(*it, connect_handler);
}
}
int main()
{
boost::asio::ip::tcp::resolver::query query("localhost", "2013");
resolver.async_resolve(query, resolve_handler);
io_service.run();
}
The program resolves an address, connects to the server, reads the data, and finally ends when there is no more data.
My question: how can I continue this loop? That is, how can I keep the connection between client and server open for the entire lifetime of my application, so that the server sends data whenever it has something to send?
I tried to break this cycle, but everything seems trapped inside io_service.run().
The same question holds for my server:
server.cpp :
#include <boost/asio.hpp>
#include <string>
boost::asio::io_service io_service;
boost::asio::ip::tcp::endpoint endpoint(boost::asio::ip::tcp::v4(), 2013);
boost::asio::ip::tcp::acceptor acceptor(io_service, endpoint);
boost::asio::ip::tcp::socket sock(io_service);
std::string data = "Hello, world!";
void write_handler(const boost::system::error_code &ec, std::size_t bytes_transferred)
{
}
void accept_handler(const boost::system::error_code &ec)
{
if (!ec)
{
boost::asio::async_write(sock, boost::asio::buffer(data), write_handler);
}
}
int main()
{
acceptor.listen();
acceptor.async_accept(sock, accept_handler);
io_service.run();
}
This is just an example. In a real application, I would like to keep the socket open and reuse it for other data exchanges (both read and write). How can I do that?
I value your kind comments. If you have references to easy solutions addressing this issue, I'd appreciate you mentioning them.
Thank you
Update (server sample code)
Based on the answer given below (update 2), I wrote the server code. Please note that the code is simplified (it compiles and runs, though). Also note that io_service.run will never return, because the acceptor is always waiting for a new connection; that is how it keeps running forever. Whenever you want io_service.run to return, just make the acceptor stop accepting. (Seriously though, how do we do that in a clean way? :) One option is sketched right below.)
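One clean option might be to post a close to the io_service: the pending async_accept then completes with boost::asio::error::operation_aborted and, once no other work remains, run() returns on its own. A sketch against the globals below (handle_accept would also need to check its error code before re-arming the accept):

void stop_accepting()
{
    io_service.post([] {
        boost::system::error_code ignored;
        acceptor.close(ignored);   // cancels the outstanding async_accept
    });
}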
enjoy:
#include <boost/asio.hpp>
#include <boost/thread.hpp>
#include <string>
#include <iostream>
#include <vector>
#include <time.h>
boost::asio::io_service io_service;
boost::asio::ip::tcp::endpoint endpoint(boost::asio::ip::tcp::v4(), 2013);
boost::asio::ip::tcp::acceptor acceptor(io_service, endpoint);
//boost::asio::ip::tcp::socket sock(io_service);
std::string data = "Hello, world!";
class Observer;
std::vector<Observer*> observers;
class Observer
{
public:
Observer(boost::asio::ip::tcp::socket *socket_):socket_obs(socket_){}
void notify(std::string data)
{
std::cout << "notify called data[" << data << "]" << std::endl;
boost::asio::async_write(*socket_obs, boost::asio::buffer(data) , boost::bind(&Observer::write_handler, this,boost::asio::placeholders::error));
}
void write_handler(const boost::system::error_code &ec)
{
if (!ec) //no error: done, just wait for the next notification
return;
socket_obs->close(); //client will get error and exit its read_handler
observers.erase(std::find(observers.begin(), observers.end(),this));
std::cout << "Observer::write_handler returns as nothing was written" << std::endl;
}
private:
boost::asio::ip::tcp::socket *socket_obs;
};
class server
{
public:
void CreatSocketAndAccept()
{
socket_ = new boost::asio::ip::tcp::socket(io_service);
observers.push_back(new Observer(socket_));
acceptor.async_accept(*socket_,boost::bind(&server::handle_accept, this,boost::asio::placeholders::error));
}
server(boost::asio::io_service& io_service)
{
acceptor.listen();
CreatSocketAndAccept();
}
void handle_accept(const boost::system::error_code& e)
{
CreatSocketAndAccept();
}
private:
boost::asio::ip::tcp::socket *socket_;
};
class Agent
{
public:
void update(std::string data)
{
if(!observers.empty())
{
// std::cout << "calling notify data[" << data << "]" << std::endl;
observers[0]->notify(data);
}
}
};
Agent agent;
void AgentSim()
{
int i = 0;
sleep(10);//wait for me to start client
while(i++ < 10)
{
std::ostringstream out("");
out << data << i ;
// std::cout << "calling update data[" << out.str() << "]" << std::endl;
agent.update(out.str());
sleep(1);
}
}
void run()
{
io_service.run();
std::cout << "io_service returned" << std::endl;
}
int main()
{
server server_(io_service);
boost::thread thread_1(AgentSim);
boost::thread thread_2(run);
thread_2.join();
thread_1.join();
}
You can simplify the logic of asio-based programs as follows: each function that calls an async_X function provides a handler. This is a bit like transitions between states of a state machine, where the handlers are the states and the async calls are the transitions between them. Just exiting a handler without calling an async_* function is like a transition to an end state. Everything the program "does" (sending data, receiving data, connecting sockets etc.) occurs during the transitions.
If you see it that way, your client looks like this (only "good path", i.e. without errors):
<start> --(resolve)----> resolve_handler
resolve_handler --(connect)----> connect_handler
connect_handler --(read data)--> read_handler
read_handler --(read data)--> read_handler
Your server looks like this:
<start> --(accept)-----> accept handler
accept_handler --(write data)-> write_handler
write_handler ---------------> <end>
Since your write_handler does not do anything, it makes a transition to the end state, meaning io_service::run returns. The question now is: what do you want to do after the data has been written to the socket? Depending on that, you will have to define a corresponding transition, i.e. an async call that does what you want to do.
Update:
From your comment I see you want to wait for the next data to be ready, i.e. for the next tick. The transitions then look like this:
write_handler --(wait for tick/data)--> dataready
dataready --(write data)----------> write_handler
You see, this introduces a new state (handler); I called it dataready, but you could just as well call it tick_handler or something else. The transition back to the write_handler is easy:
void dataready()
{
// get the new data...
async_write(sock, buffer(data), write_handler);
}
The transition from the write_handler can be a simple async_wait on some timer. If the data come from "outside" and you don't know exactly when they will be ready, wait for some time, check if the data are there, and if they are not, wait some more time:
write_handler --(wait some time)--> checkForData
checkForData:no --(wait some time)--> checkForData
checkForData:yes --(write data)------> write_handler
or, in (pseudo)code:
void write_handler(const error_code &ec, size_t bytes_transferred)
{
//...
async_wait(ticklength, checkForData);
}
void checkForData(/*insert wait handler signature here*/)
{
if (dataIsReady())
{
async_write(sock, buffer(data), write_handler);
}
else
{
async_wait(shortTime, checkForData):
}
}
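For completeness, a concrete version of that pseudocode using a deadline_timer (io_service, sock and data are the globals from the question; dataIsReady() and the 100 ms poll interval are assumptions):

boost::asio::deadline_timer tick_timer(io_service);

void write_handler(const boost::system::error_code& ec, std::size_t bytes_transferred);

void checkForData(const boost::system::error_code& ec)
{
    if (ec)
        return; // timer was cancelled or the io_service is shutting down
    if (dataIsReady())
    {
        boost::asio::async_write(sock, boost::asio::buffer(data), write_handler);
    }
    else
    {
        // not there yet: poll again a little later
        tick_timer.expires_from_now(boost::posix_time::milliseconds(100));
        tick_timer.async_wait(checkForData);
    }
}

void write_handler(const boost::system::error_code& ec, std::size_t /*bytes_transferred*/)
{
    if (ec)
        return;
    tick_timer.expires_from_now(boost::posix_time::milliseconds(100));
    tick_timer.async_wait(checkForData);
}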
Update 2:
According to your comment, you already have an agent that does the time management somehow (calling update every tick). Here's how I would solve that:
Let the agent have a list of observers that get notified when there is new data in an update call.
Let each observer handle one client connection (socket).
Let the server just wait for incoming connections, create observers from them and register them with the Agent.
I am not very firm in the exact syntax of ASIO, so this will be handwavy pseudocode:
Server:
void Server::accept_handler()
{
obs = new Observer(socket);
agent.register(obs);
new socket; //observer takes care of the old one
async_accept(..., accept_handler);
}
Agent:
void Agent::update()
{
if (newDataAvailable())
{
for (auto& obs : observers)
{
obs->notify(data);
}
}
}
Observer:
void Observer::notify(data)
{
async_write(sock, data, write_handler);
}
void Observer::write_handler(error_code ec, ...)
{
if (!ec) //no error: done, just wait for the next notification
return;
//on error: close the connection and unregister
agent.unregister(this);
socket.close(); //client will get error and exit its read_handler
}