How to implement an IPC protocol using Boost ASIO?

I'm trying to implement a simple IPC protocol for a project that will be built using Boost ASIO. The idea is to have the communication be done through TCP/IP, with a server hosting the backend and a client that uses the data received from the server to build the frontend. The whole session would go like this:
1. The connection is established.
2. The client sends a 2-byte packet with some information that the server will use to build its response (this is stored as the struct propertiesPacket).
3. The server processes the data received and stores the output in a struct of variable size called processedData.
4. The server sends a 2-byte unsigned integer that tells the client the size of the struct it is about to receive (say the struct is n bytes).
5. The server sends the struct data as an n-byte packet.
6. The connection is closed.
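For reference, here is a minimal sketch of what the two structs could look like; the field names are pure assumptions, since the real definitions aren't shown in this post:

#include <cstdint>

#pragma pack(push, 1)
struct propertiesPacket {       // exactly 2 bytes on the wire (hypothetical fields)
    std::uint8_t requestKind;
    std::uint8_t flags;
};
#pragma pack(pop)

struct processedData {          // variable size; a 2-byte length prefix is sent first
    // payload fields follow here...
};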
I tried implementing this by myself, following the great tutorial available in Boost ASIO's documentation, as well as the examples included in the library and some repos I found on GitHub. But as this is my first time working with networking and IPC, I couldn't make it work: my client throws an exception saying the connection was reset by the peer.
What I have right now is this:
// File client.cpp
int main(int argc, char *argv[])
{
    try {
        propertiesPacket properties;
        // ...
        // We set the data inside the properties struct
        // ...
        boost::asio::io_context io;
        boost::asio::ip::tcp::socket socket(io);
        boost::asio::ip::tcp::resolver resolver(io);
        boost::asio::connect(socket, resolver.resolve(argv[1], argv[2]));
        boost::asio::write(socket, boost::asio::buffer(&properties, sizeof(propertiesPacket)));

        unsigned short responseSize {};
        boost::asio::read(socket, boost::asio::buffer(&responseSize, sizeof(short)));

        processedData* response = reinterpret_cast<processedData*>(malloc(responseSize));
        boost::asio::read(socket, boost::asio::buffer(response, responseSize));
        // ...
        // The client handles the data
        // ...
        return 0;
    } catch (std::exception &e) {
        std::cerr << e.what() << std::endl;
    }
}
// File server.cpp
class ServerConnection
    : public std::enable_shared_from_this<ServerConnection>
{
public:
    using TCPSocket = boost::asio::ip::tcp::socket;

    ServerConnection(TCPSocket socket)
        : socket_(std::move(socket)),
          properties_(nullptr),
          filePacket_(nullptr),
          filePacketSize_(0)
    {
    }

    void start() { doRead(); }

private:
    void doRead()
    {
        auto self(shared_from_this());
        socket_.async_read_some(boost::asio::buffer(properties_, sizeof(propertiesPacket)),
            [this, self](boost::system::error_code ec, std::size_t /*length*/)
            {
                if (!ec) {
                    processData();
                    doWrite(&filePacketSize_, sizeof(short));
                    doWrite(filePacket_, sizeof(*filePacket_));
                }
            });
    }

    void doWrite(void* data, size_t length)
    {
        auto self(shared_from_this());
        boost::asio::async_write(socket_, boost::asio::buffer(data, length),
            [this, self](boost::system::error_code ec, std::size_t /*length*/)
            {
                if (!ec) { doRead(); }
            });
    }

    void processData()
    { /* Data is processed */ }

    TCPSocket socket_;
    propertiesPacket* properties_;
    processedData* filePacket_;
    short filePacketSize_;
};

class Server
{
public:
    using IOContext = boost::asio::io_context;
    using TCPSocket = boost::asio::ip::tcp::socket;
    using TCPAcceptor = boost::asio::ip::tcp::acceptor;

    Server(IOContext& io, short port)
        : socket_(io),
          acceptor_(io, boost::asio::ip::tcp::endpoint(boost::asio::ip::tcp::v4(), port))
    {
        doAccept();
    }

private:
    void doAccept()
    {
        acceptor_.async_accept(socket_,
            [this](boost::system::error_code ec)
            {
                if (!ec) {
                    std::make_shared<ServerConnection>(std::move(socket_))->start();
                }
                doAccept();
            });
    }

    TCPSocket socket_;
    TCPAcceptor acceptor_;
};
What did I do wrong? My guess is that calling doWrite multiple times inside the doRead function, when doWrite then also calls doRead, is part of what's causing problems, but I don't know what the correct way of writing data asynchronously multiple times is. But I'm also sure that isn't the only part of my code that isn't behaving as I think it should.

Besides the problems with the code shown that I mentioned in the comments, there is indeed the problem that you suspected:
My guess is that calling doWrite multiple times inside the doRead function, when doWrite then also calls doRead, is part of what's causing problems
The fact that "doRead" is in the same function isn't necessarily a problem (that's just full-duplex socket IO). However "calling multiple times" is. See the docs:
This operation is implemented in terms of zero or more calls to the stream's async_write_some function, and is known as a composed operation. The program must ensure that the stream performs no other write operations (such as async_write, the stream's async_write_some function, or any other composed operations that perform writes) until this operation completes.
The usual way is to put the whole message in a single buffer, but if that would be "expensive" to copy, you can use a BufferSequence, which is known as scatter/gather buffers.
Specifically, you would replace
doWrite(&filePacketSize_, sizeof(short));
doWrite(filePacket_, sizeof(*filePacket_));
with something like
std::vector<boost::asio::const_buffer> msg{
    boost::asio::buffer(&filePacketSize_, sizeof(short)),
    boost::asio::buffer(filePacket_, sizeof(*filePacket_)),
};
doWrite(msg);
Note that this assumes that filePacketSize_ and filePacket_ have been assigned proper values!
You could of course modify do_write to accept the buffer sequence:
template <typename Buffers> void doWrite(Buffers msg)
{
    auto self(shared_from_this());
    boost::asio::async_write(
        socket_, msg,
        [this, self](boost::system::error_code ec, std::size_t /*length*/) {
            if (!ec) {
                doRead();
            }
        });
}
But in your case I'd simplify by inlining the body (now that you don't call it more than once anyway).
SIDE NOTES
Don't use new or delete. NEVER use malloc in C++. Never use reinterpret_cast<> (except in the very rarest of exceptions that the standard allows!). Instead of
processedData* response = reinterpret_cast<processedData*>(malloc(responseSize));
Just use
processedData response;
(optionally add {} for value-initialization of aggregates). If you need variable-length messages, consider putting a vector or an array<char, MAXLEN> inside the message. Of course, array is fixed-length, but it preserves POD-ness, so it might be easier to work with. If you use vector, you'd want a scatter/gather read into a buffer sequence like I showed above for the write side.
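A minimal sketch of that read side, assuming the array variant and a hypothetical MAXLEN:

#include <array>
#include <cassert>
#include <cstdint>

constexpr std::size_t MAXLEN = 1024; // assumption; whatever your protocol allows

struct processedData {
    std::array<char, MAXLEN> payload{}; // fixed capacity, stays trivially copyable
};

// client side: read the 2-byte length prefix, then exactly that many payload bytes
std::uint16_t n{};
boost::asio::read(socket, boost::asio::buffer(&n, sizeof n));
assert(n <= MAXLEN);
processedData response;
boost::asio::read(socket, boost::asio::buffer(response.payload.data(), n));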
Instead of reinterpreting between inconsistent short and unsigned short types, perhaps just spell the type with a standard fixed-width alias: std::uint16_t everywhere.
Keep in mind that you are not taking byte order into account, so your protocol will NOT be portable across compilers/architectures.
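One way to address that is to convert the length prefix to a fixed byte order explicitly. A sketch using Boost.Endian (which ships with Boost, so no new dependency):

#include <boost/endian/conversion.hpp>
#include <cstdint>

// sender: convert to big-endian ("network order") before writing the 2 bytes
std::uint16_t size = 42; // payload size in bytes
std::uint16_t wire = boost::endian::native_to_big(size);

// receiver: convert back after reading the 2 bytes
std::uint16_t host = boost::endian::big_to_native(wire);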
Provisional Fixes
This is the listing I ended up with after reviewing the code you shared.
Live On Coliru
#include <boost/asio.hpp>
#include <cassert>
#include <iostream>
#include <thread>

namespace ba = boost::asio;
using boost::asio::ip::tcp;
using boost::system::error_code;
using TCPSocket = tcp::socket;

struct processedData { };
struct propertiesPacket { };

// File server.cpp
class ServerConnection : public std::enable_shared_from_this<ServerConnection> {
public:
    ServerConnection(TCPSocket socket) : socket_(std::move(socket))
    { }

    void start() {
        std::clog << __PRETTY_FUNCTION__ << std::endl;
        doRead();
    }

private:
    void doRead()
    {
        std::clog << __PRETTY_FUNCTION__ << std::endl;
        auto self(shared_from_this());
        socket_.async_read_some(
            ba::buffer(&properties_, sizeof(properties_)),
            [this, self](error_code ec, std::size_t length) {
                std::clog << "received: " << length << std::endl;
                if (!ec) {
                    processData();
                    std::vector<ba::const_buffer> msg{
                        ba::buffer(&filePacketSize_, sizeof(uint16_t)),
                        ba::buffer(&filePacket_, filePacketSize_),
                    };
                    ba::async_write(socket_, msg,
                        [this, self = shared_from_this()](
                            error_code ec, std::size_t length) {
                            std::clog << " written: " << length
                                      << std::endl;
                            if (!ec) {
                                doRead();
                            }
                        });
                }
            });
    }

    void processData() {
        std::clog << __PRETTY_FUNCTION__ << std::endl;
        /* Data is processed */
    }

    TCPSocket socket_;
    propertiesPacket properties_{};
    processedData filePacket_{};
    uint16_t filePacketSize_ = sizeof(filePacket_);
};

class Server
{
public:
    using IOContext = ba::io_context;
    using TCPAcceptor = tcp::acceptor;

    Server(IOContext& io, uint16_t port)
        : socket_(io)
        , acceptor_(io, {tcp::v4(), port})
    {
        doAccept();
    }

private:
    void doAccept()
    {
        std::clog << __PRETTY_FUNCTION__ << std::endl;
        acceptor_.async_accept(socket_, [this](error_code ec) {
            if (!ec) {
                std::clog << "Accepted " << socket_.remote_endpoint()
                          << std::endl;
                std::make_shared<ServerConnection>(std::move(socket_))->start();
                doAccept();
            } else {
                std::clog << "Accept " << ec.message() << std::endl;
            }
        });
    }

    TCPSocket socket_;
    TCPAcceptor acceptor_;
};

// File client.cpp
int main(int argc, char *argv[])
{
    ba::io_context io;
    Server s{io, 6869};
    std::thread server_thread{[&io] {
        io.run();
    }};

    // always check argc!
    std::vector<std::string> args(argv, argv + argc);
    if (args.size() == 1)
        args = {"demo", "127.0.0.1", "6869"};

    // avoid race with server accept thread
    post(io, [&io, args] {
        try {
            propertiesPacket properties;
            // ...
            // We set the data inside the properties struct
            // ...
            tcp::socket socket(io);
            tcp::resolver resolver(io);
            connect(socket, resolver.resolve(args.at(1), args.at(2)));
            write(socket, ba::buffer(&properties, sizeof(properties)));

            uint16_t responseSize{};
            ba::read(socket, ba::buffer(&responseSize, sizeof(uint16_t)));
            std::clog << "Client responseSize: " << responseSize << std::endl;

            processedData response{};
            assert(responseSize <= sizeof(response));
            ba::read(socket, ba::buffer(&response, responseSize));
            // ...
            // The client handles the data
            // ...

            // for online demo:
            io.stop();
        } catch (std::exception const& e) {
            std::clog << e.what() << std::endl;
        }
    });

    io.run_one();
    server_thread.join();
}
Printing something similar to
void Server::doAccept()
Server::doAccept()::<lambda(boost::system::error_code)> Success
void ServerConnection::start()
void ServerConnection::doRead()
void Server::doAccept()
received: 1
void ServerConnection::processData()
written: 3
void ServerConnection::doRead()
Client responseSize: 1

Related

HTTP proxy example in C++

So I've been trying to write a proxy in C++ using boost.asio. My initial project includes a client that writes a string message into a socket, a server that receives this message and writes a string message into a socket, and a proxy that works with the two mentioned sockets.
The proxy code looks like this (the future intention is to handle multiple connections, to use the transferred data somehow, and to have the callbacks perform some actual work other than logging):
#include "commondata.h"
#include <boost/bind.hpp>
#include <boost/enable_shared_from_this.hpp>
using namespace boost::asio;
using ip::tcp;
using std::cout;
using std::endl;
class con_handler : public boost::enable_shared_from_this<con_handler> {
private:
tcp::socket client_socket;
tcp::socket server_socket;
enum { max_length = 1024 };
char client_data[max_length];
char server_data[max_length];
public:
typedef boost::shared_ptr<con_handler> pointer;
con_handler(boost::asio::io_service& io_service):
server_socket(io_service),
client_socket(io_service) {
memset(client_data, 0, max_length);
memset(server_data, 0, max_length);
server_socket.connect( tcp::endpoint( boost::asio::ip::address::from_string(SERVERIP), SERVERPORT ));
}
// creating the pointer
static pointer create(boost::asio::io_service& io_service) {
return pointer(new con_handler(io_service));
}
//socket creation
tcp::socket& socket() {
return client_socket;
}
void start() {
//read the data into the input buffer
client_socket.async_read_some(
boost::asio::buffer(client_data, max_length),
boost::bind(&con_handler::handle_read,
shared_from_this(),
client_data,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
server_socket.async_write_some(
boost::asio::buffer(client_data, max_length),
boost::bind(&con_handler::handle_write,
shared_from_this(),
client_data,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
server_socket.async_read_some(
boost::asio::buffer(server_data, max_length),
boost::bind(&con_handler::handle_read,
shared_from_this(),
server_data,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
client_socket.async_write_some(
boost::asio::buffer(server_data, max_length),
boost::bind(&con_handler::handle_write,
shared_from_this(),
server_data,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
}
void handle_read(const char* data, const boost::system::error_code& err, size_t bytes_transferred) {
if (!err) {
cout << "proxy handle_read" << endl;
cout << data << endl;
} else {
std::cerr << "error: " << err.message() << std::endl;
client_socket.close();
}
}
void handle_write(const char* data, const boost::system::error_code& err, size_t bytes_transferred) {
if (!err) {
cout << "proxy handle_write" << endl;
cout << data << endl;
} else {
std::cerr << "error: " << err.message() << endl;
client_socket.close();
}
}
};
class Server {
private:
boost::asio::io_service io_service;
tcp::acceptor acceptor_;
void start_accept() {
// socket
con_handler::pointer connection = con_handler::create(io_service);
// asynchronous accept operation and wait for a new connection.
acceptor_.async_accept(connection->socket(),
boost::bind(&Server::handle_accept, this, connection,
boost::asio::placeholders::error));
}
public:
//constructor for accepting connection from client
Server()
: acceptor_(io_service, tcp::endpoint(tcp::v4(), PROXYPORT)) {
start_accept();
}
void handle_accept(const con_handler::pointer& connection, const boost::system::error_code& err) {
if (!err) {
connection->start();
}
start_accept();
}
boost::asio::io_service& get_io_service() {
return io_service;
}
};
int main(int argc, char *argv[]) {
try {
Server server;
server.get_io_service().run();
} catch(std::exception& e) {
std::cerr << e.what() << endl;
}
return 0;
}
If the messages sent are strings (which I've used initially to test if my code works at all), then all of the callbacks are called the way I wanted them to be called, and the thing seems to be working.
Here's the stdout of the proxy for that case:
user#laptop:$ ./proxy
proxy handle_read
message from the client
proxy handle_write
message from the client
proxy handle_read
message from server
proxy handle_write
message from server
So the client sends the "message from the client" string, which is received and saved by the proxy, the same string is sent to the server, then the server sends back the "message from server" string, which is also received and saved by the proxy and then is sent to the client.
The problem appears when I try to use the actual web server (Apache) and an application like JMeter to talk to each other. This is the stdout for this case:
user#laptop:$ ./proxy
proxy handle_write
proxy handle_write
proxy handle_read
GET / HTTP/1.1
Connection: keep-alive
Host: 127.0.0.1:1337
User-Agent: Apache-HttpClient/4.5.5 (Java/11.0.8)
error: End of file
The JMeter test then fails with a timeout (that is when the proxy gets the EOF error), and no data seems to be sent to the Apache web server. The questions that I have for now are why the callbacks are called in a different order compared to the case when the string messages are sent, and why the data is not being transferred to the server socket. Thanks in advance for any help!
Abbreviating from start():
client_socket.async_read_some (buffer(client_data), ...);
server_socket.async_write_some (buffer(client_data), ...);
server_socket.async_read_some (buffer(server_data), ...);
client_socket.async_write_some (buffer(server_data), ...);
That's... not how async operations work. They run asynchronously, meaning that they will all immediately return.
You're simultaneously reading and writing from some buffers, without waiting for valid data. Also, you're writing the full buffer always, regardless of how much was received.
All of this spells Undefined Behaviour.
Start simple
Conceptually you just want to read:
void start() {
    //read the data into the input buffer
    client_socket.async_read_some(
        boost::asio::buffer(client_data, max_length),
        boost::bind(&con_handler::handle_read,
            shared_from_this(),
            client_data,
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred));
}
Now, once you received data, you might want to relay that:
void handle_read(const char* data, const boost::system::error_code& err, size_t bytes_transferred) {
    if (!err) {
        std::cout << "proxy handle_read" << std::endl;
        server_socket.async_write_some(
            boost::asio::buffer(client_data, bytes_transferred),
            boost::bind(&con_handler::handle_write,
                shared_from_this(),
                client_data,
                boost::asio::placeholders::error,
                boost::asio::placeholders::bytes_transferred));
    } else {
        std::cerr << "error: " << err.message() << std::endl;
        client_socket.close();
    }
}
Note that it seems a bit arbitrary to only close one side of the connection on errors. You probably at least want to cancel() any async operations on both, optionally shutdown(), and then just let the shared_ptr destruct your con_handler.
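A sketch of what such a symmetric teardown could look like, using the member names from the question:

void stop() {
    boost::system::error_code ec;
    client_socket.cancel(ec); // abort any outstanding async operations
    server_socket.cancel(ec);
    client_socket.shutdown(tcp::socket::shutdown_both, ec);
    server_socket.shutdown(tcp::socket::shutdown_both, ec);
    // once the cancelled handlers have run, the last shared_ptr copy
    // goes away and the con_handler destructs both sockets
}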
Full Duplex
Now, for full-duplex operation you can indeed start the reverse relay at the same time. It gets a little unwieldy to maintain the call chains in separate methods (after all, you don't just switch the buffers, but also the socket pairs).
It might be instructive to realize that you're doing the same thing twice:
client -> [...buffer...] -> server
server -> [...buffer...] -> client
You can encapsulate each side in a class, and avoid duplicating all the code:
struct relay {
    tcp::socket &from, &to;
    std::array<char, max_length> buf{};

    void run_relay(pointer self) {
        from.async_read_some(asio::buffer(buf),
            [this, self](error_code ec, size_t n) {
                if (ec) return handle(from, ec);
                /*
                 *std::cout
                 *    << "From " << from.remote_endpoint()
                 *    << ": " << std::quoted(std::string_view(buf.data(), n))
                 *    << std::endl;
                 */
                async_write(to, asio::buffer(buf, n), [this, self](error_code ec, size_t) {
                    if (ec) return handle(to, ec);
                    run_relay(self);
                });
            });
    }

    void handle(tcp::socket& which, error_code ec = {}) {
        if (ec == asio::error::eof) {
            // soft "error" - allow write to complete
            std::cout << "EOF on " << which.remote_endpoint() << std::endl;
            which.shutdown(tcp::socket::shutdown_receive, ec);
        }
        if (ec) {
            from.cancel();
            to.cancel();
            std::string reason = ec.message();
            auto fep = from.remote_endpoint(ec),
                 tep = to.remote_endpoint(ec);
            std::cout << "Stopped relay " << fep << " -> " << tep << " due to " << reason << std::endl;
        }
    }
} c_to_s {client_socket, server_socket, {0}},
  s_to_c {server_socket, client_socket, {0}};
Note
we sidestepped the bind mess by using lambdas
we cancel both ends of the relay on error
we use a std::array buffer - more safe and easier to use
we only write as many bytes as were received, regardless of the size of the buffer
we don't schedule another read until the write has completed to avoid clobbering the data in buf
Let's implement con_handler start again
Using the relay from just above:
void start() {
    c_to_s.run_relay(shared_from_this());
    s_to_c.run_relay(shared_from_this());
}
That's all. We pass ourselves so the con_handler stays alive until all operations complete.
DEMO Live On Coliru
#define PROXYPORT 8899
#define SERVERIP "173.203.57.63" // coliru IP at the time
#define SERVERPORT 80
#include <boost/enable_shared_from_this.hpp>
#include <boost/asio.hpp>
#include <array>
#include <iostream>
#include <iomanip>

namespace asio = boost::asio;
using boost::asio::ip::tcp;
using boost::system::error_code;
using namespace std::chrono_literals;

class con_handler : public boost::enable_shared_from_this<con_handler> {
public:
    con_handler(asio::io_service& io_service):
        server_socket(io_service),
        client_socket(io_service)
    {
        server_socket.connect({ asio::ip::address::from_string(SERVERIP), SERVERPORT });
    }

    // creating the pointer
    using pointer = boost::shared_ptr<con_handler>;
    static pointer create(asio::io_service& io_service) {
        return pointer(new con_handler(io_service));
    }

    //socket creation
    tcp::socket& socket() {
        return client_socket;
    }

    void start() {
        c_to_s.run_relay(shared_from_this());
        s_to_c.run_relay(shared_from_this());
    }

private:
    tcp::socket server_socket;
    tcp::socket client_socket;
    enum { max_length = 1024 };

    struct relay {
        tcp::socket &from, &to;
        std::array<char, max_length> buf{};

        void run_relay(pointer self) {
            from.async_read_some(asio::buffer(buf),
                [this, self](error_code ec, size_t n) {
                    if (ec) return handle(from, ec);
                    /*
                     *std::cout
                     *    << "From " << from.remote_endpoint()
                     *    << ": " << std::quoted(std::string_view(buf.data(), n))
                     *    << std::endl;
                     */
                    async_write(to, asio::buffer(buf, n), [this, self](error_code ec, size_t) {
                        if (ec) return handle(to, ec);
                        run_relay(self);
                    });
                });
        }

        void handle(tcp::socket& which, error_code ec = {}) {
            if (ec == asio::error::eof) {
                // soft "error" - allow write to complete
                std::cout << "EOF on " << which.remote_endpoint() << std::endl;
                which.shutdown(tcp::socket::shutdown_receive, ec);
            }
            if (ec) {
                from.cancel();
                to.cancel();
                std::string reason = ec.message();
                auto fep = from.remote_endpoint(ec),
                     tep = to.remote_endpoint(ec);
                std::cout << "Stopped relay " << fep << " -> " << tep << " due to " << reason << std::endl;
            }
        }
    } c_to_s {client_socket, server_socket, {0}},
      s_to_c {server_socket, client_socket, {0}};
};

class Server {
    asio::io_service io_service;
    tcp::acceptor acceptor_;

    void start_accept() {
        // socket
        auto connection = con_handler::create(io_service);
        // asynchronous accept operation and wait for a new connection.
        acceptor_.async_accept(
            connection->socket(),
            [connection, this](error_code ec) {
                if (!ec) connection->start();
                start_accept();
            });
    }

public:
    Server() : acceptor_(io_service, {{}, PROXYPORT}) {
        start_accept();
    }

    void run() {
        io_service.run_for(5s); // .run();
    }
};

int main() {
    Server().run();
}
When run with
printf "GET / HTTP/1.1\r\nHost: coliru.stacked-crooked.com\r\n\r\n" | nc 127.0.0.1 8899
The server prints:
EOF on 127.0.0.1:36452
And the netcat receives reply:
HTTP/1.1 200 OK
Content-Type: text/html;charset=utf-8
Content-Length: 8616
Server: WEBrick/1.4.2 (Ruby/2.5.1/2018-03-29) OpenSSL/1.0.2g
Date: Sat, 01 Aug 2020 00:25:10 GMT
Connection: Keep-Alive
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Frameset//EN">
<html>
....
</html>
Summary
Thinking clearly about what you are trying to achieve avoids accidental complexity. It allowed us to come up with a good building block (relay) that makes the complexity evaporate.

io_context.run() for boost beast server

I have a RESTServer.hpp implemented using boost.beast as shown below.
#pragma once
#include <boost/property_tree/json_parser.hpp>
#include <boost/beast/core.hpp>
#include <boost/beast/http.hpp>
#include <boost/beast/version.hpp>
#include <boost/asio.hpp>
#include <chrono>
#include <cstdlib>
#include <ctime>
#include <iostream>
#include <memory>
#include <string>

namespace beast = boost::beast;
namespace http = beast::http;
namespace net = boost::asio;
using tcp = boost::asio::ip::tcp;

class RESTServer : public std::enable_shared_from_this<RESTServer> {
public:
    RESTServer(tcp::socket socket)
        : m_socket(std::move(socket)) {
    }

    void start() {
        readRequest();
        checkDeadline();
    }

private:
    tcp::socket m_socket;
    beast::flat_buffer m_buffer{8192};
    http::request<http::dynamic_body> m_request;
    http::response<http::dynamic_body> m_response;
    net::steady_timer m_deadline{m_socket.get_executor(), std::chrono::seconds(60)};

    void readRequest() {
        auto self = shared_from_this();
        http::async_read(m_socket, m_buffer, m_request, [self](beast::error_code ec, std::size_t bytes_transferred) {
            boost::ignore_unused(bytes_transferred);
            if (!ec) {
                self->processRequest();
            }
        });
    }

    void processRequest() {
        m_response.version(m_request.version());
        m_response.keep_alive(false);
        switch (m_request.method()) {
        case http::verb::get:
            m_response.result(http::status::ok);
            m_response.set(http::field::server, "Beast");
            createResponse();
            break;
        case http::verb::post:
            m_response.result(http::status::ok);
            m_response.set(http::field::server, "Beast");
            createResponse();
            break;
        default:
            m_response.result(http::status::bad_request);
            m_response.set(http::field::content_type, "text/plain");
            beast::ostream(m_response.body())
                << "Invalid request-method '"
                << std::string(m_request.method_string())
                << "'";
            break;
        }
        writeResponse();
    }

    void createResponse() {
        if (m_request.target() == "/count") {
            m_response.set(http::field::content_type, "text/html");
            beast::ostream(m_response.body())
                << "<html>\n"
                << "<head><title>Request count</title></head>\n"
                << "<body>\n"
                << "<h1>Request count</h1>\n"
                << "<p>There have been "
                << my_program_state::request_count()
                << " requests so far.</p>\n"
                << "</body>\n"
                << "</html>\n";
        } else if (m_request.target() == "/time") {
            m_response.set(http::field::content_type, "text/html");
            beast::ostream(m_response.body())
                << "<html>\n"
                << "<head><title>Current time</title></head>\n"
                << "<body>\n"
                << "<h1>Current time</h1>\n"
                << "<p>The current time is "
                << my_program_state::now()
                << " seconds since the epoch.</p>\n"
                << "</body>\n"
                << "</html>\n";
        } else {
            m_response.result(http::status::not_found);
            m_response.set(http::field::content_type, "text/plain");
            beast::ostream(m_response.body()) << "File not found\r\n";
        }
    }

    void writeResponse() {
        auto self = shared_from_this();
        m_response.content_length(m_response.body().size());
        http::async_write(m_socket, m_response,
            [self](beast::error_code ec, std::size_t) {
                self->m_socket.shutdown(tcp::socket::shutdown_send, ec);
                self->m_deadline.cancel();
            });
    }

    void checkDeadline() {
        auto self = shared_from_this();
        m_deadline.async_wait([self](beast::error_code ec) {
            if (!ec) {
                self->m_socket.close(ec);
            }
        });
    }
};

void httpServer(tcp::acceptor& acceptor, tcp::socket& socket) {
    acceptor.async_accept(socket, [&](beast::error_code ec) {
        if (!ec) {
            std::make_shared<RESTServer>(std::move(socket))->start();
        }
        httpServer(acceptor, socket);
    });
}
I also have a RESTClient (RESTClient.hpp and RESTClient.cpp) as shown below.
RESTClient.hpp
#pragma once
#include <boost/beast/core.hpp>
#include <boost/beast/http.hpp>
#include <boost/beast/version.hpp>
#include <boost/asio/strand.hpp>
#include <boost/property_tree/json_parser.hpp>
#include <boost/property_tree/ptree.hpp>

namespace beast = boost::beast;
namespace http = beast::http;
namespace net = boost::asio;
using tcp = boost::asio::ip::tcp;

// Performs an HTTP GET and prints the response
class RESTClient : public std::enable_shared_from_this<RESTClient> {
public:
    explicit RESTClient(net::io_context& ioc);
    virtual ~RESTClient();

    virtual void run(char const* host, char const* port, char const* target, int version);
    virtual void onResolve(beast::error_code ec, tcp::resolver::results_type results);
    virtual void onConnect(beast::error_code ec, tcp::resolver::results_type::endpoint_type);
    virtual void onWrite(beast::error_code ec, std::size_t bytes_transferred);
    virtual void onRead(beast::error_code ec, std::size_t bytes_transferred);

private:
    void createGetRequest(char const* host, char const* target, int version);
    void createPostRequest(char const* host, char const* target, int version, char const* body);
    std::string createBody();

    tcp::resolver m_resolver;
    beast::tcp_stream m_stream;
    beast::flat_buffer m_buffer; // (Must persist between reads)
    http::request<http::string_body> m_httpRequest;
    http::response<http::string_body> m_httpResponse;
};
RESTClient.cpp
#include <boost/beast/core.hpp>
#include <boost/beast/http.hpp>
#include <boost/beast/version.hpp>
#include <boost/asio/strand.hpp>
#include <boost/lexical_cast.hpp>
#include <cstdlib>
#include <cstring>
#include <iostream>
#include <memory>
#include <sstream>
#include <string>
#include "RESTClient.hpp"

namespace beast = boost::beast;
namespace http = beast::http;
namespace net = boost::asio;
using tcp = boost::asio::ip::tcp;

void fail(beast::error_code ec, char const* what) {
    std::cerr << what << ": " << ec.message() << "\n";
}

RESTClient::RESTClient(net::io_context& ioc)
    : m_resolver(net::make_strand(ioc)), m_stream(net::make_strand(ioc)) {
}

RESTClient::~RESTClient() = default;

void RESTClient::run(char const* host, char const* port, char const* target, int version) {
    createPostRequest(host, target, version, createBody().c_str());
    m_resolver.async_resolve(host, port, beast::bind_front_handler(
        &RESTClient::onResolve,
        shared_from_this()));
}

void RESTClient::onResolve(beast::error_code ec, tcp::resolver::results_type results) {
    if (ec) {
        return fail(ec, "resolve");
    }
    std::cout << "onResolve ******" << std::endl;
    m_stream.expires_after(std::chrono::seconds(30));
    m_stream.async_connect(results, beast::bind_front_handler(
        &RESTClient::onConnect,
        shared_from_this()));
}

void RESTClient::onConnect(beast::error_code ec, tcp::resolver::results_type::endpoint_type) {
    if (ec) {
        return fail(ec, "connect");
    }
    std::cout << "onConnect ******" << std::endl;
    m_stream.expires_after(std::chrono::seconds(30));
    http::async_write(m_stream, m_httpRequest,
        beast::bind_front_handler(
            &RESTClient::onWrite,
            shared_from_this()));
}

void RESTClient::onWrite(beast::error_code ec, std::size_t bytes_transferred) {
    boost::ignore_unused(bytes_transferred);
    if (ec) {
        return fail(ec, "write");
    }
    std::cout << "onWrite ******" << std::endl;
    http::async_read(m_stream, m_buffer, m_httpResponse, beast::bind_front_handler(
        &RESTClient::onRead,
        shared_from_this()));
}

void RESTClient::onRead(beast::error_code ec, std::size_t bytes_transferred) {
    boost::ignore_unused(bytes_transferred);
    if (ec) {
        return fail(ec, "read");
    }
    std::cout << "onRead ******" << std::endl;
    std::cout << m_httpResponse << std::endl;
    m_stream.socket().shutdown(tcp::socket::shutdown_both, ec);
    if (ec && ec != beast::errc::not_connected) {
        return fail(ec, "shutdown");
    }
}

void RESTClient::createGetRequest(char const* host, char const* target, int version) {
    m_httpRequest.version(version);
    m_httpRequest.method(http::verb::get);
    m_httpRequest.target(target);
    m_httpRequest.set(http::field::host, host);
    m_httpRequest.set(http::field::user_agent, BOOST_BEAST_VERSION_STRING);
}

void RESTClient::createPostRequest(char const* host, char const* target, int version, char const* body) {
    m_httpRequest.version(version);
    m_httpRequest.method(http::verb::post);
    m_httpRequest.target(target);
    m_httpRequest.set(http::field::host, host);
    m_httpRequest.set(http::field::user_agent, BOOST_BEAST_VERSION_STRING);
    m_httpRequest.set(http::field::content_length, boost::lexical_cast<std::string>(strlen(body)));
    m_httpRequest.body() = body;     // the payload belongs in the body, not in a header field
    m_httpRequest.prepare_payload(); // recomputes Content-Length from the body
}

std::string RESTClient::createBody() {
    boost::property_tree::ptree tree;
    boost::property_tree::read_json("test.json", tree);
    std::basic_stringstream<char> jsonStream;
    boost::property_tree::json_parser::write_json(jsonStream, tree, false);
    std::cout << "json stream :" << jsonStream.str() << std::endl;
    return jsonStream.str();
}

int main(int argc, char** argv) {
    // Check command line arguments.
    if (argc != 4 && argc != 5) {
        std::cerr <<
            "Usage: http-client-async <host> <port> <target> [<HTTP version: 1.0 or 1.1(default)>]\n" <<
            "Example:\n" <<
            "    http-client-async www.example.com 80 /\n" <<
            "    http-client-async www.example.com 80 / 1.0\n";
        return EXIT_FAILURE;
    }
    auto const host = argv[1];
    auto const port = argv[2];
    auto const target = argv[3];
    int version = argc == 5 && !std::strcmp("1.0", argv[4]) ? 10 : 11;

    // The io_context is required for all I/O
    net::io_context ioc;
    std::cout << "version: " << version << std::endl;

    // Launch the asynchronous operation
    std::make_shared<RESTClient>(ioc)->run(host, port, target, version);

    // Run the I/O service. The call will return when
    // the get operation is complete.
    ioc.run();
    return EXIT_SUCCESS;
}
Now I want to test my RESTClient using googletest. In the unit test, I want to use the RESTServer to simulate the response to the client. My test class is shown below.
class MyTest : public ::testing::Test {
    virtual void SetUp() {
        httpServer(m_acceptor, m_socket);
    }

    net::ip::address m_address = net::ip::make_address("0.0.0.0");
    unsigned short m_port = static_cast<unsigned short>(8080);
    net::io_context m_ioc{1};
    tcp::acceptor m_acceptor{m_ioc, {m_address, m_port}};
    tcp::socket m_socket{m_ioc};
};
My question is the following.
When I implement the class MyTest, I need to pass an io_context to both the httpServer and the RESTClient. Should the same io_context be passed to both client and server, or should the io_contexts be different? Can someone throw some light on this and also explain the reason? I would like to understand what an io_context actually means.
Should the same io_context be passed to both Client and Server? Or should the io_contexts be different?
Can someone throw some light on this and also explain the reason? I would like to understand what an io_context actually means.
That is really up to you: an io_context provides the context in which asynchronous calls such as async_resolve and async_write run. Think of io_context::run as your event loop.
Your typical steps involve:
creating the io_context
providing the io_context with some work to do (i.e. your async_resolve, async_connect, async_read and async_write calls, async_wait on timers, etc.)
calling run in some thread
The run call blocks until the io_context runs out of work, unless you provide it with a work object.
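As a sketch of that last point, assuming a reasonably recent Boost (1.66+), the work-guard idiom looks roughly like this:

#include <boost/asio.hpp>
#include <thread>

int main() {
    boost::asio::io_context io;
    // the guard keeps run() from returning while nothing is queued yet
    auto work = boost::asio::make_work_guard(io);
    std::thread t([&io] { io.run(); });

    // ... post asynchronous work to io over time ...

    work.reset(); // let run() exit once all outstanding work completes
    t.join();
}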
Other points:
You should note that typically your asynchronous handlers add more work to the io_context, which keeps run from simply running out of work and exiting.
Whether to create an explicit work object or not depends on your specific application design. Personally I prefer to be in control of every asynchronous operation executed, and to be responsible for a "clean" shutdown, i.e. cancelling all outstanding work and letting all started operations finish cleanly. It is also possible to simply stop the io_context, but that might be careless. You would need a work object if run could otherwise return before you have given the io_context the work you want it to do. Typically, if you write a server, you already have a listening socket and therefore already have work, so there's no need to add a work object. Similarly, you might have a periodic timer, etc.
Other threading models are also possible using the run_one call. You would typically use this when you have some other event loop/library running.
More advanced threading models and scaling work across multiple threads can be accomplished using this sample by the asio author.
Coming back to your question: keep in mind that you still need to provide an execution thread to the io_context in order to execute the IO calls. So one io_context is simpler to manage (one context + one blocking call to run). It's probably also more efficient to have one context, as you avoid unnecessary thread context switching.
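For the test in question, a single shared io_context could look roughly like this. This is a sketch only: the test name and target are made up, and the fixture members shown above would need to be protected rather than private for the test body to reach them:

#include <chrono>

TEST_F(MyTest, ClientReceivesSimulatedResponse) {
    // one io_context drives both the accepting server and the client under test
    std::make_shared<RESTClient>(m_ioc)->run("127.0.0.1", "8080", "/count", 11);
    // the accept loop in httpServer always re-arms itself, so bound the run
    // instead of waiting for the context to run out of work:
    m_ioc.run_for(std::chrono::seconds(1));
}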

async_send data not sent

[disclaimer] I am new to boost.
Looking into boost::asio and tried to create a simple asynchronous TCP server with the following functionality:
Listen for connections on port 13
When connected, receive data
If data received == time, then return current datetime, else return a predefined string ("Something else was requested")
The problem:
Although I accept the connection and receive the data, when transmitting data using async_send I receive empty data on the client side, even though I get no error and the value of bytes_transferred is correct.
If I try to transmit the data from within handle_accept (instead of handle_read), this works fine.
The implementation:
I worked on the boost asio tutorial found here:
Instantiate a tcp_server object that basically initializes the acceptor and starts listening, as shown below:
int main()
{
    try
    {
        boost::asio::io_service io_service;
        tcp_server server(io_service);
        io_service.run();
    }
    catch (std::exception& e)
    {
        std::cerr << e.what() << std::endl;
    }
    return 0;
}
and in tcp_server:
class tcp_server
{
public:
    tcp_server(boost::asio::io_service& io_service)
        : acceptor_(io_service, tcp::endpoint(tcp::v4(), 13))
    {
        start_accept();
    }

private:
    void start_accept()
    {
        using std::cout;
        tcp_connection::pointer new_connection =
            tcp_connection::create(acceptor_.get_io_service());
        acceptor_.async_accept(new_connection->socket(),
            boost::bind(&tcp_server::handle_accept, this, new_connection,
                boost::asio::placeholders::error));
        cout << "Done";
    }
    ...
}
Once a connection is accepted, I am handling it as shown below:
void handle_accept(tcp_connection::pointer new_connection,
    const boost::system::error_code& error)
{
    if (!error)
    {
        new_connection->start();
    }
    start_accept();
}
Below is the tcp_connection::start() method:
void start()
{
    boost::asio::async_read(socket_, boost::asio::buffer(inputBuffer_),
        boost::bind(&tcp_connection::handle_read, shared_from_this(),
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred));
    /* the snippet below works here - but not in handle_read
    outputBuffer_ = make_daytime_string();
    boost::asio::async_write(socket_, boost::asio::buffer(outputBuffer_),
        boost::bind(&tcp_connection::handle_write, shared_from_this(),
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred));*/
}
and in handle_read:
void handle_read(const boost::system::error_code& error, size_t bytes_transferred)
{
    outputBuffer_ = make_daytime_string();
    if (strcmp(inputBuffer_, "time"))
    {
        /*this does not work - correct bytes_transferred but nothing shown on receiving end */
        boost::asio::async_write(socket_, boost::asio::buffer(outputBuffer_),
            boost::bind(&tcp_connection::handle_write, shared_from_this(),
                boost::asio::placeholders::error,
                boost::asio::placeholders::bytes_transferred));
    }
    else
    {
        outputBuffer_ = "Something else was requested";//, 128);
        boost::asio::async_write(socket_, boost::asio::buffer(outputBuffer_),
            boost::bind(&tcp_connection::handle_write, shared_from_this(),
                boost::asio::placeholders::error,
                boost::asio::placeholders::bytes_transferred));
    }
}
The handle_write is shown below:
void handle_write(const boost::system::error_code& error,
    size_t bytes_transferred)
{
    if (!error)
    {
        std::cout << "Bytes transferred: " << bytes_transferred;
        std::cout << "Message sent: " << outputBuffer_;
    }
    else
    {
        std::cout << "Error in writing: " << error.message();
    }
}
Note the following regarding handle_write (and this is the really strange thing):
There is no error
The bytes_transferred variable has the correct value
outputBuffer_ has the correct value (as set in handle_read)
Nevertheless, the packet received at the client side (Packet Sender) is empty (as far as data is concerned).
The complete code is shared here.
Complete test program (c++14). Note the handling of asynchronous buffering when responding to a receive - there may be a send already in progress.
#include <boost/asio.hpp>
#include <algorithm>
#include <array>
#include <cassert>
#include <condition_variable>
#include <future>
#include <iostream>
#include <iterator>
#include <memory>
#include <mutex>
#include <thread>
#include <vector>

namespace asio = boost::asio;

asio::io_service server_service;
asio::io_service::work server_work{server_service};
bool listening = false;
std::condition_variable cv_listening;
std::mutex management_mutex;
auto const shared_query = asio::ip::tcp::resolver::query(asio::ip::tcp::v4(), "localhost", "8082");

void client()
try
{
    asio::io_service client_service;
    asio::ip::tcp::socket socket(client_service);

    auto lock = std::unique_lock<std::mutex>(management_mutex);
    cv_listening.wait(lock, [] { return listening; });
    lock.unlock();

    asio::ip::tcp::resolver resolver(client_service);
    asio::connect(socket, resolver.resolve(shared_query));

    auto s = std::string("time\ntime\ntime\n");
    asio::write(socket, asio::buffer(s));
    socket.shutdown(asio::ip::tcp::socket::shutdown_send);

    asio::streambuf sb;
    boost::system::error_code sink;
    asio::read(socket, sb, sink);
    std::cout << std::addressof(sb);

    socket.close();
    server_service.stop();
}
catch(const boost::system::system_error& se)
{
    std::cerr << "client: " << se.code().message() << std::endl;
}

struct connection
    : std::enable_shared_from_this<connection>
{
    connection(asio::io_service& ios)
        : strand_(ios)
    {
    }

    void run()
    {
        asio::async_read_until(socket_, buffer_, "\n",
            strand_.wrap([self = shared_from_this()](auto const& ec, auto size)
            {
                if (size == 0)
                {
                    // error condition
                    boost::system::error_code sink;
                    self->socket_.shutdown(asio::ip::tcp::socket::shutdown_receive, sink);
                }
                else {
                    self->buffer_.commit(size);
                    std::istream is(std::addressof(self->buffer_));
                    std::string str;
                    while (std::getline(is, str))
                    {
                        if (str == "time") {
                            self->queue_send("eight o clock");
                        }
                    }
                    self->run();
                }
            }));
    }

    void queue_send(std::string s)
    {
        assert(strand_.running_in_this_thread());
        s += '\n';
        send_buffers_pending_.push_back(std::move(s));
        nudge_send();
    }

    void nudge_send()
    {
        assert(strand_.running_in_this_thread());
        if (send_buffers_sending_.empty() and not send_buffers_pending_.empty())
        {
            std::swap(send_buffers_pending_, send_buffers_sending_);
            std::vector<asio::const_buffers_1> send_buffers;
            send_buffers.reserve(send_buffers_sending_.size());
            std::transform(send_buffers_sending_.begin(), send_buffers_sending_.end(),
                std::back_inserter(send_buffers),
                [](auto&& str) {
                    return asio::buffer(str);
                });
            asio::async_write(socket_, send_buffers,
                strand_.wrap([self = shared_from_this()](auto const& ec, auto size)
                {
                    // should check for errors here...
                    self->send_buffers_sending_.clear();
                    self->nudge_send();
                }));
        }
    }

    asio::io_service::strand strand_;
    asio::ip::tcp::socket socket_{strand_.get_io_service()};
    asio::streambuf buffer_;
    std::vector<std::string> send_buffers_pending_;
    std::vector<std::string> send_buffers_sending_;
};

void begin_accepting(asio::ip::tcp::acceptor& acceptor)
{
    auto candidate = std::make_shared<connection>(acceptor.get_io_service());
    acceptor.async_accept(candidate->socket_, [candidate, &acceptor](auto const& ec)
    {
        if (not ec) {
            candidate->run();
            begin_accepting(acceptor);
        }
    });
}

void server()
try
{
    asio::ip::tcp::acceptor acceptor(server_service);
    asio::ip::tcp::resolver resolver(server_service);
    auto first = resolver.resolve(shared_query);
    acceptor.open(first->endpoint().protocol());
    acceptor.bind(first->endpoint());
    acceptor.listen();
    begin_accepting(acceptor);

    auto lock = std::unique_lock<std::mutex>(management_mutex);
    listening = true;
    lock.unlock();
    cv_listening.notify_all();

    server_service.run();
}
catch(const boost::system::system_error& se)
{
    std::cerr << "server: " << se.code().message() << std::endl;
}

int main()
{
    using future_type = std::future<void>;
    auto stuff = std::array<future_type, 2> {{std::async(std::launch::async, client),
                                              std::async(std::launch::async, server)}};
    for (auto& f : stuff) f.wait();
}
There are multiple issues in this code. Some of them may be responsible for your problem:
TCP has no notion of packets, so there's no guarantee that you will ever receive time at once in handle_read. You need a state machine for that and to respect the bytes_transferred info. If you have only received a part of the message, you need to continue at the correct offset. Or you can use asio utility functions, like reading exactly a length of bytes or reading a line (see the sketch after this list).
In addition to the last point, you shouldn't really compare the received data with strcmp. That will only work if the remote also sends a null terminator over the connection - does it?
You don't check whether an error happened, although that might manifest itself in other errors.
You are possibly issuing multiple concurrent async writes if you receive multiple data fragments in a short timespan. This is not valid in asio.
More important, you mutate the send buffer (outputBuffer_) while the send is in progress. This will pretty much lead to undefined behavior. asio might try to write a piece of memory which is no longer valid.
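For example, a fixed-length read can be expressed with the composed boost::asio::async_read, which only completes once the whole buffer is filled (a sketch reusing the names from the question):

// completes only after exactly 4 bytes ("time") have arrived, or on error
boost::asio::async_read(socket_,
    boost::asio::buffer(inputBuffer_, 4),
    boost::bind(&tcp_connection::handle_read, shared_from_this(),
        boost::asio::placeholders::error,
        boost::asio::placeholders::bytes_transferred));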
I have solved the problem with the collective help of the comments provided in the question. The behavior that I was experiencing was because of the functionality of async_read. More specifically, in the boost asio documentation it reads:
This function is used to asynchronously read a certain number of bytes
of data from a stream. The function call always returns immediately.
The asynchronous operation will continue until one of the following
conditions is true:
The supplied buffers are full. That is, the bytes transferred is equal to the sum of the buffer sizes.
An error occurred.
The inputBuffer_ I was using to read the input was a 128-char array. The client I was using would only transfer the real data (without padding), and therefore the async_read would not return until the connection was closed by the client (or 128 bytes of data were transferred). When the connection was closed by the client, there was no way to send back the requested data. This is also the reason that it was working with @Arunmu's simple python tcp client (because he was always sending 128 bytes of data).
To fix the issues, I made the following changes (the full working code is supplied here for reference):
In tcp_connection::start: I am now using async_read_until to read the incoming data (and use \n as a delimiter). The input is stored in a boost::asio::streambuf. async_read_until is guaranteed to return once the delimiter has been found, or an error has occurred. So there is no chance of issuing multiple async_write calls concurrently.
In handle_read: I have included error checking, which made it much simpler to debug.
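A sketch of what those changes look like (handler body abbreviated; names follow the question's code):

boost::asio::streambuf inputBuffer_; // replaces the 128-char array

void start()
{
    boost::asio::async_read_until(socket_, inputBuffer_, '\n',
        boost::bind(&tcp_connection::handle_read, shared_from_this(),
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred));
}

void handle_read(const boost::system::error_code& error, size_t bytes_transferred)
{
    if (error) return; // log as needed
    std::istream is(&inputBuffer_);
    std::string line;
    std::getline(is, line); // consumes up to and including the '\n'
    // ... compare 'line' against "time" and write the response as before
}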

Permission refused when connecting to domain socket created by Boost.Asio

I'm trying to create a server that receives connections via domain sockets. I can start the server and I can see the socket being created on the filesystem. But whenever I try to connect to it via socat I get the following error:
2015/03/02 14:00:10 socat[62720] E connect(3, LEN=19 AF=1 "/var/tmp/rpc.sock", 19): Connection refused
This is my Asio code (only the .cpp files). Despite the post title I'm using the Boost-free version of Asio but I don't think that would be a problem.
namespace myapp {

DomainListener::DomainListener(const string& addr) : socket{this->service}, Listener{addr} {
    remove(this->address.c_str());
    stream_protocol::endpoint ep(this->address);
    stream_protocol::acceptor acceptor(this->service, ep);
    acceptor.async_accept(this->socket, ep, bind(&DomainListener::accept_callback, this, _1));
}

DomainListener::~DomainListener() {
    this->service.stop();
    remove(this->address.c_str());
}

void DomainListener::accept_callback(const error_code& ec) noexcept {
    this->socket.async_read_some(asio::buffer(this->data), bind(&DomainListener::read_data, this, _1, _2));
}

void DomainListener::read_data(const error_code& ec, size_t length) noexcept {
    //std::cerr << "AAA" << std::endl;
    //std::cerr << this->data[0] << std::endl;
    //std::cerr << "BBB" << std::endl;
}

}

Listener::Listener(const string& addr) : work{asio::io_service::work(this->service)} {
    this->address = addr;
}

void Listener::listen() {
    this->service.run();
}

Listener::~Listener() {
}
In the code that uses these classes I call listen() whenever I want to start listening to the socket for connections.
I've managed to get this to work with libuv and changed to Asio because I thought it would make for more readable code but I'm finding the documentation to be very ambiguous.
The issue is most likely the lifetime of the acceptor.
The acceptor is an automatic variable in the DomainListener constructor. When the DomainListener constructor completes, the acceptor is destroyed, causing the acceptor to close and cancel outstanding operations, such as the async_accept operations. Cancelled operations will be provided an error code of asio::error::operation_aborted and scheduled for deferred invocation within the io_service. Hence, there may not be an active listener when attempting to connect to the domain socket. For more details on the effects of IO object destruction, see this answer.
DomainListener::DomainListener(const string&) : /* ... */
{
    // ...
    stream_protocol::acceptor acceptor(...);
    acceptor.async_accept(..., bind(accept_callback, ...));
} // acceptor destroyed, and accept_callback likely cancelled
To resolve this, consider extending the lifetime of the acceptor by making it a data member of DomainListener. Additionally, checking the error_code provided to asynchronous operations can give more insight into the asynchronous call chains.
Here is a complete minimal example demonstrating using domain sockets with Asio.
#include <cstdio>
#include <iostream>
#include <array>
#include <boost/asio.hpp>
#include <boost/bind.hpp>

/// @brief server demonstrates using domain sockets to accept
///        and read from a connection.
class server
{
public:
    server(
        boost::asio::io_service& io_service,
        const std::string& file)
        : io_service_(io_service),
          acceptor_(io_service_,
              boost::asio::local::stream_protocol::endpoint(file)),
          client_(io_service_)
    {
        std::cout << "start accepting connection" << std::endl;
        acceptor_.async_accept(client_,
            boost::bind(&server::handle_accept, this,
                boost::asio::placeholders::error));
    }

private:
    void handle_accept(const boost::system::error_code& error)
    {
        std::cout << "handle_accept: " << error.message() << std::endl;
        if (error) return;

        std::cout << "start reading" << std::endl;
        client_.async_read_some(boost::asio::buffer(buffer_),
            boost::bind(&server::handle_read, this,
                boost::asio::placeholders::error,
                boost::asio::placeholders::bytes_transferred));
    }

    void handle_read(
        const boost::system::error_code& error,
        std::size_t bytes_transferred)
    {
        std::cout << "handle_read: " << error.message() << std::endl;
        if (error) return;

        std::cout << "read: ";
        std::cout.write(buffer_.begin(), bytes_transferred);
        std::cout.flush();
    }

private:
    boost::asio::io_service& io_service_;
    boost::asio::local::stream_protocol::acceptor acceptor_;
    boost::asio::local::stream_protocol::socket client_;
    std::array<char, 1024> buffer_;
};

int main(int argc, char* argv[])
{
    if (argc != 2)
    {
        std::cerr << "Usage: <file>\n";
        return 1;
    }

    // Remove file on startup and exit.
    std::string file(argv[1]);
    struct file_remover
    {
        file_remover(std::string file): file_(file) { std::remove(file.c_str()); }
        ~file_remover() { std::remove(file_.c_str()); }
        std::string file_;
    } remover(file);

    // Create and run the server.
    boost::asio::io_service io_service;
    server s(io_service, file);
    io_service.run();
}
Coliru does not have socat installed, so the following commands use OpenBSD netcat to write "asio domain socket example" to the domain socket:
export SOCKFILE=$PWD/example.sock
./a.out $SOCKFILE &
sleep 1
echo "asio domain socket example" | nc -U $SOCKFILE
Which outputs:
start accepting connection
handle_accept: Success
start reading
handle_read: Success
read: asio domain socket example

How to prevent ASIO based server from terminating

I have been reading some Boost ASIO tutorials. So far, my understanding is that the entire send and receive is a loop that can be iterated only once. Please have a look at the following simple code:
client.cpp:
#include <boost/asio.hpp>
#include <boost/array.hpp>
#include <iostream>
#include <string>

boost::asio::io_service io_service;
boost::asio::ip::tcp::resolver resolver(io_service);
boost::asio::ip::tcp::socket sock(io_service);
boost::array<char, 4096> buffer;

void read_handler(const boost::system::error_code &ec, std::size_t bytes_transferred)
{
    if (!ec)
    {
        std::cout << std::string(buffer.data(), bytes_transferred) << std::endl;
        sock.async_read_some(boost::asio::buffer(buffer), read_handler);
    }
}

void connect_handler(const boost::system::error_code &ec)
{
    if (!ec)
    {
        sock.async_read_some(boost::asio::buffer(buffer), read_handler);
    }
}

void resolve_handler(const boost::system::error_code &ec, boost::asio::ip::tcp::resolver::iterator it)
{
    if (!ec)
    {
        sock.async_connect(*it, connect_handler);
    }
}

int main()
{
    boost::asio::ip::tcp::resolver::query query("localhost", "2013");
    resolver.async_resolve(query, resolve_handler);
    io_service.run();
}
The program resolves an address, connects to the server, reads the data, and finally ends when there is no more data.
My question: How can I continue this loop? I mean, how can I keep this connection between a client and server alive during the entire lifetime of my application, so that the server sends data whenever it has something to send?
I tried to break this circle, but everything seems trapped inside io_service.run().
The same question holds for my server:
server.cpp :
#include <boost/asio.hpp>
#include <string>

boost::asio::io_service io_service;
boost::asio::ip::tcp::endpoint endpoint(boost::asio::ip::tcp::v4(), 2013);
boost::asio::ip::tcp::acceptor acceptor(io_service, endpoint);
boost::asio::ip::tcp::socket sock(io_service);
std::string data = "Hello, world!";

void write_handler(const boost::system::error_code &ec, std::size_t bytes_transferred)
{
}

void accept_handler(const boost::system::error_code &ec)
{
    if (!ec)
    {
        boost::asio::async_write(sock, boost::asio::buffer(data), write_handler);
    }
}

int main()
{
    acceptor.listen();
    acceptor.async_accept(sock, accept_handler);
    io_service.run();
}
This is just an example. In a real application, I would like to keep the socket open and reuse it for other data exchanges (both read and write). How may I do that?
I value your kind comments. If you have references to some easy solutions addressing this issue, I'd appreciate it if you mentioned them.
Thank you
Update (server sample code)
Based on the answer given below (update 2), I wrote the server code. Please note that the code is simplified (though it compiles and runs). Also note that io_service.run will never return, because the acceptor is always waiting for a new connection; that is how io_service.run runs forever. Whenever you want io_service.run to return, just make the acceptor stop accepting. Please do this in one of the many ways that I don't currently remember. (Seriously, how do we do that in a clean way? :) )
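(One clean way, as a sketch: close the acceptor from within the io_service thread so it doesn't race the accept handler; the pending async_accept then completes with operation_aborted and, with no further work queued, run returns. io_service and acceptor are globals in the listing below, so a capture-less lambda works:)

io_service.post([] {
    boost::system::error_code ec;
    acceptor.close(ec); // pending async_accept completes with operation_aborted
});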
enjoy:
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/thread.hpp>
#include <algorithm>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>
#include <time.h>
#include <unistd.h>

boost::asio::io_service io_service;
boost::asio::ip::tcp::endpoint endpoint(boost::asio::ip::tcp::v4(), 2013);
boost::asio::ip::tcp::acceptor acceptor(io_service, endpoint);
//boost::asio::ip::tcp::socket sock(io_service);
std::string data = "Hello, world!";

class Observer;
std::vector<Observer*> observers;

class Observer
{
public:
    Observer(boost::asio::ip::tcp::socket *socket_):socket_obs(socket_){}

    void notify(std::string data)
    {
        std::cout << "notify called data[" << data << "]" << std::endl;
        boost::asio::async_write(*socket_obs, boost::asio::buffer(data),
            boost::bind(&Observer::write_handler, this, boost::asio::placeholders::error));
    }

    void write_handler(const boost::system::error_code &ec)
    {
        if (!ec) //no error: done, just wait for the next notification
            return;
        socket_obs->close(); //client will get error and exit its read_handler
        observers.erase(std::find(observers.begin(), observers.end(), this));
        std::cout << "Observer::write_handler returns as nothing was written" << std::endl;
    }

private:
    boost::asio::ip::tcp::socket *socket_obs;
};

class server
{
public:
    void CreatSocketAndAccept()
    {
        socket_ = new boost::asio::ip::tcp::socket(io_service);
        observers.push_back(new Observer(socket_));
        acceptor.async_accept(*socket_,
            boost::bind(&server::handle_accept, this, boost::asio::placeholders::error));
    }

    server(boost::asio::io_service& io_service)
    {
        acceptor.listen();
        CreatSocketAndAccept();
    }

    void handle_accept(const boost::system::error_code& e)
    {
        CreatSocketAndAccept();
    }

private:
    boost::asio::ip::tcp::socket *socket_;
};

class Agent
{
public:
    void update(std::string data)
    {
        if(!observers.empty())
        {
            // std::cout << "calling notify data[" << data << "]" << std::endl;
            observers[0]->notify(data);
        }
    }
};

Agent agent;

void AgentSim()
{
    int i = 0;
    sleep(10); //wait for me to start client
    while(i++ < 10)
    {
        std::ostringstream out("");
        out << data << i;
        // std::cout << "calling update data[" << out.str() << "]" << std::endl;
        agent.update(out.str());
        sleep(1);
    }
}

void run()
{
    io_service.run();
    std::cout << "io_service returned" << std::endl;
}

int main()
{
    server server_(io_service);
    boost::thread thread_1(AgentSim);
    boost::thread thread_2(run);
    thread_2.join();
    thread_1.join();
}
You can simplify the logic of asio-based programs as follows: each function that calls an async_X function provides a handler. This is a bit like transitions between states of a state machine, where the handlers are the states and the async calls are transitions between states. Just exiting a handler without calling an async_* function is like a transition to an end state. Everything the program "does" (sending data, receiving data, connecting sockets, etc.) occurs during the transitions.
If you see it that way, your client looks like this (only "good path", i.e. without errors):
<start> --(resolve)----> resolve_handler
resolve_handler --(connect)----> connect_handler
connect_handler --(read data)--> read_handler
read_handler --(read data)--> read_handler
Your server looks like this:
<start> --(accept)-----> accept handler
accept_handler --(write data)-> write_handler
write_handler ---------------> <end>
Since your write_handler does not do anything, it makes a transition to the end state, meaning io_service::run returns. The question now is: what do you want to do after the data has been written to the socket? Depending on that, you will have to define a corresponding transition, i.e. an async call that does what you want to do.
Update:
From your comment I see you want to wait for the next data to be ready, i.e. for the next tick. The transitions then look like this:
write_handler --(wait for tick/data)--> dataready
dataready --(write data)----------> write_handler
You see, this introduces a new state (handler); I called it dataready, but you could as well call it tick_handler or something else. The transition back to the write_handler is easy:
void dataready()
{
    // get the new data...
    async_write(sock, buffer(data), write_handler);
}
The transition from the write_handler can be a simple async_wait on some timer. If the data come from "outside" and you don't know exactly when they will be ready, wait for some time, check if the data are there, and if they are not, wait some more time:
write_handler --(wait some time)--> checkForData
checkForData:no --(wait some time)--> checkForData
checkForData:yes --(write data)------> write_handler
or, in (pseudo)code:
void write_handler(const error_code &ec, size_t bytes_transferred)
{
    //...
    async_wait(ticklength, checkForData);
}

void checkForData(/*insert wait handler signature here*/)
{
    if (dataIsReady())
    {
        async_write(sock, buffer(data), write_handler);
    }
    else
    {
        async_wait(shortTime, checkForData);
    }
}
Update 2:
According to your comment, you already have an agent that does the time management somehow (calling update every tick). Here's how I would solve that:
Let the agent have a list of observers that get notified when there is new data in an update call.
Let each observer handle one client connection (socket).
Let the server just wait for incoming connections, create observers from them and register them with the Agent.
I am not very firm in the exact syntax of ASIO, so this will be handwavy pseudocode:
Server:
void Server::accept_handler()
{
    obs = new Observer(socket);
    agent.register(obs);
    new socket; //observer takes care of the old one
    async_accept(..., accept_handler);
}
Agent:
void Agent::update()
{
    if (newDataAvailable())
    {
        for (auto& obs : observers)
        {
            obs->notify(data);
        }
    }
}
Observer:
void Observer::notify(data)
{
    async_write(sock, data, write_handler);
}

void Observer::write_handler(error_code ec, ...)
{
    if (!ec) //no error: done, just wait for the next notification
        return;
    //on error: close the connection and unregister
    agent.unregister(this);
    socket.close(); //client will get error and exit its read_handler
}