I have been reading some Boost ASIO tutorials. So far, my understanding is that the whole send/receive chain is a loop that can only be iterated once. Please have a look at the following simple code:
client.cpp:
#include <boost/asio.hpp>
#include <boost/array.hpp>
#include <iostream>
#include <string>
boost::asio::io_service io_service;
boost::asio::ip::tcp::resolver resolver(io_service);
boost::asio::ip::tcp::socket sock(io_service);
boost::array<char, 4096> buffer;
void read_handler(const boost::system::error_code &ec, std::size_t bytes_transferred)
{
if (!ec)
{
std::cout << std::string(buffer.data(), bytes_transferred) << std::endl;
sock.async_read_some(boost::asio::buffer(buffer), read_handler);
}
}
void connect_handler(const boost::system::error_code &ec)
{
if (!ec)
{
sock.async_read_some(boost::asio::buffer(buffer), read_handler);
}
}
void resolve_handler(const boost::system::error_code &ec, boost::asio::ip::tcp::resolver::iterator it)
{
if (!ec)
{
sock.async_connect(*it, connect_handler);
}
}
int main()
{
boost::asio::ip::tcp::resolver::query query("localhost", "2013");
resolver.async_resolve(query, resolve_handler);
io_service.run();
}
The program resolves an address, connects to the server, reads the data, and finally ends when there is no more data.
My question: how can I continue this loop? That is, how can I keep the connection between client and server alive for the entire lifetime of my application, so that the server can send data whenever it has something to send?
I tried to break out of this cycle, but everything seems trapped inside io_service.run().
The same question holds for my server:
server.cpp :
#include <boost/asio.hpp>
#include <string>
boost::asio::io_service io_service;
boost::asio::ip::tcp::endpoint endpoint(boost::asio::ip::tcp::v4(), 2013);
boost::asio::ip::tcp::acceptor acceptor(io_service, endpoint);
boost::asio::ip::tcp::socket sock(io_service);
std::string data = "Hello, world!";
void write_handler(const boost::system::error_code &ec, std::size_t bytes_transferred)
{
}
void accept_handler(const boost::system::error_code &ec)
{
if (!ec)
{
boost::asio::async_write(sock, boost::asio::buffer(data), write_handler);
}
}
int main()
{
acceptor.listen();
acceptor.async_accept(sock, accept_handler);
io_service.run();
}
This is just an example. In a real application, I would like to keep the socket open and reuse it for other data exchanges (both read and write). How can I do that?
I value your comments. If you have references to simple solutions addressing this issue, I would appreciate it if you mentioned them.
Thank you
Update (server sample code)
Based on the answer given below (update 2), I wrote the server code. Please note that the code is simplified (though it compiles and runs). Also note that io_service.run never returns, because the acceptor is always waiting for a new connection; that is what keeps it running forever. Whenever you want io_service.run to return, just make the acceptor stop accepting. (Seriously, how do we do that in a clean way? :) )
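One clean way, sketched here under the assumption that handle_accept is changed to check its error code before accepting again, is to close the acceptor from inside the io_service; the pending async_accept then completes with boost::asio::error::operation_aborted, and run() returns once the remaining handlers are done:
void close_acceptor()
{
    boost::system::error_code ignored;
    acceptor.close(ignored); // pending async_accept completes with operation_aborted
}
void stop_accepting()
{
    io_service.post(&close_acceptor); // run the close on the io_service thread
}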
enjoy:
#include <boost/asio.hpp>
#include <boost/bind.hpp>   // boost::bind in the handlers
#include <boost/thread.hpp>
#include <algorithm>        // std::find
#include <iostream>
#include <sstream>          // std::ostringstream
#include <string>
#include <vector>
#include <time.h>
#include <unistd.h>         // sleep()
boost::asio::io_service io_service;
boost::asio::ip::tcp::endpoint endpoint(boost::asio::ip::tcp::v4(), 2013);
boost::asio::ip::tcp::acceptor acceptor(io_service, endpoint);
//boost::asio::ip::tcp::socket sock(io_service);
std::string data = "Hello, world!";
class Observer;
std::vector<Observer*> observers;
class Observer
{
public:
Observer(boost::asio::ip::tcp::socket *socket_):socket_obs(socket_){}
void notify(std::string data)
{
std::cout << "notify called data[" << data << "]" << std::endl;
boost::asio::async_write(*socket_obs, boost::asio::buffer(data) , boost::bind(&Observer::write_handler, this,boost::asio::placeholders::error));
}
void write_handler(const boost::system::error_code &ec)
{
if (!ec) //no error: done, just wait for the next notification
return;
socket_obs->close(); //client will get error and exit its read_handler
observers.erase(std::find(observers.begin(), observers.end(),this));
std::cout << "Observer::write_handler returns as nothing was written" << std::endl;
}
private:
boost::asio::ip::tcp::socket *socket_obs;
};
class server
{
public:
void CreatSocketAndAccept()
{
socket_ = new boost::asio::ip::tcp::socket(io_service);
observers.push_back(new Observer(socket_));
acceptor.async_accept(*socket_,boost::bind(&server::handle_accept, this,boost::asio::placeholders::error));
}
server(boost::asio::io_service& io_service)
{
acceptor.listen();
CreatSocketAndAccept();
}
void handle_accept(const boost::system::error_code& e)
{
CreatSocketAndAccept();
}
private:
boost::asio::ip::tcp::socket *socket_;
};
class Agent
{
public:
void update(std::string data)
{
if(!observers.empty())
{
// std::cout << "calling notify data[" << data << "]" << std::endl;
observers[0]->notify(data);
}
}
};
Agent agent;
void AgentSim()
{
int i = 0;
sleep(10);//wait for me to start client
while(i++ < 10)
{
std::ostringstream out("");
out << data << i ;
// std::cout << "calling update data[" << out.str() << "]" << std::endl;
agent.update(out.str());
sleep(1);
}
}
void run()
{
io_service.run();
std::cout << "io_service returned" << std::endl;
}
int main()
{
server server_(io_service);
boost::thread thread_1(AgentSim);
boost::thread thread_2(run);
thread_2.join();
thread_1.join();
}
You can simplify the logic of asio-based programs as follows: each function that calls an async_X function provides a handler. This is a bit like the transitions between states of a state machine, where the handlers are the states and the async calls are the transitions between states. Just exiting a handler without calling an async_* function is like a transition to an end state. Everything the program "does" (sending data, receiving data, connecting sockets, etc.) occurs during the transitions.
If you see it that way, your client looks like this (only "good path", i.e. without errors):
<start> --(resolve)----> resolve_handler
resolve_handler --(connect)----> connect_handler
connect_handler --(read data)--> read_handler
read_handler --(read data)--> read_handler
Your server looks like this:
<start> --(accept)-----> accept handler
accept_handler --(write data)-> write_handler
write_handler ---------------> <end>
Since your write_handler does not do anything, it makes a transition to the end state, meaning io_service::run returns. The question now is: what do you want to do after the data has been written to the socket? Depending on that, you will have to define a corresponding transition, i.e. an async call that does what you want to do.
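For example, if the server should keep the connection open after the write, the write_handler has to make another transition instead of just returning. A rough sketch of such a replacement, assuming the globals from server.cpp above plus a hypothetical extra buffer (and #include <boost/array.hpp>):
boost::array<char, 4096> reply; // hypothetical buffer, not in the original server
void reply_handler(const boost::system::error_code &ec, std::size_t bytes_transferred);
void write_handler(const boost::system::error_code &ec, std::size_t bytes_transferred)
{
    if (!ec)
    {
        // transition to a new state: wait for data from the client,
        // which keeps io_service::run() from returning
        sock.async_read_some(boost::asio::buffer(reply), reply_handler);
    }
}
void reply_handler(const boost::system::error_code &ec, std::size_t bytes_transferred)
{
    if (!ec)
    {
        // ... use the received bytes, then make the next transition
        sock.async_read_some(boost::asio::buffer(reply), reply_handler);
    }
}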
Update:
From your comment I see that you want to wait for the next data to be ready, i.e. for the next tick. The transitions then look like this:
write_handler --(wait for tick/data)--> dataready
dataready --(write data)----------> write_handler
You see, this introduces a new state (handler); I called it dataready, but you could just as well call it tick_handler or something else. The transition back to the write_handler is easy:
void dataready()
{
// get the new data...
async_write(sock, buffer(data), write_handler);
}
The transition from the write_handler can be a simple async_wait on some timer. If the data comes from "outside" and you don't know exactly when it will be ready, wait for some time, check whether the data is there, and if it is not, wait some more (a concrete timer-based sketch follows the pseudocode):
write_handler --(wait some time)--> checkForData
checkForData:no --(wait some time)--> checkForData
checkForData:yes --(write data)------> write_handler
or, in (pseudo)code:
void write_handler(const error_code &ec, size_t bytes_transferred)
{
//...
async_wait(ticklength, checkForData);
}
void checkForData(/*insert wait handler signature here*/)
{
if (dataIsReady())
{
async_write(sock, buffer(data), write_handler);
}
else
{
async_wait(shortTime, checkForData);
}
}
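With Boost.Asio proper, the async_wait in this pseudocode would typically be driven by a deadline_timer. A rough, self-contained sketch (the tick lengths and the dataIsReady placeholder are illustrative only):
#include <boost/asio.hpp>
#include <string>
boost::asio::io_service io_service;
boost::asio::ip::tcp::socket sock(io_service);
boost::asio::deadline_timer tick_timer(io_service);
std::string data;
bool dataIsReady() { return false; } // placeholder: the real application decides
void write_handler(const boost::system::error_code &ec, std::size_t bytes_transferred);
void checkForData(const boost::system::error_code &ec)
{
    if (ec) return; // timer cancelled or io_service stopping
    if (dataIsReady())
    {
        boost::asio::async_write(sock, boost::asio::buffer(data), write_handler);
    }
    else
    {
        tick_timer.expires_from_now(boost::posix_time::milliseconds(100)); // wait some more time
        tick_timer.async_wait(checkForData);
    }
}
void write_handler(const boost::system::error_code &ec, std::size_t bytes_transferred)
{
    if (ec) return;
    tick_timer.expires_from_now(boost::posix_time::seconds(1)); // the "tick length"
    tick_timer.async_wait(checkForData);
}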
Update 2:
According to your comment, you already have an agent that does the time management somehow (calling update every tick). Here's how I would solve that:
Let the agent have a list of observers that get notified when there is new data in an update call.
Let each observer handle one client connection (socket).
Let the server just wait for incoming connections, create observers from them and register them with the Agent.
I am not very firm in the exact syntax of ASIO, so this will be handwavy pseudocode:
Server:
void Server::accept_handler()
{
obs = new Observer(socket);
agent.register(obs);
new socket; //observer takes care of the old one
async_accept(..., accept_handler);
}
Agent:
void Agent::update()
{
if (newDataAvailable())
{
for (auto& obs : observers)
{
obs->notify(data);
}
}
}
Observer:
void Observer::notify(data)
{
async_write(sock, data, write_handler);
}
void Observer::write_handler(error_code ec, ...)
{
if (!ec) //no error: done, just wait for the next notification
return;
//on error: close the connection and unregister
agent.unregister(this);
socket.close(); //client will get error and exit its read_handler
}
Related
I'm trying to implement a simple IPC protocol for a project that will be built using Boost ASIO. The idea is to have the communication done over IP/TCP, with a server holding the backend and a client that uses the data received from the server to build the frontend. The whole session would go like this:
The connection is established
The client sends a 2 byte packet with some information that will be used by the server to build its response (this is stored as the struct propertiesPacket)
The server processes the data received and stores the output in a struct of variable size called processedData
The server sends a 2 byte unsigned integer that tells the client the size of the struct it is about to receive (let's say the struct is n bytes)
The server sends the struct data as a n byte packet
The connection is ended
I tried implementing this myself, following the great tutorial in Boost ASIO's documentation, as well as the examples included in the library and some repos I found on GitHub, but as this is my first time working with networking and IPC, I couldn't make it work: my client throws an exception saying the connection was reset by the peer.
What I have right now is this:
// File client.cpp
int main(int argc, char *argv[])
{
try {
propertiesPacket properties;
// ...
// We set the data inside the properties struct
// ...
boost::asio::io_context io;
boost::asio::ip::tcp::socket socket(io);
boost::asio::ip::tcp::resolver resolver(io);
boost::asio::connect(socket, resolver.resolve(argv[1], argv[2]));
boost::asio::write(socket, boost::asio::buffer(&properties, sizeof(propertiesPacket)));
unsigned short responseSize {};
boost::asio::read(socket, boost::asio::buffer(&responseSize, sizeof(short)));
processedData* response = reinterpret_cast<processedData*>(malloc(responseSize));
boost::asio::read(socket, boost::asio::buffer(response, responseSize));
// ...
// The client handles the data
// ...
return 0;
} catch (std::exception &e) {
std::cerr << e.what() << std::endl;
}
}
// File server.cpp
class ServerConnection
: public std::enable_shared_from_this<ServerConnection>
{
public:
using TCPSocket = boost::asio::ip::tcp::socket;
ServerConnection::ServerConnection(TCPSocket socket)
: socket_(std::move(socket)),
properties_(nullptr),
filePacket_(nullptr),
filePacketSize_(0)
{
}
void start() { doRead(); }
private:
void doRead()
{
auto self(shared_from_this());
socket_.async_read_some(boost::asio::buffer(properties_, sizeof(propertiesPacket)),
[this, self](boost::system::error_code ec, std::size_t /*length*/)
{
if (!ec) {
processData();
doWrite(&filePacketSize_, sizeof(short));
doWrite(filePacket_, sizeof(*filePacket_));
}
});
}
void doWrite(void* data, size_t length)
{
auto self(shared_from_this());
boost::asio::async_write(socket_, boost::asio::buffer(data, length),
[this, self](boost::system::error_code ec, std::size_t /*length*/)
{
if (!ec) { doRead(); }
});
}
void processData()
{ /* Data is processed */ }
TCPSocket socket_;
propertiesPacket* properties_;
processedData* filePacket_;
short filePacketSize_;
};
class Server
{
public:
using IOContext = boost::asio::io_context;
using TCPSocket = boost::asio::ip::tcp::socket;
using TCPAcceptor = boost::asio::ip::tcp::acceptor;
Server::Server(IOContext& io, short port)
: socket_(io),
acceptor_(io, boost::asio::ip::tcp::endpoint(boost::asio::ip::tcp::v4(), port))
{
doAccept();
}
private:
void doAccept()
{
acceptor_.async_accept(socket_,
[this](boost::system::error_code ec)
{
if (!ec) {
std::make_shared<ServerConnection>(std::move(socket_))->start();
}
doAccept();
});
}
TCPSocket socket_;
TCPAcceptor acceptor_;
};
What did I do wrong? My guess is that calling doWrite multiple times inside doRead, when doWrite then also calls doRead, is part of what's causing the problems, but I don't know what the correct way to write data asynchronously multiple times is. I'm also sure that isn't the only part of my code that isn't behaving as I think it should.
Besides the problems with the code shown that I mentioned in the comments, there is indeed the problem that you suspected:
My guess is that calling doWrite multiple times inside doRead, when doWrite then also calls doRead, is part of what's causing the problems
The fact that "doRead" is in the same function isn't necessarily a problem (that's just full-duplex socket IO). However "calling multiple times" is. See the docs:
This operation is implemented in terms of zero or more calls to the stream's async_write_some function, and is known as a composed operation. The program must ensure that the stream performs no other write operations (such as async_write, the stream's async_write_some function, or any other composed operations that perform writes) until this operation completes.
The usual way is to put the whole message in a single buffer, but if that would be "expensive" to copy, you can use a BufferSequence, also known as scatter/gather buffers.
Specifically, you would replace
doWrite(&filePacketSize_, sizeof(short));
doWrite(filePacket_, sizeof(*filePacket_));
with something like
std::vector<boost::asio::const_buffer> msg{
boost::asio::buffer(&filePacketSize_, sizeof(short)),
boost::asio::buffer(filePacket_, sizeof(*filePacket_)),
};
doWrite(msg);
Note that this assumes that filePacketSize and filePacket have been assigned proper values!
You could of course modify do_write to accept the buffer sequence:
template <typename Buffers> void doWrite(Buffers msg)
{
auto self(shared_from_this());
boost::asio::async_write(
socket_, msg,
[this, self](boost::system::error_code ec, std::size_t /*length*/) {
if (!ec) {
doRead();
}
});
}
But in your case I'd simplify by inlining the body (now that you don't call it more than once anyway).
SIDE NOTES
Don't use new or delete. NEVER use malloc in C++. Never use reinterpret_cast<> (except in the very rarest of exceptions that the standard allows!). Instead of
processedData* response = reinterpret_cast<processedData*>(malloc(responseSize));
Just use
processedData response;
(optionally add {} for value-initialization of aggregates). If you need variable-length messages, consider putting a vector or an array<char, MAXLEN> inside the message. Of course, array is fixed-length, but it preserves POD-ness, so it might be easier to work with. If you use vector, you'd want a scatter/gather read into a buffer sequence like I showed above for the write side.
Instead of reinterpreting between inconsistent short and unsigned short types, perhaps just spell the type with the standard sizes: std::uint16_t everywhere.
Keep in mind that you are not taking into account byte order so your protocol will NOT be portable across compilers/architectures.
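To illustrate the last two notes, a rough sketch (the function name is made up) of reading a 16-bit, network-byte-order length prefix followed by a variable-sized payload on the client side:
#include <boost/asio.hpp>
#include <cstdint>
#include <vector>
std::vector<char> read_length_prefixed(boost::asio::ip::tcp::socket& socket)
{
    unsigned char len_raw[2];
    boost::asio::read(socket, boost::asio::buffer(len_raw)); // 2-byte length prefix
    // network byte order (big-endian) -> host value
    std::uint16_t len = static_cast<std::uint16_t>((len_raw[0] << 8) | len_raw[1]);
    std::vector<char> payload(len);
    if (len > 0)
        boost::asio::read(socket, boost::asio::buffer(payload)); // fills the whole vector
    return payload;
}
The writing side would have to build the prefix the same way; note that the payload needs its own read here, because its size is only known after the prefix has arrived.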
Provisional Fixes
This is the listing I ended up with after reviewing the code you shared.
Live On Coliru
#include <boost/asio.hpp>
#include <iostream>
namespace ba = boost::asio;
using boost::asio::ip::tcp;
using boost::system::error_code;
using TCPSocket = tcp::socket;
struct processedData { };
struct propertiesPacket { };
// File server.cpp
class ServerConnection : public std::enable_shared_from_this<ServerConnection> {
public:
ServerConnection(TCPSocket socket) : socket_(std::move(socket))
{ }
void start() {
std::clog << __PRETTY_FUNCTION__ << std::endl;
doRead();
}
private:
void doRead()
{
std::clog << __PRETTY_FUNCTION__ << std::endl;
auto self(shared_from_this());
socket_.async_read_some(
ba::buffer(&properties_, sizeof(properties_)),
[this, self](error_code ec, std::size_t length) {
std::clog << "received: " << length << std::endl;
if (!ec) {
processData();
std::vector<ba::const_buffer> msg{
ba::buffer(&filePacketSize_, sizeof(uint16_t)),
ba::buffer(&filePacket_, filePacketSize_),
};
ba::async_write(socket_, msg,
[this, self = shared_from_this()](
error_code ec, std::size_t length) {
std::clog << " written: " << length
<< std::endl;
if (!ec) {
doRead();
}
});
}
});
}
void processData() {
std::clog << __PRETTY_FUNCTION__ << std::endl;
/* Data is processed */
}
TCPSocket socket_;
propertiesPacket properties_{};
processedData filePacket_{};
uint16_t filePacketSize_ = sizeof(filePacket_);
};
class Server
{
public:
using IOContext = ba::io_context;
using TCPAcceptor = tcp::acceptor;
Server(IOContext& io, uint16_t port)
: socket_(io)
, acceptor_(io, {tcp::v4(), port})
{
doAccept();
}
private:
void doAccept()
{
std::clog << __PRETTY_FUNCTION__ << std::endl;
acceptor_.async_accept(socket_, [this](error_code ec) {
if (!ec) {
std::clog << "Accepted " << socket_.remote_endpoint()
<< std::endl;
std::make_shared<ServerConnection>(std::move(socket_))->start();
doAccept();
} else {
std::clog << "Accept " << ec.message() << std::endl;
}
});
}
TCPSocket socket_;
TCPAcceptor acceptor_;
};
// File client.cpp
int main(int argc, char *argv[])
{
ba::io_context io;
Server s{io, 6869};
std::thread server_thread{[&io] {
io.run();
}};
// always check argc!
std::vector<std::string> args(argv, argv + argc);
if (args.size() == 1)
args = {"demo", "127.0.0.1", "6869"};
// avoid race with server accept thread
post(io, [&io, args] {
try {
propertiesPacket properties;
// ...
// We set the data inside the properties struct
// ...
tcp::socket socket(io);
tcp::resolver resolver(io);
connect(socket, resolver.resolve(args.at(1), args.at(2)));
write(socket, ba::buffer(&properties, sizeof(properties)));
uint16_t responseSize{};
ba::read(socket, ba::buffer(&responseSize, sizeof(uint16_t)));
std::clog << "Client responseSize: " << responseSize << std::endl;
processedData response{};
assert(responseSize <= sizeof(response));
ba::read(socket, ba::buffer(&response, responseSize));
// ...
// The client handles the data
// ...
// for online demo:
io.stop();
} catch (std::exception const& e) {
std::clog << e.what() << std::endl;
}
});
io.run_one();
server_thread.join();
}
Printing something similar to
void Server::doAccept()
Server::doAccept()::<lambda(boost::system::error_code)> Success
void ServerConnection::start()
void ServerConnection::doRead()
void Server::doAccept()
received: 1
void ServerConnection::processData()
written: 3
void ServerConnection::doRead()
Client responseSize: 1
I am working on a simple TCP server that reads and writes its messages via thread-safe queues. The application can then use those queues to safely read from and write to the socket, even from different threads.
The problem I am facing is that I cannot async_read. My queue has a pop operation which returns the next element to be processed, but it blocks if nothing is available. So once I call pop, the async_read callback of course isn't fired anymore. Is there a way I can integrate such a queue into boost asio, or do I have to rewrite completely?
Below is a short example I made to show the problem I am having. Once a TCP connection is established, I create a new thread that will run the application under that tcp_connection. Afterwards I want to start async_read and async_write. I have been racking my brain over this for a couple of hours and I really don't know how to solve it.
class tcp_connection : public std::enable_shared_from_this<tcp_connection>
{
public:
static std::shared_ptr<tcp_connection> create(boost::asio::io_service &io_service) {
return std::shared_ptr<tcp_connection>(new tcp_connection(io_service));
}
boost::asio::ip::tcp::socket& get_socket()
{
return this->socket;
}
void app_start()
{
while(1)
{
// Pop is a blocking call.
auto inbound_message = this->inbound_messages.pop();
std::cout << "Got message in app thread: " << inbound_message << ". Sending it back to client." << std::endl;
this->outbound_messages.push(inbound_message);
}
}
void start() {
this->app_thread = std::thread(&tcp_connection::app_start, shared_from_this());
boost::asio::async_read_until(this->socket, this->input_stream, "\r\n",
strand.wrap(boost::bind(&tcp_connection::handle_read, shared_from_this(), boost::asio::placeholders::error, boost::asio::placeholders::bytes_transferred)));
// Start async writing here. The message to send are in the outbound_message queue. But a Pop operation blocks
// empty() is also available to check whether the queue is empty.
// So how can I async write without blocking the read.
// block...
auto message = this->outbound_messages.pop();
boost::asio::async_write(this->socket, boost::asio::buffer(message),
strand.wrap(boost::bind(&tcp_connection::handle_write, shared_from_this(), boost::asio::placeholders::error, boost::asio::placeholders::bytes_transferred)));
}
void handle_read(const boost::system::error_code& e, size_t bytes_read)
{
std::cout << "handle_read called" << std::endl;
if (e)
{
std::cout << "Error handle_read: " << e.message() << std::endl;
return;
}
if (bytes_read != 0)
{
std::istream istream(&this->input_stream);
std::string message;
message.resize(bytes_read);
istream.read(&message[0], bytes_read);
std::cout << "Got message: " << message << std::endl;
this->inbound_messages.push(message);
}
boost::asio::async_read_until(this->socket, this->input_stream, "\r\n",
strand.wrap(boost::bind(&tcp_connection::handle_read, shared_from_this(), boost::asio::placeholders::error, boost::asio::placeholders::bytes_transferred)));
}
void handle_write(const boost::system::error_code& e, size_t /*bytes_transferred*/)
{
if (e)
{
std::cout << "Error handle_write: " << e.message() << std::endl;
return;
}
// block...
auto message = this->outbound_messages.pop();
boost::asio::async_write(this->socket, boost::asio::buffer(message),
strand.wrap(boost::bind(&tcp_connection::handle_write, shared_from_this(), boost::asio::placeholders::error, boost::asio::placeholders::bytes_transferred)));
}
private:
tcp_connection(boost::asio::io_service& io_service) : socket(io_service), strand(io_service)
{
}
boost::asio::ip::tcp::socket socket;
boost::asio::strand strand;
boost::asio::streambuf input_stream;
std::thread app_thread;
concurrent_queue<std::string> inbound_messages;
concurrent_queue<std::string> outbound_messages;
};
class tcp_server
{
public:
tcp_server(boost::asio::io_service& io_service)
: acceptor(io_service, boost::asio::ip::tcp::endpoint(boost::asio::ip::tcp::v4(), 9001))
{
start_accept();
}
private:
void start_accept()
{
std::shared_ptr<tcp_connection> new_connection =
tcp_connection::create(acceptor.get_io_service());
acceptor.async_accept(new_connection->get_socket(),
boost::bind(&tcp_server::handle_accept, this, new_connection, boost::asio::placeholders::error));
}
void handle_accept(std::shared_ptr<tcp_connection> new_connection,
const boost::system::error_code& error)
{
if (!error)
{
new_connection->start();
}
start_accept();
}
boost::asio::ip::tcp::acceptor acceptor;
};
It looks to me as if you want an async_pop method which takes an error placeholder and a callback handler. When you receive a message, check whether there is an outstanding handler, and if so, pop the message, deregister the handler, and call it. Similarly, when registering the async_pop, if there is already a message waiting, pop the message and post a call to the handler without registering it.
You might want to derive the async_pop operation class from a polymorphic base class of type pop_operation or similar.
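A rough sketch of what such an async_pop could look like (everything here is illustrative and omits the error placeholder for brevity): push either satisfies a waiting handler or stores the message, and async_pop either completes immediately with a stored message or registers the handler; completions are posted to the io_service so they run on its threads.
#include <boost/asio.hpp>
#include <deque>
#include <functional>
#include <mutex>
#include <string>
class async_queue
{
public:
    using handler_type = std::function<void(std::string)>;
    explicit async_queue(boost::asio::io_service& ios) : ios_(ios) {}
    void push(std::string msg)
    {
        handler_type handler;
        {
            std::lock_guard<std::mutex> lock(mutex_);
            if (handlers_.empty())
            {
                messages_.push_back(std::move(msg)); // nobody waiting: store it
                return;
            }
            handler = std::move(handlers_.front());
            handlers_.pop_front();
        }
        ios_.post(std::bind(std::move(handler), std::move(msg))); // complete the waiting pop
    }
    void async_pop(handler_type handler)
    {
        std::string msg;
        {
            std::lock_guard<std::mutex> lock(mutex_);
            if (messages_.empty())
            {
                handlers_.push_back(std::move(handler)); // nothing yet: register the handler
                return;
            }
            msg = std::move(messages_.front());
            messages_.pop_front();
        }
        ios_.post(std::bind(std::move(handler), std::move(msg))); // complete immediately
    }
private:
    boost::asio::io_service& ios_;
    std::mutex mutex_;
    std::deque<std::string> messages_;
    std::deque<handler_type> handlers_;
};
handle_write would then call outbound_messages.async_pop(...) with a handler that issues the next async_write (wrapped in the strand), instead of blocking in pop().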
[disclaimer] I am new to boost.
I am looking into boost::asio and tried to create a simple asynchronous TCP server with the following functionality:
Listen for connections on port 13
When connected, receive data
If data received == time, then return current datetime, else return a predefined string ("Something else was requested")
The problem:
Although I accept the connection and receive the data, when transmitting data using async_send I receive empty data on the client side, even though there is no error and the value of bytes_transferred is correct.
If I try to transmit the data from within handle_accept (instead of handle_read), this works fine.
The implementation:
I worked on the boost asio tutorial found here:
I instantiate a tcp_server object, which basically initializes the acceptor and starts listening, as shown below:
int main()
{
try
{
boost::asio::io_service io_service;
tcp_server server(io_service);
io_service.run();
}
catch (std::exception& e)
{
std::cerr << e.what() << std::endl;
}
return 0;
}
and in tcp_server:
class tcp_server
{
public:
tcp_server(boost::asio::io_service& io_service)
: acceptor_(io_service, tcp::endpoint(tcp::v4(), 13))
{
start_accept();
}
private:
void start_accept()
{
using std::cout;
tcp_connection::pointer new_connection =
tcp_connection::create(acceptor_.get_io_service());
acceptor_.async_accept(new_connection->socket(),
boost::bind(&tcp_server::handle_accept, this, new_connection,
boost::asio::placeholders::error));
cout << "Done";
}
...
}
Once a connection is accepted, I am handling it as shown below:
void handle_accept(tcp_connection::pointer new_connection,
const boost::system::error_code& error)
{
if (!error)
{
new_connection->start();
}
start_accept();
}
Below is the tcp_connection::start() method:
void start()
{
boost::asio::async_read(socket_, boost::asio::buffer(inputBuffer_),
boost::bind(&tcp_connection::handle_read, shared_from_this(),
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
/* the snippet below works here - but not in handle_read
outputBuffer_ = make_daytime_string();
boost::asio::async_write(socket_, boost::asio::buffer(outputBuffer_),
boost::bind(&tcp_connection::handle_write, shared_from_this(),
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));*/
}
and in handle_read:
void handle_read(const boost::system::error_code& error, size_t bytes_transferred)
{
outputBuffer_ = make_daytime_string();
if (strcmp(inputBuffer_, "time"))
{
/*this does not work - correct bytes_transferred but nothing shown on receiving end */
boost::asio::async_write(socket_, boost::asio::buffer(outputBuffer_),
boost::bind(&tcp_connection::handle_write, shared_from_this(),
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
}
else
{
outputBuffer_ = "Something else was requested";//, 128);
boost::asio::async_write(socket_, boost::asio::buffer(outputBuffer_),
boost::bind(&tcp_connection::handle_write, shared_from_this(),
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
}
}
The handle_write is shown below:
void handle_write(const boost::system::error_code& error,
size_t bytes_transferred)
{
if (!error)
{
std::cout << "Bytes transferred: " << bytes_transferred;
std::cout << "Message sent: " << outputBuffer_;
}
else
{
std::cout << "Error in writing: " << error.message();
}
}
Note the following regarding handle_write (and this is the really strange thing):
There is no error
The bytes_transferred variable has the correct value
outputBuffer_ has the correct value (as set in handle_read)
Nevertheless, the packet received on the client side (Packet Sender) is empty (as far as data is concerned).
The complete code is shared here.
Complete test program (C++14). Note the handling of asynchronous buffering when responding to a receive - there may already be a send in progress.
#include <boost/asio.hpp>
#include <thread>
#include <future>
#include <vector>
#include <array>
#include <memory>
#include <mutex>
#include <condition_variable>
#include <iterator>
#include <iostream>
namespace asio = boost::asio;
asio::io_service server_service;
asio::io_service::work server_work{server_service};
bool listening = false;
std::condition_variable cv_listening;
std::mutex management_mutex;
auto const shared_query = asio::ip::tcp::resolver::query(asio::ip::tcp::v4(), "localhost", "8082");
void client()
try
{
asio::io_service client_service;
asio::ip::tcp::socket socket(client_service);
auto lock = std::unique_lock<std::mutex>(management_mutex);
cv_listening.wait(lock, [] { return listening; });
lock.unlock();
asio::ip::tcp::resolver resolver(client_service);
asio::connect(socket, resolver.resolve(shared_query));
auto s = std::string("time\ntime\ntime\n");
asio::write(socket, asio::buffer(s));
socket.shutdown(asio::ip::tcp::socket::shutdown_send);
asio::streambuf sb;
boost::system::error_code sink;
asio::read(socket, sb, sink);
std::cout << std::addressof(sb);
socket.close();
server_service.stop();
}
catch(const boost::system::system_error& se)
{
std::cerr << "client: " << se.code().message() << std::endl;
}
struct connection
: std::enable_shared_from_this<connection>
{
connection(asio::io_service& ios)
: strand_(ios)
{
}
void run()
{
asio::async_read_until(socket_, buffer_, "\n",
strand_.wrap([self = shared_from_this()](auto const&ec, auto size)
{
if (size == 0 )
{
// error condition
boost::system::error_code sink;
self->socket_.shutdown(asio::ip::tcp::socket::shutdown_receive, sink);
}
else {
self->buffer_.commit(size);
std::istream is(std::addressof(self->buffer_));
std::string str;
while (std::getline(is, str))
{
if (str == "time") {
self->queue_send("eight o clock");
}
}
self->run();
}
}));
}
void queue_send(std::string s)
{
assert(strand_.running_in_this_thread());
s += '\n';
send_buffers_pending_.push_back(std::move(s));
nudge_send();
}
void nudge_send()
{
assert(strand_.running_in_this_thread());
if (send_buffers_sending_.empty() and not send_buffers_pending_.empty())
{
std::swap(send_buffers_pending_, send_buffers_sending_);
std::vector<asio::const_buffers_1> send_buffers;
send_buffers.reserve(send_buffers_sending_.size());
std::transform(send_buffers_sending_.begin(), send_buffers_sending_.end(),
std::back_inserter(send_buffers),
[](auto&& str) {
return asio::buffer(str);
});
asio::async_write(socket_, send_buffers,
strand_.wrap([self = shared_from_this()](auto const& ec, auto size)
{
// should check for errors here...
self->send_buffers_sending_.clear();
self->nudge_send();
}));
}
}
asio::io_service::strand strand_;
asio::ip::tcp::socket socket_{strand_.get_io_service()};
asio::streambuf buffer_;
std::vector<std::string> send_buffers_pending_;
std::vector<std::string> send_buffers_sending_;
};
void begin_accepting(asio::ip::tcp::acceptor& acceptor)
{
auto candidate = std::make_shared<connection>(acceptor.get_io_service());
acceptor.async_accept(candidate->socket_, [candidate, &acceptor](auto const& ec)
{
if (not ec) {
candidate->run();
begin_accepting(acceptor);
}
});
}
void server()
try
{
asio::ip::tcp::acceptor acceptor(server_service);
asio::ip::tcp::resolver resolver(server_service);
auto first = resolver.resolve(shared_query);
acceptor.open(first->endpoint().protocol());
acceptor.bind(first->endpoint());
acceptor.listen();
begin_accepting(acceptor);
auto lock = std::unique_lock<std::mutex>(management_mutex);
listening = true;
lock.unlock();
cv_listening.notify_all();
server_service.run();
}
catch(const boost::system::system_error& se)
{
std::cerr << "server: " << se.code().message() << std::endl;
}
int main()
{
using future_type = std::future<void>;
auto stuff = std::array<future_type, 2> {{std::async(std::launch::async, client),
std::async(std::launch::async, server)}};
for (auto& f : stuff) f.wait();
}
There are multiple issues in this code. Some of them may be responsible for your problem:
TCP has no notion of packets, so there's no guarantee that you will ever receive time in one piece in handle_read. You need a state machine for that, and you need to respect the bytes_transferred info. If you have only received part of the message, you need to continue at the correct offset. Or you can use asio utility functions, such as reading exactly a given number of bytes or reading a line.
In addition to the last point, you shouldn't really compare the received data with strcmp. That will only work if the remote also sends a null terminator over the connection - does it?
You don't check whether an error happened, although that might manifest itself in other errors.
You are possibly issuing multiple concurrent async writes if you receive multiple data fragments in a short timespan. This is not valid in asio.
More importantly, you mutate the send buffer (outputBuffer_) while the send is in progress. This pretty much leads to undefined behavior: asio might try to write a piece of memory which is no longer valid.
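For the last two points, one common fix (sketched here with illustrative member names, not taken from the question's class) is to queue outgoing strings and keep at most one async_write in flight, so a buffer is never modified or destroyed while it is being sent:
// hypothetical members added to tcp_connection (requires #include <deque>):
//   std::deque<std::string> write_queue_;
//   bool write_in_progress_ = false;
void tcp_connection::queue_write(std::string message)
{
    write_queue_.push_back(message); // the queue keeps the data alive and untouched
    if (!write_in_progress_)
        do_write();
}
void tcp_connection::do_write()
{
    write_in_progress_ = true;
    boost::asio::async_write(socket_, boost::asio::buffer(write_queue_.front()),
        boost::bind(&tcp_connection::handle_queued_write, shared_from_this(),
            boost::asio::placeholders::error));
}
void tcp_connection::handle_queued_write(const boost::system::error_code& error)
{
    write_in_progress_ = false;
    write_queue_.pop_front(); // this buffer is done
    if (!error && !write_queue_.empty())
        do_write(); // still at most one async_write at a time
}
With a single-threaded io_service (as in the question) this needs no extra locking.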
I have solved the problem with the collective help of the comments provided on the question. The behavior I was experiencing was caused by the way async_read works. More specifically, the boost asio documentation reads:
This function is used to asynchronously read a certain number of bytes
of data from a stream. The function call always returns immediately.
The asynchronous operation will continue until one of the following
conditions is true:
The supplied buffers are full. That is, the bytes transferred is equal to the sum of the buffer sizes.
An error occurred.
The inputBuffer_ I was using to read the input was a 128-char array. The client I was using would only transfer the real data (without padding), and therefore async_read would not return until the connection was closed by the client (or 128 bytes of data had been transferred). Once the connection was closed by the client, there was no way to send back the requested data. This is also the reason it worked with @Arunmu's simple Python TCP client (because it always sent 128 bytes of data).
To fix the issues, I made the following changes (the full working code is supplied here for reference):
In tcp_connection::start: I am now using async_read_until to read the incoming data (with \n as the delimiter). The input is stored in a boost::asio::streambuf. async_read_until is guaranteed to complete once the delimiter has been found or an error has occurred, so there is no chance of issuing multiple async_writes concurrently (a sketch follows below).
In handle_read: I have included error checking, which made it much simpler to debug.
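For reference, a sketch of what the changed code can look like (requestBuffer_ is a hypothetical boost::asio::streambuf member; socket_, outputBuffer_ and the handlers are the ones from the question):
void start()
{
    boost::asio::async_read_until(socket_, requestBuffer_, "\n",
        boost::bind(&tcp_connection::handle_read, shared_from_this(),
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred));
}
void handle_read(const boost::system::error_code& error, size_t bytes_transferred)
{
    if (error)
    {
        std::cout << "Error in handle_read: " << error.message() << std::endl;
        return;
    }
    std::istream request(&requestBuffer_);
    std::string line;
    std::getline(request, line); // consumes the data up to and including '\n'
    outputBuffer_ = (line == "time") ? make_daytime_string()
                                     : "Something else was requested";
    boost::asio::async_write(socket_, boost::asio::buffer(outputBuffer_),
        boost::bind(&tcp_connection::handle_write, shared_from_this(),
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred));
}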
I recently ran into a problem with boost::asio asynchronous tasks. I want to return a pointer to an object listening on a port.
It works when I use the socket.read_some method, but that method blocks my main, and I want my MyClass::create method to return.
So I tried an async_read call, but I saw that inside my read() method no asynchronous tasks are launched. I tried to figure out what might cause the problem, but I see no solution to this issue.
Here is my code; here it uses an async_wait instead of an async_read, and the same problem appears: the timer is not launched.
Thanks for any help I might get.
The header file:
#ifndef MYCLASS_HPP
#define MYCLASS_HPP
#include <memory>
#include <boost/asio.hpp>
class MyClass
{
public:
MyClass(boost::asio::io_service& ios);
void read();
void read_handler(const boost::system::error_code& error);
static std::shared_ptr<MyClass> create(std:: string const & host, uint16_t port);
bool connect (std::string const & host, uint16_t port);
void connect_handler(const boost::system::error_code& error);
boost::asio::ip::tcp::socket m_socket;
bool m_flag;
std::vector<uint8_t> m_buffer;
};
#endif
Source file:
#include "MyClass.hpp"
#include <boost/bind.hpp>
MyClass::MyClass(boost::asio::io_service& ios)
:m_flag(false), m_socket(ios), m_buffer(20)
{
}
void MyClass::read_handler(const boost::system::error_code& er)
{
std::cout << "Timer waited 5 sec" << std::endl;
}
void MyClass::read()
{
boost::asio::deadline_timer t(m_socket.get_io_service(),boost::posix_time::seconds(5));
t.async_wait(boost::bind(&MyClass::read_handler,this,boost::asio::placeholders::error));
m_socket.get_io_service().run();//Should make the io_service wait for all asynchronous tasks to finish
std::cout << "This message should be displayed after the wait" << std::endl;
}
void MyClass::connect_handler(const boost::system::error_code& error)
{
if(!error)
{
std::cout << "Connection done" << std::endl;
m_flag = 1;
}
else
{
std::cout << "Error in connection: " << error.message() << std::endl;
}
}
//connect method
bool MyClass::connect(std::string const & host, uint16_t port)
{
boost::asio::ip::tcp::endpoint endpoint(boost::asio::ip::address::from_string(host),port);
m_socket.async_connect(endpoint,
boost::bind(&MyClass::connect_handler, this,
boost::asio::placeholders::error));
m_socket.get_io_service().run();//Wait async_connect and connect_handler to finish
if (m_flag == 0) return false;
else return true;
}
std::shared_ptr<MyClass> MyClass::create(std:: string const & host, uint16_t port)
{
boost::asio::io_service ios;
std::shared_ptr<MyClass> ptr(new MyClass(ios));
bool bol = ptr->connect(host, port);
ptr->read();
//while(1){}
if(bol == true)
{
//connection success, reading currently listening, pointer is returned to the user
return ptr;
}
else
{
//connection failure, pointer is still returned to the user but not listening as he's not connected
return ptr;
}
}
And my main:
#include <boost/date_time/posix_time/posix_time.hpp>
#include <boost/asio.hpp>
#include "MyClass.hpp"
int main()
{
try
{
std::cout << "Creation of instance" << std::endl;
std::shared_ptr <MyClass> var = MyClass::create("127.0.0.1", 8301);
std::cout << "Instance created" << std::endl;
}
catch (std::exception& e)
{
std::cerr << e.what() << std::endl;
}
return 0;
}
I figured out how to solve my problem.
I indeed had a problem with the io_service being destroyed after the create method, so the pointer returned to main was not able to continue reading.
I had to call run() at some point to launch the callbacks, but I couldn't do it in main, as I wanted main to keep doing other things.
So I created a class containing an io_service and launching a separate thread. That thread calls run() periodically. It was then added as a member of MyClass.
Now the call to create returns a pointer to MyClass, and whatever asynchronous task was launched in MyClass keeps running.
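A sketch of that kind of helper (illustrative only; instead of calling run() periodically, this variant keeps a single run() alive with a work object): it owns the io_service and runs it on its own thread, so create() can hand back the pointer while callbacks keep being dispatched.
#include <boost/asio.hpp>
#include <memory>
#include <thread>
class io_runner
{
public:
    io_runner()
        : work_(new boost::asio::io_service::work(service_)),
          thread_([this] { service_.run(); }) // runs until the work guard is gone
    {
    }
    ~io_runner()
    {
        work_.reset();      // allow run() to return when no handlers remain
        service_.stop();    // do not wait for outstanding operations
        if (thread_.joinable())
            thread_.join();
    }
    boost::asio::io_service& get_io_service() { return service_; }
private:
    boost::asio::io_service service_;
    std::unique_ptr<boost::asio::io_service::work> work_;
    std::thread thread_;
};
MyClass would then hold one of these (for example through a shared_ptr) and construct its socket from runner.get_io_service(), instead of from a local io_service that dies at the end of create().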
I'm trying to create a server that receives connections via domain sockets. I can start the server and I can see the socket being created on the filesystem. But whenever I try to connect to it via socat I get the following error:
2015/03/02 14:00:10 socat[62720] E connect(3, LEN=19 AF=1 "/var/tmp/rpc.sock", 19): Connection refused
This is my Asio code (only the .cpp files). Despite the post title I'm using the Boost-free version of Asio but I don't think that would be a problem.
namespace myapp {
DomainListener::DomainListener(const string& addr) : socket{this->service}, Listener{addr} {
remove(this->address.c_str());
stream_protocol::endpoint ep(this->address);
stream_protocol::acceptor acceptor(this->service, ep);
acceptor.async_accept(this->socket, ep, bind(&DomainListener::accept_callback, this, _1));
}
DomainListener::~DomainListener() {
this->service.stop();
remove(this->address.c_str());
}
void DomainListener::accept_callback(const error_code& ec) noexcept {
this->socket.async_read_some(asio::buffer(this->data), bind(&DomainListener::read_data, this, _1, _2));
}
void DomainListener::read_data(const error_code& ec, size_t length) noexcept {
//std::cerr << "AAA" << std::endl;
//std::cerr << this->data[0] << std::endl;
//std::cerr << "BBB" << std::endl;
}
}
Listener::Listener(const string& addr) : work{asio::io_service::work(this->service)} {
this->address = addr;
}
void Listener::listen() {
this->service.run();
}
Listener::~Listener() {
}
In the code that uses these classes I call listen() whenever I want to start listening to the socket for connections.
I've managed to get this to work with libuv, and I switched to Asio because I thought it would make for more readable code, but I'm finding the documentation to be very ambiguous.
The issue is most likely the lifetime of the acceptor.
The acceptor is an automatic variable in the DomainListener constructor. When the DomainListener constructor completes, the acceptor is destroyed, causing the acceptor to close and cancel outstanding operations, such as the async_accept operation. Cancelled operations will be provided an error code of asio::error::operation_aborted and scheduled for deferred invocation within the io_service. Hence, there may not be an active listener when attempting to connect to the domain socket. For more details on the effects of IO object destruction, see this answer.
DomainListener::DomainListener(const string&) : /* ... */
{
// ...
stream_protocol::acceptor acceptor(...);
acceptor.async_accept(..., bind(accept_callback, ...));
} // acceptor destroyed, and accept_callback likely cancelled
To resolve this, consider extending the lifetime of the acceptor by making it a data member for DomainListener. Additionally, checking the error_code provided to asynchronous operations can provide more insight into the asynchronous call chains.
Here is a complete minimal example demonstrating using domain sockets with Asio.
#include <cstdio>
#include <iostream>
#include <array> // buffer_ below is a std::array
#include <boost/asio.hpp>
#include <boost/bind.hpp>
/// @brief server demonstrates using domain sockets to accept
/// and read from a connection.
class server
{
public:
server(
boost::asio::io_service& io_service,
const std::string& file)
: io_service_(io_service),
acceptor_(io_service_,
boost::asio::local::stream_protocol::endpoint(file)),
client_(io_service_)
{
std::cout << "start accepting connection" << std::endl;
acceptor_.async_accept(client_,
boost::bind(&server::handle_accept, this,
boost::asio::placeholders::error));
}
private:
void handle_accept(const boost::system::error_code& error)
{
std::cout << "handle_accept: " << error.message() << std::endl;
if (error) return;
std::cout << "start reading" << std::endl;
client_.async_read_some(boost::asio::buffer(buffer_),
boost::bind(&server::handle_read, this,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
}
void handle_read(
const boost::system::error_code& error,
std::size_t bytes_transferred)
{
std::cout << "handle_read: " << error.message() << std::endl;
if (error) return;
std::cout << "read: ";
std::cout.write(buffer_.begin(), bytes_transferred);
std::cout.flush();
}
private:
boost::asio::io_service& io_service_;
boost::asio::local::stream_protocol::acceptor acceptor_;
boost::asio::local::stream_protocol::socket client_;
std::array<char, 1024> buffer_;
};
int main(int argc, char* argv[])
{
if (argc != 2)
{
std::cerr << "Usage: <file>\n";
return 1;
}
// Remove file on startup and exit.
std::string file(argv[1]);
struct file_remover
{
file_remover(std::string file): file_(file) { std::remove(file.c_str()); }
~file_remover() { std::remove(file_.c_str()); }
std::string file_;
} remover(file);
// Create and run the server.
boost::asio::io_service io_service;
server s(io_service, file);
io_service.run();
}
Coliru does not have socat installed, so the following commands use OpenBSD netcat to write "asio domain socket example" to the domain socket:
export SOCKFILE=$PWD/example.sock
./a.out $SOCKFILE &
sleep 1
echo "asio domain socket example" | nc -U $SOCKFILE
Which outputs:
start accepting connection
handle_accept: Success
start reading
handle_read: Success
read: asio domain socket example