Async read completes, but buffer does not contain expected results - c++

I've been following numerous online tutorials on asynchronous networking in Asio, so if I've made a really obvious mistake, that's the explanation.
Nonetheless, I've written a program that sets up both a client and a server simultaneously and tries to communicate between the two. Simply connecting and making requests to send/receive data seems to be working fine, but the data itself isn't being sent.
#define ASIO_STANDALONE
#include<asio.hpp>
#include<thread>
#include<iostream>
#include<vector>
#include<array>
#include<mutex>
#include<memory>
#include<functional>
#define IPADDRESS "127.0.0.1"
#define PORT "6118"
enum side_type {
t_server, t_client
};
std::mutex m_lock;
std::array<char, 32> clientBuffer;
std::array<char, 32> serverBuffer;
bool stop(false);
void read_function(const asio::error_code&, size_t, std::shared_ptr<asio::ip::tcp::socket>, std::array<char, 32> &, side_type &);
void write_function(const asio::error_code&, size_t, std::shared_ptr<asio::ip::tcp::socket>, std::array<char, 32> &, side_type &);
void read_function(const asio::error_code& ec, size_t bytes_read, std::shared_ptr<asio::ip::tcp::socket> socket, std::array<char, 32> & buffer, side_type & type) {
if (ec) return;
using namespace std;
using namespace std::placeholders;
char value = buffer[0];
{
lock_guard<mutex> guard(m_lock);
string type_str = type == t_server ? "Server" : "Client";
cout << "Value of " << int(value) << " read by " << type_str << "." << endl;
}
if (value >= 100) stop = true;
else {
if(type == t_server)
buffer[0] = value + 1;
socket->async_write_some(asio::buffer(&buffer[0], buffer.max_size()), bind(write_function, _1, _2, socket, buffer, type));
}
}
void write_function(const asio::error_code& ec, size_t bytes_written, std::shared_ptr<asio::ip::tcp::socket> socket, std::array<char, 32> & buffer, side_type & type) {
if (ec) return;
using namespace std;
using namespace std::placeholders;
socket->async_read_some(asio::buffer(&buffer[0], buffer.max_size()), bind(read_function, _1, _2, socket, buffer, type));
}
void work_function(std::shared_ptr<asio::io_service> io_service) {
using namespace std;
asio::error_code ec;
while (!ec) {
try {
io_service->run(ec);
break;
}
catch (exception & e) {
lock_guard<mutex> guard(m_lock);
cout << "Exception thrown: \"" << e.what() << "\"." << endl;
}
}
}
void connect_function(const asio::error_code & ec, std::shared_ptr<asio::ip::tcp::socket> socket) {
using namespace std;
using namespace std::placeholders;
lock_guard<mutex> guard(m_lock);
if (ec) {
cout << "Error Connecting: " << ec << endl;
}
else {
cout << "Successful Connection!" << endl;
socket->async_read_some(asio::buffer(&clientBuffer[0], clientBuffer.max_size()), bind(read_function, _1, _2, socket, clientBuffer, t_client));
}
}
void accept_function(const asio::error_code & ec, std::shared_ptr<asio::ip::tcp::socket> socket) {
using namespace std;
using namespace std::placeholders;
lock_guard<mutex> guard(m_lock);
if (ec) {
cout << "Error Accepting: " << ec << endl;
}
else {
cout << "Successful Acception!" << endl;
serverBuffer[0] = 0;
socket->async_write_some(asio::buffer(&serverBuffer[0], serverBuffer.max_size()), bind(write_function, _1, _2, socket, serverBuffer, t_server));
}
}
int main(int argc, char** argv) {
using namespace std;
using namespace std::placeholders;
shared_ptr<asio::io_service> io_service(new asio::io_service());
shared_ptr<asio::io_service::work> work(new asio::io_service::work(*io_service));
vector<shared_ptr<thread>> threads;
int num_of_threads = thread::hardware_concurrency();
for (auto i = 0; i < thread::hardware_concurrency(); i++) {
threads.push_back(shared_ptr<thread>(new thread(work_function, io_service)));
}
using namespace asio::ip;
tcp::resolver resolver(*io_service);
tcp::resolver::query query(IPADDRESS, PORT);
tcp::resolver::iterator iterator = resolver.resolve(query);
tcp::endpoint endpoint = *iterator;
cout << "Connecting to " << endpoint << endl;
shared_ptr<tcp::acceptor> acceptor(new tcp::acceptor(*io_service));
shared_ptr<tcp::socket> acc_socket(new tcp::socket(*io_service));
shared_ptr<tcp::socket> socket(new tcp::socket(*io_service));
acceptor->open(endpoint.protocol());
acceptor->set_option(tcp::acceptor::reuse_address(false));
acceptor->bind(endpoint);
acceptor->listen(asio::socket_base::max_connections);
acceptor->async_accept(*acc_socket, bind(accept_function, _1, acc_socket));
asio::error_code ec;
socket->async_connect(endpoint, bind(connect_function, _1, socket));
//while (!stop);
cout << "Press Any Key to Continue..." << endl;
cin.get();
socket->shutdown(tcp::socket::shutdown_both, ec);
socket->close(ec);
work.reset();
while (!io_service->stopped());
for (shared_ptr<thread> & t : threads) {
t->join();
}
return 0;
}
As output, I've been getting the following:
Connecting to 127.0.0.1:6118
Press Any Key to Continue...
Successful Connection!
Successful Acception!
Value of 0 read by Client.
Value of 0 read by Server.
Value of 0 read by Client.
Value of 1 read by Server.
Value of 0 read by Client.
Value of 2 read by Server.
Value of 0 read by Client.
Value of 3 read by Server.
......
Value of 0 read by Client.
Value of 98 read by Server.
Value of 0 read by Client.
Value of 99 read by Server.
Value of 0 read by Client.
Value of 100 read by Server.
However, what I'm expecting is:
Connecting to 127.0.0.1:6118
Press Any Key to Continue...
Successful Connection!
Successful Acception!
Value of 0 read by Client.
Value of 0 read by Server.
Value of 1 read by Client.
Value of 1 read by Server.
Value of 2 read by Client.
Value of 2 read by Server.
Value of 3 read by Client.
Value of 3 read by Server.
......
Value of 98 read by Client.
Value of 98 read by Server.
Value of 99 read by Client.
Value of 99 read by Server.
Value of 100 read by Client.
Value of 100 read by Server.
Clearly what's happening is that the Server buffer is getting updated (when I manually increment the value), while the Client Buffer never gets updated by the async_read_some function. Additionally, because the client buffer never gets updated, the server is just reading in old values (also without getting updated) and thus technically has incorrect output as well. However, I don't know what's wrong. I'm passing in all my buffers the way I think I'm supposed to, and all the functions seem to be bound correctly, but the data isn't being passed. So what did I do wrong?

The problem is that a copy of the buffer is being bound to the completion handler, which is a different buffer than that which is provided to the asynchronous operations:
socket->async_read_some(asio::buffer(buffer), std::bind(..., buffer, ...));
//                                   ^~~~~~ = reference      ^~~~~~ = copy
In the above snippet, the async_read_some() operation will operate on buffer, and the completion handler will be provided a copy of buffer before the operation has made any modifications. To resolve this, use std::ref() to pass a reference to std::bind().
socket->async_read_some(asio::buffer(buffer), std::bind(..., std::ref(buffer), ...));
//                                   ^~~~~~ = reference               ^~~~~~ = reference
In this case, passing a reference will also fix a potential case where undefined behavior could have been invoked. The async_write_some() and async_read_some() operations require that ownership of the underlying buffer memory is retained by the caller, who must guarantee that it remains valid until the completion handler is called. When std::bind() was being provided a copy of the buffer, the buffer's lifetime was bound to the functor object returned from std::bind(), which may have ended before the completion handler was invoked.
void read_function(
...,
std::shared_ptr<asio::ip::tcp::socket> socket,
std::array<char, 32>& buffer,
...)
{
...
socket->async_write_some(asio::buffer(buffer), handler);
} // buffer's lifetime ends shortly after returning from this function
socket->async_read_some(
asio::buffer(buffer),
std::bind(&read_function, ..., socket, buffer, ...));
Here is an example demonstrating the fundamental problem and behavior:
#include <array>
#include <cassert>
#include <functional>
int get_data(std::array<char, 32>& data)
{
return data[0];
}
int main()
{
std::array<char, 32> data;
data[0] = 0;
auto fn_copy = std::bind(&get_data, data);
auto fn_ref = std::bind(&get_data, std::ref(data));
data[0] = 1;
assert(0 == fn_copy());
assert(1 == fn_ref());
}

Your ReadHandler and WriteHandler:
void read_function(const asio::error_code&, size_t, std::shared_ptr<asio::ip::tcp::socket>, std::array<char, 32> &, side_type &);
void write_function(const asio::error_code&, size_t, std::shared_ptr<asio::ip::tcp::socket>, std::array<char, 32> &, side_type &);
don't conform to Asio's ReadHandler and WriteHandler requirements, i.e. just:
void read_function(const asio::error_code&, size_t);
void write_function(const asio::error_code&, size_t);
Your application needs to "own" the read and write buffers and not expect their locations to be sent back to you by the handlers. If you use clientBuffer and serverBuffer where appropriate, it should work correctly.
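As a rough illustration of that point (a sketch only, not a drop-in rewrite of the whole program), here is what the client side could look like with handlers that match void(const asio::error_code&, size_t) and with the buffer owned at namespace scope. The global clientSocket pointer is an assumption introduced here for brevity; the original code passes the socket around as a shared_ptr instead.
// Sketch of the client side only. clientBuffer already lives at namespace scope
// in the question; clientSocket is a hypothetical global added for this sketch.
std::shared_ptr<asio::ip::tcp::socket> clientSocket;

void client_write_done(const asio::error_code&, size_t);

void client_read_done(const asio::error_code& ec, size_t /*bytes_read*/) {
    if (ec) return;
    std::cout << "Value of " << int(clientBuffer[0]) << " read by Client." << std::endl;
    clientSocket->async_write_some(asio::buffer(clientBuffer), &client_write_done);
}

void client_write_done(const asio::error_code& ec, size_t /*bytes_written*/) {
    if (ec) return;
    clientSocket->async_read_some(asio::buffer(clientBuffer), &client_read_done);
}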

Related

How to implement an IPC protocol using Boost ASIO?

I'm trying to implement a simple IPC protocol for a project that will be built using Boost ASIO. The idea is for the communication to happen over TCP/IP, with a server hosting the backend and a client that uses the data received from the server to build the frontend. The whole session would go like this:
The connection is established
The client sends a 2 byte packet with some information that will be used by the server to build its response (this is stored as the struct propertiesPacket)
The server processes the data received and stores the output in a struct of variable size called processedData
The server sends a 2 byte unsigned integer that will indicate the client what size the struct it will receive has (let's say the struct is of size n bytes)
The server sends the struct data as a n byte packet
The connection is ended
I tried implementing this by myself, following the great tutorial available in Boost ASIO's documentation, as well as the examples included in the library and some repos I found on GitHub, but as this is my first time working with networking and IPC, I couldn't make it work; my client throws an exception saying the connection was reset by the peer.
What I have right now is this:
// File client.cpp
int main(int argc, char *argv[])
{
try {
propertiesPacket properties;
// ...
// We set the data inside the properties struct
// ...
boost::asio::io_context io;
boost::asio::ip::tcp::socket socket(io);
boost::asio::ip::tcp::resolver resolver(io);
boost::asio::connect(socket, resolver.resolve(argv[1], argv[2]));
boost::asio::write(socket, boost::asio::buffer(&properties, sizeof(propertiesPacket)));
unsigned short responseSize {};
boost::asio::read(socket, boost::asio::buffer(&responseSize, sizeof(short)));
processedData* response = reinterpret_cast<processedData*>(malloc(responseSize));
boost::asio::read(socket, boost::asio::buffer(response, responseSize));
// ...
// The client handles the data
// ...
return 0;
} catch (std::exception &e) {
std::cerr << e.what() << std::endl;
}
}
// File server.cpp
class ServerConnection
: public std::enable_shared_from_this<ServerConnection>
{
public:
using TCPSocket = boost::asio::ip::tcp::socket;
ServerConnection::ServerConnection(TCPSocket socket)
: socket_(std::move(socket)),
properties_(nullptr),
filePacket_(nullptr),
filePacketSize_(0)
{
}
void start() { doRead(); }
private:
void doRead()
{
auto self(shared_from_this());
socket_.async_read_some(boost::asio::buffer(properties_, sizeof(propertiesPacket)),
[this, self](boost::system::error_code ec, std::size_t /*length*/)
{
if (!ec) {
processData();
doWrite(&filePacketSize_, sizeof(short));
doWrite(filePacket_, sizeof(*filePacket_));
}
});
}
void doWrite(void* data, size_t length)
{
auto self(shared_from_this());
boost::asio::async_write(socket_, boost::asio::buffer(data, length),
[this, self](boost::system::error_code ec, std::size_t /*length*/)
{
if (!ec) { doRead(); }
});
}
void processData()
{ /* Data is processed */ }
TCPSocket socket_;
propertiesPacket* properties_;
processedData* filePacket_;
short filePacketSize_;
};
class Server
{
public:
using IOContext = boost::asio::io_context;
using TCPSocket = boost::asio::ip::tcp::socket;
using TCPAcceptor = boost::asio::ip::tcp::acceptor;
Server::Server(IOContext& io, short port)
: socket_(io),
acceptor_(io, boost::asio::ip::tcp::endpoint(boost::asio::ip::tcp::v4(), port))
{
doAccept();
}
private:
void doAccept()
{
acceptor_.async_accept(socket_,
[this](boost::system::error_code ec)
{
if (!ec) {
std::make_shared<ServerConnection>(std::move(socket_))->start();
}
doAccept();
});
}
TCPSocket socket_;
TCPAcceptor acceptor_;
};
What did I do wrong? My guess is that calling doWrite multiple times inside the doRead function, when doWrite then also calls doRead, is part of what's causing problems, but I don't know what the correct way of writing data asynchronously multiple times is. I'm also sure that isn't the only part of my code that isn't behaving as I think it should.
Besides the problems with the code shown that I mentioned in the comments, there is indeed the problem that you suspected:
My guess is that inside the doRead function, calling multiple times the doWrite function, when that function then also calls doRead is in part what's causing problems
The fact that "doRead" is in the same function isn't necessarily a problem (that's just full-duplex socket IO). However "calling multiple times" is. See the docs:
This operation is implemented in terms of zero or more calls to the stream's async_write_some function, and is known as a composed operation. The program must ensure that the stream performs no other write operations (such as async_write, the stream's async_write_some function, or any other composed operations that perform writes) until this operation completes.
The usual way is to put the whole message in a single buffer, but if that would be "expensive" to copy, you can use a BufferSequence, which is known as scatter/gather buffers.
Specifically, you would replace
doWrite(&filePacketSize_, sizeof(short));
doWrite(filePacket_, sizeof(*filePacket_));
with something like
std::vector<boost::asio::const_buffer> msg{
boost::asio::buffer(&filePacketSize_, sizeof(short)),
boost::asio::buffer(filePacket_, sizeof(*filePacket_)),
};
doWrite(msg);
Note that this assumes that filePacketSize and filePacket have been assigned proper values!
You could of course modify do_write to accept the buffer sequence:
template <typename Buffers> void doWrite(Buffers msg)
{
auto self(shared_from_this());
boost::asio::async_write(
socket_, msg,
[this, self](boost::system::error_code ec, std::size_t /*length*/) {
if (!ec) {
doRead();
}
});
}
But in your case I'd simplify by inlining the body (now that you don't call it more than once anyway).
SIDE NOTES
Don't use new or delete. NEVER use malloc in C++. Never use reinterpret_cast<> (except in the very rarest of exceptions that the standard allows!). Instead of
processedData* response = reinterpret_cast<processedData*>(malloc(responseSize));
Just use
processedData response;
(optionally add {} for value-initialization of aggregates). If you need variable-length messages, consider putting a vector or an array<char, MAXLEN> inside the message. Of course, array is fixed length, but it preserves POD-ness, so it might be easier to work with. If you use vector, you'd want a scatter/gather read into a buffer sequence like I showed above for the write side.
Instead of reinterpreting between inconsistent short and unsigned short types, perhaps just spell the type with the standard sizes: std::uint16_t everywhere.
Keep in mind that you are not taking byte order into account, so your protocol will NOT be portable across compilers/architectures.
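To illustrate the last two points (a sketch only; the helper names are made up here), sending the length prefix as a fixed-width integer in network byte order keeps both sides in agreement. Boost.Endian is one way to do the conversion; htons/ntohs would work just as well:
#include <boost/asio.hpp>
#include <boost/endian/conversion.hpp>
#include <cstdint>

// Sketch: portable exchange of the length prefix. sock is assumed to be a
// connected boost::asio::ip::tcp::socket.
void send_length(boost::asio::ip::tcp::socket& sock, std::uint16_t length) {
    std::uint16_t wire = boost::endian::native_to_big(length); // host -> network order
    boost::asio::write(sock, boost::asio::buffer(&wire, sizeof(wire)));
}

std::uint16_t receive_length(boost::asio::ip::tcp::socket& sock) {
    std::uint16_t wire = 0;
    boost::asio::read(sock, boost::asio::buffer(&wire, sizeof(wire)));
    return boost::endian::big_to_native(wire);                 // network -> host order
}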
Provisional Fixes
This is the listing I ended up with after reviewing the code you shared.
Live On Coliru
#include <boost/asio.hpp>
#include <iostream>
namespace ba = boost::asio;
using boost::asio::ip::tcp;
using boost::system::error_code;
using TCPSocket = tcp::socket;
struct processedData { };
struct propertiesPacket { };
// File server.cpp
class ServerConnection : public std::enable_shared_from_this<ServerConnection> {
public:
ServerConnection(TCPSocket socket) : socket_(std::move(socket))
{ }
void start() {
std::clog << __PRETTY_FUNCTION__ << std::endl;
doRead();
}
private:
void doRead()
{
std::clog << __PRETTY_FUNCTION__ << std::endl;
auto self(shared_from_this());
socket_.async_read_some(
ba::buffer(&properties_, sizeof(properties_)),
[this, self](error_code ec, std::size_t length) {
std::clog << "received: " << length << std::endl;
if (!ec) {
processData();
std::vector<ba::const_buffer> msg{
ba::buffer(&filePacketSize_, sizeof(uint16_t)),
ba::buffer(&filePacket_, filePacketSize_),
};
ba::async_write(socket_, msg,
[this, self = shared_from_this()](
error_code ec, std::size_t length) {
std::clog << " written: " << length
<< std::endl;
if (!ec) {
doRead();
}
});
}
});
}
void processData() {
std::clog << __PRETTY_FUNCTION__ << std::endl;
/* Data is processed */
}
TCPSocket socket_;
propertiesPacket properties_{};
processedData filePacket_{};
uint16_t filePacketSize_ = sizeof(filePacket_);
};
class Server
{
public:
using IOContext = ba::io_context;
using TCPAcceptor = tcp::acceptor;
Server(IOContext& io, uint16_t port)
: socket_(io)
, acceptor_(io, {tcp::v4(), port})
{
doAccept();
}
private:
void doAccept()
{
std::clog << __PRETTY_FUNCTION__ << std::endl;
acceptor_.async_accept(socket_, [this](error_code ec) {
if (!ec) {
std::clog << "Accepted " << socket_.remote_endpoint()
<< std::endl;
std::make_shared<ServerConnection>(std::move(socket_))->start();
doAccept();
} else {
std::clog << "Accept " << ec.message() << std::endl;
}
});
}
TCPSocket socket_;
TCPAcceptor acceptor_;
};
// File client.cpp
int main(int argc, char *argv[])
{
ba::io_context io;
Server s{io, 6869};
std::thread server_thread{[&io] {
io.run();
}};
// always check argc!
std::vector<std::string> args(argv, argv + argc);
if (args.size() == 1)
args = {"demo", "127.0.0.1", "6869"};
// avoid race with server accept thread
post(io, [&io, args] {
try {
propertiesPacket properties;
// ...
// We set the data inside the properties struct
// ...
tcp::socket socket(io);
tcp::resolver resolver(io);
connect(socket, resolver.resolve(args.at(1), args.at(2)));
write(socket, ba::buffer(&properties, sizeof(properties)));
uint16_t responseSize{};
ba::read(socket, ba::buffer(&responseSize, sizeof(uint16_t)));
std::clog << "Client responseSize: " << responseSize << std::endl;
processedData response{};
assert(responseSize <= sizeof(response));
ba::read(socket, ba::buffer(&response, responseSize));
// ...
// The client handles the data
// ...
// for online demo:
io.stop();
} catch (std::exception const& e) {
std::clog << e.what() << std::endl;
}
});
io.run_one();
server_thread.join();
}
Printing something similar to
void Server::doAccept()
Server::doAccept()::<lambda(boost::system::error_code)> Success
void ServerConnection::start()
void ServerConnection::doRead()
void Server::doAccept()
received: 1
void ServerConnection::processData()
written: 3
void ServerConnection::doRead()
Client responseSize: 1

Boost::Asio::Read is not populating buffer

Here is my server class, which renders an async event to send a string to my client when it connects.
The message is definitely dispatched to the client, as the write handler is invoked successfully without any errors:
class Server {
private:
void writeHandler(ServerConnection connection, const boost::system::error_code &error_code,
std::size_t bytes_transferred) {
if (!(error_code)) {
std::cout << "SENT "<<bytes_transferred <<" BYTES"<< std::endl;
}
}
void renderWriteEvent(ServerConnection connection, const std::string& str) {
std::cout << "RENDERING WRITE EVENT" << std::endl;
connection->write = str;
boost::asio::async_write(connection->socket, boost::asio::buffer(connection->write),
boost::bind(&Server::writeHandler, this, connection,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
}
};
Now on the client side, after successfully connecting to the server, I call
void renderRead(){
std::cout<<"Available Bytes: "<<socket.available()<<std::endl;
std::string foo;
boost::system::error_code error_code;
std::size_t x = socket.read_some(boost::asio::buffer(foo), error_code);
std::cout<<error_code.message()<<std::endl;
std::cout<<"Bytes read: "<<x<<std::endl;
std::cout<<"Available Bytes: "<<socket.available()<<std::endl;
std::cout<<foo<<std::endl;
//boost::asio::async_read(socket, boost::asio::buffer(read_string), boost::bind(&Client::readHandler, this, boost::asio::placeholders::error, boost::asio::placeholders::bytes_transferred));
}
which outputs "Available Bytes: 12"
Then, when calling boost::asio::read, I get 0 bytes read and no error. I don't understand what's wrong. After the read, the number of bytes available for reading on the socket is still printed as 12.
A key point here is that read_some() doesn't allocate any memory; it fills memory that is provided to it. For your code, this means ASIO will only replace the data already existing inside foo, and it will never exceed those bounds.
But you have std::string foo;, which is a default-constructed string, aka an empty string.
So ASIO is populating the buffer you are passing just fine. However, you are passing it a buffer with no room in it. ASIO fills it as much as possible: 0 bytes.
You can test this for yourself by adding the following to your code:
std::string foo;
std::cout << "Available room in buffer: "<< foo.size() << std::endl;
The fix would be to pass a buffer with memory already allocated. You could initialize the string with a length, but using a raw block of bytes that you interpret later as a string_view is more explicit.
constexpr std::size_t buffer_size = 32;
std::array<char, buffer_size> foo;
std::size_t x = socket.read_some(boost::asio::buffer(foo), error_code);
//...
std::string_view message(foo.data(), x);
std::cout << message << std::endl;
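If you would rather keep std::string, the same principle applies: give the string room up front and trim it to what was actually read. A minimal sketch, reusing socket and error_code from the snippet above:
std::string foo(32, '\0');   // 32 bytes of room instead of an empty string
std::size_t x = socket.read_some(boost::asio::buffer(&foo[0], foo.size()), error_code);
foo.resize(x);               // keep only the bytes that were actually read
std::cout << foo << std::endl;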

Ping (ICMP) multiple destinations in parallel using Boost.Asio

I have modified the ICMP pinging implementation (https://think-async.com/Asio/asio-1.18.0/src/examples/cpp03/icmp/ping.cpp) to ping multiple destinations concurrently instead of sequentially as shown in the example. I tried it with std::thread and std::async (along with futures).
But it works as expected only when none of the destinations are reachable. Is it not possible to do this concurrently? I had disabled re-pinging on result/timeout in the pinger class.
const char* ping(const char* destination)
{
asio::io_context io_context;
pinger p(io_context, destination);
io_context.run();
return p.get();
}
int main()
{
std::future<const char*> a1 = std::async(std::launch::async, ping, "10.2.7.196");
std::future<const char*> a2 = std::async(std::launch::async, ping, "10.2.7.19");
std::cout<<a1.get()<<std::endl;
std::cout<<a2.get()<<std::endl;
}
You wouldn't need std::async¹.
But from the little bit of code you show I can guess² that your error is returning a raw char const*. The chance is considerable that it refers to data inside pinger that - obviously - isn't valid anymore when the future is completed (pinger would be out of scope).
A typical way for this to happen is if you stored output in a std::string member and returned that from get() using .c_str().
A reason why it would "work" for unreachable targets would be if get() simply returned a string literal like return "unreachable", which would NOT have the lifetime problem described above.
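To make that concrete, here is a hedged sketch of the suspected failure mode; the body of get() is guessed, not taken from the question:
// Hypothetical: get() returns a pointer into storage owned by the pinger.
const char* ping(const char* destination)
{
    asio::io_context io_context;
    pinger p(io_context, destination);
    io_context.run();
    return p.get();   // e.g. implemented as "return output_.c_str();"
}                     // p (and its storage) is destroyed here, so the pointer dangles

// Returning an owning type sidesteps the lifetime problem entirely:
std::string ping_fixed(const char* destination)
{
    asio::io_context io_context;
    pinger p(io_context, destination);
    io_context.run();
    return p.get();   // assuming get() returns std::string by value
}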
Ditching The Crystal Ball
So, imagining a correct way to return results:
Live On Wandbox³
#include <boost/asio.hpp>
#include <boost/bind/bind.hpp>
namespace asio = boost::asio;
#include "icmp_header.hpp"
#include "ipv4_header.hpp"
using asio::steady_timer;
using asio::ip::icmp;
namespace chrono = asio::chrono;
class pinger {
public:
pinger(asio::io_context& io_context, const char* destination)
: resolver_(io_context), socket_(io_context, icmp::v4()),
timer_(io_context), sequence_number_(0), num_replies_(0) {
destination_ = *resolver_.resolve(icmp::v4(), destination, "").begin();
start_send();
start_receive();
}
std::string get() { auto r = _output.str(); _output.str(""); return r; }
private:
void start_send() {
std::string body("\"Hello!\" from Asio ping.");
// Create an ICMP header for an echo request.
icmp_header echo_request;
echo_request.type(icmp_header::echo_request);
echo_request.code(0);
echo_request.identifier(get_identifier());
echo_request.sequence_number(++sequence_number_);
compute_checksum(echo_request, body.begin(), body.end());
// Encode the request packet.
asio::streambuf request_buffer;
std::ostream os(&request_buffer);
os << echo_request << body;
// Send the request.
time_sent_ = steady_timer::clock_type::now();
socket_.send_to(request_buffer.data(), destination_);
// Wait up to five seconds for a reply.
num_replies_ = 0;
timer_.expires_at(time_sent_ + chrono::seconds(5));
timer_.async_wait(boost::bind(&pinger::handle_timeout, this));
}
void handle_timeout() {
if (num_replies_ == 0)
_output << "Request timed out";
//// Requests must be sent no less than one second apart.
//timer_.expires_at(time_sent_ + chrono::seconds(1));
//timer_.async_wait(boost::bind(&pinger::start_send, this));
}
void start_receive() {
// Discard any data already in the buffer.
reply_buffer_.consume(reply_buffer_.size());
// Wait for a reply. We prepare the buffer to receive up to 64KB.
socket_.async_receive(reply_buffer_.prepare(65536),
boost::bind(&pinger::handle_receive, this,
boost::placeholders::_2));
}
void handle_receive(std::size_t length) {
// The actual number of bytes received is committed to the buffer so
// that we can extract it using a std::istream object.
reply_buffer_.commit(length);
// Decode the reply packet.
std::istream is(&reply_buffer_);
ipv4_header ipv4_hdr;
icmp_header icmp_hdr;
is >> ipv4_hdr >> icmp_hdr;
// We can receive all ICMP packets received by the host, so we need to
// filter out only the echo replies that match the our identifier and
// expected sequence number.
if (is && icmp_hdr.type() == icmp_header::echo_reply &&
icmp_hdr.identifier() == get_identifier() &&
icmp_hdr.sequence_number() == sequence_number_) {
// If this is the first reply, interrupt the five second timeout.
if (num_replies_++ == 0)
timer_.cancel();
// Print out some information about the reply packet.
chrono::steady_clock::time_point now = chrono::steady_clock::now();
chrono::steady_clock::duration elapsed = now - time_sent_;
_output
<< length - ipv4_hdr.header_length() << " bytes from "
<< ipv4_hdr.source_address()
<< ": icmp_seq=" << icmp_hdr.sequence_number()
<< ", ttl=" << ipv4_hdr.time_to_live() << ", time="
<< chrono::duration_cast<chrono::milliseconds>(elapsed).count();
}
//start_receive();
}
static unsigned short get_identifier() {
#if defined(ASIO_WINDOWS)
return static_cast<unsigned short>(::GetCurrentProcessId());
#else
return static_cast<unsigned short>(::getpid());
#endif
}
std::ostringstream _output;
icmp::resolver resolver_;
icmp::endpoint destination_;
icmp::socket socket_;
steady_timer timer_;
unsigned short sequence_number_;
chrono::steady_clock::time_point time_sent_;
asio::streambuf reply_buffer_;
std::size_t num_replies_;
};
std::string ping1(const char* destination) {
asio::io_context io_context;
pinger p(io_context, destination);
io_context.run();
return p.get();
}
#include <list>
#include <iostream>
int main(int argc, char** argv) {
std::list<std::future<std::string> > futures;
for (char const* arg : std::vector(argv+1, argv+argc)) {
futures.push_back(std::async(std::launch::async, ping1, arg));
}
for (auto& f : futures) {
std::cout << f.get() << std::endl;
}
}
As you can see I made the list of destinations command line parameters. Therefore, when I run it like:
sudo ./sotest 127.0.0.{1..100} |& sort | uniq -c
I get this output:
1 32 bytes from 127.0.0.12: icmp_seq=1, ttl=64, time=0
1 32 bytes from 127.0.0.16: icmp_seq=1, ttl=64, time=0
7 32 bytes from 127.0.0.44: icmp_seq=1, ttl=64, time=0
1 32 bytes from 127.0.0.77: icmp_seq=1, ttl=64, time=1
1 32 bytes from 127.0.0.82: icmp_seq=1, ttl=64, time=1
1 32 bytes from 127.0.0.9: icmp_seq=1, ttl=64, time=0
88 Request timed out
I'm not actually sure why so many time out, but the point is that the code is now correct. It runs and completes UBSan/ASan clean. See below for the fix discovered later, though.
Now, Let's Drop The Future
The futures are likely creating a lot of overhead. As is the fact that you have an io_service per ping. Let's do it all on a single one.
#include <list>
#include <iostream>
int main(int argc, char** argv) {
asio::io_context io_context;
std::list<pinger> pingers;
for (char const* arg : std::vector(argv+1, argv+argc)) {
pingers.emplace_back(io_context, arg);
}
io_context.run();
for (auto& p : pingers) {
std::cout << p.get() << std::endl;
}
}
Note that the synchronization point here is io_context.run(), just like before, except now it runs all the pings in one go, on the main thread.
Correcting Cancellation
So, I noticed now why so many pings were misrepresented as unreachable.
The reason is that handle_receive needs to filter out ICMP replies that are not in response to our ping, so when that happens we need to keep calling start_receive() until we get our reply:
void start_receive() {
// Discard any data already in the buffer.
reply_buffer_.consume(reply_buffer_.size());
// Wait for a reply. We prepare the buffer to receive up to 64KB.
socket_.async_receive(reply_buffer_.prepare(65536),
boost::bind(&pinger::handle_receive, this,
boost::asio::placeholders::error(),
boost::asio::placeholders::bytes_transferred()));
}
void handle_receive(boost::system::error_code ec, std::size_t length) {
if (ec) {
if (ec == boost::asio::error::operation_aborted) {
_output << "Request timed out";
} else {
_output << "error: " << ec.message();
}
return;
}
// The actual number of bytes received is committed to the buffer so
// that we can extract it using a std::istream object.
reply_buffer_.commit(length);
// Decode the reply packet.
std::istream is(&reply_buffer_);
ipv4_header ipv4_hdr;
icmp_header icmp_hdr;
is >> ipv4_hdr >> icmp_hdr;
// We can receive all ICMP packets received by the host, so we need to
// filter out only the echo replies that match the our identifier and
// expected sequence number.
if (is && icmp_hdr.type() == icmp_header::echo_reply &&
icmp_hdr.identifier() == get_identifier() &&
icmp_hdr.sequence_number() == sequence_number_) {
// If this is the first reply, interrupt the five second timeout.
if (num_replies_++ == 0)
timer_.cancel();
// Print out some information about the reply packet.
chrono::steady_clock::time_point now = chrono::steady_clock::now();
chrono::steady_clock::duration elapsed = now - time_sent_;
_output
<< length - ipv4_hdr.header_length() << " bytes from "
<< ipv4_hdr.source_address()
<< ": icmp_seq=" << icmp_hdr.sequence_number()
<< ", ttl=" << ipv4_hdr.time_to_live() << ", time="
<< chrono::duration_cast<chrono::milliseconds>(elapsed).count();
} else start_receive();
}
Now, handle_timeout can be simplified to:
void handle_timeout() {
if (num_replies_ == 0) {
socket_.cancel(); // _output is set in response to error_code
}
}
In fact, we might simplify further and remove num_replies_ altogether, but I'll leave that as an exercise for the reader.
Full Demo
Live On Wandbox
#include <boost/asio.hpp>
#include <boost/bind/bind.hpp>
namespace asio = boost::asio;
#include "icmp_header.hpp"
#include "ipv4_header.hpp"
using asio::steady_timer;
using asio::ip::icmp;
namespace chrono = asio::chrono;
class pinger {
public:
pinger(asio::io_context& io_context, const char* destination)
: resolver_(io_context), socket_(io_context, icmp::v4()),
timer_(io_context), sequence_number_(0), num_replies_(0) {
destination_ = *resolver_.resolve(icmp::v4(), destination, "").begin();
start_send();
start_receive();
}
std::string get() { auto r = _output.str(); _output.str(""); return r; }
private:
void start_send() {
std::string body("\"Hello!\" from Asio ping.");
// Create an ICMP header for an echo request.
icmp_header echo_request;
echo_request.type(icmp_header::echo_request);
echo_request.code(0);
echo_request.identifier(get_identifier());
echo_request.sequence_number(++sequence_number_);
compute_checksum(echo_request, body.begin(), body.end());
// Encode the request packet.
asio::streambuf request_buffer;
std::ostream os(&request_buffer);
os << echo_request << body;
// Send the request.
time_sent_ = steady_timer::clock_type::now();
socket_.send_to(request_buffer.data(), destination_);
// Wait up to five seconds for a reply.
num_replies_ = 0;
timer_.expires_at(time_sent_ + chrono::seconds(5));
timer_.async_wait(boost::bind(&pinger::handle_timeout, this));
}
void handle_timeout() {
if (num_replies_ == 0) {
socket_.cancel(); // _output is set in response to error_code
}
}
void start_receive() {
// Discard any data already in the buffer.
reply_buffer_.consume(reply_buffer_.size());
// Wait for a reply. We prepare the buffer to receive up to 64KB.
socket_.async_receive(reply_buffer_.prepare(65536),
boost::bind(&pinger::handle_receive, this,
boost::asio::placeholders::error(),
boost::asio::placeholders::bytes_transferred()));
}
void handle_receive(boost::system::error_code ec, std::size_t length) {
if (ec) {
if (ec == boost::asio::error::operation_aborted) {
_output << "Request timed out";
} else {
_output << "error: " << ec.message();
}
return;
}
// The actual number of bytes received is committed to the buffer so
// that we can extract it using a std::istream object.
reply_buffer_.commit(length);
// Decode the reply packet.
std::istream is(&reply_buffer_);
ipv4_header ipv4_hdr;
icmp_header icmp_hdr;
is >> ipv4_hdr >> icmp_hdr;
// We can receive all ICMP packets received by the host, so we need to
// filter out only the echo replies that match the our identifier and
// expected sequence number.
if (is && icmp_hdr.type() == icmp_header::echo_reply &&
icmp_hdr.identifier() == get_identifier() &&
icmp_hdr.sequence_number() == sequence_number_) {
// If this is the first reply, interrupt the five second timeout.
if (num_replies_++ == 0)
timer_.cancel();
// Print out some information about the reply packet.
chrono::steady_clock::time_point now = chrono::steady_clock::now();
chrono::steady_clock::duration elapsed = now - time_sent_;
_output
<< length - ipv4_hdr.header_length() << " bytes from "
<< ipv4_hdr.source_address()
<< ": icmp_seq=" << icmp_hdr.sequence_number()
<< ", ttl=" << ipv4_hdr.time_to_live() << ", time="
<< chrono::duration_cast<chrono::milliseconds>(elapsed).count();
} else start_receive();
}
static unsigned short get_identifier() {
#if defined(ASIO_WINDOWS)
return static_cast<unsigned short>(::GetCurrentProcessId());
#else
return static_cast<unsigned short>(::getpid());
#endif
}
std::ostringstream _output;
icmp::resolver resolver_;
icmp::endpoint destination_;
icmp::socket socket_;
steady_timer timer_;
unsigned short sequence_number_;
chrono::steady_clock::time_point time_sent_;
asio::streambuf reply_buffer_;
std::size_t num_replies_;
};
#include <list>
#include <iostream>
int main(int argc, char** argv) {
asio::io_context io_context;
std::list<pinger> pingers;
for (char const* arg : std::vector(argv+1, argv+argc)) {
pingers.emplace_back(io_context, arg);
}
io_context.run();
for (auto& p : pingers) {
std::cout << p.get() << std::endl;
}
}
Now the output of e.g. time sudo ./sotest 127.0.0.{1..100} 18.0.0.1 is as expected:
32 bytes from 127.0.0.1: icmp_seq=1, ttl=64, time=8
32 bytes from 127.0.0.2: icmp_seq=1, ttl=64, time=8
32 bytes from 127.0.0.3: icmp_seq=1, ttl=64, time=8
32 bytes from 127.0.0.4: icmp_seq=1, ttl=64, time=8
...
32 bytes from 127.0.0.98: icmp_seq=1, ttl=64, time=0
32 bytes from 127.0.0.99: icmp_seq=1, ttl=64, time=0
32 bytes from 127.0.0.100: icmp_seq=1, ttl=64, time=0
Request timed out
¹ in fact that is rarely/never the right tool
² using my crystal ball
³ obviously we have no permissions to craft ICMP packets, let alone send them on Wandbox

Thread-safety when accessing data from N threads in the context of an async TCP server

As the title says, I have a question concerning the following scenario (simplified example):
Assume that I have an object of the Generator class below, which continuously updates its dataChunk member (running in the main thread).
class Generator
{
void generateData();
uint8_t dataChunk[999];
}
Furthermore I have an async acceptor of TCP connections to which 1-N clients can connect (running in a second thread).
The acceptor starts a new thread for each new client connection, in which an object of the Connection class below receives a request message from the client and provides a fraction of the dataChunk (belonging to the Generator) as an answer, then waits for a new request, and so on...
class Connection
{
void setDataChunk(uint8_t* dataChunk);
void handleRequest();
uint8_t* dataChunk;
}
Finally, the actual question: the desired behaviour is that the Generator object generates a new dataChunk and then waits until all 1-N Connection objects have dealt with their client requests before it generates the next one.
How do I lock the dataChunk against write access by the Generator object while the Connection objects deal with their requests, given that all Connection objects in their respective threads are supposed to have read access at the same time during their request-handling phase?
On the other hand, the Connection objects are supposed to wait for a new dataChunk after dealing with their respective requests, without dropping a new client request.
--> I think a single mutex won't do the trick here.
My first idea was to share a struct between the objects, with a semaphore for the Generator and a vector of semaphores for the connections. With these, every object could "understand" the state of the full system and work accordingly.
What do you think is best practice in cases like this?
Thanks in advance!
There are several ways to solve it.
You can use std::shared_mutex.
void Connection::handleRequest()
{
while(true)
{
std::shared_lock<std::shared_mutex> lock(GeneratorObj.shared_mutex);
if(GeneratorObj.DataIsAvailable()) // we need to know that data is available
{
// Send to client
break;
}
}
}
void Generator::generateData()
{
std::unique_lock<std::shared_mutex> lock(GeneratorObj.shared_mutex);
// Generate data
}
Or you can use a boost::lockfree::queue, but data structures will be different.
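For completeness, a minimal sketch of the boost::lockfree::queue push/pop mechanics only (the element type must be trivially copyable, so chunks are passed by pointer here; how one chunk would fan out to N connections is deliberately left open):
#include <boost/lockfree/queue.hpp>
#include <cstdint>
#include <vector>

// Chunks are heap-allocated and handed over by pointer (a trivially copyable type).
boost::lockfree::queue<std::vector<uint8_t>*> chunkQueue(128);

void generator_side() {
    auto* chunk = new std::vector<uint8_t>(999);
    // ... fill *chunk ...
    while (!chunkQueue.push(chunk)) { }  // push returns false if the queue is full
}

void connection_side() {
    std::vector<uint8_t>* chunk = nullptr;
    if (chunkQueue.pop(chunk)) {
        // ... send *chunk to the client ...
        delete chunk;                    // the consumer takes ownership
    }
}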
How do I lock the dataChunk against write access by the Generator object while the Connection objects deal with their requests, given that all Connection objects in their respective threads are supposed to have read access at the same time during their request-handling phase?
I'd make a logical chain of operations that includes the generation.
Here's a sample:
it is completely single threaded
accepts unbounded connections and deals with dropped connections
it uses a deadline_timer object to signal a barrier while waiting for the send of a chunk to (many) connections to complete. This makes it convenient to put the generateData call in an async call chain.
Live On Coliru
#include <boost/asio.hpp>
#include <list>
#include <iostream>
namespace ba = boost::asio;
using ba::ip::tcp;
using boost::system::error_code;
using Clock = std::chrono::high_resolution_clock;
using Duration = Clock::duration;
using namespace std::chrono_literals;
struct Generator {
void generateData();
uint8_t dataChunk[999];
};
struct Server {
Server(unsigned short port) : _port(port) {
_barrier.expires_at(boost::posix_time::neg_infin);
_acc.set_option(tcp::acceptor::reuse_address());
accept_loop();
}
void generate_loop() {
assert(n_sending == 0);
garbage_collect(); // remove dead connections, don't interfere with sending
if (_socks.empty()) {
std::clog << "No more connections; pausing Generator\n";
} else {
_gen.generateData();
_barrier.expires_at(boost::posix_time::pos_infin);
for (auto& s : _socks) {
++n_sending;
ba::async_write(s, ba::buffer(_gen.dataChunk), [this,&s](error_code ec, size_t written) {
assert(n_sending);
--n_sending; // even if failed, decreases pending operation
if (ec) {
std::cerr << "Write: " << ec.message() << "\n";
s.close();
}
std::clog << "Written: " << written << ", " << n_sending << " to go\n";
if (!n_sending) {
// green light to generate next chunk
_barrier.expires_at(boost::posix_time::neg_infin);
}
});
}
_barrier.async_wait([this](error_code ec) {
if (ec && ec != ba::error::operation_aborted)
std::cerr << "Client activity: " << ec.message() << "\n";
else generate_loop();
});
}
}
void accept_loop() {
_acc.async_accept(_accepting, [this](error_code ec) {
if (ec) {
std::cerr << "Accept fail: " << ec.message() << "\n";
} else {
std::clog << "Accepted: " << _accepting.remote_endpoint() << "\n";
_socks.push_back(std::move(_accepting));
if (_socks.size() == 1) // first connection?
generate_loop(); // start generator
accept_loop();
}
});
}
void run_for(Duration d) {
_svc.run_for(d);
}
void garbage_collect() {
_socks.remove_if([](tcp::socket& s) { return !s.is_open(); });
}
private:
ba::io_service _svc;
unsigned short _port;
tcp::acceptor _acc { _svc, { {}, _port } };
tcp::socket _accepting {_svc};
std::list<tcp::socket> _socks;
Generator _gen;
size_t n_sending = 0;
ba::deadline_timer _barrier {_svc};
};
int main() {
Server s(6767);
s.run_for(3s); // COLIRU
}
#include <fstream>
// synchronously generate random data chunks
void Generator::generateData() {
std::ifstream ifs("/dev/urandom", std::ios::binary);
ifs.read(reinterpret_cast<char*>(dataChunk), sizeof(dataChunk));
std::clog << "Generated chunk: " << ifs.gcount() << "\n";
}
Prints (for just the 1 client):
Accepted: 127.0.0.1:60870
Generated chunk: 999
Written: 999, 0 to go
Generated chunk: 999
[... snip ~4000 lines ...]
Written: 999, 0 to go
Generated chunk: 999
Write: Broken pipe
Written: 0, 0 to go
No more connections; pausing Generator
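For reference, a minimal sketch of a client to try this out (not part of the original answer): it connects to localhost:6767 and keeps reading full 999-byte chunks until the server drops the connection.
#include <boost/asio.hpp>
#include <iostream>

int main() {
    namespace ba = boost::asio;
    using ba::ip::tcp;
    ba::io_service svc;
    tcp::socket sock(svc);
    tcp::endpoint ep(ba::ip::address_v4::loopback(), 6767);
    sock.connect(ep);

    uint8_t chunk[999];
    boost::system::error_code ec;
    while (!ec) {
        size_t n = ba::read(sock, ba::buffer(chunk), ec); // read one full chunk
        std::cout << "received " << n << " bytes\n";
    }
    std::cout << "done: " << ec.message() << "\n";
}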

boost::asio read/write trouble

I started to learn boost::asio and tried to make a simple client-server application. Right now I'm having trouble with the server. Here is its code:
int main(int argc, char* argv[])
{
using namespace boost::asio;
io_service service;
ip::tcp::endpoint endp(ip::tcp::v4(), 2001);
ip::tcp::acceptor acc(service, endp);
for (;;)
{
socker_ptr sock(new ip::tcp::socket(service));
acc.accept(*sock);
for (;;)
{
byte data[512];
size_t len = sock->read_some(buffer(data)); // <--- here exception at second iteration
if (len > 0)
write(*sock, buffer("ok", 2));
}
}
}
It correctly accepts the client socket, correctly reads, then writes the data and starts a new iteration. On the second iteration an exception is thrown.
And I don't get why it happens.
I just need the server to read/write continuously while the client is present, and when the client is gone the server must accept the next client.
So the main question: why does the exception happen and how do I avoid it?
...
UPDATE 1: I found that on the first iteration the error code of both the read and write operations is successful. But (!) on the second iteration, at the place where the exception is raised, the error code is "End of file". But why?
You get the end of file condition because the remote end of the connection closed or dropped the connection.
You should be handling the system errors, or using the overloads that take a reference to boost::system::error_code. How else would you ever terminate the infinite loop?
Live On Coliru
#include <boost/asio.hpp>
#include <iostream>
int main()
{
using namespace boost::asio;
io_service service;
ip::tcp::endpoint endp(ip::tcp::v4(), 2001);
ip::tcp::acceptor acc(service, endp);
for (;;)
{
ip::tcp::socket sock(service);
acc.accept(sock);
boost::system::error_code ec;
while (!ec)
{
uint8_t data[512];
size_t len = sock.read_some(buffer(data), ec);
if (len > 0)
{
std::cout << "received " << len << " bytes\n";
write(sock, buffer("ok", 2));
}
}
std::cout << "Closed: " << ec.message() << "\n";
}
}
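For completeness, a minimal sketch of a matching client (assuming the server above listens on localhost:2001; this is not from the original answer):
#include <boost/asio.hpp>
#include <iostream>

int main()
{
    using namespace boost::asio;
    io_service service;
    ip::tcp::socket sock(service);
    ip::tcp::endpoint endp(ip::address_v4::loopback(), 2001);
    sock.connect(endp);

    write(sock, buffer("hi", 2));      // any payload; the server answers "ok"

    char reply[2];
    read(sock, buffer(reply));         // read exactly the 2-byte reply
    std::cout << "server said: " << std::string(reply, 2) << "\n";
}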