Boost ASIO performing async write/read/write handshake with a timer - c++

I have an application where I need to connect to a socket, perform a handshake (send command1, get a response, send command2), and then receive data. A deadline timer is set to expire after a timeout, stop the io_service, and then attempt to reconnect. There is no error from my first async_write, but the following async_read waits until the timer expires, and then it reconnects in an infinite loop.
My code looks like:
#include <boost/asio.hpp>
#include <boost/bind/bind.hpp>
#include <iostream>
#include <string>
#include <memory>
#include <boost/date_time/posix_time/posix_time.hpp>
using namespace std;
using boost::asio::ip::tcp;
static shared_ptr<boost::asio::io_service> _ios;
static shared_ptr<boost::asio::deadline_timer> timer;
static shared_ptr<boost::asio::ip::tcp::socket> tcp_sock;
static shared_ptr<tcp::resolver> _resolver;
static boost::asio::ip::tcp::resolver::results_type eps;
string buffer(1024,0);
void handle_read(const boost::system::error_code& ec, size_t bytes)
{
if (ec)
{
cout << "error: " << ec.message() << endl;
_ios->stop();
return;
}
// got first response, send off reply
if (buffer == "response")
{
boost::asio::async_write(*tcp_sock, boost::asio::buffer("command2",7),
[](auto ec, auto bytes)
{
if (ec)
{
cout << "write error: " << ec.message() << endl;
_ios->stop();
return;
}
});
}
else
{
// parse incoming data
}
// attempt next read
timer->expires_from_now(boost::posix_time::seconds(10));
boost::asio::async_read(*tcp_sock, boost::asio::buffer(buffer,buffer.size()), handle_read);
}
void get_response()
{
timer->expires_from_now(boost::posix_time::seconds(10));
boost::asio::async_read(*tcp_sock, boost::asio::buffer(buffer,buffer.size()), handle_read);
}
void on_connected(const boost::system::error_code& ec, tcp::endpoint)
{
if (!tcp_sock->is_open())
{
cout << "socket is not open" << endl;
_ios->stop();
}
else if (ec)
{
cout << "error: " << ec.message() << endl;
_ios->stop();
return;
}
else
{
cout << "connected" << endl;
// do handshake (no errors?)
boost::asio::async_write(*tcp_sock, boost::asio::buffer("command1",7),
[](auto ec, auto bytes)
{
if (ec)
{
cout << "write error: " << ec.message() << endl;
_ios->stop();
return;
}
get_response();
});
}
}
void check_timer()
{
if (timer->expires_at() <= boost::asio::deadline_timer::traits_type::now())
{
tcp_sock->close();
timer->expires_at(boost::posix_time::pos_infin);
}
timer->async_wait(boost::bind(check_timer));
}
void init(string ip, string port)
{
// set/reset data and connect
_resolver.reset(new tcp::resolver(*_ios));
eps = _resolver->resolve(ip, port);
timer.reset(new boost::asio::deadline_timer(*_ios));
tcp_sock.reset(new boost::asio::ip::tcp::socket(*_ios));
timer->expires_from_now(boost::posix_time::seconds(5));
// start async connect
boost::asio::async_connect(*tcp_sock, eps, on_connected);
timer->async_wait(boost::bind(check_timer));
}
int main(int argc, char** argv)
{
while (1)
{
// start new io context
_ios.reset(new boost::asio::io_service);
init(argv[1],argv[2]);
_ios->run();
cout << "try reconnect" << endl;
}
return 0;
}
Why would I be timing out? When I follow the same procedure with netcat, things look fine. The async_write reports no errors, and I make sure not to call the async_read for the response until I am inside the write handler.

Others have been spot on: you use a "blanket" read, which means it only completes on error (like EOF) or when the buffer is full (see the docs).
Besides that, your code is over-complicated (excess dynamic allocation, manual new, globals, etc.).
The following simplified/cleaned up version still exhibits your problem: http://coliru.stacked-crooked.com/a/8f5d0820b3cee186
Since it looks like you just want to limit the overall time of the request, I'd suggest dropping the timer and simply limiting the time the io_context is allowed to run.
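In its simplest form that means replacing the plain run() call in main() with a bounded run, e.g. (a one-line sketch against the globals in the question, assuming a Boost version where io_service is an alias for io_context):

// in main(), instead of _ios->run():
_ios->run_for(std::chrono::seconds(10)); // returns when out of work or after 10 seconds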
The code below also shows how to use '\n' as a message delimiter and how to avoid manually managing dynamic buffers:
Live On Coliru
#include <boost/asio.hpp>
#include <iomanip>
#include <iostream>
#include <memory>
#include <string>
namespace asio = boost::asio;
using asio::ip::tcp;
using boost::system::error_code;
using namespace std::literals;
struct Client {
#define HANDLE(memfun) std::bind(&Client::memfun, this, std::placeholders::_1, std::placeholders::_2)
Client(std::string const& ip, std::string const& port) {
async_connect(_sock, tcp::resolver{_ios}.resolve(ip, port), HANDLE(on_connected));
}
void run() { _ios.run_for(10s); }
private:
asio::io_service _ios;
asio::ip::tcp::socket _sock{_ios};
std::string _buffer;
void on_connected(error_code ec, tcp::endpoint) {
std::cout << "on_connected: " << ec.message() << std::endl;
if (ec)
return;
async_write(_sock, asio::buffer("command1\n"sv), [this](error_code ec, size_t) {
std::cout << "write: " << ec.message() << std::endl;
if (!ec)
get_response();
});
}
void get_response() {
async_read_until(_sock, asio::dynamic_buffer(_buffer /*, 1024*/), "\n", HANDLE(on_read));
}
void on_read(error_code ec, size_t bytes) {
std::cout << "handle_read: " << ec.message() << " " << bytes << std::endl;
if (ec)
return;
auto cmd = _buffer.substr(0, bytes);
_buffer.erase(0, bytes);
// got first response, send off reply
std::cout << "Handling command " << quoted(cmd) << std::endl;
if (cmd == "response\n") {
async_write(_sock, asio::buffer("command2\n"sv), [](error_code ec, size_t) {
std::cout << "write2: " << ec.message() << std::endl;
});
} else {
// TODO parse cmd
}
get_response(); // attempt next read
}
};
int main(int argc, char** argv) {
assert(argc == 3);
while (1) {
Client(argv[1], argv[2]).run();
std::this_thread::sleep_for(1s); // for demo on COLIRU
std::cout << "try reconnect" << std::endl;
}
}
With output live on coliru:
on_connected: Connection refused
try reconnect
on_connected: Success
write: Success
command1
handle_read: Success 4
Handling command "one
"
handle_read: Success 9
Handling command "response
"
write2: Success
command2
handle_read: Success 6
Handling command "three
"
handle_read: End of file 0
try reconnect
on_connected: Success
write: Success
command1
Local interactive demo:
Sidenote: as long as resolve() isn't happening asynchronously, it will not be subject to the time limit.
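If you want the resolve step covered by the same time budget, it has to happen asynchronously too, which means keeping the resolver alive as a member. A rough sketch against the Client class above (resolver_ is a hypothetical new member of type tcp::resolver, constructed from _ios):

Client(std::string const& ip, std::string const& port) {
    resolver_.async_resolve(ip, port,
        [this](error_code ec, tcp::resolver::results_type eps) {
            if (!ec)
                async_connect(_sock, eps, HANDLE(on_connected));
        });
}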

Related

Boost Asio and Udp Poll() No incoming data

I have to handle information from 100 ports in parallel for 100ms per second.
I am using Ubuntu OS.
I did some research and saw that the poll() function is a good candidate for avoiding opening 100 threads to handle the data coming in over UDP in parallel.
I did the main part with Boost and tried to integrate poll() with it.
The problem is that when the client sends data to the server, I receive nothing.
According to Wireshark, the data is arriving at the right host (localhost, port 1234).
Did I miss something, or did I get something wrong?
The test code (server) :
#include <deque>
#include <iostream>
#include <chrono>
#include <thread>
#include <sys/poll.h>
#include <boost/optional.hpp>
#include <boost/asio.hpp>
#include <boost/bind/bind.hpp>
using boost::asio::ip::udp;
using namespace boost::asio;
using namespace std::chrono_literals;
std::string ip_address = "127.0.0.1";
template<typename T, size_t N>
size_t arraySize( T(&)[N] )
{
return(N);
}
class UdpReceiver
{
using Resolver = udp::resolver;
using Sockets = std::deque<udp::socket>;
using EndPoint = udp::endpoint;
using Buffer = std::array<char, 100>; // receiver buffer
public:
explicit UdpReceiver()
: work_(std::ref(resolver_context)), thread_( [this]{ resolver_context.run(); })
{ }
~UdpReceiver()
{
work_ = boost::none; // using work to keep run active always !
thread_.join();
}
void async_resolve(udp::resolver::query const& query_) {
resolver_context.post([this, query_] { do_resolve(query_); });
}
// callback for event-loop in main thread
void run_handler(int fd_idx) {
// start reading
auto result = read(fd_idx, receive_buf.data(), sizeof(Buffer));
// increment number of received packets
received_packets = received_packets + 1;
std::cout << "Received bytes " << result << " current recorded packets " << received_packets <<'\n';
// run handler posted from resolver threads
handler_context.poll();
handler_context.reset();
}
static void handle_receive(boost::system::error_code error, udp::resolver::iterator const& iterator) {
std::cout << "handle_resolve:\n"
" " << error.message() << "\n";
if (!error)
std::cout << " " << iterator->endpoint() << "\n";
}
// get current file descriptor
int fd(size_t idx)
{
return sockets[idx].native_handle();
}
private:
void do_resolve(boost::asio::ip::udp::resolver::query const& query_) {
boost::system::error_code error;
Resolver resolver(resolver_context);
Resolver::iterator result = resolver.resolve(query_, error);
sockets.emplace_back(udp::socket(resolver_context, result->endpoint()));
// post handler callback to service running in main thread
resolver_context.post(boost::bind(&UdpReceiver::handle_receive, error, result));
}
private:
Sockets sockets;
size_t received_packets = 0;
EndPoint remote_receiver;
Buffer receive_buf {};
io_context resolver_context;
io_context handler_context;
boost::optional<boost::asio::io_context::work> work_;
std::thread thread_;
};
int main (int argc, char** argv)
{
UdpReceiver udpReceiver;
udpReceiver.async_resolve(udp::resolver::query(ip_address, std::to_string(1234)));
//logic
pollfd fds[2] { };
for(int i = 0; i < arraySize(fds); ++i)
{
fds[i].fd = udpReceiver.fd(0);
fds[i].events = 0;
fds[i].events |= POLLIN;
fcntl(fds[i].fd, F_SETFL, O_NONBLOCK);
}
// simple event-loop
while (true) {
if (poll(fds, arraySize(fds), -1)) // waiting for wakeup call. Timeout - inf
{
for(auto &fd : fds)
{
if(fd.revents & POLLIN) // checking if we have something to read
{
fd.revents = 0; // reset kernel message
udpReceiver.run_handler(fd.fd); // call resolve handler. Do read !
}
}
}
}
return 0;
}
This looks like a confused mix of C-style poll code and Asio code. The point is:
you don't need poll() (Asio does the polling internally, using epoll/select/kqueue/IOCP, whatever is available)
UDP is connectionless, so you don't need more than one socket to receive from all "connections" (senders)
I'd replace it all with a single udp::socket on a single thread. You don't even have to manage the thread/work:
net::thread_pool io(1); // single threaded
udp::socket s{io, {{}, 1234}};
Let's run an asynchronous receive loop for 5s:
std::array<char, 100> receive_buffer;
udp::endpoint sender;
std::function<void(error_code, size_t)> read_loop;
read_loop = [&](error_code ec, size_t bytes) {
if (bytes != size_t(-1)) {
//std::cout << "read_loop (" << ec.message() << ")\n";
if (ec)
return;
received_packets += 1;
unique_senders.insert(sender);
//std::cout << "Received:" << bytes << " sender:" << sender << " recorded:" << received_packets << "\n";
//std::cout << std::string_view(receive_buffer.data(), bytes) << "\n";
}
s.async_receive_from(net::buffer(receive_buffer), sender, read_loop);
};
read_loop(error_code{}, -1); // prime the async pump
// after 5s stop
std::this_thread::sleep_for(5s);
post(io, [&s] { s.cancel(); });
io.join();
At the end, we can report the statistics:
std::cout << "A total of " << received_packets << " were received from "
<< unique_senders.size() << " unique senders\n";
With a simulated load in bash:
function client() { while read a; do echo "$a" > /dev/udp/localhost/1234 ; done < /etc/dictionaries-common/words; }
for a in {1..20}; do client& done; time wait
We get:
A total of 294808 were received from 28215 unique senders
real 0m5,007s
user 0m0,801s
sys 0m0,830s
This is obviously not optimized; the bottleneck here is likely the many bash subshells being launched for the clients.
Full Listing
#include <boost/asio.hpp>
#include <boost/bind/bind.hpp>
#include <iostream>
#include <set>
namespace net = boost::asio;
using boost::asio::ip::udp;
using boost::system::error_code;
using namespace std::chrono_literals;
int main ()
{
net::thread_pool io(1); // single threaded
udp::socket s{io, {{}, 1234}};
std::set<udp::endpoint> unique_senders;
size_t received_packets = 0;
{
std::array<char, 100> receive_buffer;
udp::endpoint sender;
std::function<void(error_code, size_t)> read_loop;
read_loop = [&](error_code ec, size_t bytes) {
if (bytes != size_t(-1)) {
//std::cout << "read_loop (" << ec.message() << ")\n";
if (ec)
return;
received_packets += 1;
unique_senders.insert(sender);
//std::cout << "Received:" << bytes << " sender:" << sender << " recorded:" << received_packets << "\n";
//std::cout << std::string_view(receive_buffer.data(), bytes) << "\n";
}
s.async_receive_from(net::buffer(receive_buffer), sender, read_loop);
};
read_loop(error_code{}, -1); // prime the async pump
// after 5s stop
std::this_thread::sleep_for(5s);
post(io, [&s] { s.cancel(); });
io.join();
}
std::cout << "A total of " << received_packets << " were received from "
<< unique_senders.size() << " unique senders\n";
}
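As an aside, if you want to take the bash subshells out of the picture, a tiny Asio-based sender can generate the load instead (a hypothetical sketch, not tuned for throughput; it assumes the server above is listening on localhost:1234):

#include <boost/asio.hpp>
#include <string>
namespace net = boost::asio;
using net::ip::udp;

int main() {
    net::io_context io;
    udp::socket s(io, udp::v4()); // open an unbound IPv4 UDP socket
    udp::endpoint server(net::ip::make_address("127.0.0.1"), 1234);
    for (int i = 0; i < 100000; ++i) {
        std::string msg = "packet " + std::to_string(i);
        s.send_to(net::buffer(msg), server); // blocking sends are fine for a load generator
    }
}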

RESTServer loop forever to accept new connections outside the class

I have the following RESTServer implemented using boost::beast. The way the server is started is using
void http_server(tcp::acceptor& acceptor, tcp::socket& socket).
Logically, the acceptor and socket should belong to the http_connection class instead of living in a separate free function outside of it. What is the reason it is implemented like this?
#include <boost/beast/core.hpp>
#include <boost/beast/http.hpp>
#include <boost/beast/version.hpp>
#include <boost/asio.hpp>
#include <chrono>
#include <cstdlib>
#include <ctime>
#include <iostream>
#include <memory>
#include <string>
namespace beast = boost::beast; // from <boost/beast.hpp>
namespace http = beast::http; // from <boost/beast/http.hpp>
namespace net = boost::asio; // from <boost/asio.hpp>
using tcp = boost::asio::ip::tcp; // from <boost/asio/ip/tcp.hpp>
namespace my_program_state
{
std::size_t
request_count()
{
static std::size_t count = 0;
return ++count;
}
std::time_t
now()
{
return std::time(0);
}
}
class http_connection : public std::enable_shared_from_this<http_connection>
{
public:
http_connection(tcp::socket socket)
: socket_(std::move(socket))
{
}
// Initiate the asynchronous operations associated with the connection.
void
start()
{
read_request();
check_deadline();
}
private:
// The socket for the currently connected client.
tcp::socket socket_;
// The buffer for performing reads.
beast::flat_buffer buffer_{8192};
// The request message.
http::request<http::dynamic_body> request_;
// The response message.
http::response<http::dynamic_body> response_;
// The timer for putting a deadline on connection processing.
net::steady_timer deadline_{
socket_.get_executor(), std::chrono::seconds(60)};
// Asynchronously receive a complete request message.
void
read_request()
{
auto self = shared_from_this();
http::async_read(
socket_,
buffer_,
request_,
[self](beast::error_code ec,
std::size_t bytes_transferred)
{
boost::ignore_unused(bytes_transferred);
if(!ec)
self->process_request();
});
}
// Determine what needs to be done with the request message.
void
process_request()
{
response_.version(request_.version());
response_.keep_alive(false);
switch(request_.method())
{
case http::verb::get:
response_.result(http::status::ok);
response_.set(http::field::server, "Beast");
create_response();
break;
default:
// We return responses indicating an error if
// we do not recognize the request method.
response_.result(http::status::bad_request);
response_.set(http::field::content_type, "text/plain");
beast::ostream(response_.body())
<< "Invalid request-method '"
<< std::string(request_.method_string())
<< "'";
break;
}
write_response();
}
// Construct a response message based on the program state.
void
create_response()
{
if(request_.target() == "/count")
{
response_.set(http::field::content_type, "text/html");
beast::ostream(response_.body())
<< "<html>\n"
<< "<head><title>Request count</title></head>\n"
<< "<body>\n"
<< "<h1>Request count</h1>\n"
<< "<p>There have been "
<< my_program_state::request_count()
<< " requests so far.</p>\n"
<< "</body>\n"
<< "</html>\n";
}
else if(request_.target() == "/time")
{
response_.set(http::field::content_type, "text/html");
beast::ostream(response_.body())
<< "<html>\n"
<< "<head><title>Current time</title></head>\n"
<< "<body>\n"
<< "<h1>Current time</h1>\n"
<< "<p>The current time is "
<< my_program_state::now()
<< " seconds since the epoch.</p>\n"
<< "</body>\n"
<< "</html>\n";
}
else
{
response_.result(http::status::not_found);
response_.set(http::field::content_type, "text/plain");
beast::ostream(response_.body()) << "File not found\r\n";
}
}
// Asynchronously transmit the response message.
void
write_response()
{
auto self = shared_from_this();
response_.set(http::field::content_length, response_.body().size());
http::async_write(
socket_,
response_,
[self](beast::error_code ec, std::size_t)
{
self->socket_.shutdown(tcp::socket::shutdown_send, ec);
self->deadline_.cancel();
});
}
// Check whether we have spent enough time on this connection.
void
check_deadline()
{
auto self = shared_from_this();
deadline_.async_wait(
[self](beast::error_code ec)
{
if(!ec)
{
// Close socket to cancel any outstanding operation.
self->socket_.close(ec);
}
});
}
};
// "Loop" forever accepting new connections.
void
http_server(tcp::acceptor& acceptor, tcp::socket& socket)
{
acceptor.async_accept(socket,
[&](beast::error_code ec)
{
if(!ec)
std::make_shared<http_connection>(std::move(socket))->start();
http_server(acceptor, socket);
});
}
int
main(int argc, char* argv[])
{
try
{
// Check command line arguments.
if(argc != 3)
{
std::cerr << "Usage: " << argv[0] << " <address> <port>\n";
std::cerr << " For IPv4, try:\n";
std::cerr << " receiver 0.0.0.0 80\n";
std::cerr << " For IPv6, try:\n";
std::cerr << " receiver 0::0 80\n";
return EXIT_FAILURE;
}
auto const address = net::ip::make_address(argv[1]);
unsigned short port = static_cast<unsigned short>(std::atoi(argv[2]));
net::io_context ioc{1};
tcp::acceptor acceptor{ioc, {address, port}};
tcp::socket socket{ioc};
http_server(acceptor, socket);
ioc.run();
}
catch(std::exception const& e)
{
std::cerr << "Error: " << e.what() << std::endl;
return EXIT_FAILURE;
}
}
One reason would be that the author wanted to keep the logic separate.
Moreover, I think it would complicate creating new sessions if you moved the listener code into the client session.
The acceptor and socket objects should stay independent. You may reference them from your client and use them if you want, but since these are more "global", unique objects, they should stay outside of the session. Alternatively, they can be put into a separate class, as sketched below.
Roughly speaking, the acceptor should just listen for incoming connection attempts from remote hosts and create the sessions accordingly.
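If you do want it object-oriented, the free function translates almost mechanically into a small listener class that owns the acceptor and the socket. A sketch along the lines of the Beast "advanced server" examples (not the exact tutorial code), reusing the namespaces and http_connection from the question:

class listener : public std::enable_shared_from_this<listener> {
    tcp::acceptor acceptor_;
    tcp::socket socket_;
public:
    listener(net::io_context& ioc, tcp::endpoint ep)
        : acceptor_(ioc, ep), socket_(ioc) {}

    void run() { do_accept(); }

private:
    void do_accept() {
        acceptor_.async_accept(socket_,
            [self = shared_from_this()](beast::error_code ec) {
                if (!ec)
                    std::make_shared<http_connection>(std::move(self->socket_))->start();
                self->do_accept(); // keep accepting, just like the free function
            });
    }
};

// in main(): std::make_shared<listener>(ioc, tcp::endpoint{address, port})->run();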

Gracefully shutdown boost::beast HTTPServer

I have an HTTP server (Boost.Beast) taken from here: Boost Beast HTTP Server. The function void
http_server(tcp::acceptor& acceptor, tcp::socket& socket) keeps the server running forever. I want to know if there is a graceful way to shut the server down.
#include <boost/beast/core.hpp>
#include <boost/beast/http.hpp>
#include <boost/beast/version.hpp>
#include <boost/asio.hpp>
#include <chrono>
#include <cstdlib>
#include <ctime>
#include <iostream>
#include <memory>
#include <string>
namespace beast = boost::beast; // from <boost/beast.hpp>
namespace http = beast::http; // from <boost/beast/http.hpp>
namespace net = boost::asio; // from <boost/asio.hpp>
using tcp = boost::asio::ip::tcp; // from <boost/asio/ip/tcp.hpp>
namespace my_program_state
{
std::size_t
request_count()
{
static std::size_t count = 0;
return ++count;
}
std::time_t
now()
{
return std::time(0);
}
}
class http_connection : public std::enable_shared_from_this<http_connection>
{
public:
http_connection(tcp::socket socket)
: socket_(std::move(socket))
{
}
// Initiate the asynchronous operations associated with the connection.
void
start()
{
read_request();
check_deadline();
}
private:
// The socket for the currently connected client.
tcp::socket socket_;
// The buffer for performing reads.
beast::flat_buffer buffer_{8192};
// The request message.
http::request<http::dynamic_body> request_;
// The response message.
http::response<http::dynamic_body> response_;
// The timer for putting a deadline on connection processing.
net::steady_timer deadline_{
socket_.get_executor(), std::chrono::seconds(60)};
// Asynchronously receive a complete request message.
void
read_request()
{
auto self = shared_from_this();
http::async_read(
socket_,
buffer_,
request_,
[self](beast::error_code ec,
std::size_t bytes_transferred)
{
boost::ignore_unused(bytes_transferred);
if(!ec)
self->process_request();
});
}
// Determine what needs to be done with the request message.
void
process_request()
{
response_.version(request_.version());
response_.keep_alive(false);
switch(request_.method())
{
case http::verb::get:
response_.result(http::status::ok);
response_.set(http::field::server, "Beast");
create_response();
break;
default:
// We return responses indicating an error if
// we do not recognize the request method.
response_.result(http::status::bad_request);
response_.set(http::field::content_type, "text/plain");
beast::ostream(response_.body())
<< "Invalid request-method '"
<< std::string(request_.method_string())
<< "'";
break;
}
write_response();
}
// Construct a response message based on the program state.
void
create_response()
{
if(request_.target() == "/count")
{
response_.set(http::field::content_type, "text/html");
beast::ostream(response_.body())
<< "<html>\n"
<< "<head><title>Request count</title></head>\n"
<< "<body>\n"
<< "<h1>Request count</h1>\n"
<< "<p>There have been "
<< my_program_state::request_count()
<< " requests so far.</p>\n"
<< "</body>\n"
<< "</html>\n";
}
else if(request_.target() == "/time")
{
response_.set(http::field::content_type, "text/html");
beast::ostream(response_.body())
<< "<html>\n"
<< "<head><title>Current time</title></head>\n"
<< "<body>\n"
<< "<h1>Current time</h1>\n"
<< "<p>The current time is "
<< my_program_state::now()
<< " seconds since the epoch.</p>\n"
<< "</body>\n"
<< "</html>\n";
}
else
{
response_.result(http::status::not_found);
response_.set(http::field::content_type, "text/plain");
beast::ostream(response_.body()) << "File not found\r\n";
}
}
// Asynchronously transmit the response message.
void
write_response()
{
auto self = shared_from_this();
response_.set(http::field::content_length, response_.body().size());
http::async_write(
socket_,
response_,
[self](beast::error_code ec, std::size_t)
{
self->socket_.shutdown(tcp::socket::shutdown_send, ec);
self->deadline_.cancel();
});
}
// Check whether we have spent enough time on this connection.
void
check_deadline()
{
auto self = shared_from_this();
deadline_.async_wait(
[self](beast::error_code ec)
{
if(!ec)
{
// Close socket to cancel any outstanding operation.
self->socket_.close(ec);
}
});
}
};
// "Loop" forever accepting new connections.
void
http_server(tcp::acceptor& acceptor, tcp::socket& socket)
{
acceptor.async_accept(socket,
[&](beast::error_code ec)
{
if(!ec)
std::make_shared<http_connection>(std::move(socket))->start();
http_server(acceptor, socket);
});
}
int
main(int argc, char* argv[])
{
try
{
// Check command line arguments.
if(argc != 3)
{
std::cerr << "Usage: " << argv[0] << " <address> <port>\n";
std::cerr << " For IPv4, try:\n";
std::cerr << " receiver 0.0.0.0 80\n";
std::cerr << " For IPv6, try:\n";
std::cerr << " receiver 0::0 80\n";
return EXIT_FAILURE;
}
auto const address = net::ip::make_address(argv[1]);
unsigned short port = static_cast<unsigned short>(std::atoi(argv[2]));
net::io_context ioc{1};
tcp::acceptor acceptor{ioc, {address, port}};
tcp::socket socket{ioc};
http_server(acceptor, socket);
ioc.run();
}
catch(std::exception const& e)
{
std::cerr << "Error: " << e.what() << std::endl;
return EXIT_FAILURE;
}
}
Call the stop method on your io_context object to make it break out of the run loop.
That is:
ioc.stop();
https://www.boost.org/doc/libs/1_72_0/doc/html/boost_asio/reference/io_context/stop.html
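A common way to trigger that is an asio::signal_set, so that Ctrl+C (or a kill) stops the loop cleanly instead of terminating the process. A minimal sketch to add in main() before ioc.run(), using the question's net/beast aliases:

net::signal_set signals(ioc, SIGINT, SIGTERM);
signals.async_wait([&](beast::error_code const&, int /*signal*/) {
    ioc.stop(); // makes ioc.run() return
});

Note that stop() abandons any handlers that are still pending; for a fully graceful drain you would instead close the acceptor and let the open connections finish.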

Boost asio, async_read and acync_write not calling callbacks

I'm encapsulating the boost::asio socket, but neither async_read nor async_write ever calls its callback, and I don't understand why.
I've tried using async_read_some, but I had the same issue.
Here's the code I've written so far
#include <iostream>
#include "socket.hpp"
Socket::Socket()
{
boost::asio::ip::tcp::endpoint ep_tmp(boost::asio::ip::tcp::v4(), 4242);
endpoint = ep_tmp;
acceptor = new boost::asio::ip::tcp::acceptor(ios, endpoint);
tcp_socket = new boost::asio::ip::tcp::socket(ios);
acceptor->listen();
}
Socket::~Socket()
{
delete(acceptor);
delete(tcp_socket);
}
void Socket::get_connection()
{
acceptor->async_accept(*tcp_socket, [](const boost::system::error_code &ec)
{
std::cout << "Connection received." << std::endl;
if (ec)
std::cout << "Error " << ec << std::endl;
});
this->exec();
}
void Socket::send(std::string &message)
{
async_write(*tcp_socket, boost::asio::buffer(message),
[](const boost::system::error_code &ec,
std::size_t bytes_transferred)
{
std::cout << "Sending datas." << std::endl;
if (ec)
std::cout << "Error " << ec << std::endl;
else
std::cout << bytes_transferred << " bytes transferred." << std::endl;
});
}
void Socket::receive(void)
{
char *buf;
buf = (char *)malloc(sizeof(char) * 50);
buf = (char *)memset(buf, 0, 50);
async_read(*tcp_socket, boost::asio::buffer(buf, 50),
[](const boost::system::error_code &ec,
std::size_t bytes_transferred)
{
std::cout << "Receiving datas." << std::endl;
if (ec)
std::cout << "Error " << ec << std::endl;
else
std::cout << bytes_transferred
<< " bytes transferred." << std::endl;
});
}
void Socket::exec(void)
{
ios.run();
}
int main()
{
Socket serv;
std::string data_test;
data_test = "Test\n";
serv.get_connection();
serv.send(data_test);
serv.exec();
serv.receive();
serv.exec();
return (0);
}
The malloc bit is temporary until I find a way to do it without using C.
I'd be really thankful if someone could enlighten me on this issue.
You have to call io_service::reset before the second and later calls to io_service::run. And you probably want to use the synchronous API instead, as your current approach absolutely defeats the purpose of asynchronicity.
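Applied to the posted code that would mean something like (a sketch against the question's Socket class):

void Socket::exec(void)
{
    ios.reset(); // required before re-running a service whose run() has already returned
    ios.run();
}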
I'm with yuri: prefer non-async unless you know what you're doing.
It could look like this: http://coliru.stacked-crooked.com/a/523a7828a9aee4b2
#include <boost/asio.hpp>
#include <iostream>
namespace ba = boost::asio;
using ba::ip::tcp;
class Socket {
public:
Socket() { acceptor.listen(); }
void get_connection();
void exec();
void send(std::string const &message);
void receive(void);
private:
ba::io_service ios;
tcp::endpoint endpoint{ tcp::v4(), 4242 };
tcp::acceptor acceptor{ ios, endpoint };
tcp::socket tcp_socket{ ios };
};
void Socket::get_connection() {
acceptor.accept(tcp_socket);
std::cout << "Connection received.\n";
}
void Socket::send(std::string const &message) {
std::cout << "Sending datas.\n";
auto bytes_transferred = ba::write(tcp_socket, ba::buffer(message));
std::cout << bytes_transferred << " bytes transferred.\n";
}
void Socket::receive(void) {
std::cout << "Receiving datas.\n";
char buf[50] = { 0 };
auto bytes_transferred = ba::read(tcp_socket, ba::buffer(buf));
std::cout << bytes_transferred << " bytes transferred.\n";
}
int main() {
Socket serv;
serv.get_connection();
serv.send("Test\n");
serv.receive();
}
If you want async behaviour, you have to manage the lifetimes of each buffer/connection-specific resource. There are many examples of that, e.g. in the docs or here: http://coliru.stacked-crooked.com/a/95e2000e49b4db1d
On the perils of buffer lifetime: client server simple example nonblocking
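The essence of the fix for the posted receive() is that the buffer has to stay alive for as long as the asynchronous operation does, for example by making it a member. A sketch, keeping the rest of the question's Socket class as-is (read_buf_ is a hypothetical new member):

// in the class definition:
std::array<char, 50> read_buf_{}; // outlives any pending async_read

void Socket::receive(void)
{
    boost::asio::async_read(*tcp_socket, boost::asio::buffer(read_buf_),
        [](const boost::system::error_code &ec, std::size_t bytes_transferred)
        {
            std::cout << "Receiving datas." << std::endl;
            if (ec)
                std::cout << "Error " << ec << std::endl;
            else
                std::cout << bytes_transferred << " bytes transferred." << std::endl;
        });
}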

Asio: Prevent asynchronous client from being deleted?

I have the following code, trying to write an asynchronous client.
The problem is that in main(), the Client gets destroyed at the end of the try-catch block, because execution leaves its scope.
I've tried to come up with a solution to this, like adding a while(true), but I don't like that approach. I'd also rather avoid a getchar().
Due to the asynchronous nature of the calls, both connect() and loop() return immediately.
How can I fix this?
#include <iostream>
#include <thread>
#include <string>
#include <boost\asio.hpp>
#include <Windows.h>
#define DELIM "\r\n"
using namespace boost;
class Client {
public:
Client(const std::string& raw_ip_address, unsigned short port_num) :
m_ep(asio::ip::address::from_string(raw_ip_address), port_num), m_sock(m_ios)
{
m_work.reset(new asio::io_service::work(m_ios));
m_thread.reset(new std::thread([this]() {
m_ios.run();
}));
m_sock.open(m_ep.protocol());
}
void connect()
{
m_sock.async_connect(m_ep, [this](const system::error_code& ec)
{
if (ec != 0) {
std::cout << "async_connect() error: " << ec.message() << " (" << ec.value() << ") " << std::endl;
return;
}
std::cout << "Connection to server has been established." << std::endl;
});
}
void loop()
{
std::thread t = std::thread([this]()
{
recv();
});
t.join();
}
void recv()
{
asio::async_read_until(m_sock, buf, DELIM, [this](const system::error_code& ec, std::size_t bytes_transferred)
{
if (ec != 0) {
std::cout << "async_read_until() error: " << ec.message() << " (" << ec.value() << ") " << std::endl;
return;
}
std::istream is(&buf);
std::string req;
std::getline(is, req, '\r');
is.get(); // discard newline
std::cout << "Received: " << req << std::endl;
if (req == "alive") {
recv();
}
else if (req == "close") {
close();
return;
}
else {
send(req + DELIM);
}
});
}
void send(std::string resp)
{
auto s = std::make_shared<std::string>(resp);
asio::async_write(m_sock, asio::buffer(*s), [this, s](const system::error_code& ec, std::size_t bytes_transferred)
{
if (ec != 0) {
std::cout << "async_write() error: " << ec.message() << " (" << ec.value() << ") " << std::endl;
return;
}
else {
recv();
}
});
}
void close()
{
m_sock.close();
m_work.reset();
m_thread->join();
}
private:
asio::io_service m_ios;
asio::ip::tcp::endpoint m_ep;
asio::ip::tcp::socket m_sock;
std::unique_ptr<asio::io_service::work> m_work;
std::unique_ptr<std::thread> m_thread;
asio::streambuf buf;
};
int main()
{
const std::string raw_ip_address = "127.0.0.1";
const unsigned short port_num = 8001;
try {
Client client(raw_ip_address, port_num);
client.connect();
client.loop();
}
catch (system::system_error &err) {
std::cout << "main() error: " << err.what() << " (" << err.code() << ") " << std::endl;
return err.code().value();
}
return 0;
}
You've not really understood how Asio works. Typically, in the main thread(s) you will call io_service::run(), which handles all the asynchronous events.
To ensure the lifetime of the Client, use a shared_ptr<> and make sure that shared pointer is captured in the handlers. For example:
io_service service;
{
// Create the client - outside of this scope, asio will manage
// the life time of the client
auto client = make_shared<Client>(service);
client->connect(); // setup the connect operation..
}
// Now run the io service event loop - this will block until there are no more
// events to handle
service.run();
Now you need to refactor your Client code:
class Client : public std::enable_shared_from_this<Client> {
Client(io_service& service): socket_(service) ...
{ }
void connect() {
// By copying the shared ptr to the lambda, the life time of
// Client is guaranteed
socket_.async_connect(endpoint_, [self = this->shared_from_this()](auto ec)
{
if (ec) {
return;
}
// Read
self->read(self);
});
}
void read(shared_ptr<Client> self) {
// By copying the shared ptr to the lambda, the life time of
// Client is guaranteed
asio::async_read_until(socket_, buffer_, DELIM, [self](auto ec, auto size)
{
if (ec) {
return;
}
// Handle the data
// Setup the next read operation
self->read(self);
});
}
};
You have a thread for the read operation, which is not necessary. loop() just registers one async read operation and returns immediately. You need to register a new read operation in the completion handler to keep reading from the socket (as sketched out above).
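Putting it together, main() then reduces to something like (a sketch, assuming the refactored Client above):

int main() {
    asio::io_service service;
    auto client = std::make_shared<Client>(service);
    client->connect(); // queues the async work; the handlers keep the Client alive
    service.run();     // blocks until there is no more work
}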
You can post any function to io_service via post(Handler)
http://www.boost.org/doc/libs/1_60_0/doc/html/boost_asio/reference/io_service/post.html
Then in the main() do something like:
while (!exit) {
io_service.run_one();
}
Alternatively, call io_service::run_one() or io_service::run() directly in main().