HTTP stress test tool using boost asio async_connect problem - c++

Using the boost::asio code below, I run a loop of 1M sequential HTTP calls against a simple node.js HTTP service (running in Docker) that generates random numbers, but after a few thousand calls I start getting async_connect errors. The node.js side produces no errors and I believe it works fine.
To avoid resolving the host on every call and to speed things up, I cache the endpoint; it makes no difference, I have tested it both ways.
Can anyone see what is wrong with my code below?
Are there any best practices for a stress-test tool using asio that I am missing?
//------------------------------------------------------------------------------
// https://www.boost.org/doc/libs/1_70_0/libs/beast/doc/html/beast/using_io/timeouts.html
HttpResponse HttpClientAsyncBase::_http(HttpRequest&& req)
{
    using namespace boost::beast;
    namespace net = boost::asio;
    using tcp = net::ip::tcp;

    HttpResponse res;
    req.prepare_payload();
    boost::beast::error_code ec = {};
    const HOST_INFO host = resolve(req.host(), req.port, req.resolve);

    net::io_context m_io;
    boost::asio::spawn(m_io, [&](boost::asio::yield_context yield)
    {
        size_t retries = 0;
        tcp_stream stream(m_io);
        if (req.timeout_seconds == 0) get_lowest_layer(stream).expires_never();
        else get_lowest_layer(stream).expires_after(std::chrono::seconds(req.timeout_seconds));

        get_lowest_layer(stream).async_connect(host, yield[ec]);
        if (ec) return;

        http::async_write(stream, req, yield[ec]);
        if (ec)
        {
            stream.close();
            return;
        }

        flat_buffer buffer;
        http::async_read(stream, buffer, res, yield[ec]);
        stream.close();
    });
    m_io.run();

    if (ec)
        throw boost::system::system_error(ec);
    return std::move(res);
}
I have tried both sync/async implementations of a boost http client and I get the exact same problem.
The error I get is "You were not connected because a duplicate name exists on the network. If joining a domain, go to System in Control Panel to change the computer name and try again. If joining a workgroup, choose another workgroup name [system:52]"

So, I decided to... just try. I made your code into a self-contained example:
#include <boost/asio/spawn.hpp>
#include <boost/beast.hpp>
#include <fmt/ranges.h>
#include <iostream>
namespace http = boost::beast::http;

//------------------------------------------------------------------------------
// https://www.boost.org/doc/libs/1_70_0/libs/beast/doc/html/beast/using_io/timeouts.html
struct HttpRequest : http::request<http::string_body> { // SEHE: don't do this
    using base_type = http::request<http::string_body>;
    using base_type::base_type;

    std::string host() const { return "127.0.0.1"; }
    uint16_t port            = 80;
    bool     resolve         = true;
    int      timeout_seconds = 0;
};

using HttpResponse = http::response<http::vector_body<uint8_t> >; // Do this or aggregation instead

struct HttpClientAsyncBase {
    HttpResponse _http(HttpRequest&& req);

    using HOST_INFO = boost::asio::ip::tcp::endpoint;
    static HOST_INFO resolve(std::string const& host, uint16_t port, bool resolve) {
        namespace net = boost::asio;
        using net::ip::tcp;

        net::io_context ioc;
        tcp::resolver   r(ioc);

        using flags = tcp::resolver::query::flags;
        auto f = resolve ? flags::address_configured
                         : static_cast<flags>(flags::numeric_host | flags::numeric_service);
        tcp::resolver::query q(tcp::v4(), host, std::to_string(port), f);
        auto it = r.resolve(q);
        assert(it.size());
        return HOST_INFO{it->endpoint()};
    }
};

HttpResponse HttpClientAsyncBase::_http(HttpRequest&& req) {
    using namespace boost::beast;
    namespace net = boost::asio;
    using net::ip::tcp;

    HttpResponse res;
    req.prepare_payload();
    boost::beast::error_code ec = {};
    const HOST_INFO host = resolve(req.host(), req.port, req.resolve);

    net::io_context m_io;
    spawn(m_io, [&](net::yield_context yield) {
        // size_t retries = 0;
        tcp_stream stream(m_io);
        if (req.timeout_seconds == 0)
            get_lowest_layer(stream).expires_never();
        else
            get_lowest_layer(stream).expires_after(std::chrono::seconds(req.timeout_seconds));

        get_lowest_layer(stream).async_connect(host, yield[ec]);
        if (ec)
            return;

        http::async_write(stream, req, yield[ec]);
        if (ec) {
            stream.close();
            return;
        }

        flat_buffer buffer;
        http::async_read(stream, buffer, res, yield[ec]);
        stream.close();
    });
    m_io.run();

    if (ec)
        throw boost::system::system_error(ec);
    return res;
}

int main() {
    for (int i = 0; i < 100'000; ++i) {
        HttpClientAsyncBase hcab;
        HttpRequest r(http::verb::get, "/bytes/10", 11);
        r.timeout_seconds = 0;
        r.port            = 80;
        r.resolve         = false;

        auto res = hcab._http(std::move(r));
        std::cout << res.base() << "\n";
        fmt::print("Data: {::02x}\n", res.body());
    }
}
(Side note, this is using docker run -p 80:80 kennethreitz/httpbin to run the server side)
While this is about 10x faster than running curl to do the equivalent requests in a bash loop, none of this is particularly stressing. There's nothing async about it, and resource usage seems mild and stable; e.g. a memory profile shows no growth over time.
(for completeness I verified identical results with timeout_seconds = 1)
Since what you're doing is literally the opposite of async IO, I'd write it much simpler:
struct HttpClientAsyncBase {
    net::io_context m_io;
    HttpResponse _http(HttpRequest&& req);
    static auto resolve(std::string const& host, uint16_t port, bool resolve);
};

HttpResponse HttpClientAsyncBase::_http(HttpRequest&& req) {
    HttpResponse res;
    req.requestObject.prepare_payload();
    const auto host = resolve(req.host(), req.port, req.resolve);

    beast::tcp_stream stream(m_io);
    if (req.timeout_seconds == 0)
        stream.expires_never();
    else
        stream.expires_after(std::chrono::seconds(req.timeout_seconds));

    stream.connect(host);
    write(stream, req.requestObject);

    beast::flat_buffer buffer;
    read(stream, buffer, res);
    stream.close();
    return res;
}
That's just simpler, runs faster and does the same, down to the exceptions.
But you're probably trying to cause stress; perhaps you instead need to reuse some connections and use multiple threads?
You can see a very complete example of just that here:
How do I make this HTTPS connection persistent in Beast?
It includes reconnecting dropped connections, connections to different hosts, varied requests etc.
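For a rough idea of the shape such a stress client could take (my own sketch, not taken from the linked answer): one io_context, several worker threads, and many concurrent sessions that each keep a connection open and issue many requests over it:
#include <boost/asio.hpp>
#include <boost/asio/spawn.hpp>
#include <thread>
#include <vector>

int main() {
    boost::asio::io_context ioc;

    // Launch many concurrent sessions; each would connect once and then loop
    // async_write/async_read over the same stream instead of reconnecting per request.
    for (int session = 0; session < 100; ++session)
        boost::asio::spawn(ioc, [](boost::asio::yield_context yield) {
            (void)yield; // ... connect, then issue many requests here ...
        });

    // Drive the io_context from several threads so sessions progress in parallel.
    std::vector<std::thread> workers;
    for (unsigned i = 0; i < std::thread::hardware_concurrency(); ++i)
        workers.emplace_back([&ioc] { ioc.run(); });
    for (auto& t : workers) t.join();
}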

Alan's comments gave me the right pointers, and using netstat -a I soon found that it was a port-leakage problem, with thousands of ports stuck in the TIME_WAIT state after running the code for a short while.
The root cause was on both the client and the server:
In the node.js server I had to make sure that responses close the connection by adding
response.setHeader("connection", "close");
In the boost::asio C++ code I replaced stream.close() with
stream.socket().shutdown(boost::asio::ip::tcp::socket::shutdown_both, ec);
which seems to make all the difference. I also made sure to use
req.set(boost::beast::http::field::connection, "close");
in my requests.
I verified by running the tool for over 5 hours with no problems at all, so I consider the problem solved!
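Putting the two client-side changes together, a minimal synchronous sketch (my summary of the fixes above, assuming the Beast tcp_stream-based client and the HttpRequest/HttpResponse types from the question):
void do_request(boost::beast::tcp_stream& stream, HttpRequest req, HttpResponse& res)
{
    namespace http = boost::beast::http;
    boost::beast::error_code ec;

    req.set(http::field::connection, "close"); // ask for the connection to be closed after the response

    http::write(stream, req, ec);
    boost::beast::flat_buffer buffer;
    http::read(stream, buffer, res, ec);

    // Shut down gracefully before closing, as described above.
    stream.socket().shutdown(boost::asio::ip::tcp::socket::shutdown_both, ec);
    stream.close();
}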

Implementing 'Abortive TCP/IP Close' with boost::asio to treat EADDRNOTAVAIL and TIME_WAIT for HTTP client stress test tool
I am revisiting the issue to offer an alternative that worked much better. As a reminder, the objective was to develop a stress test tool for hitting a server with 1M requests. Even though my previous solution worked on Windows, when I ran the executable on Docker/Alpine it started crashing with SEGFAULT errors that I was unable to trace. The root cause seems to be related to boost::asio::spawn(m_io, [&](boost::asio::yield_context yield), but time pressure forced me to concentrate on solving the HTTP problem.
I decided to use synchronous HTTP and to treat the EADDRNOTAVAIL and TIME_WAIT errors by following suggestions from "Disable TIME_WAIT with boost sockets" and "TIME_WAIT with boost asio", plus the template code from https://www.boost.org/doc/libs/1_80_0/libs/beast/example/http/client/sync/http_client_sync.cpp.
For anyone hitting EADDRNOTAVAIL and TIME_WAIT with boost::asio, the solution that worked for me, and is actually much faster than before on Windows, Linux, and Docker, is the following:
HttpResponse HttpClientSyncBase::_http(HttpRequest&& req)
{
    namespace beast = boost::beast;
    namespace http  = beast::http;
    namespace net   = boost::asio;
    using tcp       = net::ip::tcp;

    HttpResponse res;
    req.prepare_payload();

    const auto host   = req.host();
    const auto port   = req.port;
    const auto target = req.target();
    const bool abortive_close      = boost::iequals(req.header("Connection"), "close");
    const bool download_large_file = boost::iequals(req.header("X-LARGE-FILE-HINT"), "YES");

    beast::error_code ec;
    net::io_context ioc;

    // Resolve host:port for IPv4
    tcp::resolver resolver(ioc);
    const auto endpoints = resolver.resolve(boost::asio::ip::tcp::v4(), host, port);

    // Create stream and set timeouts
    beast::tcp_stream stream(ioc);
    if (req.timeout_seconds == 0) boost::beast::get_lowest_layer(stream).expires_never();
    else boost::beast::get_lowest_layer(stream).expires_after(std::chrono::seconds(req.timeout_seconds));

    // Caution: we can get address_not_available[EADDRNOTAVAIL] due to TIME_WAIT port exhaustion
    stream.connect(endpoints, ec);
    if (ec == boost::system::errc::address_not_available)
        throw beast::system_error{ ec };

    // Write HTTP request
    http::write(stream, req);

    // Read HTTP response (or download large file >8MB)
    beast::flat_buffer buffer;
    if (download_large_file)
    {
        _HttpResponse tmp;
        boost::beast::http::response_parser<boost::beast::http::string_body> parser{ std::move(tmp) };
        parser.body_limit(boost::none);
        boost::beast::http::read(stream, buffer, parser);
        res = HttpResponse(std::move(parser.release()));
    }
    else
    {
        http::read(stream, buffer, res);
    }

    // Try to shut down socket gracefully
    stream.socket().shutdown(tcp::socket::shutdown_both, ec);

    if (abortive_close)
    {
        // Read until no more data are in socket buffers
        // https://stackoverflow.com/questions/58983527/disable-time-wait-with-boost-sockets
        try
        {
            http::response<http::dynamic_body> res;
            beast::flat_buffer buffer;
            http::read(stream, buffer, res);
        }
        catch (...)
        {
            // should get end of stream here, ignore it
        }

        // Perform "Abortive TCP/IP Close" to minimize TIME_WAIT port exhaustion
        // https://stackoverflow.com/questions/35006324/time-wait-with-boost-asio
        try
        {
            // enable linger with timeout 0 to force abortive close
            boost::asio::socket_base::linger option(true, 0);
            stream.socket().set_option(option);
            stream.close();
        }
        catch (...)
        {
        }
    }
    else
    {
        try { stream.close(); } catch (...) {}
    }

    // Ignore not_connected and end_of_stream errors, handle the rest
    if (ec && ec != beast::errc::not_connected && ec != beast::http::error::end_of_stream)
        throw beast::system_error{ ec };

    return std::move(res);
}
In the sample above I should add error handling around the write, but I guess anyone can do that. _HttpResponse is defined as follows and is the base for HttpResponse.
using _HttpRequest = boost::beast::http::message<true, boost::beast::http::string_body, boost::beast::http::fields>;
using _HttpResponse = boost::beast::http::message<false, boost::beast::http::string_body, boost::beast::http::fields>;
using HttpHeaders = boost::beast::http::header<1, boost::beast::http::basic_fields<std::allocator<char>>>;
For what it's worth, when I started, the estimate for the job was 5-7 days. Using connection=close in my previous solution it came down to 7-8 hours. Using the abortive TCP/IP close it came down to 1.5 hours.
The funny thing is that the server, also built on boost::asio, could handle the stress while the original stress tool could not. Now both the server and its stress test tool work just fine! The code also demonstrates how to download a large file (over 8 MB), which was another side problem, as I needed to download the test results from the server.

Related

How to handle ping request on client (boost::beast::websocket)

Imagine that you have a websocket client that downloads some data in a loop like this:
#include <boost/asio.hpp>
#include <boost/beast.hpp>
#include "nlohmann/json.hpp"
namespace beast = boost::beast;
namespace websocket = beast::websocket;
using tcp = boost::asio::ip::tcp;
class Client {
public:
Client(boost::asio::io_context &ctx) : ws_{ctx}, ctx_{ctx} {
ws_.set_option(websocket::stream_base::timeout::suggested(boost::beast::role_type::client));
#define HOST "127.0.0.1"
#define PORT "8000"
boost::asio::connect(ws_.next_layer(), tcp::resolver{ctx_}.resolve(HOST, PORT));
ws_.handshake(HOST ":" PORT, "/api/v1/music");
#undef HOST
#undef PORT
}
~Client() {
if (ws_.is_open()) {
ws_.close(websocket::normal);
}
}
nlohmann::json NextPacket(std::size_t offset) {
nlohmann::json request;
request["offset"] = offset;
ws_.write(boost::asio::buffer(request.dump()));
beast::flat_buffer buffer;
ws_.read(buffer);
return nlohmann::json::parse(std::string_view{reinterpret_cast<const char *>(buffer.data().data()), buffer.size()});
}
private:
boost::beast::websocket::stream<boost::asio::ip::tcp::socket> ws_;
boost::asio::io_context &ctx_;
};
// ... some function
int main() {
boost::asio::io_context context;
boost::asio::executor_work_guard<boost::asio::io_context::executor_type> guard{context.get_executor()};
std::thread{[&context]() { context.run(); }}.detach();
static constexpr std::size_t kSomeVeryBigConstant{1'000'000'000};
Client client{context};
std::size_t offset{};
while (offset < kSomeVeryBigConstant) {
offset += client.NextPacket(offset)["offset"].get<std::size_t>();
// UPDATE:
userDefinedLongPauseHere();
}
}
On the server side we have ping requests at some frequency. Where should I handle ping requests? As I understand it, control_callback controls calls to the ping, pong and close functions, not incoming requests. With the read or async_read functions, I also cannot catch the ping request.
Beast responds to pings with pongs automatically, as described here: https://github.com/boostorg/beast/issues/899#issuecomment-346333014
Whenever you call read(), it can process a ping and send a pong without you knowing about that.
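If you want to observe those pings yourself, you can install a control callback on the stream (a sketch using the ws_ member from the Client above; Beast still sends the pong for you, and the callback is only invoked while a read operation is in progress):
ws_.control_callback(
    [](boost::beast::websocket::frame_type kind, boost::beast::string_view payload) {
        if (kind == boost::beast::websocket::frame_type::ping) {
            // a ping arrived; `payload` holds its application data
        }
    });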

boost:asio::write writes to socket successfully, but server doesn't see the data

I've written a simple code sample that writes some data over a socket to a simple TCP echo server. The data is written successfully (writtenBytes > 0), but the server never indicates that it has received it.
The application is run in a Docker devcontainer, and from the development container, I'm communicating with the tcp-server-echo container on the same network.
io_service ioservice;
tcp::socket tcp_socket{ioservice};
void TestTcpConnection() {
boost::asio::ip::tcp::resolver nameResolver{ioservice};
boost::asio::ip::tcp::resolver::query query{"tcp-server-echo", "9000"};
boost::system::error_code ec{};
auto iterator = nameResolver.resolve(query, ec);
if (!ec) {
boost::asio::ip::tcp::resolver::iterator end{};
boost::asio::ip::tcp::endpoint endpoint = *iterator;
tcp_socket.connect(endpoint, ec);
if (!ec) {
std::string str{"Hello world test"};
while (tcp_socket.is_open()) {
auto writtenBytes =
boost::asio::write(tcp_socket, boost::asio::buffer(str));
if (writtenBytes > 0) {
// this line is executed successfully every time.
// writtenBytes == str.length()
std::cout << "Bytes written successfully!\n";
}
using namespace std::chrono_literals;
std::this_thread::sleep_for(2000ms);
}
}
}
}
In this case writtenBytes > 0 is a sign of a successful write to the socket.
The echo server is based on istio/tcp-echo-server:1.2 image. I can ping it from my devcontainer by name or IP address with no issues. Also, when I write a similar code sample but using async functions (async_resolve, async_connect, except for the write operation, which is not async), and a separate thread to run ioservice, the server does see my data and responds appropriately.
Why doesn't the server see my data in the case of non-async writes? Thanks in advance.
It turned out the issue was with the Docker container that received the message. The image istio/tcp-echo-server:1.2 doesn't write to its logs unless the data you send ends with \n.
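So, for the snippet above, the fix can be as small as terminating the payload (a hypothetical one-line change to the question's code):
std::string str{"Hello world test\n"}; // trailing '\n' makes the line-oriented echo container log/echo it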

Socket read URL

I want to get the URL path of an incoming request (http://127.0.0.1:1337/test/test), in this case "/test/test". How can I do that?
using tcp = boost::asio::ip::tcp;
void ts3plugin_initWebsocket() {
try
{
auto const address = boost::asio::ip::make_address("127.0.0.1");
auto const port = static_cast<unsigned short>(std::atoi("1337"));
boost::asio::io_context ioc{ 1 };
tcp::acceptor acceptor{ ioc, {address, port} };
while (true) {
tcp::socket socket{ ioc };
acceptor.accept(socket);
ts3Functions.logMessage("Connected", LogLevel_INFO, "Plugin", 1);
}
}
catch (const std::exception& e)
{
char msg[512];
snprintf(msg, sizeof(msg), "Error: %s", e.what());
ts3Functions.logMessage(msg, LogLevel_INFO, "Plugin", 1);
}
}
This has little to do with Asio, and everything with HTTP. You want to make a GET request, see e.g. https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/GET
Going by the naming you might also want to upgrade to Websocket protocol over HTTP. Instead of figuring out how to do all that, perhaps just go by one of the Beast examples:
https://www.boost.org/doc/libs/1_78_0/libs/beast/doc/html/beast/examples.html
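If you only need the request target on the accepted socket, a minimal sketch based on the Beast synchronous HTTP server examples (my addition; it reuses the tcp alias and the socket from the question) would be:
#include <boost/beast.hpp>
namespace http = boost::beast::http;

// Call this with the socket returned by acceptor.accept(socket)
void log_target(tcp::socket& socket) {
    boost::beast::flat_buffer buffer;
    http::request<http::string_body> req;
    http::read(socket, buffer, req);  // parses e.g. "GET /test/test HTTP/1.1" plus headers
    auto target = req.target();       // string_view containing "/test/test"
    // ... use `target`, then send an http::response so the client gets an answer ...
}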

boost::asio::ip::tcp::socket - How to bind to a specific local port

I am making a client socket.
To make things easier for my testers, I'd like to specify the network card and port that the socket will use.
Yesterday, in my Google search, I found: Binding boost asio to local tcp endpoint
By performing the open, bind, and async_connect, I was able to bind to a specific network card and I started seeing traffic in Wireshark.
However, Wireshark reports that the socket has been given a random port rather than the one I specified. I would think if the port was in use it would have filled out the error_code passed to the bind method.
What am I doing wrong?
Here is my minimal example, extracted and edited from my real solution.
// Boost Includes
#include <boost/asio.hpp>
#include <boost/atomic.hpp>
#include <boost/bind.hpp>
#include <boost/thread.hpp>
#include <boost/thread/condition_variable.hpp>
// Standard Includes
#include <exception>
#include <memory>
#include <string>
#include <sstream>
boost::asio::io_service g_ioService; /** ASIO sockets require an io_service to run on*/
boost::thread g_thread; /** thread that will run the io_service and hence where callbacks are called*/
boost::asio::ip::tcp::socket g_socket(g_ioService); /** Aync socket*/
boost::asio::ip::tcp::resolver g_resolver(g_ioService); /** Resolves IP Addresses*/
//--------------------------------------------------------------------------------------------------
void OnConnect(const boost::system::error_code & errorCode, boost::asio::ip::tcp::resolver::iterator endpoint)
{
if (errorCode || endpoint == boost::asio::ip::tcp::resolver::iterator())
{
// Error - An error occurred while attempting to connect
throw std::runtime_error("An error occurred while attempting to connect");
}
// We connected to an endpoint
/*
// Start reading from the socket
auto callback = boost::bind(OnReceive, boost::asio::placeholders::error);
boost::asio::async_read_until(g_socket, m_receiveBuffer, '\n', callback);
*/
}
//--------------------------------------------------------------------------------------------------
void Connect()
{
const std::string hostName = "10.84.0.36";
const unsigned int port = 1007;
// Resolve to translate the server machine name into a list of endpoints
std::ostringstream converter;
converter << port;
const std::string portAsString = converter.str();
boost::asio::ip::tcp::resolver::query query(hostName, portAsString);
boost::system::error_code errorCode;
boost::asio::ip::tcp::resolver::iterator itEnd;
boost::asio::ip::tcp::resolver::iterator itEndpoint = g_resolver.resolve(query, errorCode);
if (errorCode || itEndpoint == itEnd)
{
// Error - Could not resolve either machine
throw std::runtime_error("Could not resolve either machine");
}
g_socket.open(boost::asio::ip::tcp::v4(), errorCode);
if (errorCode)
{
// Could not open the g_socket
throw std::runtime_error("Could not open the g_socket");
}
boost::asio::ip::tcp::endpoint localEndpoint(boost::asio::ip::address::from_string("10.86.0.18"), 6000);
g_socket.bind(localEndpoint, errorCode);
if (errorCode)
{
// Could not bind the g_socket to local endpoint
throw std::runtime_error("Could not bind the socket to local endpoint");
}
// Attempt to asynchronously connect using each possible end point until we find one that works
boost::asio::async_connect(g_socket, itEndpoint, boost::bind(OnConnect, boost::asio::placeholders::error, boost::asio::placeholders::iterator));
}
//--------------------------------------------------------------------------------------------------
void g_ioServiceg_threadProc()
{
try
{
// Connect to the server
Connect();
// Run the asynchronous callbacks from the g_socket on this thread
// Until the io_service is stopped from another thread
g_ioService.run();
}
catch (...)
{
throw std::runtime_error("unhandled exception caught from io_service g_thread");
}
}
//--------------------------------------------------------------------------------------------------
int main()
{
// Start up the IO service thread
g_thread = boost::thread(g_ioServiceg_threadProc);
// Hang out awhile
boost::this_thread::sleep_for(boost::chrono::seconds(60));
// Stop the io service and allow the g_thread to exit
// This will cancel any outstanding work on the io_service
g_ioService.stop();
// Join our g_thread
if (g_thread.joinable())
{
g_thread.join();
}
return true;
}
As the Wireshark capture showed, a random port (32781) was selected rather than my requested port 6000.
I doubt the topic starter is still interested in this question, but for all future seekers like myself, here is the solution.
The issue here is that boost::asio::connect closes the socket before calling connect for every endpoint in the provided range:
From boost/asio/impl/connect.hpp:
template <typename Protocol BOOST_ASIO_SVC_TPARAM,
typename Iterator, typename ConnectCondition>
Iterator connect(basic_socket<Protocol BOOST_ASIO_SVC_TARG>& s,
Iterator begin, Iterator end, ConnectCondition connect_condition,
boost::system::error_code& ec)
{
ec = boost::system::error_code();
for (Iterator iter = begin; iter != end; ++iter)
{
iter = (detail::call_connect_condition(connect_condition, ec, iter, end));
if (iter != end)
{
s.close(ec); // <------
s.connect(*iter, ec);
if (!ec)
return iter;
}
...
}
That is why the bound address is reset. To keep it bound, one can call socket.connect(...)/socket.async_connect(...) directly.
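For example (my sketch, reusing the names from the question and omitting error handling): bind first, then connect to a single resolved endpoint with the socket's own async_connect, which neither closes nor re-binds the socket:
g_socket.open(boost::asio::ip::tcp::v4());
g_socket.set_option(boost::asio::socket_base::reuse_address(true)); // optional; see the SO_REUSEADDR note below
g_socket.bind(boost::asio::ip::tcp::endpoint(
    boost::asio::ip::address::from_string("10.86.0.18"), 6000));

// The member async_connect keeps the local binding intact.
g_socket.async_connect(itEndpoint->endpoint(),
    [](const boost::system::error_code& errorCode) { /* handle the connect result */ });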
6000 is the remote endpoint port, and it is correctly used (otherwise, you wouldn't be connecting to the server side).
From: https://idea.popcount.org/2014-04-03-bind-before-connect/
A TCP/IP connection is identified by a four element tuple: {source IP, source port, destination IP, destination port}. To establish a TCP/IP connection only a destination IP and port number are needed, the operating system automatically selects source IP and port.
Since you do not bind to a local port, one is selected randomly from the "ephemeral port range". This is, by far, the usual way to connect.
Fear not:
It is possible to ask the kernel to select a specific source IP and port by calling bind() before calling connect()
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Let the source address be 192.168.1.21:1234
s.bind(("192.168.1.21", 1234))
s.connect(("www.google.com", 80))
The sample is python.
You do that, but still get another port. It's likely that the hint port is not available.
Check the information on SO_REUSEADDR and SO_REUSEPORT in the linked article

boost socket comms are not working past one exchange

I am converting an app which had a very simple heartbeat / status-monitoring connection between two services. As it now needs to run on Linux in addition to Windows, I thought I'd use Boost (v1.51; I cannot upgrade - the Linux compilers are too old and the Windows compiler is Visual Studio 2005) to make it platform agnostic. I'd really prefer not to have two code files, one per OS, or a littering of #defines throughout the code, when Boost can give me something that's still pleasant to read six months after I've checked it in and forgotten about it.
My problem now is that the connection is timing out. Actually, it's not really working at all.
The first time through, the 'status' message is sent and received by the server end, which sends back an appropriate response. The server end then goes back to waiting on the socket for another message. The client end (this code) sends the 'status' message again... but this time the server never receives it and the read_some() call blocks until the socket times out. I find it really strange, because the server end has not changed. The only thing that's changed is that I replaced the client's basic winsock2 sockets with this code. Previously, it connected and just looped through send/recv calls until the program was aborted or the 'lockdown' message was received.
Why would subsequent calls to send silently fail to send anything on the socket, and what do I need to adjust in order to restore the simple send/recv flow?
#include <boost/signals2/signal.hpp>
#include <boost/bind.hpp>
#include <iostream>
#include <boost/array.hpp>
#include <boost/asio.hpp>
#include <boost/thread.hpp>
using boost::asio::ip::tcp;
using namespace std;
boost::system::error_code ServiceMonitorThread::ConnectToPeer(
tcp::socket &socket,
tcp::resolver::iterator endpoint_iterator)
{
boost::system::error_code error;
int tries = 0;
for (; tries < maxTriesBeforeAbort; tries++)
{
boost::asio::connect(socket, endpoint_iterator, error);
if (!error)
{
break;
}
else if (error != make_error_code(boost::system::errc::success))
{
// Error connecting to service... may not be running?
cerr << error.message() << endl;
boost::this_thread::sleep_for(boost::chrono::milliseconds(200));
}
}
if (tries == maxTriesBeforeAbort)
{
error = make_error_code(boost::system::errc::host_unreachable);
}
return error;
}
// Main thread-loop routine.
void ServiceMonitorThread::run()
{
boost::system::error_code error;
tcp::resolver resolver(io_service);
tcp::resolver::query query(hostnameOrAddress, to_string(port));
tcp::resolver::iterator endpoint_iterator = resolver.resolve(query);
tcp::socket socket(io_service);
error = ConnectToPeer(socket, endpoint_iterator);
if (error && error == boost::system::errc::host_unreachable)
{
TerminateProgram();
}
boost::asio::streambuf command;
std::ostream command_stream(&command);
command_stream << "status\n";
boost::array<char, 10> response;
int retry = 0;
while (retry < maxTriesBeforeAbort)
{
// A 1s request interval is more than sufficient for status checking.
boost::this_thread::sleep_for(boost::chrono::seconds(1));
// Send the command to the network monitor server service.
boost::asio::write(socket, command, error);
if (error)
{
// Error sending to socket
cerr << error.message() << endl;
retry++;
continue;
}
// Clear the response buffer, then read the network monitor status.
response.assign(0);
/* size_t bytes_read = */ socket.read_some(boost::asio::buffer(response), error);
if (error)
{
if (error == make_error_code(boost::asio::error::eof))
{
// Connection was dropped, re-connect to the service.
error = ConnectToPeer(socket, endpoint_iterator);
if (error && error == make_error_code(boost::system::errc::host_unreachable))
{
TerminateProgram();
}
continue;
}
else
{
cerr << error.message() << endl;
retry++;
continue;
}
}
// Examine the response message.
if (strncmp(response.data(), "normal", 6) != 0)
{
retry++;
// If we received the lockdown response, then terminate.
if (strncmp(response.data(), "lockdown", 8) == 0)
{
break;
}
// Not an expected response, potential error, retry to see if it was merely an aberration.
continue;
}
// If we arrived here, the exchange was successful; reset the retry count.
if (retry > 0)
{
retry = 0;
}
}
// If retry count was incremented, then we have likely encountered an issue; shut things down.
if (retry != 0)
{
TerminateProgram();
}
}
When a streambuf is provided directly to an I/O operation as the buffer, the I/O operation will manage the input sequence appropriately, either committing read data or consuming written data. Hence, in the following code, command is empty after the first iteration:
boost::asio::streambuf command;
std::ostream command_stream(&command);
command_stream << "status\n";
// `command`'s input sequence contains "status\n".
while (retry < maxTriesBeforeAbort)
{
...
// write all of `command`'s input sequence to the socket.
boost::asio::write(socket, command, error);
// `command.size()` is 0, as the write operation will consume the data.
// Subsequent write operations with `command` will be no-ops.
...
}
One solution would be to use std::string as the buffer:
std::string command("status\n");
while (retry < maxTriesBeforeAbort)
{
...
boost::asio::write(socket, boost::asio::buffer(command), error);
...
}
For more details on streambuf usage, consider reading this answer.
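Alternatively (a sketch of my own, reusing the names from the question), you can keep the streambuf and simply refill it on every iteration, since each write consumes its contents:
boost::asio::streambuf command;
std::ostream command_stream(&command);

while (retry < maxTriesBeforeAbort)
{
    command_stream << "status\n";               // repopulate the input sequence
    boost::asio::write(socket, command, error); // the write consumes it again
    // ... read and handle the response as before ...
}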