How to set a timeout when receiving on a boost::asio udp::socket? - c++

I am writing a single-threaded application that exchanges data with a peer via UDP. When the peer disconnects, my socket::receive_from blocks, and I don't know how to solve this without restructuring the entire program around multiple threads or asynchronous interactions.
I thought the following might be a solution:
std::chrono::milliseconds timeout{4};
boost::system::error_code err;
data_t buffer(kPackageMaxSize);
std::size_t size = 0;
const auto status = std::async(std::launch::async,
    [&] {
        size = socket_.receive_from(boost::asio::buffer(buffer), dst_, 0, err);
    }
).wait_for(timeout);
switch (status)
{
    case std::future_status::timeout: /*...*/ break;
}
But this introduced a new problem: Qt Creator (GDB 11.1) started crashing while I am debugging (I don't have the ability to investigate further yet). And even when run without the debugger, the solution does not always work.

PS. As for "it doesn't work when debugging": debugging (specifically breakpoints) obviously changes timing. Also keep in mind that network operations have varying latency, and UDP is not a guaranteed protocol: messages may not be delivered.
Asio stands for "Asynchronous IO". As you might suspect, this means that asynchronous IO is a built-in feature, it's the entire purpose of the library. See overview/core/async.html: Concurrency Without Threads
It's not necessary to complicate things with std::async. In your case I'd suggest using async_receive_from with use_future, as it is closest to the model you opted for:
Live On Coliru
#include <boost/asio.hpp>
#include <iostream>
#include <iomanip>
namespace net = boost::asio;
using net::ip::udp;
using namespace std::chrono_literals;
constexpr auto kPackageMaxSize = 65520;
using data_t = std::vector<char>;
int main() {
    net::thread_pool ioc;
    udp::socket socket_(ioc, udp::v4());
    socket_.bind({{}, 8989});

    udp::endpoint ep;
    data_t buffer(kPackageMaxSize);

    auto fut =
        socket_.async_receive_from(net::buffer(buffer), ep, net::use_future);

    switch (fut.wait_for(4ms)) {
        case std::future_status::ready: {
            buffer.resize(fut.get()); // never blocks here
            std::cout << "Received " << buffer.size() << " bytes: "
                      << std::quoted(
                             std::string_view(buffer.data(), buffer.size()))
                      << "\n";
            break;
        }
        case std::future_status::timeout:
        case std::future_status::deferred: {
            std::cout << "Timeout\n";
            socket_.cancel(); // stop the IO operation
            // fut.get() would throw system_error(net::error::operation_aborted)
            break;
        }
    }

    ioc.join();
}
The Coliru output:
Received 12 bytes: "Hello World
"
Locally, both the timeout and the successful path can be demonstrated.


Close Boost Websocket from Server side, C++, tcp::acceptor accept() timeout?

UPDATE:
Well, it appears that I need to address my issue with an asynchronous implementation. I will update my post with a new direction once I've completed testing.
Original:
I'm currently writing a multiserver application that will collect, share, and request information from multiple machines. In some cases, Machine A will request information from Machine B but will need to send it to Machine C, which will reply to A. Without getting too deep into what the application is going to do, I need some help with my client application.
I have designed my client application with two threads. I used this example from Boost as the basis for my design.
Thread one will open a client WebSocket to Machine-A; it will stream a series of data points and commands. Here is a stripped-down version of my code:
#include "Poco/Clock.h"
#include "Poco/Task.h"
#include "Poco/Thread.h"
#include <boost/asio.hpp>
#include <boost/beast.hpp>
#include <jsoncons/json.hpp>
namespace beast = boost::beast; // from <boost/beast.hpp>
namespace http = beast::http; // from <boost/beast/http.hpp>
namespace websocket = beast::websocket; // from <boost/beast/websocket.hpp>
namespace net = boost::asio; // from <boost/asio.hpp>
using tcp = net::ip::tcp; // from <boost/asio/ip/tcp.hpp>
class ResponseChannel : public Poco::Runnable {
    void do_session(tcp::socket socket)
    {
        try {
            websocket::stream<tcp::socket> ws{std::move(socket)};
            ws.set_option(websocket::stream_base::decorator(
                [](websocket::response_type& res) {
                    res.set(http::field::server,
                            std::string(BOOST_BEAST_VERSION_STRING) +
                                " websocket-server-sync");
                }));
            ws.accept();
            for (;;) {
                beast::flat_buffer buffer;
                ws.read(buffer);
                if (ws.got_binary()) {
                    // do something
                }
            }
        } catch (beast::system_error const& se) {
            if (se.code() != websocket::error::closed) {
                std::cerr << "do_session1 ->: " << se.code().message()
                          << std::endl;
                return;
            }
        } catch (std::exception const& e) {
            std::cerr << "do_session2 ->: " << e.what() << std::endl;
            return;
        }
    }

    virtual void run()
    {
        auto const address = net::ip::make_address(host);
        auto const port = static_cast<unsigned short>(respPort);
        try {
            net::io_context ioc{1};
            tcp::acceptor acceptor{ioc, {address, port}};
            tcp::socket socket{ioc};
            for (; keep_running;) {
                acceptor.accept(socket);
                std::thread(&ResponseChannel::do_session, this,
                            std::move(socket))
                    .detach();
            }
        } catch (const std::exception& e) {
            std::cout << "run: " << e.what() << std::endl;
        }
    }

    void _terminate() { keep_running = false; }

public:
    std::string host;
    int respPort;
    bool keep_running = true;
    int responseCount = 0;
    std::vector<long long int> latency_times;
    long long int time_sum;
    Poco::Clock* responseClock;
};

int main()
{
    using namespace std::chrono_literals;
    Poco::Clock clock = Poco::Clock();
    Poco::Thread response_thread;
    ResponseChannel response_channel;
    response_channel.responseClock = &clock;
    response_channel.host = "0.0.0.0";
    response_channel.respPort = 8080;
    response_thread.start(response_channel);
    response_thread.setPriority(Poco::Thread::Priority::PRIO_HIGH);
    // doing some work here. work will vary depending on command-line arguments
    std::this_thread::sleep_for(30s);
    response_channel.keep_running = false;
    response_thread.join();
}
The multi-machine design works as expected with regard to sending commands to Machine-B and receiving results from Machine-C.
The issue I'm facing is closing out Thread 2, which contains my local response channel.
I went back and forth between Poco::Thread and Poco::Task, but I decided that I do not want to use Task, as it would be a mistake to be able to close the 2nd thread/task from the main thread. I need to know that all packets have been received before closing down the 2nd thread.
So I need to shut things down only once I have received a websocket::error::closed flag from Machine-C. Shutting down the detached websocket thread is no issue, as when the flag arrives it takes care of that for me.
However, as part of the loop process for reconnecting after a closed socket, the thread just waits for a new connection.
acceptor.accept(socket);
It's blocking, and going through the documentation, there doesn't seem to be a timeout feature. I see that there is a close option, but my attempt to use close simply threw an exception, which ultimately added complexity I didn't want.
Ultimately, I want the server to continuously loop through a series of connections from both Machine-B and Machine-C, ending only after my client application has finished. The last thing I do before waiting for the Poco::Thread to complete is to set the flag indicating that I no longer want the WebSocket server to run.
I've put a check of that flag before the blocking accept() call. But this only works with perfect timing: the flag has to go up, and then one more connection has to be opened and closed, before the loop comes back around to wait for a new connection.
Ideally, there would be a timeout so that the loop would come around regularly, first checking whether it timed out, allowing a periodic check of whether I want the thread to remain open (one possible approach is sketched below).
Has anyone ever run into this?
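For reference, a minimal sketch (not from the original post, and untested against this setup) of one way to get that periodic check: replace the blocking accept() with async_accept() and drive the io_context with run_for(), so the loop wakes up regularly to re-check the flag. It assumes Boost 1.66+ (for io_context::run_for) and reuses acceptor, keep_running, and do_session from the run() above:

// replacement body for the try block in run(), as a sketch
net::io_context ioc{1};
tcp::acceptor acceptor{ioc, {address, port}};

while (keep_running) {
    tcp::socket socket{ioc};
    boost::system::error_code accept_ec = net::error::would_block;
    acceptor.async_accept(socket,
                          [&](boost::system::error_code ec) { accept_ec = ec; });

    // wake up at least once per second to re-check the flag
    while (accept_ec == net::error::would_block && keep_running) {
        ioc.run_for(std::chrono::seconds(1));
        ioc.restart();
    }

    if (accept_ec == net::error::would_block) {
        // shutting down while the accept is still pending
        acceptor.cancel();
        ioc.run();     // let the aborted handler run before `socket` is destroyed
        ioc.restart();
        break;
    }

    if (!accept_ec)
        std::thread(&ResponseChannel::do_session, this, std::move(socket)).detach();
}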

C++ multithreading closes TCP connection

I'm working on a C++ server where I wait for network connections. When I get one, I put the socket into a new thread and listen for further input. The problem is that as soon as I have the socket in a new thread, the TCP connection is disconnected. I'm using the SFML library.
Here's some code:
main.cpp:
int main() {
    std::list<std::thread> user_connections;
    sf::TcpListener listener;
    listener.listen(PORT);

    while (true)
    {
        sf::TcpSocket client;
        listener.accept(client);
        Protocol user_connection;
        std::thread new_con(&Protocol::connect, &user_connection, std::ref(client));
        new_con.detach();
        user_connections.push_back(std::move(new_con)); // user_connections is a list
    }
}
protocol.cpp:
class Protocol {
public:
    void connect(sf::TcpSocket& client)
    {
        std::cout << "Address: " << client.getRemoteAddress() << ":" << client.getRemotePort() << std::endl;
    }
};
This prints out:
Address: 0.0.0.0:0
And if I try to send any kind of message I get status 4, which according to the documentation is Disconnected.
EDIT:
According to @Ted Lyngmo, it's because I need to put client in a list; otherwise it goes out of scope. Now if I try to put it in a list via:
std::list<sf::TcpSocket> clients; // executed before while loop
// [...]
clients.push_back(client); // in the while loop
I get the error: (pastebin).
This is something built on your current threaded code. It may be a good idea to use a single-threaded design instead and use sf::SocketSelector to wait for events on the listener and all the connected clients (a minimal sketch of that approach follows the listing below).
In this lazy solution, disconnected clients will not be removed from the server's list of clients until a new client is connected.
I've tried to explain it with comments in the code, which is an echoing kind of server, so you can telnet to it, send messages, and get them back.
#include <SFML/Network.hpp>
#include <atomic>
#include <iostream>
#include <list>
#include <thread>

constexpr uint16_t PORT = 2048; // what you have in your code.

// A simple struct to keep a client and thread
struct client_thread {
    sf::TcpSocket client{};
    std::thread thread{};
    // The main thread can check "done" to remove this client_thread from its list:
    std::atomic<bool> done{false};

    ~client_thread() {
        // instead of detaching, join()
        if (thread.joinable()) thread.join();
    }
};

// the connect function gets a reference to a client_thread instead
void connect(client_thread& clith) {
    constexpr std::size_t BufSize = 1024;
    auto& [client, thread, done] = clith; // for convenience

    std::cout << "thread: Address: " << client.getRemoteAddress() << ":"
              << client.getRemotePort() << std::endl;

    std::string buffer(BufSize, '\0');
    std::size_t received;

    while (client.receive(buffer.data(), buffer.size(), received) == sf::Socket::Done) {
        // remove ASCII control chars (cr and newline etc.)
        while (received && buffer[received - 1] < ' ') --received;
        buffer.resize(received);
        std::cout << buffer << std::endl;

        // send something back
        buffer = "You sent >" + buffer + "<\n";
        client.send(buffer.c_str(), buffer.size());

        // restore the size
        buffer.resize(BufSize);
    }
    std::cout << "thread: client disconnected\n";
    client.disconnect();
    // set done to true so the main thread can remove the client_thread
    done = true;
}

int main() {
    sf::TcpListener listener;
    // check that listening actually works
    if (listener.listen(PORT) != sf::Socket::Done) return 1;

    // now a list of client_thread instead:
    std::list<client_thread> user_connections;

    while (true) {
        // create a client_thread to use when listening
        auto& clith = user_connections.emplace_back();
        auto& [client, thread, _] = clith; // for convenience

        std::cout << "main: listening ...\n";
        sf::Socket::Status status = listener.accept(client);

        if (status == sf::Socket::Done) {
            std::cout << "main: got connection\n";
            thread = std::thread(connect, std::ref(clith));
        } else {
            std::cout << "main: accept not done\n";
        }

        // remove disconnected clients, pre C++20
        for (auto it = user_connections.begin(); it != user_connections.end();) {
            // check the atomic bool in all threads
            if (it->done) {
                std::cout << "main: removing old connection\n";
                it = user_connections.erase(it);
            } else {
                ++it;
            }
        }

        // remove disconnected clients, >= C++20
        //
        // std::erase_if(user_connections,
        //               [](auto& clith) -> bool { return clith.done; });
    }
}
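For reference, a minimal single-threaded sketch (my addition, not part of the original answer) of the sf::SocketSelector approach mentioned above, assuming SFML 2.x:

#include <SFML/Network.hpp>
#include <iostream>
#include <list>

int main() {
    sf::TcpListener listener;
    if (listener.listen(2048) != sf::Socket::Done) return 1;

    std::list<sf::TcpSocket> clients;
    sf::SocketSelector selector;
    selector.add(listener);

    while (true) {
        // Block until the listener or any client has data (1s timeout so the
        // loop could do periodic housekeeping if needed).
        if (!selector.wait(sf::seconds(1))) continue;

        if (selector.isReady(listener)) {
            // New connection: emplace in place (TcpSocket is neither copyable
            // nor movable).
            sf::TcpSocket& client = clients.emplace_back();
            if (listener.accept(client) == sf::Socket::Done)
                selector.add(client);
            else
                clients.pop_back();
        }

        for (auto it = clients.begin(); it != clients.end();) {
            char buf[1024];
            std::size_t n = 0;
            if (selector.isReady(*it) &&
                it->receive(buf, sizeof(buf), n) != sf::Socket::Done) {
                // Disconnected: drop the socket from selector and list.
                selector.remove(*it);
                it = clients.erase(it);
            } else {
                if (n) std::cout.write(buf, n) << '\n';
                ++it;
            }
        }
    }
}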
Edit regarding your edited question where you're trying to put the client in a list:
You're trying to copy the sf::TcpSocket, and it's not copyable. What's worse, it's not even movable. The reason the code in my answer works is that it avoids both copying and moving by using std::list::emplace_back to construct the element in place in the list.
Apparently both sf::TcpSocket client and Protocol user_connection are destroyed. It's no use to keep only the thread alive: your thread only holds references to client and user_connection, but both of them are destroyed soon after your thread is created (and maybe before it has even started running).
I read a little bit about the SFML library and unfortunately, at least the client, which is an object of TcpSocket, is neither copyable nor movable. SFML must be a very old library; any modern socket library will design sockets to be at least movable, meaning that you can move your socket into the thread, or move it into the std::list or std::vector you created.
So using the SFML library, which was written without modern C++11 support (copy & move semantics were introduced in C++11), together with the C++11 library (std::thread), will be quite painful.
You can probably use std::shared_ptr to hold a newly created Protocol and client, and pass the shared_ptr into the thread or into the list you created.
I don't know what Protocol does exactly; rough pseudocode is as follows,
std::shared_ptr<TcpSocket> client = std::make_shared<TcpSocket>();
listener.accept(*client);
std::shared_ptr<Protocol> protocol = std::make_shared<Protocol>();
// copy the pointers into the thread; the objects are deleted after the thread is done
std::thread new_con([client, protocol]() { protocol->connect(*client); });
or protocol can be defined inside the thread,
std::shared_ptr<TcpSocket> client = std::make_shared<TcpSocket>();
listener.accept(*client);
std::thread new_con([client]() {
    Protocol protocol;
    protocol.connect(*client);
});

boost::asio UDP Broadcast Client Only Receives "fast" Packets

I have written a UDP broadcast client using boost::asio. It works, but with a caveat. If I send packets very fast (at least one every 100 ms or so), it seems to receive all of them. However, if I send only a single packet, it doesn't seem to catch it. I'm using an async receive, so I can't imagine why it's not working. The data itself is fairly small and will always be less than the allocated buffer size. When it does receive the "fast" packets, they are correct and contain only the data from a single send. In the debugger, it will properly break once per packet sent.
Header:
class BroadcastClient
{
public:
    BroadcastClient();
    std::optional<std::string> poll();

protected:
    void handle_read(const boost::system::error_code& error, std::size_t bytes_transferred);

private:
    std::future<void> ioFuture;
    std::vector<uint8_t> buffer;
    std::string result;
    boost::asio::io_service ioService;
    std::unique_ptr<boost::asio::ip::udp::socket> socket;
    uint16_t port{ 8888 };
    boost::asio::ip::udp::endpoint sender_endpoint;
};
Implementation:
BroadcastClient::BroadcastClient()
{
    this->socket = std::make_unique<boost::asio::ip::udp::socket>(
        this->ioService, boost::asio::ip::udp::endpoint(boost::asio::ip::address_v4::any(), this->port));
    this->socket->set_option(boost::asio::socket_base::broadcast(true));
    this->socket->set_option(boost::asio::socket_base::reuse_address(true));

    this->ioFuture = std::async(std::launch::async, [this] { this->ioService.run(); });
    this->buffer.resize(4096);

    this->socket->async_receive_from(
        boost::asio::buffer(this->buffer, this->buffer.size()), sender_endpoint,
        boost::bind(&BroadcastClient::handle_read, this, boost::asio::placeholders::error, boost::asio::placeholders::bytes_transferred));
}

void BroadcastClient::handle_read(const boost::system::error_code& error, std::size_t bytes_transferred)
{
    if (!error)
    {
        this->result += std::string(std::begin(buffer), std::begin(buffer) + buffer.size());
        std::fill(std::begin(buffer), std::end(buffer), 0);

        this->socket->async_receive_from(
            boost::asio::buffer(this->buffer, this->buffer.size()), sender_endpoint,
            boost::bind(&BroadcastClient::handle_read, this, boost::asio::placeholders::error, boost::asio::placeholders::bytes_transferred));
    }
}

std::optional<std::string> BroadcastClient::poll()
{
    if (this->result.empty() == false)
    {
        auto copy = this->result;
        this->result.clear();
        return copy;
    }
    return {};
}
I had a long search going, because broadcast UDP can be finicky. Then I spotted your future<void>. Not only would I not trust std::async to do what you expect (it can do pretty much anything), but there's also a potentially lethal race, and this is 99% certain to be your issue:
you launch the async task - it will start /some time in the future/
only then do you add the async_receive_from operation. If the task had already started, the queue would have been empty, so run() completes and the future is made ready. Indeed, this is visible when you add:
ioService.run();
std::clog << "End of run " << std::boolalpha << ioService.stopped() << std::endl;
It was printing
End of run true
most of the time for me. I suggest using a thread:
ioThread = std::thread([this] {
    ioService.run();
    std::clog << "End of run " << std::boolalpha << ioService.stopped() << std::endl;
});
with corresponding join:
~BroadcastClient() {
    std::clog << "~BroadcastClient()" << std::endl;
    ioThread.join();
}
To be complete, also handle exceptions (Should the exception thrown by boost::asio::io_service::run() be caught?), or use thread_pool(1), which is nice because it also replaces your io_service.
Alternatively, use a work guard (io_service::work, or executor_work_guard via make_work_guard).
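A minimal sketch of the thread_pool(1) variant (my illustration, not from the original answer; the port number is just an example):

#include <boost/asio.hpp>
#include <iostream>

namespace net = boost::asio;
using udp = net::ip::udp;

int main() {
    // One internal thread; no io_service, work guard, or manual std::thread
    // needed: the pool tracks its own outstanding work.
    net::thread_pool ctx(1);

    udp::socket socket(ctx, udp::endpoint(udp::v4(), 8888));
    char buf[4096];
    udp::endpoint from;
    socket.async_receive_from(net::buffer(buf), from,
        [](boost::system::error_code ec, std::size_t n) {
            if (!ec)
                std::cout << "got " << n << " bytes\n";
        });

    // Blocks until outstanding work (here, the single receive) completes.
    ctx.join();
}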
Now, I can't seem to make it miss packets when testing locally.
More Review
In general you want to know as early as possible when error conditions arise in your code, so report the error in handle_read, because such a condition causes the async loop to terminate. See below for the fixed handle_read.
The result buffer is not thread-safe, and you access it from multiple threads¹. That invokes Undefined Behavior. Add synchronization, or use e.g. atomic exchanges.
¹ to be sure that the poll happens on the service thread you'd have to post the poll operation to the io_service. That's not possible because the service is private
You use buffer.size() in handle_read, but that's hard-coded (4096). You probably wanted bytes_transferred:
result.append(std::begin(buffer), std::begin(buffer) + bytes_transferred);
This also avoids an unnecessary temporary. And now you don't have to reset the buffer to zeroes:
void BroadcastClient::handle_read(const boost::system::error_code& error, std::size_t bytes_transferred) {
    if (!error) {
        std::lock_guard lk(result_mx);
        result.append(std::begin(buffer), std::begin(buffer) + bytes_transferred);
        start_read();
    } else {
        std::clog << "handle_read: " << error.message() << std::endl;
    }
}
Why is socket dynamically instantiated? In fact, you should initialize it in the constructor initializer list, or, since C++11, from an NSDMI (non-static data member initializer):
uint16_t port{ 8888 };
boost::asio::io_service ioService;
udp::socket socket { ioService, { {}, port } };
There's duplication of the async_receive_from call. This calls for a start_read or similar method. Also, consider using a lambda to reduce the code and avoid relying on old-fashioned boost::bind:
void BroadcastClient::start_read() {
    socket.async_receive_from(
        boost::asio::buffer(buffer), sender_endpoint,
        [this](auto ec, size_t xfr) { handle_read(ec, xfr); });
}
Full Listing
Live On Coliru
#include <boost/asio.hpp>
#include <iostream>
#include <iomanip>
#include <thread>
#include <mutex>

using namespace std::chrono_literals;

class BroadcastClient {
    using socket_base = boost::asio::socket_base;
    using udp = boost::asio::ip::udp;

public:
    BroadcastClient();

    ~BroadcastClient() {
        std::clog << "~BroadcastClient()" << std::endl;
        socket.cancel();
        work.reset();
        ioThread.join();
    }

    std::optional<std::string> poll();

protected:
    void start_read();
    void handle_read(const boost::system::error_code& error, std::size_t bytes_transferred);

private:
    uint16_t port{ 8888 };
    boost::asio::io_service ioService;
    boost::asio::executor_work_guard<
        boost::asio::io_service::executor_type> work { ioService.get_executor() };
    udp::socket socket { ioService, { {}, port } };
    std::thread ioThread;
    std::string buffer = std::string(4096, '\0');
    std::mutex result_mx;
    std::string result;
    udp::endpoint sender_endpoint;
};

BroadcastClient::BroadcastClient() {
    socket.set_option(socket_base::broadcast(true));
    socket.set_option(socket_base::reuse_address(true));

    ioThread = std::thread([this] {
        ioService.run();
        std::clog << "Service thread, stopped? " << std::boolalpha << ioService.stopped() << std::endl;
    });

    start_read(); // actually okay now because of `work` guard
}

void BroadcastClient::start_read() {
    socket.async_receive_from(
        boost::asio::buffer(buffer), sender_endpoint,
        [this](auto ec, size_t xfr) { handle_read(ec, xfr); });
}

void BroadcastClient::handle_read(const boost::system::error_code& error, std::size_t bytes_transferred) {
    if (!error) {
        std::lock_guard lk(result_mx);
        result.append(std::begin(buffer), std::begin(buffer) + bytes_transferred);
        start_read();
    } else {
        std::clog << "handle_read: " << error.message() << std::endl;
    }
}

std::optional<std::string> BroadcastClient::poll() {
    std::lock_guard lk(result_mx);
    if (result.empty())
        return std::nullopt;
    else
        return std::move(result);
}

constexpr auto now = std::chrono::steady_clock::now;

int main() {
    BroadcastClient bcc;

    for (auto start = now(); now() - start < 3s;) {
        if (auto r = bcc.poll())
            std::cout << std::quoted(r.value()) << std::endl;

        std::this_thread::sleep_for(100ms);
    }
} // BroadcastClient destructor safely cancels the work
Tested live with
g++ -std=c++17 -O2 -Wall -pedantic -pthread main.cpp
while sleep .05; do echo -n "hello world $RANDOM" | netcat -w 0 -u 127.0.0.1 8888 ; done&
./a.out
kill %1
Prints
"hello world 18422"
"hello world 3810"
"hello world 26191hello world 10419"
"hello world 23666hello world 18552"
"hello world 2076"
"hello world 19871hello world 8978"
"hello world 1836"
"hello world 11134hello world 16603"
"hello world 3748hello world 8089"
"hello world 27946"
"hello world 14834hello world 15274"
"hello world 26555hello world 6695"
"hello world 32419"
"hello world 26996hello world 26796"
"hello world 9882"
"hello world 680hello world 29358"
"hello world 9723hello world 31163"
"hello world 3646"
"hello world 10602hello world 22562"
"hello world 18394hello world 17229"
"hello world 20028"
"hello world 14444hello world 3890"
"hello world 16258"
"hello world 28555hello world 21184"
"hello world 31342hello world 30891"
"hello world 3088"
"hello world 1051hello world 5638"
"hello world 24308hello world 7748"
"hello world 18398"
~BroadcastClient()
handle_read: Operation canceled
Service thread, stopped? true
Old answer contents which may /still/ be of interest
Wait. I noticed this is not "regular" peer-to-peer UDP.
As far as I understand, multicast works courtesy of routers. They have to maintain complex tables of "subscribed" endpoints so they know where to forward the actual packets.
Many routers struggle with these; there are built-in pitfalls around reliability, especially on WiFi etc. It would /not/ surprise me if you had a router (or rather, a topology that includes a router) that is struggling with this too and just stops "remembering" the participating endpoints in a multicast group after some time interval.
I think tables of this type have to be kept at every hop on the route (including the kernel, which may have to keep track of several processes for the same multicast group).
Some hints about this:
https://superuser.com/questions/1287485/udp-broadcasting-not-working-on-some-routers
One oft-heard piece of advice is:
if you can, use multicast for discovery, switch to unicast after
try to be specific about the bound interface (in your code you might want to replace address_v4::any() with something like lo (127.0.0.1) or whatever IP address identifies your NIC); a minimal sketch follows
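A minimal sketch of that last point (my addition; 192.168.1.10 is a made-up placeholder for the address of your NIC):

// Bind to one specific local interface instead of address_v4::any().
namespace net = boost::asio;
using udp = net::ip::udp;

net::io_service ioService;
udp::socket socket{
    ioService,
    udp::endpoint{net::ip::make_address_v4("192.168.1.10"), 8888}};
socket.set_option(net::socket_base::broadcast(true));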

Boost.Asio contrived example inexplicably blocking

I've used Boost.Asio extensively, but I've come across a problem with a unit test that I don't understand. I've reduced the problem down to a very contrived example:
#include <string>
#include <chrono>
#include <thread>
#include <mutex>
#include <condition_variable>

#include <boost/asio.hpp>

#define BOOST_TEST_MODULE My_Module
#define BOOST_TEST_DYN_LINK
#include <boost/test/unit_test.hpp>
#include <boost/test/auto_unit_test.hpp>

using namespace std::string_literals;
using namespace std::chrono_literals;

namespace BA = boost::asio;
namespace BAI = BA::ip;

BOOST_AUTO_TEST_CASE(test)
{
    std::mutex m;
    std::condition_variable cv;

    BA::io_service servicer;
    auto io_work = std::make_unique<BA::io_service::work>(servicer);
    auto thread = std::thread{[&]() {
        servicer.run();
    }};

    auto received_response = false;
    auto server_buf = std::array<std::uint8_t, 4096>{};
    auto server_sock = BAI::tcp::socket{servicer};
    auto acceptor = BAI::tcp::acceptor{servicer,
                                       BAI::tcp::endpoint{BAI::tcp::v4(), 20123}};
    acceptor.async_accept(server_sock, [&](auto&& ec) {
        if (ec) {
            BOOST_TEST_MESSAGE(ec.message());
        }
        BOOST_REQUIRE(!ec);
        BOOST_TEST_MESSAGE("Accepted connection from " << server_sock.remote_endpoint() <<
                           ", reading...");
        BA::async_read(server_sock,
                       BA::buffer(server_buf),
                       [&](auto&& ec, auto&& bytes_read) {
                           std::unique_lock<decltype(m)> ul(m);
                           received_response = true;
                           if (ec) {
                               BOOST_TEST_MESSAGE(ec.message());
                           }
                           BOOST_REQUIRE(!ec);
                           const auto str = std::string{server_buf.begin(),
                                                        server_buf.begin() + bytes_read};
                           BOOST_TEST_MESSAGE("Read: " << str);
                           ul.unlock();
                           cv.notify_one();
                       });
    });

    const auto send_str = "hello"s;
    auto client_sock = BAI::tcp::socket{servicer, BAI::tcp::v4()};
    client_sock.async_connect(BAI::tcp::endpoint{BAI::tcp::v4(), 20123},
                              [&](auto&& ec) {
                                  if (ec) {
                                      BOOST_TEST_MESSAGE(ec.message());
                                  }
                                  BOOST_REQUIRE(!ec);
                                  BOOST_TEST_MESSAGE("Connected...");
                                  BA::async_write(client_sock,
                                                  BA::buffer(send_str),
                                                  [&](auto&& ec, auto&& bytes_written) {
                                                      if (ec) {
                                                          BOOST_TEST_MESSAGE(ec.message());
                                                      }
                                                      BOOST_REQUIRE(!ec);
                                                      BOOST_TEST_MESSAGE("Written " << bytes_written << " bytes");
                                                  });
                              });

    std::unique_lock<decltype(m)> ul(m);
    cv.wait_for(ul, 2s, [&]() { return received_response; });
    BOOST_CHECK(received_response);

    io_work.reset();
    servicer.stop();
    if (thread.joinable()) {
        thread.join();
    }
}
which I compile with:
g++ -std=c++17 source.cc -l boost_unit_test_framework -pthread -l boost_system -ggdb
The output is:
Accepted connection from 127.0.0.1:51688, reading...
Connected...
Written 5 bytes
And then it times out.
Running it through the debugger shows that the async_read handler is never called. Pausing execution during the phase where it doesn't seem to be doing anything shows that the main thread is waiting on the condition_variable (cv) and the io_service thread is in an epoll_wait.
I seem to be deadlocking but can't see how.
This is how the function is defined to work: it waits for exactly the number of bytes that the buffer has space for (http://www.boost.org/doc/libs/1_62_0/doc/html/boost_asio/reference/async_read/overload1.html).
Try this one instead: http://www.boost.org/doc/libs/1_62_0/doc/html/boost_asio/reference/async_read/overload2.html
You can give it a callback to decide whether the read is complete. That could include waiting for and checking a length provided over another channel once the writer has written its message (if you've determined a deadlock-free way to do that), or sent just before the message proper.
Adding this completion condition makes it work:
[&](auto&& ec, auto&& bytes_read) {
    return bytes_read < 5 ? 5 - bytes_read : 0;
},
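To show where that goes: the completion condition is passed as the third argument to async_read, just before the handler, so the call in the test above becomes (sketched, with the original handler elided):

BA::async_read(server_sock,
               BA::buffer(server_buf),
               [&](auto&& ec, auto&& bytes_read) {
                   // completion condition: how many more bytes to wait for
                   return bytes_read < 5 ? 5 - bytes_read : 0;
               },
               [&](auto&& ec, auto&& bytes_read) {
                   // ... the original read handler from the test above ...
               });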
The answer provided by @codeshot is correct, but it is one of several solutions; which is most appropriate depends entirely on the protocol you're using across the TCP connection.
For example, in a traditional Key-Length-Value style protocol, you would do two reads:
Using boost::asio::async_read (or equivalent) to read into a fixed-length buffer to obtain the fixed-length header
Use the length specified by the header to create a buffer of the required size, and repeat step 1 using it
There's a good example of this in the chat server example code.
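A minimal sketch of that two-read pattern (my illustration, reusing the BA/BAI aliases from the question; the 4-byte big-endian length prefix and handle_message are assumptions, and it additionally needs <array>, <cstdint>, <memory>, and <vector>):

void read_message(BAI::tcp::socket& sock)
{
    // step 1: read the fixed-size header
    auto header = std::make_shared<std::array<std::uint8_t, 4>>();
    BA::async_read(sock, BA::buffer(*header),
        [&sock, header](boost::system::error_code ec, std::size_t) {
            if (ec) return;
            // decode the big-endian length prefix
            std::uint32_t len = (std::uint32_t((*header)[0]) << 24) |
                                (std::uint32_t((*header)[1]) << 16) |
                                (std::uint32_t((*header)[2]) << 8) |
                                std::uint32_t((*header)[3]);
            // step 2: read exactly `len` bytes of body
            auto body = std::make_shared<std::vector<char>>(len);
            BA::async_read(sock, BA::buffer(*body),
                [body](boost::system::error_code ec, std::size_t) {
                    if (!ec) {
                        // handle_message(*body); // hypothetical consumer
                    }
                });
        });
}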
If you were using HTTP or RTSP (the latter is what I was trying to do), then you don't know how much data is coming; all you care about is receiving a packet's worth of data (I know this is an oversimplification due to the Content-Length header in responses, chunked transfer encoding, etc., but bear with me). For this you need async_read_some (or equivalent); see the HTTP server example.
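And a minimal async_read_some loop for that don't-know-how-much case (again my illustration; consume_data is a made-up placeholder):

void do_read(BAI::tcp::socket& sock,
             std::shared_ptr<std::array<char, 4096>> buf)
{
    // completes as soon as *some* data arrives, with whatever fits
    sock.async_read_some(BA::buffer(*buf),
        [&sock, buf](boost::system::error_code ec, std::size_t n) {
            if (ec) return;                  // disconnected or cancelled
            // consume_data(buf->data(), n); // hypothetical parser
            do_read(sock, buf);              // re-arm for the next chunk
        });
}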

using boost::asio with select? blocking on TCP input OR file update

I had intended to have a thread in my program that would wait on two file descriptors: one for a socket, and a second FD describing the file system (specifically, waiting to see if a new file is added to a directory). Since I expect to rarely see either a new file added or new TCP messages coming in, I wanted one thread waiting for either input and handling whichever is detected when it occurs, rather than bothering with separate threads.
I then (finally!) got permission from the 'boss' to use Boost. So now I want to replace the basic sockets with boost::asio. Only I'm running into a small problem. It seems like Asio implemented its own version of select rather than providing an FD I could use with select directly. This leaves me uncertain how I can block on both conditions, new file and TCP input, at the same time, when one only works with select and the other doesn't seem to support it. Is there an easy workaround that I'm missing?
ASIO is best used asynchronously (that's what the name stands for): you can set up handlers for both TCP reads and file-descriptor activity, and the handlers will be called for you.
Here's a demo example to get you started (written for Linux with inotify support):
#include <iostream>
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <sys/inotify.h>

namespace asio = boost::asio;

void start_notify_handler();
void start_accept_handler();

// this stuff goes into your class, only global for the simplistic demo
asio::streambuf buf(1024);
asio::io_service io_svc;
asio::posix::stream_descriptor stream_desc(io_svc);
asio::ip::tcp::socket sock(io_svc);
asio::ip::tcp::endpoint end(asio::ip::tcp::v4(), 1234);
asio::ip::tcp::acceptor acceptor(io_svc, end);

// this gets called on file system activity
void notify_handler(const boost::system::error_code&,
                    std::size_t transferred)
{
    size_t processed = 0;
    while (transferred - processed >= sizeof(inotify_event))
    {
        const char* cdata = processed
            + asio::buffer_cast<const char*>(buf.data());
        const inotify_event* ievent =
            reinterpret_cast<const inotify_event*>(cdata);
        processed += sizeof(inotify_event) + ievent->len;
        if (ievent->len > 0 && ievent->mask & IN_OPEN)
            std::cout << "Someone opened " << ievent->name << '\n';
    }
    start_notify_handler();
}

// this gets called when someone connects to you on TCP port 1234
void accept_handler(const boost::system::error_code&)
{
    std::cout << "Someone connected from "
              << sock.remote_endpoint().address() << '\n';
    sock.close(); // dropping connection: this is just a demo
    start_accept_handler();
}

void start_notify_handler()
{
    stream_desc.async_read_some(buf.prepare(buf.max_size()),
        boost::bind(&notify_handler, asio::placeholders::error,
                    asio::placeholders::bytes_transferred));
}

void start_accept_handler()
{
    acceptor.async_accept(sock,
        boost::bind(&accept_handler, asio::placeholders::error));
}

int main()
{
    int raw_fd = inotify_init(); // error handling ignored
    stream_desc.assign(raw_fd);
    inotify_add_watch(raw_fd, ".", IN_OPEN);

    start_notify_handler();
    start_accept_handler();
    io_svc.run();
}