boost::asio UDP Broadcast Client Only Receives "fast" Packets - c++

I have written a UDP broadcast client using boost::asio. It works, but with a caveat. If I send packets very fast (at least one every 100 ms or so), it seems to receive all of them. However, if I send only a single packet, it doesn't seem to catch it. I'm using an async receive, so I can't imagine why it is not working. The data itself is fairly small and will always be less than the allocated buffer size. When it does receive the "fast" packets, they are correct and contain only the data from a single "send". In the debugger, it will properly break once per packet sent.
Header:
class BroadcastClient
{
public:
    BroadcastClient();
    std::optional<std::string> poll();

protected:
    void handle_read(const boost::system::error_code& error, std::size_t bytes_transferred);

private:
    std::future<void> ioFuture;
    std::vector<uint8_t> buffer;
    std::string result;
    boost::asio::io_service ioService;
    std::unique_ptr<boost::asio::ip::udp::socket> socket;
    uint16_t port{ 8888 };
    boost::asio::ip::udp::endpoint sender_endpoint;
};
Implementation:
BroadcastClient::BroadcastClient()
{
    this->socket = std::make_unique<boost::asio::ip::udp::socket>(
        this->ioService, boost::asio::ip::udp::endpoint(boost::asio::ip::address_v4::any(), this->port));

    this->socket->set_option(boost::asio::socket_base::broadcast(true));
    this->socket->set_option(boost::asio::socket_base::reuse_address(true));

    this->ioFuture = std::async(std::launch::async, [this] { this->ioService.run(); });
    this->buffer.resize(4096);

    this->socket->async_receive_from(
        boost::asio::buffer(this->buffer, this->buffer.size()), sender_endpoint,
        boost::bind(&BroadcastClient::handle_read, this, boost::asio::placeholders::error, boost::asio::placeholders::bytes_transferred));
}

void BroadcastClient::handle_read(const boost::system::error_code& error, std::size_t bytes_transferred)
{
    if(!error)
    {
        this->result += std::string(std::begin(buffer), std::begin(buffer) + buffer.size());
        std::fill(std::begin(buffer), std::end(buffer), 0);

        this->socket->async_receive_from(
            boost::asio::buffer(this->buffer, this->buffer.size()), sender_endpoint,
            boost::bind(&BroadcastClient::handle_read, this, boost::asio::placeholders::error, boost::asio::placeholders::bytes_transferred));
    }
}

std::optional<std::string> BroadcastClient::poll()
{
    if(this->result.empty() == false)
    {
        auto copy = this->result;
        this->result.clear();
        return copy;
    }
    return {};
}

I had a long search going, because broadcast UDP can be finicky. Then I spotted your future<void>. Not only would I not trust std::async to do what you expect (it can do pretty much anything), but there's also a potentially lethal race, and this is 99% certain to be your issue:

- you launch the async task - it will start some time in the future
- only then do you queue the async_receive_from operation

If the task had already started, the queue would have been empty, run() completes, and the future is made ready. Indeed, this is visible when you add:

ioService.run();
std::clog << "End of run " << std::boolalpha << ioService.stopped() << std::endl;

It was printing

End of run true

most of the time for me. I suggest using a thread:
ioThread = std::thread([this] {
    ioService.run();
    std::clog << "End of run " << std::boolalpha << ioService.stopped() << std::endl;
});

with corresponding join:

~BroadcastClient() {
    std::clog << "~BroadcastClient()" << std::endl;
    ioThread.join();
}
To be complete, also handle exceptions (see: Should the exception thrown by boost::asio::io_service::run() be caught?), or use thread_pool(1), which is nice because it also replaces your io_service.
Alternatively, use a work guard (io_service::work or make_work_guard).
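For illustration, a minimal sketch combining the exception handling and the work guard, assuming the member names from the question (make_work_guard requires Boost 1.66 or later):

// the guard keeps run() from returning while the handler queue is momentarily empty;
// in the full listing below it is a class member so it lives long enough
auto work = boost::asio::make_work_guard(ioService);

ioThread = std::thread([this] {
    try {
        ioService.run();
    } catch (std::exception const& e) {
        std::clog << "Service thread: " << e.what() << std::endl;
    }
});

On shutdown, reset the guard (work.reset()) and cancel outstanding operations before joining, as the destructor in the full listing below does.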
Now, I can't seem to make it miss packets when testing locally.
More Review
In general you want to know as early as possible when error conditions arise in your code, so report the error in handle_read, because such a condition causes the async loop to terminate. See below for the fixed handle_read.
The result buffer is not thread safe, and you access it from multiple threads¹. That invokes Undefined Behavior. Add synchronization, or use e.g. atomic exchanges.
¹ to be sure that the poll happens on the service thread you'd have to post the poll operation to the io_service. That's not possible because the service is private.
You use buffer.size() in handle_read, but that's hard-coded (4096). You probably wanted bytes_transferred:

result.append(std::begin(buffer), std::begin(buffer) + bytes_transferred);

This also avoids an unnecessary temporary. And now you don't have to reset the buffer to zeroes:
void BroadcastClient::handle_read(const boost::system::error_code& error, std::size_t bytes_transferred) {
    if (!error) {
        std::lock_guard lk(result_mx);
        result.append(std::begin(buffer), std::begin(buffer) + bytes_transferred);
        start_read();
    } else {
        std::clog << "handle_read: " << error.message() << std::endl;
    }
}
Why is socket dynamically instantiated? In fact, you should initialize it in the constructor initializer list, or, since C++11, from the NSDMI (non-static data member initializer):

uint16_t port{ 8888 };
boost::asio::io_service ioService;
udp::socket socket { ioService, { {}, port } };
There's duplication of the async_receive_from call. This calls for a start_read or similar method. Also, consider using a lambda to reduce the code and avoid relying on old-fashioned boost::bind:

void BroadcastClient::start_read() {
    socket.async_receive_from(
        boost::asio::buffer(buffer), sender_endpoint,
        [this](auto ec, size_t xfr) { handle_read(ec, xfr); });
}
Full Listing
Live On Coliru
#include <boost/asio.hpp>
#include <iostream>
#include <iomanip>
#include <optional>
#include <thread>
#include <mutex>

using namespace std::chrono_literals;

class BroadcastClient {
    using socket_base = boost::asio::socket_base;
    using udp = boost::asio::ip::udp;

public:
    BroadcastClient();

    ~BroadcastClient() {
        std::clog << "~BroadcastClient()" << std::endl;
        socket.cancel();
        work.reset();
        ioThread.join();
    }

    std::optional<std::string> poll();

protected:
    void start_read();
    void handle_read(const boost::system::error_code& error, std::size_t bytes_transferred);

private:
    uint16_t port{ 8888 };
    boost::asio::io_service ioService;
    boost::asio::executor_work_guard<
        boost::asio::io_service::executor_type> work { ioService.get_executor() };
    udp::socket socket { ioService, { {}, port } };

    std::thread ioThread;
    std::string buffer = std::string(4096, '\0');
    std::mutex result_mx;
    std::string result;
    udp::endpoint sender_endpoint;
};

BroadcastClient::BroadcastClient() {
    socket.set_option(socket_base::broadcast(true));
    socket.set_option(socket_base::reuse_address(true));

    ioThread = std::thread([this] {
        ioService.run();
        std::clog << "Service thread, stopped? " << std::boolalpha << ioService.stopped() << std::endl;
    });

    start_read(); // actually okay now because of `work` guard
}

void BroadcastClient::start_read() {
    socket.async_receive_from(
        boost::asio::buffer(buffer), sender_endpoint,
        [this](auto ec, size_t xfr) { handle_read(ec, xfr); });
}

void BroadcastClient::handle_read(const boost::system::error_code& error, std::size_t bytes_transferred) {
    if (!error) {
        std::lock_guard lk(result_mx);
        result.append(std::begin(buffer), std::begin(buffer) + bytes_transferred);
        start_read();
    } else {
        std::clog << "handle_read: " << error.message() << std::endl;
    }
}

std::optional<std::string> BroadcastClient::poll() {
    std::lock_guard lk(result_mx);
    if (result.empty())
        return std::nullopt;
    else
        return std::move(result);
}

constexpr auto now = std::chrono::steady_clock::now;

int main() {
    BroadcastClient bcc;

    for (auto start = now(); now() - start < 3s;) {
        if (auto r = bcc.poll())
            std::cout << std::quoted(r.value()) << std::endl;

        std::this_thread::sleep_for(100ms);
    }
} // BroadcastClient destructor safely cancels the work
Tested live with
g++ -std=c++17 -O2 -Wall -pedantic -pthread main.cpp
while sleep .05; do echo -n "hello world $RANDOM" | netcat -w 0 -u 127.0.0.1 8888 ; done&
./a.out
kill %1
Prints
"hello world 18422"
"hello world 3810"
"hello world 26191hello world 10419"
"hello world 23666hello world 18552"
"hello world 2076"
"hello world 19871hello world 8978"
"hello world 1836"
"hello world 11134hello world 16603"
"hello world 3748hello world 8089"
"hello world 27946"
"hello world 14834hello world 15274"
"hello world 26555hello world 6695"
"hello world 32419"
"hello world 26996hello world 26796"
"hello world 9882"
"hello world 680hello world 29358"
"hello world 9723hello world 31163"
"hello world 3646"
"hello world 10602hello world 22562"
"hello world 18394hello world 17229"
"hello world 20028"
"hello world 14444hello world 3890"
"hello world 16258"
"hello world 28555hello world 21184"
"hello world 31342hello world 30891"
"hello world 3088"
"hello world 1051hello world 5638"
"hello world 24308hello world 7748"
"hello world 18398"
~BroadcastClient()
handle_read: Operation canceled
Service thread, stopped? true
Old answer contents which may still be of interest
Wait. I noticed this is not "regular" peer-to-peer UDP.
As far as I understand, multicast works courtesy of routers. They have to maintain complex tables of endpoints "subscribed" to a group so they know where to forward the actual packets.
Many routers struggle with these; there are built-in pitfalls with reliability, especially on WiFi etc. It would not surprise me if you had a router (or rather a topology that includes the router) that is struggling with this too and just stops "remembering" the participating endpoints in a multicast group after some time interval.
I think tables of this type have to be kept at every hop on the route (including the kernel, which may have to keep track of several processes for the same multicast group).
Some hints about this:
https://superuser.com/questions/1287485/udp-broadcasting-not-working-on-some-routers
One oft-heard piece of advice is:
if you can, use multicast for discovery, then switch to unicast;
try to be specific about the bound interface (in your code you might want to replace address_v4::any() with something like lo (127.0.0.1), or whatever IP address identifies your NIC).
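As an illustration (not from the original answer): if this really is multicast rather than broadcast, the receiver also has to join the group, preferably on an explicit interface. A minimal sketch, where the group and interface addresses are made-up examples:

using boost::asio::ip::udp;
using boost::asio::ip::address_v4;

udp::socket socket(ioService);
udp::endpoint listen_ep(address_v4::any(), 8888);
socket.open(listen_ep.protocol());
socket.set_option(boost::asio::socket_base::reuse_address(true));
socket.bind(listen_ep);

// join the group on a specific NIC instead of relying on "any"
socket.set_option(boost::asio::ip::multicast::join_group(
    address_v4::from_string("239.255.0.1"),    // example group address
    address_v4::from_string("192.168.1.10"))); // example local interface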

Related

How to make a timeout at receiving in boost::asio udp::socket?

I have a single-threaded application which exchanges data with another one via UDP. When the other side disconnects, my socket::receive_from blocks, and I don't know how to solve this problem without turning the entire program into multi-threaded or async interactions.
I thought the following might be a solution:

std::chrono::milliseconds timeout{4};
boost::system::error_code err;

data_t buffer(kPackageMaxSize);
std::size_t size = 0;

const auto status = std::async(std::launch::async,
    [&] {
        size = socket_.receive_from(boost::asio::buffer(buffer), dst_, 0, err);
    }
).wait_for(timeout);

switch (status)
{
case std::future_status::timeout: /*...*/ break;
}

But I ran into a new problem: Qt Creator (GDB 11.1) began to crash while debugging (I don't have the ability to try anything else yet). And even when run without the debugger, the solution doesn't always work.
PS. As for "it doesn't work when debugging": debugging (specifically breakpoints) obviously changes timing. Also keep in mind that network operations have varying latency, and UDP isn't a guaranteed protocol: messages may not be delivered.
Asio stands for "Asynchronous IO". As you might suspect, this means that asynchronous IO is a built-in feature; it's the entire purpose of the library. See overview/core/async.html: Concurrency Without Threads.
There's no need to complicate things with std::async. In your case I'd suggest using async_receive_from with use_future, as it is closest to the model you opted for:
Live On Coliru
#include <boost/asio.hpp>
#include <iostream>
#include <iomanip>

namespace net = boost::asio;
using net::ip::udp;
using namespace std::chrono_literals;

constexpr auto kPackageMaxSize = 65520;
using data_t = std::vector<char>;

int main() {
    net::thread_pool ioc;
    udp::socket socket_(ioc, udp::v4());
    socket_.bind({{}, 8989});

    udp::endpoint ep;
    data_t buffer(kPackageMaxSize);

    auto fut =
        socket_.async_receive_from(net::buffer(buffer), ep, net::use_future);

    switch (fut.wait_for(4ms)) {
    case std::future_status::ready: {
        buffer.resize(fut.get()); // never blocks here
        std::cout << "Received " << buffer.size() << " bytes: "
                  << std::quoted(
                         std::string_view(buffer.data(), buffer.size()))
                  << "\n";
        break;
    }
    case std::future_status::timeout:
    case std::future_status::deferred: {
        std::cout << "Timeout\n";
        socket_.cancel(); // stop the IO operation
        // fut.get() would throw system_error(net::error::operation_aborted)
        break;
    }
    }

    ioc.join();
}
The Coliru output:
Received 12 bytes: "Hello World
"
Locally, both the timeout and the successful path can be demonstrated.

Close Boost Websocket from Server side, C++, tcp::acceptor accept() timeout?

UPDATE:
Well, it appears that I need to address my issue with an asynchronous implementation. I will update my posting with a new direction once I've completed testing.
Original:
I'm currently writing a multi-server application that will collect, share, and request information from multiple machines. In some cases, Machine A will request information from Machine B but will need to send it to Machine C, which will reply to A. Without getting too deep into what the application is going to do, I need some help with my client application.
I have my client application designed with two threads. I used this example from boost as the basis for my design.
Thread one will open a client websocket with Machine-A; it will stream a series of data points and commands. Here is a stripped-down version of my code:
#include "Poco/Clock.h"
#include "Poco/Task.h"
#include "Poco/Thread.h"
#include <boost/asio.hpp>
#include <boost/beast.hpp>
#include <jsoncons/json.hpp>
namespace beast = boost::beast; // from <boost/beast.hpp>
namespace http = beast::http; // from <boost/beast/http.hpp>
namespace websocket = beast::websocket; // from <boost/beast/websocket.hpp>
namespace net = boost::asio; // from <boost/asio.hpp>
using tcp = net::ip::tcp; // from <boost/asio/ip/tcp.hpp>
class ResponseChannel : public Poco::Runnable {
void do_session(tcp::socket socket)
{
try {
websocket::stream<tcp::socket> ws{std::move(socket)};
ws.set_option(websocket::stream_base::decorator(
[](websocket::response_type& res) {
res.set(http::field::server,
std::string(BOOST_BEAST_VERSION_STRING) +
" websocket-server-sync");
}));
ws.accept();
for (;;) {
beast::flat_buffer buffer;
ws.read(buffer);
if (ws.got_binary()) {
// do something
}
}
} catch (beast::system_error const& se) {
if (se.code() != websocket::error::closed) {
std::cerr << "do_session1 ->: " << se.code().message()
<< std::endl;
return;
}
} catch (std::exception const& e) {
std::cerr << "do_session2 ->: " << e.what() << std::endl;
return;
}
}
virtual void run()
{
auto const address = net::ip::make_address(host);
auto const port = static_cast<unsigned short>(respPort);
try {
net::io_context ioc{1};
tcp::acceptor acceptor{ioc, {address, port}};
tcp::socket socket{ioc};
for (; keep_running;) {
acceptor.accept(socket);
std::thread(&ResponseChannel::do_session, this,
std::move(socket))
.detach();
}
} catch (const std::exception& e) {
std::cout << "run: " << e.what() << std::endl;
}
}
void _terminate() { keep_running = false; }
public:
std::string host;
int respPort;
bool keep_running = true;
int responseCount = 0;
std::vector<long long int> latency_times;
long long int time_sum;
Poco::Clock* responseClock;
};
int main()
{
using namespace std::chrono_literals;
Poco::Clock clock = Poco::Clock();
Poco::Thread response_thread;
ResponseChannel response_channel;
response_channel.responseClock = &clock;
response_channel.host = "0.0.0.0";
response_channel.respPort = 8080;
response_thread.start(response_channel);
response_thread.setPriority(Poco::Thread::Priority::PRIO_HIGH);
// doing some work here. work will vary depending on command-line arguments
std::this_thread::sleep_for(30s);
response_channel.keep_running = false;
response_thread.join();
}
The way I have designed the multiple machines works as expected regarding sending commands to Machine-B and receiving results from Machine-C.
The issue I'm facing is closing out thread 2, which contains my local response channel.
I went back and forth between Poco::Thread and Poco::Task, but I decided that I do not want to use Task, as it would be a mistake to be able to close the 2nd thread/task from the main thread. I need to know that all packets have been received before closing down the 2nd thread.
So I need to close things down only once I have received a websocket::error::closed flag from Machine-C. Shutting down the websocket's detached thread is no issue, as when the flag arrives it takes care of that for me.
However, as part of the loop that reconnects after a closed socket, the thread just waits for a new connection:

acceptor.accept(socket);

It's blocking, and from the documentation there doesn't seem to be a timeout feature. I see that there is a close option, but my attempt to use close simply threw an exception, which ultimately added complexity I didn't want.
Ultimately, I want the server to continuously loop through a series of connections from both Machine-B and Machine-C, and shut down only once my client application has ended. The last thing I do before waiting for the Poco::Thread to complete is to set the flag that I no longer want the websocket server to run.
I've put that flag check before the blocking accept() call. This would only work with perfect timing: after the flag goes up, a new connection would have to be opened and then closed before the loop comes back around to wait on accept() again.
Ideally there would be a timeout, so that the loop would first check whether it timed out, allowing a periodic check of whether I want the thread to remain open.
Has anyone ever run into this?
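For what it's worth, here is a sketch (not from the original post) of one common workaround: replace the blocking accept with async_accept and drive the io_context in short slices with run_for, so the keep_running flag is re-checked periodically. It reuses the names from run() above:

net::io_context ioc{1};
tcp::acceptor acceptor{ioc, {address, port}};

for (; keep_running;) {
    tcp::socket socket{ioc};
    boost::system::error_code accept_ec = net::error::would_block;
    acceptor.async_accept(socket, [&](boost::system::error_code ec) { accept_ec = ec; });

    // run handlers for up to a second at a time, re-checking the flag in between
    while (keep_running && accept_ec == net::error::would_block) {
        ioc.run_for(std::chrono::seconds(1));
        ioc.restart();
    }

    if (accept_ec == net::error::would_block) { // shutting down: abandon the accept
        acceptor.cancel();
        ioc.run(); // let the aborted handler run to completion
        break;
    }
    if (!accept_ec)
        std::thread(&ResponseChannel::do_session, this, std::move(socket)).detach();
}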

C++ multithreading closes TCP connection

I work on a C++ server where I wait for a network connection. If I get one, I put the socket into a new thread and listen for further input. But the problem is that as soon as I have the socket in a new thread, the TCP connection is disconnected. I'm using the SFML library.
Here's some code:
main.cpp:
int main() {
    std::list<std::thread> user_connections;
    sf::TcpListener listener;
    listener.listen(PORT);

    while (true)
    {
        sf::TcpSocket client;
        listener.accept(client);

        Protocol user_connection;
        std::thread new_con(&Protocol::connect, &user_connection, std::ref(client));
        new_con.detach();
        user_connections.push_back(std::move(new_con)); // user_connections is a list
    }
}
protocol.cpp:
class Protocol {
public:
    void connect(sf::TcpSocket& client)
    {
        std::cout << "Address: " << client.getRemoteAddress() << ":" << client.getRemotePort() << std::endl;
    }
};
This prints out:
Address: 0.0.0.0:0
And if I try to send any kind of message, I get status 4, which according to the documentation is disconnected.
EDIT:
According to @Ted Lyngmo it's because I need to put client in a list, because otherwise it goes out of scope. Now if I try to put it in a list via:

std::list<sf::TcpSocket> clients; // executed before the while loop
// [...]
clients.push_back(client); // in the while loop

I get the error: (pastebin).
This is built on your current threaded code. It may be a better idea to use a single-threaded design and use sf::SocketSelector to wait for events on the listener and all the connected clients instead (a sketch of that alternative follows the listing below).
In this lazy solution, disconnected clients will not be removed from the server's list of clients until a new client is connected.
I've tried to explain it with comments in the code, which is an echoing kind of server, so you can telnet to it, send messages and get them back.
#include <SFML/Network.hpp>
#include <atomic>
#include <iostream>
#include <list>
#include <thread>

constexpr uint16_t PORT = 2048; // what you have in your code.

// A simple struct to keep a client and thread
struct client_thread {
    sf::TcpSocket client{};
    std::thread thread{};
    // The main thread can check "done" to remove this client_thread from its list:
    std::atomic<bool> done{false};

    ~client_thread() {
        // instead of detaching, join()
        if(thread.joinable()) thread.join();
    }
};

// the connect function gets a reference to a client_thread instead
void connect(client_thread& clith) {
    constexpr std::size_t BufSize = 1024;
    auto& [client, thread, done] = clith; // for convenience

    std::cout << "thread: Address: " << client.getRemoteAddress() << ":"
              << client.getRemotePort() << std::endl;

    std::string buffer(BufSize, '\0');
    std::size_t received;

    while(client.receive(buffer.data(), buffer.size(), received) == sf::Socket::Done) {
        // remove ASCII control chars (cr and newline etc.)
        while(received && buffer[received - 1] < ' ') --received;
        buffer.resize(received);
        std::cout << buffer << std::endl;

        // send something back
        buffer = "You sent >" + buffer + "<\n";
        client.send(buffer.c_str(), buffer.size());

        // restore the size
        buffer.resize(BufSize);
    }
    std::cout << "thread: client disconnected\n";
    client.disconnect();

    // set done to true so the main thread can remove the client_thread
    done = true;
}

int main() {
    sf::TcpListener listener;
    // check that listening actually works
    if(listener.listen(PORT) != sf::Socket::Done) return 1;

    // now a list of client_thread instead:
    std::list<client_thread> user_connections;

    while(true) {
        // create a client_thread to use when listening
        auto& clith = user_connections.emplace_back();
        auto& [client, thread, _] = clith; // for convenience

        std::cout << "main: listening ...\n";
        sf::Socket::Status status = listener.accept(client);

        if(status == sf::Socket::Done) {
            std::cout << "main: got connection\n";
            thread = std::thread(connect, std::ref(clith));
        } else {
            std::cout << "main: accept not done\n";
        }

        // remove disconnected clients, pre C++20
        for(auto it = user_connections.begin(); it != user_connections.end();) {
            // check the atomic bool in all threads
            if(it->done) {
                std::cout << "main: removing old connection\n";
                it = user_connections.erase(it);
            } else {
                ++it;
            }
        }
        // remove disconnected clients, >= C++20
        //
        // std::erase_if(user_connections,
        //               [](auto& clith) -> bool { return clith.done; });
    }
}
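As an aside, here is a minimal sketch of the single-threaded sf::SocketSelector alternative mentioned above (illustrative only, assuming SFML 2.x):

#include <SFML/Network.hpp>
#include <iostream>
#include <list>

int main() {
    sf::TcpListener listener;
    if (listener.listen(2048) != sf::Socket::Done) return 1;

    std::list<sf::TcpSocket> clients;
    sf::SocketSelector selector;
    selector.add(listener); // the selector watches the listener too

    while (true) {
        // block (with a timeout) until the listener or any client has activity
        if (!selector.wait(sf::seconds(1))) continue;

        if (selector.isReady(listener)) {
            auto& client = clients.emplace_back(); // construct in place: no copy/move needed
            if (listener.accept(client) == sf::Socket::Done)
                selector.add(client);
            else
                clients.pop_back();
        }

        for (auto it = clients.begin(); it != clients.end();) {
            char buf[1024];
            std::size_t received = 0;
            if (selector.isReady(*it) &&
                it->receive(buf, sizeof(buf), received) != sf::Socket::Done) {
                selector.remove(*it); // disconnected or error: drop the client
                it = clients.erase(it);
            } else {
                if (received) std::cout.write(buf, received); // got data
                ++it;
            }
        }
    }
}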
Edit regarding your edited question where you're trying to put the client in a list:
You're trying to copy the sf::TcpSocket, and it's not copyable. What's worse, it's not even movable. The reason the code in my answer works is that it avoids both copying and moving by using std::list::emplace_back to construct the element in place in the list.
Apparently both sf::TcpSocket client and Protocol user_connection are destroyed. It's no use to only keep the thread alive; your thread only holds references to client and user_connection, but both of them are destroyed soon after your thread is created (and maybe before it has even started running).
I read a little bit about the SFML library and unfortunately, at least the client, which is an object of TcpSocket, is neither copyable nor movable. The SFML library must be a very old library; any modern socket library will design sockets to be at least movable, meaning that you can move your socket into the thread, or move it into the std::list or std::vector you created.
So using the SFML library, which was written without modern C++11 support (copy & move semantics were introduced in C++11), together with the C++11 thread library (std::thread) will be quite painful.
You can probably use std::shared_ptr to hold a newly created protocol & client, and pass the shared_ptr into the thread or into the list you created.
I don't know what Protocol exactly does; a rough sketch is as follows:

std::shared_ptr<TcpSocket> client = std::make_shared<TcpSocket>();
listener.accept(*client);

std::shared_ptr<Protocol> protocol = std::make_shared<Protocol>();

// copy the pointers into the thread; the objects are deleted after the thread is done
std::thread new_con([client, protocol]() { protocol->connect(*client); });

or protocol can probably be defined inside the thread:

std::shared_ptr<TcpSocket> client = std::make_shared<TcpSocket>();
listener.accept(*client);

std::thread new_con([client]() {
    Protocol protocol;
    protocol.connect(*client);
});

asio async operations aren't processed

I am following ASIO's async_tcp_echo_server.cpp example to write a server.
My server logic looks like this (.cpp part):
1. Server startup:

bool Server::Start()
{
    mServerThread = std::thread(&Server::ServerThreadFunc, this, std::ref(ios));
    // ios is asio::io_service
}
2. Init acceptor and listen for incoming connections:

void Server::ServerThreadFunc(io_service& service)
{
    tcp::endpoint endp{ address::from_string(LOCAL_HOST), MY_PORT };
    mAcceptor = acceptor_ptr(new tcp::acceptor{ service, endp });

    // Add a job to start accepting connections.
    StartAccept(*mAcceptor);

    // Process event loop. Hang here till service terminated.
    service.run();

    std::cout << "Server thread exiting." << std::endl;
}
3. Accept a connection and start reading from the client:

void Server::StartAccept(tcp::acceptor& acceptor)
{
    acceptor.async_accept([&](std::error_code err, tcp::socket socket)
    {
        if (!err)
        {
            std::make_shared<Connection>(std::move(socket))->StartRead(mCounter);
            StartAccept(acceptor);
        }
        else
        {
            std::cerr << "Error:" << "Failed to accept new connection" << err.message() << std::endl;
            return;
        }
    });
}

void Connection::StartRead(uint32_t frameIndex)
{
    asio::async_read(mSocket, asio::buffer(&mHeader, sizeof(XHeader)),
        std::bind(&Connection::ReadHandler, shared_from_this(),
            std::placeholders::_1, std::placeholders::_2, frameIndex));
}
So the Connection instance finally triggers the ReadHandler callback, where I perform the actual read and write:
void Connection::ReadHandler(const asio::error_code& error, size_t bytes_transfered, uint32_t frameIndex)
{
    if (bytes_transfered == sizeof(XHeader))
    {
        uint32_t reply;
        if (mHeader.code == 12345)
        {
            reply = (uint32_t)12121;
            size_t len = asio::write(mSocket, asio::buffer(&reply, sizeof(uint32_t)));
        }
        else
        {
            reply = (uint32_t)0;
            size_t len = asio::write(mSocket, asio::buffer(&reply, sizeof(uint32_t)));
            this->mSocket.shutdown(tcp::socket::shutdown_both);
            return;
        }
    }

    while (mSocket.is_open())
    {
        XPacket packet;
        packet.dataSize = rt->buff.size();
        packet.data = rt->buff.data();

        std::vector<asio::const_buffer> buffers;
        buffers.push_back(asio::buffer(&packet.dataSize, sizeof(uint64_t)));
        buffers.push_back(asio::buffer(packet.data, packet.dataSize));

        auto self(shared_from_this());
        asio::async_write(mSocket, buffers,
            [this, self](const asio::error_code error, size_t bytes_transfered)
            {
                if (error)
                {
                    ERROR(200, "Error sending packet");
                    ERROR(200, error.message().c_str());
                }
            }
        );
    }
}
Now, here is the problem. The server receives data from the client and replies (using synchronous asio::write) just fine. But when it comes to asio::async_read or asio::async_write inside the while loop, the lambda completion callback never gets triggered, unless I put io_context().run_one(); immediately after it. I don't understand why I see this behaviour. I do call io_service.run() right after the acceptor init, so it blocks there till the server exits. The only difference between my code and the asio example, as far as I can tell, is that I run my logic from a custom thread.
Your callback isn't returning, preventing the event loop from executing other handlers.
In general, if you want an asynchronous flow, you chain callbacks: e.g. the callback checks is_open(), and if true calls async_write() with itself as the completion handler.
In either case, the callback returns.
This allows the event loop to run, calling your callback, and so on.
In short, you should make sure your asynchronous callbacks always return in a reasonable time frame.
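For illustration, a sketch in terms of the question's names (not a drop-in fix): the while loop becomes a chain in which each completion handler issues the next write and then returns. mPacketBuffer is assumed here to be a member, so it outlives the asynchronous operation:

void Connection::StartWriteLoop() {
    auto self(shared_from_this());
    asio::async_write(mSocket, asio::buffer(mPacketBuffer),
        [this, self](const asio::error_code& error, std::size_t /*bytes_transfered*/) {
            if (!error && mSocket.is_open())
                StartWriteLoop(); // chain the next write instead of looping
            // the handler returns either way, so the event loop keeps running
        });
}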

boost::asio::streambuf asserts "iterator out of bounds"

The client sends the server about 165 kB of data. At first all is fine.
But when the client sends the same data once again (165 kB), I get an assert on the server side.
The assert contains information about "iterator out of bounds".
On the call stack there is some information about the read_until method.
So I think that I made a mistake somewhere.
The code of the asynchronous TCP server is below.
Code for handle_read:
void Session::handle_read(const boost::system::error_code& a_error,
                          size_t a_nbytestransferred)
{
    if (!a_error)
    {
        std::ostringstream dataToRetrive;
        dataToRetrive << &m_bufferRead;

        boost::thread threads(boost::bind(retriveMessageFromClient,
            shared_from_this(), dataToRetrive.str()));

        boost::asio::async_write(m_socket, m_bufferWrite,
            boost::bind(&Session::handle_write,
                shared_from_this(), boost::asio::placeholders::error));
    }
    else
        disconnect();
}
Code for handle_write:
void Session::handle_write(const boost::system::error_code& a_error)
{
    if (!a_error)
    {
        boost::asio::async_read_until(m_socket,
            m_bufferRead, boost::regex(G_strREQUESTEND),
            boost::bind(&Session::handle_read, shared_from_this(),
                boost::asio::placeholders::error,
                boost::asio::placeholders::bytes_transferred));
    }
    else
        disconnect();
}
Both m_bufferRead and m_bufferWrite are members of class Session:

class Session...
    boost::asio::streambuf m_bufferRead;
    boost::asio::streambuf m_bufferWrite;
Update
I detected that the problem lies in another place in my code.
After the thread finishes its tasks, the method do_writeMessage() is called.
Thread function:
void retriveMessageFromClient(boost::shared_ptr<Session>& A_spSesion, std::string A_strDataToRetrive)
{
    try
    {
        std::string strAnswer;
        bool bFind = (A_strDataToRetrive.find(G_REGEX_BIG_FILE_BEGIN) != std::string::npos);

        if (bFind) // Write large data to osFile
        {
            A_strDataToRetrive = boost::regex_replace(A_strDataToRetrive,
                boost::regex(G_REGEX_BIG_FILE_BEGIN), std::string(""));

            std::string strClientFolder = str(boost::format("%1%%2%")
                % CLIENT_PRE_FOLDER_FILE % A_spSesion->getIdentifier());
            std::string strClientFile = str(boost::format("%1%\\%2%%3%")
                % strClientFolder % strClientFolder % CLIENT_EXTENSION);

            if (boost::filesystem::exists(strClientFolder))
                boost::filesystem::remove_all(strClientFolder);
            else
                boost::filesystem::create_directory(strClientFolder);

            std::ofstream osFile(strClientFile.c_str());
            osFile << A_strDataToRetrive;
            osFile.close();

            strAnswer = str(boost::format(G_FILE_WAS_WRITE) % strClientFile);
        }
        else
        {
            double dResult = sin(30.0 * 3.14 / 180);
            strAnswer = str(boost::format(G_OPERATION_RESULT) % dResult);
        }

        // Sleep thread
        boost::xtime timeToSleep;
        boost::xtime_get(&timeToSleep, boost::TIME_UTC);
        timeToSleep.sec += 2;
        boost::this_thread::sleep(timeToSleep);

        A_spSesion->do_writeMessage(strAnswer);
    }
    catch (std::exception& e)
    {
        std::cerr << THREAD_PROBLEM << e.what() << "\n";
    }
}
Session do_writeMessage:

void Session::do_writeMessage(const std::string& A_strMessage)
{
    m_strMessage = A_strMessage;
    m_strMessage += G_strRESPONSEEND;

    // m_socket.send(boost::asio::buffer(m_strMessage)); // this works correctly
    m_socket.async_send(boost::asio::buffer(m_strMessage),
        boost::bind(&Session::handle_write, shared_from_this(),
            boost::asio::placeholders::error)); // the assert fires after this
}
So finally I have a problem with async_send...
UPDATED
TCPAsyncServer constructor:

TCPAsyncServer::TCPAsyncServer(boost::asio::io_service& A_ioService, short port)
    : m_ioService(A_ioService), m_lIDGenerator(0),
      m_clientSocket(m_ioService, tcp::endpoint(tcp::v4(), port))
{
    SessionPtr newSession(new Session(m_ioService, m_mapSessions, ++m_lIDGenerator));
    m_clientSocket.async_accept(newSession->getSocket(),
        boost::bind(&TCPAsyncServer::handle_ClientAccept, this,
            newSession, boost::asio::placeholders::error));
}

Session constructor:

Session::Session(boost::asio::io_service& A_ioService, std::map<long, boost::shared_ptr<Session> >& A_mapSessions, long A_lId)
    : m_socket(A_ioService), m_mapSessions(A_mapSessions), m_lIdentifier(A_lId), m_ioService(A_ioService)
{}
Session members:

std::map<long, boost::shared_ptr<Session> >& m_mapSessions;
long m_lIdentifier;
boost::asio::ip::tcp::socket m_socket;
boost::asio::io_service& m_ioService;
You need to use prepare, consume, and commit when using an asio::streambuf to read from and write to a socket. The documentation describes this with an example. It's not obvious from your sample code whether you are doing that.
writing

boost::asio::streambuf b;
std::ostream os(&b);
os << "Hello, World!\n";

// try sending some data in input sequence
size_t n = sock.send(b.data());

b.consume(n); // sent data is removed from input sequence

reading

boost::asio::streambuf b;

// reserve 512 bytes in output sequence
boost::asio::streambuf::mutable_buffers_type bufs = b.prepare(512);

size_t n = sock.receive(bufs);

// received data is "committed" from output sequence to input sequence
b.commit(n);

std::istream is(&b);
std::string s;
is >> s;
If you are using async_read / async_read_until you don't need to specify a size for the streambuf, but you do need to ensure the data you read into it is not greater than its maximum allowed size. In relation to the "iterator out of bounds" issue: I have found that telling asio to read when it's already reading causes a race condition on the streambuf asio reads into, which results in the assertion error
Assert "iterator out of bounds"
You can use something like:

strand_.wrap(boost::bind(&your_class::handle_read, this,
    asio::placeholders::error, asio::placeholders::bytes_transferred))

to help synchronize your threads, but you must be careful not to 'wrap' something that is already running with access to shared data.
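For context, an illustrative sketch (not from the original answer) of where the wrapped handler goes; strand_ is assumed to be a member of Session, e.g. boost::asio::io_service::strand strand_{m_ioService}:

boost::asio::async_read_until(m_socket,
    m_bufferRead, boost::regex(G_strREQUESTEND),
    strand_.wrap(boost::bind(&Session::handle_read, shared_from_this(),
        boost::asio::placeholders::error,
        boost::asio::placeholders::bytes_transferred)));

Handlers wrapped by the same strand are guaranteed not to run concurrently.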
HTH