Applying a timeout to UDP call receive_from in ASIO - c++

I have the following bit of ASIO code, which synchronously reads UDP packets. The problem is: if no data packets of the given size have arrived within a given time frame (30 seconds), I'd like the receive_from function to return with some kind of error to signify a timeout.
for (;;)
{
    boost::array<char, 1000> recv_buf;
    udp::endpoint remote_endpoint;
    asio::error_code error;
    socket.receive_from(asio::buffer(recv_buf), // <-- require timeout
                        remote_endpoint, 0, error);
    if (error && error != asio::error::message_size)
        throw asio::system_error(error);
    std::string message = make_daytime_string();
    asio::error_code ignored_error;
    socket.send_to(asio::buffer(message),
                   remote_endpoint, 0, ignored_error);
}
Looking at the documentation, none of the UDP-oriented calls supports a timeout mechanism.
What is the correct (and, if possible, portable) way to apply a timeout to synchronous UDP calls in ASIO?

As far as I know, this is not possible. By running a synchronous receive_from you've blocked execution inside the recvmsg syscall (declared in <sys/socket.h>).
As far as portability goes, I cannot speak for Windows, but a Linux/BSD C-flavoured solution would look like this:
void SignalHandler(int signal) {
    // do what you need to do, possibly informing about timeout and calling exit()
}
...
struct sigaction signal_action;
signal_action.sa_flags = 0;
sigemptyset(&signal_action.sa_mask);
signal_action.sa_handler = SignalHandler;
if (sigaction(SIGALRM, &signal_action, NULL) == -1) {
    // handle error
}
...
int timeout_in_seconds = 5;
alarm(timeout_in_seconds);
...
socket.receive_from(asio::buffer(recv_buf), remote_endpoint, 0, error);
...
alarm(0);
If this is not at all feasible, I would recommend going fully asynchronous and running everything in a boost::asio::io_service.
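A minimal sketch of that approach (assuming the question's asio namespace; the daytime port, the 30-second constant, and the lambda handlers are illustrative):
asio::io_service io_service;
udp::socket socket(io_service, udp::endpoint(udp::v4(), 13));
asio::steady_timer timer(io_service);

boost::array<char, 1000> recv_buf;
udp::endpoint remote_endpoint;

// Arm the deadline first; if it fires before a packet arrives,
// cancel the pending receive.
timer.expires_from_now(std::chrono::seconds(30));
timer.async_wait([&](const asio::error_code& ec) {
    if (!ec)
        socket.cancel();
});

socket.async_receive_from(asio::buffer(recv_buf), remote_endpoint,
    [&](const asio::error_code& ec, std::size_t bytes_recvd) {
        timer.cancel(); // packet (or error) arrived; disarm the deadline
        if (ec == asio::error::operation_aborted) {
            // Timed out: no packet within 30 seconds.
        } else if (!ec || ec == asio::error::message_size) {
            // Handle bytes_recvd bytes as in the synchronous version.
        }
    });

io_service.run(); // returns once both handlers have completed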

Recv() returning SOCKET_ERROR when I connect a client to the server instead of blocking and waiting for message

I am relatively new to network programming and multithreading in C++. Currently my recv() call returns an unknown error. I'm not quite sure where the error is coming from at the moment and would appreciate some help.
I used PuTTY to connect to the server locally.
int Threaded_TCPListener::Init()
{
    // Initializing WinSock
    WSADATA wsData;
    WORD ver = MAKEWORD(2,2);
    int winSock = WSAStartup(ver, &wsData);
    if(winSock != 0)
        return winSock;
    // Creating listening socket
    this->socket = ::socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    if(this->socket == INVALID_SOCKET)
        return WSAGetLastError();
    // Fill sockaddr with ip addr and port
    sockaddr_in hint;
    hint.sin_family = AF_INET;
    hint.sin_port = htons(this->port);
    inet_pton(AF_INET, this->ipAddress, &hint.sin_addr);
    // Bind hint to socket
    if(bind(this->socket, (sockaddr*)&hint, sizeof(hint)) == SOCKET_ERROR)
        return WSAGetLastError();
    // Start listening on socket
    if(listen(this->socket, SOMAXCONN) == SOCKET_ERROR)
        return WSAGetLastError();
    // Accept first client
    this->createAcceptThread();
    return 0;
}
int Threaded_TCPListener::Run()
{
    bool isRunning = true;
    // Read from all clients
    std::vector<std::thread> threads;
    threads.reserve(this->clients.size());
    // Recv from client sockets
    for (int i=0; i < this->clients.size(); ++i)
    {
        threads.emplace_back(std::thread(&Threaded_TCPListener::receiveFromSocket, this, socket));
    }
    // Wait for all threads to finish
    for(std::thread& t : threads)
    {
        t.detach();
    }
    return 0;
}
void Threaded_TCPListener::onMessageReceived(int clientSocket, const char* msg, int length)
{
    Threaded_TCPListener::broadcastToClients(clientSocket, msg, length);
    std::thread t(&Threaded_TCPListener::receiveFromSocket, this, clientSocket);
    t.detach();
    return;
}
void Threaded_TCPListener::sendMessageToClient(int clientSocket, const char * msg, int length)
{
    send(clientSocket, msg, length, 0);
    return;
}
void Threaded_TCPListener::broadcastToClients(int senderSocket, const char * msg, int length)
{
    std::vector<std::thread> threads;
    threads.reserve(clients.size());
    // Iterate over all clients
    for (int sendSock : this->clients)
    {
        if(sendSock != senderSocket)
            threads.emplace_back(std::thread(&Threaded_TCPListener::sendMessageToClient, this, sendSock, msg, length));
    }
    // Wait for all threads to finish
    for(std::thread& t : threads)
        t.join();
    return;
}
void Threaded_TCPListener::createAcceptThread()
{
    // Start accepting clients on a new thread
    this->listeningThread = std::thread(&Threaded_TCPListener::acceptClient, this);
    this->listeningThread.detach();
    return;
}
void Threaded_TCPListener::acceptClient()
{
    int client = accept(this->socket, nullptr, nullptr);
    // Error
    if(client == INVALID_SOCKET)
    {
        std::printf("Accept Err: %d\n", WSAGetLastError());
    }
    // Add client to clients queue
    else
    {
        // Add client to queue
        this->clients.emplace(client);
        // Client Connect Confirmation
        onClientConnected(client); // Prints msg on server
        // Create another thread to accept more clients
        this->createAcceptThread();
    }
    return;
}
void Threaded_TCPListener::receiveFromSocket(int receivingSocket)
{
    // Byte storage
    char buff[MAX_BUFF_SIZE];
    // Clear buff
    memset(buff, 0, sizeof(buff));
    // Receive msg
    int bytesRecvd = recv(receivingSocket, buff, MAX_BUFF_SIZE, 0);
    if(bytesRecvd <= 0)
    {
        char err_buff[1024];
        strerror_s(err_buff, bytesRecvd);
        std::cerr << err_buff;
        // Close client
        this->clients.erase(receivingSocket);
        closesocket(receivingSocket);
        onClientDisconnected(receivingSocket); // Prints msg on server
    }
    else
    {
        onMessageReceived(receivingSocket, buff, bytesRecvd);
    }
}
I am trying to create a multithreaded TCP 'server' that will handle incoming clients by having an accept thread continuously running (listening for new connections), plus one thread blocked in recv() for each client connected to the server.
Your Init looks fine:
create socket, bind it, listen on it, start accept thread
Your accept thread's acceptClient looks sort of OK:
print some message
add the client socket to clients queue
create a new accept thread
Your Run makes no sense:
create one thread per element in clients to receive from the listening socket
It looks like you are spawning a new thread for every single socket action. That is a pretty wasteful design. As soon as the thread is done it can go back to doing something else.
So creating a new accept thread in acceptClient is a waste; you could just loop back to the beginning and ::accept the next client, like so:
acceptClient() {
    while (alive) {
        int client = accept(socket, ...);
        createClientHandler(client);
    }
}
What seems to be missing is spawning a new client thread to service the client socket. You currently do this in Run, but that's before any of the clients are actually accepted. And you do it for the wrong socket! Instead, you should be spawning the receiveFromSocket threads in acceptClient, passing them the client socket. So that's a bug.
In your receiveFromSocket you also need not create another thread to call receiveFromSocket again -- just loop back to the beginning, as in the sketch below.
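Based on the question's own code, that per-client receive loop could look roughly like this (a sketch only: onMessageReceived must then no longer spawn a new receive thread, and access to clients still needs synchronization):
void Threaded_TCPListener::receiveFromSocket(int clientSocket)
{
    char buff[MAX_BUFF_SIZE];
    // One blocking recv loop per client; the thread ends when the
    // client disconnects or recv fails.
    while (true)
    {
        int bytesRecvd = recv(clientSocket, buff, MAX_BUFF_SIZE, 0);
        if (bytesRecvd <= 0)
        {
            this->clients.erase(clientSocket);
            closesocket(clientSocket);
            onClientDisconnected(clientSocket);
            break;
        }
        onMessageReceived(clientSocket, buff, bytesRecvd);
    }
}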
The biggest concern with this thread-per-action design is that you are spawning sender threads on every incoming message. This means you could actually have several sender threads attempting to ::send on the same TCP socket. That's not very safe.
The order of calls made to WSASend is also the order in which the buffers are transmitted to the transport layer. WSASend should not be called on the same stream-oriented socket concurrently from different threads, because some Winsock providers may split a large send request into multiple transmissions, and this may lead to unintended data interleaving from multiple concurrent send requests on the same stream-oriented socket.
https://learn.microsoft.com/en-us/windows/desktop/api/winsock2/nf-winsock2-wsasend
Similarly, instead of spawning threads in broadcastToClients, I suggest you just spawn one persistent sender thread per client socket in acceptClient (together with the receiveFromSocket thread within some createClientHandler).
To communicate with the sender threads you should use thread-safe blocking queues. Each sender thread would look like this:
while (alive) {
    msg = queue.next_message();
    send(client_socket, msg);
}
Then on message received you just do:
for (client : clients) {
    client.queue.put_message(msg);
}
So to summarize, to handle each client you need a structure like this:
struct Client {
    int client_socket;
    BlockingQueue queue;
    // optionally if you want to keep track of your threads
    // to be able to safely clean up
    std::thread recv_thread, send_thread;
};
Safe cleanup is a whole other story.
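BlockingQueue is not a standard library type; a minimal sketch of one (assuming C++11, with the put_message / next_message names used above) could be:
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>

class BlockingQueue {
public:
    // Called by the receiving thread for each message to broadcast.
    void put_message(std::string msg) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(msg));
        }
        cv_.notify_one();
    }
    // Called by the sender thread; blocks until a message is available.
    std::string next_message() {
        std::unique_lock<std::mutex> lock(mutex_);
        cv_.wait(lock, [this] { return !queue_.empty(); });
        std::string msg = std::move(queue_.front());
        queue_.pop();
        return msg;
    }
private:
    std::mutex mutex_;
    std::condition_variable cv_;
    std::queue<std::string> queue_;
};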
Finally, a remark on this comment in your code:
// Wait for all threads to finish
for(std::thread& t : threads)
{
    t.detach();
}
That's almost the opposite of what std::thread::detach does:
https://en.cppreference.com/w/cpp/thread/thread/detach
It allows you to destroy the thread object without having to wait for the thread to finish execution.
There is a misconception in the code about how a TCP server has to be implemented:
You seem to assume that you can have a single server socket file descriptor which handles all communication. This is not the case. You must have a single dedicated socket file descriptor which is used only for listening and accepting incoming connections, and then one additional file descriptor for each existing connection.
In your code I see that you always invoke receiveFromSocket() with the listening socket. This is wrong. Invoking receiveFromSocket() in a loop for all clients is also wrong.
What you need to do instead is:
- Have one dedicated thread which calls accept() in a loop. There is no performance benefit in calling accept() from multiple threads.
- Once accept() returns a new connection, spawn a new thread which calls recv() in a loop. It will then block and wait for new data, as you expect in your question. (See the sketch after this list.)
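Put together with the question's code, the accept thread could look roughly like this (a sketch; shutdown handling and locking of clients are omitted):
void Threaded_TCPListener::acceptClient()
{
    // One dedicated accept loop; one recv thread per accepted connection.
    while (true)
    {
        int client = accept(this->socket, nullptr, nullptr);
        if (client == INVALID_SOCKET)
        {
            std::printf("Accept Err: %d\n", WSAGetLastError());
            continue;
        }
        this->clients.emplace(client);
        onClientConnected(client);
        // Spawn the per-client receive loop with the *client* socket.
        std::thread(&Threaded_TCPListener::receiveFromSocket, this, client).detach();
    }
}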
You also need to drop the habit of calling individual functions from new threads. That is not multithreaded programming. A thread usually contains a loop; everything else is usually a design flaw.
Also note that multithreaded programming is still rocket science in 2019, especially in C++. If you are not an absolute expert you will not be able to do it. Note, too, that absolute experts in multithreaded programming try to avoid it whenever possible. A lot of seemingly concurrent tasks that are I/O bound can be handled better by a single-threaded, event-based system.

boost socket comms are not working past one exchange

I am converting an app which had a very simple heartbeat / status monitoring connection between two services. As it now needs to run on Linux in addition to Windows, I thought I'd use Boost (v1.51, and I cannot upgrade: the Linux compilers are too old and the Windows compiler is Visual Studio 2005) to make the code platform-agnostic. I'd really prefer not to have two code files, one for each OS, or a littering of #defines throughout the code, when Boost offers the possibility of being pleasant to read six months after I've checked in and forgotten this code.
My problem now is that the connection is timing out. Actually, it's not really working at all.
First time through, the 'status' message is sent and received by the server end, which sends back an appropriate response. The server end then goes back to waiting on the socket for another message. The client end (this code) sends the 'status' message again... but this time the server never receives it, and the read_some() call blocks until the socket times out. I find this really strange, because the server end has not changed. The only thing that has changed is that I altered the client code from basic Winsock2 sockets to this code. Previously, it connected and just looped through send / recv calls until the program was aborted or the 'lockdown' message was received.
Why would subsequent calls to send silently fail to send anything on the socket, and what do I need to adjust in order to restore the simple send / recv flow?
#include <boost/signals2/signal.hpp>
#include <boost/bind.hpp>
#include <iostream>
#include <boost/array.hpp>
#include <boost/asio.hpp>
#include <boost/thread.hpp>
using boost::asio::ip::tcp;
using namespace std;
boost::system::error_code ServiceMonitorThread::ConnectToPeer(
    tcp::socket &socket,
    tcp::resolver::iterator endpoint_iterator)
{
    boost::system::error_code error;
    int tries = 0;
    for (; tries < maxTriesBeforeAbort; tries++)
    {
        boost::asio::connect(socket, endpoint_iterator, error);
        if (!error)
        {
            break;
        }
        else if (error != make_error_code(boost::system::errc::success))
        {
            // Error connecting to service... may not be running?
            cerr << error.message() << endl;
            boost::this_thread::sleep_for(boost::chrono::milliseconds(200));
        }
    }
    if (tries == maxTriesBeforeAbort)
    {
        error = make_error_code(boost::system::errc::host_unreachable);
    }
    return error;
}
// Main thread-loop routine.
void ServiceMonitorThread::run()
{
    boost::system::error_code error;
    tcp::resolver resolver(io_service);
    tcp::resolver::query query(hostnameOrAddress, to_string(port));
    tcp::resolver::iterator endpoint_iterator = resolver.resolve(query);
    tcp::socket socket(io_service);
    error = ConnectToPeer(socket, endpoint_iterator);
    if (error && error == boost::system::errc::host_unreachable)
    {
        TerminateProgram();
    }
    boost::asio::streambuf command;
    std::ostream command_stream(&command);
    command_stream << "status\n";
    boost::array<char, 10> response;
    int retry = 0;
    while (retry < maxTriesBeforeAbort)
    {
        // A 1s request interval is more than sufficient for status checking.
        boost::this_thread::sleep_for(boost::chrono::seconds(1));
        // Send the command to the network monitor server service.
        boost::asio::write(socket, command, error);
        if (error)
        {
            // Error sending to socket
            cerr << error.message() << endl;
            retry++;
            continue;
        }
        // Clear the response buffer, then read the network monitor status.
        response.assign(0);
        /* size_t bytes_read = */ socket.read_some(boost::asio::buffer(response), error);
        if (error)
        {
            if (error == make_error_code(boost::asio::error::eof))
            {
                // Connection was dropped, re-connect to the service.
                error = ConnectToPeer(socket, endpoint_iterator);
                if (error && error == make_error_code(boost::system::errc::host_unreachable))
                {
                    TerminateProgram();
                }
                continue;
            }
            else
            {
                cerr << error.message() << endl;
                retry++;
                continue;
            }
        }
        // Examine the response message.
        if (strncmp(response.data(), "normal", 6) != 0)
        {
            retry++;
            // If we received the lockdown response, then terminate.
            if (strncmp(response.data(), "lockdown", 8) == 0)
            {
                break;
            }
            // Not an expected response, potential error, retry to see if it was merely an aberration.
            continue;
        }
        // If we arrived here, the exchange was successful; reset the retry count.
        if (retry > 0)
        {
            retry = 0;
        }
    }
    // If retry count was incremented, then we have likely encountered an issue; shut things down.
    if (retry != 0)
    {
        TerminateProgram();
    }
}
When a streambuf is provided directly to an I/O operation as the buffer, the I/O operation will manage the input sequence appropriately, either committing read data or consuming written data. Hence, in the following code, command is empty after the first iteration:
boost::asio::streambuf command;
std::ostream command_stream(&command);
command_stream << "status\n";
// `command`'s input sequence contains "status\n".
while (retry < maxTriesBeforeAbort)
{
    ...
    // write all of `command`'s input sequence to the socket.
    boost::asio::write(socket, command, error);
    // `command.size()` is 0, as the write operation will consume the data.
    // Subsequent write operations with `command` will be no-ops.
    ...
}
One solution would be to use std::string as the buffer:
std::string command("status\n");
while (retry < maxTriesBeforeAbort)
{
...
boost::asio::write(socket, boost::asio::buffer(command), error);
...
}
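Alternatively, if you want to keep the streambuf, refill it on each iteration, since each write consumes the input sequence (a sketch):
boost::asio::streambuf command;
std::ostream command_stream(&command);
while (retry < maxTriesBeforeAbort)
{
    ...
    // Repopulate the input sequence; the write below consumes it again.
    command_stream << "status\n";
    boost::asio::write(socket, command, error);
    ...
}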
For more details on streambuf usage, consider reading this answer.

boost::asio::async_receive and 0 bytes in socket

Pseudo-code
boost::asio::streambuf my_buffer;
boost::asio::ip::tcp::socket my_socket;
auto read_handler = [this](const boost::system::error_code& ec, size_t bytes_transferred) {
    // my logic
};
my_socket.async_receive(my_buffer.prepare(512),
                        read_handler);
When using a traditional recv with a non-blocking socket, it returns -1 when there is nothing to read from the socket.
But async_receive does not call read_handler if there is no data; it waits indefinitely.
How can I implement such logic (asynchronously) so that read_handler is called with bytes_transferred == 0 (possibly with an error code set) when there is nothing to read from the socket?
(async_read_some has the same behavior.)
In short, immediately after initiating the async_receive() operation, cancel it. If the completion handler is invoked with boost::asio::error::operation_aborted as the error, then the operation blocked. Otherwise, the read operation completed with success and has read from the socket or failed for other reasons, such as the remote peer closing the connection.
socket.async_receive(boost::asio::buffer(buffer), handler);
socket.cancel();
Within the initiating function of an asynchronous operation, a non-blocking read will be attempted. This behavior is subtly noted in the async_receive() documentation:
Regardless of whether the asynchronous operation completes immediately or not, [...]
Hence, if the operation completes immediately with success or error, then the completion handler will be ready for invocation and is not cancelable. On the other hand, if the operation would block, then it will be enqueued into the reactor for monitoring, where it becomes cancelable.
One can also obtain similar behavior with synchronous operations by enabling non-blocking mode on the socket. When the socket is set to non-blocking, synchronous operations that would block will instead fail with boost::asio::error::would_block.
socket.non_blocking(true);
auto bytes_transferred = socket.receive(
    boost::asio::buffer(buffer), 0 /* flags */, error);
Here is a complete example demonstrating these behaviors:
#include <array>
#include <cassert>
#include <iostream>
#include <boost/asio.hpp>
#include <boost/bind.hpp>

// This example is not interested in the handlers, so provide a noop function
// that will be passed to bind to meet the handler concept requirements.
void noop() {}

void print_status(
    const boost::system::error_code& error,
    std::size_t bytes_transferred)
{
    std::cout << "error = (" << error << ") " << error.message() << "; "
                 "bytes_transferred = " << bytes_transferred
              << std::endl;
}

int main()
{
    using boost::asio::ip::tcp;
    // Create all I/O objects.
    boost::asio::io_service io_service;
    tcp::acceptor acceptor(io_service, tcp::endpoint(tcp::v4(), 0));
    tcp::socket socket1(io_service);
    tcp::socket socket2(io_service);
    // Connect the sockets.
    acceptor.async_accept(socket1, boost::bind(&noop));
    socket2.async_connect(acceptor.local_endpoint(), boost::bind(&noop));
    io_service.run();
    io_service.reset();
    std::array<char, 512> buffer;
    // Scenario: async_receive when socket has no data.
    // Within the initiating asynchronous read function, an attempt to read
    // data will be made. If it fails, it will be added to the reactor
    // for monitoring, where it can be cancelled.
    {
        std::cout << "Scenario: async_receive when socket has no data"
                  << std::endl;
        socket1.async_receive(boost::asio::buffer(buffer), &print_status);
        socket1.cancel();
        io_service.run();
        io_service.reset();
    }
    // Scenario: async_receive when socket has data.
    // The operation will complete within the initiating function, and is
    // not available for cancellation.
    {
        std::cout << "Scenario: async_receive when socket has data" << std::endl;
        boost::asio::write(socket2, boost::asio::buffer("hello"));
        socket1.async_receive(boost::asio::buffer(buffer), &print_status);
        socket1.cancel();
        io_service.run();
    }
    // One can also get the same behavior with synchronous operations by
    // enabling non_blocking mode.
    boost::system::error_code error;
    std::size_t bytes_transferred = 0;
    socket1.non_blocking(true);
    // Scenario: non-blocking synchronous read when socket has no data.
    {
        std::cout << "Scenario: non-blocking synchronous read when socket"
                     " has no data." << std::endl;
        bytes_transferred = socket1.receive(
            boost::asio::buffer(buffer), 0 /* flags */, error);
        assert(error == boost::asio::error::would_block);
        print_status(error, bytes_transferred);
    }
    // Scenario: non-blocking synchronous read when socket has data.
    {
        std::cout << "Scenario: non-blocking synchronous read when socket"
                     " has data." << std::endl;
        boost::asio::write(socket2, boost::asio::buffer("hello"));
        bytes_transferred = socket1.receive(
            boost::asio::buffer(buffer), 0 /* flags */, error);
        print_status(error, bytes_transferred);
    }
}
Output:
Scenario: async_receive when socket has no data
error = (system:125) Operation canceled; bytes_transferred = 0
Scenario: async_receive when socket has data
error = (system:0) Success; bytes_transferred = 6
Scenario: non-blocking synchronous read when socket has no data.
error = (system:11) Resource temporarily unavailable; bytes_transferred = 0
Scenario: non-blocking synchronous read when socket has data.
error = (system:0) Success; bytes_transferred = 6

Setting ASIO timeout for stream

I am trying to set a timeout for a socket that I have created using ASIO in Boost, with no luck. I have found the following code elsewhere on the site:
tcp::socket socket(io_service);
struct timeval tv;
tv.tv_sec = 5;
tv.tv_usec = 0;
setsockopt(socket.native(), SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));
setsockopt(socket.native(), SOL_SOCKET, SO_SNDTIMEO, &tv, sizeof(tv));
boost::asio::connect(socket, endpoint_iterator);
The timeout remains at the same 60 seconds in the connect call, as opposed to the 5 seconds I am looking for. What am I missing? Note that the connect code works fine in all other cases (where there is no timeout).
The socket options you've set don't apply to connect AFAIK.
This can be accomplished by using the asynchronous asio API as in the following asio example.
The interesting parts are setting the timeout handler:
deadline_.async_wait(boost::bind(&client::check_deadline, this));
Starting the timer:
void start_connect(tcp::resolver::iterator endpoint_iter)
{
    if (endpoint_iter != tcp::resolver::iterator())
    {
        std::cout << "Trying " << endpoint_iter->endpoint() << "...\n";
        // Set a deadline for the connect operation.
        deadline_.expires_from_now(boost::posix_time::seconds(60));
        // Start the asynchronous connect operation.
        socket_.async_connect(endpoint_iter->endpoint(),
            boost::bind(&client::handle_connect,
                this, _1, endpoint_iter));
    }
    else
    {
        // There are no more endpoints to try. Shut down the client.
        stop();
    }
}
And closing the socket, which causes the connect completion handler to run:
void check_deadline()
{
    if (stopped_)
        return;
    // Check whether the deadline has passed. We compare the deadline against
    // the current time since a new asynchronous operation may have moved the
    // deadline before this actor had a chance to run.
    if (deadline_.expires_at() <= deadline_timer::traits_type::now())
    {
        // The deadline has passed. The socket is closed so that any outstanding
        // asynchronous operations are cancelled.
        socket_.close();
        // There is no longer an active deadline. The expiry is set to positive
        // infinity so that the actor takes no action until a new deadline is set.
        deadline_.expires_at(boost::posix_time::pos_infin);
    }
    // Put the actor back to sleep.
    deadline_.async_wait(boost::bind(&client::check_deadline, this));
}

TCP data receive timeout doesn't work for me

I have a boost async TCP client that needs to receive data from the server continuously.
I want to add a timeout so that when no data arrives for n seconds, the client disconnects from the server and tries to reconnect.
I am using VC++.
void tcpclient::Connect(){
    .....
    socket_.async_connect(*iterator, boost::bind(&tcpclient::AfterConnection, shared_from_this(), boost::asio::placeholders::error));
}
void tcpclient::AfterConnection(const boost::system::error_code& error){
    if (!error)
    {
        SetTimeout();
    }
}
void tcpclient::SetTimeout(int sec = 1)
{
    SOCKET native_sock = socket_.native();
    int result = SOCKET_ERROR;
    if (INVALID_SOCKET != native_sock)
    {
        struct timeval tv;
        tv.tv_sec = sec;
        tv.tv_usec = 0;
        result = setsockopt(native_sock, SOL_SOCKET, SO_RCVTIMEO, (char *)&tv, sizeof(struct timeval));
        i = GetLastError();
    }
}
And I read like this:
socket_.async_receive(boost::asio::buffer(buffer, 1024),
    boost::bind(&tcpClient::handleReceive,
        shared_from_this(),
        boost::asio::placeholders::error,
        buffer,
        boost::asio::placeholders::bytes_transferred)
);
But when I try to simulate the case where no data arrives, the connection stays established:
$netstat -ao
TCP 192.168.0.6:62836 192.168.0.5:telnet ESTABLISHED 2840
What is the problem? Why does this happen?
Setting SO_RCVTIMEO causes otherwise-blocking calls to read to return with no data after waiting for the specified time, as a non-blocking socket would have done immediately.
So, the question is, what do you, or Boost, do when read returns an EWOULDBLOCK error? You need to show how you're doing the read; if Boost is handling it in an event loop (based on select or poll) it probably just waits for some data to become available.
If that's the case, a better approach is to register a timer callback with the event loop to fire every second or however often, and check whether you received some data during the previous second. The socket options won't help you with that.
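With Boost.Asio, such a watchdog could look roughly like this (a sketch assuming the question's tcpclient class, with an added boost::asio::deadline_timer timer_ member; buffer_, timeout_seconds_ and Connect() are illustrative names):
void tcpclient::StartRead()
{
    // Re-arm the deadline; this implicitly cancels the previous wait.
    timer_.expires_from_now(boost::posix_time::seconds(timeout_seconds_));
    timer_.async_wait(boost::bind(&tcpclient::OnTimeout, shared_from_this(),
        boost::asio::placeholders::error));
    socket_.async_receive(boost::asio::buffer(buffer_),
        boost::bind(&tcpclient::OnReceive, shared_from_this(),
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred));
}

void tcpclient::OnReceive(const boost::system::error_code& error, size_t n)
{
    if (error)
        return;      // closed by the timeout handler, or a real error
    // ... process n bytes ...
    StartRead();     // re-arm both the read and the deadline
}

void tcpclient::OnTimeout(const boost::system::error_code& error)
{
    if (error == boost::asio::error::operation_aborted)
        return;      // the deadline was re-armed because data arrived
    socket_.close(); // cancels the pending async_receive
    Connect();       // reconnect and start over
}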