boost asio detecting / avoiding reception buffer overflow - c++

Consider a client sending data to a server using TCP, with boost::asio, in "synchronous mode" (aka "blocking" functions).
Client code (skipped the part about query and io_service):
tcp::resolver::iterator endpoint_iterator = resolver.resolve(query);
tcp::socket socket( io_service );
boost::asio::connect( socket, endpoint_iterator );
std::array<char, 1000> buf = { /* some data */ };
size_t n = socket.send( boost::asio::buffer(buf) );
This will send the whole buffer (1000 bytes) to the connected machine.
Now the server code:
tcp::acceptor acceptor( io_service, tcp::endpoint( tcp::v4(), port ) );
tcp::socket socket( io_service );
boost::system::error_code err;
std::array<char, 500> buff;
size_t n = socket.read_some( boost::asio::buffer(buff), err );
std::cout << "err=" << err.message() << '\n';
What this does: the client sends 1000 bytes through the connection, and the server attempts to store them in a 500-byte buffer.
What I expected: a server error status saying that the buffer is too small and/or too much data was received.
What I get: a "Success" error value, and n=500 in the server.
What did I miss here? Can't Asio detect the buffer overflow?
Should I proceed using some other classes/functions (streams, maybe?)
Refs (for 1.54, which is the one I use):
buffer function
TCP socket read_some()
TCP socket send()

You're seriously misunderstanding TCP.
TCP is a byte stream. There's no packet boundary inside a TCP stream; until you close the socket, all bytes form a single stream (unlike UDP).
Boost.Asio knows this. As long as the stream is open, it can't say how big the stream will eventually be. If you've got a 500-byte buffer, Boost.Asio can fill it with the first 500 bytes of the (potentially unbounded) TCP stream.
However, read_some just looks at what's already available. In your case, with just 1000 bytes, it's entirely expected that the whole 1000 bytes are available on your network card. There's no error in that part. It just doesn't fit in your buffer, but that's not a problem on the network side.
Neither TCP nor UDP has a way to communicate back that the receiver was expecting a smaller packet. That's application-level logic, and you handle it at the application level. For instance, HTTP has 413 Payload Too Large. Accordingly, Boost.Asio doesn't offer a standard mechanism.
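For example, here is a minimal sketch of length-prefixed framing on top of Boost.Asio; the read_framed_message helper, the 4-byte network-order header, and the max_size limit are illustrative conventions, not anything Boost.Asio prescribes:
#include <arpa/inet.h>   // ntohl (POSIX; use <winsock2.h> on Windows)
#include <boost/asio.hpp>
#include <cstdint>
#include <stdexcept>
#include <vector>

// Application-level framing: the sender prefixes every message with its
// length, so the receiver knows how many bytes to expect and can reject
// messages that exceed its own limit before reading them.
std::vector<char> read_framed_message(boost::asio::ip::tcp::socket& socket,
                                      std::size_t max_size)
{
    // boost::asio::read() blocks until the buffer is completely filled,
    // unlike read_some(), which returns whatever is currently available.
    std::uint32_t len = 0;
    boost::asio::read(socket, boost::asio::buffer(&len, sizeof(len)));
    len = ntohl(len);  // sender is assumed to send the length in network order

    if (len > max_size)
        throw std::length_error("peer announced a message larger than our limit");

    std::vector<char> payload(len);
    boost::asio::read(socket, boost::asio::buffer(payload));
    return payload;
}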

You did receive 500 bytes and may read the remaining 500 bytes by calling read_some again. Just saying this as it seems to me that you misunderstood the behaviour of Asio.
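For illustration, a sketch of draining the rest of the stream using the socket and buff from the server snippet above, looping until the peer closes the connection:
// Repeatedly call read_some(): each call returns at most 500 bytes
// (the buffer size), so two calls are needed for a 1000-byte send.
std::size_t total = 0;
for (;;) {
    boost::system::error_code err;
    std::size_t n = socket.read_some(boost::asio::buffer(buff), err);
    total += n;
    if (err == boost::asio::error::eof)
        break;  // peer closed the connection cleanly: the stream is complete
    if (err)
        throw boost::system::system_error(err);  // some other error
}
std::cout << "received " << total << " bytes in total\n";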

Related

How to properly recv with winsock?

I'm writing a simple http server for a test and I'm rather confused as to how one is supposed to tell where the end of a request is.
recv() returns a negative number on error, 0 on connection close, and a positive number when it received data; when there is no more data it just blocks.
I could create some frankenstein that continuously recv's on one thread and checks on another thread whether it blocked, but there has got to be a better way to do this... How can I tell whether there are no more bytes to read for the time being, without blocking?
First of all, you should follow the HTTP protocol when reading the HTTP request (a sketch follows the list):
1. Continue reading from the socket until \r\n\r\n is received
2. Parse the header
3. If Content-Length is specified, additionally read that many bytes of the request payload
4. Process the HTTP request
5. Send the HTTP response
6. Close the socket (HTTP/1.0) or (HTTP/1.1) handle keep-alive, content-encoding, transfer-encoding, trailers, etc., potentially repeating from step 1
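Here is a rough sketch of steps 1-3 with blocking recv() calls; the read_http_request helper, the chunk size, and the bare-bones header parsing are illustrative only (a real parser must handle header case, whitespace, chunked encoding, and so on):
#include <cstdlib>
#include <string>
// Assumes a connected Berkeley-style TCP socket `sock`:
// <winsock2.h> on Windows, <sys/socket.h> on POSIX.

std::string read_http_request(int sock)
{
    std::string request;

    // Step 1: read until the blank line that ends the header block.
    while (request.find("\r\n\r\n") == std::string::npos) {
        char chunk[4096];
        int n = recv(sock, chunk, sizeof(chunk), 0);
        if (n <= 0)
            return {};  // error or connection closed
        request.append(chunk, n);
    }

    // Steps 2-3: extract Content-Length and read the full payload.
    std::size_t body_start = request.find("\r\n\r\n") + 4;
    std::size_t content_length = 0;
    std::size_t pos = request.find("Content-Length:");
    if (pos != std::string::npos)
        content_length = std::strtoul(request.c_str() + pos + 15, nullptr, 10);

    while (request.size() < body_start + content_length) {
        char chunk[4096];
        int n = recv(sock, chunk, sizeof(chunk), 0);
        if (n <= 0)
            return {};
        request.append(chunk, n);
    }
    return request;
}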
To deal with potentially misbehaving clients, when using blocking sockets it is customary to set a socket timeout prior to issuing recv or send calls.
DWORD recvTimeoutMs = 20000;
setsockopt(socket, SOL_SOCKET, SO_RCVTIMEO, (const char *)&recvTimeoutMs, sizeof(recvTimeoutMs));
DWORD sendTimeoutMs = 30000;
setsockopt(socket, SOL_SOCKET, SO_SNDTIMEO, (const char *)&sendTimeoutMs, sizeof(sendTimeoutMs));
When a recv or send call times out, it fails and WSAGetLastError reports WSAETIMEDOUT (10060).

How to recover from network interruption using boost::asio

I am writing a server that accepts data from a device and processes it. Everything works fine unless there is an interruption in the network (i.e., if I unplug the Ethernet cable, then reconnect it). I'm using read_until() because the protocol that the device uses terminates the packet with a specific sequence of bytes. When the data stream is interrupted, read_until() blocks, as expected. However, when the stream starts up again, it remains blocked. If I look at the data stream with Wireshark, the device continues transmitting and each packet is being ACK'ed by the network stack. But if I look at bytes_readable, it is always 0.
How can I detect the interruption, and how can I re-establish a connection to the data stream? Below is a code snippet; thanks in advance for any help you can offer. [Go easy on me, this is my first Stack Overflow question... and yes, I did try to search for an answer.]
using boost::asio::ip::tcp;
boost::asio::io_service IOservice;
tcp::acceptor acceptor(IOservice, tcp::endpoint(tcp::v4(), listenPort));
tcp::socket socket(IOservice);
acceptor.accept(socket);
for (;;)
{
    len = boost::asio::read_until(socket, sbuf, end);
    // Process sbuf
    // etc.
}
Remember, the client initiates a connection, so the only thing you need to achieve is to re-create the socket and start accepting again. I will keep the format of your snippet but I hope your real code is properly encapsulated.
using SocketType = boost::asio::ip::tcp::socket;

std::unique_ptr<SocketType> CreateSocketAndAccept(
    boost::asio::io_service& io_service,
    boost::asio::ip::tcp::acceptor& acceptor)
{
    auto socket = std::make_unique<SocketType>(io_service);
    boost::system::error_code ec;
    acceptor.accept(*socket, ec);
    if (ec) {
        // TODO: Add handler.
    }
    return socket;
}
...
auto socket = CreateSocketAndAccept(IOservice, acceptor);
for (;;) {
    boost::system::error_code ec;
    auto len = boost::asio::read_until(*socket, sbuf, end, ec);
    if (ec)  // you could be more picky here of course,
             // e.g. check against connection_reset, connection_aborted
        socket = CreateSocketAndAccept(IOservice, acceptor);
    ...
}
Footnote: Should go without saying, socket needs to stay in scope.
Edit: Based on the comments below.
The listening socket itself does not know whether a client is silent or whether it got cut off. All operations, especially synchronous ones, should impose a time limit on completion. Consider setting SO_RCVTIMEO or SO_KEEPALIVE (per socket, or system wide; for more info see How to use SO_KEEPALIVE option properly to detect that the client at the other end is down?).
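For instance, a minimal sketch of imposing a receive timeout through the native handle (POSIX shown; native_handle() assumes a reasonably recent Boost.Asio, older releases called it native()):
#include <sys/socket.h>  // setsockopt, SO_RCVTIMEO
#include <sys/time.h>    // timeval

// Make blocking reads on the accepted socket fail with an error after
// 30 seconds of silence instead of blocking forever.
timeval tv{};
tv.tv_sec = 30;
setsockopt(socket->native_handle(), SOL_SOCKET, SO_RCVTIMEO,
           reinterpret_cast<const char*>(&tv), sizeof(tv));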
Another option is to go async and implement a full-fledged "shared" socket server (the Boost example page is a great start).
Either way, you might run into data consistency issues and be forced to deal with them, e.g. when the client detects an interrupted connection, it would resend the data (or something more complex using higher-level protocols).
If you want to stay synchronous, the way I've seen things handled is to destroy the socket when you detect an interruption. The blocking call should throw an exception that you can catch and then start accepting connections again.
for (;;)
{
    try {
        len = boost::asio::read_until(socket, sbuf, end);
        // Process sbuf
        // etc.
    }
    catch (const boost::system::system_error& e) {
        // Clean up. Start accepting new connections.
    }
}
As Tom mentions in his answer, there is no difference between inactivity and ungraceful disconnection so you need an external mechanism to detect this.
If you're expecting continuous data transfer, maybe a timeout per connection on the server side is enough. A simple ping could also work: after accepting a connection, ping your client every X seconds and declare the connection dead if it doesn't answer.
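A rough sketch of that ping idea, reusing the socket and CreateSocketAndAccept names from the answer above; note that in a real protocol the ping/pong bytes must be distinguishable from the data stream itself:
// Once per period: send a 1-byte ping and require a 1-byte pong before
// the socket's receive timeout expires; any error means the peer is gone.
const char ping = 'p';
char pong = 0;
boost::system::error_code ec;
boost::asio::write(*socket, boost::asio::buffer(&ping, 1), ec);
if (!ec)
    boost::asio::read(*socket, boost::asio::buffer(&pong, 1), ec);
if (ec)  // timed out, reset, aborted...: declare the connection dead
    socket = CreateSocketAndAccept(IOservice, acceptor);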

Using boost::asio for simple udp communication

This is a simple problem, but I can't seem to figure out what I am doing wrong. I am attempting to read data sent to a port on a client using Boost, and I have the following code which sets up 1) the UDP client, 2) a buffer for reading into, and 3) an attempt to read from the socket:
// Set up the socket to read UDP packets on port 10114
boost::asio::io_service io_service;
udp::endpoint endpoint_(udp::v4(), 10114);
udp::socket socket(io_service, endpoint_);
// Data coming across will be 8 bytes per packet
boost::array<char, 8> recv_buf;
// Read data available from port
size_t len = socket.receive_from(
    boost::asio::buffer(recv_buf, 8), endpoint_);
cout.write(recv_buf.data(), len);
The problem is that the receive_from function never returns. The server is running on another computer and generating data continuously. I can see traffic on this port on the local computer using Wireshark. So, what am I doing wrong here?
So, it turns out that I need to listen on that port for data coming from anywhere. As such, the endpoint needs to be set up as
boost::asio::ip::udp::endpoint endpoint_(boost::asio::ip::address::from_string("0.0.0.0"), 10114);
Using this setup, I get the data back that I expect. And FYI, 0.0.0.0 is the same as INADDR_ANY.
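Putting it together, a minimal sketch of the working receiver; note it passes a separate sender_endpoint to receive_from(), since receive_from() overwrites that argument with the sender's address:
#include <array>
#include <iostream>
#include <boost/asio.hpp>

using boost::asio::ip::udp;

int main()
{
    boost::asio::io_service io_service;

    // Bind to 0.0.0.0:10114 so datagrams arriving on any local interface
    // are accepted.
    udp::endpoint listen_endpoint(
        boost::asio::ip::address::from_string("0.0.0.0"), 10114);
    udp::socket socket(io_service, listen_endpoint);

    // receive_from() fills this in with the sender's address and port.
    udp::endpoint sender_endpoint;
    std::array<char, 8> recv_buf;
    size_t len = socket.receive_from(boost::asio::buffer(recv_buf),
                                     sender_endpoint);
    std::cout.write(recv_buf.data(), len);
}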

recv reads incomplete packet

I have a simple function that is responsible for receiving packets via a socket.
if((recv_size = recv(sock , rx , 50000 ,0)) == SOCKET_ERROR)
{
...
} else
{
...
}
I found that sometimes I receive an incomplete packet. Why? Maybe I should call recv several times? The packet length never exceeds 50000 bytes.
I am using a TCP socket.
If you're using TCP it's expected. TCP is a streaming protocol, it doesn't have "packets" or message boundaries, and you can have received all of the "message" or part of it, or even multiple messages. So you might have to call recv multiple times to receive a complete message.
However, since TCP doesn't have message boundaries, you have to implement them yourself on top of TCP, for example by sending the length of the message in a fixed-size header, or have some special end-of-message marker.
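For example, here is a sketch of the fixed-size-header approach with plain recv(); the recv_exact and recv_message helpers and the 4-byte network-order length header are assumed conventions, not something TCP provides:
#include <cstdint>
#include <vector>
// Assumes Berkeley-style sockets: <winsock2.h> on Windows,
// <sys/socket.h> and <arpa/inet.h> (for ntohl) on POSIX.

// Keep calling recv() until exactly `len` bytes have arrived; returns
// false on error or if the peer closes the connection early.
bool recv_exact(int sock, char* buf, std::size_t len)
{
    std::size_t got = 0;
    while (got < len) {
        int n = recv(sock, buf + got, len - got, 0);
        if (n <= 0)
            return false;
        got += n;
    }
    return true;
}

// Receive one length-prefixed message: a 4-byte length, then the payload.
bool recv_message(int sock, std::vector<char>& out)
{
    std::uint32_t len = 0;
    if (!recv_exact(sock, reinterpret_cast<char*>(&len), sizeof(len)))
        return false;
    out.resize(ntohl(len));
    return recv_exact(sock, out.data(), out.size());
}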

boost::asio::ip::tcp::socket is connected?

I want to verify the connection status before performing read/write operations.
Is there a way to make an isConnect() method?
I saw this, but it seems "ugly".
I have tested the is_open() function as well, but it doesn't have the expected behavior.
TCP is meant to be robust in the face of a harsh network; even though TCP provides what looks like a persistent end-to-end connection, it's all just a lie, each packet is really just a unique, unreliable datagram.
The connections are really just virtual conduits created with a little state tracked at each end of the connection (Source and destination ports and addresses, and local socket). The network stack uses this state to know which process to give each incoming packet to and what state to put in the header of each outgoing packet.
Because of the underlying — inherently connectionless and unreliable — nature of the network, the stack will only report a severed connection when the remote end sends a FIN packet to close the connection, or if it doesn't receive an ACK response to a sent packet (after a timeout and a couple retries).
Because of the asynchronous nature of asio, the easiest way to be notified of a graceful disconnection is to have an outstanding async_read which will return error::eof immediately when the connection is closed. But this alone still leaves the possibility of other issues like half-open connections and network issues going undetected.
The most effective way to work around unexpected connection interruption is to use some sort of keep-alive or ping. This occasional attempt to transfer data over the connection allows expedient detection of an unintentionally severed connection.
The TCP protocol actually has a built-in keep-alive mechanism which can be configured in asio using asio::tcp::socket::keep_alive. The nice thing about TCP keep-alive is that it's transparent to the user-mode application, and only the peers interested in keep-alive need configure it. The downside is that you need OS level access/knowledge to configure the timeout parameters, they're unfortunately not exposed via a simple socket option and usually have default timeout values that are quite large (7200 seconds on Linux).
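Enabling it from Asio is a one-liner; tuning the probe timeouts still happens at the OS level:
// Ask the OS to probe this connection periodically with TCP keep-alives.
boost::asio::socket_base::keep_alive option(true);
socket.set_option(option);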
Probably the most common method of keep-alive is to implement it at the application layer, where the application has a special noop or ping message and does nothing but respond when tickled. This method gives you the most flexibility in implementing a keep-alive strategy.
TCP promises to watch for dropped packets, retrying as appropriate, to give you a reliable connection, for some definition of reliable. Of course TCP can't handle cases where the server crashes, or your Ethernet cable falls out, or something similar occurs. Additionally, knowing that your TCP connection is up doesn't necessarily mean that a protocol that will go over the TCP connection is ready (e.g., your HTTP web server or your FTP server may be in some broken state).
If you know the protocol being sent over TCP, then there is probably a way in that protocol to tell you if things are in good shape (for HTTP it would be a HEAD request).
If you are sure that the remote socket has not sent anything (e.g. because you haven't sent a request to it yet), then you can set your local socket to non-blocking mode and try to read one or more bytes from it.
Given that the server hasn't sent anything, you'll either get an asio::error::would_block or some other error. If the former, your local socket has not yet detected a disconnection; if the latter, your socket has been closed.
Here is an example code:
#include <iostream>
#include <boost/asio.hpp>
#include <boost/asio/spawn.hpp>
#include <boost/asio/steady_timer.hpp>
using namespace std;
using namespace boost;
using tcp = asio::ip::tcp;
template<class Duration>
void async_sleep(asio::io_service& ios, Duration d, asio::yield_context yield)
{
    auto timer = asio::steady_timer(ios);
    timer.expires_from_now(d);
    timer.async_wait(yield);
}

int main()
{
    asio::io_service ios;

    tcp::acceptor acceptor(ios, tcp::endpoint(tcp::v4(), 0));

    boost::asio::spawn(ios, [&](boost::asio::yield_context yield) {
        tcp::socket s(ios);
        acceptor.async_accept(s, yield);
        // Keep the socket from going out of scope for 5 seconds.
        async_sleep(ios, chrono::seconds(5), yield);
    });

    boost::asio::spawn(ios, [&](boost::asio::yield_context yield) {
        tcp::socket s(ios);
        s.async_connect(acceptor.local_endpoint(), yield);

        // This is essential to make the `read_some` function not block.
        s.non_blocking(true);

        while (true) {
            system::error_code ec;
            char c;
            // Unfortunately, this only works when the buffer has
            // non-zero size (tested on Ubuntu 16.04).
            s.read_some(asio::mutable_buffer(&c, 1), ec);
            if (ec && ec != asio::error::would_block) break;
            cerr << "Socket is still connected" << endl;
            async_sleep(ios, chrono::seconds(1), yield);
        }

        cerr << "Socket is closed" << endl;
    });

    ios.run();
}
And the output:
Socket is still connected
Socket is still connected
Socket is still connected
Socket is still connected
Socket is still connected
Socket is closed
Tested on:
Ubuntu: 16.04
Kernel: 4.15.0-36-generic
Boost: 1.67
Though, I don't know whether or not this behavior depends on any of those versions.
You can send a dummy byte on a socket and see if it returns an error.