Boost.Asio - Make sure that the other party received data - C++

I'm using boost::asio, sending a list to a client and closing the socket when finished. Somehow the client sometimes gets an End Of File error before it has received everything.
I'm guessing this has to do with the server closing the socket right after sending the last list entry. Is there an easy way to solve this, for example by having async_send call the handler only after the data has actually been sent?
Or is my End Of File error coming from something else?

Boost.Asio is an operating-system-independent abstraction layer over TCP and UDP sockets, and neither Asio nor TCP provides any guarantee that the other application has received and processed the data. You will need to build that logic into your own application protocol; you may want to study the OSI model.
If you're closing the socket immediately after async_send() returns, that is incorrect: async_send() only initiates the write. You should close the socket only after the completion handler has been invoked.
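To make that concrete, here is a minimal sketch, assuming a session class that owns socket_ and a data_ buffer (e.g. a std::string) and inherits enable_shared_from_this; the names are illustrative, not from the question's code:

auto self = shared_from_this();
boost::asio::async_write(socket_, boost::asio::buffer(data_),
    [this, self](const boost::system::error_code& ec, std::size_t /*bytes*/)
    {
        boost::system::error_code ignored;
        if (!ec)
            // send a FIN so the client sees a clean EOF after the data
            socket_.shutdown(boost::asio::ip::tcp::socket::shutdown_send, ignored);
        socket_.close(ignored);
    });

Note that even the completion handler only tells you the local TCP stack accepted the data, not that the client application has read it; if you need that, the first answer applies: build an acknowledgement into your protocol.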

Related

Integrating Boost Asio with ZeroMQ, Bad File Descriptor?

I'm trying to integrate Boost Asio with ZeroMQ. The messaging is functional for the first connection, but the program exits with a "Bad File Descriptor" error when the initial connection ends.
I'm using the Boost.Beast example code of the Async WebSocket Server to make a connection with the client. I then open a ZMQ socket. The client sends a message to the server over a WebSocket connection, the message is sent over a ZMQ socket to a different server, that server does some processing and sends the message back over ZMQ, and the final message is sent back to the client over the same WebSocket connection.
I am using This Code to integrate Boost with ZMQ. The lines of importance are:
int zfd;
size_t optlen = sizeof(zfd);
zmq_getsockopt(zmq_sock_, ZMQ_FD, &zfd, &optlen);  // fetch ZMQ's underlying file descriptor
sock_.assign(boost::asio::ip::tcp::v4(), zfd);     // wrap it in the Asio socket
This gets a file descriptor from the ZMQ socket and wraps it with the Boost socket so everything plays nice. However, when the destructor is called:
sock_.shutdown(boost::asio::ip::tcp::socket::shutdown_both);
sock_.close();
zmq_close (zmq_sock_);
I get a "Socket operation on non-socket" error, because it seems the socket has already been closed. If I remove the socket shutdown and close, I get a "Bad File Descriptor" issue with ZMQ. It seems that the WebSocket Session object is partially destroying the Asio-ZMQ objects. If I remove the destructor entirely, the program doesn't crash, but it no longer works properly, i.e. it won't send any more messages over ZMQ.
I've been struggling with this problem for days and I'm hoping I can get some help. If it helps, my code takes the my_zmq_req_client class and integrates it into the Boost.Beast session class.
I haven't looked at the linked library, but this fragment
sock_.shutdown(boost::asio::ip::tcp::socket::shutdown_both);
sock_.close();
zmq_close (zmq_sock_);
looks suspicious, as sock_.close() closes a file descriptor that Asio didn't open. I'd suggest it makes a lot more sense to release the socket on the Asio side instead of closing it, so that ZMQ keeps responsibility for creation and destruction:
sock_.shutdown(boost::asio::ip::tcp::socket::shutdown_both);
sock_.release();        // Asio gives up ownership without closing the fd
zmq_close (zmq_sock_);  // ZMQ closes the descriptor it created
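(One caveat: if memory serves, basic_socket::release() is only available from Boost 1.66 onwards, and on Windows it is unsupported before Windows 8.1, so check your Boost version before relying on it.)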

Multiple writes to QTcpSocket

I have a server and client program where it's convenient for the server to send two messages to each client. The server calls write() on each of the client sockets twice in a row. On the client side, the readyRead signal occurs, but when the client reads, it only gets the first message. I fixed this by adding waitForBytesWritten() before each write() the server does, which seemed to solve the problem. However, I don't know why I can't just write to the buffer twice, and I would think there is a better way to solve this.
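The underlying cause is that TCP, and therefore QTcpSocket, is a byte stream with no message boundaries: two write() calls may arrive in a single readyRead, or one message may be split across several. waitForBytesWritten() only papers over this. The usual fix is to frame each message, e.g. with a length prefix. A minimal sketch, where MyClient, m_socket, a quint32 member m_expected (initialized to 0), and handleMessage() are hypothetical names:

#include <QTcpSocket>
#include <QDataStream>

// sender: prefix each message with its 4-byte length
void sendMessage(QTcpSocket* socket, const QByteArray& payload)
{
    QByteArray block;
    QDataStream out(&block, QIODevice::WriteOnly);
    out << quint32(payload.size());   // length header, big-endian
    block.append(payload);
    socket->write(block);
}

// receiver: slot connected to readyRead(); reassembles whole messages
void MyClient::onReadyRead()
{
    QDataStream in(m_socket);
    for (;;) {
        if (m_expected == 0) {        // still waiting for a header
            if (m_socket->bytesAvailable() < qint64(sizeof(quint32)))
                return;               // header incomplete, wait for more
            in >> m_expected;
        }
        if (m_socket->bytesAvailable() < qint64(m_expected))
            return;                   // body incomplete, wait for more
        handleMessage(m_socket->read(m_expected));
        m_expected = 0;               // ready for the next header
    }
}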

Receiving data from already closed socket?

Suppose I have a server application; the connection is over TCP, using UNIX sockets.
The connection is asynchronous; in other words, the clients' and server's sockets are non-blocking.
Suppose the following situation: under some conditions, the server may decide to send some data to a connected client and immediately close the connection, using shutdown with SHUT_RDWR.
So my question is: is it guaranteed that when the client calls recv, it will receive the data sent by the server?
Or must recv be called before the server's shutdown for the data to be received? If so, what should I do (or, more precisely, how should I do it) to make sure the data is received by the client?
You can control this behavior with "setsockopt(SO_LINGER)":
man setsockopt
SO_LINGER
Lingers on close if unsent data is present. When this option is enabled and there is unsent data present when the close function is called, the calling application is blocked during the close function until the data is transmitted or the connection has timed out. When this option is disabled, the close function returns without blocking the caller. This option has meaning only for stream sockets.
See also:
man read
Beej's Guide to Network Programming
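For illustration, a minimal sketch of enabling the option (fd is assumed to be a connected TCP socket):

#include <sys/socket.h>
#include <cstdio>

// make close() block (up to 10 seconds here) until unsent data is
// transmitted or the timeout expires
void enable_linger(int fd)
{
    struct linger lg;
    lg.l_onoff  = 1;    // enable lingering on close
    lg.l_linger = 10;   // timeout in seconds
    if (setsockopt(fd, SOL_SOCKET, SO_LINGER, &lg, sizeof lg) < 0)
        perror("setsockopt(SO_LINGER)");
}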
There's no guarantee you will receive any data, let alone this data, but the data pending when the socket is closed is subject to the same guarantees as all the other data: if it arrives, it will arrive in order and undamaged, subject to TCP's best effort.
NB 'Asynchronous' and 'non-blocking' are two different things, not two terms for the same thing.
Once you have successfully written the data to the socket, it is in the kernel's buffer, where it will stay until it has been sent and acknowledged. Shutdown doesn't cause the buffered data to get lost. Closing the socket doesn't cause the buffered data to get lost. Not even the death of the sending process would cause the buffered data to get lost.
You can observe the size of the buffer with netstat. The Send-Q column shows how much data the kernel still wants to transmit.
After the client has acknowledged everything, the connection disappears from the server's side. This may happen before the client has read the data, in which case it will show up in the Recv-Q column on the client. Basically you have nothing to worry about: after a successful write to a TCP socket, every component does its best to make sure that your data gets to the destination unharmed, regardless of what happens to the sending socket and/or process.
Well, maybe one thing to worry about: If the client tries to send anything after the server has done its shutdown, it could get a SIGPIPE and die before it has read all the available data from the socket.
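The usual defence, sketched below, is to ignore SIGPIPE so that such a write simply fails with errno == EPIPE instead of killing the client:

#include <csignal>

int main()
{
    // a write to a peer that has shut down now returns -1 with
    // errno == EPIPE instead of terminating the process
    std::signal(SIGPIPE, SIG_IGN);
    // ... rest of the client ...
}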

Handling POSIX socket read() errors

Currently I am implementing a simple client-server program with just the basic read/write functionality.
However, I noticed that if, for example, my server calls write() to reply to my client and the client never issues a corresponding read(), my server program just hangs there.
Currently I am thinking of using a simple timer to define a timeout count and disconnecting the client after that count is reached, but I am wondering if there is a more elegant or standard way of handling such errors?
There are two general approaches to prevent server blocking and to handle multiple clients with a single server instance:
use POSIX threads to handle each client's connection. If one thread blocks because of an erroneous client, the other threads will still continue to run. If the remote client has simply disappeared (crashed, network down, etc.), then sooner or later the TCP stack will signal a timeout and the blocked write operation will fail with an error.
use non-blocking I/O together with a polling mechanism, e.g. select(2) or poll(2). It is quite a bit harder to program with polling calls, though. Network sockets are made non-blocking using fcntl(2), and in cases where a normal write(2) or read(2) on the socket would block, an EAGAIN error is returned instead. You can use select(2) or poll(2) to wait, with an adjustable timeout period, for something to happen on the socket. For example, waiting for the socket to become writable means you will be notified when there is enough socket send-buffer space, e.g. previously written data has been flushed to the client machine's TCP stack.
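As a rough illustration of the second approach, here is a minimal sketch (POSIX only; write_with_timeout is an illustrative name, not a standard function) of a write that never blocks for longer than a given timeout:

#include <fcntl.h>
#include <poll.h>
#include <unistd.h>
#include <cerrno>

// write all of buf, waiting at most timeout_ms whenever the socket's
// send buffer is full; returns false on error or timeout
bool write_with_timeout(int fd, const char* buf, size_t len, int timeout_ms)
{
    int flags = fcntl(fd, F_GETFL, 0);          // make the socket non-blocking
    fcntl(fd, F_SETFL, flags | O_NONBLOCK);

    while (len > 0) {
        ssize_t n = write(fd, buf, len);
        if (n >= 0) {
            buf += n;
            len -= (size_t)n;
            continue;
        }
        if (errno != EAGAIN && errno != EWOULDBLOCK)
            return false;                       // real error, e.g. EPIPE
        struct pollfd pfd = { fd, POLLOUT, 0 };
        if (poll(&pfd, 1, timeout_ms) <= 0)
            return false;                       // timeout or poll failure
    }
    return true;
}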
If the client side isn't going to read from the socket anymore, it should close down the socket with close. And if you don't want to do that because the client still might want to write to the socket, then you should at least close the read half with shutdown(fd, SHUT_RD).
This will set it up so the server gets an EPIPE on the write call.
If you don't control the clients, i.e. if random clients you didn't write can connect, the server should handle clients that actively attempt to be malicious. One way for a client to be malicious is to try to force your server to hang. You should use a combination of non-blocking sockets and the timeout mechanism you describe to keep this from happening.
In general you should design the protocol the server and client use to communicate so that neither side is trying to write to the socket when the other side isn't going to be reading. This doesn't mean you have to synchronize them tightly. But, for example, HTTP is defined in such a way that it's quite clear to either side whether or not the other side is really expecting them to write anything at any given point in the protocol.

WSAEventSelect model

Hey, I'm using WSAEventSelect for socket event notifications. So far everything is working like a charm, but there is one problem.
The client is a .NET application and the server is written in Winsock C++. In the .NET application I'm using the System.Net.Sockets.Socket class for TCP/IP. When I call the Socket.Shutdown() and Socket.Close() methods, I receive the FD_CLOSE event in the server, which I'm pretty sure is fine. The problem occurs when I check the iErrorCode of the WSANETWORKEVENTS structure that I passed to WSAEnumNetworkEvents. I check it like this:
if (listenerNetworkEvents.lNetworkEvents & FD_CLOSE)
{
    if (listenerNetworkEvents.iErrorCode[FD_CLOSE_BIT] != 0)
    {
        // it comes here, which means there is an error,
        // and the error I get is WSAECONNABORTED
        printf("FD_CLOSE failed with error %d\n",
               listenerNetworkEvents.iErrorCode[FD_CLOSE_BIT]);
        break;
    }
    closesocket(socketArray[Index]);
}
But it fails with the WSAECONNABORTED error. Why is that so?
EDIT: By the way, I'm running both the client and the server on the same computer; could that be the reason? I receive the FD_CLOSE event when I do this:
server.Shutdown(SocketShutdown.Both); // in .NET C#, client code
I'm guessing you're calling Shutdown() and then Close() immediately afterward. That will give the symptom you're seeing, because this is "slamming the connection shut". Shutdown() does initiate a graceful disconnect (TCP FIN), but immediately following it with Close() aborts that, sending a TCP RST packet to the remote peer. Your Shutdown(SocketShutdown.Both) call slams the connection shut, too, by the way.
The correct pattern is:
1. Call Shutdown() with the direction parameter set to "write", meaning we won't be sending any more data to the remote peer. This causes the stack to send the TCP FIN packet.
2. Go back to waiting for Winsock events. When the remote peer is also done writing, it will call Shutdown("write"), too, causing its stack to send your machine a TCP FIN packet, and your application to get an FD_CLOSE event. While waiting, your code should be prepared to continue reading from the socket, because the remote peer might still be sending data.
(Please excuse the pseudo-C# above. I don't speak .NET, only C++.)
Both peers are expected to use this same shutdown pattern: each tells the other when it's done writing, and then waits to receive notification that the remote peer is done writing before it closes its socket.
The important thing to realize is that TCP is a bidirectional protocol: each side can send and receive independently of the other. Closing the socket to reading is not a nice thing to do. It's like having a conversation with another person but only talking and being unwilling to listen. The graceful shutdown protocol says, "I'm done talking now. I'm going to wait until you stop talking before I walk away."
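For reference, here is the same sequence as a minimal sketch in plain blocking Winsock; the WSAEventSelect version is identical in spirit, with the drain loop replaced by FD_READ/FD_CLOSE handling. s is assumed to be a connected SOCKET.

#include <winsock2.h>

// graceful close: tell the peer we're done, then wait until it is too
void graceful_close(SOCKET s)
{
    shutdown(s, SD_SEND);                // send our FIN: "done writing"

    char buf[512];
    for (;;) {                           // the peer may still be sending
        int n = recv(s, buf, sizeof buf, 0);
        if (n > 0)
            continue;                    // drain (or process) remaining data
        break;                           // 0 = peer's FIN arrived, <0 = error
    }

    closesocket(s);                      // both directions are now finished
}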