Ensuring data is being read with async_read - c++

I am currently testing my network application in very low-bandwidth environments. I have code that attempts to verify the connection is good by making sure I am still receiving data.
Traditionally I have done this by recording the timestamp in my ReadHandler function so that each time it gets called I know I have received data on the socket. With very low bandwidths this isn't sufficient because my ReadHandler is not getting called frequently enough.
I was toying around with the idea of writing my own completion condition function (right now I am using transfer_at_least(1)), thinking it would get called more frequently and I could record my timestamp there, but I was wondering if there isn't some more standard way to go about this.
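For what it's worth, a custom completion condition along those lines is only a few lines of code; Asio invokes the condition after every intermediate read_some, so it is a natural place to record activity. A minimal sketch (the struct name, the 64 KB chunk size and the timestamp pointer are illustrative, not from the original post):

#include <ctime>
#include <boost/asio.hpp>

// Behaves like transfer_at_least(1), but records a "last activity" timestamp
// every time bytes arrive on the socket.
struct TimestampingAtLeastOne
{
    std::time_t* last_activity;

    std::size_t operator()(const boost::system::error_code& error,
                           std::size_t bytes_transferred) const
    {
        if (bytes_transferred > 0)
            *last_activity = std::time(0);   // we definitely received something
        if (error || bytes_transferred >= 1)
            return 0;                        // done, fire the ReadHandler
        return 65536;                        // otherwise read up to 64 KB more
    }
};

// Used roughly like:
//   TimestampingAtLeastOne cond = { &lastActivity };
//   boost::asio::async_read(socket, boost::asio::buffer(data), cond, readHandler);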

We had a similar issue in production: some of our connections may be idle for days, but we must detect if the remote is dead ASAP.
We solved it by enabling the TCP keep-alive option:
boost::asio::socket_base::keep_alive option(true);
mSocketTCP.set_option(option);
which had to be accompanied by a new startup script that writes sensible values to /proc/sys/net/ipv4/tcp_keepalive_*, since the defaults are very long timeouts (on Linux).
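If a system-wide script is inconvenient, the keep-alive timings can also be tuned per socket on Linux via setsockopt on the native descriptor. A rough sketch (the option names are Linux-specific, the values are illustrative, and depending on your Boost version the accessor may be native() rather than native_handle()):

#include <netinet/tcp.h>   // TCP_KEEPIDLE, TCP_KEEPINTVL, TCP_KEEPCNT
#include <sys/socket.h>

int fd = mSocketTCP.native_handle();
int idle = 30;       // seconds of inactivity before the first probe
int interval = 5;    // seconds between probes
int count = 3;       // unanswered probes before the connection is declared dead
setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE,  &idle,     sizeof(idle));
setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof(interval));
setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT,   &count,    sizeof(count));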

You can use the read_some method to get partial reads and handle the bookkeeping yourself. This is more efficient than transfer_at_least(1), but you still have to keep track of what is going on.
However, a cleaner approach is just to use a concurrent deadline_timer. If the timer goes off before you are finished, then it is taking too long and you cancel whatever is going on. If not, just stop the timer and continue. Something like:
boost::asio::deadline_timer t(io_service);
t.expires_from_now(boost::posix_time::seconds(20));
t.async_wait(boost::bind(&Class::timed_out, this, boost::asio::placeholders::error));
// Do stuff.
if (t.cancel() == 0) {
    // The timer already went off, abort.
}

// And the timeout handler:
void Class::timed_out(boost::system::error_code const& error)
{
    if (error == boost::asio::error::operation_aborted) return;
    // Deal with the timeout, close the socket, etc.
}

I don't know how to handle low network bandwidth or latency from within the application. Can you even tell whether it's network latency, or the peer server or peer application being busy and reacting slowly? Does it matter whose fault it is, the network, the server, or the application?
Even if you can measure the network latency and find it is large, what are you going to do?
You cannot improve the situation.
Consider another critical case, which is a subset of what you're trying to handle: the network is down (e.g. you disconnect the cable from your machine). Since it is a subset of your problem, you want to handle it too.
Let's examine the effect of a network outage on an active TCP connection. How can you discover whether your active TCP connection is still alive? Calling send() will succeed, but that merely means the message was queued in the kernel's TCP outgoing queue. The TCP stack will try to send it, but since no TCP ACK comes back, the stack on your side will retransmit it again and again. You can see your message sitting in the netstat output (Send-Q column).
I'm aware of the following ways to deal with it:
One standard way is TCP keep-alive, as proposed by @Cubby.
Another way is to implement your own keep-alive mechanism: send a keep-alive request message, and the peer is obligated to send back a keep-alive ack message.
If you don't receive the ack message within a predefined timeout, try to send the keep-alive request N more times (e.g. N = 2). If there is still no response, close the socket and open it again. If the peer server is not available you will not be able to open the connection, since the TCP three-way handshake requires the peer to respond.
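A blocking sketch of that request/ack scheme, assuming the connection is otherwise idle while probing (the one-byte message codes, the function name and the use of SO_RCVTIMEO are illustrative; a real protocol needs framing so keep-alive messages can't be confused with application data):

#include <sys/types.h>
#include <sys/socket.h>
#include <sys/time.h>

bool peer_alive(int sock, int timeout_sec, int retries)
{
    timeval tv = { timeout_sec, 0 };
    setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

    for (int attempt = 0; attempt <= retries; ++attempt) {
        const char req = 'K';                    // keep-alive request
        if (send(sock, &req, 1, 0) != 1)
            return false;                        // send itself failed

        char ack = 0;
        ssize_t n = recv(sock, &ack, 1, 0);
        if (n == 1 && ack == 'A')                // keep-alive ack
            return true;
        // n <= 0: timed out or errored -- retry, then give up
    }
    return false;                                // close the socket and reconnect
}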

Related

Win32 Registered Socket I/O: cancelling pending receive operations?

I've recently begun to implement a UDP socket receiver with Registered I/O on Win32. I've stumbled upon the following issue: I don't see any way to cancel pending RIOReceive()/RIOReceiveEx() operations without closing the socket.
To summarize the basic situation:
During normal operation I want to queue quite a few RIOReceive()/RIOReceiveEx() operations in the request queue to ensure that I get the best possible performance when it comes to receiving the UDP packets.
However, at some point I may want to stop what I'm doing. If UDP packets are still arriving at that point, fine, I can just wait until all pending requests have been processed. Unfortunately, if the sender has also stopped sending UDP packets, I still have the pending receive operations.
That in and of itself is not a problem, because I can just keep going once traffic starts again.
However, if I want to reconfigure the buffers in between, I run into an issue: the documentation states that it's an error to deregister a buffer with RIO while it's still in use, and as long as receive operations are pending, the buffers are officially still in use, so I can't do that.
What I've tried so far related to cancellation of these requests:
CancelIo() on the socket (no effect)
CancelSynchronousIo(GetCurrentThread()) (no effect)
shutdown(s, SD_RECEIVE) (success, but no effect, the socket even receives packets afterwards -- though shutdown probably wouldn't have been helpful anyway)
WSAIoctl(s, SIO_FLUSH, ...) because the docs of RIOReceiveEx() mentioned it, but that just gives me WSAEOPNOTSUPP on UDP sockets (probably only useful for TCP and probably also only useful for sending, not receiving)
Just for fun I tried to set setsockopt(s, SOL_SOCKET, SO_RCVTIMEO, ...) with 1ms as the timeout -- and that doesn't appear to have any effect on RIO regardless of whether I call it before or after I queue the RIOReceive()/RIOReceiveEx() calls
Only closing the socket will successfully cancel the I/O.
I've also thought about doing RIOCloseCompletionQueue(), but there I wouldn't even know how to proceed afterwards, since there's no way of reassigning a completion queue to a request queue, as far as I can see, and you can only ever create a single request queue for a socket. (If there was something like RIOCloseRequestQueue() and that did cancel the pending requests, I'd be happy, but the docs only mention that closesocket() will free resources associated with the request queue.)
So what I'm left with is the following:
Either I have to write my logic so that the buffers that are being used are always fixed once the socket is opened, because I can't really ever change them in practice due to requests that could still be pending.
Or I have to close the socket and reopen it every time I want to change something here. But that is a race condition, because I'd have to bind the socket again, and I'd really like to avoid that if possible.
I've tested sending UDP packets to my own socket from a newly created different socket until all of the requests have been 'eaten up' -- and while that works in principle, I really don't like it, because if any kind of firewall rule decides to not allow this, the code would deadlock instantly.
On Linux io_uring I can just cancel existing operations, or even exit the uring, and once that's done, I'm sure that there are no receive operations still active, but the socket is still there and accessible. (And on Linux it's nice that the socket still behaves like a normal socket, on Windows if I create the socket with the WSA_FLAG_REGISTERED_IO flag, I can't use it outside of RIO except for operations such as bind().)
Am I missing something here or is this simply not possible with Registered I/O?

C++ UDP Receiving

I have a problem I have been trying to iron out all day. The situation is as follows:
I have a server list - let's say 10 different servers.
I want to send a Proposal broadcast message using sendto command to all 10 servers.
I then want to listen and wait for the 10 servers to respond with an ACK + some message.
After some time, time out and proceed using the data from the servers that have responded (the time will vary based on the number of requests).
I would like to use UDP so that it is connection independent, but I am also concerned that if I shoot out all the messages at once, I might miss a response, since I am not blocking on the recvfrom line until all the messages have been sent.
I could just wait after each send, but that seems inefficient from a broadcast perspective.
I could also set up a listen thread first and then run the sendto calls on a separate thread, but then the listener (which is the whole program) is on another thread outside of main.
So my question is twofold: which of these approaches (if any) seems like the best fit given what I am trying to do? Secondly, is there any queue on the socket? Say it's not 10 but 1000 servers: if a message comes in while I am not ready to receive, will that message be dropped?
I am open to suggestions on other ways to implement.
Thanks in advance!
Most personal computers these days are located behind firewalls that will block any incoming UDP packets -- indeed, most are also behind a NAT layer and don't even have their own Internet-routable IP address. I'd worry about that before I'd worry about missing the occasional incoming UDP message due to timing issues.
That said, in the case where your client is running on the open Internet (or is behind a firewall that is configured to allow UDP packets in), the timing issue isn't really a problem, because the networking stack allocates an incoming-data buffer for every socket as part of the socket() call. Once you have successfully called bind() on the socket, any UDP packets arriving at that socket's port will be placed into the socket's incoming-data buffer, ready to be handed over to your code the next time it calls recvfrom(). Importantly, this buffering occurs whether your thread is currently inside a recvfrom() call or not.
It is possible for the incoming-data-buffer to fill up (it has a finite size, usually around 64KB); at which point any additional incoming UDP packets will be dropped. The usual way to avoid that is to make sure you call recvfrom() as soon as possible, or if that is not sufficient, you can use setsockopt() to tell the networking stack to make the socket's incoming data-buffer larger.
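A minimal sketch of that setsockopt call (sock is assumed to be the already-created UDP socket; the 1 MB figure is just an example, the kernel may clamp or round it, and on Windows the value pointer is passed as a char*):

#include <sys/socket.h>
#include <cstdio>

int rcvbuf = 1 * 1024 * 1024;
if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf)) < 0)
    perror("setsockopt(SO_RCVBUF)");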
Meanwhile, your calls to sendto() will likely finish quickly, since sendto() returns as soon as the data in your array is copied into the socket's outgoing-data-buffer. In particular, sendto() does not wait for the bytes to go across the network, or (usually) even for the bytes to get to your network card. At worst, it might block until there is enough room in the outgoing-data-buffer to place the data there; and the outgoing-data-buffer is always draining at the line-speed of your network device.

How to send and receive data up to SO_SNDTIMEO and SO_RCVTIMEO without corrupting connection?

I am currently planning how to develop a man-in-the-middle network application for a TCP server that would transfer data between server and client. It would behave as a regular client to the server and as a server to the remote client, without modifying any data. It will optionally be used to detect and measure how long the server or client is unable to receive data that is ready to be received when the connection is stalled.
I am planning to use blocking send and recv functions. Before any data transfer I would call setsockopt to set SO_SNDTIMEO and SO_RCVTIMEO to about 10 - 20 milliseconds, assuming this will force blocking send and recv calls to return early so that data for another active connection can be routed. Running a thread per connection looks too expensive. I would not use async sockets here because I cannot find a guarantee that they will complete within fractions of a second, especially when a large amount of data is being sent or received. High data delays do not look good. I would use very small buffers here, but calling a function for each received byte looks like overkill.
My next assumption is that it is safe to call send or recv again later if a previous call was terminated by the timeout and less data than requested was transferred.
But I am confused by contradictory information available on MSDN.
send function
https://msdn.microsoft.com/en-us/library/windows/desktop/ms740149%28v=vs.85%29.aspx
If no error occurs, send returns the total number of bytes sent, which can be less than the number requested to be sent in the len parameter.
SOL_SOCKET Socket Options
https://msdn.microsoft.com/en-us/library/windows/desktop/ms740532%28v=vs.85%29.aspx
SO_SNDTIMEO - The timeout, in milliseconds, for blocking send calls. The default for this option is zero, which indicates that a send operation will not time out. If a blocking send call times out, the connection is in an indeterminate state and should be closed.
Are my assumptions correct, and can I use these functions like this? Maybe there is a more effective way to do this?
Thanks for answers
While you MIGHT implement something along the ideas you have given in your question, there are preferable alternatives on all major systems.
Namely:
kqueue on FreeBSD and family, and on Mac OS X.
epoll on Linux and related operating systems.
I/O completion ports on Windows.
Using those technologies allows you to process traffic on multiple sockets without timeout logic and polling, in an efficient, reactive manner. They can all be considered successors of the ancient select() function in the socket API.
As for the quoted documentation for send() in your question, it is not really confusing or contradictory. Useful network protocols implement a mechanism to create "backpressure" for situations where a sender tries to send more data than the receiver (and/or the transport channel) can accommodate. So an application can only hand more data to send() if the network stack has buffer space ready for it.
If, for example, an application tries to send 3 KB worth of data and the TCP/IP stack only has room for 800 bytes, send() might succeed and report that it accepted 800 of the 3 KB offered.
The basic approach to forwarding data on a connection is: do not read from the incoming socket until you know you can send that data to the outgoing socket. If you read greedily (and buffer at the application layer), you deprive the communication channel of its backpressure mechanism.
So basically, the "send capability" should drive the receive actions.
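With plain blocking sockets that idea boils down to something like the sketch below: read one chunk, and do not read again until the whole chunk has been written out, handling partial sends along the way (the buffer size and function name are illustrative; a production proxy would sit on top of epoll/kqueue/IOCP as suggested above):

#include <sys/types.h>
#include <sys/socket.h>

bool forward_chunk(int from, int to)
{
    char buf[4096];
    ssize_t got = recv(from, buf, sizeof(buf), 0);
    if (got <= 0)
        return false;                  // connection closed or error

    ssize_t sent_total = 0;
    while (sent_total < got) {         // send() may accept only part of the chunk
        ssize_t sent = send(to, buf + sent_total, got - sent_total, 0);
        if (sent <= 0)
            return false;
        sent_total += sent;
    }
    return true;                       // only now read the next chunk
}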
As for using timeouts for this "middle man", there are 2 major scenarios:
You know the sending behavior of the sender application, i.e. whether it can be expected to send any data within your chosen receive timeout. Some applications only send sporadically, and any value you pick for a receive timeout could be wrong. Even if it is supposed to send at a specific interval, your timeouts will cause trouble as soon as someone debugs the sending application.
You want the "middle man" to work for unknown applications (which, of course, must not use encryption if the middle man is to have a chance). There you cannot pick an "adequate" timeout value, because you know nothing about the sending behavior of the involved application(s).
As a previous poster has suggested, I strongly urge you to reconsider the design of your server so that it employs an asynchronous I/O strategy. This may very well require that you spend significant time learning about each operating systems' preferred approach. It will be time well-spent.
For anything other than a toy application, using blocking I/O in the manner that you suggest will not perform well. Even with short timeouts, it sounds to me as though you won't be able to service new connections until you have completed the work for the current connection. You may also find (with short timeouts) that you're burning more CPU time spinning waiting for work to do than actually doing work.
A previous poster wisely suggested taking a look at Windows I/O completion ports. Take a look at this article I wrote in 2007 for Dr. Dobbs. It's not perfect, but I try to do a decent job of explaining how you can design a simple server that uses a small thread pool to handle potentially large numbers of connections:
Windows I/O Completion Ports
http://www.drdobbs.com/cpp/multithreaded-asynchronous-io-io-comple/201202921
If you're on Linux/FreeBSD/MacOSX, take a look at libevent:
Libevent
http://libevent.org/
Finally, a good, practical book on writing TCP/IP servers and clients is "TCP/IP Sockets in C: Practical Guide for Programmers" by Michael Donahoo and Kenneth Calvert. You could also check out the W. Richard Stevens texts (which cover the topic thoroughly for UNIX).
In summary, I think you should take some time to learn more about asynchronous socket I/O and the established, best-of-breed approaches for developing servers.
Feel free to private message me if you have questions down the road.

What common programming mistakes can cause stuck CLOSE_WAIT in epoll edge triggered mode?

I'm wondering what common programming situations/bugs might cause a server process of mine to enter CLOSE_WAIT without ever actually closing the socket.
What I want to do is trigger this situation so that I can fix it. In a normal development environment I've not been able to trigger it, but the same code used on a live server occasionally gets them, so that after many days we have hundreds of them.
Googling for CLOSE_WAIT, it actually seems to be a very common problem, even in mature and supposedly well-written services like nginx.
CLOSE_WAIT is basically the state where the remote end has shut down the socket but the local application has not yet invoked close() on it. This usually happens when you are not expecting to read data from the socket and thus aren't watching it for readability.
Many applications for convenience sake will always monitor a socket for readability to detect a close.
A scenario to try out is this:
Peer sends 2 KB of data and immediately closes the connection
Your socket is then registered with epoll and gets a notification for readability
Your application only reads 1k of data
You stop monitoring the socket for readability
(I'm not sure if edge-triggered epoll will end up delivering the shutdown event as a separate event).
See also:
(from man epoll_ctl)
EPOLLRDHUP (since Linux 2.6.17)
Stream socket peer closed connection, or shut down writing half of connection. (This flag is especially useful for writing simple code to detect peer shutdown when using Edge Triggered monitoring.)
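A sketch of how that flag is used (epfd and fd are assumed to already exist; the event-loop fragment is illustrative):

#include <sys/epoll.h>
#include <unistd.h>

epoll_event ev = {};
ev.events = EPOLLIN | EPOLLRDHUP | EPOLLET;   // also be told about peer shutdown
ev.data.fd = fd;
epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev);

// Later, in the event loop:
// if (events[i].events & (EPOLLRDHUP | EPOLLHUP)) {
//     close(events[i].data.fd);   // peer closed -- don't leave it in CLOSE_WAIT
// }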

Is acknowledgment response necessary when using send()/recv() of Winsock?

Using Winsock, C++, I send and receive the data with send()/recv(), TCP connection. I want to be sure that the data has been delivered to the other party, and wonder if it is recommended to send back some acknowledgment message after (if) receiving data with recv.
Here are two possibilities, and please advice which way to go:
If send returns the size of the passed buffer, assume that the data has been delivered at least to the recv function on the other side of the wire. When I say "at least", I mean that even if recv fails there (e.g. due to an insufficient buffer, etc.), I don't care; I just want to be sure I've done my part of the work properly on the server side, i.e. I've sent the data completely (the data reached the other machine).
Use an additional acknowledgment: after receiving the data with recv, send back the ID of the received packet (part of the header of each packet sent), signaling that the packet was received successfully. If I don't receive such an "acknowledgment message" within some interval, return a failure code from the sender function.
The second approach looks safer, but I don't want to complicate the transfer protocol if it is redundant. Also please note that I'm talking about a TCP connection (which is inherently more reliable than UDP).
Is there any other mechanisms (maybe some other APIs? maybe WSARecv()/WSASend() work differently?) of ensuring that the data was delivered to the recv function on the other side?
If you recommend the second way, could you please give me a code snippet that lets me use recv with a timeout to receive the acknowledgment? recv is a blocking operation, so it will hang forever if the previous send attempt failed (the other party was never notified). Is there any simple way of using recv with a timeout, without creating a separate thread every time, which would probably be overkill for each and every send operation?
Also, the amount of data I pass to the send function might be quite big (several megabytes), so how do I choose the timeout for the "acknowledgment message"? Maybe I should "split" large buffers and use several send calls? I think it will get quite complicated; please advise!
EDIT: OK, you people are suggesting that TCP/IP stack will handle it (i.e. no manual acknowledgment required), but this is what I found on MSDN page: "The successful completion of a send function does not indicate that the data was successfully delivered and received to the recipient. This function only indicates the data was successfully sent." So even if the TCP mechanism has the ability to ensure data delivery, I can't get that status (success or not) via send() function, or any other Winsock function I know. Do you know any way of getting the status from the TCP layer? Again - return value of send() function seems to be not enough!
========================================================
EDIT 2: OK, I think we agree that even though the TCP protocol handles errors when something goes wrong, the send() function of Winsock is not capable of reporting them (simply because it returns before the network driver actually starts transmitting the data). So here is the million dollar question: does the send() function of Winsock at least ensure that no other packets will be delivered to the other party until the current packet is? In other words, if the sending fails because of some network failure (not reported by the send() call), and the network failure is then fixed before the next call of send() with the next chunk of data, is it guaranteed that the previous packet (which failed but was not reported by send()) will be delivered before the next one? Put differently, is there a chance that one particular send() call fails "silently", so that subsequent send() calls succeed but the first packet is lost? AGAIN - I'm not talking at the TCP level, I'm talking at the Winsock API level!
Why don't you trust your TCP/IP stack to guarantee delivery? After all, that is the whole point of using TCP instead of UDP.
The existing answers here are mostly correct: if you use TCP you really don't need to worry about reliable delivery of your packets to your peer.
But this is a dangerous view for some systems where data integrity must be taken to the next level: the common criteria auditing requirement FAU_STG.4.1 requires the ability to prevent auditable events if the audit log might suffer a loss of audit entries. (For example, the Linux auditd(8) audit logging daemon can be configured to place the computer in single-user-mode or halt the system completely when there is no more space left for audit logs.) Audit logs from remote systems should probably be maintained until it is known that they have been successfully written to centralized log servers.
Financial transactions would probably be best handled with a more reliable protocol than simple TCP as well -- crediting or debiting accounts would be best handled with a multi-staged protocol to ensure availability of funds, perform the transaction, then report the result of the transaction to the origination point.
TCP allows nearly a gigabyte of in-flight data between two peers (under extreme conditions); depending upon the requirements of your application, you might need to maintain that data at the sending side until you receive positive confirmation from your peer that the data has been properly handled.
Thankfully, most applications aren't this critical; losing a megabyte of data here or there down a socket that reports a closed connection at some point "in the future" really isn't horrible -- we just re-try our HTTP request, or re-attempt the SFTP connection.
Update
A socket will only accept enough data to fill its available window. The window size is negotiated between the two peers during the session handshake. So your calls to send() will begin blocking when the socket's window fills. (The OS might keep letting you add data to its internal buffers too, but at some point the writes will block.) If the peer breaks the connection with a RST or ICMP Unreachable message, a future call to send() will return an error value for Connection Reset or Broken Pipe.
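A sketch of how that error surfaces through the Winsock API (sock, buf and len are assumed to exist in the caller; the helper name is illustrative):

#include <winsock2.h>

bool send_or_fail(SOCKET sock, const char* buf, int len)
{
    int sent = send(sock, buf, len, 0);
    if (sent == SOCKET_ERROR) {
        int err = WSAGetLastError();
        if (err == WSAECONNRESET || err == WSAECONNABORTED) {
            // The peer is gone; close the socket and report the failure upstream.
        }
        return false;
    }
    return true;   // note: 'sent' may still be smaller than 'len'
}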
Update 2
I'm not talking at the TCP level, I'm talking at the Winsock API level
This might be the source of confusion. send() has no choice but to adhere to the TCP behavior when used with TCP.
TCP guarantees in-order reliable delivery of a stream of bytes, to the extent that packets can be delivered. (See @Hans's comment about a pony and careless people kicking power cords.) The peer program will see the bytes in the order they were sent. (Well, okay, TCP also has out-of-band urgent packet delivery, but I haven't actually seen any applications that use it. Using OOB packets, you can get some data out of line. Forget I mentioned it.)
If the remote program receives a byte sent on a TCP stream, it reliably received all preceding bytes as well. (Well, there are entire classes of replay attacks that splice together legitimate and fake packets for the remote peer, but those are increasingly difficult on systems with randomized initial sequence numbers. If this is within your threat model, you should be using TLS on top of TCP to provide cryptographically strong tamper evident information. But TLS can't provide better per-packet delivery notification.)
If you use UDP and you care about the data actually being received by the other side you NEED to use ACK, but if you don't need the speed of UDP you should use TCP, as it does the ACKing for you.
I think you are overcomplicating this; trust your TCP/IP software stack and the reliable delivery it offers. TCP sockets operate on streams of data, not packets. Also, one call to send does not guarantee one call to recv.
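To illustrate that last point, a small sketch of reading a fixed-size message: because TCP hands you a byte stream, one send() on the other end may arrive as several recv() results (the helper name is illustrative):

#include <winsock2.h>

bool recv_exact(SOCKET s, char* buf, int len)
{
    int received = 0;
    while (received < len) {
        int n = recv(s, buf + received, len - received, 0);
        if (n <= 0)
            return false;   // 0 = connection closed, SOCKET_ERROR = failure
        received += n;
    }
    return true;
}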