SIGPIPE not being generated immediately after 1st send - C++

I want to know whether it's possible for a TCP socket to report a broken-pipe error immediately. Currently I am catching the SIGPIPE signal on the client side when the server goes down, but I found that SIGPIPE is generated
only after the second message is sent from the client to the server. What could be the reason for this? If the other end of the socket went down, then the first send should raise SIGPIPE. Why isn't the signal generated immediately?
Is there any explanation for this peculiar behaviour? And is there any way to get around it?

The TCP stack will only report an error after some number of retransmission attempts. IIRC, the TCP retransmission timer is initialized to a few seconds, and the number of retransmissions is typically 5-10. The protocol does not support any other means of detecting a peer that has become unreachable during a data exchange (i.e. someone tripped over the server's power cable).

I think using the SO_KEEPALIVE option may speed up broken-link detection.
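As an illustration, here is a minimal sketch of enabling keepalive on an already-connected POSIX socket; the TCP_KEEP* option names are Linux-specific and the timing values are purely illustrative, not a recommendation:

// Sketch: enable TCP keepalive on a connected socket `fd`
// (Linux-specific TCP_KEEP* options; timing values are illustrative).
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

bool enable_keepalive(int fd)
{
    int on = 1;
    if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof on) < 0)
        return false;

    int idle = 10;       // seconds of idleness before the first probe
    int interval = 5;    // seconds between probes
    int count = 3;       // unanswered probes before the connection is declared dead
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof idle);
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof interval);
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &count, sizeof count);
    return true;
}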

I want to know whether its possible for tcp socket to report any broken pipe error immediately
The other end of the pipe is across a network. That network could be slow and unreliable, so one end of the pipe can never instantly tell whether its partner is still there. The delay could be quite long, so the O/S is also likely to do some buffering. These considerations make it practically impossible to detect a broken pipe immediately.
And any possible way to get around this
But why would you want to? The pipe could break at any point during transmission, so you have to handle the general case anyway.
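For completeness, a common way of handling that general case is to suppress the signal and treat a failed write as an ordinary error. A minimal sketch, assuming a Linux client (safe_send is a hypothetical helper, not from the question):

// Sketch: avoid SIGPIPE entirely and handle write failures as error codes.
#include <cerrno>
#include <sys/socket.h>

ssize_t safe_send(int fd, const void* buf, size_t len)
{
    // MSG_NOSIGNAL (Linux) suppresses SIGPIPE for this call; alternatively,
    // call signal(SIGPIPE, SIG_IGN) once at program startup.
    ssize_t n = send(fd, buf, len, MSG_NOSIGNAL);
    if (n < 0 && (errno == EPIPE || errno == ECONNRESET)) {
        // The peer is gone: close the socket, reconnect, or report upward.
    }
    return n;
}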

Related

TCP server hangs, not responding to SYN

I have a strange issue with a TCP server that sometimes hangs. The weird part is that when it hangs it does not accept any new connections, i.e. it doesn't respond to the initial TCP SYN packet. I was pretty sure that since TCP handshakes are handled by the kernel, even when a program hangs, clients should at the very least still receive the initial SYN-ACK. If anyone knows of a situation where a program can hang in a way that prevents the OS from even completing the TCP handshake (without ever closing the listening socket), please let me know.
P.S.
The program is written in C++ and the OS is Windows Server 2016.
Most likely, the listen queue is full. Not responding to the initial SYN causes the other side to try another SYN a bit later. With luck, the listen queue won't be full at that time. The program is probably not calling accept (or some similar function) often enough.
It's also possible that the program is using the conditional-accept functionality (see the lpfnCondition parameter to WSAAccept) to choose not to respond to this connection attempt.
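To illustrate the first point, a rough Winsock-flavoured sketch: call listen(listenSock, SOMAXCONN) so the OS picks the largest backlog, then drain the whole accept queue whenever the listening socket becomes readable. This assumes listenSock is a non-blocking listening socket and the names are hypothetical:

// Sketch: never let the listen queue fill up (assumes Winsock is initialised
// and listenSock is a non-blocking listening socket).
#include <winsock2.h>

void drain_accept_queue(SOCKET listenSock)
{
    for (;;) {
        SOCKET client = accept(listenSock, nullptr, nullptr);
        if (client == INVALID_SOCKET) {
            if (WSAGetLastError() == WSAEWOULDBLOCK)
                break;                      // queue drained, nothing left to accept
            break;                          // real error: log and handle it
        }
        // Hand `client` off to a worker thread / IOCP immediately;
        // never block this loop on per-connection work.
    }
}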

ZMQ - Client Server: Client is powered off unexpectedly, how server detects it?

Multiple clients are connected to a single ZMQ_PUSH socket. When a client is powered off unexpectedly, the server does not get an alert and keeps sending messages to it. Despite using ZMQ_NOBLOCK and setting ZMQ_HWM to 5 (queue at most 5 messages), my server doesn't get an error until the client is reconnected and all the messages in the queue are delivered at once.
I recently ran into a similar problem when using ZMQ. We would cut power to interconnected systems, and the subscriber would be unable to reconnect automatically. It turns out that a heartbeat mechanism has recently (in the past year or so) been implemented over ZMTP, the underlying protocol used by ZMQ sockets.
If you are using ZMQ version 4.2.0 or greater, look into setting the ZMQ_HEARTBEAT_IVL and ZMQ_HEARTBEAT_TIMEOUT socket options (http://api.zeromq.org/4-2:zmq-setsockopt). These will set the interval between heartbeats (ZMQ_HEARTBEAT_IVL) and how long to wait for the reply until closing the connection (ZMQ_HEARTBEAT_TIMEOUT).
EDIT: You must set these socket options before connecting.
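A minimal sketch using the libzmq C API; the endpoint and the interval values are made up for illustration:

// Sketch: enable ZMTP heartbeats (needs libzmq >= 4.2); the options must be
// set *before* connecting. Values are in milliseconds and purely illustrative.
#include <zmq.h>

int main()
{
    void* ctx  = zmq_ctx_new();
    void* sock = zmq_socket(ctx, ZMQ_PUSH);

    int ivl     = 2000;   // send a PING to the peer every 2 s
    int timeout = 5000;   // drop the connection if no reply within 5 s
    zmq_setsockopt(sock, ZMQ_HEARTBEAT_IVL, &ivl, sizeof ivl);
    zmq_setsockopt(sock, ZMQ_HEARTBEAT_TIMEOUT, &timeout, sizeof timeout);

    zmq_connect(sock, "tcp://server.example:5555");
    // ... zmq_send() as before; a dead peer is now detected by the heartbeat ...

    zmq_close(sock);
    zmq_ctx_destroy(ctx);
    return 0;
}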
There is nothing in zmq explicitly to detect the unexpected termination of a program at the other end of a socket, or the gratuitous and unexpected failure of a network connection.
There has been historical talk of adding some kind of underlying ping-pong, are-you-still-alive internal messaging to zmq, but the last time I looked (quite some time ago) it had been decided not to do this.
This does mean that crashes, network failures, etc. aren't necessarily handled very cleanly, and your application will not necessarily know what is going on or whether messages have been successfully sent. It is the Actor model, after all. As you're finding, your program may eventually determine that something had previously gone wrong: the timeouts in ZMTP will spot the failure, and eventually the consequences bubble back up to your program.
To do anything better you'd have to layer something like a ping-pong on top yourself (e.g. have a separate socket just for that, so that you can track the reachability of clients), but that then starts making it very hard to use the nice parts of ZMQ such as push/pull. Which is probably why the (excellent) zmq authors decided not to put it in themselves.
When faced with a similar problem I ended up writing my own transport library. I couldn't find one off the shelf that gave nice behaviour in the face of network failures, crashes, etc. It implemented CSP, not the Actor model; it wasn't terribly fast (an inevitability) and didn't do patterns in the zmq sense, but it did mean that programs knew exactly where messages were at all times, and knew whether clients were alive or unreachable at all times. The CSP semantics also meant message transfers were an execution rendezvous, so programs know what each other are doing too.

Forced server-side socket close without SO_LINGER > 0 can lose data, right?

I'm writing a cross-platform client application that uses sockets, written in C++. I'm having problems where the server is doing a hard close on the socket when it's done sending me info.
I've been reading other posts on this topic, and I'm not so much interested in the rights or wrongs of this approach, but it seems the server is either explicitly setting SO_LINGER=0, or that's the default behavior on that system (not sure, it's a Linux box).
I can see (in Wireshark) that the data was sent to me, followed within milliseconds by an RST, indicating a hard close by the server. I personally don't agree with this approach, as it should be up to the client to shut down the socket.
The server team are saying there's nothing wrong with that approach (doing a hard close rather than a shutdown); it's typical on servers to avoid accumulating TIME_WAIT sockets. On Windows my select() returns indicating there's something to read (while I haven't yet read any of this "in transit" data).
However, because of the quick arrival of the RST, on Windows recv() returns -1 and I'm seeing a 10054 error code (connection reset by peer). This wouldn't be too bad if I could at least get the data that was sent, but it seems that once my client's TCP stack sees the RST, any unread bytes are no longer made available to me.
On Linux (client), there's no problem. It seems the TCP stack is behaving slightly differently, in that I can read the outstanding bytes before the RST is honoured. I'm having trouble convincing the server guys they have a bug, given that it works for a Linux client.
First off, am I correct? Is this a server-side issue? I can't see that the client end is doing anything wrong, so it must be right?
It seems the server team are adamant that they want to perform the close and they don't want to accumulate TIME_WAIT sockets, so I was going to push for them to add an SO_LINGER of, say, 2 seconds. Does that sound like it will solve my problem? From what I understand, this will stop the server from sending out an RST so soon after sending data, and should give me a chance to read the outstanding bytes.
Found a definitive answer to my own question:
"...Upon reception of RST segment, the receiving side will immediately abort the connection. This statement has more implications than just meaning that you will not be able to receive or send any more data to/from this connection. It also implies that any unread data still in the TCP reception buffer will be lost..." It cites the book "TCP/IP Internetworking Volume II". I don't have that book, so I can only take his word for it. Doesn't seems to discard data on Linux, only Windows...
Olivier Langlois's blog
The side-effect of fiddling with SO_LINGER to force a reset is that all pending data is lost. The fact that you don't receive it is all the proof you need that the server team is wrong to do this.
RFC 793 cited below says 'this command [ABORT] causes all pending SENDs and RECEIVEs to be aborted, ... and a special RESET message to be sent to the TCP on the other side of the connection.' See also W.R. Stevens, TCP/IP Illustrated, Vol. 1, p. 287: 'Aborting a connection provides two features to the application: (1) any queued data is thrown away and the reset is sent immediately, and (2) the receiver of the RST can tell that the other end did an abort instead of a normal close'. There is similar wording, along with an extract from the BSD code that implements it, in Vol. 2.
The TIME_WAIT state only occurs on a socket which sends a FIN before it has received one: see RFC 793. So the server should be waiting for a FIN from the client, with a suitable timeout, rather than resetting. This will also permit the client to do connection pooling.
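For illustration, a sketch of the orderly alternative described above, using POSIX calls (close_after_client is a hypothetical helper and the 5-second timeout is arbitrary): after sending its reply, the server waits for the client's FIN with a bounded timeout, so the client closes first, the server never enters TIME_WAIT, and no RST is sent while data is still unread on the client side.

// Sketch: let the client close first so the server avoids both RST and TIME_WAIT.
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>

void close_after_client(int fd)
{
    struct timeval tv = {5, 0};                       // bound the wait to 5 seconds
    setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof tv);

    char buf[4096];
    ssize_t n;
    while ((n = recv(fd, buf, sizeof buf, 0)) > 0)
        ;                                             // discard any trailing data
    // n == 0: the client sent its FIN; closing now does not create TIME_WAIT here.
    // n <  0: timeout or error; close anyway, the timeout bounds the wait.
    close(fd);
}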

Read from socket return value in C++

I need to force the return value of read from a socket to be zero, without closing the connection.
I read the following statement in a page saying:
If an end-of-file condition is received or the connection is closed, 0 is returned.
But I don't know how to make it receive that condition after the string I have sent.
Can anyone help?
I'm afraid you can't do that.
If you want read to return zero, you need to close the socket. If you don't want to close the socket, you need to signal "end-of-communication" or "end-of-message" as part of your protocol.
A common way of doing that is prefixing each message with its length. That way the receiving side knows when it has read a complete message and can do whatever it wants with it.
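A minimal sketch of that idea over POSIX sockets, assuming a 4-byte big-endian length prefix (the helper names are made up; a full implementation would also loop over short send()s):

// Sketch: length-prefixed framing, so the receiver knows where each message
// ends without the sender having to close or shut down the connection.
#include <arpa/inet.h>
#include <sys/socket.h>
#include <cstdint>
#include <string>

static bool read_exact(int fd, void* buf, size_t len)
{
    char* p = static_cast<char*>(buf);
    while (len > 0) {
        ssize_t n = recv(fd, p, len, 0);
        if (n <= 0) return false;        // 0 = peer closed, <0 = error
        p += n;
        len -= static_cast<size_t>(n);
    }
    return true;
}

bool send_message(int fd, const std::string& msg)
{
    uint32_t len = htonl(static_cast<uint32_t>(msg.size()));
    return send(fd, &len, sizeof len, 0) == sizeof len &&
           send(fd, msg.data(), msg.size(), 0) == static_cast<ssize_t>(msg.size());
}

bool recv_message(int fd, std::string& msg)
{
    uint32_t len = 0;
    if (!read_exact(fd, &len, sizeof len)) return false;
    msg.resize(ntohl(len));
    return msg.empty() || read_exact(fd, &msg[0], msg.size());
}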
If you want the peer's read() or recv() to return zero, you must either close the socket or shut it down for output. In either case you can't send anything else afterwards. If that constraint doesn't suit you, you will have to revise your requirement, as it doesn't make sense.
Both the "end of file condition" and the "connection closed" condition tell the receiver that no more data can be received on this socket. You cannot simulate that by sending some magic data.
Besides calling close on the socket, you can use shutdown(2) to close only the reading side or only the writing side. This might help in limited cases, but not in the general case.
Perhaps you need some multiplexing syscall like poll(2).
You definitely need to read some good material like Advanced Linux Programming or Advanced Unix Programming.
If you need the TCP/IP transmission to carry application messages, you need to take care of packaging and framing explicitly yourself (either by having fixed-size messages, or by having some way to know the logical message size during transmission). Be aware that a TCP/IP stream can be fragmented by the network.

Ensuring data is being read with async_read

I am currently testing my network application in very low bandwidth environments. I currently have code that attempts to ensure that the connection is good by making sure I am still receiving information.
Traditionally I have done this by recording the timestamp in my ReadHandler function so that each time it gets called I know I have received data on the socket. With very low bandwidths this isn't sufficient because my ReadHandler is not getting called frequently enough.
I was toying around with the idea of writing my own completion-condition function (right now I am using transfer_at_least(1)), thinking it would get called more frequently and I could record my timestamp there, but I was wondering if there isn't some other, more standard way to go about this.
We had a similar issue in production: some of our connections may be idle for days, but we must detect if the remote is dead ASAP.
We solved it by enabling the TCP_KEEPALIVE option:
boost::asio::socket_base::keep_alive option(true);
mSocketTCP.set_option(option);
which had to be accompanied by a new startup script that writes sensible values to /proc/sys/net/ipv4/tcp_keepalive_*, since the defaults have very long timeouts (on Linux).
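As a per-socket alternative to the system-wide /proc settings, the same Linux TCP_KEEP* options can be applied through the asio socket's native descriptor. A rough sketch (the accessor is native_handle() in recent Boost versions, native() in older ones; the values are illustrative):

// Sketch: enable keepalive and tighten its timers for this one socket only.
#include <boost/asio.hpp>
#include <netinet/in.h>
#include <netinet/tcp.h>

void tune_keepalive(boost::asio::ip::tcp::socket& s)
{
    boost::asio::socket_base::keep_alive option(true);
    s.set_option(option);

    int fd = s.native_handle();          // raw descriptor for the Linux-only options
    int idle = 30, interval = 10, count = 3;
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof idle);
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof interval);
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &count, sizeof count);
}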
You can use the read_some method to get partial reads and deal with the bookkeeping yourself. This is more efficient than transfer_at_least(1), but you still have to keep track of what is going on.
However, a cleaner approach is just to use a concurrent deadline_timer. If the timer goes off before you are finished, then it is taking too long and you cancel whatever is going on. If not, just stop the timer and continue. Something like:
#include <boost/asio.hpp>
#include <boost/bind.hpp>

boost::asio::deadline_timer t(io_service);   // the io_service that runs your async reads
t.expires_from_now(boost::posix_time::seconds(20));
t.async_wait(boost::bind(&Class::timed_out, this, boost::asio::placeholders::error));

// Do stuff.

if (t.cancel() == 0) {
    // cancel() found no pending wait: the timer already went off, abort.
}

// And the timeout handler
void Class::timed_out(boost::system::error_code const& error)
{
    if (error == boost::asio::error::operation_aborted)
        return;                              // we cancelled the timer in time
    // Deal with the timeout: close the socket, etc.
}
I don't know how to handle high network latency from within the application. How can you tell whether it's network latency, or the peer server or peer application being busy and reacting slowly? And does it matter whether the network, the server or the application is at fault?
Even if you could detect that the network latency is large, what would you do about it? You cannot improve the situation from the application.
Consider another critical case, which is a subset of what you're trying to handle: the network goes down (e.g. you disconnect the cable from your machine). Since it's a subset of your problem, you want to handle it too.
Let's examine the effect of the network going down on an active TCP connection. How can you discover whether your active TCP connection is still alive? Calling send() will succeed, but that merely means the message has been queued in the kernel's outgoing TCP queue. The TCP stack will try to send it, but since no TCP ACK comes back, the stack on your side will retransmit it again and again. You can see your message in the netstat output (the Send-Q column).
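Programmatically, the same number netstat shows in Send-Q can be read with a Linux-specific ioctl; a small sketch (unsent_bytes is a hypothetical helper):

// Sketch (Linux only): bytes still sitting unacknowledged in the kernel send
// queue for this socket - the value keeps growing while the network is down.
#include <sys/ioctl.h>
#include <linux/sockios.h>

int unsent_bytes(int fd)
{
    int pending = 0;
    if (ioctl(fd, SIOCOUTQ, &pending) < 0)
        return -1;                        // not a TCP socket, or not Linux
    return pending;
}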
I'm aware of the following ways to deal with it:
One standard way is TCP keepalive, as proposed by @Cubby.
Another way is to implement a keep-alive mechanism yourself: send a keep-alive request message, and the peer is obligated to send back a keep-alive acknowledgement.
If you don't receive the acknowledgement within a predefined timeout, try sending the keep-alive request N more times (e.g. N = 2). If there is still no success, close the socket and open it again. If the peer server is not available you won't be able to open the connection, since the TCP three-way handshake requires the peer to respond. A sketch of such a probe is below.
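A minimal sketch of that application-level probe, assuming the peer answers a "PING\n" with "PONG\n"; the protocol strings, timeout and retry count are invented, and in a real design the probe traffic would go over a dedicated socket or be multiplexed with the normal messages so it cannot be confused with application data:

// Sketch: synchronous keep-alive probe with retries over a connected socket.
#include <poll.h>
#include <sys/socket.h>
#include <cstring>

bool peer_alive(int fd, int retries = 2, int timeout_ms = 3000)
{
    for (int attempt = 0; attempt <= retries; ++attempt) {
        if (send(fd, "PING\n", 5, MSG_NOSIGNAL) != 5)
            return false;                      // the write side is already broken

        pollfd pfd{fd, POLLIN, 0};
        if (poll(&pfd, 1, timeout_ms) <= 0)
            continue;                          // no answer within the timeout, retry

        char buf[8] = {};
        ssize_t n = recv(fd, buf, sizeof buf - 1, 0);
        if (n > 0 && std::strncmp(buf, "PONG", 4) == 0)
            return true;                       // peer acknowledged, still alive
        if (n <= 0)
            return false;                      // peer closed the connection or reset it
    }
    return false;   // no acknowledgement after all retries: close and reconnect
}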