C++ TCP socket connection retry method

After developing a sample client-server application which can exchange some data, I'm trying to add a retry mechanism to it. Currently my application follows this protocol:
Client connects to server (non-blocking mode) with a 3-second timeout and 2 retries.
The client sends fixed-length data. The send path checks whether the complete data was sent.
The client receives the response (timeout: 3 seconds) from the server and verifies it. If an incorrect response is received, it re-sends the data and waits for the response again, retrying up to two times.
The code sections for the above look roughly like this:
connect() and select() for opening connection
select() and send() for data send
select() and recv() for data receiving
Now I'm retrying based on the return values of the socket functions: if send() or recv() fails, I retry the same call, but I never call connect() again.
I tested this by restarting the server in the middle of a data transfer. The client then fails to communicate with the server and quits after several retries. I believe this happens because there is no connect() call in the retry path.
Any suggestions?
Example code for receiving socket data
bool CTCPCommunication::ReceiveSocketData(char* pchBuff, int iBuffLen)
{
    bool bReturn = true;

    // Check whether the socket is ready to receive.
    fd_set stRead;
    FD_ZERO(&stRead);
    FD_SET(m_hSocket, &stRead);
    int iRet = select(0, &stRead, NULL, NULL, &m_stTimeout);

    // If the socket is not ready, this point is reached after the 3-second timeout
    // and control goes to the end. If it is ready, control enters the read loop and
    // reads data until the data ends or a socket error keeps being raised
    // continuously for more than 3 seconds.
    if ((iRet > 0) && (FD_ISSET(m_hSocket, &stRead)))
    {
        DWORD dwStartTime = GetTickCount();
        DWORD dwCurrentTime = 0;
        while ((iBuffLen - 1) > 0)
        {
            int iRcvLen = recv(m_hSocket, pchBuff, iBuffLen - 1, 0);
            dwCurrentTime = GetTickCount();
            // Receive failed due to a socket error.
            if (iRcvLen == SOCKET_ERROR)
            {
                if ((dwCurrentTime - dwStartTime) >= SOCK_TIMEOUT_SECONDS * 1000)
                {
                    WRITELOG("Call to socket API 'recv' failed after 3 secs continuous retries, error: %d", WSAGetLastError());
                    bReturn = false;
                    break;
                }
                continue; // Still within the timeout window; don't fall through and adjust the buffer pointer by -1.
            }
            // Connection closed by the remote host.
            else if (iRcvLen == 0)
            {
                WRITELOG("recv() returned zero - time to do something: %d", WSAGetLastError());
                break;
            }
            pchBuff += iRcvLen;
            iBuffLen -= iRcvLen;
        }
    }
    else
    {
        WRITELOG("Call to API 'select' failed inside 'ReceiveSocketData', error: %d", WSAGetLastError());
        bReturn = false;
    }
    return bReturn;
}

Currently my application follows this protocol:
Client connects to server (non-blocking mode) with a 3-second timeout and 2 retries.
You can't retry a connection. You have to close the socket whose connect attempt failed, create a new socket, and call connect() again.
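For illustration, a minimal blocking-mode sketch of that close-and-reconnect cycle (ConnectWithRetry, the attempt count, and the one-second back-off are illustrative; the non-blocking connect()/select() variant follows the same close-and-recreate pattern):

// Hypothetical helper: a failed connect() leaves the socket unusable,
// so each attempt starts from a brand-new socket.
SOCKET ConnectWithRetry(const sockaddr_in& addr, int maxAttempts)
{
    for (int attempt = 0; attempt < maxAttempts; ++attempt)
    {
        SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
        if (s == INVALID_SOCKET)
            return INVALID_SOCKET;
        if (connect(s, reinterpret_cast<const sockaddr*>(&addr), sizeof(addr)) == 0)
            return s;             // connected
        closesocket(s);           // discard the failed socket entirely
        Sleep(1000);              // back off before the next attempt
    }
    return INVALID_SOCKET;
}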
The client sends fixed-length data. The send path checks whether the complete data was sent.
This isn't necessary in blocking mode: the POSIX standard guarantees that a blocking-mode send() will send all the data, or fail with an error.
The client receives the response (timeout: 3 seconds) from the server and verifies it. If an incorrect response is received, it re-sends the data and waits for the response again, retrying up to two times.
This is a bad idea. Most probably all the data will arrive, including all the retries, or none of it. You need to make sure that your transactions are idempotent if you use this technique. You also need to pay close attention to the actual timeout period. 3 seconds is not adequate in general. A starting point is double the expected service time.
The code sections for the above look roughly like this:
connect() and select() for opening connection
select() and send() for data send
select() and recv() for data receiving
You don't need the select() in blocking mode. You can just set a read timeout with SO_RCVTIMEO.
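For example, something along these lines (a Winsock sketch; on Windows the option value is a DWORD in milliseconds, whereas POSIX systems take a struct timeval instead):

// Make blocking recv() calls give up after ~3 seconds instead of waiting forever.
DWORD timeoutMs = 3000;
if (setsockopt(m_hSocket, SOL_SOCKET, SO_RCVTIMEO,
               reinterpret_cast<const char*>(&timeoutMs), sizeof(timeoutMs)) == SOCKET_ERROR)
{
    // handle the error via WSAGetLastError()
}
// A subsequent blocking recv() now fails with WSAETIMEDOUT when the timeout
// expires, so no select() call is needed.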
Now I'm retrying based on the return values of the socket functions: if send() or recv() fails, I retry the same call, but I never call connect() again.
I tested this by restarting the server in the middle of a data transfer. The client then fails to communicate with the server and quits after several retries. I believe this happens because there is no connect() call in the retry path.
If that was true you would get an error that said so.

Related

Asynchronous, Non-Blocking Socket Behaviour - WSAEWOULDBLOCK

I have inherited two applications, one Test Harness (a client) running on a Windows 7 PC and one server application running on a Windows 10 PC. I am attempting to communicate between the two using TCP/IP sockets. The Client sends requests (for data in the form of XML) to the Server and the Server then sends the requested data (also XML) back to the client.
The setup is as shown below:

       Client                                 Server
--------------------                    --------------------
|                  |   Sends Requests   |                  |
|  Client Socket   | -----------------> |  Server Socket   |
|                  | <----------------- |                  |
|                  |     Sends Data     |                  |
--------------------                    --------------------
This process always works on an initial connection (i.e. freshly launched client and server applications). The client has the ability to disconnect from the server, which triggers cleanup of sockets. Upon reconnection, I almost always (not every time, but most of the time) receive the following error:
"Receive() - The socket is marked as nonblocking and the receive operation would block"
This error is displayed at the client and the socket in question is an asynchronous, non-blocking socket.
The line which causes this SOCKET_ERROR is:
numBytesReceived = theSocket->Receive(theReceiveBuffer, 10000);
where:
- numBytesReceived is an integer (int)
- theSocket is a pointer to a class called CClientSocket which is a specialisation of CAsyncSocket, part of the MFC C++ library. This defines the socket object which is embedded within the client. It is an asynchronous, non-blocking socket.
- Receive() is a virtual function within the CAsyncSocket object
- theReceiveBuffer is a char array (10000 elements)
In executing the line described above, SOCKET_ERROR is returned from the function and calling theSocket->GetLastError() returns WSAEWOULDBLOCK.
SocketTools highlights that
When a non-blocking (asynchronous) socket attempts to perform an operation that cannot be performed immediately, error 10035 will be returned. This error is not fatal, and should be considered advisory by the application. This error code corresponds to the Windows Sockets error WSAEWOULDBLOCK.
When reading data from a non-blocking socket, this error will be returned if there is no more data available to be read at that time. In this case, the application should wait for the OnRead event to fire which indicates that more data has become available to read. The IsReadable property can be used to determine if there is data that can be read from the socket.
When writing data to a non-blocking socket, this error will be returned if the local socket buffers are filled while waiting for the remote host to read some of the data. When buffer space becomes available, the OnWrite event will fire which indicates that more data can be written. The IsWritable property can be used to determine if data can be written to the socket.
It is important to note that the application will not know how much data can be sent in a single write operation, so it is possible that if the client attempts to send too much data too quickly, this error may be returned multiple times. If this error occurs frequently when sending data it may indicate high network latency or the inability for the remote host to read the data fast enough.
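To illustrate the partial-send handling the quoted documentation describes, here is a sketch using plain Winsock calls rather than SocketTools' OnWrite event (SendAll is an illustrative helper, not part of the code above):

// Keep sending until the whole buffer is out, waiting for writability
// whenever the socket's buffers are full (WSAEWOULDBLOCK).
bool SendAll(SOCKET s, const char* data, int len)
{
    while (len > 0)
    {
        int sent = send(s, data, len, 0);
        if (sent == SOCKET_ERROR)
        {
            if (WSAGetLastError() != WSAEWOULDBLOCK)
                return false;                 // a real error
            fd_set writeSet;
            FD_ZERO(&writeSet);
            FD_SET(s, &writeSet);
            if (select(0, NULL, &writeSet, NULL, NULL) <= 0)
                return false;                 // wait for buffer space to free up
            continue;
        }
        data += sent;                         // send() may accept only part of the buffer
        len -= sent;
    }
    return true;
}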
I am consistently getting this error and failing to receive anything on the socket.
Using Wireshark, the following communications occur, with the source, destination and TCP bit flags presented here:
Event: Connect Test Harness to Server via TCP/IP
Client --> Server: SYN
Server --> Client: SYN, ACK
Client --> Server: ACK
This appears to be correct and represents the Three-Way Handshake of connecting.
Event: Send a Request for Data from the Test Harness
Client --> Server: PSH, ACK
Server --> Client: PSH, ACK
Client --> Server: ACK
Both the request data and the response data are confirmed to be exchanged successfully.
Event: Disconnect Test Harness from Server
Client --> Server: FIN, ACK
Server --> Client: ACK
Server --> Client: FIN, ACK
Client --> Server: ACK
This appears to be correct and represents the Four-Way handshake of connection closure.
SocketSniff confirms that a Socket is closed on the client side. It was not possible to get SocketSniff to work with the Windows 10 Server application.
Event: Reconnect Test Harness to Server via TCP/IP
Client --> Server: SYN
Server --> Client: SYN, ACK
Client --> Server: ACK
This appears to be correct and represents the Three-Way Handshake of connecting.
SocketSniff confirms that a new Socket is opened on the client side. It was not possible to get SocketSniff to work with the Windows 10 Server application.
Event: Send a Request for Data from the Test Harness
Client --> Server: PSH, ACK
Server --> Client: ACK
We see no data being pushed (PSH) back to the client, yet we do see an acknowledgement.
Has anyone got any ideas what may be going on here? I understand it would be difficult for you to diagnose without seeing the source code, however I was hoping others may have had experience with this error and could point me down the specific route to investigate.
More Info:
The Server initialises a listening thread and binds to 0.0.0.0:49720. The 'WSAStartup()', 'bind()' and 'listen()' functions all return '0', indicating success. This thread persists throughout the lifetime of the server application.
The Server initialises two threads, a read and a write thread. The read thread is responsible for reading request data off its socket and is initialised as follows with a class called Connection:
HANDLE theConnectionReadThread
    = CreateThread(NULL,                                    // Security attributes
                   0,                                       // Default stack size
                   Connection::connectionReadThreadHandler, // Callback
                   (LPVOID)this,                            // Parameter to pass to thread
                   CREATE_SUSPENDED,                        // Don't start yet
                   NULL);                                   // Don't save thread ID
The write thread is initialised in a similar way.
In each case, the CreateThread() function returns a suitable HANDLE, e.g.
theConnectionReadThread = 00000570
theConnectionWriteThread = 00000574
The threads actually get started within the following function:
void Connection::startThreads()
{
    ResumeThread(theConnectionReadThread);
    ResumeThread(theConnectionWriteThread);
}
And this function is called from within another class called ConnectionManager which manages all the possible connections to the server. In this case, I am only concerned with a single connection, for simplicity.
Adding text output to the server application reveals that I can successfully connect/disconnect the client and server several times before the faulty behaviour is observed. For example, within the connectionReadThreadHandler() and connectionWriteThreadHandler() functions, I am outputting text to a log file as soon as they execute.
When correct behaviour is observed, the following lines are output to the log file:
Connection::ResumeThread(theConnectionReadThread) returned 1
Connection::ResumeThread(theConnectionWriteThread) returned 1
ConnectionReadThreadHandler() Beginning
ConnectionWriteThreadHandler() Beginning
When faulty behaviour is observed, the following lines are output to the log file:
Connection::ResumeThread(theConnectionReadThread) returned 1
Connection::ResumeThread(theConnectionWriteThread) returned 1
The callback functions do not appear to be invoked.
It is at this point that the error is displayed on the client indicating that:
"Receive() - The socket is marked as nonblocking and the receive operation would block"
On the Client side, I've got a class called CClientDoc, which contains the client side socket code. It first initialises theSocket which is the socket object which is embedded within a client:
private:
    CClientSocket* theSocket = new CClientSocket;
When a connection is initialised between client and server, this class calls a function called CreateSocket() part of which is included below, along with ancillary functions which it calls:
void CClientDoc::CreateSocket()
{
    AfxSocketInit();
    int lastError;
    theSocket->Init(this);
    if (theSocket->Create()) // Calls CAsyncSocket::Create() (part of afxsock.h)
    {
        theErrorMessage = "Socket Creation Successful"; // this is a CString
        theSocket->SetSocketStatus(WAITING);
    }
    else
    {
        // We don't fall in here
    }
}

void CClientSocket::Init(CClientDoc* pDoc)
{
    pClient = pDoc; // pClient is a pointer to a CClientDoc
}

void CClientSocket::SetSocketStatus(SOCKET_STATUS sock_stat)
{
    theSocketStatus = sock_stat; // theSocketStatus is a private member of CClientSocket of type SOCKET_STATUS
}
Immediately after CreateSocket(), SetupSocket() is called which is also provided here:
void CClientDoc::SetupSocket()
{
    theSocket->AsyncSelect(); // Function within afxsock.h
}
Upon disconnection of the client from the server,
void CClientDoc::OnClienDisconnect()
{
    theSocket->ShutDown(2); // Inline function within afxsock.inl
    delete theSocket;
    theSocket = new CClientSocket;
    CreateSocket();
    SetupSocket();
}
So we delete the current socket and then create a new one, ready for use, which appears to work as expected.
The error is being written on the Client within the DoReceive() function. This function calls the socket to attempt to read in a message.
void CClientDoc::DoReceive()
{
    int lastError;
    switch (numBytesReceived = theSocket->Receive(theReceiveBuffer, 10000))
    {
    case 0:
        // We don't fall in here
        break;
    case SOCKET_ERROR: // We come in here when the faulty behaviour occurs
        if ((lastError = theSocket->GetLastError()) == WSAEWOULDBLOCK)
        {
            theErrorMessage = "Receive() - The socket is marked as nonblocking and the receive operation would block";
        }
        else
        {
            // We don't fall in here
        }
        break;
    default:
        // When connection works, we come in here
        break;
    }
}
Hopefully the addition of some of the code proves insightful. I should be able to add a bit more if needed.
Thanks
The WSAEWOULDBLOCK error DOES NOT mean the socket is marked as blocking. It means the socket is marked as non-blocking and there is NO DATA TO READ at that time.
WSAEWOULDBLOCK means the socket WOULD HAVE blocked the calling thread waiting for data if the socket HAD BEEN marked as blocking.
To know when a non-blocking socket has data waiting to be read, use Winsock's select() function, or the CClientSocket::AsyncSelect() method to request FD_READ notifications, or other equivalent. Don't try to read until there is something to read.
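As a rough sketch of the select() option (assuming access to the underlying handle, which CAsyncSocket exposes as the public m_hSocket member):

// Only call Receive() once select() reports the socket as readable.
fd_set readSet;
FD_ZERO(&readSet);
FD_SET(theSocket->m_hSocket, &readSet);
timeval timeout = { 3, 0 };               // wait at most 3 seconds

int ready = select(0, &readSet, NULL, NULL, &timeout);
if (ready > 0 && FD_ISSET(theSocket->m_hSocket, &readSet))
{
    numBytesReceived = theSocket->Receive(theReceiveBuffer, 10000);
}
// ready == 0 means nothing to read yet: try again later instead of
// spinning on Receive() and getting WSAEWOULDBLOCK.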
In your analysis, you see the client sending data to the server, but the server is not sending data to the client. So you clearly have a logic bug in your code somewhere, you need to find and fix it. Either the client is not terminating its request correctly, or the server is not receiving/processing/replying to it correctly. But since you did not show your actual code, we can't tell you what is actually wrong with it.

What happens to a socket when the other end closes it?

I want to develop a client-server app and I want to make it as robust as possible. Multiple questions come up for me, and I just can't find an unambiguous answer on the internet.
Let's say the server runs a while(TRUE) loop and checks for a command's existence in its command queue; if there is one, it sends it, and if there isn't, it just continues to the head of the loop.
But what if the other end goes down, or there is a connection error between the two? What happens to the socket value? Does it become INVALID_SOCKET?
while (TRUE) {
    if (ReqQueue->size() != 0 && ReqQueue->front() != string("STOP")) { // there is some command in the ReqQueue which is NOT STOP
        int sent = send(ClientSocket, ReqQueue->front().c_str(), (int)strlen(ReqQueue->front().c_str()), 0);
        if (sent == (int)strlen(ReqQueue->front().c_str()))
            ReqQueue->pop(); // Next command.
        else if (WSAGetLastError() == WSAETIMEDOUT) { // note: the original 'int err = WSAGetLastError() == WSAETIMEDOUT' captured the comparison result, not the error code
            shutdown(ClientSocket, SD_BOTH);
            closesocket(ClientSocket);
            return;
        }
        else
            continue;
    }
    else if (ReqQueue->size() == 0) {
        continue;
    }
    else if (ReqQueue->front() == string("STOP")) {
        if (send(ClientSocket, "STOP", strlen("STOP"), 0) == strlen("STOP")) {
            /* Message received indication from target */
            shutdown(ClientSocket, SD_BOTH);
            closesocket(ClientSocket);
            return;
        }
    }
}
shutdown(ClientSocket, SD_BOTH);
closesocket(ClientSocket);
return 0;
that's the source :)
What I want to ask is: is there a better way to implement the above goal? Maybe I can change the while-loop condition to something like while(the socket is OK) or while(there is still a connection).
what happen to the socket value
Nothing. A send() on that socket will eventually fail, and a recv() on it will deliver zero or -1, but the socket itself remains open, and the variable value is unaffected. There is no magic.
does it become INVALID_SOCKET?
No.
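A minimal sketch of the three outcomes to check for (using the ClientSocket from the question):

char buf[512];
int n = recv(ClientSocket, buf, sizeof(buf), 0);
if (n > 0)
{
    // received n bytes of data
}
else if (n == 0)
{
    // orderly shutdown by the peer: stop using the connection and close your end
    closesocket(ClientSocket);
}
else // n == SOCKET_ERROR
{
    // hard error, e.g. WSAECONNRESET; the SOCKET value itself stays valid
    // until you call closesocket() on it
    closesocket(ClientSocket);
}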
For me, a better idea would be: when your server receives a request from any client, create a new thread and assign the task to that thread. That way your server can process client requests in parallel and work on multiple requests from multiple clients, so no client has to wait for the server to finish a request that another client submitted earlier. If you implement it like this, you don't need to worry much about a broken connection: normally, if a connection breaks, your server will find out while sending the reply to the client, and you can mark that task as failed and log it in the server log.
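A bare-bones sketch of that thread-per-client pattern (Win32 flavour; ListenSocket and HandleClient are illustrative names):

// Each accepted connection gets its own worker thread, so one broken
// connection only affects the thread serving it.
DWORD WINAPI HandleClient(LPVOID param)
{
    SOCKET client = reinterpret_cast<SOCKET>(param);
    // ... serve this client's requests ...
    closesocket(client);
    return 0;
}

for (;;)
{
    SOCKET client = accept(ListenSocket, NULL, NULL);
    if (client == INVALID_SOCKET)
        continue;
    HANDLE h = CreateThread(NULL, 0, HandleClient,
                            reinterpret_cast<LPVOID>(client), 0, NULL);
    if (h != NULL)
        CloseHandle(h);                   // let the thread run detached
}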

Winsock not sending in a while loop

I am very new to networking and have an issue with sending messages during a while loop.
To my knowledge I should do something along the lines of this:
Create Socket()
Connect()
While
    Do logic
    Send()
End while
Close Socket()
However it sends once and returns -1 there after.
The code will only work when I create the socket in the loop.
While
    Create Socket()
    Connect()
    Do logic
    Send()
    Close Socket()
End while
Here is a section of the code I am using but doesn't work:
// init winsock
WSAStartup(MAKEWORD(2, 0), &wsaData);

// open socket
sock = socket(PF_INET, SOCK_STREAM, IPPROTO_TCP);

// connect
memset(&serveraddr, 0, sizeof(serveraddr));
serveraddr.sin_family = AF_INET;
serveraddr.sin_addr.s_addr = inet_addr(ipaddress);
serveraddr.sin_port = htons((unsigned short) port);
connect(sock, (struct sockaddr *) &serveraddr, sizeof(serveraddr));

while (true) {
    if (send(sock, request.c_str(), request.length(), 0) < 0 /*!= request.length()*/) {
        OutputDebugString(TEXT("Failed to send."));
    } else {
        OutputDebugString(TEXT("Activity sent."));
    }
    Sleep(30000);
}

// disconnect
closesocket(sock);

// cleanup
WSACleanup();
The function CheckForLastError() returns 10053:
WSAECONNABORTED
Software caused connection abort.
An established connection was aborted by the software in your host computer, possibly due to a data transmission time-out or protocol error
Thanks
I have been looking for a solution to this problem too. I am having the same problem with my server. When trying to send a response from inside the loop, the client seems never to receive it.
As I understand the problem, according to user207421's suggestions, when you establish a connection between a client and a server, the protocol must carry enough information to let the client know when the server has finished sending the response. If you see this example, you have a minimal HTTP server that responds to requests; you can exercise it with a browser or an application like Postman. If you look at the response message, you will see a header called Connection. Setting its value to close tells the client which message is the last one from the server for that request. In my case the message was being sent, but the client kept waiting, maybe because there was no closing element it could recognize. I was also missing the Content-Length header; my HTTP response message was wrong, and the client was lost.
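For example, a sketch of a well-formed response (the body and the sock variable are placeholders):

// Content-Length tells the client exactly how many body bytes to expect;
// "Connection: close" tells it this is the last message on the connection.
std::string body = "<html><body>hello</body></html>";
std::string response =
    "HTTP/1.1 200 OK\r\n"
    "Content-Type: text/html\r\n"
    "Content-Length: " + std::to_string(body.size()) + "\r\n"
    "Connection: close\r\n"
    "\r\n" + body;
send(sock, response.c_str(), static_cast<int>(response.size()), 0);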
This diagram shows what needs to be outside the loop and what needs to be inside.
To understand how and why your program fails, you have to understand the functions you use.
Some of them are blocking functions and some are not. Some of them require previous calls to other functions and some don't.
From what I understand, we are talking about a client here, not a server.
The client has only non-blocking functions in this case. That means that whenever you call a function, it will be executed without waiting.
So send() will send data the second it is called, and the program will go on to the next line of code.
If the information to be sent was not yet ready, you will have a problem, since nothing will be sent.
To solve it you could use some sort of delay. The problem with delays is that they are blocking functions, meaning your program will stop once it hits the delay. To solve that, you can create a thread and lock it until the information is ready to be sent.
But that would do the job for one send(). You will send the info and that's that.
If you want to keep the communication open and send info repeatedly, you will need to create a while loop. Once you have a while loop you don't have to worry about anything: you can verify that the information is ready with flow control, and you can call send() over and over again before terminating the connection.
Now the question is: what is happening on the server side of things?
"ipaddress" should hold the IP of the server. The server might reject your request to connect. Or worse, it might accept your request but be listening with different settings relative to your client, meaning that maybe the server is not receiving (does not call recv()) while you are trying to send info, which might result in errors/crashes and whatnot.

TCP connection accepted, but writing data causes it to use a stale connection

The server (192.168.1.5:3001), is running Linux 3.2, and is designed to only accept one connection at a time.
The client (192.168.1.18), is running Windows 7. The connection is a wireless connection. Both programs are written in C++.
It works great 9 in 10 connect/disconnect cycles. The tenth-ish (it happens randomly) connection has the server accept the connection, then when it later actually writes to it (typically 30+ seconds later), according to Wireshark (see screenshot) it looks like it's writing to an old stale connection, on a port number that the client FINed a while ago but the server hasn't yet FINed. So the client and server connections seem to get out of sync: the client makes new connections, and the server tries writing to the previous one. Every subsequent connection attempt fails once it gets into this broken state. The broken state can be initiated by going beyond the maximum wireless range for half a minute (as before, this works 9 times in 10, but it sometimes causes the broken state).
Wireshark screenshot behind link
The red arrows in the screenshot indicate when the server started sending data (Len != 0), which is the point when the client rejects it and sends a RST to the server. The coloured dots down the right edge indicate a single colour for each of the client port numbers used. Note how one or two dots appear well after the rest of the dots of that colour were (and note the time column).
The problem looks like it's on the server's end, since if you kill the server process and restart, it resolves itself (until next time it occurs).
The code is hopefully not too out-of-the-ordinary. I set the queue size parameter in listen() to 0, which I think means it only allows one current connection and no pending connections (I tried 1 instead, but the problem was still there). None of the errors appear as trace prints where "// error" is shown in the code.
// Server code
mySocket = ::socket(AF_INET, SOCK_STREAM, 0);
if (mySocket == -1)
{
    // error
}

// Set non-blocking
const int saveFlags = ::fcntl(mySocket, F_GETFL, 0);
::fcntl(mySocket, F_SETFL, saveFlags | O_NONBLOCK);

// Bind to port
// Union to work around pointer aliasing issues.
union SocketAddress
{
    sockaddr myBase;
    sockaddr_in myIn4;
};
SocketAddress address;
::memset(reinterpret_cast<Tbyte*>(&address), 0, sizeof(address));
address.myIn4.sin_family = AF_INET;
address.myIn4.sin_port = htons(Port);
address.myIn4.sin_addr.s_addr = INADDR_ANY;
if (::bind(mySocket, &address.myBase, sizeof(address)) != 0)
{
    // error
}
if (::listen(mySocket, 0) != 0)
{
    // error
}

// main loop
{
    ...
    // Wait for a connection.
    fd_set readSet;
    FD_ZERO(&readSet);
    FD_SET(mySocket, &readSet);
    const int aResult = ::select(getdtablesize(), &readSet, NULL, NULL, NULL);
    if (aResult != 1)
    {
        continue;
    }

    // A connection is definitely waiting.
    const int fileDescriptor = ::accept(mySocket, NULL, NULL);
    if (fileDescriptor == -1)
    {
        // error
    }

    // Set non-blocking
    const int saveFlags = ::fcntl(fileDescriptor, F_GETFL, 0);
    ::fcntl(fileDescriptor, F_SETFL, saveFlags | O_NONBLOCK);
    ...
    // Do other things for 30+ seconds.
    ...
    const int bytesWritten = ::write(fileDescriptor, buffer, bufferSize);
    if (bytesWritten < 0)
    {
        // THIS FAILS!! (but succeeds the first ~9 times)
    }

    // Finished with the connection.
    ::shutdown(fileDescriptor, SHUT_RDWR);
    while (::close(fileDescriptor) == -1)
    {
        switch (errno)
        {
        case EINTR:
            // Break from the switch statement. Continue in the loop.
            break;
        case EIO:
        case EBADF:
        default:
            // error
            return;
        }
    }
}
So somewhere between the accept() call (assuming that is exactly the point when the SYN packet is sent), and the write() call, the client's port gets changed to the previously-used client port.
So the question is: how can it be that the server accepts a connection (and thus opens a file descriptor), and then sends data through a previous (now stale and dead) connection/file descriptor? Does it need some sort of option in a system call that's missing?
I'm submitting an answer to summarize what we've figured out in the comments, even though it's not a finished answer yet. It does cover the important points, I think.
You have a server that handles clients one at a time. It accepts a connection, prepares some data for the client, writes the data, and closes the connection. The trouble is that the preparing-the-data step sometimes takes longer than the client is willing to wait. While the server is busy preparing the data, the client gives up.
On the client side, when the socket is closed, a FIN is sent notifying the server that the client has no more data to send. The client's socket now goes into FIN_WAIT1 state.
The server receives the FIN and replies with an ACK. (ACKs are done by the kernel without any help from the userspace process.) The server socket goes into the CLOSE_WAIT state. The socket is now readable, but the server process doesn't notice because it's busy with its data-preparation phase.
The client receives the ACK of the FIN and goes into FIN_WAIT2 state. I don't know what's happening in userspace on the client since you haven't shown the client code, but I don't think it matters.
The server process is still preparing data for a client that has hung up. It's oblivious to everything else. Meanwhile, another client connects. The kernel completes the handshake. This new client will not be getting any attention from the server process for a while, but at the kernel level the second connection is now ESTABLISHED on both ends.
Eventually, the server's data preparation (for the first client) is complete. It attempts to write(). The server's kernel doesn't know that the first client is no longer willing to receive data because TCP doesn't communicate that information! So the write succeeds and the data is sent out (packet 10711 in your wireshark listing).
The client gets this packet and its kernel replies with RST because it knows what the server didn't know: the client socket has already been shut down for both reading and writing, probably closed, and maybe forgotten already.
In the wireshark trace it appears that the server only wanted to send 15 bytes of data to the client, so it probably completed the write() successfully. But the RST arrived quickly, before the server got a chance to do its shutdown() and close() which would have sent a FIN. Once the RST is received, the server won't send any more packets on that socket. The shutdown() and close() are now executed, but don't have any on-the-wire effect.
Now the server is finally ready to accept() the next client. It begins another slow preparation step, and it's falling further behind schedule because the second client has been waiting a while already. The problem will keep getting worse until the rate of client connections slows down to something the server can handle.
The fix will have to be for you to make the server process notice when a client hangs up during the preparation step, and immediately close the socket and move on to the next client. How you will do it depends on what the data preparation code actually looks like. If it's just a big CPU-bound loop, you have to find some place to insert a periodic check of the socket. Or create a child process to do the data preparation and writing, while the parent process just watches the socket - and if the client hangs up before the child exits, kill the child process. Other solutions are possible (like F_SETOWN to have a signal sent to the process when something happens on the socket).
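As one possible shape for that periodic check (a POSIX sketch matching the Linux server side; ClientHasHungUp is an illustrative name):

#include <cerrno>
#include <poll.h>
#include <sys/socket.h>

// Returns true if the peer has already sent FIN (or the socket has errored).
// Cheap enough to call from inside the data-preparation loop.
bool ClientHasHungUp(int fd)
{
    pollfd pfd = { fd, POLLIN, 0 };
    if (::poll(&pfd, 1, 0) <= 0)      // zero timeout: never blocks
        return false;                 // nothing pending, client presumably still there
    char c;
    // Peek without consuming: 0 means the peer closed its end.
    ssize_t n = ::recv(fd, &c, 1, MSG_PEEK | MSG_DONTWAIT);
    return n == 0 || (n < 0 && errno != EAGAIN && errno != EWOULDBLOCK);
}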
Aha, success! It turns out the server was receiving the client's SYN, and the server's kernel was automatically completing the connection with another SYN, before the accept() had been called. So there definitely a listening queue, and having two connections waiting on the queue was half of the cause.
The other half of the cause was to do with information which was omitted from the question (I thought it was irrelevant because of the false assumption above). There was a primary connection port (call it A), and the secondary, troublesome connection port which this question is all about (call it B). The proper connection order is A establishes a connection (A1), then B attempts to establish a connection (which would become B1)... within a time frame of 200ms (I already doubled the timeout from 100ms which was written ages ago, so I thought I was being generous!). If it doesn't get a B connection within 200ms, then it drops A1. So then B1 establishes a connection with the server's kernel, waiting to be accepted. It only gets accepted on the next connection cycle when A2 establishes a connection, and the client also sends a B2 connection. The server accepts the A2 connection, then gets the first connection on the B queue, which is B1 (hasn't been accepted yet - the queue looked like B1, B2). That is why the server didn't send a FIN for B1 when the client had disconnected B1. So the two connections the server has are A2 and B1, which are obviously out of sync. It tries writing to B1, which is a dead connection, so it drops A2 and B1. Then the next pair are A3 and B2, which are also invalid pairs. They never recover from being out of sync until the server process is killed and the TCP connections are all reset.
So the solution was to just change a timeout for waiting on the B socket from 200ms to 5s. Such a simple fix that had me scratching my head for days (and fixed it within 24 hours of putting it on stackoverflow)! I also made it recover from stray B connections by adding socket B to the main select() call, and then accept()ing it and close()ing it immediately (which would only happen if the B connection took longer than 5s to establish). Thanks #AlanCurry for the suggestion of adding it to the select() and adding the puzzle piece about the listen() backlog parameter being a hint.

GetQueuedCompletionStatus delayed

I have written a complex library for managing network communication based on the IOCP mechanism. The problem is that when the server closes the connection by calling the API method closesocket(), this information sometimes reaches the client delayed by seconds or even minutes. My code for detecting connection closure looks like this (simplified):
ok = GetQueuedCompletionStatus(completion_port, &io_size, (PULONG_PTR)&context, &overlapped, 40);
if (!ok) {
    // something went broken
    DWORD err = GetLastError();
    if (err == ERROR_CONNECTION_REFUSED) {
        // connection failed
    } else if (err == ERROR_SEM_TIMEOUT) {
        // connection timeout
    } else if (err == ERROR_NETNAME_DELETED) {
        // connection closure - point of interest
    } else if (err != WAIT_TIMEOUT) {
        // unknown error
    }
} else {
    // process incoming or outgoing data
}
Why is this happening? I need to know about the connection closure immediately, so I can connect to a backup server (one that is not so heavily loaded - the disconnect is happening because of the load).
How are you closing the connection?
If you're just calling closesocket() then you are initiating a shutdown sequence which will attempt to ensure that all data that is currently pending will reach the destination. This can take time, especially if the network connection has been overloaded and datagrams have been lost and TCP retransmission is occurring.
If you want to close the connection straight away, and lose any pending data, then set linger to 0 and then close the socket. This will issue an RST on the connection and you'll get that much quicker.
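That is, something like this sketch (Winsock; s stands for the connected socket):

// Hard close: linger enabled with a zero timeout makes closesocket()
// discard pending data and send RST instead of the graceful FIN sequence.
linger lin;
lin.l_onoff = 1;
lin.l_linger = 0;
setsockopt(s, SOL_SOCKET, SO_LINGER,
           reinterpret_cast<const char*>(&lin), sizeof(lin));
closesocket(s);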
I tried to experiment with the linger parameter as Len wrote, but this did not help. Adding a call to the shutdown() function just before closesocket() helped me. After analyzing the packets reaching the network interface on the client (with Wireshark), I found that the RST packet was replaced by a FIN packet. Curiously, the RST packet was not delayed. So the operating system knew that the connection was closed, but for some unknown reason this information reached the application layer very late. I measured delays between 10 seconds and 4 minutes.
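For reference, the ordering that worked here, as a minimal sketch (s stands for the connected socket):

shutdown(s, SD_SEND);   // explicitly start the FIN handshake: "no more data from me"
// (optionally drain any remaining incoming data with recv() here)
closesocket(s);         // then release the handle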