I am new to OpenSSL programming. I have written an OpenSSL server and client in C (also tested in C++). When the two connect they successfully handshake and can read and write to each other. I currently have it set up so that the client only reads from the stream into a buffer, like so:
while ((rc = SSL_read(ssl, buffer, sizeof(buffer) - 1)) > 0) {
    buffer[rc] = '\0';               /* SSL_read does not null-terminate */
    fprintf(stdout, "%s\n", buffer);
}
Likewise, my server is set up so that it constantly writes the buffer to the stream, like so:
while ((rc = SSL_write(ssl, buffer, sizeof(buffer))) > 0) {
    fprintf(stdout, "Sent message.\n");
}
fprintf(stdout, "Done sending.\n");
And this works. If I abruptly end the client with ^C (Ctrl-C), the server finishes and prints "Done sending." However, if I put a delay any longer than about 10000 nanoseconds between every SSL_write (inside the server's writing loop), I get unexpected behaviour when the client disconnects, whether abruptly (with ^C) or normally via a counter and break. To clarify: before the disconnect, the server can SSL_write and the client can SSL_read normally with a delay of any duration (I haven't tried anything past a minute).
This issue means that a client connection can effectively crash the server thread: with delays longer than 10000 nanoseconds, "Done sending." is never printed after the client disconnects. I do not want an abrupt disconnect in a session to be able to crash the server. To be clear, the server crashes inside the call to SSL_write and never returns from it.
Things I have tried to solve this issue:
Watching for changes in these return values: SSL_want(ssl), SSL_get_error(ssl, 0), ERR_get_error(), and SSL_get_shutdown(ssl). In some test code I printed all of these out before calling SSL_write, and none of them change their values before the crash (see the sketch after this list).
Clearing the error queue prior to every SSL_write using ERR_clear_error()
Seeing if anything is printed with ERR_print_errors_fp(stderr) - nada
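For reference, here is a minimal sketch of how such a check around SSL_write might look; it is my reconstruction, not the original code, and `ssl`, `buffer` and `len` are assumed to exist in the caller:

/* Minimal sketch, not the original program: `ssl` is an established
 * blocking SSL*, and `buffer`/`len` are the application's data. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <openssl/err.h>
#include <openssl/ssl.h>

static int write_checked(SSL *ssl, const void *buffer, int len)
{
    int rc = SSL_write(ssl, buffer, len);
    if (rc <= 0)
    {
        switch (SSL_get_error(ssl, rc))        /* map rc to a reason code */
        {
        case SSL_ERROR_ZERO_RETURN:            /* peer sent close_notify */
            fprintf(stderr, "TLS session closed by peer\n");
            break;
        case SSL_ERROR_SYSCALL:                /* transport-level failure */
            fprintf(stderr, "I/O error: %s\n", strerror(errno));
            break;
        default:
            ERR_print_errors_fp(stderr);       /* dump the OpenSSL error queue */
            break;
        }
    }
    return rc;
}

Separately, on most Unix-like systems a write on a connection whose peer has vanished raises SIGPIPE, which by default terminates the process before SSL_write ever returns; if that is what is happening here (an assumption on my part), ignoring SIGPIPE with signal(SIGPIPE, SIG_IGN) would at least let SSL_write return an error instead.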
I have used the following delay methods:
// Method 1: busy loop
for (long i = 0; i < (long) 99999999; i++) {}

// Method 2: nanosleep
struct timespec tim, tim2;
tim.tv_sec = 0;
tim.tv_nsec = 10000L; // 10 microseconds
nanosleep(&tim, &tim2);

// Method 3: sleep
sleep(1);
I personally think it would be ludicrous to be required to write to the socket within a hundredth of a millisecond just so that the server doesn't crash on client disconnect.
Is this actually expected behavior? Am I doing something wrong or am I forgetting something? What should I do to circumvent this issue?
Any help or advice would be appreciated.
Related
I am using cpp-httplib to retrieve some data from a server using long polling (that is, the client will issue a request to the server, and the server will just keep the connection open until the required data is available or a timeout is reached).
The program runs on my Raspberry Pi, which sits behind a router that does not have a static outgoing IP address. Every time the IP is reassigned (or, at least, close to that time point), my program breaks: the thread currently performing the poll gets stuck forever in httplib::SSLClient::Get, which is caused by a blocking read() syscall. Both server- and client-side timeouts are unable to do anything, whereas a connection close should make read() immediately return 0, which is what I would have expected in this situation.
Inspecting the program with gdb shows the following:
(gdb) thread 2
(gdb) where
__libc_read (nbytes=5, buf=0x75608edb, fd=3) at ../sysdeps/unix/sysv/linux/read.c:26
__libc_read (fd=3, buf=0x75608edb, nbytes=5) at ../sysdeps/unix/sysv/linux/read.c:24
0x76d1862c in ?? () from /usr/lib/arm-linux-gnueabihf/libcrypto.so.1.1
Backtrace stopped: previous frame identical to this frame (corrupt stack?)
I am not doing anything (as far as I know) that could accidentally overwrite return addresses.
For comparison, a 'healthy' stack trace during a SSLCLient::Get can be found here.
The actual code is quite a lot, but here's a short version that shows the same behaviour:
#include <iostream>
#include <thread>

#define CPPHTTPLIB_OPENSSL_SUPPORT 1
#include "httplib.h"

void poll(httplib::SSLClient* c, char* path) {
    while (true) {
        auto response = c->Get(path);
        std::cout << response.body << std::endl;
    }
}

int main(int argc, char* argv[]) {
    if (argc >= 3) {
        httplib::SSLClient client(argv[1], 443, 20);
        std::thread poll_thread(poll, &client, argv[2]);
        poll_thread.join();
    } else {
        std::cerr << "Usage: ./poll <host> <path>" << std::endl;
        return 1;
    }
}
I can think of some workarounds that might or might not work, but I'd really like to know why and how this is happening in the first place.
Just expanding on the keep_alive option I mentioned in the comment.
In the scenario you described, it seems possible that the underlying TCP socket connection was terminated in an unclean fashion. I.e., you say the IP address was reassigned.
Ideally, when a TCP connection is terminated, you want your code to exit out of any blocked read/poll operation. That is what happens for normal socket closures, e.g., when the remote process is killed, or when the remote process just decides it is time to close. But if the IP address of your host is changed ... I'm not sure there will necessarily be a low-level TCP message that says, in effect, "this connection is now closed". The consequence for your program is that it can still hold a local socket (the local TCP endpoint) and not realise the connection has dropped.
This is where something like keep_alive comes in. The idea is that the kernel sends keep-alive packets to keep testing whether the connection is still established; if these probes ever fail, it can close the local socket (and so your blocking read, or blocking select, will return with some sort of end-of-stream error).
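To make the mechanism concrete, here is a minimal sketch of enabling kernel keep-alive on a plain POSIX socket. This illustrates the OS feature, not cpp-httplib's own API (which I have not checked here); `fd` and the interval values are placeholders:

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Sketch: fd is an already-connected TCP socket; the interval values
 * are arbitrary examples, not recommendations. */
static void enable_keepalive(int fd)
{
    int on = 1;
    setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on));

#ifdef TCP_KEEPIDLE   /* Linux-specific tuning knobs */
    int idle = 30;     /* seconds of silence before probing starts  */
    int interval = 5;  /* seconds between probes                    */
    int count = 3;     /* failed probes before the socket is closed */
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE,  &idle,     sizeof(idle));
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof(interval));
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT,   &count,    sizeof(count));
#endif
}

With settings like these, a connection that silently vanishes (such as after an IP reassignment) gets torn down by the kernel after idle + interval * count seconds, and the blocked read() returns with an error.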
Separately from keep_alive, you can also consider application-level heartbeat messages (e.g., websocket has ping/pong). In addition to ensuring the TCP connection remains established, they confirm whether the remote application is healthy.
[TL;DR version: the code below hangs indefinitely on the second recv() call both in Release and Debug mode. In Debug, if I place or remove a breakpoint anywhere in the code, it makes the execution continue and everything behaves normally]
I'm coding a simple client-server communication using UNIX sockets. The server is in C++ while the client is in python. The connection (TCP socket on localhost) gets established no problem, but when it comes to receiving data on the server side, it hangs on the recv function. Here is the code where the problem happens:
bool server::readBody(int csock) // csock is the socket file descriptor
{
    int bytecount;

    // protobuf-related variables
    google::protobuf::uint32 siz;
    kinMsg::request message;

    // if the code is working, client will send false
    // I initialize at true to be sure that the message is actually read
    message.set_endconnection(true);

    // First, read 4-character header for extracting data size
    char buffer_hdr[5];
    if ((bytecount = recv(csock, buffer_hdr, 4, MSG_WAITALL)) == -1)
        ::std::cerr << "Error receiving data " << ::std::endl;
    buffer_hdr[4] = '\0';
    siz = atoi(buffer_hdr);

    // Second, read the data. The code hangs here !!
    char buffer[siz];
    if ((bytecount = recv(csock, (void *)buffer, siz, MSG_WAITALL)) == -1)
        ::std::cerr << "Error receiving data " << errno << ::std::endl;

    // Finally, process the protobuf message
    google::protobuf::io::ArrayInputStream ais(buffer, siz);
    google::protobuf::io::CodedInputStream coded_input(&ais);
    google::protobuf::io::CodedInputStream::Limit msgLimit = coded_input.PushLimit(siz);
    message.ParseFromCodedStream(&coded_input);
    coded_input.PopLimit(msgLimit);

    if (message.has_endconnection())
        return !message.endconnection();
    return false;
}
As can be seen in the code, the protocol is such that the client will first send the number of bytes in the message in a 4-character array, followed by the protobuf message itself. The first recv call works well and does not hang. Then, the code hangs on the second recv call, which should be recovering the body of the message.
Now, for the interesting part. When run in Release mode, the code hangs indefinitely and I have to kill either the client or the server. It does not matter whether I run it from my IDE (qtcreator), or from the CLI after a clean build (using cmake/g++).
When I run the code in Debug mode, it also hangs at the same recv() call. Then, if I place or remove a breakpoint ANYWHERE in the code (before or after that line), it starts again and works perfectly: the server receives the data and reads the correct message.endconnection() value before returning from readBody(). The breakpoint I place to trigger this behaviour is not necessarily hit. Since readBody() is called in a loop (my C++ server waits for requests from the python client), the same behaviour happens again at the next iteration, and I have to place or remove a breakpoint anywhere in the code (which is not necessarily hit) in order to get past that recv() call. The loop looks like this:
bool connection = true;

// server waiting for client connection
if (!waitForConnection(connectionID))
    std::cerr << "Error accepting connection" << ::std::endl;

// main loop
while (connection)
{
    if ((bytecount = recv(connectionID, buffer, 4, MSG_PEEK)) == -1)
    {
        ::std::cerr << "Error receiving data " << ::std::endl;
    }
    else if (bytecount == 0)
        break;

    try
    {
        if (readBody(connectionID))
        {
            sendResponse(connectionID);
        }
        // if client is requesting disconnection, break the while(true)
        else
        {
            std::cout << "Disconnection requested by client. Exiting ..." << std::endl;
            connection = false;
        }
    }
    catch (...)
    {
        std::cerr << "Error receiving message from client" << std::endl;
    }
}
Finally, as you can see, when the program returns from readBody(), it sends back another message to the client, which processes it and prints it to standard output (the python code works; it is not shown because the question is already long enough). From this last behaviour I conclude that the protocol and the client code are OK. I tried putting sleep instructions at many points to see whether it was a timing problem, but it did not change anything.
I searched all over Google and SO for a similar problem, but did not find anything. Help would be much appreciated !
The solution is to not use any flags. Call recv with 0 for the flags or just use read instead of recv.
You are asking the socket for data that is not there. The recv expects 10 bytes, but the client only sent 6. MSG_WAITALL states clearly that the call should block until 10 bytes are available in the stream.
If you don't use any flags, the call will succeed with a bytecount of 6, which has exactly the same effect as MSG_DONTWAIT, without the potential side effects of non-blocking calls.
I did the test on the github project, it works.
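If, on the other hand, you do want the whole length-prefixed body before parsing, the usual pattern on a blocking stream socket is to loop over short reads until the expected count has arrived. A minimal sketch (csock as in the question; the helper name is mine):

#include <sys/socket.h>
#include <sys/types.h>
#include <cstddef>

// Sketch only: read exactly `len` bytes into `buf`, looping over short reads.
// Returns true on success, false on error or if the peer closes early.
static bool readExact(int csock, char *buf, size_t len)
{
    size_t got = 0;
    while (got < len)
    {
        ssize_t n = recv(csock, buf + got, len - got, 0);
        if (n <= 0)          // 0 = peer closed, < 0 = error
            return false;
        got += (size_t)n;
    }
    return true;
}

Looping like this makes the MSG_WAITALL flag unnecessary and copes with the body arriving in several TCP segments.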
The solution is to replace MSG_WAITALL with MSG_DONTWAIT in the recv() calls. It now works fine. To summarize, this makes the recv() calls non-blocking, which makes the whole code work.
However, this still raises many questions, the first of which being: why did it work with this weird breakpoint trick?
If the call was blocking in the first place, one could assume that it is because there is no data on the socket. Let's consider both situations here:
There is no data on the socket, which is why the blocking recv() call was not returning. Changing it to a non-blocking recv() call would then, in the same situation, trigger an error. If not, the protobuf deserialization would afterwards fail when trying to deserialize from an empty buffer. But it does not ...
There is data on the socket. Then why on earth would it block in the first place?
Obviously there is something I don't get about sockets in C, and I'd be very happy if somebody has an explanation for this behaviour!
I am working on a UDP server, and this UDP server code works fine except for the else branch. Maybe I am wrong, but I have done lots of things using an else branch in the same way to terminate a while loop. I am not sure if it's a UDP problem or something else.
while (1) // executes three times because it only gets data three times from the client
{
    int total_bytes = 0;
    int bytes_recv = 0;
    int count = 0;
    std::vector<double> m_vector(8000);

    // Bytes are also received 3 times correctly, so why does the else branch
    // never execute after receiving 3 times?
    bytes_recv = recvfrom(Socket, (char*)m_vector.data(), 64000, 0, (SOCKADDR*)&ClientAddr, &i);
    count++;

    if (bytes_recv > 0)
    {
        total_bytes = total_bytes + bytes_recv;
        std::cout << "Server: loop counter is " << count << std::endl;
        std::cout << "Server: Received bytes are " << total_bytes << std::endl;
    }
    else
    {
        // why does this part never execute?
        std::cout << "Data Receiving has finished" << std::endl;
        break;
    }
}
WSACleanup();
system("pause");
return 0;
}
The comment in the source says that you expect only 3 datagrams from the client. Thus, count how many datagrams you have received, and once you have 3 of them, do not call recvfrom again.
You already have a variable count, but it is reset to zero on every iteration and isn't used as an exit condition.
Once you have count == 3, you know that nothing more is coming, so calling recvfrom is pointless. It will only block, since that is what you're telling it to do. Making the socket non-blocking would "help" avoid blocking, but then you would be polling, which isn't good either (and is useless, since you know there is nothing left to receive). It's best to operate correctly; a sketch of the counting loop follows below.
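Here is a rough sketch of that idea, reusing the variables from the question (Socket, ClientAddr, i) and assuming exactly three datagrams are expected; it is illustrative, not a drop-in replacement:

// Rough sketch: count received datagrams and stop after the three that are
// expected.  Socket, ClientAddr and i are the variables from the question.
int count = 0;
while (count < 3)
{
    std::vector<double> m_vector(8000);
    int bytes_recv = recvfrom(Socket, (char*)m_vector.data(), 64000, 0,
                              (SOCKADDR*)&ClientAddr, &i);
    if (bytes_recv <= 0)
        break;                          // error or shutdown: stop anyway
    ++count;
    std::cout << "Server: datagram " << count << ", "
              << bytes_recv << " bytes" << std::endl;
}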
You could also have the client send an "end of message" datagram, but of course you would have to add a timeout and a strategy for packet loss, or the server could block forever. Not only because of malicious clients, but also simply because the receive buffer was full and a packet was dropped (which is a normal thing to happen!).
Alternatively, since there is a call to WSACleanup in your code, you're using Winsock, which means you could use an overlapped WSARecvFrom instead of recvfrom. Fire off one receive, and from its completion handler fire off another two, also with a callback function. After firing off the request, forget about it and let the callback handle the rest; you can now deal with another client (the thread must be in an alertable wait for the callbacks to run ... alternatively, block on an IOCP or WaitForMultipleObjectsEx or whatever).
If no second or third packet comes in after so and so long, either send a "please resend" message or consider the client dead, close the socket and move on.
recvfrom is by default a blocking call and will only return once a packet has been read. Because of this, when you stop sending packets it just blocks in recvfrom, so the 0-byte case never happens.
You could change the flags to recvfrom to change this behaviour, but it's likely not what you want, because then if there's any delay between the packets being sent you'll get an immediate "nothing to read" result and exit.
I suppose you could see how long you've gone without receiving any packets and then shut down, so in the else case you could use a timer and a running total before exiting.
What are you trying to accomplish?
I have not checked (bad me, I know, but time's short), but if recvfrom follows typical behavior, it guarantees you that:
a return value < 0 means an error
a return value == 0 means that everything was OK but the channel cannot receive anything more
a return value > 0 means something was received
In TCP you get 'received bytes' == 0 only when the connection is closed.
In UDP there's no such thing as a 'connection'. The channel is always ready to receive, until the socket is closed.
Hence, it probably simply waits until something arrives. It cannot detect that there is no one left sending to it. That's a UDP specific.
If you want to catch the case where nothing arrives for a long time, try setting a read timeout; a sketch follows below.
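Since the WSACleanup call suggests Winsock, a receive timeout there is a per-socket option taking milliseconds; a minimal sketch, reusing the question's Socket and with the 5-second value purely illustrative:

#include <iostream>
#include <winsock2.h>

// Sketch: make recvfrom give up after ~5 seconds of silence.
// On Windows the option value is a DWORD in milliseconds; on POSIX systems
// it would be a struct timeval instead.
DWORD timeout_ms = 5000;
if (setsockopt(Socket, SOL_SOCKET, SO_RCVTIMEO,
               (const char*)&timeout_ms, sizeof(timeout_ms)) == SOCKET_ERROR)
{
    std::cerr << "setsockopt failed: " << WSAGetLastError() << std::endl;
}
// After this, a timed-out recvfrom returns SOCKET_ERROR and
// WSAGetLastError() reports WSAETIMEDOUT, so the loop can exit cleanly.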
I read that using a socket concurrently from different threads should be safe, but my program has some weird behaviour and I don't know what's wrong.
I have concurrent threads communicating with a client socket
one doing send to a socket
one doing select and then recv from the same socket
While I'm still sending, the client has already received the data and closed the socket.
At the same time, I'm doing a select and recv on that socket, which returns 0 (since it is closed), so I close this socket. However, the send has not returned yet... and since I call close on this socket, the send call fails with EBADF.
I know the client has received the data correctly since I output it after I close the socket and it is right. However, on my end, my send call still returns an error (EBADF), so I want to fix it so it doesn't fail.
This doesn't always happen. It happens maybe 40% of the time. I don't use sleep anywhere. Am I supposed to have pauses between sends or recvs or anything?
Here's some code:
Sending:
while (true)
{
    // keep sending until send returns 0
    n = send(_sfd, bytesPtr, sentSize, 0);
    if (n == 0)
    {
        break;
    }
    else if (n < 0)
    {
        cerr << "ERROR: send returned an error " << errno << endl; // this case is triggered
        return n;
    }
    sentSize -= n;
    bytesPtr += n;
}
Receiving:
while (true)
{
    memset(bufferPointer, 0, sizeLeft);
    n = recv(_sfd, bufferPointer, sizeLeft, 0);
    if (debug) cerr << "Receiving..." << sizeLeft << endl;
    if (n == 0)
    {
        cerr << "Connection closed" << endl; // this case is triggered
        return n;
    }
    else if (n < 0)
    {
        cerr << "ERROR reading from socket" << endl;
        return n;
    }
    bufferPointer += n;
    sizeLeft -= n;
    if (sizeLeft <= 0) break;
}
On the client, I use the same receive code, then I call close() on the socket.
Then on my side, I get 0 from the receive call and also call close() on the socket
Then my send fails. It still hasn't finished?! But my client already got the data!
I must admit I'm surprised you see this problem as often as you do, but it's always a possibility when you're dealing with threads. When you call send() you'll end up going into the kernel to append the data to the socket buffer in there, and it's therefore quite likely that there'll be a context switch, maybe to another process in the system. Meanwhile the kernel has probably buffered and transmitted the packet quite quickly. I'm guessing you're testing on a local network, so the other end receives the data and closes the connection and sends the appropriate FIN back to your end very quickly. This could all happen while the sending machine is still running other threads or processes because the latency on a local ethernet network is so low.
Now the FIN arrives - your receive thread hasn't done a lot lately since it's been waiting for input. Many scheduling systems will therefore raise its priority quite a bit and there's a good chance it'll be run next (you don't specify which OS you're using but this is likely to happen on at least Linux, for example). This thread closes the socket due to its zero read. At some point shortly after this the sending thread will be re-awoken, but presumably the kernel notices that the socket is closed before it returns from the blocked send() and returns EBADF.
Now this is just speculation as to the exact cause - among other things it heavily depends on your platform. But you can see how this could happen.
The easiest solution is probably to use poll() in the sending thread as well, but wait for the socket to become write-ready instead of read-ready. Obviously you also need to wait until there's any buffered data to send - how you do that depends on which thread buffers the data. The poll() call will let you detect when the connection has been closed by flagging it with POLLHUP, which you can detect before you try your send().
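For illustration, the write-readiness check might look roughly like this (a sketch, assuming _sfd is the connected socket from the question and that data is already queued for sending):

#include <poll.h>

// Sketch: before each send, wait until the socket is writable, but notice
// a hang-up or error on the connection first.
struct pollfd pfd;
pfd.fd = _sfd;
pfd.events = POLLOUT;
pfd.revents = 0;

int ready = poll(&pfd, 1, 1000);               // 1 s timeout, purely illustrative
if (ready > 0 && (pfd.revents & (POLLHUP | POLLERR)))
{
    // peer closed or the connection errored out: stop sending and let the
    // receiving side (or a "closing" flag) decide when to close the fd
}
else if (ready > 0 && (pfd.revents & POLLOUT))
{
    // safe to call send() for the next chunk without blocking
}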
As a general rule you shouldn't close a socket until you're certain that the send buffer has been fully flushed - you can only be sure of this once the send() call has returned and indicates that all the remaining data has gone out. I've handled this in the past by checking the send buffer when I get a zero read and if it's not empty I set a "closing" flag. In your case the sending thread would then use this as a hint to do the close once everything is flushed. This matters because if the remote end does a half-close with shutdown() then you'll get a zero read even if it might still be reading. You might not care about half closes, however, in which case your strategy above is OK.
Finally, I personally would avoid the hassle of sending and receiving threads and just have a single thread which does both - that's more or less the point of select() and poll(), to allow a single thread of execution to deal with one or more filehandles without worrying about performing an operation which blocks and starves the other connections.
Found the problem. It's in my loop. Notice that it's an infinite loop: when I have nothing left to send, sentSize is 0, but I still loop and try to send more. By that time the other thread has already closed the socket, so my send call for 0 bytes returns with an error.
I fixed it by changing the loop to stop when sentSize reaches 0, and that solved the problem!
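For completeness, the corrected loop presumably looks something like this (my reconstruction, reusing sentSize, bytesPtr and _sfd from the snippet above, not the poster's exact code):

// Sketch of the fix: loop while there is still data to send, rather than
// until send() returns 0.  Variables mirror the snippet above.
while (sentSize > 0)
{
    n = send(_sfd, bytesPtr, sentSize, 0);
    if (n < 0)
    {
        cerr << "ERROR: send returned an error " << errno << endl;
        return n;
    }
    sentSize -= n;
    bytesPtr += n;
}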
The server (192.168.1.5:3001), is running Linux 3.2, and is designed to only accept one connection at a time.
The client (192.168.1.18), is running Windows 7. The connection is a wireless connection. Both programs are written in C++.
It works great in 9 out of 10 connect/disconnect cycles. On roughly every tenth connection (it happens randomly), the server accepts the connection, and then when it later actually writes to it (typically 30+ seconds later), according to Wireshark (see screenshot) it looks like it's writing to an old stale connection, on a port number that the client FINed a while ago but that the server hasn't FINed yet. So the client and server connections seem to get out of sync: the client makes new connections, and the server tries writing to the previous one. Every subsequent connection attempt fails once it gets into this broken state. The broken state can be triggered by going beyond the maximum wireless range for half a minute (as before, this works in 9 out of 10 cases, but it sometimes causes the broken state).
Wireshark screenshot behind link
The red arrows in the screenshot indicate when the server started sending data (Len != 0), which is the point when the client rejects it and sends a RST to the server. The coloured dots down the right edge indicate a single colour for each of the client port numbers used. Note how one or two dots appear well after the rest of the dots of that colour were (and note the time column).
The problem looks like it's on the server's end, since if you kill the server process and restart, it resolves itself (until next time it occurs).
The code is hopefully not too out-of-the-ordinary. I set the queue size parameter in listen() to 0, which I think means it only allows one current connection and no pending connections (I tried 1 instead, but the problem was still there). None of the errors appear as trace prints where "// error" is shown in the code.
// Server code
mySocket = ::socket(AF_INET, SOCK_STREAM, 0);
if (mySocket == -1)
{
    // error
}

// Set non-blocking
const int saveFlags = ::fcntl(mySocket, F_GETFL, 0);
::fcntl(mySocket, F_SETFL, saveFlags | O_NONBLOCK);

// Bind to port
// Union to work around pointer aliasing issues.
union SocketAddress
{
    sockaddr myBase;
    sockaddr_in myIn4;
};
SocketAddress address;
::memset(reinterpret_cast<Tbyte*>(&address), 0, sizeof(address));
address.myIn4.sin_family = AF_INET;
address.myIn4.sin_port = htons(Port);
address.myIn4.sin_addr.s_addr = INADDR_ANY;
if (::bind(mySocket, &address.myBase, sizeof(address)) != 0)
{
    // error
}
if (::listen(mySocket, 0) != 0)
{
    // error
}

// main loop
{
    ...
    // Wait for a connection.
    fd_set readSet;
    FD_ZERO(&readSet);
    FD_SET(mySocket, &readSet);
    const int aResult = ::select(getdtablesize(), &readSet, NULL, NULL, NULL);
    if (aResult != 1)
    {
        continue;
    }

    // A connection is definitely waiting.
    const int fileDescriptor = ::accept(mySocket, NULL, NULL);
    if (fileDescriptor == -1)
    {
        // error
    }

    // Set non-blocking
    const int saveFlags = ::fcntl(fileDescriptor, F_GETFL, 0);
    ::fcntl(fileDescriptor, F_SETFL, saveFlags | O_NONBLOCK);
    ...
    // Do other things for 30+ seconds.
    ...
    const int bytesWritten = ::write(fileDescriptor, buffer, bufferSize);
    if (bytesWritten < 0)
    {
        // THIS FAILS!! (but succeeds the first ~9 times)
    }

    // Finished with the connection.
    ::shutdown(fileDescriptor, SHUT_RDWR);
    while (::close(fileDescriptor) == -1)
    {
        switch (errno)
        {
        case EINTR:
            // Break from the switch statement. Continue in the loop.
            break;
        case EIO:
        case EBADF:
        default:
            // error
            return;
        }
    }
}
So somewhere between the accept() call (assuming that is exactly the point when the SYN packet is sent), and the write() call, the client's port gets changed to the previously-used client port.
So the question is: how can it be that the server accepts a connection (and thus opens a file descriptor), and then sends data through a previous (now stale and dead) connection/file descriptor? Does it need some sort of option in a system call that's missing?
I'm submitting an answer to summarize what we've figured out in the comments, even though it's not a finished answer yet. It does cover the important points, I think.
You have a server that handles clients one at a time. It accepts a connection, prepares some data for the client, writes the data, and closes the connection. The trouble is that the preparing-the-data step sometimes takes longer than the client is willing to wait. While the server is busy preparing the data, the client gives up.
On the client side, when the socket is closed, a FIN is sent notifying the server that the client has no more data to send. The client's socket now goes into FIN_WAIT1 state.
The server receives the FIN and replies with an ACK. (ACKs are done by the kernel without any help from the userspace process.) The server socket goes into the CLOSE_WAIT state. The socket is now readable, but the server process doesn't notice because it's busy with its data-preparation phase.
The client receives the ACK of the FIN and goes into FIN_WAIT2 state. I don't know what's happening in userspace on the client since you haven't shown the client code, but I don't think it matters.
The server process is still preparing data for a client that has hung up. It's oblivious to everything else. Meanwhile, another client connects. The kernel completes the handshake. This new client will not be getting any attention from the server process for a while, but at the kernel level the second connection is now ESTABLISHED on both ends.
Eventually, the server's data preparation (for the first client) is complete. It attempts to write(). The server's kernel doesn't know that the first client is no longer willing to receive data because TCP doesn't communicate that information! So the write succeeds and the data is sent out (packet 10711 in your wireshark listing).
The client gets this packet and its kernel replies with RST because it knows what the server didn't know: the client socket has already been shut down for both reading and writing, probably closed, and maybe forgotten already.
In the wireshark trace it appears that the server only wanted to send 15 bytes of data to the client, so it probably completed the write() successfully. But the RST arrived quickly, before the server got a chance to do its shutdown() and close() which would have sent a FIN. Once the RST is received, the server won't send any more packets on that socket. The shutdown() and close() are now executed, but don't have any on-the-wire effect.
Now the server is finally ready to accept() the next client. It begins another slow preparation step, and it's falling further behind schedule because the second client has been waiting a while already. The problem will keep getting worse until the rate of client connections slows down to something the server can handle.
The fix will have to be for you to make the server process notice when a client hangs up during the preparation step, and immediately close the socket and move on to the next client. How you will do it depends on what the data preparation code actually looks like. If it's just a big CPU-bound loop, you have to find some place to insert a periodic check of the socket. Or create a child process to do the data preparation and writing, while the parent process just watches the socket - and if the client hangs up before the child exits, kill the child process. Other solutions are possible (like F_SETOWN to have a signal sent to the process when something happens on the socket).
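One low-cost way to implement the periodic check is to poll the connected descriptor with a zero timeout between chunks of preparation work; a zero-byte peek then means the client has hung up. A sketch (fileDescriptor as in the question's code; the helper name is mine):

#include <cerrno>
#include <poll.h>
#include <sys/socket.h>

// Sketch (helper name is mine): returns true if the peer has closed or
// reset the connection.  Call it now and then from inside the long
// data-preparation step.
static bool clientHungUp(int fileDescriptor)
{
    struct pollfd pfd = { fileDescriptor, POLLIN, 0 };
    if (::poll(&pfd, 1, 0) <= 0)               // nothing pending (or poll error)
        return false;
    if (pfd.revents & (POLLHUP | POLLERR))
        return true;
    char byte;
    // Peek without consuming application data: 0 means an orderly FIN,
    // -1 with ECONNRESET means the client sent an RST.
    ssize_t n = ::recv(fileDescriptor, &byte, 1, MSG_PEEK | MSG_DONTWAIT);
    return n == 0 || (n < 0 && errno == ECONNRESET);
}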
Aha, success! It turns out the server was receiving the client's SYN, and the server's kernel was automatically completing the connection (replying with its SYN-ACK) before accept() had been called. So there definitely is a listening queue, and having two connections waiting on that queue was half of the cause.
The other half of the cause was to do with information which was omitted from the question (I thought it was irrelevant because of the false assumption above). There was a primary connection port (call it A), and the secondary, troublesome connection port which this question is all about (call it B). The proper connection order is A establishes a connection (A1), then B attempts to establish a connection (which would become B1)... within a time frame of 200ms (I already doubled the timeout from 100ms which was written ages ago, so I thought I was being generous!). If it doesn't get a B connection within 200ms, then it drops A1. So then B1 establishes a connection with the server's kernel, waiting to be accepted. It only gets accepted on the next connection cycle when A2 establishes a connection, and the client also sends a B2 connection. The server accepts the A2 connection, then gets the first connection on the B queue, which is B1 (hasn't been accepted yet - the queue looked like B1, B2). That is why the server didn't send a FIN for B1 when the client had disconnected B1. So the two connections the server has are A2 and B1, which are obviously out of sync. It tries writing to B1, which is a dead connection, so it drops A2 and B1. Then the next pair are A3 and B2, which are also invalid pairs. They never recover from being out of sync until the server process is killed and the TCP connections are all reset.
So the solution was simply to change the timeout for waiting on the B socket from 200 ms to 5 s. Such a simple fix that had me scratching my head for days (and it was fixed within 24 hours of putting it on Stack Overflow)! I also made it recover from stray B connections by adding socket B to the main select() call, and then accept()ing and close()ing it immediately (which only happens if the B connection takes longer than 5 s to establish). Thanks @AlanCurry for the suggestion of adding it to the select() and for the puzzle piece about the listen() backlog parameter being a hint.