I have inherited two applications, one Test Harness (a client) running on a Windows 7 PC and one server application running on a Windows 10 PC. I am attempting to communicate between the two using TCP/IP sockets. The Client sends requests (for data in the form of XML) to the Server and the Server then sends the requested data (also XML) back to the client.
The set up is as shown below:
Client Server
-------------------- --------------------
| | Sends Requests | |
| Client Socket | -----------------> | Server Socket |
| | <----------------- | |
| | Sends Data | |
-------------------- --------------------
This process always works on an initial connection (i.e. freshly launched client and server applications). The client has the ability to disconnect from the server, which triggers cleanup of the sockets. Upon reconnection, I almost always (not every time, but most of the time) receive the following error:
"Receive() - The socket is marked as nonblocking and the receive operation would block"
This error is displayed at the client and the socket in question is an asynchronous, non-blocking socket.
The line which causes this SOCKET_ERROR is:
numBytesReceived = theSocket->Receive(theReceiveBuffer, 10000);
where:
- numBytesReceived is an integer (int)
- theSocket is a pointer to a class called CClientSocket, which is a specialisation of CAsyncSocket (part of the MFC C++ library). This defines the socket object which is embedded within the client. It is an asynchronous, non-blocking socket.
- Receive() is a virtual function within the CAsyncSocket object
- theReceiveBuffer is a char array (10000 elements)
In executing the line described above, SOCKET_ERROR is returned from the function, and calling theSocket->GetLastError() returns WSAEWOULDBLOCK.
SocketTools highlights that
When a non-blocking (asynchronous) socket attempts to perform an operation that cannot be performed immediately, error 10035 will be returned. This error is not fatal, and should be considered advisory by the application. This error code corresponds to the Windows Sockets error WSAEWOULDBLOCK.
When reading data from a non-blocking socket, this error will be returned if there is no more data available to be read at that time. In this case, the application should wait for the OnRead event to fire which indicates that more data has become available to read. The IsReadable property can be used to determine if there is data that can be read from the socket.
When writing data to a non-blocking socket, this error will be returned if the local socket buffers are filled while waiting for the remote host to read some of the data. When buffer space becomes available, the OnWrite event will fire which indicates that more data can be written. The IsWritable property can be used to determine if data can be written to the socket.
It is important to note that the application will not know how much data can be sent in a single write operation, so it is possible that if the client attempts to send too much data too quickly, this error may be returned multiple times. If this error occurs frequently when sending data it may indicate high network latency or the inability for the remote host to read the data fast enough.
I am consistently getting this error and failing to receive anything on the socket.
Using Wireshark, the following communications occur, with the source, destination and TCP bit flags presented here:
Event: Connect Test Harness to Server via TCP/IP
Client --> Server: SYN
Server --> Client: SYN, ACK
Client --> Server: ACK
This appears to be correct and represents the Three-Way Handshake of connecting.
SocketSniff confirms that a Socket is opened on the client side. It was not possible to get SocketSniff to work with the Windows 10 Server application.
Event: Send a Request for Data from the Test Harness
Client --> Server: PSH, ACK
Server --> Client: PSH, ACK
Client --> Server: ACK
Both the request data and the returned data are confirmed to be exchanged successfully.
Event: Disconnect Test Harness from Server
Client --> Server: FIN, ACK
Server --> Client: ACK
Server --> Client: FIN, ACK
Client --> Server: ACK
This appears to be correct and represents the Four-Way handshake of connection closure.
SocketSniff confirms that a Socket is closed on the client side. It was not possible to get SocketSniff to work with the Windows 10 Server application.
Event: Reconnect Test Harness to Server via TCP/IP
Client --> Server: SYN
Server --> Client: SYN, ACK
Client --> Server: ACK
This appears to be correct and represents the Three-Way Handshake of connecting.
SocketSniff confirms that a new Socket is opened on the client side. It was not possible to get SocketSniff to work with the Windows 10 Server application.
Event: Send a Request for Data from the Test Harness
Client --> Server: PSH, ACK
Server --> Client: ACK
We see no data being pushed (PSH) back to the client, yet we do see an acknowledgement.
Has anyone got any ideas what may be going on here? I understand it would be difficult for you to diagnose without seeing the source code, however I was hoping others may have had experience with this error and could point me down the specific route to investigate.
More Info:
The Server initialises a listening thread and binds to 0.0.0.0:49720. The 'WSAStartup()', 'bind()' and 'listen()' functions all return '0', indicating success. This thread persists throughout the lifetime of the server application.
The Server initialises two threads, a read and a write thread. The read thread is responsible for reading request data off its socket and is initialised as follows with a class called Connection:
HANDLE theConnectionReadThread
= CreateThread(NULL, // Security Attributes
0, // Default Stacksize
Connection::connectionReadThreadHandler, // Callback
(LPVOID)this, // Parameter to pass to thread
CREATE_SUSPENDED, // Don't start yet
NULL); // Don't Save Thread ID
The write thread is initialised in a similar way.
In each case, the CreateThread() function returns a suitable HANDLE, e.g.
theConnectionReadThread = 00000570
theConnectionWriteThread = 00000574
The threads actually get started within the following function:
void Connection::startThreads()
{
ResumeThread(theConnectionReadThread);
ResumeThread(theConnectionWriteThread);
}
And this function is called from within another class called ConnectionManager which manages all the possible connections to the server. In this case, I am only concerned with a single connection, for simplicity.
Adding text output to the server application reveals that I can successfully connect/disconnect the client and server several times before the faulty behaviour is observed. For example, within the connectionReadThreadHandler() and connectionWriteThreadHandler() functions, I am outputting text to a log file as soon as they execute.
When correct behaviour is observed, the following lines are output to the log file:
Connection::ResumeThread(theConnectionReadThread) returned 1
Connection::ResumeThread(theConnectionWriteThread) returned 1
ConnectionReadThreadHandler() Beginning
ConnectionWriteThreadHandler() Beginning
When faulty behaviour is observed, the following lines are output to the log file:
Connection::ResumeThread(theConnectionReadThread) returned 1
Connection::ResumeThread(theConnectionWriteThread) returned 1
The callback functions do not appear to be invoked.
It is at this point that the error is displayed on the client indicating that:
"Receive() - The socket is marked as nonblocking and the receive operation would block"
On the Client side, I've got a class called CClientDoc, which contains the client side socket code. It first initialises theSocket which is the socket object which is embedded within a client:
private:
CClientSocket* theSocket = new CClientSocket;
When a connection is initialised between client and server, this class calls a function called CreateSocket() part of which is included below, along with ancillary functions which it calls:
void CClientDoc::CreateSocket()
{
AfxSocketInit();
int lastError;
theSocket->Init(this);
if (theSocket->Create()) // Calls CAsyncSocket::Create() (part of afxsock.h)
{
theErrorMessage = "Socket Creation Successful"; // this is a CString
theSocket->SetSocketStatus(WAITING);
}
else
{
// We don't fall in here
}
}
void CClientSocket::Init(CClientDoc* pDoc)
{
pClient = pDoc; // pClient is a pointer to a CClientDoc
}
void CClientSocket::SetSocketStatus(SOCKET_STATUS sock_stat)
{
theSocketStatus = sock_stat; // theSocketStatus is a private member of CClientSocket of type SOCKET_STATUS
}
Immediately after CreateSocket(), SetupSocket() is called which is also provided here:
void CClientDoc::SetupSocket()
{
theSocket->AsyncSelect(); // Function within afxsock.h
}
Upon disconnection of the client from the server,
void CClientDoc::OnClienDisconnect()
{
theSocket->ShutDown(2); // Inline function within afxsock.inl
delete theSocket;
theSocket = new CClientSocket;
CreateSocket();
SetupSocket();
}
So we delete the current socket and then create a new one, ready for use, which appears to work as expected.
The error is being written on the Client within the DoReceive() function. This function calls the socket to attempt to read in a message.
void CClientDoc::DoReceive()
{
int lastError;
switch (numBytesReceived = theSocket->Receive(theReceiveBuffer, 10000))
{
case 0:
// We don't fall in here
break;
case SOCKET_ERROR: // We come in here when the faulty behaviour occurs
if ((lastError = theSocket->GetLastError()) == WSAEWOULDBLOCK)
{
theErrorMessage = "Receive() - The socket is marked as nonblocking and the receive operation would block";
}
else
{
// We don't fall in here
}
break;
default:
// When connection works, we come in here
break;
}
}
Hopefully the addition of some of the code proves insightful. I should be able to add a bit more if needed.
Thanks
The WSAEWOULDBLOCK error DOES NOT mean the socket is marked as blocking. It means the socket is marked as non-blocking and there is NO DATA TO READ at that time.
WSAEWOULDBLOCK means the socket WOULD HAVE blocked the calling thread waiting for data if the socket HAD BEEN marked as blocking.
To know when a non-blocking socket has data waiting to be read, use Winsock's select() function, or the CClientSocket::AsyncSelect() method to request FD_READ notifications, or other equivalent. Don't try to read until there is something to read.
In your analysis, you see the client sending data to the server, but the server is not sending data to the client. So you clearly have a logic bug in your code somewhere, you need to find and fix it. Either the client is not terminating its request correctly, or the server is not receiving/processing/replying to it correctly. But since you did not show your actual code, we can't tell you what is actually wrong with it.
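As a rough illustration of that event-driven approach (not the asker's code; it assumes CClientSocket derives from CAsyncSocket as described, and mirrors the question's 10000-byte buffer), the read moves into an OnReceive() override so that Receive() is only called once an FD_READ notification has arrived:

void CClientSocket::OnReceive(int nErrorCode)
{
    if (nErrorCode == 0)
    {
        char buffer[10000];
        const int n = Receive(buffer, sizeof(buffer));
        if (n == SOCKET_ERROR)
        {
            if (GetLastError() != WSAEWOULDBLOCK)
            {
                // A genuine error: report/close here.
            }
            // WSAEWOULDBLOCK here simply means "nothing more to read right now".
        }
        else if (n > 0)
        {
            // Hand the n received bytes to the document/parser here.
        }
    }
    CAsyncSocket::OnReceive(nErrorCode);
}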
Related
I'm trying to implement OpenSSL into my application which uses raw C sockets, and the only issue I'm having is the SSL_accept / SSL_connect part of the code, which starts the key exchange phase but does not seem to complete it on the server side.
I've had a look at countless websites and Q&A's here on StackOverflow to get myself through the OpenSSL API since this is basically the first time I'm attempting to implement SSL into an application but the only thing I could not find yet was how to properly manage failed handshakes.
Basically, running process A which serves as a server will listen for incoming connections. Once I run process B, which acts as a client, it will successfully connect to process A but SSL_accept (on the server) fails with error code -2 SSL_ERROR_WANT_READ.
According to openssl handshake failed, the problem is "easily" worked around by calling SSL_accept within a loop until it finally returns 1 (It successfully connects and completes the handshake). However, I do not believe that this is the proper way of doing things as it looks like a dirty trick. The reason for why I believe it is a dirty trick is because I tried to run a small application I found on https://www.cs.utah.edu/~swalton/listings/articles/ (ssl_client and ssl_server) and magically, everything works just fine. There are no multiple calls to SSL_accept and the handshake is completed right away.
Here's some code where I'm accepting the SSL connection on the server:
if (SSL_accept(conn.ssl) == -1)
{
fprintf(stderr, "Connection failed.\n");
fprintf(stderr, "SSL State: %s [%d]\n", SSL_state_string_long(conn.ssl), SSL_state(conn.ssl));
ERR_print_errors_fp(stderr);
PrintSSLError(conn.ssl, -1, "SSL_accept");
return -1;
}
else
{
fprintf(stderr, "Connection accepted.\n");
fprintf(stderr, "Server -> Client handshake completed");
}
This is the output of PrintSSLError:
SSL State: SSLv3 read client hello B [8465]
[DEBUG] SSL_accept : Failed with return -1
[DEBUG] SSL_get_error() returned : 2
[DEBUG] Error string : error:00000002:lib(0):func(0):system lib
[DEBUG] ERR_get_error() returned : 0
[DEBUG] errno returned : Resource temporarily unavailable
And here's the client side snippet which connects to the server:
if (SSL_connect(conn.ssl) == -1)
{
fprintf(stderr, "Connection failed.\n");
ERR_print_errors_fp(stderr);
PrintSSLError(conn.ssl, -1, "SSL_connect");
return -1;
}
else
{
fprintf(stderr, "Connection established.\n");
fprintf(stderr, "Client -> Server handshake completed");
PrintSSLInfo(conn.ssl);
}
The connection is successfully established client-side (SSL_connect does not return -1) and PrintSSLInfo outputs:
Connection established.
Cipher: DHE-RSA-AES256-GCM-SHA384
SSL State: SSL negotiation finished successfully [3]
And this is how I wrap the C Socket into SSL:
SSLConnection conn;
conn.fd = fd;
conn.ctx = sslContext;
conn.ssl = SSL_new(conn.ctx);
SSL_set_fd(conn.ssl, conn.fd);
The code snippet here resides within a function that takes a file-descriptor of the accepted incoming connection on the raw socket and the SSL Context to use.
To initialize the SSL Contexts I use TLSv1_2_server_method() and TLSv1_2_client_method(). Yes, I know that this will prevent clients from connecting if they do not support TLS 1.2 but this is exactly what I want. Whoever connects to my application will have to do it through my client anyway.
Either way, what am I doing wrong? I'd like to avoid loops in the authentication phase to avoid possible hang ups/slow downs of the application due to unexpected infinite loops since OpenSSL does not specify how many attempts it might take.
The workaround that worked, but that I'd like to avoid, is this:
while ((accept = SSL_accept(conn.ssl)) != 1)
And inside the while loop I check for the return code stored inside accept.
Things I've tried to workaround the SSL_ERROR_WANT_READ error:
Added usleep(50) inside the while loop (still takes several cycles to complete)
Added SSL_do_handshake(conn.ssl) after SSL_connect and SSL_accept (didn't change anything on the end-result)
Had a look at the code shown on roxlu.com (search on Google for "Using OpenSSL with memory BIOs - Roxlu") to guide me through the handshaking phase but since I'm new to this, and I don't directly use BIOs in my code but simply wrap my native C sockets into SSL, it was kind of confusing. I'm also unable to re-write the Networking part of the application as it'd would be too much work for me right now.
I've done some tests with the openssl command-line as well to troubleshoot the issue but it gives no error. The handshake appears to be successful as no errors such as:
24069864:error:1409E0E5:SSL routines:ssl3_write_bytes:ssl handshake failure:s3_pkt.c:656
appear. Here's the whole output of the command
openssl s_client -connect IP:Port -tls1_2 -prexit -msg
http://pastebin.com/9u1bfuf4
Things to note:
1. I'm using the latest OpenSSL version 1.0.2h
2. Application runs on a Unix system
3. Using self-signed certificates to encrypt the network traffic
Thanks everyone who's going to help me out.
Edit:
I forgot to mention that the sockets are in non-blocking mode since the application serves multiple clients in one go. Client-side, though, they are in blocking mode.
Edit2:
Leaving this here for future reference: jmarshall.com/stuff/handling-nbio-errors-in-openssl.html
You have clarified that the socket in question is non-blocking.
Well, that's your answer. Obviously, when the socket is in non-blocking mode, the handshake cannot be completed immediately. The handshake involves an exchange of protocol packets between the client and the server, with each one having to wait to receive the response from its peer. This works fine when the socket is in its default blocking mode. The library simply read()s and write()s, which blocks and waits until the message gets successfully read or written. This obviously can't happen when the socket is in non-blocking mode. Either the read() or write() succeeds immediately, or it fails if there's nothing to read or if the socket's output buffer is full.
The manual pages for SSL_accept() and SSL_connect() explain the procedure you must implement to execute the SSL handshake when the underlying socket is in non-blocking mode. Rather than repeating the whole thing here, you should read the manual pages yourself. The capsule summary is to use SSL_get_error() to determine if the handshake actually failed, or if the library wants to read or write to/from the socket; and in that eventuality call poll() or select(), accordingly, then call SSL_accept() or SSL_connect() again.
Any other approach, like sprinkling silly sleep() calls, here and there, will result in an unreliable house of cards, that will fail randomly.
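A minimal sketch of that procedure on the accepting side, assuming the SSL* is already attached to a non-blocking file descriptor fd (illustrative only, not the poster's code):

int accept_tls_nonblocking(SSL* ssl, int fd)
{
    for (;;)
    {
        int rc = SSL_accept(ssl);
        if (rc == 1)
            return 0;                               // handshake complete

        int err = SSL_get_error(ssl, rc);
        fd_set readfds, writefds;
        FD_ZERO(&readfds);
        FD_ZERO(&writefds);

        if (err == SSL_ERROR_WANT_READ)
            FD_SET(fd, &readfds);                   // wait for the peer's next packet
        else if (err == SSL_ERROR_WANT_WRITE)
            FD_SET(fd, &writefds);                  // wait until we can flush our next packet
        else
            return -1;                              // genuine handshake failure

        // Sleep in select() (a timeout can be added here) until the socket is
        // ready, then let SSL_accept() continue where it left off.
        if (select(fd + 1, &readfds, &writefds, NULL, NULL) < 0)
            return -1;
    }
}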
After developing a sample client server application which can exchange some data, I'm trying to implement the retry mechanism into it. Currently my application is following below protocol:
Client connects to server (non-blocking mode) with a 3-second timeout and with 2 retries.
Start sending data from client with fixed length. Send has some error checking whether it is sending the complete data or not.
Receive response (timeout: 3secs) from server and verify that. If incorrect response received, re-send the data and wait for response. Repeat this for two times if failed.
For the above implementation code sections look likes something below:
connect() and select() for opening connection
select() and send() for data send
select() and recv() for data receiving
Now I'm making the retries based on the return types of the socket functions, and if send() or recv() fails I'm retrying the same methods, but not recalling connect().
I tested this by restarting the server in the middle of the data transfer, and as a result the client fails to communicate with the server and quits after several retries. I believe this is happening because there is no connect() call in the retry methods.
Any suggestions?
Example code for receiving socket data
bool CTCPCommunication::ReceiveSocketData(char* pchBuff, int iBuffLen)
{
bool bReturn = true;
//check whether the socket is ready to receive
fd_set stRead;
FD_ZERO(&stRead);
FD_SET(m_hSocket, &stRead);
int iRet = select(0, &stRead, NULL, NULL, &m_stTimeout);
//if socket is not ready this line will be hit after 3 sec timeout and go to the end
//if it is ready control will go inside the read loop and reads data until data ends or
//socket error is getting triggered continuously for more than 3 secs.
if ((iRet > 0) && (FD_ISSET(m_hSocket, &stRead)))
{
DWORD dwStartTime = GetTickCount();
DWORD dwCurrentTime = 0;
while ((iBuffLen-1) > 0)
{
int iRcvLen = recv(m_hSocket, pchBuff, iBuffLen-1, 0);
dwCurrentTime = GetTickCount();
//receive failed due to socket error
if (iRcvLen == SOCKET_ERROR)
{
if((dwCurrentTime - dwStartTime) >= SOCK_TIMEOUT_SECONDS * 1000)
{
WRITELOG("Call to socket API 'recv' failed after 3 secs continuous retries, error: %d", WSAGetLastError());
bReturn = false;
break;
}
continue; // not a timeout yet: retry recv() without adjusting the buffer pointer by -1
}
//connection closed by remote host
else if (iRcvLen == 0)
{
WRITELOG("recv() returned zero - time to do something: %d", WSAGetLastError());
break;
}
pchBuff += iRcvLen;
iBuffLen -= iRcvLen;
}
}
else
{
WRITELOG("Call to API 'select' failed inside 'ReceiveSocketData', error: %d", WSAGetLastError());
bReturn = false;
}
return bReturn;
}
Currently my application is following below protocol:
Client connects to server (non blocking mode) with 3 secs timeout and with 2 retries.
You can't retry a connection. You have to close the socket whose connect attempt failed, create a new socket, and call connect() again.
Start sending data from client with fixed length. Send has some error checking whether it is sending the complete data or not.
This isn't necessary in blocking mode: the POSIX standard guarantees that a blocking-mode send() will send all the data, or fail with an error.
Receive response (timeout: 3secs) from server and verify that. If incorrect response received, re-send the data and wait for response. Repeat this for two times if failed.
This is a bad idea. Most probably all the data will arrive, including all the retries, or none of it. You need to make sure that your transactions are idempotent if you use this technique. You also need to pay close attention to the actual timeout period. 3 seconds is not adequate in general. A starting point is double the expected service time.
For the above implementation code sections look likes something below:
connect() and select() for opening connection
select() and send() for data send
select() and recv() for data receiving
You don't need the select() in blocking mode. You can just set a read timeout with SO_RCVTIMEO.
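For illustration, a minimal Winsock sketch of that suggestion, reusing the question's m_hSocket and WRITELOG names (on Windows the option value is a DWORD in milliseconds):

// Set a 3-second receive timeout once, instead of wrapping every recv() in select().
DWORD timeoutMs = 3000;
if (setsockopt(m_hSocket, SOL_SOCKET, SO_RCVTIMEO,
               reinterpret_cast<const char*>(&timeoutMs), sizeof(timeoutMs)) == SOCKET_ERROR)
{
    WRITELOG("Call to 'setsockopt(SO_RCVTIMEO)' failed, error: %d", WSAGetLastError());
}
// A blocking recv() that waits longer than 3 seconds now fails with WSAETIMEDOUT.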
Now I'm making the retries based on the return types of the socket functions, and if send() or recv() fails I'm retrying the same methods, but not recalling connect().
I tested this by restarting the server in the middle of the data transfer, and as a result the client fails to communicate with the server and quits after several retries. I believe this is happening because there is no connect() call in the retry methods.
If that was true you would get an error that said so.
When you use the simple ZeroMQ REQ/REP pattern you depend on a fixed send()->recv() / recv()->send() sequence.
As this article describes, you get into trouble when a participant disconnects in the middle of a request, because then you can't just start over with receiving the next request from another connection; the state machine would force you to send a request to the disconnected one.
Has a more elegant way to solve this emerged since that article was written?
Is reconnecting the only way to solve this (apart from not using REQ/REP and using another pattern instead)?
As the accepted answer seemed so terribly sad to me, I did some research and found that everything we need was actually in the documentation.
The .setsockopt() call with the correct parameter can help you reset your socket's state machine without brutally destroying it and rebuilding another one on top of the previous one's dead body (yeah, I like the image).
ZMQ_REQ_CORRELATE: match replies with requests
The default behaviour of REQ sockets is to rely on the ordering of messages to match requests and responses and that is usually sufficient. When this option is set to 1, the REQ socket will prefix outgoing messages with an extra frame containing a request id. That means the full message is (request id, 0, user frames…). The REQ socket will discard all incoming messages that don't begin with these two frames.
Option value type: int
Option value unit: 0, 1
Default value: 0
Applicable socket types: ZMQ_REQ
ZMQ_REQ_RELAXED: relax strict alternation between request and reply
By default, a REQ socket does not allow initiating a new request with zmq_send(3) until the reply to the previous one has been received. When set to 1, sending another message is allowed and has the effect of disconnecting the underlying connection to the peer from which the reply was expected, triggering a reconnection attempt on transports that support it. The request-reply state machine is reset and a new request is sent to the next available peer.
If set to 1, also enable ZMQ_REQ_CORRELATE to ensure correct matching of requests and replies. Otherwise a late reply to an aborted request can be reported as the reply to the superseding request.
Option value type: int
Option value unit: 0, 1
Default value: 0
Applicable socket types: ZMQ_REQ
The complete documentation is in the zmq_setsockopt() manual page.
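A minimal C++ sketch of using these two options together (assuming libzmq >= 4.0 and the cppzmq header; the endpoint and timeout are illustrative):

#include <zmq.hpp>

zmq::context_t ctx(1);
zmq::socket_t req(ctx, ZMQ_REQ);

// Allow a new request even if the previous reply never arrived, and tag
// requests with an id so a late reply cannot be matched to the wrong request.
int on = 1;
req.setsockopt(ZMQ_REQ_RELAXED, &on, sizeof(on));
req.setsockopt(ZMQ_REQ_CORRELATE, &on, sizeof(on));

// Give up waiting for a reply after 2 seconds instead of blocking forever.
int timeout_ms = 2000;
req.setsockopt(ZMQ_RCVTIMEO, &timeout_ms, sizeof(timeout_ms));

req.connect("tcp://127.0.0.1:5555");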
The good news is that, as of ZMQ 3.0 and later (the modern era), you can set a timeout on a socket. As others have noted elsewhere, you must do this after you have created the socket, but before you connect it:
zmq_req_socket.setsockopt( zmq.RCVTIMEO, 500 ) # milliseconds
Then, when you actually try to receive the reply (after you have sent a message to the REP socket), you can catch the error that will be asserted if the timeout is exceeded:
try:
    reply = zmq_req_socket.recv()
    recv_failed = False
except zmq.Again:
    logging.warning( "Reply receive timed out." )
    recv_failed = True
However! When this happens, as observed elsewhere, your socket will be in a funny state, because it will still be expecting the response. At this point, I cannot find anything that works reliably other than just restarting the socket. Note that if you disconnect() the socket and then re-connect() it, it will still be in this bad state. Thus you need to
def reset_my_socket():
    global zmq_req_socket   # rebind the module-level socket created earlier
    zmq_req_socket.close()
    zmq_req_socket = zmq_context.socket( zmq.REQ )
    zmq_req_socket.setsockopt( zmq.RCVTIMEO, 500 ) # milliseconds
    zmq_req_socket.connect( zmq_endpoint )
You will also notice that because I close()d the socket, the receive timeout option was "lost", so it is important to set it again on the new socket.
I hope this helps. And I hope that this does not turn out to be the best answer to this question. :)
There is one solution to this, and that is adding timeouts to all calls. Since ZeroMQ by itself does not really provide simple timeout functionality, I recommend using a subclass of the ZeroMQ socket that adds a timeout parameter to all important calls.
So, instead of calling s.recv() you would call s.recv(timeout=5.0), and if a response does not come back within that 5-second window it will return None and stop blocking. I made a futile attempt at this when I ran into this problem.
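The same idea in C++ (which this question is tagged with) can be sketched with zmq_poll instead of a subclass; this is only an illustration, with names of my own choosing:

#include <zmq.hpp>

// Returns true and fills `msg` if a reply arrives within timeout_ms, false otherwise.
bool recv_with_timeout(zmq::socket_t& sock, zmq::message_t& msg, long timeout_ms)
{
    zmq::pollitem_t items[] = { { static_cast<void*>(sock), 0, ZMQ_POLLIN, 0 } };
    zmq::poll(items, 1, timeout_ms);            // wait until readable or timed out
    if (items[0].revents & ZMQ_POLLIN)
    {
        sock.recv(&msg);
        return true;
    }
    return false;                               // caller decides whether to reset the socket
}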
I'm actually looking into this at the moment, because I am retrofitting a legacy system.
I am coming across code constantly that "needs" to know about the state of the connection. However the thing is I want to move to the message passing paradigm that the library promotes.
I found the following function: zmq_socket_monitor
What it does is monitor the socket passed to it and generate events that are then passed to an "inproc" endpoint - at that point you can add handling code to actually do something.
There is also an example (actually test code) here : github
I have not got any specific code to give at the moment (maybe at the end of the week) but my intention is to respond to the connect and disconnects such that I can actually perform any resetting of logic required.
Hope this helps, and despite quoting 4.2 docs, I am using 4.0.4 which seems to have the functionality as well.
Note I notice you talk about python above, but the question is tagged C++ so that's where my answer is coming from...
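For reference, a rough C++ sketch of the monitoring idea (not the answerer's code; it assumes libzmq 4.x with cppzmq, and the inproc endpoint name is arbitrary):

#include <string>
#include <zmq.hpp>

zmq::context_t ctx(1);
zmq::socket_t req(ctx, ZMQ_REQ);

// Ask libzmq to publish connection events for `req` on an inproc endpoint.
zmq_socket_monitor(static_cast<void*>(req), "inproc://req.monitor",
                   ZMQ_EVENT_CONNECTED | ZMQ_EVENT_DISCONNECTED);

// Typically in a separate thread: a PAIR socket in the same context reads the events.
zmq::socket_t mon(ctx, ZMQ_PAIR);
mon.connect("inproc://req.monitor");
for (;;)
{
    zmq::message_t frame1, frame2;              // event info, then the endpoint address
    mon.recv(&frame1);
    mon.recv(&frame2);
    uint16_t event = *static_cast<uint16_t*>(frame1.data());
    std::string addr(static_cast<char*>(frame2.data()), frame2.size());
    if (event == ZMQ_EVENT_DISCONNECTED)
    {
        // Reset any request/reply state associated with `addr` here.
    }
}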
Update: I'm updating this answer with this excellent resource: https://blog.cloudflare.com/when-tcp-sockets-refuse-to-die/ Socket programming is complicated, so do check out the references in this post.
None of the answers here seem accurate or useful. The OP is not looking for information on BSD socket programming. He is trying to figure out how to robustly handle accept()ed client-socket failures in ZMQ on the REP socket to prevent the server from hanging or crashing.
As already noted -- this problem is complicated by the fact that ZMQ tries to pretend that the server's listen()ing socket is the same as an accept()ed socket (and there is nowhere in the documentation that describes how to set basic timeouts on such sockets).
My answer:
After doing a lot of digging through the code, the only relevant socket options passed along to accept()ed socks seem to be the keep-alive options from the parent listen()er. So the solution is to set the following options on the listening socket before calling send or recv:
void zmq_setup(zmq::context_t** context, zmq::socket_t** socket, const char* endpoint)
{
// Free old references.
if(*socket != NULL)
{
(**socket).close();
delete *socket; // the destructor releases the underlying libzmq socket
}
if(*context != NULL)
{
// Shutdown all previous server client-sockets.
delete *context; // the destructor terminates the underlying context
}
*context = new zmq::context_t(1);
*socket = new zmq::socket_t(**context, ZMQ_REP);
// Enable TCP keep alive.
int is_tcp_keep_alive = 1;
(**socket).setsockopt(ZMQ_TCP_KEEPALIVE, &is_tcp_keep_alive, sizeof(is_tcp_keep_alive));
// Only send 2 probes to check if client is still alive.
int tcp_probe_no = 2;
(**socket).setsockopt(ZMQ_TCP_KEEPALIVE_CNT, &tcp_probe_no, sizeof(tcp_probe_no));
// How long does a con need to be "idle" for in seconds.
int tcp_idle_timeout = 1;
(**socket).setsockopt(ZMQ_TCP_KEEPALIVE_IDLE, &tcp_idle_timeout, sizeof(tcp_idle_timeout));
// Time in seconds between individual keep alive probes.
int tcp_probe_interval = 1;
(**socket).setsockopt(ZMQ_TCP_KEEPALIVE_INTVL, &tcp_probe_interval, sizeof(tcp_probe_interval));
// Discard pending messages in buf on close.
int is_linger = 0;
(**socket).setsockopt(ZMQ_LINGER, &is_linger, sizeof(is_linger));
// TCP user timeout on unacknowledged send buffer
int is_user_timeout = 2;
(**socket).setsockopt(ZMQ_TCP_MAXRT, &is_user_timeout, sizeof(is_user_timeout));
// Start internal enclave event server.
printf("Host: Starting enclave event server\n");
(**socket).bind(endpoint);
}
What this does is tell the operating system to aggressively check the client socket for timeouts and reap them for cleanup when a client doesn't return a heart beat in time. The result is that the OS will send a SIGPIPE back to your program and socket errors will bubble up to send / recv - fixing a hung server. You then need to do two more things:
1. Handle SIGPIPE errors so the program doesn't crash
#include <signal.h>
#include <zmq.hpp>
// zmq_setup def here [...]
int main(int argc, char** argv)
{
// Ignore SIGPIPE signals.
signal(SIGPIPE, SIG_IGN);
// ... rest of your code after
// (Could potentially also restart the server
// sock on N SIGPIPEs if you're paranoid.)
// Start server socket.
const char* endpoint = "tcp://127.0.0.1:47357";
zmq::context_t* context;
zmq::socket_t* socket;
zmq_setup(&context, &socket, endpoint);
// Message buffers.
zmq::message_t request;
zmq::message_t reply;
// ... rest of your socket code here
}
2. Check for -1 returned by send or recv and catch ZMQ errors.
// E.g. skip broken accepted sockets (pseudo-code.)
while (1)
{
try
{
if ((*socket).recv(&request) == -1)
throw -1;
}
catch (...)
{
// Prevent any endless error loops killing CPU.
sleep(1);
// Reset ZMQ state machine by sending an empty reply.
try
{
zmq::message_t blank_reply = zmq::message_t();
(*socket).send (blank_reply);
}
catch (...)
{
// Ignore failures here; the next recv() will get a fresh client from the queue.
}
continue;
}
// ... handle the request and send the real reply here ...
}
Notice the weird code that tries to send a reply on a socket failure? In ZMQ, a REP server "socket" is an endpoint to another program making a REQ socket to that server. The result is if you go do a recv on a REP socket with a hung client, the server sock becomes stuck in a broken receive loop where it will wait forever to receive a valid reply.
To force an update on the state machine, you try to send a reply. ZMQ detects that the socket is broken, and removes it from its queue. The server socket becomes "unstuck", and the next recv call returns a new client from the queue.
To enable timeouts on an async client (in Python 3), the code would look something like this:
import asyncio
import zmq
import zmq.asyncio

ctx = zmq.asyncio.Context()   # the context used by the coroutine below

@asyncio.coroutine
def req(endpoint):
ms = 2000 # In milliseconds.
sock = ctx.socket(zmq.REQ)
sock.setsockopt(zmq.SNDTIMEO, ms)
sock.setsockopt(zmq.RCVTIMEO, ms)
sock.setsockopt(zmq.LINGER, ms) # Discard pending buffered socket messages on close().
sock.setsockopt(zmq.CONNECT_TIMEOUT, ms)
# Connect the socket.
# Connections don't strictly happen here.
# ZMQ waits until the socket is used (which is confusing, I know.)
sock.connect(endpoint)
# Send some bytes.
yield from sock.send(b"some bytes")
# Recv bytes and convert to unicode.
msg = yield from sock.recv()
msg = msg.decode(u"utf-8")
Now you have some failure scenarios when something goes wrong.
By the way -- if anyone's curious -- the default value for TCP idle timeout in Linux seems to be 7200 seconds or 2 hours. So you would be waiting a long time for a hung server to do anything!
Sources:
https://github.com/zeromq/libzmq/blob/84dc40dd90fdc59b91cb011a14c1abb79b01b726/src/tcp_listener.cpp#L82 TCP keep alive options preserved for client sock
http://www.tldp.org/HOWTO/html_single/TCP-Keepalive-HOWTO/ How does keep alive work
https://github.com/zeromq/libzmq/blob/master/builds/zos/README.md Handling sig pipe errors
https://github.com/zeromq/libzmq/issues/2586 for information on closing sockets
https://blog.cloudflare.com/when-tcp-sockets-refuse-to-die/
https://github.com/zeromq/libzmq/issues/976
Disclaimer:
I've tested this code and it seems to be working, but ZMQ does complicate testing this a fair bit because the client re-connects on failure? If anyone wants to use this solution in production, I recommend writing some basic unit tests, first.
The server code could also be improved a lot with threading or polling to be able to handle multiple clients at once. As it stands, a malicious client can temporarily take up resources from the server (3 second timeout) which isn't ideal.
I am writing a small program that can send a file from Client -> Server (Send) and Server -> Client (Request).
That part is done, but problems come up when:
1. I find the file on the server. How can I execute a cin on the client side?
2. How can I force my messages between the server and client to stay in sync? I mean I don't want the server to move to the next step or freeze on the receive.
For example (no threading applied in this problem):
Server: Waiting a Message from Client.
Client: Send the Message.
Client: Waiting a Message from Server.
Server: Send the Message.
.....etc.
On rare occasions the messages arrive in order, but 99.999% of the time they don't, and the programs on both sides freeze.
The problem with the out-of-order messages was a thread on the client side which kept reading the incoming replies without allowing the actual functions to see them.
However, about point 1.
What I am trying in this code:
1. No shared resources, so I am trying to define everything inside this function (part of which is where the problem is happening).
2. I was trying to pass this function to a thread so the server can accept more clients.
3. send & receive: nothing special about them, just normal send/recv calls.
4. Question: if SendMyMessage & ReceiveMyMessage are going to be used by different threads, should I pass the socket to them with the message?
void ExecuteRequest(void * x)
{
RequestInfo * req = (RequestInfo *) x;
// 1st Message Direction get or put
fstream myFile;
myFile.open(req->_fName);
char tmp;
string _MSG= "";
string cFile = "*";
if(req->_fDir.compare("put") == 0)
{
if(myFile.is_open())
{
SendMyMessage("*F*");
cFile = ReceiveMyMessage();
// I want here to ask the client what to do after he found the that file exist on the server,
// I want the client to to get a message "*F*", then a cin command appear to him
// then the client enter a char
// then a message sent back to the server
// then the server continue executing the code
//More code
}
Client side:
{
cout <<"Waiting Message" <<endl;
temps = ReceiveMessage();
if(temps.compare("*F*") == 0)
{
cout <<"File found on Server want to:\n(1)Replace it.\n(2)Append to it." <<endl;
cin>>temps;
SendMyMessage(temps);
}}
I am using Visual Studio 2013
Windows 7
The threading call I was using: _beginthread (I removed all threads)
Regards,
On Linux, there is a system call, select(), with which the server can wait on its open sockets. As soon as there is activity, such as the client writing something, the server wakes up on that socket and processes the data.
You are on the Windows platform, so see:
http://msdn.microsoft.com/en-us/library/windows/desktop/ms740141%28v=vs.85%29.aspx
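As a rough illustration of that suggestion (not the asker's code; clientSocket, the buffer size and the 5-second timeout are arbitrary choices), the receiving side can wait in select() before calling recv(), so neither program freezes waiting for a message that never comes:

fd_set readSet;
FD_ZERO(&readSet);
FD_SET(clientSocket, &readSet);       // clientSocket: the connected SOCKET

timeval timeout;
timeout.tv_sec = 5;                   // give up after 5 seconds
timeout.tv_usec = 0;

int ready = select(0, &readSet, NULL, NULL, &timeout);   // first argument is ignored on Windows
if (ready > 0 && FD_ISSET(clientSocket, &readSet))
{
    char buffer[512];
    int n = recv(clientSocket, buffer, sizeof(buffer), 0);
    // n > 0: a message arrived; n == 0: the peer closed; SOCKET_ERROR: check WSAGetLastError()
}
else if (ready == 0)
{
    // Timed out: the other side has not sent anything yet, so decide what to do next.
}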
The server (192.168.1.5:3001) is running Linux 3.2, and is designed to only accept one connection at a time.
The client (192.168.1.18), is running Windows 7. The connection is a wireless connection. Both programs are written in C++.
It works great for 9 in 10 connect/disconnect cycles. Roughly every tenth connection (it happens at random) has the server accept the connection, then when it later actually writes to it (typically 30+ s later), according to Wireshark (see screenshot) it looks like it's writing to an old stale connection, with a port number that the client has FINed (a while ago) but the server hasn't yet FINed. So the client and server connections seem to get out of sync - the client makes new connections, and the server tries writing to the previous one. Every subsequent connection attempt fails once it gets into this broken state. The broken state can be initiated by going beyond the maximum wireless range for half a minute (as before, in 9 of 10 cases this works, but it sometimes causes the broken state).
Wireshark screenshot behind link
The red arrows in the screenshot indicate when the server started sending data (Len != 0), which is the point when the client rejects it and sends a RST to the server. The coloured dots down the right edge indicate a single colour for each of the client port numbers used. Note how one or two dots appear well after the rest of the dots of that colour were (and note the time column).
The problem looks like it's on the server's end, since if you kill the server process and restart, it resolves itself (until next time it occurs).
The code is hopefully not too out-of-the-ordinary. I set the queue size parameter in listen() to 0, which I think means it only allows one current connection and no pending connections (I tried 1 instead, but the problem was still there). None of the errors appear as trace prints where "// error" is shown in the code.
// Server code
mySocket = ::socket(AF_INET, SOCK_STREAM, 0);
if (mySocket == -1)
{
// error
}
// Set non-blocking
const int saveFlags = ::fcntl(mySocket, F_GETFL, 0);
::fcntl(mySocket, F_SETFL, saveFlags | O_NONBLOCK);
// Bind to port
// Union to work around pointer aliasing issues.
union SocketAddress
{
sockaddr myBase;
sockaddr_in myIn4;
};
SocketAddress address;
::memset(reinterpret_cast<Tbyte*>(&address), 0, sizeof(address));
address.myIn4.sin_family = AF_INET;
address.myIn4.sin_port = htons(Port);
address.myIn4.sin_addr.s_addr = INADDR_ANY;
if (::bind(mySocket, &address.myBase, sizeof(address)) != 0)
{
// error
}
if (::listen(mySocket, 0) != 0)
{
// error
}
// main loop
{
...
// Wait for a connection.
fd_set readSet;
FD_ZERO(&readSet);
FD_SET(mySocket, &readSet);
const int aResult = ::select(getdtablesize(), &readSet, NULL, NULL, NULL);
if (aResult != 1)
{
continue;
}
// A connection is definitely waiting.
const int fileDescriptor = ::accept(mySocket, NULL, NULL);
if (fileDescriptor == -1)
{
// error
}
// Set non-blocking
const int saveFlags = ::fcntl(fileDescriptor, F_GETFL, 0);
::fcntl(fileDescriptor, F_SETFL, saveFlags | O_NONBLOCK);
...
// Do other things for 30+ seconds.
...
const int bytesWritten = ::write(fileDescriptor, buffer, bufferSize);
if (bytesWritten < 0)
{
// THIS FAILS!! (but succeeds the first ~9 times)
}
// Finished with the connection.
::shutdown(fileDescriptor, SHUT_RDWR);
while (::close(fileDescriptor) == -1)
{
switch(errno)
{
case EINTR:
// Break from the switch statement. Continue in the loop.
break;
case EIO:
case EBADF:
default:
// error
return;
}
}
}
So somewhere between the accept() call (assuming that is exactly the point when the SYN packet is sent), and the write() call, the client's port gets changed to the previously-used client port.
So the question is: how can it be that the server accepts a connection (and thus opens a file descriptor), and then sends data through a previous (now stale and dead) connection/file descriptor? Does it need some sort of option in a system call that's missing?
I'm submitting an answer to summarize what we've figured out in the comments, even though it's not a finished answer yet. It does cover the important points, I think.
You have a server that handles clients one at a time. It accepts a connection, prepares some data for the client, writes the data, and closes the connection. The trouble is that the preparing-the-data step sometimes takes longer than the client is willing to wait. While the server is busy preparing the data, the client gives up.
On the client side, when the socket is closed, a FIN is sent notifying the server that the client has no more data to send. The client's socket now goes into FIN_WAIT1 state.
The server receives the FIN and replies with an ACK. (ACKs are done by the kernel without any help from the userspace process.) The server socket goes into the CLOSE_WAIT state. The socket is now readable, but the server process doesn't notice because it's busy with its data-preparation phase.
The client receives the ACK of the FIN and goes into FIN_WAIT2 state. I don't know what's happening in userspace on the client since you haven't shown the client code, but I don't think it matters.
The server process is still preparing data for a client that has hung up. It's oblivious to everything else. Meanwhile, another client connects. The kernel completes the handshake. This new client will not be getting any attention from the server process for a while, but at the kernel level the second connection is now ESTABLISHED on both ends.
Eventually, the server's data preparation (for the first client) is complete. It attempts to write(). The server's kernel doesn't know that the first client is no longer willing to receive data because TCP doesn't communicate that information! So the write succeeds and the data is sent out (packet 10711 in your wireshark listing).
The client gets this packet and its kernel replies with RST because it knows what the server didn't know: the client socket has already been shut down for both reading and writing, probably closed, and maybe forgotten already.
In the wireshark trace it appears that the server only wanted to send 15 bytes of data to the client, so it probably completed the write() successfully. But the RST arrived quickly, before the server got a chance to do its shutdown() and close() which would have sent a FIN. Once the RST is received, the server won't send any more packets on that socket. The shutdown() and close() are now executed, but don't have any on-the-wire effect.
Now the server is finally ready to accept() the next client. It begins another slow preparation step, and it's falling further behind schedule because the second client has been waiting a while already. The problem will keep getting worse until the rate of client connections slows down to something the server can handle.
The fix will have to be for you to make the server process notice when a client hangs up during the preparation step, and immediately close the socket and move on to the next client. How you will do it depends on what the data preparation code actually looks like. If it's just a big CPU-bound loop, you have to find some place to insert a periodic check of the socket. Or create a child process to do the data preparation and writing, while the parent process just watches the socket - and if the client hangs up before the child exits, kill the child process. Other solutions are possible (like F_SETOWN to have a signal sent to the process when something happens on the socket).
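A minimal sketch of the periodic-check idea (not the poster's code; the function name is mine), which the data-preparation loop could call every so often on the accepted descriptor:

#include <poll.h>
#include <sys/socket.h>

// Returns true if the peer has hung up (FIN or RST received) on this connection.
static bool clientHungUp(int fd)
{
    struct pollfd pfd;
    pfd.fd = fd;
    pfd.events = POLLIN;
    if (::poll(&pfd, 1, 0) <= 0)                 // 0 ms timeout: just a snapshot, never blocks
        return false;
    if (pfd.revents & (POLLERR | POLLHUP))
        return true;
    // Readable: peek without consuming; 0 bytes means the peer sent FIN.
    char tmp;
    return ::recv(fd, &tmp, 1, MSG_PEEK | MSG_DONTWAIT) == 0;
}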
Aha, success! It turns out the server was receiving the client's SYN, and the server's kernel was automatically completing the connection with a SYN-ACK, before accept() had been called. So there definitely is a listening queue, and having two connections waiting on the queue was half of the cause.
The other half of the cause was to do with information which was omitted from the question (I thought it was irrelevant because of the false assumption above). There was a primary connection port (call it A), and the secondary, troublesome connection port which this question is all about (call it B). The proper connection order is A establishes a connection (A1), then B attempts to establish a connection (which would become B1)... within a time frame of 200ms (I already doubled the timeout from 100ms which was written ages ago, so I thought I was being generous!). If it doesn't get a B connection within 200ms, then it drops A1. So then B1 establishes a connection with the server's kernel, waiting to be accepted. It only gets accepted on the next connection cycle when A2 establishes a connection, and the client also sends a B2 connection. The server accepts the A2 connection, then gets the first connection on the B queue, which is B1 (hasn't been accepted yet - the queue looked like B1, B2). That is why the server didn't send a FIN for B1 when the client had disconnected B1. So the two connections the server has are A2 and B1, which are obviously out of sync. It tries writing to B1, which is a dead connection, so it drops A2 and B1. Then the next pair are A3 and B2, which are also invalid pairs. They never recover from being out of sync until the server process is killed and the TCP connections are all reset.
So the solution was to just change the timeout for waiting on the B socket from 200 ms to 5 s. Such a simple fix that had me scratching my head for days (and it was fixed within 24 hours of putting it on Stack Overflow)! I also made it recover from stray B connections by adding socket B to the main select() call, and then accept()ing and close()ing it immediately (which would only happen if the B connection took longer than 5 s to establish). Thanks @AlanCurry for the suggestion of adding it to the select() and for the puzzle piece about the listen() backlog parameter being a hint.