I use a blocking FSocket on the client side, connected to a TCP server. If there is no message from the server, the socket thread blocks in FSocket::Recv(), and if the TCP server shuts down, the socket thread stays blocked in that function. With a blocking socket from the BSD socket API, however, the thread returns from recv() and gets an errno when the TCP server shuts down. So is this a defect in FSocket?
uint32 HRecvThread::Run()
{
    uint8* recv_buf = new uint8[RECV_BUF_SIZE];
    uint8* const recv_buf_head = recv_buf;
    int readLenSeq = 0;
    while (Started)
    {
        //if (TcpClient->Connected() && ClientSocket->GetConnectionState() != SCS_Connected)
        //{
        //    // server disconnected
        //    TcpClient->SetConnected(false);
        //    break;
        //}
        int32 bytesRead = 0;
        // because this is a blocking socket, the thread blocks in Recv if there is no message
        ClientSocket->Recv(recv_buf, readLenSeq, bytesRead);
        .....
        //some logic of resolution for tcp msg bytes
        .....
    }
    delete[] recv_buf;
    return 0;
}
As I expected, you are ignoring the return code, which presumably indicates success or failure, so you are looping indefinitely (not blocking) on an error or end-of-stream condition.
NB You should allocate the recv_buf on the stack, not dynamically. Don't use the heap when you don't have to.
There is a similar question on the forums in the UE4 C++ Programming section. Here is the discussion:
https://forums.unrealengine.com/showthread.php?111552-Recv-function-would-keep-blocking-when-TCP-server-shutdown
Long story short, in the UE4 Source, they ignore EWOULDBLOCK as an error. The code comments state that they do not view it as an error.
Also, there are several helper functions you should be using when opening and polling the port (I assume you are polling, since you are using blocking calls):
FSocket::Connect returns a bool, so make sure to check that return value.
FSocket::GetLastError returns the UE4-translated error code if an error occurred on the socket.
FSocket::HasPendingData will return a value that tells you whether it is safe to read from the socket.
FSocket::HasPendingConnection can check your connection state.
FSocket::GetConnectionState will tell you your active connection state.
Using these helper functions for error checking before making a call to FSocket::Recv will help you make sure you are in a good state before trying to read data. Also, it was noted in the forum posts that using the non-blocking code worked as expected. So, if you do not have a specific reason to use blocking code, just use the non-blocking implementation.
Also, as a final hint, FSocket::Wait will block, with a timeout, until your socket reaches a state of your choosing, e.g. it is readable or has data.
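For illustration, here is roughly how those helpers fit together in the receive thread. This is a sketch written against the FSocket API described above (reusing Started, ClientSocket, recv_buf, and RECV_BUF_SIZE from your code), not drop-in code:
while (Started)
{
    // Wait up to one second for readability instead of blocking forever in Recv.
    if (!ClientSocket->Wait(ESocketWaitConditions::WaitForRead, FTimespan::FromSeconds(1)))
    {
        // Timed out with no data; a good moment to check the connection state.
        if (ClientSocket->GetConnectionState() != SCS_Connected)
        {
            break; // server went away
        }
        continue;
    }
    uint32 pendingBytes = 0;
    if (!ClientSocket->HasPendingData(pendingBytes))
    {
        continue;
    }
    int32 bytesRead = 0;
    if (!ClientSocket->Recv(recv_buf, RECV_BUF_SIZE, bytesRead) || bytesRead <= 0)
    {
        break; // Recv returning false (or zero bytes) means an error or a closed connection
    }
    // ... process bytesRead bytes ...
}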
Related
I created a proxy server to handle CQL orders from website clients. The proxy listens for incoming connections and each connection is given a thread. The thread loops as long as the socket exists and dies on HUP. You may also stop the proxy, which will stop the threads by sending an event (See eventfd()) to each thread.
By itself, this already saves me a good 100ms, because the proxy is local and connecting to a local service is much faster than connecting to a remote one (even if the remote computer is on the local network).
However, I send orders and once in a while the proxy sees no incoming data (i.e. it calls read() on the socket, which is set up as non-blocking, and gets -1 in return with errno == EAGAIN). When that happens, I call poll() to wait for additional data, the HUP, or a hit on the eventfd meaning I have to quit (i.e. 2 fds, the socket and the eventfd).
Somehow, more often than not, when I hit the poll() function call, it adds an extra 40ms to the time it takes for a message to go round trip. Although one would think this only happens on larger messages, it happens when I receive an order, which is less than 100 bytes! So the size should not be the culprit. I also changed the code to make sure I send the entire order from the client to the proxy in one write() and to avoid the poll() if at all possible (i.e. I call read() first, and poll() only if nothing is available.)
Note that I have no timeout in this case because there is nothing to check other than the incoming orders and the eventfd. So I would imagine that the timeout won't be a problem.
The code base is really big, but the client/server comes down to something like this (the sizes in the original are fully dynamic):
// Client
...
connect(socket);
...
write(socket, order, sizeof(order));
read(socket, result, sizeof(result));
// repeat for other orders, as required by client...
// server
...
socket = accept(); // happens for each client
...
pthread_create(runner);
...
// server thread (runner)
...
for(;;)
{
    int r(0);
    for(;;)
    {
        r += read(socket, order, sizeof(order));
        if(r >= sizeof(order))
        {
            break;
        }
        // wait for more data if not enough has been received yet
        poll(..."socket" + "eventfd"...); // <-- this will often take 40ms
        if(eventfd_happened)
        {
            // quit thread
            return;
        }
    }
    ...
    [work on order]
    ...
    write(socket, result, sizeof(result));
}
Note 1: I see the problem when I have a single client. So having multiple clients does not in itself cause the problem either.
Note 2: The client really uses BIO_connect(), BIO_read() and BIO_write() [from OpenSSL], but I doubt that would be a problem. I do not use any kind of encryption.
I don't see why you're using non-blocking I/O given you have a dedicated thread per socket. Just block in read(). Use SO_RCVTIMEO if you need an overall read timeout.
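For reference, a minimal sketch of that setup (standard POSIX socket calls; sockfd stands for your connected socket):
#include <sys/socket.h>
#include <sys/time.h>

// Make a blocking read() give up after 5 seconds instead of blocking forever.
struct timeval tv;
tv.tv_sec = 5;
tv.tv_usec = 0;
setsockopt(sockfd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));
// On timeout, read() returns -1 with errno set to EAGAIN/EWOULDBLOCK.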
I'm currently using libpcap to sniff traffic in promiscuous mode
int main()
{
    // some stuff
    printf("Opening device: %s\n", devname.c_str());
    handle = pcap_open_live(devname.c_str(), 65536, 1, 0, errbuf);
    if (handle == NULL)
    {
        fprintf(stderr, "Couldn't open device %s : %s...", devname.c_str(), errbuf);
        return 1;
    }
    printf(" Done\n");
    pcap_loop(handle, -1, process_packet, NULL);
    // here run a thread to do some stuff. however, pcap_loop is blocking
    return 0;
}
I'd like to add an external thread to do some other stuff. How do I change the code above to make it non-blocking?
When you use non-blocking mode with libpcap you have to use pcap_dispatch. Note that pcap_dispatch can work in blocking or non-blocking mode; which one you get depends on how you configure libpcap. To make libpcap work in non-blocking mode you have to use the function pcap_setnonblock:
int pcap_setnonblock(pcap_t *p, int nonblock, char *errbuf);
The difference between blocking and non-blocking is not whether a loop runs forever. In blocking mode, pcap_dispatch waits for packets and returns only once they have been received; in non-blocking mode, the function returns immediately if no packets are available, and the callback processes whatever packets there are.
In "non-blocking" mode, an attempt to read from the capture
descriptor with pcap_dispatch() will, if no packets are currently
available to be read, return 0 immediately rather than blocking
waiting for packets to arrive. pcap_loop() and pcap_next() will not
work in "non-blocking" mode.
http://www.tcpdump.org/manpages/pcap_setnonblock.3pcap.html
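For illustration, a sketch of that pattern, reusing the handle opened in the question (do_other_stuff is a placeholder for your own work):
char errbuf[PCAP_ERRBUF_SIZE];
if (pcap_setnonblock(handle, 1, errbuf) == -1)
{
    fprintf(stderr, "pcap_setnonblock failed: %s\n", errbuf);
    return 1;
}
for (;;)
{
    // Returns immediately; n is the number of packets handled (0 if none were ready).
    int n = pcap_dispatch(handle, -1, process_packet, NULL);
    if (n < 0)
    {
        break; // -1 on error, -2 if pcap_breakloop() was called
    }
    do_other_stuff(); // nothing (or no more packets) to read right now
}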
pcap_loop is meant to go on until all input ends. If you don't want that behavior, call pcap_dispatch in a loop instead. By design, pcap_loop never returns; it is meant to keep searching for more data.
I use pcap_next_ex. It returns a result indicating whether a packet was read, so I can manage the acquisition in my own thread. See an example here. The read_timeout passed to pcap_open also affects this function.
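A sketch of that loop (return codes as documented in the libpcap man pages; process_packet_data is a placeholder handler):
struct pcap_pkthdr* header = NULL;
const u_char* data = NULL;
for (;;)
{
    int res = pcap_next_ex(handle, &header, &data);
    if (res == 1)
        process_packet_data(header, data); // one packet was read
    else if (res == 0)
        continue;                          // read_timeout expired with no packet
    else
        break;                             // -1 on error, -2 at the end of a capture file
}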
When you use the simple ZeroMQ REQ/REP pattern you depend on a fixed send()->recv() / recv()->send() sequence.
As this article describes, you get into trouble when a participant disconnects in the middle of a request: you can't just start over by receiving the next request from another connection, because the state machine would force you to send a reply to the disconnected one first.
Has a more elegant way to solve this emerged since the mentioned article was written?
Is reconnecting the only way to solve this (apart from not using REQ/REP and using another pattern instead)?
As the accepted answer seemed so terribly sad to me, I did some research and found that everything we need was actually in the documentation.
The .setsockopt() call with the correct parameters can help you reset your socket's state machine without brutally destroying it and rebuilding another on top of the previous one's dead body (yeah, I like the image).
ZMQ_REQ_CORRELATE: match replies with requests
The default behaviour of REQ sockets is to rely on the ordering of messages to match requests and responses and that is usually sufficient. When this option is set to 1, the REQ socket will prefix outgoing messages with an extra frame containing a request id. That means the full message is (request id, 0, user frames…). The REQ socket will discard all incoming messages that don't begin with these two frames.
Option value type: int
Option value unit: 0, 1
Default value: 0
Applicable socket types: ZMQ_REQ
ZMQ_REQ_RELAXED: relax strict alternation between request and reply
By default, a REQ socket does not allow initiating a new request with zmq_send(3) until the reply to the previous one has been received. When set to 1, sending another message is allowed and has the effect of disconnecting the underlying connection to the peer from which the reply was expected, triggering a reconnection attempt on transports that support it. The request-reply state machine is reset and a new request is sent to the next available peer.
If set to 1, also enable ZMQ_REQ_CORRELATE to ensure correct matching of requests and replies. Otherwise a late reply to an aborted request can be reported as the reply to the superseding request.
Option value type: int
Option value unit: 0, 1
Default value: 0
Applicable socket types: ZMQ_REQ
The complete documentation is here.
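For illustration, a minimal sketch of setting both options on a REQ socket (cppzmq-style, to match the question's C++ tag; the endpoint is illustrative):
zmq::context_t ctx(1);
zmq::socket_t req(ctx, ZMQ_REQ);
int on = 1;
// Prefix requests with an id so that late replies to aborted requests are discarded.
req.setsockopt(ZMQ_REQ_CORRELATE, &on, sizeof(on));
// Allow a new send() without waiting for the previous reply; resets the state machine.
req.setsockopt(ZMQ_REQ_RELAXED, &on, sizeof(on));
req.connect("tcp://127.0.0.1:5555");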
The good news is that, as of ZMQ 3.0 and later (the modern era), you can set a timeout on a socket. As others have noted elsewhere, you must do this after you have created the socket, but before you connect it:
zmq_req_socket.setsockopt( zmq.RCVTIMEO, 500 ) # milliseconds
Then, when you actually try to receive the reply (after you have sent a message to the REP socket), you can catch the error that will be asserted if the timeout is exceeded:
try:
    send( message, 0 )
    send_failed = False
except zmq.Again:
    logging.warning( "Image send failed." )
    send_failed = True
However! When this happens, as observed elsewhere, your socket will be in a funny state, because it will still be expecting the response. At this point, I cannot find anything that works reliably other than just restarting the socket. Note that if you disconnect() the socket and then connect() it again, it will still be in this bad state. Thus you need to
def reset_my_socket():
    global zmq_req_socket  # rebind the module-level socket
    zmq_req_socket.close()
    zmq_req_socket = zmq_context.socket( zmq.REQ )
    zmq_req_socket.setsockopt( zmq.RCVTIMEO, 500 ) # milliseconds
    zmq_req_socket.connect( zmq_endpoint )
You will also notice that because I close()d the socket, the receive timeout option was "lost", so it is important to set that option on the new socket.
I hope this helps. And I hope that this does not turn out to be the best answer to this question. :)
There is one solution to this and that is adding timeouts to all calls. Since ZeroMQ by itself does not really provide simple timeout functionality I recommend using a subclass of the ZeroMQ socket that adds a timeout parameter to all important calls.
So, instead of calling s.recv() you would call s.recv(timeout=5.0), and if a response does not come back within that 5-second window, it returns None and stops blocking. I had made a futile attempt at this when I ran into this problem.
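Under the hood such a wrapper usually amounts to polling before receiving. A C++ sketch of the same idea (cppzmq; s stands for your REQ socket):
zmq::pollitem_t items[] = { { static_cast<void*>(s), 0, ZMQ_POLLIN, 0 } };
zmq::poll(items, 1, 5000); // wait up to 5000 ms for the socket to become readable
zmq::message_t reply;
if (items[0].revents & ZMQ_POLLIN)
{
    s.recv(&reply); // data is ready; this will not block
}
else
{
    // timed out: the equivalent of returning None -- recover, e.g. reset the socket
}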
I'm actually looking into this at the moment, because I am retrofitting a legacy system.
I constantly come across code that "needs" to know about the state of the connection. However, the thing is that I want to move to the message-passing paradigm that the library promotes.
I found the following function: zmq_socket_monitor
What it does is monitor the socket passed to it and generate events that are then passed to an "inproc" endpoint - at that point you can add handling code to actually do something.
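For illustration, a minimal sketch of the wiring (libzmq C API; the inproc endpoint name and the event mask are illustrative choices, not required values):
// Ask libzmq to publish connect/disconnect events for req_socket (a void* from zmq_socket()).
zmq_socket_monitor(req_socket, "inproc://monitor.req", ZMQ_EVENT_CONNECTED | ZMQ_EVENT_DISCONNECTED);

// Events are read from a PAIR socket connected to the same inproc endpoint.
void* mon = zmq_socket(ctx, ZMQ_PAIR);
zmq_connect(mon, "inproc://monitor.req");

// Each event arrives as two frames: (event number + value), then the endpoint address.
zmq_msg_t frame;
zmq_msg_init(&frame);
zmq_msg_recv(&frame, mon, 0);
uint16_t event = *(uint16_t*)zmq_msg_data(&frame);
zmq_msg_close(&frame);
if (event == ZMQ_EVENT_DISCONNECTED)
{
    // the peer went away: reset whatever request/reply state you track
}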
There is also an example (actually test code) here: github
I have not got any specific code to give at the moment (maybe at the end of the week), but my intention is to respond to the connects and disconnects so that I can actually perform any resetting of logic required.
Hope this helps, and despite quoting the 4.2 docs, I am using 4.0.4, which seems to have the functionality as well.
Note: I notice you talk about Python above, but the question is tagged C++, so that's where my answer is coming from...
Update: I'm updating this answer with this excellent resource here: https://blog.cloudflare.com/when-tcp-sockets-refuse-to-die/ Socket programming is complicated, so do check out the references in this post.
None of the answers here seem accurate or useful. The OP is not looking for information on BSD socket programming; he is trying to figure out how to robustly handle accept()ed client-socket failures in ZMQ on the REP socket, to prevent the server from hanging or crashing.
As already noted -- this problem is complicated by the fact that ZMQ tries to pretend that the server's listen()ing socket is the same as an accept()ed socket (and there is nowhere in the documentation that describes how to set basic timeouts on such sockets).
My answer:
After doing a lot of digging through the code, the only relevant socket options passed along to accept()ed sockets seem to be the keep-alive options from the parent listen()er. So the solution is to set the following options on the listening socket before calling send or recv:
void zmq_setup(zmq::context_t** context, zmq::socket_t** socket, const char* endpoint)
{
    // Free old references.
    if (*socket != NULL)
    {
        (*socket)->close();
        delete *socket; // the destructor releases the underlying handle
    }
    if (*context != NULL)
    {
        // Shutdown all previous server client-sockets.
        delete *context; // context_t's destructor calls zmq_ctx_destroy()
    }
    *context = new zmq::context_t(1);
    *socket = new zmq::socket_t(**context, ZMQ_REP);
    // Enable TCP keep-alive.
    int is_tcp_keep_alive = 1;
    (*socket)->setsockopt(ZMQ_TCP_KEEPALIVE, &is_tcp_keep_alive, sizeof(is_tcp_keep_alive));
    // Only send 2 probes to check if the client is still alive.
    int tcp_probe_no = 2;
    (*socket)->setsockopt(ZMQ_TCP_KEEPALIVE_CNT, &tcp_probe_no, sizeof(tcp_probe_no));
    // How long a connection needs to be "idle" for, in seconds.
    int tcp_idle_timeout = 1;
    (*socket)->setsockopt(ZMQ_TCP_KEEPALIVE_IDLE, &tcp_idle_timeout, sizeof(tcp_idle_timeout));
    // Time in seconds between individual keep-alive probes.
    int tcp_probe_interval = 1;
    (*socket)->setsockopt(ZMQ_TCP_KEEPALIVE_INTVL, &tcp_probe_interval, sizeof(tcp_probe_interval));
    // Discard pending messages in the buffer on close.
    int is_linger = 0;
    (*socket)->setsockopt(ZMQ_LINGER, &is_linger, sizeof(is_linger));
    // TCP user timeout on unacknowledged send buffer.
    int is_user_timeout = 2;
    (*socket)->setsockopt(ZMQ_TCP_MAXRT, &is_user_timeout, sizeof(is_user_timeout));
    // Start internal enclave event server.
    printf("Host: Starting enclave event server\n");
    (*socket)->bind(endpoint);
}
What this does is tell the operating system to aggressively check the client socket for timeouts and reap it for cleanup when a client doesn't return a heartbeat in time. The result is that the OS will raise SIGPIPE in your program and socket errors will bubble up to send / recv - fixing a hung server. You then need to do two more things:
1. Handle SIGPIPE errors so the program doesn't crash
#include <signal.h>
#include <zmq.hpp>

// zmq_setup def here [...]

int main(int argc, char** argv)
{
    // Ignore SIGPIPE signals.
    signal(SIGPIPE, SIG_IGN);
    // ... rest of your code after
    // (Could potentially also restart the server
    // sock on N SIGPIPEs if you're paranoid.)
    // Start server socket.
    const char* endpoint = "tcp://127.0.0.1:47357";
    zmq::context_t* context = NULL; // must start NULL: zmq_setup checks them
    zmq::socket_t* socket = NULL;
    zmq_setup(&context, &socket, endpoint);
    // Message buffers.
    zmq::message_t request;
    zmq::message_t reply;
    // ... rest of your socket code here
}
2. Check for -1 returned by send or recv and catch ZMQ errors.
// E.g. skip broken accepted sockets (pseudo-code.)
while (1)
{
    try
    {
        if ((*socket).recv(&request) == -1)
            throw -1;
    }
    catch (...)
    {
        // Prevent any endless error loops killing CPU.
        sleep(1);
        // Reset ZMQ state machine.
        try
        {
            zmq::message_t blank_reply = zmq::message_t();
            (*socket).send(blank_reply);
        }
        catch (...)
        {
            // swallow: the blank send may legitimately fail
        }
        continue;
    }
    // ... handle the request and send a real reply here ...
}
Notice the weird code that tries to send a reply on a socket failure? In ZMQ, a REP server "socket" is an endpoint to another program making a REQ socket to that server. The result is that if you do a recv on a REP socket with a hung client, the server socket becomes stuck in a broken receive loop where it will wait forever to receive a valid request.
To force an update on the state machine, you try to send a reply. ZMQ detects that the socket is broken and removes it from its queue. The server socket becomes "unstuck", and the next recv call returns a new client from the queue.
To enable timeouts on an async client (in Python 3), the code would look something like this:
import asyncio
import zmq
import zmq.asyncio

ctx = zmq.asyncio.Context()  # the context was implicit in the original snippet

@asyncio.coroutine
def req(endpoint):
    ms = 2000 # In milliseconds.
    sock = ctx.socket(zmq.REQ)
    sock.setsockopt(zmq.SNDTIMEO, ms)
    sock.setsockopt(zmq.RCVTIMEO, ms)
    sock.setsockopt(zmq.LINGER, ms) # Discard pending buffered socket messages on close().
    sock.setsockopt(zmq.CONNECT_TIMEOUT, ms)
    # Connect the socket.
    # Connections don't strictly happen here.
    # ZMQ waits until the socket is used (which is confusing, I know.)
    sock.connect(endpoint)
    # Send some bytes.
    yield from sock.send(b"some bytes")
    # Recv bytes and convert to unicode.
    msg = yield from sock.recv()
    msg = msg.decode(u"utf-8")
Now you have some failure scenarios when something goes wrong.
By the way -- if anyone's curious -- the default value for TCP idle timeout in Linux seems to be 7200 seconds or 2 hours. So you would be waiting a long time for a hung server to do anything!
Sources:
https://github.com/zeromq/libzmq/blob/84dc40dd90fdc59b91cb011a14c1abb79b01b726/src/tcp_listener.cpp#L82 TCP keep alive options preserved for client sock
http://www.tldp.org/HOWTO/html_single/TCP-Keepalive-HOWTO/ How does keep alive work
https://github.com/zeromq/libzmq/blob/master/builds/zos/README.md Handling sig pipe errors
https://github.com/zeromq/libzmq/issues/2586 for information on closing sockets
https://blog.cloudflare.com/when-tcp-sockets-refuse-to-die/
https://github.com/zeromq/libzmq/issues/976
Disclaimer:
I've tested this code and it seems to be working, but ZMQ complicates testing this a fair bit because the client re-connects on failure. If anyone wants to use this solution in production, I recommend writing some basic unit tests first.
The server code could also be improved a lot with threading or polling to be able to handle multiple clients at once. As it stands, a malicious client can temporarily take up resources from the server (the 3-second timeout), which isn't ideal.
I am currently trying to fix a bug, related to the socket select() call, in a proxy server I have written. I am using the Poco C++ libraries (specifically the SocketReactor), and the issue is actually in the Poco code; it may be a bug, but I have yet to receive any confirmation of this from them.
What is happening is that whenever a connection terminates abruptly, the socket select() call returns immediately (which I believe is what it is meant to do). It returns all of the disconnected sockets within the readable set of file descriptors, but the problem is that a "Socket is not connected" exception is thrown when Poco tries to fire the onReadable event handler, which is where I would put the code to deal with this. Given that the exception is caught silently and the onReadable event is never fired, the select() call keeps returning immediately, resulting in an infinite loop in the SocketReactor.
I was considering modifying the Poco code so that rather than catching the exception silently it fires a new event called onDisconnected or something like that so that a cleanup can be performed.
My question is, are there any elegant ways of determining whether a socket has closed abnormally using select() calls? I was thinking of using the exception message to determine when this has occurred, but that seems dirty to me.
I had this same problem. The only way I found to get around it is to control the client application's exit path. The solution I used was to send a shutdown signal before the reactor was terminated on the client side; on the server you then simply close the socket.
//Client:
//Handler Class: onWrite
Packet p = Packet::Shutdown();
if (p.fn == "shutdown")
{
    _reactor.stop();
    delete this;
}

//Server
//Accepter Class: onRead
if (p.fn == "shutdown")
{
    printf("%s has disconnected", _username.c_str());
    _socket.close();
    delete this;
}
It appears you are correct, Remy. I managed to distinguish whether the socket had disconnected using the following code (this was added to Poco/Net/src/SocketImpl.cpp):
bool SocketImpl::isConnected()
{
    int bytestoread = 0;
    int rc;
    fd_set fdRead;
    FD_ZERO(&fdRead);
    FD_SET(_sockfd, &fdRead);
    struct timeval tv;
    tv.tv_sec = 0;
    tv.tv_usec = 250000;
    rc = ::select(int(_sockfd) + 1, &fdRead, (fd_set*) 0, (fd_set*) 0, &tv);
    // Note: FIONREAD needs the socket descriptor as its first argument.
    ::ioctl(_sockfd, FIONREAD, &bytestoread);
    return !((bytestoread == 0) && (rc == 1));
}
From my understanding, this checks if the socket is readable using a call to select() and then checks the actual number of bytes which are available on that socket. If the socket reports that it is readable but the bytes are 0 then the socket is not actually connected.
While this answers my question here, it unfortunately has not solved my Poco problem, as I can't figure out a way to fix this in the Poco SocketReactor code. I tried making a new event called DisconnectNotification, but unfortunately I cannot fire it, as the same error gets thrown as for a ReadNotification on a closed socket.
Just catch the ConnectionResetException in onReadable() (which processes the ReadableNotification).
Then it handles "Connection reset by peer" properly.
catch (Poco::Net::ConnectionResetException& ex)
{
    _socket.shutdownSend();
    delete this;
}
I want to write a server using a pool of worker threads and an IO completion port. The server should process and forward messages between multiple clients. The 'per client' data is in a class ClientContext; data is exchanged between instances of this class by the worker threads. I think this is a typical scenario.
However, I have two problems with those IO completion ports.
(1) The first problem is that the server basically receives data from clients, but I never know if a complete message was received. In fact, WSAGetLastError() always returns that WSARecv() is still pending. I tried to wait for the event OVERLAPPED.hEvent with WaitForMultipleObjects(). However, it blocks forever, i.e. WSARecv() never completes in my program.
My goal is to be absolutely sure that the whole message has been received before further processing starts. My message has a 'message length' field in its header, but I don't really see how to use it with the IOCP function parameters.
(2) If WSARecv() is commented out in the code snippet below, the program still receives data. What does that mean? Does it mean that I don't need to call WSARecv() at all? I am not able to get a deterministic behaviour with those IO completion ports.
Thanks for your help!
while (WaitForSingleObject(module_com->m_shutdown_event, 0) != WAIT_OBJECT_0)
{
    dequeue_result = GetQueuedCompletionStatus(module_com->m_h_io_completion_port,
                                               &transfered_bytes,
                                               (LPDWORD)&lp_completion_key,
                                               &p_ol,
                                               INFINITE);
    if (lp_completion_key == NULL)
    {
        //Shutting down
        break;
    }
    //Get client context
    current_context = (ClientContext *)lp_completion_key;
    //IOCP error
    if (dequeue_result == FALSE)
    {
        //... do some error handling...
    }
    else
    {
        // 'per client' data
        thread_state = current_context->GetState();
        wsa_recv_buf = current_context->GetWSABUFPtr();
        // 'per call' data
        this_overlapped = current_context->GetOVERLAPPEDPtr();
    }
    while (thread_state != STATE_DONE)
    {
        switch (thread_state)
        {
        case STATE_INIT:
            //Check if completion packet has been posted by internal function or by WSARecv(), WSASend()
            if (transfered_bytes > 0)
            {
                dwFlags = 0;
                transf_now = 0;
                transf_result = WSARecv(current_context->GetSocket(),
                                        wsa_recv_buf,
                                        1,
                                        &transf_now,
                                        &dwFlags,
                                        this_overlapped,
                                        NULL);
                if (SOCKET_ERROR == transf_result && WSAGetLastError() != WSA_IO_PENDING)
                {
                    //...error handling...
                    break;
                }
                // put received message into a message queue
            }
            else // (transfered_bytes == 0)
            {
                // Another context passed data to this context
                // and notified it via PostQueuedCompletionStatus().
            }
            break;
        }
    }
}
(1) The first problem is that the server basically receives data from clients but I never know if a complete message was received.
Your recv calls can return anywhere from 1 byte to the whole 'message'. You need to include logic that works out when it has enough data to determine the length of the complete 'message', and then works out when you actually have a complete 'message'. While you do NOT have enough data, you can reissue a recv call using the same memory buffer but with an updated WSABUF structure that points to the end of the data that you have already received. In that way you can accumulate a full message in your buffer without needing to copy data after every recv call completes.
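For illustration, a sketch of that accumulation pattern, assuming the 4-byte 'message length' header the question mentions (the buffer/received/buffer_size fields on ClientContext are illustrative, not the OP's actual members):
// Issue the next read so that completed data is appended after what we already have.
void post_next_recv(ClientContext* ctx)
{
    WSABUF wsabuf;
    wsabuf.buf = ctx->buffer + ctx->received;                // append point
    wsabuf.len = (ULONG)(ctx->buffer_size - ctx->received);  // remaining space
    DWORD flags = 0;
    int rc = WSARecv(ctx->GetSocket(), &wsabuf, 1, NULL, &flags,
                     ctx->GetOVERLAPPEDPtr(), NULL);
    if (rc == SOCKET_ERROR && WSAGetLastError() != WSA_IO_PENDING)
    {
        // real failure: close the connection and clean up the context
    }
}
// In the completion handler: ctx->received += transfered_bytes. Once
// ctx->received >= 4 the header gives message_length, and once it reaches
// 4 + message_length a complete message is available for processing;
// otherwise call post_next_recv(ctx) again to keep accumulating.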
(2) If WSARecv() is commented out in the code snippet below, the program still receives data. What does that mean? Does it mean that I don't need to call WSARecv() at all?
I expect it just means you have a bug in your code...
Note that it's 'better', from a scalability point of view, not to use the event in the OVERLAPPED structure, and instead to associate the socket with the IOCP and allow the completions to be posted to a thread pool that deals with your completions.
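For illustration, the association itself is just two calls (standard Win32; using the ClientContext as the completion key mirrors the question's code):
// Create one completion port for the whole server.
HANDLE iocp = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 0);

// Associate each accepted socket with it, using the per-client context as the key.
ClientContext* ctx = new ClientContext(client_socket);
CreateIoCompletionPort((HANDLE)client_socket, iocp, (ULONG_PTR)ctx, 0);

// Worker threads then loop on GetQueuedCompletionStatus(iocp, ...), with no
// per-operation event handles and no WaitForMultipleObjects() required.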
I have a free IOCP client/server framework available from here which may give you some hints; and a series of articles on CodeProject (first one is here: http://www.codeproject.com/KB/IP/jbsocketserver1.aspx) where I deal with the whole 'reading complete messages' problem (see "Chunking the byte stream").