I am practicing a bit with sockets and UDP client-server architecture and, referring to some examples available on the web, I have implemented a very simple UDP server using C and a UDP client class using C++.
Briefly speaking, the current implementation lets the server listen for incoming messages and transmit the same packet back to the client.
It seems to work fine if the client makes sequential requests.
Here is a brief explanatory example:
#include "UDPClient.h"
int main(int argc, char* argv[]) {
UDPClient testClient;
testClient.initSockets(1501, "127.0.0.1", 1500);
for (int i = 0; i < 10; i++) {
testClient.notifyEntry();
testClient.notifyExit();
}
return 0;
}
Since the client actually needs to share several pieces of information with the server at the same time, I tested the same code block using new threads:
#include <thread>
#include "UDPClient.h"

int main(int argc, char* argv[]) {
    UDPClient testClient;
    testClient.initSockets(1501, "127.0.0.1", 1500);
    for (int i = 0; i < 10; i++) {
        std::thread thrdOne(UDPClient::notifyEntry, std::ref(testClient));
        std::thread thrdTwo(UDPClient::notifyExit, std::ref(testClient));
        // Join before the next iteration: reassigning a joinable
        // std::thread calls std::terminate.
        thrdOne.join();
        thrdTwo.join();
    }
    return 0;
}
As you can see, notifyEntry and notifyExit have been made static and currently need a reference to a class instance to work properly.
Furthermore, inside their function bodies I have also added a little code block to check whether the sent message equals the received one, since the server sends back the same content.
Here is an explanatory example:
void UDPClient::notifyEntry(UDPClient& inst) {
    char buffer = 'E';
    inst.sendPacket(buffer);    // sendto...
    inst.receivePacket(buffer); // recvfrom...
    if (!(buffer == 'E')) {
        std::string e = "Buffer should be E but it is ";
        e.append(1, buffer);
        throw UDPClientException(e);
    }
}
When using multithreading, it often happens that the above-mentioned check throws an exception, because the buffer actually contains another char (the one sent by notifyExit).
Taking this information into account, I would like to ask you:
Does this happen because the recvfrom of one thread can also catch the response to a request from another thread, since only a single bound socket is instantiated?
If yes, should I instantiate more than a single socket (for instance, one for each type of message, that is one for notifyEntry and one for notifyExit)? And does multithreading on the server for responses alone not solve the issue anyway?
Does this happen because the recvfrom of one thread can also catch the response to a request from another thread, since only a single bound socket is instantiated?
That's very likely -- if you have multiple threads calling recvfrom() on the same UDP socket, it is indeterminate/unpredictable which thread will receive any given incoming UDP packet.
If yes, should I instantiate more than a single socket (for instance, one for each type of message, that is one for notifyEntry and one for notifyExit)?
Yes, I'd recommend having each thread create its own private UDP socket and bind() its socket to its own separate port (e.g. by passing 0 as the port number to bind()); that way each thread can be sure to receive only its own responses and not get confused by responses that were intended for other threads. (Note that you'll also want to code your server to send its replies back to the IP address and port that was reported by the recvfrom() call, rather than sending reply packets back to a hard-coded port number)
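A minimal sketch of that per-thread approach, using plain POSIX sockets (sendAndReceive() and its parameters are hypothetical helpers, not part of the asker's UDPClient class):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <stdexcept>

// Each thread calls this with its own message type ('E' or 'X').
// Binding to port 0 lets the OS pick a unique ephemeral port, so the
// server's reply (sent to the source address seen by its recvfrom)
// can only arrive on this thread's socket.
char sendAndReceive(char msg, const char* serverIp, unsigned short serverPort) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) throw std::runtime_error("socket() failed");

    sockaddr_in local{};
    local.sin_family = AF_INET;
    local.sin_addr.s_addr = htonl(INADDR_ANY);
    local.sin_port = 0;  // let the OS choose a free port
    bind(fd, reinterpret_cast<sockaddr*>(&local), sizeof(local));

    sockaddr_in server{};
    server.sin_family = AF_INET;
    inet_pton(AF_INET, serverIp, &server.sin_addr);
    server.sin_port = htons(serverPort);

    sendto(fd, &msg, 1, 0, reinterpret_cast<sockaddr*>(&server), sizeof(server));

    char reply = 0;
    recvfrom(fd, &reply, 1, 0, nullptr, nullptr);  // only this socket's reply can land here
    close(fd);
    return reply;
}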
And does multithreading on the server for responses alone not solve the issue anyway?
No, the correct handling of UDP packets (or not) is a separate issue that is independent of whether the server is single-threaded or multi-threaded.
I am using cpp-httplib to retrieve some data from a server using long polling (that is, the client issues a request to the server, and the server keeps the connection open until the required data is available or a timeout is reached).
The program is running on my Raspberry Pi, which sits behind a router that does not have a static outgoing IP address. Every time the IP is reassigned (or, at least, close to that point in time), my program breaks: the thread currently performing the poll is stuck forever in httplib::SSLClient::Get, which is caused by a blocking read() syscall. Both server- and client-side timeouts are unable to do anything, while a connection close should make read() immediately return 0, which is what I would have expected in this situation.
Inspecting the program with gdb shows the following:
(gdb) thread 2
(gdb) where
__libc_read (nbytes=5, buf=0x75608edb, fd=3) at ../sysdeps/unix/sysv/linux/read.c:26
__libc_read (fd=3, buf=0x75608edb, nbytes=5) at ../sysdeps/unix/sysv/linux/read.c:24
0x76d1862c in ?? () from /usr/lib/arm-linux-gnueabihf/libcrypto.so.1.1
Backtrace stopped: previous frame identical to this frame (corrupt stack?)
I am not doing anything (as far as I know) that could accidentally overwrite return addresses.
For comparison, a 'healthy' stack trace during an SSLClient::Get can be found here.
The actual code is quite a lot, but here's a short version that shows the same behaviour:
#include <iostream>
#include <thread>

#define CPPHTTPLIB_OPENSSL_SUPPORT 1
#include "httplib.h"

void poll(httplib::SSLClient* c, char* path) {
    while (true) {
        auto response = c->Get(path);
        if (response)
            std::cout << response->body << std::endl;
    }
}

int main(int argc, char* argv[]) {
    if (argc >= 3) {
        httplib::SSLClient client(argv[1], 443, 20);
        std::thread poll_thread(poll, &client, argv[2]);
        poll_thread.join();
    } else {
        std::cerr << "Usage: ./poll <host> <path>" << std::endl;
        return 1;
    }
}
I can think of some workarounds that might or might not work, but I'd really like to know why and how this is happening in the first place.
Just expanding on the keep_alive option I mentioned in the comment.
In the scenario you described, it seems possible that the underlying TCP connection was terminated in an unclean fashion, i.e., you say the IP address was reassigned.
Ideally, when a TCP connection terminates, you want your code to exit out of any blocked read/poll operation. That is what happens for normal socket closures, e.g., when the remote process is killed or just decides it is time to close. But if the IP address of your host is changed, I'm not sure there will necessarily be a low-level TCP message to the effect of "this connection is now closed". The consequence for your program is that it can still hold a local socket (the local TCP endpoint) and not realise the connection has dropped.
This is where something like keep_alive comes in. The idea is that the kernel sends keep-alive packets to keep testing whether the connection is established; if these ever fail, it can close the local socket (and so your blocking read, or blocking select, will return with some sort of end-of-stream error).
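For reference, a minimal sketch of what enabling kernel keep-alive looks like on a raw socket. The TCP_KEEP* option names are Linux-specific, and cpp-httplib does not obviously expose the underlying fd, so treat this as an illustration of the mechanism rather than a drop-in fix:

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

// Enable kernel keep-alive probes on an already-connected TCP socket.
// With these values a dead connection is detected after roughly
// 30 + 5*10 seconds, after which blocked reads return an error.
void enable_keepalive(int fd) {
    int on = 1;
    setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on));

    int idle = 30;      // seconds of idleness before the first probe
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle));

    int interval = 10;  // seconds between probes
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof(interval));

    int count = 5;      // failed probes before the socket is closed
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &count, sizeof(count));
}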
Separately from keep_alive, you can also consider application-level heartbeat messages (e.g., WebSocket has ping/pong). In addition to ensuring the TCP connection remains established, they confirm whether the remote application is healthy.
I use a blocking FSocket on the client side, connected to a TCP server. If there is no message from the server, the socket thread blocks in FSocket::Recv(); if the TCP server shuts down, the socket thread is still blocked in this function. But when using a blocking socket of the BSD socket API, the thread returns from recv with an errno when the TCP server shuts down. So is this a defect of FSocket?
uint32 HRecvThread::Run()
{
    uint8* recv_buf = new uint8[RECV_BUF_SIZE];
    uint8* const recv_buf_head = recv_buf;
    int readLenSeq = 0;

    while (Started)
    {
        //if (TcpClient->Connected() && ClientSocket->GetConnectionState() != SCS_Connected)
        //{
        //    // server disconnected
        //    TcpClient->SetConnected(false);
        //    break;
        //}

        int32 bytesRead = 0;
        // Because this is a blocking socket, the thread blocks in Recv
        // if there is no message.
        ClientSocket->Recv(recv_buf, readLenSeq, bytesRead);
        // ...
        // some logic to parse the TCP message bytes
        // ...
    }

    delete[] recv_buf;
    return 0;
}
As I expected, you are ignoring the return code, which presumably indicates success or failure, so you are looping indefinitely (not blocking) on an error or end of stream condition.
NB: You should allocate recv_buf on the stack, not dynamically. Don't use the heap when you don't have to.
There is a similar question on the forums in the UE4 C++ Programming section. Here is the discussion:
https://forums.unrealengine.com/showthread.php?111552-Recv-function-would-keep-blocking-when-TCP-server-shutdown
Long story short, in the UE4 Source, they ignore EWOULDBLOCK as an error. The code comments state that they do not view it as an error.
Also, there are several helper functions you should be using when opening the port and when polling it (I assume you are polling since you are using blocking calls):
FSocket::Connect returns a bool, so make sure to check that return value.
FSocket::GetLastError returns the UE4-translated error code if an error occurred with the socket.
FSocket::HasPendingData will return a value that informs you whether it is safe to read from the socket.
FSocket::HasPendingConnection can check your connection state.
FSocket::GetConnectionState will tell you your active connection state.
Using these helper functions for error checking before making a call to FSocket::Recv will help you make sure you are in a good state before trying to read data. Also, it was noted in the forum posts that using the non-blocking code worked as expected. So, if you do not have a specific reason to use blocking code, just use the non-blocking implementation.
Also, as a final hint: FSocket::Wait will block, with a timeout, until your socket is in a desirable state of your choosing, e.g. it is readable or has data.
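Putting those helpers together, a sketch of a guarded receive loop might look like the following (based on my reading of the UE4 socket API; verify the exact signatures against your engine version):

uint32 HRecvThread::Run()
{
    uint8 recv_buf[RECV_BUF_SIZE];  // stack allocation, as suggested above

    while (Started)
    {
        // Bail out as soon as the engine reports the connection is gone.
        if (ClientSocket->GetConnectionState() != SCS_Connected)
        {
            TcpClient->SetConnected(false);
            break;
        }

        // Block for at most one second waiting for readable data.
        if (!ClientSocket->Wait(ESocketWaitConditions::WaitForRead, FTimespan::FromSeconds(1)))
        {
            continue;  // timeout: loop around and re-check the connection state
        }

        int32 bytesRead = 0;
        if (!ClientSocket->Recv(recv_buf, RECV_BUF_SIZE, bytesRead) || bytesRead <= 0)
        {
            // Recv returning false (or zero bytes) indicates an error or a closed peer.
            TcpClient->SetConnected(false);
            break;
        }

        // ... parse bytesRead bytes from recv_buf ...
    }

    return 0;
}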
I am currently working on a project written in C++ involving a real-time UDP connection. I receive UDP packets from a control computer containing commands to start/stop an infinite while loop that reads data from an IMU and sends that data to the control computer.
My problem is the following: first I implemented an exit condition for the loop using recvfrom() and read(), but the control computer sends a UDP packet every second, which delayed the whole loop and made sending the data at the desired interval of 5 ms impossible.
I tried to fix this problem by using fcntl(fd, F_SETFL, O_NONBLOCK); and using only read(), which actually works fine, but I am unsure whether this is a wise idea, since I am no longer checking for errors. Is there any elegant way to solve this problem? I thought about using pthreads or something like that, however I have never worked with threads or parallel programming, so I would have to spend some time learning that.
I appreciate any advice on this problem you could give me.
Here is a code example:
// includes
...

int main() {
    RNet cmd;          // RNet: struct that contains all the information of the UDP header and the command
    RNet* pCmd = &cmd;
    ssize_t b;
    int fd2;
    struct sockaddr_in snd;  // sender is the control computer
    socklen_t length;

    // further declaration of variables, connecting to socket, etc...
    ...

    fcntl(fd2, F_SETFL, O_NONBLOCK);

    while (1)
    {
        // read messages from control computer
        if ((b = read(fd2, pCmd, 19)) > 0) {
            memcpy(&cmd, pCmd, b);
        }

        // transmission
        while (cmd.CLout.MotionCommand == 1) // MotionCommand: 1 - send messages; 0 - do nothing
        {
            if (time_elapsed >= 5) // elapsed time in ms
            {
                // update sensor values
                ...
                // sendto()
                ...
                // update control time, timestamp, etc.
                ...
            }
            if (recvfrom(fd2, pCmd, (int)sizeof(pCmd), 0, (struct sockaddr*) &snd, &length) < 0) {
                perror("error receiving data");
                return 0;
            }
            // checking Control Model Command
            if ((b = read(fd2, pCmd, 19)) > 0) {
                memcpy(&cmd, pCmd, b);
            }
        }
    }
}
I really like the "blocking calls on multiple threads" design. It enables you to have distinct independent tasks, and you don't have to worry about how each task can disturb another. It can have some drawbacks but it is usually a good fit for many needs.
To do that, just use pthread_create to create a new thread for each task (you may keep the main thread for one task). In your case, you should have one thread to receive commands and another to send your data. You also need the receiving thread to notify the sending thread of the commands. To do that, you can use a synchronization tool such as a mutex paired with a condition variable.
Overall, you should have your receiving thread blocking on recvfrom, and the sending thread waiting on the condition variable. When the receiving thread receives a start command, it signals the condition and goes back to recvfrom (optionally you can set a variable to provide more information to the other thread).
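A sketch of that structure, using std::thread primitives for brevity (with pthread_create the shape is the same; the '1' start/stop encoding is made up):

#include <chrono>
#include <condition_variable>
#include <mutex>
#include <sys/socket.h>
#include <thread>

std::mutex m;
std::condition_variable cv;
bool running = false;  // shared flag, guarded by m

// Thread 1: block on recvfrom() and update the shared flag.
void command_loop(int fd) {
    char cmd;
    while (true) {
        if (recvfrom(fd, &cmd, 1, 0, nullptr, nullptr) <= 0)
            continue;  // errors ignored in this sketch
        {
            std::lock_guard<std::mutex> lock(m);
            running = (cmd == '1');  // hypothetical start/stop encoding
        }
        cv.notify_one();
    }
}

// Thread 2: wait until started, then send at the 5 ms cadence.
void sender_loop(int fd) {
    while (true) {
        {
            std::unique_lock<std::mutex> lock(m);
            cv.wait(lock, [] { return running; });  // sleep until started
        }
        while (true) {
            {
                std::lock_guard<std::mutex> lock(m);
                if (!running) break;  // stop command arrived
            }
            // read the IMU and sendto(fd, ...) here, every 5 ms
            std::this_thread::sleep_for(std::chrono::milliseconds(5));
        }
    }
}

main() would then spawn std::thread t1(command_loop, fd) and std::thread t2(sender_loop, fd) and join them.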
As a comment, remember that UDP is 1-to-many, thus your code here will react to any packet sent to you (even from some random or malicious host). You may want to filter on the remote sockaddr after recvfrom, or use connect + recv; it depends on what you want.
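A minimal sketch of the connect + recv variant (connect_udp is a hypothetical helper; the control computer's address must be known up front):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

// After connect(), the kernel discards datagrams from any other source,
// and plain recv()/send() can be used instead of recvfrom()/sendto().
int connect_udp(int fd, const char* ctrl_ip, unsigned short ctrl_port) {
    sockaddr_in ctrl{};
    ctrl.sin_family = AF_INET;
    inet_pton(AF_INET, ctrl_ip, &ctrl.sin_addr);
    ctrl.sin_port = htons(ctrl_port);
    return connect(fd, reinterpret_cast<sockaddr*>(&ctrl), sizeof(ctrl));
}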
I am learning Winsock and trying to create some simple programs to get to know it. Following the tutorials, I managed to create a server which can handle and manage multiple connections, and a client; it works how it is supposed to, but:
I tried to make a loop which checks whether any of the clients has disconnected and, if so, closes its socket.
I managed to write something which checks if a socket has disconnected, but it does not handle 2 or more sockets at one time.
Can anyone tell me how to make a working loop that checks every client for disconnection and closes its socket? The point of all this is to enforce something like a maximum number of clients connected to the server at one time. Thanks in advance.
while (true)
{
    ConnectingSocket = accept(ListeningSocket, (SOCKADDR*)&addr, &addrlen);
    if (ConnectingSocket != INVALID_SOCKET)
    {
        Connections[ConnectionsCounter] = ConnectingSocket;
        char *Name = new char[64];
        ZeroMemory(Name, 64);
        sprintf(Name, "%i", ConnectionsCounter);
        send(Connections[ConnectionsCounter], Name, 64, 0);
        cout << "New connection !\n";
        ConnectionsCounter++;
        char data;
        if (ConnectionsCounter > 0)
        {
            for (int i = 0; i < ConnectionsCounter; i++)
            {
                if (recv(Connections[i], &data, 1, MSG_PEEK))
                {
                    closesocket(Connections[i]);
                    cout << "Connection closed.\n";
                    ConnectionsCounter = ConnectionsCounter - 1;
                }
            }
        }
    }
    Sleep(50);
}
It seems that you want to manage multiple connections using a single thread, right?
Briefly, socket communication has two modes, blocking and non-blocking, and the default is blocking mode. Let's focus on your code:
for (int i = 0; i < ConnectionsCounter; i++)
{
    if (recv(Connections[i], &data, 1, MSG_PEEK))
    {
        closesocket(Connections[i]);
        cout << "Connection closed.\n";
        ConnectionsCounter = ConnectionsCounter - 1;
    }
}
In the above code you call the recv function, and it will block until the peer sends a message to you or closes the link. So suppose you have two connections, namely Connections[0] and Connections[1]. If you are blocked in recv on Connections[0] at the moment Connections[1] disconnects, you will not know it, because you are blocking at recv(Connections[0]). Only when Connections[0] sends a message or closes its socket does the loop continue, and only then do you detect that Connections[1] disconnected, even though it may have disconnected 10 minutes ago.
To solve this, I recommend the book Network Programming for Microsoft Windows. There are several approaches, such as the one-thread-per-socket pattern, asynchronous communication, non-blocking mode, and so on; a select()-based sketch follows below.
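To illustrate the non-blocking direction, here is a rough select()-based sketch (check_disconnects is a hypothetical helper mirroring your Connections array; error handling trimmed):

#include <winsock2.h>

// Watch all client sockets at once instead of blocking on a single recv().
void check_disconnects(SOCKET* connections, int& count)
{
    fd_set readfds;
    FD_ZERO(&readfds);
    for (int i = 0; i < count; i++)
        FD_SET(connections[i], &readfds);

    timeval timeout = {0, 50 * 1000};  // wait at most 50 ms
    if (select(0, &readfds, NULL, NULL, &timeout) <= 0)
        return;                        // nothing readable (or an error)

    for (int i = 0; i < count; i++)
    {
        char data;
        if (FD_ISSET(connections[i], &readfds) &&
            recv(connections[i], &data, 1, MSG_PEEK) <= 0)
        {
            // 0 means graceful close, SOCKET_ERROR means failure.
            closesocket(connections[i]);
            connections[i] = connections[count - 1];  // compact the array
            count--;
            i--;                                      // re-check the moved slot
        }
    }
}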
Forgot to point out the bug; pay attention here:
closesocket(Connections[i]);
cout << "Connection closed.\n";
ConnectionsCounter = ConnectionsCounter - 1;
Let me give an example to illustrate it. Suppose we have two connections with indexes 0 and 1, so ConnectionsCounter is 2, right? When Connections[0] disconnects, ConnectionsCounter changes from 2 to 1. The loop exits and a new client connects, and you save the new client socket as Connections[ConnectionsCounter(=1)] = ConnectingSocket; oops, that's a bug, because the disconnected socket's index is 0, while index 1 is still used by another live link. You are overwriting index 1.
Why not use a std::vector to store the sockets? For instance (a minimal sketch):
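#include <winsock2.h>
#include <vector>

// Store client sockets in a vector so removal never leaves holes.
std::vector<SOCKET> connections;

void add_client(SOCKET s) {
    connections.push_back(s);
}

void drop_client(size_t i) {
    closesocket(connections[i]);
    connections.erase(connections.begin() + i);  // no stale slot, no reused index
}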
hope it helps~
When you use the simple ZeroMQ REQ/REP pattern, you depend on a fixed send()->recv() / recv()->send() sequence.
As this article describes, you get into trouble when a participant disconnects in the middle of a request, because then you can't just start over with receiving the next request from another connection; the state machine would force you to send a request to the disconnected one.
Has a more elegant way to solve this emerged since the mentioned article was written?
Is reconnecting the only way to solve this (apart from not using REQ/REP and using another pattern instead)?
As the accepted answer seemed so terribly sad to me, I did some research and found that everything we need was actually in the documentation.
.setsockopt() with the correct parameter can help you reset your socket's state machine without brutally destroying it and rebuilding another one on top of the previous one's dead body (yeah, I like the image).
ZMQ_REQ_CORRELATE: match replies with requests
The default behaviour of REQ sockets is to rely on the ordering of messages to match requests and responses and that is usually sufficient. When this option is set to 1, the REQ socket will prefix outgoing messages with an extra frame containing a request id. That means the full message is (request id, 0, user frames…). The REQ socket will discard all incoming messages that don't begin with these two frames.
Option value type: int
Option value unit: 0, 1
Default value: 0
Applicable socket types: ZMQ_REQ
ZMQ_REQ_RELAXED: relax strict alternation between request and reply
By default, a REQ socket does not allow initiating a new request with zmq_send(3) until the reply to the previous one has been received. When set to 1, sending another message is allowed and has the effect of disconnecting the underlying connection to the peer from which the reply was expected, triggering a reconnection attempt on transports that support it. The request-reply state machine is reset and a new request is sent to the next available peer.
If set to 1, also enable ZMQ_REQ_CORRELATE to ensure correct matching of requests and replies. Otherwise a late reply to an aborted request can be reported as the reply to the superseding request.
Option value type: int
Option value unit: 0, 1
Default value: 0
Applicable socket types: ZMQ_REQ
The complete documentation is here.
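As a minimal sketch with the plain libzmq C API (the endpoint is made up), enabling both options on a REQ socket looks like this:

#include <zmq.h>

int main() {
    void *ctx = zmq_ctx_new();
    void *req = zmq_socket(ctx, ZMQ_REQ);

    int one = 1;
    // Prefix each request with a correlation id so late replies are discarded.
    zmq_setsockopt(req, ZMQ_REQ_CORRELATE, &one, sizeof(one));
    // Allow a new send() even if the previous reply never arrived; the
    // state machine resets and a reconnect is triggered instead of a hang.
    zmq_setsockopt(req, ZMQ_REQ_RELAXED, &one, sizeof(one));

    zmq_connect(req, "tcp://127.0.0.1:5555");
    // ... zmq_send / zmq_recv as usual ...
    zmq_close(req);
    zmq_ctx_destroy(ctx);
    return 0;
}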
The good news is that, as of ZMQ 3.0 and later (the modern era), you can set a timeout on a socket. As others have noted elsewhere, you must do this after you have created the socket, but before you connect it:
zmq_req_socket.setsockopt( zmq.RCVTIMEO, 500 ) # milliseconds
Then, when you actually try to receive the reply (after you have sent a message to the REP socket), you can catch the error that will be asserted if the timeout is exceeded:
try:
    zmq_req_socket.send( message, 0 )
    send_failed = False
except zmq.Again:
    logging.warning( "Image send failed." )
    send_failed = True
However! When this happens, as observed elsewhere, your socket will be in a funny state, because it will still be expecting the response. At this point, I cannot find anything that works reliably other than just restarting the socket. Note that if you disconnect() the socket and then re-connect() it, it will still be in this bad state. Thus you need to:
def reset_my_socket():
    global zmq_req_socket  # rebinding the module-level socket requires this
    zmq_req_socket.close()
    zmq_req_socket = zmq_context.socket( zmq.REQ )
    zmq_req_socket.setsockopt( zmq.RCVTIMEO, 500 ) # milliseconds
    zmq_req_socket.connect( zmq_endpoint )
You will also notice that because I close()d the socket, the receive timeout option was "lost", so it is important to set it on the new socket.
I hope this helps. And I hope that this does not turn out to be the best answer to this question. :)
There is one solution to this, and that is adding timeouts to all calls. Since ZeroMQ by itself does not really provide simple timeout functionality, I recommend using a subclass of the ZeroMQ socket that adds a timeout parameter to all important calls.
So, instead of calling s.recv() you would call s.recv(timeout=5.0), and if a response does not come back within that 5-second window, it will return None and stop blocking. I made a (futile) attempt at this when I ran into this problem.
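In C++, the same idea can be sketched with the plain libzmq C API by wrapping recv in zmq_poll (recv_with_timeout is a hypothetical helper):

#include <zmq.h>

// Returns true and fills msg if a reply arrived within timeout_ms,
// false otherwise (the caller can then reset the socket).
bool recv_with_timeout(void* socket, zmq_msg_t* msg, long timeout_ms) {
    zmq_pollitem_t item = { socket, 0, ZMQ_POLLIN, 0 };
    if (zmq_poll(&item, 1, timeout_ms) <= 0 || !(item.revents & ZMQ_POLLIN))
        return false;  // timed out (or a poll error)
    return zmq_msg_recv(msg, socket, 0) >= 0;
}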
I'm actually looking into this at the moment, because I am retrofitting a legacy system.
I keep coming across code that "needs" to know about the state of the connection. However, the thing is that I want to move to the message-passing paradigm that the library promotes.
I found the following function: zmq_socket_monitor
What it does is monitor the socket passed to it and generate events that are then passed to an "inproc" endpoint -- at that point you can add handling code to actually do something.
There is also an example (actually test code) here: github
I have not got any specific code to give at the moment (maybe at the end of the week), but my intention is to respond to the connect and disconnect events so that I can perform any resetting of logic required.
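In the meantime, here is a rough sketch of the pattern (the event frame layout follows the 4.x docs as I read them; verify against your version):

#include <zmq.h>
#include <cstdint>
#include <cstdio>
#include <cstring>

// Watch a socket for connect/disconnect events. Per the 4.x docs each
// event arrives on a PAIR socket as two frames: frame 1 holds a 16-bit
// event id followed by a 32-bit value, frame 2 holds the endpoint address.
void watch_events(void* ctx, void* monitored_socket) {
    zmq_socket_monitor(monitored_socket, "inproc://monitor",
                       ZMQ_EVENT_CONNECTED | ZMQ_EVENT_DISCONNECTED);

    void* mon = zmq_socket(ctx, ZMQ_PAIR);
    zmq_connect(mon, "inproc://monitor");

    while (true) {
        zmq_msg_t frame;
        zmq_msg_init(&frame);
        if (zmq_msg_recv(&frame, mon, 0) == -1) break;

        uint16_t event;
        memcpy(&event, zmq_msg_data(&frame), sizeof(event));
        zmq_msg_close(&frame);

        zmq_msg_init(&frame);        // second frame: endpoint address
        zmq_msg_recv(&frame, mon, 0);
        zmq_msg_close(&frame);

        if (event == ZMQ_EVENT_DISCONNECTED) {
            printf("peer disconnected - reset any REQ/REP logic here\n");
        }
    }
    zmq_close(mon);
}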
Hope this helps, and despite quoting the 4.2 docs, I am using 4.0.4, which seems to have the functionality as well.
Note: I notice you talk about Python above, but the question is tagged C++, so that's where my answer is coming from...
Update: I'm updating this answer with this excellent resource: https://blog.cloudflare.com/when-tcp-sockets-refuse-to-die/ Socket programming is complicated, so do check out the references in this post.
None of the answers here seem accurate or useful. The OP is not looking for information on BSD socket programming; he is trying to figure out how to robustly handle accept()ed client-socket failures in ZMQ on the REP socket, to prevent the server from hanging or crashing.
As already noted, this problem is complicated by the fact that ZMQ tries to pretend that the server's listen()ing socket is the same as an accept()ed socket (and there is nowhere in the documentation that describes how to set basic timeouts on such sockets).
My answer:
After doing a lot of digging through the code, the only relevant socket options passed along to accept()ed sockets seem to be the keep-alive options from the parent listen()er. So the solution is to set the following options on the listening socket before calling send or recv:
void zmq_setup(zmq::context_t** context, zmq::socket_t** socket, const char* endpoint)
{
    // Free old references; delete invokes close() / zmq_ctx_destroy
    // through the destructors.
    if (*socket != NULL)
    {
        delete *socket;
    }
    if (*context != NULL)
    {
        // Shutdown all previous server client-sockets.
        delete *context;
    }
    *context = new zmq::context_t(1);
    *socket = new zmq::socket_t(**context, ZMQ_REP);

    // Enable TCP keep alive.
    int is_tcp_keep_alive = 1;
    (**socket).setsockopt(ZMQ_TCP_KEEPALIVE, &is_tcp_keep_alive, sizeof(is_tcp_keep_alive));

    // Only send 2 probes to check if the client is still alive.
    int tcp_probe_no = 2;
    (**socket).setsockopt(ZMQ_TCP_KEEPALIVE_CNT, &tcp_probe_no, sizeof(tcp_probe_no));

    // How long a connection needs to be "idle" for, in seconds.
    int tcp_idle_timeout = 1;
    (**socket).setsockopt(ZMQ_TCP_KEEPALIVE_IDLE, &tcp_idle_timeout, sizeof(tcp_idle_timeout));

    // Time in seconds between individual keep alive probes.
    int tcp_probe_interval = 1;
    (**socket).setsockopt(ZMQ_TCP_KEEPALIVE_INTVL, &tcp_probe_interval, sizeof(tcp_probe_interval));

    // Discard pending messages in the buffer on close.
    int is_linger = 0;
    (**socket).setsockopt(ZMQ_LINGER, &is_linger, sizeof(is_linger));

    // TCP user timeout on unacknowledged send buffer.
    int is_user_timeout = 2;
    (**socket).setsockopt(ZMQ_TCP_MAXRT, &is_user_timeout, sizeof(is_user_timeout));

    // Start internal enclave event server.
    printf("Host: Starting enclave event server\n");
    (**socket).bind(endpoint);
}
What this does is tell the operating system to aggressively check the client socket for timeouts and reap it for cleanup when the client doesn't return a heartbeat in time. The result is that the OS will deliver a SIGPIPE to your program, and socket errors will bubble up to send/recv, fixing a hung server. You then need to do two more things:
1. Handle SIGPIPE errors so the program doesn't crash
#include <signal.h>
#include <zmq.hpp>

// zmq_setup def here [...]

int main(int argc, char** argv)
{
    // Ignore SIGPIPE signals.
    signal(SIGPIPE, SIG_IGN);
    // ... rest of your code after
    // (Could potentially also restart the server
    // sock on N SIGPIPEs if you're paranoid.)

    // Start server socket.
    const char* endpoint = "tcp://127.0.0.1:47357";
    zmq::context_t* context = NULL;  // must start NULL: zmq_setup tests them
    zmq::socket_t* socket = NULL;
    zmq_setup(&context, &socket, endpoint);

    // Message buffers.
    zmq::message_t request;
    zmq::message_t reply;

    // ... rest of your socket code here
}
2. Check for -1 returned by send or recv and catch ZMQ errors.
// E.g. skip broken accepted sockets (pseudo-code.)
while (1)
{
    try
    {
        if ((*socket).recv(&request) == -1)
            throw -1;
    }
    catch (...)
    {
        // Prevent any endless error loops killing CPU.
        sleep(1);
        // Reset ZMQ state machine.
        try
        {
            zmq::message_t blank_reply = zmq::message_t();
            (*socket).send(blank_reply);
        }
        catch (...)
        {
        }
        continue;
    }
    // ... handle the request and send a real reply here ...
}
Notice the weird code that tries to send a reply on a socket failure? In ZMQ, a REP server "socket" is an endpoint to another program making a REQ socket to that server. The result is that if you do a recv on a REP socket with a hung client, the server socket becomes stuck in a broken receive loop where it will wait forever to receive a valid request.
To force an update of the state machine, you try to send a reply. ZMQ detects that the socket is broken and removes it from its queue. The server socket becomes "unstuck", and the next recv call returns a new client from the queue.
To enable timeouts on an async client (in Python 3), the code would look something like this:
import asyncio
import zmq
import zmq.asyncio

ctx = zmq.asyncio.Context()

@asyncio.coroutine
def req(endpoint):
    ms = 2000  # In milliseconds.
    sock = ctx.socket(zmq.REQ)
    sock.setsockopt(zmq.SNDTIMEO, ms)
    sock.setsockopt(zmq.RCVTIMEO, ms)
    sock.setsockopt(zmq.LINGER, ms)  # Discard pending buffered socket messages on close().
    sock.setsockopt(zmq.CONNECT_TIMEOUT, ms)

    # Connect the socket.
    # Connections don't strictly happen here.
    # ZMQ waits until the socket is used (which is confusing, I know.)
    sock.connect(endpoint)

    # Send some bytes.
    yield from sock.send(b"some bytes")

    # Recv bytes and convert to unicode.
    msg = yield from sock.recv()
    msg = msg.decode(u"utf-8")
Now you have some failure scenarios when something goes wrong.
By the way -- if anyone's curious -- the default value for TCP idle timeout in Linux seems to be 7200 seconds or 2 hours. So you would be waiting a long time for a hung server to do anything!
Sources:
https://github.com/zeromq/libzmq/blob/84dc40dd90fdc59b91cb011a14c1abb79b01b726/src/tcp_listener.cpp#L82 TCP keep alive options preserved for client sock
http://www.tldp.org/HOWTO/html_single/TCP-Keepalive-HOWTO/ How does keep alive work
https://github.com/zeromq/libzmq/blob/master/builds/zos/README.md Handling sig pipe errors
https://github.com/zeromq/libzmq/issues/2586 for information on closing sockets
https://blog.cloudflare.com/when-tcp-sockets-refuse-to-die/
https://github.com/zeromq/libzmq/issues/976
Disclaimer:
I've tested this code and it seems to be working, but ZMQ complicates testing this a fair bit because the client re-connects on failure. If anyone wants to use this solution in production, I recommend writing some basic unit tests first.
The server code could also be improved a lot with threading or polling to be able to handle multiple clients at once. As it stands, a malicious client can temporarily take up resources from the server (the 3-second timeout), which isn't ideal.