select() for UDP connection - C++

Can someone explain to me why we use the select() function before recvfrom() (on the server side) instead of before sendto() (on the client side) when waiting for a timeout? It seems to me that the timeout should be on the sender's side.
//EX
CLIENT                                  SERVER
------                                  ------
select()    /* start timeout    */
sendto()    /* --send packet--> */      recvfrom()
recvfrom()  /* <--send ACK--    */      sendto()
And as long as the ACK has been received before the timeout is reached, the sender could send another packet.

You do not normally use select with UDP at all, unless you want one of the following:
receive from several ports (or one port and a Unix socket, etc.) with a single thread (see the sketch after this list)
detect other events as soon as they happen, without waiting for an unrelated recvfrom or sendto to unblock
sleep in a maximally portable way
use the Linux-specific recvmmsg (but then, you really want to use epoll_wait) to receive a whole bunch of datagrams with one syscall
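For the first case, a minimal sketch (hypothetical descriptors sock_a and sock_b: two UDP sockets already bound to different ports):

fd_set readfds;
FD_ZERO(&readfds);
FD_SET(sock_a, &readfds);
FD_SET(sock_b, &readfds);
int maxfd = (sock_a > sock_b) ? sock_a : sock_b;

// Block until at least one of the two sockets has a datagram queued
if (select(maxfd + 1, &readfds, NULL, NULL, NULL) > 0) {
    if (FD_ISSET(sock_a, &readfds)) {
        // recvfrom(sock_a, ...) will not block now
    }
    if (FD_ISSET(sock_b, &readfds)) {
        // recvfrom(sock_b, ...) will not block now
    }
}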
select is regularly used with TCP, as it is able to multiplex between several sockets, one for every connected client. This is not necessary with UDP, since one socket is sufficient to receive packets from each and every client (assuming they use the same port).
select blocks until the condition you wait for (e.g. ready to receive or ready to send) is true. recvfrom blocks anyway if there is nothing to receive, so if this is the only thing you're interested in, calling select is useless.

UDP doesn't have built-in acknowledgements. So sendto() simply sends the packet to the network and returns immediately, it doesn't have any built-in way of waiting for a response or acknowledgement. Your application knows that it expects the server to send a response, so it waits for a response with recvfrom().
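Putting both answers together, a minimal client-side sketch of the timeout pattern from the question (POSIX, hypothetical names; the socket is assumed to be connect()ed to the server):

#include <sys/select.h>
#include <sys/socket.h>

// Send one packet, then wait up to one second for the ACK.
// Returns false on timeout, so the caller can retransmit.
bool send_and_wait_ack(int sock, const void *pkt, size_t len)
{
    send(sock, pkt, len, 0);                 // --send packet-->

    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(sock, &readfds);
    struct timeval tv = {1, 0};              // 1-second timeout, an arbitrary choice

    // select() returns 0 on timeout, >0 when the ACK is readable
    if (select(sock + 1, &readfds, NULL, NULL, &tv) <= 0)
        return false;                        // timed out (or error): retransmit

    char ack[16];
    return recv(sock, ack, sizeof ack, 0) > 0;   // <--receive ACK--
}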

Related

Raw ICMP Winsock, asynchronous I/O

I'm writing a program that will let me ping a lot of different IPs simultaneously (around 50-70).
I also need to delay the sending of each packet by, say, 1 ms, for multiple reasons, notably to avoid having my packets dropped by routers that drop ICMP packets when too many are sent at once (which mine does, on the sending machine, not the receiving one).
So I did it in a separate thread, a bit like this:
// Send thread
for (;;) {
    [...]
    for (int i = 0; i < ip_count; i++)
    {
        // Send() calls sendto()
        Send(m_socket, ip_array[i]);
        [...]
        Sleep(1); // Delay by 1 ms
    }
    [...]
    lock_until_new_send_operation();
}
And in another thread I would like to receive the packets with select(), like this:
// Receive thread
FD_ZERO(&m_fdset_read);
FD_SET(m_socket, &m_fdset_read);
int rds_count = select(0, &m_fdset_read, 0, 0, &tvtimeout);
if (rds_count > 0) {
    ProcessReadySocket(); // Calls recv() and stuff
} else {
    // Timed out
    m_bSendGrapeDone = true;
}
The problem with this approach is that since both select() and sendto() use the same non-blocking socket m_socket, calls to sendto() would later block: select() makes sendto() block when both are called simultaneously (for some strange reason... I don't see the logic there, since the socket is non-blocking, but it still does; it's ugly).
So I decided to use one socket exclusively for sending, and replaced the line
Send(m_socket, ip_array[i]);
with
Send(m_sendSock, ip_array[i]); // m_sendSock is dedicated to sending only
I read on MSDN that for raw sockets, each socket receives all packets for the protocol the socket is set to (mine is IPPROTO_ICMP, of course). Here I quote:
There are further limitations for applications that use a socket of
type SOCK_RAW. For example, all applications listening for a specific
protocol will receive all packets received for this protocol
So I thought that even though my packets are sent with m_sendSock, I could still receive them using select()/recv() on m_socket. It turns out I can't: select() never reports the socket as readable. So I'm kind of stuck; I cannot use select() and send() at the same time. Is there something I'm doing wrong?
(by the way, I want to use raw sockets, not the built-in Windows ICMP functions)
TL;DR How can I send() and select() simultaneously? Because on the send thread, send() blocks as soon as select() gets called on the receive thread, even though I used FIONBIO on it (non-blocking). If I use two different sockets, one for sending and one dedicated to receiving, I receive nothing on the receiving socket...
Thanks!

How to check if a UDP packet is sent at transmitter

I am trying to send a packet using UDP. I know that if the channel is not free, the packet will not be sent. I am using Qt's udpSocket->writeDatagram to send a UDP packet. I am doing it in a loop, and I want to make sure I do not send another packet before the previous packet has been sent. Is there a flag, or any other way that I can check and make sure the packet is sent?
UDP is an unreliable protocol by design. It does not guarantee that packets don't get lost, and when they get lost, the sender is not informed about that. So you can never know if a UDP packet was received successfully by the other side.
Unless, of course, your application level protocol sends a certain response. But the response can get lost just as easily, so no response is no definite proof that the packet wasn't received.
The docs say:
Sends the datagram at data of size size to the host address address at
port port. Returns the number of bytes sent on success; otherwise
returns -1.
So if it returns something other than -1 you can consider it "sent". However, if what you really want to know is whether it made it to the other side, you'll want to hear from the peer.
Typically, in order to send UDP packets as fast as possible (but no faster), you'll want to put the socket into non-blocking mode, then send packets in a loop until send()/sendto() returns -1/EAGAIN (aka EWOULDBLOCK). When that result code is returned, break out of your send-loop and wait for the socket to become writable again. Under Qt, you can set up a QSocketNotifier object (once, right after you set up your socket) and it will emit an activated(int) signal whenever your socket has space for more outgoing data. When you receive that signal, you can send some more packets.
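For illustration, a minimal sketch of that send loop in plain POSIX terms (hypothetical names; the descriptor sock is assumed to already be in non-blocking mode):

#include <cerrno>
#include <sys/socket.h>

// Push datagrams until the kernel's send buffer fills up.
// Returns false when the caller should wait for writability and resume.
bool send_until_full(int sock, const void *pkt, size_t len, int count)
{
    for (int i = 0; i < count; ++i) {
        if (send(sock, pkt, len, 0) < 0) {
            if (errno == EAGAIN || errno == EWOULDBLOCK)
                return false;   // buffer full: wait for the socket to become writable
            return false;       // some other error: inspect errno and handle it
        }
    }
    return true;                // all datagrams handed to the kernel
}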

Thread listening to UDP problems

My program receives some UDP messages, each of them sent with a mouse click by the client. The program has the main thread (the GUI) only to set some parameters, and a second thread, created with
CreateThread(NULL, 0, MyFunc, &Data, 0, &ThreadTd);
that is listening to UDP packets.
This is MyFunc:
...
sd = socket(AF_INET, SOCK_DGRAM, 0);
if (bind(sd, (struct sockaddr *)&server, sizeof(struct sockaddr_in)) == -1)
    ....
while (true) {
    bytes_received = recvfrom(sd, buffer, BUFFER_SIZE, 0,
                              (struct sockaddr *)&client, &client_length);
    // parsing of the buffer
}
To prove that there is no packet loss: when I use a simple script that listens for the UDP messages sent by my client on a certain port, all of the packets sent are received by my computer.
When I run my application, as soon as the client does the first mouse click, the UDP message is received, but if I try to send other messages (other mouse clicks), the server doesn't receive them (as if it doesn't catch them), and on the client side the user has to click at least 2 times before the server catches the message.
The main thread isn't busy all the time, the second thread only parses the incoming message and updates some variables, and I haven't assigned any priority to the threads.
Any suggestions?
In addition to Mark's suggestion, you could also use Wireshark/netcat to see when/where the datagrams are sent.
This may be a problem related to socket programming. I would suggest incorporating a call to select() or epoll() with the call to recvfrom(), as sketched below. This is a more standard approach to socket programming. This way, the UDP server could receive messages from multiple clients, and it wouldn't block indefinitely.
Also, you should isolate whether the problem is that the client doesn't always send a packet for every click, or that somehow the server doesn't always receive them. Wireshark could help you see when packets are sent.
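A minimal sketch of that select()-before-recvfrom() pattern, reusing the sd/buffer/client variables from the question:

for (;;) {
    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(sd, &readfds);
    struct timeval tv = {1, 0};   // wake up at least once per second

    int n = select(sd + 1, &readfds, NULL, NULL, &tv);
    if (n > 0 && FD_ISSET(sd, &readfds)) {
        client_length = sizeof(client);   // reset: recvfrom treats it as value-result
        bytes_received = recvfrom(sd, buffer, BUFFER_SIZE, 0,
                                  (struct sockaddr *)&client, &client_length);
        // parsing of the buffer
    }
    // n == 0 means the timeout expired; n < 0 means select() failed
}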
Not enough info to know why there's packet loss. Is it possible there's a delay in the receive thread before reaching the first recvfrom? Debug tracing might point the way. I assume also that the struct sockaddr server was filled in with something sane before calling bind()? You're not showing that part...
If I understood your question correctly, your threaded server app does not receive all the packets when they are sent in quick bursts. One thing you can try is to increase the socket receive buffer on the server side, so that more data can be queued when your application is not reading it fast enough. See setsockopt, and use the SO_RCVBUF option.
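A minimal sketch of that tweak (the 1 MiB size is an arbitrary example; sd is the socket from the question):

int rcvbuf = 1 << 20;   // ask for a 1 MiB receive buffer (an arbitrary example)
if (setsockopt(sd, SOL_SOCKET, SO_RCVBUF,
               (const char *)&rcvbuf, sizeof(rcvbuf)) == -1) {
    // handle the error; note the OS may also silently cap the requested size
}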

Non-blocking TCP socket and flushing right after send?

I am using Windows sockets for my application (winsock2.h). Since the blocking socket doesn't let me control the connection timeout, I am using a non-blocking one. Right after the send command I am using the shutdown command to flush (I have to). My timeout is 50 ms, and the thing I want to know is: if the data to be sent is very big, is there a risk of sending only a portion of the data, or sending nothing at all? Thanks in advance...
hSocket = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
u_long iMode=1;
ioctlsocket(hSocket,FIONBIO,&iMode);
connect(hSocket, (sockaddr*)(&sockAddr),sockAddrSize);
send(hSocket, sendbuf, sendlen, 0);
shutdown(hSocket, SD_BOTH);
Sleep(50);
closesocket(hSocket);
Non-blocking TCP socket and flushing right after send?
There is no such thing as flushing a TCP socket.
Since the blocking socket doesn't let me control the connection timeout
False. You can use select() on a blocking socket.
I am using a non-blocking one.
Non sequitur.
Right after the send command I am using the shutdown command to flush (I have to).
You don't have to, and shutdown() doesn't flush anything.
My timeout is 50 ms
Why? The time to send data depends on the size of the data. Obviously. It does not make any sense whatsoever to use a fixed timeout for a send.
and the thing I want to know is: if the data to be sent is very big, is there a risk of sending only a portion of the data, or sending nothing at all?
In blocking mode, all the data you provided to send() will be sent if possible. In non-blocking mode, the amount of data represented by the return value of send() will be sent, if possible. In either case the connection will be reset if the send fails. Whatever timeout mechanism you superimpose can't possibly change any of that: specifically, closing the socket asynchronously after a timeout will only cause the close to be appended to the data being sent. It will not cause the send to be aborted.
Your code wouldn't pass any code review known to man. There is zero error checking; the sleep is completely pointless; and shutdown before close is redundant. If the sleep is intended to implement a timeout, it doesn't.
I want to be sending data as fast as possible.
You can't. TCP implements flow control. There is exactly nothing you can do about that. You are rate-limited by the receiver.
Also the 2 possible cases are: server waits too long to accept connection
There is no such case. The client can complete a connection before the server ever calls accept(). If you're trying to implement a connect timeout shorter than the default of about a minute, use select().
or receive.
Nothing you can do about that: see above.
So both connecting and writing should be done in at most 50 ms, since time is very important in my situation.
See above. It doesn't make sense to implement a fixed timeout for operations that take variable time. And 50ms is far too short for a connect timeout. If that's a real issue you should keep the connection open so that the connect delay only happens once: in fact you should keep TCP connections open as long as possible anyway.
I have to flush both write and read streams
You can't. There is no operation in TCP that will flush either a read stream or a write stream.
because the server keeps sending me unnecessarily big data and I have a limited internet connection.
Another non sequitur. If the server sends you data, you have to read it, otherwise you will stall the server, and that doesn't have anything to do with flushing your own write stream.
Actually I don't even want a single byte from the server
Bad luck. You have to read it. [If you were on BSD Unix you could shutdown the socket for input, which would cause data from the server to be thrown away, but that doesn't work on Windows: it causes the server to get a connection reset.]
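For reference, a minimal sketch (Winsock, hypothetical names) of the connect-timeout technique recommended above: a non-blocking connect() followed by select() waiting for writability.

#include <winsock2.h>

// Returns true if the connection completed within `ms` milliseconds.
bool connect_with_timeout(SOCKET s, const sockaddr *addr, int addrlen, long ms)
{
    u_long nonblocking = 1;
    ioctlsocket(s, FIONBIO, &nonblocking);

    if (connect(s, addr, addrlen) == SOCKET_ERROR &&
        WSAGetLastError() != WSAEWOULDBLOCK)
        return false;                       // failed immediately

    fd_set writefds, exceptfds;
    FD_ZERO(&writefds);
    FD_ZERO(&exceptfds);
    FD_SET(s, &writefds);
    FD_SET(s, &exceptfds);
    timeval tv = { ms / 1000, (ms % 1000) * 1000 };

    // On Winsock, success is reported as writable, failure in the except set
    if (select(0, &writefds, NULL, &exceptfds, &tv) <= 0)
        return false;                       // timed out or select() error
    return FD_ISSET(s, &writefds) != 0;
}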
Thanks to EJP and Martin, I have now created a second thread to check. Also, in the code I posted in my question, I added a "counter = 0;" line after the "send" line and removed the shutdown. It works just as I wanted now. It never waits more than 50 ms :) Really big thanks
unsigned __stdcall SecondThreadFunc(void* pArguments)
{
    while (1)
    {
        counter++;
        if (counter > 49)
        {
            closesocket(hSocket);
            counter = 0;
            printf("\rtimeout");
        }
        Sleep(1);
    }
    return 0;
}

When will a connected UDP socket be closed by the OS?

I have a UDP file descriptor in a C++ program running under Linux. I call connect() on it to connect it to a remote address and then read and write from that socket.
According to UNIX Network Programming, "Asynchronous errors are returned to the process for connected UDP sockets." I'm guessing that these asynchronous errors will cause the UDP socket to be closed by the OS, but the book isn't that clear. It also isn't clear what types of asynchronous errors are possible, though it's suggested that if the port on the remote machine is not open, the socket will be closed.
So my question is: Under what conditions will Linux close the UDP file descriptor?
Bad port number?
Bad IP address?
Any others?
connect() on a UDP socket just records the port number and IP address you pass in, so it'll only accept packets from that IP/port, and you can use the socket fd to send/write data without specifying the remote address on each send/write call.
Regarding this, "async errors" means that if you send() something and that send call results in an error occurring later (e.g. when the TCP/IP stack actually sends the packet, or an ICMP packet is later returned), a subsequent send will return that error. Such async errors are only returned on a "connected" UDP socket. (The Linux udp(7) manpage suggests errors are returned whether the socket is connected or not, but testing shows this is not the case, at least when a sent UDP packet generates an ICMP error. It might be that send() errors are returned if you recv() on that socket, rather than by subsequent send() calls producing the error.)
The socket is not closed though; you'll have to close it yourself, either by calling close() or by exiting the program. E.g. if you connect() your UDP socket and send to a port no one is listening on, an ICMP packet is normally returned, and a subsequent send() call will fail with errno set to ECONNREFUSED. You can continue sending on that socket though; it doesn't get closed by the OS, and if someone starts listening on the port in the meantime, the packets will get through.
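To illustrate, a minimal Linux sketch (the port number is an arbitrary assumption; nothing should be listening on it):

#include <arpa/inet.h>
#include <cstdio>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main()
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    sockaddr_in dst = {};
    dst.sin_family = AF_INET;
    dst.sin_port = htons(9999);                  // assumed to have no listener
    inet_pton(AF_INET, "127.0.0.1", &dst.sin_addr);
    connect(fd, (sockaddr *)&dst, sizeof dst);   // just records the peer address

    send(fd, "ping", 4, 0);      // usually succeeds; the ICMP error arrives later
    usleep(100 * 1000);          // give the ICMP port-unreachable time to come back
    if (send(fd, "ping", 4, 0) < 0)
        perror("send");          // typically ECONNREFUSED on Linux

    // The descriptor is still open and usable; the OS never closes it for us.
    close(fd);
    return 0;
}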
UDP sockets are connectionless, so there is no real sense of "openness" state attached to them - this is unlike TCP sockets where a socket may be in any number of connection states as determined by the exchange of packets up to a given point.
The only sense in which UDP sockets can be opened and closed is in the sense that they are system level objects with some internal state and a file descriptor. Sockets are never automatically closed in the event of an error and will remain open indefinitely, unless their owning process terminates or calls close on them.
To address your other concern, if the destination port on the destination host is not open, the sender of a UDP packet will never know.** UDP provides no means of receiver acknowledgement. The packet is routed and, if it arrives at the host, checked for correctness and either successfully received or discarded. There are a number of reasons why send might return an error code when writing to a UDP socket, but none of them have to do with the state of the receiving host.** I recommend consulting the sendto manpage for possible failure modes.
On the other hand, in the case of a TCP socket attempting to connect to an unopened port, the sender will never receive an acknowledgement of its initial connection request, and ultimately connect will fail. At this point it would be up to the sender to stop sending data over the socket (as this will only generate more errors), but even in this case, the socket file descriptor is never automatically closed.
** See the response by @Zuljin in the comments.
The OS won't close your socket just because an error has happened. If the other end disappears, you can continue to send messages to it (but may receive further errors).