My program receives UDP messages, each sent by a mouse click on the client. The program has a main thread (the GUI) that only sets some parameters, and a second thread, created with
CreateThread(NULL, 0, MyFunc, &Data, 0, &ThreadId);
that listens for UDP packets.
This is MyFunc:
...
sd = socket(AF_INET, SOCK_DGRAM, 0);
if (bind(sd, (struct sockaddr *)&server, sizeof(struct sockaddr_in)) == -1)
    ....
while (true) {
    bytes_received = recvfrom(sd, buffer, BUFFER_SIZE, 0,
                              (struct sockaddr *)&client, &client_length);
    // parsing of the buffer
}
To prove that there is no packet loss, I used a simple script that listens for UDP messages sent by my client on a given port: all the packets sent were received by my computer.
When I run my application, the UDP message from the client's first mouse click is received, but if the client sends further messages (more mouse clicks), the server doesn't receive them (as if it doesn't catch them), and on the client side the user has to click at least twice before the server catches a message.
The main thread isn't busy all the time, the second thread only parses the incoming message and updates some variables, and I haven't assigned any priority to the threads.
Any suggestions?
In addition to Mark's suggestion, you could also use Wireshark or netcat to see when and where the datagrams are sent.
This may be a socket-programming problem. I would suggest combining the call to recvfrom() with a call to select() (or epoll()). This is a more standard approach to socket programming: the UDP server can receive messages from multiple clients, and it won't block indefinitely on a single read.
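For illustration, here is a minimal sketch of the receive loop using select() with a timeout, reusing the names (sd, buffer, client, client_length) from the question; error handling is abbreviated:
fd_set readfds;
struct timeval tv;
while (true) {
    FD_ZERO(&readfds);
    FD_SET(sd, &readfds);
    tv.tv_sec = 1;      // wake up at least once per second
    tv.tv_usec = 0;
    int ready = select(sd + 1, &readfds, NULL, NULL, &tv);
    if (ready == -1)
        break;          // socket error
    if (ready > 0 && FD_ISSET(sd, &readfds)) {
        client_length = sizeof(client);
        bytes_received = recvfrom(sd, buffer, BUFFER_SIZE, 0,
                                  (struct sockaddr *)&client, &client_length);
        // parsing of the buffer
    }
    // ready == 0: the timeout expired with no data; loop again
}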
Also, you should isolate whether the problem is that the client doesn't always send a packet for every click, or that somehow the server doesn't always receive them. Wireshark can help you see when packets are sent.
Not enough info to know why there's packet loss. Is it possible there's a delay in the receive thread before reaching the first recvfrom? Debug tracing might point the way. I assume also that the struct sockaddr server was filled in with something sane before calling bind()? You're not showing that part...
If I understood your question correctly, your threaded server app does not receive all the packets when they are sent in quick bursts. One thing you can try is to increase the socket receive buffer on the server side, so more data can be queued while your application isn't reading it fast enough. See setsockopt and its SO_RCVBUF option.
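For example (a sketch only; the kernel may round or cap the requested size, so verify the result with getsockopt):
int rcvbuf = 1024 * 1024;   // ask for a 1 MiB receive buffer
if (setsockopt(sd, SOL_SOCKET, SO_RCVBUF,
               (const char *)&rcvbuf, sizeof(rcvbuf)) == -1) {
    // handle error
}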
Related
Can someone explain to me why we use the select() function before recvfrom() (on the server side) instead of before sendto() (on the client side) when waiting for a timeout? It seems to me that the timeout should be on the sender's side.
// Example:
CLIENT                                  SERVER
------                                  ------
select()    /* start timeout */
sendto()    /* --send packet--> */      recvfrom()
recvfrom()  /* <--send ACK--    */      sendto()
As long as the ACK is received before the timeout expires, the sender can send another file.
You do not normally use select with UDP at all, unless you want one of the following:
receive from several ports (or one port and a Unix socket, etc.) with a single thread
detect other events as soon as they happen, without waiting for an unrelated recvfrom or sendto to unblock
sleep in a maximally portable way
use the Linux-specific recvmmsg (but then, you really want epoll_wait) to receive a whole batch of datagrams with one syscall
select is regularly used with TCP, since it can multiplex between several sockets, one for every connected client. This is not necessary with UDP, since one socket is sufficient to receive packets from each and every client (assuming they use the same port).
select blocks until the condition you wait for (e.g. ready to receive or ready to send) is true. recvfrom blocks anyway if there is nothing to receive, so if that is the only thing you're interested in, calling select is useless.
UDP doesn't have built-in acknowledgements. So sendto() simply sends the packet to the network and returns immediately, it doesn't have any built-in way of waiting for a response or acknowledgement. Your application knows that it expects the server to send a response, so it waits for a response with recvfrom().
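To make the timing concrete, here is a rough sketch of the client's side of the exchange above; sock, pkt, srv, and ack are placeholder names, and error handling is omitted:
sendto(sock, pkt, pkt_len, 0, (struct sockaddr *)&srv, sizeof(srv));  // send packet
fd_set readfds;
FD_ZERO(&readfds);
FD_SET(sock, &readfds);
struct timeval tv = { 2, 0 };   // wait up to 2 seconds for the ACK
if (select(sock + 1, &readfds, NULL, NULL, &tv) > 0) {
    recvfrom(sock, ack, sizeof(ack), 0, NULL, NULL);  // ACK arrived in time
    // proceed with the next packet
} else {
    // timeout (or error): retransmit pkt
}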
I am trying to send a packet using UDP. I know that if the channel is not free the packet will not be sent. I am using Qt's udpSocket->writeDatagram to send a UDP packet, in a loop, and I want to make sure I do not send another packet before the previous one has been sent. Is there a flag, or any other way, to check and make sure the packet was sent?
UDP is an unreliable protocol by design. It does not guarantee that packets don't get lost, and when they get lost, the sender is not informed about that. So you can never know if a UDP packet was received successfully by the other side.
Unless, of course, your application-level protocol sends a certain response. But the response can get lost just as easily, so the absence of a response is no definite proof that the packet wasn't received.
The docs say:
Sends the datagram at data of size size to the host address address at port port. Returns the number of bytes sent on success; otherwise returns -1.
So if it returns something other than -1 you can consider it "sent". However, if what you really want to know is whether it made it to the other side, you'll want to hear from the peer.
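As a sketch (the address, the port, and the QByteArray named datagram are made-up examples):
qint64 sent = udpSocket->writeDatagram(datagram,
                                       QHostAddress("192.168.0.10"), 45454);
if (sent == -1) {
    // local failure: the datagram never left this machine
} else {
    // handed to the network stack; this says nothing about arrival at the peer
}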
Typically, in order to send UDP packets as fast as possible (but no faster), you'll want to put the socket into non-blocking mode, then send packets in a loop until send()/sendto() returns -1/EAGAIN (aka EWOULDBLOCK). When that result code is returned, break out of your send loop and wait for the socket to become writable again. Under Qt, you can set up a QSocketNotifier object (once, right after you set up your socket) and it will emit an activated(int) signal whenever your socket has space for more outgoing data. When you receive that signal, you can send some more packets.
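A sketch of that send loop with plain BSD sockets (Qt hides this behind QUdpSocket and QSocketNotifier); sock, pkt, and dest are placeholders, and error handling is trimmed:
// assumes 'sock' was put into non-blocking mode, e.g. with fcntl(O_NONBLOCK)
for (;;) {
    ssize_t n = sendto(sock, pkt, pkt_len, 0,
                       (struct sockaddr *)&dest, sizeof(dest));
    if (n == -1) {
        if (errno == EAGAIN || errno == EWOULDBLOCK)
            break;   // kernel send buffer full: stop and wait for writability
        break;       // real error: handle it
    }
    // advance to the next packet...
}
// now wait until the socket is writable again (select()/poll(),
// or QSocketNotifier's activated(int) signal under Qt), then resume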
I am using Windows sockets (winsock2.h) for my application. Since a blocking socket doesn't let me control the connection timeout, I am using a non-blocking one. Right after the send call I am calling shutdown to flush (I have to). My timeout is 50 ms, and what I want to know is: if the data to be sent is very large, is there a risk of sending only a portion of it, or nothing at all? Thanks in advance...
hSocket = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
u_long iMode = 1;
ioctlsocket(hSocket, FIONBIO, &iMode);
connect(hSocket, (sockaddr *)&sockAddr, sockAddrSize);
send(hSocket, sendbuf, sendlen, 0);
shutdown(hSocket, SD_BOTH);
Sleep(50);
closesocket(hSocket);
Non-blocking TCP socket and flushing right after send?
There is no such thing as flushing a TCP socket.
Since a blocking socket doesn't let me control the connection timeout
False. You can use select() on a blocking socket.
I am using a non-blocking one.
Non sequitur.
Right after the send call I am calling shutdown to flush (I have to).
You don't have to, and shutdown() doesn't flush anything.
My timeout is 50 ms
Why? The time to send data depends on the size of the data. Obviously. It does not make any sense whatsoever to use a fixed timeout for a send.
and what I want to know is: if the data to be sent is very large, is there a risk of sending only a portion of it, or nothing at all?
In blocking mode, all the data you provided to send() will be sent if possible. In non-blocking mode, the amount of data represented by the return value of send() will be sent, if possible. In either case the connection will be reset if the send fails. Whatever timeout mechanism you superimpose can't possibly change any of that: specifically, closing the socket asynchronously after a timeout will only cause the close to be appended to the data being sent. It will not cause the send to be aborted.
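In other words, with a non-blocking socket you must be prepared for partial sends. A sketch (Winsock, error handling abbreviated):
int total = 0;
while (total < sendlen) {
    int n = send(hSocket, sendbuf + total, sendlen - total, 0);
    if (n > 0) {
        total += n;    // part of the buffer was accepted by the kernel
    } else if (WSAGetLastError() == WSAEWOULDBLOCK) {
        fd_set writefds;
        FD_ZERO(&writefds);
        FD_SET(hSocket, &writefds);
        select(0, NULL, &writefds, NULL, NULL);  // block until writable, then retry
    } else {
        break;         // connection error or reset
    }
}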
As for the code you posted: it wouldn't pass any code review known to man. There is zero error checking; the sleep is completely pointless; and shutdown before close is redundant. If the sleep is intended to implement a timeout, it doesn't.
I want to be sending data as fast as possible.
You can't. TCP implements flow control. There is exactly nothing you can do about that. You are rate-limited by the receiver.
Also, the 2 possible cases are: the server waits too long to accept the connection
There is no such case. The client can complete a connection before the server ever calls accept(). If you're trying to implement a connect timeout shorter than the default of about a minute, use select().
or receive.
Nothing you can do about that: see above.
So both connecting and writing should be done in at most 50 ms, since time is very important in my situation.
See above. It doesn't make sense to implement a fixed timeout for operations that take variable time. And 50ms is far too short for a connect timeout. If that's a real issue you should keep the connection open so that the connect delay only happens once: in fact you should keep TCP connections open as long as possible anyway.
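For completeness, here is what a select()-based connect timeout can look like, sketched with the names from the question's code (note that Winsock reports a failed non-blocking connect on the exception set):
u_long iMode = 1;
ioctlsocket(hSocket, FIONBIO, &iMode);                  // non-blocking mode
connect(hSocket, (sockaddr *)&sockAddr, sockAddrSize);  // expect WSAEWOULDBLOCK
fd_set writefds, exceptfds;
FD_ZERO(&writefds);
FD_ZERO(&exceptfds);
FD_SET(hSocket, &writefds);
FD_SET(hSocket, &exceptfds);
timeval tv = { 5, 0 };                                  // 5-second connect timeout
int r = select(0, NULL, &writefds, &exceptfds, &tv);
if (r > 0 && FD_ISSET(hSocket, &writefds)) {
    // connected: proceed to send
} else {
    // r == 0: timed out; exceptfds set: connect failed
    closesocket(hSocket);
}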
I have to flush both write and read streams
You can't. There is no operation in TCP that will flush either a read stream or a write stream.
because the server keeps sending me unnecessarily big data and I have a limited internet connection.
Another non sequitur. If the server sends you data, you have to read it, otherwise you will stall the server, and that doesn't have anything to do with flushing your own write stream.
Actually I don't even want a single byte from the server
Bad luck. You have to read it. [If you were on BSD Unix you could shutdown the socket for input, which would cause data from the server to be thrown away, but that doesn't work on Windows: it causes the server to get a connection reset.]
Thanks to EJP and Martin, I have now created a second thread to check. Also, in the code I posted in my question, I added a "counter = 0;" line after the "send" line and removed the shutdown. It works just as I wanted now. It never waits more than 50 ms :) Really big thanks
unsigned __stdcall SecondThreadFunc( void* pArguments )
{
    // Watchdog: the main thread resets 'counter' to 0 after every send.
    // If ~50 ms pass without a reset, close the socket to abort the operation.
    // ('counter' and 'hSocket' are shared globals; 'counter' should really be
    // declared volatile, or accessed atomically.)
    while (1)
    {
        counter++;
        if (counter > 49)
        {
            closesocket(hSocket);   // no progress within ~50 ms: abort
            counter = 0;
            printf("\rtimeout");
        }
        Sleep(1);                   // ~1 ms tick
    }
    return 0;
}
Suppose I have a server application - the connection is over TCP, using UNIX sockets.
The connection is asynchronous - in other words, clients' and servers' sockets are non-blocking.
Suppose the following situation: under some conditions, the server may decide to send some data to a connected client and immediately close the connection, using shutdown with SHUT_RDWR.
So, my question is: is it guaranteed that when the client calls recv, it will receive the data sent by the server?
Or, to receive the data, must recv be called before the server's shutdown? If so, what should I do (or, to be more precise, how should I do it) to make sure the data is received by the client?
You can control this behavior with "setsockopt(SO_LINGER)":
man setsockopt
SO_LINGER
Waits to complete the close function if data is present. When this option is enabled and there is unsent data present when the close function is called, the calling application is blocked during the close function until the data is transmitted or the connection has timed out. If this option is disabled, the close function returns without blocking the caller. This option has meaning only for stream sockets.
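In code, a sketch (sock is a placeholder for a connected TCP socket):
struct linger lin;
lin.l_onoff  = 1;    // enable lingering on close
lin.l_linger = 10;   // wait up to 10 seconds for unsent data
if (setsockopt(sock, SOL_SOCKET, SO_LINGER,
               (const char *)&lin, sizeof(lin)) == -1) {
    // handle error
}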
See also:
man read
Beej's Guide to Network Programming
There's no guarantee you will receive any data, let alone this data, but the data pending when the socket is closed is subject to the same guarantees as all the other data: if it arrives it will arrive in order and undamaged and subject to TCP's best efforts.
NB 'Asynchronous' and 'non-blocking' are two different things, not two terms for the same thing.
Once you have successfully written the data to the socket, it is in the kernel's buffer, where it will stay until it has been sent and acknowledged. Shutdown doesn't cause the buffered data to get lost. Closing the socket doesn't cause the buffered data to get lost. Not even the death of the sending process would cause the buffered data to get lost.
You can observe the size of the buffer with netstat. The SendQ column is how much data the kernel still wants to transmit.
After the client has acknowledged everything, the connection disappears from the server side. This may happen before the client has read the data, in which case it will be in RecvQ on the client. Basically you have nothing to worry about. After a successful write to a TCP socket, every component is trying as hard as it can to make sure that your data gets to the destination unharmed, regardless of what happens to the sending socket and/or process.
Well, maybe one thing to worry about: If the client tries to send anything after the server has done its shutdown, it could get a SIGPIPE and die before it has read all the available data from the socket.
Hey, I'm using WSAEventSelect for socket event notifications. So far everything is working like a charm, but there is one problem.
The client is a .NET application and the server is written in Winsock C++. In the .NET application I'm using the System.Net.Sockets.Socket class for TCP/IP. When I call the Socket.Shutdown() and Socket.Close() methods, I receive the FD_CLOSE event in the server, which I'm pretty sure is fine. The problem occurs when I check the iErrorCode of the WSANETWORKEVENTS structure that I passed to WSAEnumNetworkEvents. I check it like this:
if (listenerNetworkEvents.lNetworkEvents & FD_CLOSE)
{
    if (listenerNetworkEvents.iErrorCode[FD_CLOSE_BIT] != 0)
    {
        // it comes here
        // which means there is an error
        // and the ERROR I got is
        // WSAECONNABORTED
        printf("FD_CLOSE failed with error %d\n",
               listenerNetworkEvents.iErrorCode[FD_CLOSE_BIT]);
        break;
    }
    closesocket(socketArray[Index]);
}
But it fails with the WSAECONNABORTED error. Why is that so?
EDIT: By the way, I'm running both the client and the server on the same computer; could that be the cause? Also, I receive the FD_CLOSE event when I do this:
server.Shutdown(SocketShutdown.Both); // in .NET C#, client code
I'm guessing you're calling Shutdown() and then Close() immediately afterward. That will give the symptom you're seeing, because this is "slamming the connection shut". Shutdown() does initiate a graceful disconnect (TCP FIN), but immediately following it with Close() aborts that, sending a TCP RST packet to the remote peer. Your Shutdown(SocketShutdown.Both) call slams the connection shut, too, by the way.
The correct pattern is:
Call Shutdown() with the direction parameter set to "write", meaning we won't be sending any more data to the remote peer. This causes the stack to send the TCP FIN packet.
Go back to waiting for Winsock events. When the remote peer is also done writing, it will call Shutdown("write"), too, causing its stack to send your machine a TCP FIN packet, and for your application to get an FD_CLOSE event. While waiting, your code should be prepared to continue reading from the socket, because the remote peer might still be sending data.
(Please excuse the pseudo-C# above. I don't speak .NET, only C++.)
Both peers are expected to use this same shutdown pattern: each tells the other when it's done writing, and then waits to receive notification that the remote peer is done writing before it closes its socket.
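On the Winsock side, the pattern looks roughly like this (a sketch with error handling omitted; sock is a placeholder, and the .NET side is symmetrical):
char buf[4096];
int n;
shutdown(sock, SD_SEND);          // step 1: we are done writing (sends the FIN)
while ((n = recv(sock, buf, sizeof(buf), 0)) > 0) {
    // step 2: the peer may still be sending; keep reading its data
}
// n == 0 means the peer's FIN arrived: it is done writing too
closesocket(sock);                // step 3: now the close is graceful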
The important thing to realize is that TCP is a bidirectional protocol: each side can send and receive independently of the other. Closing the socket to reading is not a nice thing to do. It's like having a conversation with another person but only talking and being unwilling to listen. The graceful shutdown protocol says, "I'm done talking now. I'm going to wait until you stop talking before I walk away."