I'm writing a program that will let me ping many different IPs simultaneously (around 50-70).
I also need to delay each packet by, say, 1 ms, for several reasons, notably to avoid my packets being dropped by routers that drop ICMP packets when too many are sent at once (mine does this, on the sending side, not the receiving one).
So I do the sending in a separate thread, a bit like this:
// Send thread
for (;;) {
    [...]
    for (int i = 0; i < ip_count; i++)
    {
        // Send() calls sendto()
        Send(m_socket, ip_array[i]);
        [...]
        Sleep(1); // Delay by 1 ms
    }
    [...]
    lock_until_new_send_operation();
}
And in another thread I would like to receive the packets with select(), like this:
// Receive thread
FD_ZERO(&m_fdset_read);
FD_SET(m_socket, &m_fdset_read);
int rds_count = select(0, &m_fdset_read, 0, 0, &tvtimeout);
if (rds_count > 0) {
    ProcessReadySocket(); // Calls recv() and stuff
} else {
    // Timed out
    m_bSendGrapeDone = true;
}
The problem with this approach is that both select() and sendto() use the same non-blocking socket m_socket, and sendto() blocks whenever select() is waiting on that socket at the same time. I don't see the logic in that, since the socket is non-blocking, but it happens anyway, and it's ugly.
So I decided to use one socket exclusively for sending, and replaced the line
Send(m_socket, ip_array[i]);
with
Send(m_sendSock, ip_array[i]); // m_sendSock is dedicated to sending only
I read on MSDN that for raw sockets, each socket receives all packets for the protocol it is set to (mine is IPPROTO_ICMP, of course). Here I quote:
There are further limitations for applications that use a socket of type SOCK_RAW. For example, all applications listening for a specific protocol will receive all packets received for this protocol.
So I thought that even though my packets are sent with m_sendSock, I could still receive the replies using select()/recv() on m_socket. It turns out I can't: select() never reports the socket as readable. So I'm kind of stuck; I cannot use select() and send() at the same time. Is there something I'm doing wrong?
(By the way, I want to use raw sockets, not the built-in Windows ICMP functions.)
TL;DR: How can I send() and select() simultaneously? On the send thread, send() blocks as soon as select() is called on the receive thread, even though I set the socket to non-blocking with FIONBIO. And if I use two different sockets, one for sending and one for receiving, I receive nothing on the receiving socket...
Thanks!
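One likely culprit for the two-socket variant: the dedicated receive socket is never bound. On Windows, a raw socket generally must be bound to a specific local interface address before it will receive anything, and therefore before select() will ever report it readable. A minimal sketch under that assumption, reusing the socket names from above (the local address is an example value; raw sockets also require administrator rights on Windows):

// Dedicated send and receive raw ICMP sockets
SOCKET m_sendSock = socket(AF_INET, SOCK_RAW, IPPROTO_ICMP);
SOCKET m_socket   = socket(AF_INET, SOCK_RAW, IPPROTO_ICMP);

// Bind the receive socket to a local interface address (not left unbound)
sockaddr_in local = {};
local.sin_family = AF_INET;
local.sin_addr.s_addr = inet_addr("192.168.1.10"); // example local address
bind(m_socket, (sockaddr *)&local, sizeof(local));

// Both sockets non-blocking
u_long nonBlocking = 1;
ioctlsocket(m_socket, FIONBIO, &nonBlocking);
ioctlsocket(m_sendSock, FIONBIO, &nonBlocking);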
Related
Can someone explain to me why we use the select() function before recvfrom() (on the server side) instead of before sendto() (on the client side) when waiting for a timeout? It seems to me that the timeout should be on the sender's side.
//EX
CLIENT                              SERVER
------                              ------
select()   /* start timeout */
sendto()   /* --send packet--> */   recvfrom()
recvfrom() /* <--send ACK--    */   sendto()
And as long as the ACK has been received before the timeout is reached, the sender could send another file.
You do not normally use select with UDP at all, unless you want one of the following:
receive from several ports (or one port and a Unix socket, etc.) with a single thread
detect other events as soon as they happen, without waiting for an unrelated recvfrom or sendto to unblock
sleep in a maximally portable way
use the Linux-specific recvmmsg (but then, you really want to use epoll_wait) to receive a whole bunch of datagrams with one syscall
select is regularly used with TCP, as it can multiplex between several sockets, one for every connected client. This is not necessary with UDP, since one socket is sufficient to receive packets from each and every client (assuming they use the same port).
select blocks until the condition you wait for (e.g. ready to receive or ready to send) is true. recvfrom blocks anyway if there is nothing to receive, so if that is the only thing you're interested in, calling select is useless.
UDP doesn't have built-in acknowledgements. sendto() simply hands the packet to the network and returns immediately; it has no built-in way of waiting for a response or acknowledgement. Your application knows that it expects the server to send a response, so it waits for that response with recvfrom().
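A minimal sketch of the client side of such a timeout, assuming sock, server, req and buf are already set up (all names hypothetical): the timeout lives in the select() call, which guards the recvfrom():

sendto(sock, req, reqlen, 0, (struct sockaddr *)&server, sizeof(server));

fd_set readfds;
FD_ZERO(&readfds);
FD_SET(sock, &readfds);
struct timeval tv = { 2, 0 }; // example: wait up to 2 seconds for the ACK

if (select(sock + 1, &readfds, NULL, NULL, &tv) > 0) {
    recvfrom(sock, buf, sizeof(buf), 0, NULL, NULL); // ACK arrived in time
} else {
    // timed out: retransmit or give up
}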
I receive datagrams from a UDP multicast socket like this:
int res = recvfrom(socketId, buf, sizeof(char) * RECEIVE_BUFFER_SIZE, 0, (SOCKADDR *)&Sender, &SenderAddrSize);
According to MSDN, this call blocks until a datagram is available:
If no incoming data is available at the socket, the recvfrom function blocks and waits for data to arrive according to the blocking rules defined for WSARecv with the MSG_PARTIAL flag not set unless the socket is nonblocking.
Now, in the same thread, I want to receive packets from another socket too. So I want to listen to TWO sockets in ONE thread, and data from both of them should be processed ASAP. Because of that I can no longer use a blocking recvfrom; I should use something like this (Process() stands for the application-specific handling):
while (true) {
    // non-blocking recvfrom call to Socket1
    SenderAddrSize = sizeof(Sender);
    int res = recvfrom(Socket1, buf, RECEIVE_BUFFER_SIZE, 0, (SOCKADDR *)&Sender, &SenderAddrSize);
    if (res > 0) Process(buf, res); // new datagram available on Socket1
    // non-blocking recvfrom call to Socket2
    SenderAddrSize = sizeof(Sender);
    res = recvfrom(Socket2, buf, RECEIVE_BUFFER_SIZE, 0, (SOCKADDR *)&Sender, &SenderAddrSize);
    if (res > 0) Process(buf, res); // new datagram available on Socket2
}
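For this to work, both sockets must first be switched to non-blocking mode, so recvfrom returns immediately instead of blocking; a minimal Winsock sketch:

u_long nonBlocking = 1;
ioctlsocket(Socket1, FIONBIO, &nonBlocking);
ioctlsocket(Socket2, FIONBIO, &nonBlocking);
// recvfrom now returns SOCKET_ERROR with WSAGetLastError() == WSAEWOULDBLOCK
// when no datagram is queued, rather than waiting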
I would say that I "spin" the sockets. I understand that spinning burns a lot of CPU, but that's OK, as my primary requirement is latency.
I don't want to process the two sockets in two different threads, because the data from these sockets is the same and "merging" it from two threads is a little complicated; it would be much easier to "arbitrate" between the two sockets in one thread. I have to listen to both sockets because UDP is unreliable, and I need to statistically reduce the probability of packet loss.
I have the following questions:
Taking into account that I agree to spend CPU for better latency, is it a good idea to "spin" the sockets?
How many microseconds can I gain by spinning instead of calling a regular blocking receive? (Will I actually gain anything, or is it just a waste of CPU power?)
We have an application where we send data through TCP sockets, using 8 TCP connections. The socket send and receive calls are made in background threads. A single thread iterates over the array of sockets to send data through all of them sequentially.
The code in the sender thread is something like:
for (i = 0; i < 8; i++) {
    nBytesWritten = send(tcpsock[i], data2, nleft, 0);
    // error handling and process more data
}
and the receiver thread is like:
for (i = 0; i < 8; i++) {
    sz[i] = recv(tcpsock[i], data, MAX_UDT_SIZE, 0);
    // process data
}
Everything works fine and the data gets transferred, but sometimes it just takes too long.
On checking logs, I found that in most cases the sender thread works just fine, but sometimes there is a huge delay in the timestamps (sometimes more than a second) taken before and after the send call.
All of the send and receive activity takes place in worker threads. Does this have something to do with pre-emption of the thread just before/during the send call? Can I avoid the thread being pre-empted just before the send call? Or is it that the receiver thread has not yet drained the data from the socket while the sender is ready with more, and that causes the delay?
How do I optimize this as it is taking too long to send data?
Thanks
You should use non-blocking sockets for sending. What might be happening is that one (or more) of the sockets cannot send right away, so send() waits until it can, for example because the send buffer is full.
With non-blocking sockets, send() won't stall, but you must check which sockets the data was not (fully) sent to and try again later.
Do a select() on each socket before the send to see if you can write without blocking; otherwise one socket will block the sends on the others. You'll want to do the same thing on the read side, or one socket with nothing to read could block reads that are available on the others.
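A sketch of what the sender side then looks like, assuming the tcpsock array from the question with the sockets already set non-blocking (the first argument to select() is ignored on Windows; on POSIX pass maxfd + 1): select() reports which connections can accept data, and send() is only called on those:

fd_set writefds;
FD_ZERO(&writefds);
for (i = 0; i < 8; i++)
    FD_SET(tcpsock[i], &writefds);

timeval tv = { 1, 0 }; // example timeout
if (select(0, NULL, &writefds, NULL, &tv) > 0) {
    for (i = 0; i < 8; i++) {
        if (FD_ISSET(tcpsock[i], &writefds)) {
            nBytesWritten = send(tcpsock[i], data2, nleft, 0);
            // nBytesWritten may be less than nleft: remember the remainder
            // for this socket and retry it on a later pass
        }
    }
}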
My program receives UDP messages, each of them sent on a mouse click by the client. The program has the main thread (the GUI), used only to set some parameters, and a second thread, created with
CreateThread(NULL, 0, MyFunc, &Data, 0, &ThreadTd);
that listens for UDP packets.
This is MyFunc:
...
sd = socket(AF_INET, SOCK_DGRAM, 0);
if (bind(sd, (struct sockaddr *)&server, sizeof(struct sockaddr_in)) == -1)
    ....
while (true) {
    bytes_received = recvfrom(sd, buffer, BUFFER_SIZE, 0, (struct sockaddr *)&client, &client_length);
    // parsing of the buffer
}
To prove that there is no packet loss: if I use a simple script that listens for the UDP messages sent by my client on the same port, all the packets sent are received by my computer.
When I run my application, as soon as the client does the first mouse click, the UDP message is received; but if I try to send other messages (other mouse clicks), the server doesn't receive them (as if it doesn't catch them), and on the client side the user has to click at least twice before the server catches a message.
The main thread isn't busy all the time, the second thread only parses the incoming message and updates some variables, and I haven't assigned any priority to the threads.
Any suggestions?
In addition to Mark's suggestion, you could also use Wireshark/netcat to see when/where the datagrams are sent.
This may be a problem related to socket programming. I would suggest incorporating a call to select() or epoll() with the call to recvfrom(). This is a more standard approach to socket programming; this way, the UDP server can receive messages from multiple clients, and it won't block indefinitely.
Also, you should isolate whether the problem is that the client doesn't always send a packet for every click, or that somehow the server doesn't always receive them. Wireshark can help you see when packets are sent.
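A sketch of that suggestion applied to the loop in MyFunc (one detail worth checking regardless: client_length must be reset before every recvfrom call, because recvfrom overwrites it):

while (true) {
    fd_set rfds;
    FD_ZERO(&rfds);
    FD_SET(sd, &rfds);
    struct timeval tv = { 1, 0 }; // wake up periodically instead of blocking forever

    if (select(sd + 1, &rfds, NULL, NULL, &tv) > 0) {
        client_length = sizeof(client); // reset before every call
        bytes_received = recvfrom(sd, buffer, BUFFER_SIZE, 0, (struct sockaddr *)&client, &client_length);
        // parsing of the buffer
    }
}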
There's not enough info to know why there's packet loss. Is it possible there's a delay in the receive thread before reaching the first recvfrom? Debug tracing might point the way. I also assume that the struct sockaddr server was filled in with something sane before calling bind()? You're not showing that part...
If I understood your question correctly, your threaded server app does not receive all the packets when they are sent in quick bursts. One thing you can try is to increase the socket receive buffer on the server side, so more data can be queued while your application is not reading it fast enough. See setsockopt, using the SO_RCVBUF option.
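A minimal sketch of that, assuming sd is the socket from MyFunc; the OS may round or cap the requested size, which you can verify with getsockopt():

int rcvbuf = 1 << 20; // request a 1 MB receive buffer (example value)
setsockopt(sd, SOL_SOCKET, SO_RCVBUF, (const char *)&rcvbuf, sizeof(rcvbuf));

int granted = 0;
int optlen = sizeof(granted);
getsockopt(sd, SOL_SOCKET, SO_RCVBUF, (char *)&granted, &optlen);
// granted now holds the buffer size actually in effect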
I am using Windows sockets for my application (winsock2.h). Since a blocking socket doesn't let me control the connection timeout, I am using a non-blocking one. Right after the send call I use shutdown to flush (I have to). My timeout is 50 ms, and what I want to know is: if the data to be sent is very big, is there a risk of sending only a portion of the data, or nothing at all? Thanks in advance...
hSocket = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
u_long iMode = 1;
ioctlsocket(hSocket, FIONBIO, &iMode);
connect(hSocket, (sockaddr *)&sockAddr, sockAddrSize);
send(hSocket, sendbuf, sendlen, 0);
shutdown(hSocket, SD_BOTH);
Sleep(50);
closesocket(hSocket);
Non-blocking TCP socket and flushing right after send?
There is no such thing as flushing a TCP socket.
Since a blocking socket doesn't let me control the connection timeout
False. You can use select() on a blocking socket.
I am using a non-blocking one.
Non sequitur.
Right after the send call I use shutdown to flush (I have to).
You don't have to, and shutdown() doesn't flush anything.
My timeout is 50 ms
Why? The time to send data depends on the size of the data. Obviously. It does not make any sense whatsoever to use a fixed timeout for a send.
and what I want to know is: if the data to be sent is very big, is there a risk of sending only a portion of the data, or nothing at all?
In blocking mode, all the data you provided to send() will be sent if possible. In non-blocking mode, the amount of data represented by the return value of send() will be sent, if possible. In either case the connection will be reset if the send fails. Whatever timeout mechanism you superimpose can't possibly change any of that: specifically, closing the socket asynchronously after a timeout will only cause the close to be appended to the data being sent. It will not cause the send to be aborted.
Your code wouldn't pass any code review known to man. There is zero error checking; the sleep is completely pointless; and shutdown before close is redundant. If the sleep is intended to implement a timeout, it doesn't.
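To make the point about the return value concrete, a sketch of the usual pattern for pushing a whole buffer through a non-blocking socket (variable names from the question; error handling trimmed):

const char *p = sendbuf;
int left = sendlen;
while (left > 0) {
    int n = send(hSocket, p, left, 0);
    if (n > 0) {
        p += n;            // send() consumed n bytes; advance
        left -= n;
    } else if (WSAGetLastError() == WSAEWOULDBLOCK) {
        fd_set wfds;       // wait until the socket can accept more data
        FD_ZERO(&wfds);
        FD_SET(hSocket, &wfds);
        select(0, NULL, &wfds, NULL, NULL);
    } else {
        break;             // real error: the connection is gone
    }
}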
I want to be sending data as fast as possible.
You can't. TCP implements flow control. There is exactly nothing you can do about that. You are rate-limited by the receiver.
Also the 2 possible cases are: server waits too long to accept connection
There is no such case. The client can complete a connection before the server ever calls accept(). If you're trying to implement a connect timeout shorter than the default of about a minute, use select().
or receive.
Nothing you can do about that: see above.
So both connecting and writing should be done in max of 50ms since the time is very important in my situation.
See above. It doesn't make sense to implement a fixed timeout for operations that take variable time. And 50ms is far too short for a connect timeout. If that's a real issue you should keep the connection open so that the connect delay only happens once: in fact you should keep TCP connections open as long as possible anyway.
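For completeness, the standard Winsock pattern for a bounded connect is a non-blocking connect() followed by select(); a sketch (50 ms kept only because it is the figure from the question; note that on Windows a failed connect is reported via the except set):

u_long on = 1;
ioctlsocket(hSocket, FIONBIO, &on);
connect(hSocket, (sockaddr *)&sockAddr, sockAddrSize); // returns immediately with WSAEWOULDBLOCK

fd_set wfds, efds;
FD_ZERO(&wfds); FD_ZERO(&efds);
FD_SET(hSocket, &wfds);
FD_SET(hSocket, &efds);
timeval tv = { 0, 50 * 1000 }; // 50 ms, from the question; far too short in practice

int r = select(0, NULL, &wfds, &efds, &tv);
if (r <= 0 || FD_ISSET(hSocket, &efds)) {
    closesocket(hSocket); // timed out or the connect failed
}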
I have to flush both write and read streams
You can't. There is no operation in TCP that will flush either a read stream or a write stream.
because the server keeps sending me unnecessarily big data and I have a limited internet connection.
Another non sequitur. If the server sends you data, you have to read it, otherwise you will stall the server, and that doesn't have anything to do with flushing your own write stream.
Actually I don't even want a single byte from the server
Bad luck. You have to read it. [If you were on BSD Unix you could shutdown the socket for input, which would cause data from the server to be thrown away, but that doesn't work on Windows: it causes the server to get a connection reset.]
Thanks to EJP and Martin, I have now created a second thread to check for the timeout. Also, in the code I posted in my question, I added a counter = 0; line after the send call and removed the shutdown. It works just as I wanted now; it never waits more than 50 ms :) Really big thanks
unsigned __stdcall SecondThreadFunc(void *pArguments)
{
    // Watchdog: the send thread resets `counter` to 0 after every send.
    // If ~50 ms pass without that happening, close the socket to abort the
    // pending operation. (Note: `counter` is shared between threads and
    // should really be volatile/atomic.)
    while (1)
    {
        counter++;
        if (counter > 49)
        {
            closesocket(hSocket);
            counter = 0;
            printf("\rtimeout");
        }
        Sleep(1);
    }
    return 0;
}