I receive datagrams from a UDP multicast socket like this:
int res = recvfrom(socketId, buf, sizeof(char) * RECEIVE_BUFFER_SIZE, 0, (SOCKADDR *)&Sender, &SenderAddrSize);
According to MSDN, this call blocks until a datagram is available:
If no incoming data is available at the socket, the recvfrom function blocks and waits for data to arrive according to the blocking rules defined for WSARecv with the MSG_PARTIAL flag not set unless the socket is nonblocking.
Now I want to receive packets from another socket in the same thread. So I want to listen to TWO sockets in ONE thread, and the data from both should be processed ASAP. Because of that I can no longer use a blocking recvfrom; I need something like this (pseudo-code):
while (true) {
    non-blocking recvfrom call on Socket1
    if a new datagram is available on Socket1, process it
    non-blocking recvfrom call on Socket2
    if a new datagram is available on Socket2, process it
}
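For concreteness, here is a minimal sketch of that loop with Winsock (an illustration, not my real code; Socket1, Socket2, RECEIVE_BUFFER_SIZE, and process() are placeholders). Both sockets are put into non-blocking mode with FIONBIO, and WSAEWOULDBLOCK is treated as "no data yet" rather than as an error:

u_long nonBlocking = 1; // non-zero enables non-blocking mode
ioctlsocket(Socket1, FIONBIO, &nonBlocking);
ioctlsocket(Socket2, FIONBIO, &nonBlocking);

char buf[RECEIVE_BUFFER_SIZE];
for (;;) {
    // Poll Socket1: > 0 means a datagram arrived, WSAEWOULDBLOCK means the buffer is empty
    int n = recvfrom(Socket1, buf, sizeof(buf), 0, NULL, NULL);
    if (n > 0)
        process(buf, n);
    else if (n == SOCKET_ERROR && WSAGetLastError() != WSAEWOULDBLOCK)
        break; // a real error, not just "no data yet"

    // Poll Socket2 the same way
    n = recvfrom(Socket2, buf, sizeof(buf), 0, NULL, NULL);
    if (n > 0)
        process(buf, n);
    else if (n == SOCKET_ERROR && WSAGetLastError() != WSAEWOULDBLOCK)
        break;
}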
I would say that I "spin" sockets. I understand that "spinning" spents CPU a lot but this is OK as my primary requirement is latency.
I don't want to process the two sockets in two different threads, because the data on these sockets is the same and merging it from two threads is a bit complicated; it is much easier to arbitrate between the two sockets in one thread. I have to listen to both sockets because UDP is unreliable and I need to statistically decrease the probability of packet loss.
I have the following questions:
Taking into account that I am willing to spend CPU for better latency, is it a good idea to "spin" the sockets?
How many microseconds can I win by spinning the sockets instead of calling a regular blocking receive? (Will I actually win anything, or is it just a waste of CPU power?)
Related
I'm writing a program that lets me ping a lot of different IPs simultaneously (around 50-70).
I also need to delay the sending of each packet by about 1 ms, for multiple reasons, notably to keep my packets from being dropped by routers that discard ICMP packets when too many are sent at once (which mine does, on the sending machine, not the receiving one).
So I did it in a separate thread, a bit like this:
// Send thread
for (;;) {
    [...]
    for (int i = 0; i < ip_count; i++)
    {
        // Send() calls sendto()
        Send(m_socket, ip_array[i]);
        [...]
        Sleep(1); // Delay by 1 ms
    }
    [...]
    lock_until_new_send_operation();
}
And in another thread I would like to receive the packets with select(), like this:
// Receive thread
FD_ZERO(&m_fdset_read);
FD_SET(m_socket, &m_fdset_read);
int rds_count = select(0, &m_fdset_read, 0, 0, &tvtimeout);
if (rds_count > 0)
    ProcessReadySocket(); // Calls recv() and stuff
else {
    // Timed out
    m_bSendGrapeDone = true;
}
The problem with this approach is that both select() and sendto() use the same non-blocking socket, m_socket, and calls to sendto() block whenever select() is waiting on the socket at the same time (for some strange reason; I don't see the logic there, since the socket is non-blocking, but it still happens, and it's ugly).
So I decided to use one socket exclusively for sending, and replaced the line
Send(m_socket, ip_array[i]);
with
Send(m_sendSock, ip_array[i]); // m_sendSock is dedicated to sending only
I read on MSDN that for raw sockets, each socket receives all packets for the protocol it is set to (mine is IPPROTO_ICMP, of course). Here is the quote:
There are further limitations for applications that use a socket of
type SOCK_RAW. For example, all applications listening for a specific
protocol will receive all packets received for this protocol
So I thought that even though my packets are sent with m_sendSock, I could still receive them using select()/recv() on m_socket. It turns out I can't: select() never reports the socket as readable. So I'm kind of stuck; I cannot use select() and send() at the same time. Is there something I'm doing wrong?
(By the way, I want to use raw sockets, not the built-in Windows ICMP functions.)
TL;DR: How can I send() and select() simultaneously? On the send thread, send() blocks as soon as select() is called on the receive thread, even though I set FIONBIO on the socket (non-blocking). And if I use two different sockets, one for sending and one for receiving, I receive nothing on the receiving socket...
Thanks!
I have a server application that reads from a single UDP socket.
The application is forked x times after a bind() on the socket, so that all workers can read from the same socket using a blocking recvfrom() call.
All my tests have shown that only one process returns when a packet is received, following a first-come, first-served pattern.
Is that a reliable behavior or do I need to expect multiple processes to return for one and the same packet once in a while?
This is how datagram sockets work: a datagram gets delivered to only one thread/process.
And, in fact, it is how all sockets work. Each socket has associated kernel send and receive buffers. A read/recv/recvfrom/recvmsg on a socket atomically consumes data from the socket's receive buffer into a user-space buffer; the kernel does not care which thread or process is reading from that socket. There may be more than one file descriptor referring to the same kernel file description referring to that socket (e.g., after dup or fork), or many kernel file descriptions referring to the same socket (e.g., multiple processes opened the same UNIX or UDP (non-multicast) socket using the same filesystem name or addr:port).
Only for multicast sockets does the kernel keep a separate socket structure per thread/process; in that case the lower networking layer duplicates each received datagram into every socket buffer, so that each thread/process gets its own copy.
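For illustration, each process can open its own socket joined to the same group (a minimal Linux sketch; the group address and port are made up), and each then gets its own copy of every datagram:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>

int open_multicast_rx(void) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    int on = 1;
    setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on)); // let several processes share addr:port

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5000); // hypothetical port
    bind(fd, (struct sockaddr *)&addr, sizeof(addr));

    struct ip_mreq mreq;
    mreq.imr_multiaddr.s_addr = inet_addr("239.0.0.1"); // hypothetical group
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq));
    return fd; // each process that runs this receives its own copy of group traffic
}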
Some people have used UDP and UNIX sockets in-process for thread pools: producers post a datagram into a socket, multiple consumer threads listen on the same socket, and only one of those threads receives each datagram.
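A minimal self-contained sketch of that trick (assuming Linux and C++; the names are illustrative): producers write into one end of an AF_UNIX SOCK_DGRAM socketpair, and the kernel hands each datagram to exactly one of the workers blocked in recv() on the other end.

#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_DGRAM, 0, sv) != 0)
        return 1;

    // Four consumers block on the same datagram socket
    std::vector<std::thread> pool;
    for (int i = 0; i < 4; i++) {
        pool.emplace_back([fd = sv[1], i] {
            char task[64];
            ssize_t n = recv(fd, task, sizeof(task), 0); // exactly one worker wakes per datagram
            if (n > 0)
                printf("worker %d got: %.*s\n", i, (int)n, task);
        });
    }

    // Producer: post four datagrams, one per worker
    for (int i = 0; i < 4; i++)
        send(sv[0], "task", 4, 0);

    for (auto &t : pool)
        t.join();
    close(sv[0]);
    close(sv[1]);
    return 0;
}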
I am making a program which sends UDP packets to a server at a fixed interval, something like this:
while (!stop) {
    Sleep(fixedInterval);
    send(sock, pkt, payloadSize, flags);
}
However, the periodicity cannot be guaranteed because send is a blocking call (e.g., when fixedInterval is 20 ms and a call to send takes longer than 20 ms). Do you know how I can turn the send into a non-blocking operation?
You need to use a non-blocking socket. The send/receive functions are the same functions for blocking or non-blocking operations, but you must set the socket itself to non-blocking.
u_long mode = 1; // 1 to enable non-blocking socket
ioctlsocket(sock, FIONBIO, &mode);
Also, be aware that working with non-blocking sockets is quite different. You'll need to make sure you handle WSAEWOULDBLOCK errors as success! :)
So, using non-blocking sockets may help, but still will not guarantee an exact period. You would be better off driving this from a timer rather than this simple loop, so that any latency from calling send, even in non-blocking mode, does not affect the timing.
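For example, a minimal sketch of the timer-driven variant (a C++ illustration reusing the names from the question; std::this_thread::sleep_until targets an absolute deadline, so latency from send() does not accumulate):

#include <chrono>
#include <thread>

auto next = std::chrono::steady_clock::now();
while (!stop) {
    next += std::chrono::milliseconds(fixedInterval); // absolute deadline for this period
    std::this_thread::sleep_until(next);              // unaffected by how long send() took
    send(sock, pkt, payloadSize, flags);
}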
The ioctlsocket API can do it. You can use it as below. But why don't you use one of the Winsock I/O models?
u_long ul = 1; // non-zero enables non-blocking mode
ioctlsocket(hsock, FIONBIO, &ul);
My memory is fuzzy here since it's probably been 15 years since I've used UDP non-blocking.
However, there are some things of which you should be aware.
Send only smallish packets if you're going over a public network. The path MTU can trip you up if either the client or the server is not written to handle incomplete packets.
Make sure you check that you have sent the number of bytes you think you have sent. It can get weird when you're expecting to see 300 bytes sent and the receiving end only gets 248. Both the client side and the server side have to be aware of this issue.
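For example, a minimal check of that kind (a sketch; dest stands for the target address and is not from the original code):

int sent = sendto(sock, pkt, payloadSize, 0, (SOCKADDR *)&dest, sizeof(dest));
if (sent == SOCKET_ERROR) {
    // handle the error (in non-blocking mode, WSAEWOULDBLOCK means "try again later")
} else if (sent != payloadSize) {
    // fewer bytes queued than expected; the receiver will not see the datagram you intended
}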
See here for some good advice from the Linux folks.
See here for the Unix Socket FAQ for UDP
This is a good, general network programming FAQ and example page.
How about measuring the time Send takes and then sleeping for whatever time remains of the 20 ms?
I am writing a C++ application on Linux. My application has a UDP server which sends data to clients on some events. The UDP server also receives some feedback/acknowledgement back from the clients.
To implement this application I used a single UDP socket (e.g., int fdSocket) to send and receive data from all the clients. I bound this socket to port 8080 and set it to non-blocking mode.
I created two threads. In one thread I wait for some event to happen; if an event occurs, I use fdSocket to send data to all the clients (in a for loop).
In the other thread I use fdSocket to receive data from the clients (recvfrom()). This thread is scheduled to run every 4 seconds (i.e., every 4 seconds it calls recvfrom() to retrieve data from the socket buffer; since the socket is in non-blocking mode, recvfrom() returns immediately if no UDP data is available, and I then sleep for another 4 seconds).
The UDP feedback/acknowledgement from every client has a fixed payload of 20 bytes.
Now I have two questions related to this implementation:
Is it correct to use the same socket for sending/receiving UDP data with multiple clients?
How do I find the maximum number of UDP feedback/acknowledgement packets my application can handle without overflowing the UDP socket buffer? (Since I am reading only every 4 seconds, if I receive a lot of packets within those 4 seconds I might lose some; i.e., I need to find the rate in packets/sec that I can handle safely.)
I tried to get the Linux socket buffer size for my socket (fdSocket) using the call getsockopt(fdSocket, SOL_SOCKET, SO_RCVBUF, (void *)&n, &m);. From this I discovered that my socket buffer size is 110592. But I am not clear on what is stored in this buffer: only the UDP payload, the entire UDP packet, or even the entire Ethernet frame? I referred to this link to get some idea, but got confused.
Currently my code is a little bit dirty; I will clean it up and post it here soon.
The following are the links I referred to before posting this question:
Linux Networking
UDP SentTo and Recvfrom Max Buffer Size
UDP Socket Buffer Overflow Detection
UDP broadcast and unicast through the same socket?
Sending from the same UDP socket in multiple threads
How to flush Input Buffer of an UDP Socket in C?
How to find the socket buffer size of linux
How do I get amount of queued data for UDP socket?
Reading the socket at a fixed interval of four seconds definitely sets you up for losing packets. The conventional, tried-and-true approach to non-blocking I/O is the de-multiplexing system calls select(2)/poll(2)/epoll(7). See if you can use these to capture/react to your other events as well.
On the other hand, since you are already using threads, you can just do a blocking recv(2) without the four-second sleep.
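A minimal sketch of that blocking variant (the socket must be switched back to blocking mode; handleAck() is a placeholder for the application logic):

#include <netinet/in.h>
#include <sys/socket.h>

void receiveLoop(int fdSocket) {
    char payload[20]; // the fixed 20-byte feedback payload from the question
    struct sockaddr_in client;
    socklen_t len;
    for (;;) {
        len = sizeof(client); // reset: recvfrom() updates it on every call
        ssize_t n = recvfrom(fdSocket, payload, sizeof(payload), 0,
                             (struct sockaddr *)&client, &len);
        if (n > 0)
            handleAck(payload, n, &client); // placeholder
    }
}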
Read Stevens for an explanation of SO_RCVBUF.
You can see the maximum allowed buffer size with:
sysctl net.core.rmem_max
You can set that maximum with:
sysctl -w net.core.rmem_max=8388608
You can also set the buffer size at run-time (not exceeding the max above) by calling setsockopt with SO_RCVBUF. You can see the buffer level by looking at /proc/net/udp.
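For example (a sketch; the 4 MB value is arbitrary and must stay within rmem_max):

int rcvbuf = 4 * 1024 * 1024; // requested receive buffer size
setsockopt(fdSocket, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf));

// Read back what the kernel actually granted (Linux doubles the requested
// value to account for bookkeeping overhead)
int actual = 0;
socklen_t optlen = sizeof(actual);
getsockopt(fdSocket, SOL_SOCKET, SO_RCVBUF, &actual, &optlen);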
The buffer stores the UDP header and application data; the rest belongs to the lower layers.
Q: Is it correct to use the same socket for sending/receiving UDP data with multiple clients?
A: Yes, it is correct.
Q: How do I find the maximum number of UDP feedback/acknowledgement packets my application can handle without overflowing the UDP socket buffer? (Since I am reading only every 4 seconds, if I receive a lot of packets within those 4 seconds I might lose some; i.e., I need to find the rate in packets/sec that I can handle safely.)
A: The bottleneck might be the network bandwidth, the CPU, or memory. You could simply run a test, using a client that sends ACKs with consecutive sequence numbers to the server, and verify whether there is packet loss at the server.
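A minimal sketch of such a test (illustrative names, not from the question; the sequence number rides in the first 4 bytes of the 20-byte payload):

// Client side: stamp each ACK with a consecutive sequence number
uint32_t seq = 0;
char ack[20] = {0};
for (int i = 0; i < 100000; i++) {
    uint32_t n = htonl(seq++);
    memcpy(ack, &n, sizeof(n));
    sendto(fd, ack, sizeof(ack), 0, (struct sockaddr *)&server, sizeof(server));
}

// Server side: any gap in the sequence numbers is a lost packet
uint32_t expected = 0, lost = 0;
char buf[20];
while (recvfrom(fdSocket, buf, sizeof(buf), 0, NULL, NULL) == sizeof(buf)) {
    uint32_t got;
    memcpy(&got, buf, sizeof(got));
    got = ntohl(got);
    if (got > expected)
        lost += got - expected; // packets missing in between
    expected = got + 1;
}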