I want to send a UDP packet from the PC to a camera when the PC resumes from sleep. Since it takes some (unknown) amount of time for the network interface to come back up after the PC resumes, I keep sending packets to the camera in a loop. When the camera receives a packet, it sends an acknowledgement back to the PC.
My problem is that to receive the UDP packet (the ack) from the camera, I use the recvfrom() function, which blocks the loop. How do I make this call non-blocking so that the loop exits only when the acknowledgement from the camera is received?
Pass the MSG_DONTWAIT flag to the recvfrom() function; it enables non-blocking mode for that single call. If the operation would block, the call fails with the EAGAIN or EWOULDBLOCK error code.
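For example, a minimal sketch of the polling loop body (assumes `sock` is a bound UDP socket; error handling trimmed):

#include <sys/socket.h>
#include <errno.h>

char buf[512];
struct sockaddr_storage src;
socklen_t srclen = sizeof(src);

/* Non-blocking receive: returns immediately if no datagram is queued. */
ssize_t n = recvfrom(sock, buf, sizeof(buf), MSG_DONTWAIT,
                     (struct sockaddr *)&src, &srclen);
if (n >= 0) {
    /* got a datagram -- check whether it is the ack and leave the loop */
} else if (errno == EAGAIN || errno == EWOULDBLOCK) {
    /* nothing received yet -- resend the probe and loop again */
}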
A more portable alternative to maverik's answer (which is otherwise correct) is to fcntl() the socket into O_NONBLOCK mode.
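A minimal sketch, assuming `sock` is the UDP socket:

#include <fcntl.h>

/* Put the socket itself into non-blocking mode. */
int flags = fcntl(sock, F_GETFL, 0);
fcntl(sock, F_SETFL, flags | O_NONBLOCK);
/* From now on, recvfrom() on `sock` returns -1 with errno set to
   EAGAIN/EWOULDBLOCK instead of blocking when no datagram is queued. */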
MSG_DONTWAIT, although available on Linux, BSD, and most Unices, is standardized in SUSv4 only for sending (why, I wouldn't know... but M. Kerrisk says so). One notable platform that doesn't support it is Winsock (at least it isn't documented on MSDN).
Alternatively, if you don't want to tamper with obscure flags and fcntl(), you could select() the descriptor for readiness (with a zero timeout, or even a non-zero timeout to throttle the packets you send; it's probably a good idea not to flood the network stack). Keep sending until select() says something can be read, as in the sketch below.
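A minimal sketch of that loop; `send_probe()` is a hypothetical helper standing in for the sendto() call:

#include <sys/select.h>

for (;;) {
    send_probe(sock);                  /* hypothetical: sendto() the wake-up packet */
    fd_set rfds;
    FD_ZERO(&rfds);
    FD_SET(sock, &rfds);
    struct timeval tv = { 0, 100000 }; /* 100 ms: throttles the resend rate */
    if (select(sock + 1, &rfds, NULL, NULL, &tv) > 0)
        break;                         /* readable: the ack arrived, go recvfrom() it */
}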
The easiest way (though not the nicest code) to wait a while for a reply is to use select() before calling recvfrom().
I am using the blocking sendto() function (flags set to 0) in my C++ code, which takes at most 3 µs and at minimum 600 ns. I want a method that is non-blocking (i.e., returns immediately) and takes less time. I tried sendto() with the flag set to MSG_DONTWAIT and found that the non-blocking sendto() is similar to the blocking sendto() in terms of latency. Please suggest an alternative method that is non-blocking and time-efficient.
... takes at most 3 µs and at minimum 600 ns.
This is the time the system needs to put the message into the socket buffer, which involves a system call. It does not include the actual transmission to the peer, which is done later in the kernel. This also means it does not matter whether you use a blocking or a non-blocking sendto(), because putting the message into the socket buffer has to be done in both cases. It also means that no select, epoll, boost::asio, or whatever will make this faster, since none of them reduce the time needed to put the message into the socket buffer.
The only difference between a blocking and a non-blocking sendto() is that the former will wait for the system to make room in the socket buffer if the buffer is already full, i.e., if you send messages faster than the system can deliver them.
It is unknown what your application really does, but one way to speed it up might be to reduce the number of sendto() calls by using larger messages.
You need to use a select()- or epoll()-like technique to find out exactly when the socket becomes writable. On Linux, look at the respective man pages. For a platform-independent solution, have a look at the libevent library.
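For instance, a minimal Linux sketch of the epoll variant (a hedged illustration, not a drop-in implementation; `sock` is assumed to be a UDP socket, and the epoll instance is created per call only for brevity):

#include <sys/epoll.h>
#include <sys/socket.h>
#include <errno.h>
#include <unistd.h>

int send_when_writable(int sock, const void *buf, size_t len,
                       const struct sockaddr *dst, socklen_t dstlen) {
    int ep = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLOUT, .data.fd = sock };
    epoll_ctl(ep, EPOLL_CTL_ADD, sock, &ev);
    for (;;) {
        ssize_t n = sendto(sock, buf, len, MSG_DONTWAIT, dst, dstlen);
        if (n >= 0 || (errno != EAGAIN && errno != EWOULDBLOCK)) {
            close(ep);
            return (int)n;              /* sent, or a genuine error */
        }
        /* Socket buffer is full: block here (not in sendto) until writable. */
        epoll_wait(ep, &ev, 1, -1);
    }
}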
I am making a program which sends UDP packets to a server at a fixed interval, something like this:
while (!stop) {
    Sleep(fixedInterval);
    send(sock, pkt, payloadSize, flags);
}
However, the periodicity cannot be guaranteed because send is a blocking call (e.g., when fixedInterval is 20 ms and a call to send takes longer than 20 ms). Do you know how I can turn the send into a non-blocking operation?
You need to use a non-blocking socket. The send/receive functions are the same functions for blocking or non-blocking operations, but you must set the socket itself to non-blocking.
u_long mode = 1; // 1 to enable non-blocking socket
ioctlsocket(sock, FIONBIO, &mode);
Also, be aware that working with non-blocking sockets is quite different. You'll need to make sure you handle WSAEWOULDBLOCK errors as success! :)
So, using non-blocking sockets may help, but it still will not guarantee an exact period. You would be better off driving this from a timer rather than this simple loop, so that any latency from calling send, even in non-blocking mode, will not affect the timing.
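A hedged sketch of the timer-driven variant, using a Win32 waitable timer and the names from the loop above (an illustration under those assumptions, not tested code):

HANDLE timer = CreateWaitableTimer(NULL, FALSE, NULL);   /* auto-reset timer */
LARGE_INTEGER due;
due.QuadPart = -10000LL * fixedInterval;  /* first tick (100 ns units, negative = relative) */
SetWaitableTimer(timer, &due, fixedInterval, NULL, NULL, FALSE); /* then every fixedInterval ms */
while (!stop) {
    /* Wakes on schedule regardless of how long the previous send() took. */
    WaitForSingleObject(timer, INFINITE);
    send(sock, pkt, payloadSize, flags);
}
CloseHandle(timer);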
The ioctlsocket() API can do it; you can use it as below. But why don't you use the Winsock I/O models?

u_long ul = 1;  /* non-zero enables non-blocking mode */
ioctlsocket(hsock, FIONBIO, &ul);
My memory is fuzzy here since it's probably been 15 years since I've used UDP non-blocking.
However, there are some things of which you should be aware.
Send only smallish packets if you're going over a public network. The path MTU can trip you up if either the client or the server is not written to cope with incomplete packets.
Make sure you check that you actually sent the number of bytes you think you sent. It can get weird when you're expecting to see 300 bytes sent and the receiving end only gets 248. Both the client side and the server side have to be aware of this issue; see the sketch after the links below.
See here for some good advice from the Linux folks.
See here for the Unix Socket FAQ for UDP
This is a good, general network programming FAQ and example page.
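A minimal sketch of that check (POSIX-flavored; `dst` and `dstlen` are assumed to hold the destination address):

ssize_t sent = sendto(sock, pkt, payloadSize, 0,
                      (struct sockaddr *)&dst, dstlen);
if (sent < 0) {
    /* handle the error in errno */
} else if ((size_t)sent != payloadSize) {
    /* fewer bytes accepted than expected -- the receiver will not
       see the datagram you thought you sent, so log and handle it */
}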
How about measuring the time send takes and then sleeping only the time remaining up to the 20 ms?
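A rough sketch of that approach, using Win32 timing calls to match the Sleep() loop above (an assumption-laden illustration, not tested code):

LARGE_INTEGER freq, t0, t1;
QueryPerformanceFrequency(&freq);
while (!stop) {
    QueryPerformanceCounter(&t0);
    send(sock, pkt, payloadSize, flags);
    QueryPerformanceCounter(&t1);
    /* Milliseconds spent inside send(). */
    DWORD elapsedMs = (DWORD)((t1.QuadPart - t0.QuadPart) * 1000 / freq.QuadPart);
    if (elapsedMs < fixedInterval)
        Sleep(fixedInterval - elapsedMs);   /* sleep only what is left of the period */
}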
When I was taking a look at setsockopt() on MSDN, I came across the SO_RCVTIMEO option. Its description is "Sets the timeout, in milliseconds, for blocking receive calls." I thought the socket receive operation was event-driven, meaning that when the kernel drains a frame from the NIC it notifies my program's socket, so what is the blocking all about?
The recv and WSARecv functions are blocking. They are not event-driven (at least not at the calling level). Even when the blocking has a timeout (as set with the SO_RCVTIMEO option), they are not event-driven as far as your code is concerned. In that case, they are just pseudo-blocking (arguably non-blocking, depending on how short the timeout is).
When you call WSARecv, it will wait until data is ready to be read. While data is not ready to be read, it just waits. This is why it's considered blocking.
You are correct that at its core networking is event-driven. Under the hood, computers are, by nature, event-driven; it's the way hardware works. Hardware interrupts are essentially events. You're right that at a low level what happens is that your NIC tells the OS that it's ready to be read. At that level, it is indeed event-based.
The problem is that WSARecv waits for that event.
Here's a hopefully clear analogy. Imagine that you for some reason cannot leave your house. Now imagine that your friend F lives next door. Additionally, assume that your other friend G is at your house.
Now imagine that you give G a piece of paper with a question on it and ask him to take it to F.
Once the question has been sent, imagine that you send G to go get F's response. This is like the recv call. G will wait until F has written down his response, then he will bring it to you. G does not immediately turn around and come back if F hasn't written it yet.
This is where the gap comes from. G is indeed aware of the "F wrote!" events, but you're not. You're not directly watching the piece of paper.
Setting a timeout means that you're telling G to wait at most some amount of time before giving up and coming back. In this situation, G is still waiting on F to write, but if F doesn't write within x milliseconds, G turns around and comes back empty handed.
Basically, the pseudocode of recv looks vaguely like this:

1) Is data available?
   1a) Yes: read it and return.
   1b) No: go to 2.
2) Wait until an event is received.
   2a) Go to 1.
I know this has been a horribly convoluted explanation, but my main point is this: recv is interacting with the events, not your code. recv blocks until one of those events is received. If a timeout is set, it blocks until either one of those events is received, or the timeout is reached.
Sockets are NOT event-driven by default; you have to write extra code to enable that. A socket is initially created in blocking mode instead. This means that a call to send(), recv(), or accept() will, by default, block the calling thread indefinitely until the requested operation is finished.
For recv(), that means the calling thread is blocked until there is at least 1 byte available to read from the socket's receive buffer, or until a socket error occurs, whichever comes first. SO_RCVTIMEO allows you to set a timeout on the blocking read so that recv() exits with a WSAETIMEDOUT error if no incoming data becomes available before the timeout elapses.
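A minimal Winsock sketch of that option (on Windows the option value is a DWORD in milliseconds; the 500 ms value is just an example):

DWORD timeoutMs = 500;
setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO,
           (const char *)&timeoutMs, sizeof(timeoutMs));
int n = recv(sock, buf, sizeof(buf), 0);
if (n == SOCKET_ERROR && WSAGetLastError() == WSAETIMEDOUT) {
    /* no data arrived within 500 ms -- decide whether to retry or give up */
}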
Another way to implement a timeout is to set the socket to non-blocking mode instead via ioctlsocket(FIONBIO) and then call select() with a timeout, calling recv() or accept() only if select() reports that the socket is in a readable state, and send() only if select() reports that it is in a writable state. But this requires more code to manage the cases where the socket would enter a blocking state, causing operations to fail with WSAEWOULDBLOCK errors.
I would like to know whether the following scenario is real:
select() (RD) on a non-blocking TCP socket says that the socket is ready
a following recv() returns EWOULDBLOCK despite the call to select()
For recv() you would get EAGAIN rather than EWOULDBLOCK, and yes, it is possible. Since you have just checked with select(), one of two things happened:
Something else (another thread) has drained the input buffer between select() and recv().
A receive timeout was set on the socket and it expired without data being received.
It's possible, but only in a situation where you have multiple threads/processes trying to read from the same socket.
On Linux it's even documented that this can happen, as I read it.
See this question:
Spurious readiness notification for Select System call
I am aware of an error in a popular desktop operating system where O_NONBLOCK TCP sockets, particularly those running over the loopback interface, can sometimes return EAGAIN from recv() after select() reports the socket as ready for reading. In my case, this happens after the other side half-closes the sending stream.
For more details, see the source code for t_nx.ml in the NX library of my OCaml Network Application Environment distribution. (link)
Though my application is single-threaded, I noticed that the described behavior is not uncommon on RHEL5, both with TCP and UDP sockets that were set to O_NONBLOCK (the only socket option that is set): select() reports that the socket is ready, but the following recv() returns EAGAIN.
Yes, it's real. Here's one way it can happen:
A future modification to the TCP protocol adds the ability for one side to "revoke" information it sent provided it hasn't been received yet by the other side's application layer. This feature is negotiated on the connection. The other side sends you some data, you get a select hit. Before you can call recv, the other side "revokes" the data using this new extension. Your read gets a "would block" error because no data is available to be read.
The select function is a status-reporting function that does not come with future guarantees. Assuming that a hit from select now assures that a subsequent operation won't block is as invalid as using any other status-reporting function this way. It's as bad as using access() to try to ensure a subsequent operation won't fail due to incorrect permissions, or using statfs() to try to ensure a subsequent write won't fail due to a full disk.
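In practice, the robust pattern is to treat readiness as a hint only. A minimal POSIX sketch (the socket is assumed non-blocking; `handle_data` is a hypothetical handler):

#include <sys/select.h>
#include <sys/socket.h>
#include <errno.h>

for (;;) {
    fd_set rfds;
    FD_ZERO(&rfds);
    FD_SET(sock, &rfds);
    if (select(sock + 1, &rfds, NULL, NULL, NULL) < 0)
        break;                          /* genuine error */
    ssize_t n = recv(sock, buf, sizeof(buf), 0);
    if (n >= 0)
        handle_data(buf, n);            /* hypothetical handler */
    else if (errno != EAGAIN && errno != EWOULDBLOCK)
        break;                          /* genuine error; otherwise the readiness
                                           was spurious, so just loop and re-select */
}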
It is possible in a multithreaded environment where two threads are reading from the socket. Is this a multithreaded application?
If you do not call any other syscall between select() and recv() on this socket, then recv() will never return EAGAIN or EWOULDBLOCK.
I don't know what they mean by a receive timeout; however, the POSIX standard does not mention one here, so you should be safe calling recv().