What does blocking mean in the setsockopt parameter SO_RCVTIMEO? - c++

When I was taking a look at setsockopt on MSDN, I came across the parameter SO_RCVTIMEO. Its description is "Sets the timeout, in milliseconds, for blocking receive calls." I thought the socket receive operation was event driven, meaning that when the kernel drains a frame from the NIC it notifies my program's socket, so what is the blocking all about?

The recv and WSARecv functions are blocking. They are not event driven (at least not at the calling level). Even when blocking has a timeout (as set with the SO_RCVTIMEO option), they are not event driven as far as your code is concerned. In that case, they are just pseudo-blocking (arguably non-blocking depending on how short the timeout is).
When you call WSARecv, it will wait until data is ready to be read. While data is not ready to be read, it just waits. This is why it's considered blocking.
You are correct that at its core networking is event driven. Under the hood, computers are, by nature, event driven. It's the way hardware works. Hardware interrupts are essentially events. You're right that at a low level what is happening is that your NIC is telling the OS that it's ready to be read. At that level, it is indeed event based.
The problem is that WSARecv waits for that event.
Here's a hopefully clear analogy. Imagine that you for some reason cannot leave your house. Now imagine that your friend F lives next door. Additionally, assume that your other friend G is at your house.
Now imagine that you give G a piece of paper with a question on it and ask him to take it to F.
Once the question has been sent, imagine that you send G to go get F's response. This is like the recv call. G will wait until F has written down his response, then he will bring it to you. G does not immediately turn around and come back if F hasn't written it yet.
This is where the gap comes from. G is indeed aware of the "F wrote!" events, but you're not. You're not directly watching the piece of paper.
Setting a timeout means that you're telling G to wait at most some amount of time before giving up and coming back. In this situation, G is still waiting on F to write, but if F doesn't write within x milliseconds, G turns around and comes back empty handed.
Basically the pseudo code of recv is vaguely like:
1) is data available?
1a) Yes: read it and return
1b) No: GOTO 2
2) Wait until an event is received
2a) GOTO 1
I know this has been a horribly convoluted explanation, but my main point is this: recv is interacting with the events, not your code. recv blocks until one of those events is received. If a timeout is set, it blocks until either one of those events is received, or the timeout is reached.
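To make the timeout behaviour concrete, here is a minimal sketch using POSIX sockets (the question is about Winsock, where the option value is a DWORD of milliseconds rather than a struct timeval, but the blocking semantics are the same). Nothing ever arrives on this socket, so recv() blocks until SO_RCVTIMEO expires and then fails with EAGAIN/EWOULDBLOCK:

```cpp
#include <cerrno>
#include <chrono>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>

// Returns true if recv() on an idle UDP socket gives up with
// EAGAIN/EWOULDBLOCK after roughly the 200 ms SO_RCVTIMEO timeout.
bool recv_times_out() {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) return false;

    timeval tv{};                 // 200 ms receive timeout
    tv.tv_sec = 0;
    tv.tv_usec = 200 * 1000;
    setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

    char buf[16];
    auto start = std::chrono::steady_clock::now();
    ssize_t n = recv(fd, buf, sizeof(buf), 0);   // blocks: nothing will arrive
    auto elapsed = std::chrono::steady_clock::now() - start;
    int err = errno;
    close(fd);

    // Once the timeout elapses, recv() returns -1 with EAGAIN/EWOULDBLOCK
    return n == -1 && (err == EAGAIN || err == EWOULDBLOCK) &&
           elapsed >= std::chrono::milliseconds(150);
}
```

In the analogy above, this is G waiting next to the paper for 200 ms and then coming back empty handed.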

Sockets are NOT event-driven by default. You have to write extra code to enable that. A socket is initially created in a blocking mode instead. This means that a call to send(), recv(), or accept() will block the calling thread indefinitely by default until the requested operation is finished.
For recv(), that means the calling thread is blocked until there is at least 1 byte available to read from the socket's receive buffer, or until a socket error occurs, whichever occurs first. SO_RCVTIMEO allows you to set a timeout on the blocking read so recv() exits with a WSAETIMEDOUT error if no incoming data becomes available before the timeout elapses.
Another way to implement a timeout is to set the socket to a non-blocking mode instead via ioctlsocket(FIONBIO) and then call select() with a timeout, then call recv() or accept() only if select() reports that the socket is in a readable state, and send() only if select() reports the socket is in a writable state. But this requires more code to manage cases where the socket would enter a blocking state, causing operations to fail with WSAEWOULDBLOCK errors.
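The select()-with-timeout approach can be reduced to a small helper. A POSIX sketch (on Winsock the shape is the same, but the first argument to select() is ignored):

```cpp
#include <sys/select.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>

// Waits up to timeout_ms for fd to become readable. Returns 1 if readable,
// 0 on timeout, -1 on error -- a thin wrapper over select().
int wait_readable(int fd, int timeout_ms) {
    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(fd, &readfds);
    timeval tv{};
    tv.tv_sec  = timeout_ms / 1000;
    tv.tv_usec = (timeout_ms % 1000) * 1000;
    return select(fd + 1, &readfds, nullptr, nullptr, &tv);
}
```

A recv() call made only after wait_readable() returns 1 will not block, even on a blocking socket.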

Related

C++ server with recv/send commands & request/response design

I'm trying to create a server with blocking sockets (one new thread for each new client). This thread should be able to receive commands from the client (and send back the result) and periodically send commands to the client (and request back the result).
What I've thought is creating two threads for each client, one for recv, second for send. However:
it's double of the normal thread overhead.
due to the request/response design, the recv I do in the first thread (waiting for the client's commands) can actually be the response I'm looking for in the second thread (the client's reply to my send), and vice versa. Keeping it all properly synced is probably a hell story. So now I'm thinking of doing it all from a single thread this way:
In a loop:
setsockopt(SO_RCVTIMEO, &small_timeout); // set the timeout for the recv (like 1000 ms).
recv(); // check for client's requests first. if it returns WSAETIMEDOUT then I assume no data is requested and do nothing. if I get a normal request I handle it.
if (clientbufferToSend != nullptr) send(clientbufferToSend); // now that the client's request has been processed we check the command list we have to send to the client. if there are commands in the queue, we send them. the SO_SNDTIMEO timeout can be set to a large value so we don't deadlock if the client loses the connection.
setsockopt(SO_RCVTIMEO, &large_timeout); // set the timeout for the recv (as large as SO_SNDTIMEO, just to not deadlock if anything).
recv(); // now we wait the response from the client.
Is this a legal way to do what I want? Or are there better alternatives (preferably with blocking sockets and threads)?
P.S. Does recv() with a timeout return WSAETIMEDOUT only if no data is available? Can it return this error if there is data, but recv() wasn't fast enough to handle it all, thus returning partial data?
One approach is to create a background thread only for reading from that socket, and write from whatever thread your unsolicited events are raised on.
You'll need the following.
A critical section or mutex per socket to serialize writes, e.g. when the background thread is sending a response to a client-initiated message while another thread wants to send a message to the same client.
Some other synchronization primitive, like a condition variable, for the client thread to sleep on while waiting for responses.
The background thread which receives messages needs to distinguish client-initiated messages (which are answered by the same background thread) from responses to server-initiated messages. If your network protocol doesn't carry that information, you'll have to change the protocol.
This will work OK if your server-initiated events are only raised on a single thread, e.g. they come from some serialized source like a device or an OS interface.
If, however, the event source is multithreaded as well, and you want good performance, you're going to need non-trivial complexity to dispatch the responses to the correct server thread: e.g. one condition variable per client thread, maybe some queues, etc.
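The per-socket write serialization described above can be sketched like this (LockedSender is a hypothetical name; the point is that the reader thread answering client requests and any event thread pushing server-initiated messages take the same mutex before send(), so their writes never interleave):

```cpp
#include <mutex>
#include <string>
#include <sys/socket.h>
#include <unistd.h>

// Hypothetical per-connection wrapper: a mutex serializes send() calls so
// two threads writing to the same client can never interleave partial writes.
class LockedSender {
public:
    explicit LockedSender(int fd) : fd_(fd) {}

    bool send_all(const std::string& msg) {
        std::lock_guard<std::mutex> lock(mu_);   // one writer at a time
        size_t sent = 0;
        while (sent < msg.size()) {              // handle short writes too
            ssize_t n = ::send(fd_, msg.data() + sent, msg.size() - sent, 0);
            if (n <= 0) return false;            // error: let the caller react
            sent += static_cast<size_t>(n);
        }
        return true;
    }
private:
    int fd_;
    std::mutex mu_;
};
```

The condition-variable side (client threads sleeping until their response is dispatched) would sit on top of this, keyed by whatever request identifier the protocol carries.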

Epoll zero recv() and negative(EAGAIN) send()

I've been struggling with epoll for the last few days and I'm in the middle of nowhere right now ;)
There's a lot of information on the Internet and obviously in the system man pages, but I've probably taken an overdose and am a bit confused.
In my server app(backend to nginx) I'm waiting for data from clients in the ET mode:
event_template.events = EPOLLIN | EPOLLRDHUP | EPOLLET
Things became curious when I noticed that nginx was responding with 502 despite my seeing a successful send() on my side. I ran wireshark to sniff and realised that my server was sending (trying, and getting RST) data to another machine on the net. So, I decided that the socket descriptor was invalid and this was some sort of "undefined behaviour". Finally, I found out that on a second recv() I was getting zero bytes, which means that the connection has to be closed and I'm not allowed to send data back anymore. Nevertheless, I was getting not just EPOLLIN from epoll but EPOLLRDHUP in a row.
Question: Do I have to close socket just for reading when recv() returns zero and shutdown(SHUT_WR) later on during EPOLLRDHUP processing?
Reading from socket in a nutshell:
std::array<char, BatchSize> batch;
ssize_t total_count = 0, count = 0;
do {
    count = recv(_handle, batch.data(), batch.size(), MSG_DONTWAIT);
    if (0 == count && 0 == total_count) {
        /// #??? Do I need to wait zero just on first iteration?
        close();
        return total_count;
    } else if (count < 0) {
        if (errno == EAGAIN || errno == EWOULDBLOCK) {
            /// #??? Will be back with next EPOLLIN?!
            break;
        }
        _last_error = errno;
        /// #brief just log the error
        return 0;
    }
    if (count > 0) {
        total_count += count;
        /// DATA!
        if (count < static_cast<ssize_t>(batch.size())) {
            /// #??? Received less than requested - no sense to repeat recv, otherwise I need one more turn?!
            return total_count;
        }
    }
} while (count > 0);
Probably my general mistake was the attempt to send data on an invalid socket descriptor, and everything that happened later is just a consequence. But I continued to dig ;) The second part of my question is about writing to a socket in MSG_DONTWAIT mode as well.
As far as I now know, send() may also return -1 with EAGAIN, which means that I'm supposed to subscribe to EPOLLOUT and wait until the kernel buffer is free enough to receive some data from me. Is this right? But what if the client won't wait so long? Or may I call blocking send() (anyway, I'm sending on a different thread) and guarantee that everything I send to the kernel will really be sent to the peer because of setsockopt(SO_LINGER)? And a final guess which I ask you to confirm: I'm allowed to read and write simultaneously, but N>1 concurrent writes is a data race, and all I need to deal with it is a mutex.
Thanks to everyone who at least read to the end :)
Questions: Do I have to close socket just for reading when recv()
returns zero and shutdown(SHUT_WR) later on during EPOLLRDHUP
processing?
No, there is no particular reason to perform that somewhat convoluted sequence of actions.
Having received a 0 return value from recv(), you know that the connection is at least half-closed at the network layer. You will not receive anything further from it, and I would not expect EPoll operating in edge-triggered mode to further advertise its readiness for reading, but that does not in itself require any particular action. If the write side remains open (from a local perspective) then you may continue to write() or send() on it, though you will be without a mechanism for confirming receipt of what you send.
What you actually should do depends on the application-level protocol or message exchange pattern you are assuming. If you expect the remote peer to shutdown the write side of its endpoint (connected to the read side of the local endpoint) while awaiting data from you then by all means do send the data it anticipates. Otherwise, you should probably just close the whole connection and stop using it when recv() signals end-of-file by returning 0. Note well that close()ing the descriptor will remove it automatically from any Epoll interest sets in which it is enrolled, but only if there are no other open file descriptors referring to the same open file description.
Either way, until you do close() the socket, it remains valid, even if you cannot successfully communicate over it. Until then, there is no reason to expect that messages you attempt to send over it will go anywhere other than possibly to the original remote endpoint. Attempts to send may succeed, they may appear to succeed even though the data never arrive at the far end, or they may fail with one of several different errors.
/// #??? Do I need to wait zero just on first iteration?
You should take action on a return value of 0 whether any data have already been received or not. Not necessarily identical action, but either way you should arrange one way or another to get it out of the EPoll interest set, quite possibly by closing it.
/// #??? Will be back with next EPOLLIN?!
If recv() fails with EAGAIN or EWOULDBLOCK then EPoll might very well signal read-readiness for it on a future call. Not necessarily the very next one, though.
/// #??? Received less than requested - no sense to repeat recv, otherwise I need one more turn?!
Receiving less than you requested is a possibility you should always be prepared for. It does not necessarily mean that another recv() won't return any data, and if you are using edge-triggered mode in EPoll then assuming the contrary is dangerous. In that case, you should continue to recv(), in non-blocking mode or with MSG_DONTWAIT, until the call fails with EAGAIN or EWOULDBLOCK.
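The "keep calling recv() until EAGAIN" rule for edge-triggered mode can be condensed into one helper, sketched here with MSG_DONTWAIT (names are illustrative, not from the asker's code):

```cpp
#include <cerrno>
#include <string>
#include <sys/socket.h>
#include <unistd.h>

// Drains everything currently readable from a socket, as edge-triggered
// epoll requires: keep calling recv(MSG_DONTWAIT) until it fails with
// EAGAIN/EWOULDBLOCK (more may arrive later) or returns 0 (peer closed).
// Appends the bytes read to `out`; returns false only on a real error.
bool drain(int fd, std::string& out) {
    char buf[4096];
    for (;;) {
        ssize_t n = recv(fd, buf, sizeof(buf), MSG_DONTWAIT);
        if (n > 0) { out.append(buf, n); continue; }   // got data, keep going
        if (n == 0) return true;                       // EOF: peer closed
        if (errno == EAGAIN || errno == EWOULDBLOCK)
            return true;                               // buffer empty for now
        return false;                                  // genuine error
    }
}
```

Note that a short read does not stop the loop; only EAGAIN/EWOULDBLOCK (or EOF) does, which is exactly the point the answer makes.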
As far as I now know, send() may also return -1 and EAGAIN which means that I'm supposed to subscribe on EPOLLOUT and wait when kernel buffer will be free enough to receive some data from my me. Is this right?
send() certainly can fail with EAGAIN or EWOULDBLOCK. It can also succeed, but send fewer bytes than you requested, which you should be prepared for. Either way, it would be reasonable to respond by subscribing to EPOLLOUT events on the file descriptor, so as to resume sending later.
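The EAGAIN case on the sending side is easy to reproduce: on a non-blocking socket whose peer is not reading, send() accepts data until the kernel buffer fills, then pushes back. A small sketch (the function name is made up for illustration):

```cpp
#include <cerrno>
#include <fcntl.h>
#include <sys/socket.h>
#include <unistd.h>

// Fills a non-blocking socket until send() reports EAGAIN/EWOULDBLOCK -- the
// point at which a real server would register for EPOLLOUT and resume later.
// Returns the number of bytes the kernel accepted before pushing back.
long fill_until_eagain(int fd) {
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_NONBLOCK);
    char chunk[4096] = {0};
    long total = 0;
    for (;;) {
        ssize_t n = send(fd, chunk, sizeof(chunk), 0);
        if (n > 0) { total += n; continue; }       // may be a short write
        if (errno == EAGAIN || errno == EWOULDBLOCK)
            return total;                          // would block: stop here
        return -1;                                 // unexpected error
    }
}
```

Whatever was not accepted must be kept in an application-side queue and retried when EPOLLOUT fires.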
But what if client won't wait so long?
That depends on what the client does in such a situation. If it closes the connection then a future attempt to send() to it would fail with a different error. If you were registered only for EPOLLOUT events on the descriptor then I suspect it would be possible, albeit unlikely, to get stuck in a condition where that attempt never happens because no further event is signaled. That likelihood could be reduced even further by registering for and correctly handling EPOLLRDHUP events, too, even though your main interest is in writing.
If the client gives up without ever closing the connection then EPOLLRDHUP probably would not be useful, and you're more likely to get the stale connection stuck indefinitely in your EPoll. It might be worthwhile to address this possibility with a per-FD timeout.
Or, may I call blocking send(anyway, I'm sending on a different
thread) and guarantee the everything what I send to kernel will be
really sent to peer because of setsockopt(SO_LINGER)?
If you have a separate thread dedicated entirely to sending on that specific file descriptor then you can certainly consider blocking send()s. The only drawback is that you cannot implement a timeout on top of that; but other than that, what would such a thread do anyway except block, either on sending data or on waiting for more data to send?
I don't quite see what SO_LINGER has to do with it, though. The kernel will attempt to deliver data that you have already dispatched via a send() call to the remote peer even if you close() the socket while data are still buffered. What SO_LINGER actually controls is the behaviour of close(): with the option enabled and a positive timeout, close() blocks until the buffered data have been transmitted or the timeout expires; with a linger timeout of zero, close() discards the buffered data and resets the connection instead.
None of this can guarantee that the data are successfully delivered to the remote peer, however. Nothing can guarantee that.
And a final guess which I ask to confirm: I'm allowed to read and
write simultaneously, but N>1 concurrent writes is a data race and
everything that I have to deal with it is a mutex.
Sockets are full-duplex, yes. Moreover, POSIX requires most functions, including send() and recv(), to be thread safe. Nevertheless, multiple threads writing to the same socket is asking for trouble, for the thread safety of individual calls does not guarantee coherency across multiple calls.

Multiple Boost::ASIO async_send_to in one go

I want to increase the throughput of my udp gameserver which uses Boost ASIO.
Right now, every time I need to send a packet, I put it in a queue, then check whether there is a pending async_send_to operation; if yes, I do nothing, if not, I call async_send_to.
Then I wait for the write handler to be called and then call async_send_to for the next packet in the queue, if any.
The documentation says that it is the way to do it "for TCP socket", but there is NOTHING on the whole internet about UDP socket.
Try it, search it on stackoverflow, you will see nobody talks about this, and for the 2 questions you will find, the question is left ignored by users.
Why is it kept a secret?
And for the 1 million dollar question: can I safely call async_send_to multiple times in a row WITHOUT waiting for the write handler to be called?
Thanks in advance.
This logic is meaningless for the UDP protocol since it doesn't need to block the send operation. A datagram is either delivered or lost. UDP doesn't have to store it in the output buffer and resend it indefinitely many times until it gets an ACK packet.
No, you cannot safely call async_send_to multiple times in a row WITHOUT waiting for the write handler to be called. See Asynchronous IO with Boost.Asio to see precisely why.
However, asio supports scatter gather and so you can call async_send_to with multiple buffers, e.g.:
typedef std::deque<boost::asio::const_buffer> ConstBuffers;
std::string msg_1("Blah");
...
std::string msg_n("Blah");
ConstBuffers buffers;
buffers.push_back(boost::asio::buffer(msg_1));
...
buffers.push_back(boost::asio::buffer(msg_n));
socket_.async_send_to(buffers, tx_endpoint_, write_handler);
So you could increase your throughput by double buffering your message queue and using gathered writes...
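The same gather idea exists at the syscall level as writev(): several application buffers go out in a single call, which is what a buffer sequence gives you in Asio. A minimal POSIX sketch (plain syscalls, not Asio, so it can stand alone):

```cpp
#include <string>
#include <sys/uio.h>
#include <unistd.h>

// Gathered write: hand the kernel several buffers in one writev() call, the
// syscall-level counterpart of passing a buffer sequence to async_send_to.
ssize_t send_gathered(int fd, const std::string& a, const std::string& b) {
    iovec iov[2];
    iov[0].iov_base = const_cast<char*>(a.data());
    iov[0].iov_len  = a.size();
    iov[1].iov_base = const_cast<char*>(b.data());
    iov[1].iov_len  = b.size();
    return writev(fd, iov, 2);   // both buffers leave in a single syscall
}
```

On a datagram socket the gathered buffers form a single datagram, which is why this raises throughput compared with one send per queued message.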

Cancel pending recv?

Hi, I'm working on a networking project. I have a socket that is listening for incoming data. Now I want to achieve this: the socket will receive only 100 packets, and there are 3-4 clients sending random data packets infinitely. I'll receive 100 packets and then process them. After processing I'll restart receiving. But by that time there are some pending send() >> recv() operations. Now I want to cancel/discard the pending recv operations. I think we could recv the data and just not process it. Any other suggestions? (sorry for bad question composition)
Shutdown and close the connection. That will cancel everything immediately.
Better yet, rearchitect your application and network protocol so that you can reliably tell how much data to receive.
On Windows you can cancel outstanding receives using CancelIo, but that might result in lost data if the receive just happened to read something.
You can use select() or poll() loops.
You can use a signal. recv() will return on receiving a signal, so you can send a signal from another task to the task that blocks on recv(). But you need to make sure you don't specify SA_RESTART (see http://pubs.opengroup.org/onlinepubs/9699919799/functions/sigaction.html)
Read http://en.wikipedia.org/wiki/Asynchronous_I/O for more details
I would go with non-blocking sockets + cancellation socket.
You'll have to read into dedicated incremental buffer (as recv() may not receive all the data expected at once - this would be the case if you can only process full messages) and return to select()/poll() in your loop, where you can safely sit and wait for:
next data
next connection
cancellation event from a cancellation socket, to which your other thread will send a cancellation signal (some trivial send()).
UPD: the trivial event may be the number of the socket in the array, or its handle - something to identify which one you'd like to cancel.
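The cancellation-socket idea above is essentially the classic self-pipe trick. A minimal sketch using a pipe as the wakeup channel (a socketpair would work the same way):

```cpp
#include <sys/select.h>
#include <unistd.h>

// "Cancellation socket" sketch: select() watches both the data fd and the
// read end of a wakeup pipe; another thread writes one byte to the pipe's
// write end to wake the loop. Returns 1 if data_fd is readable, 0 if only
// the wakeup fired, -1 on error.
int wait_data_or_cancel(int data_fd, int cancel_fd) {
    fd_set rfds;
    FD_ZERO(&rfds);
    FD_SET(data_fd, &rfds);
    FD_SET(cancel_fd, &rfds);
    int maxfd = data_fd > cancel_fd ? data_fd : cancel_fd;
    if (select(maxfd + 1, &rfds, nullptr, nullptr, nullptr) < 0) return -1;
    return FD_ISSET(data_fd, &rfds) ? 1 : 0;
}
```

The loop never blocks inside recv() itself, so "cancelling a pending recv" reduces to one trivial write from any other thread.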

How to unblock the recv() or recvfrom() function in Linux C

I want to send a UDP packet to a camera from the PC when the PC resumes from sleep. Since it takes some time (unknown) to the network interface to become alive after the PC resumes, I keep sending packets to the camera in a loop. When the camera receives the packet, it sends an acknowledge signal to the PC.
My problem is: for receiving the UDP packet from the camera (the ack signal), I use the recvfrom() function, which blocks the loop. How do I unblock this function so that it exits the loop only when it receives the acknowledge signal from the camera?
Use the MSG_DONTWAIT flag passed to the recvfrom function. It enables non-blocking mode for that call. If the operation would block, the call fails with the EAGAIN or EWOULDBLOCK error code.
A more portable solution to maverik's answer (which is otherwise correct) would be to fcntl the socket to O_NONBLOCK.
MSG_DONTWAIT, although available under Linux and BSD and most Unices is only standardized in SUSv4 for sending (why, I wouldn't know... but M. Kerrisk says so). One notable platform which doesn't support it is Winsock (at least it's not documented in MSDN).
Alternatively, if you don't want to tamper with obscure flags and fcntl, you could select the descriptor for readiness (with a zero timeout, or even with a non-zero timeout to throttle the packets you send -- it's probably a good idea not to flood the network stack). Keep sending until select says something can be read.
The easiest way (though not the nicest code) to wait a while for a reply is to use select() before calling recvfrom().
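The fcntl(O_NONBLOCK) approach suggested above looks like this in a minimal sketch; with the flag set, recvfrom() returns immediately with EAGAIN/EWOULDBLOCK when no datagram is queued, so the send-and-poll loop never stalls waiting for the ack:

```cpp
#include <cerrno>
#include <fcntl.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

// Puts a UDP socket into non-blocking mode with fcntl(O_NONBLOCK) and shows
// that recvfrom() then fails fast with EAGAIN/EWOULDBLOCK instead of blocking.
bool recvfrom_does_not_block() {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) return false;
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_NONBLOCK);

    char buf[64];
    ssize_t n = recvfrom(fd, buf, sizeof(buf), 0, nullptr, nullptr);
    int err = errno;
    close(fd);
    return n == -1 && (err == EAGAIN || err == EWOULDBLOCK);
}
```

In the camera loop you would treat EAGAIN as "no ack yet, send another probe", and any positive return as the acknowledgement that breaks the loop.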