Initial questions here
So I've been reading up on asynchronous sockets, and I have a couple more questions. Mostly concrete.
1: I can use a blocking socket with select() without repercussions, correct?
2: When I use FD_SET() I'm appending the current fd_set* not changing it, correct?
3: When using FD_CLR(), I can simply pass in the socket ID of the socket I wish to remove, right?
4: When I remove a socket, using FD_CLR(), is there a preferred way of resetting the Max File Descriptor (nfds)?
5: Say I have all of my connected sockets in a vector, when select() returns, I can just iterate through that vector and check if (FD_ISSET (theVector[loopNum], &readFileSet)) to see if any data needs to be read, correct? And if this returns true, I can simply use the same receiving function I was using on my synchronous sockets to retrieve that data?
6: What happens if select() attempts to read from a closed socket? I know it returns -1, but does it set errno or is there some other way I can continue to use select()?
7: Why are you so awesome? =D
I appreciate your time, sorry for the headache, and I hope you can help!
Yes
Unclear? FD_SET inserts a socket into the set. If the socket is already there, nothing changes.
FD_CLR removes a socket from the set; if the socket isn't there, nothing changes.
You could keep a parallel set<> of sockets and get the highest value from there. Or you could just set a bool saying "rescan for nfds before the next select" (NOTE: on Windows nfds is ignored).
Correct
If select fails, the quick fix is to iterate over the sockets and select() on each of them one by one to find the bogus one. Ideally your code should not allow select() on a socket you have closed, though; if the other end closed it, it's perfectly valid to select on.
I need to get you to talk to my wife.
So I've been reading up on asynchronous sockets
Judging by what follows I don't think you have. You appear to have been reading about non-blocking sockets. Not the same thing.
1: I can use a blocking socket with select() without repercussions, correct?
No. Consider the case where a listening socket becomes readable, indicating an impending accept(), but meanwhile the client closes the connection. If you then call accept() you will block until the next incoming connection, preventing you from servicing other sockets.
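One common way to avoid that trap (not part of the original answer, just a sketch) is to put the listening socket, and the accepted sockets, into non-blocking mode so a spurious readiness report makes accept() or recv() fail with EWOULDBLOCK instead of hanging. set_nonblocking() is an illustrative name only:

    #include <fcntl.h>

    // Sketch: make a descriptor non-blocking so accept()/recv() never hang.
    // fcntl() is the portable POSIX way; error handling is kept minimal here.
    bool set_nonblocking(int fd)
    {
        int flags = fcntl(fd, F_GETFL, 0);
        if (flags == -1)
            return false;
        return fcntl(fd, F_SETFL, flags | O_NONBLOCK) != -1;
    }

    // With the listening socket non-blocking, a "ghost" readiness report just
    // makes accept() return -1 with EWOULDBLOCK/EAGAIN instead of blocking.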
2: When I use FD_SET() I'm appending the current fd_set* not changing it, correct?
No. You are setting a bit. If it's already set, nothing changes.
3: When using FD_CLR(), I can simply pass in the socket ID of the socket I wish to remove, right?
Correct.
4: When I remove a socket, using FD_CLR(), is there a preferred way of resetting the Max File Descriptor (nfds)?
Not really; just re-scan and re-compute. Strictly speaking you don't need to reset it at all, since nfds only has to be at least one greater than the highest descriptor you pass in, so a stale, slightly-too-large value still works.
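If you do want to keep it tight, here is a minimal sketch of the re-scan, assuming you keep the live sockets in a std::set as suggested in the earlier answer (compute_nfds is just an illustrative name):

    #include <set>
    #include <sys/select.h>

    // Recompute the first argument to select() from a container of live
    // descriptors. std::set keeps them ordered, so the highest descriptor
    // is simply the last element.
    int compute_nfds(const std::set<int>& live_sockets)
    {
        if (live_sockets.empty())
            return 0;
        return *live_sockets.rbegin() + 1;   // select() wants max fd + 1
    }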
5: Say I have all of my connected sockets in a vector, when select() returns, I can just iterate through that vector and check if (FD_ISSET (theVector[loopNum], &readFileSet)) to see if any data needs to be read, correct?
Correct, but it's more usual just to iterate through the FD set itself.
And if this returns true, I can simply use the same receiving function I was using on my synchronous sockets to retrieve that data?
On your blocking sockets, yes.
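Putting question 5 together with the earlier answers, a rough sketch of what one pass of the loop might look like; theVector and readFileSet come from the question, handle_data() is a hypothetical handler, and the usual <sys/select.h> / <sys/socket.h> headers are assumed:

    // Sketch only: rebuild the set, select, then walk the vector of sockets.
    fd_set readFileSet;
    FD_ZERO(&readFileSet);
    int nfds = 0;
    for (std::size_t i = 0; i < theVector.size(); ++i) {
        FD_SET(theVector[i], &readFileSet);
        if (theVector[i] > nfds)
            nfds = theVector[i];
    }

    if (select(nfds + 1, &readFileSet, NULL, NULL, NULL) > 0) {
        for (std::size_t i = 0; i < theVector.size(); ++i) {
            if (FD_ISSET(theVector[i], &readFileSet)) {
                char buf[4096];
                ssize_t n = recv(theVector[i], buf, sizeof buf, 0);
                if (n > 0)
                    handle_data(theVector[i], buf, n);   // hypothetical handler
                else if (n == 0)
                    { /* peer closed: FD_CLR it and drop it from theVector */ }
                else
                    { /* error: inspect errno */ }
            }
        }
    }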
6: What happens if select() attempts to read from a closed socket?
select() doesn't 'attempt to read from' a closed socket. It may attempt to select on a closed socket, in which case it will return -1 with errno == EBADF, as stated in the documentation.
I know it returns -1, but does it set errno or is there some other way I can continue to use select()?
See above.
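The "find the bogus descriptor" tactic mentioned in the first set of answers can look like this; a hedged sketch, with find_bad_fd being an illustrative name only:

    #include <sys/select.h>
    #include <cerrno>
    #include <vector>

    // Probe each descriptor individually with a zero timeout; the one that
    // makes select() fail with EBADF is the descriptor that has been closed.
    int find_bad_fd(const std::vector<int>& socks)
    {
        for (std::size_t i = 0; i < socks.size(); ++i) {
            fd_set probe;
            FD_ZERO(&probe);
            FD_SET(socks[i], &probe);
            timeval tv = {0, 0};             // poll, do not block
            if (select(socks[i] + 1, &probe, NULL, NULL, &tv) == -1 && errno == EBADF)
                return socks[i];
        }
        return -1;                           // all descriptors look valid
    }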
Related
I made a server that uses select() to check which of the socket descriptors have data in them, but apparently select() marks a socket as ready to read even after the client disconnects, and I get garbage values.
I have found this post on stack overflow:
select (with the read mask set) will return with the handle signalled, but when you use ioctl to check the number of bytes pending to be read, it will be zero.
My question is: what is ioctl and how do I use it? An example would be very good.
If a call to read() on a socket (file) descriptor returns 0, that simply means the other side of the connection has shut down and closed the connection.
Note: A select() waiting for possible "events" on set(s) of socket (file) descriptors will also return when a connection represented by one of the fd_sets passed to select() has been shut down.
Check the usual errors people make when using select(2):
Always re-initialize the fd_sets you give to select(2) on every iteration - these are input-output arguments that the system call modifies for you.
Re-calculate fd_max, the first argument, on every iteration.
Check for errors from all system calls, and check the value of errno(3).
And, yes, read(2) returns zero when the other side closed TCP connection cleanly, don't use that socket anymore, just close(2) it.
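To tie the checklist together, and to answer the ioctl question above: FIONREAD is the ioctl request that reports how many bytes are waiting to be read on a descriptor. A sketch of one iteration of such a loop (client_fds, nclients and remove_client() are hypothetical):

    #include <sys/select.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cerrno>
    #include <cstdio>

    // One iteration of a typical select() loop (sketch only).
    fd_set rfds;
    FD_ZERO(&rfds);                          // re-initialize on EVERY iteration
    int fd_max = 0;
    for (int i = 0; i < nclients; ++i) {
        FD_SET(client_fds[i], &rfds);
        if (client_fds[i] > fd_max)
            fd_max = client_fds[i];          // re-calculate fd_max every iteration
    }

    if (select(fd_max + 1, &rfds, NULL, NULL, NULL) == -1) {
        perror("select");                    // always check errno
    } else {
        for (int i = 0; i < nclients; ++i) {
            if (!FD_ISSET(client_fds[i], &rfds))
                continue;
            int pending = 0;
            ioctl(client_fds[i], FIONREAD, &pending);   // bytes waiting to be read
            // "readable" with 0 pending bytes almost always means the peer closed.
            char buf[4096];
            ssize_t n = read(client_fds[i], buf, sizeof buf);
            if (n == 0) {                    // clean close by the peer
                close(client_fds[i]);
                remove_client(i);            // hypothetical bookkeeping
            } else if (n > 0) {
                /* process n bytes of buf */
            }
        }
    }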
I need to force read() on a socket to return zero without closing the connection.
I read the following statement in a page saying:
If an end-of-file condition is received or the connection is closed, 0 is returned.
But I don't know how to make it receive that condition after the string I have sent.
Can anyone help?
I'm afraid you can't do that.
If you want read to return zero, you need to close the socket. If you don't want to close the socket, you need to signal "end-of-communication" or "end-of-message" as part of your protocol.
A common way of doing that is prefixing each message with its length. That way the receiving side knows when it has read a complete message and can do whatever it wants with it.
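A minimal sketch of that length-prefix idea, assuming a 4-byte length header in network byte order; send_all() and recv_all() stand for hypothetical helpers that loop until the requested number of bytes has been transferred:

    #include <arpa/inet.h>   // htonl / ntohl
    #include <stdint.h>
    #include <string>

    // Sender: prefix each message with its length in network byte order.
    void send_message(int fd, const std::string& msg)
    {
        uint32_t len = htonl(static_cast<uint32_t>(msg.size()));
        send_all(fd, &len, sizeof len);        // hypothetical "send everything" helper
        send_all(fd, msg.data(), msg.size());
    }

    // Receiver: read the 4-byte header, then exactly that many payload bytes.
    std::string recv_message(int fd)
    {
        uint32_t len = 0;
        recv_all(fd, &len, sizeof len);        // hypothetical "read exactly n bytes" helper
        std::string msg(ntohl(len), '\0');
        recv_all(fd, &msg[0], msg.size());
        return msg;
    }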
If you want the peer's read() or recv() to return zero, you must either close the socket or shut it down for output. In either case you can't send anything else afterwards. If that constraint doesn't suit you, you will have to revise your requirement, as it doesn't make sense.
Both the "end of file condition" and the "connection closed" condition tell the receiver that no more data can be received on this socket. You cannot simulate that by sending some magic data.
Besides calling close on the socket, you can use shutdown(2) to close only the reading side or only the writing side. This might help in limited cases but not in the general case.
Perhaps you need some multiplexing syscall like poll(2).
You definitely need to read some good material like Advanced Linux Programming or Advanced Unix Programming.
If you need TCP/IP to carry application messages, you need to take care of packaging and framing explicitly yourself (either by having fixed-size messages, or by having some way to know the logical message size during transmission). Be aware that a TCP/IP transmission can be fragmented by the network.
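If a half-close is all you need, the shutdown(2) call mentioned above looks like this (sketch): the peer's read()/recv() returns 0 once it has drained the remaining data, while your side can still read whatever the peer sends back.

    #include <sys/socket.h>

    // After sending the last request, close only the writing direction.
    // The peer sees end-of-file (read() returns 0), but the socket stays
    // open for data flowing the other way.
    shutdown(sock_fd, SHUT_WR);   // sock_fd: an already-connected TCP socket

    // We can still read the peer's response here until it closes its side.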
I have the following select call for tcp sockets:
ret = select(nfds + 1, &rfds, &rfds2, NULL, &tv);
rfds2 is used when I send too large data (non-blocking mode). And rfds is there to detect if we received something on the socket.
Now, when the send buffer is empty, I detect it with rfds2. But at the same time I get the socket back in rfds, although there is nothing that I received on that socket.
Is that the intended behaviour of the select call? How can I cleanly distinguish between the send and the receive case?
Now, when the send buffer is empty, I detect it with rfds2
That's not correct. select() will detect when the send buffer has room. It is hardly ever correct to register a socket for OP_READ and OP_WRITE simultaneously. OP_WRITE is almost always ready, except in the brief intervals when the send buffer is full.
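In select() terms that means only putting the socket into the write set while you actually have unsent data queued; a sketch, with out_queue standing in for whatever outgoing buffer your program keeps:

    fd_set rfds, wfds;
    FD_ZERO(&rfds);
    FD_ZERO(&wfds);

    FD_SET(sock_fd, &rfds);          // we always care about incoming data
    if (!out_queue.empty())          // hypothetical buffer of unsent bytes
        FD_SET(sock_fd, &wfds);      // only watch writability while data is pending

    if (select(sock_fd + 1, &rfds, &wfds, NULL, NULL) > 0) {
        if (FD_ISSET(sock_fd, &rfds))
            { /* recv() and process incoming data */ }
        if (FD_ISSET(sock_fd, &wfds))
            { /* send() as much of out_queue as the kernel will take */ }
    }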
Thanks for your answers. I have found the problem for myself:
The faulty code was after the select call (how I used FD_ISSET() to determine which action I can do).
I think my assumption is true: a socket only appears in rfds when there really is data that can be received.
If the socket is non-blocking that seems to be the expected behaviour. The manual page for select has this to say about the readfds argument:
Those listed in readfds will be watched to see if characters become available for reading (more precisely, to see if a read will not block; in particular, a file descriptor is also ready on end-of-file)
Because the socket is non-blocking it is true that a read would not block and hence it is reasonable for that bit to be set.
It shouldn't cause a problem because if you try and read from the socket you will simply get nothing returned and the read won't block.
As a rule of thumb, whenever select returns you should process each socket that it indicates is ready, either reading and processing whatever data is available if it returns as ready-to-read, or writing more data if it returns as ready-to-write. You shouldn't assume that only one event will be signalled each time it returns.
I would like to know if the following scenario is real?!
select() (RD) on non-blocking TCP socket says that the socket is ready
following recv() would return EWOULDBLOCK despite the call to select()
For recv() you would get EAGAIN rather than EWOULDBLOCK, and yes it is possible. Since you have just checked with select() then one of two things happened:
Something else (another thread) has drained the input buffer between select() and recv().
A receive timeout was set on the socket and it expired without data being received.
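Whichever of those happened, the defensive fix is the same: on a non-blocking socket, treat EAGAIN/EWOULDBLOCK from recv() as "nothing to do right now" and go back to select(). A sketch:

    #include <sys/socket.h>
    #include <cerrno>

    char buf[4096];
    ssize_t n = recv(sock_fd, buf, sizeof buf, 0);
    if (n > 0) {
        /* process n bytes */
    } else if (n == 0) {
        /* peer closed the connection */
    } else if (errno == EAGAIN || errno == EWOULDBLOCK) {
        /* spurious wakeup or another reader got the data: just select() again */
    } else {
        /* real error: inspect errno */
    }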
It's possible, but only in a situation where you have multiple threads/processes trying to read from the same socket.
On Linux it's even documented that this can happen, as I read it.
See this question:
Spurious readiness notification for Select System call
I am aware of an error in a popular desktop operating system where O_NONBLOCK TCP sockets, particularly those running over the loopback interface, can sometimes return EAGAIN from recv() after select() reports the socket is ready for reading. In my case, this happens after the other side half-closes the sending stream.
For more details, see the source code for t_nx.ml in the NX library of my OCaml Network Application Environment distribution. (link)
Though my application is a single-threaded one, I noticed that the described behavior is not uncommon on RHEL5, both with TCP and UDP sockets that were set to O_NONBLOCK (the only socket option that is set). select() reports that the socket is ready, but the following recv() returns EAGAIN.
Yes, it's real. Here's one way it can happen:
A future modification to the TCP protocol adds the ability for one side to "revoke" information it sent provided it hasn't been received yet by the other side's application layer. This feature is negotiated on the connection. The other side sends you some data, you get a select hit. Before you can call recv, the other side "revokes" the data using this new extension. Your read gets a "would block" error because no data is available to be read.
The select function is a status-reporting function that does not come with future guarantees. Assuming that a hit on select now assures that a subsequent operation won't block is as invalid as using any other status-reporting function this way. It's as bad as using access to try to ensure a subsequent operation won't fail due to incorrect permissions or using statfs to try to ensure a subsequent write won't fail due to a full disk.
It is possible in a multithreaded environment where two threads are reading from the socket. Is this a multithreaded application?
If you do not call any other syscall between select() and recv() on this socket, then recv() will never return EAGAIN or EWOULDBLOCK.
I don't know what they mean by a recv timeout; the POSIX standard does not mention it here, so you should be safe calling recv().
What is the easiest way to check if a socket was closed on the remote side of the connection? socket::is_open() returns true even if it is closed on the remote side (I'm using boost::asio::ip::tcp::socket).
I could try to read from the stream and see if it succeeds, but I'd have to change the logic of my program to make it work this way (I do not want data to be extracted from the stream at the point of the check).
Just check for boost::asio::error::eof error in your async_receive handler. It means the connection has been closed. That's the only proper way to do this.
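A sketch of such a handler; socket_ and buffer_ are hypothetical members of your connection class, and <boost/asio.hpp> plus C++11 lambdas are assumed:

    void Connection::start_read()
    {
        socket_.async_receive(boost::asio::buffer(buffer_),
            [this](const boost::system::error_code& ec, std::size_t bytes)
            {
                if (ec == boost::asio::error::eof) {
                    // Remote side closed the connection cleanly.
                    return;
                }
                if (ec) {
                    // Some other error; see ec.message().
                    return;
                }
                // Process 'bytes' bytes from buffer_, then keep reading.
                start_read();
            });
    }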
Is there a boost peek function available? Most socket implementations have a way to read data without removing it from the queue, so you can read it again later. This would seem to satisfy your requirements.
After quickly glancing through the asio docs, I wasn't able to find exactly what I was expecting, but that doesn't mean it's not there.
I'd suggest this for starters.
If the connection has been cleanly closed by the peer you should get an EOF while reading. Otherwise I generally ping in order to figure out if the connection is really alive.
I think that in general, once you open a socket, you should start reading it immediately and never stop doing so. This way you can make your server or client support both synchronous and asynchronous protocols. The moment the client closes the connection, the read will tell you.
You can use an error_code to check whether the client is connected or not. If the connection succeeded, error.value() will return 0; otherwise it returns another value. You can also check message() on the error_code.
boost::asio::socket_base::keep_alive keepAlive(true);
peerSocket->set_option(keepAlive);
Enable keep alive for the peer socket. Use the native socket to adjust the keepalive interval so that as soon as the connection is closed the async_receive handler will get EOF while reading.
Configuring TCP keep_alive with boost::asio
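On Linux, "use the native socket" boils down to calling setsockopt() on the descriptor returned by native_handle(); a hedged sketch, continuing from the set_option() call above (the TCP_KEEP* options are Linux-specific and the values are arbitrary examples):

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    // Tune the keepalive probe timing through the underlying descriptor.
    int fd = peerSocket->native_handle();
    int idle = 5;       // seconds of idleness before the first keepalive probe
    int interval = 2;   // seconds between probes
    int count = 3;      // failed probes before the connection is declared dead
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE,  &idle,     sizeof idle);
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof interval);
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT,   &count,    sizeof count);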