Socket can't accept connections when non-blocking? - c++

EDIT: I messed up my pseudo-code of the accept call; it now reflects what I'm actually doing.
I've got two sockets going. I'm trying to use send/recv between the two. When the listening socket is blocking, it can see the connection and receive it. When it's nonblocking, I put a busy wait in (just to debug this) and it times out, always with the error EWOULDBLOCK. Why would the listening socket not be able to see a connection that it could see when blocking?
The code is mostly separated into functions, but here's some pseudo-code of what I'm doing.
int listener = -2;
int connector = -2;
int acceptedSocket = -2;

getaddrinfo(port 27015, AI_PASSIVE) results loop for listener socket
{
    if (listener socket() == 0)
    {
        if (listener bind() == 0)
            if (listener listen() == 0)
                break;
        listener close(); //if unsuccessful
    }
}

SetBlocking(listener, false);

getaddrinfo("localhost", port 27015) results loop for connector socket
{
    if (connector socket() == 0)
    {
        if (connector connect() == 0)
            break; //if connect successful
        connector close(); //if unsuccessful
    }
}

loop for 1 second
{
    acceptedSocket = listener accept();
    if (acceptedSocket > 0)
        break; //if successful
}
This just outputs a huge list of EWOULDBLOCK errno values before ultimately ending the timeout loop. If I output the file descriptor for the accepted socket in each loop iteration, it is never assigned a valid descriptor.
The code for SetBlocking is as follows:
int SetBlocking(int sockfd, bool blocking)
{
    int nonblock = !blocking;
    return ioctl(sockfd,
                 FIONBIO,
                 reinterpret_cast<int>(&nonblock));
}
If I use a blocking socket, either by calling SetBlocking(listener, true) or removing the SetBlocking() call altogether, the connection works with no problem.
Also, note that the same implementation of this connection works on Windows, Linux, and Solaris.

Because of the tight loop you are not letting the OS complete your request. That's the difference between VxWorks and the others: you are basically preempting your kernel.
Use select(2) or poll(2) to wait for the connection instead.
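For illustration, here is a minimal, POSIX-style sketch (not the original code) of waiting for the listener with select(2) before calling accept(), instead of spinning in a busy loop:

#include <sys/select.h>
#include <sys/socket.h>

// Wait up to timeout_ms for the listening socket to become readable,
// then accept the pending connection. Returns the accepted fd or -1.
int AcceptWithTimeout(int listener, int timeout_ms)
{
    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(listener, &readfds);

    struct timeval tv;
    tv.tv_sec  = timeout_ms / 1000;
    tv.tv_usec = (timeout_ms % 1000) * 1000;

    // select() sleeps here instead of spinning, so the kernel gets a
    // chance to finish the handshake on the pending connection.
    int ready = select(listener + 1, &readfds, NULL, NULL, &tv);
    if (ready > 0 && FD_ISSET(listener, &readfds))
        return accept(listener, NULL, NULL);

    return -1; // timed out or error
}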

Related

UDP socket select() returns 1 without delay (timeout) under some conditions

I have run into a problem during development of my client application.
I want to use non-blocking UDP sockets in my application to communicate with a server. I am using the winsock2 library on Windows.
But... for some reason I get strange behavior from the select() function under the following conditions:
The socket has no bound address and port (it is a client-side socket, so it doesn't need one).
Before select() I send data to my own local address and some port with a sendto call.
For example: 192.168.1.2
Under these conditions select() returns 1 instantly (without even waiting for the timeout), as if I had a packet ready to receive.
But if I then call recvFrom, it is sure to return -1.
If I send my packets from the client to any other address (one that is not my address on the LAN), then select() works as intended.
select() also works as intended if I don't send any packets to any address before calling it.
Socket initialization method:
bool CUdpSocket::initialize()
{
    _handle = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    ... error processing code, returns false if error...
}
The method that uses select(). This method works fine for the server socket (which has a bound address and port).
bool CUdpSocket::waitData(s32 timeout_ms)
{
    fd_set readset;
    int result;
    struct timeval tv;

    // Initialize the set.
    FD_ZERO(&readset);
    FD_SET(_handle, &readset);

    // Initialize time out struct.
    tv.tv_sec = 0;
    tv.tv_usec = timeout_ms * 1000;

    result = select(_handle + 1, &readset, NULL, NULL, &tv);

    // Timeout with no data.
    if (result == 0) {
        return false; // Get out of here!
    }

    // Error.
    if (result < 0) {
        // TODO: Maybe throw exception or do something.
        return false;
    } else if (!FD_ISSET(_handle, &readset)) {
        return false; // No data!
    }

    // There is some data!
    return true;
}
If you send a packet from an unbound UDP socket, the OS will pick an unused port for you and bind the socket to that port -- the UDP protocol requires that the packet have a source address and port to be sent from.
So if the packet you're sending results in a response, then it makes perfect sense for the select to return 1 -- that's the response to the packet you sent.
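As an illustration of that implicit bind, a short sketch (reusing the _handle member from the code above and assuming the usual winsock2 headers) can show the ephemeral port the OS picked after the first sendto:

// After the first sendto() on an unbound UDP socket, the stack has bound it
// to an ephemeral local port; getsockname() reveals which one.
sockaddr_in local = {};
int len = sizeof(local);
if (getsockname(_handle, reinterpret_cast<sockaddr*>(&local), &len) == 0) {
    printf("implicitly bound to local port %d\n", (int)ntohs(local.sin_port));
}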

Can someone explain the function of writeable and readable fd_sets with WinSock?

I'm writing a network game for a university project, and while I have messages being sent and received between a client and a server, I'm unsure how to go about implementing a writeable fd_set (my lecturer's example code only included a readable fd_set) and what the function of both fd_sets is with select(). Any insight you could give would be a great help in understanding this.
My server code is as such:
bool ServerSocket::Update() {
    // Update the connections with the server
    fd_set readable;
    FD_ZERO(&readable);

    // Add server socket, which will be readable if there's a new connection
    FD_SET(m_socket, &readable);

    // Add connected clients' sockets
    if(!AddConnectedClients(&readable)) {
        Error("Couldn't add connected clients to fd_set.");
        return false;
    }

    // Set timeout to wait for something to happen (0.5 seconds)
    timeval timeout;
    timeout.tv_sec = 0;
    timeout.tv_usec = 500000;

    // Wait for the socket to become readable
    int count = select(0, &readable, NULL, NULL, &timeout);
    if(count == SOCKET_ERROR) {
        Error("Select failed, socket error.");
        return false;
    }

    // Accept new connection to the server socket if readable
    if(FD_ISSET(m_socket, &readable)) {
        if(!AddNewClient()) {
            return false;
        }
    }

    // Check all clients to see if there are messages to be read
    if(!CheckClients(&readable)) {
        return false;
    }

    return true;
}
A socket becomes:
readable if there is either data in the socket receive buffer or a pending FIN (recv() is about to return zero)
writable if there is room in the socket send buffer. Note that this is true nearly all the time, so you should only check for writability after a prior EWOULDBLOCK/EAGAIN on the socket, and stop checking once it no longer occurs.
You'd create an fd_set variable called writeable, initialize it the same way (with the same sockets), and pass it as select's third argument:
select(0, &readable, &writeable, NULL, &timeout);
Then after select returns you'd check whether each socket is still in the set writeable. If so, then it's writeable.
Basically, exactly the same way readable works, except that it tells you a different thing about the socket.
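For illustration, a minimal sketch along those lines (clientSocket stands in for one of your connected client sockets; everything else mirrors the Update() code above):

fd_set readable, writeable;
FD_ZERO(&readable);
FD_ZERO(&writeable);
FD_SET(m_socket, &readable);       // listening socket: readable on a new connection
FD_SET(clientSocket, &readable);   // client socket: readable when data or a FIN arrives
FD_SET(clientSocket, &writeable);  // only add this while you have queued data that
                                   // previously failed with WSAEWOULDBLOCK

timeval timeout;
timeout.tv_sec = 0;
timeout.tv_usec = 500000;

int count = select(0, &readable, &writeable, NULL, &timeout);
if(count != SOCKET_ERROR) {
    if(FD_ISSET(clientSocket, &readable)) {
        // recv() will not block now
    }
    if(FD_ISSET(clientSocket, &writeable)) {
        // the send buffer has room again; flush the queued data with send()
    }
}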
select() is terribly outdated and its interface is arcane. poll() (or its Windows counterpart WSAPoll) is a modern replacement for it and should always be preferred.
It would be used in the following manner:
WSAPOLLFD pollfd = {m_socket, POLLWRNORM, 0};
int rc = WSAPoll(&pollfd, 1, 100);
if (rc == 1) {
    // Socket is ready for writing!
}
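One detail the snippet above glosses over: WSAPoll reports what actually happened in revents, so a slightly more defensive check (still just a sketch) would be:

if (rc == 1 && (pollfd.revents & POLLWRNORM)) {
    // Socket is ready for writing.
} else if (rc == 1 && (pollfd.revents & (POLLERR | POLLHUP))) {
    // The socket failed or the peer closed the connection.
}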

Linux CentOS 5: non-blocking socket send hangs indefinitely

I have the following C++ code on Linux:
if (epoll_wait(hEvent, &netEvents, 1, 0))
{
    // check FIRST for disconnection to avoid send() to a closed socket (halts on centos on my server!)
    if ((netEvents.events & EPOLLERR) || (netEvents.events & EPOLLHUP) || (netEvents.events & EPOLLRDHUP)) {
        save_log("> client terminated connection");
        goto connection_ended; // ---[ if its a CLOSE event .. close :)
    }
    if (netEvents.events & EPOLLOUT) // ---[ if socket is available for write
    {
        if (send_len) {
            result = send(s, buffer, send_len, MSG_NOSIGNAL);
            save_slogf("1112:send (s=%d,len=%d,ret=%d,errno=%d,epoll=%d,events=%d)",
                       s, send_len, result, errno, hEvent, netEvents.events);
            if (result > 0) {
                send_len = 0;
                current_stage = CL_STAGE_USE_LINK_BRIDGE;
                if (close_after_send_response) {
                    save_log("> destination machine closed connection");
                    close_after_send_response = false;
                    goto connection_ended;
                }
            } else {
                if (errno == EAGAIN) return;
                else if (errno == EWOULDBLOCK) return;
                else {
                    save_log("> unexpected error on socket, terminating");
connection_ended:
                    close_client();
                    reset();
                    return;
                }
            }
        }
    }
}
}
hEvent: epoll created to listen to EPOLLIN,EPOLLOUT,EPOLLERR,EPOLLHUP,EPOLLRDHUP
s: NON-BLOCKING (!!!) socket created from an accept on a nonblocking listening socket
Basically this code is attempting to send a packet back to a user that connected to the server. It usually works OK, but on RANDOM occasions (perhaps when some weird network event happens) the program hangs indefinitely on the "result = send(s,buffer,send_len,MSG_NOSIGNAL)" line.
I have no idea what the cause may be. I have tried to monitor the socket operations and nothing gave me even a hint of a clue as to why it happens. I have to assume this is either a KERNEL bug or something very weird, because I have the same program written for Windows and it works perfectly there.
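One thing worth double-checking in this setup (a sanity-check sketch, not part of the original code): on Linux, a socket returned by accept() does not inherit O_NONBLOCK from the listening socket, so it is worth confirming the flag really is set on s before assuming send() cannot block:

#include <fcntl.h>

// Returns true if fd is non-blocking; sets the flag if it was missing.
bool EnsureNonBlocking(int fd)
{
    int flags = fcntl(fd, F_GETFL, 0);
    if (flags == -1)
        return false;
    if (flags & O_NONBLOCK)
        return true; // already non-blocking
    return fcntl(fd, F_SETFL, flags | O_NONBLOCK) != -1;
}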

select() always returns 1; TCP connected socket troubles in C++

I'm doing a C++ project that requires the server to create a new thread to handle connections each time accept() returns a new socket descriptor. I am using select to decide when a connection attempt has taken place, as well as when a client has sent data over the newly created client socket (the one that accept creates). So two functions and two selects: one for polling the socket dedicated to listening for connections, one for polling the socket created when a new connection is successful.
The behavior of the first case is what I expect: FD_ISSET returns true for the id of my listening socket only when a connection is requested, and is false until the next connection attempt. The second case does not work, even though the code is exactly the same apart from using a different fd_set and socket. I'm wondering if this stems from it being a TCP socket? Do these sockets always return true when polled by a select due to their stream-based nature?
//working snippet
struct timeval tv;
tv.tv_sec = 0;
tv.tv_usec = 500000;
fd_set readfds;
FD_ZERO(&readfds);
FD_SET(sid, &readfds);

//start server loop
for(;;){
    //check if listening socket has any client requests, timeout at 500 ms
    int numsockets = select(sid + 1, &readfds, NULL, NULL, &tv);
    if(numsockets == -1){
        if(errno == 4){
            printf("SIGINT received in select\n");
            FD_ZERO(&readfds);
            myhandler(SIGINT);
        }else{
            perror("server select");
            exit(1);
        }
    }
    //check if listening socket is ready to be read after select returns
    if(FD_ISSET(sid, &readfds)){
        int newsocketfd = accept(sid, (struct sockaddr*)&client_addr, &addrsize);
        if(newsocketfd == -1){
            if(errno == 4){
                printf("SIGINT received in accept\n");
                myhandler(SIGINT);
            }else{
                perror("server accept");
                exit(1);
            }
        }else{
            s->forkThreadForClient(newsocketfd);
        }
    }
//non working snippet
//setup client socket with select functionality
struct timeval ctv;
ctv.tv_sec = 0;
ctv.tv_usec = 500000;
fd_set creadfds;
FD_ZERO(&creadfds);
FD_SET(csid, &creadfds);

for(;;){
    //check if the client socket has sent any data, timeout at 500 ms
    int numsockets = select(csid + 1, &creadfds, NULL, NULL, &ctv);
    if(numsockets == -1){
        if(errno == 4){
            printf("SIGINT received in client select\n");
            FD_ZERO(&creadfds);
            myhandler(SIGINT);
        }else{
            perror("server select");
            exit(1);
        }
    }else{
        printf("Select returned %i\n", numsockets);
    }
    if(FD_ISSET(csid, &creadfds)){
        //read header
        unsigned char header[11];
        for(int i = 0; i < 11; i++){
            if(recv(csid, rubyte, 1, 0) != 0){
                printf("Received %X from client\n", *rubyte);
                header[i] = *rubyte;
            }
        }
Any help would be appreciated.
Thanks for the responses, but I don't believe it has much to do with whether the timeout value is set inside the loop. I tested it, and even with tv being reset and the fd_set being zeroed every time the server loops, select still returns 1 immediately. I feel like there's a problem with how select is treating my TCP socket. Any time I set select's highest socket id to encompass my TCP socket, it returns immediately with that socket set. Also, the client does not send anything, it just connects.
One thing you must do is reset the value of tv to your desired timeout every time before you call select(). The select() function changes the values in tv to indicate how much time is left in the timeout, after returning from the function. If you fail to do this, your select() calls will end up using a timeout of zero, which is not efficient.
Some other operating systems implement select() differently, in such a way that they don't change the value of tv. Linux does change it, so you must reset it.
Move
FD_ZERO(&creadfds);
FD_SET(csid,&creadfds);
into the loop. The function select() reports the result in this structure. You already retrieve the result with
FD_ISSET(csid,&creadfds);
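Combining both answers, a minimal sketch of the corrected loop (same variable names as in the question) re-initializes the timeout and the fd_set on every iteration:

for(;;){
    // select() modifies both of these, so rebuild them each time around
    struct timeval ctv;
    ctv.tv_sec = 0;
    ctv.tv_usec = 500000;

    fd_set creadfds;
    FD_ZERO(&creadfds);
    FD_SET(csid, &creadfds);

    int numsockets = select(csid + 1, &creadfds, NULL, NULL, &ctv);
    if(numsockets == -1){
        // error handling as before
    }
    if(numsockets > 0 && FD_ISSET(csid, &creadfds)){
        // csid is genuinely readable; safe to recv() here
    }
}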

Properly writing to a nonblocking socket in C++

I'm having a strange problem while attempting to transform a blocking socket server into a nonblocking one. Though the message was received only once when sent with blocking sockets, with nonblocking sockets the message seems to be received an infinite number of times.
Here is the code that was changed:
return ::write(client, message, size);
to
// Nonblocking socket code
int total_sent = 0, result = -1;
while( total_sent < size ) {
    // Create a temporary set of flags for use with the select function
    fd_set working_set;
    memcpy(&working_set, &master_set, sizeof(master_set));
    // Check if data is available for the socket - wait 1 second for timeout
    timeout.tv_sec = 1;
    timeout.tv_usec = 0;
    result = select(client + 1, NULL, &working_set, NULL, &timeout);
    // We are able to write - do so
    result = ::write(client, &message[total_sent], (size - total_sent));
    if (result == -1) {
        std::cerr << "An error has occurred while writing to the server."
                  << std::endl;
        return result;
    }
    total_sent += result;
}
return 0;
EDIT: The initialization of the master set looks like this:
// Private member variables in header file
fd_set master_set;
int sock;
...
// Creation of socket in class constructor
sock = ::socket(PF_INET, socket_type, 0);
// Makes the socket nonblocking
fcntl(sock,F_GETFL,0);
FD_ZERO(&master_set);
FD_SET(sock, &master_set);
...
// And then when accept is called on the socket
result = ::accept(sock, NULL, NULL);
if (result > 0) {
    // A connection was made with a client - change the master file
    // descriptor to note that
    FD_SET(result, &master_set);
}
I have confirmed that in both cases, the code is only being called once for the offending message. Also, the client side code hasn't changed at all - does anyone have any recommendations?
fcntl(sock,F_GETFL,0);
How does that make the socket non-blocking?
fcntl(sock, F_SETFL, O_NONBLOCK);
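If you want to preserve any file status flags that are already set, the usual read-modify-write idiom is the safer variant:

int flags = fcntl(sock, F_GETFL, 0);
fcntl(sock, F_SETFL, flags | O_NONBLOCK);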
Also, you are not checking if you can actually write to the socket non-blocking style with
FD_ISSET(client, &working_set);
I do not believe that this code is really called only once in the "non blocking" version (quotes because it is not really non-blocking yet, as Maister pointed out, look here), so check again. If the blocking and non-blocking versions are to be consistent, the non-blocking version should return total_sent (or size). With return 0 instead, the caller is likely to believe nothing was sent, which would cause infinite re-sending... isn't that what's happening?
Also, your "non blocking" code is quite strange. You seem to use select to make it blocking anyway... OK, with a timeout of 1 s, but why don't you make it really non-blocking? That is, remove all the select stuff and test for the error case in write() with errno being EWOULDBLOCK. select and poll are for multiplexing.
You should also check select for errors and use FD_ISSET to check whether the socket is really ready. What if the 1 s timeout actually expires? Or if select is interrupted by a signal? And if an error occurs in write, you should also report which error it was; that is much more useful than your generic message. But I guess this part of the code is still far from finished.
As far as I understand your code, it should probably look somewhat like this (whether the code runs in a single thread, uses multiple threads, or forks when accepting a connection would change the details):
// Creation of socket in class constructor
sock = ::socket(PF_INET, socket_type, 0);
fcntl(sock, F_SETFL, O_NONBLOCK);
// And then when accept is called on the socket
result = ::accept(sock, NULL, NULL);
if (result > 0) {
    // A connection was made with a client
    client = result;
    fcntl(client, F_SETFL, O_NONBLOCK);
}
// Nonblocking socket code
result = ::write(client, &message[total_sent], (size - total_sent));
if (result == -1) {
    if (errno == EWOULDBLOCK) {
        return 0;
    }
    std::cerr << "An error has occurred while writing to the server."
              << std::endl;
    return result;
}
return size;
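For completeness, a rough sketch of how a caller might use such a function (writeToClient is a hypothetical name for the routine above): since the EWOULDBLOCK case returns 0, the caller has to treat 0 as "try again later", typically by waiting for writability with poll and retrying:

#include <poll.h>

int sent = writeToClient(client, message, size); // hypothetical wrapper around the code above
while (sent == 0) {
    struct pollfd pfd = { client, POLLOUT, 0 };
    poll(&pfd, 1, 1000);                         // wait up to 1 s for buffer space
    sent = writeToClient(client, message, size);
}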