I'm experiencing some issues with rewriting my blocking socket server to a non-blocking version.
Actually, I can't seem to get a socket connected at all anymore. I've been googling for most of the day and trying different solutions I find here and there, but none of them seem to work properly...
Currently my server loop just keeps timing out on the select() call, and no new sockets are accepted.
The client socket seems to connect on some level: if I start the client it blocks trying to write, and if I then close the server it reports that the connection was reset by peer.
Is the following a correct assumption?
With a non-blocking server I should normally open the socket, set its flags to non-blocking, bind it, and then start calling select() on the read descriptor set and wait for it to become readable?
I need to remove the old blocking accept() call, which used to wait endlessly.
If I try calling accept() now, it just returns -1.
Here is the relevant code I'm trying now:
fd_set incoming_sockets;
....
int listener_socket, newsockfd, portno;
socklen_t clilen;
struct sockaddr_in serv_addr, cli_addr;
....
listener_socket = socket(AF_INET, SOCK_STREAM, 0); //get socket handle
int flags = fcntl(listener_socket, F_GETFL, 0);
if( fcntl(listener_socket, F_SETFL, flags | O_NONBLOCK) < 0 )
log_writer->write_to_error_log("Error setting listening socket to non blocking", false);
memset(&serv_addr, 0, sizeof(struct sockaddr_in));
serv_addr.sin_family = AF_INET;
serv_addr.sin_addr.s_addr = INADDR_ANY;
serv_addr.sin_port = htons(portno);
....
if (bind(listener_socket, (struct sockaddr *) &serv_addr,
sizeof(struct sockaddr_in)) < 0)
{
log_writer->write_to_error_log("Unable to bind socket, aborting!", true);
}
....
struct timeval timeout;
timeout.tv_sec = 1;
timeout.tv_usec = 0;
int ready_sockets = 0;
listen(listener_socket,1);
FD_ZERO(&incoming_sockets);
FD_SET(listener_socket, &incoming_sockets);
while(true)
{
ready_sockets = select(listener_socket + 1 , &incoming_sockets, (fd_set * ) 0, (fd_set * ) 0, &timeout );
if(ready_sockets == 0)
{
//I loop here now for ever
std::cout << "no new sockets available, snooze 2\n";
sleep(2);
} else
{
std::cout << "connection received!\n";
Since you don't show the whole loop, I don't know if you do it later, but you should initialize the descriptor sets and timeout structure before every call to select.
You should move the FD_ZERO()/FD_SET() macros inside the loop: select() actually changes the bitmasks in the fd_sets (and, on Linux, the timeout value), so reinitialise them on every iteration. Also check for select() returning -1 and inspect the associated errno (e.g. EINTR).
while(true)
{
FD_ZERO(&incoming_sockets);
FD_SET(listener_socket, &incoming_sockets);
ready_sockets = select(listener_socket + 1 , &incoming_sockets, (fd_set * ) 0, (fd_set * ) 0, &timeout );
if(ready_sockets == 0)
{
... }
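Putting it together, the loop could look roughly like this (an untested sketch reusing the variables from your code; note that when select() reports the listener as readable you still have to call accept() to pick up the new connection):
while(true)
{
    // select() modifies both the fd_set and (on Linux) the timeout,
    // so reinitialise them on every iteration
    FD_ZERO(&incoming_sockets);
    FD_SET(listener_socket, &incoming_sockets);
    timeout.tv_sec = 1;
    timeout.tv_usec = 0;
    ready_sockets = select(listener_socket + 1, &incoming_sockets, NULL, NULL, &timeout);
    if(ready_sockets == -1)
    {
        if(errno == EINTR)
            continue;                  // interrupted by a signal, just retry
        log_writer->write_to_error_log("select() failed", false);
        break;
    }
    else if(ready_sockets == 0)
    {
        std::cout << "no new sockets available\n";
    }
    else if(FD_ISSET(listener_socket, &incoming_sockets))
    {
        clilen = sizeof(cli_addr);
        newsockfd = accept(listener_socket, (struct sockaddr *) &cli_addr, &clilen);
        if(newsockfd >= 0)
            std::cout << "connection received!\n";
    }
}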
I need to implement the following behavior: when the server starts, it should check for existing servers using a broadcast, then wait for an answer.
But how do I set a timeout for that wait?
int optval = 1;
char buff[BUFF_SIZE];
SOCKADDR_IN addr;
int length = sizeof(addr);
if (setsockopt(s, SOL_SOCKET, SO_BROADCAST, (char*)&optval, sizeof(optval)) == SOCKET_ERROR) throw(errors::SETSOCKOPT);
addr.sin_family = AF_INET;
addr.sin_port = htons(this->serverPort);
addr.sin_addr.s_addr = INADDR_ANY;
sendto(s, this->serverName.c_str(), this->serverName.length() + 1, 0, (SOCKADDR*)&addr, sizeof(addr));
memset(&addr, 0, sizeof(addr));
recvfrom(s, buff, BUFF_SIZE, 0, (SOCKADDR*)&addr, &length);
The common way is to use select() or poll() to wait for an event on a set of file descriptors. These functions also allow you to specify a timeout. In your case, add the following before the recvfrom() call:
struct pollfd pfd = {.fd = s, .events = POLLIN};
poll(&pfd, 1, 1000);
This waits for at most 1000 milliseconds: it returns as soon as a packet has arrived on socket s, or after 1 second, whichever comes first. You can check the return value of poll() to see whether it returned because of a packet or because of a timeout.
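For example, a fuller sketch of the same idea (reusing s, buff, addr and length from your code; on Windows the equivalent call is WSAPoll()):
struct pollfd pfd = {.fd = s, .events = POLLIN};
int rc = poll(&pfd, 1, 1000);              // wait up to 1000 ms
if (rc > 0 && (pfd.revents & POLLIN))
{
    // an answer arrived, recvfrom() will not block
    length = sizeof(addr);
    recvfrom(s, buff, BUFF_SIZE, 0, (SOCKADDR*)&addr, &length);
}
else if (rc == 0)
{
    // timeout: no existing server answered within 1 second
}
else
{
    // rc < 0: poll() itself failed, check errno
}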
Set a read timeout with setsockopt() and SO_RCVTIMEO, and handle the EAGAIN/EWOULDBLOCK error that occurs when the timeout triggers.
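A sketch of that approach (assuming a POSIX system; on Windows SO_RCVTIMEO takes a DWORD timeout in milliseconds instead of a struct timeval):
struct timeval tv;
tv.tv_sec = 1;                              // give servers 1 second to answer
tv.tv_usec = 0;
setsockopt(s, SOL_SOCKET, SO_RCVTIMEO, (const char*)&tv, sizeof(tv));
int n = recvfrom(s, buff, BUFF_SIZE, 0, (SOCKADDR*)&addr, &length);
if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
{
    // the timeout expired before any server replied
}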
I'm writing a TCP communication script in C++ to communicate between my computer and an Aldebaran Nao robot.
In general my script is working. However, the trouble I am having is that when I call connect from the client (when the server application is closed or the ethernet connection removed) I get an error that the operation is in progress.
However, once the server application is restarted / ethernet cable reconnected, I still cannot call connect to successfully reestablish a connection. I still get an error that the operation is in progress.
As a note, whenever my client determines that a connection cannot be made, the socket descriptor is closed before reattempting a connection. Here is my code for connecting on the client side:
If there is any more information that would be useful, I would be happy to provide it. This project is relatively large, so I didn't want to include too much irrelevant information here.
TCPStream* TCPConnector::connect(const char* serverIP, int port, int timeoutSec)
{
if (timeoutSec == 0)
{
return connect(serverIP, port);
}
struct sockaddr_in address;
// Store all zeros for address struct.
memset(&address, 0, sizeof(address));
// Configure address struct.
address.sin_family = AF_INET;
address.sin_port = htons(port); // Convert from host to TCP network byte order.
inet_pton(PF_INET, serverIP, &(address.sin_addr)); // Convert IP address to network byte order.
// Create a socket. The socket signature is as follows: socket(int domain, int type, int protocol)
int sd = socket(AF_INET, SOCK_STREAM, 0);
int optval = 1;
if (setsockopt(sd, SOL_SOCKET, SO_REUSEADDR, &optval, sizeof optval) == -1)
{
std::cout << "failed to set socket option" << std::endl;
}
// Set socket to be non-blocking.
int arg;
arg = fcntl(sd, F_GETFL, NULL);
arg |= O_NONBLOCK;
fcntl(sd, F_SETFL, arg);
// Connect with time limit.
fd_set set;
FD_ZERO(&set); // Clear the set.
FD_SET(sd, &set); // Add our file descriptor to the set.
struct timeval timeout;
timeout.tv_sec = timeoutSec;
timeout.tv_usec = 0;
// If the connect call returns 0, then the connection was established. Otherwise,
// check if the three-way handshake is underway.
if (::connect(sd, (struct sockaddr *)&address, sizeof(address)) < 0)
{
// If the handshake is underway.
if (errno == EINPROGRESS)
{
std::cout << "handshake in progress" << std::endl;
// Designate timeout period.
int ret = select(sd + 1, NULL, &set, NULL, &timeout);
std::cout << "return value from select : " << ret << std::endl;
// Check if timeout or an error occurred.
if (ret <= 0)
{
return NULL;
}
else
{
// Check if select returned 1 due to an error.
int valopt;
socklen_t len = sizeof(int);
getsockopt(sd, SOL_SOCKET, SO_ERROR, (void*)(&valopt), &len);
if (valopt)
{
char * errorMessage = strerror(valopt); // get the string message for the pending socket error
std::string msg (errorMessage);
std::cout << msg << std::endl;
return NULL;
}
}
}
else
{
return NULL;
}
}
// Return socket to blocking mode
arg = fcntl(sd, F_GETFL, NULL);
arg &= (~O_NONBLOCK);
fcntl(sd, F_SETFL, arg);
// Create stream object.
return new TCPStream(sd, &address);
}
Your socket is in non-blocking mode (you set it that way explicitly).
As a result, your connect() will return immediately with 'operation in progress'. When the socket is non-blocking, you then need to poll on it and wait for it to become writeable: that means the connection attempt has completed (either successfully or not), at which point you check SO_ERROR as you already do.
A better option in my view would be to use a blocking socket; I see no reason for you to use a non-blocking connect here.
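That is, roughly (a sketch with error checks trimmed, reusing the address setup and TCPStream class from your code):
int sd = socket(AF_INET, SOCK_STREAM, 0);
// leave the socket in its default blocking mode: connect() simply blocks
// until the connection is established or definitively fails
if (::connect(sd, (struct sockaddr *)&address, sizeof(address)) < 0)
{
    std::cout << strerror(errno) << std::endl;
    close(sd);
    return NULL;
}
return new TCPStream(sd, &address);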
I'm trying to create a server socket with C++ in order to accept one client connection at a time. The program successfully creates the server socket and waits for incoming connections, but when a connection is closed by the client the program loops endlessly. On the other hand, if the connection is interrupted, it keeps waiting for new connections as expected. Any idea why this is happening? Thanks
This is my C++ server code:
int listenfd, connfd, n;
struct sockaddr_in servaddr, cliaddr;
socklen_t clilen;
pid_t childpid;
char mesg[1000];
listenfd = socket(AF_INET, SOCK_STREAM, 0);
bzero(&servaddr, sizeof(servaddr));
servaddr.sin_family = AF_INET;
servaddr.sin_addr.s_addr = htonl(INADDR_ANY);
servaddr.sin_port = htons(32000);
bind(listenfd, (struct sockaddr *)&servaddr, sizeof(servaddr));
listen(listenfd, 1024);
while (true) {
clilen = sizeof(cliaddr);
connfd = accept(listenfd, (struct sockaddr *)&cliaddr, &clilen);
if ((childpid = fork()) == 0) {
close (listenfd);
while (true) {
n = recvfrom(connfd, mesg, 1000, 0, (struct sockaddr *)&cliaddr, &clilen);
sendto(connfd, mesg, n, 0, (struct sockaddr *)&cliaddr, sizeof(cliaddr));
mesg[n] = 0;
printf("%d: %s \n", n, mesg);
if (n <= 0) break;
}
close(connfd);
}
}
For some reason, when the client closes the connection the program keeps printing "-1:" even with the if/break clause...
You never close connfd in the parent process (when childpid != 0), and you do not properly terminate the child process, which then falls back into the accept() loop. Your if block should look like:
if ((childpid = fork()) == 0) {
...
close(connfd);
exit(0);
}
else {
close(connfd);
}
But since you say you want to accept one connection at a time, you can simply not fork at all.
And as seen in other answers:
do not use mesg[n] without first checking that n >= 0
recvfrom and sendto are overkill for TCP; simply use recv and send (or even read and write), as sketched below
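For example, the child's part could look roughly like this (an untested sketch reusing the variables from the question):
if ((childpid = fork()) == 0) {
    close(listenfd);                     // the child does not need the listening socket
    while (true) {
        n = recv(connfd, mesg, sizeof(mesg) - 1, 0);
        if (n <= 0)                      // 0: peer closed the connection, -1: error
            break;
        mesg[n] = 0;                     // only safe after checking n > 0
        printf("%d: %s\n", n, mesg);
        send(connfd, mesg, n, 0);
    }
    close(connfd);
    exit(0);                             // never fall back into the accept() loop
}
else {
    close(connfd);                       // the parent keeps only the listening socket
}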
mesg[n] = 0;
This breaks when n <= 0, i.e. when the connection has been closed or recvfrom() failed; only index mesg with n after checking it.
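The fix is to test n before using it as an index, e.g.:
n = recvfrom(connfd, mesg, sizeof(mesg) - 1, 0, (struct sockaddr *)&cliaddr, &clilen);
if (n <= 0)
    break;          // 0 means the peer closed the connection, -1 means an error
mesg[n] = 0;        // safe now, n is at least 1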
The problem is your "n" and recvfrom. You have a TCP client, so recvfrom won't return the correct value.
Try having a look at:
How to send and receive data socket TCP (C/C++)
Edit 1:
Take note that you are the one doing the binding, not connect(). Per http://www.beej.us/guide/bgnet/output/html/multipage/recvman.html, a return value of -1 means there was an error receiving data and errno will be set accordingly, so please check that error value.
You've written a TCP server, but you use recvfrom and sendto, which are meant for connectionless protocols (UDP).
Try recv and send instead; that might help.
I am trying to enumerate local SQL instances using SQLBrowseConnect. Generally speaking, this is working fine, but we have one setup which results in an SQLExpress instance not being discovered. Here is the code in question:
SQLSetConnectAttr(hSQLHdbc,
SQL_COPT_SS_BROWSE_SERVER,
_T("(local)"),
SQL_NTS);
CString inputParam = _T("Driver={SQL Server}");
SQLBrowseConnect(hSQLHdbc,
inputParam,
SQL_NTS,
szConnStrOut,
MAX_RET_LENGTH,
&sConnStrOut);
In the failing case, the code is running on a domain controller. The missing local instance of SQL is an SQLExpress instance (version 9). However, the puzzling thing is that running sqlcmd -L shows the missing instance without any problems.
Am I missing something really silly? Please remember that on other systems and setups there is no issue.
After much investigation, I couldn't really find out what the problem was specifically. This one machine just would not discover its own instances of SQL using SQLBrowseConnect. I therefore decided to write my own version. Discovering SQL instances turns out to be pretty easy. You just send a broadcast UDP packet to port 1434 containing the payload 0x02 (1 byte) and wait for SQL servers to respond. They respond with one packet per server which details all the instances on that machine. The code required to do this is shown below:
// to enumerate sql instances we simple send 0x02 as a broadcast to port 1434.
// Any SQL servers will then respond with a packet containing all the information
// about installed instances. In this case we only send to the loopback address
// initialise
WSADATA WsaData;
WSAStartup( MAKEWORD(2,2), &WsaData );
SOCKET udpSocket;
struct sockaddr_in serverAddress;
if ((udpSocket = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP)) == INVALID_SOCKET)
{
return;
}
// set up the address
serverAddress.sin_family = AF_INET;
serverAddress.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
serverAddress.sin_port = htons(1434);
// the payload
char payload = 0x02;
// config the port for broadcast (not totally necessary right now but maybe in the future)
BOOL broadcast = TRUE;
setsockopt(udpSocket, SOL_SOCKET, SO_BROADCAST, reinterpret_cast<const char*>(&broadcast), sizeof(BOOL));
// receive address info
sockaddr_in RecvAddr;
memset(&RecvAddr, 0, sizeof(RecvAddr));
RecvAddr.sin_family = AF_INET;
RecvAddr.sin_port = htons(0);            // any free local port; the replies come back to it
RecvAddr.sin_addr.s_addr = htonl(INADDR_ANY);
sockaddr_in SenderAddr;
int SenderAddrSize = sizeof (SenderAddr);
// bind the socket to the receive address info
int iResult = bind(udpSocket, (SOCKADDR *) & RecvAddr, sizeof (RecvAddr));
if (iResult != 0)
{
int a = WSAGetLastError();
return;
}
if (sendto(udpSocket, &payload, 1, 0, (struct sockaddr *) &serverAddress, sizeof(serverAddress)) < 0)
{
int a = WSAGetLastError();
return;
}
// set up a select so that if we don't get a timely response we just bomb out.
fd_set fds ;
int n ;
struct timeval tv ;
// Set up the file descriptor set.
FD_ZERO(&fds) ;
FD_SET(udpSocket, &fds) ;
// Set up the struct timeval for the timeout.
tv.tv_sec = 5 ;
tv.tv_usec = 0 ;
// Wait until timeout or data received.
n = select((int)udpSocket + 1, &fds, NULL, NULL, &tv); // Winsock ignores nfds, but pass it correctly anyway
if ( n == 0)
{
// timeout
return;
}
else if( n == -1 )
{
// error
return;
}
// receive buffer
char RecvBuf[1024];
int BufLen = 1024;
memset(RecvBuf, 0, BufLen);
iResult = recvfrom(udpSocket,
RecvBuf,
BufLen,
0,
(SOCKADDR *) & SenderAddr,
&SenderAddrSize);
if (iResult == SOCKET_ERROR)
{
int a = WSAGetLastError();
return;
}
// we have received some data. However we need to parse it to get the info we require
if (iResult > 0)
{
// parse the string as required here. However, note that in my tests, I noticed
// that the first 3 bytes always seem to be junk values and will mess with string
// manipulation functions if not removed. Perhaps this is why SQLBrowseConnect
// was having problems for me???
}
I have a client application that sends TCP packets. Currently, my application does this:
it creates a socket, binds it, and sends the packets.
struct sockaddr_in localaddress;
localaddress.sin_family = AF_INET;
localaddress.sin_port = htons(0);
localaddress.sin_addr.s_addr = INADDR_ANY;
int socket;
socket = socket(AF_INET, SOCK_STREAM, 0);
bind(socket, (struct sockaddr *)&localaddress, sizeof(struct sockaddr_in));
And in another thread, the application connects and sends the packets:
struct sockaddr_in remoteaddress;
// omitted: code to set the remote address/ port etc...
nRet = connect (socket, (struct sockaddr * ) & remoteaddress, sizeof (sockaddr_in ));
if (nRet == -1)
nRet = WSAGetLastError();
if (nRet == WSAEWOULDBLOCK) {
int errorCode = 0;
socklen_t codeLen = sizeof(int);
int retVal = getsockopt(
socket, SOL_SOCKET, SO_ERROR, ( char * ) &errorCode, &codeLen );
if (errorCode == 0 && retVal != 0)
errorCode = errno;
}
/* if the connect succeeds, program calls a callback function to notify the socket is connected, which then calls send() */
Now I want to specify a port range for the local port, so I changed the code to
nPortNumber = nPortLow;
localaddress.sin_port = htons(nPortNumber);
and loop nPortNumber over my port range, e.g. 4000 - 5000, until the bind succeeds.
Since I always start nPortNumber from the low port, if a socket was previously created on the same port I get the WSAEADDRINUSE error as errorCode, which is too late for me because the socket creation stage has already passed. (Why didn't I get WSAEADDRINUSE at bind() or connect()?)
Is there a way I can get the WSAEADDRINUSE earlier or is there a way to create a socket in the port range that binds and connects?
Thanks in advance!
I cannot answer with 100% certainty, because for that I would need to know at which point you actually get WSAEADDRINUSE.
In any case, it is normal that you don't get it at bind(), because you use INADDR_ANY. IIRC, this actually delays part of the binding until the actual connect() (my guess is that the stack then picks the local address based on the route to the remote address). However, as far as I know, you should then get the error from the connect() call...
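For what it's worth, a rough sketch of the kind of scan loop you describe (nPortHigh is just a placeholder for the top of your range, and I'm using a plain blocking connect for simplicity), checking both bind() and connect() for WSAEADDRINUSE so you catch the error as early as the stack reports it:
SOCKET s = INVALID_SOCKET;
for (int nPortNumber = nPortLow; nPortNumber <= nPortHigh; ++nPortNumber)
{
    s = socket(AF_INET, SOCK_STREAM, 0);
    localaddress.sin_family = AF_INET;
    localaddress.sin_addr.s_addr = INADDR_ANY;
    localaddress.sin_port = htons(nPortNumber);
    if (bind(s, (struct sockaddr *)&localaddress, sizeof(localaddress)) == 0 &&
        connect(s, (struct sockaddr *)&remoteaddress, sizeof(remoteaddress)) == 0)
        break;                              // bound and connected on this port
    int err = WSAGetLastError();
    closesocket(s);
    s = INVALID_SOCKET;
    if (err != WSAEADDRINUSE)
        break;                              // some other failure: stop scanning
    // the port is already taken locally, try the next one
}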