Does SO_REUSEADDR only make sense for server sockets? - c++

I am working with two binaries that use UDP sockets. Process A waits for messages on a UDP socket (IP_1:PORT_1) via select(), and process B occasionally sends through a UDP socket.
Due to constraints that are out of scope here, process B needs to send from a socket bound to (IP_1:PORT_1). Since this is the same IP:PORT pair for both processes, a plain bind() is not possible. I tried SO_REUSEADDR, but I am wondering whether reusing the IP:PORT pair with SO_REUSEADDR for both sending and receiving makes sense, or whether this option was conceived only for listening sockets?
process A
int nOptVal = 1;
setsockopt(UDPSocket, SOL_SOCKET, SO_REUSEADDR, &nOptVal, sizeof(nOptVal));
bind(UDPSocket, (struct sockaddr *)&addrLocal, sizeof(addrLocal));
fd_set readfds;
FD_ZERO(&readfds);
FD_SET(UDPSocket, &readfds);
select(fdMax + 1, &readfds, NULL, NULL, NULL);
process B
int nOptVal = 1;
setsockopt(UDPSocket, SOL_SOCKET, SO_REUSEADDR, &nOptVal, sizeof(nOptVal));
bind(UDPSocket, (struct sockaddr *)&addrLocal, sizeof(addrLocal));
sendto(UDPSocket, buff, len, 0, (struct sockaddr *)&addrDest, sizeof(struct sockaddr));

Related

tcp nonblocking not working

I have a TCP server and two clients that want to connect to it. One of them, call it client1, stays connected the whole time sending data; the other, client2, occasionally connects, sends a small amount of data and disconnects. I set the O_NONBLOCK option on. The behaviour I experience is that, on the server side, the client that is continuously sending data gets one message through, and then the server waits for the next connection... Here is what I have tried so far (the code is the loop in which, at any moment, client2 may want to connect, send data and disconnect):
fcntl(sockfd, F_SETFL, O_NONBLOCK);
if (bind(sockfd, (struct sockaddr *) &serv_addr, sizeof(serv_addr)) < 0)
    error("ERROR on binding");
listen(sockfd, 5);
clilen = sizeof(cli_addr);
int flag = 0;
do {
    newsockfd = accept(sockfd, (struct sockaddr *) &cli_addr, &clilen);
    if (newsockfd > 0) {
        //Sockets Layer Call: inet_ntop()
        inet_ntop(AF_INET6, &(cli_addr.sin6_addr), client_addr_ipv6, 100);
        printf("Incoming connection from client having IPv6 address: %s\n", client_addr_ipv6);
        n = recv(newsockfd, buffer, 49, 0);
        if (n > 0) {
            send_data(argv[1], argv[2], argv[3], argv[4], argv[5], argv[6], buffer);
            memset(buffer, 0, sizeof(buffer));
        }
    }
    newsockfd2 = accept(sockfd, (struct sockaddr *) &cli_addr2, &clilen);
    //Sockets Layer Call: inet_ntop()
    if (newsockfd2 > 0) {
        inet_ntop(AF_INET6, &(cli_addr2.sin6_addr), client_addr_ipv6, 100);
        printf("Incoming connection from client having IPv6 address: %s\n", client_addr_ipv6);
        n2 = recv(newsockfd2, buffer, 49, 0);
        if (n2 > 0) {
            send_data(argv[1], argv[2], argv[3], argv[4], argv[5], argv[6], buffer);
            memset(buffer, 0, sizeof(buffer));
        }
    }
} while (!flag);
I also tried setting the option inside the while loop, and making newsockfd and newsockfd2 non-blocking, but got the same result.
What am I doing wrong? Thanks! :D
When a new socket is returned from accept(), you should create a new thread for that socket so that the one-to-one communication is handled in that thread. The socket doesn't have to be non-blocking.
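A minimal sketch of that thread-per-connection pattern, assuming POSIX sockets and pthreads (the names handle_client and accept_loop are mine, and error handling is kept to a minimum):

```c
#include <pthread.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/socket.h>

/* Per-connection handler: echoes data back until the peer closes
   (recv() == 0) or an error occurs. Blocking I/O is fine here
   because each connection runs in its own thread. */
static void *handle_client(void *arg)
{
    int connfd = (int)(intptr_t)arg;
    char buf[512];
    ssize_t n;
    while ((n = recv(connfd, buf, sizeof(buf), 0)) > 0)
        send(connfd, buf, (size_t)n, 0);
    close(connfd);
    return NULL;
}

/* Accept loop: one detached thread per accepted connection. */
static void accept_loop(int listenfd)
{
    for (;;) {
        int connfd = accept(listenfd, NULL, NULL);
        if (connfd < 0)
            continue;                 /* transient accept failure */
        pthread_t tid;
        pthread_create(&tid, NULL, handle_client, (void *)(intptr_t)connfd);
        pthread_detach(tid);          /* the thread cleans up after itself */
    }
}
```

Each accepted connection gets its own thread running plain blocking recv()/send(), so the accept loop never stalls on a slow client.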

C++ recvfrom timeout

I need to implement the following behavior: when the server starts, it should check for existing servers using a broadcast. Then it waits for an answer.
But how do I set a timeout for that wait?
int optval = 1;
char buff[BUFF_SIZE];
SOCKADDR_IN addr;
int length = sizeof(addr);
if (setsockopt(s, SOL_SOCKET, SO_BROADCAST, (char*)&optval, sizeof(optval)) == SOCKET_ERROR) throw(errors::SETSOCKOPT);
addr.sin_family = AF_INET;
addr.sin_port = htons(this->serverPort);
addr.sin_addr.s_addr = INADDR_ANY;
sendto(s, this->serverName.c_str(), this->serverName.length() + 1, 0, (SOCKADDR*)&addr, sizeof(addr));
memset(&addr, 0, sizeof(addr));
recvfrom(s, buff, BUFF_SIZE, 0, (SOCKADDR*)&addr, &length);
The common way is to use select() or poll() to wait for an event on a set of file descriptors. These functions also allow you to specify a timeout. In your case, add the following before the recvfrom() call:
struct pollfd pfd = {.fd = s, .events = POLLIN};
poll(&pfd, 1, 1000);
This waits up to 1000 milliseconds: it returns when a packet arrives on socket s, or after one second, whichever comes first. You can check the return value of poll() to see whether it returned because of a packet or because of a timeout.
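For instance, a tiny helper that folds the poll() call and that return-value check together (wait_readable is my name for it, not a standard API):

```c
#include <poll.h>

/* Wait up to timeout_ms for fd to become readable.
   Returns 1 if data is ready, 0 on timeout, -1 on error (errno set). */
int wait_readable(int fd, int timeout_ms)
{
    struct pollfd pfd = { .fd = fd, .events = POLLIN, .revents = 0 };
    int rc = poll(&pfd, 1, timeout_ms);
    if (rc > 0 && (pfd.revents & POLLIN))
        return 1;          /* a packet is waiting; recvfrom() will not block */
    return rc;             /* 0 = timed out, -1 = error */
}
```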
Set a read timeout with setsockopt() and SO_RCVTIMEO, and handle EAGAIN/EWOULDBLOCK which occurs if the timeout is triggered.

Can't receive multicast packets outside of sending host

I have an application that regularly receives multicast updates from another application. As long as the receiver application is on the same host as the sender, I get the multicast packets. If the receiver application is on another host in the same LAN, then I am unable to read the multicast packets. I can see those packets in Wireshark on both machines.
Host A [Win8.1]- Both Wireshark and my app can read the packets.
Host B [Win2012 R2] - Only Wireshark can see the packets, my app reads nothing.
The sender is on Host A. The Host A also has Hyper-V enabled, if that matters.
My app uses Boost.Asio sockets; I see the same results using C sockets too. Here is the C example with all the error handling stripped out for simplicity. It works on Host A but not on Host B.
EDIT: I tried something crazy today.
I started the same sender on Host B using the same MC address and port. Now the receiver started receiving transmissions from both Host A and B. Then I shut down the sender on Host B, but the receiver continued to receive packets from Host A. When I restarted the receiver on Host B, it was again blind as a bat.
void Receiver(const char* mc_address, short port)
{
    struct sockaddr_in addr;
    int addrlen, sock;
    struct ip_mreq mreq;
    char message[500];

    sock = socket(AF_INET, SOCK_DGRAM, 0);
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(port);
    addrlen = sizeof(addr);
    bind(sock, (struct sockaddr *) &addr, sizeof(addr));

    mreq.imr_multiaddr.s_addr = inet_addr(mc_address);
    mreq.imr_interface.s_addr = INADDR_ANY;
    setsockopt(sock, IPPROTO_IP, IP_ADD_MEMBERSHIP, (const char*)&mreq, sizeof(mreq));

    while (1)
    {
        int count = recvfrom(sock, message, sizeof(message) - 1, 0, (struct sockaddr *) &addr, &addrlen);
        if (count <= 0) break;
        message[count] = 0;
        std::cout << message << std::endl;
    }
}

Server socket finishes when client closes connection

I'm trying to create a server socket in C++ in order to accept one client connection at a time. The program successfully creates the server socket and waits for incoming connections, but when a connection is closed by the client the program loops endlessly. Otherwise, if the connection is interrupted, it keeps waiting for new connections as expected. Any idea why this is happening? Thanks
This is my C++ server code:
int listenfd, connfd, n;
struct sockaddr_in servaddr, cliaddr;
socklen_t clilen;
pid_t childpid;
char mesg[1000];

listenfd = socket(AF_INET, SOCK_STREAM, 0);
bzero(&servaddr, sizeof(servaddr));
servaddr.sin_family = AF_INET;
servaddr.sin_addr.s_addr = htonl(INADDR_ANY);
servaddr.sin_port = htons(32000);
bind(listenfd, (struct sockaddr *)&servaddr, sizeof(servaddr));
listen(listenfd, 1024);

while (true) {
    clilen = sizeof(cliaddr);
    connfd = accept(listenfd, (struct sockaddr *)&cliaddr, &clilen);
    if ((childpid = fork()) == 0) {
        close(listenfd);
        while (true) {
            n = recvfrom(connfd, mesg, 1000, 0, (struct sockaddr *)&cliaddr, &clilen);
            sendto(connfd, mesg, n, 0, (struct sockaddr *)&cliaddr, sizeof(cliaddr));
            mesg[n] = 0;
            printf("%d: %s \n", n, mesg);
            if (n <= 0) break;
        }
        close(connfd);
    }
}
For some reason, when the client closes the connection, the program keeps printing -1: even with the if-break clause...
You never close connfd in the parent process (when childpid != 0), and you do not properly terminate the child process, which will then try to loop as well. Your if block should look like:
if ((childpid = fork()) == 0) {
    ...
    close(connfd);
    exit(0);
}
else {
    close(connfd);
}
But since you say you want to accept one connection at a time, you can simply not fork at all.
And as seen in other answers :
do not use mesg[n] without first testing n >= 0
recvfrom and sendto are overkill for TCP; simply use recv and send (or even read and write)
mesg[n] = 0;
This breaks when n < 0, i.e. when the socket has been closed or an error occurred.
The problem is your n and recvfrom. You have a TCP client, so recvfrom won't return the value you expect.
Have a look at:
How to send and receive data socket TCP (C/C++)
Edit 1:
Note that you are doing the binding, not connect(). As http://www.beej.us/guide/bgnet/output/html/multipage/recvman.html explains, a return value of -1 means there was an error receiving data and errno is set accordingly, so check the error flag.
You've written a TCP server, but you use recvfrom and sendto, which are specific to connection-less protocols (UDP).
Try recv and send instead; that might help.
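For illustration, here is how the child's echo loop might look with recv()/send(); this is a sketch assuming the same 1000-byte buffer as the question (echo_until_closed is my name for the helper):

```c
#include <stdio.h>
#include <sys/socket.h>

/* Child loop rewritten with recv()/send(): recv() returning 0 means the
   peer closed and < 0 means an error, so the loop terminates cleanly
   instead of looping on a dead connection. */
static void echo_until_closed(int connfd)
{
    char mesg[1000];
    ssize_t n;
    while ((n = recv(connfd, mesg, sizeof(mesg) - 1, 0)) > 0) {
        send(connfd, mesg, (size_t)n, 0);
        mesg[n] = 0;               /* safe: n > 0 inside the loop */
        printf("%zd: %s\n", n, mesg);
    }
}
```

Because recv() returns 0 exactly when the peer closes, the loop exits instead of printing -1: forever.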

How to make sure a TCP socket connects at creation time

I have a client application that sends TCP packets. Currently, my application does this:
it creates a socket, binds it, and sends the packets.
struct sockaddr_in localaddress;
memset(&localaddress, 0, sizeof(localaddress));
localaddress.sin_family = AF_INET;
localaddress.sin_port = htons(0);
localaddress.sin_addr.s_addr = INADDR_ANY;
int sock = socket(AF_INET, SOCK_STREAM, 0);
bind(sock, (struct sockaddr *)&localaddress, sizeof(struct sockaddr_in));
And in another thread, the application connects and sends the packets:
struct sockaddr_in remoteaddress;
// omitted: code to set the remote address/ port etc...
nRet = connect(sock, (struct sockaddr *)&remoteaddress, sizeof(sockaddr_in));
if (nRet == -1)
    nRet = WSAGetLastError();
if (nRet == WSAEWOULDBLOCK) {
    int errorCode = 0;
    socklen_t codeLen = sizeof(int);
    int retVal = getsockopt(sock, SOL_SOCKET, SO_ERROR, (char *)&errorCode, &codeLen);
    if (errorCode == 0 && retVal != 0)
        errorCode = errno;
}
/* if the connect succeeds, program calls a callback function to notify the socket is connected, which then calls send() */
Now I want to specify a port range for local port, so I changed the code to
nPortNumber = nPortLow;
localaddress.sin_port = htons(nPortNumber);
and loop nPortNumber over my port range, e.g. 4000-5000, until the bind succeeds.
Since I always start nPortNumber from the low port, if a socket was previously created on the same port I get the WSAEADDRINUSE error as errorCode, which is too late for me because the socket creation stage has already passed. (Why didn't I get WSAEADDRINUSE at bind() or connect()?)
Is there a way to get the WSAEADDRINUSE earlier, or a way to create a socket in the port range that both binds and connects?
Thanks in advance!
I cannot answer with 100% certainty, as for that I would need to know at which point you actually get WSAEADDRINUSE.
In any case, it is normal that you don't get it at bind(), because you use INADDR_ANY. IIRC, this actually delays the bind process until the actual connect (my guess is that it then chooses the local address based on the routing to the remote address). However, as far as I know, you should then actually get the error at the call to connect...
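One way to surface the address conflict at bind() time is to loop over the port range explicitly and test errno after each attempt. A POSIX-flavoured sketch, assuming SO_REUSEADDR is not set (bind_in_range is my name; on Windows you would check WSAGetLastError() for WSAEADDRINUSE instead of errno):

```c
#include <errno.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>

/* Try to bind sock to the first free port in [lo, hi].
   Returns the bound port, or -1 if the whole range is in use
   or bind() failed for some other reason. */
int bind_in_range(int sock, int lo, int hi)
{
    struct sockaddr_in local;
    memset(&local, 0, sizeof(local));
    local.sin_family = AF_INET;
    local.sin_addr.s_addr = htonl(INADDR_ANY);
    for (int port = lo; port <= hi; ++port) {
        local.sin_port = htons((unsigned short)port);
        if (bind(sock, (struct sockaddr *)&local, sizeof(local)) == 0)
            return port;            /* success: this port was free */
        if (errno != EADDRINUSE)
            return -1;              /* unexpected failure: give up */
    }
    return -1;                      /* every port in the range is taken */
}
```

The loop stops at the first free port, so a stale socket on 4000 simply pushes the caller to 4001 without ever reaching connect().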