Where do I set TCP_NODELAY in this C++ TCP Client?
// Client socket descriptor: just an integer used to access the socket
int sock_descriptor;
struct sockaddr_in serv_addr;
// Structure from netdb.h used to resolve a host name to an IP address
struct hostent *server;

// Create a socket in the Internet (IPv4) domain, stream based (TCP).
// The protocol argument is 0, which means "choose the default protocol";
// it only matters when the underlying stack offers more than one protocol
// for the given domain and type.
sock_descriptor = socket(AF_INET, SOCK_STREAM, 0);
if (sock_descriptor < 0) {
    printf("Failed creating socket\n");
    return -1;
}

bzero((char *) &serv_addr, sizeof(serv_addr));

server = gethostbyname(host);
if (server == NULL) {
    printf("Failed finding server name\n");
    return -1;
}

serv_addr.sin_family = AF_INET;
memcpy((char *) &(serv_addr.sin_addr.s_addr), (char *) (server->h_addr), server->h_length);

// 16-bit port number on which the server listens.
// htons (host to network short) ensures the integer is interpreted
// correctly (little endian or big endian) even if client and server
// have different architectures.
serv_addr.sin_port = htons(port);

if (connect(sock_descriptor, (struct sockaddr *) &serv_addr, sizeof(serv_addr)) < 0) {
    printf("Failed to connect to server\n");
    return -1;
} else {
    printf("Connected successfully - Please enter string\n");
}

TCP_NODELAY is an option passed to the setsockopt system call:

#include <netinet/tcp.h>

int yes = 1;
int result = setsockopt(sock_descriptor,  /* the socket from above */
                        IPPROTO_TCP,
                        TCP_NODELAY,
                        (char *) &yes,    /* 1 = on, 0 = off */
                        sizeof(int));
if (result < 0) {
    // handle the error
}

This disables Nagle's algorithm (the kernel's small-packet buffering). You should turn this option on only if you really know what you are doing.

TCP connection establishment related questions

I'm reading this code and have several questions about it:

1. When creating this TCP connection to host and port, it should return a file descriptor on success and -1 on error. But so far I can only see that it returns -1 when connect(s, (struct sockaddr *) &sa, sizeof(sa)) < 0. Where does it return a file descriptor on success?
2. In connect(s, (struct sockaddr *) &sa, sizeof(sa)) < 0, what does < 0 mean?
3. Which function reads the request? If there is none, could you provide some good code examples?
int tcpconnect(char *host, int port)
{
    struct hostent *h;
    struct sockaddr_in sa;
    int s;

    /* Get the address of the host to connect to from the hostname. */
    h = gethostbyname(host);
    if (!h || h->h_length != sizeof(struct in_addr)) {
        fprintf(stderr, "%s: no such host\n", host);
        return -1;
    }

    /* Create a TCP socket. */
    s = socket(AF_INET, SOCK_STREAM, 0);

    bzero(&sa, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port = htons(0);                  /* tells OS to choose a port */
    sa.sin_addr.s_addr = htonl(INADDR_ANY);  /* tells OS to choose IP addr */
    if (bind(s, (struct sockaddr *) &sa, sizeof(sa)) < 0) {
        perror("bind");
        close(s);
        return -1;
    }

    sa.sin_port = htons(port);
    sa.sin_addr = *(struct in_addr *) h->h_addr;

    /* And connect to the server. */
    if (connect(s, (struct sockaddr *) &sa, sizeof(sa)) < 0) {
        perror(host);
        close(s);
        return -1;
    }
    return s;
}
When creating this TCP connection to host and port, it should return a file descriptor on success and -1 on error. Where does it return a file descriptor on success?
s = socket(...) creates the actual file descriptor, and the final return s; hands that descriptor back to the caller of tcpconnect() if nothing goes wrong. If anything does go wrong, the code releases the file descriptor with close(s) and returns -1 to the caller.
In connect(s, (struct sockaddr *) &sa, sizeof(sa)) < 0, what does < 0 mean?
connect() returns 0 on success and -1 on failure. The code is simply checking whether connect() failed, just as it does with bind() (and should also be doing with socket()).
Which one is the function of reading the request?
There is nothing in the code shown that is reading a request. That happens outside of this code, after tcpconnect() returns a valid file descriptor to the caller.

How to detect if a port is already in use, server side (in C++ on Windows)?

It's certainly a common question, but not in these terms (Windows, server side, accepting multiple connections).
My goal is to start a server listening on a port, for multiple connections, only if the port is first detected as "unused".
At the line where I put //HERE..., bind doesn't return a SOCKET_ERROR status as I expected.
Maybe I'm doing something wrong.
How can I detect that my port is not already in use by some other app?
Here is the status of the port before running (it is in use):
netstat -an
TCP 127.0.0.1:2005 0.0.0.0:0 LISTENING
I hope this snippet is sufficient to explain what I'm doing; it's a merge of several steps.
WSADATA WSAData;
int err = WSAStartup(MAKEWORD(2, 2), &WSAData);

SOCKADDR_IN sin;
socklen_t recsize = sizeof(sin);
int one = 1;

SOCKADDR_IN* csin;
SOCKET csock = INVALID_SOCKET;
socklen_t crecsize = sizeof(SOCKADDR_IN);
int sock_err;

if (m_socket != INVALID_SOCKET)
{
    memset(&sin, 0, recsize);
    if (m_blocal)
        sin.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    else
        sin.sin_addr.s_addr = htonl(INADDR_ANY);
    sin.sin_family = AF_INET;
    sin.sin_port = htons(m_iPort);

    setsockopt(m_socket, SOL_SOCKET, SO_REUSEADDR, (const char*)&one, sizeof(int));
    sock_err = bind(m_socket, (SOCKADDR*)&sin, recsize);
    // HERE I want to be sure no one else runs on this port
    // rest of the code using: select(m_socket + 1, &rd, &wr, &er, &timeout);
}
closesocket(m_socket);
WSACleanup();
Don't set SO_REUSEADDR. Then bind() will fail if the address is already in use, and WSAGetLastError() will return WSAEADDRINUSE.
Also note that two processes can still bind to the same port if their IP addresses differ, for example one process binding to localhost and another binding to the LAN network address.

Reading from UDP socket over WiFi always timeout

I have a piece of code that sends a UDP broadcast to scan for devices on our local network. It works fine when I'm plugged in via Ethernet, but it doesn't when I'm connected via WiFi.
Is there something different to do to connect over UDP when using WiFi?
You can find the code I'm using below. When using WiFi, select always returns 0.
struct sockaddr_in addr;

// Create socket
if ((fd = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP)) < 0)
{
    perror("socket");
    exit(1);
}

/* set up destination address */
memset((char *)&addr, 0, sizeof(addr));
addr.sin_family = AF_INET;
addr.sin_port = htons(48620);
addr.sin_addr.s_addr = inet_addr("192.168.3.255");

// TRYING TO BIND, NOT WORKING
if (bind(fd, (struct sockaddr *) &addr, sizeof(addr)) == -1)
{
    int a = WSAGetLastError(); // ERROR 10049
    perror("bind");            // Says NO ERROR
}

// allow broadcast
int broadcast = 1;
if (setsockopt(fd, SOL_SOCKET, SO_BROADCAST, (char*)&broadcast, sizeof(broadcast)) == -1)
    exit(1);

if (sendto(fd, (const char *)&request, sizeof(request), 0, (struct sockaddr *) &addr, sizeof(addr)) < 0)
{
    perror("sendto");
    exit(1);
}

do
{
    FD_ZERO(&rdFs);
    FD_SET(fd, &rdFs);
    lTimeout.tv_sec = 1;
    lTimeout.tv_usec = 0;
    // NB: the first argument must be fd + 1 on POSIX (it is ignored on Windows)
    lSelRet = select(fd + 1, (fd_set*)&rdFs, NULL, NULL, &lTimeout);
    if (lSelRet > 0 && FD_ISSET(fd, &rdFs))
    {
        addrFromSize = sizeof(addrFrom);
        lResult = recvfrom(fd, bufferIn, sizeof(bufferIn), 0, (struct sockaddr *) &addrFrom, &addrFromSize);
        // Treat result
    }
} while (lSelRet > 0);
Note: even using WiFi, I can establish a TCP connection and communicate with the device; it's just the UDP broadcast that doesn't work.
Note 2: currently testing on Windows, but I will port it to Linux afterwards.
Edit: added the SO_BROADCAST as advised by Remy.
Finally got it working; it was a code issue, not a router issue.
The issue was a misuse of the bind function: I needed to bind to my own IP, not the broadcast IP.
/* set up local address */
memset((char *)&addr, 0, sizeof(addr));
addr.sin_family = AF_INET;
addr.sin_port = htons(48620);
addr.sin_addr.s_addr = inet_addr("192.168.3.134"); // <== Windows: my IP, not the broadcast IP
addr.sin_addr.s_addr = INADDR_ANY;                 // Linux

if (bind(fd, (struct sockaddr *) &addr, sizeof(addr)) == -1)
{
    perror("bind");
}

EDIT: strangely enough, on Windows you must bind to the IP sending the request, and on Linux you must bind to INADDR_ANY.

My program doesn't send or receive messages over the socket; the IP is 127.0.0.1

When I use sockets, the server process doesn't receive any message from the client process/class.
The port entered by the user is 5555, but when the program exits the Client's constructor, the port number in sin doesn't match (I think that's because of htons); the same goes for the IP address.
Please help me fix this.
this is my server code:
#include "SocketUDP.h"

/*
 * class constructor
 */
SocketUDP::SocketUDP() {
    sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0)
        perror("error creating socket");
}

/*
 * class destructor
 */
SocketUDP::~SocketUDP() {
    close(sock);
}

/*
 * this function receives a message from the client/server
 */
std::string SocketUDP::RecieveMessage() {
    unsigned int from_len = sizeof(struct sockaddr_in);
    char buffer[4096];
    memset(&buffer, 0, sizeof(buffer));
    int bytes = recvfrom(sock, buffer, sizeof(buffer), 0,
                         (struct sockaddr *) &from, &from_len);
    if (bytes < 0)
        perror("error reading from socket");
    return std::string(buffer);
}
This is the client:
#include "UDPClient.h"

/*
 * class constructor
 */
UDPClient::UDPClient(char * ip, int port) {
    memset(&sin, 0, sizeof(sin));
    sin.sin_addr.s_addr = inet_addr(ip);
    sin.sin_family = AF_INET;
    sin.sin_port = htons(port);
}

/*
 * class destructor
 */
UDPClient::~UDPClient() {
    // TODO Auto-generated destructor stub
}

/*
 * this function sends a message to the client/server
 * #param - the message
 */
int UDPClient::SendMessage(std::string st) {
    int sent_bytes = sendto(sock, st.c_str(), st.length(), 0,
                            (struct sockaddr *) &sin, sizeof(sin));
    if (sent_bytes < 0)
        perror("error writing to socket");
    return sent_bytes;
}
You are missing a call to bind() in the server. That is where you tell the OS on which port (5555) it should listen for incoming UDP packets.
If you omit bind() in the server, the OS selects a random port to receive on, which is usually not what you want.
As for the "mismatched" port number: that is expected. htons() stores the port in network byte order, so inspecting sin.sin_port in a debugger shows the byte-swapped value; it is not a bug.
The class name SocketUDP indicates that this is just a wrapper around a UDP socket, not a server. A server would have a bind() call in addition, plus a loop in which it processes requests. Perhaps you omitted the server code by accident?

How can I restrict my server to start only on the specified port?

I have a server which listens on a certain fixed port.
If that port is not available, it starts on a random port instead. I don't want this.
How can I make sure that if the specified port is not available, my service does not start?
int fd = ::socket(AF_INET, SOCK_STREAM, 0);
int32_t const opt = 1;
struct sockaddr_in serv_addr;
serv_addr.sin_family = AF_INET;
serv_addr.sin_addr.s_addr = htonl(INADDR_ANY);
serv_addr.sin_port = htons(61014);
::bind(fd, (struct sockaddr *)&serv_addr, sizeof(serv_addr));
::listen(fd, 5);
As posted, your code won't compile, and it never checks whether bind() succeeded. I think you want something like the code below.
You have to check whether the bind() call fails; if it does, the port is already in use. Note that if you call listen() after a failed bind(), the kernel binds the still-unbound socket to an ephemeral port, which is exactly the "random port" behaviour you are seeing. bind() can also fail because the port is lingering in TIME_WAIT from your own previous run, so setting the SO_REUSEADDR option is recommended.
if (bind(fd, (struct sockaddr *)&serv_addr, sizeof(serv_addr)) < 0)
{
    // print the error message and refuse to start
    perror("Bind failed.");
    return 1;
}