I'm reading this code and have several questions about it:
When creating this TCP connection to a host and port, it should return a file descriptor on success and -1 on error. But so far I can only see that it returns -1 when connect (s, (struct sockaddr *) &sa, sizeof (sa)) < 0; where does it return a file descriptor on success?
connect (s, (struct sockaddr *) &sa, sizeof (sa)) < 0 — what does < 0 mean here?
Which function reads the request? If there is none, could you provide me with some good code examples?
int tcpconnect (char *host, int port)
{
    struct hostent *h;
    struct sockaddr_in sa;
    int s;

    /* Get the address of the host at which to finger from the
     * hostname. */
    h = gethostbyname (host);
    if (!h || h->h_length != sizeof (struct in_addr)) {
        fprintf (stderr, "%s: no such host\n", host);
        return -1;
    }

    /* Create a TCP socket. */
    s = socket (AF_INET, SOCK_STREAM, 0);

    bzero (&sa, sizeof (sa));
    sa.sin_family = AF_INET;
    sa.sin_port = htons (0);                 /* tells OS to choose a port */
    sa.sin_addr.s_addr = htonl (INADDR_ANY); /* tells OS to choose IP addr */
    if (bind (s, (struct sockaddr *) &sa, sizeof (sa)) < 0) {
        perror ("bind");
        close (s);
        return -1;
    }

    sa.sin_port = htons (port);
    sa.sin_addr = *(struct in_addr *) h->h_addr;

    /* And connect to the server. */
    if (connect (s, (struct sockaddr *) &sa, sizeof (sa)) < 0) {
        perror (host);
        close (s);
        return -1;
    }
    return s;
}
When creating this TCP connection to a host and port, it should return a file descriptor on success and -1 on error. But so far I can only see that it returns -1 when connect (s, (struct sockaddr *) &sa, sizeof (sa)) < 0; where does it return a file descriptor on success?
s = socket(...) creates the actual file descriptor, and return s; at the end returns that descriptor to the caller of tcpconnect() if nothing goes wrong. If anything does go wrong, the code releases the file descriptor with close(s) and then returns -1 to the caller.
connect (s, (struct sockaddr *) &sa, sizeof (sa)) < 0 — what does < 0 mean here?
connect() returns 0 on success and -1 on failure. The code is simply checking whether connect() failed, the same way it does with bind() (and should be doing with socket(), which is not checked here).
Which function reads the request?
Nothing in the code shown reads a request. That happens outside of this code, after tcpconnect() returns a valid file descriptor to the caller.
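Since you asked for an example: here is a minimal sketch (not part of the code above) of how a caller might use the descriptor that tcpconnect() returns to send a finger-style request and read the reply. The function name finger_user and the request format are illustrative assumptions.

#include <stdio.h>
#include <string.h>
#include <unistd.h>

int tcpconnect (char *host, int port);   /* the function shown above */

/* Hypothetical caller: send "username\r\n" and print whatever the server
 * sends back until it closes the connection. */
int finger_user (char *host, char *user)
{
    char buf[512];
    int fd, n;

    fd = tcpconnect (host, 79);          /* 79 is the standard finger port */
    if (fd < 0)
        return -1;

    /* Write the request over the connected socket. */
    snprintf (buf, sizeof (buf), "%s\r\n", user);
    if (write (fd, buf, strlen (buf)) < 0) {
        perror ("write");
        close (fd);
        return -1;
    }

    /* Read the reply until the server closes the connection (read returns 0)
     * or an error occurs (read returns -1). */
    while ((n = read (fd, buf, sizeof (buf))) > 0)
        fwrite (buf, 1, n, stdout);

    close (fd);
    return 0;
}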
Related
I have a piece of code that sends a UDP broadcast to scan for devices on our local network. It works fine when I'm plugged in via Ethernet, but it doesn't when I'm connected via WiFi.
Is there something different to do to send over UDP when using WiFi?
You can find the code I'm using below. When using WiFi, select always returns 0.
struct sockaddr_in addr;

//Create socket
if ((fd = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP)) < 0)
{
    perror("socket");
    exit(1);
}

/* set up destination address */
memset((char *)&addr, 0, sizeof(addr));
addr.sin_family = AF_INET;
addr.sin_port = htons(48620);
addr.sin_addr.s_addr = inet_addr("192.168.3.255");

//TRYING TO BIND, NOT WORKING
if (bind(fd, (struct sockaddr *) &addr, sizeof(addr)) == -1)
{
    int a = WSAGetLastError(); //ERROR 10049
    perror("bind"); //Says NO ERROR
}

//allow broadcast
int broadcast = 1;
if (setsockopt(fd, SOL_SOCKET, SO_BROADCAST, (char*)&broadcast, sizeof(broadcast)) == -1)
    exit(1);

if (sendto(fd, (const char *)&request, sizeof(request), 0, (struct sockaddr *) &addr, sizeof(addr)) < 0)
{
    perror("sendto");
    exit(1);
}

do
{
    FD_ZERO(&rdFs);
    FD_SET(fd, &rdFs);
    lTimeout.tv_sec = 1;
    lTimeout.tv_usec = 000000;
    lSelRet = select(fd, (fd_set*)&rdFs, NULL, NULL, &lTimeout);
    if (lSelRet > 0 && FD_ISSET(fd, &rdFs))
    {
        addrFromSize = sizeof(addrFrom);
        lResult = recvfrom(fd, bufferIn, sizeof(bufferIn), 0, (struct sockaddr *) &addrFrom, &addrFromSize);
        //Treat result
    }
} while (lSelRet > 0);
Note: even when using WiFi, I can establish a TCP connection and communicate with the device; it's just the UDP broadcast that doesn't work.
Note 2: currently testing on Windows, but I will port it to Linux afterwards.
Edit: added SO_BROADCAST as advised by Remy.
Finally got it working; it was a code issue, not a router issue.
The issue was a misuse of the bind function: I needed to bind to my own IP, not the broadcast IP.
/* set up the local (bind) address */
memset((char *)&addr, 0, sizeof(addr));
addr.sin_family = AF_INET;
addr.sin_port = htons(48620);
addr.sin_addr.s_addr = inet_addr("192.168.3.134"); //<== Windows: my IP, not the broadcast IP
addr.sin_addr.s_addr = INADDR_ANY;                 //Linux

if (bind(fd, (struct sockaddr *) &addr, sizeof(addr)) == -1)
{
    perror("bind");
}
EDIT: strangely enough, on Windows you must bind to the IP that is sending the request, while on Linux you must bind to INADDR_ANY.
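If it helps, here is a minimal sketch of doing that bind portably; fd is the socket from the code above, and the interface address is just a placeholder:

struct sockaddr_in local;

memset(&local, 0, sizeof(local));
local.sin_family = AF_INET;
local.sin_port = htons(48620);
#ifdef _WIN32
/* Windows: bind to the sending interface's own address (placeholder value). */
local.sin_addr.s_addr = inet_addr("192.168.3.134");
#else
/* Linux: binding to INADDR_ANY was sufficient. */
local.sin_addr.s_addr = htonl(INADDR_ANY);
#endif

if (bind(fd, (struct sockaddr *) &local, sizeof(local)) == -1)
    perror("bind");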
When I use sockets, the server process doesn't receive any message from the client process/class.
The port the user enters is 5555, but when the program exits the client's constructor, the port number stored in sin doesn't match (I think it's because of htons); the same goes for the IP address.
Please help me fix this.
This is my server code:
#include "SocketUDP.h"
/*
* class constructor
*/
SocketUDP::SocketUDP() {
sock = socket(AF_INET, SOCK_DGRAM, 0);
if (sock < 0)
perror("error creating socket");
}
/*
* class destructor
*/
SocketUDP::~SocketUDP() {
close(sock);
}
/*
* this function recieves a message from client/server
* #param - the length of the message
*/
std::string SocketUDP::RecieveMessage(){
unsigned int from_len = sizeof(struct sockaddr_in);
char buffer[4096];
memset(&buffer, 0, sizeof(buffer));
int bytes = recvfrom(sock, buffer, sizeof(buffer), 0,
(struct sockaddr *) &from, &from_len);
if (bytes < 0)
perror("error reading from socket");
return std::string(buffer);
}
This is the client:
#include "UDPClient.h"
/*
* class constructor
*/
UDPClient::UDPClient(char * ip, int port) {
memset(&sin, 0, sizeof(sin));
sin.sin_addr.s_addr = inet_addr(ip);
sin.sin_family = AF_INET;
sin.sin_port = htons(port);
}
/*
* class destructor
*/
UDPClient::~UDPClient() {
// TODO Auto-generated destructor stub
}
/*
* this function sends a message to the client/server
* #param - the message
*/
int UDPClient::SendMessage(std::string st){
int sent_bytes = sendto(sock, st.c_str(), st.length(), 0,
(struct sockaddr *) &sin, sizeof(sin));
if (sent_bytes < 0)
perror("error writing to socket");
return sent_bytes;
}
You are missing a call to bind() in the server. That is where you tell the OS on which port (5555) it should listen for incoming UDP packets.
It is quite confusing to omit bind() in the server: in that case the OS selects a random port to receive on, which is usually not what one wants.
The class name SocketUDP indicates that this is just a wrapper around a UDP socket, not a server. A server would additionally have a bind() call, plus an endless loop in which it processes requests. Perhaps you omitted the server code by accident?
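For illustration, here is a minimal sketch of what a server-side constructor with the missing bind() could look like. The port parameter and the exact member layout are assumptions, since the full class declaration is not shown, and the usual socket headers are assumed to come in via SocketUDP.h:

// Hypothetical server-side constructor: bind the UDP socket to the port
// the user supplied (e.g. 5555) so RecieveMessage() can actually get packets.
SocketUDP::SocketUDP(int port) {
    sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0)
        perror("error creating socket");

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);  // receive on any local interface
    addr.sin_port = htons(port);               // e.g. 5555

    if (bind(sock, (struct sockaddr *) &addr, sizeof(addr)) < 0)
        perror("error binding socket");
}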
Where do I set TCP_NODELAY in this C++ TCP Client?
// Client socket descriptor, which is just an integer used to access a socket
int sock_descriptor;
struct sockaddr_in serv_addr;

// Structure from netdb.h used for resolving the host name to an IP address
struct hostent *server;

// Create a socket of domain AF_INET (IP), type SOCK_STREAM (TCP), and protocol 0.
// The protocol argument is only useful when the underlying stack allows more
// than one protocol and we are choosing one; 0 means choose the default protocol.
sock_descriptor = socket(AF_INET, SOCK_STREAM, 0);
if (sock_descriptor < 0)
    printf("Failed creating socket\n");

bzero((char *) &serv_addr, sizeof(serv_addr));

server = gethostbyname(host);
if (server == NULL) {
    printf("Failed finding server name\n");
    return -1;
}

serv_addr.sin_family = AF_INET;
memcpy((char *) &(serv_addr.sin_addr.s_addr), (char *) (server->h_addr), server->h_length);

// 16-bit port number on which the server listens.
// htons (host to network short) ensures that the integer is interpreted
// correctly (whether little endian or big endian) even if client and
// server have different architectures.
serv_addr.sin_port = htons(port);

if (connect(sock_descriptor, (struct sockaddr *) &serv_addr, sizeof(serv_addr)) < 0) {
    printf("Failed to connect to server\n");
    return -1;
} else
    printf("Connected successfully - Please enter string\n");
TCP_NODELAY is an option given to the setsockopt() system call:
#include <netinet/tcp.h>

int yes = 1;
int result = setsockopt(sock,
                        IPPROTO_TCP,
                        TCP_NODELAY,
                        (char *) &yes,   /* 1 - on, 0 - off */
                        sizeof(int));
if (result < 0)
    ... /* handle the error */
This turns Nagle buffering off. You should turn this option on only if you really know what you are doing.
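As for where it goes in the client you posted: the option can be set any time after the socket exists, and a common place is right after socket() succeeds, before connect(). A minimal sketch against the code above (sock_descriptor is the descriptor from your snippet; <netinet/tcp.h> must be included):

sock_descriptor = socket(AF_INET, SOCK_STREAM, 0);
if (sock_descriptor < 0)
    printf("Failed creating socket\n");

// Disable Nagle's algorithm before connecting (setting it after connect()
// also works; the option applies for the life of the socket).
int yes = 1;
if (setsockopt(sock_descriptor, IPPROTO_TCP, TCP_NODELAY,
               (char *) &yes, sizeof(yes)) < 0)
    printf("Failed setting TCP_NODELAY\n");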
I'm trying to receive a message from the server in my client, and although I don't get any compile errors, my buffer won't take what the server is sending. I've tried changing the parameters of recvfrom in the client to match the parameters used in the client's sendto, but the same thing happens: the buffer I memset stays empty. I've also tried just sending a simple null-terminated char array of size two to test it, and the same result occurs.
Server:
int sockfd;
struct addrinfo hints, *servinfo, *p;
int rv;
int numbytes;
struct sockaddr_storage their_addr;
char buf[MAXBUFLEN];
socklen_t addr_len;
char s[INET6_ADDRSTRLEN];

memset(&hints, 0, sizeof hints);
hints.ai_family = AF_UNSPEC; // set to AF_INET to force IPv4
hints.ai_socktype = SOCK_DGRAM;
hints.ai_flags = AI_PASSIVE; // use my IP

if ((rv = getaddrinfo(NULL, MYPORT, &hints, &servinfo)) != 0) {
    fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rv));
    return 1;
}

for(p = servinfo; p != NULL; p = p->ai_next) {
    if ((sockfd = socket(p->ai_family, p->ai_socktype,
            p->ai_protocol)) == -1) {
        perror("listener: socket");
        continue;
    }
    if (bind(sockfd, p->ai_addr, p->ai_addrlen) == -1) {
        close(sockfd);
        perror("listener: bind");
        continue;
    }
    break;
}

if (p == NULL) {
    fprintf(stderr, "listener: failed to bind socket\n");
    return 2;
}

freeaddrinfo(servinfo);

while(1){
    addr_len = sizeof their_addr;
    if ((numbytes = recvfrom(sockfd, buf, MAXBUFLEN-1 , 0,
            (struct sockaddr *)&their_addr, &addr_len)) == -1) {
        perror("recvfrom");
        exit(1);
    }
    buf[numbytes] = '\0';

    string toRespond = theMove(buf, AG);
    char * sendBack = new char[toRespond.size() + 1];
    std::copy(toRespond.begin(), toRespond.end(), sendBack);
    sendBack[toRespond.size()] = '\0';

    sendto(sockfd, testing, strlen(testing), 0, (struct sockaddr *)&their_addr, addr_len);
}
Client:
int sockfd;
struct addrinfo hints, *servinfo, *p;
struct sockaddr_storage src_addr;
socklen_t src_addr_len = sizeof(src_addr);
int rv;
int numbytes;

if (argc != 3) {
    fprintf(stderr,"usage: talker hostname message\n");
    exit(1);
}

memset(&hints, 0, sizeof hints);
hints.ai_family = AF_UNSPEC;
hints.ai_socktype = SOCK_DGRAM;

if ((rv = getaddrinfo(argv[1], SERVERPORT, &hints, &servinfo)) != 0) {
    fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rv));
    return 1;
}

// loop through all the results and make a socket
for(p = servinfo; p != NULL; p = p->ai_next) {
    if ((sockfd = socket(p->ai_family, p->ai_socktype,
            p->ai_protocol)) == -1) {
        perror("talker: socket");
        continue;
    }
    break;
}

if (p == NULL) {
    fprintf(stderr, "talker: failed to bind socket\n");
    return 2;
}

char * chatBuff = (char*)malloc(sizeof(char)*512);

while(1){
    scanf("%s", chatBuff);
    if ((numbytes = sendto(sockfd, chatBuff, strlen(chatBuff), 0,
            p->ai_addr, p->ai_addrlen)) == -1) {
        perror("talker: sendto");
        exit(1);
    }

    memset(chatBuff, '\0', sizeof(chatBuff));

    if (recvfrom(sockfd, chatBuff, strlen(chatBuff), 0, (struct sockaddr*)&src_addr, &src_addr_len) == -1)
    {
        puts("throw computer out the stacks");
    }
    puts(chatBuff);

    freeaddrinfo(servinfo);
    printf("talker: sent %d bytes to %s\n", numbytes, argv[1]);
    memset(chatBuff, '\0', sizeof(chatBuff));
}
memset(chatBuff, '\0', sizeof(chatBuff));
While not harmful, this zeroing is cargo-cult noise. Note also that chatBuff is a pointer, so sizeof(chatBuff) is only the size of the pointer (typically 4 or 8 bytes), not the 512 bytes you allocated; it does not clear the whole buffer anyway. You are about to fill the buffer on the very next line with a call that returns the number of bytes loaded, and that return value lets you produce a null-terminated string by setting a single byte. The only thing you must remember is to leave enough space for the null, either by oversizing the buffer or by reducing the read length you request.
if (recvfrom(sockfd, chatBuff, strlen(chatBuff), 0, (struct sockaddr*)&src_addr, &src_addr_len) == -1)
In the (unnecessary and wasteful) memset above you use sizeof(chatBuff) as the buffer size, but here, inexplicably, you pass strlen(chatBuff) - a runtime call that returns the length of a null-terminated char array. Since you have just zeroed the start of that array, strlen() returns zero, so recvfrom() is told the buffer can hold zero bytes and will never hand you anything more than an empty (truncated) datagram.
So:
int bytesRec = recvfrom(sockfd, chatBuff, 511, 0, (struct sockaddr*)&src_addr, &src_addr_len);
if (bytesRec < 1)
    puts("throw computer out the stacks");
else
    chatBuff[bytesRec] = 0;
(511 rather than sizeof(chatBuff) - 1, because chatBuff is a pointer to a 512-byte allocation, not an array.)
In your server code, you are passing some unknown buffer named testing as the buffer to send, but you should be passing the sendBack buffer instead:
sendto(sockfd, sendBack, strlen(sendBack), 0, (struct sockaddr *)&their_addr, addr_len);
For that matter, you can eliminate sendBack and send the data from toRespond directly instead:
sendto(sockfd, toRespond.c_str(), toRespond.size(), 0, (struct sockaddr *)&their_addr, addr_len);
In both your server and client code, you are using AF_UNSPEC when calling getaddrinfo(). That is OK in a server, but generally not OK in a client. Imagine what happens if the server binds to an IPv6 address, but the client creates an IPv4 socket, or vice versa. Obvious mismatch, communication will not be possible (unless the server creates a dual-stack IPv6 socket that can accept both IPv4 and IPv6 packets). So in your client code, you should not use AF_UNSPEC. Use either AF_INET or AF_INET6, depending on what the server is actually bound to. If you must use AF_UNSPEC on the client side, then you need to call sendto() and recvfrom() for every possible server IP address until you receive a response from one of them.
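For example, a minimal sketch of pinning the client to IPv4, on the assumption that the server is bound to an IPv4 address:

memset(&hints, 0, sizeof hints);
hints.ai_family = AF_INET;      // match the server's family (use AF_INET6 if the server is IPv6)
hints.ai_socktype = SOCK_DGRAM;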
Lastly, in your client code, your call to recvfrom() is assuming the response data will be no more than the same size as the data sent with sendto(). Is that actually the case? You did not show what theMove() does to the data the server receives, or how it generates a response. At the very least, you should pass the real buffer size (minus one byte for a terminator) instead of strlen() when calling recvfrom(). Your client code also assumes that the server's response will be null-terminated, but the server does not send a null terminator in the data it echoes, so you need to terminate the buffer yourself before passing it to puts().
I'm trying to create a server socket in C++ that accepts one client connection at a time. The program successfully creates the server socket and waits for incoming connections, but when a connection is closed by the client the program loops endlessly. Otherwise, if the connection is interrupted, it keeps waiting for new connections as expected. Any idea why this is happening? Thanks.
This is my C++ server code:
int listenfd, connfd, n;
struct sockaddr_in servaddr, cliaddr;
socklen_t clilen;
pid_t childpid;
char mesg[1000];

listenfd = socket(AF_INET, SOCK_STREAM, 0);

bzero(&servaddr, sizeof(servaddr));
servaddr.sin_family = AF_INET;
servaddr.sin_addr.s_addr = htonl(INADDR_ANY);
servaddr.sin_port = htons(32000);

bind(listenfd, (struct sockaddr *)&servaddr, sizeof(servaddr));
listen(listenfd, 1024);

while (true) {
    clilen = sizeof(cliaddr);
    connfd = accept(listenfd, (struct sockaddr *)&cliaddr, &clilen);

    if ((childpid = fork()) == 0) {
        close (listenfd);

        while (true) {
            n = recvfrom(connfd, mesg, 1000, 0, (struct sockaddr *)&cliaddr, &clilen);
            sendto(connfd, mesg, n, 0, (struct sockaddr *)&cliaddr, sizeof(cliaddr));
            mesg[n] = 0;
            printf("%d: %s \n", n, mesg);
            if (n <= 0) break;
        }
        close(connfd);
    }
}
For some reason, when the client closes the connection the program keeps printing "-1: " even with the if/break clause.
You never close connfd in the parent process (when childpid != 0), and you do not properly terminate the child process, which will otherwise keep looping. Your if block should look like:
if ((childpid = fork()) == 0) {
    ...
    close(connfd);
    exit(0);
}
else {
    close(connfd);
}
But since you say you want to accept one connection at a time, you can simply not fork (see the sketch after this list).
And as seen in other answers:
do not use mesg[n] without first testing n >= 0
recvfrom and sendto are overkill for TCP; simply use recv and send (or even read and write)
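Putting both points together, a minimal sketch of the accept loop without fork(), reusing the names from your code (mesg is the 1000-byte array declared above):

while (true) {
    clilen = sizeof(cliaddr);
    connfd = accept(listenfd, (struct sockaddr *) &cliaddr, &clilen);
    if (connfd < 0)
        continue;

    while (true) {
        n = recv(connfd, mesg, sizeof(mesg) - 1, 0);
        if (n <= 0)              // 0 = peer closed the connection, -1 = error
            break;
        mesg[n] = 0;             // terminate only after checking n
        printf("%d: %s\n", n, mesg);
        send(connfd, mesg, n, 0);
    }
    close(connfd);               // then go back to accept()
}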
mesg[n] = 0;
This breaks when n < 0, i.e. on a receive error (when the peer closes the socket, recvfrom returns 0, not a negative value).
The problem is your "n" and recvfrom. You are having a TCP client so the recvfrom won't return the correct value.
try to have a look on :
How to send and receive data socket TCP (C/C++)
Edit 1 :
Take note that you do the binding not connect() http://www.beej.us/guide/bgnet/output/html/multipage/recvman.html
means there is an error in recieving data, errno will be set accordingly, please try to check the error flag.
You've written a TCP server, but you use recvfrom and sendto, which are specific to connectionless protocols (UDP).
Try recv and send instead; maybe that will help.