ChromeOS TCP Connectivity with Windows - peer resets - c++

I'm working on a server implementation on a Chromebook, using TCP between the Windows client and the ChromeOS server. When a connection is made, the server (Chromebook) side sends out 5 packets: the first is the header, the next 3 carry the message data, and the last is the footer.
We use send and recv to pass the data. After the header goes out, the remaining packets are never received: the client gets error code 10054, "connection reset by peer", before the rest arrive, even though they are sent.
The packet sizes are: the header is 4 bytes, the second packet is 2 bytes, the third is 1 byte, the fourth is 8 bytes, and the footer is 4 bytes. Our suspicion was that 2 bytes might be too small for the OS to send right away, and that it waits for more data before sending, unlike Windows, which currently sends them immediately. So we tried SO_LINGER on the socket, but it didn't help. We also tried TCP_NODELAY, but that didn't help either. When we skip the select-based timeout and write straight to the socket's fd, the connection is still broken after the header is sent.
We know all the packets are sent, because logging on the sending machine shows every packet as sent, yet only the first one arrives.
The only socket flag we set is:
setsockopt(s, SOL_SOCKET, SO_REUSEADDR, (const char *) &n, sizeof(n));
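For reference, the TCP_NODELAY attempt looked roughly like this (a sketch of what we tried, not our exact code):
// sketch: disable Nagle's algorithm so small writes go out immediately
int one = 1;
setsockopt(s, IPPROTO_TCP, TCP_NODELAY, (const char *) &one, sizeof(one));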
Sending a message:
ret = write_timeout(fd, timeout);
if (ret != OK) {
    Logger::LogError(PROTOCOL_ERROR, "Write data to socket failed with error %d, while waiting timeout of %u\n", get_last_comm_error(), timeout);
    return PROTOCOL_ERROR;
}
while (size) {
    ret = send(fd, ptr, size, 0);
    if (ret < 0) {
        Logger::LogError(PROTOCOL_ERROR, "Transport write failed: %d\n", get_last_comm_error());
        return PROTOCOL_ERROR;
    }
    ptr += ret;   // advance only after a successful (possibly partial) send
    size -= ret;
}
write_timeout:
int write_timeout(int fd, unsigned int wait_useconds)
{
    Logger::LogInfo(__FUNCTION__);
    int ret = OK;
    if (wait_useconds > 0) {
        fd_set write_fdset;
        struct timeval timeout;
        FD_ZERO(&write_fdset);
        FD_SET(fd, &write_fdset);
        timeout.tv_sec = 0;
        timeout.tv_usec = wait_useconds;
        do {
            ret = select(fd + 1, NULL, &write_fdset, NULL, &timeout);
        } while (ret < 0 && errno == EINTR);
        if (ret == OK) {        // select returned 0: nothing writable before the timeout
            ret = -1;
            errno = ETIMEDOUT;
        } else if (ret == 1) {
            return OK;
        }
    }
    return ret;
}
The receiving end is similar:
ret = read_timeout(fd, timeout);
if (ret != OK) {
    Logger::LogError(PROTOCOL_ERROR, "Error while trying to receive data from the host - timeout\n");
    return TIMED_OUT;
}
while (size) {
    ret = recv(fd, ptr, size, 0);
    if (ret == 0) {             // peer closed the connection
        return FAILED_TRANSACTION;
    }
    if (ret < 0) {
        Logger::LogError(PROTOCOL_ERROR, "Transport read failed: %d\n", get_last_comm_error());
        return UNKNOWN_ERROR;
    }
    ptr += ret;                 // advance only after a successful read
    size -= ret;
}
return OK;
And the timeout helper:
int read_timeout(int fd, unsigned int wait_useconds)
{
    Logger::LogInfo(__FUNCTION__);
    int ret = OK;
    if (wait_useconds > 0) {
        fd_set read_fdset;
        struct timeval timeout;
        FD_ZERO(&read_fdset);
        FD_SET(fd, &read_fdset);
        timeout.tv_sec = 0;
        timeout.tv_usec = wait_useconds;
        do {
            ret = select(fd + 1, &read_fdset, NULL, NULL, &timeout);
        } while (ret < 0 && errno == EINTR);
        if (ret == OK) {        // select returned 0: no data before the timeout
            ret = -1;
            errno = ETIMEDOUT;
        } else if (ret == 1) {
            return OK;
        }
    }
    return ret;
}
Our code works on Windows, but after porting it accordingly, it unfortunately does not work on ChromeOS.
We're running the server on a Chromebook with ChromeOS version 93 and building the code against that code base as well.
I also tried making the second packet 4 bytes, but it still fails: the connection is reset by peer after the first packet is received correctly.
Does anyone know whether ChromeOS waits for bigger packets before sending? Or whether something about TCP works a little differently on that OS and needs to be handled differently than on Windows?

Related

C/C++: socket() creation fails in the loop, too many open files

I am implementing a client-server TCP socket application. The client runs on an OpenWRT Linux router (C based) and writes data to the socket repeatedly, in a loop, at some fixed rate. The server runs on a Linux Ubuntu machine (C/C++ based) and reads data in a loop, keeping up with the arrival rate.
Problem: running the server and then the client, the server keeps reading new data. Both sides work well until the number of data deliveries (# of connections) reaches 1013. After that, the client gets stuck at socket(AF_INET, SOCK_STREAM, 0) with socket creation failed...: Too many open files. Apparently the number of open fds reaches ulimit -n = 1024 on the client.
Here are snippets showing the loop structures of Server.cpp and Client.c:
Server.cpp:
// TCP socket creation stuff over here (works as it should):
// int sock_ = socket() / bind() / listen()
while (1)
{
    socklen_t sizeOfserv_addr = sizeof(serv_addr_);
    fd_set set;
    struct timeval timeout;
    int connfd_;
    FD_ZERO(&set);
    FD_SET(sock_, &set);
    timeout.tv_sec = 10;
    timeout.tv_usec = 0;
    int rv_ = select(sock_ + 1, &set, NULL, NULL, &timeout);
    if (rv_ == -1) {
        perror("select");
        return 1;
    }
    else if (rv_ == 0) {
        printf("Client disconnected.."); /* a timeout occurred */
        close(connfd_);
        close(sock_);
    }
    else {
        connfd_ = accept(sock_, (struct sockaddr*)&serv_addr_, (socklen_t*)&sizeOfserv_addr);
        if (connfd_ >= 0) {
            int ret = read(connfd_, &payload, sizeof(payload)); /* some payload */
            if (ret > 0)
                printf("Received %d bytes !\n", ret);
            close(connfd_); /* Keep parent socket open (sock_) */
        } else {
            printf("Server accept failed..\n");
            close(connfd_);
            close(sock_);
            return 0;
        }
    }
}
Client.c:
while (payload_exist) /* assuming payload_exist is true */
{
    struct sockaddr_in servaddr;
    int sock;
    if (sock = socket(AF_INET, SOCK_STREAM, 0) == -1)
        perror("socket creation failed...\n");
    int one = 1;
    int idletime = 2;
    setsockopt(sock, SOL_SOCKET, SO_KEEPALIVE, &one, sizeof(one));
    setsockopt(sock, IPPROTO_TCP, TCP_KEEPIDLE, &idletime, sizeof(idletime));
    setsockopt(sock, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
    bzero(&servaddr, sizeof(servaddr));
    servaddr.sin_family = AF_INET;
    servaddr.sin_addr.s_addr = inet_addr("192.168.100.12");
    servaddr.sin_port = htons(PORT); /* some PORT */
    if (connect(sock, (struct sockaddr*)&servaddr, sizeof(servaddr)) != 0) {
        perror("connect failed...");
        return 1;
    }
    write(sock, (struct sockaddr*)&payload, sizeof(payload)); /* some new payload */
    shutdown(sock, SHUT_WR);
    bool serverOff = false;
    while (!serverOff) {
        if (read(sock, &res, sizeof(res)) < 0) {
            serverOff = true;
            close(sock);
        }
    }
}
NOTE: the payload is 800 bytes and is always fully transmitted in a single write call. With both programs defined under int main(), the client keeps creating sockets and sending data; on the other side, the server receives everything and, thanks to select(), automatically close()s and exits if the client terminates. If I don't terminate the client, however, the print logs show that the server successfully receives 1013 payloads before the client crashes with socket creation failed...: Too many open files.
Update:
Following the point mentioned by Steffen Ullrich, it turned out that the client socket fd itself was not leaking; a second fd in the original loop (which was left open) was pushing the process over the ulimit.
if (read(sock, &res, sizeof(res)) < 0) {
    serverOff = true;
    close(sock); /********* Not actually closing sock *********/
}
Your check for end of connection is wrong.
read returns 0 if the other side has shut down the connection and <0 only on error.
if (sock = socket(AF_INET, SOCK_STREAM, 0) == -1)
    perror("socket creation failed...\n");
Given the precedence of operators in C, this basically says:
sock = (socket(AF_INET, SOCK_STREAM, 0) == -1);
if (sock) ...
Assuming that socket(...) returns not an error but a valid file descriptor (i.e. >= 0), the comparison is false, so this effectively sets sock = 0 while leaking a file descriptor whenever the fd returned by socket() was > 0.
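A corrected sketch of both spots (reusing the question's names, purely illustrative):
/* assign first, then compare (note the parentheses) */
int sock = socket(AF_INET, SOCK_STREAM, 0);
if (sock == -1)
    perror("socket creation failed...\n");
And for the shutdown check:
/* read() returning 0 means the peer closed; only < 0 is an error */
bool serverOff = false;
while (!serverOff) {
    ssize_t n = read(sock, &res, sizeof(res));
    if (n <= 0) {
        serverOff = true;
        close(sock); /* the fd is now actually released each iteration */
    }
}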

First recv() cannot read message sent from server

I'm writing a simple TCP server and client where the server echoes the message back to the client. But I have a problem with the first read()/recv() call on the client side. Whenever a client connects, the server sends a welcome message, but I cannot display the welcome message on the client side. What I get back from recv()/read() is 0, which indicates either that the socket is closed or that 0 bytes were read. I know it isn't closed, since the server echoes back messages, albeit with a delay (example below). The read()/recv() works fine after I've written to the server from the client side. So my question is:
Why does the first read()/recv() call return 0?
TL;DR: my client does not read()/recv() the welcome message sent from the server. What am I doing wrong?
A screenshot of the server and client interaction (notice the empty 'Welcome message') is omitted here.
As the output showed, the socket isn't closed, so the only explanation for read()/recv() returning 0 is that 0 bytes were read.
Client code:
(SETUP NOT INCLUDED)
printf("Connected. \n");
memset(buffer, 0, 1025);
/********* PROBLEM IS THIS READ()/RECV() **********/
n = recv(sockfd, buffer, strlen(buffer), NULL);
if(n == 0){ //
//error("Error reading\n");
printf("Error reading socket.");
}
printf("Welcome message: \n%s", buffer);
while(1){
printf("\nPlease enter message: \n");
memset(buffer, 0, 256);
fgets(buffer, 255, stdin);
printf("You sent: %s", buffer);
n = write(sockfd, buffer, strlen(buffer));
if(n <= 0)
{
error("Error writing socket. \n");
}
//om bye, break
memset(buffer, 0, 256);
//Läser här endast efter write
n = read(sockfd, buffer, 255);
if(n < 0)
{
error("Error reading from socket. \n");
}
printf("You received: %s", buffer);
}
//end while
close(sockfd);
return 0;
Relevant Server code:
while (TRUE)
{
    /* Clear socket set */
    FD_ZERO(&readfds);
    /* Add master socket to set */
    FD_SET(masterSocket, &readfds);
    /* For now maxSd is highest */
    maxSd = masterSocket;
    /* Add child sockets to set, will be 0 on the first iteration */
    for (int i = 0; i < maxClients; i++)
    {
        sd = clientSockets[i]; // sd = socket descriptor
        /* If valid socket descriptor */
        if (sd > 0)
        {
            FD_SET(sd, &readfds);
        }
        /* Track the highest fd number, needed for select() below */
        if (sd > maxSd)
        {
            maxSd = sd;
        }
    } // end for-loop
    /* Wait for activity on any socket */
    activity = select(maxSd + 1, &readfds, NULL, NULL, NULL);
    if ((activity < 0) && (errno != EINTR))
    {
        printf("****** Error on select. ******\n"); // no need to exit
    }
    /* If the bit for the file descriptor fd is set in the
       file descriptor set pointed to by fdset... */
    /* If something happened on the master socket, it's a new connection */
    if (FD_ISSET(masterSocket, &readfds))
    {
        // blocks here waiting in accept
        if ((newSocket = accept(masterSocket, (struct sockaddr*)&address, (socklen_t*)&addrlen)) < 0)
        {
            perror("****** Could not accept new socket. ******\n");
            exit(EXIT_FAILURE);
        }
        /* Print info about the connector */
        printf("New connection, socket fd is %d, ip is: %s, port: %d\n", newSocket, inet_ntoa(address.sin_addr), ntohs(address.sin_port));
        /**************** THIS IS THE WRITE THAT DOESN'T GET DISPLAYED ON CLIENT ******************/
        if (send(newSocket, message, strlen(message), 0) != strlen(message))
        {
            perror("****** Could not send welcome message to new socket. ******\n");
        }
        puts("Welcome message sent successfully");
        /* Add new socket to the array of clients */
        for (int i = 0; i < maxClients; i++)
        {
            if (clientSockets[i] == 0)
            {
                clientSockets[i] = newSocket;
                printf("Adding socket to list of clients at index %d\n", i);
                break;
            }
        }
    } // end masterSocket if
    /* Else something happened on the client side */
    for (int i = 0; i < maxClients; i++)
    {
        sd = clientSockets[i];
        if (FD_ISSET(sd, &readfds))
        {
            /* Read the socket: 0 means it was closing, otherwise echo the data */
            // this read may be wrong
            if ((valread = read(sd, buffer, 1024)) == 0)
            {
                getpeername(sd, (struct sockaddr*)&address, (socklen_t*)&addrlen);
                printf("Host disconnected, ip %s, port %d.\n", inet_ntoa(address.sin_addr), ntohs(address.sin_port));
                close(sd);
                clientSockets[i] = 0;
            }
            else
            {
                buffer[valread] = '\0';
                send(sd, buffer, strlen(buffer), 0);
            }
        }
    }
I know this is a big wall of text, but I'm very thankful to anyone who takes the time to look at this problem.
The third arg to recv specifies the number of bytes to read from the socket. Now look at your code:
memset(buffer, 0, 1025);
recv(sockfd, buffer, strlen(buffer), NULL);
First you zero out the whole buffer and then call strlen on it. No wonder it returns 0: strlen counts the bytes before the first zero byte.
Instead, put the buffer length into a variable and use it everywhere:
const int bufSize = 1025;
memset(buffer, 0, bufSize);
recv(sockfd, buffer, bufSize - 1, 0); /* flags are 0; one byte stays zero to terminate the string */
I'm not sure if it's the sole cause of the issue but... in your client code you have...
memset(buffer, 0, 1025);
Then shortly after...
n = recv(sockfd, buffer, strlen(buffer), NULL);
strlen(buffer) at this point will return zero, so the call to recv does exactly what is requested -- it reads zero bytes.
Note also that the welcome message as sent by the server is not null terminated. Thus, your client has no way of knowing when the welcome message ends and the subsequent data begins.
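A minimal sketch of a safer first read (same sockfd and a 1025-byte buffer as in the question):
char buffer[1025];
ssize_t n = recv(sockfd, buffer, sizeof(buffer) - 1, 0); /* ask for up to 1024 bytes */
if (n > 0) {
    buffer[n] = '\0'; /* received bytes are not a C string until terminated */
    printf("Welcome message:\n%s", buffer);
} else if (n == 0) {
    printf("Server closed the connection.\n");
} else {
    perror("recv");
}
Keep in mind that TCP is a byte stream: a single recv may still return only part of the welcome message, so a real protocol needs a length prefix or delimiter to know where the message ends.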

c tcp socket non blocking receive timeout

I'm trying to write a client that waits up to 3 seconds to receive data. I have implemented the connect step using select with the code below.
// socket creation
m_hSocket = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
m_stAddress.sin_family = AF_INET;
m_stAddress.sin_addr.S_un.S_addr = inet_addr(pchIP);
m_stAddress.sin_port = htons(iPort);
m_stTimeout.tv_sec = SOCK_TIMEOUT_SECONDS;
m_stTimeout.tv_usec = 0;
// connecting to server: non-blocking connect, then back to blocking
long iMode = 1;
int iResult = ioctlsocket(m_hSocket, FIONBIO, &iMode);
connect(m_hSocket, (struct sockaddr *)&m_stAddress, sizeof(m_stAddress));
iMode = 0;
iResult = ioctlsocket(m_hSocket, FIONBIO, &iMode);
fd_set stWrite;
FD_ZERO(&stWrite);
FD_SET(m_hSocket, &stWrite);
iResult = select(0, NULL, &stWrite, NULL, &m_stTimeout);
if ((iResult > 0) && (FD_ISSET(m_hSocket, &stWrite)))
    return true;
But I cannot figure out what I am missing in the receive timeout code below. It doesn't wait if the server connection was disconnected; it just returns instantly from select.
Also, how can I write a non-blocking socket call with a timeout for send?
long iMode = 1;
int iResult = ioctlsocket(m_hSocket, FIONBIO, &iMode);
fd_set stRead;
FD_ZERO(&stRead);
FD_SET(m_hSocket, &stRead);
int iRet = select(0, &stRead, NULL, NULL, &m_stTimeout);
if ((iRet > 0) && (FD_ISSET(m_hSocket, &stRead)))
{
    while ((iBuffLen - 1) > 0)
    {
        int iRcvLen = recv(m_hSocket, pchBuff, iBuffLen - 1, 0);
        if (iRcvLen == SOCKET_ERROR)
        {
            return false;
        }
        else if (iRcvLen == 0)
        {
            break;
        }
        pchBuff += iRcvLen;
        iBuffLen -= iRcvLen;
    }
}
On POSIX systems the first parameter to select should not be 0 (Winsock ignores it, but portable code should pass the highest descriptor + 1).
Correct usage of select can be found here :
http://developerweb.net/viewtopic.php?id=2933
The first parameter should be the maximum value of your socket + 1, and you should take interrupted system calls into account if the socket is non-blocking:
/* Call select() */
do {
    FD_ZERO(&readset);
    FD_SET(socket_fd, &readset);
    result = select(socket_fd + 1, &readset, NULL, NULL, NULL);
} while (result == -1 && errno == EINTR);
This is just example code; you probably need the timeout parameter as well.
EINTR complicates the required logic: if you get EINTR you have to issue the same call again, but only for the time that remains of the original timeout.
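On POSIX systems, one way to do that is to compute a deadline once and re-derive the remaining time before each retry. A rough sketch, reusing readset, socket_fd and result from the snippet above and assuming clock_gettime with CLOCK_MONOTONIC is available:
struct timespec deadline;
clock_gettime(CLOCK_MONOTONIC, &deadline);
deadline.tv_sec += 3; /* overall 3-second budget */
for (;;) {
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    long remain_us = (deadline.tv_sec - now.tv_sec) * 1000000L
                   + (deadline.tv_nsec - now.tv_nsec) / 1000L;
    if (remain_us <= 0) { result = 0; break; } /* budget exhausted: treat as timeout */
    struct timeval tv;
    tv.tv_sec  = remain_us / 1000000L;
    tv.tv_usec = remain_us % 1000000L;
    FD_ZERO(&readset);
    FD_SET(socket_fd, &readset);
    result = select(socket_fd + 1, &readset, NULL, NULL, &tv);
    if (result == -1 && errno == EINTR)
        continue; /* interrupted: loop again with the remaining time */
    break; /* readable (>0), timeout (0), or a real error (-1) */
}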
I think for non-blocking mode one needs to check for recv() failure along with a timeout value. That means select() first reports whether the socket is ready to receive data; if it is, control moves forward, otherwise it sleeps on the select() line until the timeout elapses. But if a receive fails for some transient reason inside the read loop, we need to manually check for the socket error and the maximum timeout value, and if the error persists until the timeout elapses, break out.
Here is my finished receive-timeout logic for non-blocking mode.
Please correct me if I am wrong.
bool bReturn = true;
SetNonBlockingMode(true);
// check whether the socket is ready to receive
fd_set stRead;
FD_ZERO(&stRead);
FD_SET(m_hSocket, &stRead);
int iRet = select(0, &stRead, NULL, NULL, &m_stTimeout);
DWORD dwStartTime = GetTickCount();
DWORD dwCurrentTime = 0;
// if the socket is not ready, this line is reached after the 3 sec timeout and we fall through to the end;
// if it is ready, control enters the read loop and reads until the data ends or
// a socket error keeps firing for more than 3 secs.
if ((iRet > 0) && (FD_ISSET(m_hSocket, &stRead)))
{
    while ((iBuffLen - 1) > 0)
    {
        int iRcvLen = recv(m_hSocket, pchBuff, iBuffLen - 1, 0);
        dwCurrentTime = GetTickCount();
        if (iRcvLen == SOCKET_ERROR)
        {
            if ((dwCurrentTime - dwStartTime) >= SOCK_TIMEOUT_SECONDS * 1000)
            {
                bReturn = false;
                break;
            }
            continue; // transient error: retry without advancing the buffer pointer
        }
        else if (iRcvLen == 0)
        {
            break;
        }
        pchBuff += iRcvLen;
        iBuffLen -= iRcvLen;
    }
}
SetNonBlockingMode(false);
return bReturn;
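As an aside: if blocking mode is acceptable, Winsock also offers a simpler per-call receive timeout via SO_RCVTIMEO. A small sketch (m_hSocket as above):
// let recv() itself time out; on Windows the option takes milliseconds as a DWORD
DWORD dwTimeoutMs = 3000;
setsockopt(m_hSocket, SOL_SOCKET, SO_RCVTIMEO, (const char *)&dwTimeoutMs, sizeof(dwTimeoutMs));
// a subsequent blocking recv() fails with WSAETIMEDOUT if no data arrives in time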

How to use both TCP and UDP in one application in c++

I'm developing a server-client project based on Winsock in C++. I have designed the server and client sides so that they can send and receive text messages as well as files.
Then I decided to add audio communication between server and client. I actually implemented it, but then realized I had done everything over TCP, and that audio is better carried over UDP.
I searched the internet and found that it is possible to use TCP and UDP alongside each other.
I tried to use UDP, but didn't make any major progress.
My problem: I use both recv() and recvfrom() in a while loop like this:
while (true)
{
    buflen = recv(clientS, buffer, 1024, NULL);
    if (buflen > 0)
    {
        // Send the received buffer
    }
    else if (buflen == 0)
    {
        printf("closed\n");
        break;
    }
    buflen = recvfrom(udpS, buffer, 1024, NULL, (struct sockaddr*)&_s, &_size);
But the recvfrom() call blocks. I don't think I've done this properly, but I couldn't figure out how to do it right.
In Server in C accepting UDP and TCP connections I found a similar question, but the answers were explanations only, with no sample code demonstrating the point clearly.
Now I need help understanding clearly how to receive data from both TCP and UDP connections.
Any help is appreciated.
When dealing with multiple sockets at a time, use select() to know which socket has data pending before you read it, e.g.:
while (true)
{
    fd_set rfd;
    FD_ZERO(&rfd);
    FD_SET(clientS, &rfd);
    FD_SET(udpS, &rfd);
    struct timeval timeout;
    timeout.tv_sec = ...;
    timeout.tv_usec = ...;
    int ret = select(0, &rfd, NULL, NULL, &timeout);
    if (ret == SOCKET_ERROR)
    {
        // handle error
        break;
    }
    if (ret == 0)
    {
        // handle timeout
        continue;
    }
    // at least one socket is readable, figure out which one(s)...
    if (FD_ISSET(clientS, &rfd))
    {
        buflen = recv(clientS, buffer, 1024, NULL);
        if (buflen == SOCKET_ERROR)
        {
            // handle error...
            printf("error\n");
        }
        else if (buflen == 0)
        {
            // handle disconnect...
            printf("closed\n");
        }
        else
        {
            // handle received data...
        }
    }
    if (FD_ISSET(udpS, &rfd))
    {
        buflen = recvfrom(udpS, buffer, 1024, NULL, (struct sockaddr*)&_s, &_size);
        //...
    }
}
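For completeness, the two sockets above are created separately up front; a minimal sketch (clientS/udpS named as in the question, setup details elided):
// one TCP socket (clientS typically comes from accept() on a listening socket)
// and one UDP socket; TCP and UDP port spaces are independent, so both
// may even be bound to the same port number
SOCKET listenS = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP); // bind() + listen(); accept() yields clientS
SOCKET udpS    = socket(AF_INET, SOCK_DGRAM,  IPPROTO_UDP); // bind() only; connectionless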

Use of select in c++ to do a timeout in a protocol to transfer files over serial port

I have a function that writes data to the serial port using a certain protocol. After the function writes one frame, it waits for an answer from the receiver. If no answer is received, it has to resend the data, up to 3 timeouts; after 3 timeouts with no success, it closes the communication...
I have this function:
int serial_write(int fd, unsigned char* send, size_t send_size) {
    ......
    int received_counter = 0;
    while (!RECEIVED) {
        Timeout.tv_usec = 0;       // microseconds
        Timeout.tv_sec = timeout;  // seconds
        FD_SET(fd, &readfs);
        // set testing for source 1
        res = select(fd + 1, &readfs, NULL, NULL, &Timeout);
        // timeout occurred.
        if (received_counter == 3) {
            printf("Connection maybe turned off! Number of resends exceeded!\n");
            exit(-1);
        }
        if (res == 0) {
            printf("Timeout occurred\n");
            write(fd, (&I[0]), I.size());
            numTimeOuts++;
            received_counter++;
        } else {
            RECEIVED = true;
            break;
        }
    }
    ......
}
I have verified that this function, when it goes into timeout, does not resend the data. Why?