First recv() cannot read message sent from server - c++

I'm writing a simple TCP server and client where the server echoes messages back to the client. But I have a problem with the first read()/recv() call on the client side. Whenever a client connects, the server sends it a welcome message, but I cannot display that welcome message on the client side. What I get back from recv()/read() is 0, which indicates that the socket is closed or that 0 bytes were read. I know it isn't closed, since the server does echo messages back, just with a delay (example below). The read()/recv() calls work fine after I've written to the server from the client side. So my question is:
Why does the first read()/recv() call return 0?
TLDR; My client does not read()/recv() the welcome message sent by the server. What am I doing wrong?
Server and client interaction (notice the empty 'Welcome message'):
As you can see, the socket isn't closed, so the only reason read()/recv() would return 0 is that 0 bytes were read.
Client code:
(SETUP NOT INCLUDED)
printf("Connected. \n");
memset(buffer, 0, 1025);
/********* PROBLEM IS THIS READ()/RECV() **********/
n = recv(sockfd, buffer, strlen(buffer), NULL);
if(n == 0){ //
//error("Error reading\n");
printf("Error reading socket.");
}
printf("Welcome message: \n%s", buffer);
while(1){
printf("\nPlease enter message: \n");
memset(buffer, 0, 256);
fgets(buffer, 255, stdin);
printf("You sent: %s", buffer);
n = write(sockfd, buffer, strlen(buffer));
if(n <= 0)
{
error("Error writing socket. \n");
}
// if "bye", break
memset(buffer, 0, 256);
// Only reads here, after the write
n = read(sockfd, buffer, 255);
if(n < 0)
{
error("Error reading from socket. \n");
}
printf("You received: %s", buffer);
}
//end while
close(sockfd);
return 0;
Relevant Server code:
while(TRUE)
{
/* Clear socket set */
FD_ZERO(&readfds);
/* Add master socket to set */
FD_SET(masterSocket, &readfds);
/* For now maxSd is highest */
maxSd = masterSocket;
/* Add child sockets to set, will be 0 first iteration */
for(int i = 0; i < maxClients ; i++)
{
sd = clientSockets[i]; // sd = socket descriptor
/* If valid socket descriptor */
if(sd > 0)
{
FD_SET(sd, &readfds);
}
/* Get highest fd number, needed for the select function (later) */
if(sd > maxSd)
{
maxSd = sd;
}
}//end for-loop
/* Wait for activity on any socket */
activity = select(maxSd +1, &readfds, NULL, NULL, NULL);
if((activity < 0) && (errno != EINTR))
{
printf("****** Error on select. ******\n"); //no need for exit.
}
/* If the bit for the file descriptor fd is set in the
file descriptor set pointed to by fdset */
/* If something happened on the master socket, it's a new connection */
if(FD_ISSET(masterSocket, &readfds))
{
// sits here and reads
if((newSocket = accept(masterSocket, (struct sockaddr*)&address, (socklen_t*)&addrlen)) < 0)
{
perror("****** Could not accept new socket. ******\n");
exit(EXIT_FAILURE);
}
/* Print info about connector */
printf("New connection, socket fd is %d, ip is: %s, port: %d\n", newSocket, inet_ntoa(address.sin_addr), ntohs(address.sin_port));
/**************** THIS IS THE WRITE THAT DOESN'T GET DISPLAYED ON CLIENT ******************/
if( send(newSocket, message, strlen(message), 0) != strlen(message))
{
perror("****** Could not sent welcome message to new socket. ******\n");
}
puts("Welcome message sen successfully");
/* Add new socket to array of clients */
for(int i = 0; i < maxClients; i++)
{
if(clientSockets[i] == 0)
{
clientSockets[i] = newSocket;
printf("Adding socket to list of client at index %d\n", i);
break;
}
}
}//end masterSocket if
/* Else something happened on the client side */
for(int i = 0; i < maxClients; i++)
{
sd = clientSockets[i];
if(FD_ISSET(sd, &readfds))
{
/* Read socket, if it was closing, else read value */
// this read might be wrong
if((valread = read( sd, buffer, 1024)) == 0)
{
getpeername( sd, (struct sockaddr*)&address, (socklen_t*)&addrlen);
printf("Host disconnected, ip %s, port %d.\n", inet_ntoa(address.sin_addr), ntohs(address.sin_port));
close(sd);
clientSockets[i] = 0;
}
else
{
buffer[valread] = '\0';
send(sd, buffer, strlen(buffer), 0);
}
}
}
I know this is a big wall of text, but I am very thankful to anyone who takes the time to work through this problem.

The third argument to recv specifies the maximum number of bytes to read from the socket. Now look at your code:
memset(buffer, 0, 1025);
recv(sockfd, buffer, strlen(buffer), NULL);
First you zero out the whole buffer and then call strlen on it. No wonder it returns 0: strlen counts the bytes before the first zero byte.
Instead, put the buffer length into a variable and use it everywhere:
const int bufSize = 1025;
memset(buffer, 0, bufSize);
recv(sockfd, buffer, bufSize - 1, 0); /* 0 for flags; leave one byte for the terminating '\0' */
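It is also worth checking what recv() actually returns and terminating the buffer from that count rather than relying on the memset; a minimal sketch along those lines (the messages are placeholders, not from the original code):
ssize_t n = recv(sockfd, buffer, bufSize - 1, 0);
if (n > 0) {
    buffer[n] = '\0';                            /* terminate using the byte count actually received */
    printf("Welcome message:\n%s", buffer);
} else if (n == 0) {
    printf("Server closed the connection.\n");   /* orderly shutdown by the peer */
} else {
    perror("recv");                              /* real error */
}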

I'm not sure if it's the sole cause of the issue but... in your client code you have...
memset(buffer, 0, 1025);
Then shortly after...
n = recv(sockfd, buffer, strlen(buffer), NULL);
strlen(buffer) at this point will return zero, so the call to recv does exactly what is requested -- it reads zero bytes.
Note also that the welcome message as sent by the server is not null terminated. Thus, your client has no way of knowing when the welcome message ends and the subsequent data begins.
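If you want the client to know exactly where the welcome message ends, one common approach (a general sketch, not part of the original code) is to prefix each message with its length:
/* Server side: send a 4-byte length prefix in network byte order, then the payload.
 * Needs <stdint.h>; htonl/ntohl come from <arpa/inet.h>. */
uint32_t len = htonl((uint32_t)strlen(message));
send(newSocket, &len, sizeof(len), 0);
send(newSocket, message, strlen(message), 0);

/* Client side: read the length first, then exactly that many bytes.
 * MSG_WAITALL keeps the sketch short; a robust version would loop on short reads. */
uint32_t netlen;
if (recv(sockfd, &netlen, sizeof(netlen), MSG_WAITALL) == (ssize_t)sizeof(netlen)) {
    uint32_t msglen = ntohl(netlen);
    if (msglen < sizeof(buffer)) {               /* assuming buffer is a char array of known size */
        ssize_t got = recv(sockfd, buffer, msglen, MSG_WAITALL);
        if (got == (ssize_t)msglen)
            buffer[got] = '\0';
    }
}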

Related

ChromeOS TCP Connectivity with Windows - peer resets

I'm working on a server implementation on a Chromebook, using TCP connectivity between the Windows client and the ChromeOS server. When a connection is made, the server (Chromebook) side sends out 5 packets; the first one is the header, the next 3 carry the information sent, and the last one is the footer of the message.
We're using send and recv for sending and receiving the information. After the header is sent, the rest of the packets are never received, because the client gets error code 10054, "connection reset by peer", before the rest arrive, even though they are sent.
The sizes of the packets are as follows: the header is 4 bytes, the second packet is 2 bytes, the next one is 1 byte, the next one is 8 bytes, and the footer is 4 bytes. Our suspicion was that perhaps 2 bytes is too small for the OS to send right away, and that it waits for more data before sending, unlike on Windows, where it currently does send them immediately. So we tried using SO_LINGER on the socket, but it didn't help. We also tried TCP_NODELAY, but that didn't help either. When we try writing to the socket's fd without the select()-based timeout, the connection is still broken after the first header is sent.
We know all the packets are sent, because logging the sent packets from the machine shows all packets as sent, and only the first one arrives.
Socket flag used is this only:
setsockopt(s, SOL_SOCKET, SO_REUSEADDR, (const char *) &n, sizeof(n));
Sending a message:
ret = write_timeout(fd, timeout);
if (ret != OK) {
Logger::LogError(PROTOCOL_ERROR, "Write data to socket failed with error %d, while waiting timeout of %u\n", get_last_comm_error(), timeout);
return PROTOCOL_ERROR;
}
while (size) {
ret = send(fd, ptr, size, 0);
ptr += ret;
size -= ret;
if (ret < 0) {
Logger::LogError(PROTOCOL_ERROR, "Transport write failed: %d\n", get_last_comm_error());
return PROTOCOL_ERROR;
}
}
Write_timeout:
int write_timeout(int fd, unsigned int wait_useconds)
{
Logger::LogInfo(__FUNCTION__);
int ret = OK;
if (wait_useconds > 0) {
fd_set write_fdset;
struct timeval timeout;
FD_ZERO(&write_fdset);
FD_SET(fd, &write_fdset);
timeout.tv_sec = 0;
timeout.tv_usec = wait_useconds;
do {
ret = select(fd + 1, NULL, &write_fdset, NULL, &timeout);
} while (ret < 0 && errno == EINTR);
if (ret == OK) {
ret = -1;
errno = ETIMEDOUT;
} else if (ret == 1)
return OK;
}
The receiving end is similar:
ret = read_timeout(fd, timeout);
if (ret != OK) {
Logger::LogError(PROTOCOL_ERROR, "Error while trying to receive data from the host - timeout\n");
return TIMED_OUT;
}
while (size) {
ret = recv(fd, ptr, size, 0);
ptr+=ret;
size-=ret;
if (ret == 0) {
return FAILED_TRANSACTION;
}
if (ret < 0) {
Logger::LogError(PROTOCOL_ERROR, "Transport read failed: %d\n", get_last_comm_error());
return UNKNOWN_ERROR;
}
}
return OK;
And timeout:
int read_timeout(int fd, unsigned int wait_useconds)
{
Logger::LogInfo(__FUNCTION__);
int ret = OK;
if (wait_useconds > 0) {
fd_set read_fdset;
struct timeval timeout;
FD_ZERO(&read_fdset);
FD_SET(fd, &read_fdset);
timeout.tv_sec = 0;
timeout.tv_usec = wait_useconds;
do {
ret = select(fd + 1, &read_fdset, NULL, NULL, &timeout);
} while (ret < 0 && errno == EINTR);
if (ret == OK) {
ret = -1;
errno = ETIMEDOUT;
} else if (ret == 1)
return OK;
}
Our code does work on Windows, but after modifying it accordingly for ChromeOS it unfortunately does not seem to work.
We're running the server on a Chromebook with version 93 and building the code with that code base as well.
I did try making the second packet 4 bytes as well, but it still does not work: the connection is reset by peer after the first packet is received correctly.
Does anyone know whether ChromeOS maybe waits for bigger packets before sending? Or whether something else about TCP on that OS works a little differently and needs to be handled differently than on Windows?
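As a general aside about the posted send loop (a sketch of a common pattern, not ChromeOS-specific and not taken from the question): checking the return value of send() before advancing the pointer keeps a -1 from being folded into the byte counters:
/* Send exactly `size` bytes, retrying on partial writes and EINTR.
 * Returns 0 on success, -1 on error (errno is set by send). */
static int send_all(int fd, const char *ptr, size_t size)
{
    while (size > 0) {
        ssize_t ret = send(fd, ptr, size, 0);
        if (ret < 0) {
            if (errno == EINTR)
                continue;              /* interrupted by a signal, try again */
            return -1;                 /* real error */
        }
        ptr  += ret;
        size -= (size_t)ret;
    }
    return 0;
}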

C/C++: socket() creation fails in the loop, too many open files

I am implementing a client-server TCP socket application. The client runs on an OpenWRT Linux router (C based) and writes some data to the socket repeatedly, in a loop, at some fixed rate. The server runs on an Ubuntu Linux machine (C/C++ based) and reads the data in a loop as it arrives.
Problem: Running the server and then the client, the server keeps reading new data. Both sides work well until the number of data deliveries (# of connections) reaches 1013. After that, the client gets stuck at socket(AF_INET, SOCK_STREAM, 0) with socket creation failed...: Too many open files. Apparently, the number of open file descriptors approaches ulimit -n = 1024 on the client.
Here are snippets of the code showing the loop structures for Server.cpp and Client.c:
Server.cpp:
// TCP Socket creation stuff over here (work as they should):
// int sock_ = socket() / bind() / listen()
while (1)
{
socklen_t sizeOfserv_addr = sizeof(serv_addr_);
fd_set set;
struct timeval timeout;
int connfd_;
FD_ZERO(&set);
FD_SET(sock_, &set);
timeout.tv_sec = 10;
timeout.tv_usec = 0;
int rv_ = select(sock_ + 1, &set, NULL, NULL, &timeout);
if(rv_ == -1){
perror("select");
return 1;
}
else if(rv_ == 0){
printf("Client disconnected.."); /* a timeout occured */
close (connfd_);
close (sock_);
}
else{
connfd_ = accept (sock_,(struct sockaddr*)&serv_addr_,(socklen_t*)&sizeOfserv_addr);
if (connfd_ >= 0) {
int ret = read (connfd_, &payload, sizeof(payload)); /* some payload */
if (ret > 0)
printf("Received %d bytes !\n", ret);
close (connfd_); /* Keep parent socket open (sock_) */
}else{
printf("Server acccept failed..\n");
close (connfd_);
close (sock_);
return 0;
}
}
}
Client.c:
while (payload_exist) /* assuming payload_exist is true */
{
struct sockaddr_in servaddr;
int sock;
if (sock = socket(AF_INET, SOCK_STREAM, 0) == -1)
perror("socket creation failed...\n");
int one = 1;
int idletime = 2;
setsockopt(sock, SOL_SOCKET, SO_KEEPALIVE, &one, sizeof(one));
setsockopt(sock, IPPROTO_TCP, TCP_KEEPIDLE, &idletime, sizeof(idletime));
setsockopt(sock, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
bzero(&servaddr, sizeof(servaddr));
servaddr.sin_family = AF_INET;
servaddr.sin_addr.s_addr = inet_addr("192.168.100.12");
servaddr.sin_port = htons(PORT); /* some PORT */
if (connect (sock, (struct sockaddr*)&servaddr, sizeof(servaddr)) != 0){
perror("connect failed...");
return 1;
}
write(sock, (struct sockaddr*)&payload, sizeof(payload)); /* some new payload */
shutdown(sock,SHUT_WR);
bool serverOff = false;
while (!serverOff){
if(read(sock, &res, sizeof(res)) < 0){
serverOff = true;
close(sock);
}
}
}
NOTE: The payload is 800 bytes and always gets fully transmitted in one write call. With both programs defined under int main(), the client keeps creating sockets and sending data; on the other side, the server receives everything and, thanks to select(), would automatically close() and exit if the client terminated. If I don't terminate the client, however, the print logs show that the server successfully receives 1013 payloads before the client fails with socket creation failed...: Too many open files.
Update:
Following the point mentioned by Steffen Ullrich, it turned out that the client socket fd itself was not leaking; a second fd in the original loop, which was left open, was pushing the number of open descriptors over the ulimit.
if(read(sock, &res, sizeof(res)) < 0){
serverOff = true;
close(sock); /********* Not actually closing sock *********/
}
Your check for end of connection is wrong.
read returns 0 if the other side has shut down the connection and <0 only on error.
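So the receive loop needs to treat 0 and < 0 separately; a minimal sketch of the corrected check, keeping the names from the question:
ssize_t n = read(sock, &res, sizeof(res));
if (n == 0) {              /* orderly shutdown by the server */
    serverOff = true;
    close(sock);
} else if (n < 0) {        /* real error */
    perror("read");
    serverOff = true;
    close(sock);
}
The second problem is the socket creation itself: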
if (sock = socket(AF_INET, SOCK_STREAM, 0) == -1)
perror("socket creation failed...\n");
Given the precedence of operators in C this basically says:
sock = ( socket(AF_INET, SOCK_STREAM, 0) == -1 )
if (sock) ...
Assuming that socket(...) does not return an error but a file descriptor (i.e. >= 0), the comparison is false, so this essentially says sock = 0, while leaking a file descriptor whenever the fd returned by socket() was > 0.
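A corrected version separates the assignment from the comparison (or adds the missing parentheses); a sketch:
int sock = socket(AF_INET, SOCK_STREAM, 0);
if (sock == -1) {
    perror("socket creation failed...");
    return 1;
}
/* Equivalent one-liner; note the extra parentheses around the assignment: */
/* if ((sock = socket(AF_INET, SOCK_STREAM, 0)) == -1) ... */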

Use of select in c++ to do a timeout in a protocol to transfer files over serial port

I have a function that writes data to the serial port using a certain protocol. When the function writes one frame, it waits for an answer from the receiver. If no answer is received, it has to resend the data, for up to 3 timeouts; after 3 timeouts with no success, it closes the communication...
I have this function:
int serial_write(int fd, unsigned char* send, size_t send_size) {
......
int received_counter = 0;
while (!RECEIVED) {
Timeout.tv_usec = 0; // microseconds
Timeout.tv_sec = timeout; // seconds
FD_SET(fd, &readfs);
//set testing for source 1
res = select(fd + 1, &readfs, NULL, NULL, &Timeout);
//timeout occurred.
if (received_counter == 3) {
printf(
"Connection maybe turned off! Number of resends exceeded!\n");
exit(-1);
}
if (res == 0) {
printf("Timeout occured\n");
write(fd, (&I[0]), I.size());
numTimeOuts++;
received_counter++;
} else {
RECEIVED = true;
break;
}
}
......
}
I have verified that this function, when it goes into timeout, does not resend the data. Why?
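For reference, a select()-with-retries loop is commonly written so that both the fd_set and the struct timeval are re-initialized on every iteration, since select() may modify both; a general sketch of that shape, reusing the names from the question (not a verified fix for the code above):
int received_counter = 0;
bool RECEIVED = false;
while (!RECEIVED && received_counter < 3) {
    fd_set readfs;
    struct timeval Timeout;
    FD_ZERO(&readfs);                  /* rebuild the set every iteration */
    FD_SET(fd, &readfs);
    Timeout.tv_sec = timeout;          /* seconds */
    Timeout.tv_usec = 0;               /* microseconds */

    int res = select(fd + 1, &readfs, NULL, NULL, &Timeout);
    if (res > 0) {
        RECEIVED = true;               /* answer is available to read */
    } else if (res == 0) {
        printf("Timeout occurred, resending\n");
        write(fd, &I[0], I.size());    /* resend the frame */
        received_counter++;
    } else if (errno != EINTR) {
        break;                         /* real select() error */
    }
}
if (!RECEIVED)
    printf("Connection maybe turned off! Number of resends exceeded!\n");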

Getting "Transport endpoint is not connected" in UDP socket programming in C++

I am getting a "Transport endpoint is not connected" error in my UDP server program when I try to
shut down the socket via shutdown(m_ReceiveSocketId, SHUT_RDWR);
Following is my code snippet:
bool UDPSocket::receiveMessage()
{
struct sockaddr_in serverAddr; //Information about the server
struct hostent *hostp; // Information about this device
char buffer[BUFFERSIZE]; // Buffer to store incoming message
int serverlen; // to store server address length
//Open a datagram Socket
if((m_ReceiveSocketId = socket(AF_INET, SOCK_DGRAM, 0)) < 0)
{
Utility_SingleTon::printLog(LOG_ERROR,"(%s %s %d) UDP Client - socket() error",__FILE__,__func__, __LINE__);
pthread_exit(NULL);
return false;
}
//Configure Server Address.
//set family and port
serverAddr.sin_family = AF_INET;
serverAddr.sin_port = htons(m_ListeningPort);
if (bind(m_ReceiveSocketId, (struct sockaddr *) &serverAddr,sizeof(struct sockaddr_in)) < 0 )
{
Utility_SingleTon::printLog(LOG_ERROR,"(%s %s %d) UDP Client- Socket Bind error=%s",__FILE__,__func__, __LINE__,strerror(errno));
pthread_exit(NULL);
return false;
}
//TODO Re-Route Mechanism.
if((serverAddr.sin_addr.s_addr = inet_addr(m_ServerIPStr.c_str())) == (unsigned long)INADDR_NONE)
{
/* Use the gethostbyname() function to retrieve */
/* the address of the host server if the system */
/* passed the host name of the server as a parameter. */
/************************************************/
/* get server address */
hostp = gethostbyname(m_ServerIPStr.c_str());
if(hostp == (struct hostent *)NULL)
{
/* h_errno is usually defined */
/* in netdb.h */
Utility_SingleTon::printLog(LOG_ERROR,"%s %d %s %s %d", "Host Not found", h_errno,__FILE__,__func__, __LINE__);
pthread_exit(NULL);
return false;
}
memcpy(&serverAddr.sin_addr, hostp->h_addr, sizeof(serverAddr.sin_addr));
}
serverlen = (int )sizeof(serverAddr);
// Loop and listen for incoming message
while(m_RecevieFlag)
{
int receivedByte = 0;
memset(buffer, 0, BUFFERSIZE);
//receive data from the server
receivedByte = recvfrom(m_ReceiveSocketId, buffer, BUFFERSIZE, 0, (struct sockaddr *)&serverAddr, (socklen_t*)&serverlen);
if(receivedByte == -1)
{
Utility_SingleTon::printLog(LOG_ERROR,"[%s:%d#%s] UDP Client - receive error",__FILE__,__LINE__,__func__);
close(m_ReceiveSocketId);
pthread_exit(NULL);
return false;
}
else if(receivedByte > 0)
{
string rMesg;
rMesg.erase();
for(int loop = 0; loop < receivedByte; loop++)
rMesg.append(1, buffer[loop]);
Utility_SingleTon::printLog(LOG_DEBUG,"[%s:%d#%s] received message=%d",__FILE__,__LINE__,__func__, rMesg.length());
QOMManager_SingleTon::getInstance()->setReceivedMessage(rMesg);
raise(SIGUSR1);
}
}
close(m_ReceiveSocketId);
pthread_exit(NULL);
return true;
}
Any help would be appreciated.
Thanks Yuvi.
You don't need to call shutdown() for a UDP socket. From the man page:
The shutdown() call causes all or part of a full-duplex connection on the socket
associated with sockfd to be shut down.
If you call shutdown() on a UDP socket, it will return ENOTCONN
(The specified socket is not connected) because UDP is a connectionless protocol.
All you need to do is close the socket and set the socket to INVALID_SOCKET. Then in your destructor check whether the socket has already been set to INVALID_SOCKET before closing it.
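A minimal sketch of what that looks like (the helper name is an assumption; on POSIX, -1 serves the same role as INVALID_SOCKET):
void UDPSocket::closeReceiveSocket()
{
    if (m_ReceiveSocketId != -1)
    {
        close(m_ReceiveSocketId);    // no shutdown() needed for a UDP socket
        m_ReceiveSocketId = -1;      // mark as invalid so it is not closed twice
    }
}

UDPSocket::~UDPSocket()
{
    closeReceiveSocket();            // the destructor only closes if still open
}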

First client is laggy in multi-socket winsock server

I have a WinSock server set up, which is properly accepting clients and relaying the appropriate information. The server takes two clients, receives a fixed-size buffer of 256 bytes from each, stores it, and then relays the other client's buffer back. (I.e. client1 sends its buffer, the server saves it, then sends client1 the buffer from client2.)
Anytime client1 changes its buffer, it takes roughly 4 seconds for client2 to receive the changes. If client2 makes a change, client1 receives the update almost instantly (less than 0.1s).
Nagle's algorithm is disabled and I've tried changing the order in which the server processes the requests, but client1 always lags. The data always shows up intact, but it takes too long. Below is the loop the server uses to process the data:
for(;;)
{
// check if more clients can join
if (numClients < MAX_CLIENTS)
{
theClients[numClients] = accept(listeningSocket, NULL, NULL);
if (theClients[numClients] == INVALID_SOCKET)
{
nret = WSAGetLastError();
JBS::reportSocketError(nret, "server accept()");
closesocket(listeningSocket);
WSACleanup();
exit(0);
}
// disable Nagle's algorithm
int flag = 1;
int result = setsockopt(theClients[numClients], IPPROTO_TCP, TCP_NODELAY,
(char *) &flag, sizeof(int));
if (result < 0)
{
nret = WSAGetLastError();
JBS::reportSocketError(nret, "client connect()");
closesocket(theClients[numClients]);
WSACleanup();
}
// make the socket non-blocking
u_long iMode = 1;
ioctlsocket(theClients[numClients],FIONBIO, &iMode);
cout << "Client # " << numClients << " connected." << endl;
numClients++;
started = true;
}
else
{
// we've received all the connections, so close the listening socket
closesocket(listeningSocket);
}
// process client2
if (theClients[1] != INVALID_SOCKET)
{
memset(keys2, 0, 255);
// receive the updated buffer
nBytes = recv(theClients[1], keys2, sizeof(keys2), 0);
receiveResult = WSAGetLastError();
if ((receiveResult != WSAEWOULDBLOCK) && (receiveResult != 0))
{
JBS::reportSocketError(receiveResult, "server receive keys2()");
shutdown(theClients[1],2);
closesocket(theClients[1]);
WSACleanup();
exit(0);
break;
}
// send client1's buffer to client2
send(theClients[1],keys1,sizeof(keys1),0);
sendResult = WSAGetLastError();
if((sendResult != WSAEWOULDBLOCK) && (sendResult != 0))
{
JBS::reportSocketError(sendResult, "server send keys1()");
shutdown(theClients[1],2);
closesocket(theClients[1]);
WSACleanup();
exit(0);
break;
}
}
// process client1
if (theClients[0] != INVALID_SOCKET)
{
memset(keys1, 0, 255);
// receive the updated buffer
nBytes = recv(theClients[0], keys1, sizeof(keys1), 0);
receiveResult = WSAGetLastError();
if ((receiveResult != WSAEWOULDBLOCK) && (receiveResult != 0))
{
JBS::reportSocketError(receiveResult, "server receive keys1()");
shutdown(theClients[0],2);
closesocket(theClients[0]);
WSACleanup();
exit(0);
break;
}
// send client2's buffer to client1
send(theClients[0],keys2,sizeof(keys2),0);
sendResult = WSAGetLastError();
if((sendResult != WSAEWOULDBLOCK) && (sendResult != 0))
{
JBS::reportSocketError(sendResult, "server send keys2()");
shutdown(theClients[0],2);
closesocket(theClients[0]);
WSACleanup();
exit(0);
break;
}
}
Sleep((float)(1000.0f / 30.0f));
}
Client sending code:
int nError, sendResult;
sendResult = send(theSocket, keys, sizeof(keys),0);
nError=WSAGetLastError();
if((nError != WSAEWOULDBLOCK) && (nError != 0))
{
JBS::reportSocketError(sendResult, "client send()");
shutdown(theSocket,2);
closesocket(theSocket);
WSACleanup();
}
I've pasted your code below, with some inline comments in it, mostly because I can't fit it all reasonably in a comment. How are you determining that it's taking four seconds for changes to get from client1 to client2? Visual inspection? Does this mean that client1 and client2 are running on the same machine (no different network latency issues to worry about)?
I've highlighted some blocks that look wrong. They may not be; it may be that you've tried to simplify the code you've posted and missed some bits. I've also made some suggestions for where you might want to add some logging. If the sockets are really non-blocking, you should be coming back from all of the calls very quickly and failing to read data unless the client has sent it. If you've got a 4-second delay, then the problem could be:
the client hasn't sent it... is Nagle disabled on the client? If this were the case, I'd expect successive calls to recv to happen, with no data.
The recv call is taking too long... is the socket really in non-blocking mode?
The send call is taking too long... is the socket in non-blocking mode, is it buffered, is the client trying to receive the data?
Knowing how long each section of code takes will help track down where your problem is.
You can get the time using something like this (borrowed from the web):
struct timeval tv;
struct timezone tz;
struct tm *tm;
gettimeofday(&tv, &tz);
tm=localtime(&tv.tv_sec);
printf(" %d:%02d:%02d %d \n", tm->tm_hour, tm->tm_min,
m->tm_sec, tv.tv_usec);
Your code:
for(;;)
{
/* This block of code is checking the server socket and accepting
* connections, until two? (MAX_CLIENTS isn't defined in visible code)
* connections have been made. After this, it is attempting to close
* the server socket every time around the loop. This may have side
* effects (although probably not), so I'd clean it up, just in case
*/
/* LOG TIME 1 */
// check if more clients can join
if (numClients < MAX_CLIENTS)
{
theClients[numClients] = accept(listeningSocket, NULL, NULL);
if (theClients[numClients] == INVALID_SOCKET)
{
nret = WSAGetLastError();
JBS::reportSocketError(nret, "server accept()");
closesocket(listeningSocket);
WSACleanup();
exit(0);
}
// disable Nagle's algorithm
int flag = 1;
int result = setsockopt(theClients[numClients], IPPROTO_TCP, TCP_NODELAY,
(char *) &flag, sizeof(int));
if (result < 0)
{
nret = WSAGetLastError();
JBS::reportSocketError(nret, "client connect()");
closesocket(theClients[numClients]);
WSACleanup();
}
// make the socket non-blocking
u_long iMode = 1;
ioctlsocket(theClients[numClients],FIONBIO, &iMode);
cout << "Client # " << numClients << " connected." << endl;
numClients++;
/* This started variable isn't used, is it supposed to be wrapping
* this server code in an if statement?
*/
started = true;
}
else
{
// we've received all the connections, so close the listening socket
closesocket(listeningSocket);
}
/* LOG TIME 2 */
// process client2
if (theClients[1] != INVALID_SOCKET)
{
memset(keys2, 0, 255);
// receive the updated buffer
/* LOG TIME 3 */
nBytes = recv(theClients[1], keys2, sizeof(keys2), 0);
/* LOG TIME 4 */
receiveResult = WSAGetLastError();
if ((receiveResult != WSAEWOULDBLOCK) && (receiveResult != 0))
{
JBS::reportSocketError(receiveResult, "server receive keys2()");
shutdown(theClients[1],2);
closesocket(theClients[1]);
WSACleanup();
exit(0);
break;
}
// send client1's buffer to client2
/* LOG TIME 5 */
send(theClients[1],keys1,sizeof(keys1),0);
/* LOG TIME 6 */
sendResult = WSAGetLastError();
if((sendResult != WSAEWOULDBLOCK) && (sendResult != 0))
{
JBS::reportSocketError(sendResult, "server send keys1()");
shutdown(theClients[1],2);
closesocket(theClients[1]);
WSACleanup();
exit(0);
break;
}
}
// process client1
/* If the client has been accepted (note that because this
* is part of the same block of code, and there's no protection
* around it, the first connection will process its first
* receive/send combination before the second socket has been accepted)
*/
if (theClients[0] != INVALID_SOCKET)
{
memset(keys1, 0, 255);
// receive the updated buffer
/* You're trying a receive against a non-blocking socket. I would expect this
* to fail with WSAEWOULDBLOCK, if nothing has been sent by the client, but
* this block of data will still be sent to the client
*/
/* LOG TIME 7 */
nBytes = recv(theClients[0], keys1, sizeof(keys1), 0);
/* LOG TIME 8 */
receiveResult = WSAGetLastError();
if ((receiveResult != WSAEWOULDBLOCK) && (receiveResult != 0))
{
JBS::reportSocketError(receiveResult, "server receive keys1()");
shutdown(theClients[0],2);
closesocket(theClients[0]);
WSACleanup();
exit(0);
break;
}
// send client2's buffer to client1
/* The first time around the loop, you're sending the buffer to the
* first connected client, even though the second client hasn't connected yet.
* This will continue 30 times a second, until the second client connects. Does
* the client handle this correctly?
*/
/* LOG TIME 9 */
send(theClients[0],keys2,sizeof(keys2),0);
/* LOG TIME 10 */
sendResult = WSAGetLastError();
if((sendResult != WSAEWOULDBLOCK) && (sendResult != 0))
{
JBS::reportSocketError(sendResult, "server send keys2()");
shutdown(theClients[0],2);
closesocket(theClients[0]);
WSACleanup();
exit(0);
break;
}
}
Sleep((float)(1000.0f / 30.0f));
}
Client sending code:
int nError, sendResult;
/* There's no recv / loop in this section
*/
sendResult = send(theSocket, keys, sizeof(keys),0);
nError=WSAGetLastError();
if((nError != WSAEWOULDBLOCK) && (nError != 0))
{
JBS::reportSocketError(sendResult, "client send()");
shutdown(theSocket,2);
closesocket(theSocket);
WSACleanup();
}