Node.js HTTP server can't receive requests from a C++ client via socket - c++

I am trying to send a GET request to a Node.js server from a C++ client.
Node.js server:
const http = require('http');

const server = http.createServer((request, response) => {
    console.log(request.url);
    response.end("received");
});
and here is my C++ client:
while (getline(cin, random_input)) {
    int s_len;
    input = "GET / HTTP/1.1\r\n\r\n";
    s_len = send(sock, input.c_str(), input.size(), 0);
    if (s_len < 0)
    {
        perror("Send failed : ");
        return false;
    }
    cout << socket_c.receive(1024);
}
string tcp_client::receive(int size=512)
{
    char buffer[size];
    string reply;
    int r_len; // received len

    // Receive a reply from the server
    r_len = recv(sock, buffer, sizeof(buffer), 0);
    if (r_len < 0)
    {
        puts("recv failed");
    }
    if (buffer[r_len - 1] == '\n') {
        buffer[r_len - 1] = '\0';
    } else {
        buffer[r_len] = '\0';
    }
    reply = buffer;
    return reply;
}
So the C++ client sends a GET request each time something is typed in the terminal.
It works fine if I type something right after the connection has been established. However, if I wait 15-30 seconds after establishing the connection and then type something in the client program, the server doesn't receive anything, even though the number of bytes sent (s_len) is correct.
May I know what is going wrong?

A few errors I spotted:
The send return value is not checked correctly. You should verify that s_len == input.size(), i.e. that everything was actually sent.
The recv return value is not checked for EOF. The code treats an r_len of 0 as valid data instead of a disconnect. This may be why you do not see server replies: the server may have disconnected without you noticing.
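As a minimal sketch (not your tcp_client class, just the idea), a receive that distinguishes all three recv() outcomes could look like this, assuming sock is a connected TCP socket descriptor:

#include <cstdio>
#include <iostream>
#include <string>
#include <vector>
#include <sys/types.h>
#include <sys/socket.h>

// Sketch: distinguish error (< 0), peer disconnect (== 0) and data (> 0).
std::string receive_or_report(int sock, int size = 512)
{
    std::vector<char> buffer(size);
    ssize_t r_len = recv(sock, buffer.data(), buffer.size(), 0);
    if (r_len < 0)
    {
        perror("recv failed");                 // a real error
        return "";
    }
    if (r_len == 0)
    {
        std::cout << "server closed the connection" << std::endl;
        return "";                             // socket is no longer usable
    }
    // Build the string from the exact length received; no '\0' juggling needed.
    return std::string(buffer.data(), static_cast<size_t>(r_len));
}

The same discipline applies on the sending side: after send(), verify that s_len equals input.size(), or loop until everything has been written.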

Setting keepAliveTimeout on the Node.js server to 0 (which disables the idle timeout) could solve the problem. By default the http server closes a keep-alive connection after 5 seconds of inactivity, so after your 15-30 second pause the connection is already gone on the server side; send() still reports success on the client because the bytes only went into the local socket buffer.

Related

Close connection with client after inactivity period

I'm currently managing a server that can serve at most MAX_CLIENTS clients concurrently.
This is the code I've written so far:
//create and bind listen_socket_
struct pollfd poll_fds_[MAX_CLIENTS];
for (auto& poll_fd : poll_fds_)
{
    poll_fd.fd = -1;
}

listen(listen_socket_, MAX_CLIENTS);
poll_fds_[0].fd = listen_socket_;
poll_fds_[0].events = POLLIN;

while (enabled)
{
    const int result = poll(poll_fds_, MAX_CLIENTS, DEFAULT_TIMEOUT);
    if (result == 0)
    {
        continue;
    }
    else if (result < 0)
    {
        // throw error
    }
    else
    {
        for (auto& poll_fd : poll_fds_)
        {
            if (poll_fd.revents == 0)
            {
                continue;
            }
            else if (poll_fd.revents != POLLIN)
            {
                // throw error
            }
            else if (poll_fd.fd == listen_socket_)
            {
                int new_socket = accept(listen_socket_, nullptr, nullptr);
                if (new_socket < 0)
                {
                    // throw error
                }
                else
                {
                    for (auto& poll_fd : poll_fds_)
                    {
                        if (poll_fd.fd == -1)
                        {
                            poll_fd.fd = new_socket;
                            poll_fd.events = POLLIN;
                            break;
                        }
                    }
                }
            }
            else
            {
                // serve connection
            }
        }
    }
}
Everything is working great, and when a client closes the socket on its side, everything gets handled well.
The problem I'm facing is that when a client connects and sends a request but does not close the socket on its side afterwards, I do not detect it and that socket stays "busy".
Is there any way to implement a system to detect if nothing is received on a socket after a certain time? In that way I could free that connection on the server side, leaving room for new clients.
Thanks in advance.
You could close the client connection when the client has not sent any data for a specific time.
For each client, you need to store the time when the last data was received.
Periodically, for example when poll() returns because the timeout expired, you need to check this time for all clients. When it is too long ago, you can shutdown(SHUT_WR) and close() the connection. You need to determine what "too long ago" means.
If a client does not have any data to send but wants to leave the connection open, it could send a "ping" message periodically. The server could reply with a "pong" message. These are just small messages with no actual data. It depends on your client/server protocol whether you can implement this.
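As a rough sketch of how this could be bolted onto the poll() loop from the question, assuming a hypothetical last_activity array parallel to poll_fds_ and an IDLE_TIMEOUT you pick yourself:

#include <chrono>
#include <cstddef>
#include <poll.h>
#include <sys/socket.h>
#include <unistd.h>

using clock_type = std::chrono::steady_clock;

constexpr std::chrono::seconds IDLE_TIMEOUT{30};    // "too long ago" - tune to your protocol
clock_type::time_point last_activity[MAX_CLIENTS];  // parallel to poll_fds_

// Call this whenever slot i received data (and right after accept() fills it).
void mark_active(std::size_t i)
{
    last_activity[i] = clock_type::now();
}

// Call this after every poll() return, including timeouts (result == 0).
void drop_idle_clients(pollfd (&poll_fds_)[MAX_CLIENTS], int listen_socket_)
{
    const auto now = clock_type::now();
    for (std::size_t i = 0; i < MAX_CLIENTS; ++i)
    {
        if (poll_fds_[i].fd == -1 || poll_fds_[i].fd == listen_socket_)
            continue;
        if (now - last_activity[i] > IDLE_TIMEOUT)
        {
            shutdown(poll_fds_[i].fd, SHUT_WR);  // polite half-close
            close(poll_fds_[i].fd);
            poll_fds_[i].fd = -1;                // slot is free for a new client
        }
    }
}

In particular, calling drop_idle_clients() in the branch where poll() returned 0 (instead of just continue) gives you the periodic check.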

Websocket client is receiving consecutive TEXT and PING messages appended in a single packet

I have implemented a secure websocket client (with OpenSSL encryption) in C++. The problem is that for received messages larger than 16k bytes, the client does not receive the ping message from the server separately; instead, the ping message is appended to the tail end of the preceding data message. As a result, the client does not parse the ping message, does not send a PONG reply, and the server closes the connection on timeout.
I am using a python based websocket server to test my client application. As far as I have tested, if the packet size sent from server is below 16k bytes, the pings are correctly received consistently during the test.
This is my read_from_websocket function
int read_from_websocket(char *recv_buff)
{
    uint buf_capacity = INT_MAX;
    uint buf_offset = 0;
    uint read_count;
    do
    {
        read_count = ssl_read(recv_buff + buf_offset, buf_capacity);
        if (read_count == P_FD_ERR) // P_FD_ERR = -1
        {
            return -1;
        }
        else if (read_count == P_FD_PENDING) // P_FD_PENDING = -2
        {
            break;
        }
        if (read_count == 0)
        {
            break; // EOF
        }
        buf_offset += read_count;
        buf_capacity -= read_count;
    } while (buf_capacity);
    return INT_MAX - buf_capacity;
}
/** SSL read function being called by the above websocket read function **/
int ssl_read(void *buf, size_t count)
{
    int len = SSL_read(ssl, buf, count);
    if (len < 0)
    {
        int err = SSL_get_error(ssl, len);
        if (err == SSL_ERROR_WANT_READ)
        {
            return P_FD_PENDING;
        }
        else
        {
            return P_FD_ERR;
        }
    }
    return len;
}
I don't understand:
how to ensure that a single call to read_from_websocket() always returns only one complete message
why this only happens for incoming ping messages, and only when the preceding message is larger than 16k bytes
whether the server side could be the culprit here, since I'm using off-the-shelf server code
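Regarding the first point: TLS, like TCP, delivers a byte stream, so one burst of SSL_read() calls can hand you a data frame and a ping frame back to back; the WebSocket frame header, not the read boundary, is what marks where one message ends. A minimal sketch of finding that boundary in a buffer of already-received bytes, assuming server-to-client frames are unmasked as RFC 6455 requires (next_frame_length is a hypothetical helper, not part of the code above):

#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical helper: given bytes already read from the TLS connection,
// return the total size (header + payload) of the first WebSocket frame in
// the buffer, or 0 if the buffer does not yet hold a complete frame.
// Server-to-client frames are assumed unmasked, so no masking key is parsed.
static std::size_t next_frame_length(const std::vector<uint8_t>& buf)
{
    if (buf.size() < 2)
        return 0;

    uint64_t payload_len = buf[1] & 0x7F;
    std::size_t header_len = 2;

    if (payload_len == 126)            // 16-bit extended payload length
    {
        if (buf.size() < 4)
            return 0;
        payload_len = (uint64_t(buf[2]) << 8) | buf[3];
        header_len = 4;
    }
    else if (payload_len == 127)       // 64-bit extended payload length
    {
        if (buf.size() < 10)
            return 0;
        payload_len = 0;
        for (int i = 0; i < 8; ++i)
            payload_len = (payload_len << 8) | buf[2 + i];
        header_len = 10;
    }

    const std::size_t total = header_len + static_cast<std::size_t>(payload_len);
    return buf.size() >= total ? total : 0;
}

A read loop would then append whatever ssl_read() returns to such a buffer, repeatedly peel off complete frames, and answer a PING frame (opcode 0x9 in buf[0] & 0x0F) with a PONG before processing the next frame.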

How to receive a "DONE" TCP packet after waiting for server to finish processing

I'm trying to get a couple of Raspberry Pis to network with one another to process data. After the client sends the server the initial config file, I'd like the server to send back a packet saying that it has finished saving the config file to disk.
Originally, I had the server echo the message back and it ran just fine.
client
// Initialize a connection to the server
TCPSocket sock(
    config["server"].as<std::string>(),
    config["port"].as<unsigned short>()
);

// Send the configuration file to the server
configRaw = "CONFIG\n" + getFileData(configFile);
handleBuffer(configRaw, messageBuffer, messageLength);
sock.send(messageBuffer, messageLength);

while (totalBytesReceived < messageLength) {
    // When bytesReceived is negative, the packet received is no longer
    // a part of the buffer. At this point the packet is no longer what the
    // server is sending back.
    if ((bytesReceived = (sock.recv(receiveBuffer, RCVBUFFERSIZE))) <= 0) {
        std::cerr << "Unable to read";
        exit(1);
    }
    totalBytesReceived += bytesReceived;
    receiveBuffer[bytesReceived] = '\0';
}
server
while (
    (messageLength = clientSocket->recv(messageBuffer, RCVBUFFERSIZE)) > 0
) {
    message.append(
        std::string(messageBuffer).substr(0, messageLength)
    );
    clientSocket->send(messageBuffer, messageLength);
}
When the server only echoes the original message, it runs fine. But when I add an additional receive on the client, without changing the server, the server never leaves its while loop:
all of server
while (
    (messageLength = clientSocket->recv(messageBuffer, RCVBUFFERSIZE)) > 0
) {
    message.append(
        std::string(messageBuffer).substr(0, messageLength)
    );
    clientSocket->send(messageBuffer, messageLength);
}

std::istringstream iss(message);
std::getline(iss, line);
if (line == "CONFIG") {
    writeConfigFile(message); // write the config file to disk
    handleBuffer(done, sendBuffer, messageLength);
    // Let the client know that the server has written everything to disk.
    clientSocket->send(sendBuffer, messageLength);
}
broken client
// Initialize a connection to the server
TCPSocket sock(
    config["server"].as<std::string>(),
    config["port"].as<unsigned short>()
);

// Send the configuration file to the server
configRaw = "CONFIG\n" + getFileData(configFile);
handleBuffer(configRaw, messageBuffer, messageLength);
sock.send(messageBuffer, messageLength);

while (totalBytesReceived < messageLength) { // NEVER LEAVES THIS LOOP
    // When bytesReceived is negative, the packet received is no longer
    // a part of the buffer. At this point the packet is no longer what the
    // server is sending back.
    if ((bytesReceived = (sock.recv(receiveBuffer, RCVBUFFERSIZE))) <= 0) {
        std::cerr << "Unable to read";
        exit(1);
    }
    totalBytesReceived += bytesReceived;
    receiveBuffer[bytesReceived] = '\0';
}

sock.recv(receiveBuffer, RCVBUFFERSIZE); // THIS BREAKS THE SERVER
Is there a certain order of operations when it comes to TCP sockets?
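One likely reading of the deadlock, as a sketch rather than a verified diagnosis: the server's recv() loop only ends when recv() returns 0, which only happens once the client closes (or half-closes) its side of the connection, while the broken client now blocks in its extra recv() waiting for "DONE", so each end is waiting on the other. A common way out is to half-close the client's sending direction once the whole config has been sent. Sketched below with a plain POSIX descriptor; whether the TCPSocket wrapper exposes the raw descriptor or a half-close of its own depends on that class:

#include <cstdio>
#include <sys/types.h>
#include <sys/socket.h>
#include <unistd.h>

// Sketch only: `fd` is the raw descriptor of the already-connected socket.
// Call this after the whole config has been sent.
void finish_upload_and_wait_for_done(int fd)
{
    // Half-close the write direction: the server's recv() returns 0, its
    // while loop ends, it writes the config to disk, and it can still send
    // "DONE" back, because only the client-to-server direction is closed.
    shutdown(fd, SHUT_WR);

    char reply[512];
    ssize_t n;
    while ((n = recv(fd, reply, sizeof(reply) - 1, 0)) > 0)
    {
        reply[n] = '\0';
        printf("server said: %s\n", reply);  // look for "DONE" here
    }
    close(fd);
}

The other common option is to frame the messages yourself (a length prefix or a terminator), so the server knows when the request is complete without needing end-of-stream.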

WinSocket IRC USER command

When I connect to an IRC server via telnet, everything works fine, but in my program there is no response from the server after the greeting message. What's wrong?
PS: when I send "JOIN #channel", the server responds.
fragment of the code:
while (true)
{
    ret = recv(pocket, buf, 512, 0);
    if (ret == 0 || ret == SOCKET_ERROR)
    {
        printf("Server closed the connection");
        break;
    }
    buf[ret] = '\0';
    input = buf;
    printf("%s\n", input.c_str());
    if (fTime)
    {
        isend(pocket, "USER foox 0 0 :foox");
        isend(pocket, "NICK foobar");
        fTime = false;
    }
    memset(buf, 0, sizeof(buf));
}
isend function:
bool isend(SOCKET socket, std::string message)
{
    int ret = send(socket, message.c_str(), message.size() + 1, 0);
    if (!ret) {
        printf("Failed to send packet: \"%s\"", message.c_str());
        return false;
    }
    else
        return true;
}
Don't read upon connection. Send the NICK and USER information as per RFC 2812; you're doing it in the reverse of the suggested order. Both the NICK and USER lines need to be correctly terminated with \r\n, and only then should you read.
Don't send message.size() + 1 - do send message.size(). I don't understand why you were sending message.size() + 1, and you didn't answer why in my comments.
If you get stuck, I suggest using something like Wireshark with an unencrypted connection and logging how IRC clients manage it.
You have three issues:
You do a blocking read, which will wait forever if there's nothing to read.
You need to send a carriage return and newline after each line.
You don't want to send the terminating zero byte.
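A minimal sketch of a send helper with those fixes applied, reusing the names from the code above (a fully robust version would also loop on partial sends):

#include <cstdio>
#include <string>
#include <winsock2.h>

bool isend(SOCKET sock, std::string message)
{
    message += "\r\n";  // every IRC line must be terminated with CR LF

    // Send exactly message.size() bytes - the terminating '\0' is not part of the protocol.
    int ret = send(sock, message.c_str(), static_cast<int>(message.size()), 0);
    if (ret == SOCKET_ERROR)
    {
        printf("Failed to send line: \"%s\"\n", message.c_str());
        return false;
    }
    return true;
}

Registration would then go out right after connecting, before the first recv(), and in the suggested order:

isend(pocket, "NICK foobar");
isend(pocket, "USER foox 0 0 :foox");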

How to implement an SSL tunnel through a transparent proxy?

I have read the SSL TUNNELING INTERNET-DRAFT of December 1995 and set up an HTTP transparent proxy that works perfectly with unencrypted traffic.
Having read the above, as well as googled my brains out, the accepted method to create a tunnel for secure traffic through a proxy seems to be:
connect to the requested host, then have the proxy send an "HTTP 200..." confirmation message back to the client, then from that point on simply pass all further data traffic between client and server.
When I try this, however, the client (Chrome browser) responds to the "HTTP 200..." message with three wingdings characters which I forward to the remote host. At this point there is no response back and the connection fails.
Here is the code I am using for this, after having connected to the host:
if ((*request == 'C') && (*(request+1) == 'O') && (*(request+2) == 'N') && (*(request+3) == 'N'))
{
    int recvLen;
    send(output, htok, strlen(htok), 0); // htok looks like "HTTP/1.0 200 Connection Established\nProxy-Agent: this_proxy\r\n\r\n"
    std::memset(buff, 0, bSize);
    int total;
    int bytes;
    int n;
    char cdata[MAXDATA];
    while ((recvLen = recv(output, buff, bSize-1, 0)) > 0) // recving from client - here we get wingdings
    {
        memset(cdata, 0, MAXDATA);
        strcat(cdata, buff);
        while (recvLen >= bSize-1) // just in case buff is too small
        {
            std::memset(buff, 0, bSize);
            recvLen = recv(output, buff, bSize-1, 0);
            strcat(cdata, buff);
        }
        total = 0;
        bytes = strlen(cdata);
        cout << cdata << endl; // how I see the wingdings
        while (total < strlen(cdata))
        {
            n = send(requestSock, cdata + total, bytes, 0); // forwarding to remote host
            if (n == SOCKET_ERROR)
            {
                cout << "secure sending error" << endl;
                break;
            }
            total += n;
            bytes -= n;
        }
        std::memset(buff, 0, bSize);
        recvLen = recv(requestSock, buff, bSize, 0); // get reply from remote host
        if (recvLen > 0)
        {
            do
            {
                cout << "Thread " << threadid << " [Connection:Secure]: " << recvLen << endl;
                send(output, buff, recvLen, 0); // forward all to client
                recvLen = recv(requestSock, buff, bSize, 0);
                if (0 == recvLen || SOCKET_ERROR == recvLen)
                {
                    cout << "finished secure receiving or socket error" << endl;
                    break;
                }
            } while (true);
        }
    } // end while, loop checks again for client data
Can anyone spot the error of my ways?
Your code is much more complicated than necessary. Just read into a char array, save the length returned, and write that many bytes from the same array, in a loop until recv() returns zero - about four lines of code including two for the braces. Don't try to assemble the entire incoming message; just relay whatever comes in as it arrives. Otherwise you are only adding latency and programming errors. Get rid of all the strXXX() calls altogether.
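As a rough sketch of that relay, one direction only (a full tunnel runs the same loop for the opposite direction, typically on another thread or via select()):

#include <winsock2.h>

// Sketch: relay everything arriving on `from` to `to` until the peer closes
// or an error occurs. Binary-safe, because the length returned by recv() is
// what gets passed to send() - no strXXX() calls involved.
void relay(SOCKET from, SOCKET to)
{
    char buff[4096];
    int recvLen;
    while ((recvLen = recv(from, buff, (int)sizeof(buff), 0)) > 0)
    {
        send(to, buff, recvLen, 0);
    }
}

For the CONNECT case that would be relay(output, requestSock) for the client-to-server direction and relay(requestSock, output) for the replies.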
I don't think you should assume that the traffic contains no ASCII NUL characters:
    strcat(cdata, buff);
    }
    total = 0;
    bytes = strlen(cdata);
If there are ASCII NULs in the stream, these calls will fail: strcat() and strlen() stop at the first NUL, so part of the data is silently dropped.