Cesanta Mongoose - problems when connecting to localhost - c++

I'm having issues building an HTTP server with the Cesanta Mongoose web server library. The problem occurs when the server listens on port 8080 and a client sends an HTTP request to localhost:8080: the server processes the request fine and sends back a response, but the client only processes and prints that response after I kill the server process. In Mongoose you create connections that take an event handler function, ev_handler(), which is called whenever an "event" occurs, such as receiving a request or a reply. On the server side, the event handler is called fine when it receives the request from the client on port 8080. However, the client-side event handler is not called when the server sends the reply; it is only called after the server process is killed. I suspected this might have something to do with the connection being on localhost, and I was right - the issue does not occur when the client sends requests to addresses other than localhost; there the event handler is called fine. Here is the client-side ev_handler function for reference:
static void ev_handler(struct mg_connection *c, int ev, void *p) {
  if (ev == MG_EV_HTTP_REPLY) {
    struct http_message *hm = (struct http_message *) p;
    c->flags |= MG_F_CLOSE_IMMEDIATELY;
    fwrite(hm->message.p, 1, (int) hm->message.len, stdout);
    putchar('\n');
    exit_flag = 1;
  } else if (ev == MG_EV_CLOSE) {
    exit_flag = 1;
  }
}
Is this a common issue when trying to establish a connection on localhost with a server on the same computer?

The cause of this behavior is that the client connection does not fire the event until all data has been read. How does the client know that all data has been read? There are three possibilities:
1. The server sent a Content-Length: XXX header and the client has read XXX bytes of the message body, so it knows it has received everything.
2. The server sent a Transfer-Encoding: chunked header and sent all data chunks followed by an empty chunk. When the client receives the empty chunk, it knows it has received everything.
3. The server set neither Content-Length nor Transfer-Encoding. In this case the client does not know the size of the body, so it keeps reading until the server closes the connection.
What you are seeing is case (3). Solution: set Content-Length in your server code.
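For example, assuming the Mongoose 6.x API that the snippet above appears to use (MG_EV_HTTP_REQUEST, mg_send_head() and mg_printf() belong to that API), a minimal server-side handler sketch - not the asker's actual server code - could look like this:
#include <string.h>
#include "mongoose.h"
static void server_ev_handler(struct mg_connection *c, int ev, void *p) {
  if (ev == MG_EV_HTTP_REQUEST) {
    const char *body = "hello from 8080";
    /* mg_send_head() writes the status line plus a Content-Length header,
       so the client knows exactly when the response body ends. */
    mg_send_head(c, 200, (int64_t) strlen(body), "Content-Type: text/plain");
    mg_printf(c, "%s", body);
  }
}
With Content-Length present, the client-side MG_EV_HTTP_REPLY event fires as soon as the body has been read, without waiting for the server to close the connection.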

Related

Python socket.recv Closing Socket Prematurely

I have a web proxy that starts a TCP listener socket that accepts connections from clients. The listener accepts connections via:
clientConnection, clientAddress = listenerSocket.accept()
and then a new thread handles the client connection from there.
To mock a client connection, I am using telnet to connect to the proxy and issue commands. The proxy needs to receive data from telnet and I need to make sure that I receive all of it. To achieve this, I am doing the following:
while True:
    requestBytes = clientConnection.recv(1024)
    if not requestBytes:
        break
    requestBuffer += requestBytes
The proxy then decodes the bytes and does some things with them that take a little bit of time, and then has to send a response back to the same client. However, when using the above code, the clientConnection gets closed long before I can process the bytes and respond.
Here's what I don't understand, when I use the following instead:
while True:
    requestBytes = clientConnection.recv(1024)
    requestBuffer += requestBytes
    break
It works just fine and the clientConnection remains intact. This obviously has a problem if I receive more than 1024 bytes, but the clientConnection does not get closed.
More specifically, the error occurs after I have a response to send to the client and call:
clientConnection.sendall(response)
clientConnection.shutdown(1)
clientConnection.close()
The line clientConnection.shutdown(1) throws the error:
[Errno 107] Transport endpoint is not connected
which is confusing because somehow it was able to still call sendall on the previous line. Note that I did not actually receive anything on the client side.
I am sure that the connection is not getting closed elsewhere in the code. What exactly is happening here and what is the best way to do something like recvall and keep the clientConnection open?
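One common pattern (offered here only as an illustrative sketch, not something from the original post) is to read until a protocol delimiter rather than until the peer closes; for an HTTP-style request that delimiter is the blank line ending the headers:
def recv_until(sock, delimiter=b"\r\n\r\n", bufsize=1024):
    # Read from sock until the delimiter is seen or the peer closes.
    data = b""
    while delimiter not in data:
        chunk = sock.recv(bufsize)
        if not chunk:  # peer closed before sending the delimiter
            break
        data += chunk
    return data

requestBuffer = recv_until(clientConnection)  # connection stays open for the reply
This way recv() is not required to return b"" (which only happens when the client closes its side) before the proxy can go on to process the request and respond.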

Send Recv from client to server socket by establish TCP Connection only once

I am working on a client/server solution in C++.
From the client, I am sending data to my server, and from this server I am sending to another server. I am able to configure port and IP address, and am able to send successfully.
But the other server (which is not on my side) requires that only one TCP connection be established from my side; after that, only sending and receiving should happen.
If I connect twice (say, from two clients at the same time), it shows connection refused.
Part of the code is shown below:
while ((len = stream->receive(input, sizeof(input)-1)) > 0)
{
    input[len] = NULL;
    //Code Addition by Srini starts here
    //Client declaration
    TCPConnector* connector_client = new TCPConnector();
    printf("ip_client = %s\tport_client = %s\tport_client_int = %d\n",
           ip_client.c_str(), port_client.c_str(), atoi(port_client.c_str()));
    TCPStream* stream_client = connector_client->connect(ip_client.c_str(), atoi(port_client.c_str()));
    //Client declaration ends
    if (stream_client)
    {
        //message = "Is there life on Mars?";
        //stream_client->send(message.c_str(), message.size());
        //printf("sent - %s\n", message.c_str());
        stream_client->send(input, sizeof(input));
        printf("sent - %s\n", input);
        len = stream_client->receive(line, sizeof(line));
        line[len] = NULL;
        printf("received - %s\n", line);
        delete stream_client;
    }
    //Code Addition by Srini ends here
    stream->send(line, len);
    printf("thread %lu, echoed '%s' back to the client\n",
           (long unsigned int) self(), line);
}
The full thread code (receiving from the client, sending to the server, receiving from the server, and sending back to the client) is shown at the link below:
https://pastebin.com/UmPQJ70w
How can I change my design flow? Even in a basic client/server diagram, each time the client calls connect(), the server calls accept(), and then sending/receiving happens. So what can be done to modify the flow so that the client connects only once?
Your intermediate server (which is acting as a proxy, so let's call it that) needs to maintain a single connection to the other server and delegate messaging with it in parallel to the messaging being done between your proxy and its clients.
I would suggest creating a separate thread whose sole task is to maintain that connection to the other server, and to send/receive messages with it.
When a client sends a message to your proxy, place the message in a thread-safe queue somewhere. Have the thread check the queue periodically and send any queued messages to the other server.
When the other server sends a message to your proxy, the thread can receive it and forward it to the appropriate client.
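A rough sketch of that design, reusing the TCPConnector/TCPStream classes from the question's code (the UpstreamLink name and all other details below are illustrative; shutdown handling and routing replies back to the right client are omitted):
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

class UpstreamLink {
public:
    UpstreamLink(const std::string &ip, int port)
        : worker_(&UpstreamLink::run, this, ip, port) {}

    // Called from the client-handling threads: queue a message for the other server.
    void post(std::string msg) {
        {
            std::lock_guard<std::mutex> lock(m_);
            queue_.push(std::move(msg));
        }
        cv_.notify_one();
    }

private:
    void run(std::string ip, int port) {
        TCPConnector connector;
        // The single TCP connection to the other server, created exactly once.
        TCPStream *stream = connector.connect(ip.c_str(), port);
        for (;;) {
            std::unique_lock<std::mutex> lock(m_);
            cv_.wait(lock, [this] { return !queue_.empty(); });
            std::string msg = std::move(queue_.front());
            queue_.pop();
            lock.unlock();

            stream->send(msg.c_str(), msg.size());      // forward to the other server
            char reply[256];
            int len = stream->receive(reply, sizeof(reply) - 1);
            // ... hand reply/len back to the client that posted msg ...
        }
    }

    std::queue<std::string> queue_;
    std::mutex m_;
    std::condition_variable cv_;
    std::thread worker_;  // joining/shutdown handling omitted for brevity
};
The key point is that connector.connect() runs once, when the worker thread starts; every subsequent client request reuses that single stream instead of opening a new connection.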

On server side in QTcpServer appears: The remote host closed the connection

I have a QTcpServer app and QTcpClient app.
See my screenshot.
When a client disconnects from the server after interacting with it, the following slot is invoked on the server side (on the socket that represents the client):
void CMyClient::onSocketDisplayError(QAbstractSocket::SocketError socketError)
{
    QString sErr = m_pClientSocket->errorString();
    m_pWin->AddMessageFormClient("Was gotten some error! " + sErr);
}
Error message:
The remote host closed the connection.
After that, this slot is invoked:
void CMyClient::onSocketDisconnected()
{
    m_pWin->AddMessageFormClient("Client is disconnected!");
    m_pWin->UpdateDisconnectUI();
}
Is it proper behavior for the server side to generate onSocketDisplayError?
The code to disconnect on client side:
void MainWindow::on_pushButton_DisconnectFromServ_clicked()
{
    m_pSocket->disconnectFromHost();
    m_pSocket->waitForDisconnected(3000);
}
According to the documentation of QAbstractSocket, which is the class behind QTcpSocket and thus behind both your client and your server sockets (emphasis mine):
To close the socket, call disconnectFromHost(). QAbstractSocket enters QAbstractSocket::ClosingState. After all pending data has been written to the socket, QAbstractSocket actually closes the socket, enters QAbstractSocket::UnconnectedState, and emits disconnected(). If you want to abort a connection immediately, discarding all pending data, call abort() instead. If the remote host closes the connection, QAbstractSocket will emit error(QAbstractSocket::RemoteHostClosedError), during which the socket state will still be ConnectedState, and then the disconnected() signal will be emitted.
Therefore I'd say that:
disconnectFromHost is what you should use to close the client or the server
It's the proper behavior for the server to emit an error that indicates that a remote host closed the connection
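For instance, the server-side error slot from the question could treat that specific error as informational rather than as a failure (a sketch only; QAbstractSocket::RemoteHostClosedError is the error code the quoted documentation refers to):
void CMyClient::onSocketDisplayError(QAbstractSocket::SocketError socketError)
{
    if (socketError == QAbstractSocket::RemoteHostClosedError) {
        // Expected when the peer calls disconnectFromHost(); not a real failure.
        m_pWin->AddMessageFormClient("Client closed the connection.");
        return;
    }
    m_pWin->AddMessageFormClient("Socket error: " + m_pClientSocket->errorString());
}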

boost asio async_read header connection closes too early

Providing an MCVE is going to be hard; the scenario is the following:
a server written in c++ with boost asio offers some services
a client written in c++ with boost asio requests services
There are custom headers and most communication is done using multipart/form.
However, in the case where the server returns a 401 for an unauthorized access,
the client receives a broken pipe (system error 32).
AFAIK this happens when the server connection closes too early.
So, running it in gdb, I can see that the problem is indeed in the transition from the async_write which sends the request to the async_read_until which reads the first line of the HTTP header:
The connect routine sends the request from the client to the server:
boost::asio::async_write(*socket_.get(),
                         request_,
                         boost::bind(&asio_handler<http_socket>::write_request,
                                     this,
                                     boost::asio::placeholders::error,
                                     boost::asio::placeholders::bytes_transferred));
And the write_request callback checks whether the request was sent OK, and then reads the first line (until the first newline):
template <class T>
void asio_handler<T>::write_request(const boost::system::error_code & err,
                                    const std::size_t bytes)
{
    if (!err) {
        // read until first newline
        boost::asio::async_read_until(*socket_,
                                      buffer_,
                                      "\r\n",
                                      boost::bind(&asio_handler::read_status_line,
                                                  this,
                                                  boost::asio::placeholders::error,
                                                  boost::asio::placeholders::bytes_transferred));
    }
    else {
        end(err);
    }
}
The problem is that end(err) is always called with a broken pipe (error code 32), meaning, as far as I understand, that the server closed the connection. The server does indeed close the connection, but only after it has sent the message HTTP/1.1 401 Unauthorized.
using curl with the appropriate request, we do get the actual message/error before the server closes the connection
using our client written in C++/boost asio we only get the broken pipe and no data
only when the server leaves the connection open, do we get to the point of reading the error (401) but that defeats the purpose, since now the connection is left open.
I would really appreciate any hints or tips. I understand that without the code its hard to help, so I can add more source at any time.
EDIT:
If I do not check for errors between writing the request and reading the server reply, then I do get the actual HTTP 401 error. However, this seems counter-intuitive, and I am not sure why this happens or whether it is supposed to happen.
The observed behavior is allowed per the HTTP specification.
A client or server may close the socket at any time. The server can provide a response and close the connection before the client has finished transmitting the request. When writing the body, it is recommended that clients monitor the socket for an error or close notification. From RFC 7230 (HTTP/1.1: Message Syntax and Routing), Section 6.5, Failures and Timeouts:
6.5. Failures and Timeouts
A client, server, or proxy MAY close the transport connection at any time. [...]
A client sending a message body SHOULD monitor the network connection for an error response while it is transmitting the request. If the client sees a response that indicates the server does not wish to receive the message body and is closing the connection, the client SHOULD immediately cease transmitting the body and close its side of the connection.
On a graceful connection closure, the server will send a response to the client before closing the underlying socket:
6.6. Tear-down
A server that sends a "close" connection option MUST initiate a close of the connection [...] after it sends the response containing "close". [...]
Given the above behaviors, there are three possible scenarios. The async_write() operation completes with:
success, indicating the request was written in full. The client may or may not have received the HTTP Response yet
an error, indicating the request was not written in full. If there is data available to be read on the socket, then it may contain the HTTP Response sent by the server before the connection terminated. The HTTP connection may have terminated gracefully
an error, indicating the request was not written in full. If there is no data available to be read on the socket, then the HTTP connection was not terminated gracefully
Consider either:
initiating the async_read() operation if the async_write() is successful or there is data available to be read
void write_request(
    const boost::system::error_code & error,
    const std::size_t bytes_transferred)
{
    // The server may close the connection before the HTTP Request finished
    // writing. In that case, the HTTP Response will be available on the
    // socket. Only stop the call chain if an error occurred and no data is
    // available.
    if (error && !socket_->available())
    {
        return;
    }
    boost::asio::async_read_until(*socket_, buffer_, "\r\n", ...);
}
per the RFC recommendation, initiate the async_read() operation at the same time as the async_write(). If the server indicates that the HTTP connection is closing, the client would then shut down its send side of the socket. The additional state handling may not warrant the extra complexity (a rough sketch follows below)
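A sketch of that second variant, reusing the names from the question's code (socket_, buffer_, request_, read_status_line and write_request are assumed to exist as shown above); the two operations may be outstanding at the same time because one only reads and the other only writes:
// Start reading the status line before initiating the write, so a response
// the server sends while the request is still being transmitted is not lost.
boost::asio::async_read_until(*socket_, buffer_, "\r\n",
                              boost::bind(&asio_handler<http_socket>::read_status_line,
                                          this,
                                          boost::asio::placeholders::error,
                                          boost::asio::placeholders::bytes_transferred));

boost::asio::async_write(*socket_.get(),
                         request_,
                         boost::bind(&asio_handler<http_socket>::write_request,
                                     this,
                                     boost::asio::placeholders::error,
                                     boost::asio::placeholders::bytes_transferred));
In this arrangement write_request no longer needs to start the read; it only has to report write errors (and, per the RFC, stop transmitting the body if the server has signalled that it is closing the connection).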

How do I set a timeout for TIdHTTPProxyServer (not the connection timeout)

I am using TIdHTTPProxyServer, and I want to terminate the connection when it successfully connects to the target HTTP server but receives no response for a long time (e.g. 3 minutes).
Currently I can find no related property or event for this. Also, even if the client terminates the connection before the proxy server receives the response from the HTTP server, the OnException event will not be fired until the proxy server receives that response. (That is, as long as the proxy server receives no response from the HTTP server, I do not even know that the client has already terminated the connection...)
Any help will be appreciated.
Thanks!
Willy
Indy uses infinite timeouts by default. To do what you are asking for, you need to set the ReadTimeout property of the outbound connection to the target server. You can access that connection via the TIdHTTPProxyServerContext.OutboundClient property. Use the OnHTTPBeforeCommand event, which is triggered just before the OutboundClient connects to the target server, e.g.:
#include "IdTCPClient.hpp"
void __fastcall TForm1::IdHTTPProxyServer1HTTPBeforeCommand(TIdHTTPProxyServerContext *AContext)
{
static_cast<TIdTCPClient*>(AContext->OutboundClient)->ReadTimeout = ...;
}
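For the 3-minute timeout mentioned in the question, and assuming ReadTimeout is expressed in milliseconds (the usual unit for Indy timeout properties; check your Indy version), the assignment would look something like this:
    // 3 minutes = 180000 ms
    static_cast<TIdTCPClient*>(AContext->OutboundClient)->ReadTimeout = 3 * 60 * 1000;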