How do I simulate a server failure with the httptest or http package in an isolated unit test?
Details:
I'm using gorilla/websocket, so in mt, msg, err := t.conn.ReadMessage() the mt value should be -1 when the server goes down.
I tried the following as the main option:
var srv *httptest.Server
srv = httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
    _, err := wsUpgrader.Upgrade(w, r, nil)
    if err != nil {
        t.Fatal(err)
    }
    srv.Close()
}))
But the client didn't receive any messages at all. I also tried a standard http server with a panic, but after recover() the client didn't receive any messages either. srv.CloseClientConnections() didn't help; the client kept waiting for messages as before.
It's sufficient to call conn.Close() on the client connection on the WS server side to terminate the connection without sending a close frame. This triggers the abnormal-close behavior, and the message type value will be -1.
After the connection is upgraded to WebSocket it is hijacked, and the general HTTP server can no longer handle it; that's why closing the server has no effect on hijacked connections.
Related
I'm writing a program that requires connecting to a web server using the web_socket_channel package. I am following the guide https://flutter.dev/docs/cookbook/networking/web-sockets to connect to an AWS server. The link is something like this:
wss://xxxxxxx.execute-api.ap-southeast-1.amazonaws.com/dev/
Using the package I was able to connect and get connectionState.waiting, but I cannot seem to listen for any data from the server or send data to it. The data I send has the format below:
Map message = {
  "action": 'subscribe',
  "channel": 'contentTest',
};
channel.sink.add(jsonEncode(message));
The rest of my code is similar to the guide, but the server does not seem to receive any data, and neither does my client. Can anyone share a working example for the above problem? Thank you in advance.
I have a working example to share; you can see how the connection is made and the message is sent. This example uses the WebSocket class from dart:io. When I wrote this I had some issues with web_socket_channel (I can't remember why), so I opted for Dart's WebSocket class.
Future<WatchResponse> WatchCollection(
    CollectionRequest collectionRequest, String token) async {
  try {
    WebSocket ws = await WebSocket.connect(
        "ws://${this.authority}/gapi/collection/watch?token=$token");
    if (ws.readyState == WebSocket.open) {
      ws.add(jsonEncode(collectionRequest.toMap()));
      return WatchResponse(ok: true, streamSubscription: ws.listen(null));
    }
  } on WebSocketException catch (err) {
    return WatchResponse(ok: false, webSocketException: err);
  }
  return WatchResponse(ok: false, webSocketException: null);
}
I have a Lambda function A with a user validator that calls a second Lambda function B with a Bearer token and expects to receive the user information.
When I define the timeout of function A as less than 28 seconds, I receive the following error:
ERROR: Get https://dev.url.com/auth/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
My code is:
client := &http.Client{
    Timeout: time.Second * 20,
}
req, err := http.NewRequest("GET", m.authURL, nil)
if err != nil {
    return err
}
req.Header.Set("Content-Type", "application/json")
req.Header.Set("Authorization", "Bearer "+m.token)
resp, err := client.Do(req)
if err != nil {
    return errors.Errorf("Failed to request auth service.\ntoken: %s\nERROR: %+v\n", m.token, err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
    return errors.Errorf("auth service returned status %d", resp.StatusCode)
}
body, err := ioutil.ReadAll(resp.Body)
log.Println(string(body))
return err
When I define the timeout of function A as 28 seconds or higher, everything works as expected: both functions run correctly, and the whole process takes around 7 ms (!!!).
Is it possible that a timeout plays such a role in the execution? If so, why?
To whoever might come by with the same issue: as mentioned in the comments above, my Lambda function was inside a VPC. I questioned my VPC configuration at the beginning, but because the execution between both Lambdas sometimes succeeded, I eliminated that hypothesis early on.
I still don't understand why increasing the timeout of my Lambda could make it work. I ran out of assumptions, and Suraj's comment encouraged me to reset my VPC configuration; instead of configuring things based on my experience, I followed every detail in the following link. During the process I realised that I had previously assigned the public route table to the private subnet. That seems to explain the problem.
I'm having issues building an HTTP server using the Cesanta Mongoose web server library. The issue occurs when I have an HTTP server listening on port 8080 and a client sending an HTTP request to localhost:8080. The server processes the request fine and sends back a response, but the client only processes and prints the response after I kill the server process. Mongoose works by having you create connections that take an event handler function, ev_handler(). This event handler function is called whenever an "event" occurs, such as the receipt of a request or a reply. On the server side, the event handler function is called fine when it receives a request from the client on 8080. However, the client-side event handler function is not called when the server sends the reply; it is called only after the server process is killed. I suspected that this might have something to do with the fact that the connection is on localhost, and I was right: the issue does not occur when the client sends requests to addresses other than localhost, and the event handler function is called fine. Here is the client-side ev_handler function for reference:
static void ev_handler(struct mg_connection *c, int ev, void *p) {
  if (ev == MG_EV_HTTP_REPLY) {
    struct http_message *hm = (struct http_message *)p;
    c->flags |= MG_F_CLOSE_IMMEDIATELY;
    fwrite(hm->message.p, 1, (int)hm->message.len, stdout);
    putchar('\n');
    exit_flag = 1;
  } else if (ev == MG_EV_CLOSE) {
    exit_flag = 1;
  }
}
Is this a common issue when trying to establish a connection on localhost with a server on the same computer?
The cause of this behavior is that the client connection does not fire an event until all data has been read. How does the client know all data has been read? There are three possibilities:
The server sent a Content-Length: XXX header and the client has read XXX bytes of the message body, so it knows it received everything.
The server sent a Transfer-Encoding: chunked header and sent all data chunks followed by an empty chunk. When the client receives the empty chunk, it knows it received everything.
The server set neither Content-Length nor Transfer-Encoding. In this case the client does not know the size of the body, so it keeps reading until the server closes the connection.
What you see is case (3). Solution: set Content-Length in your server code.
Providing an MCVE is going to be hard; the scenario is the following:
a server written in c++ with boost asio offers some services
a client written in c++ with boost asio requests services
There are custom headers and most communication is done using multipart/form.
However, when the server returns a 401 for an unauthorized access,
the client receives a broken pipe (system error 32).
AFAIK this happens when the server connection closes too early.
So, running it in gdb, I can see that the problem is indeed in the transition from the async_write which sends the request to the async_read_until which reads the first line of the HTTP header:
The connect routine sends the request from the client to the server:
boost::asio::async_write(*socket_.get(),
    request_,
    boost::bind(&asio_handler<http_socket>::write_request,
                this,
                boost::asio::placeholders::error,
                boost::asio::placeholders::bytes_transferred));
And the write_request callback checks whether the request was sent OK, and then reads up to the first newline:
template <class T>
void asio_handler<T>::write_request(const boost::system::error_code & err,
                                    const std::size_t bytes)
{
    if (!err) {
        // read until first newline
        boost::asio::async_read_until(*socket_,
                                      buffer_,
                                      "\r\n",
                                      boost::bind(&asio_handler::read_status_line,
                                                  this,
                                                  boost::asio::placeholders::error,
                                                  boost::asio::placeholders::bytes_transferred));
    }
    else {
        end(err);
    }
}
The problem is that end(err) is always called with a broken pipe (error code 32), meaning, as far as I understand, that the server closed the connection. The server does indeed close the connection, but only after it has sent the message HTTP/1.1 401 Unauthorized.
using curl with the appropriate request, we do get the actual message/error before the server closes the connection
using our client written in C++/Boost.Asio, we only get the broken pipe and no data
only when the server leaves the connection open do we get to the point of reading the error (401), but that defeats the purpose, since the connection is then left open
I would really appreciate any hints or tips. I understand that without the code it's hard to help, so I can add more source at any time.
EDIT:
If I do not check for errors between writing the request and reading the server's reply, then I do get the actual HTTP 401 error. However, this seems counter-intuitive, and I am not sure why it happens or whether it is supposed to happen.
The observed behavior is allowed per the HTTP specification.
A client or server may close the socket at anytime. The server can provide a response and close the connection before the client has finished transmitting the request. When writing the body, it is recommended that clients monitor the socket for an error or close notification. From the RFC 7230, HTTP/1.1: Message Syntax and Routing Section 6.5. Failures and Timeouts:
6.5. Failures and Timeouts
A client, server, or proxy MAY close the transport connection at any time. [...]
A client sending a message body SHOULD monitor the network connection for an error response while it is transmitting the request. If the client sees a response that indicates the server does not wish to receive the message body and is closing the connection, the client SHOULD immediately cease transmitting the body and close its side of the connection.
On a graceful connection closure, the server will send a response to the client before closing the underlying socket:
6.6. Tear-down
A server that sends a "close" connection option MUST initiate a close of the connection [...] after it sends the response containing "close". [...]
Given the above behaviors, there are three possible scenarios. The async_write() operation completes with:
success, indicating the request was written in full; the client may or may not have received the HTTP response yet
an error, indicating the request was not written in full; if there is data available to be read on the socket, it may contain the HTTP response sent by the server before the connection terminated, and the HTTP connection may have terminated gracefully
an error, indicating the request was not written in full; if there is no data available to be read on the socket, then the HTTP connection was not terminated gracefully
Consider either:
initiating the async_read() operation if the async_write() is successful or there is data available to be read
void write_request(
    const boost::system::error_code & error,
    const std::size_t bytes_transferred)
{
    // The server may close the connection before the HTTP Request finished
    // writing. In that case, the HTTP Response will be available on the
    // socket. Only stop the call chain if an error occurred and no data is
    // available.
    if (error && !socket_->available())
    {
        return;
    }
    boost::asio::async_read_until(*socket_, buffer_, "\r\n", ...);
}
per the RFC recommendation, initiate the async_read() operation at the same time as the async_write(); if the server indicates that the HTTP connection is closing, the client would then shut down the send side of its socket. The additional state handling may not warrant the extra complexity
I am using TIdHTTPProxyServer, and I want to terminate the connection when it successfully connects to the target HTTP server but receives no response for a long time (e.g. 3 minutes).
Currently I can find no related property or event for this. Worse, if the client terminates the connection before the proxy server receives the response from the HTTP server, the OnException event is not fired until the proxy server receives that response. (That is, while the proxy server is still receiving no response from the HTTP server, I don't even know that the client has already terminated the connection...)
Any help will be appreciated.
Thanks!
Willy
Indy uses infinite timeouts by default. To do what you are asking for, you need to set the ReadTimeout property of the outbound connection to the target server. You can access that connection via the TIdHTTPProxyServerContext.OutboundClient property. Use the OnHTTPBeforeCommand event, which is triggered just before the OutboundClient connects to the target server, e.g.:
#include "IdTCPClient.hpp"

void __fastcall TForm1::IdHTTPProxyServer1HTTPBeforeCommand(TIdHTTPProxyServerContext *AContext)
{
    static_cast<TIdTCPClient*>(AContext->OutboundClient)->ReadTimeout = ...;
}