How to send multiple HTTP/2 requests over the same connection with libcurl

I'm using https://curl.haxx.se/libcurl/c/http2-download.html to send multiple HTTP/2 requests to a demo HTTP server. This server is based on Spring WebFlux. To verify that libcurl can send HTTP/2 requests concurrently, the server delays 10 seconds before returning a response. This way, I hope to observe the server receiving multiple HTTP/2 requests at almost the same time over the same connection, and the client receiving the responses 10 seconds later.
However, I noticed that the server received the requests sequentially. It seems that the client doesn't send the next request before getting the response to the previous one.
Here is the server log; the requests arrived 10 seconds apart.
2021-05-07 17:14:57.514 INFO 31352 --- [ctor-http-nio-2] i.g.h.mongo.controller.PostController : Call get 609343a24b79c21c4431a2b1
2021-05-07 17:15:07.532 INFO 31352 --- [ctor-http-nio-2] i.g.h.mongo.controller.PostController : Call get 609343a24b79c21c4431a2b1
2021-05-07 17:15:17.541 INFO 31352 --- [ctor-http-nio-2] i.g.h.mongo.controller.PostController : Call get 609343a24b79c21c4431a2b1
Can anyone help me figure out my mistake? Thank you.

For me,
curl -v --http2 --parallel --config urls.txt
did exactly what you need, where urls.txt looked like
url = "localhost:8080/health"
url = "localhost:8080/health"
The result was that curl sent the first request via HTTP/1.1, received a 101 response upgrading the connection to HTTP/2, immediately sent the second request without waiting for a response, and then received both 200 responses in succession.
Note: -v is added for verbosity so you can validate that it works as expected; you only need it to print the underlying protocol conversation.
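If you want the same behaviour from the libcurl C API, the options below are the ones that enable multiplexing. This is a minimal, hedged sketch rather than the full http2-download.c example (the URL is a placeholder, error handling is omitted, and it assumes a libcurl built with HTTP/2 support and new enough for curl_multi_poll, 7.66+):

#include <curl/curl.h>

int main(void)
{
  curl_global_init(CURL_GLOBAL_DEFAULT);

  CURLM *multi = curl_multi_init();
  /* Allow streams to be multiplexed over a single connection. */
  curl_multi_setopt(multi, CURLMOPT_PIPELINING, CURLPIPE_MULTIPLEX);
  /* Optionally cap connections per host so transfers share one connection. */
  curl_multi_setopt(multi, CURLMOPT_MAX_HOST_CONNECTIONS, 1L);

  CURL *handles[3];
  for(int i = 0; i < 3; i++) {
    handles[i] = curl_easy_init();
    /* Placeholder URL; use CURL_HTTP_VERSION_2_PRIOR_KNOWLEDGE for cleartext h2c. */
    curl_easy_setopt(handles[i], CURLOPT_URL, "https://localhost:8443/posts/1");
    curl_easy_setopt(handles[i], CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_2_0);
    /* Wait for the first connection to confirm multiplexing support
       instead of opening a separate connection per transfer. */
    curl_easy_setopt(handles[i], CURLOPT_PIPEWAIT, 1L);
    curl_multi_add_handle(multi, handles[i]);
  }

  int still_running;
  do {
    curl_multi_perform(multi, &still_running);
    curl_multi_poll(multi, NULL, 0, 1000, NULL);
  } while(still_running);

  for(int i = 0; i < 3; i++) {
    curl_multi_remove_handle(multi, handles[i]);
    curl_easy_cleanup(handles[i]);
  }
  curl_multi_cleanup(multi);
  curl_global_cleanup();
  return 0;
}

It is also worth confirming in verbose output that HTTP/2 is actually negotiated, since requests on a plain HTTP/1.1 connection cannot be multiplexed.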

Related

AWS HTTP API Gateway 503 Service Unavailable

I have an HTTP API Gateway with an HTTP integration backend server on EC2. The API receives lots of queries during the day, and looking at the logs I realized that it sometimes returns a 503 HTTP code with the body:
{ "message": "Service Unavailable" }
When I found this out, I tried the API by running the HTTP requests many times in Postman; when I try twenty times, I get at least one 503.
I then thought that the HTTP integration server was busy, but the server is not loaded, and when I go directly to the HTTP integration server I get 200 responses every time.
The timeout parameter is set to 30000 ms and the endpoint's average response time is 200 ms, so the timeout is not the problem. Also, the HTTP 503 does not come 30 seconds after the request but instantly.
Can anyone help me?
Thanks
I solved this issue by editing the keep-alive connection parameters of my internal integration server. AWS API Gateway expects standard keep-alive behaviour from the integration, so I kept tweaking my NGINX server parameters until the issue was gone.
I had the same issue with a self-made Node microservice integrated into AWS API Gateway. After reconfiguring the CloudWatch logs I got a further indicator of what was wrong: INTEGRATION_NETWORK_FAILURE.
Verify that your problem is the same, i.e. through more detailed log output:
In API Gateway, under Logging, add more output to "Log format".
Use this or similar content for "Log format":
{"httpMethod":"$context.httpMethod","integrationErrorMessage":"$context.integrationErrorMessage","protocol":"$context.protocol","requestId":"$context.requestId","requestTime":"$context.requestTime","resourcePath":"$context.resourcePath","responseLength":"$context.responseLength","routeKey":"$context.routeKey","sourceIp":"$context.identity.sourceIp","status":"$context.status","errMsg":"$context.error.message","errType":"$context.error.responseType","intError":"$context.integration.error","intIntStatus":"$context.integration.integrationStatus","intLat":"$context.integration.latency","intReqID":"$context.integration.requestId","intStatus":"$context.integration.status"}
After calling the API Gateway endpoint and seeing it fail, consult the logs again; the integration error fields should now show INTEGRATION_NETWORK_FAILURE.
Solve it in the Node.js microservice (using Express)
Add timeouts for headers and keep-alive to the Express server's underlying socket when you start listening.
const app = require('express')();

// If you also need to advertise the keep-alive settings in the HTTP response
// headers, you can add middleware like this (optional):
/*
app.use((req, res, next) => {
  res.setHeader('Connection', 'keep-alive');
  res.setHeader('Keep-Alive', 'timeout=30');
  next();
});
*/

/* ...your main logic... */

const server = app.listen(8080, 'localhost', () => {
  console.warn(`⚡️[server]: Server is running at http://localhost:8080`);
});

// Keep idle sockets open longer than the gateway expects to reuse them,
// and give header parsing slightly more time than the keep-alive itself.
server.keepAliveTimeout = 30 * 1000; // <- important lines
server.headersTimeout = 35 * 1000;   // <- important lines
Reason
Some AWS components seem to demand that the connection be kept alive, even if the server says otherwise (Connection: close). When API Gateway (and possibly AWS ELBs) tries to reuse the connection, that reuse fails because the other side has most likely already closed it, hence the assumed "NETWORK-FAILURE".
The error appears intermittent because API Gateway seems to close unused connections after a while, which gives a clean execution the next time. I can only assume they keep connections open for performance reasons.

cURL setopt CONNECTTIMEOUT vs TIMEOUT

After looking both on SO and in other places, I've noticed there is a lot of conflicting information about the cURL options CONNECTTIMEOUT vs TIMEOUT.
CONNECTTIMEOUT is definitely the timeout just for the connection phase,
TIMEOUT is stated as being either the timeout for the entire cURL process (including CONNECTTIMEOUT) or the timeout after the connection phase has finished, depending on who you ask.
Furthermore, the official libcurl docs describe TIMEOUT as
set maximum time the request is allowed to take
which is quite ambiguous language, as it could be referring to e.g. an HTTP request or to the entire process as a "request".
CONNECTTIMEOUT is how long curl waits while establishing the connection; after that, curl abandons the effort to connect. TIMEOUT, on the other hand, is the total time curl will wait for the response to a given request, including the time it takes to connect and the time the server takes to reply. Here are the official docs for both:
https://curl.haxx.se/libcurl/c/CURLOPT_CONNECTTIMEOUT.html
https://curl.haxx.se/libcurl/c/CURLOPT_TIMEOUT.html
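In libcurl C terms, a minimal sketch showing both options side by side (the URL is a placeholder):

#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  curl_global_init(CURL_GLOBAL_DEFAULT);
  CURL *curl = curl_easy_init();

  curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
  /* Give up if the connection cannot be established within 5 seconds. */
  curl_easy_setopt(curl, CURLOPT_CONNECTTIMEOUT, 5L);
  /* Give up if the whole transfer (connect + request + response)
     takes more than 30 seconds in total. */
  curl_easy_setopt(curl, CURLOPT_TIMEOUT, 30L);

  CURLcode res = curl_easy_perform(curl);
  if(res != CURLE_OK)
    fprintf(stderr, "curl_easy_perform() failed: %s\n", curl_easy_strerror(res));

  curl_easy_cleanup(curl);
  curl_global_cleanup();
  return 0;
}

With these values, a transfer that connects instantly can still run for up to 30 seconds before CURLOPT_TIMEOUT aborts it, while a transfer that never connects is abandoned after 5 seconds.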

Timeout on 3 attempt

I've set up two services with Grapevine in a client/server schema, and I'm running into the following problem: after the second request, the server or the client returns a timeout.
After calling RestResponse response = this.client.Execute(request) to execute the request, I can see that the request does not arrive at the server.
This always happens on the third call I send.

zmq DEALER socket zmq_recv_msg call always timeouts

I am using zmq ROUTER and DEALER sockets in my application (C++).
One process is listening on zmq ROUTER socket for clients to connect (a Service).
Clients connect to this service using a zmq DEALER socket. From the client I am making synchronous (blocking) requests to the service. To avoid waiting indefinitely for a response, I am setting RCVTIMEO on the DEALER socket to, let's say, 5 ms. After setting this timeout I observe unexpected behaviour on the client.
Here are the details:
Case 1: No RCVTIMEO is set on DEALER (client) socket
In this case, say the client sends 1000 requests to the service. For around 850 of these requests, the client receives a response within 5 ms.
For the remaining 150 requests, the response takes more than 5 ms to arrive.
Case 2: RCVTIMEO is set for 5 ms on DEALER (client) socket
In this case, for the first 150-200 requests I see valid responses received within the RCVTIMEO period. For all remaining requests I see the RCVTIMEO timeout firing, which is not expected. The requests are the same in both cases.
The expected behaviour would be: for the 850 fast requests we receive valid responses (as they arrive within RCVTIMEO), and for the remaining 150 requests we see a timeout.
To get the timeout behaviour I also tried zmq_poll() instead of setting RCVTIMEO, but the results are the same: most of the requests time out.
I went through the zmq documentation for details, but didn't find anything.
Can someone please explain the reason for this behaviour?
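For reference, here is a minimal sketch of the client-side setup described above, using the libzmq C API (the endpoint, message, and buffer size are placeholders):

#include <errno.h>
#include <stdio.h>
#include <zmq.h>

int main(void)
{
  void *ctx = zmq_ctx_new();
  void *dealer = zmq_socket(ctx, ZMQ_DEALER);

  int rcvtimeo = 5; /* receive timeout in milliseconds */
  zmq_setsockopt(dealer, ZMQ_RCVTIMEO, &rcvtimeo, sizeof(rcvtimeo));
  zmq_connect(dealer, "tcp://127.0.0.1:5555");

  for(int i = 0; i < 1000; i++) {
    zmq_send(dealer, "request", 7, 0);

    char buf[256];
    int n = zmq_recv(dealer, buf, sizeof(buf), 0);
    if(n < 0 && errno == EAGAIN) {
      /* RCVTIMEO expired. Note that a late reply to this request is not
         discarded; it stays queued and can be picked up by a later recv. */
      fprintf(stderr, "request %d timed out\n", i);
    }
  }

  zmq_close(dealer);
  zmq_ctx_term(ctx);
  return 0;
}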

How to delete email message with libcurl and POP3?

Is it possible to do this? I know about custom requests, so I send a custom request with the text "DELE" and set the ID of the message I want to delete. As a result, curl_easy_perform hangs until the timeout fires. On web forums people advise also sending a "QUIT" command after "DELE", but how can I send "QUIT" if libcurl hangs?
libcurl debug output follows:
* Connected to pop-mail.outlook.com (157.55.1.215) port 995 (#2)
* SSL connection using DES-CBC3-SHA
* Server certificate:
* subject: C=US; ST=Washington; L=Redmond; O=Microsoft Corporation; CN=*.hotmail.com
* start date: 2013-04-24 20:35:09 GMT
* expire date: 2016-04-24 20:35:09 GMT
* issuer: C=BE; O=GlobalSign nv-sa; CN=GlobalSign Organization Validation CA - G2
* SSL certificate verify result: unable to get local issuer certificate (20), continuing anyway.
< +OK DUB0-POP132 POP3 server ready
> CAPA
< -ERR unrecognized command
> USER ************#hotmail.com
< +OK password required
> PASS ******************
< +OK mailbox has 1 messages
> DELE 1
< +OK message deleted
* Operation too slow. Less than 1000 bytes/sec transferred the last 10 seconds
> QUIT
* Operation too slow. Less than 1000 bytes/sec transferred the last 10 seconds
* Closing connection 2
So the message is removed, but libcurl hangs until the speed limit forces it to disconnect, which is a bad idea. How can I force it to stop right after deleting the message instead of waiting for the timeout?
If you look at the libcurl documentation, CURLOPT_CUSTOMREQUEST says:
POP3
When you tell libcurl to use a custom request it will behave like a LIST or RETR command was sent where it expects data to be returned by the server. As such CURLOPT_NOBODY should be used when specifying commands such as DELE and NOOP for example.
That is why libcurl is hanging - it is waiting for more data that the server is not actually sending. So add CURLOPT_NOBODY to stop that waiting.
There's a recently added example code on the libcurl site showing exactly how to do this:
pop3-dele.c
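In outline, a hedged sketch along those lines (server address, credentials, and message number are placeholders):

#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  curl_global_init(CURL_GLOBAL_DEFAULT);
  CURL *curl = curl_easy_init();

  /* Placeholders: substitute your own server and credentials. */
  curl_easy_setopt(curl, CURLOPT_USERNAME, "user@example.com");
  curl_easy_setopt(curl, CURLOPT_PASSWORD, "secret");
  curl_easy_setopt(curl, CURLOPT_URL, "pop3s://pop-mail.outlook.com:995/");

  /* Send DELE for message 1 as the custom command ... */
  curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "DELE 1");
  /* ... and tell libcurl not to expect a body in return, so it does not
     sit waiting for data the server never sends. */
  curl_easy_setopt(curl, CURLOPT_NOBODY, 1L);

  CURLcode res = curl_easy_perform(curl);
  if(res != CURLE_OK)
    fprintf(stderr, "curl_easy_perform() failed: %s\n", curl_easy_strerror(res));

  curl_easy_cleanup(curl);
  curl_global_cleanup();
  return 0;
}

As your debug output shows, libcurl issues QUIT itself when it tears down the POP3 connection, so once the transfer no longer hangs there is no need to send QUIT by hand.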