curl_easy_perform returns CURLE_OK, but the write callback receives "502 Bad Gateway" - libcurl

I use curl_easy_perform and set the CURLOPT_WRITEFUNCTION and CURLOPT_WRITEDATA.
Sometimes it returns CURLE_OK, but the write callback function returns data "502 Bad Gateway".
Why does the request fail yet return CURLE_OK, and how can I resolve this error?

libcurl does not return an error for HTTP transfers that worked fine at the protocol level, and a 502 response is a perfectly valid HTTP transfer as far as libcurl is concerned: the bytes were transferred correctly.
To extract the HTTP response code from an HTTP response like yours, call curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &code) after the transfer has completed.

http async response handling with pistache framework

I am trying to write a C++ Pistache server that, on a specific endpoint, has to contact another Pistache server.
This is the scenario:
client -> server1 -> server2
client <- server1 <- server2
I am having problems waiting for the response in server1 and sending it back to the client asynchronously.
In more detail:
I think an efficient way of handling this would be to call response.send inside the resp.then block (send() returns a Pistache::Async::Promise). Unfortunately, it gives a segmentation fault as soon as that endpoint is called and execution enters the then block, so I guess it is illegal to do it the way I wanted. The logs give no more detail than "segmentation fault", which makes this hard to debug.
Here is my server1 code, to show how I implemented it:
void doSmth(const Rest::Request& request, Http::ResponseWriter httpResponse)
{
    auto resp_srv2 = client
        .post(addr)
        .body(json)
        .send();

    resp_srv2.then(
        [&](Http::Response response) {
            httpResponse.send(response.code());
        },
        [&](std::exception_ptr exc) {
            PrintException excPrinter;
            excPrinter(exc);
        });
}
In this case I could avoid using the barrier shown in the Pistache git repo. Using their barrier code, about 28k requests from the client are handled correctly, and then it gets stuck, so I guess some resources are not being released correctly.
Do you know how to send the response to the client asynchronously, once the server2 response is received? I need to do it efficiently and release all resources correctly.
Thanks for your help!
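A plausible cause of the segfault, not confirmed in the question, is a lifetime problem: httpResponse is a by-value parameter of doSmth, while the .then() continuation captures it by reference ([&]), so by the time server2 answers, the writer is already gone. A standalone sketch of the pattern (no Pistache types; the deferred continuation is simulated with a std::function):

```cpp
#include <functional>
#include <string>
#include <utility>

std::string sent;  // records what the writer emitted, for the demo

// Stand-in for Pistache's Http::ResponseWriter: movable, and it must still
// be alive when send() is finally called.
struct Writer {
    std::string tag = "alive";
    void send(const std::string &body) { sent = tag + ":" + body; }
};

// Stand-in for the Promise returned by client.post(...).send(): the
// continuation is stored here and runs after the handler has returned.
std::function<void()> pending;

void handler(Writer w) {
    // Capture the writer BY MOVE ([w = std::move(w)]), not by reference:
    // with [&], w is destroyed when handler() returns, and the continuation
    // would touch a dangling reference -- the classic cause of exactly this
    // kind of crash.
    pending = [w = std::move(w)]() mutable { w.send("200"); };
}
```

In Pistache terms, the equivalent change would be capturing the writer with [response = std::move(httpResponse)] in the then lambda; treat that as a hypothesis to test, not a confirmed fix.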

CloudFront Lambda@Edge bug? Changing the response status code in a viewer response

The question is: why is it not possible to change the HTTP status code before returning the response in a viewer response event in Lambda@Edge?
I have a Lambda function that must check every single response and change its status code based on a JSON file bundled with the function. I deployed this function as a viewer response trigger because I want it to execute before every single response is returned. The AWS documentation ( https://aws.amazon.com/blogs/networking-and-content-delivery/lambdaedge-design-best-practices/ ) says that if you want to execute a function for all requests, it should be placed in the viewer events.
So I created a simple function that clones the response and changes the clone's HTTP status code before returning it. This is my test code:
exports.handler = async (event) => {
    const request = event.Records[0].cf.request;
    console.log('Original request');
    console.log(request);
    const response = event.Records[0].cf.response;
    console.log('Original response');
    console.log(response);
    // clone the response just to change the status code
    let cloneResponseReturn = JSON.parse(JSON.stringify(response));
    cloneResponseReturn.status = 404;
    cloneResponseReturn.statusDescription = 'Not Found';
    console.log('Log Clone Response Return');
    console.log(cloneResponseReturn);
    return cloneResponseReturn;
};
When I look at the logs in CloudWatch, the response shows an HTTP 404 code, but for some reason CloudFront still returns the response with a 200 status code. (I've cleared the browser cache and tested with other tools such as Postman, but in all of them CloudFront returns HTTP 200.)
CloudWatch Log and Response print:
If I change this function to execute in the origin response it works, but then it executes ONLY on cache misses (as AWS tells us, origin events are executed only in that case). Since origin events run only on cache misses, to apply these redirects I would have to add a cache-busting header to make sure the origin events always execute.
This Lambda@Edge behaviour is really weird. Does anyone have any idea how I can solve this? I have already tried clearing the cache of the browsers and tools I use to test the requests, and invalidating the cache of my distribution, but it still doesn't work.
I posted the question in the AWS Forum a week ago but it is still unanswered: https://forums.aws.amazon.com/message.jspa?messageID=885516#885516
Thanks in advance.

after adding CURLOPT_TIMEOUT_MS curl doesn't send anything

I have a while loop, and inside it I send a PUT request to the Google Firebase REST API. It works very well, but I want to speed things up: the loop waits for the curl response on every iteration, which is sometimes very slow, over 200 ms. So I tried adding CURLOPT_TIMEOUT_MS, set to a low 1 millisecond.
TLDR;
after adding line
curl_easy_setopt(curl, CURLOPT_TIMEOUT_MS, 1L);
curl does not send anything to the server anymore. Or does the server somehow force the client to wait for the response to the request?
You are telling curl to fail the operation if it isn't completed within 1 millisecond. Very few requests complete that quickly, especially not if a DNS lookup is involved or the connection goes over the Internet.
So yes, most transfers will then just return CURLE_OPERATION_TIMEDOUT (28) with no content.
This is a known limitation rather than a bug in curl: with the default signal-based timeout handling (used with the standard synchronous name resolver), a timeout setting of less than one second can make the transfer return an error immediately. The workaround is to disable signal use on the handle:
CURL *conn = curl_easy_init();
curl_easy_setopt(conn, CURLOPT_NOSIGNAL, 1L);

libCURL timeout while receiving HTTP multi-part flow

I'm using libCURL to perform an HTTP GET request toward a device that responds with a continuous flow of data in a multipart HTTP response.
I'd like to handle the unfortunate but possible case where the device is disconnected/shutdown or is not reachable anymore on the network.
By default libCURL applies no timeout at all, and I need one of a few seconds, so I tried:
setting the CURLOPT_CONNECTTIMEOUT option,
but this only works at connection stage, not while already receiving data.
setting the CURLOPT_TIMEOUT option,
but this seems to always force a timeout even when data is still received.
My question is: how can I properly handle a timeout with libCURL, in the case described above?
For your scenario, instead of
curl_easy_setopt(curl, CURLOPT_TIMEOUT, <your timeout in seconds>);
use
curl_easy_setopt(curl, CURLOPT_LOW_SPEED_LIMIT, 1);
curl_easy_setopt(curl, CURLOPT_LOW_SPEED_TIME, <your timeout in seconds>);
The above two lines make sure that if the average speed drops below 1 byte per second, in a time frame of X seconds, then the operation is aborted (timeout).
See reference here.

C++: Extract Session token from cURL connection

I have used this code example successfully. Running it prints this to stdout:
{"sessionToken": <some string>,"loginStatus":"SUCCESS"}
I need the sessionToken string for the subsequent requests I have to make, so including the cookie in the HTTP headers won't work for me.
I could redirect stdout to a pipe and read from there, but I am looking for a more efficient, native libcurl/C++ way to do it.
I would suggest you set up a write callback for libcurl and receive the response directly into a memory buffer instead, then parse that buffer to extract what you need after the request has completed.
The getinmemory libcurl example shows one way to receive data directly into a memory buffer.
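A minimal sketch in that spirit, using a std::string as the buffer; the token extraction is a naive string search for illustration only (a real JSON parser is preferable in production):

```cpp
#include <cctype>
#include <cstddef>
#include <string>

// libcurl write callback (matches the CURLOPT_WRITEFUNCTION signature):
// appends each delivered chunk to the std::string passed via CURLOPT_WRITEDATA.
static size_t write_to_string(char *ptr, size_t size, size_t nmemb, void *userdata) {
    auto *buf = static_cast<std::string *>(userdata);
    buf->append(ptr, size * nmemb);
    return size * nmemb;  // returning less than this signals an error to libcurl
}

// Naive extraction of the "sessionToken" value from a body such as
// {"sessionToken": "abc","loginStatus":"SUCCESS"}. Tolerates whitespace
// after the colon; returns an empty string when the key is absent.
static std::string extract_session_token(const std::string &body) {
    const std::string key = "\"sessionToken\":";
    std::size_t pos = body.find(key);
    if (pos == std::string::npos) return "";
    pos += key.size();
    while (pos < body.size() && std::isspace(static_cast<unsigned char>(body[pos]))) ++pos;
    if (pos >= body.size() || body[pos] != '"') return "";
    std::size_t end = body.find('"', pos + 1);
    if (end == std::string::npos) return "";
    return body.substr(pos + 1, end - pos - 1);
}
```

Wire it up with curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_to_string) and curl_easy_setopt(curl, CURLOPT_WRITEDATA, &buffer), where buffer is a std::string, then call extract_session_token(buffer) after curl_easy_perform returns.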