I'm writing a GUI application that needs to periodically request data and plot it in a widget. I created a wxThread whose Entry() fetches the data with libcurl and then posts an event to the main thread, which triggers the plot-update function. But there is a strange issue with sending requests: after a few iterations the request-sending thread always hangs, and only when I kill it do I get the error message ("Cannot connect to server."). Is there any way to keep the thread from hanging, and instead just ignore the error and retry the request? (This error only occurs when the server is online; when it's offline I get the error without the thread hanging.)
I thought it was caused by some server error, so I set CURLOPT_TIMEOUT and CURLOPT_CONNECTTIMEOUT, but that had no effect; only the error reported after killing the thread changes, to "Timeout error". Strangely enough, when the data-collecting thread is restarted from the GUI without closing the app, it works perfectly. So it hangs only on the first run (after exactly 18-24 iterations), or at least I never saw it hang again across multiple tests.
Here is my sending code. The app freezes at curl_easy_perform(); it never reaches the write function.
struct stats
{
    int y_data1;
    int y_data2;
    int x_data;
};

class OutputThread : public wxThread
{
public:
    OutputThread(wxWindow* parent, std::string url) : wxThread(wxTHREAD_DETACHED)
    {
        _parentHandler = parent;
        _url = url;
    }
protected:
    wxThread::ExitCode Entry()
    {
        while(!TestDestroy())
        {
            rapidjson::Document response;
            std::string raw_response = send_request(_url);
            if(raw_response != "")
            {
                if(response.Parse(raw_response.c_str()).HasParseError())
                {
                    std::cout << "Parse error!\n";
                    sleep(1);
                    continue;
                }
                else
                {
                    stats new_stats;
                    new_stats.y_data1 = response["y_data1"].GetInt();
                    new_stats.y_data2 = response["y_data2"].GetInt();
                    new_stats.x_data = response["x_data"].GetInt();
                    wxCommandEvent event(wxEVT_COMMAND_OUTPUT_THREAD_UPDATE, wxID_ANY);
                    event.SetClientData(static_cast<void*>(&new_stats));
                    _parentHandler->GetEventHandler()->AddPendingEvent(event);
                    sleep(1);
                }
            }
        }
        return (wxThread::ExitCode)0;
    }
private:
    wxWindow* _parentHandler;
    std::string _url;
};
std::string send_request(std::string url)
{
    curl_global_init(CURL_GLOBAL_ALL);
    auto handle = curl_easy_init();
    std::stringstream returnData;
    if(handle)
    {
        curl_easy_setopt(handle, CURLOPT_NOPROXY, "localhost");
        curl_easy_setopt(handle, CURLOPT_URL, url.c_str());
        curl_easy_setopt(handle, CURLOPT_WRITEFUNCTION, write_data);
        curl_easy_setopt(handle, CURLOPT_WRITEDATA, &returnData);
        curl_easy_setopt(handle, CURLOPT_TIMEOUT, 1L);
        curl_easy_setopt(handle, CURLOPT_CONNECTTIMEOUT, 1L);
        curl_easy_setopt(handle, CURLOPT_HTTPGET, 1L);
        auto ret = curl_easy_perform(handle); // here the thread hangs
        if(ret != CURLE_OK)
        {
            std::cout << "curl_easy_perform() failed: " << curl_easy_strerror(ret) << "\n";
            returnData.clear();
            returnData.str(""); // reset the stream so an empty string is returned on error
        }
        curl_easy_cleanup(handle);
    }
    curl_global_cleanup();
    return returnData.str();
}

// Data-writing callback
size_t write_data(void* ptr, size_t size, size_t nmeb, void* stream)
{
    std::string buf = std::string(static_cast<char*>(ptr), size * nmeb);
    std::stringstream* response = static_cast<std::stringstream*>(stream);
    response->write(buf.c_str(), (std::streamsize)buf.size());
    return size * nmeb;
}
The parent is a wxPanel containing the graph and the function that updates the plot.
wxEVT_COMMAND_OUTPUT_THREAD_UPDATE is my custom event that triggers the plot-update function. I'm not sure there is any point in reproducing the whole graph widget, so to keep things short: there is a button to start and stop capturing data, which amounts to creating the OutputThread and running its Entry() function until the user presses stop.
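For context (this is not from the original post), a custom event like that is typically declared, defined and bound roughly as follows in wxWidgets 3.x; GraphPanel::OnThreadUpdate is a hypothetical handler name:
// In a header: announce the custom event type (wxWidgets 3.x style).
wxDECLARE_EVENT(wxEVT_COMMAND_OUTPUT_THREAD_UPDATE, wxCommandEvent);

// In one source file: define the event type.
wxDEFINE_EVENT(wxEVT_COMMAND_OUTPUT_THREAD_UPDATE, wxCommandEvent);

// In the panel's constructor: route the event to the plot-update handler.
Bind(wxEVT_COMMAND_OUTPUT_THREAD_UPDATE, &GraphPanel::OnThreadUpdate, this);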
Could it be a threading issue, or is it just an error in the sending function?
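One general hint, separate from the question itself: libcurl's documentation states that curl_global_init() is not thread-safe, so it should be called once before any worker threads start rather than per request, and CURLOPT_NOSIGNAL is recommended when timeouts are used from threads. A minimal sketch of how send_request() might look under those assumptions (the timeout values are arbitrary):
// Assumes curl_global_init(CURL_GLOBAL_ALL) was called once on the main thread at startup
// and curl_global_cleanup() is called once at shutdown, not inside this function.
std::string send_request(std::string url)
{
    std::stringstream returnData;
    CURL* handle = curl_easy_init();
    if(handle)
    {
        curl_easy_setopt(handle, CURLOPT_URL, url.c_str());
        curl_easy_setopt(handle, CURLOPT_WRITEFUNCTION, write_data);
        curl_easy_setopt(handle, CURLOPT_WRITEDATA, &returnData);
        curl_easy_setopt(handle, CURLOPT_CONNECTTIMEOUT, 2L); // connect-phase limit
        curl_easy_setopt(handle, CURLOPT_TIMEOUT, 5L);        // whole-transfer limit
        curl_easy_setopt(handle, CURLOPT_NOSIGNAL, 1L);       // no signal-based timeouts in threads
        if(curl_easy_perform(handle) != CURLE_OK)
            returnData.str("");                               // empty result lets the caller retry
        curl_easy_cleanup(handle);
    }
    return returnData.str();
}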
Related
I'm learning how to use curl properly, and according to all the examples (the documentation is a pain) my code should work, but for some reason it sometimes connects and other times it doesn't.
I checked whether the firewall or the antivirus was interfering, but both are turned off and the problem persists.
The main idea is to connect to a local server (a Raspberry Pi), and in the future to an external server for backups/updates.
My code is as follows: first the callback function, then the function that does all the work. The different URLs are there for example purposes.
static std::size_t callback(const char* in, std::size_t size, std::size_t num, std::string* out){
    Silo* silo = new Silo();
    const std::size_t totalBytes(size * num);
    std::string data = std::to_string(totalBytes);
    silo->Log("Total bytes received: " + QString::fromStdString(data));
    out->append(in, totalBytes);
    return totalBytes;
}
void Server::RPI_Request(){
    Silo* silo = new Silo();
    //curl_global_init(CURL_GLOBAL_ALL);
    CURL *curl = curl_easy_init();
    const std::string url_A("http://date.jsontest.com/");
    const std::string url_B("https://jsonplaceholder.typicode.com/todos/1");
    const std::string url_C("https://www.google.com/");
    const std::string url_D("https://stackoverflow.com/");
    if (curl){
        CURLcode res;
        // Set the URL
        curl_easy_setopt(curl, CURLOPT_URL, url_C.c_str());
        // Don't bother trying IPv6, which would increase DNS resolution time.
        curl_easy_setopt(curl, CURLOPT_IPRESOLVE, CURL_IPRESOLVE_V4);
        // Don't wait forever, time out after 10 seconds.
        silo->Log("before timeout");
        curl_easy_setopt(curl, CURLOPT_TIMEOUT, 10L);
        // Follow HTTP redirects if necessary.
        //curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
        // Response information.
        long httpCode(0);
        std::unique_ptr<std::string> httpData(new std::string());
        // Hook up data handling function.
        silo->Log("before write function");
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, callback);
        // Hook up data container (will be passed as the last parameter to the
        // callback handling function). Can be any pointer type, since it will
        // internally be passed as a void pointer.
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, httpData.get());
        // Run our HTTP GET command, capture the HTTP response code, and clean up.
        silo->Log("before easy perform");
        res = curl_easy_perform(curl);
        curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &httpCode);
        silo->Log("httpCode response: " + QString::number(httpCode));
        if (res != CURLE_OK){
            silo->Log("Problem, could not connect to " + QString::fromStdString(url_C));
        } else {
            silo->Log("Connection established with " + QString::fromStdString(url_C));
        }
        curl_easy_cleanup(curl);
        //curl_global_cleanup();
    }
}
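Not part of the original post, but when a transfer fails only intermittently it usually helps to log the exact CURLcode and curl's human-readable error text. A minimal diagnostic sketch, assuming the same Silo logger and curl handle as above:
// Capture curl's detailed error message for the failing transfers.
char errbuf[CURL_ERROR_SIZE] = {0};
curl_easy_setopt(curl, CURLOPT_ERRORBUFFER, errbuf);
curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L);   // full protocol trace on stderr

res = curl_easy_perform(curl);
if (res != CURLE_OK){
    // curl_easy_strerror() gives the generic text; errbuf often has more detail.
    silo->Log("curl error " + QString::number(res) + ": " +
              QString::fromUtf8(errbuf[0] ? errbuf : curl_easy_strerror(res)));
}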
Goal: To send requests to the same URL without having to wait for the request-sending function to finish executing.
Currently, when I send a request to a URL, I have to wait around 10 ms for the server's response before I can send another request with the same function. The aim is to detect changes on a webpage slightly faster than the program currently does, i.e. for the while loop to behave in a non-blocking manner.
Question: Using libcurl in C++, if I have a while loop that calls a function to send a request to a URL, how can I avoid waiting for the function to finish executing before sending another request to the SAME URL?
Note: I have been researching libcurl's multi interface, but I am struggling to determine whether it is suited only to parallel requests to multiple URLs, or also to repeatedly requesting the same URL without waiting for each transfer to finish. I have tried the following and looked at these resources:
an attempt at multi-threading a C program using libcurl requests
How to do curl_multi_perform() asynchronously in C++?
http://www.godpatterns.com/2011/09/asynchronous-non-blocking-curl-multi.html
https://curl.se/libcurl/c/multi-single.html
https://curl.se/libcurl/c/multi-poll.html
Here is my attempt at sending a request to one URL, but I have to wait for the request() function to finish and return a response code before sending the same request again.
#include <vector>
#include <iostream>
#include <curl/curl.h>
size_t write_callback(char *ptr, size_t size, size_t nmemb, void *userdata) {
    std::vector<char> *response = reinterpret_cast<std::vector<char> *>(userdata);
    response->insert(response->end(), ptr, ptr + size * nmemb);
    return size * nmemb;
}

long request(CURL *curl, const std::string &url) {
    std::vector<char> response;
    long response_code = 0;
    curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_callback);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &response);
    auto res = curl_easy_perform(curl);
    // query the status code after the transfer has completed
    curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &response_code);
    // ...
    // Print variable "response"
    // ...
    return response_code;
}
int main() {
    curl_global_init(CURL_GLOBAL_ALL);
    CURL *curl = curl_easy_init();
    while (true) {
        // blocking: request() must complete before executing again
        long response_code = request(curl, "https://example.com");
        // ...
        // Some condition breaks loop
    }
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}
I have tried to understand the multi-interface documentation as best I can, but I still struggle to fully understand it and to determine whether it actually suits my particular problem. Apologies if this question appears not to provide enough of my own research, but there are gaps in my libcurl knowledge that I'm struggling to fill.
I'd appreciate it if anyone could suggest / explain ways in which I can modify my single libcurl example above to behave in a non-blocking manner.
EDIT:
Starting from libcurl's C example called "multi-poll": when I run the program below, the URL's content is printed, but it prints only once despite the while (1) loop. I'm confused whether it is sending repeated non-blocking requests to the URL (which is the aim), or just one request and then waiting on some other change/event.
#include <stdio.h>
#include <string.h>

/* somewhat unix-specific */
#include <sys/time.h>
#include <unistd.h>

/* curl stuff */
#include <curl/curl.h>

int main(void)
{
    CURL *http_handle;
    CURLM *multi_handle;
    int still_running = 1; /* keep number of running handles */

    curl_global_init(CURL_GLOBAL_DEFAULT);

    http_handle = curl_easy_init();
    curl_easy_setopt(http_handle, CURLOPT_URL, "https://example.com");

    multi_handle = curl_multi_init();
    curl_multi_add_handle(multi_handle, http_handle);

    while (1) {
        CURLMcode mc; /* curl_multi_poll() return code */
        int numfds;

        /* we start some action by calling perform right away */
        mc = curl_multi_perform(multi_handle, &still_running);

        if(still_running) {
            /* wait for activity, timeout or "nothing" */
            mc = curl_multi_poll(multi_handle, NULL, 0, 1000, &numfds);
        }

        // if(mc != CURLM_OK) {
        //     fprintf(stderr, "curl_multi_wait() failed, code %d.\n", mc);
        //     break;
        // }
    }

    curl_multi_remove_handle(multi_handle, http_handle);
    curl_easy_cleanup(http_handle);
    curl_multi_cleanup(multi_handle);
    curl_global_cleanup();
    return 0;
}
You need to move curl_multi_add_handle and curl_multi_remove_handle inside the while loop. Below is an extract from the curl documentation (https://curl.se/libcurl/c/libcurl-multi.html):
"When a single transfer is completed, the easy handle is still left added to the multi stack. You need to first remove the easy handle with curl_multi_remove_handle and then close it with curl_easy_cleanup, or possibly set new options to it and add it again with curl_multi_add_handle to start another transfer."
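A minimal sketch of that pattern (not verbatim from the documentation or the answer): each completed transfer is drained with curl_multi_info_read, the easy handle is removed and re-added, and the loop then drives the next transfer to the same URL. Error handling is trimmed for brevity.
int still_running = 0;
curl_multi_add_handle(multi_handle, http_handle);

while (1) {
    int numfds;
    curl_multi_perform(multi_handle, &still_running);
    if (still_running)
        curl_multi_poll(multi_handle, NULL, 0, 1000, &numfds);

    /* Check whether a transfer has finished. */
    int msgs_left;
    CURLMsg *msg;
    while ((msg = curl_multi_info_read(multi_handle, &msgs_left))) {
        if (msg->msg == CURLMSG_DONE) {
            /* The handle stays on the multi stack after completion, so remove it,
               optionally tweak its options, and add it back to start a new transfer. */
            curl_multi_remove_handle(multi_handle, http_handle);
            curl_multi_add_handle(multi_handle, http_handle);
        }
    }
    /* Some condition breaks the loop. */
}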
How do you add a prefix to the output of command execution in C++?
Here, localhost is a Flask web application.
std::string exec(const char* cmd) {
    std::array<char, 128> buffer;
    std::string result;
    std::unique_ptr<FILE, decltype(&_pclose)> pipe(_popen(cmd, "r"), _pclose);
    if (!pipe) {
        throw std::runtime_error("popen() failed!");
    }
    while (fgets(buffer.data(), buffer.size(), pipe.get()) != nullptr) {
        result += buffer.data();
        //std::cout << typeid(result).name() << std::endl;
        // read from the pipe and add to the output string
        std::string output = "output=";
        output += buffer.data();
        std::cout << output << std::endl;
        // call report_ to send a POST request to the server
        report_(output);
    }
    char* c = const_cast<char*>(result.c_str());
    return result;
}
As far as I understand, this is a C++ function that returns the command-prompt output as a string.
int report_(std::string report)
{
    CURL* curl;
    CURLcode res;

    /* In Windows, this will init the winsock stuff */
    curl_global_init(CURL_GLOBAL_ALL);

    /* get a curl handle */
    curl = curl_easy_init();
    if (curl) {
        /* First set the URL that is about to receive our POST. This URL can
           just as well be a https:// URL if that is what should receive the
           data. */
        curl_easy_setopt(curl, CURLOPT_URL, "http://localhost/api/00000000000000000000/report");

        /* Now specify the POST data */
        // report starts with "output="
        std::cout << report << std::endl;
        // this is where we add the post data
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, output);

        /* Perform the request, res will get the return code */
        res = curl_easy_perform(curl);

        /* Check for errors */
        if (res != CURLE_OK)
            fprintf(stderr, "curl_easy_perform() failed: %s\n",
                    curl_easy_strerror(res));

        /* always cleanup */
        curl_easy_cleanup(curl);
    }
    curl_global_cleanup();
    return 0;
}
This function reports the output of exec(); before doing so, the prefix output= has to be added to exec()'s output, and the resulting string is passed as the argument.
The server returns
400 Bad Request: The browser (or proxy) sent a request that this server could not understand. KeyError: 'output'
If you change curl_easy_setopt(curl, CURLOPT_POSTFIELDS, output); to curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "output=hello world");, then the server receives the output.
This link explains how to add POST data to the post fields: you have to pass a pointer to the data you want to send, i.e. a const char*.
// this line gets a pointer to the string's character data
// just replace output with any std::string value and you can send it as POST data
// do not pass a std::string object itself as the POST data
const char* c = output.c_str();
curl_easy_setopt(curl, CURLOPT_POSTFIELDS, c);
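Applied to the report_() function above, the fix would look roughly like this (a sketch, not the answer's exact code). Note that CURLOPT_POSTFIELDS does not copy the data, so the string must stay alive until curl_easy_perform() returns; CURLOPT_COPYPOSTFIELDS can be used instead to let libcurl make its own copy.
// "report" is the std::string parameter of report_() and already starts with "output=".
curl_easy_setopt(curl, CURLOPT_URL, "http://localhost/api/00000000000000000000/report");
curl_easy_setopt(curl, CURLOPT_POSTFIELDS, report.c_str()); // pointer stays valid: "report" outlives the transfer
// Alternatively, have libcurl copy the data internally:
// curl_easy_setopt(curl, CURLOPT_COPYPOSTFIELDS, report.c_str());
res = curl_easy_perform(curl);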
I am currently trying to download a file from a public git repository using curl in my Unreal C++ project. Here is the code I'm trying to execute, derived from the FTP example:
// This is in the .h file
struct FFtpFile {
    FILE* File;
    const char* Filename;
};

void FtpFetch(const std::string URL, const char* Filename) {
    CURL* Curl = curl_easy_init();
    FFtpFile FtpFile {
        nullptr,
        Filename
    };

    if (!Curl) {
        UE_LOG(LogTemp, Warning, TEXT("Error Initiating cURL"));
        return;
    }

    curl_easy_setopt(Curl, CURLOPT_URL, URL.c_str());
    curl_easy_setopt(Curl, CURLOPT_VERBOSE, 1L);
    curl_easy_setopt(Curl, CURLOPT_FOLLOWLOCATION, 1L);
    curl_easy_setopt(Curl, CURLOPT_SSL_VERIFYPEER, false);

    // Data callback: opens the file lazily on the first chunk, then appends to it
    const auto WriteCallback = +[](void* Contents, const size_t Size, const size_t NumMem, FFtpFile* FileStruct) -> size_t {
        if (!FileStruct->File) {
            fopen_s(&FileStruct->File, FileStruct->Filename, "wb");
            if (!FileStruct->File) {
                return CURLE_WRITE_ERROR;
            }
        }
        return fwrite(Contents, Size, NumMem, FileStruct->File);
    };
    curl_easy_setopt(Curl, CURLOPT_WRITEFUNCTION, WriteCallback);
    curl_easy_setopt(Curl, CURLOPT_WRITEDATA, &FtpFile);

    const CURLcode Result = curl_easy_perform(Curl);
    if (Result != CURLE_OK) {
        const FString Message(curl_easy_strerror(Result));
        UE_LOG(LogTemp, Warning, TEXT("Error Getting Content of the Model File: %s"), *Message);
        return; // note: this early return skips the fclose() below
    }
    curl_easy_cleanup(Curl);

    // Close the stream after cleanup
    UE_LOG(LogTemp, Log, TEXT("Successfully Fetched FTP-File. Closing Write Stream"))
    if (FtpFile.File) {
        fclose(FtpFile.File);
    }
}
Note that this is executed on a separate thread using Unreal's Async function:
void AsyncFetchModelFile(const std::string URL) {
    std::string Path = ...
    TFunction<void()> Task = [Path, URL]() {
        FtpFetch(URL, Path.c_str());
    };
    UE_LOG(LogTemp, Log, TEXT("Fetching FTP on Background Thread"))
    Async(EAsyncExecution::Thread, Task, [](){ UE_LOG(LogTemp, Warning, TEXT("Finished FTP on Background Thread!")) });
}
I already removed the curl_global calls, as the documentation states those are not thread-safe. I also tried running the code on the main thread, but the same error happens there too.
As for the error itself: the download runs almost flawlessly, but the downloaded file (in this case a .fbx file) is always missing the last ~800 bytes and is therefore incomplete. Also, the file remains open in Unreal, so I can't delete or move it unless I close the Editor.
Before writing this Unreal code I ran the same code in a plain C++ setting, where it worked flawlessly. For some reason the same thing doesn't work in Unreal.
I also tried using a private method instead of the lambda function, but that didn't make any difference.
Any help would be appreciated
~Okaghana
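One observation about the snippet above (my note, not from the original post): the early return after a failed curl_easy_perform() skips the fclose() at the bottom, which would explain the file staying open, and any data still sitting in fwrite()'s buffer is never flushed. A minimal sketch of the tail of FtpFetch() that always cleans up, assuming the same FFtpFile struct and write callback:
// Sketch only: perform the transfer, then always clean up the curl handle and
// close the file, regardless of whether the transfer succeeded.
const CURLcode Result = curl_easy_perform(Curl);
curl_easy_cleanup(Curl);

if (FtpFile.File) {
    fclose(FtpFile.File);   // flushes buffered data and releases the OS handle
    FtpFile.File = nullptr;
}

if (Result != CURLE_OK) {
    const FString Message(curl_easy_strerror(Result));
    UE_LOG(LogTemp, Warning, TEXT("Error Getting Content of the Model File: %s"), *Message);
}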
I didn't manage to get it working in the end (I suspect an Unreal-related bug), but I found another way using the built-in Unreal HTTP module:
FString Path = ...

// Create the callback for when the HTTP request has finished
auto OnRequestComplete = [Path](FHttpRequestPtr Request, FHttpResponsePtr Response, bool bWasSuccessful) {
    if (bWasSuccessful) {
        FFileHelper::SaveArrayToFile(Response->GetContent(), *Path);
        UE_LOG(LogTemp, Log, TEXT("Successfully downloaded the file to '%s'"), *Path)
    } else {
        UE_LOG(LogTemp, Warning, TEXT("Error downloading the file (See EHttpResponseCodes): %d"), Response->GetResponseCode())
    }
};

// Create an HTTP request and fetch the file
TSharedRef<IHttpRequest, ESPMode::ThreadSafe> Request = FHttpModule::Get().CreateRequest();
Request->SetVerb("GET");
Request->SetURL(URL);
Request->OnProcessRequestComplete().BindLambda(OnRequestComplete);
Request->ProcessRequest();
I have started working with libcurl and simply tried running the basic code to fetch a file from a URL. When I fetch this file using curl.exe, compiled against the same library, I detect no unexpected traffic on localhost. However, when I run my own executable, I see around 19 packets sent between two localhost ports.
I make sure to call curl_global_init(CURL_GLOBAL_WIN32) before, and curl_global_cleanup() after, the method call.
What could be the cause of this traffic, and how could I make it go away?
int CurlFileDownloader::downloadSingleFile(const std::string& url, const std::string& destination) {
    CURLcode res = CURLE_READ_ERROR;
    mHandle = curl_easy_init();
    if(mHandle) {
        mData.destinationFolder = destination;

        // Get the file name from the url
        auto lastPos = url.find_last_of("/");
        mData.fileName = url.substr(lastPos + 1);

        curl_easy_setopt(mHandle, CURLOPT_URL, url.c_str());
        /* Define our callback to get called when there's data to be written */
        curl_easy_setopt(mHandle, CURLOPT_WRITEFUNCTION, &CurlFileDownloader::writeFileContent);
        /* Set a pointer to our struct to pass to the callback */
        curl_easy_setopt(mHandle, CURLOPT_WRITEDATA, &mData);
        /* Switch on full protocol/debug output */
        curl_easy_setopt(mHandle, CURLOPT_VERBOSE, 1L);

        mLastError = curl_easy_perform(mHandle);

        /* always cleanup */
        curl_easy_cleanup(mHandle);

        if (mData.fileStream.is_open()) {
            mData.fileStream.close();
        }
        if(CURLE_OK != mLastError) {
            std::cerr << "Curl error " << mLastError << std::endl;
        }
    }
    return mLastError;
}
size_t CurlFileDownloader::writeFileContent(char *buffer, size_t size, size_t nmemb, void *cb_data) {
    struct CurlCallbackData *data = (CurlCallbackData*)cb_data;
    size_t written = 0;
    if (data->fileStream.is_open()) {
        data->fileStream.write(buffer, nmemb);
    }
    else {
        /* listing output */
        if (data->destinationFolder != "") {
            data->fileStream.open(data->destinationFolder + "\\" + data->fileName, std::ios::out | std::ios::binary);
        }
        else {
            data->fileStream.open(data->fileName, std::ios::out | std::ios::binary);
        }
        data->fileStream.write(buffer, nmemb);
    }
    return nmemb;
}
(A sample RawCap.exe capture of the loopback traffic accompanied the original post.)
The source of the localhost communication was libcurl's use of a socket pair over IPv4 loopback. After removing the USE_SOCKETPAIR define from libcurl's socketpair.h, the traffic went away.