Sending and receiving strings over HTTP via curl - C++

I have a situation where my program on a server (a Windows machine) outputs some strings. I need to send those strings from the server to the client over HTTP using curl. Once sent, I am to receive the data on the client side as a string, decode it, and perform subsequent actions.
I already achieved this functionality with C sockets using the Berkeley API, as I was familiar with that. But for some reason I am not allowed to use a program of my own.
I poked around and it seems cURL can be my solution. However, I am very new to curl and can't seem to figure out how to achieve this functionality. On the client side I found this, which may be useful:
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  CURL *curl;
  CURLcode res;

  curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com");

    /* Perform the request, res will get the return code */
    res = curl_easy_perform(curl);
    /* Check for errors */
    if(res != CURLE_OK)
      fprintf(stderr, "curl_easy_perform() failed: %s\n",
              curl_easy_strerror(res));

    /* always cleanup */
    curl_easy_cleanup(curl);
  }
  return 0;
}
I understand that you have to use write callback functions to receive the data?
Also, on the client side I need to develop a program using curl that, whenever the server sends over a string, receives it and decodes it. Any pointers to tutorials covering these specific problems would be highly appreciated, and if someone has already tried this, I'd appreciate any help here.
Thanks.

Take a look at this example code from their site. It details how to get your response data written to a region of memory rather than a file:
http://curl.haxx.se/libcurl/c/getinmemory.html
Also take a look at the generic tutorial on the curl website:
http://curl.haxx.se/libcurl/c/libcurl-tutorial.html
One final thing to consider: if you are using C++, you need to make sure your callbacks are not non-static member functions, i.e. they must be static member functions or free functions (see libcurl - unable to download a file).
This should get you started at least.
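Building on the getinmemory idea, here is a minimal sketch of that write-callback approach, accumulating the response into a std::string. The URL is just a placeholder and write_to_string is a name picked for illustration:
#include <curl/curl.h>
#include <iostream>
#include <string>

/* libcurl calls this repeatedly as response data arrives; each chunk is
   appended to the std::string passed via CURLOPT_WRITEDATA. */
static size_t write_to_string(char *ptr, size_t size, size_t nmemb, void *userdata)
{
  std::string *out = static_cast<std::string *>(userdata);
  out->append(ptr, size * nmemb);
  return size * nmemb;
}

int main(void)
{
  std::string response;
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com");
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_to_string);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &response);

    CURLcode res = curl_easy_perform(curl);
    if(res != CURLE_OK)
      std::cerr << "curl_easy_perform() failed: "
                << curl_easy_strerror(res) << "\n";
    else
      std::cout << response << "\n";   /* the received string, ready to decode */

    curl_easy_cleanup(curl);
  }
  return 0;
}
Once curl_easy_perform() returns CURLE_OK, response holds the full body and you can decode it however your protocol requires.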

Related

Is it possible to do API streaming using the curl library, similar to Python's requests (or some other C++ lib)?

I have a Python test script that performs API streaming using requests.post(). It looks like this:
response = requests.post(url_events, data="XYZ", stream=True, headers={"A": "B"})
if response.ok:
    for chunk in response.iter_content(chunk_size=256):
        print chunk
I'm trying to figure out how I can implement the same logic in C++. From what I found, the curl library may help; however, I cannot find how to pass the data field. This is the code I have so far:
CURL* connection = curl_easy_init();
// set url
curl_easy_setopt(connection, CURLOPT_URL, url_events);
// set header
struct curl_slist* headers = NULL;
headers = curl_slist_append(headers, "A:B");
code = curl_easy_setopt(connection, CURLOPT_HTTPHEADER, headers);
// set streaming callback that will print every received message
curl_easy_setopt(connection, CURLOPT_WRITEFUNCTION, printCallback);
// start connection
code = curl_easy_perform(connection);
// ...
curl_easy_cleanup(connection);
curl_slist_free_all(headers);
I was looking through the curl.h file trying to find how to specify the data field, but nothing seems to fit (based on the names).
Am I on the right track? Would using curl be the right approach for my task, or should I be looking into some other C/C++ libraries? An example that does the same as the requests.post() call above would be appreciated, or a suggestion for how to achieve the same using curl.
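For what it's worth, here is a rough, hedged sketch of how that requests.post() call might map onto libcurl: CURLOPT_POSTFIELDS carries the request body (the data= field you were looking for), and the write callback fires for each chunk as the response streams in. The URL, header, and payload below are just the placeholders from the question:
#include <curl/curl.h>
#include <stdio.h>

/* Called by libcurl for every chunk of the streamed response. */
static size_t printCallback(char *ptr, size_t size, size_t nmemb, void *userdata)
{
  (void)userdata;
  fwrite(ptr, 1, size * nmemb, stdout);   /* print each chunk as it arrives */
  return size * nmemb;
}

int main(void)
{
  CURL *connection = curl_easy_init();
  if(connection) {
    struct curl_slist *headers = NULL;
    headers = curl_slist_append(headers, "A: B");

    curl_easy_setopt(connection, CURLOPT_URL, "http://example.com/events"); /* placeholder for url_events */
    curl_easy_setopt(connection, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(connection, CURLOPT_POSTFIELDS, "XYZ");     /* the data="XYZ" body; implies POST */
    curl_easy_setopt(connection, CURLOPT_WRITEFUNCTION, printCallback);

    CURLcode code = curl_easy_perform(connection);   /* blocks, invoking the callback per chunk */
    if(code != CURLE_OK)
      fprintf(stderr, "curl_easy_perform() failed: %s\n", curl_easy_strerror(code));

    curl_easy_cleanup(connection);
    curl_slist_free_all(headers);
  }
  return 0;
}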

How to send a request to the WooCommerce API

I'm currently building a solution for a company as an intern, and I need to use the WooCommerce REST API features in my C++ project to send data to the website.
I've so far, after 2 long painful days, managed to install the cURL library (through vcpkg) and tested the library a bit with the many examples that you can find on the internet. But so far, what I found doesn't seem to match what the people at WooCommerce put in their documentation.
For example, in this section, they show how to create a product on the platform using cURL, but I can't understand how to translate it into libcurl calls inside my C++ project. Heck, the command doesn't even work when I use it in the command prompt with my parameters.
#include <curl/curl.h>
#include <string>

// cURL declaration
CURL* curl;
CURLcode res;
std::string readBuffer;
std::string URL = "http://www.example.com";

curl_global_init(CURL_GLOBAL_ALL);
curl = curl_easy_init();
if (curl) {
    curl_easy_setopt(curl, CURLOPT_URL, URL.c_str());             // CURLOPT_URL expects a C string, not a std::string
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, WriteCallback); // WriteCallback (defined elsewhere) appends the response to readBuffer
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &readBuffer);
    res = curl_easy_perform(curl);
    // Check for errors
    if (res != CURLE_OK) {
        std::string error = "curl_easy_perform() failed: ";
        error += curl_easy_strerror(res);
        error += "\nUnable to connect to the given WooCommerce site. Please check your settings and restart the application.";
        wxMessageBox(error);
    }
    else {
        std::string success = "Connection to domain ";
        success += URL;
        success += " succeeded.\nTo change the domain, see the Settings page.";
        wxMessageBox(success);
    }
}
// cleanup
curl_easy_cleanup(curl);
curl_global_cleanup();
This code works fine. I know that I have to put the company's website instead of the example, but I can't figure out where to add my client key and client secret (basically like in the example shown in the WooCommerce docs). The basic cURL commands work fine in my local command prompt, but the documented example doesn't even work.
I know that my request for help may be kind of basic and easy to solve, but I've just spent the last two and a half days working on this and I'm starting to lose it.
Thanks for your help. I tried to write the best English I could, so sorry in advance for any typos, or sorry if my post doesn't live up to the presentation standards of this platform, I'm kinda new around here :D
OK, I've figured it out, for those who pass by and may have the same problem as I had. The commands you run with cURL in the terminal and the calls you make with the library are totally different:
In the command prompt, you enter something like curl -X POST https://blablablabla
In the C++ library, you call the curl_easy_setopt() function with parameters to specify each component of the request: CURLOPT_URL is your main domain, CURLOPT_POSTFIELDS is the data you want to POST, and there are other parameters such as CURLOPT_WRITEFUNCTION, CURLOPT_WRITEDATA, etc. that handle the response from the server.
For me, this example was really useful, I don't know how I could have missed it :D Thanks Jesper Juhl for the advice, it is crucial to understand how HTTP and HTTPS work to figure this out.
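For anyone landing here later, here is a hedged sketch of what that mapping can look like for a product-creation call. It assumes the store accepts the consumer key and secret as HTTP Basic Auth over HTTPS and exposes a /wp-json/wc/v3/products endpoint, which is how the WooCommerce REST API docs describe it (adjust the path to your API version); the URL, keys, and JSON body are all placeholders:
#include <curl/curl.h>
#include <iostream>
#include <string>

// Appends each chunk of the response to a std::string.
static size_t WriteCallback(char *ptr, size_t size, size_t nmemb, void *userdata)
{
    static_cast<std::string *>(userdata)->append(ptr, size * nmemb);
    return size * nmemb;
}

int main()
{
    curl_global_init(CURL_GLOBAL_ALL);
    CURL *curl = curl_easy_init();
    if (curl) {
        std::string readBuffer;
        // Placeholders: substitute your store's URL and your own consumer key/secret.
        std::string url  = "https://www.example.com/wp-json/wc/v3/products";
        std::string auth = "ck_xxxxxxxx:cs_xxxxxxxx";   // consumer_key:consumer_secret
        std::string json = "{\"name\":\"Test product\",\"type\":\"simple\"}";

        struct curl_slist *headers = NULL;
        headers = curl_slist_append(headers, "Content-Type: application/json");

        curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
        curl_easy_setopt(curl, CURLOPT_USERPWD, auth.c_str());    // Basic Auth, like curl -u on the command line
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, json.c_str()); // request body, like curl -d; implies POST
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, WriteCallback);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &readBuffer);

        CURLcode res = curl_easy_perform(curl);
        if (res != CURLE_OK)
            std::cerr << "curl_easy_perform() failed: " << curl_easy_strerror(res) << "\n";
        else
            std::cout << readBuffer << "\n";               // the API's JSON response

        curl_slist_free_all(headers);
        curl_easy_cleanup(curl);
    }
    curl_global_cleanup();
    return 0;
}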

Using libcurl for a POST request

I'll preface this by saying I'm still a new C/C++ programmer, so please excuse me for what may be a redundant question.
I'm writing a program in C/C++ to interact with this website: http://www.youtube-mp3.org/.
From what I understand, to get my program to download a link for me I'll have to send a POST request to the server containing the URL I want to convert, then find a way of getting it to follow the URL that is generated allowing me to download the file. I also understand that libcurl is a good way of doing this sort of thing in C/C++.
I've tried using the POST examples on the libcurl website (http://curl.haxx.se/libcurl/c/simplepost.html and one other), but neither seems to work. In addition, I'm not sure how to then get my program to follow the link that appears saying 'Download'. I've tried sending a POST request, then telling my program to get the HTML source of the page and store it in a file, but that file doesn't seem to contain any download link. When this is done through a browser, the page source definitely includes a working download link.
Would really appreciate some help, as I'm not sure whether I've got completely the wrong idea!
EDIT: My question wasn't very clear at all. Here is the relevant code I'm using for the POST request:
static const char *postthis = "http://www.youtube.com/watch?v=KMU0tzLwhbE";

CURL *curl;
CURLcode res;

curl = curl_easy_init();
if(curl) {
  curl_easy_setopt(curl, CURLOPT_URL, "http://www.youtube-mp3.org/");
  curl_easy_setopt(curl, CURLOPT_POSTFIELDS, postthis);
  curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE, (long)strlen(postthis));

  /* Perform the request, res will get the return code */
  res = curl_easy_perform(curl);
  /* Check for errors */
  if(res != CURLE_OK)
    fprintf(stderr, "curl_easy_perform() failed: %s\n",
            curl_easy_strerror(res));
}
And for writing the HTML source to a file:
static size_t write_data(void *ptr, size_t size, size_t nmemb, void *stream)
{
  size_t written = fwrite(ptr, size, nmemb, (FILE *)stream);
  return written;
}

{
  static const char *filename = "head.txt";
  FILE *htmlfile;

  curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_data);

  // open the file
  htmlfile = fopen(filename, "w");
  if (htmlfile == NULL) {
    curl_easy_cleanup(curl);
    return -1;
  }
  curl_easy_setopt(curl, CURLOPT_WRITEDATA, htmlfile);

  curl_easy_perform(curl);

  /* close the HTML file */
  fclose(htmlfile);

  /* always clean up */
  curl_easy_cleanup(curl);
}
Your code does not work because you are assuming the wrong logic to begin with.
http://www.youtube-mp3.org does NOT use POST; in fact, its download form doesn't even submit to a server-side URL at all. When you click on the "Convert Video" button, a client-side JavaScript is invoked to process the input URL, download the relevant information from YouTube, and modify the calling page's HTML to display the actual download link and video preview image. This is why you don't see the download link when you simply retrieve the HTML - you are not invoking the JavaScript that performs the actual work of preparing the download link. And you will not be able to do that from an application (without a LOT of extra work); it has to be done inside a web browser that has a real JavaScript engine and a real DOM for the script to manipulate.

Get the HTML of a site [closed]

I'm trying to read the HTML of a page into a string (or a char[]).
I know how to use basic sockets and connect as a client/server.
I've written a client in the past that takes an IP & port, connects to it, and sends images and such over sockets between the client & the server.
I've searched the internet a bit and found that I can connect to a website and send a GET request to get the HTML content of a page and store it in a variable, though I have a few problems:
1) I'm trying to get the HTML of a page that isn't the main page of a site; like, not stackoverflow.com, but stackoverflow.com/help and such (not the "official page" of the site, but something inside that site)
2) I'm not sure how to either send or store the data I get from the GET request...
I saw there are outside libraries I could use, but I'd rather use sockets only...
By the way - I'm using Windows 7, and I'm aiming for it to work on Windows only (so it's fine if it won't work on Linux).
Thanks for your help! :)
To access a resource on some host you just specify the path to the resource in the first line of the request, just after the 'GET'. E.g. check http://www.jmarshall.com/easy/http/#http1.1
GET /path/file.html HTTP/1.1
Host: www.host1.com:80
[blank line here]
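As a rough illustration only, sending that request with plain Winsock and keeping the body in a std::string could look something like the sketch below. The host and path are the placeholders from the request above, and a real response may be redirected, chunked, or compressed, which is exactly why a proper HTTP library is recommended next:
#include <winsock2.h>
#include <ws2tcpip.h>
#include <iostream>
#include <string>
#pragma comment(lib, "ws2_32.lib")

int main()
{
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0) return 1;

    addrinfo hints = {}, *res = nullptr;
    hints.ai_family = AF_INET;
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo("www.host1.com", "80", &hints, &res) != 0) return 1;

    SOCKET s = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (s == INVALID_SOCKET || connect(s, res->ai_addr, (int)res->ai_addrlen) != 0) return 1;
    freeaddrinfo(res);

    // "Connection: close" makes the server close the socket when it is done,
    // so the recv() loop below terminates naturally.
    std::string request =
        "GET /path/file.html HTTP/1.1\r\n"
        "Host: www.host1.com\r\n"
        "Connection: close\r\n"
        "\r\n";
    send(s, request.c_str(), (int)request.size(), 0);

    std::string response;
    char buf[4096];
    int n;
    while ((n = recv(s, buf, sizeof(buf), 0)) > 0)
        response.append(buf, n);

    // The body starts after the blank line that ends the headers.
    size_t pos = response.find("\r\n\r\n");
    std::string html = (pos == std::string::npos) ? response : response.substr(pos + 4);
    std::cout << html << std::endl;

    closesocket(s);
    WSACleanup();
    return 0;
}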
I'd also recommend using some portable library like Boost.ASIO instead of raw sockets. But I'd strongly recommend using some existing, portable library implementing the HTTP protocol - of course, only if it is not a matter of learning how to implement it yourself.
Even if you want to implement it yourself, it'd be worth knowing the existing solutions. For instance, this is how you can get a webpage using cpp-netlib (http://cpp-netlib.org/0.10.1/index.html):
#include <boost/network/protocol/http/client.hpp>
#include <string>

using namespace boost::network;
using namespace boost::network::http;

client::request request_("http://127.0.0.1:8000/");
request_ << header("Connection", "close");
client client_;
client::response response_ = client_.get(request_);
std::string body_ = body(response_);
This is how you can do it using cURL library (http://curl.haxx.se/libcurl/c/simple.html):
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  CURL *curl;
  CURLcode res;

  curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com");
    /* example.com is redirected, so we tell libcurl to follow redirection */
    curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);

    /* Perform the request, res will get the return code */
    res = curl_easy_perform(curl);
    /* Check for errors */
    if(res != CURLE_OK)
      fprintf(stderr, "curl_easy_perform() failed: %s\n",
              curl_easy_strerror(res));

    /* always cleanup */
    curl_easy_cleanup(curl);
  }
  return 0;
}
Both libraries are portable but if you'd like to use some Windows-specific API you might check WinINet (http://msdn.microsoft.com/en-us/library/windows/desktop/aa383630%28v=vs.85%29.aspx) but it's less pleasant to use.
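Since you only care about Windows, here is a minimal, hedged WinINet sketch as well (the URL is a placeholder); WinINet handles the HTTP details for you and hands back the raw body, which you can append to a string:
#include <windows.h>
#include <wininet.h>
#include <iostream>
#include <string>
#pragma comment(lib, "wininet.lib")

int main()
{
    std::string html;

    HINTERNET hInet = InternetOpenA("my-agent", INTERNET_OPEN_TYPE_PRECONFIG, NULL, NULL, 0);
    if (!hInet) return 1;

    // The URL is a placeholder; WinINet parses it and performs the GET for us.
    HINTERNET hUrl = InternetOpenUrlA(hInet, "http://example.com/help",
                                      NULL, 0, INTERNET_FLAG_RELOAD, 0);
    if (hUrl) {
        char buf[4096];
        DWORD read = 0;
        // Read the response body chunk by chunk and append it to the string.
        while (InternetReadFile(hUrl, buf, sizeof(buf), &read) && read > 0)
            html.append(buf, read);
        InternetCloseHandle(hUrl);
    }
    InternetCloseHandle(hInet);

    std::cout << html << std::endl;
    return 0;
}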

Read HTML source to string

I hope you don't frown on me too much, but this should be answerable by someone fairly easily. I want to read a file on a website into a string, so I can extract information from it.
I just want a simple way to get the HTML source read into a string. After looking around for hours I see all these libraries and curl and stuff. All I need is the raw HTML data. I don't even need a definite answer. Just something that will help me refine my search.
Just to be clear I want the raw code in a string I can manipulate, don't need any parsing etc.
You need an HTTP client library; one of many is libcurl. You would then issue a GET request to a URL and read the response back however your chosen library provides it.
Here is an example to get you started. It is C, so I am sure you can work it out.
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  CURL *curl;
  CURLcode res;

  curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com");
    res = curl_easy_perform(curl);

    /* always cleanup */
    curl_easy_cleanup(curl);
  }
  return 0;
}
But you tagged this C++, so if you want a C++ wrapper for libcurl then use curlpp:
#include <curlpp/cURLpp.hpp>
#include <curlpp/Easy.hpp>
#include <curlpp/Options.hpp>
#include <iostream>

using namespace curlpp::options;

int main(int, char **)
{
  try
  {
    // That's all that is needed to do cleanup of used resources
    curlpp::Cleanup myCleanup;

    // Our request to be sent.
    curlpp::Easy myRequest;

    // Set the URL.
    myRequest.setOpt<Url>("http://example.com");

    // Send request and get a result.
    // By default the result goes to standard output.
    myRequest.perform();
  }
  catch(curlpp::RuntimeError & e)
  {
    std::cout << e.what() << std::endl;
  }
  catch(curlpp::LogicError & e)
  {
    std::cout << e.what() << std::endl;
  }
  return 0;
}
HTTP is built on top of TCP. If you know socket programming, you can write a simple networking application that opens a socket to the desired server and issues an HTTP GET command. Whatever the server responds with, you'll have to remove the HTTP headers that precede the actual document you want.
If that sounds complicated, then just stick with libcurl.
If it's just a quick hack, then grab the source from "view source" in the browser and save it as a .txt file; then you can open it with a normal file IO stream.
All those pesky libraries are a hint that it is a common and non-trivial exercise to do it right... :)
If all you want to do is grab the entire HTML code without any kind of parsing or external libraries, my suggestion would be copying the code into a string with an IO stream, as sketched below.
It is the simplest way that I have in mind, but be aware that it isn't the most efficient way to do it.
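A tiny sketch of that approach, assuming the page source was saved by hand to a file named page.txt (the name is just a placeholder):
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

int main()
{
    // "page.txt" is the page source saved manually via the browser's "view source".
    std::ifstream in("page.txt");
    std::stringstream buffer;
    buffer << in.rdbuf();                // slurp the whole file
    std::string html = buffer.str();     // HTML source as one string, ready to manipulate
    std::cout << html.size() << " bytes read\n";
    return 0;
}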