I have installed cURL, and I was able to download an image from a website; it works fine.
Here is the code:
#define CURL_STATICLIB
#include <stdio.h>
#include <stdlib.h>
#include <curl/curl.h>

size_t write_data(void *ptr, size_t size, size_t nmemb, FILE *stream) {
    size_t written = fwrite(ptr, size, nmemb, stream);
    return written;
}

int main(void) {
    CURL *curl;
    FILE *fp;
    CURLcode res;
    char *url = "http://www.example.com/test_img.png";
    char outfilename[FILENAME_MAX] = "/home/c++_proj/output/web_req_img.png";

    curl = curl_easy_init();
    if (curl) {
        fp = fopen(outfilename, "wb");
        curl_easy_setopt(curl, CURLOPT_URL, url);
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_data);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, fp);
        res = curl_easy_perform(curl);
        /* always cleanup */
        curl_easy_cleanup(curl);
        fclose(fp);
    }
    return 0;
}
I also have a D-Link DCS-930L camera. I can easily give the camera a static IP address, and I was able to view live video from it by logging into the camera (e.g. http://192.168.1.5).
I don't need any special software or anything to start watching the video.
Now I would like to use cURL to download images from the camera, but I am not sure how to do it.
Could someone please explain how, or provide a piece of code for it?
All I want to do is capture (sample) a few of the images that are being streamed.
How do I know when to make a request, and where is the boundary between the images?
I would truly appreciate some advice and a piece of code that could get me going.
According to the manual for this camera [1], you need to use a Java or ActiveX plugin to receive and watch the video:
Please make sure that you have the latest version of Java application
installed on your computer to ensure proper operation when viewing the
video in Java mode. The Java application can be downloaded at no cost
from Sun’s web site (http://www.java.com).
When you connect to the home page of your camera, you will be prompted
to download ActiveX. If you want to use ActiveX to view your video
images instead of Java, then you must download ActiveX.
This suggests that grabbing the image is going to be more difficult than simply making an HTTP request.
[1] http://www.dlink.com/us/en/support/product/-/media/Consumer_Products/DCS/DCS%20930L/Manual/DCS%20930L_Manual_EN_US.pdf
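The manual notwithstanding, many IP cameras of this class also expose a plain motion-JPEG stream over HTTP; whether the DCS-930L does, and at which path, is an assumption here. If such a stream exists, the frames arrive back-to-back and can be detected in libcurl's write callback. A rough sketch (the URL and the frame-detection heuristic are illustrative only):

#include <stdio.h>
#include <curl/curl.h>

static int frames_seen = 0;

/* Write callback: libcurl hands us chunks of the stream as they arrive.
   Counting JPEG start-of-image markers (0xFF 0xD8) is a crude way to spot
   frame boundaries; a robust parser would instead track the multipart
   boundary string announced in the Content-Type header and cope with
   markers split across two callbacks. */
static size_t scan_stream(void *ptr, size_t size, size_t nmemb, void *userdata)
{
    const unsigned char *buf = ptr;
    size_t len = size * nmemb;
    size_t i;

    (void)userdata;
    for (i = 0; i + 1 < len; i++)
        if (buf[i] == 0xFF && buf[i + 1] == 0xD8)
            frames_seen++;

    if (frames_seen >= 5)
        return 0;   /* returning a short count makes libcurl abort the transfer */
    return len;
}

int main(void)
{
    CURL *curl = curl_easy_init();
    if (curl) {
        /* Hypothetical stream URL; substitute whatever your camera actually serves. */
        curl_easy_setopt(curl, CURLOPT_URL, "http://192.168.1.5/video.cgi");
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, scan_stream);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
    }
    return 0;
}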
I'll preface this by saying I'm still a new C/C++ programmer, so please excuse me for what may be a redundant question.
I'm writing a program in C/C++ to interact with this website: http://www.youtube-mp3.org/.
From what I understand, to get my program to download a link for me I'll have to send a POST request to the server containing the URL I want to convert, then find a way of getting it to follow the URL that is generated allowing me to download the file. I also understand that libcurl is a good way of doing this sort of thing in C/C++.
I've tried using the POST examples on the libcurl website (http://curl.haxx.se/libcurl/c/simplepost.html and one other), but neither seems to work. In addition, I'm not sure how to then get my program to follow the link that appears saying 'Download'. I've tried sending a POST request, then telling my program to fetch the HTML source of the page and store it in a file, but that file doesn't seem to contain any download link. When this is done through a browser, the page source definitely includes a working download link.
Would really appreciate some help, as I'm not sure whether I've got completely the wrong idea!
EDIT: My question wasn't very clear at all. Here is the relevant code I'm using for the POST request:
static const char *postthis = "http://www.youtube.com/watch?v=KMU0tzLwhbE";
CURL *curl;
CURLcode res;

curl = curl_easy_init();
if (curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "http://www.youtube-mp3.org/");
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, postthis);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE, (long)strlen(postthis));

    /* Perform the request, res will get the return code */
    res = curl_easy_perform(curl);

    /* Check for errors */
    if (res != CURLE_OK)
        fprintf(stderr, "curl_easy_perform() failed: %s\n",
                curl_easy_strerror(res));
}
And for writing the html source to file:
static size_t write_data(void *ptr, size_t size, size_t nmemb, void *stream)
{
    size_t written = fwrite(ptr, size, nmemb, (FILE *)stream);
    return written;
}

{
    static const char *filename = "head.txt";
    FILE *htmlfile;

    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_data);

    // open the file
    htmlfile = fopen(filename, "w");
    if (htmlfile == NULL) {
        curl_easy_cleanup(curl);
        return -1;
    }
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, htmlfile);
    curl_easy_perform(curl);

    /* close the file */
    fclose(htmlfile);

    /* always clean up */
    curl_easy_cleanup(curl);
}
Your code does not work because the logic you are assuming is wrong to begin with.
http://www.youtube-mp3.org does NOT use POST; in fact, its download form doesn't even submit to a server-side URL at all. When you click the "Convert Video" button, client-side JavaScript is invoked to process the input URL, download the relevant information from YouTube, and modify the calling page's HTML to display the actual download link and video preview image.
This is why you don't see the download link when you simply retrieve the HTML: you are not invoking the JavaScript that does the actual work of preparing the download link. And you will not be able to do that from an application (without a LOT of extra work); it has to be done inside a web browser that has a real JavaScript engine and a real DOM for the script to manipulate.
I'm having trouble getting an image from a URL using curl. It works if I pass the URL to the constructor of an ImageMagick Image object, but using curl I'm not having much luck, and I need to use curl.
Right now I'm doing...
curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, &curlCallback);
curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
curl_easy_perform(curl);
And then
size_t curlCallback(char* buf, size_t size, size_t nmemb, void* up)
{
    ofstream out;
    out.open("/home/name/Desktop/img.png");
    out.write(buf, nmemb * size);
    return size * nmemb;
}
It does seem to get the start of a PNG, but not the whole thing. The file only ends up being 251 bytes (header info or something, maybe?). An image viewer will open it as a PNG and know its resolution, but the image itself is blank. If I print the buffer to the console, I see ?PNG and then the binary data symbol.
I know it's not a problem with the remote host, because if I use ImageMagick:
Image image = Image(url);
Then I get the image in its entirety and can save it and it's just fine.
The function set with CURLOPT_WRITEFUNCTION (curlCallback in your case) can be called multiple times during the download (see the docs).
Using CURLOPT_WRITEDATA to pass in a FILE* might be easier.
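A minimal sketch of that suggestion: open the file once before the transfer, hand the FILE* to libcurl through CURLOPT_WRITEDATA, and let the callback simply append each chunk (the URL and output filename are placeholders):

#include <stdio.h>
#include <curl/curl.h>

static size_t write_chunk(char *buf, size_t size, size_t nmemb, void *userdata)
{
    FILE *out = (FILE *)userdata;
    return fwrite(buf, size, nmemb, out);   /* append this chunk; called many times */
}

int main(void)
{
    CURL *curl = curl_easy_init();
    if (curl) {
        FILE *out = fopen("img.png", "wb"); /* open once, before the transfer */
        if (out) {
            curl_easy_setopt(curl, CURLOPT_URL, "http://www.example.com/test_img.png");
            curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_chunk);
            curl_easy_setopt(curl, CURLOPT_WRITEDATA, out);
            curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
            curl_easy_perform(curl);
            fclose(out);
        }
        curl_easy_cleanup(curl);
    }
    return 0;
}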
I'm trying to download a file needed for my application off the internet (as part of installation) so that the first time the app starts up, the needed files get downloaded. For now I'm putting them on Google Drive and making them public, then I'm going to use libcURL to download them. The problem is, I just can't get the data.
I use the following link: https://docs.google.com/uc?id=documentID&export=download, replacing documentID with the ID. When I try connecting to the site, though, it keeps giving me a small snippet of HTML that basically says "Moved Temporarily" and gives me a link to the new URL. When I use the new link in my program, I get no output whatsoever. However, both links work just fine in my web browser, even when I'm not signed in. So why don't they work in my program? Am I not setting up the SSL options correctly, or is Google Drive simply not meant for this kind of thing?
Here's my code:
#include <curl/curl.h>

int main()
{
    curl_global_init(CURL_GLOBAL_ALL);

    CURL *curl;
    CURLcode res;

    curl = curl_easy_init();
    if (curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "https://docs.google.com/uc?id=documentID&export=download");
        curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 0L);
        curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, 0L);
        curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
        curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L);
        res = curl_easy_perform(curl);
    }
    curl_easy_cleanup(curl);
    return 0;
}
Any help would be appreciated.
You'll need to set the CURLOPT_FOLLOWLOCATION option to tell cURL to follow redirects.
I do not know if this helps directly, but I have always made the call
curl_global_init(CURL_GLOBAL_ALL);
before any other libcurl call. You can see it being made in the threaded SSL example at http://curl.haxx.se/libcurl/c/threaded-ssl.html. This curl_global_init() call performs SSL initialisation, among other things. It is discussed at http://curl.haxx.se/libcurl and also in the libcurl tutorial at http://curl.haxx.se/libcurl/c/libcurl-tutorial.html.
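For reference, here is a minimal sketch of the pairing the tutorial describes, with curl_global_init() called once before any other libcurl use and curl_global_cleanup() at the end (the URL is a placeholder):

#include <curl/curl.h>

int main(void)
{
    curl_global_init(CURL_GLOBAL_ALL);   /* also initialises the SSL backend */

    CURL *curl = curl_easy_init();
    if (curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
    }

    curl_global_cleanup();               /* matches the global init */
    return 0;
}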
I have a simple question. Is it possible to write simple code to download a file from the internet (from a URL to disk) in C++ on Mac OS X without using libraries like curl?
I have seen some examples, but all of them use the cURL library.
I use this code in my Xcode project, but I get some compilation (linking) errors:
#define CURL_STATICLIB
#include <stdio.h>
#include <curl/curl.h>
#include <curl/types.h>
#include <curl/easy.h>
#include <string>

size_t write_data(void *ptr, size_t size, size_t nmemb, FILE *stream) {
    size_t written;
    written = fwrite(ptr, size, nmemb, stream);
    return written;
}

int main(void) {
    CURL *curl;
    FILE *fp;
    CURLcode res;
    char *url = "http://localhost/aaa.txt";
    char outfilename[FILENAME_MAX] = "bbb.txt";

    curl = curl_easy_init();
    if (curl) {
        fp = fopen(outfilename, "wb");
        curl_easy_setopt(curl, CURLOPT_URL, url);
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_data);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, fp);
        res = curl_easy_perform(curl);
        curl_easy_cleanup(curl);
        fclose(fp);
    }
    return 0;
}
How can I link the curl library to my Xcode project?
You can launch a console command; it is very simple :D
system("curl -o ...")
or
system("wget ...")
"Downloading a file from URL" means basically doing an GET request to some remote HTTP server. So you need to have your application know how to do that HTTP request.
But HTTP is now a quite complex protocol. Its specification alone is long and complex (more than a hundred pages). libcurl is a good library implementing it.
Why do you want to avoid using a good free library implementing a complex protocol? Of course, you could implement the complex HTTP protocol by yourself (probably that needs years of work), or make a minimal program which don't implement all the details of HTTP protocol but might work (but won't work with weird HTTP servers).
You have to learn bits of "socket programming" and implement a very basic HTTP protocol; the minimalist thing is to send string like "GET /this/path/to/file.png HTTP/1.0\r\n" to the site; then, likely it will answer with an HTTP header you have to parse to know at least the length of the binary data following (if the request succeeded, otherwise you have to handle HTTP errors, or a unexpected contet-type like a html page).
This guide should give you the basic to start with; about HTTP, it depends on your need, sometimes sending a "raw" GET could suffice, sometimes not.
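To illustrate the idea (not a replacement for libcurl), here is a rough sketch of that raw-socket approach on a POSIX system such as Mac OS X; the host and path are placeholders, and the response is dumped unparsed, headers and all:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>

int main(void)
{
    const char *host = "www.example.com";           /* placeholder host */
    const char *request =
        "GET /this/path/to/file.png HTTP/1.0\r\n"
        "Host: www.example.com\r\n"
        "\r\n";

    /* Resolve the host name and open a TCP connection to port 80. */
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo(host, "80", &hints, &res) != 0)
        return 1;

    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0)
        return 1;

    /* Send the request and dump whatever comes back (headers + body). */
    send(fd, request, strlen(request), 0);

    char buf[4096];
    ssize_t n;
    while ((n = recv(fd, buf, sizeof buf, 0)) > 0)
        fwrite(buf, 1, (size_t)n, stdout);

    close(fd);
    freeaddrinfo(res);
    return 0;
}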
EDIT
Changed the request to HTTP/1.0, since HTTP/1.1 requires the Host header to be sent, as a commenter rightly pointed out.
EDIT2
The OP changed the question, which became something about how to link with a library in Xcode. There's already a similar question on SO.
I'm using Visual Studio C++ 2010 and it's working just fine with cURL, but the problem is that HTTPS requests return nothing instead of showing the output:
#include "StdAfx.h"
#include <stdio.h>
#include <curl/curl.h>
#include <conio.h>
int main(void)
{
CURL *curl;
CURLcode res;
curl = curl_easy_init();
if(curl) {
curl_easy_setopt(curl, CURLOPT_URL, "https://www.google.com");
curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 0L);
curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, 0L);
res = curl_easy_perform(curl);
curl_easy_cleanup(curl);
}
_getch();
return 0;
}
This code, for example, is just an HTTPS request to Google, but it returns nothing, just because it starts with https. If I take away the "s" of https, it works just fine: "http://www.google.com.br" shows the result normally. What am I missing here? I'm using the example from cURL.
I tried other websites, like https://www.facebook.com, and the same thing happened. :/
By the way, if you also know how I can store the webpage content in a string, I would be glad to know.
Thanks in advance. :)
This simple example works for me:
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
    CURL *curl;
    CURLcode res;

    curl = curl_easy_init();
    if (curl) {
        /* Set the URL to fetch. An https:// URL works as well,
           as long as libcurl was built with SSL support. */
        curl_easy_setopt(curl, CURLOPT_URL, "https://www.google.co.uk");

        /* Perform the request, res will get the return code */
        res = curl_easy_perform(curl);

        /* always cleanup */
        curl_easy_cleanup(curl);
    }
    return 0;
}
I got the source code and built the library a long time ago, but I'm sure I enabled SSL support before compiling the sources. Make sure you have OpenSSL installed and (in case you're working under Linux) that the PKG_CONFIG_PATH variable is initialized properly. These are two options you can specify when executing the configure script:
--with-ssl=PATH Where to look for OpenSSL, PATH points to the SSL
installation (default: /usr/local/ssl); when
possible, set the PKG_CONFIG_PATH environment
variable instead of using this option
--without-ssl disable OpenSSL
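If you are unsure whether the libcurl you link against was actually built with SSL, you can also check at runtime with curl_version_info(); a small sketch:

#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
    /* Ask libcurl which features it was compiled with. If CURL_VERSION_SSL
       is not among them, https:// URLs will fail. */
    curl_version_info_data *info = curl_version_info(CURLVERSION_NOW);

    if (info->features & CURL_VERSION_SSL)
        printf("SSL support: yes (%s)\n", info->ssl_version);
    else
        printf("SSL support: no\n");

    return 0;
}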
I hope it helps.
If you're using Windows, this post can also be useful for you:
Building libcurl with SSL support on Windows