I'm attempting to save an image from a pointer variable of type IplImage*. I'm using C++ with OpenCV on 32-bit Raspberry Pi OS. I installed the OpenCV library via the sudo apt-get install libopencv-dev command from the terminal.
Firstly, it is my understanding that the image-saving API changed after OpenCV 2, with the C-style cvSaveImage function being replaced by imwrite in OpenCV 3 and later.
I'm using OpenCV 2, so I should be able to call cvSaveImage with my desired image file path and the IplImage* pointer as arguments. However, whenever I try to build my program with cvSaveImage, I get an error stating the function is not declared in this scope. All the other OpenCV functions I was calling were found successfully.
I'm not experienced with opencv at all, therefore I looked into the actual files I was including from the library.
I was including the opencv2/highgui/highgui_c.h header file to access all my functions. After searching that header file, I was unable to find the cvSaveImage function anywhere.
Which file actually contains the cvSaveImage function that I'm looking for? Is it in a different file?
Thanks for reading my post, any guidance is appreciated.
Thanks to comments by Micka and Christoph Rackwitz, I found a workaround:
If one has image data in the form of a pointer variable IplImage* imagePtr, it can be saved to a file via cv::imwrite; in my case I used something along the lines of:
cv::imwrite("test.jpg", cv::Mat(height, width, type, imagePtr->imageData));
I am creating a project in C++ with QtCreator (5.14.1, MinGW compiler) and trying to use OpenCV (3.4.16) to read video files. I have tried many files with standard formats and codecs (H.264, yuv420, .mov, etc.). However, no matter what I try, VideoCapture() always silently fails. It doesn't crash or show any error code; isOpened() is just always false.
I think the cause is that I am building OpenCV (via this tutorial: https://wiki.qt.io/How_to_setup_Qt_and_openCV_on_Windows) without an internet connection (I cannot have an internet connection on this machine, so please do not ask me to get one), and therefore the build can't download the FFMPEG libraries during this process. I have been looking everywhere for information about how to download the FFMPEG libraries for OpenCV directly, but I haven't had any luck.
Can someone please explain what libraries I need to download and how OpenCV goes about looking for them? At the moment I don't know what I need, nor where to put them, and I can't find any information on the topic.
Or, can someone explain why calling VideoCapture("video.mov", cv::CAP_ANY) has no effect, despite the video playing easily in VLC, MediaPlayer, etc.?
Code:
```cpp
#include <opencv2/opencv.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main()
{
    VideoCapture cap("C://video.mov");
    //VideoCapture cap("C:/video.mov");
    //VideoCapture cap("C:\\video.mov");   // backslashes must be escaped in C++ string literals
    if (!cap.isOpened()) {
        cout << "Error opening video stream or file" << endl;
        return -1;
    }
    return 0;
}
```
I have tried downloading some FFmpeg DLLs and EXEs and adding them to the PATH, with no success. I have also tried downloading the shared GPL build of FFmpeg (the one that comes with the libs and include directories) and adding them to my .pro file, but there was no change in VideoCapture's behaviour.
I have also tried moving the opencv_ffmpeg_64.dll (found in opencv/build/bin) to my executable directory but that didn't fix anything.
In the end I used this guy's answer:
How do i compile opencv_ffmpeg.dll file using mingw on windows 10 64 bit?
Just note that some of the directories are a little different now. You don't need to put the files in a folder named after the hash, or in a 'download' subdirectory; you need to copy all of them to opencv-build/3rdparty/ffmpeg/. I also put them in opencv/source/3rdparty/ffmpeg, but I'm not sure whether that was needed. Finally, you need to go into the ffmpeg.cmake file and set 'status' to TRUE when the download fails (or just remove the download part altogether); this lets it call ffmpeg_version.cmake and set things up.
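After rebuilding, a quick way to confirm that the FFMPEG backend actually made it into the OpenCV build (this check is an addition of mine, not part of the linked answer) is to print the build information and look under the "Video I/O" section:

```cpp
#include <opencv2/core.hpp>
#include <iostream>

int main()
{
    // The "Video I/O" section of this report lists FFMPEG: YES or NO.
    std::cout << cv::getBuildInformation() << std::endl;
    return 0;
}
```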
https://github.com/vikram-ma/OCR
When I try to build main.cpp from this code, I get the following error:
In file included from /home/akash/Desktop/OCR-master/main.cpp:9:0:
/home/akash/Desktop/OCR-master/OCR.h:43:3: error: ‘CvKNearest’ does not name a type
CvKNearest *knn;
^
CMakeFiles/OCR.dir/build.make:62: recipe for target 'CMakeFiles/OCR.dir/main.o' failed
please help
At first glance it seems you don't have OpenCV installed/downloaded.
The code you are pointing to uses the OpenCV library and assumes you already have it installed.
You should go to OpenCV releases and download the version you need.
Edit:
I looked into it more closely, and as suspected the code was written for an old OpenCV version. Right now you are using 3.2.0, so you need to make some updates to the code itself.
Either go with an older version of the library, like 2.3-2.4 (which I'm not suggesting, but it will probably be less effort), or update the code to the version you've already installed.
If you wish to do the latter, you can start by looking here: Transition guide
Among other things, it shows that what used to be CvKNearest has been moved to cv::ml::KNearest. Updating accordingly should fix your first error.
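As a rough sketch of what that update might look like (the trainData, labels, and sample names here are placeholders, not taken from the repository):

```cpp
// OpenCV 3.x ml module: CvKNearest becomes cv::Ptr<cv::ml::KNearest>.
#include <opencv2/ml.hpp>

// was: CvKNearest *knn;
cv::Ptr<cv::ml::KNearest> knn = cv::ml::KNearest::create();

void trainAndClassify(const cv::Mat& trainData,   // CV_32F feature rows
                      const cv::Mat& labels,      // CV_32F or CV_32S responses
                      const cv::Mat& sample)      // CV_32F row(s) to classify
{
    knn->train(trainData, cv::ml::ROW_SAMPLE, labels);  // was: knn->train(trainData, labels)
    cv::Mat results;
    knn->findNearest(sample, /*k=*/3, results);         // was: knn->find_nearest(...)
}
```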
I encountered a strange error: whenever I include #include <dlib/gui_widgets.h>
in my project and declare a variable (for example dlib::image_window win), the following errors appear:
'DLIB_NO_GUI_SUPPORT is defined so you can't use the GUI code. Turn DLIB_NO_GUI_SUPPORT off if you want to use it.' and 'Also make sure you have libx11-dev installed on your system' (from gui_core_kernel_2.h). I searched around and found some suggestions that dlib's cmake step could have failed, but I really doubt that, since I'm already doing landmark detection successfully.
The reason I'm trying to include and declare one of the widgets is to display values on the screen (there is no dlib equivalent of OpenCV's putText(), is there?).
I would be very grateful for any help. :)
In the end I decided to convert from dlib::array2d to cv::Mat (with the dlib::toMat() function) and use cv::putText() (as mentioned previously). It turned out that Xcode hadn't properly linked the OpenCV libraries; the following links were helpful: How can I create OpenCV framework?
and
http://blogs.wcode.org/2014/11/howto-setup-xcode-6-1-to-work-with-opencv-libraries/
Also if someone can't find Frameworks in Library:
https://discussions.apple.com/thread/4195808?tstart=0
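A minimal sketch of that workaround (assuming an 8-bit BGR dlib::array2d; the function and variable names are mine):

```cpp
#include <string>
#include <dlib/array2d.h>
#include <dlib/pixel.h>
#include <dlib/opencv.h>                  // dlib::toMat
#include <opencv2/imgproc/imgproc.hpp>    // cv::putText

void drawLabel(dlib::array2d<dlib::bgr_pixel>& img, const std::string& text)
{
    // toMat wraps the dlib image without copying, so img must outlive mat.
    cv::Mat mat = dlib::toMat(img);
    cv::putText(mat, text, cv::Point(20, 40),
                cv::FONT_HERSHEY_SIMPLEX, 1.0, cv::Scalar(0, 255, 0), 2);
}
```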
I need a simple ffmpeg conversion task to be done inside an application:
ffmpeg -i input_file.m4v -vcodec copy -acodec copy -vbsf h264_mp4toannexb output_file.ts
This works well from the terminal. I've successfully compiled ffmpeg's static libraries, and some of the examples work perfectly, so the libraries themselves are fine. How do I implement the behaviour of the above command line with this library?
I looked into ffmpeg.c, but there is so much code inside that it took me hours just to get an idea of how it works, and I still don't really understand the overall structure.
I would be very happy if someone could help me understand how to use the library to do exactly what the example command line does. (In the end I just want to transmux mp4 files to ts files without re-encoding.)
Thanks in advance
Jack
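For reference, a rough sketch of the same remux using the libavformat API (the file names come from the command line above; this is written against a reasonably recent FFmpeg, 4.x or so, where demuxer/muxer registration is automatic, and where the mpegts muxer inserts the h264_mp4toannexb bitstream filter itself when needed; older builds additionally need av_register_all() and manual per-packet bitstream filtering):

```cpp
extern "C" {
#include <libavformat/avformat.h>
}
#include <cstdio>

int main()
{
    const char* in_filename  = "input_file.m4v";
    const char* out_filename = "output_file.ts";

    AVFormatContext* ifmt_ctx = nullptr;
    AVFormatContext* ofmt_ctx = nullptr;

    if (avformat_open_input(&ifmt_ctx, in_filename, nullptr, nullptr) < 0 ||
        avformat_find_stream_info(ifmt_ctx, nullptr) < 0) {
        std::fprintf(stderr, "Could not open input\n");
        return 1;
    }

    // "mpegts" corresponds to the .ts output of the command line.
    avformat_alloc_output_context2(&ofmt_ctx, nullptr, "mpegts", out_filename);

    // -vcodec copy -acodec copy: create one output stream per input stream
    // and copy the codec parameters instead of re-encoding.
    for (unsigned i = 0; i < ifmt_ctx->nb_streams; i++) {
        AVStream* in_stream  = ifmt_ctx->streams[i];
        AVStream* out_stream = avformat_new_stream(ofmt_ctx, nullptr);
        avcodec_parameters_copy(out_stream->codecpar, in_stream->codecpar);
        out_stream->codecpar->codec_tag = 0;   // let the muxer pick its own tag
    }

    avio_open(&ofmt_ctx->pb, out_filename, AVIO_FLAG_WRITE);
    avformat_write_header(ofmt_ctx, nullptr);

    AVPacket pkt;
    while (av_read_frame(ifmt_ctx, &pkt) >= 0) {
        AVStream* in_stream  = ifmt_ctx->streams[pkt.stream_index];
        AVStream* out_stream = ofmt_ctx->streams[pkt.stream_index];
        // Rescale timestamps from the demuxer's time base to the muxer's.
        av_packet_rescale_ts(&pkt, in_stream->time_base, out_stream->time_base);
        pkt.pos = -1;
        av_interleaved_write_frame(ofmt_ctx, &pkt);
        av_packet_unref(&pkt);
    }

    av_write_trailer(ofmt_ctx);
    avformat_close_input(&ifmt_ctx);
    avio_closep(&ofmt_ctx->pb);
    avformat_free_context(ofmt_ctx);
    return 0;
}
```

Error handling is kept to a minimum here to show the structure: open input, create matching output streams, write header, copy and rescale packets, write trailer.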
I would like to open a file in the solution directory so that whenever I move the whole project folder, it'll still work.
I'm currently using code 1 as shown below; when I try to use code 2, it fails.
How do I do this?
Code 1:
IplImage *src=cvLoadImage("C:\\Documents\\Visual Studio 2008\\Project1\\ABC.jpg"); //A function that load image
Code 2:
IplImage *src = cvLoadImage("$(SolutionDir)/ABC.jpg"); // A function that loads an image
You could use
"..\\example.txt"
which will find the file in the directory one level above the working directory, i.e. the top project directory.
Also, doesn't cvLoadImage take 2 arguments? I'm assuming you are using OpenCV.
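A minimal sketch of that relative-path approach (assuming the image sits next to the .sln file and the working directory is the project directory, which is Visual Studio's default when launching with F5; the file name ABC.jpg is taken from the question):

```cpp
#include <opencv2/highgui/highgui_c.h>   // cvLoadImage, CV_LOAD_IMAGE_COLOR
#include <cstdio>

int main()
{
    // "..\\ABC.jpg" resolves one level above the working directory,
    // i.e. the solution directory in the default project layout.
    IplImage *src = cvLoadImage("..\\ABC.jpg", CV_LOAD_IMAGE_COLOR);
    if (!src) {
        std::printf("Could not load image\n");
        return -1;
    }
    // ... use the image ...
    cvReleaseImage(&src);
    return 0;
}
```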