I'm using Visual Studio 2022 with vcpkg to create an application that captures video streamed from a Raspberry Pi over UDP with GStreamer. I updated vcpkg/ports/gstreamer/portfile.cmake and added -Dgst-plugins-good:udp=enabled under vcpkg_configure_meson. After that, I installed opencv4 with gstreamer, so vcpkg list looks like this:
gstreamer:x64-windows 1.19.2#9 GStreamer open-source multimedia framework core ...
gstreamer[flac]:x64-windows FLAC audio codec plugin
gstreamer[gl-graphene]:x64-windows Use Graphene in OpenGL plugin
gstreamer[plugins-base]:x64-windows 'Base' GStreamer plugins and helper libraries
gstreamer[plugins-good]:x64-windows 'Good' GStreamer plugins and helper libraries
gstreamer[plugins-ugly]:x64-windows 'Ugly' GStreamer plugins and helper libraries
gstreamer[rawparse]:x64-windows Build with libraw support
gstreamer[x264]:x64-windows Colon separated list of additional x264 library ...
opencv4:x64-windows 4.6.0#5 computer vision library
opencv4[default-features]:x64-windows Platform-dependent default features
opencv4[dnn]:x64-windows Enable dnn module
opencv4[gstreamer]:x64-windows gstreamer support for opencv
opencv4[jpeg]:x64-windows JPEG support for opencv
opencv4[png]:x64-windows PNG support for opencv
opencv4[quirc]:x64-windows Enable QR code module
opencv4[tiff]:x64-windows TIFF support for opencv
opencv4[webp]:x64-windows WebP support for opencv
However, even with that, gstudp.dll is not copied anywhere, and if I copy it manually to /x64/Debug and run my app, I get:
[ WARN:0#0.086] global D:\C++Libs\vcpkg\buildtrees\opencv4\src\4.6.0-e24d1d7a25.clean\modules\videoio\src\cap_gstreamer.cpp (1127) cv::GStreamerCapture::open OpenCV | GStreamer warning: Error opening bin: no element "udpsrc"
[ WARN:0#0.086] global D:\C++Libs\vcpkg\buildtrees\opencv4\src\4.6.0-e24d1d7a25.clean\modules\videoio\src\cap_gstreamer.cpp (862) cv::GStreamerCapture::isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created
Failed to open camera.
What am I doing wrong in trying to use OpenCV/GStreamer over UDP with vcpkg?
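For context, the capture call is along these lines (a minimal sketch; the port, caps, and decoder element here are simplified placeholders, not necessarily my exact pipeline):

#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    // Placeholder pipeline: receive an RTP/H.264 stream on UDP port 5000.
    // The caps string and the decoder element (avdec_h264) are illustrative only.
    std::string pipeline =
        "udpsrc port=5000 caps=\"application/x-rtp,media=video,encoding-name=H264,payload=96\" "
        "! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! appsink";

    cv::VideoCapture cap(pipeline, cv::CAP_GSTREAMER);
    if (!cap.isOpened()) {
        std::cout << "Failed to open camera." << std::endl;  // this is the branch I end up in
        return -1;
    }

    cv::Mat frame;
    while (cap.read(frame)) {
        cv::imshow("udp", frame);
        if (cv::waitKey(1) == 27) break;  // Esc quits
    }
    return 0;
}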
I'm trying to play a video from a file using OpenCV 3.4.0. Xcode gives an error message: OpenCV: Couldn't read video stream from file. When I capture video from the camera, it runs successfully in Xcode. I tried installing FFmpeg 3.4.1 and don't know which headers I need to include. This is all on OS X 10.12 Sierra.
Found the solution: I was pointing to the file with the wrong path. I just changed "~/Desktop/test.mp4" to "/Users/jameson/Desktop/test.mp4"; apparently the abridged (tilde) version of the path is not enough for OpenCV.
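For anyone hitting the same thing, this is roughly the difference in code (a minimal sketch; the paths are the ones from my setup):

#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    // The "~" is expanded by the shell, not by OpenCV, so the first path
    // is passed through literally and the capture fails to open.
    cv::VideoCapture bad("~/Desktop/test.mp4");
    cv::VideoCapture good("/Users/jameson/Desktop/test.mp4");

    std::cout << "tilde path opened: "    << bad.isOpened()  << std::endl;
    std::cout << "absolute path opened: " << good.isOpened() << std::endl;
    return 0;
}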
It turned out that I need to capture video from the desktop with Qt. The most frequently advised tool for that was GStreamer. So I installed it and installed all the dependencies from the README file, but I have absolutely no idea how to compile this library and link it to my project.
Any suggestions about how to use it?
I installed OpenCV 2.4.13 after installing the required dependencies, and I'm running it with Qt Creator 5.7 for programming in C++. Following some tutorials, I managed to load still images, process them and so on. But when I try to use cv::VideoCapture, for example:
cv::VideoCapture cap;
cap.open("<File location/file name>");
or:
cv::VideoCapture("<File location/file name");
cap.isOpened() always returns false. Even if I try to ignore this and go further with Mat, read, imshow and so on, the program crashes. I've already tried everything about loading the file: giving a std::string as the argument, giving a char*, putting the video file in easier locations, in the project directory, and nothing works. I tried with two different .mp4 files, with a .mov and with the cube4.avi file given in the "samples" folder (from the OpenCV examples). All of these files play perfectly in VLC. I've already tried in Qt with a QtWidgets project and with a plain C++ project. It never works. After searching, I've seen that this bug is very common, but I've only found solutions for Windows, regarding adding a .dll file to the project. But what should I do on Ubuntu, since Linux uses no DLLs?
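For reference, the whole thing boils down to this minimal test (the path here is just a placeholder for the files I tried):

#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    // Any of the files mentioned above (.mp4, .mov, cube4.avi) goes here.
    cv::VideoCapture cap("/path/to/cube4.avi");
    std::cout << "opened: " << std::boolalpha << cap.isOpened() << std::endl;  // always prints false

    cv::Mat frame;
    if (cap.isOpened() && cap.read(frame)) {
        cv::imshow("frame", frame);
        cv::waitKey(0);
    }
    return 0;
}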
I figured out how to solve this and I'm here to share it with everyone who encounters similar problems. First of all, my FFmpeg installation was, for some unknown reason, ignored during the OpenCV compilation. Since FFmpeg is needed to read and write videos, OpenCV wasn't able to do these jobs and displayed no error or warning messages about it. Anyway, I decided to try everything again from the beginning. Keep in mind: just reinstalling FFmpeg is not going to solve your problem; after that you have to run cmake, compile, and install OpenCV again. I downloaded the most recent version of FFmpeg from the repository and compiled and installed it again. I then tried to compile OpenCV again, but it always got stuck at about 15% when processing an FFmpeg (libav) .h file, with the message:
<file_name.h> can not be used when making a shared object; recompile with -fPIC
So I searched and, with some effort, found out that it was necessary to configure FFmpeg with the following line before running make:
./configure --enable-pic --enable-shared
Some FFmpeg installation tutorials include even more options after "configure". Then I compiled and reinstalled FFmpeg, and compiled and reinstalled OpenCV. After that, I was able to load videos (cap.isOpened() was no longer false), but I was still getting the following error for every video, and they weren't being read:
Assertion desc failed at libswscale/swscale_internal.h:674
The way I found to solve this was downloading the newest stable FFmpeg version instead of the newest one, compiling it again, compiling OpenCV again, and then it was successful! Now I can both load and write video files (I haven't tested multiple codecs yet). I wonder why they don't release pre-compiled versions of OpenCV with everything included, so that we don't have to deal with all this stuff...
Summarizing everything:
Download the latest stable release of FFmpeg. If it doesn't work, try an older one. (I'm using 2.8.6, the latest stable right now.)
Unpack it, open a terminal in the ffmpeg folder, and compile and install it by typing:
./configure --enable-pic --enable-shared
make
sudo make install
Download OpenCV (if you haven't done so yet), then unpack and install it following the official Linux installation tutorial:
http://docs.opencv.org/2.4/doc/tutorials/introduction/linux_install/linux_install.html#linux-installation
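Once OpenCV is rebuilt, you can check whether FFmpeg support was actually picked up; a minimal sketch (the video path is just an example):

#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    // The "Video I/O" section of the build information shows whether
    // FFMPEG support was compiled in; if it says NO, VideoCapture will
    // silently fail to open most video files, as described above.
    std::cout << cv::getBuildInformation() << std::endl;

    cv::VideoCapture cap("/path/to/any/video.mp4");
    std::cout << "opened: " << std::boolalpha << cap.isOpened() << std::endl;
    return 0;
}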
I solved the problem by installing "opencv3.2.0-dev" instead of "opencv3.2.0".
pip install opencv-python is not enough to install OpenCV.
I'm trying to get VideoCapture working with OpenCV. The video I'm trying to load is in XVID format (checked it with VideoCapture::get(CV_CAP_PROP_FOURCC)). It works fine, but whenever I try to get the video framerate (VideoCapture::get(CV_CAP_PROP_FPS)) I get -nan.
I've used the same video and the same code on another computer (at uni, they have a custom Debian installation) and I can confirm that the frame rate info is there (it works fine there). I read somewhere that Ubuntu recently removed ffmpeg from their repositories (I use Linux Mint 17.2), so I installed the ffmpeg package from the ppa:kirillshkrogalev/ffmpeg-next repository. After that I recompiled and reinstalled OpenCV, without any change.
I'm using OpenCV 2.4.11 with C++ under Linux Mint 17.2.
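The check I'm doing boils down to this (a minimal sketch; the file name is a placeholder, and the fallback at the end is just a temporary workaround):

#include <opencv2/opencv.hpp>
#include <cmath>
#include <iostream>

int main() {
    cv::VideoCapture cap("video.avi");  // placeholder file name
    if (!cap.isOpened()) return -1;

    // FOURCC comes back correctly (XVID), but FPS is -nan on this machine.
    int fourcc = static_cast<int>(cap.get(CV_CAP_PROP_FOURCC));
    double fps = cap.get(CV_CAP_PROP_FPS);

    char code[5] = { char(fourcc & 0xFF), char((fourcc >> 8) & 0xFF),
                     char((fourcc >> 16) & 0xFF), char((fourcc >> 24) & 0xFF), '\0' };
    std::cout << "fourcc: " << code << ", fps: " << fps << std::endl;

    // Workaround for now: fall back to a guessed rate when FPS is not reported.
    if (std::isnan(fps) || fps <= 0.0)
        fps = 25.0;

    return 0;
}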
Probably Xvid, which is required by FFmpeg, is missing on your Linux machine. Try installing Xvid as below; it may help you.
cd /opt
wget http://downloads.xvid.org/downloads/xvidcore-1.3.2.tar.gz
tar xzvf xvidcore-1.3.2.tar.gz
cd xvidcore/build/generic
./configure --prefix="$HOME/ffmpeg_build"
make
make install