I am trying to build a simple GStreamer program on Windows using the faceblur plugin, which can be found in https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad.
As a starting point I used the first tutorial: https://gstreamer.freedesktop.org/documentation/tutorials/basic/hello-world.html?gi-language=c
I have modified the pipeline as follows:
GError* error = NULL;
pipeline = gst_parse_launch("autovideosrc ! videoconvert ! faceblur ! videoconvert ! autovideosink", &error);
// Print parse errors. Note that g_error() aborts the program, so the
// g_error_free() call below is never actually reached; use g_printerr()
// instead if the error should be handled without aborting.
if (error != NULL) {
    g_error("Could not create pipeline: %s\n", error->message);
    g_error_free(error);
}
As you can see, I'm trying to use the faceblur plugin here.
However, at runtime, the following error is thrown:
ERROR **: 12:09:56.343: Could not create pipeline: no element "faceblur"
My question is: how do I make this work on Windows?
How do I "install" the required plugin?
I am using Visual Studio for compiling/linking.
Thank you for your help
When installing GStreamer on Windows, you should select ALL plugins (or choose the complete installation). The faceblur element comes from gst-plugins-bad, which is not part of a minimal install.
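You can verify that the element is actually available before running the program; for example, from a shell where the GStreamer bin directory is on the PATH:
gst-inspect-1.0 faceblur
If this prints the element's details instead of "No such element or plugin", gst_parse_launch will be able to find it.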
I am creating a video stream from my headless Raspberry Pi using a USB camera and OpenCV.
Ubuntu 18.04 LTS
Raspi OS: Bullseye
USB Camera
GStreamer syntax test on the terminal:
gst-launch-1.0 v4l2src device=/dev/video0 ! videoconvert! videoscale ! video/x-raw, width=640, height=480 ! autovideosink -v
In OpenCV C++:
string pipeline;
pipeline = "v4l2src device=/dev/video0 ! videoconvert! videoscale ! video/x-raw, width=640, height=480 ! autovideosink -v";
cv::VideoCapture cap(pipeline, cv::CAP_GSTREAMER);
When I launch the pipeline from the terminal, it creates a window with my USB webcam feed. However, when I try to use the same pipeline from C++ it throws this error:
[ WARN:0] global ../modules/videoio/src/cap_gstreamer.cpp (734) open OpenCV | GStreamer warning: Error opening bin: syntax error
[ WARN:0] global ../modules/videoio/src/cap_gstreamer.cpp (501) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created
terminate called after throwing an instance of 'std::invalid_argument'
what(): cannot predict with an empty OpenCV image
Aborted
What am I missing in the syntax in the code?
Edit:
After some trials, I could get the pipeline created with this:
pipeline = "v4l2src device=/dev/video0 ! videoconvert! video/x-raw, width=640, height=480 ! appsink";
and warning:
[ WARN:0] global ../modules/videoio/src/cap_gstreamer.cpp (961) open OpenCV | GStreamer warning: Cannot query video position: status=0, value=-1, duration=-1
...
some stuff for that frame
...
[ WARN:0] global ../modules/videoio/src/cap_gstreamer.cpp (509) isPipelinePlaying OpenCV | GStreamer warning: unable to query pipeline state
[ WARN:0] global ../modules/videoio/src/cap_gstreamer.cpp (1824) handleMessage OpenCV | GStreamer warning: Embedded video playback halted; module v4l2src0 reported: Could not read from resource.
When I searched for it, I found this post, which led to a bug report here and here.
I am using OpenCV 4.5, and I assume that bug would have been fixed in a later version, since it was reported for OpenCV 3.4.
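I suspect the -v at the end was the original problem, since it is a gst-launch-1.0 command-line flag rather than pipeline syntax, and a pipeline feeding OpenCV must end in appsink rather than a display sink. For reference, a minimal capture loop for the working appsink pipeline above might look like this (a sketch, assuming OpenCV was built with GStreamer support):
#include <opencv2/opencv.hpp>
#include <string>

int main() {
    // Same pipeline as in the edit above: no gst-launch flags, appsink at the end.
    std::string pipeline = "v4l2src device=/dev/video0 ! videoconvert ! "
                           "video/x-raw, width=640, height=480 ! appsink";
    cv::VideoCapture cap(pipeline, cv::CAP_GSTREAMER);
    if (!cap.isOpened()) return 1;

    cv::Mat frame;
    while (cap.read(frame)) {
        // process the frame here ...
    }
    return 0;
}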
I'm currently facing an issue on a project using libvpx v1.10.0 (https://github.com/webmproject/libvpx/releases).
I have successfully built the library for Visual Studio 16 on Windows 10 (PC x64). (I must build libvpx on my own, since I need it to run on Windows 10 ARM64 / VS16 as well, for the HoloLens 2, and such a build is not officially provided.)
I've made a C++ DLL that uses the static libs from libvpx (to be used as a native plugin in Unity).
While the VP9 encoding part seems to work correctly in a sample app using my DLL, I cannot initialize the VP9 decoder. Maybe I am missing something in the configuration step of libvpx?
To build the libvpx static libraries, I have launched MSYS2 from the x64 Native Tools Command Prompt of Visual Studio 2019.
Then, I set the configuration as follows, inspired by what we can find in an Arch Linux AUR package (https://aur.archlinux.org/cgit/aur.git/tree/PKGBUILD?h=libvpx-git):
./configure --target=x86_64-win64-vs16 --enable-libyuv \
--enable-vp8 --enable-vp9 --enable-postproc --enable-vp9-postproc \
--enable-vp9-highbitdepth --enable-vp9-temporal-denoising
make -j
At the end of the compilation, the build succeeds with 0 errors and 2 warnings. The --help of the configure script indicates that the --enable-vp9 option enables both the VP9 encoder and decoder.
Then, when I run my app using the C++ DLL that performs the encoding and decoding stuff, I get this error message from libvpx:
Codec does not implement requested capability.
It occurs when I call the vpx_codec_dec_init() function. I don't understand why it cannot be initialized, as I think the VP9 codec is fully built. The error appears as well when I add the --enable-vp9-encoder and --enable-vp9-decoder options and all other VP9-related options to the configuration.
Is there something to do in the code itself before initializing the VP9 decoder? I have not seen such a thing in the code samples. Note that the problem also occurs if I use VP8 (encoding OK / decoding KO, same error).
Here is the beginning of my function for decoding a frame:
vpx_codec_err_t resultError;
vpx_codec_ctx_t codec;

const vpx_codec_iface_t* decoderInterface = vpx_codec_vp9_cx(); // >>> OK!
if (!decoderInterface)
{
    return "libvpx: unsupported codec (decoder)";
}

resultError = vpx_codec_dec_init(&codec, decoderInterface, nullptr, 0); // >>> KO...
if (resultError)
{
    std::cout << vpx_codec_error(&codec) << std::endl; // outputs "Codec does not implement requested capability"
    return "libvpx: failed to initialize decoder";
}

vpx_codec_iter_t iter = nullptr;
vpx_image_t* yuvFrame = nullptr;
resultError = vpx_codec_decode(&codec, compressedFrame, (unsigned int)compressedFrameSize, nullptr, 0);
if (resultError)
{
    return "libvpx: failed to decode frame";
}
// ....
Any help would be great! Thank you. :)
OK, I've figured it out! :)
The line:
const vpx_codec_iface_t* decoderInterface = vpx_codec_vp9_cx();
must be replaced by the following (adding #include <vpx/vp8dx.h>):
const vpx_codec_iface_t* decoderInterface = vpx_codec_vp9_dx();
The reason I made this error is a previous experience in encoding/decoding videos. I had developed a webcam streaming app using the H.264 codec, which needs a "context" structure to be set up. So, because of the name of the vpx_codec_vp9_cx() function, I thought it was creating such a context for VP9. In fact, cx stands for encoding and dx for decoding... Not really obvious, though. I don't like this kind of function naming.
Anyway, I hope it will help anybody in the same situation. ;)
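For completeness, the corrected beginning of the decoding function looks like this (a minimal sketch based on the code above):
#include <iostream>
#include <vpx/vpx_decoder.h>
#include <vpx/vp8dx.h> // provides vpx_codec_vp9_dx()

vpx_codec_err_t resultError;
vpx_codec_ctx_t codec;

// dx = decoder; the encoder counterpart cx lives in <vpx/vp8cx.h>.
const vpx_codec_iface_t* decoderInterface = vpx_codec_vp9_dx();
resultError = vpx_codec_dec_init(&codec, decoderInterface, nullptr, 0);
if (resultError != VPX_CODEC_OK)
{
    // With the right interface, this reports a real decoder failure instead of
    // "Codec does not implement requested capability".
    std::cout << vpx_codec_error(&codec) << std::endl;
}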
I am trying to use the ROS GSCam package in a Docker container to read from a camera stream and publish to a ROS topic. Using GStreamer via gst-launch works fine in the container. For example, running
gst-launch-1.0 -v tcpclientsrc host=10.0.0.20 port=7001 ! decodebin ! filesink location= xyz.flv
in the container successfully saves the camera stream to the xyz.flv file. When I try to use GSCam using the following commands
roscd gscam
mkdir bin
cd bin
export GSCAM_CONFIG="tcpclientsrc host=10.0.0.20 port=7001 ! decodebin ! ffmpegcolorspace"
rosrun gscam gscam
the camera stream is published to a ROS topic if these commands are run outside the container, but when they are run inside the container they result in the following output:
[ INFO] [1625351876.482947987]: Using GStreamer config from env: "tcpclientsrc host=10.0.0.20 port=7001 ! decodebin ! ffmpegcolorspace"
[ INFO] [1625351876.486672898]: using default calibration URL
[ INFO] [1625351876.486702714]: camera calibration URL: file:///root/.ros/camera_info/camera.yaml
[ INFO] [1625351876.486735225]: Unable to open camera calibration file [/root/.ros/camera_info/camera.yaml]
[ WARN] [1625351876.486750795]: Camera calibration file /root/.ros/camera_info/camera.yaml not found.
[ INFO] [1625351876.486759250]: Loaded camera calibration from
[ INFO] [1625351876.502708790]: Time offset: 1625351189.738
[FATAL] [1625351876.519311654]: Failed to PAUSE stream, check your GStreamer configuration.
[FATAL] [1625351876.519330710]: Failed to initialize GSCam stream!
What can I do to fix this?
EDIT: The problem seems to come from this piece of code:
gst_element_set_state(pipeline_, GST_STATE_PAUSED);
if (gst_element_get_state(pipeline_, NULL, NULL, -1) == GST_STATE_CHANGE_FAILURE) {
    ROS_FATAL("Failed to PAUSE stream, check your gstreamer configuration.");
    return false;
}
Using code from here, I can see that the return value of gst_element_set_state() is SUCCESS, as is the return value of gst_element_get_state() for each element of the pipeline (tcpclientsrc0, decodebin0, and appsink0). Strangely, however, the return value of gst_element_get_state(pipeline_, NULL, NULL, -1) is FAILURE, which doesn't seem to make sense. What am I missing? Can a pipeline state change fail despite all its elements changing state successfully? Or does the pipeline have some other (perhaps hidden) element that fails to change state?
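One way to get more detail than the bare FAILURE return value is to pop the error message off the pipeline's bus (a debugging sketch using the plain GStreamer API, not GSCam code):
GstBus* bus = gst_element_get_bus(pipeline_);
GstMessage* msg = gst_bus_pop_filtered(bus, GST_MESSAGE_ERROR);
if (msg != NULL) {
    GError* err = NULL;
    gchar* debug_info = NULL;
    gst_message_parse_error(msg, &err, &debug_info);
    g_printerr("GStreamer error: %s (%s)\n", err->message,
               debug_info ? debug_info : "no debug info");
    g_error_free(err);
    g_free(debug_info);
    gst_message_unref(msg);
}
gst_object_unref(bus);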
From the answer given here, I realized that my problem stemmed from the fact that the common ROS GSCam package (downloaded via PPA) uses GStreamer 0.10, but because I was building the package from source inside the Docker image, it used GStreamer 1.0. Changing the build dependencies to
<build_depend>libgstreamer0.10-dev</build_depend>
<build_depend>libgstreamer-plugins-base0.10-dev</build_depend>
in the package.xml file solved the problem.
I'm trying to open a UDP video stream on a Raspberry Pi using this pipeline:
VideoCapture video("udpsrc port=5600 ! application/x-rtp,payload=96,encoding-name=H264 !"
"rtpjitterbuffer mode=1 ! rtph264depay ! h264parse ! decodebin ! videoconvert ! appsink emit-signals=true sync=false max-buffers=2 drop=true", cv::CAP_GSTREAMER);
// Exit if video is not opened
if(!video.isOpened())
{
cout << "Could not read video file" << endl;
return 1;
}
However, video.isOpened() returns false, and I wasn't able to open the stream with this code. It works in a loopback test and on another Ubuntu 18.04 PC, but an RPi 4 (Buster OS) can't run it. The following command line, however, can display the incoming GStreamer video:
gst-launch-1.0 udpsrc port=5600 caps='application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264' ! rtph264depay ! h264parse ! avdec_h264 ! autovideosink fps-update-interval=1000 sync=false
Furthermore, specific code (e.g. video_udp.cpp) can easily handle the video, but it's also hard to use with OpenCV.
NOTE: OpenCV version is 4.2.0-pre
The problem is about using the GStreamer library as a plugin of OpenCV. OpenCV doesn't throw an exception even if you build the source code without GStreamer support. (By default, the GStreamer library was found directly on Ubuntu; conversely, the Raspberry Pi 4 couldn't find it.)
First, I checked the build information of OpenCV with std::cout << cv::getBuildInformation(); on the Ubuntu 18.04 machine and found:
GStreamer: YES (1.14.5)
I then checked this on the Raspberry Pi 4 side, and the build information was:
GStreamer:NO
Before building OpenCV, I compared the GStreamer plugins on both machines with the gst-inspect-1.0 command and installed some missing packages like gstreamer1.0-tools. I didn't know the cause before checking the build information, so I also installed some other GStreamer plugins that I don't remember now.
Lastly, I rebuilt OpenCV with the -D WITH_GSTREAMER=ON flag added, and now it works well.
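For reference, the rebuild looked roughly like this from the OpenCV build directory (only WITH_GSTREAMER=ON is essential; the other options and paths are assumptions):
cmake -D CMAKE_BUILD_TYPE=Release -D WITH_GSTREAMER=ON ..
make -j4
sudo make install
After rebuilding, cv::getBuildInformation() should report GStreamer: YES.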
I'll edit the answer if the problem turns out to be related to those missing plugins installed later. To check, I'll retry this on a clean Buster OS image.
I'm trying to develop an application which should analyse a video stream from a MIPI camera (5 MP). I'm using GStreamer to get the video feed and access it using OpenCV. I tried the following pipeline and it's working:
imxv4l2videosrc device="/dev/video0" ! autovideosink
But when I try to use it with OpenCV, it gives some errors.
VideoCapture cap("imxv4l2videosrc device=\"/dev/video0\" ! autovideosink");
OpenCV Error: Unspecified error (GStreamer: cannot find appsink in manual pipeline
) in cvCaptureFromCAM_GStreamer, file /root/OpenCV/opencv/modules/videoio/src/cap_gstreamer.cpp, line 759
VIDEOIO(cvCreateCapture_GStreamer (CV_CAP_GSTREAMER_FILE, filename)): raised OpenCV exception:
/root/OpenCV/opencv/modules/videoio/src/cap_gstreamer.cpp:759: error: (-2) GStreamer: cannot find appsink in manual pipeline
in function cvCaptureFromCAM_GStreamer
Then I tried the following pipeline, and it's not working either:
VideoCapture cap("imxv4l2videosrc device=\"/dev/video0\" ! appsink");
ERROR: unrecognized std! 0 (PAL=ff, NTSC=b000
ERROR: v4l2 capture: unsupported ioctrl!
GStreamer Plugin: Embedded video playback halted; module imxv4l2videosrc0 reported: Internal data flow error.
ERROR: v4l2 capture: unsupported ioctrl!
OpenCV Error: Unspecified error (GStreamer: unable to start pipeline
) in cvCaptureFromCAM_GStreamer, file /root/OpenCV/opencv/modules/videoio/src/cap_gstreamer.cpp, line 832
VIDEOIO(cvCreateCapture_GStreamer (CV_CAP_GSTREAMER_FILE, filename)): raised OpenCV exception:
/root/OpenCV/opencv/modules/videoio/src/cap_gstreamer.cpp:832: error: (-2) GStreamer: unable to start pipeline
in function cvCaptureFromCAM_GStreamer
GStreamer version: 1.0
OpenCV version: 3.2
What is the piece I'm missing here?
Or is my approach wrong?
Here is the answer to my question (with @Alper Kucukkomurler's help).
You can access the MIPI camera through OpenCV (with GStreamer) with
VideoCapture cap("imxv4l2videosrc device=\"/dev/video0\" ! videoconvert ! appsink");
Also, if you want to change the resolution of the input, the imx-capture-mode parameter of the imxv4l2videosrc element can be used.
For example,
imxv4l2videosrc imx-capture-mode=5 ! <other elements>
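Putting both together, a capture with an explicit capture mode could look like this (mode 5 is just the illustrative value from above; the valid modes depend on the sensor):
VideoCapture cap("imxv4l2videosrc imx-capture-mode=5 device=\"/dev/video0\" ! videoconvert ! appsink");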