I am trying to use the ROS GSCam package in a Docker container to read from a camera stream and publish to a ROS topic. Using GStreamer via gst-launch works fine in the container. For example, running
gst-launch-1.0 -v tcpclientsrc host=10.0.0.20 port=7001 ! decodebin ! filesink location=xyz.flv
in the container successfully saves the camera stream to the xyz.flv file. When I try to run GSCam with the following commands
roscd gscam
mkdir bin
cd bin
export GSCAM_CONFIG="tcpclientsrc host=10.0.0.20 port=7001 ! decodebin ! ffmpegcolorspace"
rosrun gscam gscam
the camera stream is published to a ROS topic if these commands are run outside the container, but when they are run inside the container they produce the following output:
[ INFO] [1625351876.482947987]: Using GStreamer config from env: "tcpclientsrc host=10.0.0.20 port=7001 ! decodebin ! ffmpegcolorspace"
[ INFO] [1625351876.486672898]: using default calibration URL
[ INFO] [1625351876.486702714]: camera calibration URL: file:///root/.ros/camera_info/camera.yaml
[ INFO] [1625351876.486735225]: Unable to open camera calibration file [/root/.ros/camera_info/camera.yaml]
[ WARN] [1625351876.486750795]: Camera calibration file /root/.ros/camera_info/camera.yaml not found.
[ INFO] [1625351876.486759250]: Loaded camera calibration from
[ INFO] [1625351876.502708790]: Time offset: 1625351189.738
[FATAL] [1625351876.519311654]: Failed to PAUSE stream, check your GStreamer configuration.
[FATAL] [1625351876.519330710]: Failed to initialize GSCam stream!
What can I do to fix this?
EDIT: The problem seems to come from this piece of code:
gst_element_set_state(pipeline_, GST_STATE_PAUSED);
if (gst_element_get_state(pipeline_, NULL, NULL, -1) == GST_STATE_CHANGE_FAILURE) {
ROS_FATAL("Failed to PAUSE stream, check your gstreamer configuration.");
return false;
}
Using code from here, I can see that the return value of gst_element_set_state() is SUCCESS, as is the return value of gst_element_get_state() for each element of the pipeline (tcpclientsrc0, decodebin0, and appsink0). Strangely, however, the output of gst_element_get_state(pipeline_, NULL, NULL, -1) is FAILURE, which doesn't seem to make sense. What am I missing? Can a pipeline state change fail even though all of its elements change state successfully? Or does the pipeline have some other (perhaps hidden) element that fails to change state?
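For reference, the per-element check I ran was roughly the following sketch (GStreamer 1.0 iterator API; pipeline_ is the same GstElement* as in the snippet above):

GstIterator *it = gst_bin_iterate_elements(GST_BIN(pipeline_));
GValue item = G_VALUE_INIT;
while (gst_iterator_next(it, &item) == GST_ITERATOR_OK) {
  // Query each element's state individually, blocking until it settles.
  GstElement *elem = GST_ELEMENT(g_value_get_object(&item));
  GstState state;
  GstStateChangeReturn ret =
      gst_element_get_state(elem, &state, NULL, GST_CLOCK_TIME_NONE);
  g_print("%s: %s (%s)\n", GST_ELEMENT_NAME(elem),
          gst_element_state_get_name(state),
          gst_element_state_change_return_get_name(ret));
  g_value_reset(&item);
}
g_value_unset(&item);
gst_iterator_free(it);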
From the answer given here I realized that my problem stemmed from the fact that the common ROS GSCam package (downloaded via PPA) uses GStreamer 0.10, but because I was building the package from source inside the Docker image, it used GStreamer 1.0 instead. Changing the build dependencies to
<build_depend>libgstreamer0.10-dev</build_depend>
<build_depend>libgstreamer-plugins-base0.10-dev</build_depend>
in the package.xml file solved the problem.
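As a quick sanity check, you can also print which GStreamer version a binary is actually linked against with something like this (gst_version exists in both 0.10 and 1.0):

#include <gst/gst.h>

int main(int argc, char *argv[]) {
  guint major, minor, micro, nano;
  gst_init(&argc, &argv);
  // Reports the version of the GStreamer library loaded at runtime.
  gst_version(&major, &minor, &micro, &nano);
  g_print("Linked against GStreamer %u.%u.%u\n", major, minor, micro);
  return 0;
}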
Related
I am creating a video stream from my headless Raspberry Pi using a USB camera and OpenCV.
Ubuntu 18.04 LTS
Raspi OS: Bullseye
USB Camera
GStreamer syntax test in the terminal:
gst-launch-1.0 v4l2src device=/dev/video0 ! videoconvert! videoscale ! video/x-raw, width=640, height=480 ! autovideosink -v
In OpenCV C++:
string pipeline;
pipeline = "v4l2src device=/dev/video0 ! videoconvert! videoscale ! video/x-raw, width=640, height=480 ! autovideosink -v";
cv::VideoCapture cap(pipeline, cv::CAP_GSTREAMER);
When I launch the pipeline from the terminal it creates a window with my USB webcam feed. However, when I try to use the same pipeline from C++ it throws this error:
[ WARN:0] global ../modules/videoio/src/cap_gstreamer.cpp (734) open OpenCV | GStreamer warning: Error opening bin: syntax error
[ WARN:0] global ../modules/videoio/src/cap_gstreamer.cpp (501) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created
terminate called after throwing an instance of 'std::invalid_argument'
what(): cannot predict with an empty OpenCV image
Aborted
What am I missing in the syntax of the code?
Edit:
After some trials I got the pipeline to open with this:
pipeline = "v4l2src device=/dev/video0 ! videoconvert! video/x-raw, width=640, height=480 ! appsink";
and warning:
[ WARN:0] global ../modules/videoio/src/cap_gstreamer.cpp (961) open OpenCV | GStreamer warning: Cannot query video position: status=0, value=-1, duration=-1
...
[some output for each frame]
...
[ WARN:0] global ../modules/videoio/src/cap_gstreamer.cpp (509) isPipelinePlaying OpenCV | GStreamer warning: unable to query pipeline state
[ WARN:0] global ../modules/videoio/src/cap_gstreamer.cpp (1824) handleMessage OpenCV | GStreamer warning: Embedded video playback halted; module v4l2src0 reported: Could not read from resource.
When I searched for it, I found this post, which led to a bug report here and here.
I am using OpenCV 4.5, and I assume that bug would have been fixed in a later version, as it was reported for OpenCV 3.4.
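For what it's worth, the original "syntax error" has a simpler explanation: the trailing -v is gst-launch-1.0 option syntax, not pipeline syntax, so OpenCV's internal gst_parse_launch rejects it, and a pipeline handed to cv::VideoCapture must end in appsink (not a display sink like autovideosink) so OpenCV can pull frames itself. A minimal capture loop around the working pipeline (the window name and key handling are just illustrative):

#include <opencv2/opencv.hpp>
#include <string>

int main() {
    // gst-launch flags like -v must be dropped, and the sink must be appsink.
    std::string pipeline =
        "v4l2src device=/dev/video0 ! videoconvert ! videoscale ! "
        "video/x-raw, width=640, height=480 ! appsink";
    cv::VideoCapture cap(pipeline, cv::CAP_GSTREAMER);
    if (!cap.isOpened())
        return 1;

    cv::Mat frame;
    while (cap.read(frame) && !frame.empty()) {
        cv::imshow("camera", frame);  // display in OpenCV instead of autovideosink
        if (cv::waitKey(1) == 27)     // Esc quits
            break;
    }
    return 0;
}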
I am using this command: gst-launch-1.0 v4l2src ! xvimagesink
to stream video over USB on my NVIDIA Jetson Nano, and I am getting this output:
Setting pipeline to PAUSED...
ERROR: Pipeline doesn't want to pause.
ERROR: from element /GstPipeline:pipeline0/GstXvImageSink:xvimagesink0: Could not initialise Xv output
Additional debug info:
xvimagesink.c(1773): gst_xv_image_sink_open (): /GstPipeline:pipeline0/GstXvImageSink:xvimagesink0:
Could not display (null)
Setting pipeline to NULL..
Freeing pipeline...
I am trying to build a simple program on Windows using GStreamer and the faceblur plugin that can be found in https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad.
As a starting point I used the first tutorial: https://gstreamer.freedesktop.org/documentation/tutorials/basic/hello-world.html?gi-language=c
I have modified the pipeline as follows:
GError* error = NULL;
pipeline = gst_parse_launch("autovideosrc ! videoconvert ! faceblur ! videoconvert ! autovideosink", &error);
// print errors
if (error != NULL) {
g_error("Could not create pipeline: %s\n", error->message);
g_error_free(error);
}
As you can see, I'm trying to use the faceblur plugin here.
However, at runtime, the following error is thrown:
ERROR **: 12:09:56.343: Could not create pipeline: no element "faceblur"
My question is: how do I make this work on Windows?
How do I "install" the required plugin?
I am using Visual Studio for linking/compiling.
Thank you for your help.
When installing GStreamer on Windows, you should select ALL the plugins (or choose a complete installation). faceblur comes from gst-plugins-bad, which a typical installation can leave out.
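If in doubt, you can also check from code whether the element is installed before building the pipeline; a small sketch:

#include <gst/gst.h>

int main(int argc, char *argv[]) {
  gst_init(&argc, &argv);
  // gst_element_factory_find returns NULL when no installed plugin
  // provides the requested element.
  GstElementFactory *factory = gst_element_factory_find("faceblur");
  if (factory == NULL) {
    g_printerr("faceblur not found; re-run the installer with the complete plugin set\n");
    return 1;
  }
  g_print("faceblur is available\n");
  gst_object_unref(factory);
  return 0;
}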
I'm trying to open a UDP video stream on a Raspberry Pi using this pipeline:
VideoCapture video("udpsrc port=5600 ! application/x-rtp,payload=96,encoding-name=H264 !"
"rtpjitterbuffer mode=1 ! rtph264depay ! h264parse ! decodebin ! videoconvert ! appsink emit-signals=true sync=false max-buffers=2 drop=true", cv::CAP_GSTREAMER);
// Exit if video is not opened
if(!video.isOpened())
{
cout << "Could not read video file" << endl;
return 1;
}
However, video.isOpened() returns false, so I could not open the stream with this code. It works in a loopback test and on another Ubuntu 18.04 PC, but the RPi 4 (Buster OS) couldn't run it. Also, the following gst-launch command can play the incoming stream:
gst-launch-1.0 udpsrc port=5600 caps='application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264' ! rtph264depay ! h264parse ! avdec_h264 ! autovideosink fps-update-interval=1000 sync=false
Furthermore, dedicated code (e.g. [video_udp.cpp][1]) can easily handle the video, but it is hard to use together with OpenCV.
NOTE: OpenCV version is 4.2.0-pre
The problem is about using the GStreamer library as a plugin of OpenCV: OpenCV doesn't throw an exception even if you build its source code without GStreamer support. (By default, the GStreamer library was found automatically on Ubuntu, whereas Raspberry Pi 4 couldn't find it.)
First I checked the build information of OpenCV with std::cout << cv::getBuildInformation(); on the Ubuntu 18.04 machine and found:
GStreamer: YES (1.14.5)
I then checked the same on the Raspberry Pi 4, where the build information showed:
GStreamer: NO
Before rebuilding OpenCV I compared the GStreamer plugins on the two machines with the gst-inspect-1.0 command and installed some missing packages such as gstreamer1.0-tools. I didn't know the cause before checking the build information, so I also installed some other GStreamer plugins that I no longer remember.
Finally, I rebuilt OpenCV with the -D WITH_GSTREAMER=ON flag, and now it works well.
I'll edit this answer if the problem turns out to be related to those later-installed plugins; to check, I'll retest on a clean Buster OS image.
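For reference, a quick programmatic check that a given OpenCV build has GStreamer support (cv::videoio_registry::hasBackend is available in recent OpenCV 4.x):

#include <opencv2/opencv.hpp>
#include <opencv2/videoio/registry.hpp>
#include <iostream>

int main() {
    // True only if this OpenCV build was compiled with GStreamer support.
    bool ok = cv::videoio_registry::hasBackend(cv::CAP_GSTREAMER);
    std::cout << "GStreamer backend: " << (ok ? "YES" : "NO") << std::endl;
    // The full build summary contains the same "GStreamer:" line.
    // std::cout << cv::getBuildInformation() << std::endl;
    return 0;
}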
I have one of the new camera add-ons for a Raspberry Pi. It doesn't yet have video4linux support but comes with a small program that spits out a 1080p h264 stream. I have verified this works and got it pushing the video to stdout with:
raspivid -n -t 1000000 -vf -b 2000000 -fps 25 -o -
I would like to process this stream such that I end up with a snapshot of the video taken once a second.
Since it's 1080p I will need to use the RPi's hardware support for H264 encoding. I believe gstreamer is the only app to support this, so solutions using ffmpeg or avconv won't work. I've used the build script at http://www.trans-omni.co.uk/pi/GStreamer-1.0/build_gstreamer to build gstreamer and the plugin for hardware H264 encoding, and it appears to work:
root@raspberrypi:~/streamtest# GST_OMX_CONFIG_DIR=/etc/gst gst-inspect-1.0 | grep 264
...
omx: omxh264enc: OpenMAX H.264 Video Encoder
omx: omxh264dec: OpenMAX H.264 Video Decoder
So I need to construct a gst-launch pipeline that takes video on stdin and spits out a fresh jpeg once a second. I know I can use gstreamer's 'multifilesink' sink to do this, so I have come up with the following short script to launch it:
root@raspberrypi:~/streamtest# cat test.sh
#!/bin/bash
export GST_OMX_CONFIG_DIR=/etc/gst
raspivid -n -t 1000000 -vf -b 2000000 -fps 25 -o - | \
gst-launch-1.0 fdsrc fd=0 ! decodebin ! videorate ! video/x-raw,framerate=1/1 ! jpegenc ! multifilesink location=img_%03d.jpeg
Trouble is it doesn't work: gstreamer just sits forever in the prerolling state and never spits out my precious jpegs.
root@raspberrypi:~/streamtest# ./test.sh
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
[waits forever]
In case it's helpful output with gstreamer's -v flag set is at http://pastebin.com/q4WySu4L
Can anyone explain what I'm doing wrong?
We finally found a solution to this. My gstreamer pipeline was mostly right but two problems combined to stop it working:
raspivid doesn't add timestamps to the h264 frames it produces
recent versions of gstreamer have a bug which stops them handling untimestamped frames
Run a 1.0 build of gstreamer (be sure to build from scratch & remove all traces of previous attempts) and the problem goes away.
See http://gstreamer-devel.966125.n4.nabble.com/Capturing-jpegs-from-an-h264-stream-tt4660254.html for the mailing list thread.
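If rebuilding gstreamer is not an option, one workaround worth trying (a sketch under the timestamp diagnosis above, not the confirmed fix from the thread) is to have fdsrc timestamp incoming buffers itself via the do-timestamp property it inherits from GstBaseSrc, with h264parse delimiting the raw stream:

#include <gst/gst.h>

int main(int argc, char *argv[]) {
  gst_init(&argc, &argv);
  GError *error = NULL;
  // do-timestamp=true makes fdsrc stamp buffers on arrival, compensating
  // for raspivid's untimestamped h264 output.
  GstElement *pipeline = gst_parse_launch(
      "fdsrc fd=0 do-timestamp=true ! h264parse ! decodebin ! videorate ! "
      "video/x-raw,framerate=1/1 ! jpegenc ! multifilesink location=img_%03d.jpeg",
      &error);
  if (error != NULL) {
    g_printerr("Failed to build pipeline: %s\n", error->message);
    g_error_free(error);
    return 1;
  }
  gst_element_set_state(pipeline, GST_STATE_PLAYING);
  // Block until the stream ends or errors out.
  GstBus *bus = gst_element_get_bus(pipeline);
  GstMessage *msg = gst_bus_timed_pop_filtered(
      bus, GST_CLOCK_TIME_NONE,
      (GstMessageType)(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));
  if (msg != NULL)
    gst_message_unref(msg);
  gst_object_unref(bus);
  gst_element_set_state(pipeline, GST_STATE_NULL);
  gst_object_unref(pipeline);
  return 0;
}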