I am trying to install the official NVIDIA Codecs for GStreamer. I have the following setup:
Ubuntu 18.04
Gstreamer 1.14.5
NVIDIA QUADRO P2000
NVIDIA-SMI 440.100 Driver Version: 440.100
CUDA Version 10.2.89
NVIDIA Video_Codec_SDK_9.0.20
I followed this installation guide http://lifestyletransfer.com/how-to-install-nvidia-gstreamer-plugins-nvenc-nvdec-on-ubuntu/
After the installation I can use nvdec in the following command without a problem:
gst-launch-1.0 filesrc location=jumanji.mp4 ! qtdemux ! h264parse ! nvdec ! glimagesink sync=false
However, when trying to use the encoder nvh264enc with the following command:
gst-launch-1.0 videotestsrc num-buffers=10000 ! nvh264enc ! h264parse ! mp4mux ! filesink location=video.mp4
I get the following error:
Error: from Element /GstPipeline:pipeline0/GstNvH264Enc:nvh264enc0: The Supported library could not be initialized. gstvideoencoder.c(1627): gst_video_encoder_change_state (): /GstPipeline:pipeline0/GstNvH264Enc:nvh264enc0: Failed to open encoder
I have tried to look for similar error reports without luck. Any lead on how to solve it would be deeply appreciated.
EDIT:
By executing the previous pipeline with the debug level --gst-debug-level=5 I can see the following error message in the log (the full command is repeated after the log):
nvenc gstnvenc.c:267:gst_nvenc_create_cuda_context: Initialising CUDA..
0:00:00.523634157 7971 0x56375974c600 INFO nvenc gstnvenc.c:276:gst_nvenc_create_cuda_context: Initialised CUDA
0:00:00.523654036 7971 0x56375974c600 INFO nvenc gstnvenc.c:284:gst_nvenc_create_cuda_context: 1 CUDA device(s) detected
0:00:00.523702909 7971 0x56375974c600 INFO nvenc gstnvenc.c:290:gst_nvenc_create_cuda_context: GPU #0 supports NVENC: yes (Quadro P2000) (Compute SM 6.1)
0:00:00.646223264 7971 0x56375974c600 INFO nvenc gstnvenc.c:312:gst_nvenc_create_cuda_context: Created CUDA context 0x5637599d78f0
0:00:00.646239492 7971 0x56375974c600 ERROR nvenc gstnvbaseenc.c:437:gst_nv_base_enc_open: Failed to create NVENC encoder session, ret=15
0:00:00.646262028 7971 0x56375974c600 INFO nvenc gstnvenc.c:320:gst_nvenc_destroy_cuda_context: Destroying CUDA context 0x5637599d78f0
0:00:00.755491991 7971 0x56375974c600 WARN videoencoder gstvideoencoder.c:1627:gst_video_encoder_change_state: error: Failed to open encoder
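For completeness, the full debug invocation looks roughly like this; the flag is the one mentioned above, and redirecting stderr to a log file is just an added convenience, not part of the original command:
# Same encode pipeline as above, with GStreamer debug output captured to a file
gst-launch-1.0 --gst-debug-level=5 videotestsrc num-buffers=10000 ! nvh264enc ! \
    h264parse ! mp4mux ! filesink location=video.mp4 2> nvenc_debug.log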
You need to build and install gst-plugins-bad with the NVIDIA encoder/decoder plugins enabled; they don't exist until you do. This involves downloading and installing CUDA, cloning the repo (or grabbing the source that matches your installed GStreamer version), building the plugins, and installing them into the relevant plugin directory.
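A rough sketch of those steps for the setup in the question (GStreamer 1.14.5 on Ubuntu 18.04). The SDK path, the NVENCODE_CFLAGS variable name and the plugin directory are assumptions, so adapt them to your tree and check ./configure --help:
# Fetch the gst-plugins-bad source that matches the installed GStreamer version
GST_VER=1.14.5
wget https://gstreamer.freedesktop.org/src/gst-plugins-bad/gst-plugins-bad-${GST_VER}.tar.xz
tar xf gst-plugins-bad-${GST_VER}.tar.xz && cd gst-plugins-bad-${GST_VER}

# Point the build at the Video Codec SDK headers (assumed location and
# assumed variable name -- verify against ./configure --help on your tree)
export NVENCODE_CFLAGS="-I${HOME}/Video_Codec_SDK_9.0.20/include"
./configure
make

# Install only the two NVIDIA plugins into GStreamer's plugin directory
# (assumed path for Ubuntu 18.04 x86_64)
sudo cp sys/nvenc/.libs/libgstnvenc.so sys/nvdec/.libs/libgstnvdec.so \
    /usr/lib/x86_64-linux-gnu/gstreamer-1.0/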
Related
I'm building a custom Yocto image for the i.MX8M Mini.
It builds gstreamer-1.18.5.imx.
At first, I can see that the video4linux2 shared library is installed into the rootfs with the default package configuration options, as set in gstreamer1.0-plugins-good_1.18.5.imx.bb.
The build succeeded.
But on the target board, gst-inspect-1.0 | grep v4l2 only lists three plugins:
v4l2src
v4l2sink
v4l2device...
What I'm actually looking for are the video decoder plugins. I don't have them and I don't really understand why they're missing, given that all the Yocto recipes succeeded.
Can anyone suggest what I should look into to fix this issue?
I am using this command: gst-launch-1.0 v4l2src ! xvimagesink
to stream video over USB on my NVIDIA Jetson Nano, and I am getting this output:
Setting pipeline to PAUSED...
ERROR: Pipeline doesn't want to pause.
ERROR: from element /GstPipeline:pipeline0/GstXvImageSink:xvimagesink0: Could not initialise Xv output
Additional debug info:
xvimagesink.c(1773): gst_xv_image_sink_open (): /GstPipeline:pipeline0/GstXvImageSink:xvimagesink0:
Could not display (null)
Setting pipeline to NULL..
Freeing pipeline...
I'm trying to open UDP stream video in Raspberry Pi using this pipeline:
VideoCapture video("udpsrc port=5600 ! application/x-rtp,payload=96,encoding-name=H264 !"
"rtpjitterbuffer mode=1 ! rtph264depay ! h264parse ! decodebin ! videoconvert ! appsink emit-signals=true sync=false max-buffers=2 drop=true", cv::CAP_GSTREAMER);
// Exit if video is not opened
if (!video.isOpened())
{
    cout << "Could not read video file" << endl;
    return 1;
}
However, video.isOpened() returns false and I cannot open the stream with this code. The same code works in a loopback test and on another Ubuntu 18.04 PC, but the RPi 4 (Buster OS) couldn't run it. The following gst-launch command does play the incoming stream:
gst-launch-1.0 udpsrc port=5600 caps='application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264' ! rtph264depay ! h264parse ! avdec_h264 ! autovideosink fps-update-interval=1000 sync=false
Furthermore, a dedicated code stack (e.g. [video_udp.cpp][1]) can handle the video easily, but it's hard to use together with OpenCV.
NOTE: OpenCV version is 4.2.0-pre
The problem is about using the GStreamer library as an OpenCV backend. OpenCV doesn't throw an exception even if you build it from source without GStreamer support. (By default the GStreamer library was found directly on Ubuntu, whereas the Raspberry Pi 4 couldn't find it.)
First I checked the build information of OpenCV with std::cout << cv::getBuildInformation(); on the Ubuntu 18.04 machine and found:
GStreamer: YES (1.14.5)
I also checked this on the Raspberry Pi 4 side, and the build information was:
GStreamer:NO
Before building OpenCV, I compared the available GStreamer plugins on both machines with the gst-inspect-1.0 command and installed some missing packages such as gstreamer1.0-tools. Since I didn't know the actual problem before checking the build information, I also installed some other GStreamer packages that I no longer remember.
Lastly, I rebuilt OpenCV with the -D WITH_GSTREAMER=ON flag added, and now it works well.
I'll edit this answer if the problem turns out to be related to those later-installed plugins; to verify, I'll re-check the issue on a clean Buster OS image.
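For reference, the rebuild boiled down to something like the following sketch; only -D WITH_GSTREAMER=ON comes from this answer, while the dev package names, the build directory and the remaining flags are assumptions to adapt:
# Make sure the GStreamer development packages are installed before configuring,
# otherwise CMake silently disables the backend again (package names assumed)
sudo apt-get install libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev

# Reconfigure and rebuild OpenCV with the GStreamer backend enabled
cd ~/opencv/build
cmake -D CMAKE_BUILD_TYPE=Release -D WITH_GSTREAMER=ON ..
make -j4
sudo make install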
I'm trying to develop an application which should analyse a video stream from a MIPI camera (5MP), so I'm using gstreamer to get the video feed and access it with OpenCV. I tried the following pipeline and it's working:
imxv4l2videosrc device="/dev/video0" ! autovideosink
But when I try to use it with OpenCV, it gives some errors.
VideoCapture cap("imxv4l2videosrc device=\"/dev/video0\" ! autovideosink");
OpenCV Error: Unspecified error (GStreamer: cannot find appsink in manual pipeline
) in cvCaptureFromCAM_GStreamer, file /root/OpenCV/opencv/modules/videoio/src/cap_gstreamer.cpp, line 759
VIDEOIO(cvCreateCapture_GStreamer (CV_CAP_GSTREAMER_FILE, filename)): raised OpenCV exception:
/root/OpenCV/opencv/modules/videoio/src/cap_gstreamer.cpp:759: error: (-2) GStreamer: cannot find appsink in manual pipeline
in function cvCaptureFromCAM_GStreamer
Then I tried to use the following pipeline, and it's not working either:
VideoCapture cap("imxv4l2videosrc device=\"/dev/video0\" ! appsink");
ERROR: unrecognized std! 0 (PAL=ff, NTSC=b000
ERROR: v4l2 capture: unsupported ioctrl!
GStreamer Plugin: Embedded video playback halted; module imxv4l2videosrc0 reported: Internal data flow error.
ERROR: v4l2 capture: unsupported ioctrl!
OpenCV Error: Unspecified error (GStreamer: unable to start pipeline
) in cvCaptureFromCAM_GStreamer, file /root/OpenCV/opencv/modules/videoio/src/cap_gstreamer.cpp, line 832
VIDEOIO(cvCreateCapture_GStreamer (CV_CAP_GSTREAMER_FILE, filename)): raised OpenCV exception:
/root/OpenCV/opencv/modules/videoio/src/cap_gstreamer.cpp:832: error: (-2) GStreamer: unable to start pipeline
in function cvCaptureFromCAM_GStreamer
GStreamer version: 1.0
OpenCV version: 3.2
What is the piece I'm missing here?
Or is my approach wrong?
Here is the answer to my question (with @Alper Kucukkomurler's help):
You can access the MIPI camera through OpenCV (with GStreamer) with
VideoCapture cap("imxv4l2videosrc device=\"/dev/video0\" ! videoconvert ! appsink");
Also, if you want to change the resolution of the input, you can use the imx-capture-mode property of the imxv4l2videosrc element.
For example,
imxv4l2videosrc imx-capture-mode=5 ! <other elements>
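As a quick sanity check, it can help to try the capture mode on the command line before wiring it into OpenCV; in this sketch the mode value is just the example above and autovideosink is an assumed display sink:
# Verify the capture mode on the command line first (example values assumed)
gst-launch-1.0 imxv4l2videosrc device=/dev/video0 imx-capture-mode=5 ! videoconvert ! autovideosink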
I have one of the new camera add-ons for a Raspberry Pi. It doesn't yet have video4linux support but comes with a small program that spits out a 1080p h264 stream. I have verified this works and got it pushing the video to stdout with:
raspivid -n -t 1000000 -vf -b 2000000 -fps 25 -o -
I would like to process this stream such that I end up with a snapshot of the video taken once a second.
Since it's 1080p I will need to use the Pi's hardware H264 support. I believe gstreamer is the only app that supports this, so solutions using ffmpeg or avconv won't work. I've used the build script at http://www.trans-omni.co.uk/pi/GStreamer-1.0/build_gstreamer to build gstreamer and the plugin for hardware H264 encoding, and it appears to work:
root@raspberrypi:~/streamtest# GST_OMX_CONFIG_DIR=/etc/gst gst-inspect-1.0 | grep 264
...
omx: omxh264enc: OpenMAX H.264 Video Encoder
omx: omxh264dec: OpenMAX H.264 Video Decoder
So I need to construct a gst-launch pipeline that takes video on stdin and spits out a fresh jpeg once a second. I know I can use gstreamer's multifilesink sink to do this, so I have come up with the following short script to launch it:
root@raspberrypi:~/streamtest# cat test.sh
#!/bin/bash
export GST_OMX_CONFIG_DIR=/etc/gst
raspivid -n -t 1000000 -vf -b 2000000 -fps 25 -o - | \
gst-launch-1.0 fdsrc fd=0 ! decodebin ! videorate ! video/x-raw,framerate=1/1 ! jpegenc ! multifilesink location=img_%03d.jpeg
Trouble is it doesn't work: gstreamer just sits forever in the prerolling state and never spits out my precious jpegs.
root@raspberrypi:~/streamtest# ./test.sh
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
[waits forever]
In case it's helpful output with gstreamer's -v flag set is at http://pastebin.com/q4WySu4L
Can anyone explain what I'm doing wrong?
We finally found a solution to this. My gstreamer pipeline was mostly right but two problems combined to stop it working:
raspivid doesn't add timestamps to the h264 frames it produces
recent versions of gstreamer have a bug which stops them handling untimestamped frames
Run a 1.0 build of gstreamer (be sure to build from scratch & remove all traces of previous attempts) and the problem goes away.
See http://gstreamer-devel.966125.n4.nabble.com/Capturing-jpegs-from-an-h264-stream-tt4660254.html for the mailing list thread.
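After rebuilding, a couple of quick sanity checks can confirm the pipeline is picking up the fresh 1.0 build rather than a leftover from an earlier attempt; these are generic checks, not steps taken from the thread above:
# Confirm which gst-launch-1.0 is on PATH and that it is the new 1.0 build
which gst-launch-1.0
gst-launch-1.0 --version
# Confirm the OpenMAX H264 decoder is still registered with the new build
GST_OMX_CONFIG_DIR=/etc/gst gst-inspect-1.0 omxh264dec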