Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only): 4.4
• TensorRT Version: 7.0
• NVIDIA GPU Driver Version (valid for GPU only):
• Issue Type (questions, new requirements, bugs): questions
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
print("Linking demux to the rtppayload in the Pipeline \n")
for i in range(number_of_sources):
    demux_srcpad = streamdemux.get_request_pad("src_%u" % i)
    if not demux_srcpad:
        sys.stderr.write("Unable to get the src pad of streamdemux \n")
    sinkpad = rtppayload_list[i].get_static_pad("sink")
    if not sinkpad:
        sys.stderr.write("Unable to get sink pad of rtppayload \n")
    demux_srcpad.link(sinkpad)
• Requirement details (This is for new requirements. Include the module name, i.e. for which plugin or which sample application, and the function description.)
I am trying to create source pads for the nvstreamdemux element at run time and link them to several rtph264pay elements that reside inside the list rtppayload_list. The code above results in the following error:
gi.overrides.Gst.LinkError:
Any help would be appreciated. Thanks!
You can only link elements and pads that are compatible with each other.
In your case, nvstreamdemux outputs raw data in NV12 or RGBA format on its source pads, whereas rtph264pay expects an H.264-encoded stream on its sink pad, so the two are incompatible.
You need to link nvstreamdemux to an element that encodes the raw data into H.264, such as nvv4l2h264enc, and then in turn link nvv4l2h264enc to rtph264pay.
So your pipeline should look like:
nvstreamdemux -> nvv4l2h264enc -> rtph264pay
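A minimal sketch of the corrected linking loop, assuming an encoder_list that holds one nvv4l2h264enc per source, each already created and added to the pipeline (encoder_list is an assumed name, not from the original code):

print("Linking demux to encoder to rtppayload in the Pipeline \n")
for i in range(number_of_sources):
    demux_srcpad = streamdemux.get_request_pad("src_%u" % i)
    if not demux_srcpad:
        sys.stderr.write("Unable to get the src pad of streamdemux \n")
    # The demuxer's raw NV12/RGBA output goes into the encoder first
    demux_srcpad.link(encoder_list[i].get_static_pad("sink"))
    # The encoder's H.264 output then feeds the RTP payloader
    encoder_list[i].link(rtppayload_list[i])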
I'm working with GStreamer 1.18 (built with gst-build). I'm trying to use the lossless presets of the nvh265enc plugin. With the following pipeline, I can successfully use all presets except the lossless ones (lossless (6) and lossless-hp (7)):
gst-launch-1.0 videotestsrc ! nvh265enc preset=6 ! h265parse ! nvh265dec ! glimagesink
Whenever I set preset to 6 or 7, I get the following error.
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
Got context from element 'sink': gst.gl.GLDisplay=context, gst.gl.GLDisplay=(GstGLDisplay)"\(GstGLDisplayX11\)\ gldisplayx11-0";
Got context from element 'nvh265dec0': gst.cuda.context=context, gst.cuda.context=(GstCudaContext)"\(GstCudaContext\)\ cudacontext0", cuda-device-id=(int)0;
ERROR: from element /GstPipeline:pipeline0/GstNvH265Enc:nvh265enc0: Could not configure supporting library.
Additional debug info:
../subprojects/gst-plugins-bad/sys/nvcodec/gstnvbaseenc.c(1712): gst_nv_base_enc_set_format (): /GstPipeline:pipeline0/GstNvH265Enc:nvh265enc0:
Failed to init encoder: 8
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
ERROR: from element /GstPipeline:pipeline0/GstNvH265Enc:nvh265enc0: Could not configure supporting library.
Additional debug info:
../subprojects/gst-plugins-bad/sys/nvcodec/gstnvbaseenc.c(1712): gst_nv_base_enc_set_format (): /GstPipeline:pipeline0/GstNvH265Enc:nvh265enc0:
Failed to init encoder: 8
ERROR: pipeline doesn't want to preroll.
Freeing pipeline ...
What's more puzzling is that the lossless preset works with the samples from the NVIDIA Video Codec SDK 9.
Did I miss any additional configuration?
EDIT: I finally found that adding qp-const=0 or rc-mode=1 to nvh265enc worked.
Well, first of all, there is no difference between lossless and lossless-hp.
See https://superuser.com/questions/1528215/what-is-the-difference-between-nvenc-hevc-lossless-and-losslesshp-presets
Second of all, GStreamer is not an application that NVIDIA natively supports; FFmpeg, on the other hand, is. For example, the B-frames-as-reference mode with its two submodes (middle and each) is also not supported in GStreamer. See: https://forum.videohelp.com/threads/387613-Nvidia-h-265-hevc-lossless#post2509093
ffmpeg -vsync 0 -r 60 -hwaccel cuda -hwaccel_output_format cuda -i "in.mp4" -c:v hevc_nvenc -preset lossless "out.mp4"
P.S. GStreamer supports lossless encoding with rc-mode=1 or qp-const=0.
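Applied to the pipeline from the question, that means, for example:
gst-launch-1.0 videotestsrc ! nvh265enc preset=6 rc-mode=1 ! h265parse ! nvh265dec ! glimagesink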
I am new to GStreamer and am trying to debug an issue with the AAC codec. I found different codec_data values in different scenarios. The following are the caps I got in each scenario.
src caps: audio/mpeg, mpegversion=(int)4, framed=(boolean)true, stream-format=(string)raw, level=(string)1, base-profile=(string)lc, profile=(string)lc, codec_data=(buffer)131056e59d4800, rate=(int)24000, channels=(int)2
setcaps: audio/mpeg, mpegversion=(int)4, codec_data=(string)11900800, stream-format=(string)raw, framed=(boolean)true, enable-svp=(string)true, rate=(int)48000, channels=(int)2
Could you please help me understand what codec_data is?
codec_data contains additional data used to initialize the decoder; e.g. it carries information about the sample rate and the number of channels in the stream.
You can parse this data according to the codec being used. Check the codec's specification for the format of this data.
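For AAC (mpegversion=4), codec_data is the MPEG-4 AudioSpecificConfig. As a minimal sketch, assuming the standard layout of its first two bytes (5-bit audio object type, 4-bit sampling-frequency index, 4-bit channel configuration), you can decode the values seen in the caps above:

# Minimal sketch: decode the first two bytes of an MPEG-4 AudioSpecificConfig
# (the AAC codec_data). Longer configs carry extensions, and a frequency
# index of 15 means an explicit 24-bit sample rate follows; both are ignored here.
SAMPLE_RATES = [96000, 88200, 64000, 48000, 44100, 32000,
                24000, 22050, 16000, 12000, 11025, 8000, 7350]

def parse_audio_specific_config(codec_data: bytes):
    word = (codec_data[0] << 8) | codec_data[1]
    object_type = word >> 11            # e.g. 2 = AAC LC
    freq_index = (word >> 7) & 0xF      # index into SAMPLE_RATES
    channel_config = (word >> 3) & 0xF  # channel count for the common cases
    return object_type, SAMPLE_RATES[freq_index], channel_config

print(parse_audio_specific_config(bytes.fromhex("1310")))  # (2, 24000, 2) - first caps
print(parse_audio_specific_config(bytes.fromhex("1190")))  # (2, 48000, 2) - second caps

Both results match the rate and channels fields of the corresponding caps, which is a good sanity check.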
When I try to record an RTSP stream with audio and video using GStreamer, I get the error below. When only video is recorded it works, but when the audio branch is added, the file size becomes zero and the error appears. The following is also displayed:
Missing element: MPEG4-GENERIC audio RTP depayloader
WARNING: from element /GstPlayBin:playbin0/GstURIDecodeBin:uridecodebin0: No decoder available for type 'application/x-rtp, media=(string)audio, payload=(int)96, clock-rate=(int)48000, encoding-name=(string)MPEG4-GENERIC, streamtype=(string)5, profile-level-id=(string)1, mode=(string)aac-hbr, sizelength=(string)13, indexlength=(string)3, indexdeltalength=(string)3, config=(string)1188, a-tool=(string)"LIVE555\ Streaming\ Media\ v2016.01.29", a-type=(string)broadcast, x-qt-text-nam=(string)"KMStreaming\ Server", x-qt-text-inf=(string)ch01, clock-base=(uint)3130203504, seqnum-base=(uint)34845, npt-start=(guint64)0, play-speed=(double)1, play-scale=(double)1, ssrc=(uint)3216157947'.
Additional debug info:
gsturidecodebin.c(921): unknown_type_cb (): /GstPlayBin:playbin0/GstURIDecodeBin:uridecodebin0
There are two different MPEG-4 audio RTP formats in the wild: MP4A-LATM and MPEG4-GENERIC. See RFC 3016 and RFC 3640, respectively.
It looks like GStreamer only supports MP4A-LATM. So basically, yes, the format you are trying to receive is not supported.
I'm sure I've had this pipeline working on an earlier Ubuntu system I had set up (formatted for readability):
playbin
uri=rtspt://user:pswd@192.168.xxx.yyy/ch1/main
video-sink='videoconvert
! videoflip method=counterclockwise
! fpsdisplaysink'
Yet, when I try to use it within my program, I get:
Missing element: H.264 (Main Profile) decoder
WARNING: from element /GstPlayBin:playbin0/GstURIDecodeBin:uridecodebin0:
No decoder available for type 'video/x-h264,
stream-format=(string)avc, alignment=(string)au,
codec_data=(buffer)014d001fffe10017674d001f9a6602802dff35010101400000fa000030d40101000468ee3c80,
level=(string)3.1, profile=(string)main, width=(int)1280,
height=(int)720, framerate=(fraction)0/1, parsed=(boolean)true'.
Additional debug info:
gsturidecodebin.c(938): unknown_type_cb ():
/GstPlayBin:playbin0/GstURIDecodeBin:uridecodebin0
Now I'm pretty certain I have an H.264 decoder installed, and indeed the GStreamer plugins autogen.sh/configure correctly recognised the fact. Installed packages are h264enc, libx264-142, libx264-dev and x264.
It does exactly the same thing if I use the more "acceptable" autovideosink in place of fpsdisplaysink, or if I try to play the RTSP stream with gst-play-1.0. However, it works if I use the test pattern source videotestsrc.
What am I doing wrong?
It looks like GStreamer cannot find a suitable plugin for decoding H.264. Either you do not have an H.264 decoder element installed, or GStreamer is looking in the wrong path for your elements.
First, try running gst-inspect-1.0. This should output a long list of all the elements GStreamer has detected.
If this doesn't return any elements, you probably need to set the GST_PLUGIN_PATH environment variable to point to the directory where your plugins are installed. The "Running GStreamer" documentation page should help.
If it DOES return many elements, run gst-inspect-1.0 avdec_h264 to verify that you have the H.264 decoder element.
I have searched all around and cannot find any examples or tutorials on how to access a webcam using FFmpeg in C++. Any sample code or any help pointing me to some documentation would be greatly appreciated.
Thanks in advance.
I have been working on this for months now. Your first "issue" is that ffmpeg (libavcodec and the other ffmpeg libs) does NOT access webcams, or any other device.
For a basic USB webcam, or an audio/video capture card, you first need driver software to access that device. For Linux, these drivers fall under the Video4Linux (V4L2, as it is known) category, which are modules that are part of most distros. If you are working with MS Windows, then you need to get an SDK that allows you to access the device. MS may have something for accessing generic devices, but from my experience they are not very capable, if they work at all. If you've made it this far, then you now have raw frames (video and/or audio).
THEN you get to the ffmpeg part - libavcodec - which takes the raw frames (audio and/or video) and encodes them into streams, which ffmpeg can then mux into your final container.
I have searched, but have found very few examples of all of this, and most are piecemeal.
If you don't need to actually code this yourself, the command-line ffmpeg, as well as VLC, can access these devices, capture, save to files, and even stream.
That's the best I can do for now.
ken
For Windows, use dshow.
For Linux (like Ubuntu), use Video4Linux (V4L2).
FFmpeg can take input from V4L2 and do the processing.
To find the USB video device path, type: ls /dev/video*
E.g.: /dev/video(n) where n = 0 / 1 / 2 ...
AVInputFormat - the struct which holds the information about the input device format / media device format.
av_find_input_format("v4l2") [Linux]
avformat_open_input(&AVFormatContext, "/dev/video(n)", AVInputFormat, NULL)
If the return value is != 0, there is an error.
Now you have accessed the camera using FFmpeg and can continue the operation.
Sample code is below.
#include <iostream>
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavdevice/avdevice.h>
#include <libavformat/avformat.h>
}
using namespace std;

int CaptureCam()
{
    avdevice_register_all(); // register device input formats such as v4l2
    avcodec_register_all();  // register codecs
    av_register_all();       // register muxers/demuxers

    const char *dev_name = "/dev/video0"; // here mine is video0, it may vary.
    AVInputFormat *inputFormat = av_find_input_format("v4l2");
    AVDictionary *options = NULL;
    av_dict_set(&options, "framerate", "20", 0);
    AVFormatContext *pAVFormatContext = NULL;

    // check video source (pass &options so the framerate option takes effect)
    if (avformat_open_input(&pAVFormatContext, dev_name, inputFormat, &options) != 0)
    {
        cout << "\nOops, couldn't open video source\n\n";
        return -1;
    }
    else
    {
        cout << "\nSuccess!";
    }
    return 0;
} // end function
Note: the header file <libavdevice/avdevice.h> must be included.
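A plausible compile line for the snippet above, assuming pkg-config can find the FFmpeg development packages (capturecam.cpp is a placeholder file name):
g++ capturecam.cpp -o capturecam $(pkg-config --cflags --libs libavdevice libavformat libavcodec libavutil)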
This really doesn't answer the question, as I don't have a pure ffmpeg solution for you. However, I personally use Qt for webcam access. It is C++ and has a much better API for accomplishing this. It does, however, add a very large dependency to your code.
It definitely depends on the webcam - for example, at work we use IP cameras that deliver a stream of jpeg data over the network. USB will be different.
You can look at the DirectShow samples, e.g. PlayCap (but they also provide AmCap and DVCap samples). Once you have a DirectShow input device (chances are whatever device you have will provide this natively), you can hook it up to ffmpeg via the dshow input device.
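Hooking a capture device into ffmpeg via dshow from the command line might look like this sketch (the device name "USB Camera" is a placeholder; the first command lists the real names):
ffmpeg -f dshow -list_devices true -i dummy
ffmpeg -f dshow -i video="USB Camera" out.mp4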
And having spent 5 minutes browsing the ffmpeg site to get those links, I see this...