I'm working with GStreamer 1.18 (built with gst-build). I'm trying to use the lossless preset of the nvh265enc plugin. With the following pipeline, I can successfully use all presets except the lossless ones (lossless (6) and lossless-hp (7)):
gst-launch-1.0 videotestsrc ! nvh265enc preset=6 ! h265parse ! nvh265dec ! glimagesink
Whenever I set the preset to 6 or 7, I get the following error:
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
Got context from element 'sink': gst.gl.GLDisplay=context, gst.gl.GLDisplay=(GstGLDisplay)"\(GstGLDisplayX11\)\ gldisplayx11-0";
Got context from element 'nvh265dec0': gst.cuda.context=context, gst.cuda.context=(GstCudaContext)"\(GstCudaContext\)\ cudacontext0", cuda-device-id=(int)0;
ERROR: from element /GstPipeline:pipeline0/GstNvH265Enc:nvh265enc0: Could not configure supporting library.
Additional debug info:
../subprojects/gst-plugins-bad/sys/nvcodec/gstnvbaseenc.c(1712): gst_nv_base_enc_set_format (): /GstPipeline:pipeline0/GstNvH265Enc:nvh265enc0:
Failed to init encoder: 8
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
ERROR: from element /GstPipeline:pipeline0/GstNvH265Enc:nvh265enc0: Could not configure supporting library.
Additional debug info:
../subprojects/gst-plugins-bad/sys/nvcodec/gstnvbaseenc.c(1712): gst_nv_base_enc_set_format (): /GstPipeline:pipeline0/GstNvH265Enc:nvh265enc0:
Failed to init encoder: 8
ERROR: pipeline doesn't want to preroll.
Freeing pipeline ...
What's more puzzling is that the lossless preset works with the samples from the Nvidia Video Codec SDK 9.
Did I miss any additional configuration?
EDIT: I finally found that adding qp-const=0 or rc-mode=1 to nvh265enc makes the lossless presets work.
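For example, the pipeline from above becomes:
gst-launch-1.0 videotestsrc ! nvh265enc preset=6 rc-mode=1 ! h265parse ! nvh265dec ! glimagesink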
Well, first of all, there is no difference between lossless and lossless-hp.
See https://superuser.com/questions/1528215/what-is-the-difference-between-nvenc-hevc-lossless-and-losslesshp-presets
Second, GStreamer is not an application that Nvidia natively supports; FFmpeg, on the other hand, is. For example, the "B-frames as reference" mode with its two submodes (middle and each) is also not supported in GStreamer. See: https://forum.videohelp.com/threads/387613-Nvidia-h-265-hevc-lossless#post2509093
ffmpeg -vsync 0 -r 60 -hwaccel cuda -hwaccel_output_format cuda -i "in.mp4" -c:v hevc_nvenc -preset lossless "out.mp4"
P.S. GStreamer does support lossless encoding here: set rc-mode=1 or qp-const=0 on nvh265enc.
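To check how these properties are exposed on your build (names and accepted values can differ between nvcodec versions), inspect the element:
gst-inspect-1.0 nvh265enc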
I am running the pipeline below on a Mac, but it errors out:
$ gst-launch-1.0 osxaudiosrc device=92
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
0:00:00.048601000 16777 0x7fafe585d980 WARN osxaudio gstosxcoreaudio.c:500:gst_core_audio_asbd_to_caps: No sample rate
0:00:00.048699000 16777 0x7fafe585d980 ERROR audio-info audio-info.c:304:gboolean gst_audio_info_from_caps(GstAudioInfo *, const GstCaps *): no channel-mask property given
0:00:00.048736000 16777 0x7fafe585d980 WARN basesrc gstbasesrc.c:3072:void gst_base_src_loop(GstPad *):<osxaudiosrc0> error: Internal data stream error.
0:00:00.048744000 16777 0x7fafe585d980 WARN basesrc gstbasesrc.c:3072:void gst_base_src_loop(GstPad *):<osxaudiosrc0> error: streaming stopped, reason not-negotiated (-4)
New clock: GstAudioSrcClock
ERROR: from element /GstPipeline:pipeline0/GstOsxAudioSrc:osxaudiosrc0: Internal data stream error.
Additional debug info:
gstbasesrc.c(3072): void gst_base_src_loop(GstPad *) ():
/GstPipeline:pipeline0/GstOsxAudioSrc:osxaudiosrc0:
streaming stopped, reason not-negotiated (-4)
Execution ended after 0:00:00.000101000
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
The device id used in the command was fetched with gst-inspect and belongs to the MacBook speakers. I am using GStreamer 1.16.2 on Catalina.
What is wrong/missing in this pipeline?
TL;DR: you have an incomplete pipeline.
Once osxaudiosrc starts producing buffers, where are they supposed to go? Do you want to encode them and/or write them to a file? Should they be streamed somewhere? Should they be plotted? ...
This is also the reason GStreamer is erroring out. There is no element after your source element, so if the pipeline were to start playing, those buffers would end up in the void, with no destination to go to (to be a bit more thorough: you are trying to push data on a pad which has no peer, so it would try to dereference an invalid sink pad). Since this is not possible, GStreamer just plainly stops.
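If you only want to verify that the source itself negotiates and produces data, you can terminate the pipeline with a sink that accepts anything (purely a diagnostic; it discards the audio):
gst-launch-1.0 osxaudiosrc device=92 ! fakesink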
An example pipeline is given in the osxaudiosrc documentation:
gst-launch-1.0 osxaudiosrc ! wavenc ! filesink location=audio.wav
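Or, if you just want to hear the captured audio instead of writing it to a file (an untested sketch using only stock elements):
gst-launch-1.0 osxaudiosrc ! audioconvert ! audioresample ! autoaudiosink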
I want to preview an RTMP stream using the GStreamer xvimagesink element. I can see the output if I use autovideosink, like this:
gst-launch-1.0 -v rtmpsrc location='rtmp://127.0.0.1:1935/live/stream' ! decodebin3 ! autovideosink
but if I replace "autovideosink" with "xvimagesink" I get this:
Setting pipeline to PAUSED ...
ERROR: Pipeline doesn't want to pause.
ERROR: from element /GstPipeline:pipeline0/GstXvImageSink:xvimagesink0: Could not initialise Xv output
Additional debug info:
xvimagesink.c(1773): gst_xv_image_sink_open (): /GstPipeline:pipeline0/GstXvImageSink:xvimagesink0:
Could not open display (null)
Setting pipeline to NULL ...
Freeing pipeline ...
Both decodebin3 and autovideosink are auto-plugging GStreamer elements: they automatically select the most appropriate available GStreamer plugins to demux/decode (decodebin3) and render (autovideosink) the video from, in this case, a live RTMP stream.
So it is quite possible that, for example:
decodebin3 decodes the video into a format that xvimagesink cannot display on your platform/hardware and/or with your GStreamer version, or
xvimagesink cannot be initialised on your platform at all; the "Could not open display (null)" message suggests it cannot open an X display, which is unrelated to which monitor is available (see the example below).
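For the second case, if the pipeline is started from a session where no X display is configured (an SSH session, a service, etc.), setting DISPLAY before launching may already help (":0" is just the usual default and an assumption here):
DISPLAY=:0 gst-launch-1.0 -v rtmpsrc location='rtmp://127.0.0.1:1935/live/stream' ! decodebin3 ! xvimagesink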
To find out more details about
the video format decoded by decodebin3, and
the video sink element "chosen" by autovideosink,
you can set a higher (more detailed) GStreamer debug level with, for example, export GST_DEBUG=3, rerun the pipeline and inspect the output.
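For example (redirecting the debug output to a file so it stays readable):
GST_DEBUG=3 gst-launch-1.0 -v rtmpsrc location='rtmp://127.0.0.1:1935/live/stream' ! decodebin3 ! autovideosink 2> gst-debug.log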
I am trying to encode uncompressed video to H.265; however, when I run the following pipeline I receive an error message that I cannot resolve. I am following the example code in the Tegra X1 Multimedia User Guide and do not understand why the pipeline fails. I am a beginner in video compression, so any help would be very useful. The command/error output:
ubuntu@tegra-ubuntu:~$ gst-launch-1.0 filesrc location=small_mem_vid.mov ! 'video/x-raw, format=(string)I420, framerate=(fraction)30/1, width=(int)1280, height=(int)720' ! omxh265enc ! filesink location=new_encode.mov -e
Setting pipeline to PAUSED ...
Inside NvxLiteH264DecoderLowLatencyInitNvxLiteH264DecoderLowLatencyInit set DPB and MjstreamingInside NvxLiteH265DecoderLowLatencyInitNvxLiteH265DecoderLowLatencyInit set DPB and MjstreamingPipeline is PREROLLING ...
Framerate set to : 30 at NvxVideoEncoderSetParameterNvMMLiteOpen : Block : BlockType = 8
===== MSENC =====
NvMMLiteBlockCreate : Block : BlockType = 8
ERROR: from element /GstPipeline:pipeline0/GstOMXH265Enc-omxh265enc:omxh265enc-omxh265enc0: Could not write to resource.
Additional debug info:
/dvs/git/dirty/git-master_linux/external/gstreamer/gst-omx/omx/gstomxvideoenc.c(2139): gst_omx_video_enc_handle_frame (): /GstPipeline:pipeline0/GstOMXH265Enc-omxh265enc:omxh265enc-omxh265enc0:
Failed to write input into the OpenMAX buffer
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
Freeing pipeline ...
ubuntu@tegra-ubuntu:~$
Are you sure the .mov file really contains uncompressed video? The .mov extension is commonly used for QuickTime video. You could use "mediainfo" on Linux to discover more details about the format of the file. If the video is actually compressed, I don't think you can go directly from filesrc to the encoder; you probably need a qtdemux and a decoder, maybe avdec_h264, depending on what mediainfo shows.
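For example (a sketch only: it assumes mediainfo reports H.264 video in a QuickTime container and that avdec_h264 is available on your Tegra image; note the output is a raw H.265 elementary stream, not a playable .mov):
mediainfo small_mem_vid.mov
gst-launch-1.0 filesrc location=small_mem_vid.mov ! qtdemux ! h264parse ! avdec_h264 ! videoconvert ! omxh265enc ! filesink location=new_encode.h265 -e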
You also might want to enable some more detailed debugging:
export GST_DEBUG=*:4
I'm sure I've had this pipeline working on an earlier Ubuntu system I had set up (formatted for readability):
playbin
uri=rtspt://user:pswd@192.168.xxx.yyy/ch1/main
video-sink='videoconvert
! videoflip method=counterclockwise
! fpsdisplaysink'
Yet, when I try to use it within my program, I get:
Missing element: H.264 (Main Profile) decoder
WARNING: from element /GstPlayBin:playbin0/GstURIDecodeBin:uridecodebin0:
No decoder available for type 'video/x-h264,
stream-format=(string)avc, alignment=(string)au,
codec_data=(buffer)014d001fffe10017674d001f9a6602802dff35010101400000fa000030d40101000468ee3c80,
level=(string)3.1, profile=(string)main, width=(int)1280,
height=(int)720, framerate=(fraction)0/1, parsed=(boolean)true'.
Additional debug info:
gsturidecodebin.c(938): unknown_type_cb ():
/GstPlayBin:playbin0/GstURIDecodeBin:uridecodebin0
Now, I'm pretty certain I have an H.264 decoder installed, and indeed the GStreamer plugins' autogen.sh/configure correctly recognised the fact. Installed packages are h264enc, libx264-142, libx264-dev and x264.
It does exactly the same thing if I use the more "acceptable" autovideosink in place of fpsdisplaysink, or if I try to play the RTSP stream with gst-play-1.0. However, it works if I use the test pattern source videotestsrc.
What am I doing wrong?
It looks like GStreamer cannot find a suitable plugin for decoding H.264. Either you do not have an H.264 decoder element installed, or GStreamer is looking in the wrong path for your elements.
First, try running gst-inspect-1.0. This should output a long list of all the elements GStreamer has detected.
If this doesn't return any elements, you probably need to set the GST_PLUGIN_PATH environment variable to point to the directory where your plugins are installed. Running GStreamer - this link should help.
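For example (the path below is only an illustration; use whatever prefix your plugins were actually installed to):
export GST_PLUGIN_PATH=/usr/local/lib/gstreamer-1.0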
If it DOES return many elements, run gst-inspect-1.0 avdec_h264 to verify that you have the H.264 decoder element.
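Note that avdec_h264 is provided by gst-libav, not by the x264 packages (x264 only contains an encoder). If it turns out to be missing, installing the libav plugin set usually provides it; the package name below assumes a Debian/Ubuntu-style system:
sudo apt-get install gstreamer1.0-libav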
I'm designing a program to stream from an Icecast server (radio.clarkson.edu). Ultimately it will be written in Python 3, but for now I'm using gst-launch to test the pipeline. I've been working on Debian Jessie with gstreamer-1.0. Using a file on Wikimedia, I was able to play it pretty easily with:
url=https://upload.wikimedia.org/wikipedia/commons/0/0c/Muriel-Nguyen-Xuan-Korsakov-Flight-of-the-bumblebee.flac.oga
gst-launch-1.0 -v souphttpsrc location=$url ! decodebin ! audioconvert ! audioresample ! alsasink
Running the same commands with my stream, I get the output:
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
/GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstTypeFindElement:typefind.GstPad:src: caps = text/uri-list
Missing element: text/uri-list decoder
ERROR: from element /GstPipeline:pipeline0/GstDecodeBin:decodebin0: Your GStreamer installation is missing a plug-in.
Additional debug info:
gstdecodebin2.c(3977): gst_decode_bin_expose (): /GstPipeline:pipeline0/GstDecodeBin:decodebin0:
no suitable plugins found
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
/GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstTypeFindElement:typefind.GstPad:src: caps = "NULL"
Freeing pipeline ...
I have tried too many other pipelines to list in one post, but I can answer any other questions.
Thank you
By now you have probably solved the problem, but here's an idea anyway: text/uri-list indicates that you didn't hand an actual stream to GStreamer, but rather a (textual) playlist that contains stream addresses. I guess GStreamer can't handle those, so you need to parse the playlist beforehand and then hand an actual audio stream address to it.
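For example, fetch the playlist first and then point souphttpsrc at the stream URL it contains (the URLs below are purely illustrative placeholders, not the station's real addresses):
curl -s http://radio.clarkson.edu/stream.m3u   # prints something like http://host:port/mountpoint
gst-launch-1.0 souphttpsrc location=http://host:port/mountpoint ! decodebin ! audioconvert ! audioresample ! alsasink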