GStreamer: missing plugins

I am trying to run certain pipelines from the command prompt to play a video, and I often get errors/messages/warnings like these:
WARNING: erroneous pipeline: no element "qtdemux"
WARNING: erroneous pipeline: no element "playbin2"
WARNING: erroneous pipeline: no element "decodebin2"
ERROR: pipeline could not be constructed: no element "playbin".
Following are the pipelines:
gst-launch filesrc location=path to the mp4 file ! playbin2 ! queue ! ffmpegcolorspace ! autovideosink
or
gst-launch -v filesrc location=path to the mp4 file ! qtdemux name=demuxer ! { queue ! decodebin ! sdlvideosink } { demuxer. ! queue ! decodebin ! alsasink }
or
gst-launch -v playbin uri=path to the mp4 file
or
gst-launch -v playbin2 uri=path to the mp4 file
Questions
Am I missing the plugins needed to execute these pipelines?
How do I know which plugin provides which element, and where can it be found?
What is the benefit of implementing the pipeline in C code? Are the missing plugins still required then?
Is it better to install the missing plugins from the Synaptic package manager or from the GStreamer site (base, good, bad, ugly)?
When we run gst-inspect we get output like this:
postproc: postproc_hdeblock: LibPostProc hdeblock filter
libvisual: libvisual_oinksie: libvisual oinksie plugin plugin v.0.1
flump3dec: flump3dec: Fluendo MP3 Decoder (liboil build)
vorbis: vorbistag: VorbisTag
vorbis: vorbisparse: VorbisParse
vorbis: vorbisdec: Vorbis audio decoder
vorbis: vorbisenc: Vorbis audio encoder
coreindexers: fileindex: A index that stores entries in file
coreindexers: memindex: A index that stores entries in memory
amrnb: amrnbenc: AMR-NB audio encoder
amrnb: amrnbdec: AMR-NB audio decoder
audioresample: audioresample: Audio resampler
flv: flvmux: FLV muxer
flv: flvdemux: FLV Demuxer
In each output line of the form x: y: description, what do x and y mean?

Answers
It looks like GStreamer at your end was not installed correctly; playbin2 and decodebin2 are basic elements and part of the base plugins.
1. Yes, you may be missing some plugins.
2. Use the gst-inspect command to check whether an element is available (see the example after this list).
3. From C code you can manage states, register callbacks, and learn more. Yes, the missing plugins are still required.
4. I guess the GStreamer site would be better.
5. Not sure about this one; it would help if you formatted the output properly.
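For example, to check for the qtdemux element (assuming GStreamer 0.10, where the tool is gst-inspect or gst-inspect-0.10; on 1.x it is gst-inspect-1.0):
# prints the element's plugin, pads and properties if it exists,
# or "No such element or plugin 'qtdemux'" if it does not
gst-inspect-0.10 qtdemux
# or search the full feature listing instead
gst-inspect-0.10 | grep demux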

Most probably GST_PLUGIN_PATH is incorrect. Please set it to the path where GStreamer has been installed; an example follows.
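A sketch, assuming a hypothetical install prefix of /usr/local (point the variable at your actual plugin directory):
# make the plugin scanner look in the right place
export GST_PLUGIN_PATH=/usr/local/lib/gstreamer-0.10
# verify that the previously missing elements are now found
gst-inspect-0.10 playbin2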

Related

GStreamer preview RTMP using xvimage

I want to preview RTMP using the GStreamer xvimagesink. I can see the output if I use autovideosink like this:
gst-launch-1.0 -v rtmpsrc location='rtmp://127.0.0.1:1935/live/stream' ! decodebin3 ! autovideosink
But if I replace "autovideosink" with "xvimagesink" I get this:
Setting pipeline to PAUSED ...
ERROR: Pipeline doesn't want to pause.
ERROR: from element /GstPipeline:pipeline0/GstXvImageSink:xvimagesink0: Could not initialise Xv output
Additional debug info:
xvimagesink.c(1773): gst_xv_image_sink_open (): /GstPipeline:pipeline0/GstXvImageSink:xvimagesink0:
Could not open display (null)
Setting pipeline to NULL ...
Freeing pipeline ...
Both decodebin3 and autovideosink are auto-plugging GStreamer elements: both automatically select the most appropriate available GStreamer plugins to demux/decode (decodebin3) and render video (autovideosink) from, in this case, a live RTMP stream.
So it is very possible that, for example,
decodebin3 decodes video into a format that xvimagesink cannot show on your platform/hardware and/or with your GStreamer version, or
xvimagesink is not set up properly on your platform, which is unrelated to the available display/monitor.
To find out more details about the video format decoded by decodebin3 and the video sink element "chosen" by autovideosink, you can set a higher (more detailed) GStreamer debug level with, for example, export GST_DEBUG=3, rerun the pipeline, and inspect the output.
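In this particular trace, "Could not open display (null)" usually means the DISPLAY environment variable is not set (e.g. when launching from an SSH session). A diagnostic sketch, assuming an X server runs on display :0 (adjust as needed):
# raise GStreamer verbosity and point the sink at the local X display
export GST_DEBUG=3
DISPLAY=:0 gst-launch-1.0 -v rtmpsrc location='rtmp://127.0.0.1:1935/live/stream' ! decodebin3 ! xvimagesink
# check whether the X server exposes an XVideo adaptor at all
xvinfo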

GStreamer 'rawvideoparse' element reads wrong number of bytes

I'm reading a YUV420 byte-stream at 972x720 pixels from a file with GStreamer using the following command:
gst-launch-1.0 filesrc location=testfile blocksize=1049760 ! rawvideoparse width=972 height=720 framerate=1/1 ! xvimagesink
This works insofar as I get an image, but it isn't displayed correctly. When exporting the frames separately using the command:
gst-launch-1.0 filesrc location=testfile blocksize=1049760 ! rawvideoparse width=972 height=720 framerate=1/1 ! multifilesink location="rvp_%d.raw"
I see that with the 'rawvideoparse' element it creates a file of 1051200 bytes per frame instead of the expected 1049760. When I remove 'rawvideoparse' the frames are exported correctly, but my objective is to read them directly from the file into an 'xvimagesink'.
Where am I messing up?
Thanks to the GStreamer Development mailing list I got an answer. The problem was that the rawvideoparse element can't handle this resolution. When I switched to a width of 976 it works.
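The byte counts point at stride alignment rather than a hard limit: for I420 at 972x720 the tight frame size is 972*720 + 2*(486*360) = 1049760 bytes, while 1051200 bytes corresponds to the 486-byte chroma stride being rounded up to 488 (972*720 + 2*(488*360) = 1051200). A width of 976 works because 976/2 = 488 is already aligned. If the file really is tightly packed at 972, forcing the strides explicitly might help; a hedged sketch using rawvideoparse's plane-strides property (the array syntax may vary between versions, and plane-offsets may also need to be set):
gst-launch-1.0 filesrc location=testfile blocksize=1049760 ! \
  rawvideoparse width=972 height=720 format=i420 framerate=1/1 \
  plane-strides="<972,486,486>" ! xvimagesink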

Saving webcam jpeg stream to multiple files with gstreamer

I'm trying to save an MJPEG stream from a Logitech C920 webcam to multiple video files (Matroska).
I've got this pipeline (one mkv file every 60 s):
gst-launch-1.0 -ev v4l2src device=/dev/video0 \
! image/jpeg,width=1280,height=720,framerate=24/1 \
! matroskamux ! multifilesink next-file=max-duration max-file-duration=60000000000 location='test1-%02d.mkv'
It outputs several files, as expected, but the files have errors, so tools like avidemux can't play them back. mkvalidator reports these:
WRN080: Unknown element [FF] at 293 size 88
WRN080: Unknown element [FF] at 494 size 64
WRN080: Unknown element [7D][01] at 566 size w98603107602
WRN801: The segment has no SeekHead section
WRN0B8: Track #1 is defined but has no frame
BTW, saving to a single file using filesink produces an mkv file without errors.
Is there a way to save multiple mkv files properly?
Any other container is also OK, but I cannot transcode (need low CPU load) and I cannot use raw (need HD with high fps).
I'm using GStreamer 1.8.2 on Ubuntu 16.04.1.
Thanks.
Update:
Following the advice below, I tried with splitmuxsink:
gst-launch-1.0 -e v4l2src device=/dev/video1 \
! image/jpeg,width=1280,height=720,framerate=24/1 \
! splitmuxsink muxer=matroskamux location='test1-%02d.mkv' \
max-size-time=10000000000
But it doesn't work: The file is never split and keeps growing in size.
The following pipeline seems to work:
gst-launch-1.0 -e v4l2src ! x264enc key-int-max=10 ! h264parse ! splitmuxsink muxer=matroskamux location='test1-%02d.mkv' max-size-time=60000000000
multifilesink doesn't know anything about the container format, so you must use splitmuxsink to do the splitting.
Here is the quote from multifilesink doc:
It is not possible to use this element to create independently
playable mp4 files, use the splitmuxsink element for that instead.
I had success with an upgraded GStreamer (Ubuntu 18.04):
$ gst-launch-1.0 --gst-version
GStreamer Core Library version 1.14.1
Here is a pipeline with an AVI container, where a new file is generated every ten seconds:
gst-launch-1.0 -e v4l2src device=/dev/video1 \
! image/jpeg,width=1280,height=720,framerate=24/1 \
! splitmuxsink muxer=avimux location='test1-%02d.avi' max-size-time=10000000000
It also works with matroskamux.
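For reference, the matroskamux variant would be the same pipeline with the muxer and file extension swapped (a sketch based on the AVI example above), and the resulting files can be sanity-checked with the mkvalidator tool mentioned earlier:
gst-launch-1.0 -e v4l2src device=/dev/video1 \
! image/jpeg,width=1280,height=720,framerate=24/1 \
! splitmuxsink muxer=matroskamux location='test1-%02d.mkv' max-size-time=10000000000
mkvalidator test1-00.mkv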

Stream Icecast using GStreamer

I'm designing a program to stream from an Icecast server (radio.clarkson.edu). Ultimately it will be written in Python 3, but for now I'm using gst-launch to test the pipeline. I've been working on Debian Jessie with gstreamer-1.0. Using a file on Wikimedia, I was able to play it pretty easily using:
url=https://upload.wikimedia.org/wikipedia/commons/0/0c/Muriel-Nguyen-Xuan-Korsakov-Flight-of-the-bumblebee.flac.oga
gst-launch-1.0 -v souphttpsrc location =$url ! decodebin ! audioconvert ! audioresample ! alsasink
Running the same commands with my stream, I get the output:
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
/GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstTypeFindElement:typefind.GstPad:src: caps = text/uri-list
Missing element: text/uri-list decoder
ERROR: from element /GstPipeline:pipeline0/GstDecodeBin:decodebin0: Your GStreamer installation is missing a plug-in.
Additional debug info:
gstdecodebin2.c(3977): gst_decode_bin_expose (): /GstPipeline:pipeline0/GstDecodeBin:decodebin0:
no suitable plugins found
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
/GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstTypeFindElement:typefind.GstPad:src: caps = "NULL"
Freeing pipeline ...
I have tried too many other pipelines to put on one post, but I can answer any other questions.
Thank you
By now you have probably solved the problem, but here's an idea anyway: text/uri-list indicates that you didn't hand an actual stream to GStreamer, but rather a (textual) playlist that contains stream addresses. I guess GStreamer can't handle those, so you need to parse the playlist beforehand and then hand an actual audio stream address to it; a sketch follows.
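A minimal shell sketch of that idea, assuming the station publishes an M3U-style playlist (one URI per line, lines starting with # being comments); the playlist URL here is a hypothetical placeholder:
# fetch the playlist and take the first non-comment, non-empty line
url=$(curl -s http://radio.clarkson.edu/stream.m3u | grep -v '^#' | grep . | head -n 1)
# hand the actual stream URI to GStreamer
gst-launch-1.0 -v souphttpsrc location="$url" ! decodebin ! audioconvert ! audioresample ! alsasink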

GStreamer: Could not switch codebooks: rtpvorbisdepay

I am trying to stream audio with the following GStreamer pipeline:
Server:
gst-launch-1.0 -v audiotestsrc ! audioconvert ! vorbisenc ! rtpvorbispay ! udpsink host=127.0.0.1 port=5000
Client:
gst-launch-1.0 udpsrc port=5000 ! "application/x-rtp, media=audio, clock-rate=44100, encoding-name=VORBIS, encoding-params=1, payload=96" ! rtpvorbisdepay ! vorbisdec ! audioconvert ! autoaudiosink
I get the following message from GStreamer:
WARNING: from element /GstPipeline:pipeline0/GstRtpVorbisDepay:rtpvorbisdepay0: Could not decode stream.
Additional debug info: gstrtpvorbisdepay.c(614): gst_rtp_vorbis_depay_process (): /GstPipeline:pipeline 0/GstRtpVorbisDepay:rtpvorbisdepay0: Could not switch codebooks
And I don't get any sound on the client. Can anyone help?
[EDIT:]
When I copy-paste the caps from the server side... it works! But among those caps there is a configuration parameter which looks really ugly (link here). I noticed that if I just delete this parameter it doesn't work anymore. Moreover, I used gst-inspect on the udpsrc and rtpvorbisdepay elements and there is nothing about this parameter. Can someone explain to me what this parameter corresponds to? Is there a way to avoid it?
I think this is a Theora/Vorbis thing: those are configuration parameters for the initialization of the decoder, if I understand it properly.
Theora makes the same controversial design decision that Vorbis made to include the entire probability model for the DCT coefficients and all the quantization parameters in the bitstream headers. This is often several hundred fields. It is therefore impossible to decode any frame in the stream without having previously fetched the codec info and codec setup headers.
~ from here
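In other words, that parameter carries the encoded Vorbis identification/setup headers (the codebooks) that rtpvorbisdepay needs before it can decode anything, and since plain UDP has no caps negotiation it must be supplied in the client caps. A hedged sketch of the usual workaround (copying it from the server's verbose output instead of typing it):
# on the server, -v prints the negotiated caps on rtpvorbispay's src pad,
# including the stream-specific configuration=(string)... field
gst-launch-1.0 -v audiotestsrc ! audioconvert ! vorbisenc ! rtpvorbispay ! udpsink host=127.0.0.1 port=5000
# paste that complete caps string (configuration included) into the client's udpsrc caps
Also check gst-inspect-1.0 rtpvorbispay for a config-interval property; if your version has it, the payloader can re-send the headers in-band periodically, which is the usual way to avoid hand-carrying the caps.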