I'm trying to figure out how to create a pipeline in GStreamer (1.4.4) beyond the very simple playbin one. I have a stream being fed into a GTK+ DrawingArea widget but it's currently letter-boxing it whereas I want to experiment with the video stream expanded to fit the entire widget.
To that end, I've played with the gst-launch-1.0 app but I'm finding that a fakesink at the end seems to work but an autovideosink doesn't. The two pipelines are (X being an rtspt:// URI for an IP camera):
gst-launch-1.0 rtspsrc location=X ! rtph264depay ! h264parse ! decodebin ! fakesink
gst-launch-1.0 rtspsrc location=X ! rtph264depay ! h264parse ! decodebin ! autovideosink
In other words, the only difference is the sink itself. It appears that, no matter where I place the sink (even if it's just an rtspsrc location=X ! sink), the problem still occurs, and that problem manifests itself as:
rtspsrc gstrtspsrc.c:5074:gst_rtspsrc_loop<rtspsrc0> error: Internal data flow error
rtspsrc gstrtspsrc.c:5074:gst_rtspsrc_loop<rtspsrc0> streaming task paused, reason not-linked (-1)
I've tried running at higher debug levels but the output doesn't seem to have any useful information beyond the warnings already given.
Note that both the following commands work okay:
gst-play-1.0 X
gst-launch-1.0 playbin uri=X
But, as discussed, I don't really want a playbin since I want to install my own video scaler in the pipeline.
My (albeit limited) understanding is that rtph264depay strips the RTP packaging, h264parse parses the raw H.264 stream, decodebin auto-magically selects the correct decoder, and autovideosink selects the correct sink for displaying the stream.
I'm not entirely certain how changing something at stage five of the pipeline would affect how stage one works.
So why is it that a fake sink works but the automatic selection one does not?
Adding videoconvert before autovideosink will make it work:
gst-launch-1.0 rtspsrc location=X ! rtph264depay ! h264parse ! decodebin ! videoconvert ! autovideosink
The reason is that the sink element does not support the format output by your decoder, which causes the error "streaming task paused, reason not-linked".
fakesink is different: it simply drops the data and does not care about the format, so it does not hit this error.
playbin can play the stream because it automatically adds a convert element when needed.
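To see which formats the elements actually negotiate (and where the negotiation breaks), you can run the pipeline with -v, which prints the caps on every pad, e.g.:
gst-launch-1.0 -v rtspsrc location=X ! rtph264depay ! h264parse ! decodebin ! videoconvert ! autovideosink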
Related
I'm writing a Qt 5.15 application that should play an RTP / MPEG-TS / H.264 video on Linux Ubuntu 20.04 (Focal Fossa).
I'm running GStreamer 1.16.3.
Since I'm new to GStreamer, I worked through everything step by step starting from the official tutorials... at this moment I'm able to play an RTP / H.264 stream almost in real time.
Now the last step (adding MPEGTS support) seems to be the hardest.
My test source is an MP4 H.264 QuickTime file, which I stream over the network through gst-launch.
The working RTP / H.264 output pipeline is the following shell command:
gst-launch-1.0 filesrc location=file.mp4 ! qtdemux ! h264parse ! avdec_h264 ! x264enc tune=zerolatency ! rtph264pay ! udpsink host=127.0.0.1 port=5000;
To test the input pipeline without messing up the Qt/C++ code, I use another shell command like this:
gst-launch-1.0 -v udpsrc port=5000 ! "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! autovideosink;
AFAIK, if the shell input pipeline works, it will work in my C++ code (of course the elements after avdec_h264 depend on my programming/running environment, but if someone needs it I can share it without problem).
To add mpegts support, I tried with these lines (the last of a long sequence of trials):
OUTPUT:
gst-launch-1.0 filesrc location=file.mp4 ! qtdemux ! h264parse ! avdec_h264 ! x264enc tune=zerolatency ! mpegtsmux ! rtpmp2tpay ! udpsink host=127.0.0.1 port=5000;
INPUT:
gst-launch-1.0 -v udpsrc port=5000 caps="application/x-rtp" ! rtpmp2tdepay ! tsparse ! tsdemux ! h264parse ! avdec_h264 ! videoconvert ! autovideosink;
It works, but the video seems to stumble/bounce while playing.
What am I missing?
As a side question, I would like to avoid re-encoding the source video before sending it through RTP. I would like to remove these elements from the output pipeline:
avdec_h264 ! x264enc tune=zerolatency
I tried, but the result goes from nothing to this if I add the config-interval=-1 parameter to h264parse.
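For reference, the no-re-encode output pipeline I'm experimenting with looks roughly like this (same file and destination as above):
gst-launch-1.0 filesrc location=file.mp4 ! qtdemux ! h264parse config-interval=-1 ! mpegtsmux ! rtpmp2tpay ! udpsink host=127.0.0.1 port=5000;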
Please note that I would like to keep the latency as low as possible.
--- UPDATE ---
I tried putting a queue element between rtpmp2tdepay and tsparse, and this makes the video play fluidly, but latency grows to several seconds, whereas when playing plain RTP / H.264 it's nearly real-time.
Since MPEGTS is only a transport protocol, why should it add more delay than actual encoding?
Is there a way to shorten this delay? No matter if it changes the whole pipeline as long as protocols and encoding are kept the same.
BTW, I tried tuning the queue's max-size-buffers, but values under 150 cause playback to stumble.
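For completeness, the input variant with the queue (which plays fluidly, but with the extra seconds of latency) is roughly this, 150 being the lowest buffer value that still played smoothly for me:
gst-launch-1.0 -v udpsrc port=5000 caps="application/x-rtp" ! rtpmp2tdepay ! queue max-size-buffers=150 ! tsparse ! tsdemux ! h264parse ! avdec_h264 ! videoconvert ! autovideosink;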
--- UPDATE ---
If I use VLC to create the output stream using the same file, things get even worse:
:sout=#rtp{dst=127.0.0.1,port=5000,mux=ts} :no-sout-all :sout-keep
It is the same stumbling and scrambled video, with no way to fix it.
I found a partial fix to the latency problem and compatibility with VLC:
! autovideosink sync=false
Disabling the clock synchronisation shortens the delays, and the VLC output stream is now also picked up correctly by GStreamer.
This also makes the queue element unnecessary (probably not in the general case, though), and AFAIK tsparse is redundant as well.
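Putting it together, the input pipeline I ended up with looks roughly like this (queue and tsparse removed, synchronisation disabled on the sink):
gst-launch-1.0 -v udpsrc port=5000 caps="application/x-rtp" ! rtpmp2tdepay ! tsdemux ! h264parse ! avdec_h264 ! videoconvert ! autovideosink sync=false;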
Anyway, I still need to understand why I need to re-encode H.264 video (in the output pipeline).
I'm developing an app that receives an H.264 video stream from an RTSP camera, displays it, and stores it to MP4 without transcoding. For the purpose of my current test, I record for 5 sec only.
My problem is that the MP4 is not playable. The resulting file varies in size from one run of the app to another, which suggests something is very wrong (unexpected, since the recording time is fixed).
Here are my pipelines:
rtspsrc location = rtsp://192.168.0.61:8554/quality_h264 latency=0 ! rtph264depay ! h264parse ! video/x-h264,stream-format=avc ! queue ! interpipesink name=cam1
interpipesrc allow-renegotiation=true name=src listen-to=cam1 is-live=true ! h264parse ! queue ! decodebin ! autovideoconvert ! d3dvideosink sync=false
interpipesrc allow-renegotiation=true name=src listen-to=cam1 is-live=true ! h264parse ! queue ! mp4mux ! filesink location=test.mp4
As a next step I will add more cameras, and I will need to be able to change which camera gets recorded to MP4 on the fly, as well as pause/resume the recording. For this reason, I've opted to use interpipesink/interpipesrc, a set of GStreamer elements that allow communication between two independent pipelines: https://github.com/RidgeRun/gst-interpipe
A thread waits for 10 sec, then sends EOS on the 3rd pipeline (recording). Then, when the bus receives GST_MESSAGE_EOS, it sets the pipeline state to NULL. I have checked with a pad probe that the EOS event is indeed received on the sink pad of the filesink.
I send EOS using this code: gst_element_send_event(m_pipeline, gst_event_new_eos()); where m_pipeline is the 3rd pipeline.
Those exact pipelines produce a playable MP4 when run with gst-launch adding -e at the end.
If I replace mp4mux with matroskamux in my app, the MKV is playable and has the expected size. However, there's something wrong with the timestamps, as the player shows it starting at time 10 sec instead of 0. Do I need to edit the timestamps before passing the buffers to the mux (mp4mux or matroskamux)?
It looks to me as if the MP4 is not fully written, but I can't see what else I can do apart from sending EOS.
I'm open to suggestions to restructure the app, in case the use of the interpipe elements is causing a problem (although I can't see why at the moment).
I'm using GStreamer 1.18.2 on Windows 10 (x64).
I have a program that captures video from a USB camera, processes it, and streams it out over RTP/UDP. I am using OpenCV with GStreamer.
When I use the main thread to write out the frames, I can capture it with no problem using gst-launch.
However, when I create another thread to do the writing of the frames, nothing happens with gst-launch. I know the other thread is running because I am able to "imshow" the frames in that thread. Also, I am sure that the writer is open, since I checked it before writing.
Writer pipeline : appsrc ! videoconvert ! x264enc ! rtph264pay ! udpsink host=127.0.0.1 port=5015
Receiver: gst-launch-1.0 udpsrc port=5015 ! queue ! "application/x-rtp, media=(string)video, encoding-name=(string)H264, framerate=30/1" ! rtph264depay ! decodebin ! videoconvert ! autovideosink
This is already solved and is not related to multi-threading at all. The problem was in the composition of the pipeline: the "port" keyword was not added when the string was built with the ostream.
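In other words, the writer pipeline string ended up with the port keyword missing, something like the first line below instead of the second (the exact broken string is from memory):
appsrc ! videoconvert ! x264enc ! rtph264pay ! udpsink host=127.0.0.1 5015
appsrc ! videoconvert ! x264enc ! rtph264pay ! udpsink host=127.0.0.1 port=5015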
I've written a GStreamer source plugin; it can produce buffers, push them to downstream elements, and do preview. Recently I received a request to implement multi-stream: one stream to do preview, and the other to do recording (using filesink, I suppose). I investigated the 'tee' plugin before, but it turns out that it only supports multiple streams with the same format/resolution. What plugin should I use if the two streams have different formats/resolutions, say, two capsfilters in one pipeline? If there is a plugin that can do that, could you provide some examples of how to use it?
The pipeline I expect goes like this:
gst-launch-1.0 mysrc ! (some plugins) name=t ! video/x-raw,format=NV12,width=320,height=240 ! xvimagesink t. ! video/x-raw,format=YUY2,width=640,height=480 ! filesink location=img_file
I think either you implement this in your plugin, which will then produce two src pads, and you just connect the filesink and videosink accordingly...
Or you use tee together with videoscale, videoconvert and videorate elements to achieve different resolutions. This approach is of course more resource-demanding, and the first approach may be easier to optimise (just guessing, I don't know anything about your plugin).
Below is an example with two video sinks, each a different size. You have to realise that you have one input from your mysrc; that is, you have to duplicate it, and then one of the branches has to be resized (or maybe both, if you need). There is no other way. What you want is an element combining tee and videoscale/videorate/videoconvert. I am not sure such an element exists, and I am not sure it would be very useful (but maybe it makes sense, I just do not see it).
gst-launch-1.0 videotestsrc ! video/x-raw,width=640,height=480 ! tee name=t t. ! queue ! videoscale ! video/x-raw,width=320,height=240 ! videoconvert ! autovideosink t. ! queue ! videoscale ! video/x-raw,width=200,height=200 ! videoconvert ! autovideosink
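Adapted to your case it could look roughly like this (assuming mysrc outputs raw video that videoscale/videoconvert can negotiate with):
gst-launch-1.0 mysrc ! tee name=t t. ! queue ! videoscale ! videoconvert ! video/x-raw,format=NV12,width=320,height=240 ! xvimagesink t. ! queue ! videoscale ! videoconvert ! video/x-raw,format=YUY2,width=640,height=480 ! filesink location=img_file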
Maybe I just didn't understand your question.
For the sake of testing, I'd like to construct a pipeline that encodes and then decodes live audio. I have tried with MP3 or AAC encoding, and I can certainly do it if the source is non-live:
$ gst-launch-1.0 audiotestsrc ! lamemp3enc ! mpegaudioparse ! mad ! alsasink
$ gst-launch-1.0 audiotestsrc ! faac ! audio/mpeg, stream-format=raw ! aacparse ! faad ! alsasink
In the above cases, the pipeline is constructed and I can hear the audio playing back. However, if the source is live, the pipeline doesn't fail to play, but no audio is played back.
I'm sure I'm missing some key concept, but can't see what!
Could it be that the live source you are using is causing the issue? It may have additional latency that causes the audio sink to drop all the samples.
How about this pipeline:
$ gst-launch-1.0 audiotestsrc is-live=true ! faac ! aacparse ! faad ! autoaudiosink
Here the audiotestsrc acts as if it were a live source. Also note that it is advised to add parsers after encoder elements: "aacparse" for AAC audio and "mpegaudioparse" for MP3 audio.
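The MP3 variant with a live source would then be, roughly:
$ gst-launch-1.0 audiotestsrc is-live=true ! lamemp3enc ! mpegaudioparse ! mad ! autoaudiosink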