I have this pipeline:
gst-launch -v filesrc location=video.mkv ! matroskademux name=d \
d. ! queue ! ffdec_h264 ! subtitleoverlay name=overlay ! ffmpegcolorspace ! x264enc ! mux. \
d. ! queue ! aacparse ! mux. \
filesrc location=fr.srt ! subparse ! overlay. \
matroskamux name=mux ! filesink location=vid.mkv
I'm trying to burn the subtitles into the video. I have succeeded in reading the subtitle file, but the above pipeline gets stuck and I get this message:
queue_dataflow gstqueue.c:1243:gst_queue_loop:<queue0> queue is empty
What's wrong with my pipeline? What does the queue element do? I haven't really understood what the documentation says about it.
The queue element adds a thread boundary to the pipeline and support for buffering. The input side will put buffers into a queue, which is then emptied on the output side from another thread. Via properties on the queue element you can set the size of the queue and some other things.
I don't see anything specifically wrong with your pipeline, but the message tells you that at some point one of the queues is empty. That may or may not be a problem; it might fill up again later.
You'll have to check the GStreamer debug logs to see if there's anything in there that hints at the actual problem. My best guess here would be that the audio queue is running full because of the encoder latency of x264enc. Try making the audio queue larger, or set tune=zerolatency on x264enc.
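For example, the same pipeline with the limits removed from the audio queue and low-latency encoding enabled would look roughly like this (a sketch only; max-size-*=0 makes a queue effectively unbounded, and tune=zerolatency requires a reasonably recent x264enc):
gst-launch -v filesrc location=video.mkv ! matroskademux name=d \
d. ! queue ! ffdec_h264 ! subtitleoverlay name=overlay ! ffmpegcolorspace ! x264enc tune=zerolatency ! mux. \
d. ! queue max-size-buffers=0 max-size-bytes=0 max-size-time=0 ! aacparse ! mux. \
filesrc location=fr.srt ! subparse ! overlay. \
matroskamux name=mux ! filesink location=vid.mkv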
Also I see that you're using GStreamer 0.10. It has not been maintained for more than two years, and for new applications you should really consider upgrading to the 1.x versions.
A queue is the thread boundary element through which you can force the use of threads. It does so by using a classic provider/consumer model as learned in threading classes at universities all around the world. By doing this, it acts both as a means to make data throughput between threads threadsafe, and it can also act as a buffer. Queues have several GObject properties to be configured for specific uses. For example, you can set lower and upper thresholds for the element. If there's less data than the lower threshold (default: disabled), it will block output. If there's more data than the upper threshold, it will block input or (if configured to do so) drop data.
To use a queue (and therefore force the use of two distinct threads in the pipeline), one can simply create a “queue” element and put this in as part of the pipeline. GStreamer will take care of all threading details internally.
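As a rough illustration (1.0 syntax, with arbitrary values), the launch line below lets the queue buffer up to two seconds of data and blocks its output until at least one second has been queued:
gst-launch-1.0 audiotestsrc ! queue max-size-time=2000000000 max-size-buffers=0 max-size-bytes=0 min-threshold-time=1000000000 ! autoaudiosink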
Related
I would like to use GStreamer to play multiple sources (for instance two video files) simultaneously using a single pipeline, but with each video starting from a different position, for instance the first video from the beginning and the second from the middle. Could someone guide me on how to achieve this?
Simplifying, my pipeline is an equivalent of:
gst-launch-1.0 \
uridecodebin uri=file:///Users/tmikolaj/Downloads/videoalpha_video_dancer1.webm ! videoconvert ! autovideosink \
uridecodebin uri=file:///Users/tmikolaj/Downloads/videoalpha_video_dancer1.webm ! videoconvert ! autovideosink
but created programmatically.
Obviously, simply seeking on the pipeline seeks both files at once.
I was trying to register a probe of the GST_PAD_PROBE_TYPE_EVENT_UPSTREAM type from inside the pad-added signal callback of the uridecodebin element. Inside the probe I wanted to catch the GST_EVENT_SEEK event and drop it for the first video. However, it seems that dropping the SEEK event leaves the pipeline in a PAUSED state, and even an explicit state change to PLAYING does nothing.
Does anybody have some hints on how to solve that problem?
My target is to stream and record video at the same time.
GStreamer version: 1.16.1, OS: Debian 11
Initially I had a more complex pipeline containing a compositor on one branch and various custom filters. The simplified version of my constructed pipeline is as follows:
gst-launch-1.0 videotestsrc ! "video/x-raw,width=500,height=300,framerate=50/1" ! tee name=t \
t. ! queue ! x264enc ! splitmuxsink name=mux_sink max-files=10000 next-file=5 max-size-time=600000000 location=video%02d.mp4 \
t. ! queue ! "video/x-raw,width=500,height=300,framerate=50/1" ! glimagesink
On my system the pipeline starts without problems, but it goes from the NULL state to the READY state and hangs there. The displayed video is also frozen and no video file is saved.
Here is an SVG file generated from the dot dump of the NULL->READY state change: https://drive.google.com/file/d/1oGwDufDdljbuKr8b0YURvg5VxPzMtQWb/view?usp=sharing
I have already tried both branches separately, without the tee element - both work. I have also tried different combinations of caps filters on both queues. I raised the GStreamer debug level to see if there was anything suspicious there - nothing.
The task should be quite straightforward; I must be missing something here.
Thanks in advance!
The latency of the default x264enc settings is too high for this use case. Use the tune=zerolatency option on the x264enc element, or increase the queue size after the tee on the display path. This prevents the pipeline from deadlocking during preroll.
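For instance, with the encoder tuned for low latency the launch line would look roughly like this (a sketch; alternatively leave x264enc alone and raise the max-size-* limits on the queue feeding glimagesink):
gst-launch-1.0 videotestsrc ! "video/x-raw,width=500,height=300,framerate=50/1" ! tee name=t \
t. ! queue ! x264enc tune=zerolatency ! splitmuxsink name=mux_sink max-files=10000 next-file=5 max-size-time=600000000 location=video%02d.mp4 \
t. ! queue ! "video/x-raw,width=500,height=300,framerate=50/1" ! glimagesink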
How do I put unrelated audio into a generated video stream in a way that keeps them in sync in GStreamer?
Context:
I want to stream audio from Icecast into a Kinesis Video stream, and then view it with Amazon's player. The player only works if there is video as well as audio, so I generate video with videotestsrc.
The video and audio need to be in sync in terms of timestamps, or the Kinesis sink 'kvssink' throws an error. But because they are two separate sources, they are not in sync.
I am using gst-launch-1.0 to run my pipeline.
My basic attempt was like this:
gst-launch-1.0 -v \
videotestsrc pattern=red ! video/x-raw,framerate=25/1 ! videoconvert ! x264enc ! h264parse ! video/x-h264,stream-format=avc,alignment=au ! \
queue ! kvssink name=sink stream-name="NAME" access-key="KEY" secret-key="S_KEY" \
uridecodebin uri=http://ice-the.musicradio.com/LBCLondon ! audioconvert ! voaacenc ! aacparse ! queue ! sink.
The error message I get translates to:
STATUS_MAX_FRAME_TIMESTAMP_DELTA_BETWEEN_TRACKS_EXCEEDED
This indicates that the audio and video timestamps are too far apart, so I want to force them to match, maybe by throwing away the video timestamps?
There are different meanings of "sync". Let us ignore lip sync for a moment (where audio and video match to each other).
There is sync in terms of timestamps - i.e. whether they carry similar timestamps in their representation - and sync in terms of when, in real time, those timestamped samples actually arrive at the sink (latency).
It's hard to tell from the error which one the sink is complaining about.
Maybe try x264enc tune=zerolatency for a start, as without that option the encoder introduces about two seconds of latency, which may cause issues for certain requirements.
Then again, the audio stream will have some latency too. It may not be easy to tune these two to match. The sink should actually do the buffering and synchronization.
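Concretely, that would mean adding the option to the encoder in the original launch line, roughly like this (a sketch; the stream name and keys remain placeholders):
gst-launch-1.0 -v \
videotestsrc pattern=red ! video/x-raw,framerate=25/1 ! videoconvert ! x264enc tune=zerolatency ! h264parse ! video/x-h264,stream-format=avc,alignment=au ! \
queue ! kvssink name=sink stream-name="NAME" access-key="KEY" secret-key="S_KEY" \
uridecodebin uri=http://ice-the.musicradio.com/LBCLondon ! audioconvert ! voaacenc ! aacparse ! queue ! sink.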
The playbin pipeline in GStreamer is a wonderful thing in that I don't need to have any real knowledge about the individual elements needed to process the stream.
However, if I wanted to rotate the video 90 degrees (or flip it, or anything else), it appears I have to manually code up the pipeline. At the moment, I'm doing this with:
rtspsrc location=X
! rtph264depay
! h264parse
! decodebin
! videoflip method=Y
! videoconvert
! autovideosink
However, because I'm binding the video to a specific Gtk widget, I capture the message asking for the widget ID and provide that back to GStreamer so it can correctly bind.
Unfortunately, according to gst-inspect-1.0, none of those elements in the pipeline above appear to actually provide the GstVideoOverlay interface, so when I query for one that can receive the widget identifier, I get null followed very quickly by a null pointer error. Or, if I do nothing when null is returned, no binding occurs and GStreamer opens up a separate window to stream the video.
It turns out that playbin itself provides the required interface.
I also tried replacing autovideosink with ximagesink, and then with xvimagesink, both of which claim to support the interface but, in both cases, no element was found that supported the interface.
So my questions are basically these:
1/ Can I insert something into the above pipeline that will provide the interface?
2/ Failing that, is there a way to use playbin to analyse the stream correctly but then capture its output and pass that through more filters? The sort of thing I'm thinking of is:
playbin location=X
! videoflip method=Y
! autovideosink
In other words, can I use something like the video-sink property of playbin to stop it creating its own sink and instead pass its data through to the videoflip?
I'd prefer something that could be implemented with Gst.Parse.Launch() since I don't really want to have to mess about creating every single pipeline element manually if I can avoid it.
I'd say the way you are requesting the GstVideoOverlay is not correct, or there is a bug in GStreamer; xvimagesink and ximagesink both support the GstVideoOverlay interface. autovideosink doesn't, but it is likely that the video sink inside it will.
Anyway, you want to have a custom bin set to the video-sink property. You can create your bin and put the elements you want inside it, create a sink ghostpad and then set it as the video-sink of your playbin.
It is also possible to do it using parse-launch syntax:
gst-launch-1.0 playbin video-sink="videoconvert ! videoscale ! aasink" uri=file://<path/to/some/file>
Just replace the bin elements with whatever you need.
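For the rotation case in the question, that could look roughly like this (a sketch; the URI is a placeholder and method=clockwise is just an example value):
gst-launch-1.0 playbin uri=rtsp://<your-source> video-sink="videoflip method=clockwise ! videoconvert ! autovideosink"
The same video-sink string works when you build the playbin with Gst.Parse.Launch() in your application.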
I have a gstreamer pipeline that takes video from the webcam and splits it into two threads:
1) uses appsink so that I can programmatically edit the captured frames
2) saves the video to a file
The pipeline looks like this:
gst-launch-1.0 v4l2src device=/dev/video0 \
! tee name=t ! queue ! videoconvert ! videoscale ! appsink name=sink caps="video/x-raw,format=RGB,width=800,framerate=15/1" t. \
! queue ! video/x-raw,width=800,framerate=15/1 ! jpegenc ! avimux ! filesink location=/tmp/output.avi
I'm using this inside a C++ app.
My problem is that most of the time I don't need both threads running simultaneously, only one of them; and in rare cases I need both.
So I need some way to temporarily pause/stop either the appsink or the video saving - in order to save CPU.
The way I do it now is to destroy the pipeline and recreate it with only one thread when needed, but that seems quite ugly.
I've been looking for a better solution, but no luck so far - is there any way to do that?
Thanks in advance!
An easier way to approach this may be to use a valve element. It has a drop property that you can set to true or false. Put it right after the queue on the tee.
http://gstreamer.freedesktop.org/data/doc/gstreamer/head/gstreamer-plugins/html/gstreamer-plugins-valve.html
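Applied to the pipeline above, the recording branch would look roughly like this (a sketch; the valve name is arbitrary, and you would toggle its drop property from your C++ code with g_object_set - but see the edit below for a caveat):
gst-launch-1.0 v4l2src device=/dev/video0 \
! tee name=t ! queue ! videoconvert ! videoscale ! appsink name=sink caps="video/x-raw,format=RGB,width=800,framerate=15/1" t. \
! queue ! valve name=record-valve drop=false ! video/x-raw,width=800,framerate=15/1 ! jpegenc ! avimux ! filesink location=/tmp/output.avi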
EDIT: This doesn't work. Some more details are present in this post on the GStreamer mailing list:
http://gstreamer-devel.966125.n4.nabble.com/How-to-Stop-start-recording-using-Valve-element-td4661728.html