I've written a GStreamer source plugin; it can produce buffers, push them to downstream elements, and do preview. Recently I received a request to implement multi-stream support: one stream for preview, and the other for recording (using filesink, I suppose). I investigated the 'tee' plugin before, but it turns out that it only supports multiple streams with the same format/resolution. What plugin should I use if the two streams have different formats/resolutions, say, two capsfilters in one pipeline? If there is a plugin that can do that, could you provide some examples of how to use it?
The pipeline I expect goes like this:
gst-launch-1.0 mysrc ! (some plugins) name=t ! video/x-raw,format=NV12,width=320,height=240 ! xvimagesink t. ! video/x-raw,format=YUY2,width=640,height=480 ! filesink location=img_file
I think you can either implement this in your plugin, so that it produces two src pads, and then just connect the filesink and the videosink accordingly...
Or you can use tee together with the videoscale, videoconvert and videorate elements to achieve different resolutions. This approach is of course more resource demanding, and the first approach may be easier to optimise (just guessing, I don't know anything about your plugin).
Here is an example with two video sinks, each with a different size. You have to realise that you have one input from your mysrc, so you have to duplicate it, and then one of the branches has to be resized (or maybe both, if you need that); there is no other way. What you want is an element that combines tee with videoscale/videorate/videoconvert. I am not sure such an element exists, and I am not sure it would be very usable (maybe it makes sense, I just don't see it).
gst-launch-1.0 videotestsrc ! video/x-raw,width=640,height=480 ! tee name=t t. ! queue ! videoscale ! video/x-raw,width=320,height=240 ! videoconvert ! autovideosink t. ! queue ! videoscale ! video/x-raw,width=200,height=200 ! videoconvert ! autovideosink
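Adapted to the sinks from your expected pipeline, it could look something like the sketch below (untested; it assumes mysrc outputs raw video that videoscale/videoconvert can negotiate with, and it writes raw YUY2 frames straight to the file, just like your example):
gst-launch-1.0 mysrc ! tee name=t t. ! queue ! videoscale ! videoconvert ! video/x-raw,format=NV12,width=320,height=240 ! xvimagesink t. ! queue ! videoscale ! videoconvert ! video/x-raw,format=YUY2,width=640,height=480 ! filesink location=img_file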
Maybe I just didn't understand your question.
I'm writing a Qt 5.15 application that should play an RTP / MPEGTS / H.264 video on Linux Ubuntu 20.04 (Focal Fossa).
I'm running GStreamer 1.16.3.
Since I'm new to GStreamer, I did everything step by step, starting from the official tutorials... at this moment I'm able to play an RTP / H.264 stream almost in real time.
Now the last step (adding MPEGTS support) seems to be the hardest.
My test source is an MP4 H.264 QuickTime file, and I stream it over the network through gst-launch.
The working RTP / H.264 output pipeline is the following shell command:
gst-launch-1.0 filesrc location=file.mp4 ! qtdemux ! h264parse ! avdec_h264 ! x264enc tune=zerolatency ! rtph264pay ! udpsink host=127.0.0.1 port=5000;
To test the input pipeline without messing up the Qt/C++ code, I use another shell command like this:
gst-launch-1.0 -v udpsrc port=5000 ! "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! autovideosink;
AFAIK, if the shell input pipeline works, it will work in my C++ code too (of course the elements after avdec_h264 depend on my programming/running environment, but if someone needs them I can share them without problem).
To add mpegts support, I tried with these lines (the last of a long sequence of trials):
OUTPUT:
gst-launch-1.0 filesrc location=file.mp4 ! qtdemux ! h264parse ! avdec_h264 ! x264enc tune=zerolatency ! mpegtsmux ! rtpmp2tpay ! udpsink host=127.0.0.1 port=5000;
INPUT:
gst-launch-1.0 -v udpsrc port=5000 caps="application/x-rtp" ! rtpmp2tdepay ! tsparse ! tsdemux ! h264parse ! avdec_h264 ! videoconvert ! autovideosink;
It works, but the video seems to stumble/bounce while playing.
What am I missing?
As a side question, I would like to avoid re-encoding the source video before sending it through RTP. I would like to remove these elements from the output pipeline:
avdec_h264 ! x264enc tune=zerolatency
I tried, but the result goes from nothing at all to this if I add the config-interval=-1 parameter to h264parse.
Please note that I would like to keep the latency as low as possible.
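For reference, the kind of pass-through output pipeline I'm trying to get working is roughly this (it keeps the config-interval=-1 tweak on h264parse):
gst-launch-1.0 filesrc location=file.mp4 ! qtdemux ! h264parse config-interval=-1 ! mpegtsmux ! rtpmp2tpay ! udpsink host=127.0.0.1 port=5000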
--- UPDATE ---
I tried putting a queue element between rtpmp2tdepay and tsparse, and this makes the video play fluidly, but latency grows to several seconds, whereas when playing plain RTP / H.264 it is nearly real-time.
Since MPEGTS is only a transport protocol, why should it add more delay than actual encoding?
Is there a way to shorten this delay? No matter if it changes the whole pipeline as long as protocols and encoding are kept the same.
BTW, I tried tuning the queue's max-size-buffers property, but using values under 150 causes playback to stumble.
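For reference, the input pipeline with the queue in place is roughly this (150 being the smallest max-size-buffers value that plays smoothly for me):
gst-launch-1.0 -v udpsrc port=5000 caps="application/x-rtp" ! rtpmp2tdepay ! queue max-size-buffers=150 ! tsparse ! tsdemux ! h264parse ! avdec_h264 ! videoconvert ! autovideosink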
--- UPDATE ---
If I use VLC to create the output stream using the same file, things get even worse:
:sout=#rtp{dst=127.0.0.1,port=5000,mux=ts} :no-sout-all :sout-keep
It is the same stumbling and scrambled video, with no chance to fix it.
I found a partial fix to the latency problem and compatibility with VLC:
! autovideosink sync=false
Disabling clock synchronisation shortens the delays, and the VLC output stream is now also received correctly by GStreamer.
This also makes the queue element unnecessary (probably not in the general use case, though), and AFAIK tsparse is redundant as well.
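So the input pipeline, with those simplifications applied, becomes something like:
gst-launch-1.0 -v udpsrc port=5000 caps="application/x-rtp" ! rtpmp2tdepay ! tsdemux ! h264parse ! avdec_h264 ! videoconvert ! autovideosink sync=false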
Anyway, I still need to understand why I need to re-encode H.264 video (in the output pipeline).
I am not yet a GStreamer genius, but I'm experimenting with the basics to become less ignorant. I tried this, expecting to see two test patterns in separate pop-up windows:
gst-launch-1.0 videotestsrc ! tee name=t ! autovideosink t. ! autovideosink
This causes two new windows to pop up, but only one shows the color-bars test pattern. The other shows a frozen snapshot of whatever part of the desktop background it happened to cover. Why does this happen, and how should I modify my pipeline to make it work?
Please try it like this:
gst-launch-1.0 videotestsrc ! tee name=t ! queue ! autovideosink t. ! queue ! autovideosink
I'm trying to figure out how to create a pipeline in GStreamer (1.4.4) beyond the very simple playbin one. I have a stream being fed into a GTK+ DrawingArea widget but it's currently letter-boxing it whereas I want to experiment with the video stream expanded to fit the entire widget.
To that end, I've played with the gst-launch-1.0 app but I'm finding that a fakesink at the end seems to work but an autovideosink doesn't. The two pipelines are (X being an rtspt:// URI for an IP camera):
gst-launch-1.0 rtspsrc location=X ! rtph264depay ! h264parse ! decodebin ! fakesink
gst-launch-1.0 rtspsrc location=X ! rtph264depay ! h264parse ! decodebin ! autovideosink
In other words, the only difference is the sink itself. It appears that, no matter where I place the sink (even if it's just an rtspsrc location=X ! sink), the problem still occurs, and that problem manifests itself as:
rtspsrc gstrtspsrc.c:5074:gst_rtspsrc_loop<rtspsrc0> error: Internal data flow error
rtspsrc gstrtspsrc.c:5074:gst_rtspsrc_loop<rtspsrc0> streaming task paused, reason not-linked (-1)
I've tried running at higher debug levels, but the output doesn't seem to have any useful information beyond the warnings already given.
Note that both the following commands work okay:
gst-play-1.0 X
gst-launch-1.0 playbin uri=X
But, as discussed, I don't really want a playbin, since I want to install my own video scaler in the pipeline.
My (albeit limited) understanding is that the rtph264depay removes the unnecessary RTSP protocol stuff, h264parse decodes the H.264 data, decodebin auto-magically selects the correct decoder and the autovideosink selects the correct sink for displaying the stream.
I'm not entirely certain how changing something at stage five of the pipeline would affect how stage one works.
So why is it that a fake sink works but the automatic selection one does not?
Adding videoconvert before autovideosink will make it work.
gst-launch-1.0 rtspsrc location=X ! rtph264depay ! h264parse ! decodebin ! videoconvert ! autovideosink
The reason is that the sink element does not support the format output by your decoder, which causes the error "streaming task paused, reason not-linked".
fakesink is different: it simply drops the data and doesn't care about the format, so it does not produce this error.
playbin can play the stream because it automatically adds a convert element when needed.
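If you then want to install your own scaler, as you mentioned, you can extend the working pipeline in the same way; the width/height below are just placeholder values:
gst-launch-1.0 rtspsrc location=X ! rtph264depay ! h264parse ! decodebin ! videoconvert ! videoscale ! video/x-raw,width=1280,height=720 ! autovideosink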
I have a gstreamer pipeline that takes video from the webcam and splits it into two threads:
1) uses an appsink so I can programmatically edit the captured frames
2) saves the video to a file
The pipeline looks like this:
gst-launch-1.0 v4l2src device=/dev/video0 \
! tee name=t ! queue ! videoconvert ! videoscale ! appsink name=sink caps="video/x-raw,format=RGB,width=800,framerate=15/1" t. \
! queue ! video/x-raw,width=800,framerate=15/1 ! jpegenc ! avimux ! filesink location=/tmp/output.avi
I'm using this inside a C++ app.
My problem is that most of the time I don't need both threads running simultaneously, only one of them, and in rare cases I need both.
So I need some way to temporarily pause/stop either the appsink or the video saving - in order to save CPU.
The way I do it now is to destroy the pipeline and recreate it with only one thread when needed, but that seems quite ugly.
I've been looking for a better solution, but no luck so far - is there any way to do that?
Thanks in advance!
An easier way to approach this may be to use a valve element. It has a drop property that you can set to true or false. Put it right after the queue on the tee.
http://gstreamer.freedesktop.org/data/doc/gstreamer/head/gstreamer-plugins/html/gstreamer-plugins-valve.html
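For illustration, the placement I had in mind looks roughly like this (the valve names are just examples; from your C++ code you would toggle each valve's drop property with g_object_set at runtime):
gst-launch-1.0 v4l2src device=/dev/video0 ! tee name=t t. ! queue ! valve name=appvalve drop=false ! videoconvert ! videoscale ! appsink name=sink caps="video/x-raw,format=RGB,width=800,framerate=15/1" t. ! queue ! valve name=filevalve drop=false ! video/x-raw,width=800,framerate=15/1 ! jpegenc ! avimux ! filesink location=/tmp/output.avi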
EDIT: This doesn't work. Some more details are present in this post on the GStreamer mailing list:
http://gstreamer-devel.966125.n4.nabble.com/How-to-Stop-start-recording-using-Valve-element-td4661728.html
I need a bit of your help, because I'm trying to receive an RTSP stream with GStreamer and then feed it into OpenCV to process the video. What is worse, I will need it back from OpenCV, but first things first. I'm quite new to this, so I don't know GStreamer well and I'm counting on you guys. Some simple examples would be best, but I'll use what I have ;)
Thanks in advance
You can use something like this:
uridecodebin uri=rtsp:// name=uridec ! queue ! tee name=t ! queue ! <some encoder and muxer> ! filesink t. ! queue ! videoconvert ! "video/x-raw, format=BGR" ! appsink t. ! queue ! <restream>
In this possible solution you are receiving and decoding at uridecodebin, which means that for re-streaming you need to encode again, as well as encode for storing to a file. If that's not what you want, you can replace uridecodebin with rtspsrc, which will give you RTP streams instead of decoded raw streams. Something like:
rtspsrc ! rtpXdepay ! tee name=t ! ...
Replace X with the format you are receiving (can be done dynamically from your application). Now the output is an encoded stream that you can use in a similar way as the sample pipeline above.
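For instance, assuming the camera delivers H.264, a concrete sketch could look like the line below (untested; the URI, matroskamux as the muxer, the element names and the file location are all just example choices):
gst-launch-1.0 -e rtspsrc location=rtsp://<your-camera-uri> ! rtph264depay ! h264parse ! tee name=t t. ! queue ! matroskamux ! filesink location=out.mkv t. ! queue ! avdec_h264 ! videoconvert ! video/x-raw,format=BGR ! appsink name=opencvsink
Here the file branch stores the incoming H.264 without re-encoding, and only the appsink branch is decoded for OpenCV; in your application you would build the same description with gst_parse_launch, or pass it as a GStreamer pipeline string ending in appsink to cv::VideoCapture.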
Note that these suggestions assume your RTSP input is a single stream (likely video); if you want video and audio you need to add two branches out of uridecodebin or rtspsrc. I also assumed that 'rtspStream' refers to some sort of external library/application that you are going to use to retransmit instead of using GStreamer itself. In any case, this should give you an idea.