Referring to http://gstreamer.freedesktop.org/data/doc/gstreamer/head/manual/html/section-dynamic-pipelines.html, I tried to change the udpsrc in a GStreamer pipeline.
But something went wrong: the state of the pipeline cannot be changed to PLAYING after changing the UDP source.
Below is the sequence I use to change the udpsrc.
The original pipeline bin consists of...
udpsrc - queue - tsdemux - queue - parser - videodecoder - queue - videosink
First, block the src pad of udpsrc.
Send an EOS event to the queue (the one next to udpsrc).
Wait until the EOS message is received from the bus.
Set the state of udpsrc to NULL, unlink it, and remove it from the pipeline bin.
Create a new udpsrc with the new source URI.
Link it to the queue.
Change the state to PLAYING.
Is there any mistake in this sequence?
Thanks in advance.
You don't need to send an EOS through the pipeline in this case. EOS signals the end of the stream, and while it can be recovered from in most cases, it is not needed here.
Sending an EOS through elements when changing the pipeline dynamically is meant for elements with both sink and src pads, in order to drain any data that may be stuck inside them.
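For illustration, a minimal sketch of the simplified sequence (no EOS) might look like the C snippet below. It assumes the swap runs on the application thread, that the old udpsrc and the first queue are named "src" and "q0" in the pipeline, and that the new URI is just a placeholder; none of these names or values come from the original pipeline.

#include <gst/gst.h>

static void
swap_udpsrc (GstElement *pipeline)
{
  GstElement *old_src = gst_bin_get_by_name (GST_BIN (pipeline), "src");
  GstElement *queue   = gst_bin_get_by_name (GST_BIN (pipeline), "q0");
  GstElement *new_src;

  /* stop the old source so it no longer pushes data, then take it out */
  gst_element_set_state (old_src, GST_STATE_NULL);
  gst_element_unlink (old_src, queue);
  gst_bin_remove (GST_BIN (pipeline), old_src);

  /* create the replacement source and wire it to the same queue
   * (the URI is a placeholder for the new source) */
  new_src = gst_element_factory_make ("udpsrc", "src");
  g_object_set (new_src, "uri", "udp://0.0.0.0:5006", NULL);
  gst_bin_add (GST_BIN (pipeline), new_src);
  gst_element_link (new_src, queue);

  /* bring the new source up to the pipeline's current state */
  gst_element_sync_state_with_parent (new_src);

  gst_object_unref (old_src);
  gst_object_unref (queue);
}

Blocking the old source's src pad with a pad probe, as in the original sequence, is still fine; the important change is simply not sending the EOS.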
Related
I have two GStreamer pipelines: one is a "source" pipeline streaming a live camera feed into an external channel, and the second is a "sink" pipeline that reads from the other end of that channel and outputs the live video to some form of sink.
[videotestsrc] -> [appsink] ----- Serial Channel ------> [appsrc] -> [autovideosink]
         First Pipeline                                       Second Pipeline
The first pipeline starts from a videotestsrc, encodes the video and wraps it in a GDP payload, and then sends it into a serial channel (though for the sake of the question, any sink that can be read from by another pipeline will do, e.g. a filesink writing to a serial port, or a udpsink), where it is read by the source of the second pipeline and shown via an autovideosink:
"Source" Pipeline
gst-launch-1.0 -v videotestsrc ! videoconvert ! video/x-raw,format=I420 ! x265enc ! gdppay ! udpsink host=127.0.0.1 port=5004
"Sink" pipeline
gst-launch-1.0 -v udpsrc uri=udp://127.0.0.1:5004 ! gdpdepay ! h265parse ! avdec_h265 ! autovideosink
Note: given the latency induced by udpsink/udpsrc, that pipeline complains about timestamp issues. If you replace the udpsrc/udpsink with a filesrc/filesink on a serial port, you can see the problem I am about to describe.
Problem:
Now that I have described the pipelines, here is the problem:
If I start both pipelines, everything works as expected. However, if after 30 s I stop the "source" pipeline and restart it, its running time resets back to zero, so the timestamps of all newly sent buffers are considered old by the sink pipeline, because it has already received buffers for timestamps 0 through 30 s. Playback on the other end therefore won't resume until after 30 s:
Source Pipeline: [28][29][30][0 ][1 ][2 ][3 ]...[29][30][31]
Sink Pipeline:   [28][29][30][30][30][30][30]...[30][30][31]
                             ^
                             Source pipeline restarted
                             ^^^^^^^^^^^^^^^^...^^^^^^^^
                             Sink pipeline will continue to only show the
                             "frame" received at 30s until a "newer" frame is
                             sent, when in reality each sent frame is newer
                             and should be shown immediately.
Solution
I have found that adding sync=false to the autovideosink does solve the problem; however, I was hoping to find a solution where the source would send its timestamps (DTS and PTS) based on the clock time, as seen in the image on that page.
I have seen this post and experimented with is-live and do-timestamp on my video source, but they do not seem to do what I want. I also tried to manually set the timestamps (DTS, PTS) in the buffers based on the system time, but to no avail.
Any suggestions?
I think you should just restart the receiver pipeline as well. You could add the -e switch to the sender pipeline; when you stop that pipeline, it should then correctly propagate EOS via the GDP element to the receiver pipeline. Otherwise, I guess you can send a new segment or discontinuity to the receiver. Some event has to be signalled, though, to make the pipeline aware of that change; otherwise it is somewhat bogus data. I'd say restarting the receiver is the simplest way.
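For what it's worth, a rough programmatic counterpart of the -e switch on the sender side might look like the sketch below (the variable name sender is an assumption and stands for the "source" pipeline): send EOS, wait until it has drained through the sinks, and only then tear the pipeline down.

#include <gst/gst.h>

static void
stop_sender (GstElement *sender)
{
  GstBus *bus = gst_element_get_bus (sender);
  GstMessage *msg;

  /* inject EOS at the top of the pipeline; per the answer above, gdppay
   * carries it in the stream so the receiver can see it as well */
  gst_element_send_event (sender, gst_event_new_eos ());

  /* block until the EOS (or an error) has reached the sinks */
  msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
      GST_MESSAGE_EOS | GST_MESSAGE_ERROR);
  if (msg != NULL)
    gst_message_unref (msg);
  gst_object_unref (bus);

  gst_element_set_state (sender, GST_STATE_NULL);
}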
I'm developing an app that receives an H.264 video stream from an RTSP camera, displays it, and stores it to MP4 without transcoding. For the purposes of my current test, I record for 5 seconds only.
My problem is that the MP4 is not playable. The resulting file varies in size from one run of the app to another, which shows something is very wrong (unexpected, since the recording time is fixed).
Here are my pipelines:
rtspsrc location = rtsp://192.168.0.61:8554/quality_h264 latency=0 ! rtph264depay ! h264parse ! video/x-h264,stream-format=avc ! queue ! interpipesink name=cam1
interpipesrc allow-renegotiation=true name=src listen-to=cam1 is-live=true ! h264parse ! queue ! decodebin ! autovideoconvert ! d3dvideosink sync=false
interpipesrc allow-renegotiation=true name=src listen-to=cam1 is-live=true ! h264parse ! queue ! mp4mux ! filesink location=test.mp4
As a next step I will add more cameras and will need to be able to change which camera gets recorded to MP4 on the fly, as well as pause/resume the recording. For this reason, I've opted to use interpipesink/interpipesrc, a set of GStreamer elements that allow communication between two independent pipelines: https://github.com/RidgeRun/gst-interpipe
A thread waits for 10 seconds, then sends EOS on the 3rd pipeline (the recording one). Then, when the bus receives GST_MESSAGE_EOS, it sets the pipeline state to NULL. I have checked with a pad probe that the EOS event is indeed received on the sink pad of the filesink.
I send EOS using this code: gst_element_send_event(m_pipeline, gst_event_new_eos()); where m_pipeline is the 3rd pipeline.
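For reference, a probe like the one mentioned above, attached to the filesink's sink pad to confirm that EOS arrives, might look roughly like this sketch (the element name "filesink0" is an assumption used for illustration):

#include <gst/gst.h>

static GstPadProbeReturn
eos_probe_cb (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstEvent *event = GST_PAD_PROBE_INFO_EVENT (info);

  if (GST_EVENT_TYPE (event) == GST_EVENT_EOS)
    g_print ("EOS reached the filesink\n");

  return GST_PAD_PROBE_OK;
}

static void
watch_for_eos (GstElement *recording_pipeline)
{
  GstElement *sink = gst_bin_get_by_name (GST_BIN (recording_pipeline), "filesink0");
  GstPad *sinkpad = gst_element_get_static_pad (sink, "sink");

  gst_pad_add_probe (sinkpad, GST_PAD_PROBE_TYPE_EVENT_DOWNSTREAM,
      eos_probe_cb, NULL, NULL);

  gst_object_unref (sinkpad);
  gst_object_unref (sink);
}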
Those exact pipelines produce a playable MP4 when run with gst-launch adding -e at the end.
If I replace mp4mux with matroskamux in my app, the MKV is playable and has the expected size. However, there's something wrong with the timestamps, as the player shows it starting at time 10 sec instead of 0. Do I need to edit the timestamps before passing the buffers to the muxer (mp4mux or matroskamux)?
It looks to me as if the MP4 is not fully written, but I can't see what else I can do apart from sending EOS.
I'm open to suggestions for restructuring the app, in case the use of the interpipe elements is causing a problem (although I can't see why it would at the moment).
I'm using GStreamer 1.18.2 on Windows 10 (x64).
I'm recording a WAV file using GStreamer, receiving a G.711 stream through a UDP port.
Any WAV player can play the file, but it shows the wrong duration and cannot fast-forward.
I believe that GStreamer writes the header at the beginning with empty data.
This pipeline can reproduce the issue:
gst-launch-1.0 udpsrc port=3000 caps="application/x-rtp,media=(string)audio, payload=0,clock-rate=(int)8000" ! rtpjitterbuffer ! rtppcmudepay ! mulawdec ! wavenc ! filesink append=true location=c:/recordings/audio-zz.wav
Florian Zwoch suggested using -e so that the file gets closed properly.
Indeed, it works perfectly.
I'm using this pipeline inside a Java program with the gst1-java-core library.
It seems I'm missing something when closing the pipeline.
My program has the same behaviour as gst-launch without the -e parameter.
Before stopping the pipeline I send an EOS Event.
pipeline.sendEvent(new EOSEvent());
How can I fix it?
The append property of the filesink element does not allow rewriting the header.
Thank you.
How do you stop the pipeline? If you interrupt it with Ctrl-C, it may indeed be that the header finalization is skipped. Run your pipeline with the -e option so that on Ctrl-C your pipeline gets stopped gracefully.
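When the pipeline is stopped from application code rather than from gst-launch, the usual equivalent of -e is to send EOS and then wait for the EOS message on the bus before moving to NULL, so wavenc gets a chance to rewrite the header. A minimal C sketch of that ordering (the variable name pipeline is illustrative; the same sequence applies through the gst1-java-core bus API):

#include <gst/gst.h>

static void
stop_recording (GstElement *pipeline)
{
  GstBus *bus = gst_element_get_bus (pipeline);
  GstMessage *msg;

  gst_element_send_event (pipeline, gst_event_new_eos ());

  /* don't go to NULL until the EOS has travelled all the way to the filesink */
  msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
      GST_MESSAGE_EOS | GST_MESSAGE_ERROR);
  if (msg != NULL)
    gst_message_unref (msg);
  gst_object_unref (bus);

  gst_element_set_state (pipeline, GST_STATE_NULL);
}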
I am sending an H.264 bytestream over RTP using GStreamer.
# sender
gst-launch-1.0 filesrc location=my_stream.h264 ! h264parse disable-passthrough=true ! rtph264pay config-interval=10 pt=96 ! udpsink host=localhost port=5004
Then I receive the frames, decode them, and display them in another GStreamer instance.
# receiver
gst-launch-1.0 udpsrc port=5004 ! application/x-rtp,payload=96,media="video",encoding-name="H264",clock-rate="90000" ! rtph264depay ! h264parse ! decodebin ! xvimagesink
This works as is, but I want to try adding an rtpjitterbuffer in order to perfectly smooth out playback.
# receiver
gst-launch-1.0 udpsrc port=5004 ! application/x-rtp,payload=96,media="video",encoding-name="H264",clock-rate="90000" ! rtpjitterbuffer ! rtph264depay ! h264parse ! decodebin ! xvimagesink
However, as soon as I do, the receiver only displays a single frame and freezes.
If I replace the .h264 file with an MP4 file, the playback works great.
I assume that my h264 stream does not have the required timestamps to enable the jitter buffer to function.
I made slight progress by adding identity datarate=1000000. This allows the jitter buffer to play, but it messes with my frame rate because P-frames contain less data than I-frames. Clearly the identity element adds timestamps, just with the wrong values.
Is it possible to automatically generate timestamps on the sender by specifying the "framerate" caps correctly somewhere? So far my attempts have not worked.
You've partially answered the problem already:
If I replace the .h264 file with an MP4 file, the playback works great.
I assume that my h264 stream does not have the required timestamps to enable the jitter buffer to function.
Your sender pipeline has no negotiated frame rate because you're using a raw H.264 stream, while you should really be using a container format (e.g., MP4) which carries this information. Without timestamps, udpsink cannot synchronise against the clock to throttle output, so the sender is spitting out packets as fast as the pipeline can process them. It's not a live sink.
However, adding an rtpjitterbuffer makes your receiver act as a live source. It freezes because it's trying its best to cope with the barrage of packets with malformed timestamps. RTP doesn't transmit "missing" timestamps, to the best of my knowledge, so all packets will probably have the same timestamp. Thus it probably reconstructs the first frame and drops the rest as duplicates.
I must agree with user1998586 in the sense that it ought to be better for the pipeline to crash with a good error message in this case, rather than trying its best.
Is it possible to automatically generate timestamps on the sender by specifying the "framerate" caps correctly somewhere? So far my attempts have not worked.
No. You should really use a container.
In theory, however, an AU-aligned raw H.264 stream could be timestamped just by knowing the frame rate, but there are no GStreamer elements (that I know of) that do this, and just specifying the caps won't do it.
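To make that last point concrete, here is a hypothetical sketch of the "in theory" approach: a buffer probe placed after an h264parse configured with alignment=au, stamping each access unit from an assumed fixed frame rate (30/1 here). This is not an existing element, and the frame rate, element handling, and the DTS shortcut (no B-frame reordering) are all assumptions.

#include <gst/gst.h>

#define FPS_N 30
#define FPS_D 1

static GstPadProbeReturn
stamp_buffer_cb (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  static guint64 frame_count = 0;   /* one access unit per buffer (alignment=au) */
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);

  buf = gst_buffer_make_writable (buf);
  GST_BUFFER_PTS (buf) = gst_util_uint64_scale (frame_count, GST_SECOND * FPS_D, FPS_N);
  GST_BUFFER_DTS (buf) = GST_BUFFER_PTS (buf);          /* only valid without B-frames */
  GST_BUFFER_DURATION (buf) = gst_util_uint64_scale (1, GST_SECOND * FPS_D, FPS_N);
  frame_count++;

  GST_PAD_PROBE_INFO_DATA (info) = buf;                 /* hand back the writable buffer */
  return GST_PAD_PROBE_OK;
}

static void
add_stamping_probe (GstElement *h264parse)
{
  GstPad *srcpad = gst_element_get_static_pad (h264parse, "src");

  gst_pad_add_probe (srcpad, GST_PAD_PROBE_TYPE_BUFFER,
      stamp_buffer_cb, NULL, NULL);
  gst_object_unref (srcpad);
}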
I had the same problem, and the best solution I found was to add timestamps to the stream on the sender side, by adding do-timestamp=1 to the source.
Without timestamps I couldn't get rtpjitterbuffer to pass more than one frame, no matter what options I gave it.
(The case I was dealing with was streaming from raspivid via fdsrc; I presume filesrc behaves similarly.)
It does kinda suck that gstreamer so easily sends streams that gstreamer itself (and other tools) doesn't process correctly: if not having timestamps is valid, then rtpjitterbuffer should cope with it; if not having timestamps is invalid, then rtph264pay should refuse to send without timestamps. I guess it was never intended as a user interface...
You should try setting the rtpjitterbuffer mode to a value other than the default:
mode : Control the buffering algorithm in use
flags: readable, writable
Enum "RTPJitterBufferMode" Default: 1, "slave"
(0): none - Only use RTP timestamps
(1): slave - Slave receiver to sender clock
(2): buffer - Do low/high watermark buffering
(4): synced - Synchronized sender and receiver clocks
Like that:
... ! rtpjitterbuffer mode=0 ! ...
I have this pipeline:
gst-launch -v filesrc location=video.mkv ! matroskademux name=d \
d. ! queue ! ffdec_h264 ! subtitleoverlay name=overlay ! ffmpegcolorspace ! x264enc ! mux. \
d. ! queue ! aacparse ! mux. \
filesrc location=fr.srt ! subparse ! overlay. \
matroskamux name=mux ! filesink location=vid.mkv
I'm trying to burn the subtitles into the video. I have succeeded in reading the file with the subtitles, but the above pipeline gets stuck and I get this message:
queue_dataflow gstqueue.c:1243:gst_queue_loop:<queue0> queue is empty
What's wrong with my pipeline? What does the queue element do? I haven't really understood what the documentation says about it.
The queue element adds a thread boundary to the pipeline and support for buffering. The input side will put buffers into a queue, which is then emptied on the output side from another thread. Via properties on the queue element you can set the size of the queue and some other things.
I don't see anything specifically wrong with your pipeline, but the message there tells you that at some point one of the queues is empty. That may or may not be a problem; it might fill up again later.
You'll have to check the GStreamer debug logs to see if there's anything in there that hints at the actual problem. My best guess here would be that the audio queue is running full because of the encoder latency of x264enc. Try making the audio queue larger, or set tune=zerolatency on x264enc.
Also, I see that you're using GStreamer 0.10. It has not been maintained for more than two years, and for new applications you should really consider upgrading to the 1.x versions.
A queue is the thread boundary element through which you can force the use of threads. It does so by using a classic provider/consumer model as learned in threading classes at universities all around the world. By doing this, it acts both as a means to make data throughput between threads threadsafe, and it can also act as a buffer. Queues have several GObject properties to be configured for specific uses. For example, you can set lower and upper thresholds for the element. If there's less data than the lower threshold (default: disabled), it will block output. If there's more data than the upper threshold, it will block input or (if configured to do so) drop data.
To use a queue (and therefore force the use of two distinct threads in the pipeline), one can simply create a “queue” element and put this in as part of the pipeline. GStreamer will take care of all threading details internally.
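As a small illustration of those properties, the following sketch configures a queue with explicit upper and lower thresholds (all numbers are arbitrary examples, not recommendations):

#include <gst/gst.h>

static GstElement *
make_tuned_queue (void)
{
  GstElement *queue = gst_element_factory_make ("queue", "my_queue");

  /* upper thresholds: input blocks (or data is dropped, see below)
   * once any of these limits is exceeded */
  g_object_set (queue,
      "max-size-buffers", 200,
      "max-size-bytes",   10 * 1024 * 1024,
      "max-size-time",    2 * GST_SECOND,
      NULL);

  /* lower threshold: nothing is output until this much is buffered
   * (disabled by default) */
  g_object_set (queue, "min-threshold-time", 500 * GST_MSECOND, NULL);

  /* optionally drop old data instead of blocking upstream */
  g_object_set (queue, "leaky", 2 /* downstream */, NULL);

  return queue;
}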