GStreamer rtph265pay/rtph265depay does not work if rtph265pay is started before rtph265depay

Given two GStreamer pipelines:
Sender:
gst-launch-1.0 videotestsrc do-timestamp=true pattern=snow ! video/x-raw,width=640,height=480,framerate=30/1 ! x265enc ! h265parse ! rtph265pay ! udpsink host=127.0.0.1 port=5801
Receiver:
gst-launch-1.0 -v udpsrc port=5801 ! application/x-rtp,encoding-name=H265 ! rtph265depay ! decodebin ! autovideosink sync=false
If I start the Receiver first, the pipeline works fine. If I start the Sender first, the receiver pipeline never actually starts showing any output. It does print the following to the terminal:
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:src: caps = application/x-rtp, encoding-name=(string)H265, media=(string)video, clock-rate=(int)90000
/GstPipeline:pipeline0/GstRtpH265Depay:rtph265depay0.GstPad:sink: caps = application/x-rtp, encoding-name=(string)H265, media=(string)video, clock-rate=(int)90000
Any ideas as to why this happens? I am assuming there is some form of "start" packet sent at the beginning of the stream that the receiver needs to be "awake" for, but this is purely intuition, not anything from the documentation.

I found the solution; I would have found it sooner had I read the documentation of rtph265pay: https://gstreamer.freedesktop.org/documentation/rtp/rtph265pay.html?gi-language=c
There is a property called config-interval, described as "Send VPS, SPS and PPS Insertion Interval in seconds". It defaults to 0, which presumably means the parameter sets are only sent at the beginning of the stream and never again. Setting it to a positive number lets the receiver start reading the stream every time this data is re-sent. For my application, a value of 1 second works great.
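For reference, this is the sender pipeline from above with only that one property added (everything else unchanged):
gst-launch-1.0 videotestsrc do-timestamp=true pattern=snow ! video/x-raw,width=640,height=480,framerate=30/1 ! x265enc ! h265parse ! rtph265pay config-interval=1 ! udpsink host=127.0.0.1 port=5801
With this, the receiver can be started (or restarted) at any time and locks on within about a second.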

Related

GStreamer: receive/demultiplex multiple RTP streams on one port?

I'd like to use gstreamer to create a network sink for multiple UDP RTP streams. The basic setup (one sender, one receiver) works fine and looks like this:
# sender:
gst-launch-1.0 -vvtcm audiotestsrc ! rtpgstpay config-interval=1 ssrc=1 ! udpsink host=127.0.0.1 port=5000
# receiver:
gst-launch-1.0 -vvtcm udpsrc port=5000 caps="application/x-rtp,media=application,clock-rate=90000,encoding-name=X-GST" ! rtpssrcdemux ! rtpgstdepay ! autoaudiosink
However, I would like to have multiple senders that can dynamically start and stop streaming to the same port. AFAICT the SSRC field in RTP allows me to do exactly this, but I can't figure out how to configure rtpssrcdemux so that it will create additional source pads.
E.g. when I start the following receiver pipeline:
gst-launch-1.0 -vvtcm udpsrc port=5000 caps="application/x-rtp,media=application,clock-rate=90000,encoding-name=X-GST" ! rtpssrcdemux name=demux demux.src_0 ! rtpgstdepay ! autoaudiosink demux.src_1 ! rtpgstdepay ! autoaudiosink
it will wait for the first audio stream, but when I start a second sender with a different SSRC, the pipeline stops with streaming task paused, reason not-linked (-1).
Hints welcome...?
For the record (and a few years late ;-): this is not possible using gst-launch alone; you need code that listens for the new-ssrc-pad signal, as mentioned in the comment.
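A minimal sketch of what that code could look like (GStreamer 1.x C API; error handling omitted, and the per-SSRC branch is assumed to be rtpgstdepay ! autoaudiosink as in the question):

#include <gst/gst.h>

/* Called by rtpssrcdemux each time a new SSRC shows up; the pipeline
 * is passed as user data. Build a fresh depay branch and link it. */
static void
on_new_ssrc_pad (GstElement *demux, guint ssrc, GstPad *pad, gpointer user_data)
{
  GstElement *pipeline = GST_ELEMENT (user_data);
  GstElement *depay = gst_element_factory_make ("rtpgstdepay", NULL);
  GstElement *sink = gst_element_factory_make ("autoaudiosink", NULL);

  gst_bin_add_many (GST_BIN (pipeline), depay, sink, NULL);
  gst_element_link (depay, sink);

  GstPad *sinkpad = gst_element_get_static_pad (depay, "sink");
  gst_pad_link (pad, sinkpad);
  gst_object_unref (sinkpad);

  /* The new elements must catch up with the running pipeline. */
  gst_element_sync_state_with_parent (depay);
  gst_element_sync_state_with_parent (sink);
}

/* After building the pipeline ("demux" is the rtpssrcdemux element): */
g_signal_connect (demux, "new-ssrc-pad", G_CALLBACK (on_new_ssrc_pad), pipeline);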

Gstreamer. PCM streaming

I have a PCM audio file that I want to stream via RTP. When I do
gst-launch-1.0 filesrc location=AudioRaw515151.pcm ! audio/x-raw, format=S16LE, channels=1, layout=interleaved, rate=8000 ! alawenc ! rtppcmapay ! udpsink host=192.168.2.5 port=5010
I get this kind of message:
Pipeline is PREROLLING ...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Got EOS from element "pipeline0".
Execution ended after 0:00:00.019270487
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
But I can play this audio, which means the audio itself is OK:
gst-launch-1.0 filesrc location=AudioRaw515151.pcm ! audio/x-raw, format=S16LE, channels=1, layout=interleaved, rate=8000 ! alawenc ! rtppcmapay ! rtppcmdepay ! alawdec ! audiosink
I tried another file, an AVI, taking the audio from it and doing the same thing:
gst-launch-1.0 filesrc location=file.avi ^
! qtdemux name=mux ^
! queue ^
! faad ^
! audioconvert ^
! audioresample ^
! "audio/x-raw, layout=(string)interleaved, rate=(int)8000" ^
! alawenc ^
! rtppcmapay ^
! queue ^
! udpsink host=192.168.2.5 port=5010
As you see, this is the same thing but with the audio taken from the AVI. Everything works:
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
When I open Wireshark I see that when I run my PCM pipeline, it fires off all the data without any delay, and every packet I send contains:
Header checksum: 0x0000 [incorrect, should be 0x40b5 (may be caused by "IP checksum offload"?)]
Message: Bad checksum
So here is the question: I think I have a problem with timestamps or something like that when I do ! alawenc (encoding to G.711). Am I right?
And what solution can fix that problem?
First question: does the 2nd example play?
A few more comments:
file.avi plus qtdemux sounds wrong; just use decodebin (or uridecodebin) to leave the autoplugging to GStreamer.
for raw audio I recommend using the audioparse element (see the sketch after the link below)
And finally, there are a bunch of RTP examples in the git repo:
https://cgit.freedesktop.org/gstreamer/gst-plugins-good/tree/tests/examples/rtp/client-PCMA.sh
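For instance, the raw PCM case could be parsed along these lines. This is only a sketch: the property names (raw-format, rate, channels) are as I recall them from the gst-plugins-bad audioparse docs, so double-check them, and note that newer GStreamer releases ship rawaudioparse instead, with slightly different property names:
gst-launch-1.0 filesrc location=AudioRaw515151.pcm ! audioparse raw-format=s16le rate=8000 channels=1 ! alawenc ! rtppcmapay ! udpsink host=192.168.2.5 port=5010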

How to record pipeline even if sender doesn't send data in gstreamer

I'm a newbie to GStreamer, so I would appreciate it if you could help me.
I'm trying to listen to a pipeline and record frames to a file.
I have tried the following pipeline:
gst-launch-1.0 udpsrc port=5600 do-timestamp=true ! application/x-rtp, payload=96 ! rtph264depay ! avdec_h264 ! clockoverlay ! jpegenc ! avimux ! filesink location=stream.avi
I want to record the whole timeline even if the sender doesn't provide any frame data.
By default, the recorder only appends frames when the pipeline receives valid frames, but I want to see black frames whenever the sender doesn't send data.
I experimented a bit, and I don't think you'll be able to do this with a plain gst-launch command. Unfortunately, it will probably involve writing an application that detects when packets/buffers are no longer coming in and then modifies the pipeline. If you want to give it a go, I'd suggest the input-selector element, in something like this:
gst-launch-1.0 videotestsrc pattern=black ! video/x-raw ! input-selector name=selector ! clockoverlay ! jpegenc ! avimux ! filesink location=stream.avi
Then I'd create a method to attach the stream to the input-selector:
udpsrc port=5600 do-timestamp=true ! application/x-rtp, payload=96 ! rtph264depay ! avdec_h264 ! identity name=buffer-checker
To detect that no packets are coming in, you can listen for the handoff signal on the identity element, remove the stream when it times out, and switch over to the black test pattern from the videotestsrc by setting the active-pad property on the input-selector.
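A rough sketch of that detection logic (C, GStreamer 1.x, and it needs a running GLib main loop; the one-second timeout and the pad name sink_0 for the black branch are assumptions, and atomic access to the timestamp is omitted for brevity):

#include <gst/gst.h>

static gint64 last_buffer_us; /* monotonic time of the last handoff */

/* identity "handoff" fires for every buffer that passes through. */
static void
on_handoff (GstElement *identity, GstBuffer *buffer, gpointer user_data)
{
  last_buffer_us = g_get_monotonic_time ();
}

/* Runs periodically from the main loop; flips the input-selector to
 * the black branch if no buffer has arrived for one second. */
static gboolean
check_timeout (gpointer user_data)
{
  GstElement *selector = GST_ELEMENT (user_data);
  if (g_get_monotonic_time () - last_buffer_us > G_USEC_PER_SEC) {
    GstPad *black_pad = gst_element_get_static_pad (selector, "sink_0");
    g_object_set (selector, "active-pad", black_pad, NULL);
    gst_object_unref (black_pad);
  }
  return G_SOURCE_CONTINUE;
}

/* Wiring, after building the pipeline; buffer_checker is the identity
 * named "buffer-checker" above, selector the input-selector. */
g_signal_connect (buffer_checker, "handoff", G_CALLBACK (on_handoff), NULL);
g_timeout_add (100, check_timeout, selector);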
Using the videomixer element almost works, but I don't believe it will handle multiple stops and starts of the stream.
Anyway, hope someone else comes up with a better idea. You could also re-analyze your top-level approach and see if there is a way to work with multiple video clips instead of a single one.

How to remove a branch of tee in an active GStreamer pipeline?

The version of GStreamer I use is 1.x. I've spent a lot of time searching for a way to delete a tee branch.
In an active pipeline, a recording bin is created as below and inserted into this pipeline by branching the tee element.
"queue ! video/x-h264, width=800, height=600, framerate=10/1, stream-format=(string)byte-stream ! h264parse ! mp4mux ! filesink location=/xxxx"
It works perfectly, except that I want to dynamically remove the recording bin and end up with a playable mp4 file. According to some discussions and tutorials, getting a correct mp4 file requires handling EOS somehow. After trying several methods, I always got broken mp4 files.
Does anyone have sample code written in C to show me? I'd appreciate your help.
Your best bet for cases like this may be to create two processes. The first process would run the video, and one branch of its tee would deliver the H.264 data to the second process by whatever means.
Here are two pipelines demonstrating the concept using UDP sockets.
gst-launch-1.0 videotestsrc ! x264enc ! tee name=t ! h264parse ! avdec_h264 ! videoconvert ! ximagesink t. ! queue ! h264parse ! rtph264pay ! udpsink host=localhost port=8888
gst-launch-1.0 udpsrc port=8888 num-buffers=300 ! application/x-rtp,media=video,encoding-name=H264 ! rtph264depay ! h264parse ! mp4mux ! filesink location=/tmp/264.mp4
The trick to getting that clean mp4 is to make sure an EOS event is delivered reliably.
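(In the receiver above, num-buffers=300 makes udpsrc send EOS after 300 buffers; gst-launch's -e / --eos-on-shutdown switch serves the same purpose when you stop a pipeline with Ctrl-C.) In application code, the usual drain pattern looks roughly like this (a sketch; mp4mux writes its index once the EOS reaches the filesink):

/* Ask the pipeline to drain; mp4mux finalizes the file on EOS. */
gst_element_send_event (pipeline, gst_event_new_eos ());

/* Block until EOS (or an error) reaches the bus, then shut down. */
GstBus *bus = gst_element_get_bus (pipeline);
GstMessage *msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
    GST_MESSAGE_EOS | GST_MESSAGE_ERROR);
gst_message_unref (msg);
gst_object_unref (bus);
gst_element_set_state (pipeline, GST_STATE_NULL);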
Instead of adding it dynamically, you just have the recording branch in the pipeline by default and add a probe callback on the source pad of its queue. In the probe callback you decide whether to pass each buffer on or not (GST_PAD_PROBE_DROP drops the buffer, GST_PAD_PROBE_OK passes it on to the next element), so when you get an event to start/stop recording you just return the appropriate value; a sketch follows below. And for the file sink you can use multifilesink instead, so as to write to a different file every time you start/stop.
Note that the queue which drops the buffers needs to sit before the mux element, otherwise the file will be corrupt.
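The probe itself could look like this (a sketch; the recording flag and the element handle names are illustrative):

static gboolean recording = FALSE; /* toggled by your start/stop logic */

static GstPadProbeReturn
drop_unless_recording (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  return recording ? GST_PAD_PROBE_OK : GST_PAD_PROBE_DROP;
}

/* Install on the recording queue's source pad: */
GstPad *srcpad = gst_element_get_static_pad (queue, "src");
gst_pad_add_probe (srcpad, GST_PAD_PROBE_TYPE_BUFFER,
    drop_unless_recording, NULL, NULL);
gst_object_unref (srcpad);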
Hope that helps!
Finally, I came up with a solution.
Let's say that there is an active pipeline including a recording bin.
"udpsrc port=4444 caps=\"application/x-rtp, media=(string)video,
clock-rate=(int)90000, encoding-name=(string)H264 ! rtph264depay !
tee name=tp tp. ! queue ! video/x-h264, width=800, height=600,
framerate=10/1 ! decodebin ! videoconvert ! video/x-raw, format=RGBA !
autovideosink"
recording bin:
"queue ! video/x-h264, width=800, height=600, framerate=10/1,
stream-format=(string)byte-stream ! h264parse ! mp4mux ! filesink
location=/xxxx"
After a period of time, we want to stop recording and save to an mp4 file while the video keeps streaming.
First, I use a blocking probe to block the src pad of the tee. In this blocking probe callback, I use an event probe to catch EOS on the sink pad of the filesink, and then busy-wait.
If EOS is caught in the event probe callback:
self->isGotEOS = YES;
Busy-waiting in the blocking probe callback:
while (self->isGotEOS == NO) {
    usleep(100000);
}
Before entering the busy-waiting while loop, an EOS event is created and sent to the sink pad of the recording bin.
After the busy waiting is done:
usleep(200000);
[self destory_record_elements];
I think usleep(200000) is a trick. Without it, a non-playable mp4 file is usually the result. It would seem that 200 ms is long enough for the EOS to be handled.
I had a similar problem previously. My pipeline:
videotestsrc do-timestamp="TRUE" ! videoflip method=0 ! tee name=t
t. ! queue ! videoconvert ! glupload ! glshader ! autovideosink async="FALSE"
t. ! queue ! identity drop-probability=1 ! videoconvert name=conv2 ! openh264enc ! h264parse ! avimux ! multifilesink async="FALSE" post-messages=true next-file=4
Then I just change the drop-probability property on the identity element:
To stop recording: set drop-probability = 1 and send EOS to the encoder branch: gst_pad_send_event(conv2_sinkpad, gst_event_new_eos());
To resume recording: set drop-probability = 0
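In C that toggle is just this (a sketch; "identity" is a handle to the identity element above, and conv2_sinkpad was fetched earlier with gst_element_get_static_pad):

/* stop: drop all buffers from here on and finalize the current file */
g_object_set (identity, "drop-probability", 1.0, NULL);
gst_pad_send_event (conv2_sinkpad, gst_event_new_eos ());

/* resume: let buffers through again */
g_object_set (identity, "drop-probability", 0.0, NULL);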

gstreamer rtpvp8depay cannot decode stream

I have two GStreamer instances: a sender and a receiver. I want to stream RTP/VP8 video. It works perfectly fine if I stream via UDP, like this:
sender
gst-launch-0.10 -v videotestsrc ! vp8enc ! rtpvp8pay ! udpsink host=127.0.0.1 port=9001
receiver
gst-launch-0.10 udpsrc port=9001 caps = "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)VP8-DRAFT-IETF-01, payload=(int)96" ! rtpvp8depay ! vp8dec ! ffmpegcolorspace ! autovideosink
That works fine. But when I try to stream through a FIFO / named pipe (created with mkfifo()), with:
sender
gst-launch-0.10 -v videotestsrc ! vp8enc ! rtpvp8pay ! filesink location = myPipe
receiver
gst-launch-0.10 filesrc location = myPipe ! capsfilter caps = application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)VP8-DRAFT-IETF-01, payload=(int)96 ! rtpvp8depay ! vp8dec ! ffmpegcolorspace ! autovideosink
It fails and my receiver continuously outputs:
WARNING: from element /GstPipeline:pipeline0/GstRtpVP8Depay:rtpvp8depay0: Could not decode stream.
Additional debug info:
gstbasertpdepayload.c(387): gst_base_rtp_depayload_chain (): /GstPipeline:pipeline0/GstRtpVP8Depay:rtpvp8depay0:
Received invalid RTP payload, dropping
I think I read somewhere (but can't find it again) that this is because over UDP the RTP packets are separated properly, while when written to a named pipe like this the packets are "chained" (not properly separated), and thus GStreamer doesn't know how many bytes to read to get an RTP packet.
Is this correct, and if so, how can I change that?
Thanks in advance!
When going through a named pipe, the RTP packets are not framed properly. You could either:
Send the encoded stream directly through as a byte-stream, without using the rtpvp8pay element.
Use another RTP element in GStreamer that handles a byte-stream format, such as rtpstreampay or rtpgdppay; see the sketch below. (I believe rtpstreampay might be a GStreamer 1.0 element though.)
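For reference, the rtpstreampay route could look like this with GStreamer 1.0 (a sketch: rtpstreampay prefixes each RTP packet with a two-byte length, RFC 4571 style, so packet boundaries survive the byte stream; note 1.0 uses videoconvert and the plain VP8 encoding-name):
sender
gst-launch-1.0 videotestsrc ! vp8enc ! rtpvp8pay ! rtpstreampay ! filesink location=myPipe
receiver
gst-launch-1.0 filesrc location=myPipe ! "application/x-rtp-stream, media=(string)video, clock-rate=(int)90000, encoding-name=(string)VP8" ! rtpstreamdepay ! rtpvp8depay ! vp8dec ! videoconvert ! autovideosink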
I finally solved my problem.
I did not manage to get the byte-stream approach working through the pipe, but I managed to use an appsrc to feed the GStreamer pipeline.
So my whole pipeline (which might be useful to other people) looks like this: appsrc -> rtpvp8depay -> vp8dec -> videoconvert -> videoscale -> appsink (I'm using GStreamer 1.0 on Arch Linux).
Hope this helps!
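For anyone taking the same route, feeding the depayloader from an appsrc boils down to pushing one complete RTP packet per buffer. A sketch (the data/size variables stand for whatever transport you read each packet from; encoding-name VP8 is the 1.0 name, where 0.10 used VP8-DRAFT-IETF-01):

#include <gst/gst.h>
#include <gst/app/gstappsrc.h>

GstElement *appsrc = gst_element_factory_make ("appsrc", "src");
GstCaps *caps = gst_caps_from_string (
    "application/x-rtp, media=(string)video, clock-rate=(int)90000, "
    "encoding-name=(string)VP8, payload=(int)96");
g_object_set (appsrc, "caps", caps, "format", GST_FORMAT_TIME, NULL);
gst_caps_unref (caps);

/* For every complete RTP packet read from the transport: one packet,
 * one buffer -- this is what preserves the packet boundaries. */
GstBuffer *buf = gst_buffer_new_allocate (NULL, size, NULL);
gst_buffer_fill (buf, 0, data, size);
gst_app_src_push_buffer (GST_APP_SRC (appsrc), buf); /* takes ownership */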