Method to Cancel/Abort GStreamer tcpclientsink Timeout

I am working on an application that uses GStreamer to send a Motion JPEG video stream through a tcpclientsink element. The application works fine, except if I disrupt the network by switching the connection from wired to wireless or wireless to wired. When that happens, it looks like the tcpclientsink element waits 15 minutes before responding to messages. That becomes a problem if I try to shut down the application during this time. Here is what I've observed:
1. Start a Motion JPEG media stream with GStreamer using tcpclientsink as the sink. The code pushing the video runs in its own thread.
2. While the media stream is running, disrupt the connection by switching the type of network connection.
3. Start shutting down the application: call gst_bus_post(bus, gst_message_new_eos(NULL)). This seems to get ignored.
4. Call pthread_join to wait for the video thread to exit. It does not respond for up to 15 minutes.
When I look at the GST_DEBUG messages, I can see that the GStreamer tcpclientsink hit an error while writing. It apparently waits 15 minutes while retrying.
Is there a way I can abort or cancel the timeout associated with tcpclientsink? Is there a different message I could send to cause the sink to terminate immediately?
I know I can use pthread_timedjoin_np and pthread_cancel to kill the video thread if GStreamer does not respond as fast as I would like, but I would prefer to have GStreamer exit as cleanly as possible.
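For reference, gst_bus_post(bus, gst_message_new_eos(NULL)) only delivers a message to the application side of the bus; it does not ask the elements themselves to drain. A minimal sketch of pushing an EOS event into the pipeline instead (GStreamer 0.10 API; `pipeline` is assumed to be the top-level element):

#include <gst/gst.h>

/* Ask the elements themselves (including the sink) to wind down,
 * then wait a bounded time for the EOS to travel through. */
static void
shut_down (GstElement * pipeline)
{
  GstBus *bus = gst_element_get_bus (pipeline);
  GstMessage *msg;

  gst_element_send_event (pipeline, gst_event_new_eos ());

  msg = gst_bus_timed_pop_filtered (bus, 5 * GST_SECOND,
      GST_MESSAGE_EOS | GST_MESSAGE_ERROR);
  if (msg != NULL)
    gst_message_unref (msg);

  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (bus);
}

Note that if the streaming thread is blocked inside the socket write, even this EOS cannot overtake it, which is consistent with what is observed below.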
Update
I should have mentioned I'm using GStreamer 0.10.36. Unfortunately this might just be a bug with that version. I see the handling has changed quite a bit in 1.2.x. I'm still hoping there is a workaround for the version I'm using.
I was able to recreate this problem using gst-launch-0.10. This might be more complicated than necessary, but it worked for me:
Launch three scripts:
The following relays the data between the consumer and the producer:
while true
do
  gst-launch-0.10 tcpserversrc host=0 port=${PORT_IN} ! jpegdec ! jpegenc ! tcpserversink port=${PORT_OUT}
done
The following is the script for the consumer:
gst-launch-0.10 tcpclientsrc host=${IP_ADDR} port=${PORT_OUT} ! jpegdec ! ffmpegcolorspace ! ximagesink
The following is the script for the producer:
gst-launch-0.10 ximagesrc ! videoscale ! video/x-raw-rgb,framerate=1/1,width=640,height=320 ! ffmpegcolorspace ! jpegenc ! tcpclientsink host=${IP_ADDR} port=${PORT_IN}
I ran the first two scripts on one machine and the third script on a second machine. When I switched the network connection on the second machine from wired to wireless, it took 15+ minutes for the tcpclientsink to report an error.

In order to fix the problem, I had to patch GStreamer. I added code to specify the send timeout in the gst_tcp_client_sink_start() function of gsttcpclientsink.c:
struct timeval timeout;

/* Give up on a blocked send() after 60 seconds instead of
 * retrying for ~15 minutes. */
timeout.tv_sec = 60;
timeout.tv_usec = 0;
...
setsockopt (this->sock_fd.fd, SOL_SOCKET, SO_SNDTIMEO, (char *)&timeout, sizeof(timeout));
Now the application can shut down within one minute (acceptable for my situation) even if the network is disrupted while streaming video.
Note: It doesn't look like this will be a problem with version 1.2.1, but I need to stay with 0.10.36.
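As a sanity check outside GStreamer, here is a small self-contained sketch of the same socket option (the address and port are placeholders, not values from the setup above). Once the peer stops draining, the blocking send() fails with EAGAIN after roughly the configured timeout instead of stalling for many minutes:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/time.h>

int
main (void)
{
  struct sockaddr_in addr;
  struct timeval timeout = { 60, 0 };   /* 60 s send timeout */
  int fd = socket (AF_INET, SOCK_STREAM, 0);

  if (fd < 0 || setsockopt (fd, SOL_SOCKET, SO_SNDTIMEO,
          &timeout, sizeof (timeout)) != 0) {
    perror ("socket/setsockopt");
    return 1;
  }

  memset (&addr, 0, sizeof (addr));
  addr.sin_family = AF_INET;
  addr.sin_port = htons (5000);                         /* placeholder */
  inet_pton (AF_INET, "192.168.1.10", &addr.sin_addr);  /* placeholder */

  if (connect (fd, (struct sockaddr *) &addr, sizeof (addr)) == 0) {
    char buf[4096] = { 0 };
    /* Keep writing until the send buffer fills and the timeout hits. */
    while (send (fd, buf, sizeof (buf), 0) >= 0)
      ;
    perror ("send");                    /* EAGAIN after ~60 s */
  }

  close (fd);
  return 0;
}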

Related

GStreamer can't determine running time for this packet without knowing configured latency

I'm trying to forward an RTSP stream through WebRTC using GStreamer. I keep getting a massive amount of warnings about latency:
Can't determine running time for this packet without knowing configured latency
The pipeline is:
rtspsrc location=my_rtsp_url is-live=true !
queue !
decodebin !
videoconvert !
openh264enc !
video/x-h264,profile=constrained-baseline !
rtph264pay aggregate-mode=zero-latency !
webrtcbin turn-server=turn://test:test@localhost:3478 bundle-policy=max-bundle name=webrtcbin
I can't seem to figure out what I need to set to get rid of these messages. I tried looking through the source code for GStreamer. As best I can tell, the rtp session (gstrtpsession.c) is finding that send_latency is GST_CLOCK_TIME_NONE.
Is there something I can add to my pipeline to fix this?
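One thing that may be worth trying (an assumption, not a confirmed fix): pin the pipeline latency explicitly so the RTP session no longer sees GST_CLOCK_TIME_NONE. The 200 ms figure and the pipeline handle below are placeholders; webrtcbin was given name=webrtcbin in the pipeline above:

#include <gst/gst.h>

static void
force_latency (GstElement * pipeline)
{
  GstElement *webrtc;

  /* Override the result of the automatic LATENCY query. */
  gst_pipeline_set_latency (GST_PIPELINE (pipeline), 200 * GST_MSECOND);

  /* Recent releases also expose a "latency" property on webrtcbin
   * (forwarded to its internal rtpbin, in milliseconds). */
  webrtc = gst_bin_get_by_name (GST_BIN (pipeline), "webrtcbin");
  if (webrtc != NULL) {
    g_object_set (webrtc, "latency", 200, NULL);
    gst_object_unref (webrtc);
  }
}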

Use "Clock Time" instead of Running Time for GStreamer Pipeline

I have two GStreamer pipelines: one is a "source" pipeline streaming a live camera feed into an external channel, and the second is a "sink" pipeline that reads from the other end of that channel and outputs the live video to some form of sink.
[videotestsrc] -> [appsink] ----- Serial Channel ------> [appsrc] -> [autovideosink]
         First Pipeline                                       Second Pipeline
The first pipeline starts from a videotestsrc, encodes the video and wraps it in a gdppay payload, and then sinks the pipeline into a serial channel (but for the sake of the question, any sink that can be read from to start another pipeline, such as a filesink writing to a serial port, or a udpsink), where it is read by the source of the next pipeline and shown via an autovideosink:
"Source" Pipeline
gst-launch-1.0 -v videotestsrc ! videoconvert ! video/x-raw,format=I420 ! x265enc ! gdppay ! udpsink host=127.0.0.1 port=5004
"Sink" pipeline
gst-launch-1.0 -v udpsrc uri=udp://127.0.0.1:5004 ! gdpdepay ! h265parse ! avdec_h265 ! autovideosink
Note: Given the latency induced using a udpsink/udpsrc, that pipeline complains about timestamp issues. If you replace the udpsrc/udpsink with a filesrc/filesink to a serial port you can see the problem that I am about to describe.
Problem:
Now that I have described the pipelines, here is the problem:
If I start both pipelines, everything works as expected. However, if after 30 s I stop the "source" pipeline and then restart it, its running time is reset back to zero, so the sink pipeline considers the timestamps of all newly sent buffers to be old (it has already received buffers for timestamps 0 through 30 s), and playback on the other end won't resume until after 30 s have passed again:
Source Pipeline: [28][29][30][0 ][1 ][2 ][3 ]...[29][30][31]
Sink Pipeline: [28][29][30][30][30][30][30]...[30][30][31]
________________________^
Source pipeline restarted
^^^^^^^^^^^^^^^^...^^^^^^^^
Sink pipeline will continue
to only show the "frame"
received at 30s until a
"newer" frame is sent, when
in reality each sent frame
is newer and should be shown
immediately.
Solution
I have found that adding sync=false to the autovideosink does solve the problem; however, I was hoping to find a solution where the source would send its timestamps (DTS and PTS) based on the clock time, as seen in the image on the GStreamer clocks documentation page.
I have seen this post and experimented with is-live and do-timestamp on my video source, but they do not seem to do what I want. I also tried to manually set the timestamps (DTS, PTS) in the buffers based on system time, but to no avail.
Any suggestions?
I think you should just restart the receiver pipeline as well. You could add the -e switch to the sender pipeline and when you stop the pipeline it should correctly propagate EOS via the GDP element to the receiver pipeline. Else I guess you can send a new segment or discontinuity to the receiver. Some event has to be signaled though to make the pipeline aware of that change, else it is somewhat bogus data. I'd say restarting the receiver is the simplest way.
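If restarting the receiver is not an option, the asker's goal of stamping with clock time can be approached on the sender. A hedged sketch (untested against this exact setup): keep the base time pinned at zero so running time equals the monotonic system clock time and does not restart from zero with the pipeline; the source should be live (is-live=true do-timestamp=true) so buffers are stamped at capture:

#include <gst/gst.h>

static void
use_clock_time (GstElement * pipeline)
{
  GstClock *sysclock = gst_system_clock_obtain ();

  /* Use the same clock on every run of the pipeline... */
  gst_pipeline_use_clock (GST_PIPELINE (pipeline), sysclock);

  /* ...and stop GStreamer from re-distributing a new base time on
   * each NULL -> PLAYING transition. With base time 0, running time
   * equals absolute clock time. */
  gst_element_set_start_time (pipeline, GST_CLOCK_TIME_NONE);
  gst_element_set_base_time (pipeline, 0);

  gst_object_unref (sysclock);
}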

Audio Streaming: RTP-Stream receiving with Gstreamer - Latency

I am currently playing around with an AudioOverIP Project and wondered if you could help me out.
I have a LAN, with an Audio Source (Dante/AES67-RTP-Stream) which I would like to distribute to multiple receivers (SBC (e.g. RaspberryPi) with an Audio Output (e.g. Headphone jack):
PC-->Audio-USB-Dongle-->AES67/RTP-Multicast-Stream-->LAN-Network-Switch-->RPI (Gstreamer --> AudioJack)
I currently use Gstreamer for the Pipeline:
gst-launch-1.0 -v udpsrc uri=udp://239.69.xxx.xx:5004 caps="application/x-rtp,channels=(int)2,format=(string)S16LE,media=(string)audio,payload=(int)96,clock-rate=(int)48000,encoding-name=(string)L24" ! rtpL24depay ! audioconvert ! alsasink device=hw:0,0
It all works fine, but if I watch a video on the PC and listen to the audio from the RPi, I get some latency (~200-300 ms), hence my questions:
1. Am I missing something in my GStreamer pipeline that would let me reduce the latency?
2. What is the minimal latency to be expected with RTP streams? Is <50 ms achievable?
3. Does the latency occur due to the network or due to the speed of the RPi?
4. Since my audio input is not a GStreamer input, I assume rtpjitterbuffer or similar would not help to decrease the latency?
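For experimenting, the usual first knobs are a small explicit rtpjitterbuffer and a shorter ALSA buffer. A sketch via gst_parse_launch (the figures are assumptions, not measured values, and the multicast address is a placeholder for the redacted one above):

#include <gst/gst.h>

int
main (int argc, char **argv)
{
  GstElement *pipe;

  gst_init (&argc, &argv);

  /* rtpjitterbuffer latency is in ms; alsasink buffer-time and
   * latency-time are in microseconds. */
  pipe = gst_parse_launch (
      "udpsrc uri=udp://239.69.0.1:5004 "
      "caps=\"application/x-rtp,channels=(int)2,format=(string)S16LE,"
      "media=(string)audio,payload=(int)96,clock-rate=(int)48000,"
      "encoding-name=(string)L24\" "
      "! rtpjitterbuffer latency=20 ! rtpL24depay ! audioconvert "
      "! alsasink device=hw:0,0 buffer-time=20000 latency-time=5000",
      NULL);

  gst_element_set_state (pipe, GST_STATE_PLAYING);
  g_main_loop_run (g_main_loop_new (NULL, FALSE));
  return 0;
}

Whether 20 ms of jitter buffer is enough depends entirely on the network; on a quiet wired LAN, much of the remaining latency typically sits in the audio sink's buffer rather than in the link.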

GStreamer recording a stream to a wav file with wrong duration

I'm recording a wav file with GStreamer, receiving a G.711 stream through a UDP port.
Any wav player can play the resulting file, but it shows a wrong duration and cannot fast-forward.
I believe GStreamer writes the header at the beginning with empty (placeholder) size fields.
This pipeline can reproduce the issue:
gst-launch-1.0 udpsrc port=3000 caps="application/x-rtp,media=(string)audio, payload=0,clock-rate=(int)8000" ! rtpjitterbuffer ! rtppcmudepay ! mulawdec ! wavenc ! filesink append=true location=c:/recordings/audio-zz.wav
Florian Zwoch suggested using -e so that the file gets closed properly, and indeed that works perfectly.
However, I'm using this pipeline inside a Java program with the gst1-java-core library, and it seems I'm missing something when closing the pipeline: my program behaves the same as gst-launch without the -e parameter. Before stopping the pipeline I send an EOS event:
pipeline.sendEvent(new EOSEvent());
How can I fix it?
Note that the append property of the filesink element does not allow rewriting the header.
Thank you.
How do you stop the pipeline? If you interrupt it with Ctrl-C, the header finalization may indeed be skipped. Run your pipeline with the -e option so that on Ctrl-C it gets stopped gracefully.
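Programmatically, the equivalent of -e is to send EOS and then wait for the EOS message on the bus before tearing the pipeline down, so wavenc gets the chance to rewrite the header. A sketch in C (the same ordering applies through gst1-java-core):

#include <gst/gst.h>

static void
stop_recording (GstElement * pipeline)
{
  GstBus *bus = gst_element_get_bus (pipeline);
  GstMessage *msg;

  gst_element_send_event (pipeline, gst_event_new_eos ());

  /* Do not go to NULL until the EOS has actually reached the sink. */
  msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
      GST_MESSAGE_EOS | GST_MESSAGE_ERROR);
  if (msg != NULL)
    gst_message_unref (msg);

  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (bus);
}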

How to make rtpjitterbuffer work on a stream without timestamps?

I am sending an H.264 bytestream over RTP using gstreamer.
# sender
gst-launch-1.0 filesrc location=my_stream.h264 ! h264parse disable-passthrough=true ! rtph264pay config-interval=10 pt=96 ! udpsink host=localhost port=5004
Then I receive the frames, decode them, and display them in another GStreamer instance.
# receiver
gst-launch-1.0 udpsrc port=5004 ! application/x-rtp,payload=96,media="video",encoding-name="H264",clock-rate="90000" ! rtph264depay ! h264parse ! decodebin ! xvimagesink
This works as is, but I want to try adding an rtpjitterbuffer in order to perfectly smooth out playback.
# receiver
gst-launch-1.0 udpsrc port=5004 ! application/x-rtp,payload=96,media="video",encoding-name="H264",clock-rate="90000" ! rtpjitterbuffer ! rtph264depay ! h264parse ! decodebin ! xvimagesink
However, as soon as I do, the receiver only displays a single frame and freezes.
If I replace the .h264 file with an MP4 file, the playback works great.
I assume that my h264 stream does not have the required timestamps to enable the jitter buffer to function.
I made slight progress by adding identity datarate=1000000. This allows the jitter buffer to play, but it screws with my frame rate because P-frames carry less data than I-frames. Clearly the identity element adds timestamps in the right form, just with the wrong values.
Is it possible to automatically generate timestamps on the sender by specifying the "framerate" caps correctly somewhere? So far my attempts have not worked.
You've partially answered the problem already:
If I replace the .h264 file with an MP4 file, the playback works great.
I assume that my h264 stream does not have the required timestamps to enable the jitter buffer to function.
Your sender pipeline has no negotiated frame rate because you're using a raw H.264 stream, whereas you should really be using a container format (e.g., MP4) which carries this information. Without timestamps, udpsink cannot synchronise against the clock to throttle output, so the sender spits out packets as fast as the pipeline can process them; it is not acting as a live sink.
Adding an rtpjitterbuffer, however, makes your receiver act as a live source. It freezes because it is trying its best to cope with the barrage of packets carrying malformed timestamps. To the best of my knowledge, RTP does not transmit "missing" timestamps, so all packets will probably carry the same timestamp. The receiver thus likely reconstructs the first frame and drops the rest as duplicates.
I must agree with user1998586 in the sense that it ought to be better for the pipeline to fail with a good error message in this case rather than trying its best.
Is it possible to automatically generate timestamps on the sender by specifying the "framerate" caps correctly somewhere? So far my attempts have not worked.
No. You should really use a container.
In theory, however, an AU-aligned raw H.264 stream could be timestamped just from knowing the frame rate, but there is no GStreamer element (that I know of) which does this, and merely specifying the caps won't do it.
I had the same problem, and the best solution I found was to add timestamps to the stream on the sender side, by adding do-timestamp=1 to the source.
Without timestamps I couldn't get rtpjitterbuffer to pass more than one frame, no matter what options I gave it.
(The case I was dealing with was streaming from raspivid via fdsrc; I presume filesrc behaves similarly.)
It does kinda suck that gstreamer so easily sends streams that gstreamer itself (and other tools) doesn't process correctly: if not having timestamps is valid, then rtpjitterbuffer should cope with it; if not having timestamps is invalid, then rtph264pay should refuse to send without timestamps. I guess it was never intended as a user interface...
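For concreteness, a sketch of that suggestion (assumptions: the raw H.264 is read from stdin with fdsrc, as in the raspivid case; do-timestamp is the GstBaseSrc property the answer refers to):

#include <gst/gst.h>

int
main (int argc, char **argv)
{
  GstElement *pipe;

  gst_init (&argc, &argv);

  /* do-timestamp=true stamps each buffer with the pipeline clock as
   * it is read, giving rtpjitterbuffer something to work with. */
  pipe = gst_parse_launch (
      "fdsrc fd=0 do-timestamp=true "
      "! h264parse disable-passthrough=true "
      "! rtph264pay config-interval=10 pt=96 "
      "! udpsink host=localhost port=5004",
      NULL);

  gst_element_set_state (pipe, GST_STATE_PLAYING);
  g_main_loop_run (g_main_loop_new (NULL, FALSE));
  return 0;
}

Used as, e.g., raspivid -t 0 -o - | ./sender.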
You should try setting the rtpjitterbuffer mode to a value other than the default:
mode : Control the buffering algorithm in use
flags: readable, writable
Enum "RTPJitterBufferMode" Default: 1, "slave"
(0): none - Only use RTP timestamps
(1): slave - Slave receiver to sender clock
(2): buffer - Do low/high watermark buffering
(4): synced - Synchronized sender and receiver clocks
Like that:
... ! rtpjitterbuffer mode=0 ! ...