I am new to GStreamer. I wrote a simple RTSP server that generates a pipeline like:
appsrc name=vsrc is-live=true do-timestamp=true ! queue ! h264parse ! rtph264pay name=pay0 pt=96
The SDP response is generated after the DESCRIBE request, but only once a few frames have been pushed into the appsrc input:
vsrc = gst_bin_get_by_name_recurse_up(GST_BIN(element), "vsrc"); // appsrc
if (nullptr != vsrc)
{
gst_util_set_object_arg(G_OBJECT(vsrc), "format", "time");
g_signal_connect(vsrc, "need-data", (GCallback)need_video_data, streamResource);
}
The time from which the video should be played is passed in the RTSP PLAY request, in the Range header, as an absolute clock time:
PLAY rtsp://172.19.9.65:554/Recording/ RTSP/1.0
CSeq: 4
Immediate: yes
Range: clock=20220127T082831.039Z- // Start from ...
On the GstRTSPClient object I attached a handler to the signal in which I process this request and seek to the requested time in my appsrc:
g_signal_connect(client, "pre-play-request", (GCallback)pre_play_request, NULL);
The problem is that by this point frames from my appsrc's original start time have already entered the pipeline, so I see them first, and only then does playback continue from the time specified in the PLAY request.
Can you please tell me how I can cut off these initial frames that arrived before the PLAY call?
I've tried:
gst_element_seek - doesn't help because of how appsrc is implemented.
Flushing didn't help either; I tried resetting the sink pad of the rtph264pay element:
gst_pad_push_event(sinkPad, gst_event_new_flush_start());
GST_PAD_STREAM_LOCK(sinkPad);
// ... seek in appsrc
gst_pad_push_event(sinkPad, gst_event_new_flush_stop(TRUE));
GST_PAD_STREAM_UNLOCK(sinkPad);
gst_object_unref(sinkPad);
Thank You!
I am trying to figure out the proper GStreamer element to use to transmit AAC audio over RTP.
By dumping the dot graph of a playbin playing the file, I can see that the caps coming out of tsdemux are audio/mpeg, mpegversion: 2, stream-format: adts.
If I use the following pipeline
gst-launch-1.0 -v filesrc location=$BA ! tsdemux ! audio/mpeg ! rtpmpapay ! filesink location=/tmp/test.rtp
it fails:
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
/GstPipeline:pipeline0/GstCapsFilter:capsfilter1: caps = audio/mpeg
WARNING: from element /GstPipeline:pipeline0/GstTSDemux:tsdemux0: Delayed linking failed.
Additional debug info:
/var/tmp/portage/media-libs/gstreamer-1.12.3/work/gstreamer-1.12.3/gst/parse/grammar.y(510): gst_parse_no_more_pads (): /GstPipeline:pipeline0/GstTSDemux:tsdemux0:
failed delayed linking some pad of GstTSDemux named tsdemux0 to some pad of GstRtpMPAPay named rtpmpapay0
ERROR: from element /GstPipeline:pipeline0/GstTSDemux:tsdemux0: Internal data stream error.
Additional debug info:
/var/tmp/portage/media-libs/gst-plugins-bad-1.12.3/work/gst-plugins-bad-1.12.3/gst/mpegtsdemux/mpegtsbase.c(1613): mpegts_base_loop (): /GstPipeline:pipeline0/GstTSDemux:tsdemux0:
streaming stopped, reason not-linked (-1)
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
Freeing pipeline ...
Which GStreamer element should I be using to wrap AAC audio in RTP packets?
I guess it's rtpmp4apay (RTP MPEG4 audio payloader). You may also want or need aacparse before the payloader.
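For example, a variation of your original pipeline along these lines might work (untested; aacparse should convert the ADTS stream into the framed form the payloader expects, but whether the link negotiates depends on the MPEG version the parser reports):
gst-launch-1.0 -v filesrc location=$BA ! tsdemux ! aacparse ! rtpmp4apay ! filesink location=/tmp/test.rtp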
I want to implement multicast audio streaming on an embedded Linux (Yocto) system.
I thought GStreamer would make this easy, but when the filesrc is an MP3 the received audio is choppy and sounds as if it had passed through a low-pass filter.
When the filesrc is a WAV, the received audio is choppy and sounds as if it had passed through a high-pass filter.
Here is the gst-launch command (MP3):
Tx:
GST_DEBUG=3 gst-launch-1.0 filesrc location="background.mp3" ! decodebin ! \
audioconvert ! rtpL16pay ! queue ! udpsink host=239.0.0.1 auto-multicast=true port=5004
Rx:
GST_DEBUG=3 gst-launch-1.0 udpsrc multicast-group=239.0.0.1 auto-multicast=true port=5004 \
caps="application/x-rtp, media=audio, clock-rate=44100, payload=0" ! rtpL16depay !\
audioconvert ! alsasink
The GST_DEBUG=3 output is as follows:
Tx:
Setting pipeline to PAUSED ...
0:00:00.115165875 936 0x7b8c40 WARN basesrc gstbasesrc.c:3486:gst_base_src_start_complete:<filesrc0> pad not activated yet
Pipeline is PREROLLING ...
====== BEEP: 4.1.4 build on Feb 14 2017 13:39:18. ======
Core: MP3 decoder Wrapper build on Mar 21 2014 15:04:50
file: /usr/lib/imx-mm/audio-codec/wrap/lib_mp3d_wrap_arm12_elinux.so.3
CODEC: BLN_MAD-MMCODECS_MP3D_ARM_02.13.00_CORTEX-A8 build on Jul 12 2016 13:15:30.
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Rx:
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
0:00:00.269585916 1232 0x772320 WARN alsa conf.c:4974:snd_config_expand: alsalib error: Unknown parameters {AES0 0x02 AES1 0x82 AES2 0x00 AES3 0x02}
0:00:00.269914500 1232 0x772320 WARN alsa pcm.c:2495:snd_pcm_open_noupdate: alsalib error: Unknown PCM default:{AES0 0x02 AES1 0x82 AES2 0x00 AES3 0x02}
0:00:00.283770666 1232 0x772320 WARN alsa pcm_hw.c:1250:snd_pcm_hw_get_chmap: alsalib error: Cannot read Channel Map ctl
: No such file or directory
Redistribute latency...
0:00:06.335845459 1232 0x772320 WARN audiobasesink gstaudiobasesink.c:1807:gst_audio_base_sink_get_alignment:<alsasink0> Unexpected discontinuity in audio timestamps of -0:00:00.120430839, resyncing
0:00:07.167036751 1232 0x772320 WARN audiobasesink gstaudiobasesink.c:1512:gst_audio_base_sink_skew_slaving:<alsasink0> correct clock skew -0:00:00.020498109 < -+0:00:00.020000000
0:00:07.178596167 1232 0x772320 WARN audiobasesink gstaudiobasesink.c:1484:gst_audio_base_sink_skew_slaving:<alsasink0> correct clock skew +0:00:00.020102330 > +0:00:00.020000000
0:00:08.215633667 1232 0x772320 WARN audiobasesink gstaudiobasesink.c:1807:gst_audio_base_sink_get_alignment:<alsasink0> Unexpected discontinuity in audio timestamps of -0:00:00.128480725, resyncing
0:00:08.962452751 1232 0x772320 WARN audiobasesink gstaudiobasesink.c:1512:gst_audio_base_sink_skew_slaving:<alsasink0> correct clock skew -0:00:00.020283552 < -+0:00:00.020000000
0:00:09.095737543 1232 0x772320 WARN audiobasesink gstaudiobasesink.c:1484:gst_audio_base_sink_skew_slaving:<alsasink0> correct clock skew +0:00:00.020221135 > +0:00:00.020000000
0:00:10.135542001 1232 0x772320 WARN audiobasesink gstaudiobasesink.c:1807:gst_audio_base_sink_get_alignment:<alsasink0> Unexpected discontinuity in audio timestamps of -0:00:00.125238095, resyncing
Here is the gst-launch command (WAV):
Tx:
GST_DEBUG=3 gst-launch-1.0 filesrc location="background.wav" ! decodebin ! \
audioconvert ! rtpL16pay ! queue ! udpsink host=239.0.0.1 auto-multicast=true port=5004
Rx:
GST_DEBUG=3 gst-launch-1.0 udpsrc multicast-group=239.0.0.1 auto-multicast=true port=5004 \
caps="application/x-rtp, media=audio, clock-rate=44100, payload=0" ! rtpL16depay !\
audioconvert ! alsasink
The GST_DEBUG=3 output is as follows:
Tx:
Setting pipeline to PAUSED ...
0:00:00.116759125 958 0x1c0cc40 WARN basesrc gstbasesrc.c:3486:gst_base_src_start_complete:<filesrc0> pad not activated yet
Pipeline is PREROLLING ...
0:00:00.136465125 958 0x1c1f460 FIXME default gstutils.c:3764:gst_pad_create_stream_id_internal:<wavparse0:src> Creating random stream-id, consider implementing a deterministic way of creating a stream-id
0:00:00.137230750 958 0x1c1f460 WARN riff riff-read.c:794:gst_riff_parse_info:<wavparse0> Unknown INFO (metadata) tag entry IPRT
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
0:00:00.152916625 958 0x1c0cc40 WARN bin gstbin.c:2597:gst_bin_do_latency_func:<pipeline0> did not really configure latency of 0:00:00.000000000
New clock: GstSystemClock
^Chandling interrupt.
Interrupt: Stopping pipeline ...
Execution ended after 0:00:03.435631250
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
Rx:
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
0:00:00.270927792 1238 0x120d320 WARN alsa conf.c:4974:snd_config_expand: alsalib error: Unknown parameters {AES0 0x02 AES1 0x82 AES2 0x00 AES3 0x02}
0:00:00.271261625 1238 0x120d320 WARN alsa pcm.c:2495:snd_pcm_open_noupdate: alsalib error: Unknown PCM default:{AES0 0x02 AES1 0x82 AES2 0x00 AES3 0x02}
0:00:00.284991583 1238 0x120d320 WARN alsa pcm_hw.c:1250:snd_pcm_hw_get_chmap: alsalib error: Cannot read Channel Map ctl
: No such file or directory
Redistribute latency...
0:00:04.227007167 1238 0x120d320 WARN audiobasesink gstaudiobasesink.c:1807:gst_audio_base_sink_get_alignment:<alsasink0> Unexpected discontinuity in audio timestamps of +0:00:00.053514739, resyncing
0:00:04.314387751 1238 0x120d320 WARN audiobasesink gstaudiobasesink.c:1807:gst_audio_base_sink_get_alignment:<alsasink0> Unexpected discontinuity in audio timestamps of +0:00:00.055510204, resyncing
0:00:04.396900334 1238 0x120d320 WARN audiobasesink gstaudiobasesink.c:1807:gst_audio_base_sink_get_alignment:<alsasink0> Unexpected discontinuity in audio timestamps of +0:00:00.052607709, resyncing
0:00:04.483605876 1238 0x120d320 WARN audiobasesink gstaudiobasesink.c:1807:gst_audio_base_sink_get_alignment:<alsasink0> Unexpected discontinuity in audio timestamps of +0:00:00.055215419, resyncing
0:00:04.570297626 1238 0x120d320 WARN audiobasesink gstaudiobasesink.c:1807:gst_audio_base_sink_get_alignment:<alsasink0> Unexpected discontinuity in audio timestamps of +0:00:00.055215419, resyncing
If I use pulsesink instead of alsasink, the following appears:
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Redistribute latency...
0:00:00.410499500 1255 0x70813120 WARN pulse pulsesink.c:702:gst_pulsering_stream_underflow_cb:<pulsesink0> Got underflow
0:00:00.423478917 1255 0x7e7920 WARN audiobasesink gstaudiobasesink.c:1807:gst_audio_base_sink_get_alignment:<pulsesink0> Unexpected discontinuity in audio timestamps of +0:00:00.038095238, resyncing
0:00:00.450453459 1255 0x70813120 WARN pulse pulsesink.c:702:gst_pulsering_stream_underflow_cb:<pulsesink0> Got underflow
What is the problem? Can anybody solve this?
I hope for your kind reply.
Thank you for reading.
I reckon the problem is that the application/x-rtp caps don't match between the transmitter and the receiver.
This is easily solved by running the transmitter with -v (verbose) and then using the same caps in the receiver.
Let's see an example:
TX:
gst-launch-1.0 -v filesrc location="test.mp3" ! decodebin ! audioconvert ! rtpL16pay ! queue ! udpsink host=239.0.0.1 auto-multicast=true port=5004
The last lines of its output (thanks to -v) are:
/GstPipeline:pipeline0/GstQueue:queue0.GstPad:sink: caps =
"application/x-rtp\,\ media\=(string)audio\,\
clock-rate\=(int)44100\,\ encoding-name\=(string)L16\,\
encoding-params\=(string)2\,\ channels\=(int)2\,\
payload\=(int)96\,\ ssrc\=(uint)1806894235\,\
timestamp-offset\=(uint)468998694\,\ seqnum-offset\=(uint)20785"
/GstPipeline:pipeline0/GstRtpL16Pay:rtpl16pay0.GstPad:sink: caps =
"audio/x-raw\,\ layout\=(string)interleaved\,\ rate\=(int)44100\,\
format\=(string)S16BE\,\ channels\=(int)2\,\
channel-mask\=(bitmask)0x0000000000000003"
/GstPipeline:pipeline0/GstAudioConvert:audioconvert0.GstPad:sink: caps
= "audio/x-raw\,\ format\=(string)S32LE\,\ layout\=(string)interleaved\,\ rate\=(int)44100\,\
channels\=(int)2\,\ channel-mask\=(bitmask)0x0000000000000003"
/GstPipeline:pipeline0/GstDecodeBin:decodebin0.GstDecodePad:src_0.GstProxyPad:proxypad1:
caps = "audio/x-raw\,\ format\=(string)S32LE\,\
layout\=(string)interleaved\,\ rate\=(int)44100\,\
channels\=(int)2\,\ channel-mask\=(bitmask)0x0000000000000003"
/GstPipeline:pipeline0/GstRtpL16Pay:rtpl16pay0: timestamp = 468998694
/GstPipeline:pipeline0/GstRtpL16Pay:rtpl16pay0: seqnum = 20785
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Using the same parameters in the player or receiver:
RX:
gst-launch-1.0 udpsrc caps='application/x-rtp, media=(string)audio, clock-rate=(int)44100, encoding-name=(string)L16, encoding-params=(string)2, channels=(int)2, payload=(int)96' ! rtpL16depay ! pulsesink
And this plays perfectly.
Moving on to the .wav file, in my case the transmitter is:
gst-launch-1.0 -v filesrc location="test.wav" ! decodebin ! audioconvert ! rtpL16pay ! queue ! udpsink host=239.0.0.1 auto-multicast=true port=5004
The last lines of the output are:
/GstPipeline:pipeline0/GstQueue:queue0.GstPad:sink: caps =
"application/x-rtp\,\ media\=(string)audio\,\ clock-rate\=(int)44100\,\ encoding-name\=(string)L16\,\
encoding-params\=(string)1\,\ channels\=(int)1\,\
payload\=(int)96\,\ ssrc\=(uint)620824608\,\
timestamp-offset\=(uint)433377669\,\ seqnum-offset\=(uint)7103"
/GstPipeline:pipeline0/GstRtpL16Pay:rtpl16pay0.GstPad:sink: caps =
"audio/x-raw\,\ layout\=(string)interleaved\,\ rate\=(int)44100\,\
format\=(string)S16BE\,\ channels\=(int)1"
/GstPipeline:pipeline0/GstAudioConvert:audioconvert0.GstPad:sink: caps
= "audio/x-raw\,\ format\=(string)S16LE\,\ layout\=(string)interleaved\,\ channels\=(int)1\,\
rate\=(int)44100"
/GstPipeline:pipeline0/GstDecodeBin:decodebin0.GstDecodePad:src_0.GstProxyPad:proxypad1:
caps = "audio/x-raw\,\ format\=(string)S16LE\,\
layout\=(string)interleaved\,\ channels\=(int)1\,\
rate\=(int)44100"
/GstPipeline:pipeline0/GstRtpL16Pay:rtpl16pay0: timestamp = 433377669
/GstPipeline:pipeline0/GstRtpL16Pay:rtpl16pay0: seqnum = 7103
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Using this information in the receiver:
gst-launch-1.0 udpsrc caps='application/x-rtp, media=(string)audio, clock-rate=(int)44100, encoding-name=(string)L16, encoding-params=(string)1, channels=(int)1, payload=(int)96' ! rtpL16depay ! pulsesink
The audio also plays smoothly.
Hope this helps.
Using GStreamer 1.10, I have been trying several versions of a pipeline to decode an RTSP stream that originates from a WebRTC connection.
ffprobe reports the stream as:
Duration: N/A, start: 0.128000, bitrate: N/A
Stream #0:0: Audio: aac (LC), 48000 Hz, stereo, fltp
Stream #0:1: Video: h264 (Constrained Baseline), yuv420p, 512x288 [SAR 1:1 DAR 16:9], 30 fps, 30 tbr, 90k tbn, 60 tbc
Using variations of the following pipeline:
GST_DEBUG=3 gst-launch-1.0 -e rtspsrc location=rtsp://xxx.xxx.xxx.xxx:1935/alpha/Stream1 \
! decodebin name=decode \
decode. \
! x264enc bitrate=512 speed-preset=6 \
! video/x-h264, profile=baseline \
! queue ! mp4mux name=mp4mux ! filesink location=file.mp4 \
decode. ! avenc_aac bitrate=96000 ! aacparse ! queue ! mp4mux.
I get the following errors:
0:00:00.299416405 7705 0x7f0d48001e80 WARN default grammar.y:510:gst_parse_no_more_pads:<decode> warning: Delayed linking failed.
0:00:00.299435518 7705 0x7f0d48001e80 WARN default grammar.y:510:gst_parse_no_more_pads:<decode> warning: failed delayed linking some pad of GstDecodeBin named decode to some pad of GstX264Enc named x264enc0
WARNING: from element /GstPipeline:pipeline0/GstDecodeBin:decode: Delayed linking failed.
Additional debug info:
./grammar.y(510): gst_parse_no_more_pads (): /GstPipeline:pipeline0/GstDecodeBin:decode:
failed delayed linking some pad of GstDecodeBin named decode to some pad of GstX264Enc named x264enc0
0:00:01.296295371 7705 0x7f0d6402a8f0 WARN basesrc gstbasesrc.c:2951:gst_base_src_loop:<udpsrc3> error: Internal data stream error.
0:00:01.296324999 7705 0x7f0d6402a8f0 WARN basesrc gstbasesrc.c:2951:gst_base_src_loop:<udpsrc3> error: streaming stopped, reason not-linked (-1)
What is the correct pipeline to decode this stream?
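One variation that is often suggested for this kind of delayed-linking error (untested here) is to give each decodebin branch its own queue and converter so that the raw pads can negotiate with the encoders:
GST_DEBUG=3 gst-launch-1.0 -e rtspsrc location=rtsp://xxx.xxx.xxx.xxx:1935/alpha/Stream1 \
    ! decodebin name=decode \
    decode. ! queue ! videoconvert \
    ! x264enc bitrate=512 speed-preset=6 \
    ! video/x-h264, profile=baseline \
    ! queue ! mp4mux name=mp4mux ! filesink location=file.mp4 \
    decode. ! queue ! audioconvert ! audioresample ! avenc_aac bitrate=96000 ! aacparse ! queue ! mp4mux.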
I am trying to encode uncompressed video in H.265; however, when I write the following pipeline I receive an error message that I cannot resolve. I am following the example code in the Tegra X1 Multimedia User Guide, and I do not understand why the following pipeline does not work. I am a beginner in video compression, so any help would be very useful. The command and error message:
ubuntu@tegra-ubuntu:~$ gst-launch-1.0 filesrc location=small_mem_vid.mov ! 'video/x-raw, format=(string)I420, framerate=(fraction)30/1, width=(int)1280, height=(int)720' ! omxh265enc ! filesink location=new_encode.mov -e
Setting pipeline to PAUSED ...
Inside NvxLiteH264DecoderLowLatencyInitNvxLiteH264DecoderLowLatencyInit set DPB and MjstreamingInside NvxLiteH265DecoderLowLatencyInitNvxLiteH265DecoderLowLatencyInit set DPB and MjstreamingPipeline is PREROLLING ...
Framerate set to : 30 at NvxVideoEncoderSetParameterNvMMLiteOpen : Block : BlockType = 8
===== MSENC =====
NvMMLiteBlockCreate : Block : BlockType = 8
ERROR: from element /GstPipeline:pipeline0/GstOMXH265Enc-omxh265enc:omxh265enc-omxh265enc0: Could not write to resource.
Additional debug info:
/dvs/git/dirty/git-master_linux/external/gstreamer/gst-omx/omx/gstomxvideoenc.c(2139): gst_omx_video_enc_handle_frame (): /GstPipeline:pipeline0/GstOMXH265Enc-omxh265enc:omxh265enc-omxh265enc0:
Failed to write input into the OpenMAX buffer
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
Freeing pipeline ...
ubuntu@tegra-ubuntu:~$
Are you sure the .mov file is really uncompressed video? The .mov extension is commonly used for QuickTime video. You could use "mediainfo" on Linux to discover more details about the format of the file. If the video is compressed, I don't think you can go directly from filesrc to the encoder; you probably need a qtdemux and a decoder, maybe avdec_h264 depending on what mediainfo shows.
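If it does turn out to be H.264 in a QuickTime container, a sketch of the kind of pipeline you would need might look like this (the demuxer and decoder choices are a guess based on that assumption):
gst-launch-1.0 filesrc location=small_mem_vid.mov ! qtdemux ! h264parse ! avdec_h264 ! videoconvert ! omxh265enc ! filesink location=new_encode.mov -e
Also note that writing the raw H.265 stream straight into a .mov file will most likely not give you a playable QuickTime file; you would normally add a parser and a muxer (for example h265parse ! matroskamux with an .mkv output) before the filesink.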
You also might want to enable some more detailed debugging:
export GST_DEBUG=*:4