hlsdemux pipeline - gstreamer

I have created a web server and hosted an AAC stream for HLS streaming. I am able to play the file on another machine using
gst-launch-0.10 souphttpsrc location=http://xx.xx.xx.xx/prog_index.m3u8 ! hlsdemux ! decodebin2 ! alsasink
but when I do this
gst-launch-0.10 souphttpsrc location=http://xx.xx.xx.xx/prog_index.m3u8 ! hlsdemux ! aacparse ! faad ! alsasink
I get this error in the hlsdemux log and no audio output, as aacparse doesn't receive any data:
0:00:00.066165787 8139 0xb07098f0 INFO hlsdemux gsthlsdemux.c:734:gst_hls_demux_loop:<hlsdemux0> First fragments cached successfully
0:00:00.066190861 8139 0xb07098f0 DEBUG hlsdemux gsthlsdemux.c:680:switch_pads: Switching pads (oldpad:(nil))
0:00:00.066450610 8139 0xb07098f0 DEBUG hlsdemux gsthlsdemux.c:757:gst_hls_demux_loop:<hlsdemux0> Sending new-segment. segment start:0:00:00.000000000
0:00:00.066510536 8139 0xb07098f0 DEBUG hlsdemux gsthlsdemux.c:796:gst_hls_demux_loop:<hlsdemux0> error, stopping task
Pipeline is PREROLLED ...
0:00:00.066541057 8139 0xb07098f0 DEBUG hlsdemux gsthlsdemux.c:989:gst_hls_demux_stop_update:<hlsdemux0> Stopping updates thread
I am able to play an individual segment file using
gst-launch-0.10 filesrc location=fileSequence0.aac ! aacparse ! faad ! alsasink

Try adding a caps filter: ... ! hlsdemux ! audio/mpeg ! aacparse ! ...
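Written out against the pipeline from the question, that suggestion would be (an untested sketch):
gst-launch-0.10 souphttpsrc location=http://xx.xx.xx.xx/prog_index.m3u8 ! hlsdemux ! audio/mpeg ! aacparse ! faad ! alsasink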

Try adding mpegtsdemux between hlsdemux and aacparse, because hlsdemux fetches the URIs from the playlist and delivers .ts segments, and you will need mpegtsdemux to demux them.
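A minimal sketch of that, assuming the playlist points at MPEG-TS segments:
gst-launch-0.10 souphttpsrc location=http://xx.xx.xx.xx/prog_index.m3u8 ! hlsdemux ! mpegtsdemux ! aacparse ! faad ! alsasink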

This is the solution for the HLS audio stream:
... m3u8 ! hlsdemux ! decodebin ! audioconvert ! autoaudiosink
or, if you need to stream it again over UDP:
... m3u8 ! hlsdemux ! decodebin ! audioconvert ! faac ! audio/mpeg, stream-format=raw ! aacparse ! mpegtsmux ! udpsink host=230.0.0.1 port=5000
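To check the UDP restream on a receiving machine, a hedged sketch (GStreamer 1.0 element names; the multicast group and port match the udpsink above):
gst-launch-1.0 udpsrc uri=udp://230.0.0.1:5000 caps="video/mpegts" ! tsdemux ! aacparse ! avdec_aac ! audioconvert ! autoaudiosink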

Related

gstreamer: Demux & Remux MKV, preserving video

I am trying to re-encode the audio part of an MKV file that contains some video/x-h264 and some audio/x-raw. I can't manage to just demux the MKV and remux it. Even simply:
gst-launch-1.0 filesrc location=test.mkv ! matroskademux name=demux \
matroskamux name=mux ! filesink location=test2.mkv \
demux.video_00 ! mux.video_00 \
demux.audio_00 ! mux.audio_00
fails miserably with:
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
WARNING: from element /GstPipeline:pipeline0/GstMatroskaDemux:demux: Delayed linking failed.
Additional debug info:
../gstreamer/gst/parse/grammar.y(506): gst_parse_no_more_pads (): /GstPipeline:pipeline0/GstMatroskaDemux:demux:
failed delayed linking pad video_00 of GstMatroskaDemux named demux to pad video_00 of GstMatroskaMux named mux
WARNING: from element /GstPipeline:pipeline0/GstMatroskaDemux:demux: Delayed linking failed.
Additional debug info:
../gstreamer/gst/parse/grammar.y(506): gst_parse_no_more_pads (): /GstPipeline:pipeline0/GstMatroskaDemux:demux:
failed delayed linking pad audio_00 of GstMatroskaDemux named demux to pad audio_00 of GstMatroskaMux named mux
ERROR: from element /GstPipeline:pipeline0/GstMatroskaDemux:demux: Internal data stream error.
Additional debug info:
../gst-plugins-good/gst/matroska/matroska-demux.c(5715): gst_matroska_demux_loop (): /GstPipeline:pipeline0/GstMatroskaDemux:demux:
streaming stopped, reason not-linked (-1)
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
Freeing pipeline ...
My best attempt at the transcoding mentioned above goes:
gst-launch-1.0 -v filesrc location=test.mkv ! matroskademux name=demux \
matroskamux name=mux ! filesink location=test2.mkv \
demux.video_00 ! queue ! 'video/x-h264' ! h264parse ! mux. \
demux.audio_00 ! queue ! rawaudioparse ! audioconvert ! audioresample ! avenc_aac ! mux.
with the same result. Removing the pad name audio_00 leads to gst being stuck at PREROLLING.
I have seen a few people facing similar problems:
http://gstreamer-devel.966125.n4.nabble.com/Putting-h264-file-inside-a-container-td4668158.html
http://gstreamer-devel.966125.n4.nabble.com/Changing-the-container-format-td3576914.html
As therein, keeping only video or only audio works.
I think the rawaudioparse should not be there. I tried your pipeline and had trouble with it too. I just put together the pipeline the way I would have done it, and it seemed to work:
filesrc location=test.mkv ! matroskademux \
matroskademux0. ! queue ! audioconvert ! avenc_aac ! matroskamux ! filesink location=test2.mkv \
matroskademux0. ! queue ! h264parse ! matroskamux0.
Audio in my case was:
Stream #0:0(eng): Audio: pcm_f32le, 44100 Hz, 2 channels, flt, 2822 kb/s (default)
Another format may require additional transformations.
The problem is that the pads video_00 and audio_00 have been renamed video_0 and audio_0. This can be seen with gst-inspect-1.0 matroskademux, which shows that the pad template now reads video_%u. Note that some GStreamer documentation pages have not been updated to reflect this.
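For reference, the pad templates can be listed with gst-inspect (output abridged; exact formatting varies by version):
gst-inspect-1.0 matroskademux
SRC template: 'video_%u'
SRC template: 'audio_%u'
SRC template: 'subtitle_%u'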
The first command, MKV to MKV, should read:
gst-launch-1.0 filesrc location=test.mkv ! matroskademux name=demux \
matroskamux name=mux ! filesink location=test2.mkv \
demux.video_0 ! queue ! mux.video_0 \
demux.audio_0 ! queue ! mux.audio_0
(Note the added queues)
The second command, MKV to MKV re-encoding the audio, should read:
gst-launch-1.0 -v filesrc location=test.mkv ! matroskademux name=demux \
matroskamux name=mux ! filesink location=test2.mkv \
demux.video_0 ! queue ! 'video/x-h264' ! h264parse ! mux. \
demux.audio_0 ! queue ! rawaudioparse ! audioconvert ! audioresample ! avenc_aac ! mux.
The same result could have been achieved by not specifying the pads and using caps filters where needed, as in the sketch below.
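For example, a sketch of the caps-filter variant (untested; the filters placed right after the demuxer steer which pad links to which branch):
gst-launch-1.0 filesrc location=test.mkv ! matroskademux name=demux \
matroskamux name=mux ! filesink location=test2.mkv \
demux. ! 'video/x-h264' ! queue ! mux. \
demux. ! 'audio/x-raw' ! queue ! audioconvert ! audioresample ! avenc_aac ! mux.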
Thanks go to user Florian Zwoch for providing a working pipeline.

How to send data from a file to webrtcbin element in gstreamer?

I am a beginner with gstreamer so bear with me.
I have a working pipeline where audio and video from a test source are sent to the webrtcbin element used to send out the offer. The pipeline is as follows:
PIPELINE_DESC = '''
webrtcbin name=sendrecv stun-server=stun://stun.l.google.com:19302
audiotestsrc is-live=true wave=red-noise ! audioconvert ! audioresample ! queue ! opusenc ! rtpopuspay !
queue ! application/x-rtp,media=audio,encoding-name=OPUS,payload=96 ! sendrecv.
videotestsrc is-live=true pattern=ball ! video/x-raw,width=320,height=240 ! videoconvert ! queue ! x264enc ! rtph264pay !
queue ! application/x-rtp,media=video,encoding-name=H264,payload=97 ! sendrecv.
'''
However, doing this consumes a lot of CPU/memory, as GStreamer has to encode the audio/video. Hence I want to use a pre-recorded file to lower the resource usage.
I want to use a sample file (sample.mp4) to send audio and video to the webrtcbin element. The mp4 file has H264 video and AAC audio. I have tried a lot of combinations of elements, but it is not working. Could you please help me correct my pipeline?
PIPELINE_DESC = '''
webrtcbin name=sendrecv stun-server=stun://stun.l.google.com:19302
filesrc location=sample.mp4 ! decodebin ! audioconvert ! sendrecv.
filesrc location=sample.mp4 ! decodebin ! videoconvert ! sendrecv.
'''
Many thanks in advance.
An mp4 file is a container format, and it needs to be demultiplexed to get the video and audio. For that purpose, you can use GStreamer's qtdemux element.
Considering the above, an example pipeline could be something like this:
PIPELINE_DESC = '''
filesrc location=test.mp4 ! qtdemux name=demux
webrtcbin name=sendrecv stun-server=stun://stun.l.google.com:19302
demux.audio_0 ! aacparse ! rtpmp4apay !
queue ! application/x-rtp,media=audio,encoding-name=MP4A-LATM,payload=96 ! sendrecv.
demux.video_0 ! h264parse ! rtph264pay !
queue ! application/x-rtp,media=video,encoding-name=H264,payload=97 ! sendrecv.
'''

rtsp audio+video using Gstreamer Android

I am trying to construct an RTSP pipeline on the client side to receive audio and video streams on the Android platform.
Only the video pipeline works fine:
data->pipeline = gst_parse_launch("rtspsrc location=rtsp://192.168.1.100:8554/ss ! gstrtpjitterbuffer ! rtph264depay ! h264parse ! amcviddec-omxtiducati1videodecoder ! ffmpegcolorspace ! autovideosink",&error);
I need to receive the audio stream as well, so I tried the pipeline below:
gst-launch rtspsrc location=rtsp://192.168.1.100:8554/ss demux. ! queue ! rtph264depay ! h264parse ! ffdec_h264 ! autovideosink demux. ! queue ! rtpmp4gdepay ! aacparse ! ffdec_aac ! audioconvert ! autoaudiosink
GStreamer throws an error saying there is no element "demux".
Please let me know the proper RTSP pipeline to receive audio and video streams on Android.
Please try this (tested):
gst-launch rtspsrc location=rtsp://192.168.1.100:8554/ss name=demux demux. ! queue ! rtph264depay ! h264parse ! ffdec_h264 ! autovideosink demux. ! queue ! rtpmp4gdepay ! aacparse ! ffdec_aac ! audioconvert ! autoaudiosink
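On GStreamer 1.0 the equivalent would be (untested sketch; ffdec_h264, ffdec_aac and ffmpegcolorspace were renamed avdec_h264, avdec_aac and videoconvert):
gst-launch-1.0 rtspsrc location=rtsp://192.168.1.100:8554/ss name=demux demux. ! queue ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! autovideosink demux. ! queue ! rtpmp4gdepay ! aacparse ! avdec_aac ! audioconvert ! autoaudiosink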

How to demux audio and video from rtspsrc and then save to file using matroska mux?

I have been working on an application where I use rtspsrc to gather audio and video from one network camera to another. However, I cannot watch the stream from the camera and thereby can't verify that the stream works as intended. To verify that the stream is correct, I want to record it on an SD card and then play the file on a computer. The problem is that I want the camera to do as much of the parsing, decoding, and depayloading as possible, since that is the purpose of the application.
I thereby have to separate the audio and video streams with a demuxer, do the parsing, decoding, etc., and thereafter mux them back into a Matroska file.
The video decoder has been omitted since it is not done yet for this camera.
Demux to live playback sink (works)
gst-launch-0.10 -v rtspsrc location="rtsp://host:pass#192.168.0.91/XXX/XXXX?resolution=1280x720&audio=1&audiocodec=g711&audiosamplerate=8000&audiobitrate=64000" latency=0 name=d d. ! rtppcmudepay ! mulawdec ! audioresample ! audioconvert ! autoaudiosink d. ! rtph264depay ! ffdec_h264 ! queue ! ffmpegcolorspace ! autovideosink
Multiple rtspsrc to matroska (works)
gst-launch-1.0 -v rtspsrc location="rtsp://host:pass#192.168.0.91/XXX/XXXX?audio=1&audiocodec=g711&audiosamplerate=8000&audiobitrate=64000" latency=0 ! rtppcmudepay ! mulawdec ! audioresample ! audioconvert ! queue ! matroskamux name=mux ! filesink location=/var/spool/storage/SD_DISK/testmovie.mkv rtspsrc location="rtsp://root:pass#192.168.0.91/axis-media/media.amp?resolution=1280x720" latency=0 ! rtph264depay ! h264parse ! mux.
Single rtspsrc to matroska (fails)
gst-launch-1.0 -v rtspsrc location="rtsp://host:pass#192.168.0.91/XXX/XXXX?resolution=1280x720&audio=1&audiocodec=g711&audiosamplerate=8000&audiobitrate=64000" latency=0 name=d d. ! queue ! rtppcmudepay ! mulawdec ! audioresample ! audioconvert ! queue ! matroskamux name=mux d. ! queue ! rtph264depay ! h264parse ! queue ! mux. ! filesink location=/var/spool/storage/SD_DISK/testmoviesinglertsp.mkv
The last example fails with the error message:
WARNING: erroneous pipeline: link without source element
Have I misunderstood the usage of matroskamux, and why do the two examples above work but not the last?
The problem is here:
queue ! mux. ! filesink
You need to do
queue ! mux. mux. ! filesink
mux. means that gst-launch should automatically select a pad from mux and link it. You could also manually specify a pad name, like mux.src. So, syntactically, you are missing another element/pad there to link to the other element.
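Applied to the failing single-rtspsrc pipeline from the question, the fixed version reads:
gst-launch-1.0 -v rtspsrc location="rtsp://host:pass#192.168.0.91/XXX/XXXX?resolution=1280x720&audio=1&audiocodec=g711&audiosamplerate=8000&audiobitrate=64000" latency=0 name=d d. ! queue ! rtppcmudepay ! mulawdec ! audioresample ! audioconvert ! queue ! matroskamux name=mux d. ! queue ! rtph264depay ! h264parse ! queue ! mux. mux. ! filesink location=/var/spool/storage/SD_DISK/testmoviesinglertsp.mkv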

Gstreamer mux, caps refused

I'm using the following pipeline (SIMPLIFIED) in GStreamer OSS Build 0.10.7 on Win 7 x64:
udpsrc ! application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96 !
gstrtpjitterbuffer latency=200 ! rtph264depay ! tee name=h264Tee
h264Tee. ! queue ! h264parse ! mux.
matroskamux name=mux ! filesink location=rec.mkv sync=false // same for avimux/mp4/qt
h264Tee. ! queue ! ffdec_h264 ! tee name=videoTee
//.videoTee ! queue ! dx9videosink
//.videoTee ! queue ! appsink
//udpsrc ! queue ! directsoundsink
audiotestsrc ! mux. //only for testing, should be connected to udpsrc
The pipeline is launched via Gstreamer-Sharp.
Here's the console output of the pipeline:
WARN default xoverlay.c:354:gst_x_overlay_set_xwindow_id:<videoSink> Using deprecated gst_x_overlay_set_xwindow_id()
ERROR d3dvideosink d3dvideosink.c:2204:gst_d3dvideosink_release_swap_chain: Direct3D device has not been initialized
WARN bin gstbin.c:2378:gst_bin_do_latency_func:<pipeline0> failed to query latency
WARN matroskamux matroska-mux.c:970:gst_matroska_mux_video_pad_setcaps:<mux> pad video_0 refused caps 05370C40
Both video and audio play just fine as long as I leave out the muxer. When I include the muxer in the pipeline, the video freezes immediately and no sound can be heard. What's wrong? Why does the muxer refuse the caps?
OK, solved it myself:
The video caps above don't contain sprop-parameter-sets, which isn't needed for playback. For muxing, however, it is needed, since various properties of the stream are encoded within it:
udpsrc !
application/x-rtp, media=(string)video, clock-rate=(int)90000,
encoding-name=(string)H264,
sprop-parameter-sets= (string)\"Z0LADdkBQfsBEAAAAwAQAAADAyjxQqSA\\,aMuMTIA\\=\",
payload=(int)96,
ssrc=(uint)2332354585,
clock-base=(uint)1158355497,
seqnum-base=(uint)10049 !
gstrtpjitterbuffer latency=200 ! rtph264depay ! tee name=h264Tee
...