How to demux audio and video from rtspsrc and then save to file using matroska mux? - gstreamer

I have been working on an application where I use rtspsrc to gather audio and video from one network camera to another. However, I cannot watch the stream from the camera and therefore can't verify that the stream works as intended. To verify that the stream is correct, I want to record it to an SD card and then play the file on a computer. The problem is that I want the camera to do as much of the parsing, decoding and depayloading as possible, since that is the purpose of the application.
I therefore have to separate the audio and video streams with a demuxer, do the parsing, decoding etc., and thereafter mux them back into a Matroska file.
The video decoder has been omitted since it is not done yet for this camera.
Demux to live playback sink (works)
gst-launch-0.10 -v rtspsrc location="rtsp://host:pass#192.168.0.91/XXX/XXXX?resolution=1280x720&audio=1&audiocodec=g711&audiosamplerate=8000&audiobitrate=64000" latency=0 name=d d. ! rtppcmudepay ! mulawdec ! audioresample ! audioconvert ! autoaudiosink d. ! rtph264depay ! ffdec_h264 ! queue ! ffmpegcolorspace ! autovideosink
Multiple rtspsrc to matroska (works)
gst-launch-1.0 -v rtspsrc location="rtsp://host:pass#192.168.0.91/XXX/XXXX?audio=1&audiocodec=g711&audiosamplerate=8000&audiobitrate=64000" latency=0 ! rtppcmudepay ! mulawdec ! audioresample ! audioconvert ! queue ! matroskamux name=mux ! filesink location=/var/spool/storage/SD_DISK/testmovie.mkv rtspsrc location="rtsp://root:pass#192.168.0.91/axis-media/media.amp?resolution=1280x720" latency=0 ! rtph264depay ! h264parse ! mux.
Single rtspsrc to matroska (fails)
gst-launch-1.0 -v rtspsrc location="rtsp://host:pass#192.168.0.91/XXX/XXXX?resolution=1280x720&audio=1&audiocodec=g711&audiosamplerate=8000&audiobitrate=64000" latency=0 name=d d. ! queue ! rtppcmudepay ! mulawdec ! audioresample ! audioconvert ! queue ! matroskamux name=mux d. ! queue ! rtph264depay ! h264parse ! queue ! mux. ! filesink location=/var/spool/storage/SD_DISK/testmoviesinglertsp.mkv
The last example fails with the error message
WARNING: erroneous pipeline: link without source element
Have I misunderstood the usage of matroskamux, and why do the two examples above work but not the last?

The problem is here:
queue ! mux. ! filesink
You need to do
queue ! mux. mux. ! filesink
mux. means that gst-launch should automatically select a pad from mux and link it. You could also specify a pad name manually, like mux.src. So syntactically you were missing a second element/pad reference there to link to the filesink.
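Applying that fix to the failing single-rtspsrc example gives the following (an untested sketch; same URL, credentials and paths as in the original command):

```
gst-launch-1.0 -v rtspsrc location="rtsp://host:pass#192.168.0.91/XXX/XXXX?resolution=1280x720&audio=1&audiocodec=g711&audiosamplerate=8000&audiobitrate=64000" latency=0 name=d d. ! queue ! rtppcmudepay ! mulawdec ! audioresample ! audioconvert ! queue ! matroskamux name=mux d. ! queue ! rtph264depay ! h264parse ! queue ! mux. mux. ! filesink location=/var/spool/storage/SD_DISK/testmoviesinglertsp.mkv
```

The only change from the failing version is `mux. mux. ! filesink`: the first `mux.` terminates the video branch into the muxer, and the second starts a new chain from the muxer's source pad to the filesink.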

Related

gst-launch: How do I edit this pipeline so it plays the audio?

As the title says, how can I change this so it also plays the file's audio?
gst-launch-1.0 filesrc location='/usr/share/myfile.mp4' ! qtdemux ! h264parse ! imxvpudec ! imxipuvideosink framebuffer=/dev/fb2 &
I can get the file to play with audio using:
gst-launch-1.0 -v playbin uri=file:///path/to/somefile.mp4
But I need the output to go to device fb2, as in the first example.
Many thanks.
I posted a link to this question on the GStreamer subreddit, and a hero called Omerzet saved the day.
The following is the solution:
gst-launch-1.0 filesrc location='/usr/share/myfile.mp4' ! qtdemux name=demux demux.video_0 ! queue ! h264parse ! imxvpudec ! imxipuvideosink framebuffer=/dev/fb2 demux.audio_0 ! queue ! decodebin ! audioconvert ! audioresample ! alsasink device="sysdefault:CARD=imxhdmisoc"
Where framebuffer diverts the video to device /dev/fb2.
And
alsasink device="sysdefault:CARD=imxhdmisoc"
directs the audio to my defined sound card.

Where should 'queue' be positioned in a pipeline?

When decoding a file or a stream, does anyone know where queue should be placed in the pipeline? All of the following seem to have the same behaviour:
gst-launch-1.0 filesrc location=r_25_1920.mp4 ! qtdemux name=demuxer demuxer. ! **queue** ! avdec_h264 ! autovideosink
gst-launch-1.0 rtspsrc location=rtsp://192.168.100.60:554/stream1 latency=0 ! rtph264depay ! **queue** ! avdec_h264 ! autovideosink
gst-launch-1.0 filesrc location=r_25_1920.mp4 ! **queue** ! qtdemux name=demuxer demuxer. ! avdec_h264 ! autovideosink
gst-launch-1.0 rtspsrc location=rtsp://192.168.100.60:554/stream1 latency=0 ! **queue** ! rtph264depay ! avdec_h264 ! autovideosink
gst-launch-1.0 filesrc location=r_25_1920.mp4 ! qtdemux name=demuxer demuxer. ! avdec_h264 ! **queue** ! autovideosink
gst-launch-1.0 rtspsrc location=rtsp://192.168.100.60:554/stream1 latency=0 ! rtph264depay ! avdec_h264 ! **queue** ! autovideosink
Someone more skilled may give a better answer, but I don't think that queue helps much in this case.
Queue has many properties that can be set to obtain various behaviors. You can check these with gst-inspect-1.0 queue and experiment with them.
With all properties left at their defaults, queue is mainly used to solve deadlock/race/synchronization issues that can happen in parallel pipelines, such as when tee/demuxers/... create several streams from one. There you would place a queue at the front of each outgoing sub-pipeline, such as:
...your_source_stream ! tee name=t \
t. ! queue ! some_processing... \
t. ! queue ! some_other_processing...
So the correct location in your case would be after qtdemux, though it is probably useless until a second stream is extracted from that container.
Or, inversely, when sinking several streams into a mux/compositor/..., you would add a queue at the tail of each incoming stream, before the mux:
...some_input ! queue ! mux.sink_0 \
...some_other_input ! queue ! mux.sink_1 \
somemux name=mux ! what_you_do_with_muxed_stream...
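Both patterns can be tried without a camera by using test sources. The following is a sketch (which elements are available depends on your installed plugins; file names are arbitrary):

```
# tee: one source fanned out to two branches, a queue in front of each
gst-launch-1.0 videotestsrc num-buffers=100 ! tee name=t \
    t. ! queue ! videoconvert ! autovideosink \
    t. ! queue ! jpegenc ! avimux ! filesink location=tee-test.avi

# mux: two sources joined, a queue at the tail of each before the muxer
gst-launch-1.0 videotestsrc num-buffers=100 ! queue ! mux. \
    audiotestsrc num-buffers=100 ! queue ! mux. \
    matroskamux name=mux ! filesink location=mux-test.mkv
```

Queue behavior can also be tuned via its properties, e.g. leaky=downstream drops the oldest buffers instead of blocking when the queue is full, and setting max-size-buffers/max-size-bytes/max-size-time to 0 removes the corresponding limit (see gst-inspect-1.0 queue).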

Is there a way to create a dynamic pipeline in the gst-launch-1.0 PIPELINE-DESCRIPTION language?

I know how to create a dynamic pipeline in Python or C, but I wonder whether it is possible in the PIPELINE-DESCRIPTION language.
I have an HLS stream, which may contain audio, video, or audio+video, and I want to handle all of these with a pipeline description.
The following pipeline breaks when there is video only or audio only:
gst-launch-1.0 -e rtspsrc location='rtsp://localhost:554' latency=0 name=d d. ! queue ! capsfilter caps="application/x-rtp,media=video" ! rtph264depay ! mpegtsmux name=mux ! filesink location=file.ts d. ! queue ! capsfilter caps="application/x-rtp,media=audio" ! decodebin ! audioconvert ! audioresample ! lamemp3enc ! mux.

capture segmented audio and video with gstreamer

I'm trying to record audio and video from the internal webcam and mic to segmented files with gstreamer.
Recording to a single file works:
gst-launch-1.0 -e avfvideosrc !
video/x-raw ! vtenc_h264 ! h264parse ! queue !
mpegtsmux name=mux ! filesink location=test.mp4 osxaudiosrc !
decodebin ! audioconvert ! faac ! aacparse ! queue ! mux.
It doesn't work when doing:
gst-launch-1.0 -e avfvideosrc !
video/x-raw ! vtenc_h264 ! h264parse ! queue !
splitmuxsink
muxer=mpegtsmux
location=test%04d.mp4
max-size-time=1000000000
name=mux osxaudiosrc !
decodebin ! audioconvert ! faac ! aacparse ! queue ! mux.
saying erroneous pipeline: could not link queue1 to mux
I'm using gstreamer 1.12.3 on macOS Sierra.
Note: The H264/AAC encoding isn't necessary for what I want to achieve, so if there are solutions that only work with e.g. avimux, for whatever reason, that's fine.
EDIT: I've tried this on a Windows machine and get the same error.
gst-launch-1.0 -ev ksvideosrc ! video/x-raw !
videoconvert ! queue !
splitmuxsink max-size-time=1000000000 muxer=avimux name=mux
location=video%04d.avi autoaudiosrc !
decodebin ! audioconvert ! queue ! mux.
Just like on Mac, replacing splitmuxsink with avimux ! filesink works. I'm sure I'm just missing some 'pipeline' logic, so any clarification that could push me in the right direction would be helpful.
I needed to send the audio stream to the audio track of the muxer like so: mux.audio_0
gst-launch-1.0 -ev ksvideosrc ! video/x-raw !
videoconvert ! queue !
splitmuxsink max-size-time=1000000000 muxer=avimux name=mux
location=video%04d.avi autoaudiosrc !
decodebin ! audioconvert ! queue ! mux.audio_0
This is one of those cases where the documentation is clear enough, but you're missing some basic knowledge of how to interpret it.
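The same fix applied to the original macOS pipeline would presumably look like this (a sketch, untested on that setup; splitmuxsink's audio request pads are named audio_%u):

```
gst-launch-1.0 -e avfvideosrc ! video/x-raw ! vtenc_h264 ! h264parse ! queue ! \
    splitmuxsink muxer=mpegtsmux location=test%04d.mp4 max-size-time=1000000000 name=mux \
    osxaudiosrc ! decodebin ! audioconvert ! faac ! aacparse ! queue ! mux.audio_0
```

The bare `mux.` failed because gst-launch could not auto-select a compatible pad on splitmuxsink; naming the request pad explicitly resolves the link.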

rtsp audio+video using Gstreamer Android

I am trying to construct an RTSP pipeline on the client side to receive audio and video streams on the Android platform.
The video-only pipeline works fine:
data->pipeline = gst_parse_launch("rtspsrc location=rtsp://192.168.1.100:8554/ss ! gstrtpjitterbuffer ! rtph264depay ! h264parse ! amcviddec-omxtiducati1videodecoder ! ffmpegcolorspace ! autovideosink",&error);
I need to receive the audio stream as well, so I tried the pipeline below:
gst-launch rtspsrc location=rtsp://192.168.1.100:8554/ss demux. ! queue ! rtph264depay ! h264parse ! ffdec_h264 ! autovideosink demux. ! queue ! rtpmp4gdepay ! aacparse ! ffdec_aac ! audioconvert ! autoaudiosink
GStreamer throws an error saying there is no element "demux".
Please let me know the proper RTSP pipeline to receive audio and video streams on Android.
Please try this (tested):
gst-launch rtspsrc location=rtsp://192.168.1.100:8554/ss name=demux demux. ! queue ! rtph264depay ! h264parse ! ffdec_h264 ! autovideosink demux. ! queue ! rtpmp4gdepay ! aacparse ! ffdec_aac ! audioconvert ! autoaudiosink