When decoding a file or a stream, does anyone know where queue should be placed in the pipeline? All of the following seem to have the same behaviour:
gst-launch-1.0 filesrc location=r_25_1920.mp4 ! qtdemux name=demuxer demuxer. ! **queue** ! avdec_h264 ! autovideosink
gst-launch-1.0 rtspsrc location=rtsp://192.168.100.60:554/stream1 latency=0 ! rtph264depay ! **queue** ! avdec_h264 ! autovideosink
gst-launch-1.0 filesrc location=r_25_1920.mp4 ! **queue** ! qtdemux name=demuxer demuxer. ! avdec_h264 ! autovideosink
gst-launch-1.0 rtspsrc location=rtsp://192.168.100.60:554/stream1 latency=0 ! **queue** ! rtph264depay ! avdec_h264 ! autovideosink
gst-launch-1.0 filesrc location=r_25_1920.mp4 ! qtdemux name=demuxer demuxer. ! avdec_h264 ! **queue** ! autovideosink
gst-launch-1.0 rtspsrc location=rtsp://192.168.100.60:554/stream1 latency=0 ! rtph264depay ! avdec_h264 ! **queue** ! autovideosink
Someone more skilled may give a better answer, but I don't think that queue can help much in this case.
Queue has many properties that can be set in order to get various behaviors. You can check these with gst-inspect-1.0 queue and experiment with them.
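For instance, a minimal sketch with a test source that removes the buffer-count and byte limits and allows up to three seconds of buffered data instead of the one-second default (max-size-time is in nanoseconds; 0 means unlimited):
gst-launch-1.0 videotestsrc ! queue max-size-buffers=0 max-size-bytes=0 max-size-time=3000000000 ! autovideosink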
With all properties left at their defaults, queue is mainly used to solve deadlock/race/synchronization issues that can happen in parallel pipelines, such as when tee/demuxer/... create several streams from one. There you would place a queue at the front of each outgoing sub-pipeline, such as:
...your_source_stream ! tee name=t \
t. ! queue ! some_processing... \
t. ! queue ! some_other_processing...
So the correct location in your case would be after qtdemux, though it is probably of little use until a second stream is extracted from that container.
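For illustration, a sketch of that placement once both streams are used (assuming the container also carries AAC audio; adjust the decoder otherwise):
gst-launch-1.0 filesrc location=r_25_1920.mp4 ! qtdemux name=demuxer demuxer. ! queue ! avdec_h264 ! autovideosink demuxer. ! queue ! avdec_aac ! audioconvert ! autoaudiosink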
Or, inversely, when sinking several streams into a mux/compositor/..., you would add a queue at the tail of each incoming stream, just before the muxer:
...some_input ! queue ! mux.sink_0 \
...some_other_input ! queue ! mux.sink_1 \
somemux name=mux ! what_you_do_with_muxed_stream...
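As a runnable sketch of that pattern with test sources (assuming x264enc and lamemp3enc are available; mux. lets gst-launch pick a compatible request pad on the muxer):
gst-launch-1.0 videotestsrc num-buffers=300 ! x264enc ! queue ! mux. audiotestsrc num-buffers=300 ! audioconvert ! lamemp3enc ! queue ! mux. matroskamux name=mux ! filesink location=muxed.mkv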
I'm trying to record audio and video from the internal webcam and mic to segmented files with GStreamer.
It works when recording to a single file:
gst-launch-1.0 -e avfvideosrc ! video/x-raw ! vtenc_h264 ! h264parse ! queue ! mpegtsmux name=mux ! filesink location=test.mp4 osxaudiosrc ! decodebin ! audioconvert ! faac ! aacparse ! queue ! mux.
It doesn't work when doing:
gst-launch-1.0 -e avfvideosrc ! video/x-raw ! vtenc_h264 ! h264parse ! queue ! splitmuxsink muxer=mpegtsmux location=test%04d.mp4 max-size-time=1000000000 name=mux osxaudiosrc ! decodebin ! audioconvert ! faac ! aacparse ! queue ! mux.
failing with: erroneous pipeline: could not link queue1 to mux
I'm using GStreamer 1.12.3 on macOS Sierra.
Note: The H.264/AAC encoding isn't necessary for what I want to achieve, so if there are solutions that only work with e.g. avimux, for whatever reason, that's fine.
EDIT: I've tried this on a Windows machine and get the same error.
gst-launch-1.0 -ev ksvideosrc ! video/x-raw ! videoconvert ! queue ! splitmuxsink max-size-time=1000000000 muxer=avimux name=mux location=video%04d.avi autoaudiosrc ! decodebin ! audioconvert ! queue ! mux.
Just like on Mac, replacing splitmuxsink with avimux ! filesink works. I'm sure I'm just missing out on some 'pipeline' logic, so any clarification that can push me in the right direction would be helpful.
I needed to send the audio stream to the audio track of the muxer like so: mux.audio_0
gst-launch-1.0 -ev ksvideosrc ! video/x-raw ! videoconvert ! queue ! splitmuxsink max-size-time=1000000000 muxer=avimux name=mux location=video%04d.avi autoaudiosrc ! decodebin ! audioconvert ! queue ! mux.audio_0
This is one of those cases where the documentation is clear enough, but you're missing some basic knowledge on how to interpret it.
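For instance, the request pads splitmuxsink actually exposes (video and audio_%u, which is why mux.audio_0 resolves) are listed by:
gst-inspect-1.0 splitmuxsink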
I am trying to construct an RTSP pipeline on the client side to receive audio and video streams on the Android platform.
The video-only pipeline works fine:
data->pipeline = gst_parse_launch("rtspsrc location=rtsp://192.168.1.100:8554/ss ! gstrtpjitterbuffer ! rtph264depay ! h264parse ! amcviddec-omxtiducati1videodecoder ! ffmpegcolorspace ! autovideosink",&error);
I need to receive the audio stream as well, so I tried the pipeline below:
gst-launch rtspsrc location=rtsp://192.168.1.100:8554/ss demux. ! queue ! rtph264depay ! h264parse ! ffdec_h264 ! autovideosink demux. ! queue ! rtpmp4gdepay ! aacparse ! ffdec_aac ! audioconvert ! autoaudiosink
GStreamer throws an error saying no element "demux".
Please let me know the proper RTSP pipeline to receive audio and video streams on Android.
Please try this (tested):
gst-launch rtspsrc location=rtsp://192.168.1.100:8554/ss name=demux demux. ! queue ! rtph264depay ! h264parse ! ffdec_h264 ! autovideosink demux. ! queue ! rtpmp4gdepay ! aacparse ! ffdec_aac ! audioconvert ! autoaudiosink
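For reference, a roughly equivalent GStreamer 1.0 pipeline (a sketch, untested here, assuming the gst-libav decoders are installed) would be:
gst-launch-1.0 rtspsrc location=rtsp://192.168.1.100:8554/ss name=demux demux. ! queue ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! autovideosink demux. ! queue ! rtpmp4gdepay ! aacparse ! avdec_aac ! audioconvert ! autoaudiosink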
I'm working with GStreamer. Is there any way to store the output of the mpegtsdemux element in a pipeline to a file? I'm interested in separating the audio and video TS packets into different files.
You can separate the video and audio tracks after mpegtsdemux. I hope this example will help you:
gst-launch filesrc location="source.ts" ! mpegtsdemux name=demux ! queue max-size-buffers=400000000 ! decodebin ! videorate ! videoscale ! ffenc_mpeg4 ! matroskamux ! filesink location="your_video_file.mkv" demux. ! queue max-size-buffers=400000000 ! decodebin ! audioconvert ! wavenc ! filesink location="your_audio_file.wav"
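If you would rather keep the elementary streams untouched instead of re-encoding, a GStreamer 1.0 sketch (assuming the source TS carries H.264 video and AAC audio; adjust the parsers otherwise) could rewrap each track into its own TS file:
gst-launch-1.0 filesrc location="source.ts" ! tsdemux name=demux demux. ! queue ! h264parse ! mpegtsmux ! filesink location="video_only.ts" demux. ! queue ! aacparse ! mpegtsmux ! filesink location="audio_only.ts"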
I have been working on an application where I use rtspsrc to gather audio and video from one network camera to another. However, I cannot watch the stream from the camera and thereby can't verify that the stream works as intended. To verify that the stream is correct, I want to record it on an SD card and then play the file on a computer. The problem is that I want the camera to do as much of the parsing, decoding, and depayloading as possible, since that is the purpose of the application.
I therefore have to separate the audio and video streams with a demuxer, do the parsing, decoding etc., and thereafter mux them back into a Matroska file.
The video decoder has been omitted since it is not done yet for this camera.
Demux to live playback sink (works)
gst-launch-0.10 -v rtspsrc location="rtsp://host:pass#192.168.0.91/XXX/XXXX?resolution=1280x720&audio=1&audiocodec=g711&audiosamplerate=8000&audiobitrate=64000" latency=0 name=d d. ! rtppcmudepay ! mulawdec ! audioresample ! audioconvert ! autoaudiosink d. ! rtph264depay ! ffdec_h264 ! queue ! ffmpegcolorspace ! autovideosink
Multiple rtspsrc to matroska (works)
gst-launch-1.0 -v rtspsrc location="rtsp://host:pass#192.168.0.91/XXX/XXXX?audio=1&audiocodec=g711&audiosamplerate=8000&audiobitrate=64000" latency=0 ! rtppcmudepay ! mulawdec ! audioresample ! audioconvert ! queue ! matroskamux name=mux ! filesink location=/var/spool/storage/SD_DISK/testmovie.mkv rtspsrc location="rtsp://root:pass#192.168.0.91/axis-media/media.amp?resolution=1280x720" latency=0 ! rtph264depay ! h264parse ! mux.
Single rtspsrc to matroska (fails)
gst-launch-1.0 -v rtspsrc location="rtsp://host:pass#192.168.0.91/XXX/XXXX?resolution=1280x720&audio=1&audiocodec=g711&audiosamplerate=8000&audiobitrate=64000" latency=0 name=d d. ! queue ! rtppcmudepay ! mulawdec ! audioresample ! audioconvert ! queue ! matroskamux name=mux d. ! queue ! rtph264depay ! h264parse ! queue ! mux. ! filesink location=/var/spool/storage/SD_DISK/testmoviesinglertsp.mkv
The last example fails with the error message
WARNING: erroneous pipeline: link without source element
Have I misunderstood the usage of matroskamux, and why do the two examples above work but not the last?
The problem is here:
queue ! mux. ! filesink
You need to do
queue ! mux. mux. ! filesink
mux. means that gst-launch should automatically select a pad from mux and link it. You could also specify a pad name manually, like mux.src. So, syntactically, your version is missing the element/pad that should act as the source feeding filesink.
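A minimal runnable sketch of explicit pad naming, with a test source (here audio_0 names a request pad on matroskamux):
gst-launch-1.0 audiotestsrc num-buffers=100 ! audioconvert ! vorbisenc ! queue ! mux.audio_0 matroskamux name=mux ! filesink location=pads.mka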
Hi all, I'm trying to play and record an MP3 souphttpsrc stream at the same time, but I don't get a good result. Can someone help, please?
gst-launch-1.0 -e filesrc location=/dev/fd/0 ! h264parse ! tee name=myvid ! queue ! decodebin ! xvimagesink sync=false myvid. ! queue ! mux.video_0 alsasrc device="plughw:2,0" ! "audio/x-raw,rate=44100,channels=1,depth=24" ! audioconvert ! queue ! filesink location=/tmp/out.mp4
Thank you.
Hi, your pipeline is slightly wrong:
There is no encoding happening for the audio, so you're saving raw audio into the container.
There is no muxer, so mux.video_0 does not resolve to any pad on any element.
Here is a pipeline without these issues:
gst-launch-1.0 -e mp4mux name=mux ! filesink location=/tmp/out.mp4 filesrc location=/dev/fd/0 ! h264parse ! tee name=myvid ! queue ! decodebin ! xvimagesink sync=false myvid. ! queue ! mux.video_0 alsasrc device="plughw:2,0" ! "audio/x-raw,rate=44100,channels=1,depth=24" ! audioconvert ! queue ! lamemp3enc ! mux.audio_0
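Assuming the recording ran, the resulting file's tracks can then be checked with:
gst-discoverer-1.0 /tmp/out.mp4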