GStreamer: vorbis transmuxing (pass-through) and transcoding via parsebin

Can anybody help me understand how to work with Vorbis in GStreamer via the parsebin element? In the examples below I use a Matroska file with one video (H.264) and one audio (Vorbis) stream.
For example, the following case works (auto sinks, i.e. autovideosink/autoaudiosink, without vorbisparse):
gst-launch-1.0 filesrc location="h264.Vorbis.10sec.mkv" ! parsebin name=pb pb. ! queue ! avdec_h264 ! videoconvert ! autovideosink pb. ! queue ! vorbisdec ! audioconvert ! autoaudiosink
but this case hangs (auto sinks, with vorbisparse):
gst-launch-1.0 filesrc location="h264.Vorbis.10sec.mkv" ! parsebin name=pb pb. ! queue ! avdec_h264 ! videoconvert ! autovideosink pb. ! queue ! vorbisparse ! vorbisdec ! audioconvert ! autoaudiosink
works (separate filesinks, with vorbisparse):
gst-launch-1.0 filesrc location="h264.Vorbis.10sec.mkv" ! parsebin name=pb pb. ! queue ! matroskamux ! filesink location=d:/v.mkv pb. ! queue ! vorbisparse ! matroskamux ! filesink location=d:/a.mkv
hangs (separate filesinks, without vorbisparse):
gst-launch-1.0 filesrc location="h264.Vorbis.10sec.mkv" ! parsebin name=pb pb. ! queue ! matroskamux ! filesink location=d:/v.mkv pb. ! queue ! matroskamux ! filesink location=d:/a.mkv
works (multiqueue, separate filesinks, with vorbisparse):
gst-launch-1.0 filesrc location="h264.Vorbis.10sec.mkv" ! parsebin name=pb ! multiqueue name=mq pb. ! mq. mq.src_0 ! matroskamux name=mux ! filesink location="d:/v.mkv" mq.src_1 ! vorbisparse ! matroskamux ! filesink location="d:/a.mkv"
hangs (multiqueue, a single filesink, with vorbisparse):
gst-launch-1.0 filesrc location="h264.Vorbis.10sec.mkv" ! parsebin name=pb ! multiqueue name=mq pb. ! mq. mq.src_0 ! matroskamux name=mux ! filesink location="d:/va.mkv" mq.src_1 ! vorbisparse ! mux.
P.S. My main goal is to use this parsebin element and be able to transcode or transmux each stream as needed (a sketch of one such combination follows the list below). For example:
video => transmux, audio => transmux
video => transmux, audio => transcode
video => transcode, audio => transmux
video => transcode, audio => transcode
I will be grateful for your clarifications and help.
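For illustration, here is a rough, untested sketch of the "video => transmux, audio => transcode" combination on top of the same parsebin layout; the output name d:/out.mkv and the choice of opusenc as the audio encoder are only assumptions for the example, and it is not claimed to avoid the hangs described above:
gst-launch-1.0 filesrc location="h264.Vorbis.10sec.mkv" ! parsebin name=pb pb. ! queue ! matroskamux name=mux ! filesink location=d:/out.mkv pb. ! queue ! vorbisdec ! audioconvert ! audioresample ! opusenc ! mux.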

Related

Save RTSP stream to file

I can't save audio from the stream; I get only video in the file. I suspect that either I don't need two filesinks in the pipeline, or there is some problem with using two different muxers.
I tried autoaudiosink and autovideosink and they work successfully.
autoaudiosink and autovideosink pipeline:
gst-launch-1.0 rtspsrc location=rtsp://184.72.239.149/vod/mp4:BigBuckBunny_115k.mov latency=0 droponlatency=1 name=rtp_source ! queue ! rtph264depay ! decodebin ! videoconvert ! autovideosink rtp_source. ! queue ! decodebin ! autoaudiosink
Save to file filesink pipeline:
gst-launch-1.0 rtspsrc location=rtsp://184.72.239.149/vod/mp4:BigBuckBunny_115k.mov latency=0 droponlatency=1 name=rtp_source ! queue ! rtph264depay ! decodebin ! vp8enc ! webmmux ! filesink location=BigBuckBunny_115k.webm rtp_source. ! "application/x-rtp, media=(string)audio" ! queue ! decodebin ! vorbisenc ! oggmux ! filesink location=BigBuckBunny_115k.webm
I want to get also audio in resulting file.
You just reuse the existing mux, so that the Vorbis audio is put into the webmmux too:
gst-launch-1.0 rtspsrc location=rtsp://184.72.239.149/vod/mp4:BigBuckBunny_115k.mov \
latency=0 droponlatency=1 name=rtp_source ! queue ! rtph264depay ! decodebin ! \
vp8enc ! webmmux name=mux ! filesink location=BigBuckBunny_115k.webm rtp_source. ! \
application/x-rtp, media=(string)audio" ! queue ! decodebin ! vorbisenc ! mux.

capture segmented audio and video with gstreamer

I'm trying to record audio and video from internal webcam and mic to segmented files with gstreamer.
It works when writing to a single file:
gst-launch-1.0 -e avfvideosrc ! \
video/x-raw ! vtenc_h264 ! h264parse ! queue ! \
mpegtsmux name=mux ! filesink location=test.mp4 osxaudiosrc ! \
decodebin ! audioconvert ! faac ! aacparse ! queue ! mux.
It doesn't work when doing:
gst-launch-1.0 -e avfvideosrc ! \
video/x-raw ! vtenc_h264 ! h264parse ! queue ! \
splitmuxsink \
muxer=mpegtsmux \
location=test%04d.mp4 \
max-size-time=1000000000 \
name=mux osxaudiosrc ! \
decodebin ! audioconvert ! faac ! aacparse ! queue ! mux.
saying "erroneous pipeline: could not link queue1 to mux".
I'm using gstreamer 1.12.3 on Mac OSX Sierra
Note: The H264/AAC encoding isn't necessary for what I want to achieve, so if there are solutions that only work with e.g. avimux, for whatever reason, that's fine.
EDIT: I've tried this on a Windows machine and get the same error.
gst-launch-1.0 -ev ksvideosrc ! video/x-raw ! \
videoconvert ! queue ! \
splitmuxsink max-size-time=1000000000 muxer=avimux name=mux \
location=video%04d.avi autoaudiosrc ! \
decodebin ! audioconvert ! queue ! mux.
Just like on Mac, replacing splitmuxsink with avimux ! filesink works. I'm sure I'm just missing out on some 'pipeline' logic, so any clarification that can push me in the right direction would be helpful.
I needed to send the audio stream to the audio track of the muxer like so: mux.audio_0
gst-launch-1.0 -ev ksvideosrc ! video/x-raw ! \
videoconvert ! queue ! \
splitmuxsink max-size-time=1000000000 muxer=avimux name=mux \
location=video%04d.avi autoaudiosrc ! \
decodebin ! audioconvert ! queue ! mux.audio_0
This is one of those cases where the documentation is clear enough, but you're missing some basic knowledge of how to interpret it.
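Presumably the same change applies to the original macOS pipeline as well; an untested sketch, assuming splitmuxsink exposes the same audio_%u request pad with mpegtsmux as the muxer:
gst-launch-1.0 -e avfvideosrc ! \
video/x-raw ! vtenc_h264 ! h264parse ! queue ! \
splitmuxsink muxer=mpegtsmux location=test%04d.mp4 max-size-time=1000000000 name=mux \
osxaudiosrc ! decodebin ! audioconvert ! faac ! aacparse ! queue ! mux.audio_0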

When I make the audio and video sync in gstreamer, there would be a huge delay

I use GStreamer to decode H.264. When I use a pipeline like this:
gst-launch-1.0 udpsrc uri=udp://0.0.0.0:15550 caps="application/x-rtp,media=(string)video,clock-rate=(int)90000,payload=(int)33,encoding-name=(string)MP2T" ! .recv_rtp_sink_0 rtpbin latency=800 ! rtpmp2tdepay ! tsdemux name=demux demux. ! h264parse ! queue ! omxh264dec ! vspfilter ! video/x-raw,width=800,height=480 ! waylandsink sync=false max-lateness=-1 demux. ! aacparse ! queue max-size-buffers=8192000 max-size-time=2000000000 ! faad ! alsasink device=media
there is only about 200 ms of delay.
And when I set sync=true, like this:
gst-launch-1.0 udpsrc uri=udp://0.0.0.0:15550 caps="application/x-rtp,media=(string)video,clock-rate=(int)90000,payload=(int)33,encoding-name=(string)MP2T" ! .recv_rtp_sink_0 rtpbin latency=800 ! rtpmp2tdepay ! tsdemux name=demux demux. ! h264parse ! queue ! omxh264dec ! vspfilter ! video/x-raw,width=800,height=480 ! waylandsink sync=true max-lateness=-1 demux. ! aacparse ! queue max-size-buffers=8192000 max-size-time=2000000000 ! faad ! alsasink device=media
the delay reaches 1200 ms.
I have no idea why.

rtsp audio+video using Gstreamer Android

I am trying to construct an RTSP pipeline on the client side to receive audio and video streams on the Android platform.
A video-only pipeline works fine:
data->pipeline = gst_parse_launch("rtspsrc location=rtsp://192.168.1.100:8554/ss ! gstrtpjitterbuffer ! rtph264depay ! h264parse ! amcviddec-omxtiducati1videodecoder ! ffmpegcolorspace ! autovideosink",&error);
I need to receive the audio stream as well, so I tried the pipeline below:
gst-launch rtspsrc location=rtsp://192.168.1.100:8554/ss demux. ! queue ! rtph264depay ! h264parse ! ffdec_h264 ! autovideosink demux. ! queue ! rtpmp4gdepay ! aacparse ! ffdec_aac ! audioconvert ! autoaudiosink
GStreamer throws an error saying there is no element "demux".
Please let me know the proper RTSP pipeline to receive audio and video streams on Android.
Please try this (tested):
gst-launch rtspsrc location=rtsp://192.168.1.100:8554/ss name=demux. ! queue ! rtph264depay ! h264parse ! ffdec_h264 ! autovideosink demux. ! queue ! rtpmp4gdepay ! aacparse ! ffdec_aac ! audioconvert ! autoaudiosink
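For reference, on GStreamer 1.x (where gst-launch, ffdec_h264 and ffdec_aac no longer exist) the same idea would look roughly like this; untested, and it assumes avdec_h264 and avdec_aac are available on the target:
gst-launch-1.0 rtspsrc location=rtsp://192.168.1.100:8554/ss name=src src. ! queue ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! autovideosink src. ! queue ! rtpmp4gdepay ! aacparse ! avdec_aac ! audioconvert ! autoaudiosink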

what is the output of mpegtsdemux element in gstreamer pipeline?

I'm working with GStreamer. Is there any way to store the output of the mpegtsdemux element in a pipeline to a file? I'm interested in separating the audio and video TS packets into different files.
You can separate the video and audio tracks after mpegtsdemux. I hope this example will help you:
gst-launch filesrc location="source.ts" ! mpegtsdemux name=demux ! queue max-size-buffers=400000000 ! decodebin ! videorate ! videoscale ! ffenc_mpeg4 ! matroskamux ! filesink location="your_video_file.mkv" demux. ! queue max-size-buffers=400000000 ! decodebin ! audioconvert ! wavenc! filesink location="your_audio_file.wav"