How to extract H.264 and AAC elementary streams from an MP4/MKV using GStreamer

I could not find a reliable approach to demux the A/V streams and then save the video as a playable H.264 Annex B file.
I tried the following pipeline on my test file:
gst-launch-0.10 filesrc
location=h264_720p_mp_3.1_3mbps_aac_shrinkage.mkv ! matroskademux !
filesink location=abc.h264
-rw-rw-r-- 1 XXX XXX 28697147 Nov 1 10:04 h264_720p_mp_3.1_3mbps_aac_shrinkage.mkv
-rw-rw-r-- 1 XXX XXX 27581733 Nov 1 10:19 abc.h264
A file gets saved with a "not much smaller" size, but it is not playable. The parent container file, however, plays fine with the following pipeline:
gst-launch-0.10 filesrc
location=h264_720p_mp_3.1_3mbps_aac_shrinkage.mkv ! matroskademux !
h264parse ! ffdec_h264 ! ffmpegcolorspace ! ximagesink
Questions
Q1. What are the methods to extract the video ES and audio ES from different containers using GStreamer?
Q2. Are there other methods for Q1 that always work and/or are easier?

In general, you need to specify which pad you're interested in. Otherwise you couldn't distinguish the audio ES from the video ES.
The following works on my machine:
gst-launch-1.0 filesrc location=example.mkv ! queue ! matroskademux name=dmux dmux.video_0 ! queue ! filesink location=vid.264 dmux.audio_0 ! queue ! filesink location=aud.aac
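Note that the streams written this way keep the container's native packing (typically AVC for the video), so players may not accept them directly. A sketch that adds parsers to request playable byte-stream/ADTS output — assuming the file's video track is H.264 and its audio track is AAC (filenames are placeholders):

```shell
# Demux an MKV and write a playable Annex B video stream and an ADTS audio stream.
gst-launch-1.0 filesrc location=example.mkv ! matroskademux name=dmux \
  dmux.video_0 ! queue ! h264parse ! video/x-h264,stream-format=byte-stream ! filesink location=vid.264 \
  dmux.audio_0 ! queue ! aacparse ! audio/mpeg,stream-format=adts ! filesink location=aud.aac
```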

All of the following commands work for me. Each creates an H.264 byte-stream file from an MP4 video file. The newly created file also plays with ffplay or gst-play-1.0.
gst-launch-1.0 filesrc location=./VID-20190903-WA0012.mp4 ! qtdemux name=pnkj_demux ! h264parse ! video/x-h264,stream-format=byte-stream ! filesink location=./VID-20190903-WA0012_1.264
gst-launch-1.0 -e filesrc location=./VID-20190903-WA0012.mp4 ! qtdemux name=pnkj_demux ! h264parse ! video/x-h264,stream-format=byte-stream ! filesink location=./VID-20190903-WA0012_2.264
gst-launch-1.0 filesrc location=./VID-20190903-WA0012.mp4 ! qtdemux name=pnkj_demux pnkj_demux.video_0 ! h264parse ! video/x-h264,stream-format=byte-stream ! filesink location=./VID-20190903-WA0012_3.264
gst-launch-1.0 -e filesrc location=./VID-20190903-WA0012.mp4 ! qtdemux name=pnkj_demux pnkj_demux.video_0 ! h264parse ! video/x-h264,stream-format=byte-stream ! filesink location=./VID-20190903-WA0012_4.264

Related

Gst-launch: How do I edit this pipeline so it also plays the audio?

As the title says, how can I change this so it also plays the file's audio?
gst-launch-1.0 filesrc location='/usr/share/myfile.mp4' ! qtdemux ! h264parse ! imxvpudec ! imxipuvideosink framebuffer=/dev/fb2 &
I can get the file to play with audio using
gst-launch-1.0 -v playbin uri=file:///path/to/somefile.mp4
But I need the output to be onto device fb2 like in the first example
Many thanks
I posted a link to this question into the gstreamer reddit and a hero called Omerzet saved the day.
The following is the solution:
gst-launch-1.0 filesrc location='/usr/share/myfile.mp4' ! qtdemux name=demux demux.video_0 ! queue ! h264parse ! imxvpudec ! imxipuvideosink framebuffer=/dev/fb2 demux.audio_0 ! queue ! decodebin ! audioconvert ! audioresample ! alsasink device="sysdefault:CARD=imxhdmisoc"
Where framebuffer diverts the video to device /dev/fb2, and
alsasink device="sysdefault:CARD=imxhdmisoc"
directs the audio to my defined sound card.

How can I fix the missing duration of a video file from a RTSP stream on Gstreamer?

I'm using the following pipeline to listen to an RTSP stream and save a video file:
gst-launch-1.0 -q rtspsrc location=rtsp://<ip>:<port>/video ! decodebin ! autovideoconvert ! x264enc pass=5 quantizer=25 speed-preset=6 ! h264parse ! matroskamux ! filesink location=<filename>
But even though I can see the files generated, they lack the duration of the video when playing on VLC.
I can fix it by passing the file through ffmpeg afterwards, but I want GStreamer to produce a completely valid video in the first place. How can I fix this pipeline?
gst-launch-1.0 -e rtspsrc location=rtsp://<ip>:<port>/video ! decodebin ! videoconvert ! x264enc ! mp4mux ! filesink location="filename.mp4"
This will create a video whose duration is shown correctly. The -e flag makes gst-launch send EOS on shutdown (e.g. on Ctrl-C), which lets the muxer finalize the file and write the duration.
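If you would rather keep the Matroska container from the original pipeline, a sketch of the same fix (untested here) — the essential part is -e, so that the muxer receives EOS and can finalize the file with its duration:

```shell
gst-launch-1.0 -e rtspsrc location=rtsp://<ip>:<port>/video ! decodebin ! autovideoconvert ! \
  x264enc ! h264parse ! matroskamux ! filesink location=<filename>.mkv
```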

Adding subtitle while doing H.264 encoding into a Matroska container

I have a requirement where I need to encode a v4l2src source in H.264 while using a Matroska container. If I have .mkv file with embedded subtitles it is easy to extract subtitles with
gst-launch-1.0 filesrc location=test.mkv ! matroskademux ! "text/x-raw" ! filesink location=subtitles
From what I understand, during the encoding process the "subtitle_%u" pad needs to be linked to a text/x-raw source using textoverlay.
gst-launch-1.0 textoverlay text="Video 1" valignment=top halignment=left font-desc="Sans, 60" ! mux. imxv4l2src device=/dev/video0 ! timeoverlay ! videoconvert ! queue ! vpuenc_h264 ! capsfilter
caps="video/x-h264" ! matroskamux name=mux ! filesink location=sub.mkv
I use the above pipeline but I do not get the overlay in the .mkv video. What is the correct way to encode a subtitle/text overlay while encoding a source in H.264 in a matroska container and then also later be able to extract it using the first pipeline?
Sanchayan.
You may try this:
gst-launch-1.0 \
filesrc location=subtitles.srt ! subparse ! kateenc category=SUB ! mux.subtitle_0 \
imxv4l2src device=/dev/video0 ! timeoverlay ! videoconvert ! queue ! vpuenc_h264 ! \
capsfilter caps="video/x-h264" ! matroskamux name=mux ! filesink location=sub.mkv
And the subtitles.srt file may be like this:
1
00:00:00,500 --> 00:00:05,000
CAM 1
2
00:00:05,500 --> 00:00:10,000
That's all folks !

Gstreamer-1.0: mux raw video in a mp4 container

I have a raw video that I can play through gstreamer:
gst-launch-1.0 ... autovideoconvert ! autovideosink
I can encode this video:
gst-launch-1.0 ... ! autovideoconvert ! x264enc ! h264parse ! mp4mux ! filesink location=a.mp4
I would like now to put this raw video in a mp4 container "lossless", without any compression. How can I do that?
You answered it in your question: don't compress.
gst-launch-1.0 ... ! autovideoconvert ! mp4mux ! filesink location=a.mp4
But be aware that without compression this file will be large (gigabytes).
I don't think I can use mp4mux, but qtmux accepts raw UYVY. The following works:
gst-launch-1.0 ... ! autovideoconvert ! "video/x-raw,format=(string)UYVY" ! qtmux ! filesink location=a.mov
Sometimes the source data won't be suitable for re-muxing, but if it is, a pipeline such as this should work:
gst-launch-1.0 filesrc location=... ! [DEMUX] ! h264parse ! qtmux ! filesink location=...
h264 data has different "stream formats" and "alignments" that it can be in. The stream formats are avc, avc3, and byte-stream. The possible alignments are au and nal. Different muxers take different combinations of these. h264parse will make that transformation if necessary.
And to re-iterate, sometimes the source data just won't be re-muxable into the desired container. It depends on a lot of factors.
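As a concrete sketch of the transformation described above: MP4/MOV muxers want avc with au alignment, while raw .h264 files and MPEG-TS want byte-stream. You can request the form you need with a caps filter after h264parse (filenames here are placeholders):

```shell
# Rewrite AVC (length-prefixed) H.264 from an MP4 as an Annex B byte-stream file.
gst-launch-1.0 filesrc location=in.mp4 ! qtdemux ! h264parse ! \
  "video/x-h264,stream-format=byte-stream,alignment=au" ! filesink location=out.h264
```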

How to demux audio and video from rtspsrc and then save to file using matroska mux?

I have been working on an application where I use rtspsrc to gather audio and video from one network camera to another. However, I cannot watch the stream from the camera and thereby can't verify that the stream works as intended. To verify that the stream is correct, I want to record it to an SD card and then play the file on a computer. The problem is that I want the camera to do as much of the parsing, decoding, and depayloading as possible, since that is the purpose of the application.
I therefore have to separate the audio and video streams with a demuxer, do the parsing, decoding, etc., and thereafter mux them back into a Matroska file.
The video decoder has been omitted since it is not done yet for this camera.
Demux to live playback sink (works)
gst-launch-0.10 -v rtspsrc location="rtsp://host:pass#192.168.0.91/XXX/XXXX?resolution=1280x720&audio=1&audiocodec=g711&audiosamplerate=8000&audiobitrate=64000" latency=0 name=d d. ! rtppcmudepay ! mulawdec ! audioresample ! audioconvert ! autoaudiosink d. ! rtph264depay ! ffdec_h264 ! queue ! ffmpegcolorspace ! autovideosink
Multiple rtspsrc to matroska (works)
gst-launch-1.0 -v rtspsrc location="rtsp://host:pass#192.168.0.91/XXX/XXXX?audio=1&audiocodec=g711&audiosamplerate=8000&audiobitrate=64000" latency=0 ! rtppcmudepay ! mulawdec ! audioresample ! audioconvert ! queue ! matroskamux name=mux ! filesink location=/var/spool/storage/SD_DISK/testmovie.mkv rtspsrc location="rtsp://root:pass#192.168.0.91/axis-media/media.amp?resolution=1280x720" latency=0 ! rtph264depay ! h264parse ! mux.
Single rtspsrc to matroska (fails)
gst-launch-1.0 -v rtspsrc location="rtsp://host:pass#192.168.0.91/XXX/XXXX?resolution=1280x720&audio=1&audiocodec=g711&audiosamplerate=8000&audiobitrate=64000" latency=0 name=d d. ! queue ! rtppcmudepay ! mulawdec ! audioresample ! audioconvert ! queue ! matroskamux name=mux d. ! queue ! rtph264depay ! h264parse ! queue ! mux. ! filesink location=/var/spool/storage/SD_DISK/testmoviesinglertsp.mkv
The last example fails with the error message
WARNING: erroneous pipeline: link without source element
Have I misunderstood the usage of matroskamux, and why do the two examples above work but not the last?
The problem is here:
queue ! mux. ! filesink
You need to do
queue ! mux. mux. ! filesink
mux. means that gst-launch should automatically select a pad from mux and link it. You could also specify a pad name manually, like mux.src. So syntactically you are missing another element/pad there to link to filesink.
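Applying that fix to the failing single-rtspsrc pipeline above gives (URL and file paths as in the question):

```shell
gst-launch-1.0 -v rtspsrc location="rtsp://host:pass#192.168.0.91/XXX/XXXX?resolution=1280x720&audio=1&audiocodec=g711&audiosamplerate=8000&audiobitrate=64000" latency=0 name=d \
  d. ! queue ! rtppcmudepay ! mulawdec ! audioresample ! audioconvert ! queue ! matroskamux name=mux \
  d. ! queue ! rtph264depay ! h264parse ! queue ! mux. \
  mux. ! filesink location=/var/spool/storage/SD_DISK/testmoviesinglertsp.mkv
```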