Interlaced MPEG-2 video with avenc_mpeg2video in GStreamer

The following GStreamer pipeline generates an MPEG-2 transport stream with an MPEG-2 video track:
gst-launch-1.0 videotestsrc pattern=ball num-buffers=900 ! "video/x-raw, format=(string)I420, \
width=(int)704, height=(int)576, framerate=(fraction)25/1" ! avenc_mpeg2video ! \
mpegtsmux ! filesink location=test.ts
I need the video to be interlaced, but when I add the interlace element, I get:
gst-launch-1.0 videotestsrc pattern=ball num-buffers=900 ! "video/x-raw, format=(string)I420, \
width=(int)704, height=(int)576, framerate=(fraction)25/1" ! interlace ! avenc_mpeg2video ! \
mpegtsmux name=mux ! filesink location=test.ts
WARNING: erroneous pipeline: could not link interlace0 to avenc_mpeg2video0
This is odd to me, as both elements seem to support e.g. I420 on their pads. I've tried adding various capsfilters between interlace and avenc_mpeg2video, but to no avail; it just fails to link in other ways. I believe the two should be compatible. Can someone explain why the above fails, and maybe even show what a working pipeline should look like?

Try setting the alternate-scan property of avenc_mpeg2video to 1. There is no need to use the interlace element. This works for me with GStreamer 1.19:
gst-launch-1.0 videotestsrc pattern=ball num-buffers=900 ! "video/x-raw, format=(string)I420, \
width=(int)704, height=(int)576, framerate=(fraction)25/1" ! avenc_mpeg2video alternate-scan=1 ! \
mpegtsmux ! filesink location=test.ts
You can check the properties of avenc_mpeg2video with this command:
gst-inspect-1.0 avenc_mpeg2video
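If you only care about the one property, filtering the inspect output narrows it down (assuming a shell with grep available):
gst-inspect-1.0 avenc_mpeg2video | grep -A2 alternate-scan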

Related

GStreamer: get video from camera, resize it, and copy it into two sinks

I have one camera and I would like to clone it to be able to use it in two different apps.
The following two things work fine on their own, but I'm not able to combine them:
Read from /dev/video0 and clone to /dev/video1 and /dev/video2
gst-launch-1.0 v4l2src name=vsrc device=/dev/video0 ! \
video/x-raw,width=1920,height=1080,framerate=60/1,format=RGB ! \
tee name=t ! queue ! v4l2sink device=/dev/video1 t. ! \
queue ! v4l2sink device=/dev/video2
Read from /dev/video0, rescale it, and output to /dev/video1
gst-launch-1.0 v4l2src name=vsrc device=/dev/video0 ! \
video/x-raw,width=1920,height=1080,framerate=60/1,format=RGB ! \
videoscale ! video/x-raw,width=178,height=100 ! videoconvert ! \
v4l2sink device=/dev/video1
But the following (read -> rescale -> clone) does not work:
gst-launch-1.0 v4l2src name=vsrc device=/dev/video0 ! \
video/x-raw,width=1920,height=1080,framerate=60/1,format=RGB ! \
videoscale ! video/x-raw,width=178,height=100 ! videoconvert ! \
tee name=t ! queue ! v4l2sink device=/dev/video1 t. ! \
queue ! v4l2sink device=/dev/video2
It fails with the following error:
ERROR: from element /GstPipeline:pipeline0/GstVideoScale:videoscale0: Failed to configure the buffer pool
Additional debug info:
gstbasetransform.c(904): gst_base_transform_default_decide_allocation (): /GstPipeline:pipeline0/GstVideoScale:videoscale0:
Configuration is most likely invalid, please report this issue.
Thanks!
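One workaround worth trying (an untested sketch of my own, not from the thread; it assumes the shared buffer pool negotiated between videoscale and the sinks is the culprit) is to move the tee before videoconvert and give each branch its own queue and videoconvert:
gst-launch-1.0 v4l2src name=vsrc device=/dev/video0 ! \
video/x-raw,width=1920,height=1080,framerate=60/1,format=RGB ! \
videoscale ! video/x-raw,width=178,height=100 ! tee name=t \
t. ! queue ! videoconvert ! v4l2sink device=/dev/video1 \
t. ! queue ! videoconvert ! v4l2sink device=/dev/video2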

Enable hardware acceleration on NVIDIA Jetsons for image-to-video encoding

I want to encode jpg/png images into h264/h265 mp4 video (h265 is preferred if possible).
I tried using the commands from this question:
How to create a mp4 video file from PNG images using Gstreamer
I got an mp4 video out with this command:
gst-launch-1.0 -e multifilesrc location="IMG%03d.png" index=1 caps="image/png,framerate=30/1" ! pngdec ! videoconvert ! omxh265enc ! qtmux ! filesink location=image2.mp4
or
gst-launch-1.0 -e multifilesrc location="IMG%03d.png" index=1 caps="image/png,framerate=30/1" ! pngdec ! videoconvert ! queue ! x264enc ! queue ! mp4mux ! filesink location=image3.mp4
However, according to the Accelerated_GStreamer_User_Guide docs, we can have hardware acceleration with:
H.265 Encode (NVIDIA Accelerated Encode)
gst-launch-1.0 nvarguscamerasrc ! \
'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, \
format=(string)NV12, framerate=(fraction)30/1' ! nvv4l2h265enc \
bitrate=8000000 ! h265parse ! qtmux ! filesink \
location=<filename_h265.mp4> -e
I changed it a little to take images as input:
gst-launch-1.0 multifilesrc location="IMG%03d.png" index=1 caps="image/png,framerate=30/1" ! pngdec ! videoconvert ! queue ! nvv4l2h265enc bitrate=8000000 ! h265parse ! qtmux ! filesink location=output.mp4 -e
However I get the error:
WARNING: erroneous pipeline: could not link queue0 to nvv4l2h264enc0
According to the docs, the nvv4l2h265enc encoder should be available in GStreamer version 1.0.
What am I doing wrong?
NVIDIA's devtalk forum is the best place for these sorts of questions, but multifilesrc probably puts images in normal CPU memory, not in the GPU NvBuffers that the nvv4l2h265enc element expects. Furthermore, the encoder only seems to work with NV12-formatted YCbCr data, while the decoded PNG frames are probably RGB.
The nvvidconv element converts between the "CPU" parts and the "NVIDIA accelerated" parts by moving the data to GPU memory and converting the color space to NV12.
This launch string worked for me:
gst-launch-1.0 \
multifilesrc location="IMG%03d.png" index=1 caps="image/png,framerate=30/1" ! pngdec \
! nvvidconv \
! 'video/x-raw(memory:NVMM), format=(string)NV12' \
! queue \
! nvv4l2h265enc bitrate=8000000 \
! h265parse \
! qtmux \
! filesink location=output.mp4 -e
The caps string after nvvidconv isn't actually necessary (I also ran successfully without it). nvv4l2h265enc also provides caps, and nvvidconv knows how to change what needs to be changed (color space and memory type). I added it for illustration purposes, to show what is actually going on.
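For reference, the variant without the explicit caps, which also worked for me, is simply:
gst-launch-1.0 \
multifilesrc location="IMG%03d.png" index=1 caps="image/png,framerate=30/1" ! pngdec \
! nvvidconv \
! queue \
! nvv4l2h265enc bitrate=8000000 \
! h265parse \
! qtmux \
! filesink location=output.mp4 -e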
I hope this helps!

GStreamer: demux & remux MKV, preserving video

I am trying to re-encode the audio part of an MKV file that contains some video/x-h264 and some audio/x-raw. I can't manage to just demux the MKV and remux it. Even simply:
gst-launch-1.0 filesrc location=test.mkv ! matroskademux name=demux \
matroskamux name=mux ! filesink location=test2.mkv \
demux.video_00 ! mux.video_00 \
demux.audio_00 ! mux.audio_00
fails miserably with:
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
WARNING: from element /GstPipeline:pipeline0/GstMatroskaDemux:demux: Delayed linking failed.
Additional debug info:
../gstreamer/gst/parse/grammar.y(506): gst_parse_no_more_pads (): /GstPipeline:pipeline0/GstMatroskaDemux:demux:
failed delayed linking pad video_00 of GstMatroskaDemux named demux to pad video_00 of GstMatroskaMux named mux
WARNING: from element /GstPipeline:pipeline0/GstMatroskaDemux:demux: Delayed linking failed.
Additional debug info:
../gstreamer/gst/parse/grammar.y(506): gst_parse_no_more_pads (): /GstPipeline:pipeline0/GstMatroskaDemux:demux:
failed delayed linking pad audio_00 of GstMatroskaDemux named demux to pad audio_00 of GstMatroskaMux named mux
ERROR: from element /GstPipeline:pipeline0/GstMatroskaDemux:demux: Internal data stream error.
Additional debug info:
../gst-plugins-good/gst/matroska/matroska-demux.c(5715): gst_matroska_demux_loop (): /GstPipeline:pipeline0/GstMatroskaDemux:demux:
streaming stopped, reason not-linked (-1)
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
Freeing pipeline ...
My best attempt at the transcoding mentioned above goes:
gst-launch-1.0 -v filesrc location=test.mkv ! matroskademux name=demux \
matroskamux name=mux ! filesink location=test2.mkv \
demux.video_00 ! queue ! 'video/x-h264' ! h264parse ! mux. \
demux.audio_00 ! queue ! rawaudioparse ! audioconvert ! audioresample ! avenc_aac ! mux.
with the same result. Removing the pad name audio_00 leaves gst-launch stuck at PREROLLING.
I have seen a few people facing similar problems:
http://gstreamer-devel.966125.n4.nabble.com/Putting-h264-file-inside-a-container-td4668158.html
http://gstreamer-devel.966125.n4.nabble.com/Changing-the-container-format-td3576914.html
As reported there, keeping only video or only audio works.
I think the rawaudioparse should not be there. I tried your pipeline and had trouble with it too. I just put something together as I would have done it, and it seemed to work:
filesrc location=test.mkv ! matroskademux \
matroskademux0. ! queue ! audioconvert ! avenc_aac ! matroskamux ! filesink location=test2.mkv \
matroskademux0. ! queue ! h264parse ! matroskamux0.
Audio in my case was:
Stream #0:0(eng): Audio: pcm_f32le, 44100 Hz, 2 channels, flt, 2822 kb/s (default)
Another format may require additional transformations.
The problem is that the pads video_00 and audio_00 have been renamed to video_0 and audio_0. This can be seen with gst-inspect-1.0 matroskademux, which shows that the pad template format now reads video_%u. Note that some GStreamer documentation pages have not been updated to reflect that.
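For example, filtering the inspect output for the pad templates (assuming a shell with grep) should show the new %u-style names such as video_%u and audio_%u:
gst-inspect-1.0 matroskademux | grep template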
The first command, MKV to MKV should read:
gst-launch-1.0 filesrc location=test.mkv ! matroskademux name=demux \
matroskamux name=mux ! filesink location=test2.mkv \
demux.video_0 ! queue ! mux.video_0 \
demux.audio_0 ! queue ! mux.audio_0
(Note the added queues)
The second command, MKV to MKV reencoding audio should read:
gst-launch-1.0 -v filesrc location=test.mkv ! matroskademux name=demux \
matroskamux name=mux ! filesink location=test2.mkv \
demux.video_0 ! queue ! 'video/x-h264' ! h264parse ! mux. \
demux.audio_0 ! queue ! rawaudioparse ! audioconvert ! audioresample ! avenc_aac ! mux.
The same result could have been achieved by not specifying the pads and using capsfilters where needed, as sketched below.
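An untested sketch of that pad-name-free approach (the capsfilters let gst-launch's delayed linking pick the matching demuxer pad):
gst-launch-1.0 -v filesrc location=test.mkv ! matroskademux name=demux \
matroskamux name=mux ! filesink location=test2.mkv \
demux. ! queue ! 'video/x-h264' ! h264parse ! mux. \
demux. ! queue ! 'audio/x-raw' ! rawaudioparse ! audioconvert ! audioresample ! avenc_aac ! mux.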
Thanks go to user Florian Zwoch for providing a working pipeline.

Adding subtitles while doing H.264 encoding into a Matroska container

I have a requirement where I need to encode a v4l2src source in H.264 while using a Matroska container. If I have a .mkv file with embedded subtitles, it is easy to extract the subtitles with
gst-launch-1.0 filesrc location=test.mkv ! matroskademux ! "text/x-raw" ! filesink location=subtitles
From what I understand (assuming I understand correctly), during the encoding process the "subtitle_%u" pad needs to be linked to a text/x-raw source using textoverlay.
gst-launch-1.0 textoverlay text="Video 1" valignment=top halignment=left font-desc="Sans, 60" ! mux. \
imxv4l2src device=/dev/video0 ! timeoverlay ! videoconvert ! queue ! vpuenc_h264 ! \
capsfilter caps="video/x-h264" ! matroskamux name=mux ! filesink location=sub.mkv
I use the above pipeline but I do not get the overlay in the .mkv video. What is the correct way to encode a subtitle/text overlay while encoding a source in H.264 into a Matroska container, so that it can later be extracted with the first pipeline?
You may try this:
gst-launch-1.0 \
filesrc location=subtitles.srt ! subparse ! kateenc category=SUB ! mux.subtitle_0 \
imxv4l2src device=/dev/video0 ! timeoverlay ! videoconvert ! queue ! vpuenc_h264 ! \
capsfilter caps="video/x-h264" ! matroskamux name=mux ! filesink location=sub.mkv
And the subtitles.srt file could look like this:
1
00:00:00,500 --> 00:00:05,000
CAM 1
2
00:00:05,500 --> 00:00:10,000
That's all folks !
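To verify that the subtitle track really ended up in the container, gst-discoverer-1.0 (shipped with the standard GStreamer tools) can list the streams of the muxed file:
gst-discoverer-1.0 sub.mkv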

Gstreamer pipeline for converting files with optional audio/video

I am using the following pipeline to convert an flv file to mp4.
gst-launch-1.0 -vvv -e filesrc location="c.flv" ! flvdemux name=demux \
demux.audio ! queue ! decodebin ! audioconvert ! faac bitrate=32000 ! mux. \
demux.video ! queue ! decodebin ! videoconvert ! video/x-raw,format=I420 ! x264enc speed-preset=superfast tune=zerolatency psy-tune=grain sync-lookahead=5 bitrate=480 key-int-max=50 ref=2 ! mux. \
mp4mux name=mux ! filesink location="c.mp4"
The problem is that when (for example) audio is missing, the pipeline gets stuck. (The same thing happens when just hooking a fakesink up to demux.audio.)
I need a way for the filters to ignore missing tracks, or produce empty tracks.
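A shell-level workaround sketch (my own suggestion, not from the thread; it assumes gst-discoverer-1.0 prints an "audio:" stream line for files that contain audio): probe the file first and pick the pipeline accordingly:
if gst-discoverer-1.0 c.flv | grep -q "audio:"; then
    echo "audio present: run the full pipeline above"
else
    echo "no audio: run a video-only variant without the demux.audio branch"
fi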