I want to encode JPG/PNG images into an H.264/H.265 MP4 video (H.265 preferred if possible).
I tried the commands from this question:
How to create a mp4 video file from PNG images using Gstreamer
I got an mp4 video out with this command:
gst-launch-1.0 -e multifilesrc location="IMG%03d.png" index=1 caps="image/png,framerate=30/1" ! pngdec ! videoconvert ! omxh265enc ! qtmux ! filesink location=image2.mp4
or
gst-launch-1.0 -e multifilesrc location="IMG%03d.png" index=1 caps="image/png,framerate=30/1" ! pngdec ! videoconvert ! queue ! x264enc ! queue ! mp4mux ! filesink location=image3.mp4
However, according to the docs:
Accelerated_GStreamer_User_Guide
We can have hardware acceleration with:
H.265 Encode (NVIDIA Accelerated Encode)
gst-launch-1.0 nvarguscamerasrc ! \
'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, \
format=(string)NV12, framerate=(fraction)30/1' ! nvv4l2h265enc \
bitrate=8000000 ! h265parse ! qtmux ! filesink \
location=<filename_h265.mp4> -e
I changed it a little to take images as input:
gst-launch-1.0 multifilesrc location="IMG%03d.png" index=1 caps="image/png,framerate=30/1" ! pngdec ! videoconvert ! queue ! nvv4l2h265enc bitrate=8000000 ! h265parse ! qtmux ! filesink location=output.mp4 -e
However I get the error:
WARNING: erroneous pipeline: could not link queue0 to nvv4l2h265enc0
According to the docs, the nvv4l2h265enc encoder should be available in GStreamer version 1.0.
What am I doing wrong?
NVIDIA's devtalk forum is the best place for these sorts of questions, but multifilesrc probably puts the images in normal CPU memory, not in the GPU NvBuffers that the nvv4l2h265enc element expects. Furthermore, the encoder only seems to accept NV12-formatted YCbCr data, while the raw frames coming out of pngdec are RGB.
The nvvidconv element bridges the "CPU" parts and the "NVIDIA accelerated" parts of the pipeline by moving the data into NVMM (GPU) memory and converting the color space to NV12.
This launch string worked for me:
gst-launch-1.0 \
multifilesrc location="IMG%03d.png" index=1 caps="image/png,framerate=30/1" ! pngdec \
! nvvidconv \
! 'video/x-raw(memory:NVMM), format=(string)NV12' \
! queue \
! nvv4l2h265enc bitrate=8000000 \
! h265parse \
! qtmux \
! filesink location=output.mp4 -e
The caps string after nvvidconv isn't actually necessary (I also ran the pipeline successfully without it): nvv4l2h265enc advertises its own caps, and nvvidconv works out what needs to change (color space and memory type). I added it for illustration purposes, to show what is actually going on.
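For reference, the same launch string with the explicit caps dropped (which, as mentioned, also worked for me) is simply:
gst-launch-1.0 \
multifilesrc location="IMG%03d.png" index=1 caps="image/png,framerate=30/1" ! pngdec \
! nvvidconv \
! queue \
! nvv4l2h265enc bitrate=8000000 \
! h265parse \
! qtmux \
! filesink location=output.mp4 -e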
I hope this helps!
Related
I have googled it all but couldn't find a solution to my problem. I would be happy to hear from anyone who had a similar need and resolved it somehow.
I stream to an RTMP server with the following command. It captures video from an HDMI encoder, then crops and rotates the video.
gst-launch-1.0 -e v4l2src device=/dev/v4l/by-path/platform-fe801000.csi-video-index0 ! video/x-raw,format=UYVY,framerate=20/1 ! videoconvert ! videoscale ! video/x-raw, width=1280,height=720 ! videocrop top=0 left=0 right=800 bottom=0 ! videoflip method=counterclockwise ! omxh264enc ! h264parse ! flvmux name=mux streamable=true ! rtmpsink sync=true async=true location='rtmp://XXXXX live=true'
I want to add audio from the microphone attached to the Raspberry Pi. For example, I can record the microphone input to a wav file with the pipeline below.
gst-launch-1.0 alsasrc num-buffers=1000 device="hw:1,0" ! audio/x-raw,format=S16LE ! wavenc ! filesink location = a.wav
My question is: how can I add audio to my existing command line which streams to the RTMP server? Also, when I capture audio to a file there is a lot of noise. How can I avoid it?
Thank you
I have combined audio and video, but I still have noise on the audio.
gst-launch-1.0 -e v4l2src device=/dev/v4l/by-path/platform-fe801000.csi-video-index0 ! video/x-raw,format=UYVY,framerate=20/1 ! videoconvert ! videoscale ! video/x-raw, width=1280,height=720 ! videocrop top=0 left=0 right=800 bottom=0 ! videoflip method=counterclockwise ! omxh264enc ! h264parse ! flvmux name=mux streamable=true ! rtmpsink sync=true async=true location='rtmp://XXXXXXXXXXXXXXXX' alsasrc device="hw:1,0" ! queue ! audioconvert ! audioresample ! audio/x-raw,rate=44100 ! queue ! voaacenc bitrate=128000 ! audio/mpeg ! aacparse ! audio/mpeg, mpegversion=4 ! mux.
I have partially resolved the noise with the following command, but it is still not great.
"ffmpeg -ar 48000 -ac 1 -f alsa -i hw:1,0 -acodec aac -ab 128k -af 'highpass=f=200, lowpass=f=200' -f flv rtmp://XXXXX.XXXXXXX.XXXXX/LiveApp/"+ str(Id) + "-" + str(deviceId)+"-Audio"
The following GStreamer pipeline generates an MPEG-TS file with an MPEG-2 video track.
gst-launch-1.0 videotestsrc pattern=ball num-buffers=900 ! "video/x-raw, format=(string)I420, \
width=(int)704, height=(int)576, framerate=(fraction)25/1" ! avenc_mpeg2video ! \
mpegtsmux ! filesink location=test.ts
I need the video to be interlaced, but when I add the interlace element, I get:
gst-launch-1.0 videotestsrc pattern=ball num-buffers=900 ! "video/x-raw, format=(string)I420, \
width=(int)704, height=(int)576, framerate=(fraction)25/1" ! interlace ! avenc_mpeg2video ! \
mpegtsmux name=mux ! filesink location=test.ts
WARNING: erroneous pipeline: could not link interlace0 to avenc_mpeg2video0
This is odd to me, as both elements seem to support e.g. I420 on their pads. I've tried adding various capsfilters between interlace and avenc_mpeg2video, but to no avail; it just fails to link in other ways. I believe the two should be compatible. Can someone explain why the above fails, and maybe even show what a working pipeline should look like?
Try setting the alternate-scan property of avenc_mpeg2video to 1; there is no need to use the interlace element. This works for me with GStreamer 1.19:
gst-launch-1.0 videotestsrc pattern=ball num-buffers=900 ! "video/x-raw, format=(string)I420, \
width=(int)704, height=(int)576, framerate=(fraction)25/1" ! avenc_mpeg2video alternate-scan=1 ! \
mpegtsmux ! filesink location=test.ts
You can check the properties of avenc_mpeg2video with this command:
gst-inspect-1.0 avenc_mpeg2video
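If you want to see why the original link is refused, gst-inspect-1.0 also prints each element's pad templates; compare what interlace offers on its src pad with what avenc_mpeg2video accepts on its sink pad (two pads only link if their caps intersect, including the interlace-mode field):
gst-inspect-1.0 interlace
gst-inspect-1.0 avenc_mpeg2video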
I am trying to write a GStreamer pipeline that captures the screen, overlays a box in the corner showing the webcam, and records audio (all at the same time).
If I hit Ctrl+C to stop after, say, ten seconds, I find that only about 2 seconds of video (and audio) were recorded. I don't care whether the recording happens in real time; I just want GStreamer to record the full length it should.
This is the pipeline I have so far:
gst-launch-1.0 --gst-debug=3 ximagesrc use-damage=0 \
! video/x-raw,width=1366,height=768,framerate=30/1 ! videoconvert \
! videomixer name=mix sink_0::alpha=1 sink_1::alpha=1 sink_1::xpos=1046 sink_1::ypos=528 \
! videoconvert ! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 \
! vp8enc ! webmmux name=mux ! filesink location="out.webm" \
pulsesrc ! audioconvert ! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 ! vorbisenc ! mux. \
v4l2src do-timestamp=true ! video/x-raw,width=320,height=240,framerate=30/1 ! mix.
I hope someone has a solution. Thank you.
I have a requirement where I need to encode a v4l2src source as H.264 in a Matroska container. If I have an .mkv file with embedded subtitles, it is easy to extract them with:
gst-launch-1.0 filesrc location=test.mkv ! matroskademux ! "text/x-raw" ! filesink location=subtitles
From what I understand (assuming I understand correctly), during encoding the "subtitle_%u" pad needs to be linked to a text/x-raw source, so I tried textoverlay:
gst-launch-1.0 textoverlay text="Video 1" valignment=top halignment=left font-desc="Sans, 60" ! mux. \
imxv4l2src device=/dev/video0 ! timeoverlay ! videoconvert ! queue ! vpuenc_h264 ! \
capsfilter caps="video/x-h264" ! matroskamux name=mux ! filesink location=sub.mkv
I use the above pipeline but I do not get the overlay in the .mkv video. What is the correct way to encode a subtitle/text overlay while encoding a source in H.264 in a Matroska container, so that I can later extract it with the first pipeline?
Sanchayan.
You may try this:
gst-launch-1.0 \
filesrc location=subtitles.srt ! subparse ! kateenc category=SUB ! mux.subtitle_0 \
imxv4l2src device=/dev/video0 ! timeoverlay ! videoconvert ! queue ! vpuenc_h264 ! \
capsfilter caps="video/x-h264" ! matroskamux name=mux ! filesink location=sub.mkv
And the subtitles.srt file might look like this:
1
00:00:00,500 --> 00:00:05,000
CAM 1
2
00:00:05,500 --> 00:00:10,000
That's all folks !
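One caveat, worth verifying with gst-inspect-1.0 on your platform: kateenc wraps the text in a Kate stream, so the subtitle track matroskamux stores will carry subtitle/x-kate caps rather than text/x-raw. Extracting it later would then need a Kate decoder instead of the plain caps filter from your first pipeline, along these lines (untested sketch):
gst-launch-1.0 filesrc location=sub.mkv ! matroskademux ! katedec ! "text/x-raw" ! filesink location=subtitles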
I'd like to record a .webm file alongside my main .mkv file and serve that .webm file to a video element on an HTML page (a kind of simple streaming, just to see what is being recorded).
I'm using the pipeline below (with a tee for this purpose) to record from my webcam:
gst-launch-1.0 v4l2src device=/dev/video1 ! tee name=t t. \
! image/jpeg,width=1920,height=1080 ! capssetter \
caps='image/jpeg,width=1920,height=1080,framerate=30/1' ! queue \
! matroskamux name=mux pulsesrc device="alsa_input.usb-046d_Logitech_Webcam_C930e_AAF8A63E-02-C930e.analog-stereo" \
! 'audio/x-raw,channels=1,rate=44100' ! audioconvert ! vorbisenc ! queue \
! mux. mux. ! filesink location=/home/sina/Desktop/Recordings/Webcam.mkv \
t. ! queue ! (...pipeline?...) ! filesink location=/home/sina/Desktop/Recordings/TestWebcam.webm
How should I fill in the pipeline on the last line? (What structure? Which encoder? Which muxer? ...)
While it is still possible to convert a stream of JPEG pictures to .webm with a VP8 stream inside, it is an expensive operation and the results will not be pretty: the encode→decode→encode sequence degrades the output badly (and uses more CPU).
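If you really do need to keep the JPEG stream in the .mkv file, the placeholder branch in your pipeline would have to decode and re-encode, roughly like this (only a sketch of that branch, untested):
t. ! queue ! jpegdec ! videoconvert ! vp8enc ! webmmux \
! filesink location=/home/sina/Desktop/Recordings/TestWebcam.webm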
If you don't need JPEGs and don't care about the video format inside the .mkv file, the easiest solution is to use a single VP8 encoder (both .mkv and .webm can contain VP8) and split the encoded stream:
gst-launch-1.0 -e \
v4l2src ! vp8enc ! tee name=t ! \
queue ! matroskamux name=m ! filesink location=1.mkv \
pulsesrc ! vorbisenc ! m. \
t. ! \
queue ! webmmux ! filesink location=1.webm
Also, make sure you use the -e option to force EOS when you terminate the command with Ctrl+C.
The GStreamer WebM muxer is a very thin layer over the Matroska muxer: .webm is almost identical to .mkv.