GStreamer: Combine Video and Sound from Different Sources and Broadcast to RTMP

I have googled it all, but I couldn't find a solution to my problem. I would be happy if anyone has had a similar need and resolved it somehow.
I stream to an RTMP server with the following command. It captures video from an HDMI encoder, then crops and rotates it.
gst-launch-1.0 -e v4l2src device=/dev/v4l/by-path/platform-fe801000.csi-video-index0 ! video/x-raw,format=UYVY,framerate=20/1 ! videoconvert ! videoscale ! video/x-raw,width=1280,height=720 ! videocrop top=0 left=0 right=800 bottom=0 ! videoflip method=counterclockwise ! omxh264enc ! h264parse ! flvmux name=mux streamable=true ! rtmpsink sync=true async=true location='rtmp://XXXXX live=true'
and I want to add audio from the microphone attached to the Raspberry Pi. For example, I can record the microphone input to a WAV file with the pipeline below.
gst-launch-1.0 alsasrc num-buffers=1000 device="hw:1,0" ! audio/x-raw,format=S16LE ! wavenc ! filesink location=a.wav
My question is: how can I add audio to my existing command line that streams to the RTMP server? Also, when I capture audio to a file, there is a lot of noise. How can I avoid it?
Thank you

I have combined audio and video, but I still have noise on the audio.
gst-launch-1.0 -e v4l2src device=/dev/v4l/by-path/platform-fe801000.csi-video-index0 ! video/x-raw,format=UYVY,framerate=20/1 ! videoconvert ! videoscale ! video/x-raw,width=1280,height=720 ! videocrop top=0 left=0 right=800 bottom=0 ! videoflip method=counterclockwise ! omxh264enc ! h264parse ! flvmux name=mux streamable=true ! rtmpsink sync=true async=true location='rtmp://XXXXXXXXXXXXXXXX' alsasrc device="hw:1,0" ! queue ! audioconvert ! audioresample ! audio/x-raw,rate=44100 ! queue ! voaacenc bitrate=128000 ! audio/mpeg ! aacparse ! audio/mpeg, mpegversion=4 ! mux.

I have mostly resolved the noise with the following ffmpeg command, but the result is still not great.
"ffmpeg -ar 48000 -ac 1 -f alsa -i hw:1,0 -acodec aac -ab 128k -af 'highpass=f=200, lowpass=f=200' -f flv rtmp://XXXXX.XXXXXXX.XXXXX/LiveApp/"+ str(Id) + "-" + str(deviceId)+"-Audio"

Related

GStreamer connect Audio (usb mic) & Video (camera) to rtspclientsink // error to connect qtmux0 to rtspclientsink0

I'm trying to mux audio & video to rtspclientsink.
My pipeline:
gst-launch-1.0 -v libcamerasrc ! video/x-raw, width=640, height=480, framerate=30/1 ! videoconvert ! videoscale ! clockoverlay time-format="%D %H:%M:%S" ! x264enc speed-preset=ultrafast bitrate=600 key-int-max=40 ! queue ! qtmux0. autoaudiosrc ! voaacenc ! qtmux ! rtspclientsink location=rtsp://localhost:8554/mystream
WARNING: erroneous pipeline: could not link qtmux0 to rtspclientsink0
I'm using the rtsp-simple-server Docker image to handle the RTSP server side.
Could you help me fix my pipeline?
Many thanks for the help.
The video-only pipeline is working:
gst-launch-1.0 libcamerasrc ! video/x-raw, width=640, height=480, framerate=30/1 ! videoconvert ! videoscale ! clockoverlay time-format="%D %H:%M:%S" ! x264enc speed-preset=ultrafast bitrate=600 key-int-max=40 ! queue ! rtspclientsink location=rtsp://localhost:8554/mystream
but it is obviously missing the audio.
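One likely fix, as a sketch rather than a tested answer: rtspclientsink accepts encoded elementary streams directly on its request pads, so the qtmux is unnecessary; also, the failing pipeline links to `qtmux0.` although no element is actually named `qtmux0`. Naming the sink and feeding both encoded branches into it directly should work, assuming the rtsp-simple-server setup from the question:

```shell
# Sketch: drop qtmux and feed both encoded streams straight into a named
# rtspclientsink, which requests one sink pad per stream. Untested; based
# on rtspclientsink's request-pad design.
gst-launch-1.0 rtspclientsink name=s location=rtsp://localhost:8554/mystream \
  libcamerasrc ! video/x-raw,width=640,height=480,framerate=30/1 \
    ! videoconvert ! videoscale ! clockoverlay time-format="%D %H:%M:%S" \
    ! x264enc speed-preset=ultrafast bitrate=600 key-int-max=40 ! queue ! s. \
  autoaudiosrc ! audioconvert ! voaacenc ! queue ! s.
```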

gstreamer help/advice regarding decklinkvideosrc to avi out via a qtmux

I am currently trying to use GStreamer to record a duplicate of my PC output, but I am struggling to find a pipeline that works.
The requirements I am trying to meet are:
decklinkvideosrc & decklinkaudiosrc in
encode to H.264 via my recording machine's built-in GPU (vaapih264enc)
output to an AVI container.
The closest I have come so far is the following pipeline:
GST_DEBUG=3,decklink:5 gst-launch-1.0 -e decklinkvideosrc mode=1080p60 ! queue ! videoconvert ! vaapipostproc ! vaapih264enc tune=low-power ! h264parse ! queue ! mux. qtmux name=mux ! filesink location=/home/user/video_a.avi
However, this results in a video that is green and red only, and the scale is way off.
Any advice or help would be greatly appreciated.
I managed to fix this issue with the following pipeline:
GST_DEBUG=3,decklink:5 gst-launch-1.0 -e decklinkvideosrc ! queue ! videoconvert ! vaapih264enc ! h264parse ! queue ! mux. decklinkaudiosrc ! queue ! audioconvert ! lamemp3enc ! mux. qtmux name=mux ! filesink location=test.avi
It seems that vaapih264enc does not support the I420 format when using the free drivers that the kernel pre-installs. Therefore, you can fix this issue with the pipeline below, which converts I420 to NV12.
GST_DEBUG=3,decklink:5 gst-launch-1.0 -e decklinkvideosrc mode=1080p60 ! queue ! videoconvert ! video/x-raw,format=NV12 ! vaapih264enc tune=low-power ! h264parse ! queue ! mux. qtmux name=mux ! filesink location=test.avi
You can also fix this by installing the non-free VA drivers with:
sudo apt-get install intel-media-va-driver-non-free
Then run the command below to check that they have been installed correctly. The non-free driver unlocks high-power mode for vaapih264enc and adds support for the I420 format.
sudo vainfo
If you did install the non-free driver, then the following pipeline should work for you, with vaapih264enc running in high-power mode.
GST_DEBUG=3,decklink:5 gst-launch-1.0 -e decklinkvideosrc mode=1080p60 ! queue ! videoconvert ! vaapih264enc ! h264parse ! queue ! mux. decklinkaudiosrc ! queue ! audioconvert ! lamemp3enc ! mux. qtmux name=mux ! filesink location=test.avi

capture segmented audio and video with gstreamer

I'm trying to record audio and video from the internal webcam and mic to segmented files with GStreamer.
It works for a single file by doing:
gst-launch-1.0 -e avfvideosrc !
video/x-raw ! vtenc_h264 ! h264parse ! queue !
mpegtsmux name=mux ! filesink location=test.mp4 osxaudiosrc !
decodebin ! audioconvert ! faac ! aacparse ! queue ! mux.
It doesn't work when doing:
gst-launch-1.0 -e avfvideosrc !
video/x-raw ! vtenc_h264 ! h264parse ! queue !
splitmuxsink
muxer=mpegtsmux
location=test%04d.mp4
max-size-time=1000000000
name=mux osxaudiosrc !
decodebin ! audioconvert ! faac ! aacparse ! queue ! mux.
saying erroneous pipeline: could not link queue1 to mux
I'm using GStreamer 1.12.3 on macOS Sierra.
Note: The H264/AAC encoding isn't necessary for what I want to achieve, so if there are solutions that only work with e.g. avimux, for whatever reason, that's fine.
EDIT: I've tried this on a Windows machine and get the same error.
gst-launch-1.0 -ev ksvideosrc ! video/x-raw !
videoconvert ! queue !
splitmuxsink max-size-time=1000000000 muxer=avimux name=mux
location=video%04d.avi autoaudiosrc !
decodebin ! audioconvert ! queue ! mux.
Just like on Mac, replacing splitmuxsink with avimux ! filesink works. I'm sure I'm just missing some 'pipeline' logic, so any clarification that could push me in the right direction would be helpful.
I needed to send the audio stream to the audio track of the muxer like so: mux.audio_0
gst-launch-1.0 -ev ksvideosrc ! video/x-raw !
videoconvert ! queue !
splitmuxsink max-size-time=1000000000 muxer=avimux name=mux
location=video%04d.avi autoaudiosrc !
decodebin ! audioconvert ! queue ! mux.audio_0
This is what happens when the documentation is clear but you're missing some basic knowledge of how to interpret it.

Adding subtitle while doing H.264 encoding into a Matroska container

I have a requirement where I need to encode a v4l2src source in H.264 in a Matroska container. If I have a .mkv file with embedded subtitles, it is easy to extract them with
gst-launch-1.0 filesrc location=test.mkv ! matroskademux ! "text/x-raw" ! filesink location=subtitles
From what I understand, and assuming I understand correctly, during the encoding process the "subtitle_%u" pad needs to be linked to a text/x-raw source using textoverlay.
gst-launch-1.0 textoverlay text="Video 1" valignment=top halignment=left font-desc="Sans, 60" ! mux. imxv4l2src device=/dev/video0 ! timeoverlay ! videoconvert ! queue ! vpuenc_h264 ! capsfilter caps="video/x-h264" ! matroskamux name=mux ! filesink location=sub.mkv
I use the above pipeline, but I do not get the overlay in the .mkv video. What is the correct way to encode a subtitle/text overlay while encoding a source to H.264 in a Matroska container, so that it can later also be extracted with the first pipeline?
Sanchayan.
You may try this:
gst-launch-1.0 \
filesrc location=subtitles.srt ! subparse ! kateenc category=SUB ! mux.subtitle_0 \
imxv4l2src device=/dev/video0 ! timeoverlay ! videoconvert ! queue ! vpuenc_h264 ! \
capsfilter caps="video/x-h264" ! matroskamux name=mux ! filesink location=sub.mkv
And the subtitles.srt file may look like this:
1
00:00:00,500 --> 00:00:05,000
CAM 1
2
00:00:05,500 --> 00:00:10,000
That's all folks !

Gstreamer pipeline for converting files with optional audio/video

I am using the following pipeline to convert an FLV file to MP4.
gst-launch-1.0 -vvv -e filesrc location="c.flv" ! flvdemux name=demux \
demux.audio ! queue ! decodebin ! audioconvert ! faac bitrate=32000 ! mux. \
demux.video ! queue ! decodebin ! videoconvert ! video/x-raw,format=I420 ! x264enc speed-preset=superfast tune=zerolatency psy-tune=grain sync-lookahead=5 bitrate=480 key-int-max=50 ref=2 ! mux. \
mp4mux name=mux ! filesink location="c.mp4"
The problem is that when (for example) the audio is missing, the pipeline gets stuck. (The same thing happens if I just hook a fakesink to demux.audio.)
I need a way for the pipeline to ignore missing tracks, or to produce empty tracks.
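One workaround, sketched under stated assumptions: probe the file first with gst-discoverer-1.0 (from the gst-plugins-base tools) and only build the audio branch when an audio stream is actually reported. The `has_audio` and `convert` helpers below are illustrative names, not from the question, and the encoder settings are trimmed for brevity:

```shell
#!/bin/sh
# Sketch: choose a pipeline based on whether the input actually has audio.
# Assumes gst-discoverer-1.0 is installed; the helper just scans the
# discoverer report for an audio stream line.

has_audio() {
  # reads a gst-discoverer-1.0 report on stdin; succeeds if it mentions audio
  grep -qi "audio"
}

convert() {
  src="$1"; dst="$2"
  if gst-discoverer-1.0 "$src" | has_audio; then
    gst-launch-1.0 -e filesrc location="$src" ! flvdemux name=demux \
      demux.audio ! queue ! decodebin ! audioconvert ! faac bitrate=32000 ! mux. \
      demux.video ! queue ! decodebin ! videoconvert ! x264enc ! mux. \
      mp4mux name=mux ! filesink location="$dst"
  else
    gst-launch-1.0 -e filesrc location="$src" ! flvdemux name=demux \
      demux.video ! queue ! decodebin ! videoconvert ! x264enc ! mux. \
      mp4mux name=mux ! filesink location="$dst"
  fi
}
```

Calling `convert c.flv c.mp4` would then pick the matching variant, at the cost of reading the file twice.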