I am currently trying to use GStreamer to record a duplicate of my PC output, but I am struggling to find a pipeline that works.
The requirements I am trying to meet are:
DeckLink video & audio in (decklinkvideosrc / decklinkaudiosrc)
encode into H.264 via my recording machine's built-in GPU (vaapih264enc)
output to an AVI container.
The closest I have come so far is the following pipeline:
GST_DEBUG=3,decklink:5 gst-launch-1.0 -e decklinkvideosrc mode=1080p60 ! queue ! videoconvert ! vaapipostproc ! vaapih264enc tune=low-power ! h264parse ! queue ! mux. qtmux name=mux ! filesink location=/home/user/video_a.avi
However, this results in a video which is green and red only, and the scale is way off. [Screenshot: output video frame]
Any advice or help would be greatly appreciated.
I managed to fix this issue with the following pipeline:
GST_DEBUG=3,decklink:5 gst-launch-1.0 -e decklinkvideosrc ! queue ! videoconvert ! vaapih264enc ! h264parse ! queue ! mux. decklinkaudiosrc ! queue ! audioconvert ! lamemp3enc ! mux. qtmux name=mux ! filesink location=test.avi
It seems that vaapih264enc does not support the I420 format when using the free drivers that come pre-installed with the system. You can therefore fix the issue with the pipeline below, which converts the I420 output into NV12 before the encoder.
GST_DEBUG=3,decklink:5 gst-launch-1.0 -e decklinkvideosrc mode=1080p60 ! queue ! videoconvert ! video/x-raw,format=NV12 ! vaapih264enc tune=low-power ! h264parse ! queue ! mux. qtmux name=mux ! filesink location=test.avi
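If you also need the DeckLink audio in the same file, the NV12 conversion should combine with the audio branch from the earlier pipeline; a minimal sketch (untested), using the same MP3 encoder and muxer as above:
GST_DEBUG=3,decklink:5 gst-launch-1.0 -e decklinkvideosrc mode=1080p60 ! queue ! videoconvert ! video/x-raw,format=NV12 ! vaapih264enc tune=low-power ! h264parse ! queue ! mux. decklinkaudiosrc ! queue ! audioconvert ! lamemp3enc ! mux. qtmux name=mux ! filesink location=test.avi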
You can also fix this by installing the non-free VA drivers with:
sudo apt-get install intel-media-va-driver-non-free
Then run the command below to check that they have been installed correctly. The non-free driver unlocks high-power mode for vaapih264enc as well as support for the I420 format.
sudo vainfo
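To quickly confirm that H.264 encoding is actually exposed, you can filter the vainfo output for the H.264 profiles (on a typical VA-API setup the encode entrypoints appear as VAEntrypointEncSlice, or VAEntrypointEncSliceLP for low-power):
sudo vainfo | grep -i h264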
If you did install the non-free driver, the following pipeline should work for you, with vaapih264enc running in high-power mode.
GST_DEBUG=3,decklink:5 gst-launch-1.0 -e decklinkvideosrc mode=1080p60 ! queue ! videoconvert ! vaapih264enc ! h264parse ! queue ! mux. decklinkaudiosrc ! queue ! audioconvert ! lamemp3enc ! mux. qtmux name=mux ! filesink location=test.avi
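Note that qtmux writes a QuickTime/MP4-style container even when the file is named .avi. If a genuine AVI container is a hard requirement, swapping in avimux (which can accept H.264 video and MP3 audio) may be closer to the stated goal; an untested sketch:
GST_DEBUG=3,decklink:5 gst-launch-1.0 -e decklinkvideosrc mode=1080p60 ! queue ! videoconvert ! vaapih264enc ! h264parse ! queue ! mux. decklinkaudiosrc ! queue ! audioconvert ! lamemp3enc ! mux. avimux name=mux ! filesink location=video_a.avi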
Related
I know how to create a dynamic pipeline in Python or C, but I wonder: is it possible to create a dynamic pipeline in the PIPELINE-DESCRIPTION language?
I have an HLS stream which may contain audio, video, or audio+video, and I want to handle all of these cases with a pipeline description.
The following pipeline breaks when there is video only or audio only:
gst-launch-1.0 -e rtspsrc location='rtsp://localhost:554' latency=0 name=d d. ! queue ! capsfilter caps="application/x-rtp,media=video" ! rtph264depay ! mpegtsmux name=mux ! filesink location=file.ts d. ! queue ! capsfilter caps="application/x-rtp,media=audio" ! decodebin ! audioconvert ! audioresample ! lamemp3enc ! mux.
I'm trying to record audio and video from the internal webcam and mic to segmented files with GStreamer.
It works for a single file by doing:
gst-launch-1.0 -e avfvideosrc ! \
video/x-raw ! vtenc_h264 ! h264parse ! queue ! \
mpegtsmux name=mux ! filesink location=test.mp4 osxaudiosrc ! \
decodebin ! audioconvert ! faac ! aacparse ! queue ! mux.
It doesn't work when doing:
gst-launch-1.0 -e avfvideosrc ! \
video/x-raw ! vtenc_h264 ! h264parse ! queue ! \
splitmuxsink \
muxer=mpegtsmux \
location=test%04d.mp4 \
max-size-time=1000000000 \
name=mux osxaudiosrc ! \
decodebin ! audioconvert ! faac ! aacparse ! queue ! mux.
It fails with: erroneous pipeline: could not link queue1 to mux
I'm using GStreamer 1.12.3 on macOS Sierra.
Note: The H264/AAC encoding isn't necessary for what I want to achieve, so if there are solutions that only work with e.g. avimux, for whatever reason, that's fine.
EDIT: I've tried this on a Windows machine and got the same error.
gst-launch-1.0 -ev ksvideosrc ! video/x-raw ! \
videoconvert ! queue ! \
splitmuxsink max-size-time=1000000000 muxer=avimux name=mux \
location=video%04d.avi autoaudiosrc ! \
decodebin ! audioconvert ! queue ! mux.
Just like on the Mac, replacing splitmuxsink with avimux ! filesink works. I'm sure I'm just missing out on some 'pipeline' logic, so any clarification that can push me in the right direction would be helpful.
I needed to send the audio stream to the audio track of the muxer like so: mux.audio_0
gst-launch-1.0 -ev ksvideosrc ! video/x-raw ! \
videoconvert ! queue ! \
splitmuxsink max-size-time=1000000000 muxer=avimux name=mux \
location=video%04d.avi autoaudiosrc ! \
decodebin ! audioconvert ! queue ! mux.audio_0
This is one of those cases where the documentation is technically clear, but you're missing some basic knowledge of how to interpret it: unlike a plain muxer, splitmuxsink exposes request pads named video and audio_%u, so the audio branch has to be linked to mux.audio_0 explicitly rather than to a bare mux.
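For reference, the same pad name should fix the macOS pipeline from the question as well; an untested sketch:
gst-launch-1.0 -e avfvideosrc ! \
video/x-raw ! vtenc_h264 ! h264parse ! queue ! \
splitmuxsink muxer=mpegtsmux location=test%04d.mp4 max-size-time=1000000000 name=mux \
osxaudiosrc ! decodebin ! audioconvert ! faac ! aacparse ! queue ! mux.audio_0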
I am using the following pipeline to convert an FLV file to MP4.
gst-launch-1.0 -vvv -e filesrc location="c.flv" ! flvdemux name=demux \
demux.audio ! queue ! decodebin ! audioconvert ! faac bitrate=32000 ! mux. \
demux.video ! queue ! decodebin ! videoconvert ! video/x-raw,format=I420 ! x264enc speed-preset=superfast tune=zerolatency psy-tune=grain sync-lookahead=5 bitrate=480 key-int-max=50 ref=2 ! mux. \
mp4mux name=mux ! filesink location="c.mp4"
The problem is that when (for example) audio is missing, the pipeline gets stuck. (The same thing happens if just hooking a fakesink to demux.audio.)
I need a way for the filters to ignore missing tracks, or produce empty tracks.
I have been working on an application where I use rtspsrc to gather audio and video from one network camera to another. However, I cannot watch the stream from the camera and therefore can't verify that the stream works as intended. To verify that the stream is correct, I want to record it on an SD card and then play the file on a computer. The problem is that I want the camera to do as much of the parsing, decoding, and depayloading as possible, since that is the purpose of the application.
I therefore have to separate the audio and video streams with a demuxer, do the parsing, decoding, etc., and thereafter mux them back into a Matroska file.
The video decoder has been omitted since it is not done yet for this camera.
Demux to live playback sink (works)
gst-launch-0.10 -v rtspsrc location="rtsp://host:pass@192.168.0.91/XXX/XXXX?resolution=1280x720&audio=1&audiocodec=g711&audiosamplerate=8000&audiobitrate=64000" latency=0 name=d d. ! rtppcmudepay ! mulawdec ! audioresample ! audioconvert ! autoaudiosink d. ! rtph264depay ! ffdec_h264 ! queue ! ffmpegcolorspace ! autovideosink
Multiple rtspsrc to Matroska (works)
gst-launch-1.0 -v rtspsrc location="rtsp://host:pass@192.168.0.91/XXX/XXXX?audio=1&audiocodec=g711&audiosamplerate=8000&audiobitrate=64000" latency=0 ! rtppcmudepay ! mulawdec ! audioresample ! audioconvert ! queue ! matroskamux name=mux ! filesink location=/var/spool/storage/SD_DISK/testmovie.mkv rtspsrc location="rtsp://root:pass@192.168.0.91/axis-media/media.amp?resolution=1280x720" latency=0 ! rtph264depay ! h264parse ! mux.
Single rtspsrc to Matroska (fails)
gst-launch-1.0 -v rtspsrc location="rtsp://host:pass@192.168.0.91/XXX/XXXX?resolution=1280x720&audio=1&audiocodec=g711&audiosamplerate=8000&audiobitrate=64000" latency=0 name=d d. ! queue ! rtppcmudepay ! mulawdec ! audioresample ! audioconvert ! queue ! matroskamux name=mux d. ! queue ! rtph264depay ! h264parse ! queue ! mux. ! filesink location=/var/spool/storage/SD_DISK/testmoviesinglertsp.mkv
The last example fails with the error message
WARNING: erroneous pipeline: link without source element
Have I misunderstood the usage of matroskamux, and why do the two examples above work but not the last?
The problem is here:
queue ! mux. ! filesink
You need to do
queue ! mux. mux. ! filesink
mux. means that gst-launch should automatically select a pad from mux and link it. You could also specify a pad name manually, like mux.src. So, syntactically, you were missing another element/pad there to link to the other element.
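Applied to the failing single-rtspsrc pipeline above, the corrected version should look like this (a sketch; the host, credentials, and stream parameters are the placeholders from the question):
gst-launch-1.0 -v rtspsrc location="rtsp://host:pass@192.168.0.91/XXX/XXXX?resolution=1280x720&audio=1&audiocodec=g711&audiosamplerate=8000&audiobitrate=64000" latency=0 name=d d. ! queue ! rtppcmudepay ! mulawdec ! audioresample ! audioconvert ! queue ! matroskamux name=mux d. ! queue ! rtph264depay ! h264parse ! queue ! mux. mux. ! filesink location=/var/spool/storage/SD_DISK/testmoviesinglertsp.mkv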
Hi all, I'm trying to play and record an MP3 souphttpsrc at the same time, but I'm not getting a good result. Can someone help, please?
gst-launch-1.0 -e filesrc location=/dev/fd/0 ! h264parse ! tee name=myvid ! queue ! decodebin ! xvimagesink sync=false myvid. ! queue ! mux.video_0 alsasrc device="plughw:2,0" ! "audio/x-raw,rate=44100,channels=1,depth=24" ! audioconvert ! queue ! filesink location=/tmp/out.mp4
thank you
Hi, your pipeline is slightly wrong.
There is no encoding happening for the audio, so you're saving raw audio into the container.
There is no muxer, and mux.video_0 therefore does not resolve to any pad on any element.
Here is a pipeline without these issues (note: the GStreamer 1.0 MP3 encoder element is lamemp3enc, and 1.0 raw-audio caps use format rather than depth):
gst-launch-1.0 -e mp4mux name=mux ! filesink location=/tmp/out.mp4 filesrc location=/dev/fd/0 ! h264parse ! tee name=myvid ! queue ! decodebin ! xvimagesink sync=false myvid. ! queue ! mux.video_0 alsasrc device="plughw:2,0" ! "audio/x-raw,rate=44100,channels=1,format=S24LE" ! audioconvert ! queue ! lamemp3enc ! mux.audio_0