I'm new to GStreamer and I'm trying to create a pipeline to display a video and record it at the same time. I've managed to get the display part working using:
ss << "filesrc location=/home/videos/video1.avi ! avidemux name=demux demux.video_0 ! mpeg4videoparse ! avdec_mpeg4 ! nvvidconv ! video/x-raw,format=I420 ! appsink name=mysink";
Also, I've read that filesink location=somepath is used for saving data into a file, but I don't know how to combine it with the rest of the pipeline.
So, how do I use appsink and filesink in the same pipeline?
GStreamer offers a tee element for such cases. Note, however, that in most cases you will want a queue at the start of each branch after a tee to prevent deadlocks. E.g.:
filesrc location=/home/videos/video1.avi ! avidemux name=demux demux.video_0 ! mpeg4videoparse ! avdec_mpeg4 ! nvvidconv ! video/x-raw,format=I420 ! tee name=mytee ! queue ! appsink name=mysink mytee. ! queue ! filesink location=out.raw
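Keep in mind that the filesink branch above writes raw, unmuxed I420 frames, which most players can't open directly. If you want a playable file instead, you could encode and mux that branch before the filesink. A sketch, assuming x264enc and mp4mux are available on your system:

filesrc location=/home/videos/video1.avi ! avidemux name=demux demux.video_0 ! mpeg4videoparse ! avdec_mpeg4 ! nvvidconv ! video/x-raw,format=I420 ! tee name=mytee ! queue ! appsink name=mysink mytee. ! queue ! x264enc ! h264parse ! mp4mux ! filesink location=out.mp4

Remember that mp4mux needs a clean EOS to finalize the file, so send EOS through the pipeline (or use -e with gst-launch-1.0) before shutting down.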
Related
I know how to create a dynamic pipeline in Python or C, but I wonder: is it possible to create a dynamic pipeline in the PIPELINE-DESCRIPTION language?
I have an HLS stream, which may contain audio, video, or audio+video, and I want to be able to handle all of these cases with one pipeline description.
The following pipeline breaks when there is video only or audio only:
gst-launch-1.0 -e rtspsrc location='rtsp://localhost:554' latency=0 name=d d. ! queue ! capsfilter caps="application/x-rtp,media=video" ! rtph264depay ! mpegtsmux name=mux ! filesink location=file.ts d. ! queue ! capsfilter caps="application/x-rtp,media=audio" ! decodebin ! audioconvert ! audioresample ! lamemp3enc ! mux.
I am a beginner with GStreamer, so bear with me.
I have a working pipeline where audio and video from a test source are sent to the webrtcbin element used to send out the offer. The pipeline is as follows:
PIPELINE_DESC = '''
webrtcbin name=sendrecv stun-server=stun://stun.l.google.com:19302
audiotestsrc is-live=true wave=red-noise ! audioconvert ! audioresample ! queue ! opusenc ! rtpopuspay !
queue ! application/x-rtp,media=audio,encoding-name=OPUS,payload=96 ! sendrecv.
videotestsrc is-live=true pattern=ball ! video/x-raw,width=320,height=240 ! videoconvert ! queue ! x264enc ! rtph264pay !
queue ! application/x-rtp,media=video,encoding-name=H264,payload=97 ! sendrecv.
'''
However, doing this consumes a lot of CPU/memory as GStreamer has to encode the audio/video. Hence I want to use a pre-recorded file to lower the resource usage.
I want to use a sample file (sample.mp4) to send audio and video to the webrtcbin element. The mp4 file has H264 video and AAC audio. I have tried a lot of combinations of elements but it is not working. Could you please help me correct my pipeline?
PIPELINE_DESC = '''
webrtcbin name=sendrecv stun-server=stun://stun.l.google.com:19302
filesrc location=sample.mp4 ! decodebin ! audioconvert ! sendrecv.
filesrc location=sample.mp4 ! decodebin ! videoconvert ! sendrecv.
'''
Many thanks in advance.
MP4 is a container format, so the file needs to be demultiplexed to get at the video and audio streams. For that purpose, you can use GStreamer's qtdemux element.
With that in mind, an example pipeline could look something like this:
PIPELINE_DESC = '''
filesrc location=test.mp4 ! qtdemux name=demux
webrtcbin name=sendrecv stun-server=stun://stun.l.google.com:19302
demux.audio_0 ! aacparse ! rtpmp4apay !
queue ! application/x-rtp,media=audio,encoding-name=MP4A-LATM,payload=96 ! sendrecv.
demux.video_0 ! h264parse ! rtph264pay !
queue ! application/x-rtp,media=video,encoding-name=H264,payload=97 ! sendrecv.
'''
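If the WebRTC side makes this hard to debug, it can help to first verify the demuxing part on its own. A local playback sketch under the same assumptions about the file's codecs (avdec_aac and avdec_h264 come from gst-libav):

gst-launch-1.0 filesrc location=test.mp4 ! qtdemux name=demux \
demux.audio_0 ! queue ! aacparse ! avdec_aac ! audioconvert ! autoaudiosink \
demux.video_0 ! queue ! h264parse ! avdec_h264 ! videoconvert ! autovideosink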
I'm trying to use this command to create multiple files from a stream, but they have no audio playback. I think decodebin should be dealing with it; what am I doing wrong?
gst-launch-1.0 -e filesrc location=video.mp4 ! queue ! decodebin ! queue ! videoconvert ! queue ! timeoverlay ! x264enc key-int-max=10 ! h264parse ! splitmuxsink location=videos/test%02d.mp4 max-size-time=1000000000000
Why do you assume that decodebin will handle it? decodebin will decode the audio track to raw audio and expose an audio pad. If you don't make use of that pad, the audio will not make it into the file.
Since you are transcoding, you will have to re-encode the audio too:
gst-launch-1.0 -e filesrc location=video.mp4 ! queue ! decodebin ! queue ! \
videoconvert ! queue ! timeoverlay ! x264enc key-int-max=10 ! h264parse ! \
splitmuxsink location=videos/test%02d.mp4 max-size-time=1000000000000 \
decodebin0. ! queue ! voaacenc ! aacparse ! splitmuxsink0.audio_0
If you don't want to re-encode but rather pass the audio through, decodebin is the wrong approach; parsebin may be a better fit in that case.
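For example, assuming the source is an MP4 with H264 video and AAC audio, a variant that still transcodes the video but passes the audio through could demux explicitly and only parse the audio branch. A sketch, not tested:

gst-launch-1.0 -e filesrc location=video.mp4 ! qtdemux name=demux \
demux.video_0 ! queue ! h264parse ! avdec_h264 ! videoconvert ! timeoverlay ! x264enc key-int-max=10 ! h264parse ! \
splitmuxsink name=smux location=videos/test%02d.mp4 max-size-time=1000000000000 \
demux.audio_0 ! queue ! aacparse ! smux.audio_0

Here aacparse only packetizes the existing AAC stream, so no audio re-encoding takes place.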
I'm trying to record audio and video from the internal webcam and mic to segmented files with GStreamer.
It works for a single file by doing:
gst-launch-1.0 -e avfvideosrc ! video/x-raw ! vtenc_h264 ! h264parse ! queue ! \
mpegtsmux name=mux ! filesink location=test.mp4 \
osxaudiosrc ! decodebin ! audioconvert ! faac ! aacparse ! queue ! mux.
It doesn't work when doing:
gst-launch-1.0 -e avfvideosrc ! video/x-raw ! vtenc_h264 ! h264parse ! queue ! \
splitmuxsink muxer=mpegtsmux location=test%04d.mp4 max-size-time=1000000000 name=mux \
osxaudiosrc ! decodebin ! audioconvert ! faac ! aacparse ! queue ! mux.
It fails with: erroneous pipeline: could not link queue1 to mux
I'm using GStreamer 1.12.3 on macOS Sierra.
Note: The H264/AAC encoding isn't necessary for what I want to achieve, so if there are solutions that only work with e.g. avimux, for whatever reason, that's fine.
EDIT: I've tried this on a Windows machine and got the same error.
gst-launch-1.0 -ev ksvideosrc ! video/x-raw ! videoconvert ! queue ! \
splitmuxsink max-size-time=1000000000 muxer=avimux name=mux location=video%04d.avi \
autoaudiosrc ! decodebin ! audioconvert ! queue ! mux.
Just like on Mac, replacing splitmuxsink with avimux ! filesink works. I'm sure I'm just missing out on some 'pipeline' logic, so any clarification that can push me in the right direction would be helpful.
I needed to send the audio stream to the audio track of the muxer like so: mux.audio_0
gst-launch-1.0 -ev ksvideosrc ! video/x-raw ! videoconvert ! queue ! \
splitmuxsink max-size-time=1000000000 muxer=avimux name=mux location=video%04d.avi \
autoaudiosrc ! decodebin ! audioconvert ! queue ! mux.audio_0
This is one of those cases where the documentation is clear, but you need some basic knowledge of how to interpret it.
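For reference, an element's pad templates, and thus the exact pad names such as audio_%u that you can use in a pipeline description, can be checked with gst-inspect:

gst-inspect-1.0 splitmuxsink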
I'm working with GStreamer. Is there any way to store the output of the mpegtsdemux element in a pipeline to a file? I'm interested in separating the audio and video TS packets into different files.
You can separate the video and audio tracks after mpegtsdemux. I hope this example will help you:
gst-launch filesrc location="source.ts" ! mpegtsdemux name=demux ! queue max-size-buffers=400000000 ! decodebin ! videorate ! videoscale ! ffenc_mpeg4 ! matroskamux ! filesink location="your_video_file.mkv" \
demux. ! queue max-size-buffers=400000000 ! decodebin ! audioconvert ! wavenc ! filesink location="your_audio_file.wav"
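Note that this answer uses GStreamer 0.10 syntax (gst-launch, ffenc_mpeg4). On GStreamer 1.0, a roughly equivalent pipeline might look like the following sketch, with x264enc standing in for ffenc_mpeg4; element availability depends on your installation:

gst-launch-1.0 filesrc location=source.ts ! tsdemux name=demux \
demux. ! queue ! decodebin ! videorate ! videoscale ! videoconvert ! x264enc ! h264parse ! matroskamux ! filesink location=your_video_file.mkv \
demux. ! queue ! decodebin ! audioconvert ! wavenc ! filesink location=your_audio_file.wav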