I dumped the raw RTP payload in mu-law format into a file and fed this file into GStreamer like this:
gst-launch-1.0 filesrc location=interlocutor2.raw ! audio/x-mulaw, rate=8000, channels=1 ! mulawdec ! audioconvert ! audioresample ! autoaudiosink
But it plays back too fast. What am I doing wrong?
As the title says, how can I change this so it also plays the file's audio?
gst-launch-1.0 filesrc location='/usr/share/myfile.mp4' ! qtdemux ! h264parse ! imxvpudec ! imxipuvideosink framebuffer=/dev/fb2 &
Then I can get the file to play with audio using
gst-launch-1.0 -v playbin uri=file:///path/to/somefile.mp4
But I need the output to go to device fb2, as in the first example.
Many thanks
I posted a link to this question on the GStreamer subreddit and a hero called Omerzet saved the day.
The following is the solution:
gst-launch-1.0 filesrc location='/usr/share/myfile.mp4' ! qtdemux name=demux \
demux.video_0 ! queue ! h264parse ! imxvpudec ! imxipuvideosink framebuffer=/dev/fb2 \
demux.audio_0 ! queue ! decodebin ! audioconvert ! audioresample ! alsasink device="sysdefault:CARD=imxhdmisoc"
Where framebuffer diverts the video to device /dev/fb2.
And
alsasink device="sysdefault:CARD=imxhdmisoc"
diverts the audio to my chosen sound card.
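If you're not sure which ALSA device string to use, you can list the available sinks first (a quick check, assuming alsa-utils and the standard GStreamer tools are installed):

aplay -L
gst-device-monitor-1.0 Audio/Sink

Either listing should show the card name to plug into alsasink's device property.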
I am a beginner with gstreamer so bear with me.
I have a working pipeline where audio and video from a test source are sent to the webrtcbin element used to send out the offer. The pipeline is as follows:
PIPELINE_DESC = '''
webrtcbin name=sendrecv stun-server=stun://stun.l.google.com:19302
audiotestsrc is-live=true wave=red-noise ! audioconvert ! audioresample ! queue ! opusenc ! rtpopuspay !
queue ! application/x-rtp,media=audio,encoding-name=OPUS,payload=96 ! sendrecv.
videotestsrc is-live=true pattern=ball ! video/x-raw,width=320,height=240 ! videoconvert ! queue ! x264enc ! rtph264pay !
queue ! application/x-rtp,media=video,encoding-name=H264,payload=97 ! sendrecv.
'''
However, doing this consumes a lot of CPU/memory, as GStreamer has to encode the audio and video. Hence I want to use a pre-recorded file to lower the resource usage.
I want to use a sample file (sample.mp4) to send audio and video to the webrtcbin element. The MP4 file has H264 video and AAC audio. I have tried a lot of combinations of elements but it is not working. Could you please help me correct my pipeline?
PIPELINE_DESC = '''
webrtcbin name=sendrecv stun-server=stun://stun.l.google.com:19302
filesrc location=sample.mp4 ! decodebin ! audioconvert ! sendrecv.
filesrc location=sample.mp4 ! decodebin ! videoconvert ! sendrecv.
'''
Many thanks in advance.
An MP4 file is a container format, so it needs to be demultiplexed to get at the video and audio. For that purpose, you can use GStreamer's qtdemux element.
Considering the above, an example pipeline could look like this:
PIPELINE_DESC = '''
filesrc location=test.mp4 ! qtdemux name=demux
webrtcbin name=sendrecv stun-server=stun://stun.l.google.com:19302
demux.audio_%u ! aacparse ! rtpmp4apay !
queue ! application/x-rtp,media=audio,encoding-name=MP4A-LATM,payload=96 ! sendrecv.
demux.video_%u ! h264parse ! rtph264pay !
queue ! application/x-rtp,media=video,encoding-name=H264,payload=97 ! sendrecv.
'''
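Note that audio_%u and video_%u are qtdemux's pad template names; in an actual launch string you reference concrete pads, and a queue right after each demuxer branch is generally needed to avoid stalling. A sketch, assuming the file's first audio and video streams are the ones you want:

PIPELINE_DESC = '''
filesrc location=test.mp4 ! qtdemux name=demux
webrtcbin name=sendrecv stun-server=stun://stun.l.google.com:19302
demux.audio_0 ! queue ! aacparse ! rtpmp4apay !
queue ! application/x-rtp,media=audio,encoding-name=MP4A-LATM,payload=96 ! sendrecv.
demux.video_0 ! queue ! h264parse ! rtph264pay !
queue ! application/x-rtp,media=video,encoding-name=H264,payload=97 ! sendrecv.
'''

Also be aware that browsers typically negotiate Opus (or PCMU/PCMA) for WebRTC audio, so a browser peer may reject the MP4A-LATM branch and the audio may need transcoding to Opus after all.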
I'm trying to record audio and video from the internal webcam and mic to segmented files with GStreamer.
It works for a single file by doing:
gst-launch-1.0 -e avfvideosrc ! \
video/x-raw ! vtenc_h264 ! h264parse ! queue ! \
mpegtsmux name=mux ! filesink location=test.mp4 osxaudiosrc ! \
decodebin ! audioconvert ! faac ! aacparse ! queue ! mux.
It doesn't work when doing:
gst-launch-1.0 -e avfvideosrc ! \
video/x-raw ! vtenc_h264 ! h264parse ! queue ! \
splitmuxsink \
muxer=mpegtsmux \
location=test%04d.mp4 \
max-size-time=1000000000 \
name=mux osxaudiosrc ! \
decodebin ! audioconvert ! faac ! aacparse ! queue ! mux.
saying erroneous pipeline: could not link queue1 to mux
I'm using GStreamer 1.12.3 on macOS Sierra.
Note: The H264/AAC encoding isn't necessary for what I want to achieve, so if there are solutions that only work with e.g. avimux, for whatever reason, that's fine.
EDIT: I've tried this on a Windows machine and get the same error.
gst-launch-1.0 -ev ksvideosrc ! video/x-raw ! \
videoconvert ! queue ! \
splitmuxsink max-size-time=1000000000 muxer=avimux name=mux \
location=video%04d.avi autoaudiosrc ! \
decodebin ! audioconvert ! queue ! mux.
Just like on Mac, replacing splitmuxsink with avimux ! filesink works. I'm sure I'm just missing out on some 'pipeline' logic, so any clarification that can push me in the right direction would be helpful.
I needed to send the audio stream to the audio track of the muxer like so: mux.audio_0
gst-launch-1.0 -ev ksvideosrc ! video/x-raw ! \
videoconvert ! queue ! \
splitmuxsink max-size-time=1000000000 muxer=avimux name=mux \
location=video%04d.avi autoaudiosrc ! \
decodebin ! audioconvert ! queue ! mux.audio_0
This happens when the documentation is clear enough but you're missing some basic knowledge of how to interpret it.
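For reference, the request pads are listed in the element's pad templates, which you can inspect directly:

gst-inspect-1.0 splitmuxsink

The output shows a video pad plus audio_%u request pads, which is why the audio branch has to be linked explicitly as mux.audio_0 rather than plain mux.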
I have a requirement where I need to encode a v4l2src source in H.264 while using a Matroska container. If I have a .mkv file with embedded subtitles, it is easy to extract the subtitles with
gst-launch-1.0 filesrc location=test.mkv ! matroskademux ! "text/x-raw" ! filesink location=subtitles
Assuming I understand correctly, during the encoding process the "subtitle_%u" pad needs to be linked to a text/x-raw source using textoverlay.
gst-launch-1.0 textoverlay text="Video 1" valignment=top halignment=left font-desc="Sans, 60" ! mux. \
imxv4l2src device=/dev/video0 ! timeoverlay ! videoconvert ! queue ! vpuenc_h264 ! \
capsfilter caps="video/x-h264" ! matroskamux name=mux ! filesink location=sub.mkv
I use the above pipeline but I do not get the overlay in the .mkv video. What is the correct way to encode a subtitle/text overlay while encoding a source in H.264 in a Matroska container, and then later be able to extract it using the first pipeline?
Sanchayan.
You may try this:
gst-launch-1.0 \
filesrc location=subtitles.srt ! subparse ! kateenc category=SUB ! mux.subtitle_0 \
imxv4l2src device=/dev/video0 ! timeoverlay ! videoconvert ! queue ! vpuenc_h264 ! \
capsfilter caps="video/x-h264" ! matroskamux name=mux ! filesink location=sub.mkv
And the subtitles.srt file may be like this:
1
00:00:00,500 --> 00:00:05,000
CAM 1
2
00:00:05,500 --> 00:00:10,000
That's all folks !
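To check whether the subtitle track actually made it into the container, gst-discoverer-1.0 (from gst-plugins-base) can list the streams in the result:

gst-discoverer-1.0 sub.mkv

It should report a subtitle stream alongside the H.264 video if the mux succeeded.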
I have accidentally written H.264 data without any container using GStreamer, with the following pipeline:
gst-launch-1.0 filesrc location=input.avi ! image/jpeg,width=672,height=378,framerate=15/1 ! jpegdec ! videoconvert ! x264enc ! filesink location=output.mkv
The correct pipeline should have been as follows:
gst-launch-1.0 filesrc location=input.avi ! image/jpeg,width=672,height=378,framerate=15/1 ! jpegdec ! videoconvert ! x264enc ! matroskamux ! filesink location=output.mkv
Now I have been trying to repair these files, but I have not found an appropriate solution yet.
Could you please give me suggestions on how to solve this problem?
Regards.
I would start by seeing if you can just extract the data and play it. Then try and mux it. Something like this:
gst-launch-1.0 filesrc location=output.mkv ! video/x-h264,framerate=30/1 ! h264parse ! avdec_h264 ! ximagesink
You may have to tweak the caps, but once you get it playing you can mux it.
gst-launch-1.0 filesrc location=output.mkv ! video/x-h264,framerate=30/1 ! h264parse ! matroskamux ! filesink location=good.mkv
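As a sanity check once the remux is done, playbin should pick the file up without any manual caps (a quick verification, assuming the file is local):

gst-launch-1.0 playbin uri=file:///path/to/good.mkv

If playback speed looks wrong, the framerate in the caps of the remux pipeline probably needs adjusting, since the raw H.264 dump carries no container timing.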