I'm trying to convert an mp3 file to wav with gstreamer. Here's the pipeline:
gst-launch-1.0 filesrc location=audio.mp3 ! audio/mpeg ! mpg123audiodec ! wavenc ! filesink location=audio.wav
Also, I'd like the output to be 24-bit/48 kHz.
I get this error:
ERROR: from element /GstPipeline:pipeline0/GstCapsFilter:capsfilter0: Filter caps do not completely specify the output format
I saw another similar thread here and tried to comment, but I didn't have the 50 reputation points required. ;)
I would make use of the helper bins (decodebin etc.) to make your life easier. I came up with this:
gst-launch-1.0 filesrc location=in.mp3 ! decodebin ! audioresample ! audioconvert ! \
audio/x-raw,format=S24LE,rate=48000 ! wavenc ! filesink location=out.wav
Which gives me this result:
$ file out.wav
out.wav: RIFF (little-endian) data, WAVE audio, Microsoft PCM, 24 bit, stereo 48000 Hz
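For reference, the original pipeline can also be fixed without decodebin: the error comes from the bare `audio/mpeg` caps filter, which does not fully specify the stream format. A sketch using the mpegaudioparse element instead (untested here; file names are just placeholders):

```shell
# mpegaudioparse fully specifies the MPEG audio caps, so mpg123audiodec
# can negotiate; audioconvert/audioresample produce the requested
# 24-bit/48 kHz raw audio before WAV encoding.
gst-launch-1.0 filesrc location=audio.mp3 ! mpegaudioparse ! mpg123audiodec ! \
  audioconvert ! audioresample ! audio/x-raw,format=S24LE,rate=48000 ! \
  wavenc ! filesink location=audio.wav
```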
I need to demux the video frames and KLV data from an MPEG-TS stream in sync, frame by frame.
The following command demuxes the KLV data and outputs it to a text file:
gst-launch-1.0 filesrc location="some_file.ts" ! tsdemux name=demux \
demux. ! queue ! meta/x-klv ! filesink location="some_file-KLV.txt"
The following command demuxes the video and outputs it to a video file:
gst-launch-1.0 filesrc location="some_file.ts" ! tsdemux name=demux \
demux. ! queue ! decodebin ! videorate ! videoscale ! x264enc ! mp4mux ! filesink location="some_file-video.mp4"
Combining the above two:
gst-launch-1.0 filesrc location="some_file.ts" ! tsdemux name=demux \
demux. ! queue ! decodebin ! videorate ! videoscale ! x264enc ! mp4mux ! filesink location="some_file-video.mp4"
demux. ! queue ! meta/x-klv ! filesink location="some_file.txt"
The command doesn't work. It just gets stuck after the following messages on the terminal:
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
and the text and video files are both 0 bytes in size.
An example .ts file can be found here (this file wasn't created or uploaded by me; it is part of the data for some code on GitHub: https://gist.github.com/All4Gis/509fbe06ce53a0885744d16595811e6f): https://drive.google.com/drive/folders/1AIbCGTqjk8NgA4R818pGSvU1UCcm-lib?usp=sharing
Thank you for helping! Cheers. :)
Edit:
I realised that there can be some confusion. The files in the link above were just used to create the .ts file.
The .ts file I am using, is available directly in either of the links below:
https://drive.google.com/drive/folders/1t-u8rnEE2MftWQkS1q3UB-J3ogXBr3p9?usp=sharing
https://easyupload.io/xufeny
It seems that if we use GStreamer's multiqueue element instead of queue, the files are created. I tried this based on a suggestion from a commenter on another website where I had posted the question.
But the KLV data and frames are still not in sync. That is what I am trying to solve now.
Regards.
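Based on the edit above, the combined pipeline with multiqueue might be sketched as follows (a hypothetical reconstruction, not a verified fix; per the edit it produces non-empty files but does not by itself solve the synchronization problem):

```shell
# multiqueue replaces the two independent queue elements; each demuxed
# branch goes through its own sink_N/src_N pad pair of the same multiqueue.
gst-launch-1.0 filesrc location="some_file.ts" ! tsdemux name=demux \
  multiqueue name=mq \
  demux. ! mq.sink_0 mq.src_0 ! decodebin ! videorate ! videoscale ! \
    x264enc ! mp4mux ! filesink location="some_file-video.mp4" \
  demux. ! mq.sink_1 mq.src_1 ! meta/x-klv ! filesink location="some_file-KLV.txt"
```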
I cannot figure out a gst-launch invocation to simply play an Opus file to PulseAudio. Any help?
Things I've tried
130 % file foo.opus
foo.opus: Ogg data, Opus audio, version 0.1, stereo, 44100 Hz (Input Sample Rate)
0 % gst-launch-1.0 filesrc location=foo.opus ! pulsesink
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
ERROR: from element /GstPipeline:pipeline0/GstPulseSink:pulsesink0: The stream is in the wrong format.
1 % gst-launch-1.0 filesrc location=foo.opus ! opusparse ! pulsesink
WARNING: erroneous pipeline: could not link opusparse0 to pulsesink0
1 % gst-launch-1.0 filesrc location=foo.opus ! opusdec ! pulsesink
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
ERROR: from element /GstPipeline:pipeline0/GstFileSrc:filesrc0: Internal data stream error.
# this runs, but only makes one burst of noise
0 % gst-launch-1.0 filesrc location=foo.opus ! opusparse ! opusdec ! pulsesink
Here is a working example.
gst-launch-1.0 filesrc location=foo.opus ! oggdemux ! opusparse ! opusdec ! alsasink
I was able to piece this together from the example pipelines here
https://gstreamer.freedesktop.org/documentation/opus/opusdec.html?gi-language=c
and
https://gstreamer.freedesktop.org/documentation/opus/opusenc.html?gi-language=c
Although in those examples there is no opusparse. I don't know why opusparse is needed for my files but not for a sine.ogg created with opusenc, or what the difference is between parsing and decoding the Opus file data ...
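For completeness, decodebin should also pick the oggdemux/opusdec chain automatically, which sidesteps the question of which parser is needed; a sketch, assuming a file like the foo.opus above and a working PulseAudio setup:

```shell
# decodebin autoplugs oggdemux and opusdec; audioconvert/audioresample
# adapt the decoded audio to whatever format pulsesink negotiates.
gst-launch-1.0 filesrc location=foo.opus ! decodebin ! \
  audioconvert ! audioresample ! pulsesink
```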
I'm trying to construct a pipeline that reads any audio file (MP3, Ogg, FLAC, etc.) and updates its tags using the taginject element, but it is not working.
Here are my attempts:
gst-launch-1.0 filesrc location=file.mp3 ! decodebin ! taginject tags="title=bla,artist=blub" ! filesink location=output_file.mp3
Result: The pipeline runs, but it creates a 50 MB file from a 4 MB file, and that large file is not playable (and probably contains no tags either).
gst-launch-1.0 filesrc location=file.mp3 ! taginject tags="title=test,artist=blub" ! filesink location=output_file.mp3
Result: The pipeline runs and creates a playable output file, but it contains no tags.
gst-launch-1.0 filesrc location=file.mp3 ! decodebin ! taginject tags="title=test,artist=blub" ! encodebin ! filesink location=output_file.mp3
Result: The pipeline does not run. It says taginject cannot be linked with encodebin.
I would appreciate any help on this. I just don't know what I am doing wrong (probably using the wrong elements... but I just can't find which are the right ones).
You need to add a muxer after taginject, e.g. something like:
gst-launch-1.0 filesrc location=file.mp3 ! parsebin ! \
taginject tags="title=bla,artist=blub" ! id3v2mux ! \
filesink location=output_file.mp3
Also, using parsebin (rather than decodebin) avoids decoding and re-encoding the audio.
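To check whether the tags were actually written, one option is the gst-discoverer-1.0 tool that ships with gst-plugins-base; a sketch, assuming the output file name from the pipeline above:

```shell
# Prints container, stream, and tag information (including ID3v2 tags)
# for the muxed file; -v gives verbose per-stream detail.
gst-discoverer-1.0 -v output_file.mp3
```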
I dumped raw RTP payloads in mu-law format into a file and fed this file into GStreamer like this:
gst-launch-1.0 filesrc location=interlocutor2.raw ! audio/x-mulaw, rate=8000, channels=1 ! mulawdec ! audioconvert ! audioresample ! autoaudiosink
But it plays back too fast. What am I doing wrong?
I could not find a stable, balanced approach to demux the A/V stream and then save it as a playable H.264 Annex B format video.
Well, I tried the following steps on the "shrinkage" file.
gst-launch-0.10 filesrc location=h264_720p_mp_3.1_3mbps_aac_shrinkage.mkv ! \
matroskademux ! filesink location=abc.h264
-rw-rw-r-- 1 XXX XXX 28697147 Nov 1 10:04 h264_720p_mp_3.1_3mbps_aac_shrinkage.mkv
-rw-rw-r-- 1 XXX XXX 27581733 Nov 1 10:19 abc.h264
A file gets saved with a "not much smaller" size, but it is not playable. However, the parent container is playable with the following pipeline:
gst-launch-0.10 filesrc location=h264_720p_mp_3.1_3mbps_aac_shrinkage.mkv ! \
matroskademux ! h264parse ! ffdec_h264 ! ffmpegcolorspace ! ximagesink
Questions
Q1. What are the methods to extract the video ES and audio ES from different containers using GStreamer?
Q2. Are there other methods for Q1 that always work and/or are easier?
In general, you need to specify which pad you're interested in. Otherwise you couldn't distinguish the audio ES from the video ES.
The following works on my machine:
gst-launch-1.0 filesrc location=example.mkv ! queue ! matroskademux name=dmux dmux.video_0 ! queue ! filesink location=vid.265 dmux.audio_0 ! queue ! filesink location=aud.aac
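If the dumped video ES also needs to be playable on its own (the problem in the original question), the same h264parse plus byte-stream caps trick used for MP4 in the other answer should apply to Matroska as well; a sketch, assuming a hypothetical example.mkv with an H.264 video track:

```shell
# h264parse converts the AVC (length-prefixed) stream coming out of
# matroskademux into an Annex B byte stream that standalone players accept.
gst-launch-1.0 filesrc location=example.mkv ! matroskademux ! h264parse ! \
  video/x-h264,stream-format=byte-stream ! filesink location=vid.h264
```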
All of the following commands work for me. Each creates an H.264 byte-stream file from an MP4 video file, and the newly created file also plays with ffplay or gst-play-1.0.
gst-launch-1.0 filesrc location=./VID-20190903-WA0012.mp4 ! qtdemux name=pnkj_demux ! h264parse ! video/x-h264,stream-format=byte-stream ! filesink location=./VID-20190903-WA0012_1.264
gst-launch-1.0 -e filesrc location=./VID-20190903-WA0012.mp4 ! qtdemux name=pnkj_demux ! h264parse ! video/x-h264,stream-format=byte-stream ! filesink location=./VID-20190903-WA0012_2.264
gst-launch-1.0 filesrc location=./VID-20190903-WA0012.mp4 ! qtdemux name=pnkj_demux pnkj_demux.video_0 ! h264parse ! video/x-h264,stream-format=byte-stream ! filesink location=./VID-20190903-WA0012_3.264
gst-launch-1.0 -e filesrc location=./VID-20190903-WA0012.mp4 ! qtdemux name=pnkj_demux pnkj_demux.video_0 ! h264parse ! video/x-h264,stream-format=byte-stream ! filesink location=./VID-20190903-WA0012_4.264