Encoding an audio file using ffenc_aac - gstreamer

I am trying to encode an audio file using gstreamer. I am using the command
gst-launch filesrc location=s.pcm ! audio/x-raw-int, rate=4000, channels=2, endianness=1234, width=16, depth=16, signed=true ! ffenc_aac ! filesink location=file.wav
And I am getting this error message:
Setting pipeline to PAUSED ... Pipeline is PREROLLING ... ERROR: from
element /GstPipeline:pipeline0/GstFileSrc:filesrc0: Internal data flow
error. Additional debug info: gstbasesrc.c(2625): gst_base_src_loop
(): /GstPipeline:pipeline0/GstFileSrc:filesrc0: streaming task paused,
reason not-negotiated (-4) ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ... Freeing pipeline ...
Can anyone guide me to overcome this issue?

Don't confuse encoding with containers. You cannot have an AAC-encoded WAV; WAVs are PCM. You can have a 4k WAV, or you can have an AAC-encoded file in an MP4 or M4A container. Examples of both are below. Note that in these examples the AAC encoders get very picky if you try to change the sample rate below 48000.
Create raw audio file
gst-launch audiotestsrc num-buffers=100 \
! audio/x-raw-int, rate=48000, channels=2, endianness=1234, width=16, depth=16, signed=true \
! filesink location=foo.pcm
Encode it as a WAV
gst-launch filesrc location=foo.pcm \
! audio/x-raw-int, rate=48000, channels=2, endianness=1234, width=16, depth=16, signed=true \
! audioresample \
! audio/x-raw-int, rate=4000 \
! wavenc \
! filesink location=foo.wav
Encode it as AAC and mux into mp4
I don't really know why I had to encode and then decode again, but nothing else worked, even though I could go directly from the audiotestsrc.
gst-launch filesrc location=foo.pcm \
! audio/x-raw-int, rate=48000, channels=2, endianness=1234, width=16, depth=16, signed=true \
! wavenc \
! wavparse \
! ffenc_aac \
! mp4mux \
! filesink location=foo.mp4
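To sanity-check the mp4, a playback pipeline along these lines should work (assuming decodebin can find an AAC decoder such as faad on your system):
# assumes decodebin can pick up an AAC decoder, e.g. faad
gst-launch filesrc location=foo.mp4 \
! decodebin \
! audioconvert \
! autoaudiosink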
Alternatively, using faac
The pipeline was a lot cleaner and the output file was smaller.
gst-launch filesrc location=foo.pcm \
! audio/x-raw-int, rate=48000, channels=2, endianness=1234, width=16, depth=16, signed=true \
! faac \
! mp4mux \
! filesink location=foo.mp4
Or voaacenc
voaacenc wouldn't work below 48000, even though it looks to have the most flexible capabilities. I tried 8k, 16k, 48k, 96k and 44100, which anecdotally changed the pitch of the test tone.
gst-launch filesrc location=foo.pcm \
! audio/x-raw-int, rate=48000, channels=2, endianness=1234, width=16, depth=16, signed=true \
! voaacenc \
! mp4mux \
! filesink location=foo.mp4
Low sample rate AAC
The lowest sample rate I had success with was 16000. Here are those tests, again noting that faac produced the smallest file size.
gst-launch audiotestsrc num-buffers=100 \
! audio/x-raw-int, rate=16000, channels=2, endianness=1234, width=16, depth=16, signed=true \
! ffenc_aac \
! mp4mux \
! filesink location=foo.mp4
gst-launch audiotestsrc num-buffers=100 \
! audio/x-raw-int, rate=16000, channels=2, endianness=1234, width=16, depth=16, signed=true \
! faac \
! mp4mux \
! filesink location=foo.mp4
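For what it's worth, if you are on GStreamer 1.x rather than 0.10, the audio/x-raw-int caps and the ffenc_* element names above no longer exist (ffenc_aac became avenc_aac in gst-libav, while faac and voaacenc kept their names). A rough 1.x equivalent sketch, assuming voaacenc from gst-plugins-bad is installed:
# GStreamer 1.x caps style; voaacenc assumed to be installed from gst-plugins-bad
gst-launch-1.0 audiotestsrc num-buffers=100 \
! audio/x-raw, format=S16LE, rate=48000, channels=2 \
! audioconvert \
! voaacenc \
! mp4mux \
! filesink location=foo.mp4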

Related

Corrupted H264 video when streaming via RTP/UDP

I'm trying to stream a video encoded in H264 over RTP/UDP.
Sending:
gst-launch-1.0 \
videotestsrc ! \
video/x-raw,format=RGBx,width=960,height=540,framerate=25/1 ! \
videoconvert ! \
x264enc bitrate=2000 ! \
rtph264pay config-interval=1 pt=96 ! \
udpsink port=5000
Receiving:
gst-launch-1.0 \
udpsrc port=5000 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" ! \
rtph264depay ! \
decodebin ! \
videoconvert ! \
ximagesink
If I start receiving the video before sending it, then everything works as intended.
However, if I start receiving video after the start of sending, then the image "breaks".
An example of a corrupted image
How to fix this problem?
The problem was solved by specifying caps after videoconvert
...
videoconvert ! video/x-raw,format=I420
...
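For reference, the full sending pipeline with that caps filter in place would look roughly like this:
gst-launch-1.0 \
videotestsrc ! \
video/x-raw,format=RGBx,width=960,height=540,framerate=25/1 ! \
videoconvert ! \
video/x-raw,format=I420 ! \
x264enc bitrate=2000 ! \
rtph264pay config-interval=1 pt=96 ! \
udpsink port=5000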

Gstreamer compositor using filesrc mp4 file

I'm trying to get used to using the gstreamer compositor.
I have this basic boilerplate example working. (Compositing 2 videotestsrc next to each other):
gst-launch-1.0 compositor name=comp \
sink_0::alpha=1 sink_0::xpos=0 sink_0::ypos=0 \
sink_1::alpha=0.5 sink_1::xpos=320 sink_1::ypos=0 ! \
queue2 ! video/x-raw, width=800, height=600 ! videoconvert ! xvimagesink \
videotestsrc pattern=1 ! "video/x-raw" ! comp.sink_0 \
videotestsrc pattern=8 ! "video/x-raw" ! comp.sink_1
Then I tried changing one of the videotestsrc inputs to an mp4 file.
I know that this command line works:
gst-launch-1.0 filesrc location=tst.mp4 ! decodebin ! videoconvert ! autovideosink
So I tried combining these two working pipelines
gst-launch-1.0 compositor name=comp \
sink_0::alpha=1 sink_0::xpos=0 sink_0::ypos=0 \
sink_1::alpha=0.5 sink_1::xpos=320 sink_1::ypos=0 ! \
queue2 ! decodebin ! video/x-raw, width=800, height=600 ! videoconvert ! xvimagesink \
videotestsrc pattern=1 ! "video/x-raw" ! comp.sink_0 \
filesrc location=tst.mp4 ! "video/x-raw" ! comp.sink_1
When I run this I get an error saying that the filter caps do not completely specify the output format... output caps are unfixed.
I'm positive this must be a simple syntax error. Does anyone know how to fix my pipeline?
No, you need to use most of the elements that made the standalone command line work. E.g.
gst-launch-1.0 compositor name=comp \
sink_0::alpha=1 sink_0::xpos=0 sink_0::ypos=0 \
sink_1::alpha=0.5 sink_1::xpos=320 sink_1::ypos=0 ! \
queue2 ! decodebin ! video/x-raw, width=800, height=600 ! videoconvert ! xvimagesink \
videotestsrc pattern=1 ! "video/x-raw" ! comp.sink_0 \
filesrc location=tst.mp4 ! decodebin ! videoconvert ! comp.sink_1
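Note that the xpos=320 layout assumes the file ends up roughly the same size as the 320-wide test pattern. If your mp4 has a different resolution, one option is to scale it on the way into the compositor. A sketch; the 320x240 size is just an assumption matching videotestsrc's default:
# 320x240 only mirrors videotestsrc's default size; adjust to your layout
gst-launch-1.0 compositor name=comp \
sink_0::alpha=1 sink_0::xpos=0 sink_0::ypos=0 \
sink_1::alpha=0.5 sink_1::xpos=320 sink_1::ypos=0 ! \
queue2 ! video/x-raw, width=800, height=600 ! videoconvert ! xvimagesink \
videotestsrc pattern=1 ! "video/x-raw" ! comp.sink_0 \
filesrc location=tst.mp4 ! decodebin ! videoconvert ! videoscale ! video/x-raw, width=320, height=240 ! comp.sink_1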

pipeline to decode and encode my mpeg4 video passing through some filter

I use the following pipeline to play my video on screen
gst-launch-1.0 filesrc location=01_189_libxvid_1920x1080_6M.mp4 !
qtdemux ! mpeg4videoparse ! omxmpeg4videodec ! videobalance brightness=100 !
video/x-raw,format=BGRA ! waylandsink --gst-debug=*:2
But now, instead of directly playing it, I want to encode it and save it to some folder. Please suggest a pipeline.
Should be something like this (example with h264 codec):
gst-launch-1.0 -e --gst-debug=3 \
filesrc location="/path/input/sample_in.mp4" \
! qtdemux \
! mpeg4videoparse \
! omxmpeg4videodec \
! queue \
! x264enc \
! qtmux \
! filesink location="/path/output/sample_out.mp4"

How to encode a PCM file to MP3 or AC3 using a GStreamer pipeline in C

I have recorded an audio file using ALSA in PCM format and am trying to encode it as a WAV file using a GStreamer pipeline in a Linux terminal, but I am getting a warning message:
WARNING: erroneous pipeline: no element "audioconvert"
The command I used is
gst-launch filesrc location=file.pcm ! audio/x-raw-int, rate=8000, channels=1, endianness=4321, width=16, depth=16, signed=true ! audioconvert ! audio/x-raw-int, rate=8000, channels=1, endianness=1234, width=16, depth=16, signed=true ! wavenc ! filesink location=file.wav
If you are still getting the error message, the audioconvert plugin is not installed correctly. To verify the pipeline is correct, I have simulated your task below successfully using the following pipelines. Consider changing the title to something that represents the error message rather than the particular encoding task, as the pipeline is fine.
Duplicate your raw audio file
gst-launch audiotestsrc num-buffers=100 \
! audio/x-raw-int, rate=8000, channels=1, endianness=1234, width=16, depth=16, signed=true \
! audioconvert \
! audio/x-raw-int, rate=8000, channels=1, endianness=4321, width=16, depth=16, signed=true \
! filesink location=foo.pcm
Then encode it as a WAV
gst-launch filesrc location=foo.pcm \
! audio/x-raw-int, rate=8000, channels=1, endianness=4321, width=16, depth=16, signed=true \
! audioconvert \
! audio/x-raw-int, rate=8000, channels=1, endianness=1234, width=16, depth=16, signed=true \
! wavenc \
! filesink location=foo.wav
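Since the title also asks about MP3: with the same source caps you can swap wavenc for an MP3 encoder. A sketch, assuming lamemp3enc (gst-plugins-ugly) is installed:
# lamemp3enc comes from gst-plugins-ugly
gst-launch filesrc location=file.pcm \
! audio/x-raw-int, rate=8000, channels=1, endianness=4321, width=16, depth=16, signed=true \
! audioconvert \
! lamemp3enc \
! filesink location=file.mp3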

gstreamer muxing with x264enc

I am trying to convert a DVD to mkv file with gstreamer. The pipeline I use is:
gst-launch -evv multifilesrc location="VTS_01_%d.VOB" index=1 ! dvddemux name=demuxer \
matroskamux name=mux ! filesink location=test.mkv \
demuxer.current_video ! queue ! mpeg2dec ! x264enc ! mux. \
demuxer.current_audio ! queue ! ffdec_ac3 ! lamemp3enc ! mux.
Unfortunately, the pipeline does not go beyond prerolling. When I replace x264enc with, for instance, ffenc_mpeg4, everything works fine.
This may work:
gst-launch filesrc location=file.vob \
! queue \
! dvddemux name=demuxer matroskamux name=mux \
! queue \
! filesink location=test.mkv demuxer.current_video\
! queue \
! ffdec_mpeg2video \
! ffdeinterlace \
! x264enc \
! 'video/x-h264, width=720, height=576, framerate=25/1' \
! mux. demuxer.current_audio \
! queue max-size-bytes=0 max-size-buffers=0 max-size-time=10000000000 \
! ffdec_ac3 \
! audioconvert \
! lamemp3enc \
! mux.
Byte stream should be 0 - sorry for that earlier.
You need to give the caps of the video after the x264enc, and you need to increase the limits on the audio queue to handle the delay in x264enc. These two changes have the pipeline running at my end. The deinterlacer is optional but desirable for interlaced content.
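To check the result, playbin2 should be able to handle the resulting Matroska file directly, e.g. (assuming test.mkv is in the current directory):
# playbin2 wants an absolute file:// URI
gst-launch playbin2 uri=file://$(pwd)/test.mkv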