GStreamer failed queueing buffer while recording video on Ubuntu 20.04

I am new to the gst-launch-1.0 tool. In our project, we have upgraded from Ubuntu 18.04 (kernel 4.15.0-142-generic) to Ubuntu 20.04 (kernel 5.4.0-135-generic). The gst-launch-1.0 version is 1.14.5 (GStreamer 1.14.5), the same on both operating systems.
We use gst-launch-1.0 for video recording. I was able to record a video with the command below on Ubuntu 18.04, but the same command does not work on Ubuntu 20.04:
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
/GstPipeline:pipeline0/GstSplitMuxSink:splitmuxsink0/GstFileSink:sink: async = false
/GstPipeline:pipeline0/GstV4l2Src:v4l2src0.GstPad:src: caps = video/x-raw, width=(int)394, height=(int)392, framerate=(fraction)30/1, format=(string)BGR, colorimetry=(string)sRGB, interlace-mode=(string)progressive
Setting pipeline to PLAYING ...
New clock: GstSystemClock
/GstPipeline:pipeline0/GstVideoConvert:videoconvert0.GstPad:src: caps = video/x-raw, width=(int)394, height=(int)392, framerate=(fraction)30/1, format=(string)I420, interlace-mode=(string)progressive
Redistribute latency...
/GstPipeline:pipeline0/GstX264Enc:x264enc0.GstPad:sink: caps = video/x-raw, width=(int)394, height=(int)392, framerate=(fraction)30/1, format=(string)I420, interlace-mode=(string)progressive
/GstPipeline:pipeline0/GstVideoConvert:videoconvert0.GstPad:sink: caps = video/x-raw, width=(int)394, height=(int)392, framerate=(fraction)30/1, format=(string)BGR, colorimetry=(string)sRGB, interlace-mode=(string)progressive
After setting export GST_DEBUG=5, the last part of the log is:
0:00:01.113209286 1373 0x55b5f70d2d20 DEBUG v4l2bufferpool gstv4l2bufferpool.c:1464:gst_v4l2_buffer_pool_release_buffer:<v4l2src0:pool:src> release buffer 0x7f124417e360
0:00:01.113238267 1373 0x55b5f70d2d20 ERROR v4l2allocator gstv4l2allocator.c:1269:gst_v4l2_allocator_qbuf:<v4l2src0:pool:src:allocator> failed queueing buffer 0: Invalid request descriptor
0:00:01.113260991 1373 0x55b5f70d2d20 ERROR v4l2bufferpool gstv4l2bufferpool.c:1196:gst_v4l2_buffer_pool_qbuf:<v4l2src0:pool:src> could not queue a buffer 0
0:00:01.113281859 1373 0x55b5f70d2d20 DEBUG GST_PERFORMANCE gstbufferpool.c:1309:default_release_buffer:<v4l2src0:pool:src> discarding buffer 0x7f124417e360: memory tag set
0:00:01.113306708 1373 0x55b5f70d2d20 DEBUG GST_MEMORY gstmemory.c:88:_gst_memory_free: free memory 0x7f1244178150
0:00:01.113328592 1373 0x55b5f70d2d20 DEBUG fdmemory gstfdmemory.c:74:gst_fd_mem_free: 0x7f1244178150: freed
0:00:01.113352970 1373 0x55b5f70d2d20 DEBUG GST_MEMORY gstmemory.c:88:_gst_memory_free: free memory 0x55b5f7122600
0:00:01.113373026 1373 0x55b5f70d2d20 DEBUG fdmemory gstfdmemory.c:74:gst_fd_mem_free: 0x55b5f7122600: freed
0:00:01.113412613 1373 0x55b5f70d2d20 DEBUG GST_SCHEDULING gstpad.c:4323:gst_pad_chain_data_unchecked:<x264enc0:sink> calling chainfunction &gst_video_encoder_chain with buffer buffer: 0x55b5f712dc60, pts 0:00:00.234978960, dts 99:99:99.999999999, dur 0:00:00.033333333, size 233632, offset 3, offset_end 4, flags 0x0
0:00:01.114983356 1373 0x55b5f70d2d20 DEBUG GST_SCHEDULING gstpad.c:4329:gst_pad_chain_data_unchecked:<x264enc0:sink> called chainfunction &gst_video_encoder_chain with buffer 0x55b5f712dc60, returned ok
0:00:01.115015593 1373 0x55b5f70d2d20 DEBUG GST_SCHEDULING gstpad.c:4329:gst_pad_chain_data_unchecked:<videoconvert0:sink> called chainfunction &gst_base_transform_chain with buffer 0x55b5f712d900, returned ok
0:00:01.115046926 1373 0x55b5f70d2d20 DEBUG basesrc gstbasesrc.c:2519:gst_base_src_get_range:<v4l2src0> calling create offset 18446744073709551615 length 4096, time 0
0:00:01.115070231 1373 0x55b5f70d2d20 DEBUG v4l2bufferpool gstv4l2bufferpool.c:1386:gst_v4l2_buffer_pool_acquire_buffer:<v4l2src0:pool:src> acquire
Please suggest the changes required to the gst-launch-1.0 command options to record the video.

Related

GStreamer: moov-recovery info with qtmux?

I am trying to experiment with GStreamer and moov-recovery via qtmux.
When I try to get the recovery moov from a non-corrupted .mp4 file
gst-launch-1.0 filesrc location=full.mp4 ! qtdemux ! qtmux moov-recovery-file=moov_recov.mrf ! filesink location=recovered_video.mp4
then I get
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Got EOS from element "pipeline0".
Execution ended after 0:00:00.112361582
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
What is the reason for the Got EOS from element "pipeline0"?
And what would be the correct way to pull the recovery moov from the .mp4 file?
Thanks.
Your muxing process was a success; it took about a tenth of a second, hence the EOS. Since it did not crash or otherwise fail, the recovery file probably gets removed after a successful muxing. There is no point in keeping that file.
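As for actually using the recovery file: it is meant for recordings that were interrupted before qtmux could write the moov atom, and gst-plugins-bad ships a qtmoovrecover element for that purpose. A sketch, assuming the property names from the element's documentation and hypothetical file names (broken.mp4, fixed.mp4):

```shell
# Rebuild a playable mp4 from a truncated recording plus the
# moov-recovery file that qtmux wrote while recording.
gst-launch-1.0 qtmoovrecover recovery-input=moov_recov.mrf \
    broken-input=broken.mp4 fixed-output=fixed.mp4
```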

GStreamer pipeline fails to run with osxaudiosrc plugin

I am running the pipeline below on a Mac, but it shows an error while running:
$ gst-launch-1.0 osxaudiosrc device=92
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
0:00:00.048601000 16777 0x7fafe585d980 WARN osxaudio gstosxcoreaudio.c:500:gst_core_audio_asbd_to_caps: No sample rate
0:00:00.048699000 16777 0x7fafe585d980 ERROR audio-info audio-info.c:304:gboolean gst_audio_info_from_caps(GstAudioInfo *, const GstCaps *): no channel-mask property given
0:00:00.048736000 16777 0x7fafe585d980 WARN basesrc gstbasesrc.c:3072:void gst_base_src_loop(GstPad *):<osxaudiosrc0> error: Internal data stream error.
0:00:00.048744000 16777 0x7fafe585d980 WARN basesrc gstbasesrc.c:3072:void gst_base_src_loop(GstPad *):<osxaudiosrc0> error: streaming stopped, reason not-negotiated (-4)
New clock: GstAudioSrcClock
ERROR: from element /GstPipeline:pipeline0/GstOsxAudioSrc:osxaudiosrc0: Internal data stream error.
Additional debug info:
gstbasesrc.c(3072): void gst_base_src_loop(GstPad *) ():
/GstPipeline:pipeline0/GstOsxAudioSrc:osxaudiosrc0:
streaming stopped, reason not-negotiated (-4)
Execution ended after 0:00:00.000101000
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
The device ID in the command was fetched from gst-inspect and corresponds to the MacBook speakers. I am using GStreamer 1.16.2 on Catalina.
What is wrong or missing in this pipeline?
TL;DR: you have an incomplete pipeline.
Once osxaudiosrc starts producing buffers, where are they supposed to go? Do you want to encode them and/or write them to a file? Should they be streamed somewhere? Should they be plotted? ...
This is also why GStreamer errors out. There is no element after your source element, so if it were to start playing, those buffers would end up in the void with no destination to go to (to be a bit more thorough: you are trying to push data on a pad which has no peer, so it would try to dereference an invalid sinkpad). Since this is not possible, GStreamer simply stops.
An example pipeline is given in the osxaudiosrc documentation:
gst-launch-1.0 osxaudiosrc ! wavenc ! filesink location=audio.wav
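If the goal is only to verify that the source opens the device and negotiates caps, a fakesink also completes the pipeline without writing anything. A sketch (device 92 is the ID from the question; note it belongs to the speakers, so substitute a capture device's ID):

```shell
# fakesink discards the buffers; this just checks that osxaudiosrc
# can start streaming from the given device.
gst-launch-1.0 -v osxaudiosrc device=92 ! fakesink
```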

Which GStreamer RTP payloader element should I use to wrap AAC audio?

I am trying to figure out the proper GStreamer element to use to transmit AAC audio over RTP.
By dumping the dot graph of a playbin playing the file, I can conclude that the caps coming out of tsdemux are audio/mpeg, mpegversion=2, stream-format=adts.
If I use the following pipeline
gst-launch-1.0 -v filesrc location=$BA ! tsdemux ! audio/mpeg ! rtpmpapay ! filesink location=/tmp/test.rtp
it fails:
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
/GstPipeline:pipeline0/GstCapsFilter:capsfilter1: caps = audio/mpeg
WARNING: from element /GstPipeline:pipeline0/GstTSDemux:tsdemux0: Delayed linking failed.
Additional debug info:
/var/tmp/portage/media-libs/gstreamer-1.12.3/work/gstreamer-1.12.3/gst/parse/grammar.y(510): gst_parse_no_more_pads (): /GstPipeline:pipeline0/GstTSDemux:tsdemux0:
failed delayed linking some pad of GstTSDemux named tsdemux0 to some pad of GstRtpMPAPay named rtpmpapay0
ERROR: from element /GstPipeline:pipeline0/GstTSDemux:tsdemux0: Internal data stream error.
Additional debug info:
/var/tmp/portage/media-libs/gst-plugins-bad-1.12.3/work/gst-plugins-bad-1.12.3/gst/mpegtsdemux/mpegtsbase.c(1613): mpegts_base_loop (): /GstPipeline:pipeline0/GstTSDemux:tsdemux0:
streaming stopped, reason not-linked (-1)
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
Freeing pipeline ...
Which gstreamer element should I be using to wrap AAC audio in an RTP packet?
I guess it's rtpmp4apay (RTP MPEG4 audio payloader). Maybe you want/need aacparse before the payloader as well.
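Applied to the pipeline from the question, that suggestion would look roughly like this (untested sketch; $BA is the asker's file variable):

```shell
# aacparse packages the ADTS stream properly before rtpmp4apay
# wraps it in RTP packets.
gst-launch-1.0 -v filesrc location=$BA ! tsdemux ! aacparse ! \
    rtpmp4apay ! filesink location=/tmp/test.rtp
```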

Multicast streaming of music files (wav, mp3, etc.) with GStreamer: can receive, but the data is intermittent

I want to implement multicast streaming in an embedded-linux (Yocto) system.
I thought GStreamer would make this easy to implement, but when the filesrc is an mp3, the received audio is choppy, as if it had passed through a low-pass filter.
When the filesrc is a wav, the received audio is choppy, as if it had passed through a high-pass filter.
Here is the gst-launch command (mp3).
Tx:
GST_DEBUG=3 gst-launch-1.0 filesrc location="background.mp3" ! decodebin ! \
audioconvert ! rtpL16pay ! queue ! udpsink host=239.0.0.1 auto-multicast=true port=5004
Rx:
GST_DEBUG=3 gst-launch-1.0 udpsrc multicast-group=239.0.0.1 auto-multicast=true port=5004 \
caps="application/x-rtp, media=audio, clock-rate=44100, payload=0" ! rtpL16depay !\
audioconvert ! alsasink
The GST_DEBUG=3 output is as follows:
Tx:
Setting pipeline to PAUSED ...
0:00:00.115165875 936 0x7b8c40 WARN basesrc gstbasesrc.c:3486:gst_base_src_start_complete:<filesrc0> pad not activated yet
Pipeline is PREROLLING ...
====== BEEP: 4.1.4 build on Feb 14 2017 13:39:18. ======
Core: MP3 decoder Wrapper build on Mar 21 2014 15:04:50
file: /usr/lib/imx-mm/audio-codec/wrap/lib_mp3d_wrap_arm12_elinux.so.3
CODEC: BLN_MAD-MMCODECS_MP3D_ARM_02.13.00_CORTEX-A8 build on Jul 12 2016 13:15:30.
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Rx:
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
0:00:00.269585916 1232 0x772320 WARN alsa conf.c:4974:snd_config_expand: alsalib error: Unknown parameters {AES0 0x02 AES1 0x82 AES2 0x00 AES3 0x02}
0:00:00.269914500 1232 0x772320 WARN alsa pcm.c:2495:snd_pcm_open_noupdate: alsalib error: Unknown PCM default:{AES0 0x02 AES1 0x82 AES2 0x00 AES3 0x02}
0:00:00.283770666 1232 0x772320 WARN alsa pcm_hw.c:1250:snd_pcm_hw_get_chmap: alsalib error: Cannot read Channel Map ctl
: No such file or directory
Redistribute latency...
0:00:06.335845459 1232 0x772320 WARN audiobasesink gstaudiobasesink.c:1807:gst_audio_base_sink_get_alignment:<alsasink0> Unexpected discontinuity in audio timestamps of -0:00:00.120430839, resyncing
0:00:07.167036751 1232 0x772320 WARN audiobasesink gstaudiobasesink.c:1512:gst_audio_base_sink_skew_slaving:<alsasink0> correct clock skew -0:00:00.020498109 < -+0:00:00.020000000
0:00:07.178596167 1232 0x772320 WARN audiobasesink gstaudiobasesink.c:1484:gst_audio_base_sink_skew_slaving:<alsasink0> correct clock skew +0:00:00.020102330 > +0:00:00.020000000
0:00:08.215633667 1232 0x772320 WARN audiobasesink gstaudiobasesink.c:1807:gst_audio_base_sink_get_alignment:<alsasink0> Unexpected discontinuity in audio timestamps of -0:00:00.128480725, resyncing
0:00:08.962452751 1232 0x772320 WARN audiobasesink gstaudiobasesink.c:1512:gst_audio_base_sink_skew_slaving:<alsasink0> correct clock skew -0:00:00.020283552 < -+0:00:00.020000000
0:00:09.095737543 1232 0x772320 WARN audiobasesink gstaudiobasesink.c:1484:gst_audio_base_sink_skew_slaving:<alsasink0> correct clock skew +0:00:00.020221135 > +0:00:00.020000000
0:00:10.135542001 1232 0x772320 WARN audiobasesink gstaudiobasesink.c:1807:gst_audio_base_sink_get_alignment:<alsasink0> Unexpected discontinuity in audio timestamps of -0:00:00.125238095, resyncing
Here is the gst-launch command (wav).
Tx:
GST_DEBUG=3 gst-launch-1.0 filesrc location="background.wav" ! decodebin ! \
audioconvert ! rtpL16pay ! queue ! udpsink host=239.0.0.1 auto-multicast=true port=5004
Rx:
GST_DEBUG=3 gst-launch-1.0 udpsrc multicast-group=239.0.0.1 auto-multicast=true port=5004 \
caps="application/x-rtp, media=audio, clock-rate=44100, payload=0" ! rtpL16depay !\
audioconvert ! alsasink
The GST_DEBUG=3 output is as follows:
Tx:
Setting pipeline to PAUSED ...
0:00:00.116759125 958 0x1c0cc40 WARN basesrc gstbasesrc.c:3486:gst_base_src_start_complete:<filesrc0> pad not activated yet
Pipeline is PREROLLING ...
0:00:00.136465125 958 0x1c1f460 FIXME default gstutils.c:3764:gst_pad_create_stream_id_internal:<wavparse0:src> Creating random stream-id, consider implementing a deterministic way of creating a stream-id
0:00:00.137230750 958 0x1c1f460 WARN riff riff-read.c:794:gst_riff_parse_info:<wavparse0> Unknown INFO (metadata) tag entry IPRT
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
0:00:00.152916625 958 0x1c0cc40 WARN bin gstbin.c:2597:gst_bin_do_latency_func:<pipeline0> did not really configure latency of 0:00:00.000000000
New clock: GstSystemClock
^Chandling interrupt.
Interrupt: Stopping pipeline ...
Execution ended after 0:00:03.435631250
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
Rx:
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
0:00:00.270927792 1238 0x120d320 WARN alsa conf.c:4974:snd_config_expand: alsalib error: Unknown parameters {AES0 0x02 AES1 0x82 AES2 0x00 AES3 0x02}
0:00:00.271261625 1238 0x120d320 WARN alsa pcm.c:2495:snd_pcm_open_noupdate: alsalib error: Unknown PCM default:{AES0 0x02 AES1 0x82 AES2 0x00 AES3 0x02}
0:00:00.284991583 1238 0x120d320 WARN alsa pcm_hw.c:1250:snd_pcm_hw_get_chmap: alsalib error: Cannot read Channel Map ctl
: No such file or directory
Redistribute latency...
0:00:04.227007167 1238 0x120d320 WARN audiobasesink gstaudiobasesink.c:1807:gst_audio_base_sink_get_alignment:<alsasink0> Unexpected discontinuity in audio timestamps of +0:00:00.053514739, resyncing
0:00:04.314387751 1238 0x120d320 WARN audiobasesink gstaudiobasesink.c:1807:gst_audio_base_sink_get_alignment:<alsasink0> Unexpected discontinuity in audio timestamps of +0:00:00.055510204, resyncing
0:00:04.396900334 1238 0x120d320 WARN audiobasesink gstaudiobasesink.c:1807:gst_audio_base_sink_get_alignment:<alsasink0> Unexpected discontinuity in audio timestamps of +0:00:00.052607709, resyncing
0:00:04.483605876 1238 0x120d320 WARN audiobasesink gstaudiobasesink.c:1807:gst_audio_base_sink_get_alignment:<alsasink0> Unexpected discontinuity in audio timestamps of +0:00:00.055215419, resyncing
0:00:04.570297626 1238 0x120d320 WARN audiobasesink gstaudiobasesink.c:1807:gst_audio_base_sink_get_alignment:<alsasink0> Unexpected discontinuity in audio timestamps of +0:00:00.055215419, resyncing
If I use pulsesink instead of alsasink, the following appears:
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Redistribute latency...
0:00:00.410499500 1255 0x70813120 WARN pulse pulsesink.c:702:gst_pulsering_stream_underflow_cb:<pulsesink0> Got underflow
0:00:00.423478917 1255 0x7e7920 WARN audiobasesink gstaudiobasesink.c:1807:gst_audio_base_sink_get_alignment:<pulsesink0> Unexpected discontinuity in audio timestamps of +0:00:00.038095238, resyncing
0:00:00.450453459 1255 0x70813120 WARN pulse pulsesink.c:702:gst_pulsering_stream_underflow_cb:<pulsesink0> Got underflow
What is the problem? Can anybody solve this?
I look forward to your reply. Thank you for reading.
I reckon the problem is that the application/x-rtp parameters don't match between the transmitter and the receiver.
This is easily solved by running the transmitter with -v (verbose) and then using the same caps in the receiver.
Let's see an example:
TX:
gst-launch-1.0 -v filesrc location="test.mp3" ! decodebin ! audioconvert ! rtpL16pay ! queue ! udpsink host=239.0.0.1 auto-multicast=true port=5004
The last lines of its output (thanks to -v) are:
/GstPipeline:pipeline0/GstQueue:queue0.GstPad:sink: caps =
"application/x-rtp\,\ media\=(string)audio\,\
clock-rate\=(int)44100\,\ encoding-name\=(string)L16\,\
encoding-params\=(string)2\,\ channels\=(int)2\,\
payload\=(int)96\,\ ssrc\=(uint)1806894235\,\
timestamp-offset\=(uint)468998694\,\ seqnum-offset\=(uint)20785"
/GstPipeline:pipeline0/GstRtpL16Pay:rtpl16pay0.GstPad:sink: caps =
"audio/x-raw\,\ layout\=(string)interleaved\,\ rate\=(int)44100\,\
format\=(string)S16BE\,\ channels\=(int)2\,\
channel-mask\=(bitmask)0x0000000000000003"
/GstPipeline:pipeline0/GstAudioConvert:audioconvert0.GstPad:sink: caps
= "audio/x-raw\,\ format\=(string)S32LE\,\ layout\=(string)interleaved\,\ rate\=(int)44100\,\
channels\=(int)2\,\ channel-mask\=(bitmask)0x0000000000000003"
/GstPipeline:pipeline0/GstDecodeBin:decodebin0.GstDecodePad:src_0.GstProxyPad:proxypad1:
caps = "audio/x-raw\,\ format\=(string)S32LE\,\
layout\=(string)interleaved\,\ rate\=(int)44100\,\
channels\=(int)2\,\ channel-mask\=(bitmask)0x0000000000000003"
/GstPipeline:pipeline0/GstRtpL16Pay:rtpl16pay0: timestamp = 468998694
/GstPipeline:pipeline0/GstRtpL16Pay:rtpl16pay0: seqnum = 20785
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Using the same parameters in the player or receiver:
RX:
gst-launch-1.0 udpsrc caps='application/x-rtp, media=(string)audio, clock-rate=(int)44100, encoding-name=(string)L16, encoding-params=(string)2, channels=(int)2, payload=(int)96' ! rtpL16depay ! pulsesink
And this plays perfectly.
Moving on to the .wav file, in my case the transmitter is:
gst-launch-1.0 -v filesrc location="test.wav" ! decodebin ! audioconvert ! rtpL16pay ! queue ! udpsink host=239.0.0.1 auto-multicast=true port=5004
The last lines of the output:
/GstPipeline:pipeline0/GstQueue:queue0.GstPad:sink: caps =
"application/x-rtp\,\ media\=(string)audio\,\ clock-rate\=(int)44100\,\ encoding-name\=(string)L16\,\
encoding-params\=(string)1\,\ channels\=(int)1\,\
payload\=(int)96\,\ ssrc\=(uint)620824608\,\
timestamp-offset\=(uint)433377669\,\ seqnum-offset\=(uint)7103"
/GstPipeline:pipeline0/GstRtpL16Pay:rtpl16pay0.GstPad:sink: caps =
"audio/x-raw\,\ layout\=(string)interleaved\,\ rate\=(int)44100\,\
format\=(string)S16BE\,\ channels\=(int)1"
/GstPipeline:pipeline0/GstAudioConvert:audioconvert0.GstPad:sink: caps
= "audio/x-raw\,\ format\=(string)S16LE\,\ layout\=(string)interleaved\,\ channels\=(int)1\,\
rate\=(int)44100"
/GstPipeline:pipeline0/GstDecodeBin:decodebin0.GstDecodePad:src_0.GstProxyPad:proxypad1:
caps = "audio/x-raw\,\ format\=(string)S16LE\,\
layout\=(string)interleaved\,\ channels\=(int)1\,\
rate\=(int)44100"
/GstPipeline:pipeline0/GstRtpL16Pay:rtpl16pay0: timestamp = 433377669
/GstPipeline:pipeline0/GstRtpL16Pay:rtpl16pay0: seqnum = 7103
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Using this information in the receiver:
gst-launch-1.0 udpsrc caps='application/x-rtp, media=(string)audio, clock-rate=(int)44100, encoding-name=(string)L16, encoding-params=(string)1, channels=(int)1, payload=(int)96' ! rtpL16depay ! pulsesink
The audio also plays smoothly.
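For completeness, the same fix applied to the asker's original alsasink receiver would look like this (a sketch combining the question's Rx command with the caps printed by the verbose mp3 transmitter):

```shell
# Caps copied from the transmitter's -v output; the payload and
# encoding parameters must match on both sides.
gst-launch-1.0 udpsrc multicast-group=239.0.0.1 auto-multicast=true port=5004 \
    caps='application/x-rtp, media=(string)audio, clock-rate=(int)44100, encoding-name=(string)L16, encoding-params=(string)2, channels=(int)2, payload=(int)96' \
    ! rtpL16depay ! audioconvert ! alsasink
```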
Hope this helps.

GStreamer hangs while generating a timelapse from JPEGs on Raspberry Pi

Situation:
I want to generate a timelapse on my Raspberry Pi 512mb, using the onboard H.264 encoder.
Input: 300+ JPEG files (2592 x 1944 pixels); example: http://i.imgur.com/czohiki.jpg
Output: h264 video file (2592 x 1944 pixels)
GStreamer 1.0.8 + omxencoder (http://pastebin.com/u8T7mE18)
Raspberry Pi version: Jun 17 2013 20:45:38 version d380dde43fe729f043befb5cf775f99e54586cde (clean) (release)
Memory: gpu_mem_512=400
Gstreamer pipeline:
sudo gst-launch-1.0 -v multifilesrc location=GOPR%04d.JPG \
  start-index=4711 stop-index=4750 \
  caps="image/jpeg,framerate=(fraction)25/1" do-timestamp=true ! \
  omxmjpegdec ! videorate ! video/x-raw,framerate=1/5 ! videoconvert ! \
  omxh264enc ! "video/x-h264,profile=high" ! h264parse ! \
  queue max-size-bytes=10000000 ! matroskamux ! \
  filesink location=test.mkv --gst-debug=4
Problem:
Gstreamer hangs and no output is generated.
--gst-debug=4:
0:00:01.027331700 2422 0x17824f0 INFO GST_EVENT
gstevent.c:709:gst_event_new_segment: creating segment event time
segment start=0:00:00.000000000, stop=99:99:99.999999999,
rate=1.000000, applied_rate=1.000000, flags=0x00,
time=0:00:00.000000000, base=0:00:00.000000000, position
0:00:00.000000000, duration 99:99:99.999999999
0:00:29.346875982 2422 0x17824f0 INFO basesrc
gstbasesrc.c:2619:gst_base_src_loop: pausing after
gst_base_src_get_range() = eos
--gst-debug=5:
0:01:16.089222125 2232 0x1fa8f0 DEBUG basesrc
gstbasesrc.c:2773:gst_base_src_loop: pausing task,
reason eos
0:01:16.095962979 2232 0x1fa8f0 DEBUG GST_PADS
gstpad.c:5251:gst_pad_pause_task: pause task
0:01:16.107724723 2232 0x1fa8f0 DEBUG task
gsttask.c:662:gst_task_set_state: Changing task
0x2180a8 to state 2
0:01:16.435800597 2232 0x1fa8f0 DEBUG GST_EVENT
gstevent.c:300:gst_event_new_custom: creating new event 0x129f80 eos
28174
0:01:16.436191588 2232 0x1fa8f0 DEBUG GST_PADS
gstpad.c:4628:gst_pad_push_event: event eos updated
0:01:16.436414584 2232 0x1fa8f0 DEBUG GST_PADS
gstpad.c:3333:check_sticky: pushing all sticky
events
0:01:16.436620579 2232 0x1fa8f0 DEBUG GST_PADS
gstpad.c:3282:push_sticky: event stream-start was
already received
0:01:16.436816575 2232 0x1fa8f0 DEBUG GST_PADS
gstpad.c:3282:push_sticky: event caps was already
received
0:01:16.437001571 2232 0x1fa8f0 DEBUG GST_PADS
gstpad.c:3282:push_sticky: event segment was
already received
0:01:16.440457495 2232 0x1fa8f0 DEBUG GST_EVENT
gstpad.c:4771:gst_pad_send_event_unchecked:
have event type eos event at time 99:99:99.999999999: (NULL)
0:01:16.449986289 2232 0x1fa8f0 DEBUG videodecoder
gstvideodecoder.c:1144:gst_video_decoder_sink_event:
received event 28174, eos
0:01:16.462165024 2232 0x1fa8f0 DEBUG omxvideodec
gstomxvideodec.c:2489:gst_omx_video_dec_drain:
Draining component
0:01:16.463930986 2232 0x1fa8f0 DEBUG omx
gstomx.c:1223:gst_omx_port_acquire_buffer:
Acquiring video_decode buffer from port 130
0:01:16.465537951 2232 0x1fa8f0 DEBUG omx
gstomx.c:1334:gst_omx_port_acquire_buffer:
video_decode port 130 has pending buffers
0:01:16.466576928 2232 0x1fa8f0 DEBUG omx
gstomx.c:1353:gst_omx_port_acquire_buffer:
Acquired buffer 0x21f938 (0xb2068550) from video_decode port 130: 0
0:01:16.468237892 2232 0x1fa8f0 DEBUG omx
gstomx.c:1375:gst_omx_port_release_buffer:
Releasing buffer 0x21f938 (0xb2068550) to video_decode port 130
0:01:16.470360846 2232 0x1fa8f0 DEBUG omx
gstomx.c:1420:gst_omx_port_release_buffer:
Released buffer 0x21f938 to video_decode port 130: None (0x00000000)
0:01:16.472046809 2232 0x1fa8f0 DEBUG omxvideodec
gstomxvideodec.c:2544:gst_omx_video_dec_drain:
Waiting until component is drained
Full console dump: https://mega.co.nz/#!eI1ASBSY!R4mnuGqRH7M8dT4q6j03mBKsQ1A-7oCXU4stu50LnOw
Question:
What am I doing wrong?
Is there another, more efficient way to create high-res timelapses from JPEGs on a Raspberry Pi?
Sorry about the necro, but I think this is trying to use the Raspberry Pi HW H264 encoder at a higher resolution than it is capable of. It can manage just over 1080p30, and has a maximum line length of 2048 pixels, so your source images are too large.
You could try MJPEG which does not have the same limitation.
I don't have a Pi to test on right now, but I'd suspect one possible issue is that you have two OMX elements in the same process. GStreamer is just wrapping OMX and IIRC the OMX API doesn't really want you running two things at once, particularly in the same process...
I'd try it with a jpegdec instead of omxmjpegdec, with a pipeline more along these lines:
gst-launch-1.0 multifilesrc location="GOPR%04d.JPG" start-index=4711 stop-index=4750 ! image/jpeg,framerate=1/5 ! jpegdec ! videoconvert ! omxh264enc ! h264parse ! matroskamux ! filesink location=test.mkv
I don't think there is any point to using queue elements on the Pi either.