How to ensure GStreamer has grabbed frames before decoding a jpeg

I have a GStreamer pipeline that grabs an MJPEG webcam stream from 3 separate cameras and saves 2 frames from each. I'm executing these commands on the USB 3.0 bus of an Odroid XU4 with Ubuntu 18.04. When doing this, I discovered I would occasionally get a garbled image in the collection of output images.
It wouldn't always happen, but maybe 1 in 5 executions would produce a garbled frame.
I then discovered that if I decode the JPEG and then re-encode it, this never happens. See the pipeline below:
gst-launch-1.0 \
  v4l2src device=/dev/video0 num-buffers=2 ! image/jpeg,width=3840,height=2160 ! \
    queue ! jpegdec ! queue ! jpegenc ! multifilesink location=/root/cam0_%02d.jpg \
  v4l2src device=/dev/video1 num-buffers=2 ! image/jpeg,width=3840,height=2160 ! \
    queue ! jpegdec ! queue ! jpegenc ! multifilesink location=/root/cam1_%02d.jpg \
  v4l2src device=/dev/video2 num-buffers=2 ! image/jpeg,width=3840,height=2160 ! \
    queue ! jpegdec ! queue ! jpegenc ! multifilesink location=/root/cam2_%02d.jpg
Now, when I run this pipeline I have a 1/5 chance of getting this error:
/GstPipeline:pipeline0/GstJpegDec:jpegdec0: No valid frames decoded before end of stream
Is there a way to make GStreamer wait for the frames instead of simply failing? I attempted this by adding the queue elements shown in the pipeline above, without success.
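One workaround sketch, not from the original post: since the corruption seems to affect only the earliest frames, you could grab a few extra buffers and let multifilesink's max-files property keep only the newest files on disk, so a garbled first frame gets deleted again. Untested, and it assumes the bad frame is always among the first ones:
gst-launch-1.0 v4l2src device=/dev/video0 num-buffers=6 ! \
  image/jpeg,width=3840,height=2160 ! queue ! jpegdec ! queue ! \
  jpegenc ! multifilesink location=/root/cam0_%02d.jpg max-files=2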

Related

Run GStreamer pipeline branches without synchronization

I have the following pipeline. One branch is used to display the video; the second uploads a frame every second to an HTTP server.
gst-launch-1.0 -e \
filesrc location=test.mp4 ! queue ! qtdemux name=d d.video_0 ! h264parse ! avdec_h264 ! tee name=t \
t. ! queue ! videoscale ! 'video/x-raw,width=(int)640,height=(int)480' ! autovideosink \
t. ! queue ! videorate ! 'video/x-raw,framerate=1/1' ! jpegenc ! curlhttpsink \
location=http://192.168.100.150:8080/upload_picture \
user=admin passwd=test \
content-type=image/jpeg \
use-content-length=false
The problem occurs when the server is unreachable or does not process uploads fast enough. In that case video playback stops for however long it takes the upload branch to catch up. I would expect tee in combination with queue to let the video keep running from the queued buffers while the queue in the upload branch fills up.
Is such out-of-sync behavior possible? I tried both the sync and async properties, but without the desired result.
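One thing worth trying (a sketch, not from the original post): make the queue in the upload branch leaky, so it drops its oldest buffers instead of blocking the tee when the server stalls. This trades dropped uploads for uninterrupted playback:
gst-launch-1.0 -e \
filesrc location=test.mp4 ! queue ! qtdemux name=d d.video_0 ! h264parse ! avdec_h264 ! tee name=t \
t. ! queue ! videoscale ! 'video/x-raw,width=(int)640,height=(int)480' ! autovideosink \
t. ! queue leaky=downstream max-size-buffers=5 ! videorate ! 'video/x-raw,framerate=1/1' ! jpegenc ! \
curlhttpsink location=http://192.168.100.150:8080/upload_picture \
user=admin passwd=test content-type=image/jpeg use-content-length=false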

Audio and video alignment with gstreamer

I am using something similar to the following pipeline to ingest an RTSP stream from a camera and provide original and 360p (transcoded) variants in a manifest. I am generating this pipeline dynamically using the GStreamer Rust bindings.
The video works fine on the web, in VLC, and in ffplay. However, it fails with AVPlayer (QuickTime).
I found that the issue seems to be the audio/video alignment in the TS segments generated by GStreamer.
How can I ensure that the audio and video are aligned in the TS segments? Can audiobuffersplit be helpful? I am not sure how to use it in a pipeline like mine, where hlssink2 does the muxing internally.
Appreciate any help in this! Thanks!
gst-launch-1.0 \
  hlssink2 name=ingest1 playlist-length=5 max-files=10 target-duration=2 \
    send-keyframe-requests=true playlist-location=/live/stream.m3u8 location=/live/%d.ts \
  rtspsrc location=rtsp://admin:password@10.10.10.20:554/ protocols=4 name=rtspsrc0 \
  rtspsrc0. ! rtph264depay ! tee name=t \
  t. ! queue ! ingest1.video \
  t. ! queue ! decodebin name=video_decoder ! tee name=one_decode \
  one_decode. ! queue ! videorate ! video/x-raw,framerate=15/1 ! videoscale ! \
    video/x-raw,width=640,height=360 ! vaapih264enc ! \
    hlssink2 name=ingest2 target-duration=2 playlist-location=/live/360/stream.m3u8 location=/live/360/%d.ts \
  rtspsrc0. ! decodebin name=audio_decoder ! fdkaacenc ! tee name=audio_t \
  audio_t. ! queue ! ingest1.audio \
  audio_t. ! queue ! ingest2.audio
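As for audiobuffersplit: it operates on raw audio, so if it helps here it would have to sit between the decoder and the AAC encoder. A hedged, untested sketch of that placement (everything else unchanged):
rtspsrc0. ! decodebin name=audio_decoder ! audioconvert ! audiobuffersplit ! fdkaacenc ! tee name=audio_t \
audio_t. ! queue ! ingest1.audio \
audio_t. ! queue ! ingest2.audio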

Extract h264 stream from USB webcam (logitech C920)

So, I'm starting to play around with gstreamer and I'm able to do very simple pipes such as
gst-launch-1.0 -v v4l2src device=/dev/video1 ! video/x-raw,format=YUY2,width=640,height=480,framerate=10/1 ! videoconvert ! autovideosink
Now, as my USB webcam (which is video1, video0 being the computer's built-in camera) supports h264 (I have checked using lsusb), I would like to try to get the h264 feed directly. I understand that this feed is muxed into the mjpeg one, but looking around on the web it seems that GStreamer is able to get it nonetheless.
Since my end goal is to stream it from a BeagleBone, I made an attempt using the solution given in this post (adding a listener from a different terminal):
#sender
gst-launch-1.0 v4l2src device=/dev/video1 ! video/x-264,width=320,height=90,framerate=10/1 ! tcpserversink host=192.168.148.112 port=9999
But this yields the following error:
WARNING: erroneous pipeline: could not link v4l2src0 to tcpserversink0
I also tried something similar to my first command, changing the source caps from raw to h264 (based on that post; trying the full command given there gives the same error message):
gst-launch-1.0 -v v4l2src device=/dev/video1 ! video/x-h264,width=640,height=480,framerate=10/1 ! h264parse ! avdec_h264 ! autovideosink
But again, this did not work either:
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data flow error.
Additional debug info:
gstbasesrc.c(2948): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
streaming task paused, reason not-negotiated (-4)
Execution ended after 0:00:00.036309961
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
I admit this is driving me pretty crazy: looking on SO or elsewhere on the web, there seems to be a lot of people who made it work with exactly the same webcam as the one I have (Logitech C920), but I keep running into issues one after the other.
What would be an example of correct pipe to extract the h264 from that webcam?
You definitely need to use a payloader before it hits the wire, for example rtph264pay. Here is an example that I cannot test, as I don't have your hardware available. I have working UDP examples from other sources if this doesn't steer you in the right direction.
server
gst-launch-1.0 v4l2src device=/dev/video1 \
    ! video/x-raw,width=320,height=90,framerate=10/1 \
    ! videoconvert \
    ! x264enc \
    ! queue \
    ! rtph264pay config-interval=3 pt=96 mtu=1500 \
    ! queue \
    ! tcpserversink host=127.0.0.1 port=9002
client
gst-launch-1.0 tcpclientsrc host=127.0.0.1 port=9002 \
    ! application/x-rtp,media=video,clock-rate=90000,encoding-name=H264,payload=96 \
    ! rtph264depay \
    ! h264parse \
    ! avdec_h264 \
    ! queue \
    ! xvimagesink
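If the goal is the camera's own hardware encoder rather than re-encoding with x264enc, the C920 is also supported by the uvch264src element from gst-plugins-bad, which exposes the encoded stream on its vidsrc pad (the viewfinder pad has to be linked as well). A sketch I cannot test without the hardware:
gst-launch-1.0 uvch264src device=/dev/video1 auto-start=true name=src \
  src.vfsrc ! queue ! video/x-raw,width=320,height=240,framerate=10/1 ! fakesink sync=false \
  src.vidsrc ! queue ! video/x-h264,width=1280,height=720,framerate=30/1 ! h264parse ! avdec_h264 ! autovideosink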

Is it possible to record a .webm file using gstreamer?

I'd like to record a .webm file beside my main .mkv file, and serve that .webm file to a video element on an HTML page (a kind of simple streaming, just to see what is being recorded).
I'm using pipeline below (with tee for this purpose) to record from my webcam:
gst-launch-1.0 v4l2src device=/dev/video1 ! tee name=t t. \
! image/jpeg,width=1920,height=1080 ! capssetter \
caps='image/jpeg,width=1920,height=1080,framerate=30/1' ! queue \
! matroskamux name=mux pulsesrc device="alsa_input.usb-046d_Logitech_Webcam_C930e_AAF8A63E-02-C930e.analog-stereo" \
! 'audio/x-raw,channels=1,rate=44100' ! audioconvert ! vorbisenc ! queue \
! mux. mux. ! filesink location=/home/sina/Desktop/Recordings/Webcam.mkv \
t. ! queue ! (...pipeline?...) ! filesink location=/home/sina/Desktop/Recordings/TestWebcam.webm
How should I fill in the pipeline on the last line? (What structure? Which encoder? Which muxer? ...)
While it is still possible to convert a stream of JPEG pictures to .webm with a VP8 stream inside, it is a costly operation and the results will not be pretty: the encode→decode→re-encode sequence degrades the output badly (and uses more CPU).
If you don't need the JPEGs and don't care about the video format inside the .mkv file, the easiest solution is to use a single VP8 encoder (because both .mkv and .webm files can contain VP8) and split the encoded stream:
gst-launch-1.0 -e \
v4l2src ! vp8enc ! tee name=t ! \
queue ! matroskamux name=m ! filesink location=1.mkv \
pulsesrc ! vorbisenc ! m. \
t. ! \
queue ! webmmux ! filesink location=1.webm
Also, make sure you use the -e option to force EOS when you terminate the command via Ctrl+C.
GStreamer's WebM muxer is a very thin layer over the Matroska muxer: .webm is almost identical to .mkv.
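If the .webm copy should carry the audio as well, the encoded audio stream can be split with a second tee in the same way (a sketch along the same lines, untested):
gst-launch-1.0 -e \
v4l2src ! vp8enc ! tee name=vt \
pulsesrc ! audioconvert ! vorbisenc ! tee name=at \
vt. ! queue ! matroskamux name=m ! filesink location=1.mkv \
at. ! queue ! m. \
vt. ! queue ! webmmux name=w ! filesink location=1.webm \
at. ! queue ! w.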

Recording audio+video from webcam with gstreamer

I'm having a problem trying to record audio+video from my webcam to a file. If I use videotestsrc and autoaudiosrc I get everything right (read as: I get a file with audio recorded from the webcam's mic and the test-video image), but as soon as I replace videotestsrc with v4l2src (or autovideosrc) I get Error starting streaming on device '/dev/video0'.
The command I'm using:
gst-launch-0.10 videotestsrc ! queue ! ffmpegcolorspace ! theoraenc ! queue ! oggmux name=mux \
  autoaudiosrc ! queue ! audioconvert ! vorbisenc ! queue ! mux. \
  mux. ! queue ! filesink location=test.ogg
Why is that happening? What am I doing wrong?
EDIT:
In fact, something as simple as
gst-launch-0.10 autovideosrc ! autovideosink autoaudiosrc ! autoaudiosink
is failing with the same error (Error starting streaming on device '/dev/video0')
Replacing autovideosrc with videotestsrc gives me test image + real audio.
Replacing autoaudiosrc with audiotestsrc gives me real image + test audio.
I'm starting to think that this is some kind of limitation of my webcam. Is that possible?
EDIT:
GST_DEBUG=2 log here: http://pastie.org/4755009
EDIT 2:
GST_DEBUG="v4l2*:5" (gstreamer 0.10): http://pastie.org/4810519
GST_DEBUG="v4l2*:5" (gstreamer 1.0): http://pastie.org/4810502
Please do a
gst-launch-1.0 v4l2src ! videoscale ! videoconvert ! autovideosink
Does that run? If not, repeat it as
GST_DEBUG="v4l2*:5" GST_DEBUG_NO_COLOR=1 gst-launch 2>debug.log ...
and check the log for errors. You might also want to run v4l-info (from the v4l-conf package on Debian/Ubuntu) and report which formats your camera supports.
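If v4l-info is not available, v4l2-ctl from the v4l-utils package reports the same information, for example:
v4l2-ctl --device=/dev/video0 --list-formats-ext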