Audio and video alignment with gstreamer - gstreamer

I am using something similar to the following pipeline to ingest an RTSP stream from a camera and provide original and 360p (transcoded) variants in a manifest. I am generating this pipeline dynamically with the GStreamer Rust bindings.
The video works fine on the web, in VLC, and in ffplay. However, it fails with AVPlayer (QuickTime).
I found that the issue seems to be audio/video alignment in the TS segments generated by GStreamer.
How can I ensure that the audio and video are aligned in the TS segments? Could audiobuffersplit help? I am not sure how to use it in a pipeline like mine, where hlssink2 does the muxing internally.
Appreciate any help in this! Thanks!
gst-launch-1.0 hlssink2 name=ingest1 playlist-length=5 max-files=10
target-duration=2 send-keyframe-requests=true
playlist-location=/live/stream.m3u8 location=/live/%d.ts \
rtspsrc location=rtsp://admin:password@10.10.10.20:554/ protocols=4
name=rtspsrc0 rtspsrc0. ! rtph264depay ! tee name=t t. ! queue !
ingest1.video \
t. ! queue ! decodebin name=video_decoder ! tee name=one_decode \
one_decode. ! queue ! videorate ! video/x-raw,framerate=15/1 !
videoscale ! video/x-raw,width=640,height=360 ! vaapih264enc ! hlssink2
name=ingest2 target-duration=2 playlist-location=/live/360/stream.m3u8
location=/live/360/%d.ts \
rtspsrc0. ! decodebin name=audio_decoder ! fdkaacenc ! tee name=audio_t \
audio_t. ! queue ! ingest1.audio \
audio_t. ! queue ! ingest2.audio
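One possibility, as a hedged and untested sketch (not from the original thread): audiobuffersplit operates on raw audio, so it would have to sit between the audio decoder and fdkaacenc, producing fixed-duration raw buffers so that encoded frames, and therefore segment boundaries, line up more predictably. The output-buffer-duration value below is an assumption to tune, not a recommendation:

```shell
# Sketch: only the audio branch of the original pipeline changes.
# audiobuffersplit works on raw audio, so it must precede the encoder.
# output-buffer-duration is a fraction in seconds (here 1/25 s).
rtspsrc0. ! decodebin name=audio_decoder ! \
  audiobuffersplit output-buffer-duration=1/25 ! \
  fdkaacenc ! tee name=audio_t \
audio_t. ! queue ! ingest1.audio \
audio_t. ! queue ! ingest2.audio
```

The rest of the pipeline stays as in the question; hlssink2 still muxes internally, it simply receives more regular audio buffers.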

Related

gstreamer playbin3 to kinesis pipeline: audio stream missing

Firstly, big thanks to the gstreamer community for your excellent software.
I'm trying to use GStreamer to consume a DASH/HLS/MSS stream (using playbin3) and restream it to AWS Kinesis Video:
gst-launch-1.0 -v -e \
playbin3 uri=https://dash.akamaized.net/dash264/TestCasesUHD/2b/2/MultiRate.mpd \
video-sink="videoconvert ! x264enc bframes=0 key-int-max=45 bitrate=2048 ! queue ! kvssink name=kvss stream-name=\"test_stream\" access-key=${AWS_ACCESS_KEY_ID} secret-key=${AWS_SECRET_ACCESS_KEY}" \
audio-sink="audioconvert ! audioresample ! avenc_aac ! kvss."
After much experimentation I decided against using uridecodebin3, as it did not handle the incoming stream as completely.
The above command results in a video stream on KVS, but the audio is missing. I tried moving kvssink out of the video-sink pipeline and referencing it as kvss. in both sinks, but that fails to link.
I can create separate kvs streams for the audio and video but would prefer them to be muxed.
Does anyone know if this is even possible? I'm open to other stacks for this.
SOLVED
Just posting back here in case anyone else comes across this problem.
I've got this working using streamlink to restream locally over http:
streamlink <streamUrl> best --player-external-http --player-external-http-port <httpport>
Then using the java JNI bindings for gstreamer to run this pipeline:
kvssink name=kvs stream-name=<streamname> access-key=<awskey> \
secret-key=<awssecret> aws-region=<awsregion> \
uridecodebin3 uri=http://localhost:<port> name=d \
d. ! queue2 ! videoconvert ! videorate ! x264enc bframes=0 key-int-max=45 \
bitrate=2048 tune=zerolatency ! queue2 ! kvs. \
d. ! queue2 ! audioconvert ! audioresample ! avenc_aac ! queue2 ! kvs.
I needed to use Java to pause and restart the stream on buffering discontinuities so as not to break the stream.
Files now arrive in KVS complete with audio.

gstreamer saved files have no audio

I'm trying to use this command to create multiple files from a stream, but they have no audio playback. I think decodebin should be dealing with it; what am I doing wrong?
gst-launch-1.0 -e filesrc location=video.mp4 ! queue ! decodebin ! queue ! videoconvert ! queue ! timeoverlay ! x264enc key-int-max=10 ! h264parse ! splitmuxsink location=videos/test%02d.mp4 max-size-time=1000000000000
Why do you assume that decodebin will handle it? decodebin decodes the audio track to raw audio and exposes an audio pad. If you don't make use of that pad, the audio will not end up in the file.
Since you transcode, you will have to re-encode the audio too:
gst-launch-1.0 -e filesrc location=video.mp4 ! queue ! decodebin ! queue ! \
videoconvert ! queue ! timeoverlay ! x264enc key-int-max=10 ! h264parse ! \
splitmuxsink location=videos/test%02d.mp4 max-size-time=1000000000000 \
decodebin0. ! queue ! voaacenc ! aacparse ! splitmuxsink0.
If you don't want to re-encode but rather pass the audio through, decodebin is the wrong tool. parsebin may be a better fit in that case.
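As a hedged sketch of that passthrough idea (using qtdemux with explicit pads instead of parsebin for clarity, and assuming H.264 video and AAC audio inside video.mp4; the pad names below are assumptions about that layout):

```shell
# Sketch: re-encode the video branch, but pass the audio through untouched.
# Assumes H.264 video and AAC audio in video.mp4.
gst-launch-1.0 -e filesrc location=video.mp4 ! qtdemux name=d \
  d.video_0 ! queue ! avdec_h264 ! videoconvert ! timeoverlay ! \
    x264enc key-int-max=10 ! h264parse ! \
    splitmuxsink name=smux location=videos/test%02d.mp4 max-size-time=1000000000000 \
  d.audio_0 ! queue ! aacparse ! smux.audio_0
```

Because the audio is only parsed, not decoded, it is copied into each output file without generation loss or extra CPU cost.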

How to ensure GStreamer has grabbed frames before decoding a jpeg

I have a GStreamer pipeline that grabs an MJPEG webcam stream from 3 separate cameras and saves 2 frames from each. I'm executing these commands on the USB 3.0 bus of an Odroid XU4 with Ubuntu 18.04. When doing this, I discovered I would occasionally get a garbled image in the collection of output images.
It wouldn't always happen, but in maybe 1 out of 5 executions an image might look like that.
I then discovered that if I decode the JPEG and then re-encode it, this never happens. See the pipeline below:
gst-launch-1.0 v4l2src device=/dev/video0 num-buffers=2 ! \
image/jpeg,width=3840,height=2160 ! queue ! jpegdec ! queue ! \
jpegenc ! multifilesink location=/root/cam0_%02d.jpg v4l2src \
device=/dev/video1 num-buffers=2 ! image/jpeg,width=3840,height=2160 ! \
queue ! jpegdec ! queue ! jpegenc ! multifilesink \
location=/root/cam1_%02d.jpg v4l2src device=/dev/video2 num-buffers=2 ! \
image/jpeg,width=3840,height=2160 ! queue ! jpegdec ! queue ! jpegenc ! \
multifilesink location=/root/cam2_%02d.jpg
Now, when I run this pipeline I have a 1/5 chance of getting this error:
/GstPipeline:pipeline0/GstJpegDec:jpegdec0: No valid frames decoded before end of stream
Is there a way to make GStreamer wait for the frames instead of simply failing? I attempted this by adding `! queue !` in the above pipeline, without success.
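The thread above has no accepted answer, but one hedged thing to try (an assumption on my part, not from the original post) is placing jpegparse in front of jpegdec, so the decoder only ever receives complete JPEG frames rather than partial buffers at end of stream. A sketch for a single camera:

```shell
# Sketch: jpegparse assembles complete JPEG frames before the decoder,
# which may avoid both the garbled images and the
# "No valid frames decoded before end of stream" error.
gst-launch-1.0 v4l2src device=/dev/video0 num-buffers=2 ! \
  image/jpeg,width=3840,height=2160 ! jpegparse ! queue ! jpegdec ! queue ! \
  jpegenc ! multifilesink location=/root/cam0_%02d.jpg
```

The same change would be repeated for /dev/video1 and /dev/video2 in the full three-camera command.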

How to improve performance on screencasts with audio using GStreamer?

I am trying to write a GStreamer pipeline that captures the screen, overlays a box in the corner capturing the webcam, and records audio, all at the same time.
If I hit Ctrl+C to stop after ten seconds, for example, I find I have only recorded about 2 seconds of video (and audio). I don't actually care whether the recording happens in real time; I just want GStreamer to record the full length it should.
This is the pipeline I have so far:
gst-launch-1.0 --gst-debug=3 ximagesrc use-damage=0 \
! video/x-raw,width=1366,height=768,framerate=30/1 ! videoconvert \
! videomixer name=mix sink_0::alpha=1 sink_1::alpha=1 sink_1::xpos=1046 sink_1::ypos=528 \
! videoconvert ! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 \
! vp8enc ! webmmux name=mux ! filesink location="out.webm" \
pulsesrc ! audioconvert ! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 ! vorbisenc ! mux. \
v4l2src do-timestamp=true ! video/x-raw,width=320,height=240,framerate=30/1 ! mix.
I hope someone has a solution. Thank you.
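This thread also lacks an answer; one hedged guess (an assumption, not from the original post) is that vp8enc defaults to its slow, best-quality deadline, so encoding cannot keep up with a 1366x768@30 capture and frames are dropped. A sketch of the realtime encoder settings, changing only the encoder portion of the pipeline above (the exact cpu-used/threads values are guesses to tune):

```shell
# Sketch: only the encoder line changes relative to the original pipeline.
# deadline=1 selects vp8enc's fastest (realtime) mode; cpu-used and threads
# trade quality for encoding speed.
! videoconvert ! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 \
! vp8enc deadline=1 cpu-used=4 threads=4 ! webmmux name=mux ! filesink location="out.webm" \
```

If the encoder still cannot keep up, lowering the capture resolution or framerate in the caps is the next lever.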

Is it possible to record a .webm file using gstreamer?

I'd like to record a .webm file beside my main .mkv file and serve that .webm file to a video element on an HTML page (a kind of simple streaming, just to see what is being recorded).
I'm using pipeline below (with tee for this purpose) to record from my webcam:
gst-launch-1.0 v4l2src device=/dev/video1 ! tee name=t t. \
! image/jpeg,width=1920,height=1080 ! capssetter \
caps='image/jpeg,width=1920,height=1080,framerate=30/1' ! queue \
! matroskamux name=mux pulsesrc device="alsa_input.usb-046d_Logitech_Webcam_C930e_AAF8A63E-02-C930e.analog-stereo" \
! 'audio/x-raw,channels=1,rate=44100' ! audioconvert ! vorbisenc ! queue \
! mux. mux. ! filesink location=/home/sina/Desktop/Recordings/Webcam.mkv \
t. ! queue ! (...pipeline?...) ! filesink location=/home/sina/Desktop/Recordings/TestWebcam.webm
How should I fill in the pipeline on the last line? (What structure? Which encoder? Which muxer?)
While it is still possible to convert a stream of JPEG pictures to .webm with a VP8 stream inside, it is a CPU-consuming operation and the results will not be pretty: the encode→decode→encode sequence degrades the output badly (and uses more CPU).
If you don't need the JPEGs and don't care about the video format inside the .mkv file, the easiest solution is to use a single VP8 encoder (both .mkv and .webm files can contain VP8) and split the encoded stream:
gst-launch-1.0 -e \
v4l2src ! vp8enc ! tee name=t ! \
queue ! matroskamux name=m ! filesink location=1.mkv \
pulsesrc ! vorbisenc ! m. \
t. ! \
queue ! webmmux ! filesink location=1.webm
Also, make sure you use the -e option to force EOS when you terminate the command via Ctrl+C.
The GStreamer WebM muxer is a very thin layer over the Matroska muxer: .webm is almost identical to .mkv.