How can I create a live stream using gstreamer? - gstreamer

I would like to stream my webcam. I tried with VLC, but I'm getting a 10-15 s delay between the server and the client on the same network:
vlc v4l2:// :v4l2-dev=/dev/video0 :v4l2-width=640 :v4l2-height=480 --sout="#transcode{vcodec=h264,vb=800,scale=1,acodec=mp4a,ab=128,channels=2,samplerate=44100}:rtp{sdp=rtsp://:8554/live.ts}" -I dummy
Now I would like to try GStreamer, but I couldn't find any examples. How can I set up a live webcam stream (RTSP or HTTP) using GStreamer?

To create a YouTube live event, you need an RTMP stream containing H.264 (x264) video and AAC audio:
gst-launch -v videotestsrc \
! video/x-raw-yuv,width=640,height=480,framerate=30/1 \
! x264enc key-int-max=60 \
! h264parse \
! flvmux name=mux \
audiotestsrc ! queue ! audioconvert ! ffenc_aac ! aacparse ! mux. \
mux. ! rtmpsink location="rtmp://<stream-server-url>/"
Key frames in a live feed must appear at least every 2 seconds, so key-int-max should be set to twice the framerate (here 2 × 30 = 60).
Note that RTMP works over TCP, so on a bad channel it will suffer significant delays.
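For reference, the pipeline above uses GStreamer 0.10-era names (gst-launch, video/x-raw-yuv, ffenc_aac). A rough GStreamer 1.0 equivalent, untested here, would look something like this (voaacenc from gst-plugins-bad is an assumption; avenc_aac should also work):
gst-launch-1.0 -v videotestsrc is-live=true \
! video/x-raw,width=640,height=480,framerate=30/1 \
! x264enc key-int-max=60 \
! h264parse \
! flvmux name=mux streamable=true \
audiotestsrc is-live=true ! queue ! audioconvert ! voaacenc ! aacparse ! mux. \
mux. ! rtmpsink location="rtmp://<stream-server-url>/"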

Take a look at the rtsp-server examples in
http://cgit.freedesktop.org/gstreamer/gst-rtsp-server/tree/examples
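If you just want something quick to test with, the test-launch example in that directory wraps a gst-launch-style description in an RTSP server. A sketch (untested; the device and caps are assumptions) that should serve the webcam at rtsp://<host>:8554/test:
./test-launch "( v4l2src device=/dev/video0 ! video/x-raw,width=640,height=480,framerate=30/1 ! videoconvert ! x264enc tune=zerolatency ! rtph264pay name=pay0 pt=96 )"
It can then be played with, for example, vlc rtsp://127.0.0.1:8554/test or gst-launch-1.0 playbin uri=rtsp://127.0.0.1:8554/test.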

Related

Audio and video alignment with gstreamer

I am using something similar to the following pipeline to ingest an RTSP stream from a camera and provide the original and a 360p (transcoded) variant in a manifest. I am generating this pipeline dynamically using the GStreamer Rust bindings.
The video works fine in web browsers, VLC and ffplay. However, it fails with AVPlayer (QuickTime).
I found that the issue seems to be the audio/video alignment in the TS segments generated by GStreamer.
How can I ensure that the audio and video are aligned in the TS segments? Can audiobuffersplit be helpful? I am not sure how to use it in a pipeline like mine, where hlssink2 does the muxing internally.
Appreciate any help in this! Thanks!
gst-launch-1.0 hlssink2 name=ingest1 playlist-length=5 max-files=10 \
target-duration=2 send-keyframe-requests=true \
playlist-location=/live/stream.m3u8 location=/live/%d.ts \
rtspsrc location=rtsp://admin:password@10.10.10.20:554/ protocols=4 \
name=rtspsrc0 rtspsrc0. ! rtph264depay ! tee name=t \
t. ! queue ! ingest1.video \
t. ! queue ! decodebin name=video_decoder ! tee name=one_decode \
one_decode. ! queue ! videorate ! video/x-raw,framerate=15/1 ! \
videoscale ! video/x-raw,width=640,height=360 ! vaapih264enc ! hlssink2 \
name=ingest2 target-duration=2 playlist-location=/live/360/stream.m3u8 \
location=/live/360/%d.ts \
rtspsrc0. ! decodebin name=audio_decoder ! fdkaacenc ! tee name=audio_t \
audio_t. ! queue ! ingest1.audio \
audio_t. ! queue ! ingest2.audio
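Not an answer I can verify against AVPlayer, but for what it's worth: audiobuffersplit works on raw audio, so if it helps at all it would sit between the audio decoder and the AAC encoder, roughly like this (sketch only; the output-buffer-duration value is an arbitrary assumption):
rtspsrc0. ! decodebin name=audio_decoder ! audioconvert ! audioresample ! \
audiobuffersplit output-buffer-duration=1/50 ! fdkaacenc ! tee name=audio_t \
audio_t. ! queue ! ingest1.audio \
audio_t. ! queue ! ingest2.audio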

How to ensure GStreamer has grabbed frames before decoding a jpeg

I have a GStreamer pipeline that grabs an MJPEG webcam stream from 3 separate cameras and saves 2 frames from each. I'm executing these commands on the USB 3.0 bus of an Odroid XU4 with Ubuntu 18.04. When doing this, I discovered I would occasionally get a garbled image in the collection of output images.
It doesn't always happen, but in maybe 1 out of 5 executions an image comes out like that.
I then discovered that if I decode the JPEG and then re-encode it, this never happens. See the pipeline below:
gst-launch-1.0 v4l2src device=/dev/video0 num-buffers=2 ! \
image/jpeg,width=3840,height=2160 ! queue ! jpegdec ! queue ! \
jpegenc ! multifilesink location=/root/cam0_%02d.jpg v4l2src \
device=/dev/video1 num-buffers=2 ! image/jpeg,width=3840,height=2160 ! \
queue ! jpegdec ! queue ! jpegenc ! multifilesink \
location=/root/cam1_%02d.jpg v4l2src device=/dev/video2 num-buffers=2 ! \
image/jpeg,width=3840,height=2160 ! queue ! jpegdec ! queue ! jpegenc ! \
multifilesink location=/root/cam2_%02d.jpg
Now, when I run this pipeline I have a 1/5 chance of getting this error:
/GstPipeline:pipeline0/GstJpegDec:jpegdec0: No valid frames decoded before end of stream
Is there a way to make GStreamer wait for the frames instead of simply failing? I attempted this by adding the queue elements in the pipeline above, without success.
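One hedged idea, not verified on this hardware: putting jpegparse in front of each jpegdec so the decoder only ever receives complete JPEG frames, e.g. for the first camera:
gst-launch-1.0 v4l2src device=/dev/video0 num-buffers=2 ! \
image/jpeg,width=3840,height=2160 ! jpegparse ! queue ! jpegdec ! queue ! \
jpegenc ! multifilesink location=/root/cam0_%02d.jpg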

Extract h264 stream from USB webcam (logitech C920)

So, I'm starting to play around with GStreamer and I'm able to run very simple pipelines such as
gst-launch-1.0 -v v4l2src device=/dev/video1 ! video/x-raw,format=YUY2,width=640,height=480,framerate=10/1 ! videoconvert ! autovideosink
Now, as my USB webcam (which is video1, video0 being the computer's built-in camera) supports H.264 (I checked using lsusb), I would like to get the H.264 feed directly. I understand that this feed is muxed into the MJPEG one, but looking around on the web it seems that GStreamer is able to extract it nonetheless.
Since my end goal is to stream it from a BeagleBone, I made an attempt using the solution given in this post (adding a listener from a different terminal):
#sender
gst-launch-1.0 v4l2src device=/dev/video1 ! video/x-264,width=320,height=90,framerate=10/1 ! tcpserversink host=192.168.148.112 port=9999
But this yields the following error:
WARNING: erroneous pipeline: could not link v4l2src0 to tcpserversink0
I also tried something similar to my first command, changing the source caps from raw to H.264 (based on that post; trying the full command given there gives the same error message):
gst-launch-1.0 -v v4l2src device=/dev/video1 ! video/x-h264,width=640,height=480,framerate=10/1 ! h264parse ! avdec_h264 ! autovideosink
But again, this did not work either:
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data flow error.
Additional debug info:
gstbasesrc.c(2948): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
streaming task paused, reason not-negotiated (-4)
Execution ended after 0:00:00.036309961
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
I admit this is driving me pretty crazy: looking on SO and elsewhere on the web, there seem to be a lot of people who have made this work with exactly the same webcam as mine (a Logitech C920), but I keep running into one issue after another.
What would be an example of a correct pipeline to extract the H.264 stream from that webcam?
You definitely need to use a payloader before the stream hits the wire, for example rtph264pay. Here is an example that I cannot test, as I don't have your hardware available. I have working UDP examples from other sources if this doesn't steer you in the right direction.
server
gst-launch v4l2src device=/dev/video1 \
! video/x-h264,width=320,height=90,framerate=10/1 \
! h264parse \
! queue \
! rtph264pay config-interval=3 pt=96 mtu=1500 \
! queue \
! tcpserversink host=127.0.0.1 port=9002
client
gst-launch tcpclientsrc host=127.0.0.1 port=9002 \
! application/x-rtp, media=video, clock-rate=90000, encoding-name=H264, payload=96 \
! rtph264depay \
! video/x-h264 \
! queue \
! ffdec_h264 \
! queue \
! xvimagesink
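The example above is written against GStreamer 0.10 (gst-launch, ffdec_h264). A GStreamer 1.0 sketch of the same idea over UDP/RTP, taking the camera's native H.264 instead of re-encoding (untested; the host, port and caps values are assumptions):
server
gst-launch-1.0 v4l2src device=/dev/video1 \
! video/x-h264,width=1280,height=720,framerate=30/1 \
! h264parse \
! rtph264pay config-interval=1 pt=96 \
! udpsink host=192.168.148.112 port=5000
client
gst-launch-1.0 udpsrc port=5000 \
caps="application/x-rtp,media=video,clock-rate=90000,encoding-name=H264,payload=96" \
! rtph264depay ! h264parse ! avdec_h264 ! autovideosink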

I want to create an HLS (HTTP Live Streaming) stream using GStreamer, but audio only

What I want to do is create an m3u8 file out of an ALSA soundcard input.
Like:
arecord hw:1,0 -d 10 test.wav | gst-launch-1.0 ....
I tried this for testing:
gst-launch-1.0 audiotestsrc ! audioconvert ! audioresample ! hlssink
but it doesn't work.
Thank you for helping.
You can't create HLS transport segments (.ts) directly from a raw audio source. You need to encode the audio and then mux it before sending it to the hlssink plugin.
One of the problems you'll encounter is that the hlssink plugin won't split the segments when there is only an audio stream, so you are going to need something like keyunitsscheduler to split the stream correctly and create the files.
An example pipeline using voaacenc to encode the audio and mpegtsmux to mux it would be as follows:
gst-launch-1.0 audiotestsrc is-live=true ! audioconvert ! voaacenc bitrate=128000 ! aacparse ! audio/mpeg ! queue ! mpegtsmux ! keyunitsscheduler interval=5000000000 ! hlssink playlist-length=5 max-files=10 target-duration=5 playlist-root="http://localhost/hls/" playlist-location="/var/www/html/hls/stream0.m3u8" location="/var/www/html/hls/fragment%05d.ts"
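Since the question is about an ALSA soundcard input rather than a test source, swapping in alsasrc should give the same result (untested; the device string hw:1,0 is taken from the question):
gst-launch-1.0 alsasrc device=hw:1,0 ! audioconvert ! audioresample ! voaacenc bitrate=128000 ! aacparse ! audio/mpeg ! queue ! mpegtsmux ! keyunitsscheduler interval=5000000000 ! hlssink playlist-length=5 max-files=10 target-duration=5 playlist-root="http://localhost/hls/" playlist-location="/var/www/html/hls/stream0.m3u8" location="/var/www/html/hls/fragment%05d.ts"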

Is it possible to record a .webm file using gstreamer?

I'd like to record a .webm file alongside my main .mkv file and serve that .webm file to a video element on an HTML page (a kind of simple streaming, just to see what is being recorded).
I'm using the pipeline below (with a tee for this purpose) to record from my webcam:
gst-launch-1.0 v4l2src device=/dev/video1 ! tee name=t t. \
! image/jpeg,width=1920,height=1080 ! capssetter \
caps='image/jpeg,width=1920,height=1080,framerate=30/1' ! queue \
! matroskamux name=mux pulsesrc device="alsa_input.usb-046d_Logitech_Webcam_C930e_AAF8A63E-02-C930e.analog-stereo" \
! 'audio/x-raw,channels=1,rate=44100' ! audioconvert ! vorbisenc ! queue \
! mux. mux. ! filesink location=/home/sina/Desktop/Recordings/Webcam.mkv \
t. ! queue ! (...pipeline?...) ! filesink location=/home/sina/Desktop/Recordings/TestWebcam.webm
How should I fill in the pipeline on the last line? (What structure? Which encoder? Which muxer?)
While it is still possible to convert a stream of JPEG pictures to .webm with a VP8 stream inside, it is a costly operation and the results will not be pretty: the encode→decode→encode sequence degrades the output badly (and uses more CPU).
If you don't need the JPEGs and don't care about the video format inside the .mkv file, the easiest solution is to use a single VP8 encoder (because both .mkv and .webm files can contain VP8) and split the encoded stream:
gst-launch-1.0 -e \
v4l2src ! vp8enc ! tee name=t ! \
queue ! matroskamux name=m ! filesink location=1.mkv \
pulsesrc ! vorbisenc ! m. \
t. ! \
queue ! webmmux ! filesink location=1.webm
Also, make sure you use the -e option to force an EOS when you terminate the command via Ctrl + C.
GStreamer's WebM muxer is a very thin layer over the Matroska muxer: .webm is almost identical to .mkv.
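For completeness, if you do want to keep the MJPEG recording in the .mkv and only transcode the tee'd branch (the more expensive first option described above), the placeholder in the question's last line could be filled in roughly like this (a sketch, untested):
t. ! queue ! jpegdec ! videoconvert ! vp8enc deadline=1 ! webmmux ! filesink location=/home/sina/Desktop/Recordings/TestWebcam.webm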