I want to create an HLS (HTTP Live Streaming) stream using GStreamer, but audio only

What I want to do is create an .m3u8 file from an ALSA sound card input.
Like:
arecord -D hw:1,0 -d 10 test.wav | gst-launch-1.0 ....
I tried this for testing:
gst-launch-1.0 audiotestsrc ! audioconvert ! audioresample ! hlssink
but it doesn't work.
Thank you for helping.

You can’t create HLS MPEG transport stream segments (.ts) directly from a raw audio source. You need to encode the audio with some encoder and then mux it before sending it to the hlssink element.
One problem you’ll encounter is that hlssink won’t split the segments when there is only an audio stream, so you need something like keyunitsscheduler to force key-unit events at regular intervals, so the stream is split correctly and the files are created.
An example pipeline using voaacenc to encode the audio and mpegtsmux to mux it would be as follows:
gst-launch-1.0 audiotestsrc is-live=true ! audioconvert ! voaacenc bitrate=128000 ! aacparse ! audio/mpeg \
! queue ! mpegtsmux ! keyunitsscheduler interval=5000000000 \
! hlssink playlist-length=5 max-files=10 target-duration=5 playlist-root="http://localhost/hls/" \
playlist-location="/var/www/html/hls/stream0.m3u8" location="/var/www/html/hls/fragment%05d.ts"
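To capture from the ALSA input in the question instead of the test source, swap in alsasrc; a sketch with the same encoder/muxer chain (untested, assuming card 1, device 0 as in the arecord example):
gst-launch-1.0 alsasrc device=hw:1,0 ! audioconvert ! audioresample \
! voaacenc bitrate=128000 ! aacparse ! audio/mpeg \
! queue ! mpegtsmux ! keyunitsscheduler interval=5000000000 \
! hlssink playlist-length=5 max-files=10 target-duration=5 playlist-root="http://localhost/hls/" \
playlist-location="/var/www/html/hls/stream0.m3u8" location="/var/www/html/hls/fragment%05d.ts"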

Related

Demux video and KLV data from MPEG-TS stream, in sync

I need to demux the video frames and KLV data from an MPEG-TS stream in sync, frame-by-frame.
The following command demuxes the KLV data and outputs it to a text file:
gst-launch-1.0 filesrc location="some_file.ts" ! tsdemux name=demux \
demux. ! queue ! meta/x-klv ! filesink location="some_file-KLV.txt"
The following command demuxes the video and outputs it to a video file:
gst-launch-1.0 filesrc location="some_file.ts" ! tsdemux name=demux \
demux. ! queue ! decodebin ! videorate ! videoscale ! x264enc ! mp4mux ! filesink location="some_file-video.mp4"
Combining the above two:
gst-launch-1.0 filesrc location="some_file.ts" ! tsdemux name=demux \
demux. ! queue ! decodebin ! videorate ! videoscale ! x264enc ! mp4mux ! filesink location="some_file-video.mp4" \
demux. ! queue ! meta/x-klv ! filesink location="some_file.txt"
The command doesn't work. It just gets stuck after the following messages on the terminal:
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
and the text and video files are 0 bytes in size.
An example .ts file can be found here: https://drive.google.com/drive/folders/1AIbCGTqjk8NgA4R818pGSvU1UCcm-lib?usp=sharing (I did not create or upload this file; it is part of the data for some code on GitHub: https://gist.github.com/All4Gis/509fbe06ce53a0885744d16595811e6f).
Thank you for helping! Cheers. :)
Edit:
I realised that there can be some confusion. The files in the link above were just used to create the .ts file.
The .ts file I am using is available directly at either of the links below:
https://drive.google.com/drive/folders/1t-u8rnEE2MftWQkS1q3UB-J3ogXBr3p9?usp=sharing
https://easyupload.io/xufeny
It seems that if we use GStreamer's multiqueue element instead of queue, the files are created.
I tried this based on a suggestion from a commenter on another website where I had posted the question.
But the KLV data and frames are still not in sync; that is what I am trying to fix now.
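For reference, the substitution would look something like this (untested; multiqueue exposes request pads sink_%u/src_%u, which pair up internally):
gst-launch-1.0 filesrc location="some_file.ts" ! tsdemux name=demux multiqueue name=mq \
demux. ! mq.sink_0 mq.src_0 ! decodebin ! videorate ! videoscale ! x264enc ! mp4mux ! filesink location="some_file-video.mp4" \
demux. ! mq.sink_1 mq.src_1 ! meta/x-klv ! filesink location="some_file.txt"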
Regards.

gstreamer playbin3 to kinesis pipeline: audio stream missing

Firstly, big thanks to the GStreamer community for your excellent software.
I'm trying to use GStreamer to consume a DASH/HLS/MSS stream (using playbin3) and restream it to AWS Kinesis Video Streams:
gst-launch-1.0 -v -e \
playbin3 uri=https://dash.akamaized.net/dash264/TestCasesUHD/2b/2/MultiRate.mpd \
video-sink="videoconvert ! x264enc bframes=0 key-int-max=45 bitrate=2048 ! queue ! kvssink name=kvss stream-name=\"test_stream\" access-key=${AWS_ACCESS_KEY_ID} secret-key=${AWS_SECRET_ACCESS_KEY}" \
audio-sink="audioconvert ! audioresample ! avenc_aac ! kvss."
After much experimentation I decided against using uridecodebin3, as it does not handle the incoming stream as completely as playbin3.
The above command results in a video stream on KVS, but the audio is missing. I tried moving the kvssink out of the video-sink pipeline and referencing it as kvss. in both sinks, but that fails to link.
I can create separate KVS streams for the audio and video, but would prefer them to be muxed.
Does anyone know if this is even possible? I'm open to other stacks for this.
SOLVED
Just posting back here in case anyone else comes across this problem.
I've got this working using streamlink to restream locally over HTTP:
streamlink <streamUrl> best --player-external-http --player-external-http-port <httpport>
Then I use the Java JNI bindings for GStreamer to run this pipeline:
kvssink name=kvs stream-name=<streamname> access-key=<awskey> secret-key=<awssecret> aws-region=<awsregion> \
uridecodebin3 uri=http://localhost:<port> name=d \
d. ! queue2 ! videoconvert ! videorate ! x264enc bframes=0 key-int-max=45 bitrate=2048 tune=zerolatency ! queue2 ! kvs. \
d. ! queue2 ! audioconvert ! audioresample ! avenc_aac ! queue2 ! kvs.
I needed to use Java so that I could pause and restart the pipeline on buffering discontinuities, so as not to break the stream.
Files now arrive in KVS complete with audio.

gstreamer shmsrc and shmsink with h264 data

I am trying to share H.264-encoded data from GStreamer with two other processes (both also based on GStreamer). After some research, the only way I found is to use the shm plugins.
This is what I am trying to do:
gstreamer ---> h264 encoder ---> shmsink
shmsrc ---> process1
shmsrc ---> process2
I was able to get raw data from videotestsrc and a webcam working, but for H.264-encoded data it doesn't work.
This is my test pipeline:
gst-launch-1.0 videotestsrc ! video/x-raw,width=640,height=480,format=YUY2 ! x264enc \
! shmsink socket-path=/tmp/foo sync=true wait-for-connection=false shm-size=10000000

gst-launch-1.0 shmsrc socket-path=/tmp/foo ! avdec_h264 \
! video/x-raw,width=640,height=480,framerate=25/1,format=YUY2 ! autovideosink
Has anyone tried the shm plugins with H.264-encoded data? Please help.
I am not aware of the capabilities of the sink that autovideosink selects, but to my knowledge you either need videoconvert, if the formats supported by the sink (like kmssink or ximagesink) differ from the one provided by the source (in your case YUY2), or videoparse, if the camera format is supported by the sink directly. You can check the supported formats with gst-inspect-1.0.
Anyway, I am able to run your pipeline in my setup with some modifications, using videoconvert:
./gst-launch-1.0 videotestsrc ! x264enc ! shmsink socket-path=/tmp/foo sync=true wait-for-connection=false shm-size=10000000
./gst-launch-1.0 shmsrc socket-path=/tmp/foo ! h264parse ! avdec_h264 ! videoconvert ! ximagesink
You can adapt it to the resolution you want.
Kindly let me know if you face any issues with the above.
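For the two consumer processes in your original diagram, both can attach to the same socket concurrently (shmsink accepts multiple clients, and wait-for-connection=false lets them come and go independently). An untested sketch, one pipeline per process; do-timestamp=true restamps the buffers so the muxer gets usable timestamps:
./gst-launch-1.0 shmsrc socket-path=/tmp/foo ! h264parse ! avdec_h264 ! videoconvert ! ximagesink
./gst-launch-1.0 shmsrc socket-path=/tmp/foo do-timestamp=true ! h264parse ! matroskamux ! filesink location=recording.mkv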

How to add audio to a h264 video stream using gstreamer

I can successfully stream HD video using the following pipelines:
Stream server:
gst-launch-1.0 filesrc location="Gravity.2013.720p.BluRay.x264.YIFY.mp4" ! decodebin ! x264enc ! rtph264pay pt=96 ssrc=0 timestamp-offset=0 seqnum-offset=0 ! gdppay ! tcpclientsink host=192.168.1.93 port=5000
Client:
gst-launch-1.0 tcpserversrc host=192.168.1.93 port=5000 ! gdpdepay ! rtph264depay ! decodebin ! autovideosink
I want to add the audio stream too.
I guess it is possible to use a different port and another tcpserver/tcpclient pair to stream audio in parallel with the video. But I am not certain how GStreamer would synchronize the two streams properly to play the movie at the client end. Apart from this method, are there any other methods, such as muxing the two streams before sending and demuxing them at the client end?
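For example, something like the following MPEG-TS approach is what I have in mind (an untested sketch: mpegtsmux interleaves both elementary streams with common timestamps, so the client can demux and play them in sync; the RTP/GDP payloading is dropped since the muxed TS is already a plain byte stream):
Stream server:
gst-launch-1.0 filesrc location="Gravity.2013.720p.BluRay.x264.YIFY.mp4" ! decodebin name=dec \
mpegtsmux name=mux ! tcpclientsink host=192.168.1.93 port=5000 \
dec. ! queue ! videoconvert ! x264enc ! h264parse ! mux. \
dec. ! queue ! audioconvert ! avenc_aac ! aacparse ! mux.
Client:
gst-launch-1.0 tcpserversrc host=192.168.1.93 port=5000 ! tsdemux name=d \
d. ! queue ! h264parse ! avdec_h264 ! videoconvert ! autovideosink \
d. ! queue ! aacparse ! avdec_aac ! audioconvert ! autoaudiosink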

Recording audio+video from webcam with gstreamer

I'm having a problem trying to record audio+video from my webcam to a file. If I use videotestsrc and autoaudiosrc I get everything right (read: I get a file with audio recorded from the webcam's mic and a test-video image), but as soon as I replace videotestsrc with v4l2src (or autovideosrc) I get Error starting streaming on device '/dev/video0'.
The command I'm using:
gst-launch-0.10 videotestsrc ! queue ! ffmpegcolorspace ! theoraenc ! queue ! oggmux name=mux autoaudiosrc ! queue ! audioconvert ! vorbisenc ! queue ! mux. mux. ! queue ! filesink location=test.ogg
Why is that happening? What am I doing wrong?
EDIT:
In fact, something as simple as
gst-launch-0.10 autovideosrc ! autovideosink autoaudiosrc ! autoaudiosink
is failing with the same error (Error starting streaming on device '/dev/video0')
Replacing autovideosrc with videotestsrc gives me the test image + real audio.
Replacing autoaudiosrc with audiotestsrc gives me the real image + test audio.
I'm starting to think that this is some kind of limitation of my webcam. Is that possible?
EDIT:
GST_DEBUG=2 log here: http://pastie.org/4755009
EDIT 2:
GST_DEBUG="v4l2*:5" (gstreamer 0.10): http://pastie.org/4810519
GST_DEBUG="v4l2*:5" (gstreamer 1.0): http://pastie.org/4810502
Please do a
gst-launch-1.0 v4l2src ! videoscale ! videoconvert ! autovideosink
Does that run? If not, repeat it as
GST_DEBUG="v4l2*:5" GST_DEBUG_NO_COLOR=1 gst-launch-1.0 ... 2>debug.log
and check the log for errors. You might also want to run v4l-info (packaged in v4l-conf on Debian/Ubuntu) and report which formats your camera supports.