gstreamer playbin3 to kinesis pipeline: audio stream missing

Firstly, big thanks to the gstreamer community for your excellent software.
I'm trying to use gstreamer to consume a DASH/HLS/MSS stream (using playbin3) and restream it to AWS Kinesis Video Streams:
gst-launch-1.0 -v -e \
playbin3 uri=https://dash.akamaized.net/dash264/TestCasesUHD/2b/2/MultiRate.mpd \
video-sink="videoconvert ! x264enc bframes=0 key-int-max=45 bitrate=2048 ! queue ! kvssink name=kvss stream-name=\"test_stream\" access-key=${AWS_ACCESS_KEY_ID} secret-key=${AWS_SECRET_ACCESS_KEY}" \
audio-sink="audioconvert ! audioresample ! avenc_aac ! kvss."
After much experimentation I decided against using uridecodebin3, as it does not handle the incoming stream as completely as playbin3 does.
The above command results in a video stream on KVS, but the audio is missing. I tried moving kvssink out of the video-sink pipeline and referencing it as kvss. in both sinks, but that fails to link.
I can create separate kvs streams for the audio and video but would prefer them to be muxed.
Does anyone know if this is even possible? I'm open to other stacks for this.

SOLVED
Just posting back here in case anyone else comes across this problem.
I've got this working by using streamlink to restream locally over HTTP:
streamlink <streamUrl> best --player-external-http --player-external-http-port <httpport>
Then I use the Java JNI bindings for GStreamer to run this pipeline:
kvssink name=kvs stream-name=<streamname> access-key=<awskey> secret-key=<awssecret> aws-region=<awsregion> uridecodebin3 uri=http://localhost:<port> name=d d. ! queue2 ! videoconvert ! videorate ! x264enc bframes=0 key-int-max=45 bitrate=2048 tune=zerolatency ! queue2 ! kvs. d. ! queue2 ! audioconvert ! audioresample ! avenc_aac ! queue2 ! kvs.
I needed Java so that I could pause and restart the pipeline on buffering discontinuities, so as not to break the stream.
Files now arrive in KVS complete with audio.
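For anyone who wants to try this from the command line without the Java wrapper, the equivalent gst-launch-1.0 invocation (placeholders kept as above; note it won't do the pause/restart handling on buffering discontinuities) should look roughly like:
gst-launch-1.0 -v -e \
kvssink name=kvs stream-name=<streamname> access-key=<awskey> secret-key=<awssecret> aws-region=<awsregion> \
uridecodebin3 uri=http://localhost:<port> name=d \
d. ! queue2 ! videoconvert ! videorate ! x264enc bframes=0 key-int-max=45 bitrate=2048 tune=zerolatency ! queue2 ! kvs. \
d. ! queue2 ! audioconvert ! audioresample ! avenc_aac ! queue2 ! kvs.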

Related

Audio and video alignment with gstreamer

I am using something similar to the following pipeline to ingest an RTSP stream from a camera and provide the original and a 360p (transcoded) variant in a manifest. I am generating this pipeline dynamically using the GStreamer Rust bindings.
The video works fine in the browser, VLC and ffplay; however, it fails with AVPlayer (QuickTime).
I found that the issue seems to be the audio/video alignment in the ts segments generated by gstreamer.
How can I ensure that the audio and video are aligned in the ts segments? Can audiobuffersplit be helpful? I am not sure how to use it in a pipeline like mine, where hlssink2 does the muxing internally.
Appreciate any help in this! Thanks!
gst-launch-1.0 hlssink2 name=ingest1 playlist-length=5 max-files=10 target-duration=2 \
send-keyframe-requests=true playlist-location=/live/stream.m3u8 location=/live/%d.ts \
rtspsrc location=rtsp://admin:password@10.10.10.20:554/ protocols=4 name=rtspsrc0 \
rtspsrc0. ! rtph264depay ! tee name=t \
t. ! queue ! ingest1.video \
t. ! queue ! decodebin name=video_decoder ! tee name=one_decode \
one_decode. ! queue ! videorate ! video/x-raw,framerate=15/1 ! videoscale ! video/x-raw,width=640,height=360 ! vaapih264enc ! \
hlssink2 name=ingest2 target-duration=2 playlist-location=/live/360/stream.m3u8 location=/live/360/%d.ts \
rtspsrc0. ! decodebin name=audio_decoder ! fdkaacenc ! tee name=audio_t \
audio_t. ! queue ! ingest1.audio \
audio_t. ! queue ! ingest2.audio
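One hedged idea, not a verified fix: audiobuffersplit works on raw audio, so if it helps at all it would sit before the encoder in the audio branch, roughly like this (the output-buffer-duration value is an assumption and would need tuning against the 2-second segments):
rtspsrc0. ! decodebin name=audio_decoder ! audioconvert ! audiobuffersplit output-buffer-duration=1/1 ! fdkaacenc ! tee name=audio_t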

I want to create an HLS (HTTP Live Streaming) stream using Gstreamer, but audio only

What I want to do is create an m3u8 file out of an ALSA soundcard input.
Like:
arecord hw:1,0 -d 10 test.wav | gst-launch-1.0 ....
I tried this for testing:
gst-launch-1.0 audiotestsrc ! audioconvert ! audioresample ! hlssink
but it doesn't work.
Thank you for helping.
You can’t create HLS transport segments (.ts) directly from a raw audio source. You need to encode it with some encoder and then mux it before sending it to the hlssink plugin.
One of the problems that you’ll encounter is that the hlssink plugin won’t split the segments with an audio-only stream, so you are going to need something like keyunitsscheduler to split the stream correctly and create the files.
An example pipeline using voaacenc to encode audio and mpegtsmux to mux would be as follows:
gst-launch-1.0 audiotestsrc is-live=true ! audioconvert ! voaacenc bitrate=128000 ! aacparse ! audio/mpeg ! queue ! mpegtsmux ! keyunitsscheduler interval=5000000000 ! hlssink playlist-length=5 max-files=10 target-duration=5 playlist-root="http://localhost/hls/" playlist-location="/var/www/html/hls/stream0.m3u8" location="/var/www/html/hls/fragment%05d.ts"
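Since the original goal was capturing from an ALSA soundcard rather than a test tone, swapping audiotestsrc for alsasrc should work the same way; the device string hw:1,0 mirrors the arecord example above and may need adjusting for your card:
gst-launch-1.0 alsasrc device="hw:1,0" ! audioconvert ! audioresample ! voaacenc bitrate=128000 ! aacparse ! audio/mpeg ! queue ! mpegtsmux ! keyunitsscheduler interval=5000000000 ! hlssink playlist-length=5 max-files=10 target-duration=5 playlist-root="http://localhost/hls/" playlist-location="/var/www/html/hls/stream0.m3u8" location="/var/www/html/hls/fragment%05d.ts"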

In GStreamer how do I simultaneously playback and record an h264 AVI file of a v4l2src?

Files recorded with gstreamer-0.10 at 25 FPS in 4CIF format play back in fast-forward mode, and they sometimes skip 3-4 seconds. Any solution would be appreciated.
The pipeline I'm attempting to use is:
gst-launch v4l2src device=/dev/video2 ! \
'video/x-raw-yuv,width=704,height=576,framerate=25/1' ! \
tee name=liveTee ! queue ! mfw_isink \
liveTee. ! queue ! vpuenc ! avimux ! filesink location=/home/Recording.avi
I'm gonna take a rough stab at it and re-format your question a bit. This is mostly a GStreamer and Freescale question, not so much QT.
gst-launch-1.0 -e videotestsrc pattern=ball do-timestamp=true is-live=true ! timeoverlay ! \
'video/x-raw,width=704,height=576,framerate=25/1' ! \
tee name=liveTee ! queue leaky=downstream ! videoconvert ! ximagesink async=false \
liveTee. ! queue leaky=downstream ! videoconvert ! queue ! x264enc ! avimux ! filesink location=/tmp/test.avi
The thing to keep in mind is that your encoder has to keep up with the live playback. So your pipeline needs to handle the case where the encoder falls out of sync. On the queue elements behind the tee, use the leaky attribute.
Then you also want to be careful about your video source and what it's supplying. It looks like in your case you want live video, but if your source was an existing video file the pipeline would probably need some more tweaking.
NOTE: It may be even simpler than that; just adding async=false to the video sink appears to be very important.
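Mapping that test pipeline back onto the original v4l2 capture would look roughly like the sketch below; ximagesink stands in for the Freescale mfw_isink and x264enc for vpuenc, and it assumes the device actually supports 704x576 at 25 fps:
gst-launch-1.0 -e v4l2src device=/dev/video2 do-timestamp=true ! \
'video/x-raw,width=704,height=576,framerate=25/1' ! \
tee name=liveTee ! queue leaky=downstream ! videoconvert ! ximagesink async=false \
liveTee. ! queue leaky=downstream ! videoconvert ! queue ! x264enc ! avimux ! filesink location=/home/Recording.avi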

How to stream in h265 using gstreamer?

I am trying to use the latest gstreamer and x265enc together. I saw that someone has already posted some commits at http://cgit.freedesktop.org/gstreamer/gst-plugins-bad/log/ext/x265/gstx265enc.c
Can anyone please give an example pipeline where it is known to work (a gst-launch-1.0 pipeline example would be very helpful)?
1)
What is the current status of the x265enc plugin for gstreamer? Does it really work?
Which branch of gstreamer do I need to use to build x265enc? I want to build the whole gstreamer source tree so that it is compatible with the x265enc plugin.
What are the system requirements for x265enc and how do I build it? Any wiki/basic instructions would be very helpful.
My goal is to broadcast my IP cameras (h264 streams) as an h265 stream on vaughnlive.tv.
Currently, I am using following pipeline to broadcast in h264 format:
GST_DEBUG=2 gst-launch-1.0 flvmux name=mux streamable=true ! rtmpsink sync=true location="rtmp://xxxxxxxxxxxx" \
rtspsrc location="rtsp://xxxxxxx" caps="application/x-rtp, media=(string)audio, clock-rate=(int)90000, encoding-name=(string)MPA, payload=(int)96" ! \
rtpmpadepay ! mpegaudioparse ! queue ! mad ! audioconvert ! queue ! voaacenc bitrate=128000 ! aacparse ! \
audio/mpeg,mpegversion=4,stream-format=raw ! mux. \
rtspsrc location="rtsp://xxxxxxx" caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264" ! \
rtph264depay ! video/x-h264,stream-format=avc,alignment=au,byte-stream=false ! queue ! decodebin ! queue ! \
videorate ! "video/x-raw,framerate=30/1" ! queue ! x264enc threads=4 speed-preset=ultrafast bitrate=3072 ! mux.
2)
Can anyone please suggest how I should change this pipeline to broadcast in h265 format using the x265enc element?
A little late, but maybe some people will find this question when looking for info about H.265 support in gstreamer nowadays. This is with gstreamer 1.6.1 compiled from source on Ubuntu 15.10, which has packages ready for libx265.
1,
Encoder
There is x265enc, which will be enabled when the libx265-dev library is available.
The encoder is inside gst-plugins-bad, so after running autogen.sh you should see x265enc enabled.
You may also need h265parse and rtph265pay/rtph265depay.
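A quick way to check whether the encoder and parser actually got built is gst-inspect-1.0:
gst-inspect-1.0 x265enc
gst-inspect-1.0 h265parse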
Decoder
I see two decoders; I don't know which one is working. I guess libde265dec, and there is also avdec_h265.
Mux
For muxing, with x264 I was using mpegtsmux, but this does not support video/x-h265; some work has to be done. matroskamux should be working when using filesink etc.
[16:39] hi, which container is suitable for x265enc, for x264enc I was using mpegtsmux?
[16:54] otopolsky: mpegts would work if you add support for h265 there, not very difficult
[16:55] slomo_: so we need to just add the caps compatibility?
[16:55] otopolsky: otherwise, matroskamux supports it. mp4mux/qtmux could get support for it relatively easily too
[16:55] otopolsky: a bit more than that. look at what tsdemux does for h265
[16:56] otopolsky: and check the gst_mpegts_descriptor_from_registration related code in tsmux
[17:00] slomo_: thanks
2,
The flvmux asked about also does not support h265, only h264.
matroskamux cannot be used for streaming, so the only way is to patch mpegtsmux or flvmux etc.
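Putting the answer's pieces together, a minimal local-file sketch (matroskamux into a filesink, since flvmux won't carry H.265) could look like this; the element choices follow the answer and the filename is arbitrary:
gst-launch-1.0 -e videotestsrc num-buffers=300 ! videoconvert ! x265enc ! h265parse ! matroskamux ! filesink location=test_h265.mkv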

gstreamer recording m3u8 stream

I'm trying to record a stream from an m3u8 file.
This pipeline works:
gst-launch-0.10 -e souphttpsrc location=(mysrc.m3u8) ! queue ! hlsdemux ! queue ! mpegtsparse ! queue ! mpegtsdemux ! queue ! audio/mpeg ! queue ! filesink location=test.ts
and (sometimes) records the audio stream.
But I can't record video; whatever I do, it crashes.
I tried something like this:
gst-launch-0.10 souphttpsrc location=(mysrc.m3u8) ! queue ! hlsdemux ! queue ! mpegtsparse ! queue ! mpegtsdemux ! queue ! video/x-264 ! queue ! filesink location=test.ts
But it does nothing.
You are using gstreamer 0.10, which is obsolete and unmaintained; all users should upgrade to the 1.x series.
Given that warning, it is not clear whether you want to save the mpegts stream or the streams inside it.
To save the mpegts stream you can just do:
gst-launch-1.0 http://path/to/your/stream.m3u8 ! hlsdemux ! filesink
Be aware that if the HLS playlist contains multiple bitrates, hlsdemux might switch bitrate and it will fail, as gst-launch-1.0 isn't capable of handling this (it is a debugging and testing tool). You can likely set a fixed "connection-speed" to make it always use the bitrate you desire and overcome this issue.
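For example, something along these lines should pin hlsdemux to one variant (the connection-speed value is in kbps and purely illustrative):
gst-launch-1.0 souphttpsrc location=http://path/to/your/stream.m3u8 ! hlsdemux connection-speed=1000 ! filesink location=stream.ts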
If you want to get only the video stream and you know it is H264, try:
gst-launch-1.0 http://path/to/your/stream.m3u8 ! hlsdemux ! tsdemux ! queue ! video/x-h264 ! filesink
It might be a better idea to save it to a container format to allow easier use later, with something like:
gst-launch-1.0 http://path/to/your/stream.m3u8 ! hlsdemux ! tsdemux ! queue ! video/x-h264 ! h264parse ! qtmux ! filesink
But, as I said, please move to 1.x; HLS is much better in 1.x than it was in 0.10 and it should work.