GStreamer HLSSink Pipeline Displays Black Frames in Safari

This gst-launch-1.0 command-line pipeline: gst-launch-1.0 videotestsrc num-buffers=680 ! x264enc ! mpegtsmux ! hlssink location=junk2.%05d.ts playlist-location=junk2.m3u8
This server: python -m SimpleHTTPServer 8000
This local URL in Mac OS X Safari: http://localhost:8000/junk2.m3u8
It appears to play in the local Safari browser, but displays only black frames. Why?
Note: the Python console output looks healthy, so all the paths are correct:
$ python -m SimpleHTTPServer 8000
Serving HTTP on 0.0.0.0 port 8000 ...
127.0.0.1 - - [25/Apr/2018 11:40:34] "GET /junk2.m3u8 HTTP/1.1" 200 -
127.0.0.1 - - [25/Apr/2018 11:40:34] "GET /junk2.m3u8 HTTP/1.1" 200 -
127.0.0.1 - - [25/Apr/2018 11:40:34] "GET /junk2.00001.ts HTTP/1.1" 200 -
127.0.0.1 - - [25/Apr/2018 11:40:34] "GET /junk2.00000.ts HTTP/1.1" 200 -
Note: I also tried various hlssink options with no change in behavior:
target-duration=2
max-files=0
playlist-length=0
Other players: it plays and displays correctly in VLC.

Your x264enc selects the wrong profile. If you don't tell it which format to use, then in your use case with videotestsrc it will negotiate a 4:4:4 chroma profile instead of 4:2:0, and a lot of decoders don't support 4:4:4.
Tell videotestsrc to feed a 4:2:0 format instead:
gst-launch-1.0 videotestsrc num-buffers=680 ! video/x-raw, format=I420 ! x264enc ! mpegtsmux ! hlssink location=junk2.%05d.ts playlist-location=junk2.m3u8
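An alternative, if you would rather not constrain videotestsrc itself, is to constrain the encoder's output profile with a caps filter after x264enc and let negotiation pick a compatible input format. A sketch of that approach (main is just one common profile choice that implies 4:2:0):
gst-launch-1.0 videotestsrc num-buffers=680 ! x264enc ! video/x-h264, profile=main ! mpegtsmux ! hlssink location=junk2.%05d.ts playlist-location=junk2.m3u8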

Related

How to make gst-launch keep using the same headers with curlhttpsrc ! hlsdemux?

While trying to implement a simple player (with gst-launch) for a CDN that requires the initial headers throughout all streams (probably to deter bots), hlsdemux and adaptivedemux will not reuse the initial headers from the initial source for subsequent requests.
Is it actually possible to have a pre-configured curlhttpsrc be reused by hlsdemux and its superclasses?
This is the pipeline I am using:
gst-launch-1.0 -v \
curlhttpsrc \
name=curl user-agent=my-user-agent \
location=http://localhost:8000/playlist.m3u8 curl. \
! hlsdemux \
! fakesink sync=false
The playlist was generated with:
gst-launch-1.0 -v \
videotestsrc is-live=true \
! x264enc \
! h264parse \
! hlssink2 max-files=5 playlist-root=http://localhost:8090
Its output:
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-MEDIA-SEQUENCE:43
#EXT-X-TARGETDURATION:15
#EXTINF:15.000000953674316,
http://localhost:8090/segment00043.ts
#EXTINF:15.000000953674316,
http://localhost:8090/segment00044.ts
#EXTINF:15.000000953674316,
http://localhost:8090/segment00045.ts
#EXTINF:15.000000953674316,
http://localhost:8090/segment00046.ts
#EXTINF:15.000000953674316,
http://localhost:8090/segment00047.ts
#EXT-X-ENDLIST
To mimic the CDN, which serves the playlist and the segments from different hosts, I used the snippet below to serve the playlist on port 8000 and the segments on port 8090. I also added a User-Agent check so I can see exactly when my pipeline breaks.
from http.server import SimpleHTTPRequestHandler, test
import sys

class Handler(SimpleHTTPRequestHandler):
    def parse_request(self) -> bool:
        rv = super().parse_request()
        # Reject any client that does not send the expected User-Agent,
        # mimicking the CDN's bot check.
        if self.headers['User-Agent'] != "my-user-agent":
            self.send_error(404, "Wrong user-agent")
            return False
        return rv

test(Handler, port=int(sys.argv[1]))
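To run the snippet for this two-port setup, you would start one instance per port (server.py being a hypothetical filename for the snippet above):
python3 server.py 8000
python3 server.py 8090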

Send PCM data to VLC player using gstreamer

I want to send raw audio data (PCM) to the VLC player over RTP, using GStreamer, so VLC can play it.
Here is the command to send the PCM:
gst-launch-1.0 -v filesrc location=/home/webos/pcm_data_dump ! audio/x-raw, rate=44100, channels=2, endianness=1234, format=S16LE, layout=interleaved, clock-rate=44100 ! audioconvert ! audioresample ! audio/x-raw, rate=44100, channels=2, format=S32LE, layout=interleaved ! audioconvert ! rtpL16pay pt=10 ! application/x-rtp, pt=10, encoding-name=L16, payload=10, clock-rate=44100, channels=2 ! udpsink host=192.168.0.2 port=5555
Here is the VLC option to receive the PCM:
rtp://192.168.0.2:5555
VLC receives the PCM from GStreamer, but it cannot play it.
VLC shows debug messages like those below; at the end, the "core debug: Buffering 0%" message repeats indefinitely.
core debug: output 'f32l' 44100 Hz Stereo frame=1 samples/8 bytes
core debug: looking for audio volume module matching "any": 2 candidates
core debug: using audio volume module "float_mixer"
core debug: input 's16l' 44100 Hz Stereo frame=1 samples/4 bytes
core debug: looking for audio filter module matching "scaletempo": 14 candidates
scaletempo debug: format: 44100 rate, 2 nch, 4 bps, fl32
scaletempo debug: params: 30 stride, 0.200 overlap, 14 search
scaletempo debug: 1.000 scale, 1323.000 stride_in, 1323 stride_out, 1059 standing, 264 overlap, 617 search, 2204 queue, fl32 mode
core debug: using audio filter module "scaletempo"
core debug: conversion: 's16l'->'f32l' 44100 Hz->44100 Hz Stereo->Stereo
core debug: looking for audio converter module matching "any": 12 candidates
audio_format debug: s16l->f32l, bits per sample: 16->32
core debug: using audio converter module "audio_format"
core debug: conversion pipeline complete
core debug: conversion: 'f32l'->'f32l' 44100 Hz->44100 Hz Stereo->Stereo
core debug: Buffering 0%
core debug: conversion pipeline complete
core debug: looking for audio resampler module matching "any": 3 candidates
core debug: Buffering 0%
core debug: Buffering 0%
core debug: Buffering 0%
core debug: Buffering 0%
core debug: Buffering 0%
.......
The log below is shown once the GStreamer command to send the PCM starts. Normally gst-launch blocks after printing "New clock: GstSystemClock", but here it reaches EOS almost immediately:
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:src: caps = audio/x-raw, format=(string)S32LE, layout=(string)interleaved, rate=(int)44100, channels=(int)2
/GstPipeline:pipeline0/GstQueue:queue0.GstPad:sink: caps = audio/x-raw, format=(string)S32LE, layout=(string)interleaved, rate=(int)44100, channels=(int)2
/GstPipeline:pipeline0/GstQueue:queue0.GstPad:sink: caps = audio/x-raw, format=(string)S32LE, layout=(string)interleaved, rate=(int)44100, channels=(int)2
/GstPipeline:pipeline0/GstAudioConvert:audioconvert0.GstPad:src: caps = audio/x-raw, layout=(string)interleaved, rate=(int)44100, format=(string)S16BE, channels=(int)2, channel-mask=(bitmask)0x0000000000000003
/GstPipeline:pipeline0/GstQueue:queue1.GstPad:sink: caps = audio/x-raw, layout=(string)interleaved, rate=(int)44100, format=(string)S16BE, channels=(int)2, channel-mask=(bitmask)0x0000000000000003
/GstPipeline:pipeline0/GstQueue:queue1.GstPad:sink: caps = audio/x-raw, layout=(string)interleaved, rate=(int)44100, format=(string)S16BE, channels=(int)2, channel-mask=(bitmask)0x0000000000000003
/GstPipeline:pipeline0/GstRtpL16Pay:rtpl16pay0.GstPad:src: caps = application/x-rtp, media=(string)audio, clock-rate=(int)44100, encoding-name=(string)L16, encoding-params=(string)2, channels=(int)2, payload=(int)10, ssrc=(uint)2226113402, timestamp-offset=(uint)1744959080, seqnum-offset=(uint)62815
/GstPipeline:pipeline0/GstUDPSink:udpsink0.GstPad:sink: caps = application/x-rtp, media=(string)audio, clock-rate=(int)44100, encoding-name=(string)L16, encoding-params=(string)2, channels=(int)2, payload=(int)10, ssrc=(uint)2226113402, timestamp-offset=(uint)1744959080, seqnum-offset=(uint)62815
/GstPipeline:pipeline0/GstRtpL16Pay:rtpl16pay0.GstPad:sink: caps = audio/x-raw, layout=(string)interleaved, rate=(int)44100, format=(string)S16BE, channels=(int)2, channel-mask=(bitmask)0x0000000000000003
/GstPipeline:pipeline0/GstAudioConvert:audioconvert0.GstPad:sink: caps = audio/x-raw, format=(string)S32LE, layout=(string)interleaved, rate=(int)44100, channels=(int)2
/GstPipeline:pipeline0/GstRtpL16Pay:rtpl16pay0: timestamp = 1744959080
/GstPipeline:pipeline0/GstRtpL16Pay:rtpl16pay0: seqnum = 62815
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Got EOS from element "pipeline0".
Execution ended after 0:00:00.622147167
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
gst-launch-0.10 has no problem; only 1.0 has this problem.
What is the problem?
If I replace your filesrc with audiotestsrc, the example works for me. Still, let me point out some room for improvement:
Use audioparse instead of the first capsfilter.
Don't audioconvert twice.
Here is a simplified pipeline that works for me:
gst-launch-1.0 -v audiotestsrc ! audioresample ! audioconvert ! rtpL16pay pt=10 ! application/x-rtp, pt=10, encoding-name=L16, payload=10, clock-rate=44100, channels=2 ! udpsink host=localhost port=5555
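As a sanity check that does not involve VLC, you can receive and play the stream back with GStreamer itself; a minimal sketch, assuming the caps of the sender above:
gst-launch-1.0 udpsrc port=5555 caps="application/x-rtp, media=audio, clock-rate=44100, encoding-name=L16, payload=10, channels=2" ! rtpL16depay ! audioconvert ! autoaudiosink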

What effect does the media type string inserted in a GStreamer pipeline have?

I have seen this kind of pipeline-running command in GStreamer:
e.g.,
gst-launch-1.0 videotestsrc ! video/x-raw, format=I420, framerate=25/1, width=640, height=360 ! xvimagesink
And I have read in some places that video/x-raw, format=I420, framerate=25/1, width=640, height=360 specifies the media type. But I cannot understand what effect it has: is it transforming the input to the specified format/framerate/width/height, or is it just declaring that the input is already in that format? And what effect does it have on the pipeline if it is only declaring the format rather than transforming the data?
And is it really necessary, or can we ignore it?
They call this video/x-raw, format=I420, framerate=25/1, width=640, height=360 thing "capabilities" or "caps" for short.
Sometimes it is required to specify explicitly what type of data flows between elements in a GStreamer pipeline, and caps are the way to do this.
Why is this sometimes required? Because some GStreamer elements can accept (or produce) several types of media. You can see this with the gst-inspect command:
$ gst-inspect videotestsrc
...
Pad Templates:
SRC template: 'src'
Availability: Always
Capabilities:
video/x-raw-yuv
format: YUY2
color-matrix: { sdtv, hdtv }
chroma-site: { mpeg2, jpeg }
width: [ 1, 2147483647 ]
height: [ 1, 2147483647 ]
framerate: [ 0/1, 2147483647/1 ]
video/x-raw-yuv
format: UYVY
color-matrix: { sdtv, hdtv }
chroma-site: { mpeg2, jpeg }
width: [ 1, 2147483647 ]
height: [ 1, 2147483647 ]
framerate: [ 0/1, 2147483647/1 ]
video/x-raw-yuv
format: YVYU
color-matrix: { sdtv, hdtv }
...
This means that videotestsrc has a src pad which can produce output in various formats (YUY2, UYVY, YVYU, etc.), sizes, and framerates.
The same goes for xvimagesink:
Pad Templates:
SINK template: 'sink'
Availability: Always
Capabilities:
video/x-raw-rgb
framerate: [ 0/1, 2147483647/1 ]
width: [ 1, 2147483647 ]
height: [ 1, 2147483647 ]
video/x-raw-yuv
framerate: [ 0/1, 2147483647/1 ]
width: [ 1, 2147483647 ]
height: [ 1, 2147483647 ]
It is able to display streams of data in various formats.
GStreamer uses a process called caps negotiation to decide which concrete data format to use between two elements. It tries to select the "best fitting" format if no capabilities are provided by the user. That is why it is possible to drop the capabilities from your pipeline and it will still work:
$ gst-launch-1.0 -v videotestsrc ! xvimagesink
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
/GstPipeline:pipeline0/GstVideoTestSrc:videotestsrc0.GstPad:src: caps = video/x-raw, format=(string)YUY2, width=(int)320, height=(int)240, framerate=(fraction)30/1
/GstPipeline:pipeline0/GstXvImageSink:xvimagesink0.GstPad:sink: caps = video/x-raw, format=(string)YUY2, width=(int)320, height=(int)240, framerate=(fraction)30/1
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
I added the -v flag to see which caps GStreamer actually decided to use.
But there are cases when GStreamer fails to negotiate a data format, or when you have different preferences.
E.g., in the case of a pipeline that reads a stream from a socket, it is impossible to guess the data format for certain, and you need to provide the correct capabilities.
You can see that specifying caps makes a difference by executing these two pipelines:
$ gst-launch -v videotestsrc ! 'video/x-raw-yuv, width=600, height=600' ! xvimagesink
$ gst-launch -v videotestsrc ! 'video/x-raw-yuv, width=60, height=60' ! xvimagesink
It is important to understand that capabilities do not convert data; they only specify which format elements will produce or consume.
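If you actually want the data converted rather than merely constrained, put explicit converter elements in front of the caps filter; a 1.0-style sketch:
gst-launch-1.0 videotestsrc ! videoscale ! videoconvert ! video/x-raw, format=I420, width=600, height=600 ! xvimagesink
Here videoscale and videoconvert do the work, and the caps filter tells them what to produce.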

gstreamer gstrtpbin sender/receiver in the same pipeline

I wish to build a single gstreamer pipeline that does both rtp audio send and receive.
Based on the examples (few as they are) that I've found, here is my almost working code.
(The program is written in Rexx, but it's pretty obvious what is happening, I think; here it looks a lot like bash!) The line-continuation character is a comma, and the "", bits just insert blank lines for readability.
rtp_recv_port = 8554
rtp_send_port = 8555
pipeline = "gst-launch -e",
"",
"gstrtpbin",
" name=rtpbin",
"",
"udpsrc port="rtp_recv_port, -- do-timestamp=true
' ! "application/x-rtp,media=audio,payload=8,clock-rate=8000,encoding-name=PCMA,channels=1" ',
" ! rtpbin.recv_rtp_sink_0",
"",
"rtpbin. ",
" ! rtppcmadepay",
" ! decodebin ",
' ! "audio/x-raw-int, width=16, depth=16, rate=8000, channels=1" ',
" ! volume volume=5.0 ",
" ! autoaudiosink sync=false",
"",
"autoaudiosrc ",
" ! audioconvert ",
' ! "audio/x-raw-int,width=16,depth=16,rate=8000,channels=1" ',
" ! alawenc ",
" ! rtppcmapay perfect-rtptime=true mtu=2000",
" ! rtpbin.send_rtp_sink_1",
"",
"rtpbin.send_rtp_src_1 ",
" ! audioconvert",
" ! audioresample",
" ! udpsink port="rtp_send_port "host="ipaddr
pipeline "> pipe.out"
If I comment out the lines after
" ! autoaudiosink sync=false",
The receive-only portion works just fine. However, if I leave those lines in place I get this error:
ERROR: from element /GstPipeline:pipeline0/GstUDPSrc:udpsrc0: Internal data flow error.
Additional debug info:
gstbasesrc.c(2582): gst_base_src_loop (): /GstPipeline:pipeline0/GstUDPSrc:udpsrc0:
streaming task paused, reason not-linked (-1)
So what has suddenly become unlinked? I'd understand if the error were in the autoaudiosrc portion, but why does it show up in the udpsrc section?
Any suggestions?
(FWIW) After I get this part working I will go back in and add the RTCP parts of the pipeline.
Here is a pipeline that will send and receive audio (full duplex). I manually set the sources so that it is expandable (you can put video on this as well, and I have a sample pipeline for you if you want to do both). I set the jitter-buffer mode to BUFFER because mine is implemented on a network with a ton of jitter. Within this sample pipe you can add all your variable changes (volume, your audio source, encoding and decoding, etc.).
sudo gst-launch gstrtpbin \
    name=rtpbin buffer-mode=buffer audiotestsrc ! queue ! audioconvert ! alawenc ! \
    rtppcmapay pt=8 ! rtpbin.send_rtp_sink_0 rtpbin.send_rtp_src_0 ! \
    multiudpsink clients="127.0.0.1:5002" sync=false async=false \
    udpsrc port=5004 caps="application/x-rtp, media=audio, payload=8, clock-rate=8000, \
    encoding-name=PCMA" ! queue ! rtpbin.recv_rtp_sink_0 \
    rtpbin. ! rtppcmadepay ! alawdec ! alsasink
I have had issues with the control (RTCP) packets. I have found that a loopback test is not sufficient if you are utilizing RTCP; you will have to test with two computers talking to each other.
Let me know if this works for you as I have tested on 4 different machines and all have worked.
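For reference, a roughly equivalent GStreamer 1.x sketch (gstrtpbin was renamed rtpbin, multiudpsink replaced here by a plain udpsink; hosts and ports are placeholders, untested on your network):
gst-launch-1.0 rtpbin name=rtpbin buffer-mode=buffer \
    audiotestsrc ! audioconvert ! alawenc ! rtppcmapay pt=8 ! rtpbin.send_rtp_sink_0 \
    rtpbin.send_rtp_src_0 ! udpsink host=127.0.0.1 port=5002 sync=false async=false \
    udpsrc port=5004 caps="application/x-rtp, media=audio, payload=8, clock-rate=8000, encoding-name=PCMA" ! rtpbin.recv_rtp_sink_1 \
    rtpbin. ! rtppcmadepay ! alawdec ! audioconvert ! autoaudiosink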

Django + lighttpd + fcgi performance

I am using Django to handle fairly long HTTP POST requests, and I am wondering if my setup has limitations when I receive many requests at the same time.
lighttpd.conf fastcgi section:
fastcgi.server = (
    "a.fcgi" => (
        "main" => (
            # Use host / port instead of socket for TCP fastcgi
            "host" => "127.0.0.1",
            "port" => 3033,
            "check-local" => "disable",
            "allow-x-send-file" => "enable"
        ))
)
Django init.d script start section:
start-stop-daemon --start --quiet \
--pidfile /var/www/tmp/a.pid \
--chuid www-data --exec /usr/bin/env -- python \
/var/www/a/manage.py runfcgi \
host=127.0.0.1 port=3033 pidfile=/var/www/tmp/a.pid
Starting Django using the script above results in a multi-threaded Django server:
www-data 342 7873 0 04:58 ? 00:01:04 python /var/www/a/manage.py runfcgi host=127.0.0.1 port=3033 pidfile=/var/www/tmp/a.pid
www-data 343 7873 0 04:58 ? 00:01:15 python /var/www/a/manage.py runfcgi host=127.0.0.1 port=3033 pidfile=/var/www/tmp/a.pid
www-data 378 7873 0 Feb14 ? 00:04:45 python /var/www/a/manage.py runfcgi host=127.0.0.1 port=3033 pidfile=/var/www/tmp/a.pid
www-data 382 7873 0 Feb12 ? 00:14:53 python /var/www/a/manage.py runfcgi host=127.0.0.1 port=3033 pidfile=/var/www/tmp/a.pid
www-data 386 7873 0 Feb12 ? 00:12:49 python /var/www/a/manage.py runfcgi host=127.0.0.1 port=3033 pidfile=/var/www/tmp/a.pid
www-data 7873 1 0 Feb12 ? 00:00:24 python /var/www/a/manage.py runfcgi host=127.0.0.1 port=3033 pidfile=/var/www/tmp/a.pid
In the lighttpd error.log I do see load = 10, which shows that I am getting many requests at the same time; this happens a few times a day:
2010-02-16 05:17:17: (mod_fastcgi.c.2979) got proc: pid: 0 socket: tcp:127.0.0.1:3033 load: 10
Is my setup correct to handle many long HTTP POST requests (each can last a few minutes) at the same time?
I think you may want to configure your FastCGI worker to run multi-process or multi-threaded.
From manage.py runfcgi help:
method=IMPL prefork or threaded (default prefork)
[...]
maxspare=NUMBER max number of spare processes / threads
minspare=NUMBER min number of spare processes / threads.
maxchildren=NUMBER hard limit number of processes / threads
So your start command would be:
start-stop-daemon --start --quiet \
--pidfile /var/www/tmp/a.pid \
--chuid www-data --exec /usr/bin/env -- python \
/var/www/a/manage.py runfcgi \
host=127.0.0.1 port=3033 pidfile=/var/www/tmp/a.pid \
method=prefork maxspare=4 minspare=4 maxchildren=8
You will want to adjust the number of processes as needed. Note that memory usage grows linearly with the number of FCGI processes. Also, if your processes are CPU-bound, having more processes than available CPU cores won't help much with concurrency.