How to make gst-launch keep using the same headers with curlhttpsrc ! hlsdemux?

While trying to implement a simple player (with gst-launch) for a CDN that requires the initial request headers on every subsequent request (probably to keep bots out), I found that hlsdemux and its adaptivedemux base class will not reuse the initial source's headers for the requests that follow.
Is it actually possible to have a pre-configured curlhttpsrc be reused by hlsdemux and its base classes?
This is the pipeline I am using:
gst-launch-1.0 -v \
curlhttpsrc \
name=curl user-agent=my-user-agent \
location=http://localhost:8000/playlist.m3u8 curl. \
! hlsdemux \
! fakesink sync=false
The playlist was generated with:
gst-launch-1.0 -v \
videotestsrc is-live=true \
! x264enc \
! h264parse \
! hlssink2 max-files=5 playlist-root=http://localhost:8090
Its output:
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-MEDIA-SEQUENCE:43
#EXT-X-TARGETDURATION:15
#EXTINF:15.000000953674316,
http://localhost:8090/segment00043.ts
#EXTINF:15.000000953674316,
http://localhost:8090/segment00044.ts
#EXTINF:15.000000953674316,
http://localhost:8090/segment00045.ts
#EXTINF:15.000000953674316,
http://localhost:8090/segment00046.ts
#EXTINF:15.000000953674316,
http://localhost:8090/segment00047.ts
#EXT-X-ENDLIST
To mimic the CDN, which serves the playlist and the streams from different hosts, I used the snippet below to serve the playlist on port 8000 and the streams on port 8090. It also validates the User-Agent header, so I can see exactly when my pipeline breaks.
from http.server import SimpleHTTPRequestHandler, test
import sys

class Handler(SimpleHTTPRequestHandler):
    def parse_request(self) -> bool:
        rv = super().parse_request()
        if self.headers['User-Agent'] != "my-user-agent":
            self.send_error(404, "Wrong user-agent")
            return False
        return rv

test(Handler, port=int(sys.argv[1]))
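
For context on why this fails: hlsdemux (via its adaptivedemux base class) does not reuse the source element from the launch line; it creates its own HTTP source internally for playlist refreshes and fragment downloads, so the pre-configured curlhttpsrc only serves the very first request. One workaround, which needs application code rather than plain gst-launch, is to listen for the bin's deep-element-added signal and configure every internally created source as it appears. A minimal Python sketch, assuming the inner sources expose a user-agent property (both curlhttpsrc and souphttpsrc do):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

pipeline = Gst.parse_launch(
    "curlhttpsrc name=curl user-agent=my-user-agent "
    "location=http://localhost:8000/playlist.m3u8 "
    "! hlsdemux ! fakesink sync=false"
)

def on_deep_element_added(outer_bin, inner_bin, element):
    # Configure the HTTP sources that hlsdemux creates internally
    # for playlist refreshes and segment downloads.
    if element.find_property("user-agent"):
        element.set_property("user-agent", "my-user-agent")

pipeline.connect("deep-element-added", on_deep_element_added)

pipeline.set_state(Gst.State.PLAYING)
loop = GLib.MainLoop()
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message::error", lambda bus, msg: loop.quit())
bus.connect("message::eos", lambda bus, msg: loop.quit())
loop.run()
pipeline.set_state(Gst.State.NULL)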

Related

Gstreamer: Signal RTP header extension to the payloader

I have an RTP streaming app which implements the following pipeline using the C API.
gst-launch-1.0 -v rtpbin name=rtpbin \
videotestsrc ! x264enc ! rtph264pay ! rtpbin.send_rtp_sink_0 \
rtpbin.send_rtp_src_0 ! udpsink port=5002 host=127.0.0.1 \
rtpbin.send_rtcp_src_0 ! udpsink port=5003 host=127.0.0.1 sync=false async=false \
udpsrc port=5007 ! rtpbin.recv_rtcp_sink_0
I want to add header extensions to the RTP packet; therefore I created an extension using the new GstRTPHeaderExtension class introduced in GStreamer v1.20. I want to set the attributes of the extension (e.g. color space properties for the example below). AFAIU this should be done by providing those as caps to the payloader element. However, I can't figure out how I should provide these caps exactly. Do I need to use a capsfilter here or what is the right way? In the current state, I can send the RTP packets and see that the extension is added but can't set the attributes.
Related parts of the code are below:
#define URN_COLORSPACE "http://www.webrtc.org/experiments/rtp-hdrext/color-space"
const GstVideoColorimetry colorimetry = {
GST_VIDEO_COLOR_RANGE_0_255,
GST_VIDEO_COLOR_MATRIX_BT601,
GST_VIDEO_TRANSFER_BT2020_10,
GST_VIDEO_COLOR_PRIMARIES_BT2020};
const GstVideoChromaSite chroma_site = GST_VIDEO_CHROMA_SITE_MPEG2;
videopay = gst_element_factory_make("rtph264pay", "videopay");
// other element definitions, links..
ext = gst_rtp_header_extension_create_from_uri(URN_COLORSPACE);
gst_rtp_header_extension_set_id(ext, 1);
g_signal_emit_by_name(videopay, "add-extension", ext);
colorimetry_str = gst_video_colorimetry_to_string(&colorimetry);
// How do I provide these caps to the payloader to set the extension properties?
caps = gst_caps_new_simple("application/x-rtp",
"media", G_TYPE_STRING, "video",
"clock-rate", G_TYPE_INT, 90000,
"encoding-name", G_TYPE_STRING, "H264",
"colorimetry", G_TYPE_STRING, colorimetry_str,
"chroma-site", G_TYPE_STRING,
gst_video_chroma_to_string(chroma_site), NULL);
The caps should be provided to the sink of the RTP payloader element using a capsfilter element:
GstElement *capsfilt;
capsfilt = gst_element_factory_make("capsfilter", "capsfilter");
g_object_set(capsfilt, "caps", caps, NULL);
gst_element_link_many(videosrc, videoenc, capsfilt, videopay, NULL);
where videosrc, videoenc, videopay are the source, encoder and payloader elements, respectively.
Also, the caps should have a media type matching the encoder element, e.g. video/x-h264 if the encoder element is an instance of x264enc.
If auto-header-extension is enabled (it is true by default), the payloader passes the caps to the extension and automatically enables it with the attributes set in those caps.
In a gst-launch pipeline, the caps are passed automatically when the header extension is inserted after the payloader.
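The same idea in Python, for anyone scripting this rather than writing C. This is a sketch of the approach described above, not verified code: the colorimetry and chroma-site values are example strings, and whether the capsfilter negotiates depends on the caps your encoder actually produces.

import gi
gi.require_version("Gst", "1.0")
gi.require_version("GstRtp", "1.0")
from gi.repository import Gst, GstRtp

Gst.init(None)

URN_COLORSPACE = "http://www.webrtc.org/experiments/rtp-hdrext/color-space"

# The capsfilter sits between the encoder and the payloader, carrying
# video/x-h264 caps (matching the encoder), not application/x-rtp.
pipeline = Gst.parse_launch(
    "videotestsrc ! x264enc ! capsfilter name=filter "
    "! rtph264pay name=videopay ! fakesink"
)

# Create the color-space extension and attach it to the payloader.
ext = GstRtp.RTPHeaderExtension.create_from_uri(URN_COLORSPACE)
ext.set_id(1)
videopay = pipeline.get_by_name("videopay")
videopay.emit("add-extension", ext)

# Example attribute values; with auto-header-extension (default true)
# the payloader forwards these caps to the extension.
caps = Gst.Caps.from_string(
    "video/x-h264,colorimetry=(string)bt601,chroma-site=(string)mpeg2"
)
pipeline.get_by_name("filter").set_property("caps", caps)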

How to use x264enc and avdec_h264?

I'm new to GStreamer. I want to encode the video from my MacBook Pro's built-in camera to H.264 and then play it. On the command line, I tried
gst-launch-1.0 autovideosrc ! queue ! x264enc ! avdec_h264 ! queue ! autovideosink
and it works. But when I run the C++ code below, it fails, showing only a green screen.
video_src = gst_element_factory_make("autovideosrc", "video_source");
video_enc = gst_element_factory_make("x264enc", "videoEncoder");
video_dec = gst_element_factory_make("avdec_h264", "videodecoder");
video_sink = gst_element_factory_make("osxvideosink", nullptr);
gst_bin_add_many...
gst_element_link_many (video_src, screen_queue, video_enc, video_dec, video_sink, NULL);
Not sure how to correct it. Thanks!
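
The snippet above doesn't show everything (screen_queue is never created, and the sink differs from the working command line), so the cause can't be pinned down from the question alone. One common culprit with live camera input is x264enc's default look-ahead, which buffers a lot of frames before emitting anything; below is a Python sketch of the same chain with the queue created and the encoder tuned for low latency, as an assumption rather than a confirmed fix.

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

pipeline = Gst.Pipeline.new("cam-loop")
video_src = Gst.ElementFactory.make("autovideosrc", "video_source")
screen_queue = Gst.ElementFactory.make("queue", "screen_queue")
video_enc = Gst.ElementFactory.make("x264enc", "videoEncoder")
video_dec = Gst.ElementFactory.make("avdec_h264", "videodecoder")
video_sink = Gst.ElementFactory.make("autovideosink", "video_sink")

# Assumption: without zerolatency tuning, x264enc's look-ahead can make a
# live pipeline appear frozen (or blank) for a long time.
Gst.util_set_object_arg(video_enc, "tune", "zerolatency")

elements = [video_src, screen_queue, video_enc, video_dec, video_sink]
for element in elements:
    pipeline.add(element)
for upstream, downstream in zip(elements, elements[1:]):
    if not upstream.link(downstream):
        raise RuntimeError(
            f"failed to link {upstream.get_name()} -> {downstream.get_name()}"
        )

pipeline.set_state(Gst.State.PLAYING)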

Is it actually possible to mux subtitles into a .mkv file using GStreamer?

I wish the matroskamux documentation included an example demonstrating how to mux subtitles. After a couple of days of trying, I doubt it is doable. Maybe it is a bug that matroskamux cannot mux subtitles, except when the text stream is in subtitle/x-kate format. Below is the pipeline description that failed. Can someone please tell me where it went wrong, or verify that it is indeed a bug? Thanks.
gst-launch-1.0 \
videotestsrc num-buffers=300 \
! videoconvert \
! theoraenc \
! MUXER.video_%u \
filesrc location=src.srt \
! subparse \
! text/x-raw,format=utf8 \
! MUXER.subtitle_0 \
matroskamux name=MUXER \
! filesink location=dst.mkv
Below is a .srt file that can be used to try the above gst-launch-1.0 command.
1
00:00:01,000 --> 00:00:02,000
one
2
00:00:02,000 --> 00:00:03,000
two
3
00:00:03,000 --> 00:00:04,000
three
4
00:00:04,000 --> 00:00:05,000
four
5
00:00:05,000 --> 00:00:06,000
five
6
00:00:06,000 --> 00:00:07,000
six
7
00:00:07,000 --> 00:00:08,000
seven
8
00:00:08,000 --> 00:00:09,000
eight
9
00:00:09,000 --> 00:00:10,000
nine
10
00:00:10,000 --> 00:00:11,000
ten
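
One way to narrow down whether this is a caps mismatch or a genuine matroskamux limitation is to ask the installed element which caps it advertises on its subtitle request pads, since the accepted subtitle formats have varied across GStreamer versions. A small diagnostic sketch in Python (it doesn't fix the pipeline above, it only shows what the muxer claims to accept):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

factory = Gst.ElementFactory.find("matroskamux")
for template in factory.get_static_pad_templates():
    # Print the caps advertised by the subtitle_%u request pad template.
    if template.name_template.startswith("subtitle"):
        print(template.name_template, "accepts:", template.get_caps().to_string())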

Assert in Kaldi when used with GStreamer

Using the GStreamer plugin from Alumae and the following pipeline:
appsrc source='appsrc' ! wavparse ! audioconvert ! audioresample ! queue ! kaldinnet2onlinedecoder <parameters snipped> ! filesink location=/tmp/test
I always get the following assert, which I don't understand:
KALDI_ASSERT(current_log_post_.NumRows() == info_.frames_per_chunk /
info_.opts.frame_subsampling_factor &&
current_log_post_.NumCols() == info_.output_dim);
What is this assert about? How can I fix it?
FYI, the data pushed into the pipeline comes from a streamed WAV file, and replacing kaldinnet2onlinedecoder with wavenc correctly generates a WAV file instead of a text file at the end.
EDIT
Here are the parameters used:
use-threaded-decoder=0
model=/opt/en/final.mdl
word-syms=<word-file>
fst=<fst_file>
mfcc-config=<mfcc-file>
ivector-extraction-config=/opt/en/ivector-extraction/ivector_extractor.conf
max-active=10000
beam=10.0
lattice-beam=6.0
do-endpointing=1
endpoint-silence-phones="1:2:3:4:5:6:7:8:9:10"
traceback-period-in-secs=0.25
num-nbest=10
For your information, using the textual pipeline representation in Python works, but building the pipeline in code (i.e. using Gst.ElementFactory.make and so on) always throws the exception.
SECOND UPDATE
Here is the full stack trace generated by the assert:
ASSERTION_FAILED ([5.2]:AdvanceChunk():decodable-online-looped.cc:223) : 'current_log_post_.NumRows() == info_.frames_per_chunk / info_.opts.frame_subsampling_factor && current_log_post_.NumCols() == info_.output_dim'
[ Stack-Trace: ]
kaldi::MessageLogger::HandleMessage(kaldi::LogMessageEnvelope const&, char const*)
kaldi::MessageLogger::~MessageLogger()
kaldi::KaldiAssertFailure_(char const*, char const*, int, char const*)
kaldi::nnet3::DecodableNnetLoopedOnlineBase::AdvanceChunk()
kaldi::nnet3::DecodableNnetLoopedOnlineBase::EnsureFrameIsComputed(int)
kaldi::nnet3::DecodableAmNnetLoopedOnline::LogLikelihood(int, int)
kaldi::LatticeFasterOnlineDecoder::ProcessEmitting(kaldi::DecodableInterface*)
kaldi::LatticeFasterOnlineDecoder::AdvanceDecoding(kaldi::DecodableInterface*, int)
kaldi::SingleUtteranceNnet3Decoder::AdvanceDecoding()
I finally got it working, even with the frame-subsampling-factor parameter.
The problem resides in the order of the parameters: the fst and model parameters have to be the last ones.
Thus the following textual pipeline works:
gst-launch-1.0 pulsesrc device=alsa_input.pci-0000_00_05.0.analog-stereo ! queue ! \
audioconvert ! \
audioresample ! tee name=t ! queue ! \
kaldinnet2onlinedecoder \
use-threaded-decoder=0 \
nnet-mode=3 \
word-syms=/opt/models/fr/words.txt \
mfcc-config=/opt/models/fr/mfcc_hires.conf \
ivector-extraction-config=/opt/models/fr/ivector-extraction/ivector_extractor.conf \
phone-syms=/opt/models/fr/phones.txt \
frame-subsampling-factor=3 \
max-active=7000 \
beam=13.0 \
lattice-beam=8.0 \
acoustic-scale=1 \
do-endpointing=1 \
endpoint-silence-phones=1:2:3:4:5:16:17:18:19:20 \
traceback-period-in-secs=0.25 \
num-nbest=2 \
chunk-length-in-secs=0.25 \
fst=/opt/models/fr/HCLG.fst \
model=/opt/models/fr/final.mdl \
! filesink async=0 location=/dev/stdout t. ! queue ! autoaudiosink async=0
I opened an issue on GitHub for this because, in my view, it can be really difficult to track down and should at least be documented.
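
For the code path (the questioner noted that building the pipeline with Gst.ElementFactory.make triggered the same exception), the equivalent fix is presumably to set the fst and model properties last. A Python sketch under that assumption, reusing the paths and property names from the working pipeline above:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

decoder = Gst.ElementFactory.make("kaldinnet2onlinedecoder", None)

# All other parameters first...
decoder.set_property("use-threaded-decoder", False)
decoder.set_property("mfcc-config", "/opt/models/fr/mfcc_hires.conf")
decoder.set_property(
    "ivector-extraction-config",
    "/opt/models/fr/ivector-extraction/ivector_extractor.conf",
)
decoder.set_property("frame-subsampling-factor", 3)
decoder.set_property("do-endpointing", True)

# ...and fst/model last: setting them earlier is what triggered the
# KALDI_ASSERT in AdvanceChunk() shown above.
decoder.set_property("fst", "/opt/models/fr/HCLG.fst")
decoder.set_property("model", "/opt/models/fr/final.mdl")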

gstreamer gstrtpbin sender/receiver in the same pipeline

I wish to build a single gstreamer pipeline that does both rtp audio send and receive.
Based on the (few) examples I've found, here is my almost-working code.
(The program is written in Rexx, but it's pretty obvious what is happening, I think; here it looks a lot like bash.) The line-continuation character is the comma, and the "", bits just insert blank lines for readability.
rtp_recv_port = 8554
rtp_send_port = 8555
pipeline = "gst-launch -e",
"",
"gstrtpbin",
" name=rtpbin",
"",
"udpsrc port="rtp_recv_port, -- do-timestamp=true
' ! "application/x-rtp,media=audio,payload=8,clock-rate=8000,encoding-name=PCMA,channels=1" ',
" ! rtpbin.recv_rtp_sink_0",
"",
"rtpbin. ",
" ! rtppcmadepay",
" ! decodebin ",
' ! "audio/x-raw-int, width=16, depth=16, rate=8000, channels=1" ',
" ! volume volume=5.0 ",
" ! autoaudiosink sync=false",
"",
"autoaudiosrc ",
" ! audioconvert ",
' ! "audio/x-raw-int,width=16,depth=16,rate=8000,channels=1" ',
" ! alawenc ",
" ! rtppcmapay perfect-rtptime=true mtu=2000",
" ! rtpbin.send_rtp_sink_1",
"",
"rtpbin.send_rtp_src_1 ",
" ! audioconvert",
" ! audioresample",
" ! udpsink port="rtp_send_port "host="ipaddr
pipeline "> pipe.out"
If I comment out the lines after
" ! autoaudiosink sync=false",
the receive-only portion works just fine. However, if I leave those lines in place, I get this error:
ERROR: from element /GstPipeline:pipeline0/GstUDPSrc:udpsrc0: Internal data flow error.
Additional debug info:
gstbasesrc.c(2582): gst_base_src_loop (): /GstPipeline:pipeline0/GstUDPSrc:udpsrc0:
streaming task paused, reason not-linked (-1)
So what's suddenly become unlinked? I'd understand if the error were in the autoaudiosrc portion, but why does it show up in the udpsrc section?
Any suggestions or help, anyone?
(FWIW, after I get this part working I will go back and add the RTCP parts of the pipeline.)
Here is a pipeline that will send and receive audio (full duplex). I manually set the sources so that it is expandable (you can put video on this as well, and I have a sample pipeline for you if you want to do both). I set the jitter-buffer mode to BUFFER because mine runs on a network with a ton of jitter. Within this sample pipe, you can add all your variable changes (volume, your audio source, encoding and decoding, etc.).
sudo gst-launch gstrtpbin \
name=rtpbin buffer-mode=RTP_JITTER_BUFFER_MODE_BUFFER \
audiotestsrc ! queue ! audioconvert ! alawenc ! \
rtppcmapay pt=8 ! rtpbin.send_rtp_sink_0 rtpbin.send_rtp_src_0 ! \
multiudpsink clients="127.0.0.1:5002" sync=false async=false \
udpsrc port=5004 caps="application/x-rtp, media=audio, payload=8, clock-rate=8000, \
encoding-name=PCMA" ! queue ! rtpbin.recv_rtp_sink_0 \
rtpbin. ! rtppcmadepay ! alawdec ! alsasink
I have had issues with the control (RTCP) packets: I have found that a loopback test is not sufficient if you are utilizing RTCP; you will have to test with two computers talking to each other.
Let me know if this works for you, as I have tested on four different machines and all have worked.
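
Note that the pipelines in this thread use GStreamer 0.10 names (gst-launch, gstrtpbin, audio/x-raw-int). Below is a rough 1.0 translation of the answer's pipeline as a Python sketch, untested here: rtpbin replaces gstrtpbin, the buffer-mode enum nick is "buffer", and autoaudiosink stands in for alsasink.

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Full-duplex audio over RTP, translated to GStreamer 1.0 element names.
pipeline = Gst.parse_launch(
    "rtpbin name=rtpbin buffer-mode=buffer "
    "audiotestsrc ! queue ! audioconvert ! alawenc "
    "! rtppcmapay pt=8 ! rtpbin.send_rtp_sink_0 "
    "rtpbin.send_rtp_src_0 "
    "! multiudpsink clients=127.0.0.1:5002 sync=false async=false "
    "udpsrc port=5004 caps=\"application/x-rtp,media=audio,payload=8,"
    "clock-rate=8000,encoding-name=PCMA\" "
    "! queue ! rtpbin.recv_rtp_sink_0 "
    "rtpbin. ! rtppcmadepay ! alawdec ! autoaudiosink"
)
pipeline.set_state(Gst.State.PLAYING)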