I wish to build a single gstreamer pipeline that does both rtp audio send and receive.
Based on the examples (few as they are) that I've found, here is my almost working code.
(The program is written in Rexx, but it's pretty obvious what is happening, I think; here it looks a lot like bash!) The line-continuation character is the comma, and the "", bits just insert blank lines for readability.
rtp_recv_port = 8554
rtp_send_port = 8555
pipeline = "gst-launch -e",
"",
"gstrtpbin",
" name=rtpbin",
"",
"udpsrc port="rtp_recv_port, -- do-timestamp=true
' ! "application/x-rtp,media=audio,payload=8,clock-rate=8000,encoding-name=PCMA,channels=1" ',
" ! rtpbin.recv_rtp_sink_0",
"",
"rtpbin. ",
" ! rtppcmadepay",
" ! decodebin ",
' ! "audio/x-raw-int, width=16, depth=16, rate=8000, channels=1" ',
" ! volume volume=5.0 ",
" ! autoaudiosink sync=false",
"",
"autoaudiosrc ",
" ! audioconvert ",
' ! "audio/x-raw-int,width=16,depth=16,rate=8000,channels=1" ',
" ! alawenc ",
" ! rtppcmapay perfect-rtptime=true mtu=2000",
" ! rtpbin.send_rtp_sink_1",
"",
"rtpbin.send_rtp_src_1 ",
" ! audioconvert",
" ! audioresample",
" ! udpsink port="rtp_send_port "host="ipaddr
pipeline "> pipe.out"
If I comment out the lines after
" ! autoaudiosink sync=false",
the receive-only portion works just fine. However, if I leave those lines in place, I get this error:
ERROR: from element /GstPipeline:pipeline0/GstUDPSrc:udpsrc0: Internal data flow error.
Additional debug info:
gstbasesrc.c(2582): gst_base_src_loop (): /GstPipeline:pipeline0/GstUDPSrc:udpsrc0:
streaming task paused, reason not-linked (-1)
So what has suddenly become unlinked? I'd understand if the error were in the autoaudiosrc portion, but why does it show up in the udpsrc section?
Any suggestions?
(FWIW) After I get this part working, I will go back in and add the RTCP parts of the pipeline.
Here is a pipeline that will send and receive audio (full duplex). I set the sources manually so that it is expandable (you can put video on this as well, and I have a sample pipeline for you if you want to do both). I set the jitter buffer mode to BUFFER because mine runs on a network with a TON of jitter. Within this sample pipe you can then add all your own variations (volume, your audio source, encoding and decoding, etc.).
sudo gst-launch gstrtpbin \
name=rtpbin audiotestsrc ! queue ! audioconvert ! alawenc ! \
rtppcmapay pt=8 ! rtpbin.send_rtp_sink_0 rtpbin.send_rtp_src_0 ! \
multiudpsink clients="127.0.0.1:5002" sync=false async=false \
udpsrc port=5004 caps="application/x-rtp, media=audio, payload=8, clock-rate=8000, \
encoding-name=PCMA" ! queue ! rtpbin.recv_rtp_sink_0 \
rtpbin. buffer-mode=RTP_JITTER_BUFFER_MODE_BUFFER ! rtppcmadepay ! alawdec ! alsasink
I have had issues with the control (RTCP) packets. I have found that a loopback test is not sufficient if you are using RTCP. You will have to test on two computers talking to each other.
Let me know if this works for you as I have tested on 4 different machines and all have worked.
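For the RTCP legs mentioned above: rtpbin pairs each session with send_rtcp_src_N and recv_rtcp_sink_N pads. Below is a sketch only, not tested here, of a duplex pipeline like the one above with the RTCP pads wired up, built with gst_parse_launch(); the ports, host address and the single-session layout are placeholder choices, and it uses the 0.10-era gstrtpbin name from this thread (the element is just rtpbin in GStreamer 1.0).
#include <gst/gst.h>

int main(int argc, char *argv[]) {
    gst_init(&argc, &argv);

    GError *error = NULL;
    GstElement *pipeline = gst_parse_launch(
        "gstrtpbin name=rtpbin "
        /* outgoing audio (session 0): RTP to 5002, our RTCP to 5003, remote RTCP back in on 5007 */
        "audiotestsrc ! queue ! audioconvert ! alawenc ! rtppcmapay pt=8 ! rtpbin.send_rtp_sink_0 "
        "rtpbin.send_rtp_src_0  ! udpsink host=127.0.0.1 port=5002 sync=false async=false "
        "rtpbin.send_rtcp_src_0 ! udpsink host=127.0.0.1 port=5003 sync=false async=false "
        "udpsrc port=5007 ! rtpbin.recv_rtcp_sink_0 "
        /* incoming audio for the same session: RTP on 5004 */
        "udpsrc port=5004 caps=\"application/x-rtp,media=audio,payload=8,clock-rate=8000,encoding-name=PCMA\" "
        "! queue ! rtpbin.recv_rtp_sink_0 "
        "rtpbin. ! rtppcmadepay ! alawdec ! autoaudiosink",
        &error);
    if (pipeline == NULL) {
        g_printerr("Parse error: %s\n", error->message);
        return 1;
    }

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    GMainLoop *loop = g_main_loop_new(NULL, FALSE);
    g_main_loop_run(loop);
    return 0;
}
On the remote side the port numbers are mirrored; the sync=false async=false settings on the RTCP udpsinks follow the rtpbin documentation examples.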
Related
I have the pipeline below running in an RTSP server with GStreamer. I want to grab each frame after the socketsrc, process it with OpenCV, and then push it back into the pipeline.
I tried to add an appsrc/appsink pair following this tutorial:
https://gstreamer.freedesktop.org/documentation/tutorials/basic/short-cutting-the-pipeline.html?gi-language=c
but I didn't manage to get it working.
pipeline = "("
"socketsrc name= socket_src ! application/x-rtp , payload = 96 ,clock-rate=90000 ! "
"rtpjitterbuffer name= jitter_buffer ! rtph264depay ! h264parse name= parse ! rtph264pay name=pay0 pt=96 "
")";
GstRTSPMountPoints *mounts = gst_rtsp_server_get_mount_points(p_server);
GstRTSPMediaFactory *new_factory = gst_rtsp_media_factory_new();
gst_rtsp_media_factory_set_profiles(new_factory, GST_RTSP_PROFILE_AVP);
gst_rtsp_media_factory_set_launch(new_factory, pipeline.c_str());
g_signal_connect(new_factory, "media-configure", (GCallback)media_configure_cb, this);
std::cout << domain_name << std::endl;
gst_rtsp_media_factory_set_shared(new_factory, false);
gst_rtsp_mount_points_add_factory(mounts, domain_name.c_str(), new_factory);
g_object_unref(mounts);
Any idea how I can get the frame here using OpenCV, process it, and push it back into the pipeline?
Thanks!
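For what it's worth, here is the kind of approach that is usually suggested for this, as a sketch only and not tested against this exact server: split the launch description around an appsink/appsrc pair, then have the media-configure callback you already connect look both elements up by name and forward every decoded frame through your OpenCV code. The names opencv_sink and opencv_src, the decode/re-encode elements and the process_with_opencv() hook are hypothetical placeholders; decoding and re-encoding adds latency, and you may need to put fixed caps on the appsrc if preroll stalls before the first sample arrives.
#include <gst/gst.h>
#include <gst/app/gstappsink.h>
#include <gst/app/gstappsrc.h>
#include <gst/rtsp-server/rtsp-server.h>

/* Hypothetical factory description, split around appsink/appsrc. */
static const char *pipeline_description =
    "( socketsrc name=socket_src ! application/x-rtp,payload=96,clock-rate=90000 ! "
    "  rtpjitterbuffer name=jitter_buffer ! rtph264depay ! h264parse ! avdec_h264 ! "
    "  videoconvert ! appsink name=opencv_sink emit-signals=true "
    "  appsrc name=opencv_src is-live=true format=time ! videoconvert ! "
    "  x264enc tune=zerolatency ! rtph264pay name=pay0 pt=96 )";

/* Pull each decoded frame, process it, and push it back into the sending half. */
static GstFlowReturn on_new_sample(GstAppSink *sink, gpointer user_data) {
    GstAppSrc *src = GST_APP_SRC(user_data);
    GstSample *sample = gst_app_sink_pull_sample(sink);
    if (sample == NULL)
        return GST_FLOW_EOS;

    /* Forward the negotiated raw caps to the appsrc (cheap if unchanged). */
    gst_app_src_set_caps(src, gst_sample_get_caps(sample));

    /* Copy the frame so it can be modified, hand it to OpenCV, then push it back. */
    GstBuffer *buffer = gst_buffer_copy_deep(gst_sample_get_buffer(sample));
    GstMapInfo map;
    if (gst_buffer_map(buffer, &map, GST_MAP_READWRITE)) {
        /* process_with_opencv(map.data, map.size);  hypothetical: wrap map.data in a cv::Mat */
        gst_buffer_unmap(buffer, &map);
    }

    GstFlowReturn ret = gst_app_src_push_buffer(src, buffer); /* takes ownership of buffer */
    gst_sample_unref(sample);
    return ret;
}

static void media_configure_cb(GstRTSPMediaFactory *factory, GstRTSPMedia *media, gpointer user_data) {
    GstElement *element = gst_rtsp_media_get_element(media); /* pipeline built from the launch string */
    GstElement *sink = gst_bin_get_by_name(GST_BIN(element), "opencv_sink");
    GstElement *src  = gst_bin_get_by_name(GST_BIN(element), "opencv_src");

    g_signal_connect(sink, "new-sample", G_CALLBACK(on_new_sample), src);

    /* The media pipeline keeps its own refs on both elements while it exists. */
    gst_object_unref(sink);
    gst_object_unref(src);
    gst_object_unref(element);
}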
While trying to implement a simple player (with gst-launch) for a CDN that expects the initial request headers to be sent on all subsequent streams (probably to keep bots out), I found that hlsdemux and adaptivedemux do not reuse the initial source's headers for the following requests.
Is it actually possible to have a pre-configured curlhttpsrc to be reused by hlsdemux and its super classes?
This is the pipeline I am using:
gst-launch-1.0 -v \
curlhttpsrc \
name=curl user-agent=my-user-agent \
location=http://localhost:8000/playlist.m3u8 curl. \
! hlsdemux \
! fakesink sync=false
the playlist was generated with:
gst-launch-1.0 -v \
videotestsrc is-live=true \
! x264enc \
! h264parse \
! hlssink2 max-files=5 playlist-root=http://localhost:8090
its output
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-MEDIA-SEQUENCE:43
#EXT-X-TARGETDURATION:15
#EXTINF:15.000000953674316,
http://localhost:8090/segment00043.ts
#EXTINF:15.000000953674316,
http://localhost:8090/segment00044.ts
#EXTINF:15.000000953674316,
http://localhost:8090/segment00045.ts
#EXTINF:15.000000953674316,
http://localhost:8090/segment00046.ts
#EXTINF:15.000000953674316,
http://localhost:8090/segment00047.ts
#EXT-X-ENDLIST
To mimic the CDN, I used the snippet below to serve the playlist on port 8000 and the segments on port 8090 (the CDN uses different hosts), and added a user-agent check so I can see exactly when my pipeline breaks.
from http.server import SimpleHTTPRequestHandler, test
import sys

class Handler(SimpleHTTPRequestHandler):
    def parse_request(self) -> bool:
        rv = super().parse_request()
        if self.headers['User-Agent'] != "my-user-agent":
            self.send_error(404, "Wrong user-agent")
            return False
        return rv

test(Handler, port=int(sys.argv[1]))
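One workaround that is sometimes suggested, shown here only as an untested sketch: rather than reusing the curlhttpsrc instance itself, copy its configuration onto the HTTP sources hlsdemux creates internally. GstBin emits deep-element-added (since 1.10) for every element added anywhere below the pipeline, so you can set user-agent on each new source as it appears, whether hlsdemux picks curlhttpsrc or souphttpsrc by rank. This applies to the older adaptivedemux-based hlsdemux; it may not cover every internal request (playlist refreshes go through a separate downloader), and the newer adaptivedemux2 elements use their own download manager.
#include <gst/gst.h>

/* Called for every element added anywhere below the top-level pipeline. */
static void on_deep_element_added(GstBin *bin, GstBin *sub_bin,
                                  GstElement *element, gpointer user_data) {
    const gchar *user_agent = user_data;

    /* Only touch elements that actually expose a user-agent property,
     * i.e. the HTTP source instances hlsdemux spawns internally. */
    if (g_object_class_find_property(G_OBJECT_GET_CLASS(element), "user-agent"))
        g_object_set(element, "user-agent", user_agent, NULL);
}

int main(int argc, char *argv[]) {
    gst_init(&argc, &argv);

    GError *error = NULL;
    GstElement *pipeline = gst_parse_launch(
        "curlhttpsrc name=curl user-agent=my-user-agent "
        "location=http://localhost:8000/playlist.m3u8 "
        "! hlsdemux ! fakesink sync=false", &error);
    if (pipeline == NULL) {
        g_printerr("Parse error: %s\n", error->message);
        return 1;
    }

    g_signal_connect(pipeline, "deep-element-added",
                     G_CALLBACK(on_deep_element_added), (gpointer)"my-user-agent");

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    GstBus *bus = gst_element_get_bus(pipeline);
    GstMessage *msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
                                                 GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
    if (msg)
        gst_message_unref(msg);
    gst_object_unref(bus);
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}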
My PIPELINE-DESCRIPTION works with video only:
"rtspsrc protocols=tcp location=" + urlStream_ + " latency=300 ! decodebin3 ! autovideosink ! autoaudiosink";
But I would like to receive video+audio. With the pipeline below I only get the first video frame and no audio:
"rtspsrc protocols=tcp location=" + urlStream_ + " latency=300 ! decodebin3 ! autovideosink ! autoaudiosink";
You will need to connect the autoaudiosink to the decodebin3. Currently you are connecting the audio sink to the video sink, which obviously is bogus.
It is also advisable to use a queue after each demuxer pad. Try:
"rtspsrc protocols=tcp location=" + urlStream_ + " latency=300 ! decodebin3 name=decodebin ! queue ! autovideosink decodebin. ! queue ! autoaudiosink";
I'm new to GStreamer. I want to encode the video from my MacBook Pro's built-in camera to H.264 and then play it. On the command line I tried
gst-launch-1.0 autovideosrc ! queue ! x264enc ! avdec_h264 ! queue ! autovideosink
and it works. But when I run the C++ code below, it fails and only shows a green screen.
video_src = gst_element_factory_make("autovideosrc", "video_source");
video_enc = gst_element_factory_make("x264enc", "videoEncoder");
video_dec = gst_element_factory_make("avdec_h264", "videodecoder");
video_sink = gst_element_factory_make("osxvideosink", nullptr);
gst_bin_add_many...
gst_element_link_many (video_src, screen_queue, video_enc, video_dec, video_sink, NULL);
I'm not sure how to correct it. Thanks!
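For comparison, here is a minimal C sketch that mirrors the gst-launch line that works: it actually creates the screen_queue (and a second queue) that the snippet above links but never shows being created, and it checks the link result. Setting tune=zerolatency on x264enc is an assumption worth trying with a live camera, since the encoder's default look-ahead delays output, and is not a confirmed fix for the green screen.
#include <gst/gst.h>

int main(int argc, char *argv[]) {
    gst_init(&argc, &argv);

    GstElement *pipeline     = gst_pipeline_new("cam-h264-loop");
    GstElement *video_src    = gst_element_factory_make("autovideosrc", "video_source");
    GstElement *screen_queue = gst_element_factory_make("queue", "screen_queue");
    GstElement *video_enc    = gst_element_factory_make("x264enc", "videoEncoder");
    GstElement *video_dec    = gst_element_factory_make("avdec_h264", "videodecoder");
    GstElement *out_queue    = gst_element_factory_make("queue", "out_queue");
    GstElement *video_sink   = gst_element_factory_make("autovideosink", "video_sink");

    if (!pipeline || !video_src || !screen_queue || !video_enc ||
        !video_dec || !out_queue || !video_sink) {
        g_printerr("Failed to create one of the elements\n");
        return 1;
    }

    /* Assumption: low-latency settings so live camera frames are encoded and
     * displayed immediately instead of being buffered inside the encoder. */
    gst_util_set_object_arg(G_OBJECT(video_enc), "tune", "zerolatency");

    gst_bin_add_many(GST_BIN(pipeline), video_src, screen_queue, video_enc,
                     video_dec, out_queue, video_sink, NULL);
    if (!gst_element_link_many(video_src, screen_queue, video_enc, video_dec,
                               out_queue, video_sink, NULL)) {
        g_printerr("Failed to link elements\n");
        return 1;
    }

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    GstBus *bus = gst_element_get_bus(pipeline);
    GstMessage *msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
                                                 GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
    if (msg)
        gst_message_unref(msg);
    gst_object_unref(bus);
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}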
I'd like to use the pipeline below to play content both with and without sound. The problem is that content without sound leaves the pipeline PREROLLING and it never plays.
gst-launch-1.0.exe uridecodebin uri=file:///home/mymediafile.ogv name=d1 ! tee name=t1 ! queue max-size-buffers=2 ! jpegenc ! appsink name=myappsink t1. ! queue ! autovideosink d1. ! queue ! audioconvert ! audioresample ! autoaudiosink
How can I solve this issue?
I found no way to get your pipeline going on the command line. If I put in the audio portion of the pipeline, the files with no audio hang.
In your application, however, you'll be able to add a handler for the pad-added signal and only add the audio portion of the pipeline when needed. Some pseudocode:
void decodebin_pad_added(GstElement *decodebin, GstPad *new_pad, gpointer user_data) {
    GstElement *pipeline = (GstElement *)user_data;

    /* Only react to audio pads. */
    GstCaps *audio_caps = gst_caps_from_string("audio/x-raw");
    GstCaps *pad_caps = gst_pad_get_current_caps(new_pad);
    if (pad_caps == NULL)
        pad_caps = gst_pad_query_caps(new_pad, NULL);
    gboolean is_audio = gst_caps_can_intersect(pad_caps, audio_caps);
    gst_caps_unref(pad_caps);
    gst_caps_unref(audio_caps);
    if (!is_audio)
        return;

    /* Build the audio branch as a bin with ghost pads, add it to the running
     * pipeline and bring it up to the pipeline's state. */
    GstElement *audio_bin = gst_parse_bin_from_description(
        "queue ! audioconvert ! audioresample ! autoaudiosink", TRUE, NULL);
    gst_bin_add(GST_BIN(pipeline), audio_bin);
    gst_element_sync_state_with_parent(audio_bin);

    /* Link the newly exposed decoder pad to the audio branch. */
    GstPad *sink_pad = gst_element_get_static_pad(audio_bin, "sink");
    gst_pad_link(new_pad, sink_pad);
    gst_object_unref(sink_pad);
}

void decodebin_no_more_pads(GstElement *decodebin, gpointer user_data) {
    GstElement *pipeline = (GstElement *)user_data;
    gst_element_set_state(pipeline, GST_STATE_PLAYING);
}

GstElement *pipeline = gst_parse_launch("uridecodebin uri=file:///home/mymediafile.ogv name=d1 ! tee name=t1 ! queue max-size-buffers=2 ! jpegenc ! appsink name=myappsink t1. ! queue ! autovideosink", NULL);
GstElement *decodebin = gst_bin_get_by_name(GST_BIN(pipeline), "d1");
g_signal_connect(decodebin, "pad-added", G_CALLBACK(decodebin_pad_added), pipeline);
g_signal_connect(decodebin, "no-more-pads", G_CALLBACK(decodebin_no_more_pads), pipeline);
gst_object_unref(decodebin);
gst_element_set_state(pipeline, GST_STATE_PAUSED); /* pause so the demuxer and decoders get set up and find out what's in the file */
Add async-handling=true to the autoaudiosink.
gst-launch-1.0.exe uridecodebin uri=file:///home/mymediafile.ogv
name=d1 ! tee name=t1 ! queue max-size-buffers=2 ! jpegenc ! appsink
name=myappsink t1. ! queue ! autovideosink d1. ! queue ! audioconvert
! audioresample ! autoaudiosink async-handling=true