I have this pipeline using an RTSP server with GStreamer. I want to grab the frame after the socketsrc, process it with OpenCV, and then push it back into the pipeline.
I tried to add an appsrc with an appsink following this tutorial:
https://gstreamer.freedesktop.org/documentation/tutorials/basic/short-cutting-the-pipeline.html?gi-language=c
but I didn't manage to get it working.
pipeline = "("
"socketsrc name= socket_src ! application/x-rtp , payload = 96 ,clock-rate=90000 ! "
"rtpjitterbuffer name= jitter_buffer ! rtph264depay ! h264parse name= parse ! rtph264pay name=pay0 pt=96 "
")";
GstRTSPMountPoints *mounts = gst_rtsp_server_get_mount_points(p_server);
GstRTSPMediaFactory *new_factory = gst_rtsp_media_factory_new();
gst_rtsp_media_factory_set_profiles(new_factory, GST_RTSP_PROFILE_AVP);
gst_rtsp_media_factory_set_launch(new_factory, pipeline.c_str());
g_signal_connect(new_factory, "media-configure", (GCallback)media_configure_cb, this);
std::cout << domain_name << std::endl;
gst_rtsp_media_factory_set_shared(new_factory, false);
gst_rtsp_mount_points_add_factory(mounts, domain_name.c_str(), new_factory);
g_object_unref(mounts);
Any idea how I can get the frame here using OpenCV, process it, and push it back into the pipeline?
Thanks!
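For reference, a minimal sketch of the appsink → OpenCV → appsrc hand-off that the linked tutorial describes. It assumes the pipeline has been split around an appsink/appsrc pair with video/x-raw,format=BGR caps negotiated on both sides; the blur is placeholder processing, and gst_buffer_new_memdup requires GStreamer ≥ 1.20 (older versions can copy into gst_buffer_new_wrapped):

#include <gst/gst.h>
#include <gst/app/gstappsink.h>
#include <gst/app/gstappsrc.h>
#include <gst/video/video.h>
#include <opencv2/opencv.hpp>

// Called for every sample that arrives at the appsink.
static GstFlowReturn on_new_sample(GstAppSink *appsink, gpointer user_data) {
    GstAppSrc *appsrc = GST_APP_SRC(user_data);
    GstSample *sample = gst_app_sink_pull_sample(appsink);
    if (!sample)
        return GST_FLOW_ERROR;

    GstBuffer *buffer = gst_sample_get_buffer(sample);
    GstVideoInfo info;
    gst_video_info_from_caps(&info, gst_sample_get_caps(sample));

    GstMapInfo map;
    if (gst_buffer_map(buffer, &map, GST_MAP_READ)) {
        // Wrap the mapped data without copying, then clone so we own the pixels.
        cv::Mat frame(info.height, info.width, CV_8UC3, map.data);
        cv::Mat processed = frame.clone();
        gst_buffer_unmap(buffer, &map);

        cv::GaussianBlur(processed, processed, cv::Size(5, 5), 0); // placeholder processing

        // Copy the processed pixels into a new buffer, keep the original
        // timestamps, and push it back into the pipeline through appsrc.
        GstBuffer *out = gst_buffer_new_memdup(processed.data,
                processed.total() * processed.elemSize());
        GST_BUFFER_PTS(out) = GST_BUFFER_PTS(buffer);
        GST_BUFFER_DURATION(out) = GST_BUFFER_DURATION(buffer);
        gst_app_src_push_buffer(appsrc, out); // takes ownership of out
    }
    gst_sample_unref(sample);
    return GST_FLOW_OK;
}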
My PIPELINE-DESCRIPTION works for video only:
"rtspsrc protocols=tcp location=" + urlStream_ + " latency=300 ! decodebin3 ! autovideosink ! autoaudiosink";
But I would like to receive video+audio. With the pipeline below, I only get the first video frame and no audio:
"rtspsrc protocols=tcp location=" + urlStream_ + " latency=300 ! decodebin3 ! autovideosink ! autoaudiosink";
You will need to connect the autoaudiosink to the decodebin3. Currently you are connecting it after the video sink, which obviously is bogus.
It is also advised to use a queue after each demuxer pad. Try:
"rtspsrc protocols=tcp location=" + urlStream_ + " latency=300 ! decodebin3 name=decodebin ! queue ! autovideosink decodebin. ! queue ! autoaudiosink";
I have a gstreamer pipeline that works on the command line and I am trying to convert it to C++ code. I have most of it, except that I need to replicate the -e flag in C++ and I'm not sure how to add it to the pipeline. Here is the command line:
gst-launch-1.0 -e udpsrc port=8000 ! application/x-rtp, encoding-name=H264, payload=109 ! tee name=t t. ! rtph264depay ! h264parse ! queue ! avdec_h264 ! videoconvert ! autovideosink t. ! rtph264depay ! h264parse ! queue ! mp4mux ! filesink location=~/camera.mp4
Here is the C++ code I have. It works to display a live stream from the camera and to write an mp4 file; however, the file is not playable. The -e flag is what makes the file playable.
// [1] Create Elements
pipeline = gst_pipeline_new("xvoverlay");
src = gst_element_factory_make("udpsrc", NULL);
caps = gst_element_factory_make("capsfilter", NULL);
tee = gst_element_factory_make("tee", "tee");
// Display
rtpDepay = gst_element_factory_make("rtph264depay", NULL);
h264Parse = gst_element_factory_make("h264parse", NULL);
displayQueue = gst_element_factory_make("queue", NULL);
decoder = gst_element_factory_make("avdec_h264", NULL);
videoConvert = gst_element_factory_make("videoconvert", NULL);
upload = gst_element_factory_make("d3d11upload", NULL);
sink = gst_element_factory_make("d3d11videosink", NULL);
// Record
recordRtpDepay = gst_element_factory_make("rtph264depay", NULL);
recordH264Parse = gst_element_factory_make("h264parse", NULL);
recordQueue = gst_element_factory_make("queue", "save_queue");
mux = gst_element_factory_make("mp4mux", NULL);
filesink = gst_element_factory_make("filesink", NULL);
// [2] Set element properties
g_object_set(src, "port", port, NULL);
g_object_set(caps, "caps", gst_caps_from_string("application/x-rtp, encoding-name=H264, payload=109"), NULL);
g_object_set(filesink, "location", "camera.mp4", NULL);
//g_object_set(mux, "faststart", true, NULL);
// [3] Add elements to pipeline and link together
//gst_bin_add_many(GST_BIN(pipeline), src, caps, rtpDepay, h264Parse, displayQueue, decoder, videoConvert, upload, sink, NULL);
//gst_element_link_many(src, caps, rtpDepay, h264Parse, displayQueue, decoder, videoConvert, upload, sink, NULL);
gst_bin_add_many(GST_BIN(pipeline), src, caps, tee, rtpDepay, h264Parse, displayQueue, decoder, videoConvert, upload, sink, recordRtpDepay, recordH264Parse, recordQueue, mux, filesink, NULL);
if (!gst_element_link_many(src, caps, tee, NULL)
|| !gst_element_link_many(tee, rtpDepay, h264Parse, displayQueue, decoder, videoConvert, upload, sink, NULL)
|| !gst_element_link_many(tee, recordRtpDepay, recordH264Parse, recordQueue, mux, filesink, NULL))
{
qDebug() << "Failed to link elements";
}
How do I add the -e flag as a GstElement? I've searched online and can't find anyone doing this programmatically with that flag.
The -e flag sends an EOS at the end of the stream. While processing the EOS, the video writer writes the header information needed for the video to be playable.
The solution is to change the way you stop your pipeline. Instead of however you currently do it (you did not include that code), send a GST_EVENT_EOS on your udpsrc element. This signals that you have ended the stream: each element processes what it needs to and then forwards the event further down the pipeline. It eventually reaches the video writer, which then writes the needed header information to your video file before shutting down.
On shutdown, call
gst_element_send_event(src, gst_event_new_eos());
This will send the EOS event downstream and write the required metadata.
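A minimal shutdown sketch along those lines, assuming pipeline and src are the elements from the code above:

// Replicate gst-launch's -e behaviour: send EOS and let it drain.
gst_element_send_event(src, gst_event_new_eos());

// Block until the EOS (or an error) reaches the bus; by then mp4mux has
// written its headers and the file is playable.
GstBus *bus = gst_element_get_bus(pipeline);
GstMessage *msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
        (GstMessageType)(GST_MESSAGE_EOS | GST_MESSAGE_ERROR));
if (msg)
    gst_message_unref(msg);
gst_object_unref(bus);

gst_element_set_state(pipeline, GST_STATE_NULL);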
It's easier to use gst_parse_launch and then gst_bin_get_by_name on the elements you want to do fancy things with. Note that -e is a gst-launch-1.0 option, not something gst_init parses; in code you reproduce it by sending the EOS event yourself, as shown above.
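A sketch of that approach, reusing the question's launch string (the src name is an addition here, so the element can be fetched for the EOS on shutdown):

GError *error = NULL;
GstElement *pipeline = gst_parse_launch(
    "udpsrc name=src port=8000 ! application/x-rtp,encoding-name=H264,payload=109 ! "
    "tee name=t "
    "t. ! rtph264depay ! h264parse ! queue ! avdec_h264 ! videoconvert ! autovideosink "
    "t. ! rtph264depay ! h264parse ! queue ! mp4mux ! filesink location=camera.mp4",
    &error);

// gst_bin_get_by_name returns a new reference; unref it when done.
GstElement *src = gst_bin_get_by_name(GST_BIN(pipeline), "src");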
I'm new to GStreamer. I want to encode the video from my MacBook Pro's built-in camera to H.264 and then play it. On the command line I tried
gst-launch-1.0 autovideosrc ! queue ! x264enc ! avdec_h264 ! queue ! autovideosink
and it works. But when I run the C++ code, it fails and only shows a green screen.
video_src = gst_element_factory_make("autovideosrc", "video_source");
video_enc = gst_element_factory_make("x264enc", "videoEncoder");
video_dec = gst_element_factory_make("avdec_h264", "videodecoder");
video_sink = gst_element_factory_make("osxvideosink", nullptr);
gst_bin_add_many...
gst_element_link_many (video_src, screen_queue, video_enc, video_dec, video_sink, NULL);
Not sure how to correct it. Thanks!
I'd like to use the pipeline below to play content both with and without sound. The problem is that content without sound leaves the pipeline stuck PREROLLING and it never plays.
gst-launch-1.0.exe uridecodebin uri=file:///home/mymediafile.ogv name=d1 ! tee name=t1 ! queue max-size-buffers=2 ! jpegenc ! appsink name=myappsink t1. ! queue ! autovideosink d1. ! queue ! audioconvert ! audioresample ! autoaudiosink
How can I solve this issue?
I found no way to get your pipeline going on the command line: with the audio portion of the pipeline in place, files with no audio hang.
In your application, however, you can connect a signal for the pad-added events and only add the audio portion of the pipeline when needed. Some pseudo code:
void decodebin_pad_added(GstElement *decodebin, GstPad *new_pad, gpointer user_data) {
    GstElement *pipeline = (GstElement *)user_data;

    /* Ignore anything that is not an audio pad. */
    GstCaps *audio_caps = gst_caps_from_string("audio/x-raw");
    GstCaps *pad_caps = gst_pad_get_current_caps(new_pad);
    gboolean is_audio = pad_caps && gst_caps_can_intersect(pad_caps, audio_caps);
    gst_caps_unref(audio_caps);
    if (pad_caps)
        gst_caps_unref(pad_caps);
    if (!is_audio)
        return;

    /* Build the audio branch as a bin with a ghost sink pad, bring it to the
     * pipeline's state, and link the new decodebin pad to it. */
    GstElement *audio_bin = gst_parse_bin_from_description(
        "queue ! audioconvert ! audioresample ! autoaudiosink", TRUE, NULL);
    gst_bin_add(GST_BIN(pipeline), audio_bin);
    gst_element_sync_state_with_parent(audio_bin);

    GstPad *sink_pad = gst_element_get_static_pad(audio_bin, "sink");
    gst_pad_link(new_pad, sink_pad);
    gst_object_unref(sink_pad);
}
void decodebin_no_more_pads(GstElement *decodebin, gpointer user_data) {
    GstElement *pipeline = (GstElement *)user_data;
    gst_element_set_state(pipeline, GST_STATE_PLAYING);
}
GstElement* pipeline = gst_parse_launch("uridecodebin uri=file:///home/mymediafile.ogv name=d1 ! tee name=t1 ! queue max-size-buffers=2 ! jpegenc ! appsink name=myappsink t1. ! queue ! autovideosink", NULL);
GstElement* decodebin = gst_bin_get_by_name(GST_BIN(pipeline), "d1");
g_signal_connect(decodebin, "pad-added", G_CALLBACK(decodebin_pad_added), pipeline);
g_signal_connect(decodebin, "no-more-pads", G_CALLBACK(decodebin_no_more_pads), pipeline);
gst_element_set_state(pipeline, GST_STATE_PAUSED); // pause so the demuxer and decoders get set up and discover what's in the file
Add async-handling=true to the autoaudiosink.
gst-launch-1.0.exe uridecodebin uri=file:///home/mymediafile.ogv name=d1 ! tee name=t1 ! queue max-size-buffers=2 ! jpegenc ! appsink name=myappsink t1. ! queue ! autovideosink d1. ! queue ! audioconvert ! audioresample ! autoaudiosink async-handling=true
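If the pipeline is built in code instead, the same property can be set with g_object_set. The audiosink name is an assumption here, added to the launch string (autoaudiosink name=audiosink) so the element can be looked up; async-handling is a GstBin property and autoaudiosink is a bin:

GstElement *audio_sink = gst_bin_get_by_name(GST_BIN(pipeline), "audiosink");
g_object_set(audio_sink, "async-handling", TRUE, NULL);
gst_object_unref(audio_sink);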