GStreamer rtspsrc stops working once 32 streams have been created

I have a device running embedded Linux that can show RTSP streams from a camera. The user can switch a stream between windowed and full screen, and vice versa. After the stream has been changed 32 times, it stops working. I have tentatively narrowed the problem down to rtspsrc itself.
My question is: how does one clear the memory for the gst "stuff" without restarting the program?
If I run the pipeline with gst-launch-1.0, it works for more than 32 restarts, because the process is killed and restarted every time.
However, if I run my program and drive the rtspsrc count up to 31 (by switching between the two streams), and then run gst-launch-1.0 with an RTSP pipeline, the stream does not show up! It appears that until every program using gst is killed, the rtspsrc count will not reset back to 0.
I enabled debugging the rtspsrc:
export GST_DEBUG="rtspsrc:6"
Lots of log messages are printed each time the stream starts. They include the element name rtspsrcX, where X keeps increasing even though the previous stream was stopped:
First run log print:
**rtspsrc gstrtspsrc.c:8834:gst_rtspsrc_print_sdp_media:<rtspsrc0> RTSP response message**
Second run:
**rtspsrc gstrtspsrc.c:8855:gst_rtspsrc_print_sdp_media:<rtspsrc1> RTSP response message**
Continue stopping/starting the stream, and it increases up to 31, at which point the stream no longer shows up:
**rtspsrc gstrtspsrc.c:8855:gst_rtspsrc_print_sdp_media:<rtspsrc31> RTSP response message**
I'm not sure how to "reset" the stream each time the user stops it. It seems that gst can't release memory unless I kill the whole program (all programs using gst).
I have tried creating a new context each time the stream is re-started, but this doesn't help.
When I call gst_is_initialized each subsequent time, it returns true.
The main loop is stopped by calling the following from another thread:
g_main_loop_quit(loop_);
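For a cleaner shutdown, it may help to send EOS to the pipeline before quitting the loop, so the sink gets a chance to flush. A minimal sketch, assuming a hypothetical member pipelineElement_ that holds the element returned by gst_parse_launch():
void stop_stream()
{
    if (pipelineElement_)
        gst_element_send_event(pipelineElement_, gst_event_new_eos()); // bus_call then sees GST_MESSAGE_EOS
    if (loop_)
        g_main_loop_quit(loop_); // unblocks g_main_loop_run() in stream_video()
}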
The video feeds are controlled with the following:
GMainLoop *loop_;
pipeline = "rtspsrc location=rtsp://192.168.0.243/0 latency=0 ! rtph264depay ! h264parse ! imxvpudec ! imxipuvideosink window-width=512 window-height=384 sync=false"
or
pipeline = "rtspsrc location=rtsp://192.168.0.243/0 latency=0 ! rtph264depay ! h264parse ! imxvpudec ! imxipuvideosink window-width=1024 window-height=768 sync=false"
void stream_video(std::string pipeline)
{
GMainContext* context;
GstElement *pipelineElement;
GstBus *bus = NULL;
guint bus_watch_id = 0;
GstState state;
try
{
if(!gst_is_initialized())
{
std::cout << "GST Is not initialized - initializing " << pipeline.c_str();
gst_init_check(nullptr,nullptr,nullptr);
}
context = g_main_context_new(); // Creating a new context to see if the camera can be started more than 32 times, but the rtspsrc still increases when debugging
loop_ = g_main_loop_new (context, FALSE);
pipelineElement = gst_parse_launch(pipeline.c_str(), NULL);
bus = gst_pipeline_get_bus (GST_PIPELINE (pipelineElement));
bus_watch_id = gst_bus_add_watch (bus, bus_call, loop_);
gst_object_unref (bus);
bus = NULL;
gst_element_set_state(pipelineElement, GST_STATE_READY );
gst_element_set_state(pipelineElement, GST_STATE_PAUSED );
gst_element_set_state(pipelineElement, GST_STATE_PLAYING);
if (gst_element_get_state (pipelineElement, &state, NULL, 2*GST_SECOND) == GST_STATE_CHANGE_FAILURE)
{
std::cout << "gst: Failed to chage states State:" << state << " ID: " << stream_id_;
}
else
{
std::cout << "gst: Running..." << " ID: " << stream_id_ << " State:" << state << " Loop:" << loop_;
g_main_loop_run (loop_); // blocks until loop_ exits (EOS, error, stop request)
}
gst_element_set_state(pipelineElement, GST_STATE_PAUSED);
gst_element_set_state(pipelineElement, GST_STATE_READY );
gst_element_set_state(pipelineElement, GST_STATE_NULL); // Can only switch between certain states, see https://gstreamer.freedesktop.org/documentation/additional/design/states.html?gi-language=c
g_source_remove (bus_watch_id);
std::cout << "gst: Removing pipelineElement " << pipelineElement;
gst_object_unref (GST_OBJECT (pipelineElement));
pipelineElement = NULL;
g_main_context_unref (context);
context = NULL;
g_main_loop_unref (loop_);
loop_ = nullptr;
std::cout << "gst: Deleted pipeline" << " ID: " << stream_id_ << " State: " << state;
}
catch(const std::exception& e)
{
std::cout << "Error Caught: stream_video " << e.what();
}
return;
}
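Two notes on the code above. First, the rtspsrcN suffix in the debug output comes from a per-factory naming counter that only ever counts up for the lifetime of the process, so the rising number by itself is expected even when earlier instances are freed correctly; the interesting question is which resource actually runs out at 32. Second, gst_bus_add_watch() attaches its watch to the thread-default main context, not to the context passed to g_main_loop_new(), so creating a new GMainContext has no effect unless it is also pushed as the thread default. A minimal sketch of that pattern, using the same variables as stream_video():
context = g_main_context_new();
g_main_context_push_thread_default(context); // make it this thread's default context
loop_ = g_main_loop_new(context, FALSE);
/* ... build the pipeline, add the bus watch, run the loop ... */
g_main_loop_run(loop_);
/* ... stop the pipeline, remove the watch ... */
g_main_context_pop_thread_default(context); // undo before unreffing
g_main_context_unref(context);
g_main_loop_unref(loop_);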

Related

How to destroy Gst pipeline?

I'm building a pipeline in GStreamer and would like to destroy it when the WebRTC connection is gone. As it stands, when the WebRTC connection is established again, I receive old SDP and offers from the previous pipelines.
Before the pipeline is created, these commands run to set up the websocket:
QWebSocket *pSocket = websocketServer->nextPendingConnection();
m_client = pSocket;
m_client->sendTextMessage(QStringLiteral("Initiating WEBRTC handshake"));
QObject::connect(pSocket, &QWebSocket::textMessageReceived, this, &c_module_videostream::processTextMessage);
QObject::connect(pSocket, &QWebSocket::binaryMessageReceived, this, &c_module_videostream::processBinaryMessage);
QObject::connect(pSocket, &QWebSocket::disconnected, this, &c_module_videostream::socketDisconnected);
QObject::connect(this, &c_module_videostream::s_JSONReadyToSend, this, &c_module_videostream::sendJSONTextMessage);
QObject::connect(this, &c_module_videostream::s_needImageData, this, &c_module_videostream::startImageDataStream);
QObject::connect(this, &c_module_videostream::s_enoughImageData, this, &c_module_videostream::stopImageDataStream);
The pipeline is created with this command:
pipeline_description = pipeline_description + "appsrc name=CaliCam ! " + "video/x-raw, format=BGR, width=" + width + ", height=" + height + ", framerate=" + fps + "/1 ! " + "videoconvert !"
//"queue max-size-buffers=10 ! "
+ "x264enc bitrate=1000 speed-preset=ultrafast tune=zerolatency key-int-max=10 ! " + "video/x-h264,profile=constrained-baseline !" + "h264parse ! " + "rtph264pay config-interval=-1 name=payloader ! " + "application/x-rtp, media=video, encoding-name=H264, payload=96 !" + "webrtcbin name=webrtcbin_send";
pipeline = gst_parse_launch(pipeline_description.c_str(), &error);
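Note that gst_parse_launch() can return a partially constructed pipeline together with a non-NULL error, so it is worth checking both; a minimal sketch:
GError* error = nullptr;
pipeline = gst_parse_launch(pipeline_description.c_str(), &error);
if (error) {
    g_printerr("Parse error: %s\n", error->message);
    g_clear_error(&error);
    // pipeline may still be non-NULL here; decide whether to keep or unref it
}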
After that, here is the code that starts the pipeline:
// Start the Pipeline
RCLCPP_INFO_STREAM(nh->get_logger(), MODULE_NAME << "Starting Pipeline");
GstState cur_state;
int returnvalue;
do
{
RCLCPP_INFO_STREAM(nh->get_logger(), MODULE_NAME << "Starting...");
returnvalue = gst_element_set_state(GST_ELEMENT(pipeline), GST_STATE_PLAYING);
usleep(2000000);
gst_element_get_state(GST_ELEMENT(pipeline), &cur_state, NULL, GST_CLOCK_TIME_NONE);
RCLCPP_INFO_STREAM(nh->get_logger(), MODULE_NAME << "Pipeline status " << cur_state << "/" << returnvalue << "/" << (int)GST_STATE_PLAYING);
} while (cur_state != GST_STATE_PLAYING);
// Pipeline is open, Subscribe to Image transport
image_transport_Subscriber = nh->create_subscription<sensor_msgs::msg::Image>("calicam_front/left/image_rect", 1, std::bind(&c_module_videostream::imageCb, this, std::placeholders::_1));
RCLCPP_INFO_STREAM(nh->get_logger(), MODULE_NAME << "subs::" << this->image_transport_Subscriber);
RCLCPP_INFO_STREAM(nh->get_logger(), MODULE_NAME << "Starting Pipeline DONE");
pushImages = true;
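As an aside, the polling loop above can be collapsed into a single call, since gst_element_get_state() already blocks for up to a given timeout while an async state change completes. A sketch, with an arbitrary 5-second timeout:
gst_element_set_state(GST_ELEMENT(pipeline), GST_STATE_PLAYING);
GstState cur_state;
GstStateChangeReturn r =
    gst_element_get_state(GST_ELEMENT(pipeline), &cur_state, NULL, 5 * GST_SECOND);
if (r == GST_STATE_CHANGE_FAILURE)
    RCLCPP_ERROR_STREAM(nh->get_logger(), MODULE_NAME << "Pipeline failed to reach PLAYING");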
I tried to clear the pipeline with these functions but it still remains.
gst_element_set_state(this->pipeline, GST_STATE_NULL);
gst_object_unref(GST_OBJECT(this->pipeline));
gst_object_unref(GST_OBJECT(this->webrtcbin));
gst_object_unref(GST_OBJECT(this->appsrc));
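Unreffing the bin's children is usually unnecessary (the pipeline owns them) and is unsafe unless you took an extra ref on them yourself. What matters more for the "old SDP and offers" symptom is disconnecting the signal handlers, so callbacks from the dead pipeline can no longer fire. A hedged sketch of a teardown order, assuming this->webrtcbin holds a ref you own (e.g. from gst_bin_get_by_name()) and that the handlers were connected with this as user data:
g_signal_handlers_disconnect_by_data(this->webrtcbin, this); // stop on-negotiation-needed / on-ice-candidate callbacks
gst_element_set_state(this->pipeline, GST_STATE_NULL);       // release element resources first
gst_object_unref(this->webrtcbin);                           // drop only refs you actually own
this->webrtcbin = nullptr;
gst_object_unref(this->pipeline);
this->pipeline = nullptr;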

GStreamer sendonly to multiple WebRTC clients

I've been trying to setup a simple sendonly WebRTC client with GStreamer but I'm having issues with getting the actual video to display on the WebRTC receiver side. I am new to both GStreamer and WebRTC.
I'm using the examples from https://gitlab.freedesktop.org/gstreamer/gst-examples/-/tree/master/webrtc to try and come up with a combination of certain parts. I've had 1:1 communication working but I wanted to introduce the rooms so I can have more clients viewing the "view-only" stream from GStreamer.
My current code is based on the multiparty-sendrecv example, where I swapped out the audio for video. Furthermore, I'm using a modified version of the signalling server and a modified version of the JavaScript WebRTC client. If necessary I could provide code for all of the above, but to keep things simple I won't, because I don't think the problem lies in either the signalling server or the WebRTC client: the ICE candidates have been successfully negotiated along with the SDP offer & answer according to chrome://webrtc-internals/.
In order to figure out what's going on, I exported a graph of the GStreamer pipeline after a user had joined the room and been added to the pipeline.
As far as I can tell I should be receiving video data on my frontend, but I'm not. I've had a single weird case where the videotestsrc did show up, but I haven't been able to reproduce it. Because of this, I don't think the pipeline itself is necessarily wrong; perhaps we're dealing with some kind of race condition.
I've added the modified example of multiparty-sendrecv below; please take a look at it. Most of the methods have purposely been left out due to Stack Overflow's character limit.
Main functions
static void
handle_media_stream(GstPad* pad, GstElement* pipe, const char* convert_name,
const char* sink_name)
{
GstPad* qpad;
GstElement* q, * conv, * sink;
GstPadLinkReturn ret;
q = gst_element_factory_make("queue", NULL);
g_assert_nonnull(q);
conv = gst_element_factory_make(convert_name, NULL);
g_assert_nonnull(conv);
sink = gst_element_factory_make(sink_name, NULL);
g_assert_nonnull(sink);
gst_bin_add_many(GST_BIN(pipe), q, conv, sink, NULL);
gst_element_sync_state_with_parent(q);
gst_element_sync_state_with_parent(conv);
gst_element_sync_state_with_parent(sink);
gst_element_link_many(q, conv, sink, NULL);
qpad = gst_element_get_static_pad(q, "sink");
ret = gst_pad_link(pad, qpad);
g_assert_cmpint(ret, ==, GST_PAD_LINK_OK);
}
static void
on_incoming_decodebin_stream(GstElement* decodebin, GstPad* pad,
GstElement* pipe)
{
GstCaps* caps;
const gchar* name;
if (!gst_pad_has_current_caps(pad)) {
g_printerr("Pad '%s' has no caps, can't do anything, ignoring\n",
GST_PAD_NAME(pad));
return;
}
caps = gst_pad_get_current_caps(pad);
name = gst_structure_get_name(gst_caps_get_structure(caps, 0));
if (g_str_has_prefix(name, "video")) {
handle_media_stream(pad, pipe, "videoconvert", "autovideosink");
}
else if (g_str_has_prefix(name, "audio")) {
handle_media_stream(pad, pipe, "audioconvert", "autoaudiosink");
}
else {
g_printerr("Unknown pad %s, ignoring", GST_PAD_NAME(pad));
}
}
static void
on_incoming_stream(GstElement* webrtc, GstPad* pad, GstElement* pipe)
{
GstElement* decodebin;
GstPad* sinkpad;
if (GST_PAD_DIRECTION(pad) != GST_PAD_SRC)
return;
decodebin = gst_element_factory_make("decodebin", NULL);
g_signal_connect(decodebin, "pad-added",
G_CALLBACK(on_incoming_decodebin_stream), pipe);
gst_bin_add(GST_BIN(pipe), decodebin);
gst_element_sync_state_with_parent(decodebin);
sinkpad = gst_element_get_static_pad(decodebin, "sink");
gst_pad_link(pad, sinkpad);
gst_object_unref(sinkpad);
}
static void
add_peer_to_pipeline(const gchar* peer_id, gboolean offer)
{
int ret;
gchar* tmp;
GstElement* tee, * webrtc, * q;
GstPad* srcpad, * sinkpad;
tmp = g_strdup_printf("queue-%s", peer_id);
q = gst_element_factory_make("queue", tmp);
g_free(tmp);
webrtc = gst_element_factory_make("webrtcbin", peer_id);
g_object_set(webrtc, "bundle-policy", GST_WEBRTC_BUNDLE_POLICY_MAX_BUNDLE, NULL);
gst_bin_add_many(GST_BIN(pipeline), q, webrtc, NULL);
srcpad = gst_element_get_static_pad(q, "src");
g_assert_nonnull(srcpad);
sinkpad = gst_element_get_request_pad(webrtc, "sink_%u");
g_assert_nonnull(sinkpad);
ret = gst_pad_link(srcpad, sinkpad);
g_assert_cmpint(ret, ==, GST_PAD_LINK_OK);
gst_object_unref(srcpad);
gst_object_unref(sinkpad);
tee = gst_bin_get_by_name(GST_BIN(pipeline), "videotee");
g_assert_nonnull(tee);
srcpad = gst_element_get_request_pad(tee, "src_%u");
g_assert_nonnull(srcpad);
gst_object_unref(tee);
sinkpad = gst_element_get_static_pad(q, "sink");
g_assert_nonnull(sinkpad);
ret = gst_pad_link(srcpad, sinkpad);
g_assert_cmpint(ret, ==, GST_PAD_LINK_OK);
gst_object_unref(srcpad);
gst_object_unref(sinkpad);
/* This is the gstwebrtc entry point where we create the offer and so on. It
* will be called when the pipeline goes to PLAYING.
* XXX: We must connect this after webrtcbin has been linked to a source via
* get_request_pad() and before we go from NULL->READY otherwise webrtcbin
* will create an SDP offer with no media lines in it. */
if (offer)
g_signal_connect(webrtc, "on-negotiation-needed",
G_CALLBACK(on_negotiation_needed), (gpointer)peer_id);
/* We need to transmit this ICE candidate to the browser via the websockets
* signalling server. Incoming ice candidates from the browser need to be
* added by us too, see on_server_message() */
g_signal_connect(webrtc, "on-ice-candidate",
G_CALLBACK(send_ice_candidate_message), (gpointer)peer_id);
/* Incoming streams will be exposed via this signal */
g_signal_connect(webrtc, "pad-added", G_CALLBACK(on_incoming_stream),
pipeline);
/* Set to pipeline branch to PLAYING */
ret = gst_element_sync_state_with_parent(q);
g_assert_true(ret);
ret = gst_element_sync_state_with_parent(webrtc);
g_assert_true(ret);
GST_DEBUG_BIN_TO_DOT_FILE(GST_BIN(pipeline), GST_DEBUG_GRAPH_SHOW_ALL, "pipeline");
}
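/* Hedged sketch (not part of the original post): the reverse of
 * add_peer_to_pipeline(). When a peer leaves, the tee's request pad has
 * to be released, otherwise the dead branch keeps consuming buffers. */
static void
remove_peer_from_pipeline(const gchar* peer_id)
{
    gchar* tmp = g_strdup_printf("queue-%s", peer_id);
    GstElement* q = gst_bin_get_by_name(GST_BIN(pipeline), tmp);
    GstElement* webrtc = gst_bin_get_by_name(GST_BIN(pipeline), peer_id);
    g_free(tmp);
    if (!q || !webrtc)
        return;
    GstPad* qsink = gst_element_get_static_pad(q, "sink");
    GstPad* teesrc = gst_pad_get_peer(qsink); /* the tee's request pad */
    GstElement* tee = gst_bin_get_by_name(GST_BIN(pipeline), "videotee");
    gst_object_unref(qsink);
    gst_element_set_state(webrtc, GST_STATE_NULL);
    gst_element_set_state(q, GST_STATE_NULL);
    gst_bin_remove_many(GST_BIN(pipeline), q, webrtc, NULL);
    gst_object_unref(q);
    gst_object_unref(webrtc);
    gst_element_release_request_pad(tee, teesrc); /* hand the pad back to the tee */
    gst_object_unref(teesrc);
    gst_object_unref(tee);
}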
static gboolean
start_pipeline(void)
{
GstStateChangeReturn ret;
GError* error = NULL;
/* NOTE: webrtcbin currently does not support dynamic addition/removal of
* streams, so we use a separate webrtcbin for each peer, but all of them are
* inside the same pipeline. We start by connecting it to a fakesink so that
* we can preroll early. */
/*pipeline = gst_parse_launch("tee name=videotee ! queue ! fakesink "
"videotestsrc is-live=true pattern=ball ! videoconvert ! queue ! vp8enc deadline=1 ! rtpvp8pay ! "
"queue ! " RTP_CAPS_VP8 "96 ! videotee. ", &error);*/
pipeline = gst_parse_launch("tee name=videotee ! queue ! fakesink "
"videotestsrc is-live=true pattern=ball ! videoconvert ! queue ! vp8enc deadline=1 ! rtpvp8pay ! "
"queue ! " RTP_CAPS_VP8 "96 ! videotee. ", &error);
if (error) {
g_printerr("Failed to parse launch: %s\n", error->message);
g_error_free(error);
goto err;
}
g_print("Starting pipeline, not transmitting yet\n");
ret = gst_element_set_state(GST_ELEMENT(pipeline), GST_STATE_PLAYING);
if (ret == GST_STATE_CHANGE_FAILURE)
goto err;
return TRUE;
err:
g_print("State change failure\n");
if (pipeline)
g_clear_object(&pipeline);
return FALSE;
}
/*
* When we join a room, we are responsible for calling by starting negotiation
* with each peer in it by sending an SDP offer and ICE candidates.
*/
static void
do_join_room(const gchar* text)
{
gint ii, len;
gchar** peer_ids;
if (app_state != ROOM_JOINING) {
cleanup_and_quit_loop("ERROR: Received ROOM_OK when not calling",
ROOM_JOIN_ERROR);
return;
}
app_state = ROOM_JOINED;
g_print("Room joined\n");
/* Start recording, but not transmitting */
if (!start_pipeline()) {
cleanup_and_quit_loop("ERROR: Failed to start pipeline", ROOM_CALL_ERROR);
return;
}
peer_ids = g_strsplit(text, " ", -1);
g_assert_cmpstr(peer_ids[0], == , "ROOM_OK");
len = g_strv_length(peer_ids);
/* There are peers in the room already. We need to start negotiation
* (exchange SDP and ICE candidates) and transmission of media. */
if (len > 1 && strlen(peer_ids[1]) > 0) {
g_print("Found %i peers already in room\n", len - 1);
app_state = ROOM_CALL_OFFERING;
for (ii = 1; ii < len; ii++) {
gchar* peer_id = g_strdup(peer_ids[ii]);
g_print("Negotiating with peer %s\n", peer_id);
/* This might fail asynchronously */
call_peer(peer_id);
peers = g_list_prepend(peers, peer_id);
}
}
g_strfreev(peer_ids);
return;
}
int
main(int argc, char* argv[])
{
GOptionContext* context;
GError* error = NULL;
context = g_option_context_new("- gstreamer webrtc sendrecv demo");
g_option_context_add_main_entries(context, entries, NULL);
g_option_context_add_group(context, gst_init_get_option_group());
if (!g_option_context_parse(context, &argc, &argv, &error)) {
g_printerr("Error initializing: %s\n", error->message);
return -1;
}
if (!check_plugins())
return -1;
if (!room_id) {
g_printerr("--room-id is a required argument\n");
return -1;
}
if (!local_id)
local_id = g_strdup_printf("%s-%i", g_get_user_name(),
g_random_int_range(10, 10000));
/* Sanitize by removing whitespace, modifies string in-place */
g_strdelimit(local_id, " \t\n\r", '-');
g_print("Our local id is %s\n", local_id);
if (!server_url)
server_url = g_strdup(default_server_url);
/* Don't use strict ssl when running a localhost server, because
* it's probably a test server with a self-signed certificate */
{
GstUri* uri = gst_uri_from_string(server_url);
if (g_strcmp0("localhost", gst_uri_get_host(uri)) == 0 ||
g_strcmp0("127.0.0.1", gst_uri_get_host(uri)) == 0)
strict_ssl = FALSE;
gst_uri_unref(uri);
}
loop = g_main_loop_new(NULL, FALSE);
connect_to_websocket_server_async();
g_main_loop_run(loop);
gst_element_set_state(GST_ELEMENT(pipeline), GST_STATE_NULL);
g_print("Pipeline stopped\n");
gst_object_unref(pipeline);
g_free(server_url);
g_free(local_id);
g_free(room_id);
return 0;
}

gst-rtsp-server: detect client disconnect

I am implementing a video streaming pipeline using gst-rtsp-server. I need to know when an RTSP client both connects and disconnects.
From the examples provided with gst-rtsp-server, I can detect a client connecting using the "client-connected" signal of the GstRTSPServer. I'm looking for something similar for when the client disconnects.
I have tried the "closed" and "teardown-request" signals of GstRTSPClient, but those don't do anything when I disconnect the client.
I have also tried calling the following function on a timer, like it is done in several examples. I would expect that to print "Removed 1 sessions" at some point after I've terminated the client, but it never does.
static gboolean
remove_sessions (GstRTSPServer * server)
{
GstRTSPSessionPool *pool;
pool = gst_rtsp_server_get_session_pool (server);
guint removed = gst_rtsp_session_pool_cleanup (pool);
g_object_unref (pool);
g_print("Removed %d sessions\n", removed);
return TRUE;
}
My client is the following gstreamer pipeline:
gst-launch-1.0 -v rtspsrc location=rtsp://$STREAM_IP:8554/test latency=50 ! queue ! rtph264depay ! queue ! avdec_h264 ! autovideosink sync=false
How can I detect client disconnections?
Call gst_rtsp_server_client_filter() when you need to shut down the RTSP server (before deleting the server):
GstRTSPFilterResult clientFilterFunc(GstRTSPServer* server, GstRTSPClient* client, gpointer user)
{
return GST_RTSP_FILTER_REMOVE;
}
. . .
{
. . .
if( clientCount )
gst_rtsp_server_client_filter(server, clientFilterFunc, nullptr);
if (G_IS_OBJECT(server))
{
g_object_unref(server);
server = nullptr;
}
. . .
}
Code snippet for the client connect and close callbacks:
{
void clientClosed(GstRTSPClient* client, gpointer user)
{
--clientCount ;
std::stringstream strm;
strm << "Client closed ... count: " << ptrTestData->m_clientCount << std::endl;
g_print("%s", strm.str().c_str());
}
void clientConnected(GstRTSPServer* server, GstRTSPClient* client, gpointer user)
{
++clientCount ;
// hook the client close callback
g_signal_connect(client, "closed", reinterpret_cast<GCallback>(clientClosed), user);
std::stringstream strm;
strm << "Client connected ... count: " << ptrTestData->m_clientCount << std::endl;
g_print("%s", strm.str().c_str());
}
{
. . .
g_signal_connect(server, "client-connected", reinterpret_cast<GCallback>(clientConnected), &(testData));
. . .
}
}
Not sure what problems I had before, but this actually works:
When the client is shut down (Ctrl+C on the gst-launch-1.0 pipeline), the "teardown-request" signal of GstRTSPClient is emitted.
If the client loses connection to the server, the remove_sessions (GstRTSPServer * server) function I posted will report that it removed a session after some time.
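For completeness, the upstream examples drive that cleanup function from the main loop with a periodic timeout; remove_sessions() returns TRUE, so the source stays scheduled:
g_timeout_add_seconds (2, (GSourceFunc) remove_sessions, server);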

GStreamer pipeline hangs on gst_element_get_state

I have the following very basic code using the GStreamer library (GStreamer 1.8.1 on Xubuntu 16.04, if that matters):
#include <gst/gst.h>
int main(int argc, char *argv[])
{
gst_init(&argc, &argv);
const gchar* pd =
"filesrc location=some.mp4 ! qtdemux name=d "
"d.video_0 ! fakesink "
"d.audio_0 ! fakesink ";
GError* error = nullptr;
GstElement *pipeline = gst_parse_launch(pd, &error);
GstState state; GstState pending;
switch(gst_element_set_state(pipeline, GST_STATE_PAUSED)) {
case GST_STATE_CHANGE_FAILURE:
case GST_STATE_CHANGE_NO_PREROLL:
return -1;
case GST_STATE_CHANGE_ASYNC: {
gst_element_get_state(pipeline, &state, &pending, GST_CLOCK_TIME_NONE);
}
case GST_STATE_CHANGE_SUCCESS:
break;
}
GMainLoop* loop = g_main_loop_new(nullptr, false);
g_main_loop_run(loop);
gst_object_unref(pipeline);
return 0;
}
The problem is that when I run this code, it hangs on
gst_element_get_state(pipeline, &state, &pending, GST_CLOCK_TIME_NONE);
The question is: why does it hang? Especially considering that if I remove d.audio_0 ! fakesink from the pipeline description, it doesn't hang.
It is good practice to always add queues (or a multiqueue) after elements that produce multiple output branches in the pipeline, e.g. demuxers.
The reason is that sinks block waiting for the other sinks to receive their first buffer (preroll). With a single thread, as in your code, that blocks the only thread available to push data into the sinks: one thread runs from the demuxer to both sinks, and once one sink blocks, there is no way for data to arrive at the second one.
Using queues spawns new threads, so each sink gets a dedicated one.
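Applied to the pipeline in the question, that means putting a queue on each demuxer branch, e.g.:
const gchar* pd =
    "filesrc location=some.mp4 ! qtdemux name=d "
    "d.video_0 ! queue ! fakesink "
    "d.audio_0 ! queue ! fakesink ";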
That's quite an old thread, but it probably hangs because you pass an infinite timeout (GST_CLOCK_TIME_NONE).
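A finite timeout at least turns the hang into a diagnosable failure. A sketch of the GST_STATE_CHANGE_ASYNC case with an arbitrary 5-second limit:
case GST_STATE_CHANGE_ASYNC: {
    // returns GST_STATE_CHANGE_ASYNC again if the preroll is still not
    // complete after 5 seconds, instead of blocking forever
    if (gst_element_get_state(pipeline, &state, &pending, 5 * GST_SECOND)
            != GST_STATE_CHANGE_SUCCESS)
        return -1;
}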

Gstreamer Elements not linking

I am new to GStreamer and I have a question about why my elements will not link together. Here is my code:
CustomData data;
data.videosource = gst_element_factory_make("uridecodebin", "source");
cout << "Created source element " << data.videosource << endl;
data.demuxer = gst_element_factory_make("qtdemux", "demuxer");
cout << "Created demux element " << data.demuxer << endl;
data.decoder = gst_element_factory_make("ffdec_h264", "video-decoder");
cout << "Went to the video path " << data.decoder << endl;
data.videoconvert = gst_element_factory_make("ffmpegcolorspace", "convert");
cout << "Created convert element " << data.videoconvert << endl;
data.videosink = gst_element_factory_make("autovideosink", "sink");
cout << "Created sink element " << data.videosink << endl;
if (!data.videosource ||!data.demuxer || !data.decoder || !data.videoconvert || !data.videosink)
{
g_printerr ("Not all elements could be created.\n");
system("PAUSE");
return;
}
//Creating the pipeline
data.pipeline = gst_pipeline_new("video-pipeline");
if (!data.pipeline)
{
g_printerr ("Pipeline could not be created.");
}
//Setting up the object
g_object_set(data.videosource, "uri", videoFileName[camID] , NULL);
//videoFileName[camID] is a char** with the content uri=file:///C://videofiles/...mp4
//Adding elements to the pipeline
gst_bin_add_many(GST_BIN (data.pipeline), data.videosource, data.demuxer, data.decoder, data.videoconvert, data.videosink, NULL);
//This is where the issue occurs
if(!gst_element_link(data.videosource, data.demuxer)){
g_printerr("Elements could not be linked. \n");
system("PAUSE");
return;
}
What I am trying to do is break down an mp4 file and display only the video content, but for some reason when I try to link the source and demuxer, the link comes out as false.
Thank you guys so much!
Let's have a look at the pipeline you're using (I'll use gst-launch here for its brevity, but the same goes for any GStreamer pipelines):
gst-launch uridecodebin uri=file:///path/to/movie.avi \
! qtdemux ! ffdec_h264 ! ffmpegcolorspace \
! autovideosink
gst-inspect uridecodebin states:
Autoplug and decode an URI to raw media
So uridecodebin takes any audio/video source and decodes it by internally using some of GStreamer's other elements.
Its output is something like video/x-raw-rgb or audio/x-raw-int (raw audio/video)
qtdemux on the other hand takes a QuickTime stream (still encoded) and demuxes it.
But what it gets in your example is the already decoded raw video (which is why it won't link).
So, you've basically got two options:
just use uridecodebin
gst-launch uridecodebin uri=file:///path/to/movie.avi \
! autovideosink
which will allow your pipeline to decode pretty much any video file
just use the qtdemux ! ffdec_h264 ! ffmpegcolorspace elements:
gst-launch filesrc location=/path/to/movie.avi \
! qtdemux ! ffdec_h264 ! ffmpegcolorspace \
! autovideosink
Keep in mind however that your pipeline doesn't play audio.
To get that as well do one of the following:
Simply use playbin2
gst-launch playbin2 uri=file:///path/to/movie.avi
Connect your decodebin to an audio sink as well
gst-launch uridecodebin name=d uri=... ! autovideosink d. ! autoaudiosink
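If you want to keep building the pipeline in C rather than with gst-launch, note that uridecodebin (like all autopluggers) has no source pads at construction time; they appear dynamically, which is another reason the gst_element_link() call in the question returns false. A hedged sketch of linking from the "pad-added" signal, reusing the CustomData struct from the question:
static void on_pad_added(GstElement* src, GstPad* new_pad, gpointer user_data)
{
    CustomData* data = (CustomData*) user_data;
    GstPad* sinkpad = gst_element_get_static_pad(data->videosink, "sink");
    // A real handler should check the pad caps first; raw audio pads
    // will also arrive here and must not be linked to the video sink.
    if (!gst_pad_is_linked(sinkpad))
        gst_pad_link(new_pad, sinkpad);
    gst_object_unref(sinkpad);
}
/* after creating the elements and adding them to the pipeline: */
g_signal_connect(data.videosource, "pad-added", G_CALLBACK(on_pad_added), &data);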