GStreamer pipeline hangs on gst_element_get_state - c++

I have the following very basic code using the GStreamer library (GStreamer v1.8.1 on Xubuntu 16.04, in case it matters):
#include <gst/gst.h>

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);
    const gchar* pd =
        "filesrc location=some.mp4 ! qtdemux name=d "
        "d.video_0 ! fakesink "
        "d.audio_0 ! fakesink ";
    GError* error = nullptr;
    GstElement *pipeline = gst_parse_launch(pd, &error);
    GstState state;
    GstState pending;
    switch (gst_element_set_state(pipeline, GST_STATE_PAUSED)) {
    case GST_STATE_CHANGE_FAILURE:
    case GST_STATE_CHANGE_NO_PREROLL:
        return -1;
    case GST_STATE_CHANGE_ASYNC:
        gst_element_get_state(pipeline, &state, &pending, GST_CLOCK_TIME_NONE);
    case GST_STATE_CHANGE_SUCCESS:
        break;
    }
    GMainLoop* loop = g_main_loop_new(nullptr, false);
    g_main_loop_run(loop);
    gst_object_unref(pipeline);
    return 0;
}
The problem is that when I run this code it hangs on
gst_element_get_state(pipeline, &state, &pending, GST_CLOCK_TIME_NONE);
The question is: why does it hang? Especially given that if I remove d.audio_0 ! fakesink from the pipeline description, it doesn't hang.

It is good practice to always add queues (or a multiqueue) after elements that produce multiple output branches in the pipeline, e.g. demuxers.
The reason is that sinks block while waiting for the other sinks to receive their first buffer (preroll). With a single thread, as in your code, that blocks the only thread available to push data into the sinks: one streaming thread runs from the demuxer to both sinks, and once the first sink blocks there is no way for data to reach the second one.
Adding queues spawns new streaming threads, so each sink gets a dedicated one.
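For the pipeline in the question, a minimal sketch of that fix is to give each demuxer branch its own queue:

const gchar* pd =
    "filesrc location=some.mp4 ! qtdemux name=d "
    "d.video_0 ! queue ! fakesink "
    "d.audio_0 ! queue ! fakesink ";

With that change each branch gets its own streaming thread, so prerolling one sink can no longer starve the other.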

That's quite an old thread, but it probably hangs because you passed an infinite timeout (GST_CLOCK_TIME_NONE).
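A hedged sketch of a bounded wait instead (the 5-second value is arbitrary):

GstState state, pending;
GstStateChangeReturn ret =
    gst_element_get_state(pipeline, &state, &pending, 5 * GST_SECOND);
if (ret == GST_STATE_CHANGE_ASYNC)
    g_printerr("still prerolling after 5 s - a branch is probably stalled\n");

This doesn't fix the underlying preroll deadlock, but it turns the hang into a diagnosable timeout.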

Related

GStreamer rtspsrc stops working once 32 streams have been created

I have a device running embedded Linux that can show RTSP streams from a camera. The user can change the stream from a windowed stream to a full-screen stream, and vice versa. If the stream is changed 32 times, the stream stops working. I have possibly narrowed the problem down to the rtspsrc itself.
My question is: how does one clear the memory for the gst "stuff" without restarting the program?
If I use gst-launch-1.0 with the pipeline, it works for more than 32 restarts, because the program is killed every time.
However, if I run my program and increase the rtspsrc count to 31 (by switching between the two streams), and then run gst-launch-1.0 with an rtsp pipeline, the stream does not show up! It appears that the rtspsrc count will not reset to 0 until every program using gst is killed.
I enabled debugging the rtspsrc:
export GST_DEBUG="rtspsrc:6"
Lots of log messages are shown each time the stream is started. They print rtspsrcX, where X increases even though the previous stream was stopped:
First run log print:
rtspsrc gstrtspsrc.c:8834:gst_rtspsrc_print_sdp_media:<rtspsrc0> RTSP response message
Second run:
rtspsrc gstrtspsrc.c:8855:gst_rtspsrc_print_sdp_media:<rtspsrc1> RTSP response message
Continue stopping/starting the stream, and the number increases up to 31, at which point the stream no longer shows up:
rtspsrc gstrtspsrc.c:8855:gst_rtspsrc_print_sdp_media:<rtspsrc31> RTSP response message
I'm not sure how to "reset" the stream each time the user stops it. It seems that gst can't release memory unless I kill the whole program (all programs using gst).
I have tried creating a new context each time the stream is re-started, but this doesn't help.
When I call gst_is_initialized each subsequent time, it returns true.
The main loop is stopped by calling the following from another thread:
g_main_loop_quit(loop_);
The video feeds are controlled with the following:
GMainLoop *loop_;
pipeline = "rtspsrc location=rtsp://192.168.0.243/0 latency=0 ! rtph264depay ! h264parse ! imxvpudec ! imxipuvideosink window-width=512 window-height=384 sync=false"
or
pipeline = "rtspsrc location=rtsp://192.168.0.243/0 latency=0 ! rtph264depay ! h264parse ! imxvpudec ! imxipuvideosink window-width=1024 window-height=768 sync=false"
void stream_video(std::string pipeline)
{
    GMainContext* context;
    GstElement *pipelineElement;
    GstBus *bus = NULL;
    guint bus_watch_id = 0;
    GstState state;
    try
    {
        if(!gst_is_initialized())
        {
            std::cout << "GST is not initialized - initializing " << pipeline.c_str();
            gst_init_check(nullptr, nullptr, nullptr);
        }
        // Creating a new context to see if the camera can be started more than
        // 32 times, but the rtspsrc count still increases when debugging
        context = g_main_context_new();
        loop_ = g_main_loop_new(context, FALSE);
        pipelineElement = gst_parse_launch(pipeline.c_str(), NULL);
        bus = gst_pipeline_get_bus(GST_PIPELINE(pipelineElement));
        bus_watch_id = gst_bus_add_watch(bus, bus_call, loop_);
        gst_object_unref(bus);
        bus = NULL;
        gst_element_set_state(pipelineElement, GST_STATE_READY);
        gst_element_set_state(pipelineElement, GST_STATE_PAUSED);
        gst_element_set_state(pipelineElement, GST_STATE_PLAYING);
        if (gst_element_get_state(pipelineElement, &state, NULL, 2*GST_SECOND) == GST_STATE_CHANGE_FAILURE)
        {
            std::cout << "gst: Failed to change states. State:" << state << " ID: " << stream_id_;
        }
        else
        {
            std::cout << "gst: Running..." << " ID: " << stream_id_ << " State:" << state << " Loop:" << loop_;
            g_main_loop_run(loop_); // blocks until loop_ exits (EOS, error, stop request)
        }
        gst_element_set_state(pipelineElement, GST_STATE_PAUSED);
        gst_element_set_state(pipelineElement, GST_STATE_READY);
        // Only certain state transitions are allowed, see
        // https://gstreamer.freedesktop.org/documentation/additional/design/states.html?gi-language=c
        gst_element_set_state(pipelineElement, GST_STATE_NULL);
        g_source_remove(bus_watch_id);
        std::cout << "gst: Removing pipelineElement " << pipelineElement;
        gst_object_unref(GST_OBJECT(pipelineElement));
        pipelineElement = NULL;
        g_main_context_unref(context);
        context = NULL;
        g_main_loop_unref(loop_);
        loop_ = nullptr;
        std::cout << "gst: Deleted pipeline" << " ID: " << stream_id_ << " State: " << state;
    }
    catch(const std::exception& e)
    {
        std::cout << "Error Caught: stream_video " << e.what();
    }
    return;
}
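One detail worth checking in the code above: gst_bus_add_watch() attaches the watch to the default main context, not to the newly created context, so the watch and the loop can end up on different contexts. A sketch of attaching the watch to the custom context instead (bus_call and loop_ as in the question):

GSource *bus_source = gst_bus_create_watch(bus);
g_source_set_callback(bus_source, (GSourceFunc) bus_call, loop_, NULL);
g_source_attach(bus_source, context);   // instead of gst_bus_add_watch()
// ... later, instead of g_source_remove(bus_watch_id):
g_source_destroy(bus_source);
g_source_unref(bus_source);

This probably isn't the cause of the rtspsrcX counter itself (element names are numbered per process and simply keep counting), but it matters for clean teardown with a non-default context.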

How to use splitmuxsink in a dynamic pipeline

What is the correct way to use splitmuxsink in a dynamic pipeline?
Previously I used filesink to record (no problem whatsoever), but there is a requirement to save the file in segments, so I tried to use splitmuxsink in a dynamic pipeline (recording starts and stops asynchronously at runtime). In doing so I faced two problems.
When I try to stop the recording, I use an idle pad probe to block the recording queue and launch a callback that delinks the recording branch (send EOS, set the elements in the recording bin to NULL, then remove the bin). I set a downstream event probe to be notified that the EOS has reached the splitmuxsink sink pad before doing step 2 (setting the elements to NULL).
However, the end result is that I still have an empty (0-byte) last file. It seems the file is not closed yet, or something else goes wrong. As a workaround I split the video immediately when the recording stops (though I lose a few frames).
How should one stop a dynamic branch?
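For reference, a hedged sketch of the usual stop sequence, using the names from the code below (splitmuxsink should finalize the current fragment when the EOS reaches it, so the bin must only go to NULL after that):

static GstPadProbeReturn
stop_record_cb (GstPad *tee_src, GstPadProbeInfo *info, gpointer user_data)
{
    gst_pad_remove_probe (tee_src, GST_PAD_PROBE_INFO_ID (info));
    gst_pad_unlink (tee_src, records.ghost_pad);
    // push EOS into the branch so splitmuxsink can close the file
    gst_pad_send_event (records.ghost_pad, gst_event_new_eos ());
    // (the tee request pad should also be released eventually)
    return GST_PAD_PROBE_OK;
}
// installed with:
gst_pad_add_probe (tee_src_pad, GST_PAD_PROBE_TYPE_IDLE, stop_record_cb, NULL, NULL);
// and only once the EOS has travelled through the branch (your downstream
// probe, or the splitmuxsink-fragment-closed bus message on newer versions):
gst_element_set_state (records.recording, GST_STATE_NULL);
gst_bin_remove (GST_BIN (pipeline), records.recording);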
The second problem: I create the recording bin at the moment the recording starts (using the pad-added signal, so the recording bin is connected when the requested pad is created). Previously I created the recording bin in the normal startup sequence (not from within the GLib loop I created), and that worked fine; but with the present approach splitmuxsink's internal filesink ends up in a locked state.
How do I work around this? What causes the locked state?
Here is my code
/// create record bin
static void
pad_added(GstElement * self,
          GstPad * new_pad,
          gpointer user_data)
{
    gchar* pad_name = gst_pad_get_name(new_pad);
    if(g_str_has_prefix(pad_name, "src")) // tee request pads are named src_%u
    {
        //RECORD records;
        records.recording = gst_bin_new("recording");
        records.queue = gst_element_factory_make("queue", "queue");
        records.enc = gst_element_factory_make("vpuenc_h264", "enc");
        records.parser = gst_element_factory_make("h264parse", "parser");
        records.sink = gst_element_factory_make("splitmuxsink", "sink");
        // Add the elements to the recording bin
        gst_bin_add_many(GST_BIN(records.recording),
                         records.queue,
                         records.enc,
                         records.parser,
                         records.sink, NULL);
        // link up the recording elements
        gst_element_link_many(records.queue,
                              records.enc,
                              records.parser,
                              records.sink, NULL);
        g_object_set(G_OBJECT(records.sink),
                     //"location", "video_%d.mp4",
                     "max-size-time", (guint64) 10L * GST_SECOND,
                     "async-handling", TRUE,
                     "async-finalize", TRUE,
                     NULL);
        records.queue_sink_pad = gst_element_get_static_pad(records.queue, "sink");
        records.ghost_pad = gst_ghost_pad_new("sink", records.queue_sink_pad);
        gst_pad_set_active(records.ghost_pad, TRUE);
        gst_element_add_pad(GST_ELEMENT(records.recording), records.ghost_pad);
        g_signal_connect(records.sink, "format-location",
                         (GCallback)format_location_callback,
                         &records);
    }
    g_free(pad_name);
}
gboolean cmd_loop()
{
    // other commands not shown here
    if(RECORD)
    {
        // request a tee src pad;
        // this step triggers the pad_added callback
        tee_src_pad = gst_element_get_request_pad(tee, "src_%u");
        // ... other functions
    }
}
int main()
{
    // respond to the pad-added signal
    g_signal_connect(tee, "pad-added", G_CALLBACK(pad_added), NULL);
    // construct the command loop (cycles every 1s)
    GSource* source = g_timeout_source_new(1000);
    // set the function that watches for commands
    // (the source must still be attached to a main context and the loop run)
    g_source_set_callback(source,
                          (GSourceFunc)cmd_loop,
                          NULL,
                          NULL);
}
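One likely cause of the locked filesink in the second problem: a bin created from a pad-added handler while the pipeline is already PLAYING stays in the NULL state unless it is explicitly brought up. The snippet above never shows the recording bin being added to the pipeline and synced, so the missing step would look something like this (the pipeline variable is an assumption):

gst_bin_add (GST_BIN (pipeline), records.recording);
gst_pad_link (tee_src_pad, records.ghost_pad);
// bring the freshly added branch up to the pipeline's current state
gst_element_sync_state_with_parent (records.recording);

Without the sync call the branch, including splitmuxsink's internal filesink, never leaves NULL, which matches the "locked state" symptom.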

Assisted-autoplugging (cutting) of uridecodebin

bool demuxDone = false;

gboolean
autopluggerCallback (GstElement * elem, GstPad *pad, GstCaps * caps, gpointer udata)
{
    if (cmpType(caps, "video/x-h264")) {   // cmpType: helper comparing the caps structure name
        relayVideoPad = pad;
        demuxDone = true;
    }
    if (cmpType(caps, "audio/x-ac3")) {
        relayAudioPad = pad;
        demuxDone = true;
    }
    if (demuxDone) {
        return FALSE;   // stop autoplugging past this pad
    }
    return TRUE;        // let uridecodebin keep going
}
I connected the autoplug-continue signal handler to uridecodebin. My goal is to prevent it from creating anything after the tsdemux and then connect video/audio to flvmux.
But the problem I am having is that one more element is still created: the multiqueue connected right after tsdemux0. Why? I tried to detect the creation of the demuxer by catching the element-added signal instead of waiting for video/x-h264, but the result is the same.
The resulting pipeline is dumped to dot:
http://pastebin.com/acBUdfpi
Well, I can probably just connect the multiqueue to the flvmux, but then I do not know how to get the multiqueue pointer. I tried gst_pad_get_peer() followed by gst_pad_get_parent_element() (to go from the demuxer's video src pad to the next element), but gst_pad_get_parent_element() returns 0 even though the peer is non-0.
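For the record, the function is gst_pad_get_parent_element(), and it returns NULL whenever the pad's parent is not an element, e.g. when the peer is the internal proxy pad of a ghost pad. A hedged sketch of the lookup, covering that case:

GstPad *peer = gst_pad_get_peer (relayVideoPad);   // the multiqueue sink pad, hopefully
if (peer != NULL) {
    GstElement *next = gst_pad_get_parent_element (peer);  // returns a ref, or NULL
    if (next != NULL) {
        g_print ("peer element: %s\n", GST_ELEMENT_NAME (next));
        gst_object_unref (next);
    } else if (GST_IS_PROXY_PAD (peer)) {
        // the pad is linked to a ghost pad, not directly to an element
    }
    gst_object_unref (peer);
}

Also note that inside autoplug-continue the pad may not be linked yet, which would make gst_pad_get_peer() itself return NULL; doing the lookup later (e.g. from a pad-added handler) avoids that.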

mux klv data with h264 by mpegtsmux

I need to mux klv metadata into an h264 stream. I have created an application, but the stream plays only as long as klv data is being inserted; when I stop pushing klv data, the whole stream stops. What is the right way to mux asynchronous klv data with mpegtsmux?
The klv data needs to be inserted into the following working pipeline:
v4l2src input-src=Camera ! videorate drop-only=true ! 'video/x-raw, format=(string)NV12, width=1920, height=1088, framerate=25/1' ! ce_h264enc target-bitrate=6000000 idrinterval=25 intraframe-interval=60 ! queue ! mpegtsmux alignment=7 ! udpsink host=192.168.0.1 port=3000 -v
This pipeline is constructed in the application. To insert the klv metadata, an appsrc is created:
appSrc = gst_element_factory_make("appsrc", nullptr);
gst_app_src_set_caps (GST_APP_SRC (appSrc), gst_caps_new_simple("meta/x-klv", "parsed", G_TYPE_BOOLEAN, TRUE, "sparse", G_TYPE_BOOLEAN, TRUE, nullptr));
g_object_set(appSrc, "format", GST_FORMAT_TIME, nullptr);
Then appsrc is linked to the pipeline:
gst_bin_add(GST_BIN(pipeline), appSrc);
gst_element_link(appSrc, mpegtsmux);
Here is push function:
void AppSrc::pushData(const std::string &data)
{
    GstBuffer *buffer = gst_buffer_new_allocate(nullptr, data.size(), nullptr);
    GstMapInfo map;
    GstClock *clock;
    GstClockTime abs_time, base_time;

    gst_buffer_map(buffer, &map, GST_MAP_WRITE);
    memcpy(map.data, data.data(), data.size());
    gst_buffer_unmap(buffer, &map);

    // timestamp the buffer with the pipeline's current running time
    GST_OBJECT_LOCK(element); // element is the appsrc
    clock = GST_ELEMENT_CLOCK(element);
    base_time = GST_ELEMENT(element)->base_time;
    gst_object_ref(clock);
    GST_OBJECT_UNLOCK(element);
    abs_time = gst_clock_get_time(clock);
    gst_object_unref(clock);

    GST_BUFFER_PTS(buffer) = abs_time - base_time;
    GST_BUFFER_DURATION(buffer) = gst_util_uint64_scale_int(1, GST_SECOND, 1); // 1 second
    gst_app_src_push_buffer(GST_APP_SRC(element), buffer);
}
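As an aside, the manual clock arithmetic above re-implements something appsrc can do by itself via the do-timestamp property it inherits from basesrc; a possibly simpler variant (untested sketch):

g_object_set(appSrc, "format", GST_FORMAT_TIME, "do-timestamp", TRUE, nullptr);
// then push buffers without setting PTS; appsrc stamps each one with the
// running time at the moment it is pushed
gst_app_src_push_buffer(GST_APP_SRC(appSrc), buffer);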
GStreamer version is 1.6.1.
What can be wrong with my code? I'd appreciate your help.
I can push dummy klv packets to keep the video stream alive, but I don't want to pollute the outgoing stream, and I am sure there must be a more delicate solution.
I have found that I can send an event with GST_STREAM_FLAG_SPARSE, which should be appropriate for subtitles, but as a result I have no output at all.
GstEvent* stream_start = gst_event_new_stream_start("klv-04");
gst_event_set_stream_flags(stream_start, GST_STREAM_FLAG_SPARSE);
GstPad* pad = gst_element_get_static_pad(GST_ELEMENT(element), "src");
gst_pad_push_event (pad, stream_start);
While debugging I found that after applying the following patch to GStreamer and using GST_STREAM_FLAG_SPARSE, the stream doesn't stop when the appsrc stops pushing packets.
diff --git a/libs/gst/base/gstcollectpads.c b/libs/gst/base/gstcollectpads.c
index 8edfe41..14f9926 100644
--- a/libs/gst/base/gstcollectpads.c
+++ b/libs/gst/base/gstcollectpads.c
@@ -1440,7 +1440,8 @@ gst_collect_pads_recalculate_waiting (GstCollectPads * pads)
     if (!GST_COLLECT_PADS_STATE_IS_SET (data, GST_COLLECT_PADS_STATE_WAITING)) {
       /* start waiting */
       gst_collect_pads_set_waiting (pads, data, TRUE);
-      result = TRUE;
+      if (!GST_COLLECT_PADS_STATE_IS_SET (data, GST_COLLECT_PADS_STATE_LOCKED))
+        result = TRUE;
     }
   }
 }
Anyway, the receiver stops updating the screen 10 seconds after the last klv packet.
This is a bit of an old thread, but in my experience, if there is no queue between the appsrc and the muxer, you get exactly this behavior. I would change:
gst_element_link(appSrc, mpegtsmux);
to this (note that appSrcQueue is a new queue element that must be created and added to the bin first):
GstElement *appSrcQueue = gst_element_factory_make("queue", nullptr);
gst_bin_add(GST_BIN(pipeline), appSrcQueue);
gst_element_link(appSrc, appSrcQueue);
gst_element_link(appSrcQueue, mpegtsmux);
I'm not sure whether mpegtsmux has this capability, but the muxer we used has a property called do-timestamping, and setting it to TRUE gave us a better experience.
Another tip: use the gst-inspect tool to see what properties each element has.
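For example (1.x command line; the element must be installed for this to work):

gst-inspect-1.0 mpegtsmux

lists all the properties, pad templates, and capabilities of the muxer.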

Gstreamer. Write appsink to filesink

I have written code that goes from appsrc to appsink and it works; I can see the actual buffers. The stream is encoded in H264 (vpuenc codec=avc). Now I want to save it to a file (filesink) instead. How do I approach that?
app:
int main(int argc, char *argv[])
{
    gst_init (NULL, NULL);

    GstElement *pipeline, *sink;
    gchar *descr;
    GError *error = NULL;
    GstAppSink *appsink;

    descr = g_strdup_printf (
        "mfw_v4lsrc device=/dev/video1 capture_mode=0 ! " // grab from mipi camera
        "ffmpegcolorspace ! vpuenc codec=avc ! "
        "appsink name=sink"
    );
    pipeline = gst_parse_launch (descr, &error);
    if (error != NULL) {
        g_print ("could not construct pipeline: %s\n", error->message);
        g_error_free (error);
        exit (-1);
    }
    gst_element_set_state (pipeline, GST_STATE_PAUSED);

    sink = gst_bin_get_by_name (GST_BIN (pipeline), "sink");
    appsink = (GstAppSink *) sink;
    gst_app_sink_set_max_buffers (appsink, 2); // limit number of buffers queued
    gst_app_sink_set_drop (appsink, true);     // drop old buffers in queue when full

    gst_element_set_state (pipeline, GST_STATE_PLAYING);
    while (!gst_app_sink_is_eos (appsink))
    {
        // GStreamer 0.10 API; 1.0 uses gst_app_sink_pull_sample() instead
        GstBuffer *buffer = gst_app_sink_pull_buffer (appsink);
        uint8_t* data = (uint8_t*) GST_BUFFER_DATA (buffer);
        uint32_t size = GST_BUFFER_SIZE (buffer);
        gst_buffer_unref (buffer);
    }
    return 0;
}
If, as mentioned in the comments, what you actually want to know is how to send a video stream over the network in GStreamer, you should probably close this question because you're on the wrong path: you don't need an appsink or filesink for that. What you'll want to investigate are the GStreamer elements related to RTP, RTSP, RTMP, MPEG-TS, or even MJPEG (if your image size is small enough).
Here are two basic send/receive video stream pipelines:
gst-launch-0.10 v4l2src ! ffmpegcolorspace ! videoscale ! video/x-raw-yuv,width=640,height=480 ! vpuenc ! h264parse ! rtph264pay ! udpsink host=localhost port=5555
gst-launch-0.10 udpsrc port=5555 ! application/x-rtp,encoding-name=H264,payload=96 ! rtph264depay ! h264parse ! ffdec_h264 ! videoconvert ! ximagesink
In this situation you don't write your own while loop. You register callbacks and wait for buffers (GStreamer 0.10) to arrive. If you're using GStreamer 1.0, you use samples instead of buffers. Samples are a huge pain in the ass compared to buffers but oh well.
Register the callback:
GstAppSinkCallbacks* appsink_callbacks = (GstAppSinkCallbacks*)malloc(sizeof(GstAppSinkCallbacks));
appsink_callbacks->eos = NULL;
appsink_callbacks->new_preroll = NULL;
appsink_callbacks->new_sample = app_sink_new_sample;
gst_app_sink_set_callbacks(GST_APP_SINK(appsink), appsink_callbacks, (gpointer)pointer_to_data_passed_to_the_callback, free);
And your callback:
GstFlowReturn app_sink_new_sample(GstAppSink *sink, gpointer user_data) {
    prog_data* pd = (prog_data*)user_data;

    GstSample* sample = gst_app_sink_pull_sample(sink);
    if(sample == NULL) {
        return GST_FLOW_ERROR;
    }

    GstBuffer* buffer = gst_sample_get_buffer(sample);
    GstMemory* memory = gst_buffer_get_all_memory(buffer);
    GstMapInfo map_info;

    if(!gst_memory_map(memory, &map_info, GST_MAP_READ)) {
        gst_memory_unref(memory);
        gst_sample_unref(sample);
        return GST_FLOW_ERROR;
    }

    //render using map_info.data

    gst_memory_unmap(memory, &map_info);
    gst_memory_unref(memory);
    gst_sample_unref(sample);

    return GST_FLOW_OK;
}
You can keep your while loop as it is--using gst_app_sink_is_eos()--but make sure to put a sleep in it. Most of the time I use something like the following instead:
GMainLoop* loop = g_main_loop_new(NULL, FALSE);
g_main_loop_run(loop);
g_main_loop_unref(loop);
Note: unless you need to do something special with the data, you can use the filesink element directly.
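For example, a hedged variant of the capture pipeline from the question that records straight to disk (matroskamux and the output path are assumptions, untested on the i.MX platform):

gst-launch-0.10 mfw_v4lsrc device=/dev/video1 capture_mode=0 ! ffmpegcolorspace ! vpuenc codec=avc ! matroskamux ! filesink location=out.mkv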
A simpler option would be to write to the file directly in the appsink itself: when the sample callback fires, write the buffer contents to the file, and make sure you close the file on EOS.
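A minimal sketch of that approach, as a variant of the 1.0 callback above (the FILE* handling and the out_file name are assumptions; the result is a raw H.264 elementary stream, so it is only directly playable if the encoder emits byte-stream format):

static FILE *out_file; // fopen("out.h264", "wb") before PLAYING, fclose() on EOS

GstFlowReturn write_sample_cb(GstAppSink *sink, gpointer user_data) {
    GstSample *sample = gst_app_sink_pull_sample(sink);
    if (sample == NULL)
        return GST_FLOW_ERROR;
    GstBuffer *buffer = gst_sample_get_buffer(sample);
    GstMapInfo map;
    if (gst_buffer_map(buffer, &map, GST_MAP_READ)) {
        fwrite(map.data, 1, map.size, out_file); // append the encoded frame
        gst_buffer_unmap(buffer, &map);
    }
    gst_sample_unref(sample);
    return GST_FLOW_OK;
}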
Hope that helps.