Is it possible to set initial index of splitmuxsink? - gstreamer

I have set up GStreamer with a few pipelines (with the help of RidgeRun's gstd & gst-interpipe).
The first pipeline takes snapshots with multifilesink, using max-files, and can set a starting index=start_index.
The second pipeline records with splitmuxsink, using max-files & max-size-time.
GStreamer 1.10.4
gstd v.0.7.0
multifilesink name=snapshot_sink index=${start_index} max-files=20 location=pic_%04d.jpg
splitmuxsink name=rec_file_sink location=rec_%03d.mpg max-size-time=60000000000 send-keyframe-requests=true max-files=5 muxer=mpegtsmux
The problem is that if I restart GStreamer (respectively gstd), the indexes are reset.
If I start recording in the second pipeline, the index begins from 000.
I can set a starting index in the multifilesink pipeline, but I couldn't find the same for splitmuxsink.
Any ideas?

How about the start-index property?
https://gstreamer.freedesktop.org/documentation/multifile/splitmuxsink.html?gi-language=c#splitmuxsink:start-index
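If your GStreamer version is new enough to expose it (the 1.10.4 above may predate the property), usage would mirror the multifilesink line, e.g. (value illustrative):
splitmuxsink name=rec_file_sink location=rec_%03d.mpg start-index=${start_index} max-size-time=60000000000 send-keyframe-requests=true max-files=5 muxer=mpegtsmux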

I just ran into this issue myself, and I am afraid there is no way to do it using command-line parameters only.
However, for those who are not afraid of diving into the API and creating a GStreamer application, it is achievable using the 'format-location' signal (see the splitmuxsink documentation).
In C/C++, you may define the signal handler as follows:
static gchar* cb_FormatLocation(GstElement* splitmux, guint fragment_id, const int* offset)
{
    // Read the printf-style template from the element's "location" property.
    gchar* location;
    g_object_get(splitmux, "location", &location, nullptr);
    // Shift the fragment id by the offset passed as user data.
    gchar* fileName = g_strdup_printf(location, fragment_id + *offset);
    g_free(location);
    return fileName; // splitmuxsink takes ownership of the returned string
}
and, in the pipeline definition, all you need to do is compute an offset and pass it to g_signal_connect:
#include <filesystem>
...
GstElement* sink = gst_element_factory_make("splitmuxsink", "sink");
...
std::filesystem::path fileTemplate = "/path/to/folder/%04d.mp4";
// Skip past any fragments already on disk from a previous run.
int offset = 0;
for (;;) {
    gchar* candidate = g_strdup_printf(fileTemplate.c_str(), offset);
    bool exists = std::filesystem::exists(candidate);
    g_free(candidate); // don't leak one string per iteration
    if (!exists) break;
    ++offset;
}
g_object_set(sink, "location", fileTemplate.c_str(), nullptr);
g_signal_connect(sink, "format-location", G_CALLBACK(cb_FormatLocation), &offset);
Side note: make sure the offset variable is not destroyed before the application terminates.
It should be possible to achieve the same behaviour with the Python API.

Related

How to set a GstPlayer pipeline?

I have constructed a custom GStreamer pipeline that I will use to play RTSP streams. At the same time, I'd like to create a new GstPlayer to use this pipeline. The problem is that there isn't a way that I can see to set a GstPlayer's pipeline (the only related method is gst_player_get_pipeline()). I don't understand how there is no way to customize a pipeline for a GstPlayer. This seems like basic functionality, so I must be missing something.
My pipeline:
GstElement *pipeline, *source, *filter, *sink;

// Create pipeline elements
pipeline = gst_pipeline_new ("vdi-pipeline");
source = gst_element_factory_make ("rtspsrc", "vdi-source");
filter = gst_element_factory_make ("decodebin", "vdi-filter");
sink = gst_element_factory_make ("appsink", "vdi-sink");
if (!source || !filter || !sink)
{
    __android_log_print (ANDROID_LOG_ERROR, "Error", "A GstElement could not be created. Exiting.");
    return;
}
// Add elements to pipeline
gst_bin_add_many (GST_BIN (pipeline), source, filter, sink, NULL);
// Link elements together
if (!gst_element_link_many (source, filter, sink, NULL)) {
    __android_log_print (ANDROID_LOG_ERROR, "Warning", "Failed to link elements!");
}
But you can play RTSP via GstPlayer out of the box. Why do you want a custom pipeline?
The player uses playbin, which accepts any kind of URL and builds the pipeline dynamically according to what is being played.
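For reference, a minimal sketch of that out-of-the-box usage (the URL is illustrative; error handling omitted):
#include <gst/player/player.h>

/* Sketch: GstPlayer plays an RTSP URL with no custom pipeline. */
GstPlayer *player = gst_player_new (NULL, NULL);
gst_player_set_uri (player, "rtsp://example.com/stream");
gst_player_play (player);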
What about patching the player itself, if you really cannot use playbin? I don't think it is intended for custom pipelines, but you can hack it here.
You will then have to hook the new-pad and other callbacks on the rtspsrc instead of playbin, and other stuff - I guess you do not want this.
The other way is: when playbin constructs the pipeline, it uses rtspsrc inside - you can get this element from the pipeline object and change some parameters, but be careful, as changing parameters during playback is very tricky.
UPDATE:
Hm, I think I overlooked the appsink somehow. Well, I think you can set the playbin property audio-sink or video-sink to override it to use appsink, as sketched below.
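A minimal sketch of that idea, assuming you have a plain playbin to work with (getting one out of GstPlayer is the open question below):
/* Sketch: override playbin's video sink with an appsink.
 * Element names and URI are illustrative. */
GstElement *playbin = gst_element_factory_make ("playbin", "player");
GstElement *vsink = gst_element_factory_make ("appsink", "video-sink");
g_object_set (playbin, "uri", "rtsp://example.com/stream", NULL);
g_object_set (playbin, "video-sink", vsink, NULL);
gst_element_set_state (playbin, GST_STATE_PLAYING);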
But you will still have to somehow get the playbin element out of GstPlayer, or set the playbin parameter upon initialization (I don't know how) - in this case I would ask on IRC (freenode, #gstreamer) whether you are going in the right direction.
Maybe a better way would be to create your own application using decodebin or even playbin and pass it the appsink element. Why do you want to use GstPlayer if you are not playing but processing buffers?
HTH

Data Transfer through RTSP in Gstreamer

UPDATE:
I want to stream video data (H264) through RTSP in GStreamer.
gst_rtsp_media_factory_set_launch (factory, "videotestsrc ! x264enc ! rtph264pay name=pay0 pt=96 ");
I want the "videotestsrc ! x264enc ! rtph264pay name=pay0 pt=96" pipeline to be built in C code instead of that launch string.
Actually, I have a custom pipeline, and I want to pass that pipeline to GstRTSPMediaFactory.
With set_launch I am not able to pass my pipeline:
source = gst_element_factory_make("videotestsrc", "test-source");
parse = gst_element_factory_make("x264enc", "parse");
sink = gst_element_factory_make("rtph264pay", "sink");
gst_bin_add_many(GST_BIN(pipeline), source, parse, sink, NULL);
gst_element_link_many(source, parse, sink, NULL);
Now I want to stream this pipeline using RTSP. I can stream with gst_rtsp_media_factory_set_launch, but I want to pass only the pipeline variable and have it stream the video.
Is that possible? If so, how?
I modified rtsp-media-factory.c as follows:
Added GstElement *pipeline to struct _GstRTSPMediaFactoryPrivate, and added two more functions, get_pipeline and set_pipeline:
void
gst_rtsp_media_factory_set_launch_pipeline (GstRTSPMediaFactory * factory, GstElement * pipeline)
{
  g_print ("PRASANTH :: SET LAUNCH PIPELINE\n");
  GstRTSPMediaFactoryPrivate *priv;
  g_return_if_fail (GST_IS_RTSP_MEDIA_FACTORY (factory));
  g_return_if_fail (pipeline != NULL);
  priv = factory->priv;
  GST_RTSP_MEDIA_FACTORY_LOCK (factory);
  // g_free (priv->launch);
  priv->pipeline = pipeline;
  Bin = priv->pipeline;
  GST_RTSP_MEDIA_FACTORY_UNLOCK (factory);
}
The getter works the same way.
Finally, in place of gst_parse_launch in the function default_create_element, I added these lines:
element = priv->pipeline; // priv is of type GstRTSPMediaFactoryPrivate
return element;
but I am not able to receive the data.
When I name the payloader pay0 for rtpmp2tpay, it works.
But it works only once: if the client stops and starts again, it no longer works, and to make it work I have to restart the server.
What is the problem?
** (rtsp_server:4292): CRITICAL **: gst_rtsp_media_new: assertion 'GST_IS_ELEMENT (element)' failed
To have some answer here:
It solves the main problem according to the comments discussion, but there is still a problem with requesting another stream (when the client stops and starts again).
The solution was to add a proper name for the payloader element, as stated in the docs:
The pipeline description should contain elements named payN, one for each
stream (ex. pay0, pay1, ...). Also, for increased compatibility each stream
should have a different payload type which can be configured on the payloader.
So this has to be changed to:
sink = gst_element_factory_make("rtph264pay", "pay0");
notice the change in the element name from "sink" to "pay0".
For the stopping-client issue, I would check whether this also happens with the set_launch (parse) version.
If it works there, check whether the parsed pipeline string (in the original source code of the RTSP server) is saved anywhere and reused after restart; you need to debug this.

Gstreamer Appsink not getting Data from the Pipeline

I am designing a pipeline to encode video frames from an OpenCV application (captured from a webcam) to video/x-h264 format, send them over the network, and decode them on another device of a different type (probably a Raspberry Pi) to a proper RGB stream for my project.
For this I am supposed to use a hardware-accelerated encoder and decoder.
Since the whole scenario is huge, the current development is performed on an Intel machine using the GStreamer VAAPI plugins (vaapiencode_h264 & vaapidecode). Also, we must NOT use any of the networking plugins like TCPServer or UDPServer.
For this I have used the below pipeline:
On the encoder end:
appsrc name=applicationSource ! videoconvert ! video/x-raw, format=I420, width=640, height=480,framerate=30/1, pixel-aspect-ratio=1/1,interlace-mode=progressive ! vaapiencode_h264 bitrate=600 tune=high-compression ! h264parse config-interval=1 ! appsink name=applicationSink sync=false
The appsrc part works perfectly well, while the appsink part has some issue.
The appsink part of this pipeline has been set with the below caps:
"video/x-h264, format=(string){avc,avc3,byte-stream },alignment=(string){au,nal};video/mpeg, mpegversion=(int)2, profile=(string)simple"
The code for the data extraction from my appsink is:
bool HWEncoder::grabData()
{
    // initial checks..
    if (!cameraPipeline)
    {
        GST_ERROR("ERROR AS TO NO PIPE FOUND ... Stopping FRAME GRAB HERE !! ");
        return false;
    }
    if (gst_app_sink_is_eos (GST_APP_SINK(applicationSink)))
    {
        GST_WARNING("APP SINK GAVE US AN EOS! BAILING OUT ");
        return false;
    }
    if (sample)
    {
        cout << "sample available ... unrefing it ! " << endl;
        gst_sample_unref(sample);
    }
    sample = gst_app_sink_pull_sample (GST_APP_SINK(applicationSink));
    if (!sample)
    {
        GST_WARNING("No valid sample");
        return false; // no valid sample pulled !
    }
    sink_buffer = gst_sample_get_buffer(sample);
    if (!sink_buffer)
    {
        GST_ERROR("No Valid Buffer ");
        return false;
    }
    return true;
}
After bringing up the pipeline and checking for the buffer filling up in my appsink, I get stuck indefinitely at the below line of my code:
sample = gst_app_sink_pull_sample (GST_APP_SINK(applicationSink));
I have the following questions:
1) Are my caps for the appsink correct? If not, how can I determine the correct caps?
2) Is there something wrong in my pipeline above?
How can I fix this issue with appsink?
Any kind of help would be useful!
Thanks!!
Just a guess (I had similar problems): the problem with having appsink and appsrc in the same pipeline may be that when you fill/empty one of them, it blocks the other (more on that below).
appsink and appsrc will block when they are full/empty - this is normal, desired behaviour. There is the drop option for appsink, and for appsrc there is the block option - but these may just be workarounds and you will get glitches in your stream (a quick sketch of the appsink drop workaround follows below). The proper solution is to handle the synchronisation between appsrc and appsink in a better way.
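A sketch of that appsink-side workaround (drop and max-buffers are real appsink properties; the appsink variable is assumed to point at your appsink element):
/* Workaround sketch: let appsink drop old samples instead of blocking. */
g_object_set (appsink,
    "drop", TRUE,        /* discard the oldest sample when the queue is full */
    "max-buffers", 4,    /* bound the internal queue */
    NULL);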
You can react to the appsrc signals enough-data and need-data - this is our way (see the sketch after the snippet below). We also fiddled with the properties of appsrc: is-live, do-timestamp and buffer size (this may or may not help you):
g_object_set (src->appsrc,
    "stream-type", GST_APP_STREAM_TYPE_STREAM,
    "format", GST_FORMAT_TIME,
    "do-timestamp", TRUE,
    "is-live", TRUE,
    "block", TRUE,
    NULL);
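And a minimal sketch of reacting to need-data / enough-data; the feed_enabled flag and its use by your feeding thread are assumptions, not code from the question:
/* Sketch: gate the feeding thread on appsrc's flow-control signals. */
static volatile gboolean feed_enabled = FALSE;

static void on_need_data (GstElement *appsrc, guint length, gpointer user_data)
{
  feed_enabled = TRUE;   /* downstream wants more buffers */
}

static void on_enough_data (GstElement *appsrc, gpointer user_data)
{
  feed_enabled = FALSE;  /* stop pushing until need-data fires again */
}

/* ... after creating the appsrc: */
g_signal_connect (src->appsrc, "need-data", G_CALLBACK (on_need_data), NULL);
g_signal_connect (src->appsrc, "enough-data", G_CALLBACK (on_enough_data), NULL);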
Why do they block each other?
Because (I guess) you process appsink and appsrc in the same main application thread. When one of appsink/appsrc blocks the thread, there is no one left to handle the processing for the other one. So when appsink is blocked because it does not have any data, there is no one who can feed appsrc with new data - thus an endless deadlock.
We also implemented a non-blocking version of the appsink pull-sample method, but it was just a workaround and resulted in more problems than solutions.
If you want to debug what is happening, you can add a GST_DEBUG entry for appsrc/appsink (I do not remember the exact category names), you can add callbacks on the mentioned enough-data and need-data signals, or you may add queues and enable GST_DEBUG=queue_dataflow:5 to see which queue fills up first, etc. This is always helpful when debugging a "data deadlock".

How to check type of new added pad?

My pipeline scheme (dynamic linking):
videotestsrc OR audiotestsrc ! decodebin ! queue ! autovideosink OR autoaudiosink
I am trying to use this advice to check which type of data I got (video/audio), but if I use decodebin as a demuxer, I just get "src_0" instead of "audio" or "video". How can I check the pad type so I can link the right playback element? Maybe I can use one universal element for audio and video playback, like playsink (but it does not work for video)?
You can get the caps of the newly added pad and check whether they contain audio or video caps (or something else).
Try with:
gst_pad_get_current_caps (pad);
or:
gst_pad_get_allowed_caps (pad);
If you are using GStreamer 0.10 (which is 3+ years obsolete and unmaintained), you have:
gst_pad_get_caps_reffed (pad);
Then just check the returned caps for audio or video by getting the structure from the caps and checking whether its name starts with video or audio:
GstStructure *structure;
const gchar *name;

/* There might be multiple structures depending on how you do it,
 * but usually checking one in this case is enough */
structure = gst_caps_get_structure (caps, 0);
name = gst_structure_get_name (structure);
if (g_str_has_prefix (name, "video/")) {
  ...
} else if (g_str_has_prefix (name, "audio/")) {
  ...
}
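Putting it together, a sketch of a pad-added handler in the 1.x API (the actual branch linking is left as a stub):
static void
on_pad_added (GstElement *element, GstPad *pad, gpointer user_data)
{
  GstCaps *caps;
  const GstStructure *s;
  const gchar *name;

  /* Prefer the negotiated caps; fall back to a caps query. */
  caps = gst_pad_get_current_caps (pad);
  if (!caps)
    caps = gst_pad_query_caps (pad, NULL);

  s = gst_caps_get_structure (caps, 0);
  name = gst_structure_get_name (s);

  if (g_str_has_prefix (name, "video/")) {
    /* link pad to the video branch */
  } else if (g_str_has_prefix (name, "audio/")) {
    /* link pad to the audio branch */
  }

  gst_caps_unref (caps);
}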

dynamically replacing elements in a playing gstreamer pipeline

I'm looking for the correct technique, if one exists, for dynamically replacing an element in a running GStreamer pipeline. I have a GStreamer-based C++ app, and the pipeline it creates looks like this (using gst-launch syntax):
souphttpsrc location="http://localhost/local.ts" ! mpegtsdemux name=d ! queue ! mpeg2dec ! xvimagesink d. ! queue ! a52dec ! pulsesink
In the middle of playback (i.e. the pipeline state is GST_STATE_PLAYING and the user is happily watching video), I need to remove the souphttpsrc from the pipeline and create a new souphttpsrc, or even a new neonhttpsrc, then immediately add it back into the pipeline and continue playback of the same URI source stream at the same time position it had before we performed this operation. The user might see a small delay, and that is fine.
We've barely figured out how to remove and replace the source, and we need more understanding. Here's our best attempt thus far:
// Unlink and remove the old source
gst_element_unlink(source, demuxer);
gst_element_set_state(source, GST_STATE_NULL);
gst_bin_remove(GST_BIN(pipeline), source);
// Create and insert the replacement source
source = gst_element_factory_make("souphttpsrc", "src");
g_object_set(G_OBJECT(source), "location", url, NULL);
gst_bin_add(GST_BIN(pipeline), source);
gst_element_link(source, demuxer);
gst_element_sync_state_with_parent(source);
This doesn't work perfectly, because the new source plays back from the beginning and the rest of the pipeline waits for the correctly timestamped buffers (I assume), so playback only picks back up after several seconds. I tried seeking the source in multiple ways, but nothing has worked.
I need to know the correct way to do this. It would also be nice to know a general technique, if one exists, in case we wanted to dynamically replace the decoder or some other element.
Thanks
I think this may be what you are looking for:
http://cgit.freedesktop.org/gstreamer/gstreamer/tree/docs/design/part-block.txt
(starting at line 115)
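In the 1.x API, the technique in that design doc maps to a blocking pad probe. A sketch of the general element-swap pattern (element and variable names are illustrative, not from your pipeline; for replacing the source itself you would block the source's own src pad):
/* Sketch (1.x API): swap a mid-pipeline element by blocking dataflow
 * upstream of it, relinking inside the probe callback, then removing
 * the probe. "dec" and "vsink" are illustrative element names. */
static GstPadProbeReturn
swap_cb (GstPad *srcpad, GstPadProbeInfo *info, gpointer user_data)
{
  GstElement *pipeline = GST_ELEMENT (user_data);
  GstElement *old_dec = gst_bin_get_by_name (GST_BIN (pipeline), "dec");
  GstElement *sink = gst_bin_get_by_name (GST_BIN (pipeline), "vsink");
  GstElement *upstream = gst_pad_get_parent_element (srcpad);
  GstElement *new_dec;

  /* Dataflow on srcpad is blocked here, so relinking is safe. */
  gst_element_set_state (old_dec, GST_STATE_NULL);
  gst_bin_remove (GST_BIN (pipeline), old_dec);  /* also unlinks its pads */

  new_dec = gst_element_factory_make ("mpeg2dec", "dec");
  gst_bin_add (GST_BIN (pipeline), new_dec);
  gst_element_link_many (upstream, new_dec, sink, NULL);
  gst_element_sync_state_with_parent (new_dec);

  gst_object_unref (old_dec);
  gst_object_unref (sink);
  gst_object_unref (upstream);
  return GST_PAD_PROBE_REMOVE;  /* unblock and drop the probe */
}

/* Install a blocking probe on the pad feeding the element to replace: */
GstPad *srcpad = gst_element_get_static_pad (queue, "src");
gst_pad_add_probe (srcpad, GST_PAD_PROBE_TYPE_BLOCK_DOWNSTREAM,
    swap_cb, pipeline, NULL);
gst_object_unref (srcpad);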