GStreamer: Usage of g_signal_emit_by_name

I am developing an application that plays an H264 dump using GStreamer.
The pipeline is: appsrc - h264parse - ffdec_h264 - ffmpegcolorspace - deinterlace - autovideosink
Data currently flows in PULL mode from appsrc, using the need-data signal.
I want to verify the same application using PUSH mode from the application. The documentation says we need to invoke the 'push-buffer' signal and send the buffers.
My code snippet is:
gst_app_src_set_emit_signals(source, TRUE);
g_signal_connect (source, "push-buffer", G_CALLBACK (start_feed), source);
Though the pipeline is created, I never get any callbacks to start_feed().
Can anyone tell me what exactly needs to be done for 'PUSH' mode operation of appsrc?

According to the documentation:
Make appsrc emit the "new-preroll" and "new-buffer" signals. This option is by default disabled because signal emission is expensive and unneeded when the application prefers to operate in pull mode.
So you could try adding a "new-buffer" signal handler. "push-buffer", on the other hand, is an action signal: attaching a handler to it won't do anything, because it is something you emit yourself when you have data, not something that calls a callback.
Depending on what your start_feed does, you may also be looking for the "need-data" signal, which appsrc emits when the pipeline needs more data.
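A minimal sketch of what push mode looks like under those rules, assuming the GStreamer 1.x API (on 0.10 the buffer allocation call differs) and a hypothetical fill step:

/* Push mode: emit the "push-buffer" action signal whenever data is ready. */
static gboolean
feed_one_buffer (GstElement *appsrc)
{
  GstFlowReturn ret;
  GstBuffer *buffer = gst_buffer_new_allocate (NULL, 4096, NULL);

  /* ... fill the buffer with H264 data here ... */

  g_signal_emit_by_name (appsrc, "push-buffer", buffer, &ret);
  gst_buffer_unref (buffer);  /* the action signal takes its own reference */

  return ret == GST_FLOW_OK;
}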

Related

play multiple sources at once, each from a different position

I would like to use GStreamer to play multiple sources (for instance, two video files) simultaneously using a single pipeline, but with each video starting from a different position: for instance, the first video from the beginning and the second from the middle. Could someone guide me on how to achieve this?
Simplifying, my pipeline is an equivalent of:
gst-launch-1.0 \
uridecodebin uri=file:///Users/tmikolaj/Downloads/videoalpha_video_dancer1.webm ! videoconvert ! autovideosink \
uridecodebin uri=file:///Users/tmikolaj/Downloads/videoalpha_video_dancer1.webm ! videoconvert ! autovideosink
(but created programmatically).
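A minimal sketch of that programmatic construction, using gst_parse_launch() for brevity (error handling and the main loop are omitted; the real code may build elements individually):

#include <gst/gst.h>

int
main (int argc, char *argv[])
{
  GError *error = NULL;
  GstElement *pipeline;

  gst_init (&argc, &argv);

  /* Two independent decode branches inside one pipeline. */
  pipeline = gst_parse_launch (
      "uridecodebin uri=file:///Users/tmikolaj/Downloads/videoalpha_video_dancer1.webm "
      "! videoconvert ! autovideosink "
      "uridecodebin uri=file:///Users/tmikolaj/Downloads/videoalpha_video_dancer1.webm "
      "! videoconvert ! autovideosink",
      &error);
  if (pipeline == NULL) {
    g_printerr ("parse error: %s\n", error->message);
    return 1;
  }

  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  /* ... run a GMainLoop here ... */
  return 0;
}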
Obviously, simply seeking the pipeline seeks both files at once.
I was trying to register a probe of the GST_PAD_PROBE_TYPE_EVENT_UPSTREAM type from inside the pad-added signal callback of the uridecodebin element. Inside the probe I wanted to catch the GST_EVENT_SEEK event and drop it for the first video. However, it seems that dropping the SEEK event leaves the pipeline in a PAUSED state, and even an explicit state change to PLAYING does nothing.
Does anybody have some hints on how to solve that problem?
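A minimal sketch of the probe described above, assuming GStreamer 1.x (names are illustrative):

/* Drop upstream SEEK events on this branch so only the other one seeks. */
static GstPadProbeReturn
drop_seek_cb (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstEvent *event = GST_PAD_PROBE_INFO_EVENT (info);

  if (GST_EVENT_TYPE (event) == GST_EVENT_SEEK)
    return GST_PAD_PROBE_DROP;  /* swallow the seek for this branch */

  return GST_PAD_PROBE_OK;
}

/* Installed from the pad-added callback of the first uridecodebin. */
static void
pad_added_cb (GstElement *decodebin, GstPad *new_pad, gpointer user_data)
{
  gst_pad_add_probe (new_pad, GST_PAD_PROBE_TYPE_EVENT_UPSTREAM,
      drop_seek_cb, NULL, NULL);
}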

Use "Clock Time" instead of Running Time for GStreamer Pipeline

I have two GStreamer pipelines: one is a "source" pipeline streaming a live camera feed into an external channel, and the second is a "sink" pipeline that reads from the other end of that channel and outputs the live video to some form of sink.
[videotestsrc] -> [appsink] ----- Serial Channel ------> [appsrc] -> [autovideosink]
        First Pipeline                                        Second Pipeline
The first pipeline starts from a videotestsrc, encodes the video, wraps it in a gdppay payload, and then sinks it into a serial channel (but for the sake of the question, any sink that can be read from to start another pipeline, like a filesink writing to a serial port or a udpsink), where it is read by the source of the next pipeline and shown via an autovideosink:
"Source" Pipeline
gst-launch-1.0 -v videotestsrc ! videoconvert ! video/x-raw,format=I420 ! x265enc ! gdppay ! udpsink host=127.0.0.1 port=5004
"Sink" pipeline
gst-launch-1.0 -v udpsrc uri=udp://127.0.0.1:5004 ! gdpdepay ! h265parse ! avdec_h265 ! autovideosink
Note: given the latency induced by using udpsink/udpsrc, that pipeline complains about timestamp issues. If you replace the udpsrc/udpsink with a filesrc/filesink to a serial port, you can see the problem that I am about to describe.
Problem:
Now that I have described the pipelines, here is the problem:
If I start both pipelines, everything works as expected. However, if after 30s I stop the "source" pipeline and restart it, the running time gets reset back to zero. The sink pipeline then treats the timestamps of all newly sent buffers as old, because it has already received buffers for timestamps 0 through 30s, so playback on the other end won't resume until after the 30s mark:
Source Pipeline: [28][29][30][0 ][1 ][2 ][3 ]...[29][30][31]
Sink Pipeline:   [28][29][30][30][30][30][30]...[30][30][31]
                             ^
                             Source pipeline restarted
                             ^^^^^^^^^^^^^^^^...^^^^^^^^
                             Sink pipeline will continue to only show
                             the "frame" received at 30s until a "newer"
                             frame is sent, when in reality each sent
                             frame is newer and should be shown
                             immediately.
Solution
I have found that adding sync=false to the autovideosink does solve the problem; however, I was hoping to find a solution where the source would send its timestamps (DTS and PTS) based on the clock time, as seen in the image on that page.
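For reference, that workaround is just the sink pipeline with synchronisation disabled:
gst-launch-1.0 -v udpsrc uri=udp://127.0.0.1:5004 ! gdpdepay ! h265parse ! avdec_h265 ! autovideosink sync=false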
I have seen this post and experimented with is-live and do-timestamp on my video source, but they do not seem to do what I want. I also tried to manually set the timestamps (DTS, PTS) in the buffers based on the system time, but to no avail.
Any suggestions?
I think you should just restart the receiver pipeline as well. You could add the -e switch to the sender pipeline; when you stop it, the EOS should then be correctly propagated via the GDP element to the receiver pipeline. Otherwise, I guess you could send a new segment or discontinuity to the receiver. Some event has to be signalled to make the receiver aware of the change, or else it is just getting somewhat bogus data. I'd say restarting the receiver is the simplest way.
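If the sender is an application rather than gst-launch, a rough sketch of the equivalent of the -e switch might look like this (assuming pipeline is the sender's top-level element):

/* Ask the sender to go to EOS so GDP can propagate it downstream,
 * then wait for the EOS (or an error) before shutting down. */
gst_element_send_event (pipeline, gst_event_new_eos ());

GstBus *bus = gst_element_get_bus (pipeline);
GstMessage *msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
    GST_MESSAGE_EOS | GST_MESSAGE_ERROR);
if (msg != NULL)
  gst_message_unref (msg);
gst_object_unref (bus);

gst_element_set_state (pipeline, GST_STATE_NULL);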

gstreamer get video playing event

I am quite new to GStreamer and am trying to get some metrics on an existing pipeline. The pipeline is set up as 'appsrc queue mpegvideoparse avdec_mpeg2video deinterlace videobalance xvimagesink'.
xvimagesink only has a sink pad, and I am not sure where and how its output is connected, but I am interested in knowing when the actual video device/buffer displays the first I-frame and the video starts rolling.
The application sets the pipeline state to 'playing' quite early on, so listening for that state change does not help.
Check out GST_MESSAGE_STREAM_START and pad probes. However, I am not sure exactly what you want: at the GStreamer level you can only detect the moment a buffer is handled by some element, not when it is actually displayed.
xvimagesink has no src pad (output), only a sink pad (input).
You can read about preroll here: http://cgit.freedesktop.org/gstreamer/gstreamer/tree/docs/design/part-preroll.txt
Be sure to read the GStreamer manual first:
http://gstreamer.freedesktop.org/data/doc/gstreamer/head/manual/html/index.html
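A minimal sketch of such a probe, assuming GStreamer 1.x and that videosink is the xvimagesink instance; note that it fires when the first buffer reaches the sink's sink pad, not when the frame actually hits the display:

/* One-shot probe: report the first buffer arriving at the video sink. */
static GstPadProbeReturn
first_frame_cb (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  g_print ("first buffer reached the video sink\n");
  return GST_PAD_PROBE_REMOVE;  /* remove the probe after the first buffer */
}

static void
watch_first_frame (GstElement *videosink)  /* hypothetical helper */
{
  GstPad *pad = gst_element_get_static_pad (videosink, "sink");
  gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_BUFFER,
      first_frame_cb, NULL, NULL);
  gst_object_unref (pad);
}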

Dynamically change udpsrc on gstreamer pipeline

Referring to http://gstreamer.freedesktop.org/data/doc/gstreamer/head/manual/html/section-dynamic-pipelines.html, I tried to change the udpsrc in a GStreamer pipeline.
But something is wrong: the state of the pipeline cannot be changed to PLAYING after changing the UDP source.
Below is the sequence for changing the udpsrc.
The original pipeline bin consists of:
udpsrc - queue - tsdemux - queue - parser - videodecoder - queue - videosink
1. Block the src pad of the udpsrc.
2. Send an EOS event to the queue (next to the udpsrc).
3. Wait until the EOS message is received on the bus.
4. Set the state of the udpsrc to NULL, and remove (unlink) the udpsrc from the pipeline bin.
5. Create a new udpsrc with the new source URI.
6. Link it to the queue.
7. Change the state to PLAYING.
Is there any mistake in this sequence?
Thanks in advance.
You don't need to send an EOS through the pipeline in this case. EOS has the effect of signalling the end of a stream, and while it can be recovered from in most cases, it is not needed here.
The scenario for sending an EOS through elements when changing the pipeline dynamically applies to elements with both sink and src pads, in order to drain any data that may be stuck inside them.
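A rough sketch of that simplified swap (no EOS step), assuming GStreamer 1.x; the queue name "q0", the new port, and the trigger shown in the trailing comment are illustrative:

/* Fires once the old udpsrc's src pad is blocked; safe to swap here. */
static GstPadProbeReturn
swap_source_cb (GstPad *srcpad, GstPadProbeInfo *info, gpointer user_data)
{
  GstElement *pipeline = GST_ELEMENT (user_data);
  GstElement *old_src = gst_pad_get_parent_element (srcpad);
  GstElement *queue = gst_bin_get_by_name (GST_BIN (pipeline), "q0");
  GstElement *new_src;

  /* Tear down and remove the old source. */
  gst_element_set_state (old_src, GST_STATE_NULL);
  gst_bin_remove (GST_BIN (pipeline), old_src);

  /* Create, add, and link the replacement source. */
  new_src = gst_element_factory_make ("udpsrc", NULL);
  g_object_set (new_src, "port", 5001, NULL);  /* new source address */
  gst_bin_add (GST_BIN (pipeline), new_src);
  gst_element_link (new_src, queue);
  gst_element_sync_state_with_parent (new_src);

  gst_object_unref (old_src);
  gst_object_unref (queue);
  return GST_PAD_PROBE_REMOVE;
}

/* Trigger:
 * gst_pad_add_probe (old_src_pad, GST_PAD_PROBE_TYPE_BLOCK_DOWNSTREAM,
 *     swap_source_cb, pipeline, NULL);
 */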

Method to Cancel/Abort GStreamer tcpclientsink Timeout

I am working on an application that uses GStreamer to send a Motion JPEG video stream through a tcpclientsink element. The application works fine, except if I disrupt the network by switching the connection from wired to wireless or wireless to wired. When that happens, it looks like the tcpclientsink element waits 15 minutes before responding to messages. That becomes a problem if I try to shut down the application during this time. Here is what I've observed:
Start a Motion JPEG media stream with GStreamer using tcpclientsink as the sink. The code pushing the video runs in its own thread.
While the media stream is running, disrupt the connection by switching the type of network connection.
Start shutting down the application. Call gst_bus_post(bus, gst_message_new_eos(NULL)). This seems to get ignored.
Call pthread_join to wait for the video thread to exit. It does not respond for up to 15 minutes.
When I look at the GST_DEBUG messages, I can see that the GStreamer tcpclientsink hit an error while writing. It apparently waits 15 minutes while retrying.
Is there a way I can abort or cancel the timeout associated with tcpclientsink? Is there a different message I could send to cause the sink to terminate immediately?
I know I can use pthread_timedjoin_np and pthread_cancel to kill the video thread if GStreamer does not respond as fast as I would like, but I would prefer to have GStreamer exit as cleanly as possible.
Update
I should have mentioned I'm using GStreamer 0.10.36. Unfortunately this might just be a bug with that version. I see the handling has changed quite a bit in 1.2.x. I'm still hoping there is a workaround for the version I'm using.
I was able to recreate this problem using gst-launch-0.10. This might be more complicated than necessary, but it worked for me:
Launch three scripts:
The following relays the data between the consumer and the producer:
while [ 1 ]
do
  gst-launch-0.10 tcpserversrc host=0 port=${PORT_IN} ! jpegdec ! jpegenc ! \
    tcpserversink port=${PORT_OUT}
done
The following is the script for the consumer:
gst-launch-0.10 tcpclientsrc host=${IP_ADDR} port=${PORT_OUT} ! jpegdec ! \
  ffmpegcolorspace ! ximagesink
The following is the script for the producer:
gst-launch-0.10 ximagesrc ! videoscale ! \
  video/x-raw-rgb,framerate=1/1,width=640,height=320 ! ffmpegcolorspace ! \
  jpegenc ! tcpclientsink host=${IP_ADDR} port=${PORT_IN}
I ran the first two scripts on one machine and the third script on a second machine. When I switched the network connection on the second machine from wired to wireless, it took 15+ minutes for the tcpclientsink to report an error.
In order to fix the problem, I had to patch GStreamer: I added code to specify the send timeout in the gst_tcp_client_sink_start() function of gsttcpclientsink.c:
struct timeval timeout;
timeout.tv_sec = 60;   /* give up on a blocked send after 60 seconds */
timeout.tv_usec = 0;
...
/* Apply the send timeout to the sink's socket. */
setsockopt (this->sock_fd.fd, SOL_SOCKET, SO_SNDTIMEO, (char *) &timeout, sizeof (timeout));
Now the application is capable of shutting down within one minute (acceptable for my situation) even if the network is disrupted while streaming video.
Note: It doesn't look like this will be a problem with version 1.2.1, but I need to stay with 0.10.36.