Pushing samples/buffers from AppSink to AppSrc - gstreamer

I need to implement an architecture where I can push data from AppSink to AppSrc.
I can't figure out whether this can be done within a single pipeline or whether I need two pipelines, given that I am manipulating the data somewhere between the appsink and the appsrc.
Another thing: from AppSink I can extract samples using pull-sample, and data can be fed into AppSrc using push-sample or push-buffer. So, is there a way to explicitly collect the samples received in AppSink into one buffer, or should I feed them to the AppSrc sample by sample?
Please suggest.
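In case it helps to see the mechanics, here is a minimal, untested Python sketch of the two-pipeline variant: each sample is pulled from an appsink, optionally manipulated, and pushed into an appsrc. The element layout, caps, and names (mysink, mysrc) are only placeholders for illustration.

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)

# Producer pipeline ending in an appsink (source element and caps are placeholders).
producer = Gst.parse_launch(
    "videotestsrc is-live=true ! video/x-raw,format=I420,width=640,height=480 "
    "! appsink name=mysink emit-signals=true max-buffers=10 drop=true")

# Consumer pipeline starting from an appsrc with matching caps.
consumer = Gst.parse_launch(
    "appsrc name=mysrc is-live=true format=time "
    "caps=video/x-raw,format=I420,width=640,height=480 "
    "! videoconvert ! autovideosink")

appsink = producer.get_by_name("mysink")
appsrc = consumer.get_by_name("mysrc")

def on_new_sample(sink):
    # Pull one sample, manipulate its buffer if needed, then hand it on.
    sample = sink.emit("pull-sample")
    if sample is None:
        return Gst.FlowReturn.EOS
    # ... modify sample.get_buffer() here ...
    return appsrc.emit("push-sample", sample)

appsink.connect("new-sample", on_new_sample)

consumer.set_state(Gst.State.PLAYING)
producer.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()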

Related

Use "Clock Time" instead of Running Time for GStreamer Pipeline

I have two GStreamer pipelines: one is a "source" pipeline streaming a live camera feed into an external channel, and the second is a "sink" pipeline that reads from the other end of that channel and outputs the live video to some form of sink.
[videotestsrc] -> [appsink] ----- Serial Channel ------> [appsrc] -> [autovideosink]
       First Pipeline                                         Second Pipeline
The first pipeline starts from a videotestsrc, encodes the video, wraps it in a GDP payload, and then sinks into a serial channel (for the sake of the question, any sink that the second pipeline can read from, such as a filesink writing to a serial port, or a udpsink), where it is read by the source of the second pipeline and shown via an autovideosink:
"Source" Pipeline
gst-launch-1.0 -v videotestsrc ! videoconvert ! video/x-raw,format=I420 ! x265enc ! gdppay ! udpsink host=127.0.0.1 port=5004
"Sink" pipeline
gst-launch-1.0 -v udpsrc uri=udp://127.0.0.1:5004 ! gdpdepay ! h265parse ! avdec_h265 ! autovideosink
Note: given the latency introduced by udpsink/udpsrc, that pipeline complains about timestamp issues. If you replace the udpsrc/udpsink with a filesrc/filesink pointed at a serial port, you can see the problem I am about to describe.
Problem:
Now that I have described the pipelines, here is the problem:
If I start both pipelines, everything works as expected. However, if I stop the "source" pipeline after 30s and then restart it, its running time is reset back to zero. The sink pipeline then considers the timestamps of all incoming buffers to be old, because it has already received buffers for timestamps 0 through 30s, so playback on the other end won't resume until another 30s have passed:
Source Pipeline: [28][29][30][0 ][1 ][2 ][3 ]...[29][30][31]
Sink Pipeline:   [28][29][30][30][30][30][30]...[30][30][31]
                             ^
                             Source pipeline restarted
                             ^^^^^^^^^^^^^^^^...^^^^^^^^
                             The sink pipeline will continue to show only the
                             "frame" received at 30s until a "newer" frame is
                             sent, when in reality each sent frame is newer and
                             should be shown immediately.
Solution
I have found that adding sync=false to the autovideosink does solve the problem; however, I was hoping to find a solution where the source would send its timestamps (DTS and PTS) based on the clock time, as seen in the image on that page.
I have seen this post and experimented with is-live and do-timestamp on my video source, but they do not seem to do what I want. I also tried to manually set the timestamps (DTS, PTS) in the buffers based on the system time, but to no avail.
Any suggestions?
I think you should just restart the receiver pipeline as well. You could add the -e switch to the sender pipeline; when you then stop that pipeline, it should correctly propagate EOS via the GDP element to the receiver pipeline. Alternatively, I guess you could send a new segment or a discontinuity to the receiver. Some event has to be signaled, though, to make the receiving pipeline aware of the change, otherwise the data is somewhat bogus. I'd say restarting the receiver is the simplest way.
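For reference, that would just mean launching the sender command from above with the -e (EOS-on-shutdown) switch added, e.g. (untested):
gst-launch-1.0 -e -v videotestsrc ! videoconvert ! video/x-raw,format=I420 ! x265enc ! gdppay ! udpsink host=127.0.0.1 port=5004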

Gstreamer capture and store mjpeg from webcam

I am trying to capture and store a webcam stream. The requirements are 1920x1080@30fps, and it must be done on a single-board computer (a Raspberry Pi).
The duration to capture is 10 minutes. (For the moment I only capture 10 seconds for testing.)
In general, the camera (a usbfhd01m from ELP) is able to provide an MJPEG stream at 1920x1080@30fps. I am just not able to store it, and I don't know why. I tried it with the following pipeline:
gst-launch-1.0 v4l2src device=/dev/video0 num-buffers=300 do-timestamp=true ! image/jpeg,width=1920,height=1080,framerate=30/1 ! queue ! avimux ! filesink location=test.avi
The result is a video file whose playback is far from smooth. What is missing in my pipeline?
When I use the same pipeline, but decode the stream and save it in a raw file like this:
gst-launch-1.0 v4l2src device=/dev/video0 num-buffers=300 do-timestamp=true ! image/jpeg,width=1920,height=1080,framerate=30/1 ! queue ! jpegdec ! filesink location=test.yuv
then the raw video is perfectly smooth. Therefore, I think the pipeline and the device are able to record at 1920x1080@30fps, but something seems to go wrong when saving the stream.
Storing the stream in the Matroska file format does not change my problem. And the Raspberry Pi 3 does not seem powerful enough to transcode to H264 on the fly (even using omxh264enc).
What happens when you remove the do-timestamp=true? This option applies the current pipeline timestamps to the sample buffers, overwriting those coming from the device. You probably want to store the original timestamps instead of overwriting them, since the pipeline-applied ones can carry some pipeline jitter.
In your second pipeline you save the stream as raw data, which basically removes all timestamp information you have (including the jittered timestamps). So when you play the raw stream back, a constant framerate is assumed instead.
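In other words, the suggestion is to try the first capture pipeline with the do-timestamp override removed, e.g. (untested):
gst-launch-1.0 v4l2src device=/dev/video0 num-buffers=300 ! image/jpeg,width=1920,height=1080,framerate=30/1 ! queue ! avimux ! filesink location=test.avi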

Write gstreamer source with opencv

My goal is to write a GigEVision to GStreamer application.
My first approach was to read the frames via a GigEVision API and then send them via GStreamer as a raw RTP/UDP stream.
This stream can then be received by any gstreamer application.
Here is a minimal example for a webcam: https://github.com/tik0/mat2gstreamer
The drawback of this is a lot of serialization and deserialization when the data is sent via UDP to the next application.
So the question: is it possible to easily write a GStreamer source pad with OpenCV, to overcome these drawbacks? (Or do you have any other suggestions?)
Greetings
I think I've found the best solution for my given setup (i.e. the data is exchanged between applications on the same PC).
Just using the shared-memory plugin allows data exchange with minimal effort.
So my OpenCV pipeline looks like:
appsrc ! shmsink socket-path=/tmp/foo sync=true wait-for-connection=false
And any other receiver (in this case gstreamer-1.0) looks like:
gst-launch-1.0 shmsrc socket-path=/tmp/foo ! video/x-raw, format=BGR ,width=<myWidth>,height=<myHeight>,framerate=<myFps> ! videoconvert ! autovideosink
This works very nicely, even with multiple clients accessing it.
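For illustration, a rough, untested sketch of the OpenCV side, pushing BGR frames into that exact shmsink pipeline via cv2.VideoWriter with the GStreamer backend (requires OpenCV built with GStreamer support; width, height and fps are placeholders):

import cv2

width, height, fps = 640, 480, 30  # placeholders for the actual frame format

# The same pipeline string as above; OpenCV's internal appsrc pushes the BGR frames.
pipeline = ("appsrc ! shmsink socket-path=/tmp/foo sync=true "
            "wait-for-connection=false")
writer = cv2.VideoWriter(pipeline, cv2.CAP_GSTREAMER, 0, fps, (width, height), True)

cap = cv2.VideoCapture(0)  # any OpenCV frame source will do
while True:
    ok, frame = cap.read()
    if not ok:
        break
    writer.write(frame)  # each frame ends up in the shared-memory segment

cap.release()
writer.release()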

Post-process GStreamer playbin pipeline

The playbin pipeline in GStreamer is a wonderful thing in that I don't need to have any real knowledge about the individual elements needed to process the stream.
However, if I wanted to rotate the video 90 degrees (or flip it, or anything else), it appears I have to manually code up the pipeline. At the moment, I'm doing this with:
rtspsrc location=X
! rtph264depay
! h264parse
! decodebin
! videoflip method=Y
! videoconvert
! autovideosink
However, because I'm binding the video to a specific Gtk widget, I capture the message asking for the widget ID and provide that back to GStreamer so it can correctly bind.
Unfortunately, according to gst-inspect-1.0, none of the elements in the pipeline above appear to actually provide the GstVideoOverlay interface, so when I query for one that can receive the widget identifier, I get null, followed very quickly by a null pointer error. Or, if I do nothing when null is returned, no binding occurs and GStreamer opens up a separate window to stream the video.
It turns out that playbin itself provides the required interface.
I also tried replacing autovideosink with ximagesink, and then with xvimagesink, both of which claim to support the interface, but in both cases no element supporting the interface was found.
So my questions are basically these:
1/ Can I insert something into the above pipeline that will provide the interface?
2/ Failing that, is there a way to use playbin to analyse the stream correctly but then capture its output and pass that through more filters? The sort of thing I'm thinking of is:
playbin location=X
! videoflip method=Y
! autovideosink
In other words, can I use something like the video-sink property of playbin to stop it creating its own sink and instead pass its data through to the videoflip?
I'd prefer something that could be implemented with Gst.Parse.Launch() since I don't really want to have to mess about creating every single pipeline element manually if I can avoid it.
I'd say that either the way you are requesting the GstVideoOverlay is not correct, or there is a bug in GStreamer; xvimagesink and ximagesink both support the GstVideoOverlay interface. autovideosink doesn't, but it is likely that the video sink inside it will.
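For what it's worth, the usual way to hand over the widget handle is to listen for the prepare-window-handle sync message on the bus and call set_window_handle() on whichever element emits it, rather than searching the pipeline for the interface yourself. A rough Python/X11 sketch (untested; the function name and arguments are placeholders):

import gi
gi.require_version('Gst', '1.0')
gi.require_version('GstVideo', '1.0')
from gi.repository import Gst, GstVideo  # GstVideo provides set_window_handle()

def attach_overlay(pipeline, xid):
    # Route the sink's window-handle request to the X window id of your widget
    # (e.g. widget.get_window().get_xid() for a realized Gtk widget on X11).
    def on_sync_message(bus, message):
        struct = message.get_structure()
        if struct is not None and struct.get_name() == "prepare-window-handle":
            message.src.set_window_handle(xid)
    bus = pipeline.get_bus()
    bus.enable_sync_message_emission()
    bus.connect("sync-message::element", on_sync_message)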
Anyway, you want to set a custom bin as the video-sink property. You can create your bin, put the elements you want inside it, create a sink ghost pad, and then set the bin as the video-sink of your playbin.
It is also possible to do it using parse-launch syntax:
gst-launch-1.0 playbin video-sink="videoconvert ! videoscale ! aasink" uri=file://<path/to/some/file>
Just replace the bin elements with whatever you need.
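In code, the same idea looks roughly like this (untested Python sketch; the URI and flip method are placeholders):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)

playbin = Gst.ElementFactory.make("playbin", None)
playbin.set_property("uri", "file:///path/to/some/file")  # placeholder URI

# Build the custom sink bin; ghost_unlinked_pads=True adds the sink ghost pad.
sink_bin = Gst.parse_bin_from_description(
    "videoflip method=clockwise ! videoconvert ! autovideosink", True)
playbin.set_property("video-sink", sink_bin)

playbin.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()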

gstreamer get video playing event

I am quite new to gstreamer and trying to get some metrics on an existing pipeline. The pipeline is set as 'appsrc queue mpegvideoparse avdec_mpeg2video deinterlace videobalance xvimagesink'.
xvimagesink only has a sink pad, and I am not sure where or how its output is connected, but I am interested in knowing when the actual video device/buffer displays the first I-frame and the video starts rolling.
The application sets the pipeline state to 'playing' quite early on, so listening for that event does not help.
Regards,
Check out GST_MESSAGE_STREAM_START and probes. However, I am not sure exactly what you want: at the GStreamer level you can only detect the moment when a buffer is handled by some element, not when it is actually displayed.
xvimagesink has no srcpad (output), only sinkpad (input).
You can read about preroll here: http://cgit.freedesktop.org/gstreamer/gstreamer/tree/docs/design/part-preroll.txt
Be sure to read GStreamer manual first:
http://gstreamer.freedesktop.org/data/doc/gstreamer/head/manual/html/index.html
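To illustrate the probe suggestion above, here is a rough, untested Python sketch that fires once when the first buffer reaches the sink element (again, this only tells you when the buffer is handed to xvimagesink, not when it is actually displayed; the function and variable names are placeholders):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

def watch_first_buffer(video_sink):
    # Call this with the xvimagesink element from your pipeline.
    def on_buffer(pad, info):
        print("first buffer reached the sink, PTS =", info.get_buffer().pts)
        return Gst.PadProbeReturn.REMOVE  # one-shot: remove the probe again
    sinkpad = video_sink.get_static_pad("sink")
    sinkpad.add_probe(Gst.PadProbeType.BUFFER, on_buffer)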