All,
I have a GStreamer source plugin which reads video frames from an AVI file. It's connected to GStreamer's core tee and two queue elements to push each video frame to two video processing elements. The outputs of these two video processing elements get muxed by my mux plugin.
With tee and queue, my source plugin currently keeps pushing some 6-10 video frames to both queues, until the queues' limits are filled, I believe. What I want is to push only one video frame from my source plugin and then wait for a signal from my mux plugin before pushing the next frame.
Can someone explain how this can be achieved in the GStreamer framework?
Thanks!
ARM
P.S. I tried setting the queue element's max-size-buffers property to 1, and it did not work.
Take a look at the existing GStreamer muxers. Basically, rate control is done there by using GstCollectPads to wait for one buffer on every sinkpad, blocking until they all arrive; once every sinkpad has a buffer, you mux them together (properly synchronizing them relative to each other) and then forward the data. So rate control is done by blocking inside the muxer, and only once the muxer unblocks (i.e. consumes a buffer) can a new buffer be pushed on that sinkpad.
The queues in front of the muxer are irrelevant for that, but if you want to keep memory usage low you can use max-size-buffers=1 or similar settings.
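For reference, here is a minimal sketch of that pattern in a muxer's collected callback (GStreamer 1.x; the MyMux type, the collect field, and the surrounding element boilerplate are assumptions standing in for your actual plugin):

    /* Called by GstCollectPads only once every sinkpad holds a buffer; until
     * the buffers are popped here, the upstream threads (your source, via the
     * queues) stay blocked in gst_pad_push(). */
    static GstFlowReturn
    my_mux_collected (GstCollectPads * pads, gpointer user_data)
    {
      GSList *walk;

      for (walk = pads->data; walk != NULL; walk = walk->next) {
        GstCollectData *cdata = (GstCollectData *) walk->data;
        GstBuffer *buf = gst_collect_pads_pop (pads, cdata);

        if (buf == NULL)
          continue;                 /* this pad is at EOS */

        /* ... synchronize the streams against each other, mux, push ... */
        gst_buffer_unref (buf);
      }
      return GST_FLOW_OK;
    }

    /* In the muxer's instance init: */
    mux->collect = gst_collect_pads_new ();
    gst_collect_pads_set_function (mux->collect, my_mux_collected, mux);

    /* And for every sinkpad the muxer creates: */
    gst_collect_pads_add_pad (mux->collect, sinkpad, sizeof (GstCollectData),
        NULL, TRUE);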
Hey,
I am new to GStreamer and want to send a video that is captured from a camera and manipulated with OpenCV over a network to a receiving part, which then reads and displays it. This should happen in real time. It basically works with the code/GStreamer settings below; however, as soon as a frame is dropped (at least I think this is the reason), the video gets corrupted in the form of grey parts (see the attached picture).
OpenCV Sending Part:
cv::VideoWriter videoTransmitter("appsrc ! videoconvert ! videoscale ! x264enc ! rtph264pay config-interval=1 pt=96 ! udpsink host=192.168.168.99 port=5000", cv::VideoWriter::fourcc('H', '2', '6', '4'), 10, videoTransmitter_imageSize, true);
OpenCV Receiving part:
cv::VideoCapture videoReceiver("udpsrc port=5000 ! application/x-rtp ! rtpjitterbuffer ! rtph264depay ! avdec_h264 ! videoconvert ! appsink", cv::CAP_GSTREAMER);
It basically works, but I often get grey parts in the video which then persist for a while until the video is displayed correctly again. I guess this happens whenever a frame is dropped during transmission. How can I get rid of these grey/corrupted frames? Any hints? Are there any GStreamer parameters I need to set to tune the result? Is there a better way to stream video over a network with OpenCV?
Any help is appreciated!
No, there isn't any mechanism in GStreamer to detect corrupted frames, because that doesn't make sense.
In most modern video codecs, frames aren't sent in full anymore but are split into slices (each covering only a small part of the frame). It can take multiple intra packets (each containing multiple slices) to build a complete frame, and this is a good thing, because it makes your stream more resilient to errors and allows, for example, multithreaded decoding of the slices.
In order to achieve what you want, you have multiple solutions:
1. Use RTP/RTCP instead of RTP over UDP only. At least RTP contains a sequence number and "end of frame" markers, so it's possible to detect some packet drops. GStreamer doesn't care about those by default unless you have started an RTP/RTCP session. If you set up a session with RTCP, you can get reports when packets are dropped. I'm not sure there is a pipeline-only way to be informed when a packet is dropped, so you might still have to write an appsink in your GStreamer pipeline and add some code to detect this event. However, this will tell you something is wrong, but not when it's OK to resume or how wrong it is. In GStreamer terms, the element is RTPSession, and you're interested in the stats::XXX_nack_count properties.
2. Add an additional protocol that computes a checksum of the encoder's output frame/NAL/packet and transmits it out of band. Make sure the decoder also computes the checksum of the incoming frame/NAL/packet; if they don't match, you'll know decoding will fail. Beware of packet/frame reordering (typically B-frames are reordered after their dependencies), which could disturb your algorithm. Again, you have no way to know when to resume after an error. Using TCP instead of UDP might be enough to fix it if you only have partial packet drops, but it will fail to recover if it's a bandwidth issue (if the video bandwidth exceeds the network bandwidth, it will collapse, since TCP can't drop packets to adapt).
3. Use an intra-only video codec (like APNG or JPEG). JPEG can also be decoded partially, but GStreamer's default software JPEG decoder doesn't output partial JPEG frames.
4. Set a closed and shorter GOP in your encoder. Many encoders have a GOP (group of pictures) parameter; count the frames in your decoder when decoding after an error. A GOP ensures that, whatever the state of the encoding, after GOP frames the encoder will emit a non-dependent group of frames (enough intra frames/slices to rebuild the complete picture). This allows resuming after an error by dropping GOP - 1 frames (you must decode them, but you can't use them; they might be corrupted); you'll still need a way to detect the error, see point 1 or 2 above. For x264enc the parameter is called key-int-max (see the sketch after this list). You might also want to try intra-refresh=true so the broken-frame effect after an error is shorter. The downside is an increase in bandwidth for the same video quality.
5. Use a video codec with scalable video coding (SVC instead of AVC, for example). In that case, on a decoding error you'll get lower quality instead of a corrupted frame. There isn't any free SVC encoder in GStreamer that I'm aware of.
6. Deal with it. Compute a saturation map of the picture with OpenCV and compute its mean and deviation. If it's very different from the previous picture, stop computation until the GOP has elapsed and the saturation is back to expected levels (a sketch of this check follows below).
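For points 4 and 6, minimal sketches of what I mean, assuming your pipeline and frame rate from above. The key-int-max=30 value and the 20.0 thresholds are arbitrary starting points, and looksCorrupted is a hypothetical helper:

    #include <cmath>
    #include <vector>
    #include <opencv2/opencv.hpp>

    // Point 4: shorter closed GOP on the sender. key-int-max and intra-refresh
    // are x264enc properties; 30 frames is 3 seconds at your 10 fps:
    //   "appsrc ! videoconvert ! videoscale ! x264enc key-int-max=30 intra-refresh=true
    //    ! rtph264pay config-interval=1 pt=96 ! udpsink host=192.168.168.99 port=5000"

    // Point 6: flag a frame whose saturation statistics jump relative to the
    // previous frame.
    bool looksCorrupted(const cv::Mat& bgr, double prevMean, double prevStdDev)
    {
        cv::Mat hsv;
        cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);

        std::vector<cv::Mat> channels;
        cv::split(hsv, channels);          // channels[1] is the saturation plane

        cv::Scalar mean, stddev;
        cv::meanStdDev(channels[1], mean, stddev);

        return std::abs(mean[0] - prevMean) > 20.0 ||
               std::abs(stddev[0] - prevStdDev) > 20.0;
    }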
I'm using uridecodebin with multiple types of sources: rtsp, http, filesrc, etc. But I want to control the framerate at which frames are decoded, for example decoding only 1 frame out of every 60. I can put a plugin that changes the framerate after uridecodebin, but that changes the framerate of frames that have already been decoded (by dropping them).
Maybe there is some element that autodetects the source element, which I could connect to decodebin? I found autovideosrc, but I don't understand how to use it. Any advice appreciated.
A task:
I have a trusted video event detector. I trust my event detector 100%, and I want to write an uncompressed frame to my AVI container only when my event detector produces a "true" result.
For frames where my event detector produces "false", I would like to write an empty packet, because I want to know that there was a frame with no event happening.
Is it possible to keep the AVI file playable this way? Or do I need to write my own player in this case?
Another option is to calculate timestamps manually and set dts/pts to that calculated time.
Drawback: I will need to recalculate timestamps to understand how many frames were between events.
I am using:
av_write_frame(AVFormatContext, AVPacket);
and
av_interleaved_write_frame(AVFormatContext, AVPacket);
What is your suggestion/idea?
Thank you in advance.
Knowing the AVI spec, I don't think there is such a thing as an "empty packet", since AVI stores its frames densely, without per-frame timestamps. If file size is no issue, you can repeat the same frame to indicate no detected event (undo it later with the freezedetect filter) or insert an all-zero frame (undo with the blackdetect filter). It appears better, however, to use something like the Matroska container with a variable frame rate, paired with lossless H.264 (more in line with your alternate option?). Just my 2 cents.
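If you go the Matroska/VFR route, the timestamp side is just your alternate option done in stream units. A minimal sketch (FFmpeg API; fmtCtx, videoStream, pkt, and captureTimeMs are assumed to come from your existing muxing code):

    /* Stamp each detected-event frame with its real capture time so the gap
     * between events is preserved without placeholder packets. */
    AVRational msTimeBase = { 1, 1000 };   /* captureTimeMs is in milliseconds */
    pkt->pts = av_rescale_q(captureTimeMs, msTimeBase, videoStream->time_base);
    pkt->dts = pkt->pts;                   /* assuming no B-frames (intra/lossless) */
    pkt->stream_index = videoStream->index;
    av_interleaved_write_frame(fmtCtx, pkt);

To recover the inter-event frame count later, divide the pts delta between two packets by the nominal frame duration.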
I have a DirectShow filter graph that runs forever without ever stopping. But when I change the graph's source to another video file, synchronization between the audio and video streams fails.
This happens because some audio frames haven't played yet. How can I tell the graph to flush out the audio buffer?
When you stop the filter graph, the data is flushed unconditionally.
Without stopping, you can remove buffered data by calling the respective input pin's IPin::BeginFlush and IPin::EndFlush methods (the first one, then the second immediately afterwards). This does not have to be the renderer's input pin; you want to call it on an upstream audio pin so that the flushing call is propagated through and drains everything up to the renderer.
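A minimal sketch (the FlushAudioBranch name and pAudioInputPin are placeholders; the pin itself comes from your existing graph code):

    #include <dshow.h>

    // Flush buffered audio without stopping the graph. Issued on an upstream
    // audio input pin, the flush propagates through to the renderer.
    HRESULT FlushAudioBranch(IPin* pAudioInputPin)
    {
        HRESULT hr = pAudioInputPin->BeginFlush();   // discard queued samples
        if (SUCCEEDED(hr))
            hr = pAudioInputPin->EndFlush();         // re-enable the data flow
        return hr;
    }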
I'm trying to capture an AVI video using the DirectShow AVIMux and FileWriter filters.
When I connect a SampleGrabber filter instead of the AVIMux, I can clearly see that the stream is 30 fps; however, upon capturing the video, each frame is duplicated 4 times and I get 120 frames instead of 30. The movie is 4 times slower than it should be, and only the first frame in each set of 4 is a keyframe.
I tried the same experiment with 8 fps, and for each image I received I had 15 frames in the video. And in the case of 15 fps, I got each frame 8 times.
I tried both writing the code in C++ and testing it with Graph Edit Plus.
Is there any way I can control this? Maybe there are some restrictions on the AVIMux filter?
You don't specify your capture format, which could have some bearing on the problem, but generally it sounds like the graph, when writing to file, has some bottleneck which prevents the stream from continuing to flow at 30 fps. The camera is attempting to produce frames at 30 fps, and it will do so as long as buffers are recycled for it to fill.
But here the buffers aren't available because the file writer is busy getting them onto the disk. The capture filter is starved and in this situation it increments the "dropped frame" counter which travels with each captured frame. AVIMux uses this count to insert an indicator into the AVI file which says in effect "a frame should have been available here to write to file, but isn't; at playback time repeat the last frame". So the file should have placeholders for 30 frames per second - some filled with actual frames, and some "dropped frames".
Also, you don't mention whether you're muxing in audio, which would act as a reference clock for the graph to maintain audio-video sync. When capture completes, if an audio stream is also in use, AVIMux alters the framerate of the video stream to make the durations of the two streams equal. You can check whether AVIMux has altered the framerate of the video stream by dumping the AVI file header (or maybe by right-clicking the file in Explorer and looking at its properties).
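As an aside, a quick way to dump that header field programmatically, using the classic VfW API (a sketch; error handling trimmed, and DumpAviFps is just a placeholder name):

    #include <windows.h>
    #include <vfw.h>        // link with vfw32.lib
    #include <cstdio>

    // Print the frame rate the AVI header claims for the video stream.
    void DumpAviFps(const char* path)
    {
        AVIFileInit();
        PAVIFILE file = NULL;
        if (SUCCEEDED(AVIFileOpenA(&file, path, OF_READ, NULL))) {
            PAVISTREAM stream = NULL;
            if (SUCCEEDED(AVIFileGetStream(file, &stream, streamtypeVIDEO, 0))) {
                AVISTREAMINFOA info = { 0 };
                AVIStreamInfoA(stream, &info, sizeof(info));
                // dwRate/dwScale is the frame rate AVIMux wrote into the header.
                printf("fps = %lu/%lu (= %.2f)\n", info.dwRate, info.dwScale,
                       info.dwScale ? (double)info.dwRate / info.dwScale : 0.0);
                AVIStreamRelease(stream);
            }
            AVIFileRelease(file);
        }
        AVIFileExit();
    }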
If I had to hazard a guess as to the root of the problem, I'd wager the capture driver has a bug in calculating the dropped frame count which is in turn messing up AVIMux. Does this happen with a different camera?