I have a DirectShow filter graph that runs continuously, without ever stopping. But when I switch the graph's source to a different video file, synchronization between the audio and video streams fails.
It happens because some audio frames haven't been played yet. How can I tell the graph to flush out the audio buffer?
When you stop the filter graph, the data is flushed unconditionally.
Without stopping, you can remove buffered data by calling the respective input pin's IPin::BeginFlush and IPin::EndFlush methods (the first one, then the second immediately afterwards). This does not have to be the renderer's input pin; you want to call an upstream audio pin so that the flush propagates downstream and drains everything up to the renderer.
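A minimal sketch of that call sequence, assuming pAudioPin is an input pin on the audio path upstream of the renderer (obtained elsewhere, e.g. via IBaseFilter::EnumPins); the function name is illustrative:

    #include <dshow.h>

    HRESULT FlushAudioPath(IPin *pAudioPin)
    {
        if (!pAudioPin)
            return E_POINTER;

        // BeginFlush discards queued samples and makes the pin reject
        // new ones until the flush completes.
        HRESULT hr = pAudioPin->BeginFlush();
        if (FAILED(hr))
            return hr;

        // EndFlush immediately afterwards re-enables normal delivery.
        return pAudioPin->EndFlush();
    }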
I decode an RTP H.264 stream and display it on the screen. In a parallel thread, recording to an MP4 file is sometimes performed. During recording I also mix the sound into the file through mp4mux. Separately, sound and video are written perfectly, but as soon as I combine them, a problem appears: the first few seconds of the video are a black screen, but there is sound. At the same time, sound and video are in sync. How can I solve this problem? Thank you in advance.
Video has higher latency than audio, which is why the audio arrives sooner. You would either need to trim the file afterwards if you don't want that, or add logic to your code that drops all audio until the first video frame has been decoded.
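A hedged sketch of the second option, assuming a GStreamer pipeline (the question mentions mp4mux): drop audio buffers with pad probes until the first video buffer has been seen. The pad variables are assumed to be the mux's video and audio sink pads; names are illustrative.

    #include <gst/gst.h>
    #include <atomic>

    static std::atomic<bool> video_started{false};

    static GstPadProbeReturn on_video_buffer(GstPad *, GstPadProbeInfo *, gpointer)
    {
        video_started.store(true);
        return GST_PAD_PROBE_REMOVE;  // one-shot: detach after the first frame
    }

    static GstPadProbeReturn on_audio_buffer(GstPad *, GstPadProbeInfo *, gpointer)
    {
        // Discard audio until the video path has produced its first buffer.
        return video_started.load() ? GST_PAD_PROBE_OK : GST_PAD_PROBE_DROP;
    }

    // Attach the probes to the mux's sink pads:
    //   gst_pad_add_probe(video_pad, GST_PAD_PROBE_TYPE_BUFFER,
    //                     on_video_buffer, nullptr, nullptr);
    //   gst_pad_add_probe(audio_pad, GST_PAD_PROBE_TYPE_BUFFER,
    //                     on_audio_buffer, nullptr, nullptr);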
All,
I have a GStreamer source plugin which reads a video frame from an AVI file. It's connected through GStreamer's core tee and two queue elements, which push the video frame to two video processing elements. The output of those two video processing elements gets muxed by my mux plugin.
With tee and queue, my source plugin currently keeps pushing roughly 6-10 video frames into both queues - until the queues' limit is reached, I believe. What I want is to push only one video frame from my source plugin and then wait for a signal from my mux plugin before pushing the next frame.
Can someone guide me on how this can be achieved in the GStreamer framework?
Thanks!
ARM
P.S. I tried setting the queue element's max-size-buffers property to 1 and it did not work.
Take a look at the existing GStreamer muxers. Basically, the rate control is done there by using GstCollectPads: it waits for one buffer on every sinkpad and blocks; once every sinkpad has a buffer, you mux them together (properly synchronizing them relative to each other) and forward the data. So rate control is done by blocking inside the muxer, and only once the muxer unblocks (i.e. consumes a buffer) can a new buffer be pushed on that sinkpad.
The queues in front of the muxer are irrelevant to that, but if you want to keep memory usage low you can use max-size-buffers=1 or similar settings.
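A rough sketch of that pattern (GStreamer 1.x, gstcollectpads.h); my_mux_collected and the setup calls are illustrative, not a complete muxer:

    #include <gst/gst.h>
    #include <gst/base/gstcollectpads.h>

    // Called only once EVERY sink pad holds a buffer; until then the
    // upstream streaming threads stay blocked - that is the rate control.
    static GstFlowReturn my_mux_collected(GstCollectPads *pads, gpointer user_data)
    {
        for (GSList *walk = pads->data; walk != nullptr; walk = walk->next) {
            GstCollectData *cdata = (GstCollectData *) walk->data;
            GstBuffer *buf = gst_collect_pads_pop(pads, cdata);
            if (buf) {
                /* ... synchronize and write the buffer into the output ... */
                gst_buffer_unref(buf);
            }
        }
        return GST_FLOW_OK;
    }

    // In the element's init: create the collector, register the callback,
    // and add each sink pad to it:
    //   collect = gst_collect_pads_new();
    //   gst_collect_pads_set_function(collect, my_mux_collected, self);
    //   gst_collect_pads_add_pad(collect, sinkpad, sizeof(GstCollectData),
    //                            nullptr, TRUE);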
I'm trying to use DirectShow to capture video from a webcam. I assume I should use the SampleGrabber class. So far I see that DirectShow can only read frames continuously at some desired fps. Can DirectShow read frames on request?
A DirectShow pipeline sets up streaming video. Frames will continuously stream through the Sample Grabber and its callback, if you set one up. The callback itself adds minimal processing overhead as long as you don't force a format change (forcing the video to RGB, in particular). It is up to you whether to process or skip any given frame there.
On-request grabbing then means taking either the last video frame that streamed through, or the next one to pass through the Sample Grabber. This is the typical mode of operation.
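A hedged sketch of that mode: an ISampleGrabberCB that copies a frame only when one was requested. FrameOnDemand and RequestFrame are made-up names, and qedit.h is the (deprecated) header that declares the interface.

    #include <dshow.h>
    #include <qedit.h>
    #include <atomic>
    #include <vector>

    class FrameOnDemand : public ISampleGrabberCB
    {
        std::atomic<bool> m_grabNext{false};
        std::vector<BYTE> m_frame;   // last frame grabbed on request

    public:
        void RequestFrame() { m_grabNext = true; }

        // Runs for every streamed frame; cheap unless we actually copy.
        STDMETHODIMP BufferCB(double /*time*/, BYTE *pBuffer, long len)
        {
            if (m_grabNext.exchange(false))
                m_frame.assign(pBuffer, pBuffer + len);  // keep this frame
            return S_OK;                                 // otherwise skip it
        }

        STDMETHODIMP SampleCB(double, IMediaSample *) { return S_OK; }

        // Minimal COM plumbing (object assumed to outlive the graph).
        STDMETHODIMP QueryInterface(REFIID riid, void **ppv)
        {
            if (riid == IID_IUnknown || riid == IID_ISampleGrabberCB) {
                *ppv = this;
                return S_OK;
            }
            *ppv = nullptr;
            return E_NOINTERFACE;
        }
        STDMETHODIMP_(ULONG) AddRef() { return 2; }
        STDMETHODIMP_(ULONG) Release() { return 1; }
    };

    // Register with the grabber: pSampleGrabber->SetCallback(&cb, 1);
    // (1 selects BufferCB, 0 selects SampleCB.)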
Some devices offer the additional feature of taking a still image on request. This is a rarer case, and it's described on MSDN here: Capturing an Image From a Still Image Pin:
Some cameras can produce a still image separate from the capture stream, and often the still image is of higher quality than the images produced by the capture stream. The camera may have a button that acts as a hardware trigger, or it may support software triggering. A camera that supports still images will expose a still image pin, which is pin category PIN_CATEGORY_STILL.
The recommended way to get still images from the device is to use the Windows Image Acquisition (WIA) APIs. [...]
To trigger the still pin, use [...]
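For reference, IAMVideoControl is the usual interface for software triggering. A sketch under that assumption, with pStillPin being the PIN_CATEGORY_STILL pin (e.g. found via ICaptureGraphBuilder2::FindPin); the function name is illustrative:

    #include <dshow.h>

    HRESULT TriggerStill(IBaseFilter *pCaptureFilter, IPin *pStillPin)
    {
        IAMVideoControl *pVC = nullptr;
        HRESULT hr = pCaptureFilter->QueryInterface(IID_IAMVideoControl,
                                                    (void **)&pVC);
        if (FAILED(hr))
            return hr;  // device has no software trigger support

        // Fire a one-shot trigger; the still frame is then delivered
        // on the still pin like any other sample.
        hr = pVC->SetMode(pStillPin, VideoControlFlag_Trigger);
        pVC->Release();
        return hr;
    }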
I'm trying to capture an AVI video using the DirectShow AVIMux and File Writer filters.
When I connect a SampleGrabber filter instead of the AVIMux, I can clearly see that the stream is 30 fps; however, upon capturing the video, each frame is duplicated 4 times and I get 120 frames instead of 30. The movie is 4 times slower than it should be, and only the first frame in each set of 4 is a key frame.
I tried the same experiment at 8 fps, and for each image I received I had 15 frames in the video. At 15 fps, I got each frame 8 times.
I tried both writing the code in C++ and testing it with Graph Edit Plus.
Is there any way I can control it? Maybe some restrictions on the AVIMux filter?
You don't specify your capture format, which could have some bearing on the problem, but generally it sounds like the graph, when writing to file, has a bottleneck which prevents the stream from continuing to flow at 30fps. The camera attempts to produce frames at 30fps, and it will do so as long as buffers are recycled for it to fill.
But here the buffers aren't available, because the file writer is busy getting them onto the disk. The capture filter is starved, and in this situation it increments the "dropped frame" counter which travels with each captured frame. AVIMux uses this count to insert an indicator into the AVI file which says, in effect, "a frame should have been available here to write to file, but isn't; at playback time, repeat the last frame". So the file should have placeholders for 30 frames per second - some filled with actual frames, and some "dropped frames".
Also, you don't mention whether you're muxing in audio, which would act as a reference clock for the graph to maintain audio-video sync. If an audio stream is also in use, then when capture completes AVIMux alters the frame rate of the video stream to make the durations of the two streams equal. You can check whether AVIMux has altered the frame rate of the video stream by dumping the AVI file header (or maybe by right-clicking the file in Explorer and looking at its properties).
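A quick-and-dirty sketch of such a dump: print dwMicroSecPerFrame, the first field of the main AVI header ('avih' chunk), to see the muxed frame rate. This does a naive fourcc scan rather than a full RIFF walk, which is fine for a diagnostic on a well-formed file.

    #include <cstdint>
    #include <cstdio>
    #include <cstring>
    #include <vector>

    int main(int argc, char **argv)
    {
        if (argc < 2) { std::fprintf(stderr, "usage: %s file.avi\n", argv[0]); return 1; }
        std::FILE *f = std::fopen(argv[1], "rb");
        if (!f) { std::perror("fopen"); return 1; }

        std::vector<unsigned char> buf(1 << 16);   // header lives near the start
        size_t n = std::fread(buf.data(), 1, buf.size(), f);
        std::fclose(f);

        for (size_t i = 0; i + 12 <= n; ++i) {
            if (std::memcmp(&buf[i], "avih", 4) == 0) {
                uint32_t usPerFrame;                       // MainAVIHeader.dwMicroSecPerFrame
                std::memcpy(&usPerFrame, &buf[i + 8], 4);  // skip fourcc + chunk size
                std::printf("dwMicroSecPerFrame = %u (~%.2f fps)\n",
                            usPerFrame, usPerFrame ? 1e6 / usPerFrame : 0.0);
                return 0;
            }
        }
        std::fprintf(stderr, "no 'avih' chunk found in first 64 KiB\n");
        return 1;
    }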
If I had to hazard a guess as to the root of the problem, I'd wager the capture driver has a bug in calculating the dropped frame count which is in turn messing up AVIMux. Does this happen with a different camera?
I built a DirectShow graph consisting of my video capture filter (grabbing the screen) and the default audio input filter, both connected through a splitter to the WM ASF Writer output filter and to a VMR9 renderer. That is, I want real-time audio/video encoding to disk together with a preview. The problem is that no matter what WM profile I choose (even a very low resolution profile), the output video file always "jitters" - every few frames there is a delay. The audio is fine; there is no jitter in the audio. The CPU usage is low (< 10%), so I don't believe this is a lack of CPU resources. I think I'm time-stamping my frames correctly.
What could be the reason?
Below is a link to a recorded video demonstrating the problem:
http://www.youtube.com/watch?v=b71iK-wG0zU
Thanks
Dominik Tomczak
I have had this problem in the past. Your problem is the volume of data being written to disk. Writing to a faster drive is a simple and effective solution. The other thing I've done is place a video compressor into the graph. You also need to make sure both input streams are using the same reference clock. I have had a lot of problems using this compressor scheme while keeping a good preview: my preview's frame rate dies even if I use an Infinite Tee rather than a Smart Tee, though the result written to disk is fine. It's also worth noting that the more powerful the machine I ran this on, the less of an issue it was, so the compressor may not actually provide much of a win over sticking a new, faster hard disk in the machine.
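On the shared-clock point, a minimal sketch: hand one clock (here the audio renderer's, DirectShow's usual default) to the graph's IMediaFilter, which distributes it to every filter. Function and variable names are illustrative.

    #include <dshow.h>

    HRESULT UseSingleGraphClock(IGraphBuilder *pGraph, IBaseFilter *pAudioRenderer)
    {
        IReferenceClock *pClock = nullptr;
        HRESULT hr = pAudioRenderer->QueryInterface(IID_IReferenceClock,
                                                    (void **)&pClock);
        if (FAILED(hr))
            return hr;

        IMediaFilter *pMediaFilter = nullptr;
        hr = pGraph->QueryInterface(IID_IMediaFilter, (void **)&pMediaFilter);
        if (SUCCEEDED(hr)) {
            hr = pMediaFilter->SetSyncSource(pClock);  // applied to all filters
            pMediaFilter->Release();
        }
        pClock->Release();
        return hr;
    }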
I don't think that's the issue. The volume of data written is less than 1 MB/s (the average compression ratio during encoding). I found the reason: when I build the graph without audio input (the WM ASF Writer has only a video input pin), and my video capture pin is connected through a Smart Tee to the preview pin and to the WM ASF Writer's video input pin, there is no glitch in the output movie. I reckon the problem is audio-to-video synchronization in my graph. The same happens when I build the graph in GraphEdit: without audio, no glitch; with audio, a constant glitch every second. I wonder whether I'm time-stamping my frames wrongly, but I think I'm doing it correctly. What is the general solution for audio-to-video synchronization in DirectShow graphs?
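For reference, the usual pattern is to timestamp capture samples with stream time read from the graph's single reference clock, so both streams share one timeline. A minimal sketch, assuming a push source built on the DirectShow base classes (CSourceStream); m_rtFrameLength is a hypothetical member holding the frame duration in 100 ns units (e.g. 10000000 / 30).

    #include <streams.h>  // DirectShow base classes

    HRESULT CMyCapturePin::FillBuffer(IMediaSample *pSample)
    {
        /* ... copy the captured frame into the sample's buffer ... */

        // Stamp the sample with stream time (reference clock time minus
        // the filter's run start), the same timeline the audio path uses.
        CRefTime streamTime;
        m_pFilter->StreamTime(streamTime);

        REFERENCE_TIME rtStart = streamTime;
        REFERENCE_TIME rtStop  = rtStart + m_rtFrameLength;
        pSample->SetTime(&rtStart, &rtStop);
        pSample->SetSyncPoint(TRUE);
        return S_OK;
    }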