Capturing an AVI video with DirectShow - C++

I'm trying to capture an AVI video, using DirectShow AVIMux and FileWriter Filters.
When I connect a SampleGrabber filter instead of the AVIMux, I can clearly see that the stream is 30 fps. However, upon capturing the video, each frame is duplicated 4 times and I get 120 frames instead of 30. The movie is 4 times slower than it should be, and only the first frame in each set of 4 is a key frame.
I tried the same experiment at 8 fps, and each image I received appeared 15 times in the video; at 15 fps, I got each frame 8 times.
I tried this both in C++ code and with Graph Edit Plus.
Is there any way I can control this? Maybe there is some restriction on the AVIMux filter?

You don't specify your capture format, which could have some bearing on the problem, but generally it sounds like the graph, when writing to file, has some bottleneck which prevents the stream from continuing to flow at 30 fps. The camera is attempting to produce frames at 30 fps, and it will do so as long as buffers are recycled for it to fill.
But here the buffers aren't available because the file writer is busy getting them onto the disk. The capture filter is starved and in this situation it increments the "dropped frame" counter which travels with each captured frame. AVIMux uses this count to insert an indicator into the AVI file which says in effect "a frame should have been available here to write to file, but isn't; at playback time repeat the last frame". So the file should have placeholders for 30 frames per second - some filled with actual frames, and some "dropped frames".
Also, you don't mention whether you're muxing in audio, which would be acting as the reference clock for the graph to maintain audio-video sync. If an audio stream is also being captured, then when capture completes AVIMux alters the frame rate of the video stream to make the durations of the two streams equal. You can check whether AVIMux has altered the frame rate of the video stream by dumping the AVI file header (or maybe by right-clicking the file in Explorer and looking at its properties).
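For example, one quick way to read back the frame rate that ended up in the header is the old VfW AVIFile API; this is only a sketch, with a hard-coded file name and minimal error handling:
#include <windows.h>
#include <vfw.h>
#include <cstdio>
#pragma comment(lib, "vfw32.lib")

int main()
{
    AVIFileInit();
    PAVIFILE file = nullptr;
    if (AVIFileOpen(&file, TEXT("capture.avi"), OF_READ, nullptr) == AVIERR_OK) {
        PAVISTREAM stream = nullptr;
        if (AVIFileGetStream(file, &stream, streamtypeVIDEO, 0) == AVIERR_OK) {
            AVISTREAMINFO info = {};
            AVIStreamInfo(stream, &info, sizeof(info));
            // dwRate / dwScale is the frame rate AVIMux wrote for the video stream.
            printf("video frame rate: %.3f fps\n", (double)info.dwRate / info.dwScale);
            AVIStreamRelease(stream);
        }
        AVIFileRelease(file);
    }
    AVIFileExit();
    return 0;
}
If this reports something well below 30 fps after an audio+video capture, AVIMux has rescaled the stream as described above.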
If I had to hazard a guess as to the root of the problem, I'd wager the capture driver has a bug in calculating the dropped frame count which is in turn messing up AVIMux. Does this happen with a different camera?
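One way to check that theory in code is to ask the capture filter (or its capture pin, depending on where the driver exposes it) for IAMDroppedFrames and compare what the driver reports against what actually reaches the file; a sketch:
#include <dshow.h>
#include <cstdio>
#pragma comment(lib, "strmiids.lib")

// pUnk is the capture filter or its capture output pin, whichever exposes the interface.
void ReportDroppedFrames(IUnknown *pUnk)
{
    IAMDroppedFrames *pDropped = nullptr;
    if (SUCCEEDED(pUnk->QueryInterface(IID_IAMDroppedFrames, (void **)&pDropped))) {
        long dropped = 0, delivered = 0;
        pDropped->GetNumDropped(&dropped);        // frames the driver claims were dropped
        pDropped->GetNumNotDropped(&delivered);   // frames it claims were delivered
        printf("driver reports %ld dropped, %ld delivered\n", dropped, delivered);
        pDropped->Release();
    }
}
If the driver reports a large dropped count while the SampleGrabber test shows a steady 30 fps, that points at the driver's counters rather than at AVIMux.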

Related

Write empty packets into an AVI file with ffmpeg

A task:
I have a trusted video event detector. I trust my event detector 100%, and I want to write an uncompressed frame to my AVI container only when my event detector produces a "true" result.
For the frames where my event detector produces "false", I would like to write an empty packet, because I want to know that there was a frame during which no event happened.
Is it possible to do this and keep the AVI file playable? Or do I need to write my own player in this case?
Another option is to calculate timestamps manually and set dts/pts to that calculated time.
Drawback: I will need to recalculate timestamps to understand how many frames were between events.
I am using:
av_write_frame(AVFormatContext, AVPacket);
and
av_interleaved_write_frame(AVFormatContext, AVPacket);
What is your suggestion/idea?
Thank you in advance.
Knowing the AVI spec, I don't think there is such a thing as an "empty packet", since AVI stores its frames densely, without per-frame timestamps. If file size is no issue, you can repeat the same frame to indicate an undetected event (undo with the freezedetect filter) or insert an all-zero frame (undo with the blackdetect filter). It appears better, however, to use something like a Matroska container and a variable frame rate paired with lossless H.264 (more in line with your alternate option?). Just my 2 cents.
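To illustrate the variable-frame-rate route with the ffmpeg API you are already using: only event frames get written, and the gap between events lives entirely in the timestamps. This is just a sketch, assuming the AVFormatContext and stream are already set up and the container tolerates VFR (e.g. Matroska):
extern "C" {
#include <libavformat/avformat.h>
#include <libavutil/mathematics.h>
#include <libavutil/rational.h>
}

// Write one event frame. frame_index is its position in the original
// constant-rate input (e.g. 30 fps), so the number of skipped "false"
// frames is implicit in the timestamp gap.
static int write_event_packet(AVFormatContext *fmt_ctx, AVStream *st,
                              AVPacket *pkt, int64_t frame_index,
                              AVRational input_rate /* e.g. {30, 1} */)
{
    AVRational input_tb = av_inv_q(input_rate);    // one tick per source frame
    pkt->pts      = av_rescale_q(frame_index, input_tb, st->time_base);
    pkt->dts      = pkt->pts;                      // assumes no B-frames
    pkt->duration = av_rescale_q(1, input_tb, st->time_base);
    pkt->stream_index = st->index;
    return av_interleaved_write_frame(fmt_ctx, pkt);
}
The drawback you mention then becomes simple arithmetic: the number of frames between two events is the pts difference divided by the per-frame duration, minus one.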

openh264 Repeating frame

I'm using the openh264 lib in a C++ program to convert a set of images into an H.264-encoded MP4 file. These images represent updates to the screen during a session recording.
Let's say a set contains 2 images: one initial screen grab of the desktop, and another one 30 seconds later, when the clock changes.
Is there a way for the stream to represent a 30-second-long video using only these 2 images?
Right now I'm brute-forcing this by encoding the first frame multiple times to fill the gap. Is there a more efficient and/or faster way of doing this?
Of course. Set a frame rate of 1/30 fps and you end up with 1 frame every 30 seconds. It doesn't even have to be done in the H.264 stream - it can also be done when the stream gets muxed into an MP4 file afterwards, for example.
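For the mux-time variant, the sketch below (libavformat, not openh264 itself; fmt_ctx and video_stream are assumed to exist, and the helper name is made up) shows that a two-frame stream can span 30 seconds simply by stamping each encoded sample with the time it should appear:
extern "C" {
#include <libavformat/avformat.h>
#include <libavutil/mathematics.h>
}

// Write one encoded frame so that it is presented 'seconds' into the movie.
static int write_sample_at(AVFormatContext *fmt_ctx, AVStream *video_stream,
                           AVPacket *pkt, int64_t seconds)
{
    AVRational one_second = {1, 1};
    pkt->pts = pkt->dts = av_rescale_q(seconds, one_second, video_stream->time_base);
    pkt->stream_index = video_stream->index;
    return av_interleaved_write_frame(fmt_ctx, pkt);
}

// Usage: the initial screen grab at t = 0 s, the clock-change frame at t = 30 s.
//   write_sample_at(fmt_ctx, video_stream, first_pkt, 0);
//   write_sample_at(fmt_ctx, video_stream, second_pkt, 30);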

How to insert a key frame (I-frame) into an H.264 video stream with the ffmpeg C++ API?

I have a real-time video stream and want to cut some video clips from it at accurate timestamps (pts).
When I receive an AVPacket, I decode it, do something with it, and cache the AVPacket. I don't want to re-encode all the AVPackets; it costs CPU resources.
An H.264 stream contains many GOPs. Usually the cut should begin at a key frame and end at a key frame, otherwise the first few frames of the video clip will display incorrectly.
Now I use av_write_frame to write the AVPackets out as a video. But sometimes the GOP is very long: it can be 250 frames, i.e. 8.3 s at 30 frames per second, which means the distance between two I-frames can be 250 frames. The video clip is short, and I don't want to add too many unused frames.
What should I do? I think I should insert an I-frame at the start position of the video clip. Could I change a P-frame into an I-frame?
Thanks for reading!
This is not possible in the generic case, but it may be in specific cases. Even then, there are no open-source/free tools to do this, and I am unaware of any commercial tools. The reason I say it is not possible in the generic case is that each frame can reference up to 16 other frames, so you cannot just replace a single frame: you would need to replace all the referenced frames as well. Doing this would likely take almost as much CPU as encoding the whole GOP.
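Given that, the usual workaround is not to convert any frame, but to keep a rolling cache of packets since the most recent key frame, so every clip can still begin on an I-frame without re-encoding. A sketch (names such as on_packet are placeholders, not an existing API):
extern "C" {
#include <libavformat/avformat.h>
}
#include <deque>

static std::deque<AVPacket *> gop_cache;   // packets since the last key frame

void on_packet(AVPacket *pkt, int video_index)
{
    if (pkt->stream_index == video_index && (pkt->flags & AV_PKT_FLAG_KEY)) {
        // A new GOP starts here: the previous cache is no longer needed as a prefix.
        for (AVPacket *p : gop_cache)
            av_packet_free(&p);
        gop_cache.clear();
    }
    gop_cache.push_back(av_packet_clone(pkt));
}

// When a clip should start at pts T, first write everything already in gop_cache
// (it begins with a key frame), then keep writing live packets until the clip ends.
// The extra frames before T are at most one GOP and can be hidden at playback time
// by trimming on pts.
This trades a larger file (up to one extra GOP per clip) for not re-encoding; the only way to avoid both is to re-encode the head of the clip up to the next I-frame.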

MPEG backwards frame decoding using FFmpeg

I have so-called "blocks", each storing a series of MPEG-4 frames (I, P, P, P, P, ...).
Every block starts with an I-frame and ends just before the next I-frame.
(VOL - "visual_object_sequence_start_code" - is always included before the I-frame.)
I need to be able to play those block frames backwards.
The tricky part is that:
It's not possible to just take the last frame in my block and decode it, because it's a P-frame and it needs the I-frame (and the frames in between) to be decoded correctly.
I can't just take my first I-frame, pass it to ffmpeg's avcodec_decode_video function, and then pass only my last P-frame, because that last P-frame depends on the P-frame before it, right? (Well, as far as I've tested this method, my last decoded P-frame had artifacts.)
The way I'm currently performing backwards playback is to first decode all of my block's frames to RGB and store them in memory (in most cases ~25 frames per block max). But this method really requires a lot of memory, especially if the frame resolution is high.
And I have a feeling that this is not the right way to do this...
So I would like to ask, does any one have any suggestions how this "backwards" frame decoding/playing could be performed using FFmpeg?
Thanks
What you are looking at is really a research problem. To get a glimpse of the overall approach, look at the following papers:
Compressed-Domain Reverse Play of MPEG Video Streams, SPIE International Symposium on Voice, Video, and Data Communications, Boston, MA, November, 1998.
Reverse-play algorithm for MPEG video streaming
Manipulating Temporal Dependencies in Compressed Video Data with Applications to Compressed-Domain Processing of MPEG Video.
Essentially, the encoding is still based on key frames; however, you can reverse the process of motion compensation to achieve the reverse flow. This is done by converting P-frames into I-frames on the fly. It does require looking ahead, but doesn't require that much more memory. You could possibly save the result as a new file and then feed it to a standard decoder for reverse playback.
However, this is very complex, and I have rarely seen software that does this in practice.
I do not think there is a way around starting from the I-frame and decoding all the P-frames, since each P-frame depends on the previous frame. The decoded frames can be saved to a file, or, with limited storage and extra CPU power, older decoded frames can be discarded and recomputed later.
At the command level, you can convert input video to a series of images:
ffmpeg -i input_video output%04d.jpg
then reverse their order somehow and convert back to a video:
ffmpeg -r FRAME_RATE -i reverse_output%04d.jpg output_video
You may consider pre-processing, if it is an option.
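For completeness, here is roughly what the "decode the whole block, then walk it backwards" approach described above looks like with the current ffmpeg C API (send/receive instead of the older avcodec_decode_video; decode_ctx and the list of the block's packets are assumed to exist):
extern "C" {
#include <libavcodec/avcodec.h>
}
#include <vector>

// Decode one block (an I-frame followed by its P-frames) in forward order and
// return the decoded pictures; the caller then iterates the vector from back to
// front for reverse playback and frees each frame with av_frame_free().
std::vector<AVFrame *> decode_block(AVCodecContext *decode_ctx,
                                    const std::vector<AVPacket *> &block_packets)
{
    std::vector<AVFrame *> frames;
    for (AVPacket *pkt : block_packets) {
        if (avcodec_send_packet(decode_ctx, pkt) < 0)
            break;
        for (;;) {
            AVFrame *frame = av_frame_alloc();
            if (avcodec_receive_frame(decode_ctx, frame) < 0) {
                av_frame_free(&frame);
                break;
            }
            frames.push_back(frame);
        }
    }
    return frames;
}
With ~25 frames per block this is the memory-for-CPU trade-off described above; keeping the decoded frames in their native YUV instead of converting to RGB roughly halves the footprint.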

DirectShow filter graph using WMASFWriter creates video which is too short

I am attempting to create a DirectShow source filter based on the pushsource example from the DirectShow SDK. This essentially outputs a set of bitmaps to a video. I have set up a filter graph which uses Async_reader with a Wave Parser for audio and my new filter to push the video (the filter is a CSourceStream and I populate my frames in the FillBuffer function). These are both connected to a WMASFWriter to output a WMV.
Each bitmap can last for several seconds, so in the FillBuffer function I'm calling SetTime on the passed IMediaSample with a start and end time several seconds apart. This works fine when rendering to the screen, but writing to disk results in a file that is too short in duration. It seems like the last bitmap is being ignored when writing a WMV (it is shown just as the video ends rather than lasting for its intended duration). This is the case both with my filter and with a modified pushsource filter (in which the frame length has been increased).
At one point while trying to make this work, I also saw some odd behaviour where it was not possible to produce a video whose length wasn't a multiple of 10 seconds. I'm not sure what that was about, but I thought I'd mention it in case it's relevant.
I think the end time is simply ignored. Normally video samples only have a start time, because each sample is a point in time. Even though the video is just a series of points in time, any motion in it still appears fluid.
I think the solution is simple. Because the video stays the same until the next frame is received, you can just add a dummy frame at the end of your video: simply repeat the previous frame.
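In the push-source filter that could look something like the FillBuffer sketch below: once the last real bitmap has gone out, deliver one extra sample that repeats it, stamped at the intended end of the video. The member names (m_bitmaps, m_bitmapStart, and so on) and CopyBitmapToBuffer are placeholders for whatever state and helpers your filter already has:
HRESULT CPushPinBitmapSet::FillBuffer(IMediaSample *pSample)
{
    BYTE *pData = nullptr;
    pSample->GetPointer(&pData);

    if (m_currentBitmap < m_bitmaps.size()) {
        // Normal path: copy the bitmap and stamp the interval it should cover.
        CopyBitmapToBuffer(pData, m_bitmaps[m_currentBitmap]);   // hypothetical helper
        REFERENCE_TIME start = m_bitmapStart[m_currentBitmap];
        REFERENCE_TIME stop  = m_bitmapStop[m_currentBitmap];
        pSample->SetTime(&start, &stop);
        pSample->SetSyncPoint(TRUE);
        ++m_currentBitmap;
        return S_OK;
    }

    if (!m_sentTrailer) {
        // Dummy trailing sample: repeat the last bitmap, starting at the intended end
        // of the video, so the writer sees a sample there and keeps the full duration.
        CopyBitmapToBuffer(pData, m_bitmaps.back());
        REFERENCE_TIME start = m_bitmapStop.back();
        REFERENCE_TIME stop  = start + 1;
        pSample->SetTime(&start, &stop);
        m_sentTrailer = true;
        return S_OK;
    }

    return S_FALSE;   // no more samples: end of stream
}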