I have a directshow graph as described below :
"File.mov" → Haali Splitter → ffdshow decoder → custom filter → AVI Mux → File Writer ("File.avi").
The original file (File.mov) has a frame rate of 30 fps and a duration of 6 seconds.
Behaviour at the prompt: when I run the application from the command prompt, I obtain a file with a duration of about 12 seconds and a frame rate of 25 fps. What happens is that each frame appears twice in the output.
Behaviour in GraphEdit: when I run the same graph in GraphEdit, playback progresses until the progress bar is full but never stops (the Stop button doesn't grey out). If I force a stop with the Stop button, File.avi is automatically removed from the disk.
Thank you for your help
See discussion at DirectShow Record Problem - fps
The AVI file format does not have per-frame timestamps. If the media-type frame rate used when creating the file doesn't match the timestamps you pass in, the mux will insert dropped-frame markers.
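To make that failure mode concrete, here is an illustrative sketch of what happens when 30 fps timestamps are written against a 25 fps media type. The nearest-slot formula is my assumption for illustration, not the actual mux logic:

```python
# Illustrative sketch only: model an AVI index as fixed-rate frame slots and
# map each timestamped sample to its nearest slot. The slot formula is an
# assumption for illustration, not the real mux implementation.
def frame_slots(timestamps, media_fps):
    """Nearest frame-slot index for each presentation timestamp in seconds."""
    return [int(t * media_fps + 0.5) for t in timestamps]

src_fps = 30.0
pts = [i / src_fps for i in range(12)]   # first 12 samples of a 30 fps stream
print(frame_slots(pts, 25.0))
# Several slots receive two samples, so the mux has to drop or pad frames;
# that mismatch is what surfaces as duplicated frames or dropped-frame markers.
```

With matching rates every sample lands in its own slot and the problem disappears, which is why fixing the media-type frame rate (or the timestamps) resolves this kind of duration drift.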
G
Hello, I am building an image/video editing application (running on Windows 11) using Uno Platform.
(Uno Platform internally uses WinUI 3.)
What I want to do is:
The app can load an image or a video frame onto a canvas.
The user can apply brush effects to the frame surface, like a drawing-pad feature.
While the video is playing, the user can add an effect to any frame, and the modified frame is saved right away to the original file.
While the video is playing, run computer-vision processing to recognize objects and add marks to the frame.
Audio must be playable.
So my questions are:
How can I add an effect to only a specific frame of the original video and save it using the Uno Platform (or WinUI 3) API?
Is it possible to play a video on a canvas view and get it frame by frame from MediaPlayer?
I want to process frames with Qt. I use QMediaPlayer to load a video.
I used the implementation I found here. I have additional data stored as frame-by-frame values in a .csv file.
For this I want to get the exact frame number of the current frame being processed in the present() function of my QAbstractVideoSurface implementation. It works while playing, stopping, and pausing the video, but not when I attach a slider to the video: the QMediaPlayer seems to be out of sync with the data that is displayed. I tried reading the QMediaPlayer's current time from inside QAbstractVideoSurface::present(), but it just won't work. Setting the time from outside while the slider is being moved was also no success for me.
My main problem is that QVideoFrame::startTime() and QVideoFrame::endTime() do not deliver correct results after QMediaPlayer::setPosition() has been called!
Does anyone have an idea how to get the current frame number for a QVideoFrame?
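One direction, sketched here in Python for the arithmetic only: derive the frame index from the frame's own presentation timestamp instead of the player's position. This assumes the timestamp is in microseconds (as QVideoFrame::startTime() reports) and that the nominal frame rate is known from the video's metadata:

```python
# Hypothetical sketch: map a presentation timestamp to a zero-based frame
# index. Assumptions: timestamp is in microseconds, nominal fps is known.
def frame_number(start_time_us, fps):
    return round(start_time_us * fps / 1000000)

print(frame_number(0, 30.0))        # first frame
print(frame_number(333333, 30.0))   # roughly a third of a second in
```

Of course this only helps while startTime() itself is trustworthy; after a seek via setPosition() you may need to re-baseline the timestamps against the player's reported position before applying this mapping.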
I'm working with OMXPlayer on a Raspberry Pi.
Right now I have a loop (in Python 2.7) to show videos, and it works correctly.
But I have two problems:
1. When one video finishes, the desktop is shown for a second, and I don't want that. How can I switch quickly to the next video without showing the desktop?
2. Another problem is that I want to show some pictures too. I know that OMXPlayer does not show images. Can I use another program in my code? The user should not notice the change.
Thanks.
I was trying to figure this out too, but I'm afraid it's not possible. It seems to me that omxplayer can only play a single video at a time, and to play another video you have to run a new instance of the program. That takes a while to initialize, hence the gap between the videos.
That said, I figured out a hacky way to get around it. Before playing a video, you can extract its last frame into a PNG file with ffmpeg. Then you play the video with omxplayer and use whatever graphical tools you have to display the picture fullscreen in the background. When the video ends it disappears, but the picture stays there, and since it's the last frame of the video, playback appears to simply freeze at the end for a second before the next video starts. Then you just repeat the process. Hope it helps.
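A minimal sketch of that loop, assuming ffmpeg and omxplayer are installed and some image viewer (feh, for example) serves as the fullscreen background; the actual subprocess calls are left commented since the commands only exist on the Pi:

```python
# Rough sketch of the "freeze-frame between videos" trick described above.
# Assumptions: ffmpeg and omxplayer are on the PATH; the background viewer
# (e.g. feh) is whatever graphical tool you have available.
import subprocess

def last_frame_cmd(video, png):
    """ffmpeg command that extracts one frame from the last second of `video`."""
    return ["ffmpeg", "-y", "-sseof", "-1", "-i", video,
            "-update", "1", "-frames:v", "1", png]

def play_cmd(video):
    return ["omxplayer", video]

playlist = ["intro.mp4", "main.mp4"]
for video in playlist:
    still = "last_frame.png"
    print(" ".join(last_frame_cmd(video, still)))
    # subprocess.call(last_frame_cmd(video, still))  # grab the still first
    # ...display `still` fullscreen in the background (e.g. feh --fullscreen)...
    # subprocess.call(play_cmd(video))               # then play the video on top
```

The `-sseof -1` seek-from-end plus `-update 1` is a standard ffmpeg recipe for grabbing a final frame; the file names and playlist here are placeholders.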
I want to capture the video output of an application using C++ and the Windows API, and stream it over the network. At the moment I am capturing this output with a DirectShow filter. The application displays its video output on the screen, and I just capture whatever is there. I want to optimize this process.
My question is: Is there a way to capture the video/audio output of an application before it is displayed on the screen?
Thanks.
Capture video before it is shown?
It depends on how the application provides the video for you.
Real-time rendering - You can't access what doesn't exist. Video games, and dynamic rendering in general, display only the current state and may not know anything about future frames.
There's also an anomaly called screen tearing, which appears when rendering falls out of step with the screen's refresh rate.
Static display - All the data is already available. For example, if it's a video player application playing a video on your local machine, your only task is to get the data and capture it at the appropriate position in time.
Last but not least, every piece of hardware has a reaction time, a small delay to process data.
Also, there is a similar question: Fastest method of screen capturing on Windows
I'm using the VLC library to create a simple media player; the program will display instructions on top of the video. These instructions vary in position, size, and color. I need to process each video frame before it is displayed so that I can add my drawings. How can this be done? And how can I have libvlc show this kind of large text when changing the volume up or down?
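For the text part specifically, one option that avoids touching the frames yourself is VLC's marq (marquee) sub-filter, which draws text over the video. A sketch that only assembles the command line (marq-marquee/size/color/position are real marq options; the concrete values are example assumptions):

```python
# Sketch: build a VLC command line that overlays text via the "marq"
# sub-filter. The option names are real marq options; the values below
# are only example assumptions.
def vlc_overlay_cmd(media, text, size=48, color=0xFFFF00, position=4):
    return ["vlc", media,
            "--sub-filter=marq",
            "--marq-marquee={}".format(text),
            "--marq-size={}".format(size),
            "--marq-color=0x{:06X}".format(color),
            "--marq-position={}".format(position)]  # 4 = top in marq's scheme

print(" ".join(vlc_overlay_cmd("movie.mp4", "Volume: 75%")))
```

For actual per-frame drawing you would instead need libvlc's video callbacks (libvlc_video_set_callbacks) to intercept the decoded buffers before display; the marquee route only covers overlaid text.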