Get accurate frame number from QMediaPlayer - C++

I want to process frames with Qt. I use QMediaPlayer to load a video.
I used the implementation I found here. I have additional data stored as "frame by frame" values in a .csv file.
For this I want to get the exact frame number of the current frame that is processed in the present() function of my QAbstractVideoSurface implementation. It works while playing, stopping and pausing the video, but not when I attach a slider to the video: it seems that the QMediaPlayer is out of sync with the data that is displayed. I tried getting the current time of the QMediaPlayer while inside the QAbstractVideoSurface::present() function, but it just won't work. Setting the time from outside while the slider is being moved did not work for me either.
My main problem is that QVideoFrame::startTime() and QVideoFrame::stopTime() do not deliver correct results after QMediaPlayer::setPosition() was called!
Does anyone have an idea how to get the current frame number for a QVideoFrame?
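For reference, here is a simplified sketch of the kind of present() override I mean. The frame rate here is an assumption taken from my .csv metadata, not something QMediaPlayer reports, and the class is stripped down to the frame-number calculation:

#include <QAbstractVideoSurface>
#include <QAbstractVideoBuffer>
#include <QVideoFrame>
#include <cmath>

class FrameProbe : public QAbstractVideoSurface
{
public:
    explicit FrameProbe(double frameRate, QObject *parent = nullptr)
        : QAbstractVideoSurface(parent), m_frameRate(frameRate) {}

    QList<QVideoFrame::PixelFormat> supportedPixelFormats(
        QAbstractVideoBuffer::HandleType) const override
    {
        return QList<QVideoFrame::PixelFormat>()
            << QVideoFrame::Format_RGB32 << QVideoFrame::Format_ARGB32;
    }

    bool present(const QVideoFrame &frame) override
    {
        // startTime() is in microseconds; this is the value that becomes
        // unreliable for me after QMediaPlayer::setPosition() has been called.
        const qint64 usec = frame.startTime();
        const int frameNumber = static_cast<int>(std::llround(usec / 1000000.0 * m_frameRate));
        // ... look up frameNumber in the .csv data and process the frame ...
        Q_UNUSED(frameNumber);
        return true;
    }

private:
    double m_frameRate;
};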

Related

Do not show Desktop between 2 videos

I'm working with OMXPlayer on a Raspberry Pi.
Right now I have a loop (in Python 2.7) to show videos, and it works correctly.
But I have two problems:
1. When one video finishes, the desktop is shown for about a second, and I don't want that. How can I switch quickly to another video without showing the desktop?
2. Another problem is that I want to show some pictures too. I know that OMXPlayer does not show images... Can I use another program in my code? The user should not notice the change.
Thanks.
I was trying to figure this out too, but I'm afraid it's not possible. It seems to me that omxplayer can only play a single video at a time, and to play another video you have to start a new instance of the program. That takes a while to initialize, hence the gap between the videos.
That said, I found a hacky way to get around it. When playing a video, you can extract its last frame into a PNG file with ffmpeg. Then you play the video with omxplayer and use whatever graphical tools you have to display that picture fullscreen in the background. When the video ends, it disappears, but the picture stays there; since it's the last frame of the video, the playback just appears to freeze at the end for a second before the next video starts. Then you repeat the process. Hope it helps.

How do I get frames one by one from a video in the Qt framework?

I load a video with QMediaPlayer and then want to read the frames one by one and process them with other vision algorithms. But I don't know how to get frames one by one from the video and access each pixel of a frame...
In the OpenCV library, I can easily solve that problem with cv::VideoCapture and cv::Mat:
#include <opencv2/opencv.hpp>

cv::VideoCapture capture(filename);
cv::Mat img;
capture >> img; // 'img' contains the first frame of the video.
capture >> img; // 'img' contains the second frame of the video.
If someone has already dealt with this kind of problem, please help me.
Thanks a lot.
You could write your own implementation of QAbstractVideoSurface and override its present() method to handle the video frame by frame.
Then you will have to set it as the video output of the QMediaPlayer via setVideoOutput().
For details on how to access the frame data, consult the QVideoFrame documentation.
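A minimal sketch of that wiring might look like this (MyVideoSurface and the file path are placeholders, not Qt API):

#include <QMediaPlayer>
#include <QUrl>
#include "myvideosurface.h"   // placeholder: your QAbstractVideoSurface subclass with present() overridden

void setupPlayback(QObject *parent)
{
    MyVideoSurface *surface = new MyVideoSurface(parent);   // receives every decoded QVideoFrame
    QMediaPlayer *player = new QMediaPlayer(parent);
    player->setVideoOutput(surface);                        // route decoded frames to the surface
    player->setMedia(QUrl::fromLocalFile("video.mp4"));     // placeholder path
    player->play();                                         // present() is now called for each frame
}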
A suggestion: you could use OpenCV instead. That would make it easier to play videos and process them without a QImage-to-cv::Mat conversion.
To process videos with OpenCV + Qt, you can create a worker QThread driven by a QTimer: the timer emits a signal every few milliseconds to a slot in the worker thread, which fetches the next video frame from cv::VideoCapture and works on the data.
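A rough sketch of that QTimer-driven worker, assuming a ~30 fps polling interval is acceptable (class names and the file path are placeholders):

#include <QObject>
#include <QThread>
#include <QTimer>
#include <opencv2/opencv.hpp>
#include <string>

class CaptureWorker : public QObject
{
    Q_OBJECT
public:
    explicit CaptureWorker(const std::string &file) : m_capture(file) {}

public slots:
    void grabFrame()
    {
        cv::Mat frame;
        if (!m_capture.read(frame))   // read() returns false at the end of the video
            return;
        // ... run your vision algorithms on 'frame' here ...
    }

private:
    cv::VideoCapture m_capture;
};

// Wiring, e.g. in main() after the QApplication is created:
//   QThread *thread = new QThread;
//   CaptureWorker *worker = new CaptureWorker("video.mp4");   // placeholder path
//   QTimer *timer = new QTimer;
//   worker->moveToThread(thread);
//   QObject::connect(timer, &QTimer::timeout, worker, &CaptureWorker::grabFrame);
//   thread->start();
//   timer->start(33);   // fire roughly every 33 ms (~30 fps)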

C++ waitKey Delay to Display Images in Loop

In C++ (using OpenCV) I'm creating a loop which displays a new image from a file on each iteration. To achieve this I've had to add waitKey(1), otherwise only a blank window is displayed. I was just wondering why this millisecond delay must be included for the image to show on each iteration and, if possible, whether there is a way to display the image without requiring this delay.
Thanks in advance!
The function waitKey() waits for a key event for the given delay in milliseconds. As explained in the OpenCV documentation, HighGUI (imshow() is a HighGUI function) needs waitKey() to be called regularly in order to process its event loop.
That is, if you don't call waitKey(), HighGUI cannot process window events like redraw, resizing, input events, etc. So just call it, even with a 1 ms delay :)
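To illustrate, a minimal version of such a loop might look like this (file names are placeholders):

#include <opencv2/opencv.hpp>
#include <string>
#include <vector>

int main()
{
    const std::vector<std::string> files = { "img0.png", "img1.png", "img2.png" };
    for (const std::string &f : files)
    {
        cv::Mat img = cv::imread(f);
        if (img.empty())
            continue;                // skip files that failed to load
        cv::imshow("viewer", img);
        cv::waitKey(1);              // pumps the HighGUI event loop so the window actually redraws
    }
    cv::waitKey(0);                  // keep the last image on screen until a key is pressed
    return 0;
}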
See also: What does waitKey(30) mean in OpenCV?

Frame by frame with QMediaPlayer

I'm trying to implement a "frame by frame" function for users in the Video Widget example from the Qt Examples, but I couldn't find how to do that with QMediaPlayer. I also want to get the metadata of a video file (e.g. how many frames per second, duration, etc.), but the metaData("Duration") call gives nothing.
Is QMediaPlayer the right class to use? If yes, how can I do what I want?
If not, what class do I have to use?
Thanks for your help.
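The closest workaround I can think of is seeking by one frame duration while the player is paused. A rough sketch of that idea (the frame rate value is an assumption you have to supply yourself, since metaData() gives me nothing):

#include <QMediaPlayer>

void stepOneFrame(QMediaPlayer *player, double fps)
{
    if (fps <= 0.0)
        return;
    const qint64 frameMs = static_cast<qint64>(1000.0 / fps);   // duration of one frame in ms
    player->pause();                                            // stepping only makes sense while paused
    player->setPosition(player->position() + frameMs);          // jump forward by roughly one frame
}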

DirectShow C++: wrong duration and fps in resulting AVI file

I have a DirectShow graph as described below:
File.mov -> Haali Splitter -> ffdshow decoder -> Custom Filter -> AVI Mux -> File Writer (File.avi).
The original file (File.mov) has a frame rate of 30 fps and a duration of 6 seconds.
Behaviour from the command prompt: when I run the application from the prompt, I obtain a file with a duration of about 12 seconds at 25 fps; each frame appears to be duplicated.
Behaviour in GraphEdit: when I run the same graph in GraphEdit, playback progresses until the progress bar is full but never stops (the Stop button doesn't turn grey). If I force a stop with the Stop button, File.avi is automatically removed from the disk.
Thank you for your help
See the discussion at DirectShow Record Problem - fps.
The AVI file format does not store per-frame timestamps. If the frame rate in the media type used to create the file doesn't match the timestamps you pass in, the mux will insert dropped-frame markers.
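For illustration, one way to make the advertised frame rate match is to adjust AvgTimePerFrame in the VIDEOINFOHEADER of the media type your custom filter outputs. A sketch only; the function name is made up and error handling is omitted:

#include <dshow.h>

void SetFrameRate(AM_MEDIA_TYPE *pmt, double fps)
{
    if (pmt == NULL || pmt->formattype != FORMAT_VideoInfo || pmt->pbFormat == NULL)
        return;
    VIDEOINFOHEADER *vih = reinterpret_cast<VIDEOINFOHEADER *>(pmt->pbFormat);
    // AvgTimePerFrame is in 100-ns units: 10,000,000 / 30 = 333,333 for ~30 fps.
    vih->AvgTimePerFrame = static_cast<REFERENCE_TIME>(10000000.0 / fps + 0.5);
}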