Reading vector<Mat> or video? (OpenCV & C++)

I'm currently working on a project where I have several pictures taken with a camera.
My goal is to make a video out of those pictures.
The problem is that the pictures are not continuous (some are missing in between),
so when I try to use the VideoWriter functions to create a video, the result is messy and plays far too fast.
So I had the idea of creating an equivalent of a video reader, but one that reads a vector instead of a video file: the display speed would depend on a cooldown between each picture in the vector.
I would like to know your opinion of my solution, and what would yours be?
Thank you.

Reduce the FPS in the VideoWriter object:
VideoWriter video(videoname, CV_FOURCC('M','J','P','G'), FPS, Size, true);
Try FPS = 5 or even less; this might work. (Note: in OpenCV 3 and later, CV_FOURCC is spelled cv::VideoWriter::fourcc.)

Related

how to programmatically modify the FPS of a video

I'm using OpenCV's cv::VideoCapture class to read frames from videos.
My guess is I could drop every 2nd frame to go from 30 FPS to 15 FPS, or I could drop every 3rd frame to go from 30 FPS to 20 FPS, etc.
...but I suspect this is not the right way to do it. Isn't there some sort of interpolation or re-interpretation of frames that needs to happen to smoothly modify the FPS?
Assuming there is, what would this be called so I can search for it? I believe projects like VLC can re-encode videos to use a different FPS, but I'm more curious to know about how to programmatically do this work in C++ with OpenCV.

using OpenCV to capture images, not video

I'm using OpenCV4 to read from a camera. Similar to a webcam. Works great, code is somewhat like this:
cv::VideoCapture cap(0);
cap.set(cv::CAP_PROP_FRAME_WIDTH , 1600);
cap.set(cv::CAP_PROP_FRAME_HEIGHT, 1200);
while (true)
{
    cv::Mat mat;
    // wait for some external event here so I know it is time to take a picture...
    cap >> mat;
    process_image(mat);
}
Problem is, this gives many video frames, not a single image. This is important because in my case I don't want nor need to be processing 30 FPS. I actually have specific physical events that trigger reading the image from the camera at certain times. Because OpenCV is expecting the caller to want video -- not surprising considering the class is called cv::VideoCapture -- it has buffered many seconds of frames.
What I see in the image is always from several seconds ago.
So my questions:
Is there a way to flush the OpenCV buffer?
Or to tell OpenCV to discard the input until I tell it to take another image?
Or to get the most recent image instead of the oldest one?
The other option I'm thinking of investigating is using V4L2 directly instead of OpenCV. Will that let me take individual pictures or only stream video like OpenCV?

OpenCV IplImage save/read to video

I'm trying to save a video to analyse later with OpenCV algorithms.
I'm using a C++ library of the camera to obtain the frames.
IplImage *iplImageInput = QueryFrame(); //runs every 30 ms
std::vector <cv::Mat> splittedVector;
cv::split(cv::Mat(iplImageInput), splittedVector); //stereo vision camera
// splittedVector buffer is used by the algorithms
So I would like to replace the QueryFrame() call with data saved beforehand.
I have already tried a few things with cv::VideoWriter / cv::VideoCapture, but with no luck.
Do you have any hints to eliminate the need for the camera while testing algorithms?
How should I implement a writer and reader to save like 150 frames?
Thanks a lot.

How to find object on video using OpenCV

To track an object across video frames, I first extract the image frames from the video and save them to a folder. Then I am supposed to process those images to find the object. I don't actually know whether this is practical, because other implementations seem to do all of this in a single step. Is this correct?
Well, your approach will consume a lot of space on your disk depending on the size of the video and the size of the frames, plus you will spend a considerable amount of time reading frames from the disk.
Have you tried to perform real-time video processing instead? If your algorithm is not too slow, there are some posts that show the things that you need to do:
This post demonstrates how to use the C interface of OpenCV to execute a function to convert frames captured by the webcam (on-the-fly) to grayscale and displays them on the screen;
This post shows a simple way to detect a square in an image using the C++ interface;
This post is a slight variation of the one above, and shows how to detect a paper sheet;
This thread shows several different ways to perform advanced square detection.
I trust you are capable of converting code from the C interface to the C++ interface.
There is no point in storing frames of a video if you're using OpenCV, as it has really handy methods for capturing frames from a camera/stored video real-time.
In this post you have an example code for capturing frames from a video.
Then, if you want to detect objects in those frames, you need to process each frame with a detection algorithm. OpenCV ships some sample code related to the topic. You can try the SIFT algorithm to detect a specific picture, for example.

Slow video-capturing with opencv 2.3.1

Is there a way to stream video with opencv faster?
i'm using
Mat img;
VideoCapture cap(".../video.avi");
for (;;) {
    cap >> img;
    // ...some calculations here...
}
Thanks
Since the frame-grabbing procedure is pretty straightforward, the slowness you are experiencing could be caused by calculations consuming your CPU, decreasing the FPS displayed by your application.
It's hard to tell without looking at the code that does this.
But a simple test to pinpoint the origin of the problem is to remove the calculations and make a minimal application that just reads the frames from the video and displays them. Simple as that! If this test works perfectly, then you know the performance is being affected by the calculations.
Good luck.