OpenCV's VideoCapture::open Video Source Dialog - c++

In my current project, when I call VideoCapture::open(camera device index) while the camera is in use by another program, a Video Source dialog appears, and open() returns true when I select a device that is already in use.
However, in my previous experiment project, calling VideoCapture::open(camera device index) never showed this dialog.
I want to know what causes the Video Source dialog to appear and why the current project behaves differently from the experiment project.
This is the source code of the experiment project:
int main(int argc, char *argv[])
{
    //vars
    time_duration td, td1;
    ptime nextFrameTimestamp, currentFrameTimestamp, initialLoopTimestamp, finalLoopTimestamp;
    int delayFound = 0;
    int totalDelay = 0;
    // initialize capture on default source
    VideoCapture capture;
    std::cout << "capture.open(0): " << capture.open(0) << std::endl;
    std::cout << "NOOO" << std::endl;
    namedWindow("video", 1);
    // set framerate to record and capture at
    int framerate = 15;
    // Get the properties from the camera
    double width = capture.get(CV_CAP_PROP_FRAME_WIDTH);
    double height = capture.get(CV_CAP_PROP_FRAME_HEIGHT);
    // print camera frame size
    //cout << "Camera properties\n";
    //cout << "width = " << width << endl << "height = " << height << endl;
    // Create a matrix to keep the retrieved frame
    Mat frame;
    // Create the video writer
    VideoWriter video("capture.avi", 0, framerate, cvSize((int)width, (int)height));
    // initialize initial timestamps
    nextFrameTimestamp = microsec_clock::local_time();
    currentFrameTimestamp = nextFrameTimestamp;
    td = (currentFrameTimestamp - nextFrameTimestamp);
    // start thread to begin capture and populate Mat frame
    boost::thread captureThread(captureFunc, &frame, &capture);
    // loop infinitely
    for(bool q = true; q;)
    {
        if(frame.empty()){ continue; }
        //if(cvWaitKey( 5 ) == 'q'){ q=false; }
        // wait for X microseconds until 1second/framerate time has passed after previous frame write
        while(td.total_microseconds() < 1000000/framerate){
            //determine current elapsed time
            currentFrameTimestamp = microsec_clock::local_time();
            td = (currentFrameTimestamp - nextFrameTimestamp);
            if(cvWaitKey( 5 ) == 'q'){
                std::cout << "B" << std::endl;
                q = false;
                boost::posix_time::time_duration timeout = boost::posix_time::milliseconds(0);
                captureThread.timed_join(timeout);
                break;
            }
        }
        // determine time at start of write
        initialLoopTimestamp = microsec_clock::local_time();
        // Save frame to video
        video << frame;
        imshow("video", frame);
        //write previous and current frame timestamp to console
        cout << nextFrameTimestamp << " " << currentFrameTimestamp << " ";
        // add 1second/framerate time for next loop pause
        nextFrameTimestamp = nextFrameTimestamp + microsec(1000000/framerate);
        // reset time_duration so while loop engages
        td = (currentFrameTimestamp - nextFrameTimestamp);
        //determine and print out delay in ms, should be less than 1000/FPS
        //occasionally, if delay is larger than said value, correction will occur
        //if delay is consistently larger than said value, then CPU is not powerful
        // enough to capture/decompress/record/compress that fast.
        finalLoopTimestamp = microsec_clock::local_time();
        td1 = (finalLoopTimestamp - initialLoopTimestamp);
        delayFound = td1.total_milliseconds();
        cout << delayFound << endl;
        //output will be in following format
        //[TIMESTAMP OF PREVIOUS FRAME] [TIMESTAMP OF NEW FRAME] [TIME DELAY OF WRITING]
        if(!q || cvWaitKey( 5 ) == 'q'){
            std::cout << "C" << std::endl;
            q = false;
            boost::posix_time::time_duration timeout = boost::posix_time::milliseconds(0);
            captureThread.timed_join(timeout);
            break;
        }
    }
    // Exit
    return 0;
}
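For reference, a minimal sketch (assuming OpenCV 2.4.x or newer on Windows, where the CV_CAP_VFW and CV_CAP_DSHOW constants are defined) of opening the same device index through an explicit capture backend. The "Video Source" dialog is the stock Video for Windows capture-driver dialog, so comparing the two opens can show whether the two projects are simply ending up on different backends:

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::VideoCapture cap;

    // Force the Video for Windows backend; this is the path whose driver pops up the stock "Video Source" dialog.
    std::cout << "VFW open:   " << cap.open(0 + CV_CAP_VFW) << std::endl;
    cap.release();

    // Force the DirectShow backend instead; it normally opens the device without showing any dialog.
    std::cout << "DSHOW open: " << cap.open(0 + CV_CAP_DSHOW) << std::endl;
    return 0;
}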

Related

C++ | OpenCV unable to capture webcam and resize into a different frame, then save as mp4 @ 60 fps

I am currently trying to record video from a webcam (Logitech StreamCam Pro - Full HD) and write the captured feed into an .mp4 file. However, when I open the file with VLC, I get no playback - just the VLC logo, with the video length shown as 00:00:00 / 00:00:00.
I have noticed that if I change the extension from .mp4 to .avi, the file does play back in VLC, but at a ludicrous speed (fps), much higher than the camera's 60 fps.
TL;DR - I am unable to play the .mp4 file of my 1920x1080 @ 60 fps webcam feed, and I am wondering if anyone can help me with the problem.
The C++ code containing the logic is below:
int MediaCapture::RecordFeed(int cameraID)
{
    std::cout << "Camera selected: " << cameraID << std::endl;
    // Define total frames and start of a counter for FPS calculation
    int totalFrames = 0;
    cv::VideoCapture *capture;
    capture = new cv::VideoCapture(2);
    if (!capture->isOpened()) {
        std::cout << "Error opening video stream or file" << std::endl;
        return -1;
    }
    int frameWidth = capture->get(cv::CAP_PROP_FRAME_WIDTH);
    int frameHeight = capture->get(cv::CAP_PROP_FRAME_HEIGHT);
    int cameraFPS = capture->get(cv::CAP_PROP_FPS);
    cv::VideoWriter video(fs::current_path().string() + "/assets/videos/recording.mp4",
                          cv::VideoWriter::fourcc('m','p','4','v'), cameraFPS,
                          cv::Size(frameWidth, frameHeight));
    int tf = 0;
    time_t start, end;
    time(&start);
    cv::Mat frame;
    cv::Mat resizedFrame;
    while (capture->read(frame))
    {
        tf++;
        cv::resize(frame, resizedFrame, cv::Size(1920, 1080));
        std::cout << "Height:" << resizedFrame.size().height << std::endl;
        std::cout << "Width:" << resizedFrame.size().width << std::endl;
        video.write(resizedFrame);
        if (cv::waitKey(1) >= 0)
        {
            break;
        }
    }
    // When everything done, release the video capture and write object
    capture->release();
    video.release();
    // Closes all the windows
    cv::destroyAllWindows();
    return 0;
}
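Two things commonly behind symptoms like this, neither visible in the snippet above, are a VideoWriter that silently failed to open (the 'mp4v' FourCC needs an FFMPEG-enabled OpenCV build) and an fps of 0 reported by CAP_PROP_FPS, which then ends up in the file header. A minimal sketch, assuming a default camera at index 0 and a 1920x1080 output size, of making both failure modes visible:

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::VideoCapture cap(0);
    if (!cap.isOpened()) return -1;

    double fps = cap.get(cv::CAP_PROP_FPS);
    if (fps <= 0) fps = 30.0;                 // some drivers report 0 here; fall back to something sane

    const cv::Size outSize(1920, 1080);       // the size every written frame must match exactly
    cv::VideoWriter video("recording.mp4",
                          cv::VideoWriter::fourcc('m','p','4','v'), fps, outSize);
    if (!video.isOpened()) {                  // an unsupported codec/container fails silently otherwise
        std::cerr << "VideoWriter could not be opened (codec/container not supported by this build?)" << std::endl;
        return -1;
    }

    cv::Mat frame, resized;
    while (cap.read(frame))
    {
        cv::resize(frame, resized, outSize);  // keep written frames exactly the size the writer was given
        video.write(resized);
        if (cv::waitKey(1) >= 0) break;
    }
    return 0;
}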

cv::VideoWriter isn't saving any video

I'm trying to get cv::VideoWriter to work but I'm having trouble.
if (recording)
{
    vid.write(imagecl);
    std::cout << "\trecording..." << std::endl;
}
cv::imshow(CAMERA_TOPIC, imagecl);
rec_key = cv::waitKey(1);
imagecl.release();
if (rec_key == 114 && !recording)
{
    std::cout << "\t\tStarted Recording" << std::endl;
    auto now = std::chrono::system_clock::now();
    now_time = std::chrono::system_clock::to_time_t(now);
    vidname = VIDEO_PATH + date_str(std::ctime(&now_time)) + ".mp4";
    std::cout << "\tSaving video to -> " << vidname << std::endl;
    codec = cv::VideoWriter::fourcc('a','v','c','1');
    vid = cv::VideoWriter(vidname, codec, 30, cv::Size(frame_width, frame_height), true);
    recording = true;
}
else if (rec_key == 115 && recording)
{
    std::cout << "\t\tEnded recording" << std::endl;
    vid.release();
    recording = false;
}
This code is part of a video_recorder function that executes inside an infinite loop. I'm using another thread to fill imagecl with the frames I want to record. cv::imshow works properly (it opens a window and displays the frames), but I can't get cv::VideoWriter to save any video. The 'r' and 's' keys do trigger the conditions: I get the terminal messages "Started Recording", then "recording..." on each loop, and then "Ended recording". I double-checked the path and it is correct; I'm saving to "/home/username/videoname.mp4". What am I doing wrong?
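A minimal sketch (variable names reused from the snippet above; the 'avc1' FourCC is only available if this OpenCV build has an H.264-capable backend) of the two checks that usually expose why nothing gets written: confirm the writer actually opened, and only write non-empty BGR frames of exactly the size the writer was constructed with:

// Right after constructing the writer (inside the 'r' branch):
codec = cv::VideoWriter::fourcc('a','v','c','1');
vid = cv::VideoWriter(vidname, codec, 30, cv::Size(frame_width, frame_height), true);
if (!vid.isOpened())
{
    // If this build has no H.264 encoder, 'avc1' silently produces nothing; a fallback such as 'mp4v' can be tried.
    std::cerr << "\tVideoWriter failed to open " << vidname << std::endl;
    recording = false;
}

// In the recording branch, only write frames the writer can accept:
if (recording && !imagecl.empty()
    && imagecl.cols == frame_width && imagecl.rows == frame_height
    && imagecl.type() == CV_8UC3)
{
    vid.write(imagecl);   // frames of the wrong size/type are dropped without any error message
}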

How to disable shadow detection in MoG2

I am using C++ and OpenCV 2.3.1 for background subtraction. I have tried many times to change the parameters of MOG2 in order to disable the shadow detection feature, and I have also tried what other people suggest on the internet; however, shadow detection is still enabled.
Could you please tell me how to disable it?
See the sample code and the generated mask below.
//opencv
#include <opencv2/opencv.hpp>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/video/background_segm.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/video/video.hpp>
//C
#include <stdio.h>
//C++
#include <iostream>
#include <sstream>
using namespace cv;
using namespace std;
// Global variables
Mat frame; //current frame
Mat fgMaskMOG2; //fg mask fg mask generated by MOG method
Ptr<BackgroundSubtractor> pMOG2; //MOG Background subtractor
int keyboard; //input from keyboard
//new variables
int history = 1250;
float varThreshold = 16;
bool bShadowDetection = true;
/*
//added to remove the shadow
unsigned char nShadowDetection = 0;
float fTau = 0.5;
//static const unsigned char nShadowDetection =( unsigned char)0;
*/
// Function Headers
void help();
void processImages(char* firstFrameFilename);
void help()
{
    cout
        << "This program shows how to use background subtraction methods provided by " << endl
        << " OpenCV. You can process images (-img)." << endl
        << "Usage:" << endl
        << "./bs -img <image filename>" << endl
        << "for example: ./bs -img /data/images/1.png" << endl
        << endl;
}
// morphological operation
void morphOps(Mat &thresh){
    //create structuring element that will be used to "dilate" and "erode" image.
    //the element chosen here is a 3px by 3px rectangle
    Mat erodeElement = getStructuringElement( MORPH_RECT, Size(2,2)); //3x3
    //dilate with larger element so make sure object is nicely visible
    Mat dilateElement = getStructuringElement( MORPH_RECT, Size(1,1)); //8x8
    erode(thresh, thresh, erodeElement);
    erode(thresh, thresh, erodeElement);
    dilate(thresh, thresh, dilateElement);
    dilate(thresh, thresh, dilateElement);
}
// main function
int main(int argc, char* argv[])
{
    //print help information
    help();
    //check for the input parameter correctness
    if(argc != 3) {
        cerr << "Incorrect input list" << endl;
        cerr << "exiting..." << endl;
        return EXIT_FAILURE;
    }
    //create GUI windows
    namedWindow("Frame");
    namedWindow("FG Mask MOG2 ");
    //create Background Subtractor objects
    //pMOG2 = new BackgroundSubtractorMOG2();
    pMOG2 = new BackgroundSubtractorMOG2(history, varThreshold, bShadowDetection);
    //BackgroundSubtractorMOG2(int history, float varThreshold, bool bShadowDetection=1);
    if(strcmp(argv[1], "-img") == 0) {
        //input data coming from a sequence of images
        processImages(argv[2]);
    }
    else {
        //error in reading input parameters
        cerr << "Please, check the input parameters." << endl;
        cerr << "Exiting..." << endl;
        return EXIT_FAILURE;
    }
    //destroy GUI windows
    destroyAllWindows();
    return EXIT_SUCCESS;
}
//function processImages
void processImages(char* fistFrameFilename) {
//read the first file of the sequence
frame = imread(fistFrameFilename);
if(frame.empty()){
//error in opening the first image
cerr << "Unable to open first image frame: " << fistFrameFilename << endl;
exit(EXIT_FAILURE);
//current image filename
string fn(fistFrameFilename);
//read input data. ESC or 'q' for quitting
while( (char)keyboard != 'q' && (char)keyboard != 27 ){
//update the background model
pMOG2->operator()(frame, fgMaskMOG2,-1);
//get the frame number and write it on the current frame
size_t index = fn.find_last_of("/");
if(index == string::npos) {
index = fn.find_last_of("\\");
}
size_t index2 = fn.find_last_of(".");
string prefix = fn.substr(0,index+1);
string suffix = fn.substr(index2);
string frameNumberString = fn.substr(index+1, index2-index-1);
istringstream iss(frameNumberString);
int frameNumber = 0;
iss >> frameNumber;
rectangle(frame, cv::Point(10, 2), cv::Point(100,20),
cv::Scalar(255,255,255), -1);
putText(frame, frameNumberString.c_str(), cv::Point(15, 15),
FONT_HERSHEY_SIMPLEX, 0.5 , cv::Scalar(0,0,0));
//show the current frame and the fg masks
imshow("Frame", frame);
morphOps(fgMaskMOG2);
imshow("FG Mask MOG2 ", fgMaskMOG2);
//get the input from the keyboard
keyboard = waitKey(1);
//search for the next image in the sequence
ostringstream oss;
oss << (frameNumber + 1);
string nextFrameNumberString = oss.str();
string nextFrameFilename = prefix + nextFrameNumberString + suffix;
//read the next frame
frame = imread(nextFrameFilename);
if(frame.empty()){
//error in opening the next image in the sequence
cerr << "Unable to open image frame: " << nextFrameFilename << endl;
exit(EXIT_FAILURE);
}
//update the path of the current frame
fn.assign(nextFrameFilename);
// save subtracted images
string imageToSave =("output_MOG_" + frameNumberString + ".png");
bool saved = imwrite( "D:\\SO\\temp\\" +imageToSave,fgMaskMOG2);
if(!saved) {
cerr << "Unable to save " << imageToSave << endl;
}
}
}
}
Take a look at the documentation.
In your code you have
bool bShadowDetection = true;
Change it to
bool bShadowDetection = false;
EDIT:
OpenCV 3's BackgroundSubtractorMOG2 class has a setShadowValue(int value) function that sets the gray value used to mark shadows in the mask.
Setting that gray value to zero removes the shadows from the mask.
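A minimal sketch of both routes, assuming the OpenCV 3 API mentioned in the EDIT (createBackgroundSubtractorMOG2, setShadowValue, apply); the history and varThreshold values mirror the question, and a camera source stands in for the question's image sequence:

#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    // Either disable shadow detection outright (third constructor argument), ...
    Ptr<BackgroundSubtractorMOG2> pMOG2 = createBackgroundSubtractorMOG2(1250, 16, false);
    // ... or keep it enabled but mark shadows with gray value 0 so they vanish from the mask:
    // pMOG2->setShadowValue(0);

    VideoCapture cap(0);
    Mat frame, fgMask;
    while (cap.read(frame))
    {
        pMOG2->apply(frame, fgMask);   // replaces the 2.x operator() call used above
        imshow("FG Mask MOG2", fgMask);
        if (waitKey(30) == 'q') break;
    }
    return 0;
}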
It depends on what you really want to see. If you want to separate the shadows from your segmentation, keep
bool bShadowDetection = true;
and use
cv::threshold(Mask, Mask, 254, 255, cv::THRESH_BINARY);
after MOG2->apply(). You'll get exactly the part of the image that is {255}, i.e. the foreground without the gray shadow pixels.
And sorry for reanimating this...
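To put that in context, a minimal fragment (OpenCV 3 naming assumed; MOG2 marks shadow pixels with gray value 127 by default, so thresholding at 254 keeps only the 255 foreground) showing where the threshold call sits inside whatever frame loop you already have:

Ptr<BackgroundSubtractorMOG2> mog2 = createBackgroundSubtractorMOG2(1250, 16, true); // shadows enabled
Mat mask;
mog2->apply(frame, mask);                                // mask: 255 = foreground, 127 = shadow, 0 = background
cv::threshold(mask, mask, 254, 255, cv::THRESH_BINARY);  // drop the gray shadow pixels, keep only 255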

PvAPI OpenCV built-in code: FPS mismatch and how to use multiple cameras?

I have two Manta G125B cameras (B stands for black, meaning monochrome). These are GigE interface cameras, and I am using the PvAPI C++ application programming interface to read camera data on my Windows laptop with the Microsoft Visual Studio Community 2015 IDE.
Recently I came across Steven Puttemans' GitHub account, where he shared the code AVT_Manta_opencv_builtin.cpp at this link:
https://github.com/StevenPuttemans/opencv_tryout_code/blob/master/camera_interfacing/AVT_Manta_opencv_builtin.cpp
I downloaded the OpenCV 3.0.0 source files from the Itseez GitHub page and built all required files using CMake (I selected the "Use default native compilers" and "Visual Studio 14 2015 Win64" options after clicking Configure, since I am using a 64-bit CPU and Visual Studio Community 2015). I enabled the WITH_PVAPI option after the first configuration (actually it was already selected) and noticed that the PVAPI_INCLUDE_PATH and PVAPI_LIBRARY options were automatically recognized correctly as C:/Program Files/Allied Vision Technologies/GigESDK/inc-pc and C:/Program Files/Allied Vision Technologies/GigESDK/lib-pc/x64/PvAPI.lib, respectively. I clicked Configure again and then Generate. (Incidentally, if I don't untick the BUILD_PERF_TESTS and BUILD_TESTS options, which are selected after configuration, Visual Studio shows three errors when I open OpenCV.sln and build ALL_BUILD and INSTALL; with those two ticks removed, the errors are gone.)
After building OpenCV from scratch, I made a Visual Studio project and modified Steven Puttemans' code slightly to see real-time camera acquisition from one of my cameras while printing the frame number and frame rate to the console. Here is my code:
int main()
{
    Mat frame, imgResized;
    double f = 0.4; /* f is a scalar in [0-1] range that scales the raw image. Output image is displayed on the screen. */
    DWORD timeStart, timeEnd; // these variables are used for computing fps and avg_fps.
    double fps = 1.0; // frames per second
    double sum_fps(0.);
    double avg_fps(0.); // average fps
    int frameCount = 0;
    VideoCapture camera(0 + CV_CAP_PVAPI); /* open the default camera; VideoCapture is the class, camera is the object. */
    if (!camera.isOpened())
    {
        cerr << "Cannot open the camera." << endl;
        return EXIT_FAILURE;
    }
    double rows = camera.get(CV_CAP_PROP_FRAME_HEIGHT); /* Height of the frames in the video stream. */
    double cols = camera.get(CV_CAP_PROP_FRAME_WIDTH); /* Width of the frames in the video stream. */
    double exposure = camera.get(CV_CAP_PROP_EXPOSURE);
    cout << "Exposure value of the camera at the beginning is " << exposure << endl;
    double exposureTimeInSecond = 0.02; /* As exposureTimeInSecond decreases, fps should increase. */
    exposure = exposureTimeInSecond * 1000000; /* exposure time in us */
    camera.set(CV_CAP_PROP_EXPOSURE, exposure);
    double frameRate; /* built-in fps */
    cout << "Frame size of the camera is " << cols << "x" << rows << "." << endl;
    cout << "Exposure value of the camera is set to " << exposure << endl;
    char* winname = "Manta Camera";
    namedWindow(winname, WINDOW_AUTOSIZE);
    cout << "Press ESC to terminate default camera." << endl;
    while (true)
    {
        timeStart = GetTickCount();
        camera >> frame;
        frameCount++;
        frameRate = camera.get(CV_CAP_PROP_FPS); /* Built-in frame rate in Hz. */
        /* resize() is in the imgproc module (Geometric Image Transformations). I resize the image by f (a scalar in [0-1] range) for display. */
        resize(frame, imgResized, Size(), f, f, INTER_LINEAR); /* void cv::resize(InputArray src, OutputArray dst, Size dsize, double fx = 0, double fy = 0, int interpolation = INTER_LINEAR) */
        imshow(winname, imgResized);
        moveWindow(winname, 980, 50);
        int key = waitKey(10);
        if (key == VK_ESCAPE)
        {
            destroyWindow(winname);
            break;
        }
        /* Calculating FPS in my own way. */
        timeEnd = GetTickCount();
        fps = 1000.0 / (double)(timeEnd - timeStart); /* 1 s = 1000 ms */
        sum_fps += fps;
        avg_fps = sum_fps / frameCount;
        cout << "FPS = " << frameRate << ", frame #" << frameCount << ", my_fps = " << fps << ", avg_fps = " << avg_fps << endl;
    }
    cout << "Compiled with OpenCV version " << CV_VERSION << endl; /* thanks to Shervin Emami */
    system("pause");
    return EXIT_SUCCESS;
}
and this is the screenshot of the output during real-time acquisition:
Can someone explain why the built-in fps is 30.9? I am pretty sure the real fps on my screen is around 15-16 Hz, because I computed the time between two different frame numbers shown in my console and then computed the fps; the result is the same as the avg_fps value in the console. I also ran the SampleViewer.exe file in C:\Program Files\Allied Vision Technologies\GigESDK\bin-pc\x64, and when I click the "Show camera's attributes" icon, I see StatFrameRate = 30.2 approximately.
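As a side note on the measurement itself: GetTickCount typically advances only every 10-16 ms on Windows, so per-frame figures computed from it are coarse. A minimal sketch (std::chrono substituted for GetTickCount, the rest of the loop body as above) of timing one iteration with sub-millisecond resolution:

#include <chrono>

auto t0 = std::chrono::steady_clock::now();
camera >> frame;                        // plus the resize/imshow/waitKey calls from the loop above
auto t1 = std::chrono::steady_clock::now();
double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
double my_fps = 1000.0 / ms;            // rate of the whole display loop, not the camera's internal grab rate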
My second question is, how can I open the second camera that is connected to the network switch? And how can I trigger both cameras at the same moment? I examined the cap_pvapi.cpp file located in the source files that I downloaded from the Itseez GitHub page, and as far as I understand, the camera's FrameStartTriggerMode is "Freerun." The other options are SyncIn1, SyncIn2, FixedRate and Software. My main camera is the left camera, and I call the second camera the right camera. What should the corresponding FrameStartTriggerMode be for my left and right cameras?
I am able to open both of the cameras; here is my stereo camera real-time acquisition code using OpenCV's built-in PvAPI support:
#include <iostream>
#include "Windows.h" /* data type DWORD and function GetTickCount() are defined in Windows.h */
#include "opencv2/opencv.hpp"
#include <cassert>
#include <ppl.h> // Parallel Patterns Library, concurrency namespace. Young-Jin uses this, so do I. It's very important; if not used, when images are processed, fps drops dramatically.
#if !defined VK_ESCAPE /* VK stands for virtual key */
#define VK_ESCAPE 0x1B /* ASCII code for the ESC character is 27 */
#endif
using namespace std;
using namespace cv;
const unsigned long numberOfCameras = 2;
bool displayImages = true;
int main()
{
    Mat frame[numberOfCameras], imgResized[numberOfCameras];
    double f = 0.4; /* f is a scalar in [0-1] range that scales the raw image. Output image is displayed on the screen. */
    DWORD timeStart, timeEnd; // these variables are used for computing fps and avg_fps.
    double fps = 1.0; // frames per second
    double sum_fps(0.);
    double avg_fps(0.); // average fps
    int frameCount = 0;
    VideoCapture camera[numberOfCameras]; // (0 + CV_CAP_PVAPI); /* open the default camera; VideoCapture is the class, camera is the object. */
    for (int i = 0; i < numberOfCameras; i++)
    {
        camera[i].open(i + CV_CAP_PVAPI);
        if (!camera[i].isOpened())
        {
            cerr << "Cannot open camera " << i << "." << endl;
            return EXIT_FAILURE;
        }
    }
    double rows = camera[0].get(CV_CAP_PROP_FRAME_HEIGHT); /* Height of the frames in the video stream. */
    double cols = camera[0].get(CV_CAP_PROP_FRAME_WIDTH); /* Width of the frames in the video stream. */
    if (numberOfCameras == 2)
        assert(rows == camera[1].get(CV_CAP_PROP_FRAME_HEIGHT) && cols == camera[0].get(CV_CAP_PROP_FRAME_WIDTH));
    for (int i = 0; i < numberOfCameras; i++) // initializing monochrome images.
    {
        frame[i] = Mat(Size(cols, rows), CV_8UC1); /* Mat(Size size, int type) */
        resize(frame[i], imgResized[i], Size(0, 0), f, f, INTER_LINEAR);
    }
    /* combo is a combined image consisting of left and right resized images. Images are resized in order to be displayed at a smaller region on the screen. */
    Mat combo(Size(imgResized[0].size().width * 2, imgResized[0].size().height), imgResized[0].type()); /* This is the merged image (i.e., side by side) for real-time display. */
    Rect roi[numberOfCameras]; /* roi stands for region of interest. */
    for (int i = 0; i < numberOfCameras; i++)
        roi[i] = Rect(0, 0, imgResized[0].cols, imgResized[0].rows);
    /* Setting locations of images coming from different cameras in the combo image. */
    if (numberOfCameras > 1) /* I assume max camera number is 2. */
    {
        roi[1].x = imgResized[0].cols;
        roi[1].y = 0;
    }
    double exposure, exposureTimeInSecond = 0.06; /* As exposureTimeInSecond decreases, fps should increase. */
    for (int i = 0; i < numberOfCameras; i++)
    {
        exposure = camera[i].get(CV_CAP_PROP_EXPOSURE);
        cout << "Exposure value of the camera " << i << " at the beginning is " << exposure << endl;
        exposure = exposureTimeInSecond * 1000000; /* exposure time in us */
        camera[i].set(CV_CAP_PROP_EXPOSURE, exposure);
    }
    double frameRate[numberOfCameras]; /* built-in fps */
    cout << "Frame size of the camera is " << cols << "x" << rows << "." << endl;
    cout << "Exposure value of both cameras is set to " << exposure << endl;
    char* winname = "real-time image acquisition";
    if (displayImages)
        namedWindow(winname, WINDOW_AUTOSIZE);
    cout << "Press ESC to terminate real-time acquisition." << endl;
    while (true)
    {
        timeStart = GetTickCount();
        Concurrency::parallel_for((unsigned long)0, numberOfCameras, [&](unsigned long i)
        {
            camera[i] >> frame[i];
            frameRate[i] = camera[i].get(CV_CAP_PROP_FPS); /* Built-in frame rate in Hz. */
            resize(frame[i], imgResized[i], Size(), f, f, INTER_LINEAR); /* void cv::resize(InputArray src, OutputArray dst, Size dsize, double fx = 0, double fy = 0, int interpolation = INTER_LINEAR) */
            imgResized[i].copyTo(combo(roi[i])); /* This is the C++ API. */
        });
        frameCount++;
        if (displayImages)
        {
            imshow(winname, combo);
            moveWindow(winname, 780, 50);
        }
        int key = waitKey(10);
        if (key == VK_ESCAPE)
        {
            destroyWindow(winname);
            break;
        }
        /* Calculating FPS in my own way. */
        timeEnd = GetTickCount();
        fps = 1000.0 / (double)(timeEnd - timeStart); /* 1 s = 1000 ms */
        sum_fps += fps;
        avg_fps = sum_fps / frameCount;
        for (int i = 0; i < numberOfCameras; i++)
            cout << "FPScam" << i << "=" << frameRate[i] << " ";
        cout << "frame#" << frameCount << " my_fps=" << fps << " avg_fps=" << avg_fps << endl;
    }
    cout << "Compiled with OpenCV version " << CV_VERSION << endl; /* thanks to Shervin Emami */
    system("pause");
    return EXIT_SUCCESS;
}
// double triggerMode = camera.get(CV_CAP_PROP_PVAPI_FRAMESTARTTRIGGERMODE);
////camera.set(CV_CAP_PROP_PVAPI_FRAMESTARTTRIGGERMODE, 4.0);
//
//if (triggerMode == 0.)
//cout << "Trigger mode is Freerun" << endl;
//else if (triggerMode == 1.0)
//cout << "Trigger mode is SyncIn1" << endl;
//else if (triggerMode == 2.0)
//cout << "Trigger mode is SyncIn2" << endl;
//else if (triggerMode == 3.0)
//cout << "Trigger mode is FixedRate" << endl;
//else if (triggerMode == 4.0)
//cout << "Trigger mode is Software" << endl;
//else
//cout << "There is no trigger mode!!!";
But fps is still not the value that I desire.. :(
I also noticed that the cap_pvapi.cpp file (contributed by Justin G. Eskesen), which is implicitly used by my code, lives in this subdirectory:
C:\opencv\sources\modules\videoio\src
and when I examined it, I saw that both cameras are set to the "Freerun" FrameStartTriggerMode. The original PvAPI call is:
PvAttrEnumSet(Camera.Handle, "FrameStartTriggerMode", "Freerun");
In my opinion, when using stereo cameras, the cameras should be set to the "Software" FrameStartTriggerMode. I already have code running in this configuration for my stereo camera setup, but after some time my console and the running application freeze. I checked my code for memory leaks and I am quite sure there are none.
The average fps is about 16, but sometimes the fps drops as low as 2 Hz, which is not good. The good side of this code is that it doesn't stop, but it is still not reliable. I am still wondering whether it's possible to transfer all images at more than 25 fps. Actually, in my other code that freezes after some time (the one triggered by the "Software" option), if I don't display the images on screen, avg_fps gets close to 27 Hz. But it stops and the screen freezes due to an unknown failure.
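For experimenting with that idea, a minimal sketch (reusing the camera array from the stereo code above and the property constant and mode numbering from the commented-out block; whether any of these modes behaves reliably is exactly what is in question here) of putting both cameras into the same FrameStartTriggerMode before the grab loop:

// 0 = Freerun, 1 = SyncIn1, 2 = SyncIn2, 3 = FixedRate, 4 = Software (per cap_pvapi.cpp)
for (int i = 0; i < numberOfCameras; i++)
{
    camera[i].set(CV_CAP_PROP_PVAPI_FRAMESTARTTRIGGERMODE, 3.0); // try FixedRate on both cameras
    cout << "Camera " << i << " trigger mode: "
         << camera[i].get(CV_CAP_PROP_PVAPI_FRAMESTARTTRIGGERMODE) << endl;
}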

opencv video reading slow framerate

I am trying to read a video with OpenCV in C++, but when the video is displayed, the framerate is very slow, like 10% of the original framerate.
The whole code is here:
// g++ `pkg-config --cflags --libs opencv` play-video.cpp -o play-video
// ./play-video [video filename]
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
using namespace std;
using namespace cv;
int main(int argc, char** argv)
{
    // video filename should be given as an argument
    if (argc == 1) {
        cerr << "Please give the video filename as an argument" << endl;
        exit(1);
    }
    const string videofilename = argv[1];
    // we open the video file
    VideoCapture capture(videofilename);
    if (!capture.isOpened()) {
        cerr << "Error when reading video file" << endl;
        exit(1);
    }
    // we compute the frame duration
    int FPS = capture.get(CV_CAP_PROP_FPS);
    cout << "FPS: " << FPS << endl;
    int frameDuration = 1000 / FPS; // frame duration in milliseconds
    cout << "frame duration: " << frameDuration << " ms" << endl;
    // we read and display the video file, image after image
    Mat frame;
    namedWindow(videofilename, 1);
    while (true)
    {
        // we grab a new image
        capture >> frame;
        if (frame.empty())
            break;
        // we display it
        imshow(videofilename, frame);
        // press 'q' to quit
        char key = waitKey(frameDuration); // waits to display frame
        if (key == 'q')
            break;
    }
    // releases and window destroy are automatic in C++ interface
}
I tried with a video from a GoPro Hero 3+ and with a video from my MacBook's webcam; the problem is the same with both. Both videos play without problems in VLC.
Thanks in advance.
Try reducing the waitKey wait time. You are effectively waiting for the full frame duration (i.e. 33 ms) plus all the time it takes to grab the frame and display it. This means that if capturing and displaying the frame takes more than 0 ms (which it does), you are guaranteed to be waiting too long. Or, if you really want to be accurate, you could time how long that part takes and wait only for the remainder, e.g. something along the lines of:
while (true)
{
    auto start_time = std::chrono::high_resolution_clock::now();
    capture >> frame;
    if (frame.empty())
        break;
    imshow(videofilename, frame);
    auto end_time = std::chrono::high_resolution_clock::now();
    int elapsed_time = std::chrono::duration_cast<std::chrono::milliseconds>(end_time - start_time).count();
    // make sure we call waitKey with some value > 0
    int wait_time = std::max(1, frameDuration - elapsed_time);
    char key = waitKey(wait_time); // waits for the remainder of the frame duration
    if (key == 'q')
        break;
}
The whole int wait_time = std::max(1, frameDuration - elapsed_time); line is just to ensure that we wait for at least 1 ms, as OpenCV needs a call to waitKey in there to fetch and handle window events, and calling waitKey with a value <= 0 tells it to wait indefinitely for user input, which we don't want either (in this case).