I have an application which captures frames from the camera and then displays the picture with imshow(), like this:
VideoCapture cap(0);
if (!cap.isOpened()) // if not success, exit program
{
    cout << "Cannot open the web cam" << endl;
    system("pause");
    return -1;
}
Mat imgOriginal;
while (true)
{
    bool bSuccess = cap.read(imgOriginal); // read a new frame from video
    if (!bSuccess) // if not success, break loop
    {
        cout << "Cannot read a frame from video stream" << endl;
        break;
    }
    cv::imshow("Image", imgOriginal);
    if (waitKey(10) == 27)
    {
        break;
    }
}
And the program works well. But when I remove the waitKey call and replace it with some other exit condition (for example, a variable that controls the while loop, so that instead of waitKey(10) == 27 I test checkVariable == false), everything goes wrong: I get a grey image instead of the normal picture. Can you explain why?
The waitKey function is not only there to get a key from the user; it also does the equivalent of spinning the event loop in other GUI frameworks. That means it processes the pending events of the window displaying the image, such as drawing a new image (the window most probably starts out with a default grey canvas). So you HAVE to call this function whenever you use imshow, at least. It also pauses for the given number of milliseconds, so you can use it to keep an idle loop from occupying a CPU like crazy.
You can always ignore the result of waitKey if you do not need it, but it has to run.
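For example (a minimal sketch, not from the original question, with checkVariable standing in for whatever exit condition you use): keep calling waitKey for its event processing and simply ignore its return value.

#include "opencv2/highgui/highgui.hpp"
using namespace cv;

int main()
{
    VideoCapture cap(0);
    if (!cap.isOpened()) return -1;
    Mat imgOriginal;
    bool checkVariable = true; // hypothetical flag controlling the loop
    while (checkVariable)
    {
        if (!cap.read(imgOriginal)) break;
        imshow("Image", imgOriginal);
        waitKey(10); // still required: pumps window events and repaints
        // ... update checkVariable from wherever your condition comes from ...
    }
    return 0;
}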
I hope this clears your doubt.
I'm relatively new to C++, especially multi-threading, and have been playing with different options. I've gotten some things to work, but ultimately I'm looking to maximize performance, so I thought it'd be better to ask everyone else what would be most effective and then go down that road.
I'm working on an application that will take a video stream and write both an unmodified video file and a modified video file (there's some image processing that happens) to the local disk. There are also going to be some other threads to collect other data such as GPS, but nothing special.
The problem I'm running into is that the frame rate is limited mainly by the VideoWriter function in OpenCV. I know this can be greatly alleviated if I use a thread to write each frame to the VideoWriter object, so that the two VideoWriters can run simultaneously with each other and with the image processing functions.
I've successfully created this function:
void frameWriter(Mat writeFrame, VideoWriter *orgVideo)
{
    orgVideo->write(writeFrame);
}
And it is called from within an infinite loop like so:
thread writeOrgThread(frameWriter, modFrame, &orgVideo);
writeOrgThread.join();
thread writeModThread(frameWriter, processMatThread(modFrame, scrnMsg1, scrnMsg2), &modVideo);
writeModThread.join();
Now, having the .join() immediately underneath defeats the performance benefit, but without it I immediately get the error "terminate called without an active exception". I thought moving the join() calls to the top of the loop would do what I needed, so that on the next iteration it would make sure the previous frame was written before writing the next, but then it behaves as if the join is not there (perhaps by the time the main task has made the full loop and gotten to the join, the thread has already terminated?). Also, using detach creates the issue that the threads are unsynchronized, and then I run into these errors:
[mpeg4 # 0000000000581b40] Invalid pts (156) <= last (156)
[mpeg4 # 00000000038d5380] Invalid pts (158) <= last (158)
[mpeg4 # 0000000000581b40] Invalid pts (158) <= last (158)
[mpeg4 # 00000000038d5380] Invalid pts (160) <= last (160)
[mpeg4 # 0000000000581b40] [mpeg4 # 00000000038d5380] Invalid pts (160) <= last
(160)
Invalid pts (162) <= last (162)
I'm assuming this is because multiple threads are trying to access the same resource? Finally, I tried using a mutex with detach to avoid the above, and I got a curious behavior where my sleep thread wasn't behaving properly and the frame rate was inconsistent.
void frameWriter(Mat writeFrame, VideoWriter *orgVideo, mutex *mutexVid)
{
    mutexVid->lock();
    orgVideo->write(writeFrame);
    mutexVid->unlock();
}
Obviously I'm struggling with thread synchronization and management of shared resources. I realize this is probably a rookie struggle, so if somebody tossed a tutorial link at me and told me to go read a book I'd be OK with that. I guess what I'm looking for right now is some guidance on which specific method will get me the best performance in this situation, and then I'll make that work.
Additionally, does anyone have a link to a very good tutorial that covers multithreading in C++ from a broader point of view (not limited to the Boost or C++11 implementations, and covering mutexes, etc.)? It could greatly help me out with this.
Here's the 'complete' code; I stripped out some functions to make it easier to read, so don't mind the extra variables here and there:
//Standard libraries
#include <iostream>
#include <ctime>
#include <sys/time.h>
#include <fstream>
#include <iomanip>
#include <thread>
#include <chrono>
#include <mutex>
//OpenCV libraries
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
//Other libraries
//Namespaces
using namespace cv;
using namespace std;
// Code for capture thread
void captureMatThread(Mat *orgFrame, VideoCapture *stream1){
//loop infinitely
for(;;){
//capture from webcam to Mat orgFrame
(*stream1) >> (*orgFrame);
}
}
Mat processMatThread(Mat inFrame, string scrnMsg1, string scrnMsg2){
//Fancify image
putText(inFrame, scrnMsg1, Point(545,450), FONT_HERSHEY_COMPLEX,
        0.5, Scalar(255,255,0,255), 1, LINE_8, false);
putText(inFrame, scrnMsg2, Point(395,470), FONT_HERSHEY_COMPLEX,
        0.5, Scalar(255,255,0,255), 1, LINE_8, false);
return inFrame;
}
void frameWriter(Mat writeFrame, VideoWriter *orgVideo, mutex *mutexVid)
{
//(mutexVid->lock());
(orgVideo->write(writeFrame));
//(mutexVid->unlock());
}
long usecDiff(long usec1, long usec2){
if (usec1>usec2){
return usec1 - usec2;
}
else {
return (1000000 + usec1) - usec2;
}
}
int main()
{
//Start video capture
cout << "Opening camera stream..." << endl;
VideoCapture stream1(0);
if (!stream1.isOpened()) {
cout << "Camera failed to open!" << endl;
return 1;
}
//Message incoming image size
cout << "Camera stream opened. Incoming size: ";
cout << stream1.get(CV_CAP_PROP_FRAME_WIDTH) << "x";
cout << stream1.get(CV_CAP_PROP_FRAME_HEIGHT) << endl;
//File locations
const long fileSizeLimitBytes(10485760);
const int fileNumLimit(5);
const string outPath("C:\\users\\nag1\\Desktop\\");
string outFilename("out.avi");
string inFilename("in.avi");
//Declare variables for screen messages
timeval t1;
timeval t2;
timeval t3;
time_t now(time(0));
gettimeofday(&t1,0);
gettimeofday(&t2,0);
gettimeofday(&t3,0);
float FPS(0.0f);
const int targetFPS(60);
const long targetUsec(1000000/targetFPS);
long usec(0);
long usecProcTime(0);
long sleepUsec(0);
int i(0);
stringstream scrnMsgStream;
string scrnMsg1;
string scrnMsg2;
string scrnMsg3;
//Define images
Mat orgFrame;
Mat modFrame;
//Start Video writers
cout << "Creating initial video files..." << endl;
//Identify outgoing size, comments use incoming size
const int frame_width = 640; //stream1.get(CV_CAP_PROP_FRAME_WIDTH);
const int frame_height = 480; //stream1.get(CV_CAP_PROP_FRAME_HEIGHT);
//Message outgoing image size
cout << "Outgoing size: ";
cout << frame_width << "x" << frame_height << endl;
VideoWriter orgVideo(outPath + inFilename,CV_FOURCC('D','I','V','X'),targetFPS,
Size(frame_width,frame_height),true);
mutex orgVideoMutex;
VideoWriter modVideo(outPath + outFilename,CV_FOURCC('D','I','V','X'),targetFPS,
Size(frame_width,frame_height),true);
mutex modVideoMutex;
//unconditional loop
cout << "Starting recording..." << endl;
//Get first image to prevent exception
stream1.read(orgFrame);
resize(orgFrame,modFrame,Size(frame_width,frame_height));
// start thread to begin capture and populate Mat frame
thread captureThread(captureMatThread, &orgFrame, &stream1);
while (true) {
//Time stuff
i++;
if (i%2==0){
gettimeofday(&t1,0);
usec = usecDiff(t1.tv_usec,t2.tv_usec);
}
else{
gettimeofday(&t2,0);
usec = usecDiff(t2.tv_usec,t1.tv_usec);
}
now = time(0);
FPS = 1000000.0f/usec;
scrnMsgStream.str(std::string());
scrnMsgStream.precision(2);
scrnMsgStream << std::setprecision(2) << std::fixed << FPS;
scrnMsg1 = scrnMsgStream.str() + " FPS";
scrnMsg2 = asctime(localtime(&now));
//Get image
//Handled by captureMatThread now!!!
//stream1.read(orgFrame);
//resize image
resize(orgFrame,modFrame,Size(frame_width,frame_height));
//write original image to video
//writeOrgThread.join();
thread writeOrgThread(frameWriter, modFrame, &orgVideo, &orgVideoMutex);
//writeOrgThread.join();
writeOrgThread.detach();
//orgVideo.write(modFrame);
//write modified image to video
//writeModThread.join();
thread writeModThread(frameWriter, processMatThread(modFrame, scrnMsg1, scrnMsg2), &modVideo, &modVideoMutex);
//writeOrgThread.join();
//writeModThread.join();
writeModThread.detach();
//modVideo.write(processMatThread(modFrame, scrnMsg1, scrnMsg2));
//sleep
gettimeofday(&t3,0);
if (i%2==0){
sleepUsec = targetUsec - usecDiff(t3.tv_usec,t1.tv_usec);
}
else{
sleepUsec = targetUsec - usecDiff(t3.tv_usec,t2.tv_usec);
}
this_thread::sleep_for(chrono::microseconds(sleepUsec));
}
orgVideo.release();
modVideo.release();
return 0;
}
This is actually running on a Raspberry Pi (adapted to use the Raspberry Pi camera), so my resources are limited; that's why I'm trying to minimize how many copies of the image there are and to write the two video files in parallel. You can see I've also experimented with placing both join()s after writeModThread, so at least the writing of the two files happens in parallel. Perhaps that's the best we can do, but I plan to add a thread for all the image processing that I'd like to run in parallel (right now you can see it called as a simple function that adds plain text).
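For what it's worth, a pattern that usually fits this situation (an editorial sketch, not from the original thread) is one long-lived writer thread per output file, fed through a mutex-protected queue with a condition variable, so the main loop never constructs, joins, or detaches a thread per frame:

#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include "opencv2/highgui/highgui.hpp"

// One long-lived writer per file: frames are cloned into a queue and a
// single worker drains it, so each VideoWriter is touched by one thread only.
struct FrameQueue {
    std::queue<cv::Mat> frames;
    std::mutex mtx;
    std::condition_variable cond;
    bool done;
    FrameQueue() : done(false) {}
};

void writerLoop(FrameQueue *q, cv::VideoWriter *video)
{
    for (;;)
    {
        std::unique_lock<std::mutex> lock(q->mtx);
        while (q->frames.empty() && !q->done)
            q->cond.wait(lock);
        if (q->frames.empty() && q->done)
            return;
        cv::Mat frame = q->frames.front();
        q->frames.pop();
        lock.unlock(); // do the slow write outside the lock
        video->write(frame);
    }
}

void pushFrame(FrameQueue *q, const cv::Mat &frame)
{
    {
        std::lock_guard<std::mutex> lock(q->mtx);
        q->frames.push(frame.clone()); // clone: the producer keeps reusing its Mat
    }
    q->cond.notify_one();
}

The main loop would then call pushFrame(&orgQueue, modFrame) and pushFrame(&modQueue, processMatThread(modFrame, scrnMsg1, scrnMsg2)) each iteration (orgQueue and modQueue being hypothetical queue instances), and at shutdown set done = true under the lock, notify, and join the two writer threads exactly once.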
I'm developing an application in C++ with a GUI in OpenGL. I have a GUI with a background, some squared textures over the background, and some text written next to the textures. When I run the program, the textures flash in blocks of n, and a string is printed on the GUI to identify which block is flashing (I don't know how to explain it better; it's the P300 Speller paradigm). The problem is with the flashes: the GUI doesn't update unless I move the mouse over the window! Can someone help me?
I'll try to post and explain some code. flashSquare is the function which makes the textures flash. I use a boolean vector (showSquare) to indicate whether the nth texture has to be normal or flashed (I load different textures), and a stack (flashHelperS) filled with integers which represent the stimulus sequence and the order of the flashing textures.
void flashSquare(int square){
if (firstTime){ // send stimulus command
sendToServer(START_STIMULUS);
fillStackS(); // Fill stack of stimuli
firstTime = false;
pausa = false;
}
if (pausa == false){
cout << " Stack size : " << flashHelperS.size() << endl;
if (!flashHelperS.empty()){ // If the stack is not empty
int tmp = flashHelperS.top(); // select the next texture to flash
flashHelperS.pop(); // and remove it from stack
showSquare[tmp] = !showSquare[tmp]; // change bool of current
// flashing texture
cout << showSquare[tmp] << endl;
if (showSquare[tmp]){
counterSquares[tmp]++;
sendToServer(tmp + 1);
}
square = tmp;
glutPostRedisplay(); // Redisplay to show current texture flashed
if (flashHelperS.size() >= 0) { // always true: reschedule, so the empty-stack reset branch runs on the next call
glutTimerFunc(interface->getFlashFrequency(), flashSquare, 0);
}
}
else{ // If the stack is empty reset
initCounterSquare();
sendToServer(END_STIMULUS);
pausa = true;
if (interface->getMaxSessions() == currentSession){
sendToServer(END_CALIBRATION);
drawOnScreen(); // Draw a string on the screen
firstTime = true;
}
else currentSession++;
}
}
}
For example, if the stack has 8 items and maxSessions is 3, glutTimerFunc calls flashSquare 8 times, making all 8 textures flash; then I send the END_STIMULUS code to the server and repeat this operation another 2 times (because maxSessions is 3). When the server receives END_STIMULUS and is ready to receive more data, it sends a code to my client, RESTART_S. When I receive RESTART_S (the stack is empty and currentSession < 3), I call flashSquare again to start a new train of stimuli and flashes.
if (idFromClient[0] == RESTART_S) {
sendToServer(FLASH_S);
firstTime = true;
flashSquare(0);
}
When I call flashSquare manually the first time, it works. When I receive RESTART_S from the server, flashSquare is also correctly called: it sends START_STIMULUS to the server, sets firstTime = false and pausa = false, and fills the stack. Execution continues to the print of the stack size, and I can verify that all variables are correctly assigned (the debugger shows that the function is called, but it does nothing). The problem is that if I don't move the mouse over the window with the textures, even if the window is focused, execution doesn't go on.
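An editorial note, since this thread carries no answer here: if RESTART_S is handled on a network thread, then glutTimerFunc and glutPostRedisplay are being called from outside the GLUT main loop, which typically only wakes up when an input event (such as mouse movement) arrives; that would match the symptom. A common workaround is to have the network thread only set a flag and let the GLUT thread poll it from an idle callback. A minimal sketch (restartRequested is a hypothetical name; firstTime and flashSquare come from the code above):

#include <GL/glut.h>

// Set by the network thread when RESTART_S arrives, polled on the GLUT
// thread. A std::atomic<bool> would be safer than volatile in real code.
volatile bool restartRequested = false;

void onIdle(void)
{
    if (restartRequested)
    {
        restartRequested = false;
        firstTime = true; // from the question's code
        flashSquare(0);   // now runs on the GLUT thread
    }
}

// During initialization, after glutDisplayFunc and friends:
// glutIdleFunc(onIdle);

The network handler would then do sendToServer(FLASH_S); restartRequested = true; instead of calling flashSquare(0) directly.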
I have an application which connects to an RTSP camera and processes some of the frames of video. Depending on the camera resolution and frame rate, I don't need to process all the frames, and sometimes my processing takes a while. I've designed things so that when a frame is read, it's passed off to a work queue for another thread to deal with. However, depending on system load/resolution/frame rate/network/file system/etc., I occasionally find cases where the program doesn't keep up with the camera.
I've found with ffmpeg (I'm using the latest git drop from mid-October and running on Windows) that being a couple of seconds behind is fine: you keep getting the next frame, next frame, and so on. However, once you get, say, 15-20 seconds behind, the frames you get from ffmpeg occasionally have corruption. That is, what is returned as the next frame often has graphical glitches (streaking at the bottom of the frame, etc.).
What I'd like to do is put in a check, somehow, to detect whether I'm more than X frames behind the live stream and, if so, flush the cached frames out and start fetching the latest/current frames.
My current frame-buffer reading thread looks like this (C++):
while(runThread)
{
av_init_packet(&(newPacket));
int errorCheck = av_read_frame(context, &(newPacket));
if (errorCheck < 0)
{
// error
}
else
{
int frameFinished = 0;
int decodeCode = avcodec_decode_video2(ccontext, actualFrame, &frameFinished, &newPacket);
if (decodeCode <0)
{
// error
}
else
if (decodeCode == 0)
{
// no frame could be decompressed / decoded / etc
}
else
if ((decodeCode > 0) && (frameFinished))
{
// do my processing / copy the frame off for later processing / etc
}
else
{
// decoded some data, but frame was not finished...
// Save data and reconstitute the pieces somehow??
// Given that we free the packet, I doubt there is any way to use this partial information
}
av_free_packet(&(newPacket));
}
}
I've googled and looked through the ffmpeg documentation for some function I can call to flush things and let me catch up, but I can't seem to find anything. The same sort of solution would be needed if you wanted to monitor a video source only occasionally (e.g., if you only wanted to grab one frame per second or per minute). The only thing I could come up with is disconnecting from the camera and reconnecting. However, I'd still need a way to detect whether the frames I am receiving are old.
Ideally, I'd be able to do something like this:
while(runThread)
{
av_init_packet(&(newPacket));
// Not a real function, but I'd like to do something like this
if (av_check_frame_buffer_size(context) > 30_frames)
{
// flush frame buffer.
av_frame_buffer_flush(context);
}
int errorCheck = av_read_frame(context, &(newPacket));
...
}
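An editorial note, since the thread carries no answer here: there is no av_check_frame_buffer_size, but ffmpeg does provide avcodec_flush_buffers() to drop the decoder's internal state, and lag can be estimated by comparing each packet's pts (scaled by the stream's time_base) against wall-clock time. A rough sketch under those assumptions (videoStreamIdx, the index of the video stream, plus context and ccontext are assumed to be set up as in the question's code); note that packets already queued by the demuxer still have to be read and discarded until the timestamps catch up:

extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/time.h>
}

// Returns true (and flushes the decoder) when the packet's timestamp lags
// wall-clock time by more than maxLagUsec microseconds.
bool frameIsStale(AVFormatContext *context, AVCodecContext *ccontext,
                  const AVPacket *pkt, int videoStreamIdx, int64_t maxLagUsec)
{
    static int64_t startWallUsec = 0; // wall clock at the first packet
    static int64_t startPtsUsec = 0;  // stream time at the first packet
    if (pkt->pts == AV_NOPTS_VALUE)
        return false;
    AVRational tb = context->streams[videoStreamIdx]->time_base;
    int64_t ptsUsec = (int64_t)(pkt->pts * av_q2d(tb) * 1000000.0);
    int64_t nowUsec = av_gettime();
    if (startWallUsec == 0)
    {
        startWallUsec = nowUsec;
        startPtsUsec = ptsUsec;
    }
    int64_t lagUsec = (nowUsec - startWallUsec) - (ptsUsec - startPtsUsec);
    if (lagUsec > maxLagUsec)
    {
        avcodec_flush_buffers(ccontext); // drop decoder state; keep reading
        return true;                     // and discarding until caught up
    }
    return false;
}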
When I run this code and press the Esc key on the keyboard, I cannot exit the while loop.
I use the same code in another program (in the main() function) and can exit the while loop there.
I do not use the code below in main(); it is called by a function that is used in main().
I want to stop processing captured frames from the video and do other work; I do not want to close the program completely. I also want to process only 2 frames per second; how can I do that?
(This is when capturing frames from a video or camera, in OpenCV 2.4.6, Windows 7, VS2012.)
Please help me.
void Demo::processVideo()
{
VideoCapture cap(this->videofile.c_str());
cout<<"Loading classifier...."<<endl;
MultiTrain mt;
mt.loadModel("f:\\dataset/eye_svm_model.xml"); // note: the backslash must be escaped
int fps=(int) cap.get(CV_CAP_PROP_FPS);
if(!cap.isOpened())
{
cout<<"problems with avi file"<<endl;
}
else
{
cout<<"Processing video...."<<endl;
Mat frame;
int k = 0,key1=0;
while(1)
{
//char c = WaitKey(15);
// if( c == 27 ) break;
cap>>frame;
if (frame.empty()) break; // reached end of video: avoid an infinite loop
int res = (int)mt.getPrediction(frame);
cout<<"Prediction for frame #"<<k<<" => "<<res<<endl;
this->predictions.push_back(res);
k++;
}
cap.release();
}
}
It should be a type conversion problem, since the function prototype is int cvWaitKey(int).
Try the following inside the loop, and press ESC to exit the program.
char c = cvWaitKey(15);
if( c == 27 ) break;
If you want precise time control or frame-rate control, you can count clock ticks. See here for details with an example.
Good luck and happy coding.
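As for the 2-frames-per-second part of the question (an editorial sketch, not from the original answer): the file's frame rate is already read into fps, so one simple approach is to run the prediction only on every (fps/2)-th frame and leave the loop cleanly on Esc or end of file:

// Replacement for the while(1) body in processVideo(): process roughly
// 2 frames per second and leave the loop (not the program) on Esc.
int step = (fps > 2) ? fps / 2 : 1; // frames between predictions
while (true)
{
    cap >> frame;
    if (frame.empty()) break; // end of video
    if (k % step == 0)
    {
        int res = (int)mt.getPrediction(frame);
        cout << "Prediction for frame #" << k << " => " << res << endl;
        this->predictions.push_back(res);
    }
    k++;
    char c = cvWaitKey(1);
    if (c == 27) break; // Esc stops processing but not the program
}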
I'm attempting to detect if my capture camera gets unplugged. My assumption was that a call to cvQueryFrame would return NULL, however it continues to return the last valid frame.
Does anyone know of how to detect camera plug/unplug events with OpenCV? This seems so rudimentary...what am I missing?
There is no API function to do that, unfortunately.
However, my suggestion is that you create another thread that simply calls cvCaptureFromCAM() and checks its result (inside a loop). If the camera gets disconnected, it should return NULL.
I'll paste some code just to illustrate my idea:
// This code should be executed on another thread!
while (1)
{
CvCapture* capture = NULL;
capture = cvCaptureFromCAM(-1); // or whatever parameter you are already using
if (!capture)
{
std::cout << "!!! Camera got disconnected !!!!" << std::endl;
break;
}
// I'm not sure if releasing it will have any effect on the other thread
cvReleaseCapture(&capture);
}
Thanks #karlphillip for pointing me in the right direction. Running calls to cvCaptureFromCAM in a separate thread works. When the camera gets unplugged, the return value is NULL.
However, it appears that this function is not thread-safe. A simple mutex to prevent simultaneous calls to cvCaptureFromCAM seems to do the trick, though. I used boost::thread for this example, but one could tweak this easily.
At global scope:
// Create a mutex used to prevent simultaneous calls to cvCaptureFromCAM
boost::shared_mutex mtxCapture;
// Flag to notify when we're done.
// NOTE: not bothering w/mutex for this example for simplicity's sake
bool done = false;
Entry point goes something like this:
int main()
{
// Create the work and the capture monitoring threads
boost::thread workThread(&Work);
boost::thread camMonitorThread(&CamMonitor);
while (! done)
{
// Do whatever
}
// Wait for threads to close themselves
workThread.join();
camMonitorThread.join();
return 0;
}
The work thread is simple. The only caveat is that you need to lock the mutex so you don't get simultaneous calls to cvCaptureFromCAM.
// Work Thread
void Work()
{
CvCapture * capture = NULL;
mtxCapture.lock(); // Lock calls to cvCaptureFromCAM
capture = cvCaptureFromCAM(-1); // Get the capture object
mtxCapture.unlock(); // Release lock on calls to cvCaptureFromCAM
//TODO: check capture != NULL...
while (! done)
{
// Do work
}
// Release capture
cvReleaseCapture(&capture);
}
And finally, the capture monitoring thread, as suggested by #karlphillip, except with a locked call to cvCaptureFromCAM. In my tests, the calls to cvReleaseCapture were quite slow, so I put a call to cvWaitKey at the end of the loop because I don't want the overhead of constantly checking.
void CamMonitor()
{
while (! done)
{
CvCapture * capture = NULL;
mtxCapture.lock(); // Lock calls to cvCaptureFromCAM
capture = cvCaptureFromCAM(-1); // Get the capture object
mtxCapture.unlock(); // Release lock on calls to cvCaptureFromCAM
if (capture == NULL)
done = true; // NOTE: not a thread-safe write...
else
cvReleaseCapture(&capture);
// Wait a while, we don't need to be constantly checking.
cvWaitKey(2500);
}
}
I will probably end up implementing a ready-state flag which will be able to detect if the camera gets plugged back in. But that's out of the scope of this example. Hope someone finds this useful. Thanks again, #karlphillip.
This still seems to be an issue.
Another solution would be to compare the returned data with the previous frame. For a working camera there is always some flickering (sensor noise), so if the data is identical you can be sure the cam was unplugged.
Martin
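A minimal sketch of that comparison (an editorial addition, assuming the C++ API and a VideoCapture cap as in the other snippets):

// Detect a frozen/unplugged camera by comparing consecutive frames; a live
// sensor virtually always shows at least some noise between frames.
Mat prev, curr;
cap >> prev;
for (;;)
{
    cap >> curr;
    if (curr.empty() || norm(prev, curr, NORM_L1) == 0)
    {
        cout << "Camera appears to be disconnected" << endl;
        break;
    }
    curr.copyTo(prev);
}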
I think I have a good workaround for this problem. I create an auxiliary Mat filled with zeros at the same resolution as the output from the camera. I assign it to the Mat that the captured frame is then read into, and at the end I check the norm of this Mat. If it is equal to zero, it means that no new frame was captured from the camera.
VideoCapture cap(0);
if (!cap.isOpened()) return -1;
Mat frame;
cap >> frame;
// Zero image with the same size and type as the camera frames
Mat emptyFrame = Mat::zeros(frame.size(), frame.type());
for (;;)
{
    emptyFrame.copyTo(frame); // reset to zeros before each grab
    cap >> frame;
    if (norm(frame) == 0) break; // still all zeros => no new frame captured
}