I am working on a project in which I have to detect the motion of a human. First I wrote a program for motion detection and got it working properly. Then I moved on to human detection using HOGDescriptor and combined the two programs to speed up the process: I first monitor for motion, and if any motion is detected, I crop the frame to the rectangular box bounding the motion and send only the cropped part to the human-detection function so it can be processed quickly.
But a problem arises: sometimes I get good results, and sometimes I get a pop-up window reporting an unhandled exception at some memory location in the .exe file.
My program is:
#include <iostream>
#include <ctime>
#include <stdlib.h>
#include <vector>
#include "opencv2/opencv.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/core/core.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <string>
#include <sstream>

using namespace std;
using namespace cv;

Mat detect1(int, VideoCapture, VideoWriter);
vector<Rect> found;
int humandet(Mat, Rect);
BackgroundSubtractorMOG2 bg[5];

int _tmain(int argc, _TCHAR* argv[])
{
    Mat frame[5];
    string win[5] = {"Video 0", "Video 1", "Video 2", "Video 3"};
    string ip, user, pass;
    stringstream ss;
    string vid[5] = {"D:/Recorded.avi", "D:/Recorded1.avi", "D:/Recorded2.avi", "D:/Recorded3.avi"};
    VideoWriter vidarr[5];
    VideoCapture cap[5];
    int n, type, j;
    cout << "Enter the no of cameras";
    cin >> n;
    for (int i = 0, j = 0; i < n; i++)
    {
        cout << "Enter the camera type\n1.IP camera\n2.Non IP camera";
        cin >> type;
        if (type == 2)
        {
            VideoCapture cap1(j++);
            cap[i] = cap1;
            cap[i].set(CV_CAP_PROP_FRAME_WIDTH, 320);
            cap[i].set(CV_CAP_PROP_FRAME_HEIGHT, 240);
            cap[i].set(CV_CAP_PROP_FPS, 2);
        }
        else
        {
            cout << "Enter the IP add:portno, username and password";
            cin >> ip >> user >> pass;
            ss << "http://" << user << ":" << pass << "#" << ip << "/axis-cgi/mjpg/video.cgi?.mjpg";
            string s(ss.str());
            VideoCapture cap2(s);
            cap[i] = cap2;
            cap[i].set(CV_CAP_PROP_FRAME_WIDTH, 320);
            cap[i].set(CV_CAP_PROP_FRAME_HEIGHT, 240);
            cap[i].set(CV_CAP_PROP_FPS, 2);
        }
        VideoWriter video(vid[i], CV_FOURCC('D','I','V','X'), 2, Size(320, 240));
        vidarr[i] = video;
    }
    while (9)
    {
        for (int i = 0; i < n; i++)
        {
            frame[i] = detect1(i, cap[i], vidarr[i]);
            imshow(win[i], frame[i]);
        }
        if (waitKey(30) == 27)
            break;
    }
    return 0;
}

Mat detect1(int j, VideoCapture cap, VideoWriter vid)
{
    Mat frame;
    Mat diff;
    cap >> frame;
    double large_area = 0;
    int large = 0;
    Rect bound_rect;
    bg[j].nmixtures = 3;
    bg[j].bShadowDetection = true;
    bg[j].nShadowDetection = 0;
    bg[j].fTau = 0.5;
    bg[j].operator()(frame, diff);
    vector<vector<Point>> contour;
    findContours(diff, contour, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
    for (unsigned int i = 0; i < contour.size(); i++)
    {
        double area = contourArea(contour[i]);
        if (area > large_area)
        {
            large_area = area;
            large = i;
            bound_rect = boundingRect(contour[i]);
        }
    }
    contour.clear();
    if (large_area / 100 > 2)
    {
        humandet(frame, bound_rect);
        rectangle(frame, bound_rect, Scalar(0, 0, 255), 2);
        putText(frame, "Recording", Point(20, 20), CV_FONT_HERSHEY_PLAIN, 2, Scalar(0, 255, 0), 2);
        vid.write(frame);
        return (frame);
    }
    else
        return (frame);
}

int humandet(Mat frame1, Rect bound)
{
    HOGDescriptor hog;
    hog.setSVMDetector(HOGDescriptor::getDefaultPeopleDetector());
    if ((bound.height < 100) && (bound.width < 80))
    {
        Mat roi;
        roi.create(Size(80, 100), frame1.type());
        roi.setTo(Scalar::all(0));
        Mat fram = frame1(bound);
        fram.copyTo(roi(Rect(0, 0, (bound.height - 1), (bound.width - 1))));
        hog.detectMultiScale(roi, found, 0, Size(8, 8), Size(32, 32), 1.025);
        roi.release();
        fram.release();
    }
    else if ((bound.height < 200) && (bound.width < 160))
    {
        Mat roi;
        roi.create(Size(160, 200), frame1.type());
        roi.setTo(Scalar::all(0));
        Mat fram = frame1(bound);
        fram.copyTo(roi(Rect(1, 1, (bound.height - 1), (bound.width - 1))));
        hog.detectMultiScale(roi, found, 0, Size(8, 8), Size(32, 32), 1.025);
        roi.release();
        fram.release();
    }
    else
    {
        Mat roi;
        roi = frame1;
        hog.detectMultiScale(roi, found, 0, Size(8, 8), Size(32, 32), 1.025);
        roi.release();
    }
    for (unsigned int i = 0; i < found.size(); i++)
    {
        rectangle(frame1, found[i], Scalar(255, 0, 0), 2);
    }
    if (found.size())
    {
        frame1.release();
        found.clear();
        return 1;
    }
    else
        return 0;
}
Before I used the cropping method, it worked fine: when I passed the frame to the humandet function unchanged and processed it as-is, there was no problem, but it was quite slow. So I cropped the image, made the resolution constant, and processed that, which increased the processing speed considerably. But it often throws an exception. I think the problem is with memory allocation, but I couldn't figure it out.
Please suggest a solution and a method to debug the error I made.
Thanks in advance.
Call detectMultiScale inside a try-catch block; this solved my problem:
try {
    hog.detectMultiScale(roi, found, 0, Size(8, 8), Size(32, 32), 1.025);
}
catch (cv::Exception& e) {
    return false;
}
I am also trying to detect people with HOGDescriptor. When I debugged my code, I realized that this error occurs only when the cropped image is small; it is related to the training data size. Maybe this can be useful for you: HOG detector: relation between detected roi size and training sample size.
The ideal way to start debugging is to catch the exception and print the stack trace.
Please refer to this post on how to generate a stack trace: How to generate a stacktrace when my gcc C++ app crashes.
This will pinpoint the position from which the exception is thrown.
Related
I will post my code and the exception after explaining my issue. Basically, I'm making a program whose end goal is to calibrate a fisheye camera that streams over RTSP and will be recorded via ZoneMinder, using OpenCV. A lot of the code so far comes from here:
http://aishack.in/tutorials/calibrating-undistorting-opencv-oh-yeah/
But I've just started to notice an exception that pops up on line 53 of my code:
bool found = findChessboardCorners(image, board_sz,corners, CV_CALIB_CB_ADAPTIVE_THRESH | CV_CALIB_CB_FILTER_QUADS);
and I have no idea what exactly is causing it. I'm still new to a lot of what I'm trying to do (along with Stack Overflow in general), so feel free to leave any input you feel might be helpful, or ask any questions that need asking.
Also, a quick note if you run the code yourself: Stack Overflow seems to like to split certain lines up, so keep that in mind, along with the fact that you'll need OpenCV.
Code:
// ConsoleApplication2.cpp : Defines the entry point for the console application.
#include <opencv2/videoio.hpp>
#include <opencv2/highgui.hpp>
#include <opencv/cv.hpp>
#include <opencv/cv.h>
#include <iostream>
#include <stdio.h>
#include <chrono>
#include <thread>

using namespace cv;
using namespace std;

int main()
{
    int numBoards = 0;
    int numCornersHor;
    int numCornersVer;
    printf("Enter number of corners along width: ");
    scanf_s("%d", &numCornersHor);
    printf("Enter number of corners along height: ");
    scanf_s("%d", &numCornersVer);
    printf("Enter number of boards: ");
    scanf_s("%d", &numBoards);
    int numSquares = numCornersHor * numCornersVer;
    Size board_sz = Size(numCornersHor, numCornersVer);
    VideoCapture capture = VideoCapture("rtsp://172.16.127.27:554/mpeg4");
    vector<vector<Point3f>> object_points;
    vector<vector<Point2f>> image_points;
    vector<Point2f> corners;
    int successes = 0;
    Mat image;
    Mat gray_image;
    capture >> image;
    vector<Point3f> obj;
    for (int j = 0; j < numSquares; j++)
        obj.push_back(Point3f(j / numCornersHor, j % numCornersHor, 0.0f));
    while (successes < numBoards) {
        this_thread::sleep_for(chrono::milliseconds(100));
        cvtColor(image, gray_image, CV_BGR2GRAY);
        bool found = findChessboardCorners(image, board_sz, corners,
            CV_CALIB_CB_ADAPTIVE_THRESH | CV_CALIB_CB_FILTER_QUADS);
        if (found) {
            cornerSubPix(gray_image, corners, Size(11, 11), Size(-1, -1),
                TermCriteria(CV_TERMCRIT_EPS | CV_TERMCRIT_ITER, 30, 0.1));
            drawChessboardCorners(gray_image, board_sz, corners, found);
        }
        imshow("win1", image);
        imshow("win2", gray_image);
        capture >> image;
        int key = waitKey(1);
        if (key == 27)
            return 0;
        if (key == ' ' && found != 0) {
            image_points.push_back(corners);
            object_points.push_back(obj);
            printf("Snap stored!");
            successes++;
            if (successes >= numBoards)
                break;
        }
    }
    Mat intrinsic = Mat(3, 3, CV_32FC1);
    Mat distCoeffs;
    vector<Mat> rvecs;
    vector<Mat> tvecs;
    intrinsic.ptr<float>(0)[0] = 1;
    intrinsic.ptr<float>(1)[1] = 1;
    calibrateCamera(object_points, image_points, image.size(), intrinsic,
        distCoeffs, rvecs, tvecs);
    Mat imageUndistorted;
    while (1) {
        capture >> image;
        undistort(image, imageUndistorted, intrinsic, distCoeffs);
        imshow("win1", image);
        imshow("win2", imageUndistorted);
        waitKey(1);
    }
    capture.release();
    return 0;
}
Error in Console:
OpenCV(3.4.1) Error: Assertion failed (dims <= 2 && step[0] > 0) in cv::Mat::locateROI, file C:\build\master_winpack-build-win64-vc15\opencv\modules\core\src\matrix.cpp, line 760
Exception:
Exception thrown at 0x00007FFEFC21A388 in ConsoleApplication2.exe: Microsoft C++ exception: cv::Exception at memory location 0x000000637891D640.
Exception thrown at 0x00007FFEFC21A388 in ConsoleApplication2.exe: Microsoft C++ exception: [rethrow] at memory location 0x0000000000000000.
Exception thrown at 0x00007FFEFC21A388 in ConsoleApplication2.exe: Microsoft C++ exception: cv::Exception at memory location 0x000000637891D640.
Unhandled exception at 0x00007FFEFC21A388 in ConsoleApplication2.exe: Microsoft C++ exception: cv::Exception at memory location 0x000000637891D640.
Edit: Oddly, I'm no longer having this issue, but I'll leave this question up in case anyone else hits it or has relevant insight. I'm still having an issue where the camera seems to freeze on the first frame it captures over RTSP (which doesn't happen with the webcam), but I'll leave that for a separate question since it's not the issue here.
What was the last code you added to the first-frame calibration code before it started to freeze?
Regarding your sleep statement (which is in the first-frame calibration code): I assume you introduced it to reduce CPU usage. I would remove it and instead change the waitKey() call in the main loop at the bottom to waitKey(100), to wait 100 milliseconds (or less) between reads from the stream.
The sleep statement where you have it is not where most of your CPU usage is; you're just in the calibration section of the app, which is executed only once. I would remove it.
Where your computer really spins is the main while(1) loop at the bottom, where it is continually reading from the stream.
That said, you don't actually need a sleep statement in the main loop either, since you already call waitKey() there; simply change the call to waitKey(100).
Regarding your scanf_s statements: either convert those to #defines so we all know what your inputs are, or show us the command line you entered to run this. That's an important piece of info.
Hope that helps!
int main(int argc, char* argv[])
{
    VideoCapture cap(0);
    Mat current_frame;
    Mat previous_frame;
    Mat result;
    Mat frame;
    //cap.open(-1);
    if (!cap.isOpened()) {
        //cerr << "can not open camera or video file" << endl;
        return -1;
    }
    while (1)
    {
        cap >> current_frame;
        if (current_frame.empty())
            break;
        if (!previous_frame.empty()) {
            // subtract frames
            subtract(current_frame, previous_frame, result);
        }
        imshow("Window", result);
        waitKey(10);
        frame.copyTo(previous_frame);
    }
}
When I run this program to subtract the current frame from the previous frame and then show the resulting frame, it shows me this error as it starts executing:
Unhandled exception at 0x755d812f in WK01.exe: Microsoft C++ exception: cv::Exception at memory location 0x001fe848.
I also want to apply the same thing to a recorded video.
In the first frame, result is empty!
imshow("Window", result); // this will crash
Also, you're copying the empty frame Mat to previous_frame; that should be current_frame instead, no?
Try it like this:
    if (!previous_frame.empty()) {
        // subtract frames
        subtract(current_frame, previous_frame, result);
        imshow("Window", result);
    }
    waitKey(10);
    current_frame.copyTo(previous_frame);
}
I think the problem is with previous_frame: you assign a value to previous_frame only at the end of the loop.
It might therefore be empty at the start of the while loop, so the

if (!previous_frame.empty()) {
    // subtract frames
    subtract(current_frame, previous_frame, result);
}

block will not be executed.
previous_frame must also be the same size as current_frame when subtracting.
This code (the subtract call) determines the size of result, which you'd like to show on the following line.
I'm new at this but have been doing my share of reading and trying different setups to help narrow down the problem. Any help to get me past this roadblock would be much appreciated.
Currently I'm running: Win 7 Ultimate, Visual C++ 2010 Express, OpenCV 2.2.0, and a Microsoft LifeCam Studio webcam (Silver, 1080p HD).
I'm getting no build errors, and when I run the program my camera comes on (blue light indicating it's on) and a window pops up that I thought would show my camera feed, but instead it's just a grey box with nothing inside. I thought the code below would help narrow down the problem, but I'm at a loss.
int main()
{
    CvCapture *webcam = NULL;
    webcam = cvCreateCameraCapture(-1);
    if (webcam != NULL)
    {
        IplImage *frame = cvQueryFrame(webcam);
        cvShowImage("WEBCAM_TEST", frame);
        cvWaitKey(0);
        return 0;
    }
    else
    {
        std::cout << "CAMERA NOT DETECTED" << std::endl;
        return 0;
    }
}
Your code sometimes shows a black image and sometimes a correct image on my system (Windows 7 64-bit, VS2010, OpenCV 2.4.3). However, when I put it in a loop for non-stop streaming, the image is OK, so just modify your code slightly and try:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>

using namespace cv;

int main()
{
    CvCapture *webcam = NULL;
    webcam = cvCreateCameraCapture(-1);
    if (webcam != NULL)
    {
        while (true)
        {
            IplImage *frame = cvQueryFrame(webcam);
            cvShowImage("WEBCAM_TEST", frame);
            cvWaitKey(20);
        }
    }
    else
    {
        std::cout << "CAMERA NOT DETECTED" << std::endl;
        return 0;
    }
    return 0;
}
In OpenCV, if you grab a frame immediately after creating the camera capture, it is usually grey. All you have to do is grab the next frame, or wait before grabbing the first frame. This code:
int _tmain(int argc, _TCHAR* argv[])
{
    VideoCapture cap(0);
    if (!cap.isOpened())
        return -1;
    Mat frame;
    namedWindow("01", 1);
    //cap >> frame; //option 1
    //waitKey(5000); //option 2
    cap >> frame;
    imshow("01", frame);
    int key = waitKey(30);
    return 0;
}
will show a grey frame, but if you uncomment option 1 or option 2, it will work fine.
When I compile this code it builds fine, but when I try to run it, it gives the following error:
The application has requested the Runtime to terminate in an unusual way.
This is the code:
#include <opencv2/opencv.hpp>
#include <string>

int main() {
    cv::VideoCapture c(0);
    double rate = 10;
    bool stop(false);
    cv::Mat frame;
    cv::namedWindow("Hi!");
    int delay = 1000 / rate;
    cv::Mat corners;
    while (!stop) {
        if (!c.read(frame))
            break;
        cv::cornerHarris(frame, corners, 3, 3, 0.1);
        cv::imshow("Hi!", corners);
        if (cv::waitKey(delay) >= 0)
            stop = true;
    }
    return 0;
}
BTW, I get the same error when using the Canny edge detector.
Your corners matrix is declared as a variable, but no memory is allocated to it; the same goes for your frame variable. First you have to create a matrix big enough for the image to fit into it.
I suggest you first take a look at cvCreateImage so you can learn how basic images are created and handled, before you start working with video streams.
Make sure the capture is ready and the image is OK:

if (!cap.isOpened())
    break;
if (!c.read(frame))
    break;
if (frame.empty())
    break;
You need to convert the image to grayscale before you use the corner detector:
cv::Mat frameGray;
cv::cvtColor(frame, frameGray, CV_RGB2GRAY);
I am an OpenCV and C++ beginner. I've got a problem with my student project: my tutor wants me to grab frames from a camera and save the grabbed frames as JPEGs. First I used cvCreateCameraCapture, cvQueryFrame, and cvSaveImage, and it worked OK, but the frames are relatively big, about 2500x2000, and it takes about a second to save one frame. My tutor requires saving at least 10 frames per second.
Then I came up with the idea of saving the raw data first and converting it to JPEG after the grabbing process. So I wrote the following test code. The problem is that all the saved images are the same; it seems they all come from the data of the last grabbed frame. I guess the problem is my poor knowledge of C++, especially pointers, so I really hope to get help here.
Thanks in advance!
void COpenCVDuoArryTestDlg::OnBnClickedButton1()
{
    IplImage* m_Frame = NULL;
    TRACE("m_Frame initialed");
    CvCapture* m_Video = NULL;
    m_Video = cvCreateCameraCapture(0);
    IplImage** Temp_Frame = (IplImage**) new IplImage*[100];
    for (int j = 0; j < 100; j++) {
        Temp_Frame[j] = new IplImage[112];
    }
    TRACE("Tempt_Frame initialed\n");
    cvNamedWindow("video", 1);
    int t = 0;
    while (m_Frame = cvQueryFrame(m_Video)) {
        for (int k = 0; k < m_Frame->nSize; k++) {
            Temp_Frame[t][k] = m_Frame[k];
        }
        cvWaitKey(30);
        t++;
        if (t == 100) {
            break;
        }
    }
    for (int i = 0; i < 30; i++) {
        CString ImagesName;
        ImagesName.Format(_T("Image%.3d.jpg"), i);
        if (cvWaitKey(20) == 27) {
            break;
        }
        else {
            cvSaveImage(ImagesName, Temp_Frame[i]);
        }
    }
    cvReleaseCapture(&m_Video);
    cvDestroyWindow("video");
    TRACE("cvDestroy works\n");
    delete[] Temp_Frame;
}
If you use C++, why don't you use the C++ OpenCV interface?
The reason you get N copies of the same image is that the capture reuses the memory for each frame; if you want to store the frames, you need to copy them. Example for the C++ interface:
#include <vector>
#include "cv.h"
#include "highgui.h"

using namespace cv;

int main(int, char**)
{
    VideoCapture cap(0); // open the default camera
    if (!cap.isOpened()) // check if we succeeded
        return -1;
    Mat edges;
    namedWindow("image", 1);
    std::vector<cv::Mat> images(100);
    for (int i = 0; i < 100; ++i) {
        // this is optional, preallocation so there's no allocation
        // during capture
        images[i].create(480, 640, CV_8UC3);
    }
    for (int i = 0; i < 100; ++i)
    {
        Mat frame;
        cap >> frame; // get a new frame from camera
        frame.copyTo(images[i]);
    }
    cap.release();
    for (int i = 0; i < 100; ++i)
    {
        imshow("image", images[i]);
        if (waitKey(30) >= 0) break;
    }
    // the camera will be deinitialized automatically in VideoCapture destructor
    return 0;
}
Do you have a multicore/multi-CPU system? Then you could farm the one-second tasks out across 16 cores and save 16 frames per second!
Or you could write your own optimized JPEG routine on the GPU in CUDA/OpenCL.
If you need it to run for longer, you could dump the raw image data to disk, then read it back in later and convert it to JPEG. 5 Mpixel * 3 colors * 10 fps is 150 MB/s (thanks etarion!), which you can do with two disks and Windows RAID.
Edit: if you only need 10 frames, just buffer them in memory and then write them out, as the other answer shows.
Since you already know how to retrieve a frame, check this answer:
openCV: How to split a video into image sequence?
That question is a little different because it retrieves frames from an AVI file instead of a webcam, but the way to save a frame to disk is the same!