If I don't do anything (that is, don't change the color-detection HSV values via the Controls window), the app runs fine. However, if I change the HSV values while the app is running, I get the following errors. I have tested the code without Hough, and it runs fine.
The CPU Error -
Unhandled exception at 0x00007FF9ECA64388 (ucrtbase.dll) in HoughFinder.exe: An invalid parameter was passed to a function that considers invalid parameters fatal.
This is my code -
VideoCapture capture(0); // 0 is my webcam
...
capture.read(displayOriginal);
...(Code to detect colors for extra accuracy)
cudaCanny->detect(imgThresholded, imgCanny);
vector<Vec2f> lines;
//Ptr<HoughLinesDetector> hough = createHoughLinesDetector(1, CV_PI / 180, 100); CUDA code...
//hough->detect(imgCanny, lines); CUDA code...
HoughLines(displayCanny, lines, 1, CV_PI / 180, 100, 0, 0); // CPU code...
for (size_t i = 0; i < lines.size(); i++)
{
    float rho = lines[i][0], theta = lines[i][1];
    Point pt1, pt2;
    double a = cos(theta), b = sin(theta);
    double x0 = a * rho, y0 = b * rho;
    pt1.x = cvRound(x0 + 1000 * (-b));
    pt1.y = cvRound(y0 + 1000 * (a));
    pt2.x = cvRound(x0 - 1000 * (-b));
    pt2.y = cvRound(y0 - 1000 * (a));
    line(displayHough, pt1, pt2, Scalar(0, 0, 255), 3, CV_AA);
}
imshow("Hough", displayHough);
imshow("Live Video", displayOriginal);
Extra Info -
If I use the CUDA code for Hough, I get this error -
Unhandled exception at 0x00007FF9F561A1C8 in HoughFinder.exe: Microsoft C++ exception: cv::Exception at memory location 0x000000A75E81EB70.
App Error (I don't get this error while using the CPU code) -
OpenCV Error: Assertion failed (d == 2 && (sizes[0] == 1 || sizes[1] == 1 || sizes[0]*sizes[1] == 0)) in cv::_OutputArray::create, file OPENCV_DIR\opencv-sources\modules\core\src\matrix.cpp, line 2363
Can anyone help? A fix for either the CPU or the CUDA code would be fine, but I would prefer the CUDA error to be fixed (since CUDA gives the extra speed).
After lots of trial and error, I finally found the solution. The output of detect() should be a GpuMat, not a vector. I would have figured this out earlier, but the OpenCV documentation is very confusing. Here's the edited code -
Ptr<HoughLinesDetector> houghLines = createHoughLinesDetector(1, CV_PI / 180, 120);
GpuMat tmpLines; // detect() writes its result here; this must be a GpuMat...
vector<Vec2f> lines; // ...which downloadResults() copies into a host vector (the lines are CV_32FC2, hence Vec2f)
GpuMat imgCanny;
...
while (true) {
    ...
    houghLines->detect(imgCanny, tmpLines);
    houghLines->downloadResults(tmpLines, lines);
    ...
}
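For reference, here is a minimal sketch of the whole GPU pipeline, assuming OpenCV 3.x built with the cudaimgproc module and a host-side Mat named imgThresholded as in the question (skip the upload if yours is already a GpuMat); the detector names are illustrative, not from the original code:

#include <opencv2/cudaimgproc.hpp>

// Create the detectors once, outside the capture loop.
cv::Ptr<cv::cuda::CannyEdgeDetector> cannyDetector =
    cv::cuda::createCannyEdgeDetector(100, 200);
cv::Ptr<cv::cuda::HoughLinesDetector> houghDetector =
    cv::cuda::createHoughLinesDetector(1.0f, (float)(CV_PI / 180), 120);

cv::cuda::GpuMat gpuThresh, gpuCanny, gpuLines;
std::vector<cv::Vec2f> hostLines; // lines are stored as CV_32FC2, hence Vec2f

gpuThresh.upload(imgThresholded);           // host Mat -> device memory
cannyDetector->detect(gpuThresh, gpuCanny); // edge map, still on the GPU
houghDetector->detect(gpuCanny, gpuLines);  // line buffer, still on the GPU (a GpuMat)
houghDetector->downloadResults(gpuLines, hostLines); // (rho, theta) pairs back on the host
// hostLines can now be drawn exactly like the CPU HoughLines output above.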
OpenCV GPU Error (Function Not Implemented) in Hough Transform
I had a similar problem. My code is something like this:
cv::Mat src, src_gray, src_blured, detected_edges, hough_lines;
std::vector<cv::Vec4i> lines;
src = cv::cvarrToMat(img, false);
opencv_run_ = true;
while (opencv_run_) {
    cv::cvtColor(src, src_gray, CV_BGR2GRAY);
    cv::medianBlur(src_gray, src_blured, blur_size_);
    cv::Canny(src_blured, detected_edges, canny_low_threshold_, canny_max_threshold_, canny_aperture_size_);
    cv::HoughLinesP(detected_edges, lines, 2, CV_PI / 2, hough_votes_, hough_min_line_length_, hough_max_line_gap_);
    hough_lines = cv::Mat::zeros(size_horizontal_, size_vertical_, CV_8UC4);
    for (size_t i = 0; i < lines.size(); i++) {
        cv::line(hough_lines, cv::Point(lines[i][0], lines[i][1]), cv::Point(lines[i][2], lines[i][3]), cv::Scalar(0, 0, 255), 3, 8);
    }
    if (lines.size() > 0) lines.clear();
    cv::imshow("Gray scale", src_gray);
    cv::imshow("Edges", detected_edges);
    cv::imshow("Hough", hough_lines);
    // Some logic in waitKey and sleep time
    cv::waitKey(sleep_time);
}
cvDestroyAllWindows();
cvReleaseImageHeader(&img);
where img is a pointer to an IplImage whose values I fill in manually. (My image data come from a camera whose API gives me a void *, i.e., raw data.)
This particular piece of code runs inside a boost::thread. Everything ran fine inside the loop, but when I did
opencv_run = false;
this_boost_thread.join();
in order to stop the thread, I got the invalid parameter exception. What baffled me was that the exception was thrown after the thread returned, which suggested a classic case of stack corruption.
After hours of searching I came across a forum post saying this is probably a problem with the linked libraries. So I checked my OpenCV installation and saw that my libs were in a vc12 folder, which means Visual Studio 2013 (I like to install pre-built binaries because I am an idiot), different from the VS 2015 that I use.
So I searched for Visual Studio 2015 OpenCV libs and found some in the OpenCV 3.1 release at https://sourceforge.net/projects/opencvlibrary/files/opencv-win/ , while I was using OpenCV 2.4.13. I decided not to use them and instead built OpenCV from scratch: I cloned it from https://github.com/opencv/opencv , followed the instructions at http://docs.opencv.org/2.4/doc/tutorials/introduction/windows_install/windows_install.html , and built vc14 x86 OpenCV 3.1 libraries, which seem to work.
Related
I am using Haar cascades to detect frontal faces. I have the code below:
int main()
{
    Mat image;
    cv::VideoCapture cap;
    cap.open(1);
    int frame_idx = 0;
    time_t fpsStartTime, fpsEndTime;
    time(&fpsStartTime);
    for (;;)
    {
        frame_idx = frame_idx + 1;
        cap.read(image);
        CascadeClassifier face_cascade;
        face_cascade.load("<PATH");
        std::vector<Rect> faces;
        face_cascade.detectMultiScale(image, faces, 1.1, 2, 0 | cv::CASCADE_SCALE_IMAGE, Size(30, 30));
        // Draw circles on the detected faces
        for (int i = 0; i < faces.size(); i++)
        {
            Point center(faces[i].x + faces[i].width * 0.5, faces[i].y + faces[i].height * 0.5);
            ellipse(image, center, Size(faces[i].width * 0.5, faces[i].height * 0.5), 0, 0, 360, Scalar(255, 0, 255), 4, 8, 0);
        }
        cv::imshow("Detected Face", image);
        char k = cv::waitKey(1);
        if (k == 27)
            break;
        time(&fpsEndTime);
        double seconds = difftime(fpsEndTime, fpsStartTime);
        double fps = frame_idx / seconds;
        std::string fps_txt = "FPS: " + std::to_string(fps);
        cout << "FPS : " << fps_txt << endl;
    }
    return 0;
}
This code works, but it gives a very low FPS, around 1 fps, which is very slow. I am running it on a Windows 10 laptop with an Intel i5 CPU; I believe it should not be this slow.
In debug mode it gives ~1 fps, and in release mode 4-5 fps, which is still very slow. I have run some OpenVINO demos, such as a pedestrian detection demo that uses two OpenVINO models, on the same hardware, and they give ~17-20 fps, which is very good.
I am using a USB 3.0 Logitech Brio 4K camera, so the camera cannot be the reason for the low FPS. My question is why the Haar cascade is performing so slowly. Is there any way to speed it up and make it more usable? Please help. Thanks
You should not (re)load the classifier on every frame. It should be loaded once, before processing any frames.
Move the following statements out of the for loop.
CascadeClassifier face_cascade;
face_cascade.load("<PATH");
See a demo on OpenCV Docs.
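For illustration, a minimal sketch of the restructured loop (keeping the placeholder path and camera index from the question):

// Load the classifier once, before the capture loop.
CascadeClassifier face_cascade;
if (!face_cascade.load("<PATH"))
    return -1;

cv::VideoCapture cap;
cap.open(1);
Mat image;
for (;;)
{
    cap.read(image);
    std::vector<Rect> faces;
    face_cascade.detectMultiScale(image, faces, 1.1, 2, 0 | cv::CASCADE_SCALE_IMAGE, Size(30, 30));
    // ... draw, display, and compute FPS as before ...
    if ((char)cv::waitKey(1) == 27)
        break;
}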
Can you confirm that you are using the right .lib and .dll files?
I have checked and seen that opencv_world440.lib & opencv_world440.dll give much better speed than opencv_world440d.lib & opencv_world440d.dll.
My guess is that opencv_world440d.lib & opencv_world440d.dll are the debug builds, hence the slow speed.
Note: your lib names may vary, i.e., opencv_world<SomeNumber>d.lib vs. opencv_world<SomeNumber>.lib.
Using the code below, I want to detect faces and eyes in a video.
The code runs without error, but the video and the detection result are not displayed when I run it. What is the problem?
I tried it on images: it works fine on some, while on others it only detects faces.
#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

int main()
{
    float EYE_SX = 0.16f;
    float EYE_SY = 0.26f;
    float EYE_SW = 0.30f;
    float EYE_SH = 0.28f;
    Mat dest, gray, frame;
    VideoCapture capture("m.mp4");
    CascadeClassifier detector, eyes_detector;
    if (!capture.isOpened()) // check if we succeeded
        return -1;
    if (!detector.load("haarcascade_frontalface_alt2.xml"))
        cout << "Cannot open the classifier." << endl;
    if (!eyes_detector.load("haarcascade_eye_tree_eyeglasses.xml"))
        cout << "Cannot open the eye classifier." << endl;
    for (;;)
    {
        capture >> frame;
        cvtColor(frame, gray, CV_BGR2GRAY);
        equalizeHist(gray, dest);
        vector<Rect> rect;
        detector.detectMultiScale(dest, rect);
        for (Rect rc : rect)
        {
            rectangle(frame,
                Point(rc.x, rc.y),
                Point(rc.x + rc.width, rc.y + rc.height),
                CV_RGB(0, 255, 0), 2);
        }
        if (rect.size() > 0)
        {
            Mat face = dest(rect[0]).clone();
            vector<Rect> leftEye, rightEye;
            int leftX = cvRound(face.cols * EYE_SX);
            int topY = cvRound(face.rows * EYE_SY);
            int widthX = cvRound(face.cols * EYE_SW);
            int heightY = cvRound(face.rows * EYE_SH);
            int rightX = cvRound(face.cols * (1.0 - EYE_SX - EYE_SW));
            Mat topLeftOfFace = face(Rect(leftX, topY, widthX, heightY));
            Mat topRightOfFace = face(Rect(rightX, topY, widthX, heightY));
            eyes_detector.detectMultiScale(topLeftOfFace, leftEye);
            eyes_detector.detectMultiScale(topRightOfFace, rightEye);
            if ((int)leftEye.size() > 0)
            {
                rectangle(frame,
                    Point(leftEye[0].x + leftX + rect[0].x, leftEye[0].y + topY + rect[0].y),
                    Point(leftEye[0].width + widthX + rect[0].x - 5, leftEye[0].height + heightY + rect[0].y),
                    CV_RGB(0, 255, 255), 2);
            }
            if ((int)rightEye.size() > 0)
            {
                rectangle(frame,
                    Point(rightEye[0].x + rightX + leftX + rect[0].x, rightEye[0].y + topY + rect[0].y),
                    Point(rightEye[0].width + widthX + rect[0].x + 5, rightEye[0].height + heightY + rect[0].y),
                    CV_RGB(0, 255, 255), 2);
            }
        }
    }
    imshow("Ojos", frame);
    waitKey(0);
    return 1;
}
So, right now, the imshow("Ojos", frame); and the waitKey(0); only get called right before the program ends. That's fine for images, but not for video, where you want them to happen once per frame.
If you move them up a few lines, inside that for loop (basically, move the closing bracket that sits one line above them to one line below them), it should start working better for videos; see the sketch below.
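For illustration, the end of the loop would then look something like this sketch (with waitKey(30) rather than waitKey(0), so playback doesn't block forever on a single frame):

for (;;)
{
    capture >> frame;
    // ... detection and drawing as before ...
    imshow("Ojos", frame);
    if (waitKey(30) == 27) // ~30 ms per frame; ESC quits
        break;
}
return 0;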
However, there are a couple of other things you might want to tweak. The code only ever shows one right eye and one left eye. That is normally what you want, but if you have false positives you might end up with somebody's hair or skin being labeled an eye, and be none the wiser as to how it happened. I'd recommend displaying all of the items in the leftEye and rightEye vectors. This can be done simply by replacing those if statements (if ((int)rightEye.size() > 0), etc.) with the loop below (note that the index into rect stays [0], since the eyes were searched only in the first detected face):
for (size_t i = 0; i < rightEye.size(); i++) {
    rectangle(frame,
        Point(rightEye[i].x + rightX + leftX + rect[0].x, // rect[0]: the face region the eyes were searched in
            rightEye[i].y + topY + rect[0].y),
        Point(rightEye[i].width + widthX + rect[0].x + 5,
            rightEye[i].height + heightY + rect[0].y),
        CV_RGB(0, 255, 255), 2);
}
If you're having problems with false positives or negatives, you might want to look into tweaking the parameters of detectMultiScale - right now, you're leaving everything at the defaults. detectMultiScale takes a number of parameters; image and objects you already have, but there are others, such as:
scaleFactor – Parameter specifying how much the image size is reduced at each image scale. The default is 1.1. The bigger it is, the larger each scale step, which takes the cascade less time but produces more false positives.
minNeighbors – Parameter specifying how many neighbors each candidate rectangle should have to retain it. The default is 3. The bigger it is, the more neighbors it requires, resulting in fewer false positives, but it takes longer. Tweak it too high and it'll start giving false negatives.
flags – Parameter with the same meaning as for an old cascade in the function cvHaarDetectObjects. It is not used for a new cascade. The default is zero; just leave it at zero for the most part.
minSize – Minimum possible object size. Objects smaller than that are ignored. The default is Size(0,0). I tend to bump it up just a little. Again, bigger means faster with fewer false positives, but too big will skip over whatever you're looking for.
maxSize – Maximum possible object size. Objects larger than that are ignored. The default is, I believe, the size of the image passed in. I tend to limit it to smaller than that. Smaller here means faster with fewer false positives, but too small will skip over whatever you're looking for.
As an example: cascade_name.detectMultiScale(frame_gray, frame_rectangle, 1.1, 2, 0, Size(30, 30));
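Spelled out with one argument per line, a call that sets all of these parameters might look like this sketch (the values are illustrative, not recommendations):

std::vector<cv::Rect> found;
cascade_name.detectMultiScale(
    frame_gray,          // input image (single-channel, 8-bit)
    found,               // output: detected object rectangles
    1.1,                 // scaleFactor
    3,                   // minNeighbors
    0,                   // flags (ignored by new cascades)
    cv::Size(30, 30),    // minSize: ignore smaller candidates
    cv::Size(300, 300)); // maxSize: ignore larger candidates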
I'm new to OpenCV and image processing in general. I need to draw the lines and their positions in real time from a camera input, like this:
I already have the image from the Canny edge detection, but when applying the Hough line transform and trying to draw the lines onto that image using the following code I found:
int main(int argc, char* argv[]) {
    Mat input;
    Mat HSV;
    Mat threshold;
    Mat CannyThresh;
    Mat HL;
    // video capture object to acquire webcam feed
    cv::VideoCapture capture;
    capture.open(0);
    capture.set(CV_CAP_PROP_FRAME_WIDTH, 640);
    capture.set(CV_CAP_PROP_FRAME_HEIGHT, 480);
    // start an infinite loop where webcam feed is copied to cameraFeed matrix
    // all operations will be performed within this loop
    while (true) {
        capture.read(input);
        cvtColor(input, HSV, COLOR_BGR2HSV); // HSV
        inRange(HSV, Scalar(H_MIN, S_MIN, V_MIN), Scalar(H_MAX, S_MAX, V_MAX), threshold); // threshold
        MorphOps(threshold); // morph operations on threshold image
        Canny(threshold, CannyThresh, 100, 50); // Canny edge detection
        std::vector<Vec4i> lines;
        HoughLines(CannyThresh, lines, 1, CV_PI / 180, 150, 0, 0);
        for (size_t i = 0; i < lines.size(); i++)
        {
            float rho = lines[i][0], theta = lines[i][1];
            Point pt1, pt2;
            double a = cos(theta), b = sin(theta);
            double x0 = a * rho, y0 = b * rho;
            pt1.x = cvRound(x0 + 1000 * (-b));
            pt1.y = cvRound(y0 + 1000 * (a));
            pt2.x = cvRound(x0 - 1000 * (-b));
            pt2.y = cvRound(y0 - 1000 * (a));
            line(input, pt1, pt2, Scalar(0, 0, 255), 3, CV_AA);
        }
        imshow("camera", input);
        waitKey(30);
    }
    return 0;
}
I get the following exception:
1- I can't say I really understand that code yet, but can you tell me why it isn't working?
2- If I manage to make it work, how can I get the Y coordinates of the horizontal lines? I need to know whether another object is inside, below, or above this one, so I need the positions on the Y axis of the two horizontal lines in this image (the ones HoughLines detected), so I can determine where the other object is relative to this "rectangle".
EDIT #1
I copied the complete code. As you can see in the second image, the debugger doesn't throw any errors, but the program's console says OpenCV Error: Assertion failed (channels() == CV_MAT_CN(dtype)) in cv::Mat::copyTo, file C:\builds\master_packSlave-Win32-vc12-shared\opencv\modules\core\src\copy.cpp, line 281. Also, the last call in the call stack is > KernelBase.dll!_RaiseException@16() Unknown. I'm starting to think this is an OpenCV problem and not a code problem, maybe something with that DLL.
EDIT #2
I changed the line
std::vector<Vec4i> lines; // this line causes the exception
to
std::vector<Vec2f> lines;
and now it enters the for loop, but it gives another runtime error (another segmentation fault). I think it has to do with these values:
I think they may be going out of range. Any ideas?
I'm not sure, but it could be that you're trying to draw a line as if you had a 3-channel image (using Scalar(b,g,r)) when what you really have is a single-channel image (I suppose CannyThresh is the output of Canny()).
You can try converting the image to a color version, using something like this:
Mat colorCannyThresh = CannyThresh.clone();
cvtColor(colorCannyThresh, colorCannyThresh, CV_GRAY2BGR);
or you can draw with a single-channel Scalar([0~255]), changing your line() call to:
line(CannyThresh, pt1, pt2, Scalar(255), 3, CV_AA);
Again, since it's not the complete code, I'm not sure this is the case, but it could be.
About question 2: what do you mean by "the Y coordinate of the horizontal lines"?
Edit
After inRange(), threshold is a mask of the same size as HSV and of CV_8U type (see inRange()), i.e., single-channel, which is what Canny() expects. Are you sure threshold is still CV_8UC1 after MorphOps()? You can verify with threshold.channels().
If your goal is to find what is inside a rectangle, you might want to read about minAreaRect().
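For example, here is a minimal minAreaRect() sketch, assuming you first extract contours from the Canny image (the names are illustrative):

std::vector<std::vector<cv::Point> > contours;
cv::Mat edgesCopy = CannyThresh.clone(); // clone: findContours modifies its input
cv::findContours(edgesCopy, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
for (size_t i = 0; i < contours.size(); i++)
{
    cv::RotatedRect box = cv::minAreaRect(contours[i]); // smallest rotated bounding rectangle
    cv::Point2f corners[4];
    box.points(corners); // four corners; their y values give the vertical extent
    for (int j = 0; j < 4; j++)
        cv::line(input, corners[j], corners[(j + 1) % 4], cv::Scalar(0, 255, 0), 2);
}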
I am still pretty new to OpenCV, and I've just recently come across the HoughLinesP function. First and foremost, my goal is to write code that detects rectangles in a webcam feed. Currently, the code below only detects lines in general, and I still run into problems while debugging the program. Here is my code:
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
using namespace cv;
using namespace std;
int main() {
int erosion_size = 0;
VideoCapture cam(0);
if (!cam.isOpened()) {
cout << "cannot open camera";
}
while (true) {
Mat frame;
cam.read(frame);
Mat gray, edge, draw, die;
cvtColor(frame, gray, CV_BGR2GRAY);
Canny(gray, edge, 100, 150, 3);
edge.convertTo(draw, CV_8U);
dilate(draw, die, Mat(), Point(-1, -1), 2, 1, 1);
erode(die, die, Mat(), Point(-1, -1), 1, 1, 1);
#if 0
vector<Vec2f> lines;
HoughLines(die, lines, 1, CV_PI / 180, 100, 0, 0);
for (size_t i = 0; i < lines.size(); i++)
{
float rho = lines[i][0], theta = lines[i][1];
Point pt1, pt2;
double a = cos(theta), b = sin(theta);
double x0 = a*rho, y0 = b*rho;
pt1.x = cvRound(x0 + 1000 * (-b));
pt1.y = cvRound(y0 + 1000 * (a));
pt2.x = cvRound(x0 - 1000 * (-b));
pt2.y = cvRound(y0 - 1000 * (a));
line(frame, pt1, pt2, Scalar(0, 0, 255), 3, CV_AA);
}
#else
vector<Vec4i> lines;
HoughLinesP(die, lines, 1, CV_PI / 180, 200, 50, 10);
for (size_t i = 0; i < lines.size(); i++)
{
Vec4i l = lines[i];
line(frame, Point(l[0], l[1]), Point(l[2], l[3]), Scalar(0, 0, 255), 3, CV_AA);
}
#endif
imshow("Canny", die);
imshow("original", frame);
if (waitKey(30) >= 0)
break;
}
return 0;
}
When I debug the program, the webcam window pops up okay, but when I show it a rectangular object with lines (a piece of paper), the program stops with a breakpoint error. I concluded that the program stopped every time it found a line. When I choose to continue instead of breaking, it gives me this error:
Debug Assertion failed!
Program: ....Studio
2013/Projects/TrialRectangle/Debug/TrialRectangle.exe
File: f:/dd/vctools/crt/crtw32/misc/dbgheap.c
Line: 1332
Expression: _CrtIsValidHeapPointer(pUserData)
I played around with the HoughLinesP function and found that a high threshold parameter (e.g., 500) seems to make the program run fine, BUT it then does not show any Hough lines at all in my webcam feed. If someone could explain why that is, that would be helpful as well!
Does anyone have any ideas as to how to solve this breakpoint error?
I ran into this Debug Assertion too.
In my case, it was because my project was compiled statically, yet I used OpenCV dynamically through its DLLs. I changed my project to compile dynamically, which solved the problem.
It happens because the OpenCV object is allocated on a different heap, and when the object is destructed, the current runtime can't find that heap; this is why the assertion is hit.
First some background
I have written a C++ function that detects an area of a certain color in an RGB image using OpenCV. The function is used to isolate a small colored area using the FeatureDetector SimpleBlobDetector.
The problem is that this function is used in a cross-platform project. On my OS X 10.8 machine, using OpenCV in Xcode, it works flawlessly. However, when I try to run the same piece of code on Windows, using OpenCV in Visual Studio, it crashes whenever I use:
blobDetector.detect(imgThresh, keypoints)
with an error such as this:
OpenCV Error: Assertion failed (dims <= 2 && data && (unsigned)i0 < (unsigned)size.p[0] && (unsigned)(i1*DataType<_Tp>::channels) < (unsigned)(size.p[1]*channels()) && ((((sizeof(size_t)<<28)|0x8442211) >> ((DataType<_Tp>::depth) & ((1 << 3) - 1))*4) & 15) == elemSize1()) in unknown function, file C:\slave\builds\WinInstallerMegaPack\src\opencv\modules\core\include\opencv2/core/mat.hpp, line 545
This is the only piece of OpenCV code that has given me problems so far. I tried several solutions, like the ones suggested in Using FeatureDetector in OpenCV gives access violation and Access violation reading in FeatureDetector OpenCV 2.4.5, but to no avail.
A partial workaround was to add a threshold() call just before the call to detect(), which appears to make it work. However, I don't like this solution, since it forces me to do something that (as far as I know) shouldn't be necessary, and since it is not needed on my Mac for some reason.
Question
Can anyone explain why the following line:
threshold(imgThresh, imgThresh, 100, 255, 0);
is necessary on Windows, but not on OS X, just before the call to detect() in the following code?
Full code snippet:
#include "ColorDetector.h"
using namespace cv;
using namespace std;
Mat ColorDetection(Mat img, Scalar colorMin, Scalar colorMax, double alpha, int beta)
{
initModule_features2d();
initModule_nonfree();
//Define matrices
Mat contrast_img = constrastImage(img, alpha, beta);
Mat imgThresh;
Mat blob;
//Threshold based on color ranges (Blue/Green/Red scalars)
inRange(contrast_img, colorMin, colorMax, imgThresh); //BGR range
//Apply Blur effect to make blobs more coherent
GaussianBlur(imgThresh, imgThresh, Size(3,3), 0);
//Set SimpleBlobDetector parameters
SimpleBlobDetector::Params params;
params.filterByArea = false;
params.filterByCircularity = false;
params.filterByConvexity = false;
params.filterByInertia = false;
params.filterByColor = true;
params.blobColor = 255;
params.minArea = 100.0f;
params.maxArea = 500.0f;
SimpleBlobDetector blobDetector(params);
blobDetector.create("SimpleBlob");
//Vector to store keypoints (center points for a blob)
vector<KeyPoint> keypoints;
//Try blob detection
threshold(imgThresh, imgThresh, 100, 255, 0);
blobDetector.detect(imgThresh, keypoints);
//Draw resulting keypoints
drawKeypoints(img, keypoints, blob, CV_RGB(255,255,0), DrawMatchesFlags::DEFAULT);
return blob;
}
Try using it this way (note that blobDetector.create("SimpleBlob") in your snippet is, as far as I can tell, a static factory call whose result is discarded; it does not initialize blobDetector):
Ptr<SimpleBlobDetector> sbd = SimpleBlobDetector::create(params);
vector<cv::KeyPoint> keypoints;
sbd->detect(imgThresh, keypoints);