opencv c++ HoughCircles causing breakpoint in Visual Studio 2013 - c++

I have been detecting circles in frames from a video feed.
The script captures a frame, detects any circles, and then analyses them before taking another frame and repeating the process.
Each time I ran this, the script would take the first frame and detect the circles; once all circles in that frame had been analysed, a breakpoint would be triggered with "Invalid address specified to RtlValidateHeap".
I commented out the entire script and slowly narrowed it down until I determined that it was the HoughCircles function causing the problem.
Has anyone else experienced this?
This is the function for what it is worth:
HoughCircles(
    greyGB,            // Mat input source
    circles,           // vector<Vec3f> output; each element stores x_c, y_c, r for one detected circle
    CV_HOUGH_GRADIENT, // detection method
    1,                 // inverse ratio of accumulator resolution (image size / int)
    grey.rows / 8,     // minimum distance between the centers of two circles
    120,               // upper threshold for the internal Canny edge detector (should be ~3x the next value)
    40,                // threshold for center detection (minimum number of votes); lower this if no circles are detected
    12,                // minimum radius to be detected; use zero if unknown
    80                 // maximum radius to be detected; use zero if unknown
);
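For context, a minimal self-contained version of the capture/detect loop looks roughly like this (a reconstruction under assumptions, not the original script; only greyGB, circles, and the parameter values are taken from the call above):
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::VideoCapture cap(0);               // assumed camera index
    cv::Mat frame, grey, greyGB;
    std::vector<cv::Vec3f> circles;
    while (cap.read(frame)) {
        cv::cvtColor(frame, grey, CV_BGR2GRAY);               // greyscale
        cv::GaussianBlur(grey, greyGB, cv::Size(9, 9), 2, 2); // denoise before Hough (kernel size assumed)
        circles.clear();
        cv::HoughCircles(greyGB, circles, CV_HOUGH_GRADIENT,
                         1, grey.rows / 8, 120, 40, 12, 80);
        // ... analyse circles here ...
    }
    return 0;
}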

It would seem that the MSVS2013 installation is "incompatible" with OpenCV 2.4.10 in this instance. The answer was to forget MSVS13 and install MS Visual C++ 2010 instead. The only tricky bit is finding a registration key to activate the free version of MSVC10. Use this one: 6VPJ7-H3CXH-HBTPT-X4T74-3YVY7
Otherwise, once you have that sorted, follow these instructions to configure OpenCV in MSVC10:
Installing OpenCV 2.4.3 in Visual C++ 2010 Express
All good now!

Related

How to improve accuracy of estimateAffine2D (or estimateRigidTransform) in OpenCV?

I have two sets of points, one from time t-1 and one from the current time t. The first set was generated using goodFeaturesToTrack, and the latter by calcOpticalFlowPyrLK(). Using these two sets of points, I then estimate a transformation matrix via estimateAffinePartial2D() in order to keep track of the object's scale and rotation. A code snippet is listed below:
// Precompute image pyramids
maxLvl = cv::buildOpticalFlowPyramid(_imgPrev, imPyr1, _winSize, maxLvl, true);
maxLvl = cv::buildOpticalFlowPyramid(tmpImg, imPyr2, _winSize, maxLvl, true);
// Optical flow call for tracking pixels
cv::calcOpticalFlowPyrLK(imPyr1, imPyr2, _currentPoints, nextPts, status, err, _winSize, maxLvl, _terminationCriteria, 0, 0.000001);
// Get transformation matrix between the two data sets
cv::Mat H = cv::estimateAffinePartial2D(_currentPoints, nextPts, inlier_mask, cv::RANSAC, 10.0, 2000, 0.99);
Using H, I then map my masking points using perspectiveTransform(). The result seems accurate for the first few dozen frames, until I notice some drift (in terms of rotation) occurring as the object I am tracking continues to rotate (usually once the rotation becomes > M_PI). I'm honestly stumped about where the culprit is, but my main suspicion is that my window size for optical flow might be too small or too big. However, tweaking the window size did not seem to help: the position of my object is still accurate, but the estimated rotation (and scale) got worse. Can anyone shed some light on this?
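For what it's worth, a partial affine (similarity) matrix has the form [a -b tx; b a ty], so rotation and scale can be read directly off H. A minimal sketch (not the original code; H is the 2x3 CV_64F matrix returned by estimateAffinePartial2D):
double a = H.at<double>(0, 0);
double b = H.at<double>(1, 0);
double angle = std::atan2(b, a);         // rotation in radians; wraps at +/- M_PI
double scale = std::sqrt(a * a + b * b); // uniform scale factor
Note that atan2 wraps at +/- M_PI, so if per-frame angles are accumulated they need to be unwrapped first; that may be relevant given the drift appears once the rotation exceeds M_PI.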
Warm regards and thanks.
EDIT: Images attached to show drift issue
Starting Frame
First few frames -- Rotation OK
Z-Rotation Drift occurs -- see anchor line has drifted towards the red rectangle.
The Lucas-Kanade tracker needs more features; my guess is that the tracking template you provided is not good enough.
(1) Try other feature-rich real images, e.g. the OpenCV feature tracking template image.
(2) Fix the scale. Since you are doing a simulation, you can try to anchor the size first.
calcOpticalFlowPyrLK is widely used in visual-inertial state estimation studies, such as SVO (semi-direct visual odometry) or VINS-Mono. You can look at the code inside those projects to see how other people work with the features and parameters. A sketch of one such pattern follows.
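A common pattern in those pipelines is to keep the points that survived the flow step and re-seed with goodFeaturesToTrack when too few remain (a minimal sketch; variable names are taken from the snippet above, the 50/200 thresholds are assumptions):
// Keep only the points whose status flag says they were tracked
std::vector<cv::Point2f> kept;
for (size_t i = 0; i < nextPts.size(); ++i)
    if (status[i]) kept.push_back(nextPts[i]);
// Re-seed when too few tracks remain
if (kept.size() < 50) {
    std::vector<cv::Point2f> fresh;
    cv::goodFeaturesToTrack(tmpImg, fresh, 200, 0.01, 10.0);
    kept.insert(kept.end(), fresh.begin(), fresh.end());
}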

What is a reasonable size of a contours vector (openCV)?

I'm trying to track a small object with OpenCV and a Basler camera (resolution 492 x 658), but I get undefined behavior when I run the code, and I think it might have something to do with the contours.
I'm using findContours(). The object I'm tracking is approx. 15 x 15 pixels, yet the size I get for the biggest contour vector is around 500,000,000, which seems far too big.
When I use Canny before findContours, the size drops to around 170,000,000, which is smaller but still appears to be too big.
What would be a reasonable size of a contour vector tracking a 15 x 15 pixel object?
Platform: Windows 7, Visual Studio 2017
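For scale: a contour is just a vector of boundary points, so its size() is a point count. A roughly 15 x 15 blob should produce on the order of 4 x 15 = 60 boundary points, so values in the hundreds of millions suggest reading a corrupted vector rather than a real contour. A minimal sketch of checking the sizes (binaryImg is an assumed thresholded 8-bit input):
std::vector<std::vector<cv::Point> > contours;
cv::findContours(binaryImg, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
for (size_t i = 0; i < contours.size(); ++i)
    std::cout << "contour " << i << ": " << contours[i].size() << " points, area "
              << cv::contourArea(contours[i]) << std::endl;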

Suggestions on how to approach averaging of objects detected

Background
I am currently trying to build an autonomous drone using ROS on my Raspberry Pi, which is running Ubuntu MATE 16.04 LTS. Right now I am solving the computer vision problem of recognising red circles. Because the drone is not perfectly stable (even with the internal PID controller stabilising it) and because of the lighting conditions, the drone often detects the same circle only intermittently: about 80% of the frames detect the circle while the other 20% do not. The inverse is also true, in that the drone detects random noisy circles in about 20% of frames and not in the remaining 80%.
Objective
I want to know if there is a good way to average the detections across the frames I have right now. That way, I can get rid of the false positives and false negatives altogether.
Relevant Code
cv::medianBlur(intr_ptr, intr_ptr, 7);
strel_size.width = 3;
strel_size.height = 3;
cv::Mat strel = cv::getStructuringElement(cv::MORPH_ELLIPSE, strel_size);
cv::morphologyEx(img_bin, intr_ptr, cv::MORPH_OPEN, strel, cv::Point(-1, -1), 3);
cv::bitwise_not(intr_ptr, intr_ptr);
cv::GaussianBlur(intr_ptr, intr_ptr, cv::Size(7, 7), 2, 2);
std::vector<cv::Vec3f> circles; // std::vector (cv::vector is not portable)
cv::HoughCircles(intr_ptr, circles, CV_HOUGH_GRADIENT, 1, 70, 140, 15, 10, 40);
As you can see, I am performing a medianBlur, an open morphological operation, and a GaussianBlur to get rid of the noise. However, this is not enough.
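One hedged option for the averaging itself is temporal voting over a sliding window: keep the detections from the last N frames and accept a circle only if a similar one appears in at least K of them (N, K, and the distance tolerance below are assumptions, not tested values):
#include <deque>
// history holds the circles detected in each of the last N frames
std::deque<std::vector<cv::Vec3f> > history;

bool confirmed(const cv::Vec3f& c, int K, float tol) {
    int votes = 0;
    for (size_t f = 0; f < history.size(); ++f)
        for (size_t i = 0; i < history[f].size(); ++i) {
            float dx = c[0] - history[f][i][0];
            float dy = c[1] - history[f][i][1];
            if (dx * dx + dy * dy < tol * tol) { ++votes; break; }
        }
    return votes >= K;
}
// Per frame: history.push_back(circles); if (history.size() > 10) history.pop_front();
// then treat a detection as real only when confirmed(circle, 6, 20.0f) is true.
This suppresses both one-off noisy circles (too few votes) and one-off dropouts (a circle confirmed in previous frames can be reused when the current frame misses it).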

How can I detect the position and the radius of the ball using opencv?

I need to detect this ball (pictured) and find its position and radius using OpenCV. I have downloaded many code samples, but none of them works. Any help is highly appreciated.
I see you have quite a setup installed. As mentioned in the comments, please make sure that you have appropriate lighting to capture the ball, and make the ball distinguishable from its surroundings by painting it a different colour.
Once your setup is optimized for detection, you may proceed in different ways to track your ball (stationary or not). A few options:
Feature detection: via Hough circles, detect 2D circles (and their radii) that lie within a certain colour range, as explained below.
There are many more ways to detect objects via feature detection, as this clever blog post points out.
Object detection: via SURF, SIFT, and many other methods, you may detect your ball, calculate its radius, and even predict its motion (a minimal sketch follows).
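For the SURF route, a minimal keypoint-detection sketch for OpenCV 2.4 (requires the nonfree module; the Hessian threshold of 400 and the variable greyFrame are assumptions):
#include <opencv2/nonfree/features2d.hpp>

std::vector<cv::KeyPoint> keypoints;
cv::SurfFeatureDetector detector(400); // Hessian threshold
detector.detect(greyFrame, keypoints); // greyFrame: greyscale input image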
This code uses Hough circles to compute the ball's position and radius and display them in real time. I am using Qt 5.4 with OpenCV version 2.4.12.
void Dialog::TrackMe() {
    webcam.read(cim); // read one frame of the live webcam feed into the OpenCV matrix 'cim'
    if (cim.empty() == false) { // the webcam is running and delivered a frame
        // Keep only pixels whose BGR values lie between (0,0,175) and (100,100,256); store the mask in 'cproc'
        cv::inRange(cim, cv::Scalar(0, 0, 175), cv::Scalar(100, 100, 256), cproc);
        // Detect circles in cproc with the CV_HOUGH_GRADIENT method; results go to 'veccircles'
        cv::HoughCircles(cproc, veccircles, CV_HOUGH_GRADIENT, 2, cproc.rows / 4, 100, 50, 10, 100);
        for (itcircles = veccircles.begin(); itcircles != veccircles.end(); itcircles++)
        {
            cv::circle(cim, cv::Point((int)(*itcircles)[0], (int)(*itcircles)[1]), 3, cv::Scalar(0, 255, 0), CV_FILLED); // draw center point
            cv::circle(cim, cv::Point((int)(*itcircles)[0], (int)(*itcircles)[1]), (int)(*itcircles)[2], cv::Scalar(0, 0, 255), 3); // draw circle
        }
        QImage qimgprocess((uchar*)cproc.data, cproc.cols, cproc.rows, cproc.step, QImage::Format_Indexed8); // convert cv::Mat to QImage
        ui->output->setPixmap(QPixmap::fromImage(qimgprocess)); // render the QImage to the screen
    }
    else
        return; // no input; return to the calling function
}
How does the processing take place?
Once you start taking in live input of your ball, the captured frame should show where the ball is. To do so, the frame is divided into buckets, which are further divided into grids. Within each grid an edge is detected (if one exists), and from it a candidate circle. However, only those circles that pass through grids lying within the colour range mentioned above (in cv::Scalar) are considered. For every circle passing through such a grid, a counter corresponding to that grid is incremented. This is known as voting.
Each grid then stores its votes in an accumulator grid. Here, 2 is the accumulator ratio, which means the accumulator matrix stores only half as many values as the resolution of the input image cproc (e.g. a 640 x 480 input yields a 320 x 240 accumulator). After voting, we find local maxima in the accumulator matrix; their positions correspond to the circle centers in the original space.
cproc.rows/4 is the minimum distance between the centers of detected circles.
100 and 50 are, respectively, the higher and lower thresholds passed to the internal Canny edge detector, which detects edges only between the given thresholds.
10 and 100 are the minimum and maximum radii to be detected; anything above or below these values will not be detected.
Finally, the for loop goes over each circle detected in the frame and stored in veccircles, drawing the circle and a point at its center.
For the above, you may visit this link

OpenCV result changes between Debug / Release and on other machine

I have a program that tries to detect rectangular objects in an image (i.e. solar modules). For that I use C++ with OpenCV 3 and Visual Studio 2015 Update 1.
In general my program uses GaussianBlur -> morphologyEx -> Canny -> HoughLines -> findContours -> approxPolyDP. Since I have trouble finding optimal parameters, I ran many parameter combinations in order to get an optimal setting.
The problem I have is that I get different results between "Debug in Visual Studio", "Debug by using the generated .exe", "Release in Visual Studio", and "Release by using the generated .exe". Additionally, running the .exe files on other machines once again changes the result.
Running the program on the same machine with the same settings does not change the result (i.e. it seems to be deterministic). There is also no concurrency in the program (unless there is some inside OpenCV that I am not aware of).
Any idea why there is such a huge mismatch between the different settings? Parameter combinations that detect a solar module with 99% accuracy in one setting do not detect the module at all in another.
EDIT:
I tried to create a minimal working example (see below) that includes the code up to the first mismatch (there may be more mismatches later on). I tried to initialize every variable I found.
The identifier parameterset is an instance of an object that contains all the parameters I modify to find the optimum. I checked that these parameters are all initialized and identical in Debug and Release.
With this code, the first 3 images created by writeIntermediateResultImage (which basically just uses the OpenCV function imwrite and specifies the path the image is stored to) are identical, but the morphology image differs (by 13.43% according to an online image comparison tool). One difference is that the left and upper edges of the morphology image in Release mode are black for some pixels, but there are additional differences within the image, too.
Edit: It seems that when running the code with the generated .exe in Release mode, the morphology algorithm isn't applied at all; the image is just shifted left and down, leaving a black edge at the top and bottom.
Edit: This shift seems to depend on the machine it is running on. On my notebook I get the shift without morphology being applied, while on my desktop morphology is applied without a shift or black edges.
void findSquares(const Mat& image, vector<vector<Point> >& squares, string srcName)
{
    // 1) Get HSV channels
    Mat firstStepResult(image.size(), CV_8U);
    Mat hsvImage(image.size(), CV_8UC3);

    // Convert to HSV space
    cvtColor(image, hsvImage, CV_BGR2HSV);
    writeIntermediateResultImage("HSV.jpg", hsvImage, srcName);

    // Keep the configured channel of the HSV image as greyscale
    Mat channel0Mat(image.size(), CV_8U);
    Mat channel1Mat(image.size(), CV_8U);
    Mat channel2Mat(image.size(), CV_8U);
    Mat hsv_channels[3]{ channel0Mat, channel1Mat, channel2Mat };
    split(hsvImage, hsv_channels);
    firstStepResult = hsv_channels[parameterset.hsvChannel];
    writeIntermediateResultImage("HSVChannelImage.jpg", firstStepResult, srcName);

    // 2) Gaussian denoising
    Mat gaussImage = firstStepResult;
    GaussianBlur(gaussImage, gaussImage, Size(parameterset.gaussKernelSize, parameterset.gaussKernelSize), 0, 0);
    writeIntermediateResultImage("GaussianBlur.jpg", gaussImage, srcName);

    // 3) Morphology (this is where Debug and Release first diverge; see below)
    Mat morphologyImage = gaussImage;
    morphologyEx(morphologyImage, morphologyImage, parameterset.morphologyOperator, Mat(parameterset.dilateKernelSize, parameterset.dilateKernelSize, 0), cv::Point(-1, -1), parameterset.numMorpholgies);
    writeIntermediateResultImage("Morphology.jpg", morphologyImage, srcName);
}
I also checked the library paths and the right libraries are used in the right compile mode (Debug with 'd', Release without).
I found the error in my code and I now get the same result in each configuration. The problem was the line that applied the morphology operator:
morphologyEx(morphologyImage, morphologyImage, parameterset.morphologyOperator, Mat(parameterset.dilateKernelSize, parameterset.dilateKernelSize, 0), cv::Point(-1, -1), parameterset.numMorpholgies);
Even though the Mat created with Mat(parameterset.dilateKernelSize, parameterset.dilateKernelSize, 0) appeared to work as a structuring element in Debug, it broke everything in Release. A likely reason: this constructor allocates the matrix but leaves its elements uninitialized, so the kernel contents depend on whatever happens to be in memory, which differs between Debug and Release builds.
Using
getStructuringElement(MORPH_RECT, Size(parameterset.dilateKernelSize, parameterset.dilateKernelSize))
as the structuring element did the trick.
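For completeness, a hedged comparison of the two kernels (k stands for parameterset.dilateKernelSize):
cv::Mat bad(k, k, CV_8UC1); // allocated but uninitialized: contents depend on leftover heap memory
cv::Mat ones = cv::Mat::ones(k, k, CV_8UC1); // deterministic all-ones kernel
cv::Mat rect = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(k, k)); // the fix used above
With getStructuringElement (or any explicitly initialized Mat), the kernel contents are fully specified, so Debug and Release builds see the same structuring element.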