OpenCV result changes between Debug / Release and on other machines - C++

I have a program that tries to detect rectangular objects in an image (i.e. solar modules). For that I use C++ with OpenCV 3 and Visual Studio 2015 Update 1.
In general my program uses GaussianBlur -> morphologyEx -> Canny -> HoughLines -> findContours -> approxPolyDP. Since I have trouble finding optimal parameters, I ran many parameter combinations in order to search for an optimal parameter setting.
The problem I have is that I get different results between "Debug in Visual Studio", "Debug by using the generated .exe", "Release in Visual Studio", and "Release by using the generated .exe". Additionally, running the .exe files on other machines changes the result yet again.
Running the program on the same machine with the same settings does not change the result (i.e. it seems to be deterministic). There is also no concurrency in the program (unless there is some inside OpenCV I am not aware of).
Any idea why there is such a huge mismatch between the different configurations (parameter combinations that detect a solar module with 99% accuracy in one configuration do not detect the module at all in another)?
EDIT:
I tried to create a minimum working example (see below) where I included the code until I get the first mismatch (perhaps there are more mismatches later on). I tried to initialize every variable I found.
The identifier parameterset is an instance of an object that contains all parameters I modify to find the optimum. I checked that those parameters are all initialized and identical in Debug and Release.
With this code, the first 3 images created by writeIntermediateResultImage (which basically just wraps the OpenCV function imwrite and specifies the path the image is stored to) are identical, but the morphology image differs (by 13.43% according to an online image comparison tool I found). One difference is that the left and upper edges of the morphology image in Release mode are black for some pixels, but there are additional differences within the image as well.
Edit: It seems that when running the code with the generated .exe file in Release mode, the morphology algorithm isn't applied at all; the image is just shifted left and down, leaving a black edge at the top and bottom.
Edit: This shift seems to depend on the machine the code runs on. On my notebook I get the shift without morphology being applied, and on my desktop morphology is applied without a shift and black edges.
void findSquares(const Mat& image, vector<vector<Point> >& squares, string srcName)
{
    // 1) Get HSV channels
    Mat firstStepResult(image.size(), CV_8U);
    Mat hsvImage(image.size(), CV_8UC3);

    // Convert to HSV space
    cvtColor(image, hsvImage, CV_BGR2HSV);
    writeIntermediateResultImage("HSV.jpg", hsvImage, srcName);

    // Transform Value channel of HSV image to greyscale
    Mat channel0Mat(image.size(), CV_8U);
    Mat channel1Mat(image.size(), CV_8U);
    Mat channel2Mat(image.size(), CV_8U);
    Mat hsv_channels[3]{ channel0Mat, channel1Mat, channel2Mat };
    split(hsvImage, hsv_channels);
    firstStepResult = hsv_channels[parameterset.hsvChannel];
    writeIntermediateResultImage("HSVChannelImage.jpg", firstStepResult, srcName);

    // 2) Gaussian Denoising
    Mat gaussImage = firstStepResult;
    GaussianBlur(gaussImage, gaussImage, Size(parameterset.gaussKernelSize, parameterset.gaussKernelSize), 0, 0);
    writeIntermediateResultImage("GaussianBlur.jpg", gaussImage, srcName);

    // 3) Morphology
    Mat morphologyImage = gaussImage;
    morphologyEx(morphologyImage, morphologyImage, parameterset.morphologyOperator,
                 Mat(parameterset.dilateKernelSize, parameterset.dilateKernelSize, 0),
                 cv::Point(-1, -1), parameterset.numMorpholgies);
    writeIntermediateResultImage("Morphology.jpg", morphologyImage, srcName);
}
I also checked the library paths and the right libraries are used in the right compile mode (Debug with 'd', Release without).

I found the error in my code and I now get the same result in each configuration. The problem was the line that used the morphology operator.
morphologyEx(morphologyImage, morphologyImage, parameterset.morphologyOperator, Mat(parameterset.dilateKernelSize, parameterset.dilateKernelSize, 0), cv::Point(-1, -1), parameterset.numMorpholgies);
Even though the Mat object created with Mat(parameterset.dilateKernelSize, parameterset.dilateKernelSize, 0) happened to work as a structuring element in Debug, it messed up everything in Release. This constructor leaves the matrix data uninitialized, so the kernel contents are effectively random and can differ between build configurations and machines, which explains the inconsistent results.
Using
getStructuringElement(MORPH_RECT, Size(parameterset.dilateKernelSize, parameterset.dilateKernelSize))
as the structuring element did the trick.
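For reference, the corrected call would look roughly like this (a sketch reusing the parameter names from the question; the remaining arguments are unchanged):
// Build a properly initialized rectangular structuring element instead of
// passing an uninitialized Mat as the kernel.
Mat kernel = getStructuringElement(MORPH_RECT,
    Size(parameterset.dilateKernelSize, parameterset.dilateKernelSize));

morphologyEx(morphologyImage, morphologyImage,
             parameterset.morphologyOperator, kernel,
             cv::Point(-1, -1), parameterset.numMorpholgies);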

Related

How do you get rid of the blank gap left after cropping an image in opencv c++?

I am writing a program to analyze pictures and crop them around an object in the picture. The program crops the images well, but it leaves a weird gap on the side.
I copied the code from the accepted answer to this question:
Opencv c++ detect and crop white region on image
The image I start with sits on a larger canvas. The result I get is close, but I want to get rid of the extra white space on the left side so that the crop sits tight against the phone case.
Please help. I am using OpenCV and C++ in Visual Studio 2015.
This picture is not cropped correctly because of salt-and-pepper noise. To get rid of it, use a median blur. You can then use the blurred image to fill nonBlackList and use that list to crop the original image correctly. Since the image appears to have been slightly magnified after the noise was introduced, you should probably use an aperture size of at least 5 to remove the noise completely.
cv::Mat in = cv::imread("CropWhite.jpg");
cv::Mat blurred;
cv::medianBlur(in, blurred, 5);
...
if (blurred.at<cv::Vec3b>(j,i) != cv::Vec3b(255,255,255))
{
    nonBlackList.push_back(cv::Point(i,j));
}
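Put together, the whole crop might look something like this (a sketch based on the approach above; the pixel loop and the output file name are my additions, and it assumes the background is pure white):
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat in = cv::imread("CropWhite.jpg");
    cv::Mat blurred;
    cv::medianBlur(in, blurred, 5); // remove the salt-and-pepper noise first

    // Collect every pixel that is not pure white in the *blurred* image...
    std::vector<cv::Point> nonBlackList;
    for (int j = 0; j < blurred.rows; ++j)
        for (int i = 0; i < blurred.cols; ++i)
            if (blurred.at<cv::Vec3b>(j, i) != cv::Vec3b(255, 255, 255))
                nonBlackList.push_back(cv::Point(i, j));

    // ...and crop the *original* image to the bounding box of those pixels.
    cv::Rect bb = cv::boundingRect(nonBlackList);
    cv::imwrite("CropWhiteResult.jpg", in(bb));
    return 0;
}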

OpenCV border mode issue with blur filter

I've been stuck on this for a few days now, maybe someone will be able to help me here.
I'm using the OpenCV C++ API to perform some basic image processing. I have a step where I want to blur my image and specify BORDER_WRAP as my border type:
cv::blur(img, img, cv::Size(3, 3), cv::Point(-1, -1), cv::BORDER_WRAP);
But when executing my code, I get the following error:
OpenCV Error: Assertion failed (columnBorderType != BORDER_WRAP)
However, everything works fine when I use other border types (BORDER_REFLECT, for example), but I need BORDER_WRAP.
Things seem to work if I first call copyMakeBorder(img, img, 1, 1, 1, 1, cv::BORDER_WRAP) on my image, blur this new image and then crop it back to the size of the original one, but I still can't figure out why my first try doesn't work.
Does anyone know how I can solve this?
You can't do this. BORDER_WRAP is not accepted by all functions - it is valid for just a few of them and, as the assertion failure confirms, cv::blur is not one of them.
But as you've already found out yourself, you can first use cv::copyMakeBorder, blur this new image and crop it back to the size of the original.
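A minimal sketch of that workaround (assuming a 3x3 blur, so one pixel of padding per side is enough; the function and variable names are mine):
#include <opencv2/opencv.hpp>

cv::Mat blurWithWrap(const cv::Mat& img)
{
    // Pad by one pixel on every side with wrap-around borders,
    // since cv::blur itself rejects BORDER_WRAP.
    cv::Mat padded;
    cv::copyMakeBorder(img, padded, 1, 1, 1, 1, cv::BORDER_WRAP);

    // Blur the padded image; the border mode used here no longer matters
    // for the interior pixels we keep.
    cv::blur(padded, padded, cv::Size(3, 3));

    // Crop back to the original size, dropping the padding.
    return padded(cv::Rect(1, 1, img.cols, img.rows)).clone();
}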

OpenCV GPU SURF ROI error?

I am using VS2012 C++/CLI on a Win 8.1 64 bit machine with OpenCV 3.0.
I am trying to implement the GPU version of SURF.
When I do not specify an ROI, I have no problem. Thus, the following line gives no problem and detects keypoints in the full image:
surf(*dImages->d_gpuGrayFrame,cuda::GpuMat(),Images->SurfKeypoints);
However, efforts to specify an ROI cause a crash. For example, I specify the ROI by Top,Left and Bottom,Right coordinates (which I know work in non-GPU code). In the GPU code, the following causes a crash if the ROI is any smaller than the source image itself (no crash if the ROI is the same size as the source image).
int theight = (Images->LoadFrameClone.rows - Images->CropBottom) - Images->CropTop;
int twidth = (Images->LoadFrameClone.cols - Images->CropRight) - Images->CropLeft;
Rect tRect = Rect(Images->CropLeft, Images->CropTop, twidth, theight);
cuda::GpuMat tmask = cuda::GpuMat(*dImages->d_gpuGrayFrame, tRect);
surf(*dImages->d_gpuGrayFrame, tmask, Images->SurfKeypoints); // fails on this line
I know that tmask is non-zero in size and that the underlying image is correct. As far as I can tell, the only issue is specifying an ROI in the SURF GPU call. Any leads on why this may be happening?
Thanks
I experienced the same problem in OpenCV 3.1. Presumably the SURF algorithm doesn't work with images whose step (stride) no longer matches their width, which is exactly what setting an ROI introduces. I have not tried masking to see if this makes a difference.
A workaround is to just copy the ROI to another contiguous GpuMat. Memory-to-memory copies are almost free on the GPU (my GTX 780 does device-to-device memcopy at 142 GB/s), which makes this hack a bit less odious.
GpuMat src;                       // filled with some image
GpuMat srcRoi = GpuMat(src, roi); // roi within src
GpuMat dst;
srcRoi.copyTo(dst);
surf(dst, GpuMat(), KeypointsGpu, DescriptorsGpu);

opencv c++ HoughCircles causing breakpoint in Visual Studio 2013

I have been detecting circles in frames from a video feed.
The script captures a frame, detects any circles, and then analyses them before taking another frame to repeat the process all over again.
Each time I ran this, the script would take the first frame, detect the circles, and then, once all circles in that frame had been analysed, hit a breakpoint with "Invalid address specified to RtlValidateHeap".
I commented out the entire script and slowly narrowed it down to where I was able to determine that it was the HoughCircles function that was causing the problem.
Has anyone else experienced this?
This is the function for what it is worth:
HoughCircles(
    greyGB,            // Mat input source
    circles,           // vector<Vec3f> output that stores 3 values per detected circle: x_c, y_c, r
    CV_HOUGH_GRADIENT, // detection method
    1,                 // the inverse ratio of resolution (size of image / int)
    grey.rows / 8,     // minimum distance between the centers of two circles
    120,               // upper threshold for the internal Canny edge detector (should be 3x the next number)
    40,                // threshold for center detection (minimum number of votes; lower this if no circles are detected)
    12,                // minimum radius to be detected; if unknown, put zero as default
    80                 // maximum radius to be detected; if unknown, put zero as default
);
It would seem that the answer is that the MSVS 2013 installation is "incompatible" with OpenCV 2.4.10 in this instance. The solution was to forget MSVS 2013 and install MS Visual C++ 2010 instead. The only tricky bit with that is finding a registration key to activate the free version of MSVC 2010. Use this one: 6VPJ7-H3CXH-HBTPT-X4T74-3YVY7
Otherwise, once you have that sorted, follow these instructions to configure opencv in MSVC10:
Installing OpenCV 2.4.3 in Visual C++ 2010 Express
All good now!

OpenCV: findContours exception

My MATLAB code is:
h = fspecial('average', filterSize);
imageData = imfilter(imageData, h, 'replicate');
bwImg = im2bw(imageData, grayThresh);
cDist=regionprops(bwImg, 'Area');
cDist=[cDist.Area];
The OpenCV code is:
cv::blur(dst, dst,cv::Size(filterSize,filterSize));
dst = im2bw(dst, grayThresh);
cv::vector<cv::vector<cv::Point> > contours;
cv::vector<cv::Vec4i> hierarchy;
cv::findContours(dst,contours,hierarchy,CV_RETR_CCOMP, CV_CHAIN_APPROX_NONE);
Here is my image-to-black-and-white function:
cv::Mat AutomaticMacbethDetection::im2bw(cv::Mat src, double grayThresh)
{
    cv::Mat dst;
    cv::threshold(src, dst, grayThresh, 1, CV_THRESH_BINARY);
    return dst;
}
I'm getting an exception in findContours(): C++ exception: cv::Exception at memory location 0x0000003F6E09E0A0.
Can you please explain what I am doing wrong?
dst is a cv::Mat that I have used all along; it holds my original values.
Update: here is my matrix written to a *.txt file:
http://www.filedropper.com/gili
UPDATE 2:
I have added dst.convertTo(dst, CV_8U); as Micka suggested, and I no longer get an exception. However, the values are nothing like expected.
Take a look at this question which has a similar problem to what you're encountering: Matlab and OpenCV calculate different image moment m00 for the same image.
Basically, the OP in the linked post is trying to find the zeroth image moment for both x and y of all closed contours - which is actually just the area, by using findContours in OpenCV and regionprops in MATLAB. In MATLAB, that can be accessed by the Area property from regionprops, and judging from your MATLAB code, you wish to find the same quantity.
From the post, there is most certainly a difference between how OpenCV and MATLAB finds contours in an image. This boils down to the way both platforms consider what is a "connected pixel". OpenCV only uses a four-pixel neighbourhood while MATLAB uses an eight-pixel neighbourhood.
As such, there is nothing wrong with your implementation, and converting to 8UC1 is good. However, the areas (and ultimately the total number of connected components and the contours themselves) found by MATLAB and OpenCV are not the same. The only way for you to get exactly the same result is to manually draw each contour found by findContours onto a black image and then use the cv::moments function directly on that image.
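A hedged sketch of that idea (the helper name is mine, and it assumes the binary image has already been converted to CV_8UC1):
#include <opencv2/opencv.hpp>
#include <vector>

// For each contour found by findContours, rasterize it as a filled blob on a
// black canvas and take the zeroth moment m00, i.e. the filled area.
std::vector<double> contourAreasViaMoments(const cv::Mat& binary8u)
{
    std::vector<std::vector<cv::Point> > contours;
    std::vector<cv::Vec4i> hierarchy;
    cv::findContours(binary8u.clone(), contours, hierarchy,
                     CV_RETR_CCOMP, CV_CHAIN_APPROX_NONE);

    std::vector<double> areas;
    for (size_t i = 0; i < contours.size(); ++i)
    {
        cv::Mat canvas = cv::Mat::zeros(binary8u.size(), CV_8UC1);
        cv::drawContours(canvas, contours, static_cast<int>(i),
                         cv::Scalar(255), CV_FILLED);
        areas.push_back(cv::moments(canvas, true).m00);
    }
    return areas;
}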
However, because cv::blur() and fspecial with an even-sized averaging mask handle image borders differently, you still may not get the same results along the borders of the image. If there are no important contours around the borders of your image, then hopefully this will give you the right result.
Good luck!