EXC_BAD_ACCESS error in OpenCV? - C++

I'm trying to write code for image segmentation in OpenCV. As part of the image processing, I'm trying to detect the edges of a test image using a Sobel filter.
To find the magnitude of the gradient in both the dX and dY directions, I'm computing the Euclidean distance of the two gradients. But when I run the code I get the above error. I do know that this error occurs when I try to ACCESS an unavailable location in memory, but I am sure I have defined every Mat in my code.
This is part of my code.
// Blur the raw image to remove noise
GaussianBlur(src, src, kernel, 2);
// Run Sobel edge detector
Sobel(src, edgeX, src.depth(), 1, 0);
Sobel(src, edgeY, src.depth(), 0, 1);
edge = Mat::zeros(317, 554, CV_8UC1);
for (int r = 0; r < edgeX.rows; r++)
{
    for (int c = 0; c < edgeY.cols; c++)
    {
        edge.at<double>(r,c) = sqrt((edgeX.at<double>(r,c)*edgeX.at<double>(r,c)) + (edgeY.at<double>(r,c)*edgeY.at<double>(r,c)));
    }
}
where:
src: the RGB test image
edgeX: Sobel output with the dX gradient
edgeY: Sobel output with the dY gradient
edge: the Mat holding the Euclidean distance.
I get the error at this line
edge.at<double>(r,c) = sqrt((edgeX.at<double>(r,c)*edgeX.at<double>(r,c)) + (edgeY.at<double>(r,c)*edgeY.at<double>(r,c)));
when trying to access edge.at<double>(316,395)
How do I debug this?
What am I doing wrong?

edge is a matrix of type CV_8UC1, which means a matrix of uchar, not of double.
You need to access it with at<uchar>:
edge.at<uchar>(r,c) = sqrt((edgeX.at<uchar>(r,c)*edgeX.at<uchar>(r,c)) + (edgeY.at<uchar>(r,c)*edgeY.at<uchar>(r,c)));
You can avoid this kind of problem by using Mat_<Tp>, which also allows easier access without the .at function:
// Note: edgeX and edgeY must also be Mat1b for the () accessor to work
Mat1b edge(317, 554, uchar(0));
for (int r = 0; r < edgeX.rows; r++) {
    for (int c = 0; c < edgeY.cols; c++) {
        edge(r, c) = sqrt((edgeX(r, c) * edgeX(r, c)) + (edgeY(r, c) * edgeY(r, c)));
    }
}
In this case, you can also use cv::magnitude, which performs the same operation as your for loops (but it needs matrices of float):
Sobel(src, edgeX, CV_32F, 1, 0);
Sobel(src, edgeY, CV_32F, 0, 1);
Mat edge;
magnitude(edgeX, edgeY, edge);
// Convert to CV_8UC1
edge.convertTo(edge, CV_8UC1);
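Putting the pieces together, a minimal end-to-end sketch of the magnitude-based approach might look like this (the file names are placeholders, not from the original question):
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat src = imread("test.png", IMREAD_GRAYSCALE); // hypothetical input file
    if (src.empty()) return -1;
    GaussianBlur(src, src, Size(3, 3), 2);
    Mat edgeX, edgeY, edge;
    Sobel(src, edgeX, CV_32F, 1, 0); // dX gradient as float
    Sobel(src, edgeY, CV_32F, 0, 1); // dY gradient as float
    magnitude(edgeX, edgeY, edge);   // per-pixel sqrt(dx*dx + dy*dy)
    edge.convertTo(edge, CV_8UC1);   // saturating cast back to uchar
    imwrite("edge.png", edge);
    return 0;
}
Working in CV_32F throughout also avoids the uchar saturation you would otherwise hit wherever the gradient magnitude exceeds 255.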

Related

Matrix assignment value error in OpenCV C++ with mat.at<uchar>(i,j)

I am learning image processing with OpenCV in C++. To implement a basic down-sampling algorithm I need to work at the pixel level, to remove rows and columns. However, when I assign values with mat.at<>(i,j), other values are assigned - things like 1e-38.
Here is the code :
Mat src, dst;
src = imread("diw3.jpg", CV_32F); // src is a 479x359 grayscale image
// dst will contain src low-pass-filtered; I checked by displaying it, it works fine
Mat kernel;
kernel = Mat::ones(3, 3, CV_32F) / (float)(9);
filter2D(src, dst, -1, kernel, Point(-1, -1), 0, BORDER_DEFAULT);
// Now I try to remove half the rows/columns; the result is stored in downsampled
Mat downsampled = Mat::zeros(240, 180, CV_32F);
for (int i = 0; i < downsampled.rows; i++) {
    for (int j = 0; j < downsampled.cols; j++) {
        downsampled.at<uchar>(i, j) = dst.at<uchar>(2 * i, 2 * j);
    }
}
Since I read here (OpenCV outputting odd pixel values) that for cout I needed to cast, I wrote downsampled.at<uchar>(i,j) = (int) dst.at<uchar>(2*i,2*j), but that does not work either.
The second argument to cv::imread is cv::ImreadModes, so the line:
src = imread("diw3.jpg", CV_32F);
is not correct; it should probably be:
cv::Mat src_8u = imread("diw3.jpg", cv::IMREAD_GRAYSCALE);
src_8u.convertTo(src, CV_32FC1);
which will read the image as an 8-bit grayscale image, and will convert it to floating-point values.
The loop should look something like this:
Mat downsampled = Mat::zeros(240, 180, CV_32FC1);
for (int i = 0; i < downsampled.rows; i++) {
    for (int j = 0; j < downsampled.cols; j++) {
        downsampled.at<float>(i, j) = dst.at<float>(2 * i, 2 * j);
    }
}
Note that the argument to cv::Mat::zeros is CV_32FC1 (1 channel of 32-bit floating-point values), so the Mat::at<float> method should be used.
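As a general rule, the template argument to at<> must match the element type of the Mat. A small illustrative sketch of the correspondence (not from the original answer):
Mat m8u(4, 4, CV_8UC1);    // uchar elements  -> m8u.at<uchar>(i, j)
Mat m16s(4, 4, CV_16SC1);  // short elements  -> m16s.at<short>(i, j)
Mat m32f(4, 4, CV_32FC1);  // float elements  -> m32f.at<float>(i, j)
Mat m64f(4, 4, CV_64FC1);  // double elements -> m64f.at<double>(i, j)
For this particular task, cv::resize with INTER_NEAREST would achieve essentially the same row/column dropping without a manual loop.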

Normalising an image in OpenCV

I have an RGB image stored in a Mat data structure. I am converting the image to grayscale using the cvtColor function in OpenCV. After that I am trying to normalise the image to the range [0,1], using OpenCV's default normalize function. To check correctness, I tried printing the pixel values and comparing them with the MATLAB values (the MATLAB values are already in the range [0,1]). But the values differ a lot. Help me make both results almost the same. Below are the OpenCV and MATLAB codes.
Mat img1 = imread("D:/input.png", CV_LOAD_IMAGE_COLOR);
cvtColor(img1, img1, CV_BGR2GRAY);
img1.convertTo(img1, CV_32FC1);
cv::normalize(img1, img1, 0.0, 1.0, NORM_MINMAX, CV_32FC1);
for (int i = 0; i < img1.rows; i++)
{
    for (int j = 0; j < img1.cols; j++)
    {
        cout << img1.at<float>(i, j) << endl;
    }
}
Matlab code:
I=im2double(imread('input.png'));
gI=rgb2gray(I);
display(gI)
I don't think you want to normalize here. The MATLAB conversion rgb2gray uses this equation: 0.2989 * R + 0.5870 * G + 0.1140 * B. So there's no expectation that the output greyscale image contains the minimum value 0.0 or the maximum value 1.0. You would only expect 0 and 1 if you had pure black (0,0,0) and pure white (255,255,255) pixels.
Try this instead of normalize (converting to float before scaling, so the 8-bit values are not rounded away):
img.convertTo(img, CV_32FC3, 1. / 255); // scale to [0,1] in floating point
cvtColor(img, img, CV_BGR2GRAY);
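As a hedged sanity check of the equivalence (the file name is a placeholder), you can compute the gray value of one pixel by hand and compare it with cvtColor's output; OpenCV uses the weights 0.299, 0.587, 0.114, which match MATLAB's to three decimal places:
Mat bgr = imread("D:/input.png", CV_LOAD_IMAGE_COLOR);
Mat bgrF, gray;
bgr.convertTo(bgrF, CV_32FC3, 1. / 255);
cvtColor(bgrF, gray, CV_BGR2GRAY);
Vec3f px = bgrF.at<Vec3f>(0, 0); // channels are in B, G, R order
float manual = 0.114f * px[0] + 0.587f * px[1] + 0.299f * px[2];
cout << manual << " vs " << gray.at<float>(0, 0) << endl;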

What does the gradient from Sobel mean?

I have the gradients from the Sobel operator for each pixel, in my case 320x480. But how can I relate them to the orientation? For example, I'm planning to draw an orientation map for fingerprints. So, how do I start?
Is it by dividing the gradients into blocks (for example 16x24), then adding the gradients together and dividing by 384 to get the average gradient? Then from there, draw a line from the center of each block using the average gradient?
Correct me if I'm wrong. Thank you.
Here is the code that I used to find the gradients:
cv::Mat original_Mat = cv::imread("original.bmp", 1);
cv::Mat grad = cv::Mat::zeros(original_Mat.size(), CV_64F);
cv::Mat grad_x = cv::Mat::zeros(original_Mat.size(), CV_64F);
cv::Mat grad_y = cv::Mat::zeros(original_Mat.size(), CV_64F);
/// Gradient X
cv::Sobel(original_Mat, grad_x, CV_16S, 1, 0, 3);
/// Gradient Y
cv::Sobel(original_Mat, grad_y, CV_16S, 0, 1, 3);
short* pixelX = grad_x.ptr<short>(0);
short* pixelY = grad_y.ptr<short>(0);
int count = 0;
int min = 999999;
int max = -1;
int a = 0, b = 0;
for (int i = 0; i < grad_x.rows * grad_x.cols; i++)
{
    double directionRAD = atan2(pixelY[i], pixelX[i]);
    int directionDEG = (int)(180 + directionRAD / CV_PI * 180);
    //printf("%d ", directionDEG);
    if (directionDEG < min) { min = directionDEG; }
    if (directionDEG > max) { max = directionDEG; }
    if (directionDEG < 0 || directionDEG > 360)
    {
        cout << "Weird gradient direction given in method: getGradients.";
    }
}
There are several ways to visualize an orientation map:
As you suggested, you could draw it block-wise, but then you would have to be careful about "averaging" the directions. For example, what happens if you average the directions 0° and 180°?
More commonly, the direction is simply mapped to a grey value. This visualizes the gradient per pixel (a sketch follows after this list), for example as:
int v = (int)(128 + directionRAD / CV_PI * 128);
(Disclaimer: not 100% sure about the 128; one of them might actually have to be a 127.)
Or you could map the x and y gradient magnitudes to the r and g components, respectively, ideally after normalizing the gradient vector to length 1. Assuming normX is the normalized gradient in the x direction, with values between -1 and 1:
int red = (int)((normX + 1) * 127.5);
int green = (int)((normY + 1) * 127.5);
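A minimal sketch of the grey-value mapping, assuming grad_x and grad_y are single-channel CV_16S Sobel outputs (i.e. the image was converted to grayscale first); the 127/128 constants follow the disclaimer above:
Mat orientation(grad_x.size(), CV_8UC1);
for (int r = 0; r < grad_x.rows; r++) {
    for (int c = 0; c < grad_x.cols; c++) {
        double rad = atan2((double)grad_y.at<short>(r, c),
                           (double)grad_x.at<short>(r, c)); // range (-pi, pi]
        orientation.at<uchar>(r, c) = (uchar)(128 + rad / CV_PI * 127);
    }
}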
Averaging depends on the Sobel kernel size.
It's better to use CV_32F or CV_64F instead of CV_16S for the results.
Also, you can speed up your code using the cv::phase method.
See my answer here: Sobel operator for gradient angle
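For reference, a minimal sketch using cv::phase (assuming a single-channel input, which you can get with cvtColor):
Mat gray, gx, gy, angle;
cvtColor(original_Mat, gray, CV_BGR2GRAY);
Sobel(gray, gx, CV_32F, 1, 0, 3);
Sobel(gray, gy, CV_32F, 0, 1, 3);
phase(gx, gy, angle, true); // per-pixel gradient angle in degrees, range [0, 360)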

reprojectImageTo3D() in OpenCV

I've been trying to compute real-world coordinates of points from a disparity map using the reprojectImageTo3D() function provided by OpenCV, but the output seems to be incorrect.
I have the calibration parameters, and compute the Q matrix using
stereoRectify(left_cam_matrix, left_dist_coeffs, right_cam_matrix, right_dist_coeffs, frame_size, stereo_params.R, stereo_params.T, R1, R2, P1, P2, Q, CALIB_ZERO_DISPARITY, 0, frame_size, 0, 0);
I believe this first step is correct, since the stereo frames are being rectified properly, and the distortion removal I'm performing also seems all right. The disparity map is being computed with OpenCV's block matching algorithm, and it looks good too.
The 3D points are being calculated as follows:
cv::Mat XYZ(disparity8U.size(), CV_32FC3);
reprojectImageTo3D(disparity8U, XYZ, Q, false, CV_32F);
But for some reason they form some sort of cone, and are not even close to what I'd expect, considering the disparity map. I found out that other people had a similar problem with this function, and I was wondering if someone has the solution.
Thanks in advance!
[EDIT]
stereoRectify(left_cam_matrix, left_dist_coeffs, right_cam_matrix, right_dist_coeffs, frame_size, stereo_params.R, stereo_params.T, R1, R2, P1, P2, Q, CALIB_ZERO_DISPARITY, 0, frame_size, 0, 0);
initUndistortRectifyMap(left_cam_matrix, left_dist_coeffs, R1, P1, frame_size, CV_32FC1, left_undist_rect_map_x, left_undist_rect_map_y);
initUndistortRectifyMap(right_cam_matrix, right_dist_coeffs, R2, P2, frame_size, CV_32FC1, right_undist_rect_map_x, right_undist_rect_map_y);
cv::remap(left_frame, left_undist_rect, left_undist_rect_map_x, left_undist_rect_map_y, CV_INTER_CUBIC, BORDER_CONSTANT, 0);
cv::remap(right_frame, right_undist_rect, right_undist_rect_map_x, right_undist_rect_map_y, CV_INTER_CUBIC, BORDER_CONSTANT, 0);
cv::Mat imgDisparity32F = Mat(left_undist_rect.rows, left_undist_rect.cols, CV_32F);
StereoBM sbm(StereoBM::BASIC_PRESET,80,5);
sbm.state->preFilterSize = 15;
sbm.state->preFilterCap = 20;
sbm.state->SADWindowSize = 11;
sbm.state->minDisparity = 0;
sbm.state->numberOfDisparities = 80;
sbm.state->textureThreshold = 0;
sbm.state->uniquenessRatio = 8;
sbm.state->speckleWindowSize = 0;
sbm.state->speckleRange = 0;
// Compute disparity
sbm(left_undist_rect, right_undist_rect, imgDisparity32F, CV_32F);
// Compute world coordinates from the disparity image
cv::Mat XYZ(imgDisparity32F.size(), CV_32FC3);
reprojectImageTo3D(imgDisparity32F, XYZ, Q, false, CV_32F);
print_3D_points(imgDisparity32F, XYZ);
[EDIT]
Adding the code used to compute 3D coords from disparity:
cv::Vec3f *StereoFrame::compute_3D_world_coordinates(int row, int col,
        shared_ptr<StereoParameters> stereo_params_sptr) {
    cv::Mat Q_32F;
    stereo_params_sptr->Q_sptr->convertTo(Q_32F, CV_32F);
    cv::Mat_<float> vec(4, 1);
    vec(0) = col;
    vec(1) = row;
    vec(2) = this->disparity_sptr->at<float>(row, col);
    // Discard points with 0 disparity
    if (vec(2) == 0) return NULL;
    vec(3) = 1;
    vec = Q_32F * vec;
    vec /= vec(3);
    // Discard points that are too far from the camera, and thus are highly unreliable
    if (abs(vec(0)) > 10 || abs(vec(1)) > 10 || abs(vec(2)) > 10) return NULL;
    cv::Vec3f *point3f = new cv::Vec3f();
    (*point3f)[0] = vec(0);
    (*point3f)[1] = vec(1);
    (*point3f)[2] = vec(2);
    return point3f;
}
Your code seems fine to me. It could be a bug in reprojectImageTo3D. Try replacing it with the following code (which plays the same role):
cv::Mat_<cv::Vec3f> XYZ(disparity32F.rows, disparity32F.cols); // Output point cloud
cv::Mat_<float> vec_tmp(4, 1);
for (int y = 0; y < disparity32F.rows; ++y) {
    for (int x = 0; x < disparity32F.cols; ++x) {
        vec_tmp(0) = x; vec_tmp(1) = y;
        vec_tmp(2) = disparity32F.at<float>(y, x); vec_tmp(3) = 1;
        vec_tmp = Q * vec_tmp;
        vec_tmp /= vec_tmp(3);
        cv::Vec3f &point = XYZ.at<cv::Vec3f>(y, x);
        point[0] = vec_tmp(0);
        point[1] = vec_tmp(1);
        point[2] = vec_tmp(2);
    }
}
I have never used reprojectImageTo3D; however, I am successfully using code similar to the snippet above.
[Initial answer]
As explained in the documentation for StereoBM, if you request a CV_16S disparity map, you have to divide each disparity value by 16 before using it.
Hence, you should convert the disparity map as follows before using it:
imgDisparity16S.convertTo(imgDisparity32F, CV_32F, 1. / 16);
You can also directly request a CV_32F disparity map from the StereoBM structure, in which case you directly get the true disparities.
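For completeness, a minimal sketch of both options using the pre-3.0 StereoBM API matching the question's code (the preset and parameter values are placeholders):
StereoBM sbm(StereoBM::BASIC_PRESET, 80, 11);
Mat imgDisparity16S, imgDisparity32F;
// Option 1: request CV_16S, then scale by 1/16 to get the true disparities
sbm(left_undist_rect, right_undist_rect, imgDisparity16S, CV_16S);
imgDisparity16S.convertTo(imgDisparity32F, CV_32F, 1. / 16);
// Option 2: request CV_32F directly and skip the scaling
sbm(left_undist_rect, right_undist_rect, imgDisparity32F, CV_32F);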

Need help implementing a special edge detector

I'm implementing an approach from a research paper. Part of the approach calls for a major edge detector, which the authors describe as follows:
Obtain DC image (effectively downsample by 8 for both width and height)
Calculate Sobel gradient of DC image
Threshold Sobel gradient image (using T=120)
Morphological operations to clean up edge image
Note that this is NOT Canny edge detection - they don't bother with things like non-maximum suppression, etc. I could of course do this with Canny edge detection, but I want to implement things exactly as they are expressed in the paper.
That last step is the one I'm a bit stuck on.
Here is exactly what the authors say about it:
After obtaining the binary edge map from the edge detection process, a binary morphological operation is employed to remove isolated edge pixels, which might cause false alarms during the edge detection.
Here's how things are supposed to look at the end of it all (edge blocks have been filled in black):
Here's what I have if I skip the last step:
It seems to be on the right track. So here's what happens if I do erosion for step 4:
I've tried combinations of erosion and dilation to obtain the same result as they do, but don't get anywhere close. Can anyone suggest a combination of morphological operators that will get me the desired result?
Here's the binarization output, in case anyone wants to play around with it:
And if you're really keen, here is the source code (C++):
#include <cv.h>
#include <highgui.h>
#include <stdlib.h>
#include <assert.h>

using cv::Mat;
using cv::Size;

#include <stdio.h>

#define DCTSIZE 8
#define EDGE_PX 255

/*
 * Display a matrix as an image on the screen.
 */
void
show_mat(char *heading, Mat const &m)
{
    Mat clone = m.clone();
    Mat scaled(clone.size(), CV_8UC1);
    convertScaleAbs(clone, scaled);
    IplImage ipl = scaled;
    cvNamedWindow(heading, CV_WINDOW_AUTOSIZE);
    cvShowImage(heading, &ipl);
    cvWaitKey(0);
}

/*
 * Get the DC components of the specified matrix as an image.
 */
Mat
get_dc(Mat const &m)
{
    Size s = m.size();
    assert(s.width % DCTSIZE == 0);
    assert(s.height % DCTSIZE == 0);
    Size dc_size = Size(s.height/DCTSIZE, s.width/DCTSIZE);
    Mat dc(dc_size, CV_32FC1);
    cv::resize(m, dc, dc_size, 0, 0, cv::INTER_AREA);
    return dc;
}

/*
 * Detect the edges:
 *
 * Sobel operator
 * Thresholding
 * Morphological operations
 */
Mat
detect_edges(Mat const &src, int T)
{
    Mat sobelx = Mat(src.size(), CV_32FC1);
    Mat sobely = Mat(src.size(), CV_32FC1);
    Mat sobel_sum = Mat(src.size(), CV_32FC1);
    cv::Sobel(src, sobelx, CV_32F, 1, 0, 3, 0.5);
    cv::Sobel(src, sobely, CV_32F, 0, 1, 3, 0.5);
    cv::add(cv::abs(sobelx), cv::abs(sobely), sobel_sum);

    Mat binarized = src.clone();
    cv::threshold(sobel_sum, binarized, T, EDGE_PX, cv::THRESH_BINARY);
    cv::imwrite("binarized.png", binarized);

    //
    // TODO: this is the part I'm having problems with.
    //
#if 0
    //
    // Try a 3x3 cross structuring element.
    //
    Mat elt(3, 3, CV_8UC1);
    elt.at<uchar>(0, 1) = 0;
    elt.at<uchar>(1, 0) = 0;
    elt.at<uchar>(1, 1) = 0;
    elt.at<uchar>(1, 2) = 0;
    elt.at<uchar>(2, 1) = 0;
#endif

    Mat dilated = binarized.clone();
    //cv::dilate(binarized, dilated, Mat());
    cv::imwrite("dilated.png", dilated);

    Mat eroded = dilated.clone();
    cv::erode(dilated, eroded, Mat());
    cv::imwrite("eroded.png", eroded);
    return eroded;
}

/*
 * Black out the blocks in the image that contain DC edges.
 */
void
censure_edge_blocks(Mat &orig, Mat const &edges)
{
    Size s = edges.size();
    for (int i = 0; i < s.height; ++i)
        for (int j = 0; j < s.width; ++j)
        {
            if (edges.at<float>(i, j) != EDGE_PX)
                continue;
            int row = i*DCTSIZE;
            int col = j*DCTSIZE;
            for (int m = 0; m < DCTSIZE; ++m)
                for (int n = 0; n < DCTSIZE; ++n)
                    orig.at<uchar>(row + m, col + n) = 0;
        }
}

/*
 * Load the image and return the first channel.
 */
Mat
load_grayscale(char *filename)
{
    Mat orig = cv::imread(filename);
    std::vector<Mat> channels(orig.channels());
    cv::split(orig, channels);
    Mat grey = channels[0];
    return grey;
}

int
main(int argc, char **argv)
{
    assert(argc == 3);
    int bin_thres = atoi(argv[2]);
    Mat orig = load_grayscale(argv[1]);
    //show_mat("orig", orig);
    Mat dc = get_dc(orig);
    cv::imwrite("dc.png", dc);
    Mat dc_edges = detect_edges(dc, bin_thres);
    cv::imwrite("dc_edges.png", dc_edges);
    censure_edge_blocks(orig, dc_edges);
    show_mat("censured", orig);
    cv::imwrite("censured.png", orig);
    return 0;
}
I can't imagine any combination of morphological operations that would produce the same edges as the supposedly correct result, given your partial result as input.
I note that the underlying image is different; this probably contributes to why your results are so different. The Lena image is fine for indicating the type of result, but not for comparisons. Do you have the exact same image as the original authors?
What the authors describe could be implemented with connected component analysis, using 8-way connectivity (a sketch follows below). I would not call that morphological, though.
I do think you are missing something else: their image does not have edges that are thicker than one pixel; yours does. The paragraph you quoted only talks about removing isolated pixels, so there must be a step you missed or implemented differently.
Good luck!
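If you do want to try the connected-component route, here is a hedged sketch. It assumes OpenCV 3.0 or newer for cv::connectedComponentsWithStats, and that the binarized edge map has been converted to CV_8UC1; min_size is a tunable assumption, not from the paper, and EDGE_PX is the 255 constant from your code:
Mat remove_isolated(Mat const &edges8u, int min_size)
{
    Mat labels, stats, centroids;
    int n = cv::connectedComponentsWithStats(edges8u, labels, stats,
                                             centroids, 8, CV_32S);
    Mat cleaned = Mat::zeros(edges8u.size(), CV_8UC1);
    for (int label = 1; label < n; ++label) // label 0 is the background
    {
        if (stats.at<int>(label, cv::CC_STAT_AREA) >= min_size)
            cleaned.setTo(EDGE_PX, labels == label); // keep this component
    }
    return cleaned;
}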
I think that what you need is a kind of erode or open that is, in a sense, 4-way and not 8-way. The default morphological kernel for OpenCV is a 3x3 rectangle (IplConvKernel with shape=CV_SHAPE_RECT). This is pretty harsh on thin edges.
You might want to try eroding with a 3x3 custom IplConvKernel with shape=CV_SHAPE_CROSS.
If you need an even finer filter, you may want to try eroding with 4 different CV_SHAPE_RECT kernels of size 1x2 and 2x1, with the anchor at (0,1) and (1,0) respectively for each.
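In the C++ API, the cross-shaped open looks something like this (a sketch only; it assumes an 8-bit binary edge map named binarized):
Mat cross = getStructuringElement(MORPH_CROSS, Size(3, 3));
Mat opened;
morphologyEx(binarized, opened, MORPH_OPEN, cross); // erode, then dilate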
First of all, your input image has a much higher resolution than the test input image, which can explain why fewer edges are detected - the changes are smoother.
Second of all, since the edges are thresholded to 0, try dilation on smaller neighborhoods (e.g. compare each pixel with its 4 original neighbors, in a non-serial manner) to get rid of isolated edges.
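One hedged way to express the 4-neighbour idea in code (it assumes a 0/255 CV_8UC1 edge map named edges; filter2D here just counts set neighbours):
Mat kernel = (Mat_<float>(3, 3) << 0, 1, 0,
                                   1, 0, 1,
                                   0, 1, 0);
Mat neighbours;
filter2D(edges / 255, neighbours, CV_32F, kernel);
Mat cleaned;
edges.copyTo(cleaned);
cleaned.setTo(0, neighbours < 1); // a pixel with no set 4-neighbours is isolated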