Problem with operations on border pixels of an image - C++

I am trying to implement a demosaicing (interpolation) algorithm for a raw image with the GRBG Bayer pattern. The idea is to use the neighboring pixels to fill in the R, G and B channels (I have attached the code). My problem is with this logic at the border pixels. For example, let i be the pixel at (0, 0); I then need the value at i - 1, which does not exist in the image. My question: is there a way to work around this, for example by masking i - 1 and the other out-of-range neighbors as 0, without adding a new border of zeros to my existing image?
Any suggestions would be helpful.
Thanks.
int rows = 256;
int cols = 512;
Mat raw_img(rows, cols, CV_8U);                  //////////////////////
Mat image(rows, cols, CV_8UC3);                  //  BAYER PATTERN   //
cvtColor(image, image, COLOR_BGR2RGB);           //       G R        //
for (int i = 0; i < raw_img.rows; i++) {         //       B G        //
    for (int j = 0; j < raw_img.cols; j++) {     //////////////////////
        if ((i % 2 == 0) && (j % 2 == 0)) // top green
        {
            image.at<Vec3b>(i, j)[0] = (raw_img.at<uchar>(i - 1, j) +
                                        raw_img.at<uchar>(i + 1, j)) / 2;     // red
            image.at<Vec3b>(i, j)[1] = (raw_img.at<uchar>(i, j) * 2);         // blue
            image.at<Vec3b>(i, j)[2] = (raw_img.at<uchar>(i, j - 1) +
                                        raw_img.at<uchar>(i, j + 1)) / 2;     // green
        }
        else if ((i % 2 == 0) && (j % 2 == 1)) // red
        {
            image.at<Vec3b>(i, j)[0] = (raw_img.at<uchar>(i, j));             // red
            image.at<Vec3b>(i, j)[1] = (raw_img.at<uchar>(i - 1, j) +
                                        raw_img.at<uchar>(i + 1, j) +
                                        raw_img.at<uchar>(i, j - 1) +
                                        raw_img.at<uchar>(i, j + 1)) / 2;     // green
            image.at<Vec3b>(i, j)[2] = (raw_img.at<uchar>(i + 1, j - 1) +
                                        raw_img.at<uchar>(i - 1, j + 1) +
                                        raw_img.at<uchar>(i + 1, j + 1) +
                                        raw_img.at<uchar>(i - 1, j - 1)) / 4; // blue
        }
        else if ((i % 2 == 1) && (j % 2 == 0)) // blue
        {
            image.at<Vec3b>(i, j)[0] = (raw_img.at<uchar>(i + 1, j - 1) +
                                        raw_img.at<uchar>(i - 1, j + 1) +
                                        raw_img.at<uchar>(i + 1, j + 1) +
                                        raw_img.at<uchar>(i - 1, j - 1)) / 4; // red
            image.at<Vec3b>(i, j)[1] = (raw_img.at<uchar>(i + 1, j) +
                                        raw_img.at<uchar>(i, j + 1) +
                                        raw_img.at<uchar>(i - 1, j) +
                                        raw_img.at<uchar>(i, j - 1)) / 2;     // green
            image.at<Vec3b>(i, j)[2] = (raw_img.at<uchar>(i, j));             // blue
        }
        else // bottom green
        {
            image.at<Vec3b>(i, j)[0] = (raw_img.at<uchar>(i, j - 1) +
                                        raw_img.at<uchar>(i, j + 1)) / 2;     // red
            image.at<Vec3b>(i, j)[1] = (raw_img.at<uchar>(i, j) * 2);         // blue
            image.at<Vec3b>(i, j)[2] = (raw_img.at<uchar>(i - 1, j) +
                                        raw_img.at<uchar>(i + 1, j)) / 2;     // green
        }
    }
}

You could do something like:
image.at<Vec3b>(i, j)[0] = (raw_img.at<uchar>(max(0, i - 1), j) +
                            raw_img.at<uchar>(min(i + 1, raw_img.rows - 1), j)) / 2; // red
Do the same for all your i ± 1, j ± 1 accesses: this way you "replicate" the border values by simply sticking to the first/last value in the X or Y dimension.
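If the min/max calls clutter the interpolation code, a small helper can hide the clamping; here is a sketch (the helper is my own, and OpenCV's cv::borderInterpolate(p, len, borderType) can be used instead of std::clamp if you prefer to express other border modes):
#include <algorithm>   // std::clamp (C++17); std::min/std::max work just as well

// Hypothetical helper (the name clampAt is mine, not OpenCV's): read raw_img with replicated borders.
inline uchar clampAt(const cv::Mat &raw, int i, int j)
{
    i = std::clamp(i, 0, raw.rows - 1);   // stick to the first/last row
    j = std::clamp(j, 0, raw.cols - 1);   // stick to the first/last column
    return raw.at<uchar>(i, j);
}

// Usage inside the loop, e.g. for the vertical average:
// image.at<Vec3b>(i, j)[0] = (clampAt(raw_img, i - 1, j) + clampAt(raw_img, i + 1, j)) / 2;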
As a side note, OpenCV includes several demosaicing algorithms that will be hard to beat, both in quality and in execution speed.
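For completeness, a sketch of that built-in route; the exact COLOR_Bayer* constant to use depends on how the GRBG pattern is aligned with pixel (0, 0), so treat the constant below as an assumption to verify against your data:
#include <opencv2/imgproc.hpp>

cv::Mat demosaiced;
// One of COLOR_BayerGR2BGR / COLOR_BayerGB2BGR / COLOR_BayerRG2BGR / COLOR_BayerBG2BGR,
// chosen according to which colours sit in the first 2x2 block of the raw data.
cv::cvtColor(raw_img, demosaiced, cv::COLOR_BayerGR2BGR);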

The above answer works, but to avoid the hassle of calling min/max for every pixel access, the same thing can be done with an OpenCV function:
int main(int argc, char** argv)
{
    Mat img_rev = imread("C:/Users/20181217/Desktop/images/imgs/den_check.png");
    // number of additional rows and columns
    int top, left, right, bottom;
    top = 1;
    left = 1;
    right = 1;
    bottom = 1;
    // define a new image with the additional border
    Mat img_clamp(img_rev.rows + 2, img_rev.cols + 2, CV_8UC3);
    // if you want to pad the image with zeros
    // (note the argument order of copyMakeBorder: top, bottom, left, right)
    copyMakeBorder(img_rev, img_clamp, top, bottom, left, right, BORDER_CONSTANT);
    // if you want to replicate the border of the image
    copyMakeBorder(img_rev, img_clamp, top, bottom, left, right, BORDER_REPLICATE);
    // Now you can access the image without having to worry about the borders, as shown below
    for (int i = 1; i < img_clamp.rows - 1; i++)
    {
        for (int j = 1; j < img_clamp.cols - 1; j++)
        {
            ...
        }
    }
    waitKey(100000);
    return 0;
}
More operations can be found here:
https://docs.opencv.org/2.4/modules/imgproc/doc/filtering.html?highlight=copymakeborder#copymakeborder
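Applied back to the original question, a sketch of the same idea with the single-channel raw image (variable names follow the question; the only bookkeeping is the offset of 1 between padded and unpadded coordinates):
cv::Mat raw_pad;
cv::copyMakeBorder(raw_img, raw_pad, 1, 1, 1, 1, cv::BORDER_REPLICATE);

for (int i = 1; i < raw_pad.rows - 1; i++) {
    for (int j = 1; j < raw_pad.cols - 1; j++) {
        // (i, j) in raw_pad corresponds to (i - 1, j - 1) in raw_img and image,
        // so every neighbour access below stays inside raw_pad.
        image.at<cv::Vec3b>(i - 1, j - 1)[0] =
            (raw_pad.at<uchar>(i - 1, j) + raw_pad.at<uchar>(i + 1, j)) / 2;
        // ... remaining channels exactly as in the question, reading from raw_pad instead of raw_img
    }
}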

Related

Floyd Steinberg Dithering, why the effect is not ideal?

Why do the images generated by my dithering algorithm have many black spots? Here is a picture:
http://yanxuan.nosdn.127.net/00a07a53ab2083685da1a90d09452f69.png
Below is my code for Floyd-Steinberg dithering.
QImage * SaveThread::imageFloydSteinberg(QImage * origin)
{
    int old_pix, new_pix, quant_err;
    int width = origin->width();
    int height = origin->height();
    QImage * img_dither = new QImage(width, height, QImage::Format_ARGB32);
    img_dither = origin;
    for (int j = 0; j < height; j++)
    {
        for (int i = 0; i < width; i++)
        {
            old_pix = img_dither->pixel(i, j);
            if (img_dither->pixel(i, j) > qRgb(128, 128, 128))
                new_pix = qRgb(255, 255, 255);
            else
                new_pix = qRgb(0, 0, 0);
            img_dither->setPixel(i, j, qRgb(new_pix, new_pix, new_pix));
            quant_err = old_pix - new_pix;
            img_dither->setPixel(i + 1, j    , img_dither->pixel(i + 1, j    ) + quant_err * 7 / 16);
            img_dither->setPixel(i - 1, j + 1, img_dither->pixel(i - 1, j + 1) + quant_err * 3 / 16);
            img_dither->setPixel(i    , j + 1, img_dither->pixel(i    , j + 1) + quant_err * 5 / 16);
            img_dither->setPixel(i + 1, j + 1, img_dither->pixel(i + 1, j + 1) + quant_err * 1 / 16);
        }
    }
    return img_dither;
}
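Note that this code has the same border issue as the main question above: at the right and bottom edges, i + 1 and j + 1 fall outside the image. A minimal sketch of a bounds guard for the error-diffusion step (the helper diffuseError and the flat grayscale buffer gray are illustrative assumptions, not part of the original code):
#include <vector>

// Hypothetical helper: add weighted error to a neighbour only if that neighbour exists.
static void diffuseError(std::vector<int> &gray, int width, int height,
                         int x, int y, int err, int weight)
{
    if (x < 0 || x >= width || y < 0 || y >= height)
        return;                         // neighbour lies outside the image: skip it
    gray[y * width + x] += err * weight / 16;
}

// Usage inside the dithering loop (per channel, or on a grayscale buffer):
// diffuseError(gray, width, height, i + 1, j,     quant_err, 7);
// diffuseError(gray, width, height, i - 1, j + 1, quant_err, 3);
// diffuseError(gray, width, height, i,     j + 1, quant_err, 5);
// diffuseError(gray, width, height, i + 1, j + 1, quant_err, 1);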

How do I handle edge pixels of an image without any libraries but the standard ones from C++?

I have developed code that can read and handle the bits of a 24-bit BMP image, mostly to apply filters, but now I want my blur filter to blur the edge pixels too. Right now I leave a 1-pixel edge untouched; I'm using a 3x3 box blur, and this is the image I get after the blur is applied:
https://i.stack.imgur.com/0Px6Z.jpg
I can keep the original bits at the edges by using an if statement in my inner loop, but that doesn't really help, since I want those pixels blurred rather than left unblurred.
Here is the code:
for (int count = 0; count < times; ++count) {
    for (int x = 1; x < H - 1; ++x) {
        for (int y = 1; y < W - 1; ++y) {
            double sum1 = 0;
            double sum2 = 0;
            double sum3 = 0;
            for (int k = -1; k <= 1; ++k) {
                for (int j = -1; j <= 1; ++j) {
                    sum1 += bits[((x - j) * W + (y - k)) * 3] * kernel[j + 1][k + 1];
                    sum2 += bits[((x - j) * W + (y - k)) * 3 + 1] * kernel[j + 1][k + 1];
                    sum3 += bits[((x - j) * W + (y - k)) * 3 + 2] * kernel[j + 1][k + 1];
                }
            }
            if (sum1 <= 0) sum1 = 0;
            if (sum1 >= 255) sum1 = 255;
            if (sum2 <= 0) sum2 = 0;
            if (sum2 >= 255) sum2 = 255;
            if (sum3 <= 0) sum3 = 0;
            if (sum3 >= 255) sum3 = 255;
            temp[(x * W + y) * 3] = sum1;
            temp[(x * W + y) * 3 + 1] = sum2;
            temp[(x * W + y) * 3 + 2] = sum3;
        }
    }
    bits = temp;
}
I know that five nested for loops are really slow, but I would like to get it working properly first; if there are any tips on how to improve it, I'm all ears.
The outermost loop applies the filter as many times as you want.
The next two walk over the vector as a 2D image, and the inner two apply the box blur.
Important things to know: I have a flat 1D vector of bits (RGB values), not pixel objects, which is why I treat the channels one by one.
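Tying this back to the clamping idea above, here is a sketch of the replicate-at-the-border approach applied to this loop (assuming bits, temp, W, H and kernel as in the question, and C++17 for std::clamp): every pixel is visited, and out-of-range kernel samples are clamped to the nearest valid coordinate instead of leaving the outer ring untouched.
#include <algorithm>

for (int x = 0; x < H; ++x) {
    for (int y = 0; y < W; ++y) {
        double sum[3] = {0, 0, 0};
        for (int k = -1; k <= 1; ++k) {
            for (int j = -1; j <= 1; ++j) {
                // Clamp the sampled coordinates so border pixels reuse their nearest neighbour.
                int xs = std::clamp(x - j, 0, H - 1);
                int ys = std::clamp(y - k, 0, W - 1);
                for (int c = 0; c < 3; ++c)
                    sum[c] += bits[(xs * W + ys) * 3 + c] * kernel[j + 1][k + 1];
            }
        }
        for (int c = 0; c < 3; ++c)
            temp[(x * W + y) * 3 + c] = std::clamp(sum[c], 0.0, 255.0);
    }
}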

SFML C++ Canny edge detection double edges

So, I decided to create a simple Canny edge detector, just as an exercise before biting into harder image-processing topics.
I tried to follow the typical path of Canny:
1. Grayscaling the image
2. Gaussian filter to blur the noise
3. Edge detection - I use both Sobel and Scharr
4. Edge thinning - I used non-maximum suppression in direction depending on gradient direction - vertical, horizontal, 45 diagonal or 135 diagonal
5. Hysteresis
I somehow managed to get it working with Scharr's operator, but I have a recurring problem with double or multiple edges, especially with Sobel. I can't really find a set of parameters that makes it work.
My algorithm for Sobel:
void sobel(sf::Image &image, pixldata **garray, float division)
{
    int t1 = 0, t2 = 0, t3 = 0, t4 = 0;
    sf::Color color;
    sf::Image bufor;
    bufor.create(image.getSize().x, image.getSize().y, sf::Color::Cyan);
    for (int i = 1; i < image.getSize().y - 1; i++)
    {
        for (int j = 1; j < image.getSize().x - 1; j++)
        {
            t1 = (-image.getPixel(j - 1, i - 1).r - 2 * image.getPixel(j - 1, i).r - image.getPixel(j - 1, i + 1).r
                  + image.getPixel(j + 1, i - 1).r + 2 * image.getPixel(j + 1, i).r + image.getPixel(j + 1, i + 1).r) / division;
            t2 = (-image.getPixel(j - 1, i).r - 2 * image.getPixel(j - 1, i + 1).r - image.getPixel(j, i + 1).r
                  + image.getPixel(j + 1, i).r + 2 * image.getPixel(j + 1, i - 1).r + image.getPixel(j, i - 1).r) / division;
            t3 = (-image.getPixel(j - 1, i + 1).r - 2 * image.getPixel(j, i + 1).r - image.getPixel(j + 1, i + 1).r
                  + image.getPixel(j - 1, i - 1).r + 2 * image.getPixel(j, i - 1).r + image.getPixel(j + 1, i - 1).r) / division;
            t4 = (-image.getPixel(j, i + 1).r - 2 * image.getPixel(j + 1, i + 1).r - image.getPixel(j + 1, i).r
                  + image.getPixel(j - 1, i).r + 2 * image.getPixel(j - 1, i - 1).r + image.getPixel(j, i - 1).r) / division;
            color.r = (abs(t1) + abs(t2) + abs(t3) + abs(t4));
            color.g = (abs(t1) + abs(t2) + abs(t3) + abs(t4));
            color.b = (abs(t1) + abs(t2) + abs(t3) + abs(t4));
            garray[j][i].gx = t1;
            garray[j][i].gy = t3;
            garray[j][i].gtrue = sqrt(t1*t1 + t2*t2 + t3*t3 + t4*t4);
            garray[j][i].gsimpl = sqrt(t1*t1 + t2*t2);
            t1 = abs(t1);
            t2 = abs(t2);
            t3 = abs(t3);
            t4 = abs(t4);
            if (t1 > t4 && t1 > t3 && t1 > t2)
                garray[j][i].fi = 0;
            else if (t2 > t4 && t2 > t3 && t2 > t1)
                garray[j][i].fi = 45;
            else if (t3 > t4 && t3 > t2 && t3 > t1)
                garray[j][i].fi = 90;
            else if (t4 > t3 && t4 > t2 && t4 > t1)
                garray[j][i].fi = 135;
            else
                garray[j][i].fi = 0;
            if (sqrt(t1*t1 + t2*t2 + t3*t3 + t4*t4) < 0)
            {
                color.r = 0;
                color.g = 0;
                color.b = 0;
            }
            else if (sqrt(t1*t1 + t2*t2 + t3*t3 + t4*t4) > 255)
            {
                color.r = 255;
                color.g = 255;
                color.b = 255;
            }
            else
            {
                color.r = sqrt(t1*t1 + t2*t2 + t3*t3 + t4*t4);
                color.g = sqrt(t1*t1 + t2*t2 + t3*t3 + t4*t4);
                color.b = sqrt(t1*t1 + t2*t2 + t3*t3 + t4*t4);
            }
            bufor.setPixel(j, i, color);
        }
    }
    image.copy(bufor, 0, 0);
}
The code for Scharr differs only in the coefficients that multiply the pixel values.
t1 = (-3 * image.getPixel(j - 1, i - 1).r - 10 * image.getPixel(j - 1, i).r - 3 * image.getPixel(j - 1, i + 1).r
      + 3 * image.getPixel(j + 1, i - 1).r + 10 * image.getPixel(j + 1, i).r + 3 * image.getPixel(j + 1, i + 1).r) / division;
t2 = (-3 * image.getPixel(j - 1, i).r - 10 * image.getPixel(j - 1, i + 1).r - 3 * image.getPixel(j, i + 1).r
      + 3 * image.getPixel(j + 1, i).r + 10 * image.getPixel(j + 1, i - 1).r + 3 * image.getPixel(j, i - 1).r) / division;
t3 = (-3 * image.getPixel(j - 1, i + 1).r - 10 * image.getPixel(j, i + 1).r - 3 * image.getPixel(j + 1, i + 1).r
      + 3 * image.getPixel(j - 1, i - 1).r + 10 * image.getPixel(j, i - 1).r + 3 * image.getPixel(j + 1, i - 1).r) / division;
t4 = (-3 * image.getPixel(j, i + 1).r - 10 * image.getPixel(j + 1, i + 1).r - 3 * image.getPixel(j + 1, i).r
      + 3 * image.getPixel(j - 1, i).r + 10 * image.getPixel(j - 1, i - 1).r + 3 * image.getPixel(j, i - 1).r) / division;
Thinning code:
void intelligentThin(sf::Image &image, int radius, pixldata **garray)
{
    int xmax = image.getSize().x;
    int ymax = image.getSize().y;
    bool judgeandjury = true;
    for (int i = 0; i < xmax; i++)
    {
        int leftBound = 0, rightBound = 0, ceilBound = 0, bottomBound = 0;
        if (i < radius)
        {
            leftBound = 0;
            rightBound = i + radius;
        }
        else if (i >= xmax - radius)
        {
            leftBound = i - radius;
            rightBound = xmax - 1;
        }
        else
        {
            leftBound = i - radius;
            rightBound = i + radius;
        }
        for (int j = 0; j < ymax; j++)
        {
            if (j < radius)
            {
                ceilBound = 0;
                bottomBound = j + radius;
            }
            else if (j >= ymax - radius)
            {
                ceilBound = j - radius;
                bottomBound = ymax - 1;
            }
            else
            {
                ceilBound = j - radius;
                bottomBound = j + radius;
            }
            if (garray[i][j].fi == 0)
            {
                for (int t = leftBound; t <= rightBound; t++)
                {
                    if ((image.getPixel(t, j).r >= image.getPixel(i, j).r) && (t != i))
                    {
                        judgeandjury = false;
                    }
                }
            }
            else if (garray[i][j].fi == 135)
            {
                for (int l = leftBound, t = ceilBound; (l <= rightBound && t <= bottomBound); l++, t++)
                {
                    if ((image.getPixel(l, t).r >= image.getPixel(i, j).r) && (t != j))
                    {
                        judgeandjury = false;
                    }
                }
            }
            else if (garray[i][j].fi == 90)
            {
                for (int t = ceilBound; t <= bottomBound; t++)
                {
                    if ((image.getPixel(i, t).r >= image.getPixel(i, j).r) && (t != j))
                    {
                        judgeandjury = false;
                    }
                }
            }
            else if (garray[i][j].fi == 45)
            {
                for (int l = rightBound, t = ceilBound; (l >= leftBound && t <= bottomBound); l--, t++)
                {
                    if ((image.getPixel(l, t).r >= image.getPixel(i, j).r) && (t != j))
                    {
                        judgeandjury = false;
                    }
                }
            }
            if (judgeandjury == false)
            {
                image.setPixel(i, j, sf::Color::Black);
            }
            judgeandjury = true;
        }
        leftBound = rightBound = 0;
    }
}
Hysteresis code:
void hysteresis(sf::Image &image, int radius, int uplevel, int lowlevel)
{
    int xmax = image.getSize().x;
    int ymax = image.getSize().y;
    bool judgeandjury = false;
    sf::Image bufor;
    bufor.create(image.getSize().x, image.getSize().y, sf::Color::Cyan);
    for (int i = 0; i < xmax; i++)
    {
        int leftBound = 0, rightBound = 0, ceilBound = 0, bottomBound = 0;
        if (i < radius)
        {
            leftBound = 0;
            rightBound = i + radius;
        }
        else if (i >= xmax - radius)
        {
            leftBound = i - radius;
            rightBound = xmax - 1;
        }
        else
        {
            leftBound = i - radius;
            rightBound = i + radius;
        }
        for (int j = 0; j < ymax; j++)
        {
            int currentPoint = image.getPixel(i, j).r;
            if (j < radius)
            {
                ceilBound = 0;
                bottomBound = j + radius;
            }
            else if (j >= ymax - radius)
            {
                ceilBound = j - radius;
                bottomBound = ymax - 1;
            }
            else
            {
                ceilBound = j - radius;
                bottomBound = j + radius;
            }
            if (currentPoint > uplevel)
            {
                judgeandjury = true;
            }
            else if (currentPoint > lowlevel)
            {
                for (int t = leftBound; t <= rightBound; t++)
                {
                    for (int l = ceilBound; l <= bottomBound; l++)
                    {
                        if (image.getPixel(t, l).r > uplevel)
                        {
                            judgeandjury = true;
                        }
                    }
                }
            }
            else judgeandjury = false;
            if (judgeandjury == true)
            {
                bufor.setPixel(i, j, sf::Color::White);
            }
            else
            {
                bufor.setPixel(i, j, sf::Color::Black);
            }
            judgeandjury = false;
            currentPoint = 0;
        }
        leftBound = rightBound = 0;
    }
    image.copy(bufor, 0, 0);
}
The results are quite unsatisfactory for Sobel:
Thinning the Sobel
Sobel after hysteresis
With Scharr the results are way better:
Thinned Scharr
Scharr after hysteresis
Set of parameters:
#define thinsize 1
#define scharrDivision 1
#define sobelDivision 1
#define hysteresisRadius 1
#define level 40
#define hysteresisUpperLevelSobel 80
#define hysteresisLowerLevelSobel 60
#define hysteresisUpperLevelScharr 200
#define hysteresisLowerLevelScharr 100
As you can see, there is a problem with Sobel, which generates double edges. Scharr also generates some noise, but I think it is acceptable. Of course it can always get better, if someone could give some advice :)
What is the cause of this behaviour? Does it result from mistakes in my code, from weak algorithms, or is it just a matter of parameters?
EDIT:
posting main()
sf::Image imydz;
imydz.loadFromFile("lena.jpg");
int x = imydz.getSize().x;
int y = imydz.getSize().y;
pixldata **garray = new pixldata *[x];
for (int i = 0; i < x; i++)
{
    garray[i] = new pixldata[y];
}
monochrome(imydz);
gauss(imydz, radius, sigma);
//sobel(imydz, garray, sobelDivision);
scharr(imydz, garray, scharrDivision);
intelligentThin(imydz, thinsize, garray);
hysteresis(imydz, hysteresisRadius, hysteresisUpperLevel, hysteresisLowerLevel);
Second edit - repaired suppression:
sf::Image bufor;
bufor.create(image.getSize().x, image.getSize().y, sf::Color::Black);
for (int i = 1; i < xmax - 1; i++)
{
    for (int j = 1; j < ymax - 1; j++)
    {
        if (garray[i][j].fi == 0)
        {
            if (((image.getPixel(i, j).r >= image.getPixel(i + 1, j).r) && (image.getPixel(i, j).r > image.getPixel(i - 1, j).r)) ||
                ((image.getPixel(i, j).r > image.getPixel(i + 1, j).r) && (image.getPixel(i, j).r >= image.getPixel(i - 1, j).r)))
            {
                judgeandjury = true;
            }
            else judgeandjury = false;
        }
        ...
        if (judgeandjury == false)
        {
            bufor.setPixel(i, j, sf::Color::Black);
        }
        else bufor.setPixel(i, j, image.getPixel(i, j));
        judgeandjury = false;
    }
}
image.copy(bufor, 0, 0);
Repaired Scharr on Lena
It seems strange
Another test image - strange results
Before binarization
Ready gears
I haven't read your whole code in detail; there is much too much code there. But obviously your non-maximum suppression code is wrong. Let's look at what it does for one pixel in the middle of the image, where the gradient is close to 0 degrees:
leftBound = i - radius;
rightBound = i + radius;
// ...
for (int t = leftBound; t <= rightBound; t++)
{
    if ((image.getPixel(t, j).r >= image.getPixel(i, j).r) && (t != i))
    {
        judgeandjury = false; // it's not a maximum: suppress
    }
}
// ...
if (judgeandjury == false)
{
    image.setPixel(i, j, sf::Color::Black);
}
Here, radius is set to 1 by the calling code. Any other value would be bad, so this is OK. I would remove that as a parameter altogether. Now your loop is:
for (int t = i - 1; t <= i + 1; t++)
    if (t != i)
This means that you hit exactly two values of t. So this should of course be replaced with simpler code that does not loop; it will be more readable.
This is what it now does:
if (   (image.getPixel(i - 1, j).r >= image.getPixel(i, j).r)
    || (image.getPixel(i + 1, j).r >= image.getPixel(i, j).r))
{
    judgeandjury = false; // it's not a maximum: suppress
}
So you suppress the pixel if it is not strictly larger than its neighbors. Looking back at the Wikipedia article, it seems that they suggest the same. But in fact, this is not correct, you want the point to be strictly larger than one of the two neighbors, and larger or equal to the other. This prevents the situation where the gradient happens to be equally strong on two neighboring pixels. The actual maximum can fall right in the middle of two pixels, yielding two pixels on this local maximum gradient with exactly the same value. But let's ignore this case for now, it is possible but not all that likely.
Next, you suppress the maximum... in the input image! This means that, when you reach the next pixel on this line, you will compare its value to this value that was just suppressed. Of course it will be larger, even though it was smaller than the original value at that location. That is, non-maxima will look like maxima because you set a neighboring pixel to 0.
So: write the result of the algorithm to an output image:
if (judgeandjury == true)
{
    output.setPixel(i, j, image.getPixel(i, j));
}
...which of course you need to allocate, but you already know that.
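For example (just a sketch; output is a name I am assuming, allocated the same way bufor is allocated elsewhere in your code):
sf::Image output;
output.create(image.getSize().x, image.getSize().y, sf::Color::Black);
// ... run the suppression loop reading from `image` and writing kept pixels into `output` ...
image.copy(output, 0, 0); // replace the input only after the whole pass is finished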
Your second problem is in the sobel function, where you compute the gradient magnitude. It clips (saturates) the output. By cutting output values above 255 down to 255, you create very broad lines of constant value along the edges. The non-maximum suppression test is satisfied at the two sides of such a line, but not in the middle, where each pixel has the same value as both of its neighbors.
To solve this, either:
1. Use a floating-point buffer to store the gradient magnitude. Here you don't need to worry about data ranges.
2. Divide the magnitude by some value such that it will never exceed 255. Now you're quantizing the magnitude rather than clipping it. Quantizing should be fine in this case.
I strongly recommend that you follow (1). I typically use floating-point-valued images for everything, and only convert to 8-bit integers for display. This simplifies a lot of things!
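A minimal sketch of option (1), assuming a separate std::vector<float> buffer alongside the question's pixldata array (width and height taken from image.getSize()); only the displayed image is scaled back to 8 bits:
#include <algorithm>
#include <cmath>
#include <vector>

// Unclipped gradient magnitude, one float per pixel.
std::vector<float> magnitude(width * height, 0.0f);

// Inside the Sobel loop, instead of writing a clipped value into the image:
//     magnitude[i * width + j] = std::sqrt(float(t1*t1 + t2*t2 + t3*t3 + t4*t4));

// Non-maximum suppression and hysteresis then compare the float values directly.
// Only for display is the buffer scaled into 0..255:
float maxMag = *std::max_element(magnitude.begin(), magnitude.end());
for (std::size_t p = 0; p < magnitude.size(); ++p)
{
    unsigned char v = static_cast<unsigned char>(255.0f * magnitude[p] / std::max(maxMag, 1.0f));
    // write v into the r, g and b channels of the displayed sf::Image
}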

exception at memory location (vector issue) opencv

I am trying to find the average of each 2x2 block of pixels within a 6x6 window over an image of size m x n. I am able to find the block averages until the end of the first row, but when the code moves to the next row it throws the runtime error "exception at memory location".
vector<int> m;  vector<int> m1; vector<int> m2; vector<int> m3; vector<int> m4;
vector<int> m5; vector<int> m6; vector<int> m7; vector<int> m8;
for (int i = 2; i < road.rows - 2; i++) {
    for (int j = 2; j < road.cols - 2; j++) {
        // center block
        int avg  = (round((road.at<uchar>(i, j) + road.at<uchar>(i, j + 1) + road.at<uchar>(i + 1, j) + road.at<uchar>(i + 1, j + 1)) / 4));
        // top left block
        int avg1 = (round((road.at<uchar>(i - 2, j - 2) + road.at<uchar>(i - 2, j - 1) + road.at<uchar>(i - 1, j - 2) + road.at<uchar>(i - 1, j - 1)) / 4));
        // top block
        int avg2 = (round((road.at<uchar>(i - 2, j) + road.at<uchar>(i - 2, j + 1) + road.at<uchar>(i - 1, j) + road.at<uchar>(i - 1, j + 1)) / 4));
        // top right block
        int avg3 = (round((road.at<uchar>(i - 2, j + 2) + road.at<uchar>(i - 2, j + 3) + road.at<uchar>(i - 1, j + 2) + road.at<uchar>(i - 1, j + 3)) / 4));
        // left block
        int avg4 = (round((road.at<uchar>(i, j - 2) + road.at<uchar>(i, j - 1) + road.at<uchar>(i + 1, j - 2) + road.at<uchar>(i + 1, j - 1)) / 4));
        // right block
        int avg5 = (round((road.at<uchar>(i, j + 2) + road.at<uchar>(i, j + 3) + road.at<uchar>(i + 1, j + 2) + road.at<uchar>(i + 1, j + 3)) / 4));
        // bottom left block
        int avg6 = (round((road.at<uchar>(i + 2, j - 2) + road.at<uchar>(i + 2, j - 1) + road.at<uchar>(i + 3, j - 2) + road.at<uchar>(i + 3, j - 1)) / 4));
        // bottom block
        int avg7 = (round((road.at<uchar>(i + 2, j) + road.at<uchar>(i + 2, j + 1) + road.at<uchar>(i + 3, j) + road.at<uchar>(i + 3, j + 1)) / 4));
        // bottom right block
        int avg8 = (round((road.at<uchar>(i + 2, j + 2) + road.at<uchar>(i + 2, j + 3) + road.at<uchar>(i + 3, j + 2) + road.at<uchar>(i + 3, j + 3)) / 4));
        m.push_back(avg);
        m1.push_back(avg1);
        m2.push_back(avg2);
        m3.push_back(avg3);
        m4.push_back(avg4);
        m5.push_back(avg5);
        m6.push_back(avg6);
        m7.push_back(avg7);
        m8.push_back(avg8);
    }
}
Help me out with this error.
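A note in the spirit of the border handling discussed above: the accesses reach as far as i + 3 and j + 3, while the loops stop only 2 pixels before the edge, so the last iterations read outside the Mat. A sketch of tightened loop bounds (keeping the question's indexing; whether the margins should instead be symmetric depends on the intended window layout):
// The farthest reads are road.at<uchar>(i + 3, ...) and road.at<uchar>(..., j + 3),
// so the loops must stop while i + 3 and j + 3 are still valid indices.
for (int i = 2; i + 3 < road.rows; i++) {
    for (int j = 2; j + 3 < road.cols; j++) {
        // ... same block averages as above ...
    }
}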

Unexpected Harris Detector results

I load the vertical and horizontal gradients into the function posted here, and it calculates the sums which then make up the corner response. Why do only border pixels get found? My threshold is 0; otherwise there are no corners in the image at all. For the gradients I used the Sobel operator.
Look at the output image below.
for (int i = 0; i < width; i++)
{
    for (int j = 0; j < height; j++)
    {
        if ((i - search_size / 2 < 0 || i + search_size / 2 > image1.rows - 1) ||
            (j - search_size / 2 < 0 || j + search_size / 2 > image1.cols - 1)) {
            continue;
        }
        double Ix2 = 0, Iy2 = 0, Ixy = 0;
        double detM = 0;
        double traceM = 0;
        double R = 0;
        for (int m = i - search_size / 2; m < i + search_size / 2; m++) {
            for (int n = j - search_size / 2; n < j + search_size / 2; n++) {
                gauss = exp(-(((i - m) * (i - m)) + ((j - n) * (j - n))) / gaus_del);
                // Compute Ix^2, Iy^2 and Ixy
                Ix2 += gauss * (image1.at<float>(m, n) * image1.at<float>(m, n));
                Iy2 += gauss * (image2.at<float>(m, n) * image2.at<float>(m, n));
                Ixy += gauss * (image1.at<float>(m, n) * image2.at<float>(m, n));
            }
        }
        detM = (Ix2 * Iy2 - Ixy * Ixy);
        traceM = Ix2 * Ix2 + Iy2 * Iy2;
        R = detM / traceM;
        //cout << i + j << endl;
        //std::cout << "R :" << Iy2 << endl;
        if (R > threshold)
        {
            circle(image, cv::Point2f(i, j), 3.5, cv::Scalar(255, 255, 0), 1, 5);
            cout << "corner found" << endl;
        }
    }
}
EDIT: I am using uchars now and the result looks a lot better.
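For reference, the standard Harris response uses the trace of M directly (trace(M) = Ix2 + Iy2, not Ix2*Ix2 + Iy2*Iy2) together with a small factor k, rather than the det/trace ratio; here is a sketch of that variant dropped into the loop above (whether it also explains the border-only detections is not something I have verified):
// Standard Harris corner response; k is usually chosen in the 0.04..0.06 range.
double k = 0.04;
detM   = Ix2 * Iy2 - Ixy * Ixy;
traceM = Ix2 + Iy2;                  // trace of the structure tensor
R      = detM - k * traceM * traceM;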