Differences in filter2D implementation - C++

I was trying to implement convolute2D (filter2D in OpenCV) and came up with the following code.
Mat convolute2D(Mat image, double** kernel, int W){
    Mat filtered_image = image.clone();
    // find center position of kernel (half of kernel size)
    int kCenterX = W / 2;
    int kCenterY = W / 2;
    int xx = 0;
    int yy = 0;
    cout << endl << "Performing convolution .." << endl;
    cout << "Image Size : " << image.rows << ", " << image.cols << endl;
    for (int i = 0; i < image.rows; ++i){
        for (int j = 0; j < image.cols; ++j){
            for(int x = 0; x < W; ++x){
                xx = W - 1 - x;
                for(int y = 0; y < W; ++y){
                    yy = W - 1 - y;
                    int ii = i + (x - kCenterX);
                    int jj = j + (y - kCenterY);
                    if( ii >= 0 && ii < image.rows && jj >= 0 && jj < image.cols) {
                        filtered_image.at<uchar>(Point(j, i)) += image.at<uchar>(Point(jj, ii)) * kernel[xx][yy];
                    }
                }
            }
        }
    }
    return filtered_image;
}
Assume we always have a square kernel. My results have been quite different from filter2D's. Is it because of possible overflow, or is there a problem with my implementation?
Thanks

There are two issues with your code:
You don't set the output image to zero before adding values to it. Consequently, you are computing "input + filtered input", rather than just "filtered input".
Presuming that kernel has quite small values, "input pixel * kernel value" will likely yield a small number, which is rounded down when written to a uchar. Adding up each of these values for the kernel, you'll end up with a result that is too low.
I recommend that you do this:
double res = 0;
for(int x = 0; x < W; ++x){
    int xx = W - 1 - x;
    for(int y = 0; y < W; ++y){
        int yy = W - 1 - y;
        int ii = i + (x - kCenterX);
        int jj = j + (y - kCenterY);
        if( ii >= 0 && ii < image.rows && jj >= 0 && jj < image.cols) {
            res += image.at<uchar>(Point(jj, ii)) * kernel[xx][yy];
        }
    }
}
filtered_image.at<uchar>(Point(j, i)) = res;
This solves both issues at once. Also, this should be a bit faster because accessing the output image has a bit of overhead.
For much faster speeds, consider that the check for out-of-bounds reads (the if in the inner loop) slows down your code significantly, and is totally unnecessary for most pixels (as few pixels are near the image edge). Instead, you can split up your loops into [0,kCenterX], [kCenterX,image.rows-kCenterX], and [image.rows-kCenterX,image.rows]. The middle loop, which is typically by far the largest, will not need to check for out-of-bounds reads.
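A rough sketch of that splitting, assuming the same image, kernel, W, kCenterX and kCenterY as above (the helper lambdas are just one way to organize it; only the border pixels go through the bounds-checked path):

// Checked path: used only near the image border.
auto filterPixelChecked = [&](int i, int j) {
    double res = 0;
    for (int x = 0; x < W; ++x)
        for (int y = 0; y < W; ++y) {
            int ii = i + (x - kCenterX);
            int jj = j + (y - kCenterY);
            if (ii >= 0 && ii < image.rows && jj >= 0 && jj < image.cols)
                res += image.at<uchar>(ii, jj) * kernel[W - 1 - x][W - 1 - y];  // at(row, col)
        }
    filtered_image.at<uchar>(i, j) = res;
};
// Unchecked path: safe for interior pixels, no bounds test per kernel element.
auto filterPixelUnchecked = [&](int i, int j) {
    double res = 0;
    for (int x = 0; x < W; ++x)
        for (int y = 0; y < W; ++y) {
            int ii = i + (x - kCenterX);
            int jj = j + (y - kCenterY);
            res += image.at<uchar>(ii, jj) * kernel[W - 1 - x][W - 1 - y];
        }
    filtered_image.at<uchar>(i, j) = res;
};
// Interior block: no bounds checks needed.
for (int i = kCenterY; i < image.rows - kCenterY; ++i)
    for (int j = kCenterX; j < image.cols - kCenterX; ++j)
        filterPixelUnchecked(i, j);
// Border rows and columns: the only pixels that need the checked path.
for (int i = 0; i < image.rows; ++i)
    for (int j = 0; j < image.cols; ++j)
        if (i < kCenterY || i >= image.rows - kCenterY ||
            j < kCenterX || j >= image.cols - kCenterX)
            filterPixelChecked(i, j);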

And use cv::saturate_cast for correct assignment to uchar, for example:
filtered_image.at<uchar>(Point(j, i)) = cv::saturate_cast<uchar>(res);

Related

Convolution algorithm for image processing

I've come up with this code for applying a 3x3 kernel to my image:
double sum;
for(int i = 1; i < src.rows - 1; i++){
    for(int j = 1; j < src.cols - 1; j++)
        for (int k = 0; k < 3; k ++) {
            sum = 0.0;
            dst.at<cv::Vec3b>(i,j)[k] = 0.0;
            for(int x = -1; x <= 1; x++){
                for(int y = -1; y <= 1; y++){
                    sum += (Kernel_Matrix[y+1][x+1]*src.at<cv::Vec3b>(i - x, j - y)[k]);
                }
            }
            dst.at<cv::Vec3b>(i,j)[k] = cv::saturate_cast<uchar>(sum);
        }
}
Now I have 2 questions:
Reading https://en.wikipedia.org/wiki/Kernel_(image_processing), there are various matrices for various filters. Let's say I want my blur filter to increase in intensity via a GUI slider that gives a value from x to whatever; what kind of operation should I apply to my blur matrix (a sum, a multiplication...)?
(I would like to do the same with sharpness.)
Is there a specific matrix for noise reduction?
If you also have any modifications to suggest for my algorithm, please let me know!
Thanks!

Laplacian Sharpening result is kinda greyish C++

I am trying to implement a Laplacian filter for sharpening an image,
but the result is kinda grey, and I don't know what went wrong with my code.
Here's my work so far:
img = imread("moon.png", 0);
Mat convoSharp() {
    //creating new image
    Mat res = img.clone();
    for (int y = 0; y < res.rows; y++) {
        for (int x = 0; x < res.cols; x++) {
            res.at<uchar>(y, x) = 0.0;
        }
    }
    //variable declaration
    //change -5 to -4 for original result.
    int filter[3][3] = { {0,1,0},{1,-4,1},{0,1,0} };
    //int filter[3][3] = { {-1,-2,-1},{0,0,0},{1,2,1} };
    int height = img.rows;
    int width = img.cols;
    int **temp = new int*[height];
    for (int i = 0; i < height; i++) {
        temp[i] = new int[width];
    }
    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++) {
            temp[i][j] = 0;
        }
    }
    int filterHeight = 3;
    int filterWidth = 3;
    int newImageHeight = height - filterHeight + 1;
    int newImageWidth = width - filterWidth + 1;
    int i, j, h, w;
    //convolution
    for (i = 0; i < newImageHeight; i++) {
        for (j = 0; j < newImageWidth; j++) {
            for (h = i; h < i + filterHeight; h++) {
                for (w = j; w < j + filterWidth; w++) {
                    temp[i][j] += filter[h - i][w - j] * (int)img.at<uchar>(h, w);
                }
            }
        }
    }
    //find max and min
    int max = 0;
    int min = 100;
    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++) {
            if (temp[i][j] > max) {
                max = temp[i][j];
            }
            if (temp[i][j] < min) {
                min = temp[i][j];
            }
        }
    }
    //clamp 0 - 255
    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++) {
            res.at<uchar>(i, j) = 0 + (temp[i][j] - min)*(255 - 0) / (max - min);
        }
    }
    //empty the temp array
    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++) {
            temp[i][j] = 0;
        }
    }
    //img - res and store it in temp array
    for (int y = 0; y < res.rows; y++) {
        for (int x = 0; x < res.cols; x++) {
            //int a = (int)img.at<uchar>(y, x) - (int)res.at<uchar>(y, x);
            //cout << a << endl;
            temp[y][x] = (int)img.at<uchar>(y, x) - (int)res.at<uchar>(y, x);
        }
    }
    //find the new max and min
    max = 0;
    min = 100;
    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++) {
            if (temp[i][j] > max) {
                max = temp[i][j];
            }
            if (temp[i][j] < min) {
                min = temp[i][j];
            }
        }
    }
    //clamp it back to 0-255
    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++) {
            res.at<uchar>(i, j) = 0 + (temp[i][j] - min)*(255 - 0) / (max - min);
            temp[i][j] = (int)res.at<uchar>(i, j);
        }
    }
    return res;
}
And here's the result:
As you can see in my code above, I already normalize the pixel values to 0-255. I still don't know what went wrong here. Can anyone explain why that is?
The greyness is because, as Max suggested in his answer, you are scaling to the 0-255 range, not clamping (as your comments in the code suggest).
However, that is not the only issue in your code. The output of the Laplace operator contains negative values. You nicely store these in an int. But then you scale and copy them over to a char. Don't do that!
You need to add the result of the Laplace unchanged to your image. This way, some pixels in your image will become darker, and some lighter. This is what causes the edges to appear sharper.
Simply skip some of the loops in your code, and keep one that does temp = img - temp. That result you can freely scale or clamp to the output range and cast to char.
To clamp, simply set any pixel values below 0 to 0, and any above 255 to 255. Don't compute min/max and scale as you do, because there you reduce contrast and create the greyish wash over your image.
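For example, a small sketch of clamping (rather than the min/max rescaling), assuming the same temp array and res image as in your code:

// Clamp: anything below 0 becomes 0, anything above 255 becomes 255.
// (cv::saturate_cast<uchar> would do the same in one call.)
for (int i = 0; i < height; i++) {
    for (int j = 0; j < width; j++) {
        int v = temp[i][j];
        if (v < 0) v = 0;
        if (v > 255) v = 255;
        res.at<uchar>(i, j) = (uchar)v;
    }
}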
Your recent question is quite similar (though the problem in the code was different). Read my answer there again; it suggests a way to further simplify your code so that img - Laplace becomes a single convolution.
The problem is that you are rescaling the image. Look at the bottom left border of the moon: There are very bright pixels next to very dark pixels, and then some gray pixels right beside the bright ones. Your sharpening filter will really spike on that bright border and increase the maximum. Similarly, the black pixels will be reduced even further.
You then determine the minimum and maximum and rescale the entire image. This necessarily means the entire image will lose contrast when displayed in the previous gray scale, because your filter produced pixel values above 255 and below 0.
Look closely at the border of the moon in the output image:
There is a black halo (the new 0) and a bright, sharp edge (the new 255). (The browser image scaling made it less crisp in this screenshot; look at your original output.) Everything else was squashed by the rescaling, so what was previously black (0) is now dark gray.

grayscale Laplace sharpening implementation

I am trying to implement Laplace sharpening using C++. Here's my code so far:
img = imread("cow.png", 0);
Mat convoSharp() {
    //creating new image
    Mat res = img.clone();
    for (int y = 0; y < res.rows; y++) {
        for (int x = 0; x < res.cols; x++) {
            res.at<uchar>(y, x) = 0.0;
        }
    }
    //variable declaration
    int filter[3][3] = { {0,1,0},{1,-4,1},{0,1,0} };
    //int filter[3][3] = { {-1,-2,-1},{0,0,0},{1,2,1} };
    int height = img.rows;
    int width = img.cols;
    int filterHeight = 3;
    int filterWidth = 3;
    int newImageHeight = height - filterHeight + 1;
    int newImageWidth = width - filterWidth + 1;
    int i, j, h, w;
    //convolution
    for (i = 0; i < newImageHeight; i++) {
        for (j = 0; j < newImageWidth; j++) {
            for (h = i; h < i + filterHeight; h++) {
                for (w = j; w < j + filterWidth; w++) {
                    res.at<uchar>(i,j) += filter[h - i][w - j] * img.at<uchar>(h,w);
                }
            }
        }
    }
    //img - laplace
    for (int y = 0; y < res.rows; y++) {
        for (int x = 0; x < res.cols; x++) {
            res.at<uchar>(y, x) = img.at<uchar>(y, x) - res.at<uchar>(y, x);
        }
    }
    return res;
}
I don't really know what went wrong. I also tried a different filter, (1,1,1),(1,-8,1),(1,1,1), and the result is about the same. I don't think I need to normalize the result because it is in the range 0-255. Can anyone explain what really went wrong in my code?
Problem: uchar is too small to hold the partial results of the filtering operation.
You should create a temporary variable, add all the filtered positions to it, and then check whether the value of temp is in the range <0,255>; if not, you need to clamp the end result to fit <0,255>.
By executing the line below
res.at<uchar>(i,j) += filter[h - i][w - j] * img.at<uchar>(h,w);
the partial result may be greater than 255 (the max value of a uchar) or negative (in the filter you have -4 or -8). temp has to be a signed integer type to handle the case when the partial result is negative.
Fix:
for (i = 0; i < newImageHeight; i++) {
    for (j = 0; j < newImageWidth; j++) {
        int temp = res.at<uchar>(i,j); // added
        for (h = i; h < i + filterHeight; h++) {
            for (w = j; w < j + filterWidth; w++) {
                temp += filter[h - i][w - j] * img.at<uchar>(h,w); // add to temp
            }
        }
        // clamp temp to <0,255>
        res.at<uchar>(i,j) = cv::saturate_cast<uchar>(temp);
    }
}
You should also clamp values to <0,255> range when you do the subtraction of images.
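For example, a minimal sketch of that subtraction with clamping, assuming the same img and res as in the question:

// img - laplace, computed in a signed int and clamped back to <0,255>.
for (int y = 0; y < res.rows; y++) {
    for (int x = 0; x < res.cols; x++) {
        int diff = (int)img.at<uchar>(y, x) - (int)res.at<uchar>(y, x);
        res.at<uchar>(y, x) = cv::saturate_cast<uchar>(diff); // clamps to 0..255
    }
}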
The problem is partially that you’re overflowing your uchar, as rafix07 suggested, but that is not the full problem.
The Laplace of an image contains negative values. It has to. And you can’t clamp those to 0; you need to preserve the negative values. Also, it can reach values up to 4*255 given your version of the filter. What this means is that you need to use a signed 16 bit type to store this output.
But there is a simpler and more efficient approach!
You are computing img - laplace(img). In terms of convolutions (*), this is 1 * img - laplace_kernel * img = (1 - laplace_kernel) * img. That is to say, you can combine both operations into a single convolution. The 1 kernel that doesn’t change the image is [(0,0,0),(0,1,0),(0,0,0)]. Subtract your Laplace kernel from that and you obtain [(0,-1,0),(-1,5,-1),(0,-1,0)].
So, simply compute the convolution with that kernel, and do it using int as intermediate type, which you then clamp to the uchar output range as shown by rafix07.
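A minimal sketch of that single-pass version, assuming the same global img (a CV_8U Mat) as in the question; border pixels are simply left as copies of the input here:

// Combined kernel (1 - laplace): {{0,-1,0},{-1,5,-1},{0,-1,0}}.
// Accumulate in int, then clamp to the uchar range.
Mat sharpened = img.clone();
int kernel[3][3] = { {0,-1,0}, {-1,5,-1}, {0,-1,0} };
for (int i = 1; i < img.rows - 1; i++) {
    for (int j = 1; j < img.cols - 1; j++) {
        int sum = 0;
        for (int h = -1; h <= 1; h++)
            for (int w = -1; w <= 1; w++)
                sum += kernel[h + 1][w + 1] * (int)img.at<uchar>(i + h, j + w);
        sharpened.at<uchar>(i, j) = cv::saturate_cast<uchar>(sum);
    }
}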

C++ Image processing loop

I have two greyscale images in txt files, one being a smaller block of the Main image. I have read the images into two different 2D vector matrices.
The Rows and the Columns of the images are:
Main: M = 768 N = 1024
SubImg: R = 49 C = 36
int R = 49; int C = 36; //Sub Image Rows / Columns
int M = 768; int N = 1024; //Main Image Rows / Columns
I want to loop through the Main image by blocks of width: 49 and height: 36 and put each block into an array, so I can compare the array with the Sub image (using Nearest Neighbor Search) to see which block has the closest result to the Sub image.
The problem I am having is that I cannot get the loop to display all of the blocks. When I run the loop, only a certain number of blocks appear and the program crashes.
// Testing Main 2D Vector in block format
for (int bx = 0; bx < M; bx += R)
    for (int by = 0; by < N; by += C)
    {
        for (int x = 0; x < R; ++x)
        {
            for (int y = 0; y < C; ++y)
            {
                cout << MainIMG_2DVector[bx + x][by + y] << " ";
            }
        }
        cout << "\n\n" << endl;
    }
Can someone please tell me what I have done wrong?
Thanks
EDIT +++++++++++++++++++++++++++++++++++++++++
After debugging
_DEBUG_ERROR("vector subscript out of range");
_SCL_SECURE_OUT_OF_RANGE;
M=768 is not divisible by R=49: the last loop starts with bx=735 (15*49) and would end at bx=735+48=783 > 768... Same problem with N=1024 and C=36: by=1008 (28*36) to by=1008+35=1043 > 1024. – J. Piquard
If I increase the width and the height, my main image stretches. Is there a way around this?
Two ways could be explored:
Way 1 - change the value R (and C) to the best divisor of M (and N)
int M = 768; int N = 1024; //Main Image Rows / Columns
int R = 48; int C = 32; //Sub Image Rows (768=16*48) / Columns (1024=32*32)
Way 2 - prevent the out-of-range error in the for-loop exit condition
For x, both conditions (x < R) and ((bx + x) < M) shall be true.
And for y, both conditions (y < C) and ((by + y) < N) shall be true.
for (int x = 0; ((x < R) && ((bx + x) < M)); ++x)
{
    for (int y = 0; ((y < C) && ((by + y) < N)); ++y)
    {
        std::cout << MainIMG_2DVector[bx + x][by + y] << " ";
    }
}
Instead of:
for (int x = 0; x < R; ++x)
{
    for (int y = 0; y < C; ++y)
    {
        cout << MainIMG_2DVector[bx + x][by + y] << " ";
    }
}

2D convolution - wrong results compared to opencv's output

I'm trying to implement a simple 2D convolution (a mean filter in this case). But when I compare my results with an image generated by OpenCV's filter2D function, I see a lot of differences. My current code is:
cv::Mat filter2D(cv::Mat& image, uint32_t kernelSize = 3)
{
    float divider = kernelSize*kernelSize;
    cv::Mat kernel = cv::Mat::ones(kernelSize,kernelSize,CV_32F) / divider;
    int kHalf = kernelSize/2.f;
    cv::Mat smoothedImage = cv::Mat::ones(image.rows,image.cols,image.type());
    for (int32_t y = 0; y<image.rows; ++y) {
        for (int32_t x = 0; x<image.cols; ++x) {
            uint8_t sum = 0;
            for (int m = -kHalf; m <= kHalf; ++m) {
                for (int n = -kHalf; n <= kHalf; ++n) {
                    if (x+n >= 0 || x+n <= image.cols || y+m >= 0 || y <= image.rows) {
                        sum += kernel.at<float>(m+kHalf, n+kHalf)*image.at<uint8_t>(y-m+1, x-n+1);
                    } else {
                        // Zero padding - nothing to do
                    }
                }
            }
            smoothedImage.at<uint8_t>(y,x) = sum;
        }
    }
    return smoothedImage;
}
The results for a kernel size of five are (1. OpenCV, 2. my implementation):
I would appreciate it if someone could explain what I'm doing wrong.
For starters, your condition to account for edges should use && instead of ||, like so:
if (x+n >= 0 && x+n <= image.cols && y+m >= 0 && y+m <= image.rows)
This should help a little to remove artefacts around the edge.
Then, for the artefacts in the inner region, you should make sure the sum stays within the 0-255 range, and try to avoid losing resolution every time you cast the partial result back to uint8_t as you assign to sum:
float sum = 0;
for (int m = -kHalf; m <= kHalf; ++m) {
    for (int n = -kHalf; n <= kHalf; ++n) {
        if (x+n >= 0 && x+n <= image.cols && y+m >= 0 && y+m <= image.rows) {
            sum += kernel.at<float>(m+kHalf, n+kHalf)*image.at<uint8_t>(y-m+1, x-n+1);
        } else {
            // Zero padding - nothing to do
        }
    }
}
smoothedImage.at<uint8_t>(y,x) = std::min(std::max(0.0f, sum), 255.0f);