I am trying to implement a Laplacian filter for sharpening an image, but the result comes out grey and I don't know what went wrong with my code.
Here's my work so far:
img = imread("moon.png", 0);

Mat convoSharp() {
    //creating new image
    Mat res = img.clone();
    for (int y = 0; y < res.rows; y++) {
        for (int x = 0; x < res.cols; x++) {
            res.at<uchar>(y, x) = 0.0;
        }
    }

    //variable declaration
    //change -5 to -4 for original result.
    int filter[3][3] = { {0,1,0},{1,-4,1},{0,1,0} };
    //int filter[3][3] = { {-1,-2,-1},{0,0,0},{1,2,1} };
    int height = img.rows;
    int width = img.cols;
    int **temp = new int*[height];
    for (int i = 0; i < height; i++) {
        temp[i] = new int[width];
    }
    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++) {
            temp[i][j] = 0;
        }
    }
    int filterHeight = 3;
    int filterWidth = 3;
    int newImageHeight = height - filterHeight + 1;
    int newImageWidth = width - filterWidth + 1;
    int i, j, h, w;

    //convolution
    for (i = 0; i < newImageHeight; i++) {
        for (j = 0; j < newImageWidth; j++) {
            for (h = i; h < i + filterHeight; h++) {
                for (w = j; w < j + filterWidth; w++) {
                    temp[i][j] += filter[h - i][w - j] * (int)img.at<uchar>(h, w);
                }
            }
        }
    }

    //find max and min
    int max = 0;
    int min = 100;
    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++) {
            if (temp[i][j] > max) {
                max = temp[i][j];
            }
            if (temp[i][j] < min) {
                min = temp[i][j];
            }
        }
    }

    //clamp 0 - 255
    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++) {
            res.at<uchar>(i, j) = 0 + (temp[i][j] - min)*(255 - 0) / (max - min);
        }
    }

    //empty the temp array
    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++) {
            temp[i][j] = 0;
        }
    }

    //img - res and store it in temp array
    for (int y = 0; y < res.rows; y++) {
        for (int x = 0; x < res.cols; x++) {
            //int a = (int)img.at<uchar>(y, x) - (int)res.at<uchar>(y, x);
            //cout << a << endl;
            temp[y][x] = (int)img.at<uchar>(y, x) - (int)res.at<uchar>(y, x);
        }
    }

    //find the new max and min
    max = 0;
    min = 100;
    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++) {
            if (temp[i][j] > max) {
                max = temp[i][j];
            }
            if (temp[i][j] < min) {
                min = temp[i][j];
            }
        }
    }

    //clamp it back to 0-255
    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++) {
            res.at<uchar>(i, j) = 0 + (temp[i][j] - min)*(255 - 0) / (max - min);
            temp[i][j] = (int)res.at<uchar>(i, j);
        }
    }
    return res;
}
And here's the result:
As you can see in my code above, I already normalize the pixel values to 0-255, but I still don't know what went wrong. Can anyone explain why that is?
The greyness is because, as Max suggested in his answer, you are scaling to the 0-255 range, not clamping (as your comments in the code suggest).
However, that is not the only issue in your code. The output of the Laplace operator contains negative values. You nicely store these in an int, but then you scale and copy over to a char. Don't do that!
You need to add the result of the Laplace unchanged to your image. This way, some pixels in your image will become darker, and some lighter. This is what causes the edges to appear sharper.
Simply skip some of the loops in your code, and keep one that does temp = img - temp. That result you can freely scale or clamp to the output range and cast to char.
To clamp, simply set any pixel values below 0 to 0, and any above 255 to 255. Don't compute min/max and scale as you do, because there you reduce contrast and create the greyish wash over your image.
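A minimal sketch of those two points combined, assuming (as in the question's code) that temp holds the raw signed Laplace response and res is the CV_8U output; cv::saturate_cast does exactly the clamping described above:

    // img - Laplace, clamped to 0-255 (no min/max rescaling anywhere)
    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++) {
            int v = (int)img.at<uchar>(i, j) - temp[i][j];
            res.at<uchar>(i, j) = cv::saturate_cast<uchar>(v);
        }
    }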
Your recent question is quite similar (though the problem in the code was different), read my answer there again, it suggests a way to further simplify your code so that img-Laplace becomes a single convolution.
The problem is that you are clamping and rescaling the image. Look at the bottom left border of the moon: there are very bright pixels next to very dark pixels, and then some gray pixels right beside the bright ones. Your sharpening filter will really spike on that bright border and increase the maximum. Similarly, the black pixels will be reduced even further.
You then determine minimum and maximum and rescale the entire image. This necessarily means the entire image will lose contrast when displayed in the previous gray scale, because your filter output pixel values above 255 and below 0.
Look closely at the border of the moon in the output image:
There is a black halo (the new 0) and a bright, sharp edge (the new 255). (The browser image scaling made it less crisp in this screenshot; look at your original output.) Everything else was squashed by the rescaling, so what was previously black (0) is now dark gray.
Related
Good day,
I am looking for a nested for loop to traverse a 512x512 image in 64x64 blocks, one block per iteration. My goal is to examine the elements of each sub-region, for example to perform an edge count.
In the following code, I have tried to step by 64 rows and 64 columns (expecting 8 steps each to cover 512). Within the nested for loop, I read a Vec3b as a test run, and I am aware that each cycle of my code repeats an identical pattern rather than traversing the entire image.
int main()
{
    char imgName[] = "data/near.jpg"; //input1.jpg, input2.jpg, near.jpg, far.jpg
    Mat sourceImage = imread(imgName);
    resize(sourceImage, sourceImage, Size(512, 512));
    for (int t_row = 0; t_row < sourceImage.rows; t_row += 64)
    {
        for (int t_col = 0; t_col < sourceImage.cols; t_col += 64)
        {
            for (int row = 0; row < 64; row++)
            {
                for (int col = 0; col < 64; col++)
                {
                    Vec3b bgrPixel = sourceImage.at<Vec3b>(row, col);
                    cout << bgrPixel << endl;
                }
            }
        }
    }
    return 0;
}
If you actually want to have 64x64 sub-images per iteration, make use of OpenCV's Rect, like so:
const int w = 64;
const int h = 64;
for (int i = 0; i < int(sourceImage.size().width / w); i++)
{
    for (int j = 0; j < int(sourceImage.size().height / h); j++)
    {
        cv::Mat smallImage = sourceImage(cv::Rect(i * w, j * h, w, h));
        // Pass smallImage to any function...
    }
}
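Worth noting (standard OpenCV behavior, not stated in the original answer): sourceImage(cv::Rect(...)) returns a view into the original pixel data, not a copy, so any writes to smallImage modify sourceImage. Call smallImage.clone() if the function needs an independent copy.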
You are iterating over
Vec3b bgrPixel = sourceImage.at<Vec3b>(row, col);
with 0 <= row < 64 and 0 <= col < 64. You are right that you iterate 64 times over the same region.
It should be
Vec3b bgrPixel = sourceImage.at<Vec3b>(t_row + row, t_col + col);
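Plugged back into the loop nest from the question, the traversal becomes (a sketch, with the per-pixel processing left as a placeholder):

    // Visit every pixel once, block by block: the block origin (t_row, t_col)
    // offsets the inner window coordinates.
    for (int t_row = 0; t_row < sourceImage.rows; t_row += 64)
        for (int t_col = 0; t_col < sourceImage.cols; t_col += 64)
            for (int row = 0; row < 64; row++)
                for (int col = 0; col < 64; col++)
                {
                    Vec3b bgrPixel = sourceImage.at<Vec3b>(t_row + row, t_col + col);
                    // ... process bgrPixel for this 64x64 sub-region ...
                }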
I am trying to implement Laplace sharpening using C++. Here's my code so far:
img = imread("cow.png", 0);

Mat convoSharp() {
    //creating new image
    Mat res = img.clone();
    for (int y = 0; y < res.rows; y++) {
        for (int x = 0; x < res.cols; x++) {
            res.at<uchar>(y, x) = 0.0;
        }
    }

    //variable declaration
    int filter[3][3] = { {0,1,0},{1,-4,1},{0,1,0} };
    //int filter[3][3] = { {-1,-2,-1},{0,0,0},{1,2,1} };
    int height = img.rows;
    int width = img.cols;
    int filterHeight = 3;
    int filterWidth = 3;
    int newImageHeight = height - filterHeight + 1;
    int newImageWidth = width - filterWidth + 1;
    int i, j, h, w;

    //convolution
    for (i = 0; i < newImageHeight; i++) {
        for (j = 0; j < newImageWidth; j++) {
            for (h = i; h < i + filterHeight; h++) {
                for (w = j; w < j + filterWidth; w++) {
                    res.at<uchar>(i,j) += filter[h - i][w - j] * img.at<uchar>(h,w);
                }
            }
        }
    }

    //img - laplace
    for (int y = 0; y < res.rows; y++) {
        for (int x = 0; x < res.cols; x++) {
            res.at<uchar>(y, x) = img.at<uchar>(y, x) - res.at<uchar>(y, x);
        }
    }
    return res;
}
I don't really know what went wrong. I also tried a different filter, (1,1,1),(1,-8,1),(1,1,1), and the result is more or less the same. I don't think I need to normalize the result because it is in the range 0-255. Can anyone explain what really went wrong in my code?
Problem: uchar is too small to hold the partial results of the filtering operation.
You should accumulate all the filtered positions into a temporary variable, then check whether the value of temp is in the range <0,255>; if it is not, you need to clamp the end result to fit <0,255>.
By executing the line below,
res.at<uchar>(i,j) += filter[h - i][w - j] * img.at<uchar>(h,w);
the partial result may be greater than 255 (the max value of uchar) or negative (your filter contains -4 or -8). temp has to be a signed integer type to handle the case when the partial result is negative.
Fix:
for (i = 0; i < newImageHeight; i++) {
    for (j = 0; j < newImageWidth; j++) {
        int temp = res.at<uchar>(i,j); // added
        for (h = i; h < i + filterHeight; h++) {
            for (w = j; w < j + filterWidth; w++) {
                temp += filter[h - i][w - j] * img.at<uchar>(h,w); // add to temp
            }
        }
        // clamp temp to <0,255> before writing it back to the uchar image
        if (temp < 0)   temp = 0;
        if (temp > 255) temp = 255;
        res.at<uchar>(i,j) = temp;
    }
}
You should also clamp values to the <0,255> range when you do the subtraction of the images.
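A sketch of that clamped subtraction, reusing the question's names (img, res):

    // img - filter response, clamped to <0,255> before storing in the uchar image
    for (int y = 0; y < res.rows; y++) {
        for (int x = 0; x < res.cols; x++) {
            int v = (int)img.at<uchar>(y, x) - (int)res.at<uchar>(y, x);
            if (v < 0)   v = 0;
            if (v > 255) v = 255;
            res.at<uchar>(y, x) = (uchar)v;
        }
    }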
The problem is partially that you’re overflowing your uchar, as rafix07 suggested, but that is not the full problem.
The Laplace of an image contains negative values. It has to. And you can't clamp those to 0, you need to preserve the negative values. Also, it can produce values up to 4*255 given your version of the filter. What this means is that you need to use a signed 16 bit type to store this output.
But there is a simpler and more efficient approach!
You are computing img - laplace(img). In terms of convolutions (*), this is 1 * img - laplace_kernel * img = (1 - laplace_kernel) * img. That is to say, you can combine both operations into a single convolution. The 1 kernel that doesn’t change the image is [(0,0,0),(0,1,0),(0,0,0)]. Subtract your Laplace kernel from that and you obtain [(0,-1,0),(-1,5,-1),(0,-1,0)].
So, simply compute the convolution with that kernel, and do it using int as intermediate type, which you then clamp to the uchar output range as shown by rafix07.
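A minimal sketch of that single-kernel version, written in the same hand-rolled style as the question (variable names reused from the question; cv::saturate_cast does the final clamping):

    // Sharpen in one pass: kernel = identity - Laplace = [0,-1,0; -1,5,-1; 0,-1,0]
    int kernel[3][3] = { {0,-1,0},{-1,5,-1},{0,-1,0} };
    for (int i = 0; i < newImageHeight; i++) {
        for (int j = 0; j < newImageWidth; j++) {
            int acc = 0; // signed int intermediate: negatives and values >255 survive
            for (int h = i; h < i + 3; h++) {
                for (int w = j; w < j + 3; w++) {
                    acc += kernel[h - i][w - j] * img.at<uchar>(h, w);
                }
            }
            res.at<uchar>(i, j) = cv::saturate_cast<uchar>(acc); // clamp to 0-255
        }
    }

In plain OpenCV the whole function also collapses to a single cv::filter2D call with this kernel, since filter2D saturates to the output depth automatically.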
My code seems to have a bug somewhere, but I just can't catch it. I'm passing a 2D array to three sequential functions: the first function populates it, the second modifies the values to 1's and 0's, and the third counts the 1's and 0's. I can access the array easily inside the first two functions, but I get an access violation at the first iteration of the third one.
Main
text_image_data = new int*[img_height];
for (i = 0; i < img_height; i++) {
    text_image_data[i] = new int[img_width];
}
cav_length = new int[numb_of_files];

// Start processing - load each image and find max cavity length
for (proc = 0; proc < numb_of_files; proc++)
{
    readImage(filles[proc], text_image_data, img_height, img_width);
    threshold = makeBinary(text_image_data, img_height, img_width);
    cav_length[proc] = measureCavity(bullet[0], img_width, bullet[1], img_height, text_image_data);
}
Functions
int makeBinary(int** img, int height, int width)
{
    int threshold = 0;
    unsigned long int sum = 0;
    for (int k = 0; k < width; k++)
    {
        sum = sum + img[1][k] + img[2][k] + img[3][k] + img[4][k] + img[5][k];
    }
    threshold = sum / (width * 5);
    for (int i = 0; i < height; i++)
    {
        for (int j = 0; j < width; j++)
        {
            img[i][j] = img[i][j] > threshold ? 1 : 0;
        }
    }
    return threshold;
}
// Count pixels - find length of cavity here
int measureCavity(int &x, int& width, int &y, int &height, int **img)
{
    double mean = 1.;
    int maxcount = 0;
    int pxcount = 0;
    int i = x - 1;
    int j;
    int pxsum = 0;
    for (j = 0; j < height - 2; j++)
    {
        while (mean > 0.0)
        {
            for (int ii = i; ii > i - 4; ii--)
            {
                pxsum = pxsum + img[ii][j] + img[ii][j + 1];
            }
            mean = pxsum / 4.;
            pxcount += 2;
            i += 2;
            pxsum = 0;
        }
        maxcount = std::max(maxcount, pxcount);
        pxcount = 0;
        j++;
    }
    return maxcount;
}
I keep getting an access violation in the measureCavity() function. I'm passing and accessing the array text_image_data the same way as in makeBinary() and readImage(), and it works just fine for those functions. The size is [550][70], I'm getting the error when trying to access [327][0].
Is there a better, more reliable way to pass this array between the functions?
I work on traffic sign detection. First, I applied a segmentation to the RGB image to obtain the red channel image, as illustrated in image 1.
Secondly, I try to find homogeneous regions, in order to eliminate regions of no interest (not a traffic sign), by computing the variance of a sliding window over the image.
I use the code below, but I always get an exception:
int main(int argc, char** argv)
{
    IplImage *image1;
    if ((image1 = cvLoadImage("segmenter1/00051.jpg", 0)) == 0)
        return NULL;
    int rows = image1->width;
    int cols = image1->height;
    Mat image = Mat::zeros(cols, rows, CV_32FC1);
    double x = 0;
    double temp = 0;
    for (int i = 0; i < rows; i++) {
        for (int j = 0; j < cols; j++) {
            temp = cvGet2D(image1, j, i).val[0];
            x = temp / 255;
            image.at<float>(j, i) = x;
            x = image.at<float>(j, i);
        }
    }
    int k = 16;
    double seuil = 0.0013;   // threshold
    CvScalar blanc;          // white pixel
    blanc.val[0] = 255;
    cv::Scalar mean, stddev; // 0: 1st channel, 1: 2nd channel, 2: 3rd channel
    for (int j = 0; j < rows - k; j++)
    {
        for (int i = 0; i < cols - k; i++)
        {
            double som = 0;  // running sum
            double var = 0;
            double t = 0;
            for (int jj = j; jj < k + j; jj++)
            {
                for (int ii = i; ii < k + i; ii++)
                {
                    t = image.at<float>(jj, ii);
                    som = som + t;
                    t = t * t;
                    var = var + t;
                }
            }
            som = som / (k * k);
            if (som > 0.18) {
                var = (var / (k * k)) - (som * som);
                if (var < seuil)
                    cvSet2D(image1, j, i, blanc);
            }
        }
    }
    char stsave[80];
    cvSaveImage("variance/00051.jpg", image1);
    cv::waitKey(0);
    return 0;
}
Without the specific exception, I can only guess that it is out_of_range. According to the OpenCV docs, the cvGet2D and cvSet2D parameters are image, y, x, which effectively translates to image, row, col. You have flipped the definitions of rows and cols, and their usage conflicts between the two loops. Maybe fix these and try again.
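For illustration, a minimal sketch of the corrected bookkeeping for the first loop (my naming, not from the original answer): derive rows from height and cols from width, and index consistently as (row, col) everywhere:

    // Sketch: consistent row/col bookkeeping for the normalization loop.
    int cols = image1->width;   // x dimension
    int rows = image1->height;  // y dimension
    Mat image = Mat::zeros(rows, cols, CV_32FC1);
    for (int r = 0; r < rows; r++) {
        for (int c = 0; c < cols; c++) {
            // cvGet2D takes (image, row, col); Mat::at<float>(row, col) matches.
            image.at<float>(r, c) = (float)(cvGet2D(image1, r, c).val[0] / 255.0);
        }
    }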
I am trying to implement an alpha-trimmed filter using the OpenCV library. My code is not working properly, and the result does not look like a correctly filtered image.
The filter should work in the following way:

1. Choose a window of pixels; in my example it is a 9-pixel (3x3) window.
2. Order the pixels in increasing order.
3. Cut alpha/2 elements from each end of the ordered 'array'.
4. Calculate the arithmetic mean of the remaining pixels and insert it in the proper place.
int alphatrimmed(Mat img, int alpha)
{
    Mat img9 = img.clone();
    const int start = alpha/2;
    const int end = 9 - (alpha/2);
    //going through whole image
    for (int i = 1; i < img.rows - 1; i++)
    {
        for (int j = 1; j < img.cols - 1; j++)
        {
            uchar element[9];
            Vec3b element3[9];
            int k = 0;
            int a = 0;
            //selecting elements for window 3x3
            for (int m = i - 1; m < i + 2; m++)
            {
                for (int n = j - 1; n < j + 2; n++)
                {
                    element3[a] = img.at<Vec3b>(m*img.cols + n);
                    a++;
                    for (int c = 0; c < img.channels(); c++)
                    {
                        element[k] += img.at<Vec3b>(m*img.cols + n)[c];
                    }
                    k++;
                }
            }
            //comparing and sorting elements in window (uchar element[9])
            for (int b = 0; b < end; b++)
            {
                int min = b;
                for (int d = b + 1; d < 9; d++)
                {
                    if (element[d] < element[min])
                    {
                        min = d;
                        const uchar temp = element[b];
                        element[b] = element[min];
                        element[min] = temp;
                        const Vec3b temporary = element3[b];
                        element3[b] = element3[min];
                        element3[min] = temporary;
                    }
                }
            }
            // index in resultant image (after alpha-trimmed filter)
            int result = (i - 1) * (img.cols - 2) + j - 1;
            for (int l = start; l < end; l++)
                img9.at<Vec3b>(result) += element3[l];
            img9.at<Vec3b>(result) /= (9 - alpha);
        }
    }
    namedWindow("AlphaTrimmed Filter", WINDOW_AUTOSIZE);
    imshow("AlphaTrimmed Filter", img9);
    return 0;
}
Without actual data, it's somewhat of a guess, but an uchar can't hold the sum of 3 channels. It works modulo 256 (at least on any platform OpenCV supports).
The proper solution is std::sort with a proper comparator for your Vec3b :
bool L1(const Vec3b& a, const Vec3b& b) { return a[0]+a[1]+a[2] < b[0]+b[1]+b[2]; }
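For completeness, a sketch of how that comparator would plug into the question's window (assuming element3, start, end, result, and img9 from the question's code; requires <algorithm>):

    // Sort the 3x3 window by L1 brightness, then average the kept middle elements.
    std::sort(element3, element3 + 9, L1);
    int sum0 = 0, sum1 = 0, sum2 = 0; // int accumulators so channel sums can't overflow
    for (int l = start; l < end; l++) {
        sum0 += element3[l][0];
        sum1 += element3[l][1];
        sum2 += element3[l][2];
    }
    int n = end - start; // == 9 - alpha remaining pixels
    img9.at<Vec3b>(result) = Vec3b(sum0 / n, sum1 / n, sum2 / n);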