OpenCV - Calculating edge strength of image - c++

I am new to image processing and I need to calculate the strength of the edges present in an image. Assume a situation where you have an image and you add a blur effect to it. The edge strengths of these two images are different. I need to calculate that edge strength for both images separately.
So far I have obtained the Canny edge map of the image using the code below.
Mat src1;
src1 = imread("D.PNG", CV_LOAD_IMAGE_COLOR);
namedWindow("Original image", CV_WINDOW_AUTOSIZE);
imshow("Original image", src1);
Mat gray, edge, draw;
cvtColor(src1, gray, CV_BGR2GRAY);   // Canny expects a single-channel image
Canny(gray, edge, 50, 150, 3);       // hysteresis thresholds 50/150, aperture size 3
edge.convertTo(draw, CV_8U);         // Canny output is already 8-bit; kept for display
namedWindow("image", CV_WINDOW_AUTOSIZE);
imshow("image", draw);
waitKey(0);
return 0;
Is there any method to calculate the strength of this edge image?

mean will give you the mean value of your image. If you're using Canny as above, you can do:
Scalar pixelMean = mean(draw);
To get the mean of only the edge pixels, you would use the image as the mask as well:
Scalar edgeMean = mean(draw, draw);
Unfortunately, since Canny sets all edge pixels to 255, your mean will always be 255. If this is the measure you're looking for, you'll probably want to use Sobel (after Gaussian Blur) and calculate the gradients to get the relative edge strengths.
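A minimal sketch of that approach, reusing gray from the question (the blur kernel, the Sobel aperture, and the mean-of-magnitudes score are illustrative choices, not a fixed recipe):
Mat blurred, gx, gy, mag;
GaussianBlur(gray, blurred, Size(3, 3), 0);   // light smoothing to suppress noise
Sobel(blurred, gx, CV_32F, 1, 0, 3);          // horizontal gradient
Sobel(blurred, gy, CV_32F, 0, 1, 3);          // vertical gradient
magnitude(gx, gy, mag);                       // per-pixel gradient magnitude
Scalar edgeStrength = mean(mag);              // single score: higher = stronger edges
Running this on the original and on the blurred copy should give a noticeably lower score for the blurred one.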

Related

OpenCV Inverse Fourier Transform Distorting image

I am attempting to convert a grayscale image to and from the frequency domain using the Fourier transform in OpenCV. However, the resulting image is very distorted, even though I made no changes to the image while in the frequency domain. Could anyone help me with this? I've found several other questions explaining this, like the links below, and I have followed them exactly, but the result always ends up like this.
Inverse fourier transformation in OpenCV: https://coderedirect.com/questions/165340/inverse-fourier-transformation-in-opencv
//Make grayscale image
cvtColor(src, gray_in, COLOR_BGR2GRAY);
gray_in.convertTo(gray_in, CV_32FC1);
//Create complex output variable
//From https://docs.opencv.org/4.x/d8/d01/tutorial_discrete_fourier_transform.html
Mat planes[] = { Mat_<float>(gray_in), Mat::zeros(gray_in.size(), CV_32F) };
Mat complexI;
merge(planes, 2, complexI);
//Transform
dft(gray_in, complexI, DFT_COMPLEX_OUTPUT);
//Compute inverse transform
dft(complexI, tgt, DFT_SCALE | DFT_INVERSE | DFT_REAL_OUTPUT);
//Save file
tgt.convertTo(tgt, CV_32FC2);
imwrite(outfile, tgt);
//Display image
namedWindow(windowName);
imshow(windowName, tgt);
waitKey(0);
destroyWindow(windowName);
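A likely culprit is saving and displaying the raw CV_32F inverse result: imwrite expects 8-bit (or 16-bit) data, and imshow maps floating-point images to the [0,1] range, so a 0-255 float image comes out saturated. A minimal round-trip sketch under that assumption (reusing the question's variables, and skipping the unused planes/merge block since the forward dft overwrites complexI anyway):
//Forward transform directly on the float grayscale image
dft(gray_in, complexI, DFT_COMPLEX_OUTPUT);
//Inverse transform back to a real-valued image
dft(complexI, tgt, DFT_SCALE | DFT_INVERSE | DFT_REAL_OUTPUT);
//Convert back to 8-bit before saving or displaying
tgt.convertTo(tgt, CV_8U);
imwrite(outfile, tgt);
imshow(windowName, tgt);
waitKey(0);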

How to remove protruding part of a square shape in an image?

I would like to get a square shape from the right image above. But when I try to get it, the result also includes other protruding parts because they have a similar color. Are there any solutions to get the result like below? (The square's lines are not 100% straight; they are a little distorted.)
This is the code I wrote.
cv::Mat img_gray, img, clahe_img, threshold_img, bitwise_img, morph_img;
cv::Mat rectified_CCD_img = cv::imread("img.png");
cv::Mat kernel = cv::Mat::ones(99, 99, CV_8U);
cv::Ptr<cv::CLAHE> clahe = cv::createCLAHE(10, cv::Size(100, 100));
cv::cvtColor(rectified_CCD_img, img_gray, cv::COLOR_BGR2GRAY);
cv::medianBlur(img_gray, img, 33);
clahe->apply(img, clahe_img);
cv::threshold(clahe_img, threshold_img, 0, 255, cv::THRESH_OTSU);
cv::bitwise_not(threshold_img, bitwise_img);
cv::morphologyEx(bitwise_img, morph_img, cv::MORPH_OPEN, kernel);
That's the original image:
Google Drive link
For this specific image my pipeline would be very simple:
1. Binary threshold the image with a fixed threshold. The rectangle is quite dark compared to the rest of the image.
2. Morphological opening with a large rectangular kernel to get rid of the "noise".
3. To get a perfect rectangle, determine the bounding rectangle of the remaining part, and draw a white rectangle.
That'd be the whole code:
// Read image
cv::Mat img = cv::imread("OTH61.png", cv::IMREAD_GRAYSCALE);
// Binary threshold image at fixed threshold
cv::Mat img_thr;
cv::threshold(img, img_thr, 32, 255, cv::THRESH_BINARY_INV);
// Morphological opening with large rectangular kernel
cv::Mat img_mop;
cv::morphologyEx(img_thr, img_mop, cv::MORPH_OPEN, cv::Mat::ones(51, 51, CV_8UC1));
// Draw rectangle w.r.t. to the bounding rectangle of the remaining part
cv::rectangle(img_mop, cv::boundingRect(img_mop), 255, cv::FILLED);
The thresholded image:
The morphological opened image:
The cleaned image:

Noise removal to create mask in OpenCV

I need to create a mask to retrieve an object (foreground object) based on two related images.
Image 1:
Image 2:
The images contain a foreground object and a background with texture.
The two images are mostly the same except that in image2, the foreground object may have changed a little bit (it could have been rotated, translated or/and scaled).
Using OpenCV, I did the following:
1. Perform image alignment (using findTransformECC with param cv::MOTION_AFFINE) to get the transformation of the foreground;
2. transform image1 (using cv::warpAffine with params cv::INTER_LINEAR + cv::WARP_INVERSE_MAP) based on the transform matrix above;
3. take the absolute difference (cv::absdiff & cv::threshold with param cv::THRESH_BINARY_INV) between image2 and the already-transformed image1; a rough sketch of these steps follows below.
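A minimal sketch of that pipeline (variable names and the threshold value are illustrative assumptions, not the exact code used):
cv::Mat g1, g2;
cv::cvtColor(image1, g1, cv::COLOR_BGR2GRAY);
cv::cvtColor(image2, g2, cv::COLOR_BGR2GRAY);
cv::Mat warp = cv::Mat::eye(2, 3, CV_32F);              // affine warp, identity start
cv::findTransformECC(g1, g2, warp, cv::MOTION_AFFINE);  // estimate the foreground transform
cv::Mat aligned;
cv::warpAffine(image1, aligned, warp, image2.size(),
               cv::INTER_LINEAR + cv::WARP_INVERSE_MAP); // apply it inversely to image1
cv::Mat diff, mask;
cv::absdiff(image2, aligned, diff);
cv::cvtColor(diff, diff, cv::COLOR_BGR2GRAY);
cv::threshold(diff, mask, 30, 255, cv::THRESH_BINARY_INV); // inverted mask, as described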
I think I am close to my goal, but I still cannot get a clean mask of the foreground object due to remaining noise in the background area.
What is the solution to remove all the noise in image_absdiff_invert.png (above) in order to create a clean mask of the foreground object?
I just tried it.
Using morphological operations is often a bit tricky (trial and error) and gives me this result:
Using a median filter might be a good pre-processing step (or maybe even enough for your contour extraction) and gives this result (this is just a median blur of the input image, no morphological operations yet):
Here's the test code:
int main(int argc, char* argv[])
{
cv::Mat input = cv::imread("C:/StackOverflow/Input/maskNoise.png", CV_LOAD_IMAGE_GRAYSCALE);
cv::Mat mask = input.clone();
// closing (dilate twice, erode twice) followed by an extra opening
// (erode twice, dilate twice) -- counts found by trial and error
cv::dilate(mask, mask, cv::Mat());
cv::dilate(mask, mask, cv::Mat());
cv::erode(mask, mask, cv::Mat());
cv::erode(mask, mask, cv::Mat());
cv::erode(mask, mask, cv::Mat());
cv::erode(mask, mask, cv::Mat());
//cv::erode(mask, mask, cv::Mat());
//cv::erode(mask, mask, cv::Mat());
//cv::dilate(mask, mask, cv::Mat());
//cv::dilate(mask, mask, cv::Mat());
cv::dilate(mask, mask, cv::Mat());
cv::dilate(mask, mask, cv::Mat());
// alternative: plain median filter on the raw input
cv::Mat median;
cv::medianBlur(input, median, 7);
cv::Mat resizedIn;
cv::Mat resizedMask;
cv::Mat resizedMedian;
cv::resize(mask, resizedMask, cv::Size(), 0.5, 0.5);
cv::resize(median, resizedMedian, cv::Size(), 0.5, 0.5);
cv::resize(input, resizedIn, cv::Size(), 0.5, 0.5);
cv::imshow("input", resizedIn);
cv::imshow("mask", resizedMask);
cv::imshow("median", resizedMedian);
cv::imwrite("C:/StackOverflow/Output/maskNoiseMorph.png", mask);
cv::imwrite("C:/StackOverflow/Output/maskNoiseMedian.png", median);
cv::waitKey(0);
return 0;
}
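If the median result is already clean enough, one hedged follow-up (not part of the original answer) is to threshold it and keep only the largest contour as the final mask:
// assumes 'median' from above; the threshold value is an illustrative guess
cv::Mat bin;
cv::threshold(median, bin, 128, 255, cv::THRESH_BINARY);
std::vector<std::vector<cv::Point> > contours;
cv::findContours(bin, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
cv::Mat cleanMask = cv::Mat::zeros(bin.size(), CV_8U);
if (!contours.empty())
{
    size_t largest = 0;
    for (size_t i = 1; i < contours.size(); ++i)
        if (cv::contourArea(contours[i]) > cv::contourArea(contours[largest]))
            largest = i;
    cv::drawContours(cleanMask, contours, (int)largest, cv::Scalar(255), -1);
}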

OpenCV add black pixels from one Mat to another. C++

I'm trying to create an outlining image effect which takes an image (or video), finds the outlines, and then draws them on top of the original image as black lines. I am currently getting the outline like this:
Mat im = imread(...);
Mat outline;
cvtColor(im, outline, COLOR_BGR2GRAY);
GaussianBlur(outline, outline, Size(15,15),2,2);
Canny(outline, outline, 0, 30, 3);
bitwise_not(outline, outline);
cvtColor(outline,outline, COLOR_GRAY2BGR);
How would I then go about making sure that all of the pixels which are black get added to im?
You can use setTo with a mask.
You should do:
im.setTo(Scalar(0,0,0), ~outline);
which means: in the image im, set all pixels which are black in outline to black (Scalar(0,0,0)).
Or you can avoid using bitwise_not, and thus avoid negating the mask again. The final code will look like:
Mat im = imread(...);
Mat outline;
cvtColor(im, outline, COLOR_BGR2GRAY);
GaussianBlur(outline, outline, Size(15,15),2,2);
Canny(outline, outline, 0, 30, 3);
im.setTo(Scalar(0,0,0), outline);
// or
// bitwise_not(outline, outline);
// im.setTo(Scalar(0,0,0), ~outline);
imshow("Result", im);
waitKey();

OpenCV keep background transparent during warpAffine

I create a Bird-View-Image with the warpPerspective()-function like this:
warpPerspective(frame, result, H, result.size(), CV_WARP_INVERSE_MAP, BORDER_TRANSPARENT);
The result looks very good and also the border is transparent:
Bird-View-Image
Now I want to put this image on top of another image "out". I try doing this with the function warpAffine like this:
warpAffine(result, out, M, out.size(), CV_INTER_LINEAR, BORDER_TRANSPARENT);
I also converted "out" to a four-channel image with an alpha channel, according to a question which was already asked on Stack Overflow:
Convert Image
This is the code: cvtColor(out, out, CV_BGR2BGRA);
I expected to see the chessboard but not the gray background. But in fact, my result looks like this:
Result Image
What am I doing wrong? Am I forgetting something? Is there another way to solve my problem? Any help is appreciated :)
Thanks!
Best regards
DamBedEi
I hope there is a better way, but here is something you could do:
1. Do warpAffine normally (without the transparency trick).
2. Find the contour that encloses the warped image.
3. Use this contour to create a mask (white inside the warped image, black at the borders).
4. Use this mask to copy the warped image into the other image.
Sample code:
// load images
cv::Mat image2 = cv::imread("lena.png");
cv::Mat image = cv::imread("IKnowOpencv.jpg");
cv::resize(image, image, image2.size());
// perform warp perspective
std::vector<cv::Point2f> prev;
prev.push_back(cv::Point2f(-30,-60));
prev.push_back(cv::Point2f(image.cols+50,-50));
prev.push_back(cv::Point2f(image.cols+100,image.rows+50));
prev.push_back(cv::Point2f(-50,image.rows+50 ));
std::vector<cv::Point2f> post;
post.push_back(cv::Point2f(0,0));
post.push_back(cv::Point2f(image.cols-1,0));
post.push_back(cv::Point2f(image.cols-1,image.rows-1));
post.push_back(cv::Point2f(0,image.rows-1));
cv::Mat homography = cv::findHomography(prev, post);
cv::Mat imageWarped;
cv::warpPerspective(image, imageWarped, homography, image.size());
// find external contour and create mask
std::vector<std::vector<cv::Point> > contours;
cv::Mat imageWarpedCloned = imageWarped.clone(); // clone the image because findContours will modify it
cv::cvtColor(imageWarpedCloned, imageWarpedCloned, CV_BGR2GRAY); //only if the image is BGR
cv::findContours (imageWarpedCloned, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
// create mask
cv::Mat mask = cv::Mat::zeros(image.size(), CV_8U);
cv::drawContours(mask, contours, 0, cv::Scalar(255), -1);
// copy warped image into image2 using the mask
cv::erode(mask, mask, cv::Mat()); // erode the mask slightly to avoid border artifacts
imageWarped.copyTo(image2, mask); // copy the image using the mask
//show images
cv::imshow("imageWarpedCloned", imageWarpedCloned);
cv::imshow("warped", imageWarped);
cv::imshow("image2", image2);
cv::waitKey();
One of the easiest ways to approach this (not necessarily the most efficient) is to warp the image twice, but set the OpenCV constant boundary value to different values each time (i.e. zero the first time and 255 the second time). These constant values should be chosen towards the minimum and maximum values in the image.
Then it is easy to find a binary mask where the two warp values are close to equal.
More importantly, you can also create a transparency effect through simple algebra like the following:
new_image = np.float32((warp_const_255 - warp_const_0) * preferred_bkg_img) / 255.0 + np.float32(warp_const_0)
The main reason I prefer this method is that OpenCV seems to interpolate smoothly down (or up) to the constant value at the image edges. A fully binary mask will pick up these dark or light fringe areas as artifacts. The above method acts more like true transparency and blends properly with the preferred background.
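A rough C++ sketch of that two-warp idea (src, M, dsize, and the tolerance are illustrative assumptions):
// warp twice with different constant border values (0 and 255)
cv::Mat warp0, warp255;
cv::warpAffine(src, warp0, M, dsize, cv::INTER_LINEAR,
               cv::BORDER_CONSTANT, cv::Scalar::all(0));
cv::warpAffine(src, warp255, M, dsize, cv::INTER_LINEAR,
               cv::BORDER_CONSTANT, cv::Scalar::all(255));
// where both results agree, the pixel came from the image, not the border
cv::Mat diff;
cv::absdiff(warp0, warp255, diff);
if (diff.channels() > 1)
    cv::cvtColor(diff, diff, cv::COLOR_BGR2GRAY);
cv::Mat mask = diff < 8;   // small tolerance for the interpolated fringe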
Here's a small test program that warps with transparent "border", then copies the warped image to a solid background.
int main()
{
cv::Mat input = cv::imread("../inputData/Lenna.png");
cv::Mat transparentInput, transparentWarped;
cv::cvtColor(input, transparentInput, CV_BGR2BGRA);
//transparentInput = input.clone();
// create sample transformation mat
cv::Mat M = cv::Mat::eye(2,3, CV_64FC1);
// as a sample, just scale down and translate a little:
M.at<double>(0,0) = 0.3;
M.at<double>(0,2) = 100;
M.at<double>(1,1) = 0.3;
M.at<double>(1,2) = 100;
// warp to same size with transparent border:
cv::warpAffine(transparentInput, transparentWarped, M, transparentInput.size(), CV_INTER_LINEAR, cv::BORDER_TRANSPARENT);
// NOW: merge image with background, here I use the original image as background:
cv::Mat background = input;
// create output buffer with same size as input
cv::Mat outputImage = input.clone();
for(int j=0; j<transparentWarped.rows; ++j)
for(int i=0; i<transparentWarped.cols; ++i)
{
cv::Scalar pixWarped = transparentWarped.at<cv::Vec4b>(j,i);
cv::Scalar pixBackground = background.at<cv::Vec3b>(j,i);
float transparency = pixWarped[3] / 255.0f; // pixel value: 0 (0.0f) = fully transparent, 255 (1.0f) = fully solid
outputImage.at<cv::Vec3b>(j,i)[0] = transparency * pixWarped[0] + (1.0f-transparency)*pixBackground[0];
outputImage.at<cv::Vec3b>(j,i)[1] = transparency * pixWarped[1] + (1.0f-transparency)*pixBackground[1];
outputImage.at<cv::Vec3b>(j,i)[2] = transparency * pixWarped[2] + (1.0f-transparency)*pixBackground[2];
}
cv::imshow("warped", outputImage);
cv::imshow("input", input);
cv::imwrite("../outputData/TransparentWarped.png", outputImage);
cv::waitKey(0);
return 0;
}
I use this as input:
and get this output:
which looks like the ALPHA channel isn't set to ZERO by warpAffine but to something like 205...
But in general, this is the way I would do it (unoptimized).