OpenCV Blob detector using Laplacian - C++

I've got an image from a microscope and need to analyse it (isolate the blobs). I've tried a lot of thresholding and filtering methods that gave me good results; now I'm trying to get the best results.
I've been reading about the Laplace operator and about applying the Laplacian of Gaussian to find zero-crossings, which are the edges of the image.
I've applied this to my subject image and can view the Laplacian result, but I don't know how to "use" it, since it's in a different "space" (depth).
Here are the subject image and the Laplacian result. How can I get blobs from the Laplacian image?

Since you have already applied the Laplacian to the input image, the next steps are:
Threshold the image to obtain a binary image.
Find contours (blobs) in the binary image.
The sample code may look like:
import cv2

# laplacian_img: the Laplacian result, converted to a single-channel 8-bit image
laplacian_img_RGB = cv2.cvtColor(laplacian_img, cv2.COLOR_GRAY2BGR)
ret, thresh = cv2.threshold(laplacian_img, 140, 255, cv2.THRESH_BINARY)
# findContours returns 3 values in OpenCV 3.x (only 2 in 4.x)
img, contours, hierarchy = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
# for debugging purposes, draw every contour in green
for i in range(len(contours)):
    laplacian_img_RGB = cv2.drawContours(laplacian_img_RGB, contours, i, [0, 255, 0], 3)
cv2.imwrite("./debug.png", laplacian_img_RGB)

Related

OpenCV: Reversing the negative areas of an image

I'm using OpenCV 3.4.6 in a C++/Objective-C project and, given an image with negative rectangular areas like this one:
I should detect those negative areas, reverse them and finally get the original image.
I tried to use findContours, enhancing the contrast of the original image or adding a threshold but the rectangles are not detected.
Here is one of the tests I've tried:
Mat contrasted = [self enhanceContrastTo: matOriginal];
Mat thresholded;
threshold(contrasted, thresholded, 125, 241, THRESH_BINARY);
std::vector<std::vector<cv::Point> > contours;
std::vector<Vec4i> hierarchy;
findContours( thresholded, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE ); // the last argument must be an approximation mode, not a second retrieval mode
/* contrast method */
+(Mat)enhanceContrastTo:(Mat)image {
cv::Mat lab_image;
cv::cvtColor(image, lab_image, CV_BGR2Lab);
// Extract the L channel
std::vector<cv::Mat> lab_planes(3);
cv::split(lab_image, lab_planes); // now we have the L image in lab_planes[0]
// apply the CLAHE algorithm to the L channel
cv::Ptr<cv::CLAHE> clahe = cv::createCLAHE();
// clahe->setClipLimit(4);
clahe->setClipLimit(3);
cv::Mat dst;
clahe->apply(lab_planes[0], dst);
// Merge the color planes back into a Lab image
dst.copyTo(lab_planes[0]);
cv::merge(lab_planes, lab_image);
// convert back to RGB
cv::Mat image_clahe;
cv::cvtColor(lab_image, image_clahe, CV_Lab2BGR);
return image_clahe;
}
The rectangles are clearly visible to the naked eye, I hope that opencv can also identify them but I don't know how.
Any idea?
Thanks
This particular question isn't too complicated, but even minor variants can get complex. I can suggest a couple of simple ideas that should suffice to solve the problem.
1) Instead of contours, you can check whether neighboring pixels are close to being inverses of each other - this should filter out most irrelevant edges. Checking for near-inverse alone is not sufficient, though, since a monotone grey area (127) fits the criteria too; also require a minimal difference between the two pixels.
2) Since the rectangles are parallel to the axes, you can simply go along each row and column and count the pixels that are potential edges of reversed rectangles. Better than just counting: check whether you have long continuous runs of such pixels and record exactly where these segments are.
3) Use the found segments (or just the indexes of rows and columns) of reversed edge pixels to build candidate reversed rectangles, then verify each candidate.
This is only an algorithm draft - it will surely require refining. I am not sure why you wanted to use the contour function, though.
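A rough sketch of ideas (1) and (2) in Python (for brevity; the logic ports directly to C++). The tolerances and the run threshold are placeholder values you would need to tune:
import cv2
import numpy as np

gray = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)

# Compare each pixel with its right-hand neighbour.
p = gray[:, :-1].astype(np.int16)
q = gray[:, 1:].astype(np.int16)
# Candidate edge: the neighbour is roughly the negative of the pixel...
near_inverse = np.abs(q - (255 - p)) < 10
# ...and the pair is not monotone mid-grey (minimal difference required).
not_midgrey = np.abs(p - q) > 60
candidates = near_inverse & not_midgrey

# Count candidate pixels per column; columns with many hits are likely the
# vertical sides of a reversed rectangle. Repeat along rows for the
# horizontal sides, then combine the indexes into candidate rectangles.
col_counts = candidates.sum(axis=0)
edge_columns = np.where(col_counts > 50)[0]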

OpenCV - Removal of noise in image

I have an image here with a table. In the column on the right, the background is filled with noise.
How can I detect the areas with noise? I only want to apply a filter to the noisy parts, because I need to do OCR on the image and filtering everything would reduce the overall recognition accuracy.
What kind of filter is best for removing the background noise in this image?
As said, I need to do OCR on the image.
I tried some filters/operations in OpenCV and it seems to work pretty well.
Step 1: Dilate the image -
import cv2
import numpy as np

kernel = np.ones((5, 5), np.uint8)
img = cv2.dilate(img, kernel, iterations=1)  # assign the result back
As you see, the noise is gone but the characters are very light, so I eroded the image.
Step 2: Erode the image -
kernel = np.ones((5, 5), np.uint8)
img = cv2.erode(img, kernel, iterations=1)
As you can see, the noise is gone however some characters on the other columns are broken. I would recommend running these operations on the noisy column only. You might want to use HoughLines to find the last column. Then you can extract that column only, run dilation + erosion and replace this with the corresponding column in the original image.
Additionally, dilation followed by erosion is a single operation called closing, which you can call directly:
closed = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)
As @Ermlg suggested, medianBlur with a kernel of 3 also works wonderfully.
median = cv2.medianBlur(img, 3)
Alternative Step
As you can see all these filters work but it is better if you implement these filters only in the part where the noise is. To do that, use the following:
edges = cv2.Canny(img, 50, 150, apertureSize=3)  # img is gray here
# The last two arguments are the minimum line length and the maximum gap
# between two line segments, respectively (passed as keywords so they are
# not mistaken for the optional output argument).
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 100, minLineLength=1000, maxLineGap=50)
for line in lines:
    for x1, y1, x2, y2 in line:
        print(x1, y1)
# This prints the start coordinates of all the lines. Take the x value that
# lies between 0.75 * w and w, where w is the width of the entire image.
# Here that gives (x1, y1) = (1896, 766).
Then you can extract that part only, like:
extract = img[y1:h, x1:w]  # w, h are the width and height of the image
Then, implement the filter (median or closing) in this image. After removing the noise, you need to put this filtered image in place of the blurred part in the original image.
img[y1:h, x1:w] = median  # median is the filtered extract from the previous step
This is straightforward in C++:
extract.copyTo(img(cv::Rect(x1, y1, w - x1, h - y1)));
Final Result with alternate method
Hope it helps!
My solution is based on thresholding, getting the resulting image in 4 steps:
1) Read the image with OpenCV 3.2.0.
2) Apply GaussianBlur() to smooth the image, especially the gray region.
3) Mask the image to set the text to white and the rest to black.
4) Invert the masked image to get black text on white.
The code is in Python 2.7; it can easily be ported to C++.
import numpy as np
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# read Danish doc image
img = cv2.imread('./imagesStackoverflow/danish_invoice.png')
# apply GaussianBlur to smooth image
blur = cv2.GaussianBlur(img,(5,3), 1)
# set pixels in the range (0,0,0)-(150,150,150), i.e. the dark text, to white and the rest to black
mask=cv2.inRange(blur,(0,0,0),(150,150,150))
# invert the image to have text black-in-white
res = 255 - mask
plt.figure(1)
plt.subplot(121), plt.imshow(img[:,:,::-1]), plt.title('original')
plt.subplot(122), plt.imshow(blur, cmap='gray'), plt.title('blurred')
plt.figure(2)
plt.subplot(121), plt.imshow(mask, cmap='gray'), plt.title('masked')
plt.subplot(122), plt.imshow(res, cmap='gray'), plt.title('result')
plt.show()
The following are the images plotted by the code, for reference.
Here is the result image at 2197 x 3218 pixels.
As far as I know, the median filter is the best solution to reduce this kind of noise. I would recommend a median filter with a 3x3 window; see the function cv::medianBlur().
But be careful when using any noise filtration together with OCR: it can decrease recognition accuracy.
I would also recommend trying the pair of functions cv::erode() and cv::dilate(), but I'm not sure it will work better than cv::medianBlur() with a 3x3 window.
I would go with median blur (probably a 5x5 kernel).
If you are planning to apply OCR to the image, I would advise the following:
Filter the image using a median filter.
Find contours in the filtered image; you will get only text contours (call them F).
Find contours in the original image (call them O).
Isolate all contours in O that intersect any contour in F, as sketched below.
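A minimal sketch of this slower approach in Python, assuming gray holds the page as a single-channel uint8 image and using bounding-box overlap as a cheap stand-in for contour intersection (OpenCV 4.x findContours signature):
import cv2

filtered = cv2.medianBlur(gray, 3)
_, bw_f = cv2.threshold(filtered, 127, 255, cv2.THRESH_BINARY_INV)
_, bw_o = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY_INV)
f_contours, _ = cv2.findContours(bw_f, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
o_contours, _ = cv2.findContours(bw_o, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

f_rects = [cv2.boundingRect(c) for c in f_contours]

def overlaps(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

# Keep the contours of the original image that touch a filtered-image contour
kept = [c for c in o_contours
        if any(overlaps(cv2.boundingRect(c), r) for r in f_rects)]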
Faster solution:
Find contours in the original image.
Filter them based on size.
Blur (3x3 box)
Threshold at 127
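A hedged sketch of this faster pipeline in Python; the area cutoff is a placeholder to tune:
import cv2
import numpy as np

gray = cv2.imread("document.png", cv2.IMREAD_GRAYSCALE)
_, bw = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY_INV)
contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Size filter: keep only contours above a minimal area
mask = np.zeros_like(gray)
for c in contours:
    if cv2.contourArea(c) > 4:
        cv2.drawContours(mask, [c], -1, 255, thickness=-1)

# Copy the surviving pixels onto a white canvas, then blur and threshold
clean = np.full_like(gray, 255)
clean[mask == 255] = gray[mask == 255]
blurred = cv2.blur(clean, (3, 3))  # 3x3 box blur
_, result = cv2.threshold(blurred, 127, 255, cv2.THRESH_BINARY)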
Result:
If you are very worried about removing pixels that could hurt your OCR detection, i.e. you want to stay as true to the original as possible without adding artefacts, then you should create a blob filter and delete any blobs smaller than n pixels or so.
I'm not going to write code, but I know this works great as I use it myself, though I don't use OpenCV (I wrote my own multithreaded blob filter for speed reasons). Sorry, I cannot share my code here; I'm just describing how to do it.
If processing time is not an issue, a very effective method in this case would be to compute all black connected components and remove those smaller than a few pixels. It would remove all the noisy dots (apart from those touching a valid component) but preserve all characters and the document structure (lines and so on).
The function to use would be connectedComponentsWithStats (beforehand you probably need to produce the negative image; the threshold function with THRESH_BINARY_INV would work in this case), drawing white rectangles where small connected components were found.
In fact, this method could also be used to find characters, defined as connected components of a given minimum and maximum size, with an aspect ratio in a given range.
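A minimal sketch of this approach in Python (the idea itself is language-agnostic); min_area is a placeholder to tune:
import cv2

gray = cv2.imread("document.png", cv2.IMREAD_GRAYSCALE)
# Negative image: dark text and noise become white foreground components
_, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)
n, labels, stats, centroids = cv2.connectedComponentsWithStats(bw, connectivity=8)

clean = gray.copy()
min_area = 10  # components smaller than this are treated as noise
for i in range(1, n):  # label 0 is the background
    if stats[i, cv2.CC_STAT_AREA] < min_area:
        x, y = stats[i, cv2.CC_STAT_LEFT], stats[i, cv2.CC_STAT_TOP]
        w, h = stats[i, cv2.CC_STAT_WIDTH], stats[i, cv2.CC_STAT_HEIGHT]
        clean[y:y+h, x:x+w] = 255  # white rectangle over the small blob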
I had already faced the same issue, and this gave me the best solution:
Convert the source image to a grayscale image, apply the fastNlMeansDenoising function, and then threshold.
Like this -
fastNlMeansDenoising(gray, dst, 3.0, 7, 21); // h, templateWindowSize, searchWindowSize
threshold(dst, finaldst, 150, 255, THRESH_BINARY);
You can also adjust the threshold according to the background noise in your image, e.g.:
threshold(dst, finaldst, 200, 255, THRESH_BINARY);
Note: if your column lines get removed, you can take a mask of the column lines from the source image and apply it to the denoised result using bitwise operations like AND, OR, and XOR.
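For example, a hedged Python one-liner, assuming line_mask is a hypothetical binary image that is 255 exactly where the (black) table lines are, e.g. extracted from the source with morphological opening or HoughLinesP:
import cv2
# Paint the lines back in black onto the denoised result:
restored = cv2.bitwise_and(denoised, cv2.bitwise_not(line_mask))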
Try thresholding the image like this, making sure your src is grayscale. Note that with the Otsu flag the fixed value 150 is ignored: the threshold is computed automatically, pixels above it are set to 255, and the rest become 0.
threshold(src, output, 150, 255, CV_THRESH_BINARY | CV_THRESH_OTSU);
You might want to invert the image first, since you are trying to negate the gray pixels; after the operation, invert it again to get your desired result.
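A small Python sketch of that invert-threshold-invert sequence, with src as a hypothetical grayscale uint8 image:
import cv2

inverted = cv2.bitwise_not(src)  # negate the gray pixels
_, bw = cv2.threshold(inverted, 150, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
result = cv2.bitwise_not(bw)     # invert back to the desired polarity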

OpenCV Binary Image Mask for Image Analysis in C++

I'm trying to analyse some images which have a lot of noise around the outside of the image, but a clear circular centre with a shape inside. The centre is the part I'm interested in, but the outside noise is affecting my binary thresholding of the image.
To ignore the noise, I'm trying to set up a circular mask of known centre position and radius whereby all pixels outside this circle are changed to black. I figure that everything inside the circle will now be easy to analyse with binary thresholding.
I'm just wondering if someone might be able to point me in the right direction for this sort of problem please? I've had a look at this solution: How to black out everything outside a circle in Open CV but some of my constraints are different and I'm confused by the method in which source images are loaded.
Thank you in advance!
//First load your source image, here load as gray scale
cv::Mat srcImage = cv::imread("sourceImage.jpg", CV_LOAD_IMAGE_GRAYSCALE);
//Then define your mask image
cv::Mat mask = cv::Mat::zeros(srcImage.size(), srcImage.type());
//Define your destination image
cv::Mat dstImage = cv::Mat::zeros(srcImage.size(), srcImage.type());
//I assume you want to draw the circle at the center of your image, with a radius of 50
cv::circle(mask, cv::Point(mask.cols/2, mask.rows/2), 50, cv::Scalar(255, 0, 0), -1, 8, 0);
//Now you can copy your source image to destination image with masking
srcImage.copyTo(dstImage, mask);
Then do your further processing on your dstImage. Assume this is your source image:
Then the above code gives you this as gray scale input:
And this is the binary mask you created:
And this is your final result after masking operation:
Since you are looking for a clear circular center with a shape inside, you could use the Hough Circle Transform to find that area; a careful selection of parameters will help you get it exactly.
A detailed tutorial is here:
http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/hough_circle/hough_circle.html
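For example, a hedged Python sketch (the linked tutorial is C++, but the idea is identical); every parameter value below is a placeholder to tune:
import cv2
import numpy as np

gray = cv2.imread("sourceImage.jpg", cv2.IMREAD_GRAYSCALE)
blurred = cv2.medianBlur(gray, 5)  # Hough works better on smoothed input
circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                           param1=100, param2=30, minRadius=40, maxRadius=80)
if circles is not None:
    x, y, r = np.round(circles[0, 0]).astype(int)  # strongest circle
    mask = np.zeros_like(gray)
    cv2.circle(mask, (x, y), r, 255, -1)  # filled circle as the mask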
For setting pixels outside a region black:
Create a mask image :
cv::Mat mask = cv::Mat::zeros(img_src.size(), img_src.type()); // start from an all-black mask
Mark the points inside with white color :
cv::circle( mask, center, radius, cv::Scalar(255,255,255),-1, 8, 0 );
You can now use bitwise_and to get an output image with only the pixels enclosed by the mask:
cv::bitwise_and(mask,img_src,output);

OpenCV: findContours exception

My MATLAB code is:
h = fspecial('average', filterSize);
imageData = imfilter(imageData, h, 'replicate');
bwImg = im2bw(imageData, grayThresh);
cDist=regionprops(bwImg, 'Area');
cDist=[cDist.Area];
The OpenCV code is:
cv::blur(dst, dst,cv::Size(filterSize,filterSize));
dst = im2bw(dst, grayThresh);
cv::vector<cv::vector<cv::Point> > contours;
cv::vector<cv::Vec4i> hierarchy;
cv::findContours(dst,contours,hierarchy,CV_RETR_CCOMP, CV_CHAIN_APPROX_NONE);
Here is my image-to-black-and-white function:
cv::Mat AutomaticMacbethDetection::im2bw(cv::Mat src, double grayThresh)
{
cv::Mat dst;
cv::threshold(src, dst, grayThresh, 1, CV_THRESH_BINARY);
return dst;
}
I'm getting an exception in findContours(): C++ exception: cv::Exception at memory location 0x0000003F6E09E0A0.
Can you please explain what I am doing wrong?
dst is a cv::Mat that I used all along; it holds my original values.
Update: here is my matrix written to a *.txt file:
http://www.filedropper.com/gili
UPDATE 2:
I have added dst.convertTo(dst, CV_8U); as Micka suggested, and I no longer get an exception. However, the values are nothing like expected.
Take a look at this question which has a similar problem to what you're encountering: Matlab and OpenCV calculate different image moment m00 for the same image.
Basically, the OP in the linked post is trying to find the zeroth image moment m00 of all closed contours - which is just the area - using findContours in OpenCV and regionprops in MATLAB. In MATLAB that quantity is the Area property returned by regionprops, and judging from your MATLAB code, you wish to find the same quantity.
From the post, there is most certainly a difference between how OpenCV and MATLAB find contours in an image. It boils down to the way the two platforms decide what a "connected pixel" is: OpenCV uses a four-pixel neighbourhood while MATLAB uses an eight-pixel neighbourhood.
As such, there is nothing wrong with your implementation, and converting to 8UC1 is good. However, the areas (and ultimately the total number of connected components and the contours themselves) found by MATLAB and OpenCV will not be the same. The only way to get exactly the same result is to manually draw each contour found by findContours on a black image and run cv::moments directly on that image.
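For instance, a hedged Python sketch of that draw-then-measure idea, with bw as the hypothetical 8UC1 binary image (OpenCV 4.x findContours signature):
import cv2
import numpy as np

contours, _ = cv2.findContours(bw, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE)
areas = []
for c in contours:
    canvas = np.zeros(bw.shape, np.uint8)
    cv2.drawContours(canvas, [c], -1, 255, thickness=-1)  # filled contour
    # m00 of a binary image is the count of nonzero pixels, i.e. the area
    areas.append(cv2.moments(canvas, binaryImage=True)["m00"])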
However, because of the differing implementations of cv::blur() in comparison to fspecial with an averaging mask that is even, you still may not be able to get the same results along the borders of the image. If there are no important contours around the borders of your image, then hopefully this will give you the right result.
Good luck!

Difference between adaptive thresholding and normal thresholding in opencv

I have this gray video stream:
The histogram of this image:
The image thresholded by:
threshold( image, image, 150, 255, CV_THRESH_BINARY );
I get:
This is what I expect.
When I do adaptive thresholding with:
adaptiveThreshold(image, image,255,ADAPTIVE_THRESH_GAUSSIAN_C, CV_THRESH_BINARY,15,-5);
I get:
This looks like edge detection and not thresholding. What I expected was black-and-white areas. So my question is: why does this look like edge detection rather than thresholding?
Thanks in advance.
Adaptive Threshold works like this:
The function transforms a grayscale image to a binary image according
to the formulas:
THRESH_BINARY: dst(x, y) = maxValue if src(x, y) > T(x, y), otherwise 0
THRESH_BINARY_INV: dst(x, y) = 0 if src(x, y) > T(x, y), otherwise maxValue
where T(x, y) is a threshold calculated individually for each pixel.
Threshold works differently:
The function applies fixed-level thresholding to a single-channel array.
So it sounds like adaptiveThreshold calculates a threshold pixel-by-pixel, whereas threshold calculates it for the whole image -- it measures the whole image by one ruler, whereas the other makes a new "ruler" for each pixel.
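As a side-by-side illustration in Python, mirroring the calls in the question:
import cv2

gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
# One global threshold for the whole image:
_, global_bw = cv2.threshold(gray, 150, 255, cv2.THRESH_BINARY)
# A per-pixel threshold from a Gaussian-weighted 15x15 neighbourhood,
# offset by C = -5:
adaptive_bw = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                    cv2.THRESH_BINARY, 15, -5)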
I had the same issue doing adaptive thresholding for OCR purposes. (sorry this is Python not C++)
img = cv.LoadImage(sys.argv[1])
bwsrc = cv.CreateImage( cv.GetSize(img), cv.IPL_DEPTH_8U, 1)
bwdst = cv.CreateImage( cv.GetSize(img), cv.IPL_DEPTH_8U, 1)
cv.CvtColor(img, bwsrc, cv.CV_BGR2GRAY)
cv.AdaptiveThreshold(bwsrc, bwdst, 255.0, cv.CV_ADAPTIVE_THRESH_MEAN_C, cv.CV_THRESH_BINARY, 11)  # legacy API order: adaptive_method before threshold_type
cv.ShowImage("threshhold", bwdst)
cv.WaitKey()
The last parameter is the size of the neighborhood used to calculate the threshold for each pixel. If the neighborhood is too small (mine was 3), it works like edge detection. Once I made it bigger, it worked as expected. Of course, the "correct" size depends on the resolution of your image and the size of the features you're looking at.