connected component labeling in python - python-2.7

How do I implement connected component labeling in Python with OpenCV?
This is an image example:
I need connected component labeling to separate objects on a black and white image.

The OpenCV 3.0 docs for connectedComponents() don't mention Python, but it is in fact implemented; see, e.g., this SO question. On OpenCV 3.4.0 and above, the docs do include the Python signatures, as can be seen in the current master docs.
The function call is simple: num_labels, labels_im = cv2.connectedComponents(img), and you can specify a connectivity parameter to check for 4- or 8-way (default) connectivity. The difference is that 4-way connectivity checks only the top, bottom, left, and right pixels to see if they connect, while 8-way checks all eight neighboring pixels. If you have diagonal connections (as you do here), you should specify connectivity=8.
Note that the function just numbers the components, giving them increasing integer labels starting at 0: all the zeros belong to one component, all the ones to another, and so on. If you want to visualize them, you can map those numbers to specific colors. I like to map them to different hues, combine them into an HSV image, and then convert to BGR for display. Here's an example with your image:
import cv2
import numpy as np

img = cv2.imread('eGaIy.jpg', 0)
img = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)[1]  # ensure binary
num_labels, labels_im = cv2.connectedComponents(img)

def imshow_components(labels):
    # map component labels to hue values
    label_hue = np.uint8(179 * labels / np.max(labels))
    blank_ch = 255 * np.ones_like(label_hue)
    labeled_img = cv2.merge([label_hue, blank_ch, blank_ch])
    # convert to BGR for display
    labeled_img = cv2.cvtColor(labeled_img, cv2.COLOR_HSV2BGR)
    # set background label to black
    labeled_img[label_hue == 0] = 0
    cv2.imshow('labeled.png', labeled_img)
    cv2.waitKey()

imshow_components(labels_im)

My adaptation of CCL in 2D is:
1) Convert the image into a 1/0 image, with 1 for object pixels and 0 for background pixels.
2) Make a two-pass CCL algorithm by implementing the Union-Find algorithm with path compression. You can see more here. In the first pass of this CCL implementation, whenever your target pixel is an object pixel you check its neighbor pixels and compare their labels, so that you can record equivalences between them. You assign the least label among those neighbor pixels that are object pixels (label > 0) to your target pixel. In this way, you not only assign an object label (label > 0) to your target pixel, but also build a list of equivalences.
3) In the second pass, you go through all the pixels and replace each one's provisional label with the label of its root, by looking it up in the equivalence table stored in your Union-Find structure.
4) I implemented an additional pass to make the labels follow a sequential order (1, 2, 3, 4, ...) instead of an arbitrary order (23, 45, 1, ...). That just renames the labels, for aesthetic purposes. (A minimal sketch of the two passes follows.)
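Here is that sketch: a hedged, pure-Python illustration of the two-pass/Union-Find mechanics (4-connectivity; all names are illustrative, not the poster's actual code). For real images you would normally just call cv2.connectedComponents() instead.

import numpy as np

def two_pass_ccl(binary):
    # two-pass connected component labeling (4-connectivity) with union-find
    labels = np.zeros(binary.shape, dtype=np.int32)
    parent = [0]            # parent[i] is the union-find parent of label i
    next_label = 1

    def find(i):            # find the root of label i, with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # first pass: assign provisional labels and record equivalences
    h, w = binary.shape
    for y in range(h):
        for x in range(w):
            if not binary[y, x]:
                continue
            neighbors = []
            if y > 0 and labels[y - 1, x] > 0:
                neighbors.append(labels[y - 1, x])
            if x > 0 and labels[y, x - 1] > 0:
                neighbors.append(labels[y, x - 1])
            if not neighbors:
                parent.append(next_label)    # new label, its own parent
                labels[y, x] = next_label
                next_label += 1
            else:
                m = min(neighbors)
                labels[y, x] = m
                for n in neighbors:          # union: neighbors are equivalent
                    parent[find(n)] = find(m)

    # second pass: replace each label by its root, renumbering sequentially
    roots = {}
    for y in range(h):
        for x in range(w):
            if labels[y, x] > 0:
                r = find(labels[y, x])
                labels[y, x] = roots.setdefault(r, len(roots) + 1)
    return labels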

Related

Python 2.7: Area opening and closing binary image in Python not so accurate

I am using Python 2.7, and I used the following Python and MATLAB functions for removing noise and filling holes in this image.
1. Code to remove noise and fill holes using Python and OpenCV:
img = cv2.imread("binar.png",0)
kernel = np.ones((5,5),np.uint8)
open = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)
close = cv2.morphologyEx(open, cv2.MORPH_CLOSE, kernel)
2. Code used in Python and SciPy using ndimage.binary_closing:
im = cv2.imread("binar.png", cv2.IMREAD_GRAYSCALE)
open_img = ndimage.binary_opening(im)
close_img = ndimage.binary_closing(open_img)
clg = close_img.astype(np.int)
3. Code used in MATLAB: I used imfill and bwareaopen.
The results I got is shown below:
The first image is from using ndimage.binary_closing. My problem is that it doesn't fill all the white blobs fully; we can see minor black portions still present in between.
The second image is from using cv2.morphologyEx. It has the same problem, with minor unfilled portions in between the white blobs. Here I faced one more problem: it converts some white pixels into black which should not be converted. I marked those areas with red in image 2. The red-highlighted portions are connected to larger blobs, but even then they get converted into black pixels.
The third image is from MATLAB processing, in which imfill works perfectly without converting essential white pixels into black.
So, my question is: is there any method for Python 2.7 with which I can remove noise below a certain area and fill the white blobs accurately, as in MATLAB? One more thing: I want to find the centroids and areas of those final processed blobs for further use. I can find these using cv2.connectedComponentsWithStats, but I want to find the areas and centroids after removing the noise and filling the blobs.
Thanks.
(I don't think this is a duplicate, because I want to do it in Python, not MATLAB.)
From Matlab's imfill() documentation:
BW2= imfill(BW,locations) performs a flood-fill operation on background pixels of the input binary image BW, starting from the points specified in locations. (...)
BW2= imfill(BW,'holes') fills holes in the input binary image BW. In this syntax, a hole is a set of background pixels that cannot be reached by filling in the background from the edge of the image.
I2= imfill(I) fills holes in the grayscale image I. In this syntax, a hole is defined as an area of dark pixels surrounded by lighter pixels.
The duplicate that I flagged shows ways to accomplish the third variant. However, for many images the second variant will still work fine, and it is extremely easy to accomplish. The first variant mentions a flood-fill operation, which can be implemented in OpenCV with cv2.floodFill(). The second variant suggests a really easy method: just flood fill from the edges, and the pixels left over are the black holes which can't be reached from outside. If you then invert this image, you'll get white pixels at the holes, which you can add to your mask to fill them in.
import cv2
import numpy as np

# read image, ensure binary
img = cv2.imread('image.png', 0)
img[img != 0] = 255

# flood fill the background to find the inner holes
holes = img.copy()
cv2.floodFill(holes, None, (0, 0), 255)

# invert the holes mask, then bitwise OR with img to fill in the holes
holes = cv2.bitwise_not(holes)
filled_holes = cv2.bitwise_or(img, holes)

cv2.imshow('', filled_holes)
cv2.waitKey()
Note that in this case, I just set the starting pixel for the background at (0, 0). However, it's possible that there could be, e.g., a white line going down the center, which would stop the flood fill from reaching (i.e., from finding the background in) the other half of the image. A more robust method is to go through all of the edge pixels of the image and flood fill every time you come across a black one. You can accomplish this more easily with the mask parameter in cv2.floodFill(), which lets you keep updating the same mask each time.
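Here is a minimal sketch of that edge-seeded variant, assuming the same binary img as above (note that the floodFill mask must be two pixels larger than the image):

import cv2
import numpy as np

img = cv2.imread('image.png', 0)
img[img != 0] = 255

h, w = img.shape
mask = np.zeros((h + 2, w + 2), np.uint8)  # shared mask, 2px larger than the image
filled = img.copy()

# seed a flood fill from every black pixel on the border
for x in range(w):
    for y in (0, h - 1):
        if filled[y, x] == 0:
            cv2.floodFill(filled, mask, (x, y), 255)
for y in range(h):
    for x in (0, w - 1):
        if filled[y, x] == 0:
            cv2.floodFill(filled, mask, (x, y), 255)

# anything still black is an interior hole
holes = cv2.bitwise_not(filled)
filled_holes = cv2.bitwise_or(img, holes)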
To find the centroids of each blob, you could use contour detection and cv2.moments() to find the centroid of each contour, or you could use cv2.connectedComponentsWithStats() like you mentioned.
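For example, a short sketch of the stats route, run on the filled_holes image from above (the variable names are illustrative):

num, labels, stats, centroids = cv2.connectedComponentsWithStats(filled_holes)
for i in range(1, num):  # label 0 is the background
    area = stats[i, cv2.CC_STAT_AREA]
    cx, cy = centroids[i]
    print(i, area, (cx, cy))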

OpenCV - Removal of noise in image

I have an image here with a table. In the column on the right, the background is filled with noise.
How do I detect the areas with noise? I only want to apply some kind of filter to the parts with noise, because I need to do OCR on the image and any kind of global filter will reduce the overall recognition accuracy.
And what kind of filter is best to remove the background noise in the image? As said, I need to do OCR on the image.
I tried some filters/operations in OpenCV and it seems to work pretty well.
Step 1: Dilate the image -
import cv2
import numpy as np

kernel = np.ones((5, 5), np.uint8)
dilated = cv2.dilate(img, kernel, iterations=1)  # img is the grayscale input
As you see, the noise is gone but the characters are very light, so I eroded the image.
Step 2: Erode the image -
kernel = np.ones((5, 5), np.uint8)
eroded = cv2.erode(dilated, kernel, iterations=1)
As you can see, the noise is gone, but some characters in the other columns are broken. I would recommend running these operations on the noisy column only. You might want to use HoughLines to find the last column, extract that column only, run dilation + erosion on it, and replace the corresponding column in the original image with the result.
Additionally, dilation followed by erosion is actually an operation called closing. You can call it directly using:
closed = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)
As @Ermlg suggested, medianBlur with a kernel of 3 also works wonderfully.
cv2.medianBlur(img, 3)
Alternative Step
As you can see all these filters work but it is better if you implement these filters only in the part where the noise is. To do that, use the following:
edges = cv2.Canny(img, 50, 150, apertureSize=3)  # img is gray here
# minLineLength and maxLineGap are the minimum line length and the
# maximum allowed gap between two segments, respectively
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 100, minLineLength=1000, maxLineGap=50)
for line in lines:
    for x1, y1, x2, y2 in line:
        print x1, y1
This gives the start coordinates of all the lines. You should take the x value which is between 0.75 * w and w, where w is the width of the entire image. For this image, that gives essentially (x1, y1) = (1896, 766).
Then, you can extract this part only, like:
extract = img[y1:h, x1:w]  # w, h are the width and height of the image
Then, apply the filter (median or closing) to this extract. After removing the noise, you need to put this filtered patch back in place of the noisy part of the original image:
img[y1:h, x1:w] = median
This is straightforward in C++:
extract.copyTo(img(cv::Rect(x1, y1, w - x1, h - y1)));
Final Result with alternate method
Hope it helps!
My solution is based on thresholding to get the resulting image, in 4 steps:
1. Read the image with OpenCV 3.2.0.
2. Apply GaussianBlur() to smooth the image, especially the region in gray color.
3. Mask the image to change the text to white and the rest to black.
4. Invert the masked image to get black text on white.
The code is in Python 2.7. It can be changed to C++ easily.
import numpy as np
import cv2
import matplotlib.pyplot as plt
%matplotlib inline

# read the Danish doc image
img = cv2.imread('./imagesStackoverflow/danish_invoice.png')
# apply GaussianBlur to smooth the image
blur = cv2.GaussianBlur(img, (5, 3), 1)
# threshold the gray region to white (255,255,255) and set the rest to black (0,0,0)
mask = cv2.inRange(blur, (0, 0, 0), (150, 150, 150))
# invert the image to get black text on white
res = 255 - mask

plt.figure(1)
plt.subplot(121), plt.imshow(img[:, :, ::-1]), plt.title('original')
plt.subplot(122), plt.imshow(blur[:, :, ::-1]), plt.title('blurred')
plt.figure(2)
plt.subplot(121), plt.imshow(mask, cmap='gray'), plt.title('masked')
plt.subplot(122), plt.imshow(res, cmap='gray'), plt.title('result')
plt.show()
The following are the images plotted by the code, for reference.
Here is the result image at 2197 x 3218 pixels.
As far as I know, the median filter is the best solution to reduce noise. I would recommend using a median filter with a 3x3 window; see the function cv::medianBlur().
But be careful when using any noise filtering simultaneously with OCR: it can decrease recognition accuracy.
Also, I would recommend trying the pair of functions cv::erode() and cv::dilate(), but I'm not sure it will work better than cv::medianBlur() with a 3x3 window.
I would go with a median blur (probably a 5x5 kernel).
If you are planning to apply OCR to the image, I would advise you to do the following:
1. Filter the image using a median filter.
2. Find contours in the filtered image; you will get only text contours (call them F).
3. Find contours in the original image (call them O).
4. Isolate all contours in O that have an intersection with any contour in F (a rough sketch follows).
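A rough Python sketch of those four steps (the file name is assumed, intersection is approximated by bounding-rectangle overlap, and the two-value findContours return shown here matches OpenCV 2.x/4.x):

import cv2

img = cv2.imread('table.png', 0)  # assumed input
filtered = cv2.medianBlur(img, 5)

def binarize(gray):
    # text as white-on-black, via Otsu
    return cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]

F, _ = cv2.findContours(binarize(filtered), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
O, _ = cv2.findContours(binarize(img), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

def rects_overlap(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

f_rects = [cv2.boundingRect(c) for c in F]
kept = [c for c in O if any(rects_overlap(cv2.boundingRect(c), r) for r in f_rects)]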
Faster solution:
1. Find contours in the original image.
2. Filter them based on size.
3. Blur (3x3 box).
4. Threshold at 127.
Result:
If you are very worried about removing pixels that could hurt your OCR detection, and want to stay as close to the original as possible without adding artifacts, then you should create a blob filter and delete any blobs that are smaller than n pixels or so.
I'm not going to write code, but I know this works great, as I use it myself, though I don't use OpenCV (I wrote my own multithreaded blob filter for speed reasons). And sorry, but I cannot share my code here; I'm just describing how to do it.
If processing time is not an issue, a very effective method in this case would be to compute all the black connected components and remove those smaller than a few pixels. This would remove all the noisy dots (apart from those touching a valid component) but preserve all the characters and the document structure (lines and so on).
The function to use would be connectedComponentsWithStats (you probably need to produce the negative image first; the threshold function with THRESH_BINARY_INV would work in this case), drawing white rectangles where small connected components were found.
In fact, this method could also be used to find characters, defined as connected components of a given minimum and maximum size, with an aspect ratio in a given range.
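A minimal hedged sketch of the removal step (the input file name and min_area value are illustrative; this masks out small components wholesale instead of drawing rectangles):

import cv2
import numpy as np

img = cv2.imread('table.png', 0)  # assumed dark-text-on-light input
# negative image: text and noise become white components
bw = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]

n, labels, stats, _ = cv2.connectedComponentsWithStats(bw)
min_area = 10  # tune to the size of the noise dots
clean = np.zeros_like(bw)
for i in range(1, n):  # label 0 is the background
    if stats[i, cv2.CC_STAT_AREA] >= min_area:
        clean[labels == i] = 255

result = cv2.bitwise_not(clean)  # back to dark-on-light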
I had already faced the same issue, and this gave me the best solution:
convert the source image to a grayscale image, apply the fastNlMeansDenoising function, and then apply a threshold. Like this:
fastNlMeansDenoising(gray, dst, 3.0, 7, 21);
threshold(dst, finaldst, 150, 255, THRESH_BINARY);
Also, you can adjust the threshold according to your background noise, e.g.:
threshold(dst, finaldst, 200, 255, THRESH_BINARY);
NOTE: If your column lines got removed, you can take a mask of the column lines from the source image and apply it to the denoised result using bitwise operations like AND, OR, and XOR.
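A rough Python sketch of that note, under stated assumptions (the file name, threshold, and kernel height are illustrative): extract long vertical strokes from the source image with a tall, thin kernel, then draw them back into the denoised result.

import cv2

img = cv2.imread('table.png', 0)  # assumed input
denoised = cv2.fastNlMeansDenoising(img, None, 3.0, 7, 21)
binary = cv2.threshold(denoised, 150, 255, cv2.THRESH_BINARY)[1]

# build a mask of the vertical column lines from the source image:
# invert so lines are white, then keep only long vertical runs
src_bin = cv2.threshold(img, 150, 255, cv2.THRESH_BINARY)[1]
vert_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1, 25))
lines_mask = cv2.morphologyEx(255 - src_bin, cv2.MORPH_OPEN, vert_kernel)

# draw the lines back (as black) into the cleaned binary image
restored = cv2.bitwise_and(binary, cv2.bitwise_not(lines_mask))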
Try thresholding the image like this. Make sure your src is grayscale. This method retains only the pixels whose intensity is between the threshold and 255 (note that with CV_THRESH_OTSU set, the threshold value is computed automatically by Otsu's method, so the 150 is ignored):
threshold(src, output, 150, 255, CV_THRESH_BINARY | CV_THRESH_OTSU);
You might want to invert the image first, since you are trying to negate the gray pixels; after the operation, invert it again to get your desired result.

Pixel-indexing in OpenCV's distance transform

I want to use the function distanceTransform() to find the minimum distance of non-zero pixels to zero pixels, and also the position of that closest zero pixel. I call the second version of the function, with the labelType flag set to DIST_LABEL_PIXEL. Everything works fine and I get the distances to, and indices of, the closest zero pixels.
Now I want to convert the indices back to pixel locations. I thought the indexing would be something like idx = (row * cols + col), but I found out that OpenCV just counts the zero pixels and uses this count as the index. So if I get 123 as the index of the closest pixel, it means that the 123rd zero pixel is the closest.
How is OpenCV counting them? Probably in a row-wise manner?
Is there an efficient way of mapping the indices back to the locations? Obviously I could recount them and keep track of the counts and positions, if I know how OpenCV counts them, but this seems stupid and not very efficient.
Is there a good reason to use the indexing they used? I mean, are there any advantages over using an absolute indexing?
Thanks in advance.
EDIT:
If you want to see what I mean, you can run this:
Mat mask = Mat::ones(100, 100, CV_8U);
mask.at<uchar>(50, 50) = 0;
Mat dist, labels;
distanceTransform(mask, dist, labels, CV_DIST_L2, CV_DIST_MASK_PRECISE, DIST_LABEL_PIXEL);
cout << labels.at<int>(0,0) << endl;
You will see that all the labels are 1 because there is only one zero pixel, but how am I supposed to find the location (50,50) with that information?
The zero pixels also get labelled - they will have the same label as the non-zero pixels to which they are closest.
So you will have a 2D array of labels, the same size as your source image. If you examine all of the zero pixels in the source image, you can then find the associated label from the 2D array returned. This can then allow you to find which non-zero pixels are associated with each zero pixel by matching the labels.
If you see what I mean.
In Python you can use numpy to associate the labels with the coordinates:

import cv2
import numpy as np

# create an image with two 0-lines
a = np.ones((100, 100), dtype=np.uint8)
a[50, :] = 0
a[:, 70] = 0

dt, lbl = cv2.distanceTransformWithLabels(a, cv2.DIST_L2, 3, labelType=cv2.DIST_LABEL_PIXEL)

# coordinates of the 0-value pixels, in row-major order (the same
# order in which OpenCV assigns the labels 1..N)
ys, xs = np.where(a == 0)

# print each label id and its coordinate
for i in range(len(ys)):
    print(i + 1, ys[i], xs[i])
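To map the returned label image back to locations, you can then build a lookup from that same ordering (a small follow-up sketch to the code above; the dict name is illustrative):

# label i corresponds to the i-th zero pixel in row-major order
label_to_coord = {i + 1: (ys[i], xs[i]) for i in range(len(ys))}
# e.g., the nearest zero pixel to the top-left corner:
print(label_to_coord[lbl[0, 0]])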

Python Imaging Processing (PIL) - changing the overall RGB of an image

I am trying to change the RGB values for the overall image for a project. Currently I am working with a test file before I apply it to the actual image. I want to test different values of RGB, but would first like to start with the mean of all three. How would I go about doing this? I have other modules installed, such as scipy, numpy, and matplotlib, if those are needed. Thanks.
from PIL import Image, ImageFilter
test = Image.open('/Users/MeganRCunninghan/Pictures/4th-of-July-Wallpaper.ppm')
test.show()
test.getrgb()
Assuming your image is stored as a numpy.ndarray (Test this with print type(test))...
Your image will be represented by an NxMx3 array. Basically this means you have an N by M image with a color depth of 3: your RGB values. Taking the mean of those 3 will leave you with an NxM array, where each entry is now the average intensity. Numpy does this very well:
test = test.mean(2)
The parameter given, 2, specifies the dimension to take the mean along. It could be 0, 1, or 2, because your image matrix is 3-dimensional. This should return an NxM array; you will basically be left with a gray-scale (color depth of 1) image. Check the shape of the value that gets returned! If you get Nx3 or Mx3, you know you have taken the mean along the wrong axis. Note that you can check the dimensions of a numpy array with:
test.shape
Shape will be a tuple describing the dimensions of your image.
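Since the question opens the file with PIL, here is a minimal hedged sketch of getting it into a numpy array first and taking the channel mean (the path is the one from the question):

import numpy as np
from PIL import Image

test = Image.open('/Users/MeganRCunninghan/Pictures/4th-of-July-Wallpaper.ppm')
arr = np.asarray(test)     # shape (N, M, 3)
gray = arr.mean(2)         # shape (N, M): per-pixel average of R, G, B
print(arr.shape, gray.shape)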

openCV filter image - replace kernel with local maximum

Some details about my problem:
I'm trying to implement a corner detector in OpenCV (a different algorithm from the built-in ones: Canny, Harris, etc.).
I've got a matrix filled with response values: the bigger the response value, the higher the probability that a corner has been detected.
My problem is that several corners get detected in the neighborhood of a point where there is really only one, and I need to reduce the number of falsely detected corners.
The exact problem:
I need to walk through the matrix with a kernel, calculate the maximum value in every kernel window, keep that maximum, and set all other values in the window to zero.
Are there built-in OpenCV functions to do this?
This is how I would do it:
Create a kernel; it defines a pixel's neighbourhood.
Create a new image by dilating your image using this kernel. This dilated image contains the maximum neighbourhood value for every point.
Do an equality comparison between these two arrays. Wherever they are equal is a valid neighbourhood maximum, and is set to 255 in the comparison array.
Multiply the comparison array, and the original array together (scaling appropriately).
This is your final array, containing only neighbourhood maxima.
This is illustrated by these zoomed in images:
9 pixel by 9 pixel original image:
After processing with a 5 by 5 pixel kernel, only the local neighbourhood maxima remain (i.e., maxima separated by more than 2 pixels from any pixel with a greater value):
There is one caveat. If two nearby maxima have the same value then they will both be present in the final image.
Here is some Python code that does it (using the legacy cv API from OpenCV 2.x); it should be very easy to convert to C++:
import cv
im = cv.LoadImage('fish2.png',cv.CV_LOAD_IMAGE_GRAYSCALE)
maxed = cv.CreateImage((im.width, im.height), cv.IPL_DEPTH_8U, 1)
comp = cv.CreateImage((im.width, im.height), cv.IPL_DEPTH_8U, 1)
#Create a 5*5 kernel anchored at 2,2
kernel = cv.CreateStructuringElementEx(5, 5, 2, 2, cv.CV_SHAPE_RECT)
cv.Dilate(im, maxed, element=kernel, iterations=1)
cv.Cmp(im, maxed, comp, cv.CV_CMP_EQ)
cv.Mul(im, comp, im, 1/255.0)
cv.ShowImage("local max only", im)
cv.WaitKey(0)
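For reference, a minimal equivalent using the modern cv2 API (OpenCV 3+), under the same assumptions about the input image:

import cv2
import numpy as np

im = cv2.imread('fish2.png', cv2.IMREAD_GRAYSCALE)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
maxed = cv2.dilate(im, kernel)  # 5x5 neighbourhood maximum at each pixel
# keep only pixels equal to their local neighbourhood maximum
peaks = np.where(im == maxed, im, 0).astype(np.uint8)
cv2.imshow('local max only', peaks)
cv2.waitKey(0)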
I didn't realise until now, but this is what @sansuiso suggested in his/her answer.
This is possibly better illustrated with this image, before:
after processing with a 5 by 5 kernel:
The solid regions are due to the shared local-maxima values.
I would suggest a 2-step procedure (there may be more efficient approaches) that uses OpenCV built-in functions:
Step 1: morphological dilation with a square kernel (corresponding to your neighborhood). This step gives you another image, in which each pixel value has been replaced by the maximum value inside the kernel.
Step 2: test whether the cornerness value of each pixel of the original response image is equal to the max value given by the dilation step. If it is not, then obviously there exists a better corner in the neighborhood.
If you are looking for some built-in functionality, FilterEngine will help you make a custom filter (kernel).
http://docs.opencv.org/modules/imgproc/doc/filtering.html#filterengine
Also, I would recommend some kind of noise reduction, usually a blur, before all the processing, unless you really want the raw image.