detect filled rectangles in image - c++

How to detect filled rectangles in an image?
I need to get the bounding box for the 4 white (filled with white) rectangles on the right side of the image, but not the big rectangle in the middle with a white outline.

You can isolate each contour by drawing it on a mask. Then you can use that mask on the image to calculate the average pixel value. A high average indicates that the contour contains mostly white, so it is likely a contour you want.
Result:
Code:
import numpy as np
import cv2

# load the image
img = cv2.imread("form.png")
# create grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# find contours (external only)
im, contours, hierarchy = cv2.findContours(gray, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# check each contour and draw the selected ones on the original image
for cnt in contours:
    # disregard small contours caused by the logo and noise
    if cv2.contourArea(cnt) > 10000:
        # isolate the contour and calculate the average pixel value
        mask = np.zeros(gray.shape[:2], np.uint8)
        cv2.drawContours(mask, [cnt], 0, 255, -1)
        mean_val = cv2.mean(gray, mask=mask)
        # a high value indicates the contour contains mostly white,
        # so draw it (I used the boundingRect)
        if mean_val[0] > 200:
            x, y, w, h = cv2.boundingRect(cnt)
            cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), thickness=4)
# show/save the result image
cv2.imshow("Image", img)
cv2.imwrite("result.jpg", img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Note: you can also load the image directly as grayscale and skip the conversion, but I kept the color image here so I could draw more obvious red boxes.
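For illustration, a minimal sketch of that grayscale-loading alternative (assuming the same form.png; converting back to BGR is only needed if you still want the boxes drawn in color):

import cv2

# load directly as a single-channel grayscale image
gray = cv2.imread("form.png", cv2.IMREAD_GRAYSCALE)
# convert back to BGR only if you still want to draw colored rectangles on it
img = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)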
Also be aware that the code given might not generalize well, but it shows the concept.

Related

OpenCV Python : how to use a contour as a mask in calcHist?

I'm trying to obtain the RGB histogram of a zone in a picture. I've already isolated my zone by thresholding the picture (the background is bright, and my isolated zone is dark). I know how to make the color histogram of my entire picture, but not the RGB histogram of just my zone, using the contour of my zone as a mask in OpenCV's calcHist function.
What I actually do is:
# I threshold my picture to obtain my objects of interest
threshold = threshold3(img, param['thresh_red_low'], param['thresh_red_high'], param['thresh_green_low'], param['thresh_green_high'], param['thresh_blue_low'])
# I find the contours of my objects
contours = cv2.findContours(threshold, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[0]
# for each of my objects
for indexx, contour in enumerate(contours):
    # if I directly pass the contour as a mask to calcHist, I get an error,
    # so I convert the contour into a mask
    mask = cv2.drawContours(image_color, contour, -1, (255, 255, 255), 2)
    # I calculate histograms for the B, G and R channels, over ten bins, from 5 to 256
    b_hist = cv2.calcHist([image_color], [0], mask, [10], [5, 256])
    g_hist = cv2.calcHist([image_color], [1], mask, [10], [5, 256])
    r_hist = cv2.calcHist([image_color], [2], mask, [10], [5, 256])
    # then I save the results into a csv
But I get far too many values in each histogram range. For example, my first zone has an area of 6371 px, and its histogram values are:
Number of red pixels per range: 388997, 500656, 148124, 97374, 198893, 793015, 894672, 1232693, 674721, 105807
Number of green pixels per range: 123052, 478714, 349357, 153624, 117838, 105738, 84656, 1205018, 1356478, 1064373
Number of blue pixels per range: 1590057, 702532, 547988, 430238, 320658, 103876, 15366, 7629, 1527, 2645
This looks more like the histogram of the entire picture than of my zone. What am I misunderstanding about the mask and contour in the calcHist function?
Sorry for such a late response, but this might help somebody else, I hope.
By and large your code is correct; you only need to add a line or two and modify one line a bit.
# I threshold my picture to obtain my objects of interest
threshold = threshold3(img, param['thresh_red_low'], param['thresh_red_high'], param['thresh_green_low'], param['thresh_green_high'], param['thresh_blue_low'])
# I find the contours of my objects
contours = cv2.findContours(threshold, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[0]
# for each of my objects
for indexx, contour in enumerate(contours):
    # build a blank single-channel mask the same size as the image
    # and draw only the current contour on it, filled
    h, w = img.shape[:2]
    mask = np.zeros((h, w), dtype="uint8")
    cv2.drawContours(mask, contours, indexx, 255, cv2.FILLED)
    # calculate histograms for the B, G and R channels, over ten bins, from 5 to 256,
    # restricted to the masked zone
    b_hist = cv2.calcHist([image_color], [0], mask, [10], [5, 256])
    g_hist = cv2.calcHist([image_color], [1], mask, [10], [5, 256])
    r_hist = cv2.calcHist([image_color], [2], mask, [10], [5, 256])
    # then save the results into a csv
This solution assumes that you are interested in everything that's been found inside the contour.
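As a quick sanity check (a sketch, assuming the mask and histograms from the snippet above): calcHist only counts pixels where the mask is non-zero, so the masked pixel count should match the zone area, and each channel's bin counts should sum to at most that number (pixels with values below 5 fall outside the [5, 256] range).

# pixels inside the contour mask; should match the zone area (e.g. ~6371 px)
zone_area = cv2.countNonZero(mask)
# each channel's histogram bins should sum to at most zone_area
print(zone_area, b_hist.sum(), g_hist.sum(), r_hist.sum())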

segmentation of overlapping cells

The following Python script should split overlapping cells apart, which works quite well. The problem is that it also splits apart some cells which don't overlap with other cells. To make things clear, I'll add my input image and the output image.
The input: (input image)
The output: (output image)
Output image with two "bad" segmented cells marked: (output image with marked errors)
Thresholded image: (thresholded image)
Does someone have an idea how to avoid this problem, or is the whole approach not good enough to process these kinds of images?
I am using the following piece of code to segment the cells:
from skimage.feature import peak_local_max
from skimage.morphology import watershed
from scipy import ndimage
import numpy as np
import cv2

# load the image and perform pyramid mean shift filtering
# to aid the thresholding step
image = cv2.imread('C:/Users/Root/Desktop/image13.jpg')
shifted = cv2.pyrMeanShiftFiltering(image, 41, 51)

# convert the mean shift image to grayscale, then apply
# Otsu's thresholding
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255,
                       cv2.THRESH_BINARY | cv2.THRESH_OTSU)[1]
im = gray.copy()

# compute the exact Euclidean distance to the nearest background pixel,
# then find local peaks in the distance map to use as markers
D = ndimage.distance_transform_edt(thresh)
localMax = peak_local_max(D, indices=False, min_distance=3,
                          labels=thresh)

# perform a connected component analysis on the local peaks,
# using 8-connectivity, then apply the Watershed algorithm
markers = ndimage.label(localMax, structure=np.ones((3, 3)))[0]
labels = watershed(-D, markers, mask=thresh)
print("[INFO] {} unique segments found".format(len(np.unique(labels)) - 1))

conts = []
for label in np.unique(labels):
    # if the label is zero, we are examining the 'background'
    # so simply ignore it
    if label == 0:
        continue
    # otherwise, allocate memory for the label region and draw
    # it on the mask
    mask = np.zeros(gray.shape, dtype="uint8")
    mask[labels == label] = 255
    # detect contours in the mask and grab the largest one
    cnts = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL,
                            cv2.CHAIN_APPROX_SIMPLE)[-2]
    c = max(cnts, key=cv2.contourArea)
    # fit a rotated rectangle around the contour
    rect = cv2.minAreaRect(c)
    box = cv2.boxPoints(rect)
    box = np.int0(box)
    if cv2.contourArea(c) > 150:
        # cv2.drawContours(image, c, -1, (0, 255, 0))
        cv2.drawContours(image, [box], -1, (0, 255, 0))

cv2.imshow("output", image)
cv2.waitKey()

When I threshold an image I get a completely black image

I am attempting to threshold a wave so that the white background appears black and the wave itself, which was originally black, becomes white; however, it only seems to return an entirely black image. What am I doing wrong?
import cv2

src = cv2.imread("C:\\Users\\ksatt\\Desktop\\SoundByte\\blackwaveblackaxis (1).PNG", 0)
maxValue = 255
thresh = 53
if src is not None:
    th, dst = cv2.threshold(src, thresh, maxValue, cv2.THRESH_BINARY_INV)
    cv2.imshow("blackwave.PNG", dst)
    cv2.imwrite("blackwave.PNG", dst)
    cv2.waitKey(0)
else:
    print('Image could not be read')
Your threshold is too low, and the dark paper is going to pick up values that you don't want anyway. Basically, the contrast of the image is too low.
One easy solution is to subtract out the background. The simple way to do this is to dilate() your grayscale image, which will expand the white area and overtake the black lines. Then you can apply a small GaussianBlur() to that dilated image, and this will give you a "background" image that you can subtract from your original image to get a clear view of the lines. From there you'll have a much better image to threshold(), and you can even use OTSU thresholding to automatically set the threshold level for you.
import cv2
import numpy as np
# read image
src = cv2.imread('wave.png',0)
# create background image
bg = cv2.dilate(src, np.ones((5,5), dtype=np.uint8))
bg = cv2.GaussianBlur(bg, (5,5), 1)
# subtract out background from source
src_no_bg = 255 - cv2.absdiff(src, bg)
# threshold
maxValue = 255
thresh = 240
retval, dst = cv2.threshold(src_no_bg, thresh, maxValue, cv2.THRESH_BINARY_INV)
# automatic / OTSU threshold
retval, dst = cv2.threshold(src_no_bg, 0, maxValue, cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
You can see that manual thresholding gives the same results as OTSU, but with OTSU you don't have to play around with the values; it finds them for you. This isn't always the best way to go, but it can be quick. Check out this tutorial for more on different thresholding operations.
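For instance, one of the operations covered there is adaptive thresholding, which picks a local threshold per neighbourhood instead of one global value; a sketch, assuming the src_no_bg image from the snippet above:

# local threshold: each pixel is compared against a Gaussian-weighted mean
# of its 11x11 neighbourhood, minus a small constant
adaptive = cv2.adaptiveThreshold(src_no_bg, maxValue,
                                 cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY_INV, 11, 2)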
If you take a look at http://docs.opencv.org/2.4/modules/imgproc/doc/miscellaneous_transformations.html#threshold it will tell you what each parameter of the function does.
Also, here is a good tutorial:
http://docs.opencv.org/trunk/d7/d4d/tutorial_py_thresholding.html
Python: cv.Threshold(src, dst, threshold, maxValue, thresholdType) → None
is the prototype, which gets further explanation in the linked API.
So simply change your code to:
retval, RESULT = cv2.threshold(src, thresh, maxValue, cv2.THRESH_BINARY_INV)
cv2.imshow("blackwave.PNG", RESULT)
Could you post a picture of the wave? Have you tried using standard python? Something like this should work:
import numpy as np
import matplotlib.pyplot as plt

maxValue = 255
thresh = 53
# load the image; matplotlib reads PNGs as floats in [0, 1], so rescale to [0, 255]
A = plt.imread('file.png') * 255
# for each pixel, see if it's above/below the threshold
for i in range(A.shape[0]):       # loop along the rows
    for j in range(A.shape[1]):   # loop along the columns
        if A[i, j] > thresh:
            # set the bright background to black
            A[i, j] = 0
        else:
            # set the dark wave to white
            A[i, j] = maxValue
Or something similar.
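For reference, the same per-pixel rule can be written as a single vectorized NumPy expression (a sketch, assuming the array A, thresh and maxValue from above):

# background pixels (above the threshold) become 0, everything else becomes 255
A = np.where(A > thresh, 0, maxValue)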

Eye Pupil Tracking using Hough Circle Transform

I have an eye-controlled wheelchair project where I need to detect the pupil of the eye; the wheelchair moves according to the pupil's motion. As a test for the code I am writing, I ran the script on a static image. The image is taken from exactly where the camera will be placed. The camera will be an IR one.
Note: I am using compiled OpenCV 3.1.0-dev and Python 2.7 on Windows.
The detected circle I wanted, using the Hough circle transform: (see image)
After that, I worked on code to detect the same thing using an IR camera instead.
The results from the static-image code are very reliable to me, but the problem is the code with the IR camera.
The code I have written so far is:
import cv2
import numpy as np

cap = cv2.VideoCapture(0)

while True:
    ## Read Image
    ret, image = cap.read()
    ## Convert to 1 channel only grayscale image
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    ## CLAHE Equalization
    cl1 = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    clahe = cl1.apply(gray)
    ## medianBlur the image to remove noise
    blur = cv2.medianBlur(clahe, 7)
    ## Detect Circles
    circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, 1, 20,
                               param1=50, param2=30, minRadius=7, maxRadius=21)
    if circles != None:
        circles = np.round(circles[0, :]).astype("int")
        for circle in circles[0, :]:
            # draw the outer circle
            cv2.circle(image, (circle[0], circle[1]), circle[2], (0, 255, 0), 2)
            # draw the center of the circle
            cv2.circle(image, (circle[0], circle[1]), 2, (0, 0, 255), 3)
    if cv2.waitKey(1) in [27, ord('q'), 32]:
        break

cap.release()
cv2.destroyAllWindows()
I always get this error:
if circles != None:
FutureWarning: comparison to `None` will result in an elementwise object comparison in the future.
Traceback (most recent call last):
cv2.circle(image,(circle[0],circle[1]),circle[2],(0,255,0),2)
IndexError: invalid index to scalar variable.
For reference, the code for the static image is:
import cv2
import numpy as np
## Read Image
image = cv2.imread('eye.tif')
imageBackup = image.copy()
## Convert to 1 channel only grayscale image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
## CLAHE Equalization
cl1 = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8,8))
clahe = cl1.apply(gray)
## medianBlur the image to remove noise
blur = cv2.medianBlur(clahe, 7)
## Detect Circles
circles = cv2.HoughCircles(blur ,cv2.HOUGH_GRADIENT,1,20,
param1=50,param2=30,minRadius=7,maxRadius=21)
for circle in circles[0, :]:
    # draw the outer circle
    cv2.circle(image, (circle[0], circle[1]), circle[2], (0, 255, 0), 2)
    # draw the center of the circle
    cv2.circle(image, (circle[0], circle[1]), 2, (0, 0, 255), 3)
cv2.imshow('Final', image)
cv2.imshow('imageBackup', imageBackup)
cv2.waitKey(0)
cv2.destroyAllWindows()
So I tried it out myself and had the same error. I modified the code as I already proposed. Here is the snippet:
# 'is not None' avoids the FutureWarning about elementwise comparison to None
if circles is not None:
    for circle in circles[0, :]:
        # draw the outer circle
        cv2.circle(image, (circle[0], circle[1]), circle[2], (0, 255, 0), 2)
        # draw the center of the circle
        cv2.circle(image, (circle[0], circle[1]), 2, (0, 0, 255), 3)
In addition you can try to use cv2.Canny for better results. Over and out :)
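A minimal sketch of that Canny suggestion, assuming the blur image from the loop above; since HoughCircles with HOUGH_GRADIENT runs Canny internally with param1 as the upper threshold, looking at the edge map is a quick way to check whether the pupil edge is strong enough before tuning the parameters:

# visualize the edges HoughCircles works from; param1=50 above corresponds
# roughly to Canny thresholds of (25, 50)
edges = cv2.Canny(blur, 25, 50)
cv2.imshow("Edges", edges)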

How to remove black part from the image?

I have stitched two images together using OpenCV functions and C++. Now I am facing the problem that the final image contains a large black part.
The final image should be a rectangle containing the effective part.
My image is the following:
How can I remove the black section?
mevatron's answer is one way, where the amount of black region is minimised while retaining the full image.
Another option is removing the black region completely, where you also lose some part of the image, but the result will be a neat-looking rectangular image. Below is the Python code.
Here, you find the three main corners of the image, as below:
I have marked those values: (1,x2), (x1,1), (x3,y3). It is based on the assumption that your image starts from (1,1).
Code:
The first steps are the same as mevatron's: blur the image to remove noise, threshold it, then find contours.
import cv2
import numpy as np
img = cv2.imread('office.jpg')
img = cv2.resize(img,(800,400))
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray,3)
ret,thresh = cv2.threshold(gray,1,255,0)
contours,hierarchy = cv2.findContours(thresh,cv2.RETR_LIST,cv2.CHAIN_APPROX_SIMPLE)
Now find the biggest contour, which is your image. This is to avoid noise, in case there is any (most probably there won't be). Or you can use mevatron's method.
max_area = -1
best_cnt = None
for cnt in contours:
    area = cv2.contourArea(cnt)
    if area > max_area:
        max_area = area
        best_cnt = cnt
Now approximate the contour to remove unnecessary points from the contour found, while preserving all corner values.
approx = cv2.approxPolyDP(best_cnt,0.01*cv2.arcLength(best_cnt,True),True)
Now we find the corners.
First, we find (x3,y3). It is the farthest point, so x3*y3 will be very large. So we take the product of the coordinates of every point and select the point with the maximum product.
far = approx[np.product(approx,2).argmax()][0]
Next, (1,x2). It is the point whose first element is 1 and whose second element is maximum.
ymax = approx[approx[:,:,0]==1].max()
Next, (x1,1). It is the point whose second element is 1 and whose first element is maximum.
xmax = approx[approx[:,:,1]==1].max()
Now we find the minimum values of (far.x, xmax) and (far.y, ymax):
x = min(far[0],xmax)
y = min(far[1],ymax)
If you draw a rectangle with (1,1) and (x,y), you get the result below.
So you crop the image to the correct rectangular area:
img2 = img[:y,:x].copy()
Below is the result:
See, the problem is that you lose some parts of the stitched image.
You can do this with threshold, findContours, and boundingRect.
So, here is a quick script doing this with the Python interface.
stitched = cv2.imread('stitched.jpg', 0)
(_, mask) = cv2.threshold(stitched, 1.0, 255.0, cv2.THRESH_BINARY);
# findContours destroys input
temp = mask.copy()
(contours, _) = cv2.findContours(temp, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# sort contours by largest first (if there are more than one)
contours = sorted(contours, key=lambda contour:len(contour), reverse=True)
roi = cv2.boundingRect(contours[0])
# use the roi to crop the original 'stitched' image; boundingRect returns (x, y, w, h)
cropped = stitched[roi[1]:roi[1] + roi[3], roi[0]:roi[0] + roi[2]]
Ends up looking like this:
NOTE: Sorting may not be necessary with raw imagery, but with the compressed image some compression artifacts showed up when using a low threshold, so that is why I post-processed with sorting.
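For reference, a variant of that sorting step (a sketch; the original sorts by the number of contour points with len(contour), sorting by enclosed area is an alternative with the same intent):

# keep the largest region by actual area rather than by number of contour points
contours = sorted(contours, key=cv2.contourArea, reverse=True)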
Hope that helps!
You can use active contours (balloons/snakes) for selecting the black region accurately. A demonstration can be found here. Active contours are available in OpenCV, check cvSnakeImage.
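Since cvSnakeImage belongs to the legacy C API, here is an illustrative sketch of the same idea using scikit-image's active_contour instead (not the OpenCV function itself; the file name and the initial circle are placeholders you would adapt to your stitched image):

import numpy as np
from skimage import io, color
from skimage.segmentation import active_contour

# load the stitched image and convert it to grayscale
img = color.rgb2gray(io.imread('stitched.jpg'))

# initial snake: a circle roughly enclosing the region of interest (placeholder values)
s = np.linspace(0, 2 * np.pi, 400)
init = np.column_stack([200 + 150 * np.sin(s), 300 + 250 * np.cos(s)])  # (row, col) points

# iteratively deform the contour so it locks onto strong image edges
snake = active_contour(img, init, alpha=0.015, beta=10, gamma=0.001)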