Edge detection algorithm for detecting horizons - computer-vision

I am trying to use a Canny edge detection algorithm to find the horizon line in images of horizons (image 2), but the way I have it set up now detects too much noise (image 1). Any tips? A different edge detector? More blurring?
import cv2

img = cv2.imread("lake2.jpeg", 3)             # read image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # convert to grayscale
blurred = cv2.GaussianBlur(gray, (3, 3), 0)   # light blur to suppress noise
edge = cv2.Canny(blurred, 10, 200)            # wide threshold range
(Image 1: the detected edges)
(Image 2: the horizon)
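One possible direction (a sketch, not from the original post): since the horizon is a single long, near-horizontal line, blur more aggressively, raise the lower Canny threshold to drop weak edges, then let a probabilistic Hough transform pick out long lines and filter them by angle. All parameter values below are guesses to tune:

import cv2
import numpy as np

img = cv2.imread("lake2.jpeg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# heavier blur than (3, 3) suppresses wave/cloud texture
blurred = cv2.GaussianBlur(gray, (9, 9), 0)
# a higher low threshold drops weak, noisy edges
edges = cv2.Canny(blurred, 50, 150)
# keep only long lines; minLineLength/maxLineGap are guesses to tune
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=100,
                        minLineLength=img.shape[1] // 3, maxLineGap=20)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        # keep near-horizontal lines only
        if abs(y2 - y1) < 0.05 * abs(x2 - x1):
            cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 2)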

Related

Detect filled rectangles in image

How can I detect filled rectangles in an image?
I need to get the bounding boxes for the 4 white (filled) rectangles on the right side of the image, but not the big rectangle in the middle that only has a white outline.
You can isolate each contour by drawing it on a mask, then use that mask to calculate the average gray value inside the contour. A high average indicates that the contour contains mostly white, so it is likely one you want.
Result:
Code:
import numpy as np
import cv2

# load the image
img = cv2.imread("form.png")
# create grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# find contours (external only); OpenCV 3.x returns three values
im, contours, hierarchy = cv2.findContours(gray, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# process contours
for cnt in contours:
    # disregard small contours caused by the logo and noise
    if cv2.contourArea(cnt) > 10000:
        # isolate the contour and calculate the average pixel value
        mask = np.zeros(gray.shape[:2], np.uint8)
        cv2.drawContours(mask, [cnt], 0, 255, -1)
        mean_val = cv2.mean(gray, mask=mask)
        # a high value indicates the contour contains mostly white,
        # so draw its bounding box on the original image
        if mean_val[0] > 200:
            x, y, w, h = cv2.boundingRect(cnt)
            cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), thickness=4)
# show/save the result
cv2.imshow("Image", img)
cv2.imwrite("result.jpg", img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Note: you could also load the image directly as grayscale and skip the conversion, but I kept the color image so I could draw more obvious red boxes.
Also be aware that this code might not generalize well, but it shows the concept.

Extracting a laser line in an image (using OpenCV)

I have a picture of a laser line and I would like to extract that line out of the image.
As the laser line is red, I take the red channel of the image and then search for the highest intensity in every row:
The problem now is that there are also some points which don't belong to the laser line (if you zoom into the second picture, you can see these points).
Does anyone have an idea for the next steps (to remove the single points and also to extract the lines)?
Here is another approach I tried to detect the line:
First I blurred the "black-white" line with a kernel, then I thinned (skeletonized) the blurred line to a thin line, then I applied an OpenCV function to detect the line. The result is in the image below:
NEW:
Now I have another, harder situation.
I have to extract a green laser line.
The problem here is that the colour range of the laser line is wider and changing.
On some parts of the laser line the pixels just have a high green component, while on other parts the pixels have a high blue component as well.
Getting the highest value in every row will always output a point, even in rows where there is no laser at all. Use a threshold as well, so you can discard maxima that aren't high enough.
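A minimal numpy sketch of that idea (the filename and the threshold of 150 are placeholders to tune):

import cv2
import numpy as np

img = cv2.imread('image.png')
red = img[:, :, 2].astype(int)        # red channel (OpenCV loads images as BGR)
cols = red.argmax(axis=1)             # brightest column in every row
rows = np.arange(red.shape[0])
keep = red[rows, cols] > 150          # discard rows without a strong response
points = np.column_stack((cols[keep], rows[keep]))   # (x, y) laser points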
However, that's not a very efficient way to do this. A much better and easier solution is the OpenCV function inRange(): define a lower and an upper bound for the red color in all three channels, and it returns a binary image with white pixels wherever the intensity is within that BGR range.
This is in Python, but it does the job; it should be easy to see how to use the function:
import cv2
import numpy as np

img = cv2.imread('image.png')
# BGR bounds for "red": high red channel, low blue and green
lowerb = np.array([0, 0, 120])
upperb = np.array([100, 100, 255])
red_line = cv2.inRange(img, lowerb, upperb)
cv2.imshow('red', red_line)
cv2.waitKey(0)
This produces the output:
This could be further processed by finding contours or other methods to turn the points into a nice curve.
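For example (just a sketch, not part of the original answer), you could collapse the red_line mask from the snippet above into one point per row and average the x positions to thin it into a single-pixel-wide curve:

ys, xs = np.nonzero(red_line)          # coordinates of all white mask pixels
by_row = {}
for x, y in zip(xs, ys):
    by_row.setdefault(y, []).append(x)
# average the x positions in each row to thin the mask to a curve
centerline = [(int(np.mean(v)), y) for y, v in sorted(by_row.items())]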
I'm really sorry for the short answer without any code, but I suggest you take the contours and process them.
I don't know exactly what you need, so here are two approaches for you:
1. Just collect as many contours as possible that lie on a single line (use their centers and try to find the straight line with the smallest mean error).
2. As in the first approach, but heuristically combine the separated lines... it's much harder, but this may give you almost the full laser line from the image.
--
An example for your picture:
import cv2
import numpy as np

img = cv2.imread('image.png')
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# keep only the red area of hue; OpenCV stores hue as 0-179,
# so red wraps around 0/180 and we shift it into one small range
redHueArea = 15
redRange = (hsv[:, :, 0].astype(int) + redHueArea) % 180
hsv[redRange > (2 * redHueArea)] = [0, 0, 0]
# filter by saturation
hsv[hsv[:, :, 1] < 95] = [0, 0, 0]
# back to BGR, then threshold the remaining red pixels with a low threshold
bgr = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
gray = cv2.threshold(gray, 15, 255, cv2.THRESH_BINARY)[1]
# contours processing (OpenCV 3.x returns three values)
(_, contours, _) = cv2.findContours(gray.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
for c in contours:
    area = cv2.contourArea(c)
    if area < 8:
        continue
    # tricky smoothing of each contour down to a single line
    epsilon = 0.1 * cv2.arcLength(c, True)
    approx = cv2.approxPolyDP(c, epsilon, True)
    cv2.drawContours(img, [approx], -1, [255, 255, 255], -1)
cv2.imshow('result', img)
cv2.waitKey(0)
In your case it works perfectly, but, as I already said, you will need to do much more work with the contours.

OpenCV - Extract letters from string using Python

I have an image from which I want to extract each and every character individually.
I want something like THIS OUTPUT, and so on.
What would be the appropriate approach to do this using OpenCV and Python?
A short addition to Amitay's awesome answer: you should negate the image using cv2.THRESH_BINARY_INV to capture black letters on white paper.
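For example, combined with Otsu thresholding (a one-line sketch; gray stands for your grayscale image):

_, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)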
Another idea could be the MSER blob detector, like this:

import cv2

img = cv2.imread('path to image')
(h, w) = img.shape[:2]
image_size = h * w
mser = cv2.MSER_create()
mser.setMaxArea(int(image_size / 2))
mser.setMinArea(10)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # converting to grayscale
_, bw = cv2.threshold(gray, 0.0, 255.0, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
regions, rects = mser.detectRegions(bw)
# with the rects you can e.g. crop the letters
for (x, y, w, h) in rects:
    cv2.rectangle(img, (x, y), (x + w, y + h), color=(255, 0, 255), thickness=1)
This also detects every letter individually, giving you a box around each one.
You can do the following (OpenCV 3.0 and above):
1. Run Otsu thresholding on the image (http://docs.opencv.org/3.2.0/d7/d4d/tutorial_py_thresholding.html).
2. Run connected component labeling with stats on the thresholded image (How to use openCV's connected components with stats in python?).
3. For each connected component, take the bounding box from the stats you got in step 2, which holds for each component the values cv2.CC_STAT_LEFT, cv2.CC_STAT_TOP, cv2.CC_STAT_WIDTH and cv2.CC_STAT_HEIGHT.
4. Using the bounding box, crop the component from the original image, as in the sketch below.
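A minimal sketch of those steps (not from the original answer; the filenames are hypothetical):

import cv2

img = cv2.imread('text.png', cv2.IMREAD_GRAYSCALE)   # hypothetical input
# step 1: Otsu threshold, inverted so letters become white on black
_, bw = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)
# step 2: connected components with stats
n, labels, stats, centroids = cv2.connectedComponentsWithStats(bw, connectivity=8)
# steps 3 and 4: bounding box per component, then crop (label 0 is the background)
for i in range(1, n):
    x = stats[i, cv2.CC_STAT_LEFT]
    y = stats[i, cv2.CC_STAT_TOP]
    w = stats[i, cv2.CC_STAT_WIDTH]
    h = stats[i, cv2.CC_STAT_HEIGHT]
    letter = img[y:y + h, x:x + w]
    cv2.imwrite('letter_%d.png' % i, letter)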

Eye Pupil Tracking using Hough Circle Transform

I have a project, an eye-controlled wheelchair, where I need to detect the pupil of the eye and move the wheelchair according to its motion. As a test for the code I am writing, I ran the script on a static image. The image is exactly where the camera will be put. The camera will be an IR one.
Note: I am using compiled OpenCV 3.1.0-dev and Python 2.7 on the Windows platform.
The detected circle I wanted, using the Hough circle transform:
After that I worked on a code to detect the same thing, only using an IR camera.
The results from the static image code are very reliable to me, but the problem is the code with the IR camera.
The code I have written so far is:
import cv2
import numpy as np

cap = cv2.VideoCapture(0)

while True:
    ## Read Image
    ret, image = cap.read()
    ## Convert to 1 channel only grayscale image
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    ## CLAHE Equalization
    cl1 = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    clahe = cl1.apply(gray)
    ## medianBlur the image to remove noise
    blur = cv2.medianBlur(clahe, 7)
    ## Detect Circles
    circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, 1, 20,
                               param1=50, param2=30, minRadius=7, maxRadius=21)
    if circles != None:
        circles = np.round(circles[0,:]).astype("int")
        for circle in circles[0,:]:
            # draw the outer circle
            cv2.circle(image, (circle[0], circle[1]), circle[2], (0, 255, 0), 2)
            # draw the center of the circle
            cv2.circle(image, (circle[0], circle[1]), 2, (0, 0, 255), 3)
    if cv2.waitKey(1) in [27, ord('q'), 32]:
        break

cap.release()
cv2.destroyAllWindows()
I always get this error:
if circles != None:
FutureWarning: comparison to `None` will result in an elementwise object comparison in the future.
Traceback (most recent call last):
  cv2.circle(image,(circle[0],circle[1]),circle[2],(0,255,0),2)
IndexError: invalid index to scalar variable.
For reference, the code for the static image is:
import cv2
import numpy as np

## Read Image
image = cv2.imread('eye.tif')
imageBackup = image.copy()
## Convert to 1 channel only grayscale image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
## CLAHE Equalization
cl1 = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
clahe = cl1.apply(gray)
## medianBlur the image to remove noise
blur = cv2.medianBlur(clahe, 7)
## Detect Circles
circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, 1, 20,
                           param1=50, param2=30, minRadius=7, maxRadius=21)
for circle in circles[0,:]:
    # draw the outer circle
    cv2.circle(image, (circle[0], circle[1]), circle[2], (0, 255, 0), 2)
    # draw the center of the circle
    cv2.circle(image, (circle[0], circle[1]), 2, (0, 0, 255), 3)
cv2.imshow('Final', image)
cv2.imshow('imageBackup', imageBackup)
cv2.waitKey(0)
cv2.destroyAllWindows()
So I tried it out myself and I had the same error. So I modified the code like I already proposed. Here is the snippet:

if circles is not None:   # 'is not None' instead of '!= None' avoids the FutureWarning
    for circle in circles[0,:]:
        # draw the outer circle
        cv2.circle(image, (circle[0], circle[1]), circle[2], (0, 255, 0), 2)
        # draw the center of the circle
        cv2.circle(image, (circle[0], circle[1]), 2, (0, 0, 255), 3)
In addition you can try to use cv2.Canny for better results. Over and out :)
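A note on that Canny tip (my sketch, not part of the original answer): HoughCircles runs Canny internally, using param1 as the upper threshold and half of it as the lower one, so previewing the edge map can help you tune it:

edges = cv2.Canny(blur, 25, 50)   # upper value = param1, lower = param1 / 2
cv2.imshow('edges', edges)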

How to remove black part from the image?

I have stitched two images together using OpenCV functions and C++. Now I am facing a problem: the final image contains a large black part.
The final image should be a rectangle containing only the effective part.
My image is the following:
How can I remove the black section?
mevatron's answer is one way, where the amount of black region is minimised while retaining the full image.
Another option is removing the complete black region, where you also lose some part of the image, but the result will be a neat-looking rectangular image. Below is the Python code.
Here, you find the three main corners of the image, as below:
I have marked those values: (1,x2), (x1,1), (x3,y3). It is based on the assumption that your image starts from (1,1).
Code:
The first steps are the same as mevatron's: blur the image to remove noise, threshold it, then find contours.
import cv2
import numpy as np

img = cv2.imread('office.jpg')
img = cv2.resize(img, (800, 400))
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 3)
ret, thresh = cv2.threshold(gray, 1, 255, 0)
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
Now find the biggest contour, which is your image. This is to avoid noise in case there is any (most probably there won't be). Or you can use mevatron's method.
max_area = -1
best_cnt = None
for cnt in contours:
    area = cv2.contourArea(cnt)
    if area > max_area:
        max_area = area
        best_cnt = cnt
Now approximate the contour to remove unnecessary points from the contour values found, while preserving all the corner points.
approx = cv2.approxPolyDP(best_cnt,0.01*cv2.arcLength(best_cnt,True),True)
Now we find the corners.
First, we find (x3,y3): it is the farthest point, so x3*y3 will be very large. We therefore take the product x*y of every point and select the point with the maximum product.
far = approx[np.product(approx,2).argmax()][0]
Next, (1,x2). It is the point whose first element is 1 and whose second element is the maximum.
ymax = approx[approx[:,:,0]==1].max()
Next, (x1,1). It is the point whose second element is 1 and whose first element is the maximum.
xmax = approx[approx[:,:,1]==1].max()
Now we find the minimum values of (far.x, xmax) and (far.y, ymax):
x = min(far[0],xmax)
y = min(far[1],ymax)
If you draw a rectangle with (1,1) and (x,y), you get result as below:
So you crop the image to the correct rectangular area.
img2 = img[:y,:x].copy()
Below is the result:
See, the problem is that you lose some parts of the stitched image.
You can do this with threshold, findContours, and boundingRect.
So, here is a quick script doing this with the python interface.
import cv2

stitched = cv2.imread('stitched.jpg', 0)
(_, mask) = cv2.threshold(stitched, 1.0, 255.0, cv2.THRESH_BINARY)
# findContours destroys its input
temp = mask.copy()
(contours, _) = cv2.findContours(temp, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# sort contours by largest first (if there are more than one)
contours = sorted(contours, key=lambda contour: len(contour), reverse=True)
# boundingRect returns (x, y, w, h)
x, y, w, h = cv2.boundingRect(contours[0])
# use the roi to crop the original 'stitched' image
cropped = stitched[y:y + h, x:x + w]
Ends up looking like this:
NOTE : Sorting may not be necessary with raw imagery, but using the compressed image caused some compression artifacts to show up when using a low threshold, so that is why I post-processed with sorting.
Hope that helps!
You can use active contours (balloons/snakes) to select the black region accurately. A demonstration can be found here. Active contours are available in OpenCV; check cvSnakeImage.
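Note that cvSnakeImage belongs to the old C API. If you are working in Python, scikit-image offers a comparable snake implementation; here is a rough sketch (the initialization and all parameters are guesses to tune):

import cv2
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

img = cv2.imread('stitched.jpg', 0)
h, w = img.shape
# initialize the snake as an ellipse just inside the image border
s = np.linspace(0, 2 * np.pi, 400)
init = np.array([h / 2 + 0.45 * h * np.sin(s),
                 w / 2 + 0.45 * w * np.cos(s)]).T   # (row, col) pairs
snake = active_contour(gaussian(img, 3), init,
                       alpha=0.015, beta=10, gamma=0.001)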