I am using the opencv-python library to do liquid level detection. So far I have been able to convert the image to grayscale, and by applying Canny edge detection the container has been identified.
import numpy as np
import cv2
import math
from matplotlib import pyplot as plt
from cv2 import threshold, drawContours
img1 = cv2.imread('botone.jpg')
kernel = np.ones((5,5),np.uint8)
#convert the image to grayscale
imgray = cv2.cvtColor(img1,cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(imgray,120,230)
I need to know how to find the water level from this stage.
Should I try machine learning, or is there any other option or algorithm available?
I took the approach of finding horizontal lines in the edge-detected image: if a horizontal line crosses a certain threshold, I can consider it the level. But the result is not consistent.
I want to know if there are any other approaches I can try, or any white papers I can use for reference.
I don't know how you would do that with numpy and opencv, because I use ImageMagick (which is installed on most Linux distros and is available for OSX and Windows), but the concept should be applicable.
First, I would probably go for a Sobel filter that is rotated to find horizontal edges - i.e. a directional filter.
convert chemistry.jpg -morphology Convolve Sobel:90 sobel.jpg
Then I would probably look at adding in a Hough Transform to find the lines within the horizontal edge-detected image. So, my one-liner looks like this in the Terminal/shell:
convert chemistry.jpg -morphology Convolve Sobel:90 -hough-lines 5x5+30 level.jpg
If I add in some debug, you can see the coefficients of the Sobel filter:
convert chemistry.jpg -define showkernel=1 -morphology Convolve Sobel:90 -hough-lines 5x5+30 sobel.jpg
Kernel "Sobel#90" of size 3x3+1+1 with values from -2 to 2
Forming a output range from -4 to 4 (Zero-Summing)
0: 1 2 1
1: 0 0 0
2: -1 -2 -1
If I add in some more debug, you can see the coordinates of the lines detected:
convert chemistry.jpg -morphology Convolve Sobel:90 -hough-lines 5x5+30 -write lines.mvg level.jpg
lines.mvg
# Hough line transform: 5x5+30
viewbox 0 0 86 196
line 0,1.52265 86,18.2394 # 30 <-- this is the topmost, somewhat diagonal line
line 0,84.2484 86,82.7472 # 40 <-- this is your actual level
line 0,84.5 86,84.5 # 40 <-- this is also your actual level
line 0,94.5 86,94.5 # 30 <-- this is the line just below the surface
line 0,93.7489 86,95.25 # 30 <-- so is this
line 0,132.379 86,124.854 # 32 <-- this is the red&white valve(?)
line 0,131.021 86,128.018 # 34
line 0,130.255 86,128.754 # 34
line 0,130.5 86,130.5 # 34
line 0,129.754 86,131.256 # 34
line 0,192.265 86,190.764 # 86
line 0,191.5 86,191.5 # 86
line 0,190.764 86,192.265 # 86
line 0,192.5 86,192.5 # 86
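If you later want to try the same idea with numpy and OpenCV rather than ImageMagick, a rough sketch of the horizontal Sobel plus probabilistic Hough approach might look like this - the Sobel kernel size and the Hough thresholds are only guesses that will need tuning for your image:
import cv2
import numpy as np

img = cv2.imread('chemistry.jpg', cv2.IMREAD_GRAYSCALE)

# directional Sobel: derivative in y only, so it responds to horizontal edges
sobel_y = cv2.convertScaleAbs(cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3))
_, edges = cv2.threshold(sobel_y, 40, 255, cv2.THRESH_BINARY)

# probabilistic Hough transform to pull line segments out of the edge image
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=30,
                        minLineLength=img.shape[1] // 2, maxLineGap=5)

if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        # keep only near-horizontal candidates for the liquid level
        if abs(y2 - y1) < 5:
            print('possible level at y ~', (y1 + y2) // 2)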
As I said in my comments, please also think about lighting your experiment better - with different coloured lights, more diffuse lighting, or lights from a different direction. Also, if your experiment happens over time, you could consider looking at differences between images to see which line is moving...
Here are the lines on top of your original image:
I have a picture of a laser line and I would like to extract that line from the image.
As the laser line is red, I take the red channel of the image and then search for the highest intensity in every row:
The problem now is that there are also some points which don't belong to the laser line (if you zoom into the second picture, you can see these points).
Does anyone have an idea for the next steps (to remove the stray points and also to extract the line)?
Here is another approach I tried to detect the line:
First I blurred that "black-white" line with a kernel, then I thinned (skeletonised) the blurred line down to a thin line, and then I applied an OpenCV function to detect the line. The result is in the image below:
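For completeness, a minimal sketch of that blur, skeleton and line-detection idea in OpenCV could look like this (the thinning step assumes the opencv-contrib package is installed; the filename and thresholds are only placeholders):
import cv2
import numpy as np

# 'line_mask.png' is a placeholder for the black-white line image described above
mask = cv2.imread('line_mask.png', cv2.IMREAD_GRAYSCALE)

# blur and re-threshold to join up the broken line
blurred = cv2.GaussianBlur(mask, (15, 15), 0)
_, binary = cv2.threshold(blurred, 50, 255, cv2.THRESH_BINARY)

# thin the blob down to a one-pixel-wide skeleton (needs opencv-contrib)
skeleton = cv2.ximgproc.thinning(binary)

# fit straight segments to the skeleton
lines = cv2.HoughLinesP(skeleton, 1, np.pi / 180, threshold=50,
                        minLineLength=30, maxLineGap=5)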
NEW:
Now I have another, harder situation.
I have to extract a green laser light.
The problem here is that the colour range of the laser line is wider and changing.
On some parts of the laser line the pixels just have a high green component, while on other parts the pixels have a high blue component as well.
Taking the highest value in every row will always return something, even when no laser pixel is present in that row. Consider using a threshold as well, so that you can discard values that aren't high enough.
However, that's not a very efficient way to do this at all. A much better and easier solution would be to use the OpenCV function inRange(); define a lower and upper bound for the red color in all three channels, and this will return a binary image with white pixels where the image intensity is within that BGR range.
This is in python but it does the job, should be easy to see how to use the function:
import cv2
import numpy as np
img = cv2.imread('image.png')
lowerb = np.array([0, 0, 120])
upperb = np.array([100, 100, 255])
red_line = cv2.inRange(img, lowerb, upperb)
cv2.imshow('red', red_line)
cv2.waitKey(0)
This produces the output:
This could be further processed by finding contours or other methods to turn the points into a nice curve.
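For example, a minimal sketch of one such post-processing step - collapsing the mask to one representative point per column, assuming the laser line runs roughly horizontally - could be:
import cv2
import numpy as np

img = cv2.imread('image.png')
red_line = cv2.inRange(img, np.array([0, 0, 120]), np.array([100, 100, 255]))

ys, xs = np.nonzero(red_line)
points = []
for x in np.unique(xs):
    # mean row of the mask pixels in this column
    points.append((int(x), float(ys[xs == x].mean())))

# 'points' can then be smoothed or fitted, e.g. with np.polyfit,
# to turn the scattered mask into a single curve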
I'm really sorry for the short answer without any code, but I suggest you take contours and process them.
I don't know exactly what you need, so here are two approaches for you:
collect as many contours as possible that lie on a single line (use their centres and try to find the straight line with the smallest mean error)
as in the first way, but heuristically combining separated segments... it's much harder, but it may give you almost the full laser line from the image.
--
An example for your picture:
import cv2
import numpy as np
import math
img = cv2.imread('image.png')
hsv = cv2.cvtColor(img, cv2.COLOR_RGB2HSV)
# filtering red area of hue
redHueArea = 15
redRange = ((hsv[:, :, 0].astype(int) + 360 + redHueArea) % 360)
hsv[np.where((2 * redHueArea) > redRange)] = [0, 0, 0]
# filtering by saturation
hsv[np.where(hsv[:, :, 1] < 95)] = [0, 0, 0]
# convert to rgb
rgb = cv2.cvtColor(hsv, cv2.COLOR_HSV2RGB)
# select only red grayscaled channel with low threshold
gray = cv2.cvtColor(rgb, cv2.COLOR_RGB2GRAY)
gray = cv2.threshold(gray, 15, 255, cv2.THRESH_BINARY)[1]
# contours processing
(_, contours, _) = cv2.findContours(gray.copy(), cv2.RETR_LIST, 1)
for c in contours:
    area = cv2.contourArea(c)
    if area < 8:
        continue
    epsilon = 0.1 * cv2.arcLength(c, True)  # tricky smoothing to a single line
    approx = cv2.approxPolyDP(c, epsilon, True)
    cv2.drawContours(img, [approx], -1, [255, 255, 255], -1)
cv2.imshow('result', img)
cv2.waitKey(0)
In your case it works perfectly but, as I already said, you will need to do much more work with the contours.
In the code below I'm using get_window_extent() to get the height of a text label. I have set the figure dpi to 72 in an attempt to make screen pixels and font points have a 1:1 relationship. My expectation is that the value retrieved by get_window_extent() would match the text point size.
To test this out I created a loop to draw a set of text labels of increasing size and I'm finding that the value retrieved by get_window_extent() matches for some font sizes but not for others.
Here is the output produced by the code below:
Font Size
Set Returned
9 9.0
10 10.0
11 10.0
12 13.0
13 13.0
14 14.0
15 15.0
16 15.0
17 18.0
18 18.0
It appears that either the figure dpi setting is not actually at 72 dpi, or that something is amiss with the get_window_extent() method.
I'm running Matplotlib 1.5.0 on macOS 10.12.5, using the WXAgg backend. Any ideas as to why this is occurring would be welcome.
import matplotlib as mpl
mpl.use('wxagg')
import matplotlib.pyplot as plt
# points per inch
points_per_inch = 72
# set figure size in inches
myfsize = (8, 6)
# create figure and subplot axes matrix
myfig, ax = plt.subplots(1, 1, dpi=72, figsize=myfsize)
# adjust subplot spacing
plt.subplots_adjust(wspace=0.04, hspace=0.04, right=0.8,
                    bottom=0.1, top=0.9, left=0.125)
# draw canvas to get positions
plt.gcf().canvas.draw()
string = 'curve'
print
print 'Font Size'
print 'Set', '\t', 'Returned'
# loop over a range of font sizes and print retrieved font size
for i in range(10):
    text_size = 9 + i
    text_position = i / 10.0
    txt = ax.text(0.0, text_position, string, fontsize=text_size,
                  transform=ax.transAxes)
    plt.gcf().canvas.draw()
    txt_height_display = txt.get_window_extent().height
    print text_size, '\t', txt_height_display
plt.show()
Due to the discretization of the text onto screen pixels, there may always be a deviation between the number of pixels filled by the font and the font size. This deviation may be up to 2 pixels - one on each side.
I therefore wouldn't be worried or surprised by the results you get.
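If you want to convince yourself that only pixel rounding is involved, a small check like this (a sketch using the Agg backend; any raster backend behaves similarly) prints the deviation per font size, which should stay within a couple of pixels:
import matplotlib
matplotlib.use('agg')
import matplotlib.pyplot as plt

fig, ax = plt.subplots(dpi=72)   # 72 dpi, so 1 point == 1 pixel
fig.canvas.draw()

for size in range(9, 19):
    txt = ax.text(0.0, 0.5, 'curve', fontsize=size, transform=ax.transAxes)
    fig.canvas.draw()
    height = txt.get_window_extent().height
    print(size, height, abs(height - size))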
So I am working on a script to do some video processing. It reads a video file searching for red dots of a certain size, then finds the center of each and returns the x/y coordinates. Initially I had it working great on my Windows machine, so I sent it over to the Raspberry Pi to see if I would encounter issues, and boy did I.
On Windows the script would run in real time, completing at the same time as the video. On the Raspberry Pi it is slowwwwwwww. Also I noticed, when I looked into the structure of contours, that there is a huge array of 0's first, before my x/y coordinates array. I have no idea what is creating this, but it doesn't happen on the Windows box.
I have the same version of Python and OpenCV installed on both boxes; the only difference is NumPy 1.11 on Windows and NumPy 1.12 on the Raspberry Pi. Note, I had to change the index in np.mean(contours[?]) to 1 to skip the initial array of 0's. What have I done wrong?
Here's a video I made for testing purposes if needed:
http://www.foxcreekwinery.com/video.mp4
import numpy as np
import cv2
def vidToPoints():
    cap = cv2.VideoCapture('video.mp4')
    while cap.isOpened():
        ret, image = cap.read()
        if ret:
            cv2.imshow('frame', image)
            if cv2.waitKey(1) == ord('q'):
                break
            # save frame as image
            cv2.imwrite('frame.jpg', image)
            # load the image
            image = cv2.imread('frame.jpg')
            # define the list of boundaries
            boundaries = [
                ([0, 0, 150], [90, 90, 255])
            ]
            # loop over the boundaries
            for (lower, upper) in boundaries:
                # create NumPy arrays from the boundaries
                lower = np.array(lower, dtype="uint8")
                upper = np.array(upper, dtype="uint8")
                # find the colors within the specified boundaries
                mask = cv2.inRange(image, lower, upper)
                if 50 > cv2.countNonZero(mask) > 10:
                    # find contours
                    contours = cv2.findContours(mask, 0, 1)
                    # average the contour list to find the center
                    avg = np.mean(contours[1], axis=1)
                    x = int(round(avg[0, 0, 0]))
                    y = int(round(avg[0, 0, 1]))
                    print [x, y]
                    print cv2.countNonZero(mask)
            for l in range(5):
                cap.grab()
        else:
            break
    cap.release()
    cv2.destroyAllWindows()

vidToPoints()
I was working on the following images to find lines and spots in them. I am working with OpenCV and C++. I have tried HoughLinesP, HoughLines, contour and Canny methods but couldn't get good results. If someone can help or write some pseudo-code, I shall be grateful.
Thanks.
Image to detect line:
Image to detect spot:
Mmmmm - where did you get those awful images? Well, they were worth 2 minutes of effort... the spot is lighter than the rest, so if you divide the image into 100 rectangles and find the brightest ones, you will probably get it... I use ImageMagick here just at the command line - it is installed on most Linux distros and available for OSX and Windows:
convert noise.jpg -crop 10x10# -format "%[mean] %g\n" info: | sort -n
32123.3 640x416+384+291
32394.6 640x416+256+42
32442.2 640x416+320+125
32449.1 640x416+384+250
32459.6 640x416+192+374
32464.4 640x416+0+374
32486.5 640x416+448+125
32491.4 640x416+576+374
32493.7 640x416+576+333
32504.3 640x416+576+83
32520.9 640x416+576+0
32527 640x416+448+0
32621.8 640x416+384+333
32624.1 640x416+320+42
32631.3 640x416+192+333
32637.8 640x416+384+42
32643.4 640x416+512+0
32644.2 640x416+0+0
32652.6 640x416+384+83
32659.1 640x416+128+374
32660.4 640x416+320+208
32662.2 640x416+384+0
32668.5 640x416+256+208
32669.4 640x416+0+333
32676.7 640x416+256+250
32683.5 640x416+256+83
32699.7 640x416+0+208
32701.3 640x416+64+166
32704 640x416+576+208
32704 640x416+64+333
32707.5 640x416+512+208
32710.8 640x416+192+83
32729.8 640x416+320+83
32733.4 640x416+256+166
32735 640x416+576+250
32741 640x416+256+125
32745.4 640x416+0+166
32748.4 640x416+320+166
32751.4 640x416+512+166
32752.4 640x416+512+42
32755.1 640x416+384+208
32770.9 640x416+448+291
32776.8 640x416+128+166
32777.1 640x416+256+0
32795.8 640x416+512+125
32801.5 640x416+128+333
32803.3 640x416+192+125
32805.5 640x416+256+374
32809.6 640x416+448+166
32810 640x416+576+166
32822.2 640x416+0+291
32822.8 640x416+576+42
32826.8 640x416+320+333
32831.7 640x416+320+0
32834.8 640x416+192+42
32837.6 640x416+192+166
32843 640x416+384+125
32862 640x416+64+374
32865.8 640x416+0+42
32871.5 640x416+576+291
32872.5 640x416+0+83
32872.8 640x416+448+333
32873.6 640x416+320+291
32877.5 640x416+448+42
32880.5 640x416+64+208
32883.5 640x416+128+42
32883.9 640x416+192+208
32885.5 640x416+128+208
32889.2 640x416+256+333
32921 640x416+192+291
32923.3 640x416+64+291
32929.2 640x416+512+374
32935.4 640x416+192+250
32938.4 640x416+64+250
32943.5 640x416+448+374
32953.3 640x416+384+374
32954.7 640x416+320+374
32962 640x416+320+250
32966.9 640x416+448+83
32967.3 640x416+128+291
32968.3 640x416+0+250
32970.8 640x416+512+333
32974.5 640x416+64+0
32979.6 640x416+512+291
32983.6 640x416+256+291
32988.9 640x416+448+250
32993.3 640x416+576+125
33012.7 640x416+0+125
33057.3 640x416+512+250
33068.6 640x416+128+250
33102.9 640x416+64+42
33126.1 640x416+512+83
33127.9 640x416+384+166
33139.2 640x416+192+0
33141.4 640x416+64+83
33142.3 640x416+64+125
33181.5 640x416+448+208
33190.8 640x416+128+0
34693 640x416+128+125
36178.3 640x416+128+83
The last 2 rectangles are the brightest, so if I outline them in red and blue you can see what it has found:
convert noise.jpg -fill none -stroke red -draw "rectangle 128,83 192,123" -stroke blue -draw "rectangle 128,125 192,168" result.png
Alternatively, you could create a new image in which each pixel is the mean of the 50x50 square of surrounding pixels in the original image, like this:
convert noise.jpg -virtual-pixel edge -statistic mean 50x50 -auto-level result.png
Of course, you can also threshold that:
convert noise.jpg -virtual-pixel edge -statistic mean 50x50 -auto-level -threshold 80% result.png
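If you prefer to stay in OpenCV/Python, a rough equivalent of this local-mean idea might look like the sketch below - the filename and the 50x50 window are simply carried over from the commands above:
import cv2
import numpy as np

img = cv2.imread('noise.jpg', cv2.IMREAD_GRAYSCALE)

# each pixel becomes the mean of its 50x50 neighbourhood ("-statistic mean 50x50")
mean = cv2.blur(img, (50, 50))

# stretch to the full 0-255 range ("-auto-level") and threshold at 80%
norm = cv2.normalize(mean, None, 0, 255, cv2.NORM_MINMAX)
_, bright = cv2.threshold(norm, int(0.8 * 255), 255, cv2.THRESH_BINARY)

# the brightest neighbourhood centre gives a candidate spot location
_, _, _, max_loc = cv2.minMaxLoc(mean)
print('brightest 50x50 neighbourhood centred near', max_loc)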
As regards the lines, I want to use some type of mode filter to detect the most frequently occurring values within small areas. But as the colours vary, I first need to reduce the palette of colours so that similarly coloured things match; so I would go with an approach something like this, which reduces the colours and then calculates the mode:
convert noise2.jpg -colors 8 -statistic mode 8x8 result.jpg
It needs refinement, but you get the idea hopefully.
Alternatively, you could calculate a new image wherein each pixel is the standard deviation of the surrounding 3x3 pixels in the original image and then look for the areas where this value is lowest - i.e. where the result is darkest, which corresponds to areas of the input image with the least variation in pixel colour:
convert noise2.png -statistic standarddeviation 3x3 -auto-level result.png
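A standard-deviation image like that can also be approximated in OpenCV/Python, using std = sqrt(E[x^2] - (E[x])^2) over each window - a minimal sketch:
import cv2
import numpy as np

img = cv2.imread('noise2.png', cv2.IMREAD_GRAYSCALE).astype(np.float32)

# local mean and local mean-of-squares over a 3x3 window
mean = cv2.blur(img, (3, 3))
mean_sq = cv2.blur(img * img, (3, 3))

# variance = E[x^2] - (E[x])^2, clipped at 0 to avoid tiny negative values
std = np.sqrt(np.maximum(mean_sq - mean * mean, 0))

# stretch to 0-255 for viewing; dark areas have the least local variation
std_norm = cv2.normalize(std, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite('stddev.png', std_norm)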
In my project I deal with images that may or may not be inclined.
I work with C++ and OpenCV. I tried the Hough transform to determine the angle of inclination - whether it is 90 or 180 - but it doesn't give a result.
A link to example image (full resolution TIFF) here.
The following illustration is the full-res image scaled down and converted to PNG:
If I want to attack your image with the Hough lines method, I would do a Canny edge detection first, then find the Hough lines and then look at the generated lines. So it would look like this in ImageMagick - you can transform to OpenCV:
convert input.jpg \
\( +clone -canny x10+10%+30% \
-background none -fill red \
-stroke red -strokewidth 2 \
-hough-lines 9x9+150 \
-write lines.mvg \
\) \
-composite hough.png
And in the lines.mvg file, I can see the individual detected lines:
# Hough line transform: 9x9+150
viewbox 0 0 349 500
line 0,-3.74454 349,8.44281 # 160
line 0,55.2914 349,67.4788 # 206
line 1,0 1,500 # 193
line 0,71.3012 349,83.4885 # 169
line 0,125.334 349,137.521 # 202
line 0,142.344 349,154.532 # 156
line 0,152.351 349,164.538 # 155
line 0,205.383 349,217.57 # 162
line 0,239.453 349,245.545 # 172
line 0,252.455 349,258.547 # 152
line 0,293.461 349,299.553 # 163
line 0,314.464 349,320.556 # 169
line 0,335.468 349,341.559 # 189
line 0,351.47 349,357.562 # 196
line 0,404.478 349,410.57 # 209
line 349.39,0 340.662,500 # 187
line 0,441.484 349,447.576 # 198
line 0,446.484 349,452.576 # 165
line 0,455.486 349,461.578 # 174
line 0,475.489 349,481.581 # 193
line 0,498.5 349,498.5 # 161
I resized your image to 349 pixels wide (to make it fit on Stack Overflow and process faster), so you can see there are lots of lines that start at 0 on the left side of the image and end at 349 on the right side, which tells you they go across the image rather than up and down it. Also, you can see that the right end of the lines is generally 16 pixels lower than the left, so the image is rotated by arctan(16/349), which is about 2.6 degrees.
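Since you work with OpenCV, a rough Python translation of that Canny-plus-Hough skew estimate might look like this sketch (the filename and all thresholds are placeholders you will need to tune):
import cv2
import numpy as np

img = cv2.imread('input.jpg', cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 50, 150)

lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=150,
                        minLineLength=img.shape[1] // 2, maxLineGap=10)

angles = []
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
        if abs(angle) < 20:   # keep only near-horizontal lines
            angles.append(angle)

if angles:
    print('estimated skew: %.2f degrees' % np.median(angles))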
Here is a fairly simple approach that may help you get started, or give you ideas that you can adapt. I use ImageMagick, but the concepts and techniques should be readily applicable in OpenCV.
First, I note that the image is rotated a few degrees and that gives the black triangle at top right, so the first thing I would consider is cropping the middle out of the image - i.e. removing around 10-15% off each side.
The next thing I note is that the image is poorly scanned, with lots of noisy, muddy grey areas. I would tend to want to blur these together so that they become a bit more uniform and can be thresholded.
So, if I want to do those two things in ImageMagick, I would do this:
convert input.tif \
-gravity center -crop 75x75%+0+0 \
-blur x10 -threshold 50% \
-negate \
stage1.jpg
Now, I can count the number of horizontal black lines that run the full width of the image (without crossing anything white). I do this by squidging the image till it is just a single pixel wide (but still the full original height) and counting the number of black rows:
convert stage1.jpg -resize 1x! -threshold 1 txt: | grep -c black
1368
And I do the same for vertical black lines that run the full height of the image from top to bottom, uninterrupted by white. I do that by squidging the image till it is a single pixel tall and the full original width:
convert stage1.jpg -resize x1! -threshold 1 txt: | grep -c black
0
Therefore there are 1,368 lines across the image and none up and down it, so I can say the dark lines in the original image tend to run left-right across the image rather than top-bottom up and down the image.
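If it is easier to integrate, the same crop / blur / threshold / count pipeline can be sketched with OpenCV and numpy - the sizes and thresholds below are simply carried over from the commands above, and counting rows with no white pixels at all is a rough stand-in for the -resize 1x! trick:
import cv2
import numpy as np

img = cv2.imread('input.tif', cv2.IMREAD_GRAYSCALE)

# crop the central 75% of the image
h, w = img.shape
crop = img[h // 8: h // 8 + 3 * h // 4, w // 8: w // 8 + 3 * w // 4]

# heavy blur, threshold at 50% and invert, like the ImageMagick stage1
blurred = cv2.GaussianBlur(crop, (0, 0), 10)
_, binary = cv2.threshold(blurred, 127, 255, cv2.THRESH_BINARY_INV)

# rows/columns containing no white pixels are the uninterrupted black lines
horizontal = np.count_nonzero(~binary.any(axis=1))
vertical = np.count_nonzero(~binary.any(axis=0))
print('full-width black rows:', horizontal)
print('full-height black columns:', vertical)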