Segmentation of overlapping cells - python-2.7

The following Python script is supposed to split overlapping cells apart, and it works quite well. The problem is that it also splits apart some cells that don't overlap with any other cell. To make things clear, I'll add my input image and the output image.
The input: [input image]
The output: [output image]
Output image where I marked two "bad" segmented cells: [output image with marked errors]
Thresholded image: [thresholded image]
Does anyone have an idea how to avoid this problem, or is the whole approach not good enough to process this kind of image?
I am using the following piece of code to segment the cells:
from skimage.feature import peak_local_max
from skimage.morphology import watershed
from scipy import ndimage
import numpy as np
import cv2

# load the image and perform pyramid mean shift filtering
# to aid the thresholding step
image = cv2.imread('C:/Users/Root/Desktop/image13.jpg')
shifted = cv2.pyrMeanShiftFiltering(image, 41, 51)

# convert the mean shift image to grayscale, then apply
# Otsu's thresholding (note: gray must come from shifted,
# not from the raw image, for the filtering to have any effect)
gray = cv2.cvtColor(shifted, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255,
                       cv2.THRESH_BINARY | cv2.THRESH_OTSU)[1]

# compute the distance transform and find peaks in it
D = ndimage.distance_transform_edt(thresh)
localMax = peak_local_max(D, indices=False, min_distance=3,
                          labels=thresh)

# perform a connected component analysis on the local peaks,
# using 8-connectivity, then apply the watershed algorithm
markers = ndimage.label(localMax, structure=np.ones((3, 3)))[0]
labels = watershed(-D, markers, mask=thresh)
print("[INFO] {} unique segments found".format(len(np.unique(labels)) - 1))

for label in np.unique(labels):
    # if the label is zero, we are examining the 'background'
    # so simply ignore it
    if label == 0:
        continue
    # otherwise, allocate memory for the label region and draw
    # it on the mask
    mask = np.zeros(gray.shape, dtype="uint8")
    mask[labels == label] = 255
    # detect contours in the mask and grab the largest one
    cnts = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL,
                            cv2.CHAIN_APPROX_SIMPLE)[-2]
    c = max(cnts, key=cv2.contourArea)
    # draw a rotated bounding box around sufficiently large regions
    rect = cv2.minAreaRect(c)
    box = cv2.boxPoints(rect)
    box = np.int0(box)
    if cv2.contourArea(c) > 150:
        cv2.drawContours(image, [box], -1, (0, 255, 0))

cv2.imshow("output", image)
cv2.waitKey()
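One frequent cause of the bad splits is that a single cell produces more than one local maximum in the distance transform, so the watershed receives two markers inside one cell. A minimal sketch of one mitigation (not a guaranteed fix): smooth the distance map and raise min_distance before peak detection; the sigma and distance values here are assumptions you would tune against your own images.
# smoothing the distance map merges nearby maxima inside one cell;
# sigma=4 and min_distance=15 are assumed values to tune per image
D_smooth = ndimage.gaussian_filter(D, sigma=4)
localMax = peak_local_max(D_smooth, indices=False, min_distance=15,
                          labels=thresh)
markers = ndimage.label(localMax, structure=np.ones((3, 3)))[0]
labels = watershed(-D_smooth, markers, mask=thresh)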

Related

Using Houghlines on a binary image, to identify the horizontal and vertical components and then "removing" them by drawing them in black

On this original image I am attempting to create a binary image with a black background and white points so I can fit a curve around them. Here is the image after thresholding, dilating, eroding and blurring.
I intend to do this by using HoughLines on a binary image to identify the horizontal and vertical components and then "remove" them by drawing over them in black. However, my code merely returns the original image in grayscale, as opposed to a bunch of white points on a black background ready to be used as coordinates to fit a curve around.
import cv2
import numpy as np

img = cv2.imread('blackwave.PNG', 0)  # file name assumed; the question doesn't show the read
if img is not None:
    kernel = np.ones((3, 3), np.uint8)  # kernel was not shown in the question; assumed here
    erosion = cv2.erode(img, kernel, iterations=500)
    edges = cv2.Canny(img, 0, 255)
    lines = cv2.HoughLines(edges, 1, np.pi/180, 0, 0, 0)
    for rho, theta in lines[0]:
        a = np.cos(theta)
        b = np.sin(theta)
        x0 = a*rho
        y0 = b*rho
        # extend the (rho, theta) line well past the image borders
        x1 = int(x0 + 1000*(-b))
        y1 = int(y0 + 1000*(a))
        x2 = int(x0 - 1000*(-b))
        y2 = int(y0 - 1000*(a))
        line = cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 2)
    cv2.imshow("blackwave.PNG", line)
    cv2.imwrite("blackwave.PNG", line)
    cv2.waitKey(0)
else:
    print 'Image could not be read'
As a learning exercise for myself I've spent a while trying to solve the image analysis part of this problem. In some ways I feel a bit reluctant to gift you a solution because I think you are already showing the effect of having this happen to you - you haven't learned how to use cv, so you have to ask more questions looking for a solution rather than figuring out how to adapt the code for yourself. OTOH it feels churlish to not share what I've done.
DON'T ask me to 'please change/improve/get this working' - this code does what it does, if you want it to do something different then get coding: it's over to you now.
I saved your raw image in a file sineraw.png.
The code goes through the following steps:
1. read raw image, already grayscale
2. equalize the image in the first step to getting a binary (black/white) image
3. do an adaptive threshold to get a black/white image, still got lots of noise
4. perform an erosion to remove any very small dots of noise from the thresholded image
5. perform a connected component analysis on the thresholded image, then store only the "large" blobs into mask
6. skeletonize mask into skel
7. Now look for and overwrite near-horizontal and near-vertical lines with black
The final image should then be suitable for curve fitting, as only the curve is shown in white pixels. That's another exercise for you (a minimal fitting sketch appears after the code below).
BTW you should really get a better source image.
I suppose there are other and possibly much better ways of achieving the same effect as shown in the final image, but this works for your source image. If it doesn't work for other images, well, you have the source code, get editing.
While doing this I explored a few options, such as different adaptive thresholding methods; the Gaussian one seems better at not putting white on the edges of the picture. I also explored drawing black lines around the picture to get rid of the edge noise, and using the labelling to remove all white that touches the edge of the picture, but that removes the main curve, which goes up to the edge. I also tried more erode/dilate/open/close operations but gave up and used the skeletonize because it preserves the shape and happily leaves a centreline for the curve.
CODE
import copy
import cv2
import numpy as np
from skimage import measure

# seq gives the saved files a sequence number so the processing order is easy to follow
seq = 0

# utility to save/show an image and optionally pause
def show(name, im, pause=False, save=False):
    global seq
    seq += 1
    if save:
        cv2.imwrite(str(seq)+"-"+name+".PNG", im)
    cv2.imshow(str(seq)+"-"+name+".PNG", im)
    if pause:
        cv2.waitKey(0)

# utility to return True if theta is approximately horizontal
def near_horizontal(theta):
    a = np.sin(theta)
    if a > -0.1 and a < 0.1:
        return True
    return False

# utility to return True if theta is approximately vertical
def near_vertical(theta):
    return near_horizontal(theta - np.pi/2.0)
################################################
# 1. read raw image, already grayscale
src = cv2.imread('sineraw.PNG', 0)
show("src", src, save=True)

################################################
# 2. equalize the image as the first step to getting a binary (black/white) image
gray = cv2.equalizeHist(src)
show("gray", gray, save=True)

################################################
# 3. do an adaptive threshold to get a black/white image, still with lots of noise
# I tried a range of parameters for the 41,10 - may vary by image, not sure
dst = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                            cv2.THRESH_BINARY_INV, 41, 10)
show("dst", dst, save=True)

################################################
# 4. perform an erosion to remove any very small dots of noise from the thresholded image
erode1 = cv2.erode(dst, None, iterations=1)
show("erode1", erode1, save=True)

################################################
# 5. perform a connected component analysis on the thresholded image,
#    then store only the "large" blobs into mask
labels = measure.label(erode1, neighbors=8, background=0)
# mask is initially all black
mask = np.zeros(erode1.shape, dtype="uint8")
# loop over the unique components
for label in np.unique(labels):
    # if this is the background label, ignore it
    if label == 0:
        continue
    # otherwise, construct the mask for this label and count the
    # number of pixels
    labelMask = np.zeros(erode1.shape, dtype="uint8")
    labelMask[labels == label] = 255
    numPixels = cv2.countNonZero(labelMask)
    # if the number of pixels in the component is sufficiently
    # large, then add it to our mask of "large blobs"
    if numPixels > 50:
        # add the blob into mask
        mask = cv2.add(mask, labelMask)
show("mask", mask, save=True)
################################################
# 6. skeletonize mask into skel
img = copy.copy(mask)
element = cv2.getStructuringElement(cv2.MORPH_CROSS, (3, 3))
done = False
size = np.size(img)
# the skeleton is initially all black
skel = np.zeros(img.shape, np.uint8)
while not done:
    eroded = cv2.erode(img, element)
    temp = cv2.dilate(eroded, element)
    temp = cv2.subtract(img, temp)
    skel = cv2.bitwise_or(skel, temp)
    img = eroded.copy()
    # show("tempimg", img)
    zeros = size - cv2.countNonZero(img)
    if zeros == size:
        done = True
show("skel", skel, save=True)
################################################
# 7. now look for and overwrite near-horizontal and near-vertical lines with black
lines = cv2.HoughLines(skel, 1, np.pi/180, 100)
for val in lines:
    (rho, theta) = val[0]
    a = np.cos(theta)
    b = np.sin(theta)
    if not near_horizontal(theta) and not near_vertical(theta):
        print "ignored line", rho, theta
        continue
    print "line", rho, theta, 180.0*theta/np.pi
    x0 = a*rho
    y0 = b*rho
    # this is pretty kludgey - it should use the actual image dimensions,
    # but it works as long as the image isn't too big
    x1 = int(x0 + 1000*(-b))
    y1 = int(y0 + 1000*(a))
    x2 = int(x0 - 1000*(-b))
    y2 = int(y0 - 1000*(a))
    print "line", rho, theta, 180.0*theta/np.pi, x0, y0, x1, y1, x2, y2
    cv2.line(skel, (x1, y1), (x2, y2), 0, 3)

################################################
# the final image is now in skel
show("final", skel, pause=True, save=True)

When I threshold an image I get a completely black image

I am attempting to threshold a wave so that the white background appears black and the wave itself, which was originally black, is white. However, it only seems to return an entirely black image. What am I doing wrong?
import cv2

src = cv2.imread("C:\\Users\\ksatt\\Desktop\\SoundByte\\blackwaveblackaxis (1).PNG", 0)
maxValue = 255
thresh = 53
if src is not None:
    th, dst = cv2.threshold(src, thresh, maxValue, cv2.THRESH_BINARY_INV)
    cv2.imshow("blackwave.PNG", dst)
    cv2.imwrite("blackwave.PNG", dst)
    cv2.waitKey(0)
else:
    print 'Image could not be read'
Your threshold is too low, and the dark paper is going to pick up values that you don't want anyway. Basically, the contrast of the image is too low.
One easy solution is to subtract out the background. The simple way to do this is to dilate() your grayscale image, which will expand the white area and overtake the black lines. Then you can apply a small GaussianBlur() to that dilated image, and this will give you a "background" image that you can subtract from your original image to get a clear view of the lines. From there you'll have a much better image to threshold(), and you can even use OTSU thresholding to automatically set the threshold level for you.
import cv2
import numpy as np
# read image
src = cv2.imread('wave.png',0)
# create background image
bg = cv2.dilate(src, np.ones((5,5), dtype=np.uint8))
bg = cv2.GaussianBlur(bg, (5,5), 1)
# subtract out background from source
src_no_bg = 255 - cv2.absdiff(src, bg)
# threshold
maxValue = 255
thresh = 240
retval, dst = cv2.threshold(src_no_bg, thresh, maxValue, cv2.THRESH_BINARY_INV)
# automatic / OTSU threshold
retval, dst = cv2.threshold(src_no_bg, 0, maxValue, cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
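To actually see the result on screen, a small usage addition (not part of the original answer; the output file name is my choice):
# display and save the OTSU result
cv2.imshow('thresholded', dst)
cv2.imwrite('blackwave_thresholded.png', dst)
cv2.waitKey(0)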
You can see that manual thresholding gives the same results as OTSU, but you don't have to play around with the values for OTSU, it'll find them for you. This isn't always the best way to go but it can be quick sometimes. Check out this tutorial for more on different thresholding operations.
If you take a look at http://docs.opencv.org/2.4/modules/imgproc/doc/miscellaneous_transformations.html#threshold it will tell you what each parameter of the function does.
Also here is a good tutorial:
http://docs.opencv.org/trunk/d7/d4d/tutorial_py_thresholding.html
Python: cv.Threshold(src, dst, threshold, maxValue, thresholdType) → None
is the prototype of the old cv API, which gets further explanation in the documentation linked above. In cv2, threshold returns the thresholded image instead of writing into a dst argument, so change your code to:
retval, RESULT = cv2.threshold(src, thresh, maxValue, cv2.THRESH_BINARY_INV)
cv2.imshow("blackwave.PNG", RESULT)
Could you post a picture of the wave? Have you tried using standard Python? Something like this should work:
import numpy as np
import matplotlib.pyplot as plt

maxValue = 255
thresh = 53
# matplotlib loads PNGs as floats in [0, 1], so scale back to 0-255
A = (plt.imread('file.png') * 255).astype(np.uint8)
# for each pixel, see if it's above/below the threshold
for i in range(A.shape[0]):      # loop along the rows
    for j in range(A.shape[1]):  # loop along the columns
        if A[i, j] > thresh:     # bright background -> black
            A[i, j] = 0
        else:                    # dark wave -> white
            A[i, j] = maxValue
Or something similar.
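As a hedged aside, the same thresholding can be done without explicit loops; reading with cv2 keeps pixel values in the 0-255 range that the threshold of 53 assumes:
import cv2
import numpy as np

A = cv2.imread('file.png', 0)                    # grayscale, values 0-255
B = np.where(A > 53, 0, 255).astype(np.uint8)    # background -> black, wave -> white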

Extracting a laser line in an image (using OpenCV)

I have a picture of a laser line and I would like to extract that line out of the image.
As the laser line is red, I take the red channel of the image and then search for the highest intensity in every row:
The problem now is that there are also some points which don't belong to the laser line (if you zoom into the second picture, you can see these points).
Does anyone have an idea for the next steps (to remove the single points and also to extract the lines)?
That was another approach to detect the line:
First I blurred that "black-white" line with a kernel, then I thinned (skeletonized) the blurred line to a thin line, then I applied an OpenCV function to detect the line. The result is in the image below:
NEW:
Now I have another, harder situation.
I have to extract a green laser line.
The problem here is that the colour range of the laser line is wider and changing.
On some parts of the laser line the pixels just have a high green component, while on other parts the pixels have a high blue component as well.
Getting the highest value in every row will always output a value, instead of ignoring rows where nothing is bright enough. Consider using a threshold too, so that you can discard responses that aren't high enough.
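A minimal sketch of that idea (the file name and the cutoff of 100 are assumptions): take the red channel, find the brightest pixel per row, and keep it only if it clears the threshold.
import cv2
import numpy as np

img = cv2.imread('image.png')
red = img[:, :, 2]                  # OpenCV stores channels as BGR
points = []
for y in range(red.shape[0]):
    x = int(np.argmax(red[y]))      # brightest column in this row
    if red[y, x] > 100:             # discard rows with no bright laser pixel
        points.append((x, y))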
However, that's not a very efficient way to do this at all. A much better and easier solution is the OpenCV function inRange(): define a lower and an upper bound for the red colour in all three channels, and it will return a binary image with white pixels where the image intensity is within that BGR range.
This is in Python, but it does the job; it should be easy to see how to use the function:
import cv2
import numpy as np
img = cv2.imread('image.png')
lowerb = np.array([0, 0, 120])
upperb = np.array([100, 100, 255])
red_line = cv2.inRange(img, lowerb, upperb)
cv2.imshow('red', red_line)
cv2.waitKey(0)
This produces the output:
This could be further processed by finding contours or other methods to turn the points into a nice curve.
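For instance, one hedged way to thin the inRange mask into a single curve (my sketch, continuing from red_line above) is to take the centroid of the white pixels in each column:
# one (x, y) point per column: the mean row of the white pixels
curve = []
for x in range(red_line.shape[1]):
    ys = np.where(red_line[:, x] > 0)[0]
    if len(ys) > 0:
        curve.append((x, int(ys.mean())))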
I'm really sorry for the short answer without any code, but I suggest you take the contours and process them.
I don't know exactly what you need, so here are two approaches for you:
just collect as many contours as possible that lie on a single line (use their centers and try to find the straight line with the smallest mean error)
as in the first approach, but trying to heuristically combine separated lines... it's much harder, but this may give you almost the full laser line from the image.
Some example for your picture:
import cv2
import numpy as np
import math

img = cv2.imread('image.png')
hsv = cv2.cvtColor(img, cv2.COLOR_RGB2HSV)
# filtering red area of hue
redHueArea = 15
redRange = ((hsv[:, :, 0] + 360 + redHueArea) % 360)
hsv[np.where((2 * redHueArea) > redRange)] = [0, 0, 0]
# filtering by saturation
hsv[np.where(hsv[:, :, 1] < 95)] = [0, 0, 0]
# convert back to rgb
rgb = cv2.cvtColor(hsv, cv2.COLOR_HSV2RGB)
# select only the red grayscaled channel with a low threshold
gray = cv2.cvtColor(rgb, cv2.COLOR_RGB2GRAY)
gray = cv2.threshold(gray, 15, 255, cv2.THRESH_BINARY)[1]
# contour processing
(_, contours, _) = cv2.findContours(gray.copy(), cv2.RETR_LIST, 1)
for c in contours:
    area = cv2.contourArea(c)
    if area < 8:
        continue
    epsilon = 0.1 * cv2.arcLength(c, True)  # tricky smoothing to a single line
    approx = cv2.approxPolyDP(c, epsilon, True)
    cv2.drawContours(img, [approx], -1, [255, 255, 255], -1)
cv2.imshow('result', img)
cv2.waitKey(0)
In your case it works perfectly but, as I already said, you will need to do much more work with the contours.

Python OpenCv2, counting contours of colored object

I want to be able to count the number of pixels in a detected object. I'm using the cv2.threshold function. Here is some pseudocode.
import cv2
import numpy as np
import time

while True:
    cam = cv2.VideoCapture(0)
    while cam.isOpened():
        ret, image = cam.read()
        image = cv2.GaussianBlur(image, (5,5), 0)
        Image1 = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
        lower = np.array([30,40,40], dtype='uint8')
        upper = np.array([95,240,240], dtype='uint8')
        Thresh = cv2.inRange(Image1, lower, upper)
From here on out, I have no idea how to count the pixels of my objects. How do you find the contours of a binary image? I suppose it could be possible to cv2.bitwise_and a fully black image over the Thresh mask, but that seems like it could be slow, and I also don't know how to create a fully black and white image like that.
So TL;DR: how do you count the number of pixels in an object from a binary image?
Note: I'm actually just after the largest object and only need the number of pixels, not the image.
Edit: I'm not trying to count the total number of pixels detected; I've already done that. I want the number of pixels from the object with the largest count.
This is how I did it:
import cv2
import numpy as np
import time
from scipy.ndimage import (labeled_comprehension, label,
                           generate_binary_structure)  # new imports

while True:
    cam = cv2.VideoCapture(0)
    while cam.isOpened():
        ret, image = cam.read()                          # grab a frame
        image = cv2.GaussianBlur(image, (5,5), 0)        # blur to remove noise
        Image1 = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)  # convert to a better colour space
        lower = np.array([30,40,40], dtype='uint8')      # low green
        upper = np.array([95,240,240], dtype='uint8')    # high green
        Thresh = cv2.inRange(Image1, lower, upper)       # 255 where the pixel is in range
        struct = generate_binary_structure(2, 2)         # 8-connectivity
        Label, features = label(Thresh, struct)          # label image and number of objects
        Arange = np.arange(1, features + 1)              # the labels to measure
        # sum the pixel values per object and sort by size; the last entry is the
        # biggest object, and //255 converts the sum back to a pixel count
        Biggest = sorted(labeled_comprehension(Thresh, Label, Arange,
                                               np.sum, float, -1))[features - 1] // 255
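An alternative sketch using only OpenCV, assuming the pixel count of the largest blob is all you need: find the external contours of the binary mask and take the area of the biggest one (contourArea approximates the pixel count for solid blobs; the [-2] indexing matches the version-tolerant style used earlier in this thread).
# Thresh is the binary mask from inRange above
cnts = cv2.findContours(Thresh.copy(), cv2.RETR_EXTERNAL,
                        cv2.CHAIN_APPROX_SIMPLE)[-2]
if cnts:
    largest = max(cnts, key=cv2.contourArea)
    print "largest object pixel count (approx):", cv2.contourArea(largest)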

OpenCV 2.4.9 for Python, cannot find chessboard (camera calibration tutorial)

I am trying to calibrate a camera using OpenCV tools, following this guide.
The problem is that the function findChessboardCorners cannot find the chessboard in any of the images I tried. I used a lot of them, even just a plain chessboard pattern. In no case was anything detected.
Here is the code (almost the same as from the link above):
import numpy as np
import cv2
import glob

# termination criteria
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ...., (6,5,0)
objp = np.zeros((6*7, 3), np.float32)
objp[:, :2] = np.mgrid[0:7, 0:6].T.reshape(-1, 2)
# arrays to store object points and image points from all the images
objpoints = []  # 3d points in real world space
imgpoints = []  # 2d points in image plane
images = glob.glob('*.png')
for fname in images:
    img = cv2.imread(fname)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # find the chessboard corners
    ret, corners = cv2.findChessboardCorners(gray, (7,6), None)
    # if found, add object points and image points (after refining them)
    if ret == True:
        objpoints.append(objp)
        corners2 = cv2.cornerSubPix(gray, corners, (11,11), (-1,-1), criteria)
        imgpoints.append(corners2)
        # draw and display the corners
        img = cv2.drawChessboardCorners(img, (7,6), corners2, ret)
        cv2.imshow('img', img)
        cv2.waitKey(500)
cv2.destroyAllWindows()
The only change I made is that I switched from .jpg files to .png files; for some reason imread cannot read jpg images (that's a strange problem for another topic).
Thank you in advance for any advice!
Image ref: [chessboard image]
Just for other Python newbies that may go down this road, working code:
import numpy as np
import cv2
import glob

# termination criteria
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
# array to store image points from all the images
imgpoints = []  # 2d points in image plane
images = glob.glob('*.png')
for fname in images:
    img = cv2.imread(fname)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # find the chessboard corners
    ret, corners = cv2.findChessboardCorners(gray, (7,7))
    # if found, refine and store the image points
    if ret == True:
        cv2.cornerSubPix(gray, corners, (11,11), (-1,-1), criteria)
        imgpoints.append(corners)
        # draw and display the corners
        cv2.drawChessboardCorners(img, (7,7), corners, ret)
        cv2.imshow('img', img)
        cv2.waitKey(0)
cv2.destroyAllWindows()
Two main points:
You have to carefully count the dimensions of your pattern: (7,7) is for a usual chessboard's inner corners.
The line img = cv2.drawChessboardCorners(img, (7,6), corners2, ret) doesn't work; change it to cv2.drawChessboardCorners(img, (7,6), corners2, ret), because the function draws into img in place and doesn't return the image.
Thanks to AldurDisciple!
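As a small hedged sketch of the first point: if you are unsure of the inner-corner count, you can try a few candidate sizes and see which one findChessboardCorners accepts (the file name and the candidate list are assumptions):
import cv2

# try several plausible inner-corner counts against one calibration image
gray = cv2.cvtColor(cv2.imread('board.png'), cv2.COLOR_BGR2GRAY)
for size in [(7, 7), (7, 6), (9, 6), (8, 8)]:
    found, corners = cv2.findChessboardCorners(gray, size)
    print 'pattern %s found: %s' % (size, found)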