Python OpenCV get bottom most value of mask - python-2.7

I have an image from which I extract a colour into a mask, as shown in the code below. The mask is a black-and-white image: white is the colour I detect, with a pixel value of 255, and black is 0.
I want to get the bottommost (x, y) pixel of the white portion of the mask. How do I do this?
My code is as follows:
import cv2
import numpy as np

image = cv2.imread(FILENAME)
# THE COLOURS ARE IN RGB
lower_blue = np.array([50, 0, 0])
upper_blue = np.array([255, 50, 50])
# loop over the boundaries
# for (lower, upper) in boundaries:
# create NumPy arrays from the boundaries
lower = np.array(lower_blue, dtype = "uint8")
upper = np.array(upper_blue, dtype = "uint8")
# find the colors within the specified boundaries and apply
# the mask
mask = cv2.inRange(image, lower, upper)

You can use NumPy's where to search your mask for a specific value. For example, the row (y) index of the bottommost white pixel is:
np.max(np.where(np.max(mask, axis=1) == 255))
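If you need both coordinates, a minimal sketch (assuming mask is the single-channel output of cv2.inRange above and contains at least one white pixel) is to take all white-pixel coordinates at once and pick the one with the largest row index:
ys, xs = np.where(mask == 255)   # row (y) and column (x) indices of every white pixel
i = np.argmax(ys)                # position of the bottommost white pixel
bottom_x, bottom_y = xs[i], ys[i]
print(bottom_x, bottom_y)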

Related

OpenCV Python: How to warpPerspective a large image based on transform inferred from small region

I am using cv2.getPerspectiveTransform() and cv2.warpPerspective() to warp an image, following Adrian Rosebrock's blog: https://www.pyimagesearch.com/2014/08...
However in my case I have an image where I can only select the region B to be warped but need to warp (top-down view) the whole larger image A.
Can the parameters of the perspective transform inferred from the smaller region B be applied to the full image A? Is that possible?
Here is one way to demonstrate that the matrix from the red square applies to the whole image in Python OpenCV.
Here I rectify the quadrilateral into a rectangle on the basis of its top and left dimensions.
Input:
import numpy as np
import cv2
import math
# read input
img = cv2.imread("red_quadrilateral.png")
hh, ww = img.shape[:2]
# specify input coordinates for corners of red quadrilateral in order TL, TR, BR, BL as x,y
input = np.float32([[136,113], [206,130], [173,207], [132,196]])
# get top and left dimensions and set to output dimensions of red rectangle
width = round(math.hypot(input[0,0]-input[1,0], input[0,1]-input[1,1]))
height = round(math.hypot(input[0,0]-input[3,0], input[0,1]-input[3,1]))
print("width:",width, "height:",height)
# set upper left coordinates for output rectangle
x = input[0,0]
y = input[0,1]
# specify output coordinates for corners of red quadrilateral in order TL, TR, BR, BL as x,y
output = np.float32([[x,y], [x+width-1,y], [x+width-1,y+height-1], [x,y+height-1]])
# compute perspective matrix
matrix = cv2.getPerspectiveTransform(input,output)
print(matrix)
# do perspective transformation setting area outside input to black
# Note that output size is the same as the input image size
imgOutput = cv2.warpPerspective(img, matrix, (ww,hh), flags=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT, borderValue=(0,0,0))
# save the warped output
cv2.imwrite("red_quadrilateral_warped.jpg", imgOutput)
# show the result
cv2.imshow("result", imgOutput)
cv2.waitKey(0)
cv2.destroyAllWindows()
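As a quick hedged check that the same matrix applies to any point of the larger image A, not just the red region, you can push arbitrary pixel coordinates through it with cv2.perspectiveTransform (a sketch reusing ww, hh and matrix from the code above; the choice of the four image corners is just for illustration):
# map the four corners of the full image through the same homography
corners = np.float32([[0, 0], [ww - 1, 0], [ww - 1, hh - 1], [0, hh - 1]]).reshape(-1, 1, 2)
warped_corners = cv2.perspectiveTransform(corners, matrix)
print(warped_corners.reshape(-1, 2))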

openCV - white color filter [duplicate]

Below is my Python code for tracking white colored objects. It works, but only for a few seconds; then the whole screen turns black, and sometimes it doesn't work at all. I experimented with blue and it works, but white and green are giving me problems:
import cv2
import numpy as np

cap = cv2.VideoCapture(0)

while(1):
    _, frame = cap.read()
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    # define range of white color in HSV
    # change it according to your need !
    sensitivity = 15
    lower_white = np.array([0,0,255-sensitivity])
    upper_white = np.array([255,sensitivity,255])

    # Threshold the HSV image to get only white colors
    mask = cv2.inRange(hsv, lower_white, upper_white)

    # Bitwise-AND mask and original image
    res = cv2.bitwise_and(frame,frame, mask= mask)

    cv2.imshow('frame',frame)
    cv2.imshow('mask',mask)
    cv2.imshow('res',res)

    k = cv2.waitKey(5) & 0xFF
    if k == 27:
        break

cv2.destroyAllWindows()
Well, the first thing you should know is what color space you are using.
Just a small tutorial of color spaces in OpenCV for Mat of type CV_8UC3. (Images from Wikipedia)
HSV
In the HSV (Hue, Saturation, Value) color space, H gives the dominant color (hue), S the saturation of the color, and V its lightness. In OpenCV the ranges are different: S and V are in [0, 255], while H is in [0, 180]. Typically H is in the range [0, 360] (the full circle), but to fit in a byte (256 distinct values) its value is halved.
In HSV space it is easier to separate a single color, since you can simply set the proper range for H and just take care that S is not too small (or it will be almost white) and V is not too small (or it will be dark).
So, for example, if you need almost-blue colors, you need H to be around the value 120 (say in [110, 130]) and S, V not too small (say in [100, 255]).
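A minimal sketch of such a blue mask, assuming hsv is the converted frame from the loop above:
lower_blue = np.array([110, 100, 100])
upper_blue = np.array([130, 255, 255])
blue_mask = cv2.inRange(hsv, lower_blue, upper_blue)  # white where the pixel is blue enough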
White is not a hue (the rainbow doesn't have white in it), but a combination of colors.
In HSV, you need to take the whole range of H (H in [0, 180]), very small S values (say S in [0, 25]) and very high V values (say V in [230, 255]). This basically corresponds to the upper part of the central axis of the cone.
So to make it track white objects in HSV space, you need:
lower_white = np.array([0, 0, 230])
upper_white = np.array([180, 25, 255])
Or, since you defined a sensitivity value, like:
sensitivity = 15
lower_white = np.array([0, 0, 255-sensitivity])
upper_white = np.array([180, sensitivity, 255])
For other colors:
green = 60
blue = 120
yellow = 30
...
sensitivity = 15
# replace color with your actual hue value
lower_color = np.array([color - sensitivity, 100, 100])
upper_color = np.array([color + sensitivity, 255, 255])
Red's H value is 0, so you need to take two ranges and "OR" them together:
sensitivity = 15
lower_red_0 = np.array([0, 100, 100])
upper_red_0 = np.array([sensitivity, 255, 255])
lower_red_1 = np.array([180 - sensitivity, 100, 100])
upper_red_1 = np.array([180, 255, 255])
mask_0 = cv2.inRange(hsv, lower_red_0, upper_red_0)
mask_1 = cv2.inRange(hsv, lower_red_1, upper_red_1)
mask = cv2.bitwise_or(mask_0, mask_1)
Now you should be able to track any color!
Instead of having to guess and check the HSV lower/upper color ranges, you can use an HSV color thresholder script to determine the ranges with trackbars. This makes it very easy to define the ranges for whatever color you're trying to segment. Just change the input image in cv2.imread. Here is an example to segment white:
import cv2
import numpy as np

def nothing(x):
    pass

# Load image
image = cv2.imread('1.jpg')

# Create a window
cv2.namedWindow('image')

# Create trackbars for color change
# Hue is from 0-179 for Opencv
cv2.createTrackbar('HMin', 'image', 0, 179, nothing)
cv2.createTrackbar('SMin', 'image', 0, 255, nothing)
cv2.createTrackbar('VMin', 'image', 0, 255, nothing)
cv2.createTrackbar('HMax', 'image', 0, 179, nothing)
cv2.createTrackbar('SMax', 'image', 0, 255, nothing)
cv2.createTrackbar('VMax', 'image', 0, 255, nothing)

# Set default value for Max HSV trackbars
cv2.setTrackbarPos('HMax', 'image', 179)
cv2.setTrackbarPos('SMax', 'image', 255)
cv2.setTrackbarPos('VMax', 'image', 255)

# Initialize HSV min/max values
hMin = sMin = vMin = hMax = sMax = vMax = 0
phMin = psMin = pvMin = phMax = psMax = pvMax = 0

while(True):
    # Get current positions of all trackbars
    hMin = cv2.getTrackbarPos('HMin', 'image')
    sMin = cv2.getTrackbarPos('SMin', 'image')
    vMin = cv2.getTrackbarPos('VMin', 'image')
    hMax = cv2.getTrackbarPos('HMax', 'image')
    sMax = cv2.getTrackbarPos('SMax', 'image')
    vMax = cv2.getTrackbarPos('VMax', 'image')

    # Set minimum and maximum HSV values to display
    lower = np.array([hMin, sMin, vMin])
    upper = np.array([hMax, sMax, vMax])

    # Convert to HSV format and color threshold
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower, upper)
    result = cv2.bitwise_and(image, image, mask=mask)

    # Print if there is a change in HSV value
    if (phMin != hMin) | (psMin != sMin) | (pvMin != vMin) | (phMax != hMax) | (psMax != sMax) | (pvMax != vMax):
        print("(hMin = %d , sMin = %d, vMin = %d), (hMax = %d , sMax = %d, vMax = %d)" % (hMin, sMin, vMin, hMax, sMax, vMax))
        phMin = hMin
        psMin = sMin
        pvMin = vMin
        phMax = hMax
        psMax = sMax
        pvMax = vMax

    # Display result image
    cv2.imshow('image', result)

    if cv2.waitKey(10) & 0xFF == ord('q'):
        break

cv2.destroyAllWindows()

algorithm that finds the "mean positioned" white pixel in a column and repeats the process for each column

I am attempting to create an algorithm that locates the white pixels in a column of a binary image, adds up the y co-ordinates of each white pixel in that column, and divides this sum by the number of white pixels in the column, in order to get the "mean/middle positioned" white pixel of the column. This gives an (x, y) co-ordinate that can be plotted. The process repeats for each column in the image, and each time sy is reset to 0.
The end goal is that, instead of lines that are multiple pixels thick/wide (as shown in the image's numpy array), I have lines that are just one pixel wide while maintaining the original shape. I planned on doing this by selecting the "mean positioned" white pixel in each column. I will then use these pixels to obtain x and y co-ordinates to plot.
Here is what I have
sx = x = img.shape[1]
sy = 0
whitec = cv2.countNonZero(img.shape[1])
arrayOfMeanY = []  # array to place (x,y) co-ordinate in

# Select column to iterate
for x in range(img.shape[1]):
    # iterating through individual items in the column
    for y in range(img.shape[0]):
        # Checking for white pixels
        pixel = img[x,y]
        if pixel == 255:
            # Then we check the y values of the white pixels in the column and add them all up
            sy = sy+y
            whitec += 1
    # Doing the calculation for the mean and putting it into the meanY list
    sy = sy/whitec
    y = sy
    print img[x,y]
    array.append(y)
    cv2.waitKey(0)
    # reset sy to 0 for the next column
    sy = 0
My issue is that I receive this error when I run the code:
File "<ipython-input-6-e4c2225ff632>", line 27, in <module>
whitec = cv2.countNonZero(img.shape[1]) #n= number of white pixels
in the column
TypeError: src is not a numpy array, neither a scalar
How do I rectify this issue, and once it is rectified, will my code do what I described above?
No need for loops here. With numpy you hardly ever need to loop over individual pixels.
Instead, create a function which takes the mean of the locations of the non-zero pixels for each column (I converted to np.intp to index the image; you could just cast with int(), but np.intp is what NumPy uses for indexing arrays, so it's slightly more appropriate).
def avgWhiteLocOverCol(col):
    return np.intp(np.mean(np.where(col)))
Then you can simply apply the function along all columns with np.apply_along_axis().
avgRows = np.apply_along_axis(avgWhiteLocOverCol, 0, img)
For example, let's create an image with white pixels on the middle row and on the diagonal:
import numpy as np
import cv2
img = np.eye(500)*255
img[249,:] = 255
cv2.imshow('',img)
cv2.waitKey(0)
Then we can apply the function over each column, which should give a line with half the slope:
def avgWhiteLocOverCol(col):
    return int(np.mean(np.where(col)))
avgRows = np.apply_along_axis(avgWhiteLocOverCol, 0, img)
avgIndImg = np.zeros_like(img)
avgIndImg[avgRows,range(img.shape[1])] = 255
cv2.imshow('',avgIndImg)
cv2.waitKey(0)
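One caveat, as a hedged note: if a column contains no white pixels at all, np.where() returns an empty array, np.mean() yields NaN, and the int() cast fails. A defensive variant (the -1 sentinel is just an assumption for illustration) could be:
def safeAvgWhiteLocOverCol(col):
    rows = np.where(col)[0]
    return int(np.mean(rows)) if rows.size else -1

avgRows = np.apply_along_axis(safeAvgWhiteLocOverCol, 0, img)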

Getting mean of an image using a mask

I have a series of concentric rectangles and wish to obtain the mean of the outer rectangle excluding the inner rectangle. See the attached diagram; I need to get the mean for the shaded area.
So I am using a mask of the inner rectangle to pass into the cv2.mean method, but I am not sure how to set the mask. I have the following code:
for i in xrange(0, len(wins)-2, 1):
    means_1 = cv2.mean(wins[i])[0]
    msk = cv2.bitwise_and(np.ones_like((wins[i+1]), np.uint8), np.zeros_like((wins[i]), np.uint8))
    means_2 = cv2.mean(wins[i+1], mask=msk)
    means_3 = cv2.mean(wins[i+1])[0]
    print means_1, means_2, means_3
I get this error for means_2 (means_3 works fine):
error:
/Users/jenkins/miniconda/0/2.7/conda-bld/work/opencv-2.4.11/modules/core/src/arithm.cpp:1021:
error: (-209) The operation is neither 'array op array' (where arrays
have the same size and type), nor 'array op scalar', nor 'scalar op
array' in function binary_op
The mask here refers to a binary mask which has 0 as background and 255 as foreground, so you need to create an empty mask with default color = 0 and then paint the region of interest where you want to find the mean with 255. Suppose I have an input image of size [512 x 512]:
Let's assume 2 concentric rectangles as:
outer_rect = [100, 100, 400, 400] # left, top, right, bottom (x1, y1, x2, y2)
inner_rect = [200, 200, 300, 300]
Now create the binary mask using these rectangles as:
mask = np.zeros(image.shape[:2], dtype=np.uint8)
cv2.rectangle(mask, (outer_rect[0], outer_rect[1]), (outer_rect[2], outer_rect[3]), 255, -1)
cv2.rectangle(mask, (inner_rect[0], inner_rect[1]), (inner_rect[2], inner_rect[3]), 0, -1)
Now you may call the cv2.mean() to get the mean of foreground area, labelled with 255 as:
lena_mean = cv2.mean(image, mask)
>>> (109.98813432835821, 96.60768656716418, 173.57567164179105, 0.0)
In Python/OpenCV (or any software), if you have a masked image and the binary mask, then the mean of the non-black pixels in the image (i.e. the ROI) is the mean of the masked image divided by the mean of the mask, scaled by 255 (since the mask's foreground value is 255).
Input:
Mask:
import cv2
import numpy as np
# load image
img = cv2.imread('lena_g.png', cv2.IMREAD_GRAYSCALE)
# load mask
mask = cv2.imread('lena_mask.png', cv2.IMREAD_GRAYSCALE)
# compute means
mean_img = np.mean(img)
mean_mask = np.mean(mask)
# compute 255*mean_img/mean_mask
mean_roi = 255 * mean_img / mean_mask
# print mean of each
print("mean of image:", mean_img)
print("mean of mask:", mean_mask)
print("mean of roi:", mean_roi)
mean of image: 98.50196838378906
mean of mask: 216.090087890625
mean of roi: 116.23856597522328
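As a hedged cross-check (assuming, as above, that lena_g.png is the already-masked image and lena_mask.png is a strictly 0/255 mask), cv2.mean with the mask argument should give the same ROI mean directly:
check = cv2.mean(img, mask=mask)[0]
print("mean of roi via cv2.mean:", check)  # should match mean_roi above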

checking the Colors Opencv Python

I have a code to detect two colors, green and blue. I want to check if green color is detected to print a message, and if blue color is detected to print another message too.
Here is the Code:
import cv2
import numpy as np

cap = cv2.VideoCapture(0)

while(1):
    # Take each frame
    _, frame = cap.read()

    # Convert BGR to HSV
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    # define range of blue color in HSV
    lower_blue = np.array([110,50,50])
    upper_blue = np.array([130,255,255])
    lower_green = np.array([50, 50, 120])
    upper_green = np.array([70, 255, 255])

    green_mask = cv2.inRange(hsv, lower_green, upper_green) # I have the Green threshold image.

    # Threshold the HSV image to get only blue colors
    blue_mask = cv2.inRange(hsv, lower_blue, upper_blue)

    mask = blue_mask + green_mask

    ############ this is the Error ####################
    if mask==green_mask:
        print "DOne"
    ################################################

    # Bitwise-AND mask and original image
    res = cv2.bitwise_and(frame,frame, mask= mask)

    cv2.imshow('frame',frame)
    cv2.imshow('mask',mask)
    cv2.imshow('res',res)

    k = cv2.waitKey(5) & 0xFF
    if k == 27:
        break

cv2.destroyAllWindows()
Running the above code gives me the following error:
if mask==green_mask:
ValueError: The truth value of an array with more
than one element is ambiguous. Use a.any() or a.all()
Any ideas how to fix this?
Comparing with mask == green_mask returns an array of booleans of the masks' dimensions, which cannot satisfy an if condition that requires a single boolean.
You need a function which returns True if the matrices mask and green_mask are entirely the same and False otherwise, and that function should be called inside the if condition.
Edit:
Replace the code with
if np.array_equal(mask, green_mask):
It will fix your issue.
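Note that np.array_equal only removes the error; mask == green_mask is entirely equal only when no blue pixels were found anywhere in the frame. If the actual goal is to print a message whenever each color is present, a minimal sketch (the pixel-count threshold of 0 is just an assumption; raise it to ignore noise) is to count the non-zero pixels of each mask inside the loop:
if cv2.countNonZero(green_mask) > 0:
    print "green detected"
if cv2.countNonZero(blue_mask) > 0:
    print "blue detected"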