Counting the points that intersect a line with opencv python - python-2.7

I am working on vehicle counting with OpenCV and Python. I have already completed these steps:
1. Detect moving vehicles with BackgroundSubtractorMOG2
2. Draw a rectangle around each one, then mark its centroid with a point
3. Draw a line (to indicate where counting happens)
If a centroid crosses/intersects that line, I want to count it as 1, but in my code it sometimes increments and sometimes does not. Here is the line code:
cv2.line(frame,(0,170),(300,170),(200,200,0),2)
and here is the centroid code:
if w > 20 and h > 25:
    cv2.rectangle(frame, (x,y), (x+w,y+h), (180, 1, 0), 1)
    x1 = w/2
    y1 = h/2
    cx = x + x1
    cy = y + y1
    centroid = (cx, cy)
    cv2.circle(frame, (int(cx), int(cy)), 4, (0,255,0), -1)
my counting code:
if cy == 170:
    counter = counter + 1
Can anyone help me, please? Thank you for your advice!

Here is my approach, which works independently of the video frame rate. Assuming that you are able to track a car's centroid at each frame, I would save the last two centroid positions (last_centroid and centroid in my code) and proceed as follows:
compute the parameters (a, b, c) of the intercepting line's equation (from aX + bY + c = 0)
compute the equation's parameters for the segment between last_centroid and centroid
find if the two lines are intersecting
if so, increment your counter
Here is how I implemented it in OpenCV (Python):
import cv2
import numpy as np
import collections

Params = collections.namedtuple('Params', ['a', 'b', 'c'])  # to store a line's equation

def calcParams(point1, point2):  # compute the Params of the line through two points
    if point2[1] - point1[1] == 0:
        a = 0
        b = -1.0
    elif point2[0] - point1[0] == 0:
        a = -1.0
        b = 0
    else:
        a = (point2[1] - point1[1]) / float(point2[0] - point1[0])  # float() avoids integer division on Python 2
        b = -1.0
    c = (-a * point1[0]) - b * point1[1]
    return Params(a, b, c)

def areLinesIntersecting(params1, params2, point1, point2):
    det = params1.a * params2.b - params2.a * params1.b
    if det == 0:
        return False  # lines are parallel
    else:
        x = (params2.b * -params1.c - params1.b * -params2.c) / det
        y = (params1.a * -params2.c - params2.a * -params1.c) / det
        if min(point1[0], point2[0]) <= x <= max(point1[0], point2[0]) and min(point1[1], point2[1]) <= y <= max(point1[1], point2[1]):
            print("intersecting in:", x, y)
            cv2.circle(frame, (int(x), int(y)), 4, (0, 0, 255), -1)  # intersection point
            return True  # the lines intersect inside the segment
        else:
            return False  # the lines intersect, but outside the segment

cv2.namedWindow('frame')
frame = np.zeros((240, 320, 3), np.uint8)
last_centroid = (200, 200)  # centroid of a car at t-1
centroid = (210, 180)       # centroid of a car at t
line_params = calcParams(last_centroid, centroid)
intercept_line_params = calcParams((0, 170), (300, 170))
print("Params:", line_params.a, line_params.b, line_params.c)
while(1):
    cv2.circle(frame, last_centroid, 4, (0, 255, 0), -1)  # last centroid
    cv2.circle(frame, centroid, 4, (0, 255, 0), -1)  # current centroid
    cv2.line(frame, last_centroid, centroid, (0, 0, 255), 1)  # segment between the car centroids at t-1 and t
    cv2.line(frame, (0, 170), (300, 170), (200, 200, 0), 2)  # intercepting line
    print("AreLinesIntersecting: ", areLinesIntersecting(intercept_line_params, line_params, last_centroid, centroid))
    cv2.imshow('frame', frame)
    if cv2.waitKey(15) & 0xFF == ord('q'):
        break
cv2.destroyAllWindows()
And here are some results:
Fig1. Segment is intersecting the line (intercepting line in blue - segment line between last_centroid and centroid in red)
Fig2. Segment is NOT intersecting the line
N.B. I found the formulas to calculate the intersection point here.
I hope my approach will help to address your problem.

Assuming that the centroid will land exactly on position 170 (in x or y) is wrong, because video generally runs at 30 fps. That means you get 30 centroid locations per second, so even when the object crosses the line, its centroid may never be exactly at 170!
To counter this, one method is to define a line margin: you now have a margin of x pixels before the actual line (y = 170) and x pixels after it.
So if your object falls anywhere in the margin, you can increment the counter. The next big part is to build a tracking mechanism in which you collect the list of points for each object and check whether any of them fell in the margin; a minimal sketch of that check follows.
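Here is a minimal sketch of that margin check (the margin width, the counted flag, and the per-object point list are illustrative assumptions, not part of the original approach):
LINE_Y = 170   # y-coordinate of the counting line
MARGIN = 5     # assumed margin in pixels on each side of the line

def crossed_line(track_points, counted):
    # track_points: list of (cx, cy) centroids collected for one object
    # counted: whether this object has already been counted
    if counted:
        return False  # avoid counting the same object twice
    return any(LINE_Y - MARGIN <= cy <= LINE_Y + MARGIN for (cx, cy) in track_points)

# usage: if crossed_line(points, counted): increment the counter and mark the object as counted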

Related

How to make complex shapes using a swarm of dots (like a chair, a rocket, and more) using pygame and numpy

I am working on a project on swarm algorithms and I am trying to make complex shapes using swarm consensus. However, the mathematics to achieve that seems quite difficult for me.
I have been able to make shapes like stars, circles, and triangles, but making other complex shapes seems much harder. It would be very helpful to get an idea of how to use numpy arrays to build these complex shapes using swarms.
import math
import pickle
import numpy as np
import pygame

# general function to reset a radian angle to [-pi, pi)
def reset_radian(radian):
    while radian >= math.pi:
        radian = radian - 2*math.pi
    while radian < -math.pi:
        radian = radian + 2*math.pi
    return radian

# general function to calculate the next position nodes along a heading direction
def cal_next_node(node_poses, index_curr, heading_angle, rep_times):
    for _ in range(rep_times):
        index_next = index_curr + 1
        x = node_poses[index_curr][0] + 1.0*math.cos(heading_angle)
        y = node_poses[index_curr][1] + 1.0*math.sin(heading_angle)
        node_poses[index_next] = np.array([x, y])
        index_curr = index_next
    return index_next

##### script to generate the star #####
filename = 'star'
swarm_size = 30
node_poses = np.zeros((swarm_size, 2))
outer_angle = 2*math.pi / 5.0
devia_right = outer_angle
devia_left = 2*outer_angle
# first node is at the bottom left corner
heading_angle = outer_angle / 2.0  # current heading
heading_dir = 0  # current heading direction: 0 for left, 1 for right
seg_count = 0  # current segment count
for i in range(1, swarm_size):
    node_poses[i] = (node_poses[i-1] +
        np.array([math.cos(heading_angle), math.sin(heading_angle)]))
    seg_count = seg_count + 1
    if seg_count == 3:
        seg_count = 0
        if heading_dir == 0:
            heading_angle = reset_radian(heading_angle - devia_right)
            heading_dir = 1
        else:
            heading_angle = reset_radian(heading_angle + devia_left)
            heading_dir = 0
print(node_poses)

with open(filename, 'wb') as f:  # binary mode so pickle also works on Python 3
    pickle.dump(node_poses, f)

pygame.init()
# find the right world and screen sizes
x_max, y_max = np.max(node_poses, axis=0)
x_min, y_min = np.min(node_poses, axis=0)
pixel_per_length = 30
world_size = (x_max - x_min + 2.0, y_max - y_min + 2.0)
screen_size = (int(world_size[0])*pixel_per_length, int(world_size[1])*pixel_per_length)

# convert node poses in the world to display poses on screen
def cal_disp_poses():
    poses_temp = np.zeros((swarm_size, 2))
    # shift the loop to the middle of the world
    middle = np.array([(x_max+x_min)/2.0, (y_max+y_min)/2.0])
    for i in range(swarm_size):
        poses_temp[i] = (node_poses[i] - middle +
            np.array([world_size[0]/2.0, world_size[1]/2.0]))
    # convert to display coordinates
    poses_temp[:, 0] = poses_temp[:, 0] / world_size[0]
    poses_temp[:, 0] = poses_temp[:, 0] * screen_size[0]
    poses_temp[:, 1] = poses_temp[:, 1] / world_size[1]
    poses_temp[:, 1] = 1.0 - poses_temp[:, 1]
    poses_temp[:, 1] = poses_temp[:, 1] * screen_size[1]
    return poses_temp.astype(int)

disp_poses = cal_disp_poses()
# draw the loop shape on the pygame window
color_white = (255, 255, 255)
color_black = (0, 0, 0)
screen = pygame.display.set_mode(screen_size)
screen.fill(color_white)
for i in range(swarm_size):
    pygame.draw.circle(screen, color_black, disp_poses[i], 5, 0)
for i in range(swarm_size-1):
    pygame.draw.line(screen, color_black, disp_poses[i], disp_poses[i+1], 2)
pygame.draw.line(screen, color_black, disp_poses[0], disp_poses[swarm_size-1], 2)
pygame.display.update()
Your drawing method takes huge advantage of the symmetries in the shapes you are drawing. More complex shapes have fewer symmetries, so your method would require a lot of tedious work, as it did to get the stars drawn. Without symmetry, you may be better served by writing each individual line 'command' into a list and following that list. For example, drawing the number 4 starting from the bottom (assuming 0 degrees points --> that way):
angles = [90,225,0]
distances = [20,15,12]
Then, with a program similar to what you have, you can start drawing dots in a line at 90 degrees for 20 dots, then at 225 degrees for 15 dots, etc. By adding to these two lists you can build up a very complicated shape without relying on symmetry; a sketch of this list-driven approach follows.
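As a minimal sketch of that idea (the one-unit step per dot is an assumption; the angles and distances lists are the ones above):
import math
import numpy as np

angles = [90, 225, 0]      # heading of each stroke, in degrees
distances = [20, 15, 12]   # number of dots in each stroke

points = [np.array([0.0, 0.0])]  # start at the origin
for angle, count in zip(angles, distances):
    heading = math.radians(angle)
    step = np.array([math.cos(heading), math.sin(heading)])
    for _ in range(count):
        points.append(points[-1] + step)  # one dot per unit step
node_poses = np.array(points)  # same layout as the question's array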

Finding the shortest path given a distance transform image

I am given a distance transform (below) and I need to write a program that finds the shortest path from point A (140, 200) to point B (725, 1095) while making sure I stay at least ten pixels away from any obstacle.
(image: the distance transform of the map)
This is what I have done so far:
I started at the initial point and evaluated the grayscale intensity of every point around it (the 8 neighboring points, that is).
Then I moved to the point with the highest grayscale intensity of the 8 neighbors.
Then I repeated this process, but I get random turns and not the shortest path.
Please do help me out.
Code of what I have done so far:
def find_max_neigh_location(neigh, img):  # `neigh` renamed from `np`, which shadowed numpy
    maxi = 0
    x0 = 0
    y0 = 0
    for i in range(len(neigh)):
        if img[neigh[i][0]][neigh[i][1]][0] > maxi:
            maxi = img[neigh[i][0]][neigh[i][1]][0]
            x0 = neigh[i][0]
            y0 = neigh[i][1]
    return x0, y0

def check_if_extremes(x, y):  # which image borders (x, y) touches
    if x == 1099 and y == 1174: return 1
    elif y == 1174 and x != 1099: return 2
    elif x == 1099 and y != 1174: return 3
    else: return 0

def find_highest_neighbour(img, x, y, visited_points):
    val = check_if_extremes(x, y)
    if val == 1:
        neigh_points = [(x-1,y), (x-1,y-1), (x,y-1)]
    elif val == 2:
        neigh_points = [(x-1,y), (x-1,y-1), (x,y-1), (x+1,y-1), (x+1,y)]
    elif val == 3:
        neigh_points = [(x-1,y), (x-1,y-1), (x,y-1), (x,y+1), (x-1,y+1)]
    else:
        neigh_points = [(x-1,y), (x-1,y-1), (x,y-1), (x,y+1), (x+1,y), (x+1,y+1), (x+1,y-1), (x-1,y+1)]
    candidates = list(set(neigh_points) - set(visited_points))
    x0, y0 = find_max_neigh_location(candidates, img)
    for pt in neigh_points:
        visited_points.append(pt)
    return x0, y0, visited_points

def check_if_neighbour_is_final_pt(img, x, y):
    l = [(x-1,y), (x+1,y), (x,y-1), (x,y+1), (x-1,y-1), (x+1,y+1), (x-1,y+1), (x+1,y-1)]
    return (725, 1095) in l

x = 140
y = 200
pos = []
count = 0
visited_points = [(x, y)]
keyword = True
while keyword:
    if check_if_neighbour_is_final_pt(img, x, y):
        keyword = False
    else:
        count = count + 1
        x, y, visited_points = find_highest_neighbour(img, x, y, visited_points)
        img[x][y] = [255, 0, 0]
        cv2.imwrite("img/distance_transform_result__" + str(count) + ".png", img)
As you did not comment your code at all, I won't read through it.
I'll stick to what you described as your approach.
The fact that you start at point A and move to the brightest point in A's neighbourhood shows that you don't know what a distance transform does or what it is you see in your distance map. Never start coding before you know what you're dealing with.
A distance transform turns a binary image into an image where each pixel's value is the minimum distance from the corresponding foreground pixel of the input image to the background.
Dark pixels mean close to the background (the obstacles in your problem) and bright pixels are further away.
So moving to the brightest pixel nearby will only lead you away from the obstacles, but never towards your target point.
First restriction:
Never get closer to an obstacle than 10 pixels!
This means every pixel that is closer to an obstacle than that (darker than 10) cannot be part of your path. So apply a global threshold of 10 to your distance map.
Now every white pixel can be used for your path to B.
The rest is an optimization problem. There is plenty of literature on shortest path algorithms online. I'll leave that up to you; a small sketch of the thresholding step plus one standard search is shown below.
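As a minimal sketch of that idea (the filename is assumed; the threshold follows the 10-pixel rule above, and breadth-first search is just one standard shortest-path choice, not something prescribed by this answer):
import cv2
import numpy as np
from collections import deque

dist_map = cv2.imread("distance_transform.png", cv2.IMREAD_GRAYSCALE)  # assumed filename
_, free = cv2.threshold(dist_map, 10, 255, cv2.THRESH_BINARY)  # keep pixels >= 10 from obstacles

def shortest_path(free, start, goal):
    # breadth-first search over the free pixels; returns the path as a list of points
    h, w = free.shape
    prev = {start: None}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        if (x, y) == goal:
            break
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                nxt = (x + dx, y + dy)
                if (0 <= nxt[0] < h and 0 <= nxt[1] < w
                        and free[nxt] > 0 and nxt not in prev):
                    prev[nxt] = (x, y)
                    queue.append(nxt)
    if goal not in prev:
        return []  # no path stays 10 pixels clear of the obstacles
    path, node = [], goal
    while node is not None:
        path.append(node)
        node = prev[node]
    return path[::-1]

path = shortest_path(free, (140, 200), (725, 1095))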

Intersection-over-union between two detections

I was reading through the paper by Ferrari et al., in the "Affinity Measures" section. I understood that Ferrari et al. try to obtain affinity by:
Location affinity - using area of intersection-over-union between two detections
Appearance affinity - using Euclidean distances between Histograms
KLT point affinity measure
However, I have 2 main problems:
I cannot understand what is actually meant by intersection-over-union between 2 detections, nor how to calculate it.
I tried a slightly different appearance affinity measure: I transformed the RGB detection into HSV, concatenated the Hue and Saturation into one vector, and used it to compare with other detections. However, this technique failed, as a detection of a bag had a better similarity score than a detection of the same person's head (with a different orientation).
Any suggestions or solutions to the problems described above? Thank you, your help is very much appreciated.
Try Intersection over Union.
Intersection over Union is an evaluation metric used to measure the accuracy of an object detector on a particular dataset.
More formally, in order to apply Intersection over Union to evaluate an (arbitrary) object detector we need:
The ground-truth bounding boxes (i.e., the hand labeled bounding boxes from the testing set that specify where in the image our object is).
The predicted bounding boxes from our model.
Below I have included a visual example of a ground-truth bounding box versus a predicted bounding box:
The predicted bounding box is drawn in red while the ground-truth (i.e., hand labeled) bounding box is drawn in green.
In the figure above we can see that our object detector has detected the presence of a stop sign in an image.
Computing Intersection over Union can therefore be determined via: IoU = area of overlap / area of union.
As long as we have these two sets of bounding boxes we can apply Intersection over Union.
Here is the Python code:
# import the necessary packages
from collections import namedtuple
import numpy as np
import cv2

# define the `Detection` object
Detection = namedtuple("Detection", ["image_path", "gt", "pred"])

def bb_intersection_over_union(boxA, boxB):
    # determine the (x, y)-coordinates of the intersection rectangle
    xA = max(boxA[0], boxB[0])
    yA = max(boxA[1], boxB[1])
    xB = min(boxA[2], boxB[2])
    yB = min(boxA[3], boxB[3])
    # compute the area of the intersection rectangle
    # (note: this assumes the boxes overlap; see the later answer for a
    # version that handles non-intersecting boxes)
    interArea = (xB - xA) * (yB - yA)
    # compute the area of both the prediction and ground-truth rectangles
    boxAArea = (boxA[2] - boxA[0]) * (boxA[3] - boxA[1])
    boxBArea = (boxB[2] - boxB[0]) * (boxB[3] - boxB[1])
    # compute the intersection over union by taking the intersection
    # area and dividing it by the sum of prediction + ground-truth
    # areas minus the intersection area
    iou = interArea / float(boxAArea + boxBArea - interArea)
    # return the intersection over union value
    return iou
The gt and pred are:
gt: the ground-truth bounding box.
pred: the predicted bounding box from our model.
For more information, you can read this post.
1) You have two overlapping bounding boxes. You compute the intersection of the boxes, which is the area of the overlap. You compute the union of the overlapping boxes, which is the sum of the areas of the entire boxes minus the area of the overlap. Then you divide the intersection by the union. There is a function for that in the Computer Vision System Toolbox called bboxOverlapRatio.
2) Generally, you don't want to concatenate the color channels. What you want instead is a 3D histogram, where the dimensions are H, S, and V; a sketch is given below.
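As an OpenCV sketch of that 3D histogram comparison (the bin counts and the correlation metric are illustrative choices, not from the original answer):
import cv2

def hsv_similarity(patch_a, patch_b, bins=(8, 8, 8)):
    # compare two BGR image patches using 3D H-S-V histograms
    hists = []
    for patch in (patch_a, patch_b):
        hsv = cv2.cvtColor(patch, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1, 2], None, list(bins),
                            [0, 180, 0, 256, 0, 256])
        cv2.normalize(hist, hist)  # make the measure patch-size independent
        hists.append(hist)
    return cv2.compareHist(hists[0], hists[1], cv2.HISTCMP_CORREL)  # 1.0 = identical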
The current answer already explains the question clearly, so here I provide a slightly better version of IoU in Python that doesn't break when the two bounding boxes don't intersect.
import numpy as np

def IoU(box1: np.ndarray, box2: np.ndarray):
    """
    Calculate intersection over union cover percent.
    :param box1: box1 with shape (N,4) or (N,2,2) or (2,2) or (4,). First shape is preferred.
    :param box2: box2 with shape (N,4) or (N,2,2) or (2,2) or (4,). First shape is preferred.
    :return: IoU ratio if the boxes intersect, else 0
    """
    # first unify all boxes to shape (N,4)
    if box1.shape[-1] == 2 or len(box1.shape) == 1:
        box1 = box1.reshape(1, 4) if len(box1.shape) <= 2 else box1.reshape(box1.shape[0], 4)
    if box2.shape[-1] == 2 or len(box2.shape) == 1:
        box2 = box2.reshape(1, 4) if len(box2.shape) <= 2 else box2.reshape(box2.shape[0], 4)
    point_num = max(box1.shape[0], box2.shape[0])
    b1p1, b1p2, b2p1, b2p2 = box1[:, :2], box1[:, 2:], box2[:, :2], box2[:, 2:]
    # mask that zeroes out non-intersecting pairs
    base_mat = np.ones(shape=(point_num,))
    base_mat *= np.all(np.greater(b1p2 - b2p1, 0), axis=1)
    base_mat *= np.all(np.greater(b2p2 - b1p1, 0), axis=1)
    # intersection area
    intersect_area = np.prod(np.minimum(b2p2, b1p2) - np.maximum(b1p1, b2p1), axis=1)
    # union area
    union_area = np.prod(b1p2 - b1p1, axis=1) + np.prod(b2p2 - b2p1, axis=1) - intersect_area
    # IoU
    intersect_ratio = intersect_area / union_area
    return base_mat * intersect_ratio
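For reference, a quick hypothetical usage of the function above with two corner-format boxes:
box_a = np.array([0, 0, 10, 10])   # x0, y0, x1, y1
box_b = np.array([5, 5, 15, 15])   # overlaps box_a in a 5x5 region
print(IoU(box_a, box_b))           # [0.14285714] = 25 / (100 + 100 - 25)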
Here's yet another solution I implemented that works for me, borrowed heavily from PyImageSearch.
import numpy as np

def bbox_intersects(bbox_a, bbox_b):
    # note: these corner tests miss the rare "crossing" overlap where
    # neither box contains a corner of the other
    if bbox_b['x0'] >= bbox_a['x0'] and bbox_b['x0'] <= bbox_a['x1'] and \
       bbox_b['y0'] >= bbox_a['y0'] and bbox_b['y0'] <= bbox_a['y1']:
        # top-left of b within a
        return True
    elif bbox_b['x1'] >= bbox_a['x0'] and bbox_b['x1'] <= bbox_a['x1'] and \
         bbox_b['y1'] >= bbox_a['y0'] and bbox_b['y1'] <= bbox_a['y1']:
        # bottom-right of b within a
        return True
    elif bbox_a['x0'] >= bbox_b['x0'] and bbox_a['x0'] <= bbox_b['x1'] and \
         bbox_a['y0'] >= bbox_b['y0'] and bbox_a['y0'] <= bbox_b['y1']:
        # top-left of a within b
        return True
    elif bbox_a['x1'] >= bbox_b['x0'] and bbox_a['x1'] <= bbox_b['x1'] and \
         bbox_a['y1'] >= bbox_b['y0'] and bbox_a['y1'] <= bbox_b['y1']:
        # bottom-right of a within b
        return True
    return False

def bbox_area(x0, y0, x1, y1):
    return (x1-x0) * (y1-y0)

def get_bbox_iou(bbox_a, bbox_b):
    if bbox_intersects(bbox_a, bbox_b):
        x_left = max(bbox_a['x0'], bbox_b['x0'])
        x_right = min(bbox_a['x1'], bbox_b['x1'])
        y_top = max(bbox_a['y0'], bbox_b['y0'])
        y_bottom = min(bbox_a['y1'], bbox_b['y1'])
        inter_area = bbox_area(x0=x_left, x1=x_right, y0=y_top, y1=y_bottom)
        bbox_a_area = bbox_area(**bbox_a)
        bbox_b_area = bbox_area(**bbox_b)
        return inter_area / float(bbox_a_area + bbox_b_area - inter_area)
    else:
        return 0

calculating perpendicular and angular distance between line segments in 3d

I am working on implementing a clustering algorithm in C++. Specifically, this algorithm: http://www.cs.uiuc.edu/~hanj/pdf/sigmod07_jglee.pdf
At one point in the algorithm (sec. 3.2, pp. 4-5), I have to calculate the perpendicular and angular distances (d┴ and dθ) between two line segments: p1 to p2, and p1 to p3.
It has been a while since I took a math class, so I am kind of shaky on what these actually are conceptually and how to calculate them. Can anyone help?
To get the perpendicular distance of a point Q to a line defined by two points P_1 and P_2, calculate this:
d = MAG( CROSS(Q - P_1, P_2 - P_1) ) / MAG(P_2 - P_1)
where DOT is the dot product, CROSS is the vector cross product, and MAG is the magnitude (sqrt(X*X + Y*Y + ...)).
Using Fig. 5, you calculate d_1, the distance from sj to the line (si->ei), and d_2, the distance from ej to the same line.
I would establish a coordinate system based on three points: two (P_1, P_2) for a line, and the third, Q, for either the start or the end of the other line segment. The three axes of the coordinate system can be defined as such:
e = UNIT(P_2 - P_1)           // axis along the line from P_1 to P_2
k = UNIT( CROSS(e, Q - P_1) ) // axis normal to the plane defined by P_1, P_2, Q
n = UNIT( CROSS(k, e) )       // axis normal to the line, towards Q
where UNIT() is a function returning a unit vector (with magnitude = 1).
Then you can establish all your projected lengths with simple dot products. So, considering the line si->ei and the point sj in Fig. 5, the lengths are:
l_par1  = DOT(e, sj - si)   // parallel (projected) length
l_perp1 = DOT(n, sj - si)   // perpendicular length
ps = si + e * l_par1        // projected point
And for the end of the second segment, ej, a new set of coordinate axes (e, k, n) needs to be computed:
l_par2  = DOT(e, ei - ej)
l_perp2 = DOT(n, ej - ei)
pe = ei - e * l_par2        // projected point
Eventually the angular distance is
d_theta = ATAN( (l_perp2 - l_perp1) / MAG(pe - ps) )
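A small numpy sketch of these formulas (the segment endpoints are hypothetical, and for simplicity one coordinate frame built from sj is reused for ej, which matches the planar case):
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def perp_and_angle(si, ei, sj, ej):
    # perpendicular lengths and angular distance of segment sj->ej
    # relative to the line through si->ei, following the steps above
    e = unit(ei - si)                  # axis along the line
    k = unit(np.cross(e, sj - si))     # normal to the plane of si, ei, sj
    n = unit(np.cross(k, e))           # normal to the line, towards sj
    l_perp1 = np.dot(n, sj - si)
    ps = si + e * np.dot(e, sj - si)   # projection of sj onto the line
    l_perp2 = np.dot(n, ej - si)
    pe = si + e * np.dot(e, ej - si)   # projection of ej onto the line
    d_theta = np.arctan((l_perp2 - l_perp1) / np.linalg.norm(pe - ps))
    return l_perp1, l_perp2, d_theta

# hypothetical 3D segments
si, ei = np.array([0.0, 0.0, 0.0]), np.array([10.0, 0.0, 0.0])
sj, ej = np.array([2.0, 1.0, 0.0]), np.array([8.0, 3.0, 0.0])
print(perp_and_angle(si, ei, sj, ej))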
PS. You might want to post this at Math.SO where you can get better answers.
Look at figure 5 on page 3; it draws out what d┴ and dθ are.
EDIT: The "Lehmer mean" is defined using Lp-space conventions, so in 3 dimensions you would use p = 3. Let's say that the (Euclidean) distance between the two start points is d1, and between the two end points is d2. Then d┴(L1, L2) = (d1^3 + d2^3) / (d1^2 + d2^2).
To find the angle between two vectors, you can use their dot product: cos(θ) = DOT(u, v) / (||u|| * ||v||), where the norm ||x|| is computed as sqrt(DOT(x, x)). A short example follows.
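A quick numpy illustration (the vectors are hypothetical):
import numpy as np

u = np.array([1.0, 0.0, 0.0])
v = np.array([1.0, 1.0, 0.0])
# angle from the normalized dot product; clip guards against rounding drift
cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
print(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))  # 45.0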

Drawing N-width lines?

Given a series of points, how could I calculate the vector for that line 5 pixels away? Ex:
Given:
\
\
\
How could I find the vector for
\ \
\ \
\ \
The ones on the right.
I'm trying to figure out how programs like Flash can make thick outlines.
Thanks
A thick line is a polygon. (Let's forget about antialiasing for now.)
(picture: http://img39.imageshack.us/img39/863/linezi.png)
start = line start = vector(x1, y1)
end = line end = vector(x2, y2)
dir = line direction = end - start = vector(x2-x1, y2-y1)
ndir = normalized direction = dir*1.0/length(dir)
perp = perpendicular to direction = vector(-dir.y, dir.x)
nperp = normalized perpendicular = perp*1.0/length(perp)
perpoffset = nperp*w*0.5
diroffset = ndir*w*0.5
(You can easily remove one normalization and calculate one of the offsets by taking the perpendicular of the other.)
p0, p1, p2, p3 = polygon points:
p0 = start + perpoffset - diroffset
p1 = start - perpoffset - diroffset
p2 = end + perpoffset + diroffset
p3 = end - perpoffset + diroffset
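A direct Python translation of that pseudocode (the endpoints and width in the usage line are hypothetical; when drawing, use the order p0, p2, p3, p1 so the quad's outline doesn't self-intersect):
import math

def thick_line_quad(x1, y1, x2, y2, w):
    # return the four corner points of a w-wide line from (x1,y1) to (x2,y2)
    dx, dy = x2 - x1, y2 - y1
    length = math.hypot(dx, dy)
    ndx, ndy = dx / length, dy / length        # normalized direction
    px, py = -ndy, ndx                         # normalized perpendicular
    ox, oy = px * w * 0.5, py * w * 0.5        # perpendicular offset
    dox, doy = ndx * w * 0.5, ndy * w * 0.5    # direction offset (end caps)
    p0 = (x1 + ox - dox, y1 + oy - doy)
    p1 = (x1 - ox - dox, y1 - oy - doy)
    p2 = (x2 + ox + dox, y2 + oy + doy)
    p3 = (x2 - ox + dox, y2 - oy + doy)
    return p0, p1, p2, p3

print(thick_line_quad(0, 0, 10, 0, 4))  # horizontal line, 4 px wide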
P.S. You're the last person I'm ever going to explain this stuff to; things like these should be understood at an intuitive level.
The way to do it with a straight line is to find the line perpendicular (N) to the original line, take a 5-pixel step in that direction, and then find the perpendicular to the perpendicular at that point:
| |
--+-----+---N
| |
| |
The way to do it with a non-straight line is to approximate it with many straight lines or, if you have the analytic representation of the line, to find some sort of analytic solution in a manner similar to the straight-line case.
Try this untested pseudo-code:
# Calculate the "Rise" and "run" (slope) of your input line, then
# call this function, which returns offsets of x- and y-intercept
# for the parallel line. Obviously the slope of the parallel line
# is already known: rise/run.
# returns (delta_x, delta_y) to be added to intercepts.
adjacent_parallel(rise, run, distance, other_side):
negate = other_side ? -1 : 1
if rise == 0:
# horizontal line; parallel is vertically away
return (0, negate * distance)
elif run == 0:
# vertical line; parallel is horizontally away
return (negate * distance, 0)
else:
# a perpendicular radius is - run / rise slope with length
# run^2 + rize^2 = length ^ 2
nrml = sqrt(run*run + rise*rise)
return (negate * -1 * run / nrml, negate * rise/nrml)
As SigTerm shows in his nice diagram, you will want to get the lines on either side of the intended line: so pass in thickness/2 for distance and call the function twice, once with other_side=true; then draw with the thickness centered on the 'abstract line'.
You will need to have some math background.
Start by understanding the line (linear equations and linear functions), learn what a parallel is, and look up geometric transformations.
After that you will understand SigTerm's answer.