So, I'm a high school student and the lead programmer on my local robotics team, and this year I decided to try out OpenCV and do some vision processing on our robot.
From my vision code, I need to know a few things about some objects on our competition field. These things are: distance (ft), horizontal angle from camera, and horizontal distance from camera (ft). Essentially, one large right triangle.
I already have the camera successfully detecting these objects and putting a boundingRect around them. With a gyroscope on our robot, we should be able to get our robot to a ~90 degree angle to the object once it is detected (as it's a set angle on the field). Thus, I can calculate distance based on an empirically derived function of the area of the object's boundingRect.
However, I'm not exactly sure how to approach the horizontal angle of the object from the camera. Once I have that, I can do some simple trig and get the horizontal distance.
-
So here's what we have/know: the distance to the object in ft, the object is at ~90 degrees to the camera, the camera has a horizontal FOV of 67 degrees with a resolution of 800x600, the real-world dimensions of the object, and a boundingRect around the object.
How would I, using all of this information, calculate the horizontal angle from the camera to the object?
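Here is a sketch of what I think the pinhole math would look like (x_center is a made-up bounding-box centre column; please correct me if this is wrong): derive the focal length in pixels from the horizontal FOV, then take the arctangent of the pixel offset from the image centre.

#include <cmath>
#include <cstdio>

int main() {
    const double PI = 3.14159265358979323846;
    const double width = 800.0;  // horizontal resolution
    const double hfov = 67.0;    // horizontal field of view, degrees
    // Focal length in pixels for an ideal pinhole camera.
    const double f = (width / 2.0) / std::tan((hfov / 2.0) * PI / 180.0);
    // Hypothetical bounding-box centre column from the detector.
    const double x_center = 612.0;
    // Horizontal angle from the optical axis, positive to the right.
    const double angle = std::atan((x_center - width / 2.0) / f) * 180.0 / PI;
    std::printf("horizontal angle = %.2f degrees\n", angle);
    return 0;
}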
Let's say a camera is spinning around horizontally with its axis of rotation at the center of the camera lens. Do subjects farther away from the camera have a different rate of change in photo x-coordinate than subjects closer to the camera when the lens is rotating? Obviously this is true when translating the camera (when driving in a car, the mountains in the distance go by slower than the stop sign). But after playing around a bit and doing some at-home experiments, I couldn't find any evidence that suggests there is a difference when rotating...
I don't know the answer for sure, but I will share my thoughts.
Let's pretend we are using a camera with an FOV (field of view) of 90 degrees or so. Let's place the camera at some perpendicular distance from two same-sized objects that are aligned in a straight line. The camera is not yet on that line.
As we translate the camera towards the two objects to bring it into line with them, the farther object will appear in the image before the closer one due to the triangular FOV. The farther object appears first, but its x-coordinate in the resulting image shifts more slowly than the closer object's.
Now we stop the camera when it is in a straight line with the other two objects. The further object is behind the closer object, so it cannot be seen. I think no matter how we rotate the camera, we will not be able to see the further object behind the closer object. I also think changing the FOV will not help us here. This would mean that there is no difference in the rate of change of each object's x-coordinate. If there were, we would be able to see the further object behind the closer object. We would have created an x-ray vision camera!
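A quick numeric check of this intuition (a sketch assuming an ideal pinhole camera): put two points at different depths on the same ray through the optical centre, pan the camera, and compare their projected x-coordinates. Under pure rotation the depth cancels out of the projection, so both points print identical x values at every pan angle.

#include <cmath>
#include <cstdio>

int main() {
    const double f = 1.0;                  // focal length, arbitrary units
    const double theta = 0.3;              // angle of the shared ray, radians
    const double depths[2] = {2.0, 50.0};  // near and far object on that ray
    for (int i = -2; i <= 2; ++i) {        // pan angles from -0.5 to 0.5 rad
        double phi = 0.25 * i;
        for (double Z : depths) {
            double X = Z * std::tan(theta);  // world position on the ray
            // Express the point in the rotated camera frame, then project.
            double xc = X * std::cos(phi) - Z * std::sin(phi);
            double zc = X * std::sin(phi) + Z * std::cos(phi);
            std::printf("phi=%5.2f  Z=%5.1f  x=%f\n", phi, Z, f * xc / zc);
        }
    }
    return 0;
}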
I'm doing camera calibration using the calibration.cpp sample provided in the OpenCV 3.4 release. I'm using a simple 9x6 chessboard, with square length = 3.45 mm.
Command to run the code:
Calib.exe -w=9 -h=6 -s=3.45 -o=camera.yml -oe imgList.xml
imgList.xml
I'm using a batch of 28 images available here
camera.yml (output)
Image outputs from drawChessboardCorners: here
There are 4 images without the chessboard overlay drawn; findChessboardCorners has failed for these.
Results look kind of strange (if I understand them correctly). I'm taking the focal length value for granted, but the principal point seems way off at c = (834, 1513). I was expecting a point closer to the image center, around (1280, 960), since the camera is oriented very close to 90 degrees to the viewed surface.
Also, if I place an object at the principal point and move it along the Z axis, I shouldn't see it move along x and y in the image. Is this correct?
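My reasoning, under the usual pinhole model (so treat this as a sanity check rather than a certainty):

u = fx * X / Z + cx,  v = fy * Y / Z + cy

A point on the optical axis (X = Y = 0) projects to (cx, cy) for every Z, so ignoring lens distortion it should not drift in x or y as it moves in Z.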
I suspect I should add images with greater tilt of the chessboard with respect to the camera (z-angle) to get better results. But the camera has a really narrow depth of field, which prevents the chessboard corners from being detected.
The main issue is that you don't feed the calibration software enough information to get a good estimate of the different parameters.
In all 28 images you changed only the orientation of the chessboard around the z axis, within the same plane. You don't need to take that many photos; for me, around 15 is okay. You need to add more degrees of freedom to your images: change the distance of the chessboard from the camera and tilt the chessboard around its X and Y axes. Recalibrate the camera and you should get the right parameters.
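For what it's worth, a minimal sketch of that workflow with the plain OpenCV API (the file names are hypothetical; the important part is that the captured views vary in distance and tilt):

#include <opencv2/opencv.hpp>
#include <iostream>
#include <string>
#include <vector>

int main() {
    const cv::Size boardSize(9, 6);   // inner corners, as in the question
    const float squareSize = 3.45f;   // mm, as in the question

    // Reference grid of the physical board: Z = 0, spaced by squareSize.
    std::vector<cv::Point3f> grid;
    for (int i = 0; i < boardSize.height; ++i)
        for (int j = 0; j < boardSize.width; ++j)
            grid.emplace_back(j * squareSize, i * squareSize, 0.f);

    std::vector<std::vector<cv::Point3f>> objectPoints;
    std::vector<std::vector<cv::Point2f>> imagePoints;
    cv::Size imageSize;

    for (int k = 1; k <= 15; ++k) {   // hypothetical file names img1.png ...
        cv::Mat img = cv::imread("img" + std::to_string(k) + ".png",
                                 cv::IMREAD_GRAYSCALE);
        if (img.empty()) continue;
        imageSize = img.size();
        std::vector<cv::Point2f> corners;
        if (cv::findChessboardCorners(img, boardSize, corners)) {
            cv::cornerSubPix(img, corners, cv::Size(11, 11), cv::Size(-1, -1),
                cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT,
                                 30, 0.01));
            imagePoints.push_back(corners);
            objectPoints.push_back(grid);
        }
    }

    cv::Mat K, distCoeffs;
    std::vector<cv::Mat> rvecs, tvecs;
    double rms = cv::calibrateCamera(objectPoints, imagePoints, imageSize,
                                     K, distCoeffs, rvecs, tvecs);
    std::cout << "RMS reprojection error: " << rms << "\nK:\n" << K << "\n";
    return 0;
}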
It really depends on the camera and lens you use.
More specifically on things like:
the precision of the sensor chip placement
the mounting of the lens screw thread
the manufacturing of the lens itself
Some cheap webcams with a small chip could even have the principal point outside the image (meaning it could also be a negative number). So in your case c could be either (834, 1513) or (1513, 834), depending on the coordinate order.
If you are using an industrial camera or something similar, c should be within tens of percent of the image centre, e.g. (1280, 960) +-25%.
About the problem with the narrow DOF (in a nutshell): to make it wider you need to make the aperture as small as possible, prolong the exposure, and add some extra light behind the camera to compensate for the aperture.
You could also refocus to get sharp shots from different distances, only your accuracy gets a little lower, as refocusing slightly changes the focal length. But in most cases you do not need that extra accuracy, so this should not be a problem.
I've got a question related to multiple view geometry.
I'm currently dealing with a problem where I have a number of images collected by a drone flying around an object of interest. This object is planar, and I am hoping to eventually stitch the images together.
Leaving aside the classical way of identifying corresponding feature pairs, computing a homography, and warping/blending, I want to see what information related to this task I can infer from prior known data.
Specifically, for each acquired image I know the following two things: I know the correspondence between the central point of my image and a point on the object of interest (on whose plane I would eventually want to warp my image). I also have a normal vector to the plane of each image.
So, knowing the centre point (in object-centric world coordinates) and the normal, I can derive the plane equation of each image.
My question is: knowing the plane equations of the 2 images, is it possible to compute a homography (or part of the transformation matrix, such as the rotation) between the 2?
I get the feeling that this may seem like a very straightforward/obvious answer to someone with deep knowledge of visual geometry but since it's not my strongest point I'd like to double check...
Thanks in advance!
Your "normal" is the direction of the focal axis of the camera.
So, IIUC, you have a 3D point that projects on the image center in both images, which is another way of saying that (absent other information) the motion of the camera consists of the focal axis orbiting about a point on the ground plane, plus an arbitrary rotation about the focal axis, plus an arbitrary translation along the focal axis.
The motion has a non-zero baseline, therefore the transformation between images is generally not a homography. However, the portion of the image occupied by the ground plane does, of course, transform as a homography.
Such a motion is defined by 5 parameters, e.g. the 3 components of the rotation vector for the orbit, plus the angle of rotation about the focal axis, plus the displacement along the focal axis. However, the one point correspondence you have gives you only two equations.
It follows that you don't have enough information to constrain the homography between the images of the ground plane.
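For reference, the standard plane-induced homography (Hartley & Zisserman's formulation) makes the deficit concrete. With camera matrix K, relative pose (R, t) between the two views, and the plane written as n . X = d in the first camera's frame:

H = K (R - t n^T / d) K^-1

Knowing both plane equations constrains n, but R and t still carry the unconstrained motion parameters described above, so the single point correspondence cannot pin H down.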
I'm trying to find the angle between two points in an image. The angle is with reference to the centre line of the camera.
In this image, the center point is along the center of the image (an assumption; I still have to figure out how to actually calculate it), and I want to find the angle between the line connecting point 1 to the camera center and the line connecting the desired point to the camera center.
Now I want to know two things about finding the angle:
- Is it possible to find the angle if the distance is not known exactly (but can be estimated by a human at run time), assuming both points lie in the same plane in the image?
- If the points are not in the same plane, how should I handle the angle calculation?
It can be achieved with an inner product.
If you are working in 3D space, your vectors x and y should be of the form [a, b, f], where f is the distance of the image plane from the camera center and a, b are the point's coordinates in the camera frame.
If it is in 2D space, you have to specify the origin of your frame and find a and b relative to that origin; your x and y vectors are then of the form [a, b].
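A minimal sketch of that computation in the 3D case (the numbers are made up; f is the distance of the image plane from the camera centre, in the same units as a and b):

#include <cmath>
#include <cstdio>

int main() {
    // Hypothetical inputs: two image points (a, b) relative to the
    // principal point, and the focal length f in the same pixel units.
    const double f = 700.0;
    const double x[3] = {120.0, -40.0, f}; // ray through the first point
    const double y[3] = {-95.0,  10.0, f}; // ray through the second point

    double dot = x[0] * y[0] + x[1] * y[1] + x[2] * y[2];
    double nx = std::sqrt(x[0] * x[0] + x[1] * x[1] + x[2] * x[2]);
    double ny = std::sqrt(y[0] * y[0] + y[1] * y[1] + y[2] * y[2]);
    double angle = std::acos(dot / (nx * ny)); // radians
    std::printf("angle = %f rad\n", angle);
    return 0;
}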
It can be found by using this formula:
cos(theta) = (d1 . d2) / (||d1|| ||d2||),  with d1 = K^-1 x1 and d2 = K^-1 x2
K is the camera matrix, x1 and x2 are the image points given in homogeneous form like [u, v, 1], and d1 and d2 are the corresponding ray directions in camera coordinates.
See R. Hartley and A. Zisserman, "Multiple View Geometry in Computer Vision", 2nd edition, p. 209 for more details.
Inverting the camera matrix is quite simple. See
https://www.imatest.com/support/docs/pre-5-2/geometric-calibration-deprecated/projective-camera/ for more details.
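To make that concrete: for the usual zero-skew pinhole K with focal lengths fx, fy and principal point (cx, cy),

K^-1 [u, v, 1]^T = [(u - cx) / fx, (v - cy) / fy, 1]^T

so inverting K just shifts each pixel by the principal point and divides by the focal length; the dot-product snippet above then applies unchanged.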
Right down to business: basically, I am making a small mini game which has characters running on top of a flat clock, with the clock hand rotating around; the characters have to avoid it by jumping.
The part I'm struggling with is coding the collision. The clock hand is just a set model that is rotated by applying matrices, and for whatever reason box collision will not work.
So my theory is: since I know the angle the clock hand is currently rotated by, is there some mathematical way to calculate the angle of the player in relation to the centre point of the circle, so that it can be checked against the clock hand's angle?
Sure.
// Angle of the player relative to the circle's centre, in radians (-pi, pi].
float angle = atan2(y_player - y_center, x_player - x_center);
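If you then want the collision test itself, normalize the difference between the two angles so the wrap-around at +-pi doesn't break the comparison. A sketch (hand_angle and tolerance are assumptions about your game state):

#include <cmath>

// True if the player's bearing from the clock centre is within
// `tolerance` radians of the hand's current angle (measured the same way).
bool hitByHand(float x_player, float y_player,
               float x_center, float y_center,
               float hand_angle, float tolerance) {
    float player_angle = std::atan2(y_player - y_center, x_player - x_center);
    float diff = player_angle - hand_angle;
    // Wrap the difference into (-pi, pi] before comparing magnitudes.
    diff = std::atan2(std::sin(diff), std::cos(diff));
    return std::fabs(diff) < tolerance;
}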