OpenCV: Calculate angle between camera and pixel - c++

I'd like to know how to calculate the angle of a given pixel in a photo relative to the webcam that took it. I'm new to this sort of thing. Essentially, I take a photo, process it, and end up with the pixel coordinates of the point I'm looking for. I then need to turn that pixel into some meaningful quantity---a line/vector that passes through the pixel and the camera. I don't need the magnitude, just the direction.
How does one go about doing this? Is camera calibration necessary? I've been reading a bit about it but am unsure.
Thanks

You don't need to know the distance to the object, only the resolution and angle of view of the camera.
Computing the angle requires only simple linear interpolation. For example, let's assume a camera with a resolution of 1920x1080 that covers a 45 degree angle of view across the diagonal.
In this case, sqrt(1920^2 + 1080^2) gives 2202.91 pixels along the diagonal. That means each pixel represents 45/2202.91 ≈ 0.0204 degrees.
So, compute the distance from the center (in pixels), multiply by 0.0204, and you have its angle from the center (for that camera -- for yours, you'll obviously have to use its resolution and angle of view).
Of course, this is somewhat approximate -- its accuracy will depend on how much distortion the lens has. With a zoom lens (especially wider angle) you can generally count on that being fairly high. With a fixed focal length lens (especially if it doesn't cover an angle wider than 90 degrees or so) it'll usually be pretty low.
If you want to improve accuracy, you can start by taking a picture of a flat rectangle with straight lines just inside the angle of view of the camera, then compute the distortion based on the deviation from perfectly straight in the resulting picture. If you're working with an extremely wide angle lens, this may be nearly essential. With a lens covering a narrower angle of view (especially, as already mentioned, if it's fixed focal length) it's rarely likely to be worthwhile (such lenses often have only a fraction of a percent of distortion).
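For concreteness, here is a minimal C++ sketch of that per-pixel angle calculation. The resolution, diagonal angle of view, and pixel coordinates are placeholders for the 1920x1080 / 45 degree example above; substitute your own camera's values.

    #include <cmath>
    #include <cstdio>

    int main() {
        // Placeholder camera data: 1920x1080 sensor, 45 deg diagonal angle of view.
        const double width = 1920.0, height = 1080.0;     // resolution [pixels]
        const double diagFov = 45.0;                       // diagonal angle of view [deg]

        const double diagPixels = std::sqrt(width * width + height * height);
        const double degPerPixel = diagFov / diagPixels;   // ~0.0204 deg/pixel

        // Angle of a pixel of interest relative to the image center.
        const double px = 1500.0, py = 200.0;              // placeholder pixel
        const double dx = px - width / 2.0, dy = py - height / 2.0;
        const double angle = std::sqrt(dx * dx + dy * dy) * degPerPixel;

        std::printf("%.4f deg/pixel, pixel is %.2f deg from center\n", degPerPixel, angle);
        return 0;
    }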

Recipe:
1 - Calibrate the camera, obtaining the camera matrix K and distortion parameters D. In OpenCV this is done as described in this tutorial.
2 - Remove the nonlinear distortion from the pixel positions of interest. In OpenCV this is done with the undistortPoints routine, without passing arguments R and P (see the sketch after this list).
3 - Back-project the pixels of interest into rays (unit vectors with the tail at the camera center) in camera 3D coordinates, by multiplying their pixel positions in homogeneous coordinates by the inverse of the camera matrix.
4 - The angle you want is the angle between each of those vectors and (0, 0, 1), the vector along the camera's optical (focal) axis.
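A minimal sketch of steps 2-4, assuming the camera matrix K and distortion coefficients D have already been obtained from calibration (the numeric values below are placeholders):

    #include <opencv2/opencv.hpp>
    #include <cmath>
    #include <iostream>

    int main() {
        // Placeholder intrinsics and distortion from a prior calibration.
        cv::Mat K = (cv::Mat_<double>(3, 3) << 800, 0, 320,
                                               0, 800, 240,
                                               0,   0,   1);
        cv::Mat D = (cv::Mat_<double>(1, 5) << -0.1, 0.01, 0, 0, 0);

        // Pixel of interest.
        std::vector<cv::Point2f> pixel{cv::Point2f(500.f, 100.f)};
        std::vector<cv::Point2f> normalized;

        // Step 2: without R and P, undistortPoints returns normalized image
        // coordinates, i.e. the pixel already multiplied by inv(K) after
        // removing the nonlinear distortion.
        cv::undistortPoints(pixel, normalized, K, D);

        // Step 3: ray through the pixel in camera coordinates, as a unit vector.
        cv::Vec3d ray(normalized[0].x, normalized[0].y, 1.0);
        ray *= 1.0 / cv::norm(ray);

        // Step 4: angle between the ray and the optical axis (0, 0, 1).
        double angle = std::acos(ray[2]) * 180.0 / CV_PI;
        std::cout << "Angle from optical axis: " << angle << " deg" << std::endl;
        return 0;
    }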

Related

Compute Homography Matrix based on intrinsic and extrinsic camera parameters

I want to perform 360° panorama stitching for six fisheye cameras.
In order to find the relation among cameras I need to compute the Homography Matrix. The latter is usually computed by finding features in the images and matching them.
However, for my camera setup I already know:
The intrinsic camera matrix K, which I computed through camera calibration.
Extrinsic camera parameters R and t. The camera orientation is fixed and does not change at any point. The cameras are located on a circle of known diameter d, with each camera positioned at a 60° offset from its neighbors along the circle.
Therefore, I think I could manually compute the Homography Matrix, which I am assuming would result in a more accurate approach than performing feature matching.
In the literature I found the following formula to compute the homography Matrix which relates image 2 to image 1:
H_2_1 = (K_2) * (R_2)^-1 * R_1 * K_1
This formula only takes into account a rotation angle among the cameras but not the translation vector that exists in my case.
How could I plug the translation t of each camera in the computation of H?
I have already tried to compute H without considering the translation, but as d > 1 meter, the images are not accurately aligned in the panorama.
EDIT:
Based on Francesco's answer below, I got the following questions:
After calibrating the fisheye lenses, I got a matrix K with focal length f=620 for an image of size 1024 x 768. Is that considered to be a big or small focal length?
My cameras are located on a circle with a diameter of 1 meter. The explanation below makes it clear to me that, due to this "big" translation between the cameras, I get noticeable ghosting effects for objects that are relatively close to them. Therefore, if the homography model cannot fully represent the position of the cameras, is it possible to use another model, like the fundamental/essential matrix, for image stitching?
You cannot "plug" the translation in: its presence along with a nontrivial rotation mathematically implies that the relationship between images is not a homography.
However, if the imaged scene is, and appears, "far enough" from the camera, i.e. if the translations between cameras are small compared to the distances of the scene objects from the cameras, and the cameras' focal lengths are small enough, then you may use the homography induced by a pure rotation as an approximation.
Your equation is wrong. The correct formula is obtained as follows:
Take a pixel in camera 1: p_1 = (x, y, 1) in homogeneous coordinates
Back project it into a ray in 3D space: P_1 = inv(K_1) * p_1
Decompose the ray in the coordinates of camera 2: P_2 = R_2_1 * P_1
Project the ray into a pixel in camera 2: p_2 = K_2 * P_2
Put the equations together: p_2 = [K_2 * R_2_1 * inv(K_1)] * p_1
The product H = K_2 * R_2_1 * inv(K_1) is the homography induced by the pure rotation R_2_1. The rotation transforms points into frame 2 from frame 1. It is represented by a 3x3 matrix whose columns are the components of the x, y, z axes of frame 1 decomposed in frame 2. If your setup gives you the rotations of all the cameras with respect to a common frame 0, i.e. as R_i_0, then R_2_1 = R_2_0 * transpose(R_1_0).
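A small C++/OpenCV sketch of building that rotation-induced homography; the intrinsics and the 60° relative rotation below are placeholders standing in for your calibrated values:

    #include <opencv2/opencv.hpp>
    #include <iostream>

    int main() {
        // Placeholder intrinsics (e.g. f = 620 for a 1024x768 image).
        cv::Mat K1 = (cv::Mat_<double>(3, 3) << 620, 0, 512, 0, 620, 384, 0, 0, 1);
        cv::Mat K2 = K1.clone();

        // Rotations of each camera with respect to a common frame 0.
        cv::Mat R_1_0, R_2_0;
        cv::Rodrigues(cv::Vec3d(0, 0, 0), R_1_0);                     // camera 1: identity
        cv::Rodrigues(cv::Vec3d(0, 60.0 * CV_PI / 180.0, 0), R_2_0);  // camera 2: 60 deg about Y

        // Relative rotation taking frame-1 coordinates into frame 2.
        cv::Mat R_2_1 = R_2_0 * R_1_0.t();

        // Homography induced by the pure rotation: H = K_2 * R_2_1 * inv(K_1).
        cv::Mat H = K2 * R_2_1 * K1.inv();
        H = H / H.at<double>(2, 2);   // normalize for readability

        // H can then be passed to cv::warpPerspective to map image 1 into image 2.
        std::cout << "H =\n" << H << std::endl;
        return 0;
    }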
Generally speaking, you should use the above homography as an initial estimation, to be refined by matching points and optimizing. This is because (a) the homography model itself is only an approximation (since it ignores the translation), and (b) the rotations given by the mechanical setup (even a calibrated one) are affected by errors. Using matched pixels to optimize the transformation will minimize the errors where it matters, on the image, rather than in an abstract rotation space.

Unstable values in ArUco pose estimation

I'm trying to find the orientation of the camera using Aruco marker. Euler angles extracted from the rotation matrix are unstable beyond a certain point.
As the distance between the camera and the marker increases, the yaw angle of the camera becomes unstable and the marker's "Z" axis flips.
The Euler angles are jittery, differ from frame to frame, and take time to stabilize. How do I obtain reliable values for the yaw angle and for the distance between the camera and the marker?
I am trying to find the pose of moving camera w.r.t a static marker.
I implemented both solvePnP and solvePnPRansac, and both yield unstable results.
The rotation matrix obtained after converting rotation vectors from estimatePoseSingleMarker seems alright up to a certain point but loses stability.
How do I go about this?
Thank you
In general, you won't get accurate camera pose estimation from a single marker. The solution is to add more markers. You could use either a marker board, or a more sparse pattern of markers.
As a single marker gets farther from the camera, several factors reduce the accuracy of the marker pose estimate:
- The projected size of the marker becomes smaller and more quantized by the pixel grid. Distance is estimated by inverse perspective division, so it becomes less accurate as distance increases.
- Perspective distortion decreases, approaching a parallel projection. In a parallel projection the marker has two equally viable orientations, which may be returned alternately (see https://en.wikipedia.org/wiki/Necker_cube). The orientation of the marker relative to the camera also matters: in near-perpendicular (close to orthographic) views of the marker, its pitch and yaw are ambiguous compared to oblique views. The reduced perspective distortion at distance makes this effect worse and causes the calculated camera pose to yaw, pitch, and move laterally.
- Given the smaller number of pixels covering the marker, small-scale effects such as sensor noise and quantization become more significant, reducing stability from frame to frame and causing jitter.
As you have discovered, pose estimation works OK in close-up, oblique views of a single marker, because the projected points given to solvePnP() are far apart and have large perspective distortion. By adding more markers, you always have ideal projected points for solvePnP().
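As an illustration (not from the original answer), here is a rough sketch of board-based pose estimation with the opencv_contrib aruco module. The API shown is the pre-OpenCV-4.7 one, and the dictionary, board dimensions, and intrinsics K/D are all placeholder assumptions:

    #include <opencv2/opencv.hpp>
    #include <opencv2/aruco.hpp>

    int main() {
        // Placeholder calibration results; use your own K and D.
        cv::Mat K = (cv::Mat_<double>(3, 3) << 800, 0, 320, 0, 800, 240, 0, 0, 1);
        cv::Mat D = cv::Mat::zeros(1, 5, CV_64F);

        cv::Ptr<cv::aruco::Dictionary> dict =
            cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);
        // A 5x7 grid of 4 cm markers with 1 cm gaps: many well-spread points for solvePnP.
        cv::Ptr<cv::aruco::GridBoard> board =
            cv::aruco::GridBoard::create(5, 7, 0.04f, 0.01f, dict);

        cv::VideoCapture cap(0);
        cv::Mat frame;
        cv::Vec3d rvec, tvec;
        while (cap.read(frame)) {
            std::vector<int> ids;
            std::vector<std::vector<cv::Point2f>> corners;
            cv::aruco::detectMarkers(frame, dict, corners, ids);
            if (!ids.empty()) {
                // Uses all detected markers at once, which stabilizes the pose.
                int used = cv::aruco::estimatePoseBoard(corners, ids, board, K, D, rvec, tvec);
                if (used > 0)
                    cv::drawFrameAxes(frame, K, D, rvec, tvec, 0.1f);
            }
            cv::imshow("pose", frame);
            if (cv::waitKey(1) == 27) break;
        }
        return 0;
    }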

Invalid camera calibration for a head-mounted eye-tracking system

I'm working on an Eye Tracking system with two cameras mounted on some kind of glasses. There are optical lenses so that the screen is perceived at around 420 mm from the eye.
From a few dozen pupil samples, we compute two eye models (one for each camera), located in their respective camera coordinates system. This is based on the works here, but modified so that an estimation of the eye center is found using some kind of brute-force approach to minimize the ellipse projection error on the model given its center position in camera space.
Theoretically, the camera parameters should be symmetrical about the lenses on the Y axis. So each camera should be at roughly (±17.5, 0, 3.3) mm in the lens coordinate system, with a rotation of around 42.5 degrees about the Y axis.
However, with these values, there is an offset in the result. See below:
The red point is the gaze center estimated by the left eye tracker, the white one is the right eye tracker, in screen coordinates
The screen limits are represented by the white lines.
The green line is the gaze vector, in camera coordinates (projected in 2D for visualization)
The two camera centers found, projected in 2D, are in the middle of the eye (the blue circle).
The pupil samples and current pupils are represented by the ellipses with matching colors.
The offset on X isn't constant, which means the rotation about Y is not exact, and the camera positions aren't precise either. In order to fix this, we used: this to calibrate and then this to get the rotation parameters from the rotation matrix.
We added a camera in the middle of the lenses (close to the theoretical (0, 0, 0) point?) to get the extrinsic and intrinsic parameters of the cameras relative to our lens center. However, with about 50 checkerboard captures from different positions, the results given by OpenCV don't seem correct.
For example, for one camera it gives a position of about (-14, 0, 10) in lens coordinates for the translation, and something like (-2.38, 49, -2.83) degrees for the rotation angles.
The previous screenshots were taken with these parameters. The theoretical ones are a bit further apart, but are more likely to reach the screen borders, unlike the OpenCV values.
This is probably because the test camera is in front of the optics, not behind them, where our real (0, 0, 0) would be located (we just add the distance at which the screen is perceived, 420 mm, on the Z axis afterwards).
However, we have no way to put the camera in (0, 0, 0).
As the system is compact (everything is captured within a few cm^2), each degree or millimeter can change the result drastically, so without precise values for the cameras we're a bit stuck.
Our objective here is to find an accurate way to get the extrinsic and intrinsic parameters of each camera, so that we can compute a precise position of the center of the eye of the person wearing the glasses, without any calibration procedure other than looking around (so no fixation points).
Right now the system is precise enough to give a global indication of where someone is looking on the screen, but there is a divergence between the right and left cameras, so it's not precise enough. Any advice or hint that could help us is welcome :)

Film coordinate to world coordinate

I am working on building 3D point cloud from features matching using OpenCV3.1 and OpenGL.
I have implemented 1) camera calibration (hence I have the intrinsic matrix of the camera) and 2) feature extraction (hence I have 2D points in pixel coordinates).
I have been going through a few websites, but they generally all describe the flow for projecting 3D object points to pixel points, whereas I am doing the complete backward projection. Here is the ppt that explains it well.
I have computed film coordinates (u, v) from pixel coordinates (x, y) with the help of the intrinsic matrix. Can anyone shed light on how I can recover the "Z" of the camera coordinates (X, Y, Z) from the film coordinates (u, v)?
Please guide me on how I can use OpenCV functions such as solvePnP, recoverPose, findFundamentalMat, and findEssentialMat for this goal.
With a single camera and an object rotating on a fixed turntable, I would implement something like this:
Each camera has a resolution xs,ys and a field of view FOV defined by two angles FOVx,FOVy, so either check your camera's data sheet or measure them. From that and the perpendicular distance (z) you can convert any pixel position (x,y) to a 3D coordinate relative to the camera (x',y',z'). So first convert the pixel position to angles:
ax = (x - (xs/2)) * FOVx / xs
ay = (y - (ys/2)) * FOVy / ys
and then compute cartesian position in 3D:
x' = distance * tan(ax)
y' = distance * tan(ay)
z' = distance
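A small sketch of that pixel-to-3D conversion, assuming the distance is known (the resolution, FOV, distance, and pixel below are placeholder values):

    #include <cmath>
    #include <cstdio>

    int main() {
        const double PI = 3.14159265358979;
        // Placeholder calibration data.
        const double xs = 640.0, ys = 480.0;              // resolution [pixels]
        const double FOVx = 60.0 * PI / 180.0;            // horizontal FOV [rad]
        const double FOVy = FOVx * ys / xs;               // assumes square pixels
        const double distance = 1000.0;                   // perpendicular distance [mm]

        const double x = 500.0, y = 100.0;                // pixel of interest

        // Angles measured from the image center.
        const double ax = (x - xs / 2.0) * FOVx / xs;
        const double ay = (y - ys / 2.0) * FOVy / ys;

        // Cartesian position relative to the camera.
        const double X = distance * std::tan(ax);
        const double Y = distance * std::tan(ay);
        const double Z = distance;

        std::printf("camera-relative position: (%.1f, %.1f, %.1f) mm\n", X, Y, Z);
        return 0;
    }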
That is nice, but for an ordinary image we do not know the distance. Luckily, in such a setup, if we turn the object, any convex edge will produce a maximal ax angle at the sides as it crosses the plane perpendicular to the camera. So check a few frames, and if a maximal ax is detected you can assume it is an edge (or convex bump) of the object positioned at that distance.
If you also know the rotation angle ang of your platform (relative to your camera), then you can compute the un-rotated position using the rotation formula around the Y axis (the Ay matrix in the link) and the known platform center position relative to the camera (just a subtraction before the un-rotation)... As I mentioned, all of this is just simple geometry.
In a nutshell:
obtain calibration data
FOVx,FOVy,xs,ys,distance. Some camera datasheets list only FOVx, but if the pixels are square you can approximate FOVy from the resolution as
FOVx/FOVy = xs/ys
Beware: with multi-resolution camera modes the FOV can be different for each resolution!
extract the silhouette of your object in the video for each frame
you can subtract the background image to ease up the detection
obtain platform angle for each frame
so either use IRC data or place known markers on the rotation disc and detect/interpolate...
detect ax maximum
just inspect the x coordinate of the silhouette (for each y line of the image separately), and if a peak is detected, add its 3D position to your model. Let's assume a rotating rectangular box; some of its frames could look like this:
So inspect one horizontal line across all frames and find the maximal ax. To improve accuracy you can run a closed-loop regulation, turning the platform until the peak is found "exactly". Do this for each horizontal line separately.
By the way, if you detect no change in ax over a few frames, that means a circular shape with constant radius... so you can treat each such frame as an ax maximum.
Easy as pie, resulting in a 3D point cloud, which you can sort by platform angle to ease conversion to a mesh... That angle can also be used as a texture coordinate...
But do not forget that you will lose concave details that are hidden inside the silhouette!
If this approach is not enough, you can use the same setup for stereoscopic 3D reconstruction, because each rotation behaves as a new (known) camera position.
You can't, if all you have is 2D images from that single camera location.
In theory you could use heuristics to infer a Z ordering, but mathematically your problem is underdetermined: there are literally infinitely many different Z coordinates that would satisfy your constraints. You have to supply some extra information. For example, you could move your camera around over several frames (Google "structure from motion"), use multiple cameras, or use a camera that has a depth sensor and gives you complete XYZ tuples (Kinect or similar).
Update due to comment:
For every pixel in a 2D image there is an infinite number of points that project to it. The technical term for that set is a ray. If you have two 2D images of roughly the same volume of space, each image's set of rays (one per pixel) intersects with the set of rays corresponding to the other image. Which is to say that if you determine the ray for a pixel in image #1, it maps to a line of pixels covered by that ray in image #2. Selecting a particular pixel along that line in image #2 gives you the XYZ tuple for that point.
Since you're rotating the object by a certain angle θ about a certain axis a between images, you actually have a lot of images to work with. All you have to do is derive the camera location by an additional transformation (inverse(translate(-a)·rotate(θ)·translate(a))).
Then do the following: select an image to start with. For the particular pixel you're interested in, determine the ray it corresponds to. For that, simply assume two Z values for the pixel; 0 and 1 work just fine. Transform them back into the space of your object, then project them into the view space of the next camera you chose to use; the result will be two points in the image plane (possibly outside the limits of the actual image, but that's not a problem). These two points define a line within that second image. Find the pixel along that line that matches the pixel in the first image you selected, and project that back into space as done with the first image. Due to numerical round-off errors you're not going to get a perfect intersection of the rays in 3D space, so find the point where the rays are closest to each other (this amounts to minimizing a quadratic distance, which is trivial), as sketched below.
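A minimal sketch of that last step: given two back-projected rays that just miss each other, take the midpoint of their shortest connecting segment (minimizing the squared distance reduces to two linear equations). The origins and directions below are placeholders:

    #include <opencv2/core.hpp>
    #include <iostream>

    // Midpoint of the shortest segment between rays p = o1 + s*d1 and q = o2 + t*d2.
    cv::Vec3d closestPointBetweenRays(const cv::Vec3d& o1, cv::Vec3d d1,
                                      const cv::Vec3d& o2, cv::Vec3d d2) {
        d1 *= 1.0 / cv::norm(d1);
        d2 *= 1.0 / cv::norm(d2);
        const cv::Vec3d r = o1 - o2;
        const double a = d1.dot(d1), b = d1.dot(d2), c = d2.dot(d2);
        const double d = d1.dot(r),  e = d2.dot(r);
        const double denom = a * c - b * b;           // ~0 when the rays are parallel
        const double s = (b * e - c * d) / denom;
        const double t = (a * e - b * d) / denom;
        return ((o1 + d1 * s) + (o2 + d2 * t)) * 0.5;
    }

    int main() {
        cv::Vec3d o1(0, 0, 0), d1(0, 0, 1);           // camera 1 center and pixel ray
        cv::Vec3d o2(1, 0, 0), d2(-0.05, 0, 1);       // camera 2 center and pixel ray
        std::cout << closestPointBetweenRays(o1, d1, o2, d2) << std::endl;
        return 0;
    }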
To select which pixel you want to match between images you can use a feature/motion tracking algorithm, as used in video compression and similar applications. The basic idea is that for every pixel, its surroundings are correlated with the same region in the previous image; the location where the correlation peaks is where that pixel most likely moved from.
With this pixel tracking in place you can then derive the structure of the object. This is essentially what structure from motion does.

Angle of object relative to the camera and video? Video and camera output different

I am wondering if I have got my thinking right about this. I have calibration done for my camera, and now I want to get the angle of detected objects relative to the camera, only on the x-axis (the horizontal).
I am thinking I can put some grid lines across the image at known pixel values, match those with known real-world distances, and calculate the angle per pixel that way, knowing the dimensions of the triangles: starting at the centre of the image at 0 degrees, moving towards the right +X degrees and towards the left -X degrees.
Assuming this is a correct way to go about it: for some reason the video I'm working with was recorded at 704x576 pixels, but when I plug the camera into my computer to work with, it outputs 640x480 pixels, even though it's the same camera that made the recordings. I assume this will affect my results somewhat, both the calibration and definitely the angle-per-pixel measurement that I want. I am working with OpenCV in C++; is there a way/function to set the capture size to 704x576 when I open the camera, and if I then do my measurements at this size, can I get a reasonably accurate angle-per-pixel measurement? Or do I need to do something else?
I'm still figuring my way around camera geometry and openCV, and any help would be much appreciated, thanks.
It is probably easier than you think. Say your camera has a 60.0 deg horizontal field of view (FOV). Then each pixel along the X axis is just 60.0/640 deg. You can easily calculate the FOV by considering a right triangle whose sides are the focal length and half of the screen width:
FOV = 2*atan2(640/2, focal), where the focal length is in pixels
for example, for focal=500 pixels
FOV = 2*atan2(640/2, 500) = 1.14 rad = 65.2 deg
One thing to keep in mind is that the focal length (in pixels) scales proportionally with the image resolution. For example, if you calculated focal=500 based on a 640x320 image, then for a 320x160 image focal=250.
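A short sketch of that calculation in C++; the focal length, width, and pixel position are placeholder values. The atan form is slightly more accurate than the linear FOV/width approximation for pixels far from the center:

    #include <cmath>
    #include <cstdio>

    int main() {
        const double PI = 3.14159265358979;
        const double focal = 500.0;    // focal length [pixels], K(0,0) from calibration
        const double width = 640.0;    // image width [pixels]
        const double cx = width / 2.0; // assumes the principal point is at the center

        const double FOV = 2.0 * std::atan2(width / 2.0, focal);   // horizontal FOV [rad]
        std::printf("FOV = %.2f rad = %.1f deg\n", FOV, FOV * 180.0 / PI);

        const double x = 600.0;                                    // pixel of interest
        const double angleLinear = (x - cx) * FOV / width;         // linear approximation
        const double angleExact  = std::atan2(x - cx, focal);      // pinhole model
        std::printf("linear: %.2f deg, exact: %.2f deg\n",
                    angleLinear * 180.0 / PI, angleExact * 180.0 / PI);
        return 0;
    }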