I am attempting to calibrate the extrinsics of four cameras that I have mounted on a set-up. They are pointing 90 degrees apart. I have already calibrated the intrinsic parameters, and I am thinking of using an image of a calibration pattern to find the extrinsics. What I have done so far is place the calibration pattern so that it lies flat on the table, so that its roll and yaw angles are 0 and its pitch is 90 (as it lies parallel with the camera). The cameras have yaw angles of 0, 90, 180 and 270 degrees (as they are 90 degrees apart), and the roll angle of the cameras is 0 (as they do not tilt). So what is left to calculate is the pitch angle of the cameras.
I can't quite wrap my head around how to calculate it, as I am not used to mapping between coordinate systems, so any help is welcome. I have already made the part of the program that calculates the rotation vector (of the calibration pattern in the image) using the cv::solvePnPRansac() function, so I have the rotation vector (which I believe I can turn into a matrix using cv::Rodrigues()).
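For reference, that part looks roughly like this (a minimal sketch only; objectPoints, imagePoints, K and distCoeffs stand for the pattern's 3D corners, their detected image locations and the already-calibrated intrinsics, and the function name is just illustrative):

```cpp
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

// objectPoints: 3D corners of the calibration pattern (pattern/world frame)
// imagePoints:  the corresponding detected 2D corners in the camera image
// K, distCoeffs: the already-calibrated intrinsics
cv::Mat cameraRotationInPatternFrame(const std::vector<cv::Point3f>& objectPoints,
                                     const std::vector<cv::Point2f>& imagePoints,
                                     const cv::Mat& K, const cv::Mat& distCoeffs)
{
    cv::Mat rvec, tvec;
    cv::solvePnPRansac(objectPoints, imagePoints, K, distCoeffs, rvec, tvec);

    cv::Mat R;
    cv::Rodrigues(rvec, R);   // 3x1 rotation vector -> 3x3 matrix (pattern -> camera)

    // Inverting (transposing) the rotation gives the camera's orientation expressed
    // in the pattern's coordinate frame, which is what the extrinsic angles describe.
    return R.t();
}
```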
What would the next step be for me in my calculations?
I have 2 static cameras being used for stereo 3D positioning of objects. I need to determine the location and orientation of the second camera relative to the first as accurately as possible. I am trying to do this by locating n objects in both cameras' images and correlating them between the two cameras, in order to calibrate my system to locate additional objects later.
Is there a preferred way to use a large number (6+) of correlated points to determine the best-fit relative locations/orientations of 2 cameras, assuming that I have already compensated for any distortive effects and know the correct (but somewhat noisy) angles between the optical axes and the objects, and the distance between the cameras?
My solution is to determine a rotation to perform on the second camera's (B's) measurements in order to realign them so they are from the point of view of the first camera (A), as if A had been translated to the location of camera B.
I did this using a compound rotation: first rotating the second camera's measurements about the cross product of the vectors -AB (B pointing at A, from the perspective of A) and BA (B pointing at A, from the perspective of B), such that R1*BA = -AB. This rotation just means the vectors pointing between the cameras are aligned; another rotation must be done to account for the remaining degree of freedom.
That first rotation was done so that the second rotation can be about -AB. R2 is a rotation of theta radians about -AB. I found theta by taking the cross products of my measurements from camera A with vector AB, and comparing them to the cross products of R1*(the measurements from camera B) with -AB. I numerically minimized the RMS of the angles between the cross product pairs, because when the cameras are aligned each cross product pair should point in the same direction, since both vectors in a pair are normal to coplanar planes.
After that I can use https://math.stackexchange.com/questions/61719/finding-the-intersection-point-of-many-lines-in-3d-point-closest-to-all-lines to find accurate 3D locations of intersection points by applying R1*R2 to any future measurements from camera B.
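A rough sketch of how R1 can be built from the two direction vectors (OpenCV used only for Rodrigues; the degenerate case where BA and -AB are already parallel is ignored, and the names are assumptions):

```cpp
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <algorithm>
#include <cmath>

// Build R1 such that R1 * BA = -AB.
// BA: direction from B toward A, measured in camera B's frame.
// AB: direction from A toward B, measured in camera A's frame.
cv::Mat alignmentRotation(const cv::Vec3d& BA, const cv::Vec3d& AB)
{
    cv::Vec3d from = BA * (1.0 / cv::norm(BA));
    cv::Vec3d to   = AB * (-1.0 / cv::norm(AB));      // -AB, normalized

    cv::Vec3d axis  = from.cross(to);                                 // rotation axis
    double cosAngle = std::max(-1.0, std::min(1.0, from.dot(to)));    // clamp for acos
    double angle    = std::acos(cosAngle);
    double scale    = angle / cv::norm(axis);

    // axis-angle vector -> rotation matrix
    cv::Mat rvec = (cv::Mat_<double>(3, 1) << axis[0] * scale, axis[1] * scale, axis[2] * scale);
    cv::Mat R1;
    cv::Rodrigues(rvec, R1);
    return R1;
}
```

R2 can be built the same way: Rodrigues applied to the normalized -AB axis scaled by the theta found in the minimization.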
I've got a question related to multiple view geometry.
I'm currently dealing with a problem where I have a number of images collected by a drone flying around an object of interest. This object is planar, and I am hoping to eventually stitch the images together.
Leaving aside the classical way of identifying corresponding feature pairs, computing a homography and warping/blending, I want to see what information related to this task I can infer from prior known data.
Specifically, for each acquired image I know the following two things: I know the correspondence between the central point of my image and a point on the object of interest (on whose plane I would eventually want to warp my image). I also have a normal vector to the plane of each image.
So, knowing the centre point (in object-centric world coordinates) and the normal, I can derive the plane equation of each image.
My question is: knowing the plane equations of the 2 images, is it possible to compute a homography (or part of the transformation matrix, such as the rotation) between the two?
I get the feeling that this may seem like a very straightforward/obvious answer to someone with deep knowledge of visual geometry but since it's not my strongest point I'd like to double check...
Thanks in advance!
Your "normal" is the direction of the focal axis of the camera.
So, IIUC, you have a 3D point that projects on the image center in both images, which is another way of saying that (absent other information) the motion of the camera consists of the focal axis orbiting about a point on the ground plane, plus an arbitrary rotation about the focal axis, plus an arbitrary translation along the focal axis.
The motion has a non-zero baseline, therefore the transformation between images is generally not a homography. However, the portion of the image occupied by the ground plane does, of course, transform as a homography.
Such a motion is defined by 5 parameters, e.g. the 3 components of the rotation vector for the orbit, plus the angle of rotation about the focal axis, plus the displacement along the focal axis. However, the one point correspondence you have gives you only two equations.
It follows that you don't have enough information to constrain the homography between the images of the ground plane.
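For reference, the standard plane-induced homography makes this concrete (here R and t are the unknown relative pose between the two views, n and d describe the world plane in the first camera's frame, and K holds the intrinsics):

H ~ K (R - t n^T / d) K^(-1)

So even with the plane (n, d) known for each image, the relative pose (R, t) is still needed, and the single centre-point correspondence only supplies two equations toward it.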
This is a continuation of the question from here: How to find angle formed by the blades of a wind turbine with respect to a horizontal imaginary axis?
I've decided to use the following methodology for this:
Getting a frame from a camera and putting it in a loop.
Performing Canny edge detection.
Perform HoughLinesP to detect lines in the image.
Finding Blade Angle:
Perform the Probabilistic Hough Line Transform on the image. Restrict the blade lines to the length of the blades, which is already known.
The returned value will have the start and end points of the lines detected. Since there is no background noise, this gives the start and end points of the blade lines, and the image will contain only the blade lines.
Now, form vectors from the detected blade lines and take their dot product with the horizontal vector (1, 0), or use atan2 to find the angle of each detected line relative to the horizontal.
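A minimal sketch of that angle computation (the edge image and the Hough parameters here are placeholders to tune, not tested values):

```cpp
#include <opencv2/imgproc.hpp>
#include <opencv2/core.hpp>
#include <cmath>
#include <vector>

// edges: binary edge image from cv::Canny
// minBladeLen, maxGap: Hough parameters tuned to the known blade length
std::vector<double> bladeAnglesDeg(const cv::Mat& edges, double minBladeLen, double maxGap)
{
    std::vector<cv::Vec4i> lines;
    cv::HoughLinesP(edges, lines, 1, CV_PI / 180.0, 50, minBladeLen, maxGap);

    std::vector<double> angles;
    for (const cv::Vec4i& l : lines) {
        double dx = l[2] - l[0];
        double dy = l[1] - l[3];   // image y grows downward; flip so angles increase counter-clockwise
        angles.push_back(std::atan2(dy, dx) * 180.0 / CV_PI);   // angle relative to the horizontal (1, 0)
    }
    return angles;
}
```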
Problem:
When the yaw angle of the turbine is changed and it is not directly facing the camera, how do I calculate the blade angle formed?
The idea is basically to map the angles in the rotated view back to how they would appear when viewed head-on. From what I've been able to understand, I thought I'd find the homography matrix, decompose it to get the rotation, convert to Euler angles to calculate the shift from the original axis, and then shift all the axes by that angle. However, it's just a vague idea with no concrete plan to go on.
Or should I begin by trying to find the projection matrix, then get the camera matrix and rotation matrix? I am completely lost on this account and feel overwhelmed by the many functions...
Other things I came across were the perspective transform and solvePnP.
It would be great if anyone could suggest another way to deal with this. Any links or code snippets would be helpful. I'm not that familiar with OpenCV and would be grateful for any help.
Thanks!
Edit (by Spektre):
Assume the tips of the blades plus the center (or the three "roots" of the blades) lie on a common plane.
Fit a homography between those points and the corresponding ones in a reference pose for the turbine (cv::findHomography in OpenCV).
Decompose the homography into rotation and translation using an estimated or assumed camera calibration (cv::decomposeHomographyMat).
Convert the rotation into Euler angles.
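In OpenCV terms, that recipe could be sketched roughly like this (ptsReference, ptsCurrent and the camera matrix K are assumed inputs; picking the physically valid decomposition still needs a check):

```cpp
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

// ptsReference: blade tips + hub in the head-on reference pose (pixels)
// ptsCurrent:   the same points detected in the current frame (pixels)
// K:            estimated or assumed 3x3 camera matrix
std::vector<cv::Mat> turbineRotationCandidates(const std::vector<cv::Point2f>& ptsReference,
                                               const std::vector<cv::Point2f>& ptsCurrent,
                                               const cv::Mat& K)
{
    cv::Mat H = cv::findHomography(ptsReference, ptsCurrent, cv::RANSAC);

    std::vector<cv::Mat> rotations, translations, normals;
    cv::decomposeHomographyMat(H, K, rotations, translations, normals);

    // Up to 4 candidate (R, t, n) triplets are returned; keep the one whose plane normal
    // points toward the camera and which gives positive depth, then convert that R to
    // Euler angles (e.g. via cv::RQDecomp3x3).
    return rotations;
}
```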
So, I'm a high school student and the lead programmer on my local robotics team, and this year I decided to try out OpenCV and do some vision processing on our robot.
From my vision code, I need to know a few things about some objects on our competition field. These things are: distance (ft), horizontal angle from camera, and horizontal distance from camera (ft). Essentially, one large right triangle.
I already have the camera successfully detecting these objects and putting a boundingRect around them. With a gyroscope on our robot, we should be able to get our robot to a ~90 degree angle to the object once it is detected (as it's a set angle on the field). Thus, I can calculate distance based on an empirically derived function of the area of the object's boundingRect.
The horizontal angle of the object from the camera, however, I'm not exactly sure how to approach. Once I have that, though, I can do some simple trig and get the horizontal distance.
-
So here's what we have/know: Distance to object in ft, object is at ~90 degrees to camera, camera has horizontal fov of 67 degrees w/ resolution of 800x600, the real world dimensions of the object, and a boundingRect around the object.
How would I, using all of this information, calculate the horizontal angle from the camera to the object?
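Assuming a simple pinhole model with no lens distortion and the principal point at the image centre, a sketch of that calculation could look like this (rect is your boundingRect and distanceFt comes from your area-based function; the names are placeholders):

```cpp
#include <opencv2/core.hpp>
#include <cmath>

// rect: boundingRect of the detected object; distanceFt: distance from the area-based function
void horizontalAngleAndOffset(const cv::Rect& rect, double distanceFt,
                              double& angleDeg, double& horizontalFt)
{
    const double imageWidth = 800.0;   // horizontal resolution
    const double hfovDeg    = 67.0;    // horizontal field of view

    // focal length in pixels, derived from the horizontal FOV
    double fx = (imageWidth / 2.0) / std::tan((hfovDeg * CV_PI / 180.0) / 2.0);

    // pixel offset of the object's centre from the image centre
    double dx = (rect.x + rect.width / 2.0) - imageWidth / 2.0;

    double angleRad = std::atan(dx / fx);              // horizontal angle from the optical axis
    angleDeg     = angleRad * 180.0 / CV_PI;
    horizontalFt = distanceFt * std::sin(angleRad);    // lateral offset if distanceFt is the straight-line range
                                                       // (use tan() instead if it is measured along the optical axis)
}
```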
I'm accessing the Kinect accelerometer in C++ via openFrameworks and ofxKinect, and am having some issues with certain angles. If I pitch the Kinect 90 degrees downwards I get nan values. I had a look at the getAccelPitch() method and this kind of makes sense, since asin returns nan once its argument exceeds 1, which happens when the measured acceleration (readings go up to around 10.1) is divided by gravity (9.80665).
The main problem, though, is that after I pitch the device 90 degrees, the roll doesn't seem reliable (it doesn't seem to change much). In my setup I will need to have the device pitched 90 degrees but also know its new roll.
Any hints or tips on how I might do that? Is there an easy way to use the data to draw the Kinect's orientation with 3 lines (axes)?
I'm trying to detect orientations like these:
The problem is that you are using Euler angles (roll, pitch and yaw).
Euler angles are evil and they screw up the stability of your app; see for example Strange behavior with android orientation sensor and Reducing wiimote pitch/roll variations.
They are not useful for interpolation either.
A solution is to use rotation matrices instead. A tutorial on rotation matrices is given in the Direction Cosine Matrix IMU: Theory manuscript.
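For what it's worth, here is a rough sketch (not the ofxKinect API, just assumed names) of building a rotation-matrix-style basis from the accelerometer's gravity reading, which you could draw as three axis lines. Note that any rotation about the gravity vector itself is unobservable from the accelerometer alone, which is consistent with the roll reading barely changing once the device is pitched 90 degrees down.

```cpp
#include <opencv2/core.hpp>
#include <cmath>

// accel: raw accelerometer reading in the device frame (gravity only, if the device is still).
// Returns a 3x3 matrix whose columns are three orthonormal axes you can draw as lines.
// One degree of freedom (rotation about gravity) is an arbitrary choice, since the
// accelerometer alone cannot observe it.
cv::Matx33d basisFromGravity(const cv::Vec3d& accel)
{
    cv::Vec3d up = accel * (1.0 / cv::norm(accel));   // estimated vertical direction

    // pick any reference not parallel to "up", then Gram-Schmidt two perpendicular axes
    cv::Vec3d ref   = (std::abs(up[0]) < 0.9) ? cv::Vec3d(1, 0, 0) : cv::Vec3d(0, 1, 0);
    cv::Vec3d axis1 = ref - up * ref.dot(up);
    axis1 = axis1 * (1.0 / cv::norm(axis1));
    cv::Vec3d axis2 = up.cross(axis1);

    return cv::Matx33d(axis1[0], axis2[0], up[0],
                       axis1[1], axis2[1], up[1],
                       axis1[2], axis2[2], up[2]);
}
```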