Why is my camera matrix an identity matrix after calibration? - C++

After calibrating my camera using some sample data in OpenCV, my camera matrix becomes an identity matrix.
Here is my code:
std::vector<cv::Point3f> t_3d;
t_3d.push_back(cv::Point3f(50, 100, 0));
t_3d.push_back(cv::Point3f(375, 100, 0));
t_3d.push_back(cv::Point3f(50, 1600, 0));
t_3d.push_back(cv::Point3f(750, 1600, 0));
object_points.push_back(t_3d);
std::vector<cv::Point2f> t_2d;
t_2d.push_back(cv::Point2f(2.27556, 98.9867));
t_2d.push_back(cv::Point2f(631.467, 58.0267));
t_2d.push_back(cv::Point2f(207.076, 1020.59));
t_2d.push_back(cv::Point2f(1061.55, 969.387));
image_points.push_back(t_2d);
cv::calibrateCamera(object_points, image_points, cv::Size(1440, 900), cam_mat,
dist_coeffs, rvecs, tvecs, CV_CALIB_USE_INTRINSIC_GUESS);
Is this behaviour normal?

According to the docs, you need to initialize your camera matrix at least partially beforehand.
From the docs:
If CV_CALIB_USE_INTRINSIC_GUESS and/or CV_CALIB_FIX_ASPECT_RATIO are specified, some or all of fx, fy, cx, cy must be initialized before calling the function.
and
CV_CALIB_USE_INTRINSIC_GUESS cameraMatrix contains valid initial values of fx, fy, cx, cy that are optimized further. Otherwise, (cx, cy) is initially set to the image center ( imageSize is used), and focal distances are computed in a least-squares fashion.
In your case, since you haven't set cam_mat to any valid value beforehand, I believe CV_CALIB_USE_INTRINSIC_GUESS makes calibrateCamera take whatever is already in cam_mat as the initial guess. If cam_mat starts out as an identity matrix, that means f_x = f_y = 1 and (c_x, c_y) = (0, 0), and the optimization cannot recover sensible intrinsics from that, so the matrix stays an identity.
Try passing in 0 instead of that flag, or initialize cam_mat with a reasonable guess before the call.
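A minimal sketch of both options, reusing the variable names from the question (the focal-length value in the second option is just a placeholder guess, not something taken from your setup):

// Option 1: drop the flag and let calibrateCamera estimate the intrinsics itself.
cv::calibrateCamera(object_points, image_points, cv::Size(1440, 900),
                    cam_mat, dist_coeffs, rvecs, tvecs, 0);

// Option 2: keep CV_CALIB_USE_INTRINSIC_GUESS, but seed cam_mat first.
cam_mat = (cv::Mat_<double>(3, 3) << 1000, 0,    720,   // fx,  0, cx (image centre)
                                     0,    1000, 450,   //  0, fy, cy
                                     0,    0,    1);
cv::calibrateCamera(object_points, image_points, cv::Size(1440, 900),
                    cam_mat, dist_coeffs, rvecs, tvecs,
                    CV_CALIB_USE_INTRINSIC_GUESS);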

Related

How to get the difference of Aruco markers' axis angles in world coordinates?

I am new to OpenCV and Aruco. I am trying to find the difference in axis angles between two different Aruco markers, for example the angle between the two (1, 0, 0) vectors of the markers in the real world. In my understanding, the transform happens in the following order: local coordinates -> camera coordinates -> world coordinates, and once both markers are in the same world coordinate system I can get the angle difference between them. Can someone explain how this process is done, or is there a better way to find the angle difference? I am coding with Python and OpenCV.
Known:
Translation and rotation vectors (1x3) from the estimatePoseSingleMarkers() function in the aruco module (the rotation vector can be converted to a 3x3 matrix with Rodrigues()).
Camera matrix (3x3) and dist_coefs matrix (1x5) from camera calibration.
Using 6x6_250 aruco markers.
Update:
world coordinate = camera coordinate
The following is the function to draw the aruco marker's XYZ axes. It looks like the origin of the camera is (0, 0, 0). Applying the marker's translation and rotation to the x, y, z axis points of the origin gives me the x, y, z axes of the aruco marker. Is that right?
/**
* Draws the marker's X (red), Y (green) and Z (blue) axes by projecting the
* axis endpoints into the image; this is essentially the aruco module's drawAxis.
*/
void drawAxis(InputOutputArray _image, InputArray _cameraMatrix, InputArray _distCoeffs,
InputArray _rvec, InputArray _tvec, float length) {
CV_Assert(_image.getMat().total() != 0 &&
(_image.getMat().channels() == 1 || _image.getMat().channels() == 3));
CV_Assert(length > 0);
// project axis points
vector< Point3f > axisPoints;
axisPoints.push_back(Point3f(0, 0, 0));
axisPoints.push_back(Point3f(length, 0, 0));
axisPoints.push_back(Point3f(0, length, 0));
axisPoints.push_back(Point3f(0, 0, length));
vector< Point2f > imagePoints;
projectPoints(axisPoints, _rvec, _tvec, _cameraMatrix, _distCoeffs, imagePoints);
// draw axis lines
line(_image, imagePoints[0], imagePoints[1], Scalar(0, 0, 255), 3);
line(_image, imagePoints[0], imagePoints[2], Scalar(0, 255, 0), 3);
line(_image, imagePoints[0], imagePoints[3], Scalar(255, 0, 0), 3);
}
Technically it's a workaround to my question. I figured out that OpenCV has this function called projectPoints, which maps 3D objectPoints into 2D imagePoints.
imagePoints[0] and imagePoints[1] give you the 2D projections of (0, 0, 0) and (length, 0, 0), so their difference is the marker's x axis in the image. By taking arccos(np.dot(v1, v2)) of the two normalized image-space axis vectors, you can get the difference in x-axis angle of the two markers.
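A minimal C++ sketch of that workaround (kept in C++ to match the drawAxis code above; the function name and the rvec/tvec parameters are my own, assumed to come from estimatePoseSingleMarkers(), and the same steps translate directly to Python):

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cmath>

// Project each marker's origin and x-axis tip, take the image-space x-axis
// vectors, and return the angle between them in radians.
double xAxisAngleBetweenMarkers(const cv::Mat& cameraMatrix, const cv::Mat& distCoeffs,
                                const cv::Vec3d& rvec1, const cv::Vec3d& tvec1,
                                const cv::Vec3d& rvec2, const cv::Vec3d& tvec2,
                                float length)
{
    // Marker-local origin and x-axis tip, exactly as in drawAxis above.
    std::vector<cv::Point3f> axisPoints = { cv::Point3f(0, 0, 0), cv::Point3f(length, 0, 0) };
    std::vector<cv::Point2f> img1, img2;
    cv::projectPoints(axisPoints, rvec1, tvec1, cameraMatrix, distCoeffs, img1);
    cv::projectPoints(axisPoints, rvec2, tvec2, cameraMatrix, distCoeffs, img2);

    // Image-space direction of each marker's x axis (tip minus origin).
    cv::Point2f v1 = img1[1] - img1[0];
    cv::Point2f v2 = img2[1] - img2[0];

    // arccos of the dot product of the normalized vectors, as described above.
    double c = (v1.x * v2.x + v1.y * v2.y) /
               (std::hypot(v1.x, v1.y) * std::hypot(v2.x, v2.y));
    c = std::max(-1.0, std::min(1.0, c));   // clamp against rounding error
    return std::acos(c);
}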

Allegro 5 how to draw a scaled bitmap region

My Allegro 5 game needs to draw a region of a tilesheet, so I used al_draw_bitmap_region. Now that I have added the ability to change the screen resolution, I also need to scale that bitmap region, but Allegro 5 does not have something like al_draw_scaled_bitmap_region; it has al_draw_bitmap_region and al_draw_scaled_bitmap, but not both combined.
Can somebody help me do both at once?
There is no al_draw_scaled_bitmap_region, but there is
al_draw_tinted_scaled_rotated_bitmap_region. You can just pass 'default'
values to the parameters you don't need.
al_draw_tinted_scaled_rotated_bitmap_region(
bitmap,
sx, sy, sw, sh, // source bitmap region
al_map_rgb(255, 255, 255), // tint; plain white means no tint
cx, cy, // center of rotation/scaling
dx, dy, // destination
xscale, yscale, // scale
0, 0); // angle and flags
You could also use transforms to scale your bitmap:
ALLEGRO_TRANSFORM trans, prevTrans;
// back up the current transform
al_copy_transform(&prevTrans, al_get_current_transform());
// scale using the new transform
al_identity_transform(&trans);
al_scale_transform(&trans, xscale, yscale);
al_use_transform(&trans);
al_draw_bitmap_region(bitmap, sx, sy, sw, sh, dx, dy, 0);
// restore the old transform
al_use_transform(&prevTrans);
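If you want the convenience of a single call, you can also wrap the long-named function in your own helper. A minimal sketch (the wrapper name is mine, not part of Allegro):

/* Behaves like the missing al_draw_scaled_bitmap_region: forwards to
   al_draw_tinted_scaled_rotated_bitmap_region with a white tint, no rotation
   and the region's top-left corner as the pivot. */
void my_draw_scaled_bitmap_region(ALLEGRO_BITMAP *bitmap,
                                  float sx, float sy, float sw, float sh,
                                  float dx, float dy,
                                  float xscale, float yscale)
{
    al_draw_tinted_scaled_rotated_bitmap_region(
        bitmap,
        sx, sy, sw, sh,                 /* source region */
        al_map_rgb(255, 255, 255),      /* white tint = no tint */
        0, 0,                           /* pivot at the region's top-left */
        dx, dy,                         /* where the pivot lands on screen */
        xscale, yscale,                 /* scale factors */
        0, 0);                          /* angle and flags */
}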

How to maintain a white background when using OpenCV warpAffine

I'm trying to rotate an image using
void rotate(cv::Mat& src, double angle, cv::Mat& dst)
{
int len = std::max(src.cols, src.rows);
cv::Point2f pt(len / 2., len / 2.);
cv::Mat r = cv::getRotationMatrix2D(pt, angle, 1.0);
cv::warpAffine(src, dst, r, cv::Size(src.cols, src.rows));
}
by giving an angle, a source and a destination image. The rotation itself works correctly.
I want to make black areas white. I have tried with
cv::Mat dst = cv::Mat::ones(src.cols, src.rows, src.type());
before calling rotate, but no change in result. How can I achieve this?
Note: I am looking for a solution that achieves this while doing the rotation; obviously it can also be done by making the black areas white after the rotation.
You will want to use the borderMode and borderValue arguments of the warpAffine function to accomplish this. Setting the mode to BORDER_CONSTANT makes it use a constant value for border pixels (i.e. pixels that fall outside the source image), and borderValue sets that constant to the value you want, in this case white. It would look something like:
cv::warpAffine(src, dst, r,
cv::Size(src.cols, src.rows),
cv::INTER_LINEAR,
cv::BORDER_CONSTANT,
cv::Scalar(255, 255, 255));
For more details see the OpenCV API Documentation.
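Putting that together with the rotate() helper from the question, a version that keeps the background white could look like this (a sketch under the same assumptions as the original function, renamed so it is clearly not the asker's code):

void rotateOnWhite(cv::Mat& src, double angle, cv::Mat& dst)
{
    int len = std::max(src.cols, src.rows);
    cv::Point2f pt(len / 2.f, len / 2.f);
    cv::Mat r = cv::getRotationMatrix2D(pt, angle, 1.0);
    cv::warpAffine(src, dst, r, cv::Size(src.cols, src.rows),
                   cv::INTER_LINEAR, cv::BORDER_CONSTANT,
                   cv::Scalar(255, 255, 255));   // pixels outside the source become white
}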

OpenCV triangulatePoints handedness

I have two cameras attached rigidly side by side looking in parallel directions.
Projection matrix for left camera
Projection matrix for right camera
When I perform triangulatePoints on the two vectors of corresponding points, I get the collection of points in 3D space. All of the points in 3D space have a negative Z coordinate.
So, to get to the bottom of this...
My assumption was that OpenCV uses Right Hand coordinate system.
The handedness reminder:
I assume that the initial orientation of each camera is directed in the positive Z axis direction.
So, by using the projection matrices like the ones I presented in the beginning, I would assume that the cameras are positioned in space like this:
This assumption is in conflict with what I observe when I get negative Z coordinates for the triangulated points. The only explanation I can think of is that OpenCV in fact uses the Left Hand coordinate system. So, with the projection matrices I stated in the beginning, this is how the cameras are positioned in space:
This would indicate that my left camera in this situation is not on the left side. And this is why I am getting negative depth for points.
Moreover, if I try to combine triangulatePoints with solvePnP I run into problems.
I use the output of triangulatePoints as an input to solvePnP. I expect to get the camera coordinates near the origin of the 3D coordinate system. I expect the calculated camera position to match the projection matrices used in the beginning. But this is not happening. I get some completely wild results, missing the expected values by over 10 times the baseline length.
Example
This example is a more complete representation of the problem than what is stated above.
points3D
Here is the code for generating these points.
Moving on, setting up camera A and camera D...
Mat cameraMatrix = (Mat_<double>(3, 3) <<
716.731, 0, 660.749,
0, 716.731, 360.754,
0, 0, 1);
Mat distCoeffs = (Mat_<double>(5, 1) << 0, 0, 0, 0, 0);
Mat rotation_a = Mat::eye(3, 3, CV_64F); // no rotation
Mat translation_a = (Mat_<double>(3, 1) << 0, 0, 0); // no translation
Mat rt_a;
hconcat(rotation_a, translation_a, rt_a);
Mat projectionMatrix_a = cameraMatrix * rt_a;
Mat rotation_d = (Mat_<double>(3, 1) << 0, CV_PI / 6.0, 0); // 30° rotation about Y axis
Rodrigues(rotation_d, rotation_d); // convert to 3x3 matrix
Mat translation_d = (Mat_<double>(3, 1) << 100, 0, 0);
Mat rt_d;
hconcat(rotation_d, translation_d, rt_d);
Mat projectionMatrix_d = cameraMatrix * rt_d;
What are the pixel coordinates of points3D when observed by projections A and D?
Mat points2D_a = projectionMatrix_a * points3D;
Mat points2D_d = projectionMatrix_d * points3D;
I put them in vectors:
vector<Point2f> points2Dvector_a, points2Dvector_d;
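(The conversion step itself is not shown; presumably it divides by the homogeneous coordinate, roughly like the sketch below, assuming points3D is a 4xN CV_64F matrix so that the products above are 3xN.)

// Hypothetical reconstruction of the skipped step: perspective-divide the
// homogeneous 3xN results and fill the Point2f vectors.
for (int i = 0; i < points2D_a.cols; i++) {
    points2Dvector_a.push_back(Point2f(
        points2D_a.at<double>(0, i) / points2D_a.at<double>(2, i),
        points2D_a.at<double>(1, i) / points2D_a.at<double>(2, i)));
    points2Dvector_d.push_back(Point2f(
        points2D_d.at<double>(0, i) / points2D_d.at<double>(2, i),
        points2D_d.at<double>(1, i) / points2D_d.at<double>(2, i)));
}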
After that, I generate the 3D points again.
Mat points3DHomogeneous;
triangulatePoints(projectionMatrix_a, projectionMatrix_d, points2Dvector_a, points2Dvector_d, points3DHomogeneous);
Mat triangulatedPoints3D;
transpose(points3DHomogeneous, triangulatedPoints3D);
convertPointsFromHomogeneous(triangulatedPoints3D, triangulatedPoints3D);
Now, triangulatedPoints3D start out like this:
and they are identical to points3D.
And then the last step.
Mat rvec, tvec;
solvePnP(triangulatedPoints3D, points2Dvector_d, cameraMatrix, distCoeffs, rvec, tvec);
Resulting rvec and tvec:
I had hoped to get something more similar to the transformations used in the creation of projectionMatrix_d, i.e. a translation of (100, 0, 0) and a rotation of 30° about the Y axis.
If I use inverted transformations when creating projection matrix, like this:
Mat rotation_d = (Mat_<double>(3, 1) << 0, CV_PI / 6.0, 0); // 30° rotation about Y axis
Rodrigues(-rotation_d, rotation_d); // NEGATIVE ROTATION
Mat translation_d = (Mat_<double>(3, 1) << 100, 0, 0);
Mat rt_d;
hconcat(rotation_d, -translation_d, rt_d); // NEGATIVE TRANSLATION
Mat projectionMatrix_d = cameraMatrix * rt_d;
then I get rvec and tvec:
And that makes much more sense. But then if I change the starting transformation so that the rotation is negative (CV_PI / 6.0 -> -CV_PI / 6.0), the resulting rvec and tvec are:
I would like to find an explanation for why this is happening. Why am I getting such weird results from solvePnP?
The OpenCV coordinate system is right-handed; the answer here gives an illustrative example of the OpenCV camera system. I guess the confusion is about rvec and tvec: tvec does not give the translation of the camera, it gives the position of the world origin in camera coordinates. The first answer here explains this with an example. You can get the actual projection matrix from the output of solvePnP by a simple matrix multiplication; the details are in the first answer here.
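For instance, something along these lines rebuilds both the projection matrix and the camera position from the solvePnP output (a sketch using the variable names from the question, not code I have run against the data above):

Mat R;
Rodrigues(rvec, R);                  // 3x3 rotation, world -> camera
Mat rt;
hconcat(R, tvec, rt);                // [R | t]
Mat P = cameraMatrix * rt;           // 3x4 projection matrix, comparable to projectionMatrix_d
Mat cameraPosition = -R.t() * tvec;  // camera centre expressed in world coordinates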

3D Geometry projecting a point in space to an image given FOV and angles

I'm attempting to project a 3D point in Cartesian space onto an image, so that it is drawn as a circle at the location in the image where it would be seen if a certain camera were looking at it.
Knowns:
The camera has horizontal field of view, vertical FOV, azimuth and elevation.
It is located at (0,0,0) although being able to change this would be good.
The image has a certain size.
The point in space has a known X, Y, Z location.
I'm using the very good C++ library MathGeoLib.
In my attempt, changing azimuth or elevation does not work, but the projection seems to work ok if rotation is ignored.
How do you convert a real-world point into a pixel location in an image, provided that you know where the camera is, which direction it is looking, and its field of view?
Frustum MyClass::GetCameraFrustrum()
{
Frustum frustrum;
frustrum.SetPerspective(DegToRad(_horizontal_field_of_view), DegToRad(_vertical_filed_of_view));
// Looking straight down the z axis
frustrum.SetFront(vec(0, 0, 1));
frustrum.SetUp(vec(0, 1, 0));
frustrum.SetPos(vec(0,0,0));
frustrum.SetKind(FrustumSpaceGL, FrustumRightHanded);
frustrum.SetViewPlaneDistances(1, 2000);
float3x3 rotate_around_x = float3x3::RotateX(DegToRad(_angle_azimuth));
frustrum.Transform(rotate_around_x);
float3x3 rotate_around_y = float3x3::RotateY(DegToRad(_angle_elevation));
frustrum.Transform(rotate_around_y);
return frustrum;
}
void MyClass::Test()
{
cv::Mat image = cv::Mat::zeros(2000, 2000, CV_8UC1);
Frustum camera_frustrum = GetCameraFrustrum();
vec projection = camera_frustrum.Project(_position_of_object);
cv::Point2d location_on_image(projection.x * image.cols, projection.y * image.rows);
LOG_DEBUG << location_on_image;
cv::circle(image, location_on_image, 5, cv::Scalar(255, 0, 0), 3, cv::LINE_AA);
}