I am trying to project 3D (x, y, z) axes onto a chessboard via the OpenCV projectPoints function after calibration, but every time I run my code, all of the axes point to one specific projected image point on the screen.
example output image
The cameraMatrix and distCoeffs are read in from data that was written to a file during calibration. They are:
CameraMatrix[1372.852997982289, 0, 554.2708806543288;
0, 1372.852997982289, 906.4327368600385;
0, 0, 1]
distCoeff[0.02839203221556521;
0.442572399014994;
-0.01755006951285373;
-0.0008989327508155589;
-1.836490953232962]
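For reference, a minimal sketch of what the readCameraConfig helper used below could look like if the calibration data were stored with cv::FileStorage (the file name and node names here are placeholders, not necessarily the actual ones):

#include <opencv2/core.hpp>

void readCameraConfig(cv::Mat &cameraMatrix, cv::Mat &distCoeffs) {
    // Hypothetical file/node names; adjust to the actual calibration file layout.
    cv::FileStorage fs("calibration.yml", cv::FileStorage::READ);
    fs["camera_matrix"] >> cameraMatrix;          // 3x3 intrinsic matrix
    fs["distortion_coefficients"] >> distCoeffs;  // 5x1 distortion vector
    fs.release();
}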
The rotation and translation values are computed in real time via solvePnP every time a bool is turned on by a keypress. An example of their output values is:
R =
[-0.9065211432378315;
0.3787201875924527;
-0.2788943269946833]
T =
[-0.4433059282649063;
-0.6745750872705997;
1.13753594660495]
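Note that R here is the 3x1 Rodrigues rotation vector returned by solvePnP, not a 3x3 matrix; if needed it can be expanded, for example:

cv::Mat rotationMatrix;                          // 3x3 rotation matrix
cv::Rodrigues(rotation_values, rotationMatrix);  // expand the 3x1 Rodrigues vector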
While solvePnP is being computed, I press another key to draw the 3D axes from the origin, as written in the code below, and the rotation and translation values are passed into the projectPoints function. However, the axesProjectedPoints image points output for each axis are always very similar, in the range of:
[100.932, 127.418]
[55.154, 157.192]
[70.3054, 162.585]
Note that axesProjectedPoints is declared outside the loop as vector<Point2f> axesProjectedPoints.
The reprojection error is fairly good: under 1 pixel.
The projectPoints code:
if (found) {
    // read calibration data -- function that reads calibration data saved to a file
    readCameraConfig(cameraMatrix, distCoeffs);

    // draw corners
    drawChessboardCorners(convertedImage, patternsize, corners, found);

    // draw 3D axes using projectPoints
    // used 0.04 because the chessboard square is 0.01778 m
    std::vector<Point3f> axis;
    axis.push_back(cv::Point3f(0.04, 0, 0));
    axis.push_back(cv::Point3f(0, 0.04, 0));
    axis.push_back(cv::Point3f(0, 0, 0.04));

    // rotation_values and translation_values are outputs of the OpenCV solvePnP function,
    // which is computed separately, in real time, every time I press a key
    projectPoints(axis, rotation_values, translation_values, cameraMatrix, distCoeffs, axesProjectedPoints);

    cout << "image points" << endl;
    for (auto &n : axesProjectedPoints) {
        cout << n << endl;
    }

    cv::line(convertedImage, corners[0], axesProjectedPoints[0], {255,0,0}, 5);
    cv::line(convertedImage, corners[0], axesProjectedPoints[1], {0,255,0}, 5);
    cv::line(convertedImage, corners[0], axesProjectedPoints[2], {0,0,255}, 5);
}
The solvePnP part of the code:
/* calculate board's pose (rotation and translation) */
bool flag = false;
if (flag) {
    printf("Calculating board's pose (rotation and translation) ...\n");

    // read calibration data
    readCameraConfig(cameraMatrix, distCoeffs);

    // create undistorted corners or image points
    undistortPoints(corners, imagePoints, cameraMatrix, distCoeffs);

    //cout << "POINTS" << endl;
    std::vector<Point3d> objp;
    for (auto &i : points) {
        objp.push_back(i);
        //cout << i << endl;
    }

    //cout << "CORNERS" << endl;
    std::vector<Point2d> imagep;
    for (auto &j : imagePoints) {
        imagep.push_back(j);
        //cout << j << endl;
    }

    cout << "point size" << endl;
    cout << objp.size() << endl;

    // calculate pose
    solvePnP(objp, imagep, cameraMatrix, distCoeffs, rotation_values, translation_values, true, SOLVEPNP_ITERATIVE);

    // print rotation and translation values
    cout << "R = " << endl << " " << rotation_values << endl << endl;
    cout << "T = " << endl << " " << translation_values << endl << endl;
}
}
I'd like to perform the following action using CGAL:
Intersect a surface mesh (the cut mesh) with another surface mesh (the cutted mesh) in order to divide the cutted mesh into 2 different meshes.
To do so I use the CGAL Mesh Slicer, which gives me a set of polylines. Unfortunately, the slicer doesn't provide the information of which face of the cutted mesh each polyline belongs to. Moreover, it effectively performs the slicing action.
How can I retrieve this information?
I need this information to perform the subdivision of the cutted mesh along the polylines, and then the division into 2 separate meshes.
In my context the cut mesh is a surface mesh like this:
Here is the cut mesh + the cutted mesh:
In my code, I generate a Plane from each face of the cut mesh and perform a slice operation with the cutted mesh.
Here is my code:
// Import off files and instantiate meshes
const char* filename1 = "test.off";
const char* filename2 = "cutMesh2.off";

std::ifstream input(filename1);
Mesh cuttedMesh, cutMesh;
if (!input || !(input >> cuttedMesh))
{
    std::cerr << "First mesh is not a valid off file." << std::endl;
    return 0;
}
input.close();

input.open(filename2);
if (!input || !(input >> cutMesh))
{
    std::cerr << "Second mesh is not a valid off file." << std::endl;
    return 0;
}
input.close();

// AABB slicer constructor from the cutted mesh
AABB_tree tree(edges(cuttedMesh).first, edges(cuttedMesh).second, cuttedMesh);
tree.accelerate_distance_queries();
CGAL::Polygon_mesh_slicer<Mesh, K> slicer_aabb(cuttedMesh, tree);

std::cout << cutMesh.num_vertices() << std::endl;

// For each face of the cut mesh
BOOST_FOREACH(face_descriptor f, faces(cutMesh))
{
    std::cout << "Face " << f << std::endl;
    Point points[3];
    int i = 0;

    // for each point of the current face
    BOOST_FOREACH(vertex_descriptor v, CGAL::vertices_around_face(cutMesh.halfedge(f), cutMesh))
    {
        points[i] = cutMesh.point(v);
        ++i;
    }

    Polylines polylines;
    // Perform the slice between the current face of the cut mesh and the cutted mesh
    slicer_aabb(K::Plane_3(points[0], points[1], points[2]), std::back_inserter(polylines));
    std::cout << "the slicer intersects " << polylines.size() << " polylines" << std::endl;

    // for each polyline computed by this face of the cut mesh
    BOOST_FOREACH(Polyline_type polyline, polylines)
    {
        std::cout << "Polyline : " << polyline.size() << " points" << std::endl;
        BOOST_FOREACH(Point point, polyline)
        {
            std::cout << "Point : " << point << std::endl;
        }
    }
    std::cout << std::endl;
    polylines.clear();
}
Thanks for your help.
The answer to this post was proposed by sloriot:
Try using the non-documented clip() function located in Polygon_mesh_processing/include/CGAL/Polygon_mesh_processing/internal/clip.h
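In more recent CGAL releases a clip() function is part of the documented Polygon_mesh_processing API; a rough sketch of how a call could look with the Mesh type from the question (the exact overloads and preconditions depend on the CGAL version, so treat this as an assumption rather than a drop-in solution):

#include <CGAL/Polygon_mesh_processing/clip.h>
namespace PMP = CGAL::Polygon_mesh_processing;

// Clip cuttedMesh with cutMesh; both are assumed to be triangle meshes,
// and the preconditions on the clipper vary between CGAL versions.
PMP::clip(cuttedMesh, cutMesh);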
I'm trying to run the example in the apriltags library, and I keep getting this error:
OpenCV Error: Assertion failed (mtype == type0 || (CV_MAT_CN(mtype) == 1 && ((1 << type0) & fixedDepthMask) != 0)) in create, file /Users/Vijin/PersInq/opencv-3.2.0/modules/core/src/matrix.cpp, line 2559
I narrowed it down to this function call:
detection.getRelativeTranslationRotation(m_tagSize, m_fx, m_fy, m_px, m_py,
                                         translation, rotation);
I'm not an expert in OpenCV, so I'd appreciate some help with this. The exception is thrown the moment a marker is detected; otherwise, it runs fine. Here's the whole function:
void print_detection(AprilTags::TagDetection& detection) const {
    cout << " Id: " << detection.id
         << " (Hamming: " << detection.hammingDistance << ")";

    // recovering the relative pose of a tag:
    // NOTE: for this to be accurate, it is necessary to use the
    // actual camera parameters here as well as the actual tag size
    // (m_fx, m_fy, m_px, m_py, m_tagSize)
    Eigen::Vector3d translation;
    Eigen::Matrix3d rotation;
    try {
        detection.getRelativeTranslationRotation(m_tagSize, m_fx, m_fy, m_px, m_py,
                                                 translation, rotation);
    }
    catch (const std::exception& e)
    {
        cout << "print_detection failing";
    }

    Eigen::Matrix3d F;
    F <<
        1, 0, 0,
        0, -1, 0,
        0, 0, 1;
    Eigen::Matrix3d fixed_rot = F * rotation;
    double yaw, pitch, roll;
    wRo_to_euler(fixed_rot, yaw, pitch, roll);

    cout << " distance=" << translation.norm()
         << "m, x=" << translation(0)
         << ", y=" << translation(1)
         << ", z=" << translation(2)
         << ", yaw=" << yaw
         << ", pitch=" << pitch
         << ", roll=" << roll
         << endl;

    // Also note that for SLAM/multi-view application it is better to
    // use reprojection error of corner points, because the noise in
    // this relative pose is very non-Gaussian; see iSAM source code
    // for suitable factors.
}
This issue persists with the newer versions of OpenCV. It can be easily fixed by changing line 95 of src/TagDetection.cc from cv::Matx33f cameraMatrix( to cv::Matx33d cameraMatrix(.
Note, this is simply converting from float to double. Alternatively, you can use this library (https://github.com/PrieureDeSion/apriltags-cpp), which I have made changes to and tested with Ubuntu 16 and OpenCV.
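For illustration, the change amounts to something like the following (the surrounding variable names are illustrative and may not match TagDetection.cc exactly):

// before: single-precision camera matrix, which trips the type assertion
// cv::Matx33f cameraMatrix(fx, 0, px,
//                          0, fy, py,
//                          0, 0, 1);

// after: double-precision camera matrix
cv::Matx33d cameraMatrix(fx, 0, px,
                         0, fy, py,
                         0, 0, 1);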
I've used OpenCV to calibrate my camera from different views, and I obtained the intrinsics, rvec and tvec with a reprojection error of 0.03 px (I thus think the calibration is fine).
Now, given one view of my scene, I want to be able to click on a point and find its projection on the other views.
To do so, I use the following functions:
void Camera::project(const vector<cv::Point2f> &pts_2d, vector<cv::Point3f> &pts_3d) {
    std::cout << "Start proj 2d -> 3d" << std::endl;

    cv::Mat pts_2d_homo;
    convertPointsToHomogeneous(pts_2d, pts_2d_homo);
    std::cout << "Cartesian to Homogeneous done!" << std::endl;

    // Project point to camera normalized coordinates
    cv::Mat unproj;
    cv::transform(pts_2d_homo, unproj, intrinsics().inv());
    std::cout << "Point unprojected: " << unproj.at<cv::Point3f>(0) << std::endl;

    // Undo model view transform
    unproj -= transVec();
    cv::Mat rot;
    cv::Rodrigues(rotVec(), rot);
    cv::transform(unproj, unproj, rot.t());
    unproj *= 1.f / cv::norm(unproj);
    std::cout << "Model view undone: " << unproj.at<cv::Point3f>(0) << std::endl;

    for (int i = 0; i < unproj.rows; ++i) {
        std::cout << "Inside for :" << unproj.at<cv::Point3f>(i, 0) << std::endl;
        pts_3d.push_back(unproj.at<cv::Point3f>(i, 0));
    }
}

void Camera::project(const vector<cv::Point3f> &pts_3d, vector<cv::Point2f> &pts_2d) {
    cv::projectPoints(pts_3d, rotVec(), transVec(), intrinsics(), dist_coeffs(), pts_2d);
}
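For context, the intended round trip described above would be chained roughly like this (camA and camB are assumed Camera instances set up with their respective calibration data; the clicked coordinates are placeholders):

std::vector<cv::Point2f> clicked = { cv::Point2f(320.f, 240.f) };  // point clicked in the canonical view
std::vector<cv::Point3f> pts_3d;
camA.project(clicked, pts_3d);   // 2D click -> normalized 3D direction

std::vector<cv::Point2f> pts_in_b;
camB.project(pts_3d, pts_in_b);  // 3D -> 2D in another view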
Now I have mixed feelings about the output. When I draw the projected point on each view, they all correspond, BUT no matter where I click initially in the "canonical view", the projected point is always the same.
I've successfully parsed the IQE (Inter-Quake Export) format and now I'm stuck at displaying it in bind pose.
All vertices have a weird transformation, where the root bone (which covers the full mesh for orientation) is not the only bone with influence on a given vertex. You can see it at the arm / shoulder / neck area of the mesh. This mesh has 3 bones: one root bone covering the whole mesh and two arm bones.
You can see how the mesh should look in the background (exported as OBJ).
For better understanding, I have the following system:
1. I load all vertex data into one big VBO (vertices, uvs, normals, tangent, bitangent, boneIndicies(4) (index into the joint list) and boneWeights(4)).
2. I add all joints to a joint list and create a tree system (a simple linked list with position, rotation and parent pointer); see the sketch after this list.
3. I have a separate list called boneMatrices where I store... well, my bone matrices. Currently this happens every frame; later I will precalculate the matrices for each animation frame.
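A minimal sketch of the joint node described in point 2, with the field set inferred from the loop below (the actual pixel::CJoint layout and the vector/quaternion type names are assumptions):

struct CJoint {
    std::string name;
    pixel::vector3 position;       // translation relative to the parent (type name is a guess)
    pixel::quaternion rotation;    // orientation relative to the parent (type name is a guess)
    CJoint *parent = nullptr;      // NULL/nullptr for the root bone
    pixel::matrix4 bindPose;       // computed bind-pose matrix
    pixel::matrix4 invBindPose;    // inverse bind-pose matrix
};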
I try to calculate the bone matrix the following way:
for (int i = 0; i < this->jointList.size(); i++)
{
    pixel::CJoint *joint = this->jointList.at(i);

    std::cout << "Joint ------- " << joint->name << " -------- values: \n";
    std::cout << "Translation: " << joint->position.x << " " << joint->position.y << " " << joint->position.z << "\n";
    std::cout << "Quaternion: " << joint->rotation.x << " " << joint->rotation.y << " " << joint->rotation.z << " " << joint->rotation.w << "\n";

    pixel::matrix4 rotation = pixel::CMatrix::fromQuaternion(joint->rotation);
    pixel::matrix4 offset = pixel::CMatrix::translateMatrix(joint->position);

    pixel::matrix4 baseMatrix = rotation * offset; // translation * rotation
    joint->bindPose = baseMatrix;
    joint->invBindPose = pixel::CMatrix::inverseMatrix(baseMatrix);

    if (joint->parent != NULL)
    {
        std::cout << "Joint: " << joint->name << " is child of " << joint->parent->name << " \n";

        joint->bindPose = joint->bindPose * joint->parent->invBindPose;
        joint->invBindPose = pixel::CMatrix::inverseMatrix(joint->bindPose);
    }

    std::cout << "\n";
}
I store the transpose of joint->invBindPose (otherwise the mesh is upside down) in boneMatrices and send it to the shader:
boneMatrices is a std::vector of matrix4:
this->material.setParameter("boneMatrix", this->boneMatrices.at(0), this->boneMatrices.size());
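In other words, the palette would be filled along these lines before the setParameter call above (the transpose helper name is hypothetical):

// Rebuild the palette from the joints' inverse bind poses, transposing before upload.
this->boneMatrices.clear();
for (pixel::CJoint *joint : this->jointList)
    // store the transposed inverse bind pose (hypothetical transpose helper name)
    this->boneMatrices.push_back(pixel::CMatrix::transposeMatrix(joint->invBindPose));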
The root bone bind calculation has to be right (at least I think) because the head is at the right place, and the eyes too (which don't have any bone influence currently).
I need to calculate the area of a blob/an object in a grayscale picture (loading it as Mat, not as IplImage) using OpenCV.
I thought it would be a good idea to get the coordinates of the edges (the number of edges changes from object to object), or to get all coordinates of the contour and then use contourArea() to calculate the area of my object.
I deleted all noise and got some nice and satisfying contours by using findContours() (programming in C++).
findContours(InputOutputArray image, OutputArrayOfArrays contours, OutputArray hierarchy, int mode, int method, Point offset=Point());
Now, as far as I understand, the parameter contours already holds the coordinates of all the contours of my object. Did I get that right?
If yes, is there a way to access them?
And if no, how do I get the coordinates of the contour anyway?
contours is actually defined as
vector<vector<Point> > contours;
And now I think it's clear how to access its points.
The contour area is calculated by a function nicely called contourArea():
for (unsigned int i = 0; i < contours.size(); i++)
{
    std::cout << "# of contour points: " << contours[i].size() << std::endl;

    for (unsigned int j = 0; j < contours[i].size(); j++)
    {
        std::cout << "Point(x,y)=" << contours[i][j] << std::endl;
    }

    std::cout << " Area: " << contourArea(contours[i]) << std::endl;
}
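For completeness, a minimal sketch of how contours would typically be filled before that loop (the file name and threshold value are placeholders):

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main() {
    // load the grayscale picture as a Mat (placeholder file name)
    cv::Mat img = cv::imread("blob.png", cv::IMREAD_GRAYSCALE);

    // binarize so the blob is white on black (placeholder threshold)
    cv::Mat bin;
    cv::threshold(img, bin, 128, 255, cv::THRESH_BINARY);

    // findContours fills one vector<Point> per detected contour
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(bin, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    for (const auto &c : contours)
        std::cout << "Area: " << cv::contourArea(c) << std::endl;

    return 0;
}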