I've used OpenCV to calibrate my camera from different views, and I obtained the intrinsics, rvec and tvec with a reprojection error of 0.03 px (so I think the calibration is fine).
Now, given one view of my scene, I want to be able to click on a point and find its projection on the other views.
To do so, I use the following functions:
void Camera::project(const vector<cv::Point2f> &pts_2d, vector<cv::Point3f> &pts_3d) {
    std::cout << "Start proj 2d -> 3d" << std::endl;
    cv::Mat pts_2d_homo;
    convertPointsToHomogeneous(pts_2d, pts_2d_homo);
    std::cout << "Cartesian to Homogeneous done!" << std::endl;

    // Project point to camera normalized coordinates
    cv::Mat unproj;
    cv::transform(pts_2d_homo, unproj, intrinsics().inv());
    std::cout << "Point unprojected: " << unproj.at<cv::Point3f>(0) << std::endl;

    // Undo model view transform
    unproj -= transVec();
    cv::Mat rot;
    cv::Rodrigues(rotVec(), rot);
    cv::transform(unproj, unproj, rot.t());
    unproj *= 1.f / cv::norm(unproj);
    std::cout << "Model view undone: " << unproj.at<cv::Point3f>(0) << std::endl;

    for (int i = 0; i < unproj.rows; ++i) {
        std::cout << "Inside for: " << unproj.at<cv::Point3f>(i, 0) << std::endl;
        pts_3d.push_back(unproj.at<cv::Point3f>(i, 0));
    }
}
void Camera::project(const vector<cv::Point3f> &pts_3d, vector<cv::Point2f> &pts_2d) {
    cv::projectPoints(pts_3d, rotVec(), transVec(), intrinsics(), dist_coeffs(), pts_2d);
}
Now I have mixed feelings about the output. When I draw the projected point on each view, they all correspond, BUT no matter where I click first in the "canonical view", the projected point is always the same.
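For reference, here is a minimal sketch of the usual pixel-to-ray back-projection with the same kind of calibration data (an illustration, not the code above; it assumes the pixel is already undistorted and that all Mats are CV_64F, with rvec/tvec mapping world points into the camera frame):

#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>

// Returns the camera centre and the ray direction, both in world
// coordinates, for a pixel 'px'. K is the 3x3 intrinsics matrix.
void pixelToWorldRay(const cv::Point2f &px, const cv::Mat &K,
                     const cv::Mat &rvec, const cv::Mat &tvec,
                     cv::Point3d &origin, cv::Point3d &dir)
{
    cv::Mat R;
    cv::Rodrigues(rvec, R);                 // 3x3 rotation, world -> camera

    // Camera centre in world coordinates: C = -R^T * t
    cv::Mat C = -R.t() * tvec;
    origin = cv::Point3d(C.at<double>(0), C.at<double>(1), C.at<double>(2));

    // Normalised camera coordinates of the pixel: K^-1 * (u, v, 1)^T,
    // rotated into the world frame and normalised to unit length.
    cv::Mat p = (cv::Mat_<double>(3, 1) << px.x, px.y, 1.0);
    cv::Mat d = R.t() * (K.inv() * p);
    d /= cv::norm(d);
    dir = cv::Point3d(d.at<double>(0), d.at<double>(1), d.at<double>(2));
}

A single pixel only defines such a ray; intersecting it with the scene (or with a known plane) gives a 3D point that can then be fed to the 3D -> 2D overload above for the other views.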
Related
I am trying to project 3D (x,y,z) axes via the openCV projectPoints function onto a chessboard after calibration, but every time I run my code, the axes all point to a specific projected image point on the screen.
[example output image]
The cameraMatrix and distCoeffs are read in from data that was written to a file during calibration. They are:
CameraMatrix[1372.852997982289, 0, 554.2708806543288;
0, 1372.852997982289, 906.4327368600385;
0, 0, 1]
distCoeff[0.02839203221556521;
0.442572399014994;
-0.01755006951285373;
-0.0008989327508155589;
-1.836490953232962]
The rotation and translation values are computed in real time via solvePnP every time a bool is turned on via a keypress. An example of their output values:
R =
[-0.9065211432378315;
0.3787201875924527;
-0.2788943269946833]
T =
[-0.4433059282649063;
-0.6745750872705997;
1.13753594660495]
While solvePnP is being computed, I press another key to draw the 3D axes from the origin, as written in the code below, and the rotation and translation values are passed into the projectPoints function. However, the axesProjectedPoints image points output for each axis are always very similar and in the range of:
[100.932, 127.418]
[55.154, 157.192]
[70.3054, 162.585]
Note that axesProjectedPoints is declared outside the loop as vector<Point2f> axesProjectedPoints.
The reprojection error is fairly good, and under 1 pixel.
The projectPoints code:
if (found) {
    // read calibration data -- function that reads calibration data saved to a file
    readCameraConfig(cameraMatrix, distCoeffs);

    // draw corners
    drawChessboardCorners(convertedImage, patternsize, corners, found);

    // draw 3D axes using projectPoints
    // used 0.04 because the chessboard square is 0.01778 m
    std::vector<Point3f> axis;
    axis.push_back(cv::Point3f(0.04, 0, 0));
    axis.push_back(cv::Point3f(0, 0.04, 0));
    axis.push_back(cv::Point3f(0, 0, 0.04));

    // rotation_values and translation_values are the outputs of the OpenCV solvePnP
    // function, which is computed separately in real time every time I press a key
    projectPoints(axis, rotation_values, translation_values, cameraMatrix, distCoeffs, axesProjectedPoints);

    cout << "image points" << endl;
    for (auto &n : axesProjectedPoints) {
        cout << n << endl;
    }

    cv::line(convertedImage, corners[0], axesProjectedPoints[0], {255,0,0}, 5);
    cv::line(convertedImage, corners[0], axesProjectedPoints[1], {0,255,0}, 5);
    cv::line(convertedImage, corners[0], axesProjectedPoints[2], {0,0,255}, 5);
}
The solvePnP part of the code:
/* calculate board's pose (rotation and translation) */
bool flag = false;
if (flag) {
    printf("Calculating board's pose (rotation and translation) ...\n");

    // read calibration data
    readCameraConfig(cameraMatrix, distCoeffs);

    // create undistorted corners or image points
    undistortPoints(corners, imagePoints, cameraMatrix, distCoeffs);

    //cout << "POINTS" << endl;
    std::vector<Point3d> objp;
    for (auto &i : points) {
        objp.push_back(i);
        //cout << i << endl;
    }

    //cout << "CORNERS" << endl;
    std::vector<Point2d> imagep;
    for (auto &j : imagePoints) {
        imagep.push_back(j);
        //cout << j << endl;
    }

    cout << "point size" << endl;
    cout << objp.size() << endl;

    // calculate pose
    solvePnP(objp, imagep, cameraMatrix, distCoeffs, rotation_values, translation_values, true, SOLVEPNP_ITERATIVE);

    // print rotation and translation values
    cout << "R = " << endl << " " << rotation_values << endl << endl;
    cout << "T = " << endl << " " << translation_values << endl << endl;
}
}
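For completeness, the object points referenced above as points are not shown in the post; they are typically generated from the board geometry roughly like this (a sketch; the helper name is made up, and patternsize and the 0.01778 m square size are taken from the comments above):

// Hypothetical helper: build the 3D chessboard corner positions on the
// Z = 0 plane, in the same order that findChessboardCorners returns them.
std::vector<cv::Point3d> makeObjectPoints(cv::Size patternsize, double squareSize)
{
    std::vector<cv::Point3d> objp;
    for (int r = 0; r < patternsize.height; ++r)
        for (int c = 0; c < patternsize.width; ++c)
            objp.push_back(cv::Point3d(c * squareSize, r * squareSize, 0.0));
    return objp;
}

With object points in metres like this, the 0.04 m axis length above is a bit over two squares long. Note also that cv::undistortPoints returns normalised coordinates unless a new projection matrix is passed, so the usual pattern is to hand solvePnP the raw detected pixel corners together with distCoeffs.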
I've been trying to make a simple collision system for my 3D game. I'm creating a bounding sphere like this:
struct ACollSphr
{
    glm::vec3* pos;
    float radius;
};
And in the while loop that renders my game, in main, I give them a position like this:
for (unsigned int i = 0; i < meshModelMatrices2.size(); i++)
{
    Ackerfe::ACollSphr tempSphr;
    glm::vec3 *temporary = new glm::vec3(meshRenderer2.getBoundingSpherePos(*meshRenderer2.getMesh()) * glm::vec3(vec[i][12], vec[i][13], vec[i][14]));
    tempSphr.pos = temporary;
    radius = meshRenderer2.getBoundingSphereRadius(*meshRenderer2.getMesh(), *tempSphr.pos);
    tempSphr.radius = radius;
    meshSphr.push_back(tempSphr);
    //std::cout << pos.x << " " << pos.y << " " << pos.z << std::endl;
    //std::cout << vec[i][12] << " " << vec[i][13] << " " << vec[i][14] << std::endl;
}
Here meshSphr is a vector of spheres and meshRenderer2 is the renderer I use for the meshes I'm loading. Basically, I get the mesh, pass it to getBoundingSpherePos to get that mesh's position, multiply it by a glm::vec3 made up of the position values inside each mesh's model matrix, get the radius, and put the newly created sphere into the meshSphr vector (which I clear after using it for the collision checks so it can be repopulated on the next iteration).
My collision check looks like this:
for (unsigned int i = 0; i < meshSphr.size(); i++)
{
    if (Ackerfe::sphrSphrColl(camera3D.getSphr(), &meshSphr[i]))
    {
        camera3D.changePosition(camera3D.getPosition() + glm::vec3(-5.0f));
    }
}
and my sphrSphrColl function looks like this:
bool sphrSphrColl(ACollSphr *first, ACollSphr *second)
{
    if (fabs((first->pos->x - second->pos->x) * (first->pos->x - second->pos->x) +
             (first->pos->y - second->pos->y) * (first->pos->y - second->pos->y) +
             (first->pos->z - second->pos->z) * (first->pos->z - second->pos->z) < (first->radius + second->radius) * (first->radius + second->radius)))
    {
        //std::cout << "DISTANCE: " << std::endl << glm::length(*first->pos - *second->pos) << std::endl << std::endl << std::endl << std::endl;
        return true;
    }
    return false;
}
I'm checking the position of the bounding spheres with a cout in my main while loop and the positions are registered correctly, but when I pass them to the sphrSphrColl function it only seems to put a bounding sphere at the origin and that's it.
My question: any idea why it's doing that? While I do have a mesh at the origin, why does it only put the bounding sphere there? It's almost as if the pointers aren't getting updated when I pass them to the function.
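For comparison, a squared-distance sphere test is usually written with the comparison kept outside any fabs() call; a minimal sketch using glm:

#include <glm/glm.hpp>

// Sketch: squared-distance sphere/sphere overlap test.
bool spheresOverlap(const glm::vec3 &aPos, float aRadius,
                    const glm::vec3 &bPos, float bRadius)
{
    glm::vec3 d = aPos - bPos;
    float distSq = glm::dot(d, d);             // squared distance, never negative
    float radiusSum = aRadius + bRadius;
    return distSq < radiusSum * radiusSum;     // compare squared quantities directly
}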
I'd like to perform the following action using CGAL:
Intersect one surface mesh (the cut mesh) with another surface mesh (the cutted mesh), in order to divide the cutted mesh into 2 different meshes.
To do so I use the CGAL mesh slicer, which gives me a set of polylines. Unfortunately, the slicer doesn't tell me which face of the cutted mesh each polyline belongs to; it only performs the slicing operation itself.
How can I retrieve this information?
I need it to subdivide the cutted mesh along the polylines and then split it into 2 separate meshes.
In my context the cut mesh is a surface mesh like this:
Here is the cut mesh + the cutted mesh:
In my code, I generate a Plane from each face of the cut mesh and perform a slice operation with the cutted mesh.
Here is my code:
// Import off files and instantiate meshes
const char* filename1 = "test.off";
const char* filename2 = "cutMesh2.off";
std::ifstream input(filename1);
Mesh cuttedMesh, cutMesh;
if (!input || !(input >> cuttedMesh))
{
    std::cerr << "First mesh is not a valid off file." << std::endl;
    return 0;
}
input.close();
input.open(filename2);
if (!input || !(input >> cutMesh))
{
    std::cerr << "Second mesh is not a valid off file." << std::endl;
    return 0;
}
input.close();

// AABB slicer constructor from the cutted mesh
AABB_tree tree(edges(cuttedMesh).first, edges(cuttedMesh).second, cuttedMesh);
tree.accelerate_distance_queries();
CGAL::Polygon_mesh_slicer<Mesh, K> slicer_aabb(cuttedMesh, tree);

std::cout << cutMesh.num_vertices() << std::endl;

// For each face of the cut mesh
BOOST_FOREACH(face_descriptor f, faces(cutMesh))
{
    std::cout << "Face " << f << std::endl;
    Point points[3];
    int i = 0;

    // for each point of the current face
    BOOST_FOREACH(vertex_descriptor v, CGAL::vertices_around_face(cutMesh.halfedge(f), cutMesh))
    {
        points[i] = cutMesh.point(v);
        ++i;
    }

    Polylines polylines;

    // Perform the slice between the current face of the cut mesh and the cutted mesh
    slicer_aabb(K::Plane_3(points[0], points[1], points[2]), std::back_inserter(polylines));
    std::cout << "the slicer intersects " << polylines.size() << " polylines" << std::endl;

    // for each polyline computed by this face of the cut mesh
    BOOST_FOREACH(Polyline_type polyline, polylines)
    {
        std::cout << "Polyline : " << polyline.size() << " points" << std::endl;
        BOOST_FOREACH(Point point, polyline)
        {
            std::cout << "Point : " << point << std::endl;
        }
    }
    std::cout << std::endl;
    polylines.clear();
}
Thanks for your help.
The answer to this post was proposed by sloriot:
Try using the non-documented clip() function located in Polygon_mesh_processing/include/CGAL/Polygon_mesh_processing/internal/clip.h
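For reference, a minimal sketch of how that clip() call is typically used: in more recent CGAL releases the function is documented as CGAL::Polygon_mesh_processing::clip(), but the exact header, overloads and preconditions (e.g. the clipper usually has to be a closed triangle mesh bounding a volume) depend on the CGAL version, so treat the following as an assumption. It reuses the Mesh typedef from the code above:

#include <CGAL/Polygon_mesh_processing/clip.h>   // public header in CGAL >= 4.13

// Sketch: clip cuttedMesh with cutMesh. clip_volume(false) leaves the
// clipped surface open instead of closing it along the cut.
namespace PMP = CGAL::Polygon_mesh_processing;
bool ok = PMP::clip(cuttedMesh, cutMesh, PMP::parameters::clip_volume(false));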
I am using VTK to generate a 3D Delaunay triangulation from a list of input points obtained with a terrestrial laser scanner. So far, I have successfully generated some 3D triangulations and saved them in the .vtk or .vtu file formats. However, I need to save them in one of the popular formats such as .ply, .stl, .obj or .wrl. Could you please tell me how I can save the 3D triangulation in one of these formats? My other question is about the SetAlpha and SetTolerance parameters; could you please explain these parameters in detail?
Thanks in advance
int main(int argc, char* argv[])
{
    // load the point cloud
    vtkSmartPointer<vtkSimplePointsReader> reader = vtkSmartPointer<vtkSimplePointsReader>::New();
    reader->SetFileName("kucuk50k.xyz");
    reader->Update();
    vtkPolyData* polydata = reader->GetOutput();
    std::cout << "The point cloud is loaded" << std::endl;
    // end of point cloud loading
    std::cout << "----------------------------------------------" << std::endl;

    // Generate a mesh from the input points. If Alpha is non-zero, then
    // tetrahedra, triangles, edges and vertices that lie within the
    // alpha radius are output.
    std::cout << "Start delaunay 3d triangulation" << std::endl;
    vtkSmartPointer<vtkDelaunay3D> delaunay3DAlpha = vtkSmartPointer<vtkDelaunay3D>::New();
    delaunay3DAlpha->SetInputConnection(reader->GetOutputPort());
    delaunay3DAlpha->SetAlpha(0.01);
    std::cout << "3d Delaunay computed" << std::endl;
    std::cout << "----------------------------------------------" << std::endl;

    std::cout << "Start writing the triangulation" << std::endl;
    vtkSmartPointer<vtkXMLUnstructuredGridWriter> ugWriter = vtkSmartPointer<vtkXMLUnstructuredGridWriter>::New();
    ugWriter->SetInputConnection(delaunay3DAlpha->GetOutputPort());
    ugWriter->SetFileName("delaunayy_50k.vtk");
    ugWriter->Write();
    std::cout << "VTK file created" << std::endl;

    return EXIT_SUCCESS;
}
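Regarding the export question, a sketch of one common route (an assumption, not from the original post): vtkDelaunay3D outputs an unstructured grid of tetrahedra, so the polygonal writers need the boundary surface to be extracted first, e.g. with vtkDataSetSurfaceFilter, and the result can then be written with vtkPLYWriter (vtkSTLWriter works the same way). This would slot in before the return statement above and needs the vtkDataSetSurfaceFilter.h and vtkPLYWriter.h headers:

    // Extract the outer surface of the tetrahedral mesh as polydata
    vtkSmartPointer<vtkDataSetSurfaceFilter> surface = vtkSmartPointer<vtkDataSetSurfaceFilter>::New();
    surface->SetInputConnection(delaunay3DAlpha->GetOutputPort());

    // Write the surface as a PLY file
    vtkSmartPointer<vtkPLYWriter> plyWriter = vtkSmartPointer<vtkPLYWriter>::New();
    plyWriter->SetInputConnection(surface->GetOutputPort());
    plyWriter->SetFileName("delaunay_50k.ply");
    plyWriter->Write();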
You are trying to compute a triangular surface from a scanned terrain, right? Is there a good reason to use 3D Delaunay, which creates tetrahedra, instead of 2.5D Delaunay? You may have a look at my Fade2.5D library; there is a free student version. A limitation of 2.5D, however, is that you can't represent vertical walls, so buildings are not directly supported.
I've successfully parsed the IQE (Inter-Quake Export) format and now I'm stuck at displaying it in bind pose.
All vertices have a weird transformation, as if the root bone (which covers the full mesh for orientation) were not the only bone with influence on those vertices. You can see it at the arm / shoulder / neck area of the mesh. This mesh has 3 bones: one root bone covering the whole mesh and two arm bones.
You can see how the mesh should look in the background (exported as OBJ).
For better understanding, I have the following system:
1. I load all vertex data into one big VBO (vertices, UVs, normals, tangents, bitangents, boneIndices(4) (indices into the joint list) and boneWeights(4)).
2. I add all joints to a joint list and build a tree (a simple linked list with position, rotation and a parent pointer); see the sketch after this list.
3. I have a separate list called boneMatrices where I store... well, my bone matrices. Currently this happens every frame; later I will precalculate the matrices for each animation frame.
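For reference, the joint data used in the loop below looks roughly like this (a simplified sketch using glm types; the actual pixel::CJoint and pixel::matrix4 may differ in detail):

#include <string>
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

// Simplified sketch of a joint: local bind translation/rotation relative
// to the parent, a parent pointer, and the computed bind-pose matrices.
struct CJoint
{
    std::string  name;
    glm::vec3    position;     // local bind translation (relative to parent)
    glm::quat    rotation;     // local bind rotation
    CJoint      *parent = nullptr;
    glm::mat4    bindPose;     // joint -> model space (global bind pose)
    glm::mat4    invBindPose;  // model -> joint space
};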
I try to calculate the bone matrix the following way:
for (int i = 0; i < this->jointList.size(); i++)
{
    pixel::CJoint *joint = this->jointList.at(i);

    std::cout << "Joint ------- " << joint->name << " -------- values: \n";
    std::cout << "Translation: " << joint->position.x << " " << joint->position.y << " " << joint->position.z << "\n";
    std::cout << "Quaternion: " << joint->rotation.x << " " << joint->rotation.y << " " << joint->rotation.z << " " << joint->rotation.w << "\n";

    pixel::matrix4 rotation = pixel::CMatrix::fromQuaternion(joint->rotation);
    pixel::matrix4 offset = pixel::CMatrix::translateMatrix(joint->position);
    pixel::matrix4 baseMatrix = rotation * offset; // translation * rotation

    joint->bindPose = baseMatrix;
    joint->invBindPose = pixel::CMatrix::inverseMatrix(baseMatrix);

    if (joint->parent != NULL)
    {
        std::cout << "Joint: " << joint->name << " is child of " << joint->parent->name << " \n";
        joint->bindPose = joint->bindPose * joint->parent->invBindPose;
        joint->invBindPose = pixel::CMatrix::inverseMatrix(joint->bindPose);
    }

    std::cout << "\n";
}
I store the transpose of joint->invBindPose (otherwise the mesh is upside down) in boneMatrices and send it to the shader:
boneMatrices is a std::vector of matrix4:
this->material.setParameter("boneMatrix", this->boneMatrices.at(0), this->boneMatrices.size());
The root bone bind calculation has to be right (at least I think) because the head is at the right place, and the eyes too (which don't have any bone influence currently).
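For comparison, here is how the global bind pose and inverse bind pose are conventionally accumulated from local joint transforms (a sketch using glm and the CJoint fields from the sketch above, assuming parents appear in jointList before their children; not a statement about where the error is):

#include <vector>
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Sketch: accumulate global bind poses parent-first, then invert once.
void computeBindPoses(std::vector<CJoint *> &jointList)
{
    for (CJoint *joint : jointList)
    {
        // Local bind transform relative to the parent: translate, then rotate
        glm::mat4 local = glm::translate(glm::mat4(1.0f), joint->position) *
                          glm::mat4_cast(joint->rotation);

        joint->bindPose = (joint->parent != nullptr)
                              ? joint->parent->bindPose * local
                              : local;
        joint->invBindPose = glm::inverse(joint->bindPose);
    }
}

The per-vertex skinning matrix for a pose is then globalPoseMatrix * invBindPose, which reduces to the identity in pure bind pose, a handy sanity check.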