I've successfully parsed the IQE (Inter-Quake Export) format and now I'm stuck at displaying the mesh in bind pose.
All vertices have a weird transformation wherever the root bone (which covers the full mesh for orientation) is not the only bone influencing the vertex. You can see it at the arm / shoulder / neck area of the mesh. This mesh has 3 bones: one root bone covering the whole mesh and two arm bones.
You can see how the mesh should look in the background (exported as OBJ).
For better understanding, I have the following system:
1. I load all vertex data into one big VBO (vertices, UVs, normals, tangents, bitangents, boneIndices[4] (indices into the joint list) and boneWeights[4]).
2. I add all joints to a joint list and create a tree out of them (a simple linked structure with position, rotation and a parent pointer; see the sketch after this list).
3. I have a separate list called boneMatrices where I store, well, my bone matrices. Currently I recalculate them every frame; later I will precalculate the matrices for each animation frame.
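A minimal sketch of that joint node, with glm types standing in for the engine's own vector/quaternion/matrix classes (an illustration, not the actual pixel:: code):

#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>
#include <string>

// Sketch of the joint node described in step 2 (glm types are stand-ins).
struct Joint
{
    std::string name;
    glm::vec3 position;      // translation relative to the parent joint
    glm::quat rotation;      // rotation relative to the parent joint
    Joint *parent;           // nullptr for the root bone
    glm::mat4 bindPose;      // joint-local -> model space in bind pose
    glm::mat4 invBindPose;   // model space -> joint-local (sent to the shader)
};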
I try to calculate the bone matrix the following way:
for (int i = 0; i < this->jointList.size(); i++)
{
    pixel::CJoint *joint = this->jointList.at(i);

    std::cout << "Joint ------- " << joint->name << " -------- values: \n";
    std::cout << "Translation: " << joint->position.x << " " << joint->position.y << " " << joint->position.z << "\n";
    std::cout << "Quaternion: " << joint->rotation.x << " " << joint->rotation.y << " " << joint->rotation.z << " " << joint->rotation.w << "\n";

    pixel::matrix4 rotation = pixel::CMatrix::fromQuaternion(joint->rotation);
    pixel::matrix4 offset = pixel::CMatrix::translateMatrix(joint->position);
    pixel::matrix4 baseMatrix = rotation * offset; // rotation * translation

    joint->bindPose = baseMatrix;
    joint->invBindPose = pixel::CMatrix::inverseMatrix(baseMatrix);

    if (joint->parent != NULL)
    {
        std::cout << "Joint: " << joint->name << " is child of " << joint->parent->name << " \n";
        joint->bindPose = joint->bindPose * joint->parent->invBindPose;
        joint->invBindPose = pixel::CMatrix::inverseMatrix(joint->bindPose);
    }

    std::cout << "\n";
}
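For comparison, the conventional way to build a bind-pose hierarchy is to accumulate world = parentWorld * local down the tree and invert once at the end. A sketch using the same pixel:: helpers, assuming joints are ordered parent-first and a column-vector convention where the local transform is translation * rotation:

// Conventional accumulation (sketch): a joint's world bind pose is its
// parent's world bind pose times its own local transform; the inverse
// bind pose is the inverse of that world matrix, computed once.
for (pixel::CJoint *joint : this->jointList) // parents before children
{
    pixel::matrix4 local = pixel::CMatrix::translateMatrix(joint->position)
                         * pixel::CMatrix::fromQuaternion(joint->rotation);

    joint->bindPose = (joint->parent != NULL)
                    ? joint->parent->bindPose * local
                    : local;
    joint->invBindPose = pixel::CMatrix::inverseMatrix(joint->bindPose);
}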
I store the transpose of joint->invBindPose (otherwise the mesh is upside down) in boneMatrices, which is a std::vector of matrix4, and send it to the shader:

this->material.setParameter("boneMatrix", this->boneMatrices.at(0), this->boneMatrices.size());
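For context, what the shader is then expected to do with these matrices is standard linear-blend skinning. A minimal CPU-side sketch with glm stand-in types (in bind pose the weighted sum should reproduce the original vertex, so every weighted combination of matrices must come out as the identity):

#include <glm/glm.hpp>
#include <vector>

// Linear-blend skinning sketch using the per-vertex boneIndices/boneWeights
// from step 1: each vertex is transformed by the weighted sum of its four
// influencing bone matrices.
glm::vec4 skinVertex(const glm::vec4 &position,
                     const glm::ivec4 &boneIndices,
                     const glm::vec4 &boneWeights,
                     const std::vector<glm::mat4> &boneMatrices)
{
    glm::vec4 skinned(0.0f);
    for (int i = 0; i < 4; ++i)
        skinned += boneWeights[i] * (boneMatrices[boneIndices[i]] * position);
    return skinned;
}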
The root bone bind calculation has to be right (at least I think) because the head is in the right place, and so are the eyes (which currently have no bone influence).
I've been trying to make a simple collision system for my 3D game. I'm creating a bounding sphere like this:
struct ACollSphr
{
    glm::vec3* pos;
    float radius;
};
And in the while loop that renders my game, in main, I give them a position like this:
for (unsigned int i = 0; i < meshModelMatrices2.size(); i++)
{
    Ackerfe::ACollSphr tempSphr;
    glm::vec3 *temporary = new glm::vec3(meshRenderer2.getBoundingSpherePos(*meshRenderer2.getMesh()) * glm::vec3(vec[i][12], vec[i][13], vec[i][14]));
    tempSphr.pos = temporary;
    radius = meshRenderer2.getBoundingSphereRadius(*meshRenderer2.getMesh(), *tempSphr.pos);
    tempSphr.radius = radius;
    meshSphr.push_back(tempSphr);
    //std::cout << pos.x << " " << pos.y << " " << pos.z << std::endl;
    //std::cout << vec[i][12] << " " << vec[i][13] << " " << vec[i][14] << std::endl;
}
Here meshSphr is a vector of spheres and meshRenderer2 is the renderer I use for the meshes I'm loading. Basically I get the mesh, pass it to getBoundingSpherePos and get the position of that mesh, then multiply it by a glm::vec3 made up of the position values inside the model matrix of each mesh. After that I get the radius and put the newly created sphere into the meshSphr vector (which I clear after using it for the collision checks, so it can be repopulated in the next iteration).
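(As an aside, in a column-major 4×4 model matrix stored as a flat array, elements 12–14 do hold the translation, which is what the vec[i][12..14] indexing above relies on. A minimal illustration with glm:)

#include <glm/glm.hpp>

// Reading the translation out of a column-major model matrix; equivalent
// to elements 12..14 of the flat 16-float array used above.
glm::vec3 translationOf(const glm::mat4 &model)
{
    return glm::vec3(model[3]); // the fourth column holds the translation
}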
My collision check looks like this:
for (unsigned int i = 0; i < meshSphr.size(); i++)
{
    if (Ackerfe::sphrSphrColl(camera3D.getSphr(), &meshSphr[i]))
    {
        camera3D.changePosition(camera3D.getPosition() + glm::vec3(-5.0f));
    }
}
And my sphrSphrColl function looks like this:
bool sphrSphrColl(ACollSphr *first, ACollSphr *second)
{
    float distSq = (first->pos->x - second->pos->x) * (first->pos->x - second->pos->x) +
                   (first->pos->y - second->pos->y) * (first->pos->y - second->pos->y) +
                   (first->pos->z - second->pos->z) * (first->pos->z - second->pos->z);
    float radiusSum = first->radius + second->radius;

    if (distSq < radiusSum * radiusSum) // squared distance vs squared radius sum
    {
        //std::cout <<"DISTANCE: "<<std::endl<<glm::length(*first->pos - *second->pos) << std::endl << std::endl << std::endl << std::endl;
        return true;
    }
    return false;
}
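For reference, the same test written with value semantics (storing the centre as a plain glm::vec3 instead of a raw pointer) removes the lifetime questions entirely. A minimal sketch with hypothetical names:

#include <glm/glm.hpp>

struct CollSphereByValue // hypothetical value-based variant of ACollSphr
{
    glm::vec3 pos; // centre stored by value, no ownership or lifetime issues
    float radius;
};

// True when the squared centre distance is below the squared radius sum.
bool spheresOverlap(const CollSphereByValue &a, const CollSphereByValue &b)
{
    glm::vec3 d = a.pos - b.pos;
    float rSum = a.radius + b.radius;
    return glm::dot(d, d) < rSum * rSum;
}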
I'm checking the positions of the bounding spheres with a cout in my main while loop and they are registered correctly, but when I pass them to the sphrSphrColl function it only seems to put a bounding sphere at the origin and that's it.
My question: any idea why it's doing that? While I do have a mesh at the origin, why does it only put the bounding sphere there? It's almost as if the pointers aren't getting updated when I pass them to the function.
I am trying to generate normal maps given a mesh, camera pose, and camera intrinsics.
My plan is to calculate the vertex normal for each point in the cloud, then project the normals onto an image plane with the corresponding camera pose and intrinsics. More specifically, I would first calculate the vertex normals, then convert the point coordinates from world coordinates into camera coordinates with the camera pose. Finally, using the camera intrinsics, the point cloud can be projected onto an image where each pixel represents the surface normal of the corresponding 3D vertex.
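A minimal sketch of those last two steps in plain Eigen (an illustration of the plan, not a PCL API; K is the 3×3 intrinsic matrix, and the point is assumed to be already transformed into camera coordinates):

#include <Eigen/Dense>

// Pinhole projection of a camera-space point with intrinsics K.
Eigen::Vector2i projectPoint(const Eigen::Matrix3f &K, const Eigen::Vector3f &pCam)
{
    Eigen::Vector3f uvw = K * pCam; // homogeneous pixel coordinates
    return Eigen::Vector2i((int)(uvw.x() / uvw.z()),  // u = fx*x/z + cx
                           (int)(uvw.y() / uvw.z())); // v = fy*y/z + cy
}

// Map a unit normal's xyz from [-1, 1] to RGB channels in [0, 255].
Eigen::Vector3i normalToRGB(const Eigen::Vector3f &n)
{
    Eigen::Array3f mapped = (n.array() * 0.5f + 0.5f) * 255.0f;
    return mapped.cast<int>().matrix();
}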
Below is my code:
#include <iostream>
#include <thread>

#include <pcl/io/ply_io.h>
#include <pcl/point_types.h>
#include <pcl/features/from_meshes.h>
#include <pcl/visualization/pcl_visualizer.h>

using namespace std;
using namespace pcl;

void readPLY(PolygonMesh::Ptr mesh, string fname, bool printResult=false)
{
    PLYReader reader;
    int success = reader.read(fname, *mesh); // load the file
    if (success == -1) {
        cout << "Couldn't read file " << fname << endl;
        exit(-1);
    }

    if (printResult) {
        cout << "Loaded "
             << mesh->cloud.width * mesh->cloud.height
             << " data points from "
             << fname
             << " with the following fields: "
             << endl;

        // convert from pcl/PCLPointCloud2 to pcl::PointCloud<T>
        PointCloud<PointXYZ>::Ptr cloud (new PointCloud<PointXYZ>);
        fromPCLPointCloud2(mesh->cloud, *cloud);

        // print the first 10 vertices
        cout << "Vertices:" << endl;
        for (size_t i=0; i<10; ++i)
            cout << " " << cloud->points[i].x
                 << " " << cloud->points[i].y
                 << " " << cloud->points[i].z << endl;

        // print the first 10 polygons
        cout << "Polygons:" << endl;
        for (size_t i=0; i<10; ++i) {
            cout << mesh->polygons[i] << endl;
        }
    }
}

void computeNormal(PolygonMesh::Ptr mesh,
                   PointCloud<Normal>::Ptr normal,
                   bool printResult=false)
{
    // convert from pcl/PCLPointCloud2 to pcl::PointCloud<T>
    PointCloud<PointXYZ>::Ptr cloud (new PointCloud<PointXYZ>);
    fromPCLPointCloud2(mesh->cloud, *cloud);

    // compute surface normals
    pcl::features::computeApproximateNormals(*cloud, mesh->polygons, *normal);

    // print results
    if (printResult) {
        cout << "Normal cloud contains "
             << normal->width * normal->height
             << " points" << endl;

        // print the first 10 vertex normals
        cout << "Vertex normals:" << endl;
        for (size_t i=0; i<10; ++i)
            cout << " " << normal->points[i] << endl;
    }
}
int main (int argc, char** argv)
{
    // usage: ./main [path/to/ply] (--debug)
    if (argc < 2) {
        cout << "Usage: " << argv[0] << " [path/to/ply] (--debug)" << endl;
        return -1;
    }
    string fname = argv[1];

    // check if the debug flag is set
    bool debug = false;
    for (int i=0; i<argc; ++i) {
        string arg = argv[i];
        if (arg == "--debug")
            debug = true;
    }

    // read file
    PolygonMesh::Ptr mesh (new PolygonMesh);
    readPLY(mesh, fname, debug);

    // calculate normals
    PointCloud<Normal>::Ptr normal (new PointCloud<Normal>);
    computeNormal(mesh, normal, debug);
}
Currently, I have already obtained a surface normal for each vertex with pcl::features::computeApproximateNormals. Is there a way to use PCL to project the normals onto an image plane, with the xyz-elements of each normal mapped to the RGB channels, and save the image to a file?
Welcome to Stack Overflow. What the documentation says is:
Given a geometric surface, it’s usually trivial to infer the direction of the normal at a certain point on the surface as the vector perpendicular to the surface in that point.
From what I gather, you already have the surfaces, for which you can directly calculate surface normals. Normal estimation is used because 3D point cloud data is basically a bunch of sample points from the real world; you do not have surface information in that kind of data. What you do is estimate a surface around each point using planar fitting (2D regression), and then obtain the surface normal from that fitted plane. You cannot compare these two methods: they essentially serve different purposes.
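For reference, this is roughly what plane-fitting normal estimation looks like with PCL's standard pcl::NormalEstimation class (a tutorial-style sketch; the 3 cm search radius is an assumption to be tuned to the cloud's scale):

#include <pcl/point_types.h>
#include <pcl/features/normal_estimation.h>
#include <pcl/search/kdtree.h>

// Estimate a normal per point by fitting a local plane to each point's
// neighbourhood (the planar-fitting approach described above).
void estimateNormals(pcl::PointCloud<pcl::PointXYZ>::ConstPtr cloud,
                     pcl::PointCloud<pcl::Normal>::Ptr normals)
{
    pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
    ne.setInputCloud(cloud);

    pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(
        new pcl::search::KdTree<pcl::PointXYZ>());
    ne.setSearchMethod(tree);
    ne.setRadiusSearch(0.03); // fit a plane to neighbours within 3 cm

    ne.compute(*normals);
}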
For question two: Yes. Refer to this SO answer.
I'd like to perform the following action using CGAL:
Intersect one surface mesh (the cut mesh) with another surface mesh (the cutted mesh) in order to divide the cutted mesh into two separate meshes.
To do so I use the CGAL mesh slicer, which gives me a set of polylines. Unfortunately, the slicer doesn't tell me which face of the cutted mesh each polyline belongs to. Moreover, it actually performs the slicing operation.
How can I retrieve this information?
I need it to subdivide the cutted mesh along the polylines and then split it into two separate meshes.
In my context the cut mesh is a surface mesh like this:
Here is the cut mesh + the cutted mesh:
In my code, I generate a plane from each face of the cut mesh and perform a slice operation on the cutted mesh.
Here is my code:
// Import OFF files and instantiate meshes
const char* filename1 = "test.off";
const char* filename2 = "cutMesh2.off";

std::ifstream input(filename1);
Mesh cuttedMesh, cutMesh;
if (!input || !(input >> cuttedMesh))
{
    std::cerr << "First mesh is not a valid off file." << std::endl;
    return 0;
}
input.close();

input.open(filename2);
if (!input || !(input >> cutMesh))
{
    std::cerr << "Second mesh is not a valid off file." << std::endl;
    return 0;
}
input.close();

// AABB slicer constructor from the cutted mesh
AABB_tree tree(edges(cuttedMesh).first, edges(cuttedMesh).second, cuttedMesh);
tree.accelerate_distance_queries();
CGAL::Polygon_mesh_slicer<Mesh, K> slicer_aabb(cuttedMesh, tree);

std::cout << cutMesh.num_vertices() << std::endl;

// For each face of the cut mesh
BOOST_FOREACH(face_descriptor f, faces(cutMesh))
{
    std::cout << "Face " << f << std::endl;

    Point points[3];
    int i = 0;
    // for each point of the current face
    BOOST_FOREACH(vertex_descriptor v, CGAL::vertices_around_face(cutMesh.halfedge(f), cutMesh))
    {
        points[i] = cutMesh.point(v);
        ++i;
    }

    Polylines polylines;
    // Perform the slice between the current face of the cut mesh and the cutted mesh
    slicer_aabb(K::Plane_3(points[0], points[1], points[2]), std::back_inserter(polylines));

    std::cout << "the slicer intersects " << polylines.size() << " polylines" << std::endl;
    // for each polyline computed by this face of the cut mesh
    BOOST_FOREACH(Polyline_type polyline, polylines)
    {
        std::cout << "Polyline : " << polyline.size() << " points" << std::endl;
        BOOST_FOREACH(Point point, polyline)
        {
            std::cout << "Point : " << point << std::endl;
        }
    }
    std::cout << std::endl;
    polylines.clear();
}
Thanks for your help.
The answer to this post was proposed by sloriot:
Try using the non-documented clip() function located in Polygon_mesh_processing/include/CGAL/Polygon_mesh_processing/internal/clip.h
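In recent CGAL releases that function has since been promoted to the documented CGAL::Polygon_mesh_processing::clip(). A minimal sketch of how it could be used here, assuming CGAL 5.x, triangulated meshes, and the question's Mesh typedef (the clipWithMesh wrapper name is hypothetical):

#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Surface_mesh.h>
#include <CGAL/Polygon_mesh_processing/clip.h>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Surface_mesh<K::Point_3> Mesh;

// Clip cuttedMesh with cutMesh in place; both meshes must be triangulated.
bool clipWithMesh(Mesh &cuttedMesh, Mesh &cutMesh)
{
    namespace PMP = CGAL::Polygon_mesh_processing;
    return PMP::clip(cuttedMesh, cutMesh,
                     PMP::parameters::clip_volume(false));
}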
I have the following function to handle points (among a set of points) whose type is Start_Vertex:
void handleStartVertex(Vertex vi)
{
    cout << "start Vertex begins ######################################################################################" << endl;
    cout << "Handling start " << vi << vertexType[vi.type] << endl;

    HalfEdge *ei = vi.incidentEdge;
    std::vector<HalfEdge>::iterator it, itprev;
    cout << "Origin of the incident Edge " << *(ei->origin) << endl;

    // insert ei in tow and set its helper to vi
    ei->setHelper(&vi);
    tow.push_back(*ei);

    cout << "Content of Tow in START_VERTEX" << endl;
    for (it = tow.begin(); it != tow.end(); it++)
        cout << "origin of edge " << *((*it).origin) << " Helper " << *((*it).helper) << endl;

    cout << "start Vertex stops ######################################################################################\n\n" << endl;
}
where tow is:
vector<HalfEdge> tow;
The problem: when I set the helper of an edge and push it into tow, the helper of every edge already in tow changes to that of the edge just pushed. I don't understand why this is happening; any workaround is appreciated. Here are some results that make the question clearer.
start Vertex begins ###########################################################################
Handling start (2,9) START_VERTEX
Origin of the incident Edge (2,9)
Content of Tow in START_VERTEX
origin of edge (6,12) Helper (2,9)
origin of edge (9,11) Helper (2,9)
origin of edge (2,9) Helper (2,9)
start Vertex stops ############################################################################
The correct result I should be getting is:
origin of edge (6,12) Helper (6,12)
origin of edge (9,11) Helper (9,11)
origin of edge (2,9) Helper (2,9)
The Vertex vi is destroyed on return from the function. You are setting the helper to point at the memory occupied by the local Vertex vi, and that memory happens to hold Vertex (2,9) when you do the last print. Pass a pointer to the Vertex as input to the function instead:
void handleStartVertex(Vertex* vi)
{
...
}
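Applied to the body of the function, the fix is mechanical (a sketch of only the lines that change; the caller must now pass the address of a Vertex that outlives tow):

void handleStartVertex(Vertex *vi)
{
    HalfEdge *ei = vi->incidentEdge;
    ei->setHelper(vi);   // store a pointer that survives the function call
    tow.push_back(*ei);
    // ... logging as before, with vi-> instead of vi.
}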
I've used OpenCV to calibrate my camera from different views and obtained intrinsics, rvec and tvec with a reprojection error of 0.03 px (I thus think the calibration is fine).
Now, given one view of my scene, I want to be able to click on a point and find its projection in the other views.
To do so, I use the following functions:
void Camera::project(const vector<cv::Point2f> &pts_2d, vector<cv::Point3f> &pts_3d)
{
    std::cout << "Start proj 2d -> 3d" << std::endl;

    cv::Mat pts_2d_homo;
    convertPointsToHomogeneous(pts_2d, pts_2d_homo);
    std::cout << "Cartesian to Homogeneous done!" << std::endl;

    // Project point to camera normalized coordinates
    cv::Mat unproj;
    cv::transform(pts_2d_homo, unproj, intrinsics().inv());
    std::cout << "Point unprojected: " << unproj.at<cv::Point3f>(0) << std::endl;

    // Undo model view transform
    unproj -= transVec();
    cv::Mat rot;
    cv::Rodrigues(rotVec(), rot);
    cv::transform(unproj, unproj, rot.t());
    unproj *= 1.f / cv::norm(unproj);
    std::cout << "Model view undone: " << unproj.at<cv::Point3f>(0) << std::endl;

    for (int i = 0; i < unproj.rows; ++i) {
        std::cout << "Inside for :" << unproj.at<cv::Point3f>(i, 0) << std::endl;
        pts_3d.push_back(unproj.at<cv::Point3f>(i, 0));
    }
}

void Camera::project(const vector<cv::Point3f> &pts_3d, vector<cv::Point2f> &pts_2d)
{
    cv::projectPoints(pts_3d, rotVec(), transVec(), intrinsics(), dist_coeffs(), pts_2d);
}
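For reference, the 2D-to-3D path above computes the following (in the code's own notation, with $K$ = intrinsics(), $t$ = transVec() and $R$ the matrix from cv::Rodrigues(rotVec())):

$$ X \;=\; \frac{R^{\top}\left(K^{-1}\,p_{\mathrm{homo}} - t\right)}{\left\lVert R^{\top}\left(K^{-1}\,p_{\mathrm{homo}} - t\right)\right\rVert} $$

i.e. the clicked pixel is back-projected at depth 1, mapped into world coordinates, and then normalized to unit length.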
Now I have mixed feelings about the output. When I draw the projected point in each view, they all correspond, BUT no matter where I first click in the "canonical view", the projected point is always the same.