CGAL: Get face data from surface mesh - C++

I'm trying to fill my own struct with data retrieved from a CGAL::Surface_mesh.
You can add a face to a surface mesh via:
CGAL::SM_Face_index face = SM_Surface_Mesh.add_face(SM_Vertex_Index, SM_Vertex_Index, SM_Vertex_Index);
...but how does one retrieve that face given the SM_Face_index? I've tried sifting through the documentation, but to no avail.
InteropMesh * outputMesh = new InteropMesh();
uint32_t num = mesh1.number_of_vertices();
outputMesh->vertexCount = num;
outputMesh->vertices = new InteropVector3[num];
for (Mesh::Vertex_index vd : mesh1.vertices())
{
uint32_t index = vd; //via size_t
Point data = mesh1.point(vd);
outputMesh->vertices[index].x = (float)data.x();
outputMesh->vertices[index].y = (float)data.y();
outputMesh->vertices[index].z = (float)data.z();
}
outputMesh->indices = new uint32_t[mesh1.number_of_faces() * 3];
for (CGAL::SM_Face_index fd : mesh1.faces())
{
//? How do I get the three vertex indices?
}

A simple way of getting vertex and face indices is as follows:
typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Surface_mesh<K::Point_3> Mesh;
Mesh sm;
// sm created here or as a result of some CGAL function
std::vector<float> verts;
std::vector<uint32_t> indices;
//Get vertices ...
for (Mesh::Vertex_index vi : sm.vertices()) {
K::Point_3 pt = sm.point(vi);
verts.push_back((float)pt.x());
verts.push_back((float)pt.y());
verts.push_back((float)pt.z());
}
//Get face indices ...
for (Mesh::Face_index face_index : sm.faces()) {
CGAL::Vertex_around_face_circulator<Mesh> vcirc(sm.halfedge(face_index), sm), done(vcirc);
do indices.push_back(*vcirc++); while (vcirc != done);
}
This example assumes triangle output (i.e. every group of 3 indices describes a triangle), although faces could have more than 3 indices, as Andry points out.
Another function should be added to check the face index count and split faces into triangles if there are more than 3 indices.
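A sketch of such a splitting step (not part of the original answer; it assumes convex faces so a simple triangle fan is valid, and appendFaceAsTriangles is a hypothetical helper name) could collect each face's vertices with CGAL::vertices_around_face and fan them out:
void appendFaceAsTriangles(const Mesh& sm, Mesh::Face_index fd, std::vector<uint32_t>& indices)
{
    // gather the face's vertex indices in boundary order
    std::vector<uint32_t> face;
    for (Mesh::Vertex_index vi : CGAL::vertices_around_face(sm.halfedge(fd), sm))
        face.push_back((uint32_t)vi);
    // fan triangulation (v0, vi, vi+1); a 3-vertex face yields exactly one triangle
    for (std::size_t i = 1; i + 1 < face.size(); ++i)
    {
        indices.push_back(face[0]);
        indices.push_back(face[i]);
        indices.push_back(face[i + 1]);
    }
}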

The Surface_mesh data structure can represent more than just triangle meshes, meaning you might have more than 3 vertices per face.
Once you get a face, you can navigate on its boundary edges and get the source and target vertices.
For example you can do:
Surface_mesh::Halfedge_index hf = sm.halfedge(fi);
for(Surface_mesh::Halfedge_index hi : halfedges_around_face(hf, sm))
{
Surface_mesh::Vertex_index vi = target(hi, sm);
}
You can also do it by hand:
Surface_mesh::Halfedge_index hstart = sm.halfedge(fi), hi = hstart;
do {
Surface_mesh::Vertex_index vi = target(hi, sm);
hi = sm.next(hi);
} while (hi != hstart);
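Applied to the loop from the question, a sketch (assuming every face is a triangle; InteropMesh and outputMesh are the asker's own types) might look like:
uint32_t i = 0;
for (Mesh::Face_index fd : mesh1.faces())
{
    // walk the three halfedges bounding this face and record each target vertex
    Mesh::Halfedge_index h = mesh1.halfedge(fd);
    for (int k = 0; k < 3; ++k)
    {
        outputMesh->indices[i++] = (uint32_t)mesh1.target(h);
        h = mesh1.next(h);
    }
}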

Related

Building the set of unique vertices gives trashed result with random triangles around initial mesh

I have a problem with creating a set of unique vertices containing position coordinates, texture coordinates and normals. I decided to use std::unordered_set for this kind of problem. The problem is that it somehow does not create the correct set of vertices. The data I loaded from the .obj file is 100% correct, as it renders the mesh as I expected, but the new set of data just creates a blob of triangles.
Correct result using only Positions of vertices, expected result:
And generated data:
The code routine:
struct vertex
{
glm::vec3 Position;
glm::vec2 TextureCoord;
glm::vec3 Normal;
bool operator==(const vertex& rhs) const
{
bool Res1 = this->Position == rhs.Position;
bool Res2 = this->TextureCoord == rhs.TextureCoord;
bool Res3 = this->Normal == rhs.Normal;
return Res1 && Res2 && Res3;
}
};
namespace std
{
template<>
struct hash<vertex>
{
size_t operator()(const vertex& VertData) const
{
size_t res = 0;
const_cast<hash<vertex>*>(this)->hash_combine(res, hash<glm::vec3>()(VertData.Position));
const_cast<hash<vertex>*>(this)->hash_combine(res, hash<glm::vec2>()(VertData.TextureCoord));
const_cast<hash<vertex>*>(this)->hash_combine(res, hash<glm::vec3>()(VertData.Normal));
return res;
}
private:
// NOTE: got from glm library
void hash_combine(size_t& seed, size_t hash)
{
hash += 0x9e3779b9 + (seed << 6) + (seed >> 2);
seed ^= hash;
}
};
}
void mesh::BuildUniqueVertices()
{
std::unordered_set<vertex> UniqueVertices;
//std::unordered_map<u32, vertex> UniqueVertices;
u32 IndexCount = CoordIndices.size();
std::vector<u32> RemapedIndices(IndexCount);
for(u32 VertexIndex = 0;
VertexIndex < IndexCount;
++VertexIndex)
{
vertex Vert = {};
v3 Pos = Coords[CoordIndices[VertexIndex]];
Vert.Position = glm::vec3(Pos.x, Pos.y, Pos.z);
if(NormalIndices.size() != 0)
{
v3 Norm = Normals[NormalIndices[VertexIndex]];
Vert.Normal = glm::vec3(Norm.x, Norm.y, Norm.z);
}
if(TextCoords.size() != 0)
{
v2 TextCoord = TextCoords[TextCoordIndices[VertexIndex]];
Vert.TextureCoord = glm::vec2(TextCoord.x, TextCoord.y);
}
// NOTE: think about something faster
auto Hint = UniqueVertices.insert(Vert);
if (Hint.second)
{
RemapedIndices[VertexIndex] = VertexIndex;
}
else
{
RemapedIndices[VertexIndex] = static_cast<u32>(std::distance(UniqueVertices.begin(), UniqueVertices.find(Vert)));
}
}
VertexIndices = std::move(RemapedIndices);
Vertices.reserve(UniqueVertices.size());
for(auto it = UniqueVertices.begin();
it != UniqueVertices.end();)
{
Vertices.push_back(std::move(UniqueVertices.extract(it++).value()));
}
}
What could the error be? I suspect the hash function is not doing its job right, and that is why I am getting really trashed results of random triangles around the initial mesh bounds. Thanks.
This construction shows that you're going down the wrong path:
RemapedIndices[VertexIndex] = static_cast<u32>(std::distance(UniqueVertices.begin(), UniqueVertices.find(Vert)));
It's clear that you want the "offset" of the vertex within UniqueVertices, but you're modifying the container in the same loop with UniqueVertices.insert(Vert). That means the std::distance result you calculate is potentially invalidated by later insertions: every index except the very last one you calculate may be garbage by the time you finish the outer loop.
Just use a vector<vertex> and stop trying to de-dupe vertices. If you later find you need to optimize for better performance, you're going to be better off trying to make sure that the mesh is optimized for cache locality.
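A sketch of that simpler path, reusing the field names from the question (no de-duplication; the index buffer becomes the identity mapping):
std::vector<vertex> Vertices;
u32 IndexCount = (u32)CoordIndices.size();
Vertices.reserve(IndexCount);
for (u32 VertexIndex = 0; VertexIndex < IndexCount; ++VertexIndex)
{
    vertex Vert = {};
    v3 Pos = Coords[CoordIndices[VertexIndex]];
    Vert.Position = glm::vec3(Pos.x, Pos.y, Pos.z);
    if (!NormalIndices.empty())
    {
        v3 Norm = Normals[NormalIndices[VertexIndex]];
        Vert.Normal = glm::vec3(Norm.x, Norm.y, Norm.z);
    }
    if (!TextCoords.empty())
    {
        v2 TextCoord = TextCoords[TextCoordIndices[VertexIndex]];
        Vert.TextureCoord = glm::vec2(TextCoord.x, TextCoord.y);
    }
    Vertices.push_back(Vert);
    VertexIndices.push_back(VertexIndex); // identity: vertex i is used by index i
}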
If you really, really feel the need to dedupe, you need to do it in a multi-step process, something like this...
First map the original indices to some kind of unique key (NOT the full vertex... you want something that can be used as both a map key and value, so has good hashing and equality functionality, like std::string). For instance, you could take the 3 source indices and convert them to fixed-length hex strings concatenated together. Just make sure the transformation is reversible. Here I've stubbed it out with std::string to_key(uint32_t CoordIndex, uint32_t NormalIndex, uint32_t TextCoordIndex).
std::unordered_map<uint32_t, std::string> originalIndexToKey;
std::unordered_set<std::string> uniqueKeys;
uint32_t IndexCount = CoordIndices.size();
for (u32 VertexIndex = 0; VertexIndex < IndexCount; ++VertexIndex) {
uint32_t CoordIndex = UINT32_MAX, NormalIndex = UINT32_MAX, TextCoordIndex = UINT32_MAX;
CoordIndex = CoordIndices[VertexIndex];
if (NormalIndices.size() != 0) {
NormalIndex = NormalIndices[VertexIndex];
}
if (TextCoords.size() != 0) {
TextCoordIndex = TextCoordIndices[VertexIndex];
}
std::string key = to_key(CoordIndex, NormalIndex, TextCoordIndex);
originalIndexToKey.emplace(VertexIndex, key);
uniqueKeys.insert(key);
}
Next, take all the unique keys and construct the unique vertices with them in a vector, so they have fixed positions. Here you need to get the sub-indices back from the key with a void from_key(const std::string& key, uint32_t & CoordIndex, uint32_t & NormalIndex, uint32_t & TextCoordIndex) function.
std::unordered_map<std::string, uint32_t> keyToNewIndex;
std::vector<vertex> uniqueVertices;
for (const auto& key : uniqueKeys) {
uint32_t NewIndex = (uint32_t)uniqueVertices.size();
keyToNewIndex.emplace(key, NewIndex);
// convert the key back into 3 indices
uint32_t CoordIndex, NormalIndex, TextCoordIndex;
from_key(key, CoordIndex, NormalIndex, TextCoordIndex);
vertex Vert = {};
v3 Pos = Coords[CoordIndex];
Vert.Position = glm::vec3(Pos.x, Pos.y, Pos.z);
if (NormalIndex != UINT32_MAX) {
v3 Norm = Normals[NormalIndex];
Vert.Normal = glm::vec3(Norm.x, Norm.y, Norm.z);
}
if (TextCoordIndex != UINT32_MAX) {
v2 TextCoord = TextCoords[TextCoordIndex];
Vert.TextureCoord = glm::vec2(TextCoord.x, TextCoord.y);
}
uniqueVertices.push_back(Vert);
}
Finally, you need to map from the original index out to the new index, preserving the triangle geometry.
std::vector<uint32_t> RemapedIndices;
RemapedIndices.reserve(IndexCount);
for(u32 OriginalVertexIndex = 0; OriginalVertexIndex < IndexCount; ++OriginalVertexIndex) {
auto key = originalIndexToKey[OriginalVertexIndex];
auto NewIndex = keyToNewIndex[key];
RemapedIndices.push_back(NewIndex);
}
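For completeness, one possible sketch of the stubbed key helpers (an assumption, not the answerer's code: it packs the three sub-indices as fixed-width hex fields, so anything fitting in 32 bits round-trips; needs <cstdio> and <string>):
std::string to_key(uint32_t CoordIndex, uint32_t NormalIndex, uint32_t TextCoordIndex)
{
    char buf[3 * 8 + 1]; // three 8-digit hex fields plus terminator
    std::snprintf(buf, sizeof(buf), "%08x%08x%08x", CoordIndex, NormalIndex, TextCoordIndex);
    return std::string(buf);
}
void from_key(const std::string& key, uint32_t& CoordIndex, uint32_t& NormalIndex, uint32_t& TextCoordIndex)
{
    // exact reverse of to_key: each field is 8 hex digits
    CoordIndex = (uint32_t)std::stoul(key.substr(0, 8), nullptr, 16);
    NormalIndex = (uint32_t)std::stoul(key.substr(8, 8), nullptr, 16);
    TextCoordIndex = (uint32_t)std::stoul(key.substr(16, 8), nullptr, 16);
}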
The image is not completely randomly messed up. If you combine the two images, they fit perfectly:
The only problem is that a mesh means triangles: you tell Vulkan how to draw each triangle. With a mesh composed of 8 triangles and 9 unique vertices, you pass the vertices for each of the 8 triangles separately, see the screenshot below:
1:[1,2,4], 2:[2,5,4], 3:[2,3,5], 4:[3,6,5], 5:[4,5,7], 6:[5,8,7], 7:[5,6,8], 8:[6,9,8] (see red numbers).
So vertex 5 will be repeated 6 times, because triangles 2, 3, 4, 5, 6, 7 all share it. The same goes for normals and texture coordinates. The vast majority of vertices, with all their coordinates, normals, textures and colors set, will be repeated exactly 6 times (12 times if the mesh is double-coated). The boundary vertices will be repeated 3 times (6 if double-coated), and the corner vertices 1 or 2 times, see the left and right edges (2 or 4 times if double-coated):
If you try to remove the duplicates, you turn the mesh into a mess, regardless of how well the vertices are sorted or unsorted: the order no longer makes any sense.
If you want to use a unique set of coordinates, Vulkan, like OpenGL, supports index buffers. You can then pass unique coordinates but build the index array correctly: all vertices will be unique, but the indices in the index array will not.
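For reference, a minimal sketch of the Vulkan draw-time calls for indexed drawing (buffer creation and memory binding omitted; commandBuffer, vertexBuffer, indexBuffer and indexCount are hypothetical names):
// bind the unique-vertex buffer and the index buffer, then draw indexed
VkDeviceSize offsets[] = { 0 };
vkCmdBindVertexBuffers(commandBuffer, 0, 1, &vertexBuffer, offsets);
vkCmdBindIndexBuffer(commandBuffer, indexBuffer, 0, VK_INDEX_TYPE_UINT32);
vkCmdDrawIndexed(commandBuffer, indexCount, 1, 0, 0, 0);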

C++ obj loader texture coordinates messed up

I have written a simple obj parser in c++ that loads the vertices, indices and texture coordinates (that's all the data I need).
Here is the function:
Model* ModelLoader::loadFromOBJ(string objFile, ShaderProgram *shader, GLuint texture)
{
fstream file;
file.open(objFile);
if (!file.is_open())
{
cout << "ModelLoader: " << objFile << " was not found";
return NULL;
}
int vertexCount = 0;
int indexCount = 0;
vector<Vector3> vertices;
vector<Vector2> textureCoordinates;
vector<Vector2> textureCoordinatesFinal;
vector<unsigned int> vertexIndices;
vector<unsigned int> textureIndices;
string line;
while (getline(file, line))
{
vector<string> splitLine = Common::splitString(line, ' ');
// v - vertex
if (splitLine[0] == "v")
{
Vector3 vertex(stof(splitLine[1]), stof(splitLine[2]), stof(splitLine[3]));
vertices.push_back(vertex);
vertexCount++;
}
// vt - texture coordinate
else if (splitLine[0] == "vt")
{
Vector2 textureCoordinate(stof(splitLine[1]), 1 - stof(splitLine[2]));
textureCoordinates.push_back(textureCoordinate);
}
// f - face
else if (splitLine[0] == "f")
{
vector<string> faceSplit1 = Common::splitString(splitLine[1], '/');
vector<string> faceSplit2 = Common::splitString(splitLine[2], '/');
vector<string> faceSplit3 = Common::splitString(splitLine[3], '/');
unsigned int vi1 = stoi(faceSplit1[0]) - 1;
unsigned int vi2 = stoi(faceSplit2[0]) - 1;
unsigned int vi3 = stoi(faceSplit3[0]) - 1;
unsigned int ti1 = stoi(faceSplit1[1]) - 1;
unsigned int ti2 = stoi(faceSplit2[1]) - 1;
unsigned int ti3 = stoi(faceSplit3[1]) - 1;
vertexIndices.push_back(vi1);
vertexIndices.push_back(vi2);
vertexIndices.push_back(vi3);
textureIndices.push_back(ti1);
textureIndices.push_back(ti2);
textureIndices.push_back(ti3);
indexCount += 3;
}
}
// rearranging textureCoordinates into textureCoordinatesFinal based on textureIndices
for (int i = 0; i < indexCount; i++)
textureCoordinatesFinal.push_back(textureCoordinates[textureIndices[i]]);
Model *result = new Model(shader, vertexCount, &vertices[0], NULL, texture, indexCount, &textureCoordinatesFinal[0], &vertexIndices[0]);
models.push_back(result);
return result;
}
As you can see, I take into account the 1 - texCoord.y (because Blender and OpenGL use different coordinate systems for textures).
I also arrange the texture coordinates based on the texture indices after the while loop.
However, the models I try to render have their textures messed up. Here is an example:
Texture messed up
I even tried it with a single cube which I unwrapped myself in Blender and applied a very simple brick texture to. On 1 or 2 faces the texture was fine and working; then on some other faces, 1 of the triangles had the correct texture and the others appeared stretched out (same as in the picture above).
To define a mesh, there is only one index list that indexes the vertex attributes. The vertex attributes (in your case the vertex positions and the texture coordinates) form a record set, which is referred to by these indices.
This means that each vertex coordinate may occur several times in the list, and each texture coordinate may occur several times in the list, but each combination of vertex and texture coordinate is unique.
Take the vertexIndices and textureIndices and create unique pairs of vertices and texture coordinates (verticesFinal, textureCoordinatesFinal).
Create new attribute_indices, which index the pairs.
Use a temporary container attribute_pairs to manage the unique pairs and to identify their indices:
#include <vector>
#include <map>
// input
std::vector<Vector3> vertices;
std::vector<Vector2> textureCoordinates;
std::vector<unsigned int> vertexIndices;
std::vector<unsigned int> textureIndices;
std::vector<unsigned int> attribute_indices; // final indices
std::vector<Vector3> verticesFinal; // final vertices buffer
std::vector<Vector2> textureCoordinatesFinal; // final texture coordinate buffer
// map a pair of indices to the final attribute index
std::map<std::pair<unsigned int, unsigned int>, unsigned int> attribute_pairs;
// vertexIndices.size() == textureIndices.size()
for ( size_t i = 0; i < vertexIndices.size(); ++ i )
{
// pair of vertex index and texture index
auto attr = std::make_pair( vertexIndices[i], textureIndices[i] );
// check if the pair already is a member of "attribute_pairs"
auto attr_it = attribute_pairs.find( attr );
if ( attr_it == attribute_pairs.end() )
{
// "attr" is a new pair
// add the attributes to the final buffers
verticesFinal.push_back( vertices[attr.first] );
textureCoordinatesFinal.push_back( textureCoordinates[attr.second] );
// the new final index is the next index
unsigned int new_index = (unsigned int)attribute_pairs.size();
attribute_indices.push_back( new_index );
// add the new map entry
attribute_pairs[attr] = new_index;
}
else
{
// the pair "attr" already exists: add the index which was found in the map
attribute_indices.push_back( attr_it->second );
}
}
Note the number of vertex coordinates (verticesFinal.size()) is equal to the number of texture coordinates (textureCoordinatesFinal.size()). But the number of indices (attribute_indices.size()) is something completely different.
// verticesFinal.size() == textureCoordinatesFinal.size()
Model *result = new Model(
shader,
verticesFinal.size(),
verticesFinal.data(),
NULL, texture,
attribute_indices.size(),
textureCoordinatesFinal.data(),
attribute_indices.data() );

Extracting skin data from an FBX file

I need to convert animation data from Autodesk's FBX file format to one that is compatible with DirectX; specifically, I need to calculate the offset matrices for my skinned mesh. I have written a converter (which in this case converts .fbx to my own 'scene' format) in which I would like to calculate an offset matrix for my mesh. Here is the code:
//
// Skin
//
if(bHasDeformer)
{
// iterate deformers( TODO: ACCOUNT FOR MULTIPLE DEFORMERS )
for(int i = 0; i < ncDeformers && i < 1; ++i)
{
// skin
FbxSkin *pSkin = (FbxSkin*)pMesh->GetDeformer(i, FbxDeformer::eSkin);
if(pSkin == NULL)
continue;
// bone count
int ncBones = pSkin->GetClusterCount();
// iterate bones
for (int boneIndex = 0; boneIndex < ncBones; ++boneIndex)
{
// cluster
FbxCluster* cluster = pSkin->GetCluster(boneIndex);
// bone ref
FbxNode* pBone = cluster->GetLink();
// Get the bind pose
FbxAMatrix bindPoseMatrix, transformMatrix;
cluster->GetTransformMatrix(transformMatrix);
cluster->GetTransformLinkMatrix(bindPoseMatrix);
// decomposed transform components
FbxVector4 vS = bindPoseMatrix.GetS();
FbxVector4 vR = bindPoseMatrix.GetR();
FbxVector4 vT = bindPoseMatrix.GetT();
int *pVertexIndices = cluster->GetControlPointIndices();
double *pVertexWeights = cluster->GetControlPointWeights();
// Iterate through all the vertices, which are affected by the bone
int ncVertexIndices = cluster->GetControlPointIndicesCount();
for (int iBoneVertexIndex = 0; iBoneVertexIndex < ncVertexIndices; iBoneVertexIndex++)
{
// vertex
int niVertex = pVertexIndices[iBoneVertexIndex];
// weight
float fWeight = (float)pVertexWeights[iBoneVertexIndex];
}
}
}
}
How do I convert the FBX transforms to a bone offset matrix?
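This question has no accepted answer here, but a commonly cited conversion (a sketch only, not verified against the asker's pipeline) combines the two bind-time matrices the code already retrieves:
// sketch: the offset (inverse bind-pose) matrix commonly used for skinning;
// bindPoseMatrix is the bone's global bind-time transform (GetTransformLinkMatrix)
// and transformMatrix is the mesh's (GetTransformMatrix), both fetched above
FbxAMatrix offsetMatrix = bindPoseMatrix.Inverse() * transformMatrix;
// offsetMatrix maps mesh space into bone space at bind time; transpose and/or
// flip handedness as your DirectX math conventions require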

How to unify normal orientation

I've been trying to produce a mesh that has all face normals pointing outward.
In order to achieve this, I load a mesh from a *.ctm file, then walk over all triangles to determine the normal using a cross product, and if the normal is pointing in the negative z direction, I flip v1 and v2 (and thus the normal orientation).
After this is done I save the result to a *.ctm file and view it with Meshlab.
The result in Meshlab still shows normals pointing in both the positive and negative z direction (as can be seen from the black triangles). Also, when viewing the normals in Meshlab, they really are pointing backwards.
Can anyone give me some advice on how to solve this?
The source code for the normalization part is:
pcl::PointCloud<pcl::PointXYZRGBA>::Ptr cloud1 (new pcl::PointCloud<pcl::PointXYZRGBA> ());
pcl::fromROSMsg (meshFixed.cloud,*cloud1);
for(std::vector<pcl::Vertices>::iterator it = meshFixed.polygons.begin(); it != meshFixed.polygons.end(); ++it)
{
alglib::real_2d_array v0;
double _v0[] = {cloud1->points[it->vertices[0]].x,cloud1->points[it->vertices[0]].y,cloud1->points[it->vertices[0]].z};
v0.setcontent(3,1,_v0); //3 rows, 1col
alglib::real_2d_array v1;
double _v1[] = {cloud1->points[it->vertices[1]].x,cloud1->points[it->vertices[1]].y,cloud1->points[it->vertices[1]].z};
v1.setcontent(3,1,_v1); //3 rows, 1col
alglib::real_2d_array v2;
double _v2[] = {cloud1->points[it->vertices[2]].x,cloud1->points[it->vertices[2]].y,cloud1->points[it->vertices[2]].z};
v2.setcontent(3,1,_v2); //3 rows, 1col
alglib::real_2d_array normal;
normal = cross(v1-v0,v2-v0);
//if z<0 change indices order v1->v2 and v2->v1
alglib::real_2d_array normalizedNormal;
if(normal[2][0]<0)
{
int index1,index2;
index1 = it->vertices[1];
index2 = it->vertices[2];
it->vertices[1] = index2;
it->vertices[2] = index1;
//make normal of length 1
double normalScaling = 1.0/sqrt(dot(normal,normal));
normal[0][0] = -1*normal[0][0];
normal[1][0] = -1*normal[1][0];
normal[2][0] = -1*normal[2][0];
normalizedNormal = normalScaling * normal;
}
else
{
//make normal of length 1
double normalScaling = 1.0/sqrt(dot(normal,normal));
normalizedNormal = normalScaling * normal;
}
//add to normal cloud
pcl::Normal pclNormalizedNormal;
pclNormalizedNormal.normal_x = normalizedNormal[0][0];
pclNormalizedNormal.normal_y = normalizedNormal[1][0];
pclNormalizedNormal.normal_z = normalizedNormal[2][0];
normalsFixed.push_back(pclNormalizedNormal);
}
The result from this code is:
I've found some code in the VCG library to orient the face and vertex normals.
After using this, a large part of the mesh has correct face normals, but not all of it.
The new code:
// VCG library implementation
MyMesh m;
// Convert pcl::PolygonMesh to VCG MyMesh
m.Clear();
// Create temporary cloud in to have handy struct object
pcl::PointCloud<pcl::PointXYZRGBA>::Ptr cloud1 (new pcl::PointCloud<pcl::PointXYZRGBA> ());
pcl::fromROSMsg (meshFixed.cloud,*cloud1);
// Now convert the vertices to VCG MyMesh
int vertCount = cloud1->width*cloud1->height;
vcg::tri::Allocator<MyMesh>::AddVertices(m, vertCount);
for(unsigned int i=0;i<vertCount;++i)
m.vert[i].P()=vcg::Point3f(cloud1->points[i].x,cloud1->points[i].y,cloud1->points[i].z);
// Now convert the polygon indices to VCG MyMesh => make VCG faces..
int triCount = meshFixed.polygons.size();
if(triCount==1)
{
if(meshFixed.polygons[0].vertices[0]==0 && meshFixed.polygons[0].vertices[1]==0 && meshFixed.polygons[0].vertices[2]==0)
triCount=0;
}
vcg::tri::Allocator<MyMesh>::AddFaces(m, triCount);
for(unsigned int i=0;i<triCount;++i)
{
m.face[i].V(0)=&m.vert[meshFixed.polygons[i].vertices[0]];
m.face[i].V(1)=&m.vert[meshFixed.polygons[i].vertices[1]];
m.face[i].V(2)=&m.vert[meshFixed.polygons[i].vertices[2]];
}
vcg::tri::UpdateBounding<MyMesh>::Box(m);
vcg::tri::UpdateNormal<MyMesh>::PerFace(m);
vcg::tri::UpdateNormal<MyMesh>::PerVertexNormalizedPerFace(m);
printf("Input mesh vn:%i fn:%i\n",m.VN(),m.FN());
// Start to flip all normals to outside
vcg::tri::UpdateTopology<MyMesh>::FaceFace(m);
bool oriented, orientable;
if ( vcg::tri::Clean<MyMesh>::CountNonManifoldEdgeFF(m)>0 ) {
std::cout << "Mesh has some not 2-manifold faces, Orientability requires manifoldness" << std::endl; // text
return; // can't continue, mesh can't be processed
}
vcg::tri::Clean<MyMesh>::OrientCoherentlyMesh(m, oriented,orientable);
vcg::tri::Clean<MyMesh>::FlipNormalOutside(m);
vcg::tri::Clean<MyMesh>::FlipMesh(m);
//vcg::tri::UpdateTopology<MyMesh>::FaceFace(m);
//vcg::tri::UpdateTopology<MyMesh>::TestFaceFace(m);
vcg::tri::UpdateNormal<MyMesh>::PerVertexNormalizedPerFace(m);
vcg::tri::UpdateNormal<MyMesh>::PerVertexFromCurrentFaceNormal(m);
// now convert VCG back to pcl::PolygonMesh
pcl::PointCloud<pcl::PointXYZRGBA>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZRGBA>);
cloud->is_dense = false;
cloud->width = vertCount;
cloud->height = 1;
cloud->points.resize (vertCount);
// Now fill the pointcloud of the mesh
for(int i=0; i<vertCount; i++)
{
cloud->points[i].x = m.vert[i].P()[0];
cloud->points[i].y = m.vert[i].P()[1];
cloud->points[i].z = m.vert[i].P()[2];
}
pcl::toROSMsg(*cloud,meshFixed.cloud);
std::vector<pcl::Vertices> polygons;
// Now fill the indices of the triangles/faces of the mesh
for(int i=0; i<triCount; i++)
{
pcl::Vertices vertices;
vertices.vertices.push_back(m.face[i].V(0)-&*m.vert.begin());
vertices.vertices.push_back(m.face[i].V(1)-&*m.vert.begin());
vertices.vertices.push_back(m.face[i].V(2)-&*m.vert.begin());
polygons.push_back(vertices);
}
meshFixed.polygons = polygons;
Which results in: (Meshlab still shows normals are facing both sides)
I finally solved the problem, still using the VCG library. From the new code above I slightly updated the following section:
vcg::tri::Clean<MyMesh>::OrientCoherentlyMesh(m, oriented,orientable);
//vcg::tri::Clean<MyMesh>::FlipNormalOutside(m);
//vcg::tri::Clean<MyMesh>::FlipMesh(m);
//vcg::tri::UpdateTopology<MyMesh>::FaceFace(m);
//vcg::tri::UpdateTopology<MyMesh>::TestFaceFace(m);
vcg::tri::UpdateNormal<MyMesh>::PerVertexNormalizedPerFace(m);
vcg::tri::UpdateNormal<MyMesh>::PerVertexFromCurrentFaceNormal(m);
I then updated the vcg::tri::Clean<MyMesh>::OrientCoherentlyMesh() function in clean.h. The update orients the first polygon of each group correctly; additionally, after swapping an edge, the normal of the face is recalculated and updated.
static void OrientCoherentlyMesh(MeshType &m, bool &Oriented, bool &Orientable)
{
RequireFFAdjacency(m);
assert(&Oriented != &Orientable);
assert(m.face.back().FFp(0)); // This algorithm requires FF topology to be initialized
Orientable = true;
Oriented = true;
tri::UpdateSelection<MeshType>::FaceClear(m);
std::stack<FacePointer> faces;
for (FaceIterator fi = m.face.begin(); fi != m.face.end(); ++fi)
{
if (!fi->IsD() && !fi->IsS())
{
// each face put in the stack is selected (and oriented)
fi->SetS();
// New section of code to orient the initial face correctly
if(fi->N()[2]>0.0)
{
face::SwapEdge<FaceType,true>(*fi, 0);
face::ComputeNormal(*fi);
}
// End of new code section.
faces.push(&(*fi));
// empty the stack
while (!faces.empty())
{
FacePointer fp = faces.top();
faces.pop();
// make consistently oriented the adjacent faces
for (int j = 0; j < 3; j++)
{
//get one of the adjacent face
FacePointer fpaux = fp->FFp(j);
int iaux = fp->FFi(j);
if (!fpaux->IsD() && fpaux != fp && face::IsManifold<FaceType>(*fp, j))
{
if (!CheckOrientation(*fpaux, iaux))
{
Oriented = false;
if (!fpaux->IsS())
{
face::SwapEdge<FaceType,true>(*fpaux, iaux);
// New line to update face normal
face::ComputeNormal(*fpaux);
// end of new section.
assert(CheckOrientation(*fpaux, iaux));
}
else
{
Orientable = false;
break;
}
}
// put the oriented face into the stack
if (!fpaux->IsS())
{
fpaux->SetS();
faces.push(fpaux);
}
}
}
}
}
if (!Orientable) break;
}
}
Besides that, I also updated the function bool CheckOrientation(FaceType &f, int z) to perform a comparison based on the normals' z-directions.
template <class FaceType>
bool CheckOrientation(FaceType &f, int z)
{
// Added next section to calculate the difference between normal z-directions
FaceType *original = f.FFp(z);
double nf2,ng2;
nf2=f.N()[2];
ng2=original->N()[2];
// End of additional section
if (IsBorder(f, z))
return true;
else
{
FaceType *g = f.FFp(z);
int gi = f.FFi(z);
// changed if statement from: if (f.V0(z) == g->V1(gi))
if (nf2/std::abs(nf2) == ng2/std::abs(ng2))
return true;
else
return false;
}
}
The result is as I expect and desire from the algorithm:

My shadow volumes don't move with my light

I'm currently trying to implement shadow volumes in my OpenGL world. Right now I'm just focusing on getting the volumes calculated correctly.
Right now I have a teapot that's rendered, and I can get it to generate some shadow volumes, however they always point directly to the left of the teapot. No matter where I move my light (and I can tell that I'm actually moving the light, because the teapot is lit with diffuse lighting), the shadow volumes always go straight left.
The method I'm using to create the volumes is:
1. Find silhouette edges by looking at every triangle in the object. If the triangle isn't lit up (tested with the dot product), then skip it. If it is lit, then check all of its edges. If an edge is currently in the list of silhouette edges, remove it; otherwise add it.
2. Once I have all the silhouette edges, I go through each edge creating a quad with one vertex at each vertex of the edge, and the other two just extended away from the light.
Here is my code that does it all:
void getSilhoueteEdges(Model model, vector<Edge> &edges, Vector3f lightPos) {
//for every triangle
// if triangle is not facing the light then skip
// for every edge
// if edge is already in the list
// remove
// else
// add
vector<Face> faces = model.faces;
//for every triangle
for ( unsigned int i = 0; i < faces.size(); i++ ) {
Face currentFace = faces.at(i);
//if triangle is not facing the light
//for this i'll just use the normal of any vertex, it should be the same for all of them
Vector3f v1 = model.vertices[currentFace.vertices[0] - 1];
Vector3f n1 = model.normals[currentFace.normals[0] - 1];
Vector3f dirToLight = lightPos - v1;
dirToLight.normalize();
float dot = n1.dot(dirToLight);
if ( dot <= 0.0f )
continue; //then skip
//lets get the edges
//v1,v2; v2,v3; v3,v1
Vector3f v2 = model.vertices[currentFace.vertices[1] - 1];
Vector3f v3 = model.vertices[currentFace.vertices[2] - 1];
Edge e[3];
e[0] = Edge(v1, v2);
e[1] = Edge(v2, v3);
e[2] = Edge(v3, v1);
//for every edge
//triangles only have 3 edges so loop 3 times
for ( int j = 0; j < 3; j++ ) {
if ( edges.size() == 0 ) {
edges.push_back(e[j]);
continue;
}
bool wasRemoved = false;
//if edge is in the list
for ( unsigned int k = 0; k < edges.size(); k++ ) {
Edge tempEdge = edges.at(k);
if ( tempEdge == e[j] ) {
edges.erase(edges.begin() + k);
wasRemoved = true;
break;
}
}
if ( ! wasRemoved )
edges.push_back(e[j]);
}
}
}
void extendEdges(vector<Edge> edges, Vector3f lightPos, GLBatch &batch) {
float extrudeSize = 100.0f;
batch.Begin(GL_QUADS, edges.size() * 4);
for ( unsigned int i = 0; i < edges.size(); i++ ) {
Edge edge = edges.at(i);
batch.Vertex3f(edge.v1.x, edge.v1.y, edge.v1.z);
batch.Vertex3f(edge.v2.x, edge.v2.y, edge.v2.z);
Vector3f temp = edge.v2 + (( edge.v2 - lightPos ) * extrudeSize);
batch.Vertex3f(temp.x, temp.y, temp.z);
temp = edge.v1 + ((edge.v1 - lightPos) * extrudeSize);
batch.Vertex3f(temp.x, temp.y, temp.z);
}
batch.End();
}
void createShadowVolumesLM(Vector3f lightPos, Model model) {
getSilhoueteEdges(model, silhoueteEdges, lightPos);
extendEdges(silhoueteEdges, lightPos, boxShadow);
}
I have my light defined as follows, and the main shadow volume generation method is called by:
Vector3f vLightPos = Vector3f(-5.0f,0.0f,2.0f);
createShadowVolumesLM(vLightPos, boxModel);
All of my code seems self-documented in the places where I don't have comments, but if there are any confusing parts, let me know.
I have a feeling it's just a simple mistake I overlooked. Here is what it looks like with and without the shadow volumes being rendered.
It would seem you aren't transforming the shadow volumes. You either need to set the model-view matrix on them so they get transformed the same as the rest of the geometry, or you need to transform all the vertices by hand into view space and then do the silhouetting and extrusion in view space.
Obviously the first method uses less CPU time and would be, IMO, preferable.
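A minimal sketch of the first suggestion, given the fixed-function matrix stack implied by the question's GLTools/GLBatch usage (teapotModelView is a hypothetical array holding the teapot's model-view transform):
// render the shadow volume batch under the same model-view matrix as the
// teapot, so the volumes move with the geometry and the light
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glMultMatrixf(teapotModelView); // hypothetical 4x4 column-major float array
boxShadow.Draw();               // GLBatch::Draw from GLTools
glPopMatrix();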