I spent the day trying to achieve the following, with no luck. I have been able to attach an id to the vertices (vertex()->info()) of some triangulations with the help of std::pair, but now I want to do the same for a Poisson surface reconstruction, which uses a Polyhedron_3 as a mesh, and I can't manage it. Poisson surface reconstruction requires points with normals. So, for the points, I defined pairs made of a (point, index) pair and a normal vector; my inputs are a matrix of points pts and a matrix of normals normals (these matrices come from R, which runs the C++ code):
const size_t npoints = pts.nrow();
std::vector<IP3wn> points(npoints);
for(size_t i = 0; i < npoints; i++) {
    points[i] = std::make_pair(
        std::make_pair(Point3(pts(i, 0), pts(i, 1), pts(i, 2)), i + 1),
        Vector3(normals(i, 0), normals(i, 1), normals(i, 2)));
}
Polyhedron mesh;
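For reference, the typedefs this snippet assumes look roughly like this (a sketch of my setup; the kernel choice and the IP3wn name just follow the code above):

#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Polyhedron_3.h>
#include <utility>
#include <vector>

typedef CGAL::Exact_predicates_inexact_constructions_kernel Kernel;
typedef Kernel::Point_3  Point3;
typedef Kernel::Vector_3 Vector3;
// ((point, index), normal)
typedef std::pair<std::pair<Point3, std::size_t>, Vector3> IP3wn;
typedef CGAL::Polyhedron_3<Kernel> Polyhedron;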
But I can't access the info() field of the vertices:
for(Polyhedron::Facet_iterator fit = mesh.facets_begin();
    fit != mesh.facets_end(); fit++) {
    Polyhedron::Facet f = *fit;
    facets(i, 0) = f.halfedge()->vertex()->info();
The error message says that there is no member named info. Could you show me the right way, please?
I am trying to draw this free airwing model from Starfox 64 in OpenGL. I converted the .fbx file to .obj in Blender and am using tinyobjloader to load it (all requirements for my university subject).
I pretty much slapped the example code (with the modern API) into my program, replaced the file name, and grabbed the attrib.vertices and attrib.normals vectors to draw the airwing.
I can view the vertices with GL_POINTS:
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, &vertices[0]);
glDrawArrays(GL_POINTS, 0, vertices.size() / 3);
glDisableClientState(GL_VERTEX_ARRAY);
Which looks correct (I ... think?):
But I'm not sure how to render a solid model. Simply replacing GL_POINTS with GL_TRIANGLES (shown) or GL_QUADS doesn't work:
I am using OpenGL 1.1 w/ GLUT (again, university). I think I just don't know what I'm doing, really. Help?
Edit: When I wrote this answer originally I had only worked with vertices and normals. I've since figured out how to get materials and textures working, but don't have time to write that up at the moment. I will add it when I have some time, but it's largely the same logic if you want to poke around the tinyobj header yourself in the meantime. :-)
I've learned a lot about TinyOBJLoader in the last day so I hope this helps someone in the future. Credit goes to this GitHub repository which uses TinyOBJLoader very clearly and cleanly in fileloader.cpp.
To summarise what I learned studying that code:
Shapes are of type shape_t. For an OBJ containing a single model, the size of shapes is 1. I'm assuming OBJ files can contain multiple objects, but I haven't used the file format enough to know.
shape_t's have a member mesh of type mesh_t. This member stores the information parsed from the face rows of the OBJ. You can figure out the number of faces your object has by checking the size of the material_ids member.
The vertex, texture coordinate and normal indices of each face are stored in the indices member of the mesh. This is of type std::vector<index_t>: a flattened vector with one index_t per face corner, where each index_t holds that corner's vertex, texture-coordinate and normal indices. So for a model with triangulated faces f1, f2 ... fi, the corners are stored consecutively, three per face. Remember that these indices refer to whole vertices, texture coordinates or normals. Personally I triangulated my model by importing it into Blender and exporting it with triangulation turned on. TinyOBJ also has its own triangulation you can turn on by setting the reader_config.triangulate flag (see the sketch just below).
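As a reference, here is a minimal sketch of loading a model with the modern ObjReader API and that flag (the file name is a placeholder; the early return assumes this lives inside your loader function):

#include "tiny_obj_loader.h"
#include <iostream>

tinyobj::ObjReaderConfig reader_config;
reader_config.triangulate = true;        // ask TinyOBJ to triangulate faces for us

tinyobj::ObjReader reader;
if (!reader.ParseFromFile("airwing.obj", reader_config)) {   // placeholder file name
    if (!reader.Error().empty())
        std::cerr << "TinyObjReader: " << reader.Error() << std::endl;
    return;                              // or however you bail out in your loader
}
if (!reader.Warning().empty())
    std::cout << "TinyObjReader: " << reader.Warning() << std::endl;

const tinyobj::attrib_t& attrib = reader.GetAttrib();
const std::vector<tinyobj::shape_t>& shapes = reader.GetShapes();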
I've only worked with the vertices and normals so far. Here's how I access and store them to be used in OpenGL:
Convert the flat vertices and normal arrays into groups of 3, i.e. 3D vectors
for (size_t vec_start = 0; vec_start < attrib.vertices.size(); vec_start += 3) {
    vertices.emplace_back(
        attrib.vertices[vec_start],
        attrib.vertices[vec_start + 1],
        attrib.vertices[vec_start + 2]);
}
for (size_t norm_start = 0; norm_start < attrib.normals.size(); norm_start += 3) {
    normals.emplace_back(
        attrib.normals[norm_start],
        attrib.normals[norm_start + 1],
        attrib.normals[norm_start + 2]);
}
This way the index of the vertices and normals containers will correspond with the indices given by the face entries.
Loop over every face, and store the vertex and normal indices in a separate object
for (auto shape = shapes.begin(); shape < shapes.end(); ++shape) {
    const std::vector<tinyobj::index_t>& indices = shape->mesh.indices;
    const std::vector<int>& material_ids = shape->mesh.material_ids;
    for (size_t index = 0; index < material_ids.size(); ++index) {
        // step by 3 because there are three index_t entries (one per corner) per triangle
        triangles.push_back(Triangle(
            { indices[3 * index].vertex_index, indices[3 * index + 1].vertex_index, indices[3 * index + 2].vertex_index },
            { indices[3 * index].normal_index, indices[3 * index + 1].normal_index, indices[3 * index + 2].normal_index }));
    }
}
Drawing is then quite easy:
glBegin(GL_TRIANGLES);
for (auto triangle = triangles.begin(); triangle != triangles.end(); ++triangle) {
    glNormal3f(normals[triangle->normals[0]].X, normals[triangle->normals[0]].Y, normals[triangle->normals[0]].Z);
    glVertex3f(vertices[triangle->vertices[0]].X, vertices[triangle->vertices[0]].Y, vertices[triangle->vertices[0]].Z);
    glNormal3f(normals[triangle->normals[1]].X, normals[triangle->normals[1]].Y, normals[triangle->normals[1]].Z);
    glVertex3f(vertices[triangle->vertices[1]].X, vertices[triangle->vertices[1]].Y, vertices[triangle->vertices[1]].Z);
    glNormal3f(normals[triangle->normals[2]].X, normals[triangle->normals[2]].Y, normals[triangle->normals[2]].Z);
    glVertex3f(vertices[triangle->vertices[2]].X, vertices[triangle->vertices[2]].Y, vertices[triangle->vertices[2]].Z);
}
glEnd();
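For completeness, the Vec3 and Triangle types used above are nothing special; here is a sketch of what they are assumed to look like (these are my own helper types, not part of TinyOBJ, and vertices, normals and triangles are just std::vectors of them):

#include <array>

struct Vec3 {
    float X, Y, Z;
    Vec3(float x, float y, float z) : X(x), Y(y), Z(z) {}
};

struct Triangle {
    std::array<int, 3> vertices;  // indices into the vertices container
    std::array<int, 3> normals;   // indices into the normals container
    Triangle(std::array<int, 3> v, std::array<int, 3> n)
        : vertices(v), normals(n) {}
};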
I have vertex positions and indices, and I want vertex normals:
// input
vector<Vec3f> points = ... // position
vector<Vec3i> facets = ... // index (triangles)
// output
vector<Vec3f> norms; // normal
Method 1
I compute normal like this:
norms.resize(points.size()); // for each vertex there is a normal
for (Vec3i f : facets) {
    int i0 = f.x();
    int i1 = f.y(); // index
    int i2 = f.z();
    Vec3d pos0 = points.at(i0);
    Vec3d pos1 = points.at(i1); // position
    Vec3d pos2 = points.at(i2);
    Vec3d N = triangleNormal(pos0, pos1, pos2); // face/triangle normal
    norms[i0] = N;
    norms[i1] = N; // use the same normal for all 3 vertices
    norms[i2] = N;
}
Then, the output mesh is rendered like this with a Phong material:
Method 1 with reversed normal
When I reverse normal direction in method 1:
norms[i0] = -N;
norms[i1] = -N;
norms[i2] = -N;
The dark and light regions are swapped:
The same happens when I swap position 0 with position 1:
// Vec3d N = triangleNormal(pos0, pos1, pos2);
Vec3d N = triangleNormal(pos1, pos0, pos2); // Swap pos0 with pos1
Method 2
I compute the normal by this method:
// Count how many faces/triangles a vertex is shared by
vector<int> counters;
counters.resize(points.size());
norms.resize(points.size());
for (Vec3i f : facets) {
    int i0 = f.x();
    int i1 = f.y(); // index
    int i2 = f.z();
    Vec3d pos0 = points.at(i0);
    Vec3d pos1 = points.at(i1); // position
    Vec3d pos2 = points.at(i2);
    Vec3d N = triangleNormal(pos0, pos1, pos2);
    // Must be normalized
    // https://stackoverflow.com/a/21930058/3405291
    N.normalize();
    norms[i0] += N;
    norms[i1] += N; // add normal to all vertices used in the face
    norms[i2] += N;
    counters[i0]++;
    counters[i1]++; // increment count for all vertices used in the face
    counters[i2]++;
}
// https://stackoverflow.com/a/21930058/3405291
for (int i = 0; i < static_cast<int>(norms.size()); ++i) {
    if (counters[i] > 0)
        norms[i] /= counters[i];
    else
        norms[i].normalize();
}
This method yields a totally dark final render with a Phong material:
I also tried methods suggested here and there which are similar to method 2. They all result in a final render that looks like that of method 2, i.e. all dark regions without any light ones.
Method 2 with reversed normal
I used method 2, but at the end, I reversed the normal direction by:
for (Vec3d & n : norms) {
    n = -n;
}
To my surprise, the final render is all dark:
Also in method 2, I tried swapping position 0 with position 1:
// Vec3d N = triangleNormal(pos0, pos1, pos2);
Vec3d N = triangleNormal(pos1, pos0, pos2); // swap pos0 with pos1
The final render is all dark regions without any light ones.
How?
Any idea how I can get my final render to be all light without any dark region?
That looks like your mesh does not have a consistent winding rule. Some triangles/faces are defined in CW order and others in CCW order of vertices, which causes some of your normals to face in the opposite direction. There are a few things you can do to remedy this:
use double-sided normals lighting
this is the easiest... somewhere in your fragment shader, or wherever you are computing the shading, you have something like this:
out_color = face_color*(ambient_light+diffuse_light*max(0.0,dot(face_normal,light_direction)));
when the normal is in the wrong direction, the result of the dot is negative, leading to a dark color, so just use the abs value instead:
out_color = face_color*(ambient_light+diffuse_light*abs(dot(face_normal,light_direction)));
In the fixed-function pipeline there is even a switch for this, IIRC:
glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, GL_TRUE);
repair mesh winding
there are 3D tools that can do this (Blender, 3DS, ...), or if your mesh is generated on the fly you could update your code to create consistent winding on your own.
Correct winding lets you use GL_CULL_FACE, which speeds up rendering considerably (see the snippet after this item). It also enables more advanced stuff like this:
OpenGL - How to create Order Independent transparency?
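For reference, once the winding is consistent, back-face culling in the fixed pipeline is just:

glEnable(GL_CULL_FACE);   // discard faces pointing away from the camera
glCullFace(GL_BACK);      // cull back faces (the default)
glFrontFace(GL_CCW);      // CCW winding counts as front-facing (the default)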
repair normals
In some cases there are ways to detect whether a normal points outwards or inwards relative to the mesh, for example like this:
Determining the direction of face normals consistently?
So just negate the wrong ones during the normal computation and that is it. However, if your mesh is too complicated (too far from convex) this is not so easily done, as you need to use local "centers" of the mesh or even inside-polygon tests, which are expensive.
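For a roughly convex mesh, a minimal sketch of that idea using the mesh centroid as the local reference point (types and operators follow the question's Vec3d code; a dot() helper is assumed to exist):

// Flip any normal that points toward the mesh centroid (only reliable for near-convex meshes).
Vec3d centroid(0.0, 0.0, 0.0);
for (size_t i = 0; i < points.size(); ++i) {
    Vec3d p = points[i];              // same implicit conversion the question's code relies on
    centroid += p;
}
centroid /= (double)points.size();

for (size_t i = 0; i < norms.size(); ++i) {
    Vec3d p = points[i];
    Vec3d outward = p - centroid;     // rough "outside" direction at this vertex
    if (dot(norms[i], outward) < 0.0) // normal points inward...
        norms[i] = -norms[i];         // ...so negate it
}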
The averaging method of generating normals giving you dark colors for both normal directions means you computed them wrongly and they are most likely zero. For more info about such an approach see:
How to achieve smooth tangent space normals?
Anyway, to debug problems like this it is best to render your normals as lines going out from the vertices of your mesh (use wireframe). Then you can see directly which normals are good and which are bad. Here is an example:
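In legacy OpenGL that debug view can be as little as a few lines (a sketch; the scale is arbitrary, and the Vec3d accessors/operators again follow the question's code):

// Draw each vertex normal as a short line starting at the vertex.
const double scale = 0.1;             // pick relative to your mesh size
glColor3f(1.0f, 1.0f, 0.0f);          // make the normals stand out
glBegin(GL_LINES);
for (size_t i = 0; i < points.size(); ++i) {
    Vec3d p = points[i];
    Vec3d q = p + norms[i] * scale;   // end point of the normal segment
    glVertex3d(p.x(), p.y(), p.z());
    glVertex3d(q.x(), q.y(), q.z());
}
glEnd();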
I have the following problem, as shown in the figure. I have a point cloud and a mesh generated by a tetrahedral algorithm. How would I carve the mesh using that algorithm? Are the landmarks the point cloud?
Pseudo code of the algorithm:
for every 3D feature point
    convert it to 2D projected coordinates
    for every 2D feature point
        cast a ray toward the polygons of the mesh
        get the intersection point
        if z_intersection < z of the 3D feature point
            for (every triangle vertex)
                cull that triangle
Here is a follow-up implementation of the algorithm mentioned by the guru Spektre :)
Updated code for the algorithm:
int i;
for (i = 0; i < out.numberofpoints; i++)
{
    Ogre::Vector3 ray_pos = pos; // camera position
    Ogre::Vector3 ray_dir = (Ogre::Vector3(out.pointlist[(i*3)], out.pointlist[(3*i)+1], out.pointlist[(3*i)+2]) - pos).normalisedCopy(); // vertex - camera pos
    Ogre::Ray ray;
    ray.setOrigin(Ogre::Vector3(ray_pos.x, ray_pos.y, ray_pos.z));
    ray.setDirection(Ogre::Vector3(ray_dir.x, ray_dir.y, ray_dir.z));
    Ogre::Vector3 result;
    unsigned int u1;
    unsigned int u2;
    unsigned int u3;
    bool rayCastResult = RaycastFromPoint(ray.getOrigin(), ray.getDirection(), result, u1, u2, u3);
    if (rayCastResult)
    {
        Ogre::Vector3 targetVertex(out.pointlist[(i*3)], out.pointlist[(3*i)+1], out.pointlist[(3*i)+2]);
        float distanceTargetFocus = targetVertex.squaredDistance(pos);
        float distanceIntersectionFocus = result.squaredDistance(pos);
        if (abs(distanceTargetFocus) >= abs(distanceIntersectionFocus))
        {
            if (u1 != -1 && u2 != -1 && u3 != -1)
            {
                std::cout << "Remove index " << "u1 ==> " << u1 << " u2 ==> " << u2 << " u3 ==> " << u3 << std::endl;
                updatedIndices.erase(updatedIndices.begin() + u1);
                updatedIndices.erase(updatedIndices.begin() + u2);
                updatedIndices.erase(updatedIndices.begin() + u3);
            }
        }
    }
}

if (updatedIndices.size() <= out.numberoftrifaces)
{
    std::cout << "current face list===> " << out.numberoftrifaces << std::endl;
    std::cout << "deleted face list===> " << updatedIndices.size() << std::endl;
    manual->begin("Pointcloud", Ogre::RenderOperation::OT_TRIANGLE_LIST);
    for (int n = 0; n < out.numberofpoints; n++)
    {
        Ogre::Vector3 vertexTransformed = Ogre::Vector3(out.pointlist[3*n+0], out.pointlist[3*n+1], out.pointlist[3*n+2]) - mReferencePoint;
        vertexTransformed *= 1000.0;
        vertexTransformed = mDeltaYaw * vertexTransformed;
        manual->position(vertexTransformed);
    }
    for (int n = 0; n < updatedIndices.size(); n++)
    {
        int n0 = updatedIndices[n+0];
        int n1 = updatedIndices[n+1];
        int n2 = updatedIndices[n+2];
        if (n0 < 0 || n1 < 0 || n2 < 0)
        {
            std::cout << "negative indices" << std::endl;
            break;
        }
        manual->triangle(n0, n1, n2);
    }
    manual->end();
}
Follow-up with the algorithm:
I now have two versions: one is the triangulated one and the other is the carved version.
It's not a surface mesh.
Here are the two files
http://www.mediafire.com/file/cczw49ja257mnzr/ahmed_non_triangulated.obj
http://www.mediafire.com/file/cczw49ja257mnzr/ahmed_triangulated.obj
I see it like this:
So you have got an image from a camera with a known matrix, FOV and focal length.
From that you know exactly where the focal point is and where the image is projected onto the camera chip (Z_near plane). So any vertex, its corresponding pixel and the focal point lie on the same line.
So for each view cast a ray from the focal point to each visible vertex of the pointcloud and test whether any face of the mesh is hit before the face containing the target vertex. If yes, remove it, as it would block the visibility.
A landmark in this context is just a feature point corresponding to a vertex of the pointcloud. It can be anything detectable (change of intensity, color, pattern, whatever); usually SIFT/SURF is used for this. You should have them located already, as that is the input for pointcloud generation. If not, you can pick the pixel corresponding to each vertex and test for background color.
Not sure how you want to do this without the input images. For that you need to decide which vertex is visible from which side/view. Maybe it is doable from nearby vertices somehow (like using vertex density points or correspondence to a planar face...), or the algo could be changed somehow to find unused vertices inside the mesh.
To cast a ray do this:
ray_pos=tm_eye*vec4(imgx/aspect,imgy,0.0,1.0);
ray_dir=ray_pos-tm_eye*vec4(0.0,0.0,-focal_length,1.0);
where tm_eye is the camera direct transform matrix, and imgx, imgy is the 2D pixel position in the image normalized to <-1,+1>, where (0,0) is the middle of the image. The focal_length determines the FOV of the camera, and the aspect ratio is the ratio of the image resolution image_ys/image_xs.
The ray/triangle intersection equation can be found here:
Reflection and refraction impossible without recursive ray tracing?
If I extract it:
vec3 v0,v1,v2; // input triangle vertexes
vec3 e1,e2,n,p,q,r;
float t,u,v,det,idet;
//compute ray triangle intersection
e1=v1-v0;
e2=v2-v0;
// Calculate planes normal vector
p=cross(ray[i0].dir,e2);
det=dot(e1,p);
// Ray is parallel to plane
if (abs(det)<1e-8) no intersection;
idet=1.0/det;
r=ray[i0].pos-v0;
u=dot(r,p)*idet;
if ((u<0.0)||(u>1.0)) no intersection;
q=cross(r,e1);
v=dot(ray[i0].dir,q)*idet;
if ((v<0.0)||(u+v>1.0)) no intersection;
t=dot(e2,q)*idet;
if ((t>_zero)&&(t<=tt)) // tt is distance to target vertex
{
// intersection
}
Follow ups:
To move between normalized image (imgx,imgy) and raw image (rawx,rawy) coordinates for an image of size (imgxs,imgys), where (0,0) is the top left corner and (imgxs-1,imgys-1) is the bottom right corner, you need:
imgx = (2.0*rawx / (imgxs-1)) - 1.0
imgy = 1.0 - (2.0*rawy / (imgys-1))
rawx = (imgx + 1.0)*(imgxs-1)/2.0
rawy = (1.0 - imgy)*(imgys-1)/2.0
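Wrapped as small helpers (just a sketch of the same four formulas, using double throughout):

// Convert raw pixel coordinates (0..imgxs-1, 0..imgys-1, origin top-left)
// to normalized coordinates in <-1,+1> with (0,0) at the image center, and back.
void raw_to_normalized(double rawx, double rawy, int imgxs, int imgys,
                       double &imgx, double &imgy) {
    imgx = (2.0 * rawx / (imgxs - 1)) - 1.0;
    imgy = 1.0 - (2.0 * rawy / (imgys - 1));
}
void normalized_to_raw(double imgx, double imgy, int imgxs, int imgys,
                       double &rawx, double &rawy) {
    rawx = (imgx + 1.0) * (imgxs - 1) / 2.0;
    rawy = (1.0 - imgy) * (imgys - 1) / 2.0;
}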
[progress update 1]
I finally got to the point where I can compile sample test input data for this, to even get started (as you are unable to share valid data at all):
I created a small app with a hard-coded table mesh (gray) and pointcloud (aqua) and simple camera control, where I can save any number of views (screenshot + camera direct matrix). When loaded back, it aligns with the mesh itself (the yellow ray goes through the aqua dot in the image and through the table mesh too). The blue lines are cast from the camera focal point to its corners. This emulates the input you have got. The second part of the app will use only these images and matrices with the point cloud (no mesh surface anymore), tetrahedronize it (already finished), and then just cast a ray through each landmark in each view (aqua dot) and remove all tetrahedrons hit before the target vertex in the pointcloud (this stuff is not even started yet, maybe at the weekend)... And lastly store only the surface triangles (easy, just use all triangles which are used only once; also already finished except the save part, but writing a Wavefront obj from it is easy ...).
[Progress update 2]
I added landmark detection and matching with the point cloud
As you can see, only valid rays are cast (those that are visible in the image), so some points of the point cloud do not cast rays (singular aqua dots). So now just the ray/triangle intersection and the tetrahedron removal from the list are what is missing...
I get an image from a camera (calibrated and without lens distortions) and I need to detect a rectangular object. Markers are a good example. For markers I check corner count, min size, board contrast and convexity. I had an idea on how to improve this in cases where there is a large amount of false rectangles.
Here is an example image:
Normally all of these are valid, because without knowing anything about the camera we cannot determine whether perspective allows these kinds of shapes. I know the size (or at least the ratio) of the rectangle in real life. So I had the idea that I should be able to disregard many of these shapes just by reprojecting them and checking for error.
For example, if I use solvePnPRansac it should not be able to converge if the shape is not possible. If it doesn't converge, I just disregard it. Sadly, none of the OpenCV solve functions allow me to check for error or convergence. I actually need some ratio or quality measure, because it is possible that some of the rectangles overlap. For example, my object finder identifies these rectangles:
One of the three is actually correct, or at least "the best". But I need some way to know which one it is. I cannot use things like line lengths because of the camera perspective. So I just thought I could solve and see which has the smallest error.
There are no lens distortions in the image, but even if there were, solvePnP allows passing the distortion coefficients D to it as well.
Is this even possible or am I missing something?
I guess I could try hacking around solvePnPRansac just to return convergence, but maybe there is a simpler way?
I figured I can do something like what is done during calibration with a grid: I can calculate the reprojection error. So first I solve to get the transformation matrix, then I transform the points in 3D using the transformation matrix, and afterwards use projectPoints to project them back into 2D. Then I check the distance between the original 2D points and the projected 2D points. This can then be used as a quality measure. Objects that are not possible often have a reprojection error of 100 pixels or more in my images, but possible objects have less than 20 px. So I just used a 25 pixel cutoff and it seems to work fine.
Note that more transformations are possible than I thought. In my original image maybe two are not possible with my current camera, but it still rejected a lot of fakes.
If nobody else has any ideas I will accept this as the answer.
Here is some code for the method I use:
//This is the object in 3D
double width = 50.0; //Object is 50mm wide
double height = 30.0; //Object is 30mm tall
cv::Mat object_points(4,3,CV_64FC1);
object_points.at<double>(0,0)=0;
object_points.at<double>(0,1)=0;
object_points.at<double>(0,2)=0;
object_points.at<double>(1,0)=width;
object_points.at<double>(1,1)=0;
object_points.at<double>(1,2)=0;
object_points.at<double>(2,0)=width;
object_points.at<double>(2,1)=height;
object_points.at<double>(2,2)=0;
object_points.at<double>(3,0)=0;
object_points.at<double>(3,1)=height;
object_points.at<double>(3,2)=0;
//Check all rectangles for error
cv::Mat image_points(4,2,CV_64FC1);
for (size_t i = 0; i < rectangles_to_test.size(); i++) {
    // Get rectangle points
    for (size_t c = 0; c < 4; ++c) {
        image_points.at<double>(c,0) = (rectangles_to_test[i].points[c].x);
        image_points.at<double>(c,1) = (rectangles_to_test[i].points[c].y);
    }
    // Calculate transformation matrix
    cv::Mat rvec, tvec;
    cv::solvePnP(object_points, image_points, M1, D1, rvec, tvec);
    cv::Mat rotation;
    Matrix4<double> transform;
    transform.init_identity();
    cv::Rodrigues(rvec, rotation);
    for(size_t row = 0; row < 3; ++row) {
        for(size_t col = 0; col < 3; ++col) {
            transform.set(row, col, rotation.at<double>(row, col));
        }
        transform.set(row, 3, tvec.at<double>(row, 0));
    }
    // Calculate projection
    std::vector<cv::Point3f> p3(4);
    std::vector<cv::Point2f> p2;
    Vector4<double> p = transform * Vector4<double>(0, 0, 0, 1);
    p3[0] = cv::Point3f((float)p.x, (float)p.y, (float)p.z);
    p = transform * Vector4<double>(width, 0, 0, 1);
    p3[1] = cv::Point3f((float)p.x, (float)p.y, (float)p.z);
    p = transform * Vector4<double>(width, height, 0, 1);
    p3[2] = cv::Point3f((float)p.x, (float)p.y, (float)p.z);
    p = transform * Vector4<double>(0, height, 0, 1);
    p3[3] = cv::Point3f((float)p.x, (float)p.y, (float)p.z);
    cv::projectPoints(p3, cv::Mat::zeros(1, 3, CV_64FC1), cv::Mat::zeros(1, 3, CV_64FC1), M1, D1, p2);
    // Calculate reprojection error
    rectangles_to_test[i].reprojection_error = 0.0;
    for (size_t c = 0; c < 4; ++c) {
        double dx = p2[c].x - rectangles_to_test[i].points[c].x;
        double dy = p2[c].y - rectangles_to_test[i].points[c].y;
        rectangles_to_test[i].reprojection_error += std::sqrt(dx*dx + dy*dy);
    }
    if (rectangles_to_test[i].reprojection_error > reprojection_error_threshold) {
        //rectangle is no good
    }
}
I've been trying to implement the marching cubes algorithm with C++ and Qt. So far all the steps have been written, but I'm getting a really bad result. I'm looking for orientation or advice about what could be going wrong. I suspect one of the problems may be with the voxel conception, specifically with which vertex goes in which corner (0, 1, ..., 7). Also, I'm not 100% sure about how to interpret the input for the algorithm (I'm using datasets). Should I read it in ZYX order and move the marching cube in the same way, or does it not matter at all? (Leaving aside the fact that not every dimension has to have the same size.)
Here is what I'm getting against what it should look like...
http://i57.tinypic.com/2nb7g46.jpg
http://en.wikipedia.org/wiki/Marching_cubes
http://en.wikipedia.org/wiki/Marching_cubes#External_links
Paul Bourke. "Overview and source code".
http://paulbourke.net/geometry/polygonise/
Qt_MARCHING_CUBES.zip: Qt/OpenGL example courtesy Dr. Klaus Miltenberger.
http://paulbourke.net/geometry/polygonise/Qt_MARCHING_CUBES.zip
The example requires boost, but looks like it probably should work.
His example has, in marchingcubes.cpp, a few different methods for calculating the marching cubes: vMarchCube1 and vMarchCube2.
In the comments it says vMarchCube2 performs the Marching Tetrahedrons algorithm on a single cube by making six calls to vMarchTetrahedron.
Below is the source for the first one vMarchCube1:
//vMarchCube1 performs the Marching Cubes algorithm on a single cube
GLvoid GL_Widget::vMarchCube1(const GLfloat &fX, const GLfloat &fY, const GLfloat &fZ, const GLfloat &fScale, const GLfloat &fTv)
{
    GLint iCorner, iVertex, iVertexTest, iEdge, iTriangle, iFlagIndex, iEdgeFlags;
    GLfloat fOffset;
    GLvector sColor;
    GLfloat afCubeValue[8];
    GLvector asEdgeVertex[12];
    GLvector asEdgeNorm[12];

    //Make a local copy of the values at the cube's corners
    for(iVertex = 0; iVertex < 8; iVertex++)
    {
        afCubeValue[iVertex] = (this->*fSample)(fX + a2fVertexOffset[iVertex][0]*fScale,
                                                fY + a2fVertexOffset[iVertex][1]*fScale,
                                                fZ + a2fVertexOffset[iVertex][2]*fScale);
    }

    //Find which vertices are inside of the surface and which are outside
    iFlagIndex = 0;
    for(iVertexTest = 0; iVertexTest < 8; iVertexTest++)
    {
        if(afCubeValue[iVertexTest] <= fTv) iFlagIndex |= 1<<iVertexTest;
    }

    //Find which edges are intersected by the surface
    iEdgeFlags = aiCubeEdgeFlags[iFlagIndex];

    //If the cube is entirely inside or outside of the surface, then there will be no intersections
    if(iEdgeFlags == 0)
    {
        return;
    }

    //Find the point of intersection of the surface with each edge
    //Then find the normal to the surface at those points
    for(iEdge = 0; iEdge < 12; iEdge++)
    {
        //if there is an intersection on this edge
        if(iEdgeFlags & (1<<iEdge))
        {
            fOffset = fGetOffset(afCubeValue[ a2iEdgeConnection[iEdge][0] ], afCubeValue[ a2iEdgeConnection[iEdge][1] ], fTv);
            asEdgeVertex[iEdge].fX = fX + (a2fVertexOffset[ a2iEdgeConnection[iEdge][0] ][0] + fOffset * a2fEdgeDirection[iEdge][0]) * fScale;
            asEdgeVertex[iEdge].fY = fY + (a2fVertexOffset[ a2iEdgeConnection[iEdge][0] ][1] + fOffset * a2fEdgeDirection[iEdge][1]) * fScale;
            asEdgeVertex[iEdge].fZ = fZ + (a2fVertexOffset[ a2iEdgeConnection[iEdge][0] ][2] + fOffset * a2fEdgeDirection[iEdge][2]) * fScale;
            vGetNormal(asEdgeNorm[iEdge], asEdgeVertex[iEdge].fX, asEdgeVertex[iEdge].fY, asEdgeVertex[iEdge].fZ);
        }
    }

    //Draw the triangles that were found. There can be up to five per cube
    for(iTriangle = 0; iTriangle < 5; iTriangle++)
    {
        if(a2iTriangleConnectionTable[iFlagIndex][3*iTriangle] < 0) break;
        for(iCorner = 0; iCorner < 3; iCorner++)
        {
            iVertex = a2iTriangleConnectionTable[iFlagIndex][3*iTriangle+iCorner];
            vGetColor(sColor, asEdgeVertex[iVertex], asEdgeNorm[iVertex]);
            glColor4f(sColor.fX, sColor.fY, sColor.fZ, 0.6);
            glNormal3f(asEdgeNorm[iVertex].fX, asEdgeNorm[iVertex].fY, asEdgeNorm[iVertex].fZ);
            glVertex3f(asEdgeVertex[iVertex].fX, asEdgeVertex[iVertex].fY, asEdgeVertex[iVertex].fZ);
        }
    }
}
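On the corner-numbering question: the edge and triangle tables in that source fix the numbering through a2fVertexOffset, which, as far as I recall from Bourke's sample code, is:

// Corner numbering assumed by the edge/triangle tables in Bourke's sample
// (corner i sits at the cube origin + a2fVertexOffset[i] * fScale):
static const GLfloat a2fVertexOffset[8][3] =
{
    {0.0, 0.0, 0.0}, {1.0, 0.0, 0.0}, {1.0, 1.0, 0.0}, {0.0, 1.0, 0.0},
    {0.0, 0.0, 1.0}, {1.0, 0.0, 1.0}, {1.0, 1.0, 1.0}, {0.0, 1.0, 1.0}
};

So corners 0-3 form the z=0 face and 4-7 the z=1 face directly above them; if your own voxel indexing doesn't match this ordering, the tables won't line up and you get exactly the kind of garbage mesh shown. Worth double-checking against the download.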
UPDATE: Github working example, tested
https://github.com/peteristhegreat/qt-marching-cubes
Hope that helps.
Finally, I found what was wrong.
I use a VBO indexer class to reduce the amount of duplicated vertices and make the render faster. This class is implemented with a std::map to find and discard already existing vertices, using a tuple of < vec3, unsigned short >. As you may imagine, a marching cubes algorithm generates structures with thousands, if not millions, of vertices. The highest number a common unsigned short can hold is 65535 (2^16 - 1). So, when the output geometry had more vertices than that, the index started to overflow and the result was a mess, since it started to overwrite old vertices with new ones. I just changed my implementation to draw with a plain, non-indexed VBO while I fix my class to support millions of vertices.
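For what it's worth, the eventual fix in the indexer boils down to handing out 32-bit indices instead of unsigned short and drawing with GL_UNSIGNED_INT; a rough sketch of that idea (simplified, not my actual class):

#include <map>
#include <vector>

struct PackedVertex {
    float x, y, z;
    bool operator<(const PackedVertex& o) const {
        if (x != o.x) return x < o.x;
        if (y != o.y) return y < o.y;
        return z < o.z;
    }
};

// Returns the index of v, adding it to out_vertices if it was not seen before.
unsigned int getOrAddIndex(const PackedVertex& v,
                           std::map<PackedVertex, unsigned int>& cache,
                           std::vector<PackedVertex>& out_vertices) {
    std::map<PackedVertex, unsigned int>::iterator it = cache.find(v);
    if (it != cache.end()) return it->second;
    out_vertices.push_back(v);
    unsigned int index = (unsigned int)out_vertices.size() - 1;  // 32-bit, no 65535 limit
    cache[v] = index;
    return index;
}

// ...and later draw with 32-bit indices:
// glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_INT, 0);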
The result, with some minor vertex normal issues, speaks for itself:
http://i61.tinypic.com/fep2t3.jpg