Load mesh file with TetGen in C++

I want to load a mesh file using the TetGen library in C++, but I don't know the right procedure or which switches to activate in my code in order to get the constrained Delaunay mesh.
I tried a basic load of a dinosaur mesh (from rocq.inria.fr) with the default behavior:
#include "tetgen.h"

tetgenio in, out;
in.firstnumber = 0;                               // node numbering starts at 0
in.load_medit("TetGen\\parasaur1_cut.mesh", 0);   // load the Medit .mesh file
tetgenbehavior *b = new tetgenbehavior();         // default behavior, no switches
tetrahedralize(b, &in, &out);
The shape is supposed to be like this:
When using TetView it works perfectly. But with my code I got the following result:
I tried activating the piecewise linear complex (plc) switch for the Delaunay constraint:
b->plc = 1;
and I got just a few parts from the mesh:
Maybe there are more parts but I don't know how to get them.

That looks a lot like you might be loading a quad mesh as a triangle mesh, or vice versa. One thing is clear: you are reading the floats from the file correctly, since the boundaries of the object look roughly right. Make certain you are loading a strictly triangle- or quad-based mesh. If it is a format that you can load into Blender, I'd recommend loading it, triangulating it, and re-exporting it, just in case a stray polygon snuck in there.
Another possibility is an off-by-one indexing error. Are you sure you are getting each triangle/quad in the correct order? That is to say, make sure you are loading triangles 123 123 123 and NOT 1 231 231 231.
One other possibility: if this format indexes all of the vertices and then lists the indices of each face, you might be loading all of the vertices correctly and then getting the indices of the triangles/quads messed up, as described in the previous two paragraphs. I'm thinking this is the case, since it looks like all of your points are correct, but the lines connecting them are badly wrong.
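If it does turn out to be the indexing, the usual fix is a single pass that shifts every face index down by one before you upload or draw; a minimal sketch, assuming the faces are stored as triples of ints (the container and function name are illustrative, not from the question):

#include <array>
#include <vector>

// Convert 1-based face indices (as many mesh formats store them)
// to the 0-based indices an in-memory vertex array expects.
void shiftToZeroBased(std::vector<std::array<int, 3>> &faces) {
    for (std::array<int, 3> &face : faces)
        for (int &index : face)
            --index;
}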

Rendering mathematical equations with AST in SFML C++

I'm trying to render a mathematical equation from an AST in SFML.
My current approach is to have a function that creates a base sf::Texture from each character, such as:
sf::Texture ASTHelper::GetTextureFromDefaultChar(char c) {
    sf::Text tmp;
    tmp.setFont(this->textFont);            // set font/size before measuring bounds
    tmp.setCharacterSize(this->fontSize);
    tmp.setString(std::string(1, c));
    tmp.setFillColor(sf::Color::Black);
    int x = static_cast<int>(tmp.getLocalBounds().width);
    int y = static_cast<int>(tmp.getLocalBounds().height);
    sf::RenderTexture tex;
    tex.create(x, y);
    tex.clear(sf::Color::Transparent);      // transparent, so black glyphs stay visible
    tex.draw(tmp);
    tex.display();
    sf::Texture returnTex = tex.getTexture();  // copies the texture off the render target
    return returnTex;
}
then merge/move/copy those textures into larger equations while traversing the AST.
For example, given an expression like (x+1), I can use GetTextureFromDefaultChar() for each character, then merge the textures together horizontally.
The problem is that merging and copying sf::Texture/sf::RenderTexture seems to be strongly discouraged, and certainly not something to do every frame:
https://en.sfml-dev.org/forums/index.php?topic=17566.0
https://en.sfml-dev.org/forums/index.php?topic=18020.0
https://en.sfml-dev.org/forums/index.php?topic=10512.0
I've also looked into other APIs/libraries to see if anything could be used from C++ (since I don't want to reinvent the wheel); MathJax seems to be used a lot (Mathjax in C++ console), but in browsers.
So,
Is there a better way than merging/copying sf::Texture and sf::RenderTexture (or a completely different arrangement that I may have overlooked)?
and
If not, are there any different C++ libraries that support this?
Let's say you have every math symbol as an sf::IntRect/sf::FloatRect giving its position inside one sf::Texture that contains all of them.
Then you could just calculate their positions along a line, i.e. where each symbol would sit if you were to draw the whole equation.
Then draw these as sf::Sprite or sf::RectangleShape objects to one sf::RenderTexture and display it; now you have a texture composed of your symbols.
This way you neither draw all the individual textures nor create the sf::RenderTexture from scratch every frame. Turning another math equation into a texture just requires calling sf::RenderTexture::clear(sf::Color::Transparent) and repeating the same steps.
There is no merging or copying of textures, just one texture sheet of symbols drawn into one render texture.
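A minimal sketch of that approach, assuming an atlas texture and a glyphRects lookup that you build elsewhere (both names are hypothetical):

#include <SFML/Graphics.hpp>
#include <map>
#include <string>

sf::Texture atlas;                      // one sheet containing every symbol
std::map<char, sf::IntRect> glyphRects; // where each symbol sits in the sheet

void renderEquation(const std::string &symbols, sf::RenderTexture &target) {
    target.clear(sf::Color::Transparent);          // reuse the same render texture
    float x = 0.f;
    for (char c : symbols) {
        sf::Sprite glyph(atlas, glyphRects.at(c)); // sprite over a sub-rect of the atlas
        glyph.setPosition(x, 0.f);
        target.draw(glyph);
        x += static_cast<float>(glyphRects.at(c).width);
    }
    target.display();                              // target.getTexture() is now the equation
}

Drawing sprites is cheap; the only render-to-texture work left is one clear/draw/display pass per equation change, not per frame.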

Unable to get textures to work in OpenGL in Common Lisp

I am building a simple Solar system model and trying to set textures on some spheres.
The geometry is generated properly, and I tried a couple of different ways to generate the texture coordinates. At present I am relying on glu:quadric-texture to generate the coordinates when glu:sphere is called.
However, the textures never appear - objects are rendered in flat colors.
I went through several OpenGL guides and I do not think I am missing a step, but who knows.
Here is what is roughly happening:
call gl:enable :texture-2d to turn on textures
load images using cl-jpeg
call gl:bind-texture
copy data from image using gl:tex-image-2d
generate texture ids with gl:gen-textures. Also tried generating ids one by one instead of all at once, which had no effect.
during drawing, create a new quadric, enable texture coordinate generation, and bind the texture before generating the quadric points:
(let ((q (glu:new-quadric)))
  (if (planet-state-texture-id ps)
      (progn (gl:enable :texture-gen-s)
             (gl:enable :texture-gen-t)
             (glu:quadric-texture q :true)
             (gl:bind-texture :texture-2d planet-texture-id))
      (glu:quadric-texture q :false))
  (glu:sphere q
              planet-diameter
              *sphere-resolution*
              *sphere-resolution*))
I also tried a more manual method of texture coordinate generation, which had no effect.
Out of ideas here…
make-texture function
texture id generation
quadric drawing
When the program runs, I can see that the textures are loaded and texture ids are reserved; it prints:
loading texture from textures/2k_neptune.jpg with id 1919249769
Loaded data. Image dimensions: 1024x2048
I don't know if you've discovered a solution to your problem, but after creating a test image, and modifying some of your code, I was able to get the texture to be applied to the sphere.
The problem is that you are attempting to upload textures to the GPU before you've enabled texturing: (gl:enable :texture-2d) has to be called before you start handling texture/image data.
I'd recommend moving the let* block with planets-init in the main function to after setup-gl, and also moving the format call that prints the planet data, so it runs without raising an error.
My recommendation is something like:
(let ((camera ...
...
(setup-gl ...)
(let* ((planets...
...
(format ... planet-state)
In your draw-planet function, you'll want to add (gl:bind-texture :texture-2d 0) at the end of it so that the texture isn't used for another object, like the orbital path.
As is, the (gl:color 1.0 ...) before the (glu:quadric-texture ...) will modify the color of the rendered object, so it may not look like what you're expecting it to look like.
Edit: I should've clarified this, but as your code stands it goes
initialize-planets > make-textures > enable-textures > render
When it should be
enable-textures > init-planets > make-textures > render
You're correct that you're not missing a step; the steps in your code are just misordered.
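For reference, the same ordering sketched in legacy fixed-function OpenGL in C++ (w, h, and pixels stand in for the data you decoded with cl-jpeg):

#include <GL/gl.h>

GLuint setupPlanetTexture(GLsizei w, GLsizei h, const unsigned char *pixels) {
    glEnable(GL_TEXTURE_2D);                        // enable texturing before any texture work
    GLuint tex;
    glGenTextures(1, &tex);                         // then reserve an id
    glBindTexture(GL_TEXTURE_2D, tex);              // bind it
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, h, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, pixels); // upload only after enabling
    glBindTexture(GL_TEXTURE_2D, 0);                // unbind so other objects stay untextured
    return tex;
}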

How do I remove self-intersecting triangles from a 3D surface mesh?

I have a CGAL surface_mesh of triangles with some self-intersecting triangles which I'm trying to remove to create a continuous 2-manifold shell, ultimately for printing.
I've attempted to use remove_self_intersection() and autorefine_and_remove_self_intersections() from this answer. The first only removes a few of the self-intersections, while the second removes my mesh completely.
So I'm trying my own approach: I find the self-intersections and then attempt to delete them. I've tried using the low-level remove_face, but the borders are not detectable afterwards, so I'm unable to fill the resulting holes. This answer refers to using the higher-level Euler::remove_face, but that method, and make_hole, seem to discard my mesh entirely.
Here is an extract (I'm using break to see if I can get at least one triangle removed, and I'm only trying the first face of each pair):
vector<pair<face_descriptor, face_descriptor>> intersected_tris;
PMP::self_intersections(mesh, back_inserter(intersected_tris));
for (pair<face_descriptor, face_descriptor> &p : intersected_tris) {
    CGAL::Euler::remove_face(mesh.halfedge(get<0>(p)), mesh);
    break;
}
My approach to removing self-intersecting triangles is to aggressively delete the intersecting faces along with nearby faces, and then fill the resulting holes. Thanks to @sloriot's comment I realised that the Euler::remove_face function was failing due to duplicate faces in the results returned by both the self_intersections and expand_face_selection functions.
A quick way to remove duplicate faces from the vector result of those two functions is:
std::set<face_descriptor> s(selected_faces.begin(), selected_faces.end());
selected_faces.assign(s.begin(), s.end());
This code converts the vector of faces into a set (sets contain no duplicates) and then copies the set back into the vector.
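An equivalent in-place alternative is the sort/unique idiom from <algorithm> (face_descriptor is ordered, which the std::set version above already relies on):

std::sort(selected_faces.begin(), selected_faces.end());
selected_faces.erase(std::unique(selected_faces.begin(), selected_faces.end()),
                     selected_faces.end());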
Once the duplicates were removed, the Euler::remove_face function worked correctly, including updating the borders so that the triangulate_hole function could be used on the result producing a final surface with no self-intersections.

OBJ, Buffer objects, and face indices

I recently made good progress getting vertex buffer objects to work.
So I moved on to element arrays, and I figured that with those implemented I could load vertex and face data from an OBJ file.
I'm not too good at reading files in C++, so I wrote a Python script to parse the OBJ and write two separate text files giving me a vertex array and the face indices, which I pasted directly into my code. That's about 6000 lines, but it works (no compile errors).
Here's what it looks like:
I think they're wrong. I'm not sure. The order of the vertices and faces isn't changed, just extracted from the OBJ, because I don't have normals or textures working for buffer objects yet. I sort of do, if you look at the cube, but not really.
Here's the render code:
void Mesh_handle::DrawTri() {
    glBindBuffer(GL_ARRAY_BUFFER, vertexbufferid);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, elementbufferid);
    int index1 = glGetAttribLocation(bound_program, "inputvertex");
    int index2 = glGetAttribLocation(bound_program, "inputcolor");
    int index3 = glGetAttribLocation(bound_program, "inputtexcoord");
    glEnableVertexAttribArray(index1);
    glVertexAttribPointer(index1, 3, GL_FLOAT, GL_FALSE, 9 * sizeof(float), 0);
    glEnableVertexAttribArray(index2);
    glVertexAttribPointer(index2, 4, GL_FLOAT, GL_FALSE, 9 * sizeof(float), (void*)(3 * sizeof(float)));
    glEnableVertexAttribArray(index3);
    glVertexAttribPointer(index3, 2, GL_FLOAT, GL_FALSE, 9 * sizeof(float), (void*)(7 * sizeof(float)));
    glDrawArrays(GL_TRIANGLE_STRIP, 0, elementcount);
    //glDrawElements(GL_TRIANGLE_STRIP, elementcount, GL_UNSIGNED_INT, 0);
}
My Python parser, which just writes the info into a file: source
The object is Ezreal from League of Legends.
I'm not sure if I'm reading the faces wrong or if they're not even what I thought they were. Am I supposed to use GL_TRIANGLE_STRIP or something else? Any hints are welcome, and I can provide more info on request.
Indices in .obj files are 1-based, so you have to subtract 1 from all indices in order to use them with OpenGL.
First, as Andreas stated, .obj files use 1-based indices, so you need to convert them to 0-based indices.
Second:
glDrawArrays(GL_TRIANGLE_STRIP,0,elementcount);
//glDrawElements(GL_TRIANGLE_STRIP,elementcount,GL_UNSIGNED_INT,0);
Unless you did some special work to turn the face list in your .obj file into a triangle strip, you don't have triangle strips. You should be rendering GL_TRIANGLES, not strips.
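A sketch of the corrected draw call, assuming elementcount holds the number of indices in your element buffer:

glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, elementbufferid);
glDrawElements(GL_TRIANGLES, elementcount, GL_UNSIGNED_INT, 0); // indexed triangles, not a strip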
From the image, your vertices are definitely messed up. It looks like you specified a stride of 9*sizeof(float) in your glVertexAttribPointer calls, but from what I can tell from your code your array is tightly packed.
glEnableVertexAttribArray(index1);
glVertexAttribPointer(index1, 3, GL_FLOAT, GL_FALSE, 0, 0);
Also remove the stride from the color/texture coordinate attributes.

Parsing a Wavefront .obj file using C++

While trying to parse a Wavefront .obj file, I thought of two approaches:
1. Create a 2D array the size of the vertex list. When a face uses a vertex, get its coordinates from the array.
2. Remember the starting position of the vertex list in the file, and when a face uses a vertex, scan the lines until you reach that vertex.
IMO, option 1 will be very memory-intensive, but much faster.
Since option 2 involves extensive file reading (and because the number of vertices in most objects is very large), it will be much slower, but less memory-intensive.
The question is: Comparing the tradeoff between memory and speed, which option would be better suited to an average computer?
And, is there an alternative method?
I plan to use OpenGL along with GLFW to render the object.
IMO, option 1 will be very memory-intensive, but much faster.
You must get those vertices into memory anyway. But there's no need for a 2D array, which BTW would cause two pointer indirections and thus a major performance hit. Just use a simple std::vector<Vertex> for your data; the vector index is the index for the accompanying face list.
EDIT due to comment
class Vertex
{
public:
    // Anonymous unions/structs let you access the same storage either
    // by name (v.x) or as an array (v.pos[0]). Note that anonymous
    // structs are a common compiler extension, not standard C++.
    union { struct { float x, y, z; }; float pos[3]; };
    union { struct { float nx, ny, nz; }; float normal[3]; };
    union { struct { float s, t; }; float texcoord[2]; };
};

std::vector<Vertex> vertices;  // the vector index doubles as the face-list index
Generally you read the list of vertices into an array. Parsing ASCII text is extremely slow; do it only once, when loading the file, and then keep everything in arrays in memory.
The same goes for the triangles/faces. Each triangle is generally composed of a list of three vertex indices, which should also be stored in an array.
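A minimal sketch of that layout, assuming a triangulated .obj whose face entries are plain "f i j k" triples (no normal/texcoord slashes):

#include <array>
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

struct Vec3 { float x, y, z; };

// Read "v x y z" and "f i j k" lines into two flat arrays.
bool loadObj(const std::string &path,
             std::vector<Vec3> &vertices,
             std::vector<std::array<unsigned, 3>> &faces) {
    std::ifstream file(path);
    if (!file) return false;
    std::string line;
    while (std::getline(file, line)) {
        std::istringstream ss(line);
        std::string tag;
        ss >> tag;
        if (tag == "v") {
            Vec3 v;
            ss >> v.x >> v.y >> v.z;
            vertices.push_back(v);
        } else if (tag == "f") {
            std::array<unsigned, 3> f;
            ss >> f[0] >> f[1] >> f[2];
            for (unsigned &i : f) --i;   // .obj indices are 1-based
            faces.push_back(f);
        }
    }
    return true;
}

Real .obj files also carry "vn", "vt", and slash-separated face entries, so treat this as a starting point rather than a full parser.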
You may find the OBJ reader in the VTK open source library to be useful: http://www.vtk.org/doc/nightly/html/classvtkOBJReader.html. We use it and have had no reason to write our own... Use VTK directly, or you may find studying the source code to be good for further inspiration of your own reader.
In my opinion, one of the major shortcomings of OBJ files is the use of ASCII. 3D ASCII files (be it STL, PLY, OBJ, etc.) are very slow to load due to the string parsing. Binary formats are much faster and should always be used if performance is an issue: the load time for a good binary format is near-instantaneous.
Just load them into arrays. Memory should not be an issue. Your system (usually) has way more memory than your GPU. If you are running into memory problems, you are probably loading a model that is too detailed. (I am semi-assuming that you are going to make a game in OpenGL. If you have a specific need for such large model files, you will still have to work out a way to load the appropriate chunks.)
You shouldn't need a two-dimensional array. Your models should be triangulated, and then you can simply load the .obj file using GLUT's obj loader. Simply store points, faces, and normals in three separate arrays/buffers. There is an example of how to do it here, but if you want it to be fast you should go for a binary format.
This is a pretty decent solution for prototyping: run a script that generates the arrays for use in OpenGL or your preferred rendering API. obj2opengl.pl is a Perl script, so you'll need Perl installed, which you can find here. The GitHub link is here.
While running the Perl script you may get a runtime error on line 154 concerning if(defined(@center)). Replace it with if(@center).
From the example, once the header file is generated with the data, you can use it as shown:
/*
created with obj2opengl.pl
source file    : ./banana.obj
vertices       : 4032
faces          : 8056
normals        : 4032
texture coords : 4420
*/

// include generated arrays
#import "./banana.h"

// set input data to arrays
glVertexPointer(3, GL_FLOAT, 0, bananaVerts);
glNormalPointer(GL_FLOAT, 0, bananaNormals);
glTexCoordPointer(2, GL_FLOAT, 0, bananaTexCoords);

// draw data
glDrawArrays(GL_TRIANGLES, 0, bananaNumVerts);