CGAL - Triangulate Mesh, Return Face to Face Mapping? - c++

I have a quad mesh which I want to triangulate, but I also would like to have a mapping which records which quads of the original mesh have become which triangles on the resultant mesh.
Obviously I am aware of the
CGAL::Polygon_mesh_processing::triangulate_faces(mesh);
function. But I was wondering whether, via the NamedParameters, it is possible to obtain this information?

This functionality is not provided, but it would be rather straightforward to add. In what context do you need it (homework, another open-source project, a company, ...)? What is your time frame?
Let me add that I have added the functionality in a GitHub PR. It is not necessarily the final version, as it has to go through the CGAL review process first. Maybe I can even get feedback from you.
Have a look at the example, where I copy the non-triangle mesh and, while copying, create a face-to-face map. This map is then passed to the visitor, which updates it.
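Independently of the PR, the idea behind such a face-to-face map can be pictured in plain C++ (no CGAL; `triangulate_with_map` is a hypothetical helper, not the library API): fan-triangulate each polygon and record which original face every new triangle came from, which is essentially what a triangulation visitor would report.

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <vector>

// A polygon soup: each face is a list of vertex indices.
using Face = std::vector<std::size_t>;

// Fan-triangulate every face and record, for each resulting triangle,
// the index of the original face it came from.
std::vector<Face> triangulate_with_map(const std::vector<Face>& faces,
                                       std::map<std::size_t, std::size_t>& tri_to_orig)
{
    std::vector<Face> tris;
    for (std::size_t f = 0; f < faces.size(); ++f) {
        const Face& poly = faces[f];
        // Fan around vertex 0: (v0, v1, v2), (v0, v2, v3), ...
        for (std::size_t i = 1; i + 1 < poly.size(); ++i) {
            tris.push_back({poly[0], poly[i], poly[i + 1]});
            tri_to_orig[tris.size() - 1] = f; // new triangle -> original face
        }
    }
    return tris;
}
```

For a mesh of one quad and one triangle this yields three triangles, the first two mapping back to the quad.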


Vertex buffer not clearing properly

Context
I'm a beginner in 3D graphics and I'm starting out with Vulkan (which I already know is not recommended for beginners, so please spare me that advice). I'm currently working on a university project to develop the base of a 3D computer graphics engine on top of the Vulkan API.
The problem
Example of running the app to render the classic 2D triangle
Drawing a 3D mesh after having drawn the triangle
So as you can see in the images above I want to be able to:
Run the engine.
Choose an object to be drawn.
Close the window.
Choose another object to be drawn.
Open the same window back up with only the last object chosen visible.
And the way I have been doing this is essentially by cleaning up the whole swap chain and recreating it from scratch once the window is closed and a new object has been chosen. I'm aware this probably sounds like sacrilege to any computer graphics engineer, but the reason I'm doing it is that I don't know a better way; I have only just finished the Vulkan tutorial.
Solutions tried
I have checked that I do a vkDestroyBuffer and vkFreeMemory on the current vertex buffer before recreating it again once I choose a different object.
I have disabled depth testing entirely in case it had something to do with it, it doesn't.
Note: the code is extensive and I really don't have a clue which part of it could be relevant to the problem, so I opted not to clutter the question. If there is a specific part you think might help you find the solution, please request it.
Thank you for taking the time to read my question.
A comment by user369070 drew my attention to the function I use to read OBJ files, which made me realize that this function wasn't clearing the data structure I use to store the vertices of the chosen object before passing them to the vertex buffer.
I just had to add vertices = {}; at the top of the function to solve it.
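A minimal sketch of the bug (hypothetical names; the real loader is of course larger): the loader appends into a member vector that survives between calls, so the previous model's vertices leak into the next vertex buffer unless the vector is reset first.

```cpp
#include <vector>

struct Vertex { float x, y, z; };

// Sketch of the bug: 'vertices' persists across loads, so without the
// reset at the top, every load appends to the previous model's data.
struct ObjLoader {
    std::vector<Vertex> vertices; // later copied into the vertex buffer

    void loadModel(const std::vector<Vertex>& parsed)
    {
        vertices = {}; // the fix: clear state left over from the previous model
        for (const Vertex& v : parsed)
            vertices.push_back(v);
    }
};
```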

Create Wavefront .obj files in C++ (mesh 3D)

I have a vector<vector<int>>, which contains my map (2D array created with my random generator):
(source: cjoint.com)
I want to display this map in 3D (with the Irrlicht 3D graphics library). The big problem: my map is too big (1920x1080), so I can't display 2,073,600 little cubes on my screen. (I want to be able to change my map and reload the screen with the right mesh.)
So my solution is to create one cube and write all the pixels I want onto it
(here is my little paint sketch to show you...)
(source: cjoint.com)
So... I know how to create/write/parse a file in C++; my problem now is that I don't know 3D perspective and the .obj format very well...
I am learning the OBJ format from Wikipedia and other docs.
I wonder if there is a simpler solution than modifying a .obj object live... and if not, I'd need some help with the design of my .obj...
I think you are confusing issues here. Alias Wavefront OBJ is a file format for storing 3D geometry; it is extremely easy to extract the geometry from it. The MTL (Material Template Library) is a bit more complex than just the geometry and is usually associated with .obj files for defining the visual representation of the geometry (in regards to its material appearance).
What you are asking is more of a geometric question (how to remove a hole from a surface) and depends entirely on how your geometry is represented (I assume triangulated, since you ask about OBJ, which represents triangulated data). More information is required about how you store your data.
Maybe try looking into Constructive Solid Geometry (CSG), which uses Boolean operations to construct geometry. If you have triangulated data, then unless you employ some form of BVH to decide which triangles/geometry to work on, you will ultimately be brute-forcing which triangles are valid and which need removing for your 'hole removal'.

SDL Tile and Sprite Rendering Terrain

Hello, recently I started to mess around with SDL, since I was interested in some 2D/2.5D games. So I started messing around with SDL in C++, looking to recreate something similar to the original Zelda.
As far as I understand, those games work with some kind of isometric perspective, or a standard orthogonal view, but one thing I do not understand is how you can generate 3D-like collisions between those objects on the map (tiles, sprites, etc., which are in 2D). Have a look at the video linked below. Is this created purely in SDL? Is it per-pixel or rectangular collision? Or might it involve OpenGL as well?
Link: https://www.youtube.com/watch?v=wFvAByqAuk0
The original probably used simple rectangular collision.
I believe that your "3D collision" is the partial collision present in some objects. For example, Link can go through the leaves, but not through the trunk.
You can do it easily in 2 ways:
Layers of rendering and collision: the trunk is located in one layer and is covered by some collision boxes. Link is in an intermediate layer, and the leaves are in another layer, on top of Link. Then you can check collision between Link's layer and the layer containing the trunk and other objects, for example.
Additionally, you can create a property for your tiles in which you store the type of collision you want. For example, 'box' collision tells your engine that the object is collidable on every side, while 'bottom' collision tells your engine that Link collides with this object only when walking down into it (this is the effect you see in some 2D sidescrollers: you can jump up through a tile, but then land on it as if it were solid).
Per-pixel collision is not worth it in such simple cases. I find it much better to tailor the collision yourself, using creativity, masks, and layers.
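The layer-plus-property scheme above can be sketched in a few lines of C++ (hypothetical names, SDL_Rect-style rectangles; not tied to any particular engine):

```cpp
// Axis-aligned rectangle, SDL_Rect-style.
struct Rect { int x, y, w, h; };

// Per-tile collision property, as described above.
enum class CollisionType { None, Box, Bottom };

bool intersects(const Rect& a, const Rect& b)
{
    return a.x < b.x + b.w && b.x < a.x + a.w &&
           a.y < b.y + b.h && b.y < a.y + a.h;
}

// A 'Bottom' tile only blocks when the player is moving downward (vy > 0,
// with y growing downward as in SDL); a 'Box' tile always blocks.
bool collides(const Rect& player, int vy, const Rect& tile, CollisionType type)
{
    if (!intersects(player, tile)) return false;
    switch (type) {
        case CollisionType::Box:    return true;
        case CollisionType::Bottom: return vy > 0;
        default:                    return false;
    }
}
```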
BTW: This topic would fit better on https://gamedev.stackexchange.com/

Rotating a 3D mesh in direct x

I have inherited a DirectX project which I am trying to improve. The problem I am having is that I have two meshes and I want to move one independently of the other. At the moment I can manipulate the world matrix simply enough, but I am unable to rotate an individual mesh.
V( g_MeshLeftWing.Create( pd3dDevice, L"Media\\Wing\\Wing.sdkmesh", true));
loads the mesh, and later it is rendered:
renderMesh(pd3dDevice, &g_MeshLeftWing );
Is there a way I can rotate the mesh? I tried transforming it using a matrix, with no success:
g_MeshLeftWing.TransformMesh(&matLeftWingWorld,0);
Any help would be great.
Firstly, you appear to be loading an ".sdkmesh" file. It was documented heavily in the DirectX SDK that ".sdkmesh" was made for the SDK samples and should not be used as an actual mesh loading/drawing solution.
Therefore I would advise you to start looking at alternative means of loading and drawing your model; not only will that give you a greater understanding of DirectX, but it should ultimately answer your question in the long run!
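That said, the usual way to rotate one mesh independently of another is to give each mesh its own world matrix and set it just before drawing that mesh. A library-free sketch of the rotation itself (hypothetical names, not the SDKmesh API; in D3D you would build the equivalent matrix with D3DXMatrixRotationY and set it as the world transform for that mesh only):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Rotate a point about the Y axis; applying this per-vertex (or setting the
// equivalent 4x4 matrix as one mesh's world transform before drawing it)
// rotates that mesh independently of every other mesh.
Vec3 rotateY(const Vec3& v, float radians)
{
    const float c = std::cos(radians), s = std::sin(radians);
    return { c * v.x + s * v.z, v.y, -s * v.x + c * v.z };
}
```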

facial animation

I am developing a 3D facial animation application in Java using OpenGL (JOGL). I have done the animation with morph targets, and now I am trying to do parameterized facial animation.
I can't attach the vertices of the face to the appropriate parameter. For example, I have the parameter "eyebrow length": how can I know which vertices belong to the eyebrows (facial features)? Please, could anybody help me? I'm using an OBJ file to read the face model, and FaceGen to create it.
I appreciate your help.
For each morph target you assign each vertex a weight: how strongly it is influenced by the respective morph target. There's no algorithmic way to do this; it must be done manually.
The Wavefront OBJ file format may not be ideal for storing the geometry in this case, since it lacks support for storing such auxiliary data. You could extend the file format, but this will probably clash with programs expecting a "normal" OBJ.
I strongly suggest using a format that has been designed to support any number of additional attributes. You may want to look at OpenCTM.