List of vertices from an OpenGL program to something importable - C++

I'm working on making a new visualization of the type of binary stars I study, and I'm starting from an existing code that renders a nice view of them given some sensible physical parameters.
I would like a bit more freedom on the animation side of things, however, and my first thought was to output the models made by the program in a format that could be read in by something else (Blender?). I've read up on the (Wavefront?) .OBJ format, and while it seems straightforward, I can't seem to get it right; importing fails silently, and I suspect it's because I'm not understanding how the objects are actually stored.
The program I'm starting from is a C++ project called BinSim, and it already has a flag to output vertices to a log file for all the objects created. It seems pretty simple, just a list of indices, x, y, z, and R, G, B (sometimes A) values. An example output format I've been working with can be found here. Each object is divided up into a latitude/longitude grid of points, and this is a small snippet (the full file is upwards of 180 MB for all the objects created).
I've been able to see that the objects are defined as triangle strips, but I'm confused enough by all of this that I can't see a clear path toward turning this list of vertices into an .OBJ (or similar) format. Sorry if this really belongs in another area (GameDev?), and thanks!

OpenGL is not a scene management system; it's a drawing API, and deriving model storage from OpenGL's data structures is tedious. As already said, OpenGL draws things. There are several drawing primitives, the triangle strip being one of them: you start with two vertices (forming a line), and each incoming vertex then forms a triangle with the last two vertices specified. The Wavefront OBJ format doesn't know about triangle strips; you'd have to break them down into individual triangles, emulating the way OpenGL does it.
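For illustration, here is a minimal sketch of that conversion (the struct and function names are illustrative, and it assumes the strip's vertices are in memory in the order they were emitted). Note that OBJ face indices are 1-based and that every second triangle in a strip has reversed winding:

    #include <fstream>
    #include <vector>

    struct Vec3 { float x, y, z; };

    // Write one triangle strip as individual OBJ triangles.
    // Assumes `strip` holds vertices in the order the strip emitted them.
    void writeStripAsObj(const std::vector<Vec3>& strip, std::ofstream& out)
    {
        for (const Vec3& v : strip)
            out << "v " << v.x << ' ' << v.y << ' ' << v.z << '\n';

        // Vertex i (0-based) completes a triangle with the two before it;
        // OBJ indices are 1-based, hence the shifted indices below.
        for (std::size_t i = 2; i < strip.size(); ++i) {
            if (i % 2 == 0)  // even triangle: keep the winding as-is
                out << "f " << i - 1 << ' ' << i << ' ' << i + 1 << '\n';
            else             // odd triangle: swap two indices to flip winding
                out << "f " << i << ' ' << i - 1 << ' ' << i + 1 << '\n';
        }
    }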
Also, don't forget that Blender is easily extensible using Python scripting, so you could just write an import script for whatever data you already have instead of going through the hassle of an ill-suited intermediate format.

Related

Joining two meshes into one

Suppose I have two meshes stored in any sane format (e.g. Wavefront .obj or COLLADA .dae), and I want to combine them into one mesh programmatically. More precisely, I have a landscape and an object as two meshes. I want to place the object into the landscape after applying a transformation to it, so it ends up in the right place, and export the result as a model.
As far as I understood, assimp has something similar named SceneCombiner, yet it seems to be an internal structure with no public interface (even though the ticket concerning it, https://github.com/assimp/assimp/issues/584, is closed, I couldn't find out how to use it).
Maybe I should use CGAL or something like that? I don't have very much experience with CG libraries, so any advice will be really useful!
You can do that with CGAL. You would read the two meshes, call copy_face_graph(), and then write the combined mesh back out.
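For reference, a minimal sketch of that pipeline with CGAL's Surface_mesh (exact header and I/O function names vary a little between CGAL versions; this follows the CGAL 5.x layout, and the translation vector is just an example):

    #include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
    #include <CGAL/Surface_mesh.h>
    #include <CGAL/boost/graph/copy_face_graph.h>
    #include <CGAL/Polygon_mesh_processing/transform.h>
    #include <CGAL/IO/polygon_mesh_io.h>

    typedef CGAL::Exact_predicates_inexact_constructions_kernel Kernel;
    typedef CGAL::Surface_mesh<Kernel::Point_3>                 Mesh;

    int main()
    {
        Mesh landscape, object;
        CGAL::IO::read_polygon_mesh("landscape.obj", landscape);
        CGAL::IO::read_polygon_mesh("object.obj", object);

        // Move the object into place before merging.
        Kernel::Aff_transformation_3 move(CGAL::TRANSLATION,
                                          Kernel::Vector_3(10, 0, 5));
        CGAL::Polygon_mesh_processing::transform(move, object);

        // Append the object's faces to the landscape mesh.
        CGAL::copy_face_graph(object, landscape);

        CGAL::IO::write_polygon_mesh("combined.obj", landscape);
    }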

Create Wavefront .obj files in C++ (mesh 3D)

I have a vector<vector<int>>, which contains my map (a 2D array created with my random generator).
I want to display this map in 3D (with the Irrlicht 3D graphics library). The big problem: my map is too big (1920x1080), so I can't display 2,073,600 little cubes on screen. (I want to be able to change my map and reload the screen with the right mesh.)
So my solution is to create one cube and write onto it all the pixels I want
(here is my little paint drawing to show you what I mean...)
So... I know how to create/write/parse a file in C++; now my problem is that I don't know 3D perspective and the .obj format very well...
I am learning the OBJ format from Wikipedia and other docs.
I wonder if there is a simpler solution than modifying a .obj file on the fly... And if not, I need some help with the design of my .obj...
I think you are confusing issues here. Alias Wavefront OBJ is a file format for storing 3D geometry, and it is extremely easy to extract the geometry from it. The MTL (Material Template Library) format is a bit more complex than just the geometry and is usually associated with .obj files, defining the visual representation of the geometry (its material appearance).
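To show just how little is needed, here is a minimal sketch that writes a unit cube as a .obj file from C++ (the vertex layout and face winding here are illustrative assumptions):

    #include <fstream>

    // Write a unit cube as a Wavefront .obj: "v" lines are positions,
    // "f" lines are 1-based indices into the preceding vertex list.
    int main()
    {
        std::ofstream obj("cube.obj");

        const float v[8][3] = {
            {0,0,0}, {1,0,0}, {1,1,0}, {0,1,0},   // bottom corners
            {0,0,1}, {1,0,1}, {1,1,1}, {0,1,1}    // top corners
        };
        const int f[12][3] = {                    // two triangles per side
            {0,2,1},{0,3,2},  {4,5,6},{4,6,7},
            {0,1,5},{0,5,4},  {2,3,7},{2,7,6},
            {1,2,6},{1,6,5},  {3,0,4},{3,4,7}
        };

        for (const auto& p : v)
            obj << "v " << p[0] << ' ' << p[1] << ' ' << p[2] << '\n';
        for (const auto& t : f)
            obj << "f " << t[0]+1 << ' ' << t[1]+1 << ' ' << t[2]+1 << '\n';
    }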
What you are asking is more along the lines of a geometric question (how to remove a hole from a surface), and it depends entirely on how your geometry is represented (I assume triangulated, since you ask about OBJ, which typically represents triangulated data). More information is required about how you store your data.
Maybe try looking into Constructive Solid Geometry (CSG), which uses Boolean operations to construct geometry. If you have triangulated data, then unless you employ some form of BVH to decide which triangles/geometry to work on, you will ultimately be using brute force to see which triangles are valid and which need removing for your 'hole removal'.

Highlight specific parts of a mesh c++ OpenGL

I have imported a mesh object (a .obj file from Blender) into an OpenGL window (GLFW) context. I am following various tutorials on 3D picking to allow me to select it. What I cannot get my head around is how to allow sub-portions of the mesh to get highlighted when clicked at one point. For example, a car mesh in which, if you click over the door, the entire door gets highlighted. Without going into game engines (my intention is to apply this concept to 3D diagrams in an app), what is the most straightforward way to implement this?
PS -- Before someone downvotes this, I have spent hours on Google trying to search for an answer, so apologies if this is off-topic / unsuitable.
The mesh has some colour information, in the form of vertex colours or textures. To highlight part of the mesh, you need to change the colour information in the vertex arrays or textures being used. Generating the required arrays and textures can be an expensive CPU operation, but once the data is generated, blitting it to the screen takes no time. The main complexity is in modifying the mesh's data structures.
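As a concrete sketch of that idea (the Part struct and buffer layout are assumptions for illustration, not a standard API): if each pickable part of the mesh owns a contiguous range of vertices in a per-vertex colour VBO, highlighting is just overwriting that range.

    #include <GL/glew.h>   // assumption: GLEW (or similar) provides the buffer entry points
    #include <vector>
    #include <cstddef>

    // Hypothetical layout: each pickable part ("door", "hood", ...)
    // owns a contiguous vertex range in the mesh's colour VBO (RGBA).
    struct Part { std::size_t firstVertex, vertexCount; };

    void highlightPart(GLuint colourVbo, const Part& part,
                       float r, float g, float b, float a)
    {
        std::vector<float> c(part.vertexCount * 4);
        for (std::size_t i = 0; i < part.vertexCount; ++i) {
            c[i*4+0] = r; c[i*4+1] = g; c[i*4+2] = b; c[i*4+3] = a;
        }
        // Overwrite just that part's colours in GPU memory, then redraw.
        glBindBuffer(GL_ARRAY_BUFFER, colourVbo);
        glBufferSubData(GL_ARRAY_BUFFER,
                        part.firstVertex * 4 * sizeof(float),
                        c.size() * sizeof(float), c.data());
    }

The picking step (deciding which part the click hit, e.g. via colour picking or a ray cast) is separate; once it returns a part, one call like the above is enough.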

When using Direct3D, what should be processed in code and what should be processed in HLSL?

I am very new to 3D programming, namely with DirectX. I have been trying to follow tutorials on how to do basic things, and I have been looking at the samples provided by Microsoft. One of the big questions I have had is how to tell which calculations should be done in the actual game code and which should be done in HLSL. I have not been able to understand what should be done where, because it looks to me like you could have almost all calculation code in your shader file, or you could have it all in the executable code and send only the bare minimum to the pixel and vertex shaders. How can one tell what code should go where? If you need an example, I'll try to find one.
"Code" - CPU code
"HLSL" - GPU code
Basically, you want everything that is pure graphics to happen on the GPU. That is, when the information about what you want to render has been sent to the GPU, it should take over and use that information to generate the final image.
You want the CPU to say to the GPU "this is what I want to render, and here is everything you need to make it happen", and then make sure to tell the GPU "this is how you render it".
Some examples (not a complete or final list in any way):
CPU:
Anything dealing with window opening/closing/resizing
User input from mouse, keyboard
Reading and setting configuration
Generating and updating view matrices (see the sketch after this list)
Application logic
Setting up and initializing rendering (textures, buffers etc)
Generating vertex data (position, texture coordinates etc)
Creating graphic entities (triangles, textures, colors etc)
Handling animation (timestepping, swapping buffers)
Sending updated data to the GPU for each frame
GPU:
Use the view matrices to put things in the right place on the screen
Interpolate from vertex data to fragment data
Shading (usually, this is the most complicated part)
Calculate and write final pixel color
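For a flavour of the split, here is a hedged sketch: the CPU builds the combined world-view-projection matrix with DirectXMath and uploads it to a constant buffer, and a tiny vertex shader applies it per vertex. Names like matrixCb are illustrative, and the device context and buffer are assumed to have been created during setup.

    #include <d3d11.h>
    #include <DirectXMath.h>
    using namespace DirectX;

    // CPU side: recompute the camera matrices each frame and hand the
    // result to the GPU via a constant buffer.
    void updateCamera(ID3D11DeviceContext* context, ID3D11Buffer* matrixCb,
                      float aspectRatio)
    {
        XMMATRIX world = XMMatrixIdentity();
        XMMATRIX view  = XMMatrixLookAtLH(XMVectorSet(0, 2, -5, 1),  // eye
                                          XMVectorSet(0, 0,  0, 1),  // target
                                          XMVectorSet(0, 1,  0, 0)); // up
        XMMATRIX proj  = XMMatrixPerspectiveFovLH(XM_PIDIV4, aspectRatio,
                                                  0.1f, 100.0f);
        // HLSL reads matrices column-major by default, so transpose first.
        XMMATRIX wvp = XMMatrixTranspose(world * view * proj);
        context->UpdateSubresource(matrixCb, 0, nullptr, &wvp, 0, 0);
    }

    // GPU side (HLSL), for reference: one multiply per vertex.
    //   cbuffer Matrices : register(b0) { float4x4 worldViewProj; };
    //   float4 main(float3 pos : POSITION) : SV_POSITION
    //   { return mul(float4(pos, 1.0f), worldViewProj); }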

Store 3D models in game, the best way

What is the best method to store 3D models in a game?
I store in vectors:
vector of triangles (each triangle contains the indices of its texture coordinates, vertices, and normals),
vector points;
vector normals;
vector texCords;
I'm not sure what constitutes "the best method" in this case, as that's going to be situation-dependent and, in your question, somewhat open to interpretation.
If you're talking about how to rapidly render static objects, you can go a long way using Display Lists. They can be used to memoize all of the OpenGL calls once and then recall those instructions to render the object whenever used in your game. All of the overhead you incurred to calculate vertex locations, normals, etc. is only paid once, when you build each display list. The drawback is that you won't see much of a performance gain if your models change too often.
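The classic pattern looks like this (a minimal sketch; the recorded calls in the middle are whatever your model's draw code already does):

    #include <GL/gl.h>

    // Build once: record the draw calls for a static model.
    GLuint buildModelList()
    {
        GLuint list = glGenLists(1);
        glNewList(list, GL_COMPILE);   // record, don't execute yet
        // ... the usual glBegin/glVertex/glNormal/glEnd calls ...
        glEndList();
        return list;
    }

    // Per frame: replay the recorded calls cheaply.
    void drawModel(GLuint list) { glCallList(list); }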
EDIT: SurvivalMachine below mentions that display lists are deprecated. In particular, they are deprecated in OpenGL Version 3.0 and completely removed from the standard in Version 3.1. After a little research, it appears that the Vertex Buffer Object (VBO) extension is the preferred alternative, though a number of sources I found claimed that performance wasn't as good as display lists.
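A minimal VBO sketch for comparison (assuming an extension loader such as GLEW provides the buffer entry points):

    #include <GL/glew.h>
    #include <vector>

    // Upload static vertex data once; later draw calls then read it
    // straight from GPU memory instead of resending it every frame.
    GLuint createVbo(const std::vector<float>& vertices)
    {
        GLuint vbo = 0;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER,
                     vertices.size() * sizeof(float),
                     vertices.data(),
                     GL_STATIC_DRAW);   // hint: contents rarely change
        return vbo;
    }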
I chose to import models from the .ms3d format, and while I may refactor later, I think it provided a decent foundation for the data structure of my 3D models.
The spec (in C header format) is a pretty straightforward read; I am writing my game in Java so I simply ported over each data structure: vertex, triangle, group, material, and optionally the skeletal animation elements.
But really, a model is just triplets of vertices (i.e. triangles), each with a material, right? Start by creating those basic structures, write a draw function that takes a model as an argument and draws it, and then add any other features as you need them. Iterative design, if you will.
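In code, that starting point can be as small as this sketch in C++ (names are illustrative; immediate mode is used only for brevity):

    #include <GL/gl.h>
    #include <vector>

    struct Vertex   { float x, y, z; };
    struct Triangle { int v[3]; int material; };   // vertex indices + material id
    struct Model {
        std::vector<Vertex>   vertices;
        std::vector<Triangle> triangles;
    };

    // Draw function that takes a model as its argument.
    void draw(const Model& m)
    {
        glBegin(GL_TRIANGLES);
        for (const Triangle& t : m.triangles)
            for (int i = 0; i < 3; ++i) {
                const Vertex& v = m.vertices[t.v[i]];
                glVertex3f(v.x, v.y, v.z);
            }
        glEnd();
    }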