On reading through the OBJ file format, I realized that there are several other statements besides vertices and faces, such as objects (o), groups (g), etc.
I am having trouble understanding how to parse these other statements (if necessary). Should I load the vertex and face data of every object (o) into a separate VAO?
Also, in that case, how will the rotation of the whole object work? Will all Sub-objects start rotating about their own centers?
How to make the multiple sub-objects rotate in unison like a single object and still have independent control over each whenever necessary?
Should I load the vertex and face data of every object (o) into a separate VAO?
Basically yes. Each object (o) or group (g, or smoothing group s) is essentially a separate part of the overall model, so you should create a separate renderable mesh for each one.
Generally the vertex data (positions, normals, texture coordinates) is not shared between groups, but since OBJ indices are global to the file, there's nothing in the format to say it couldn't be, as far as I know.
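As a rough sketch of what "a separate renderable mesh per group" might look like (the struct layout, function name, and attribute locations here are illustrative assumptions, not anything the OBJ format prescribes):

#include <vector>
#include <glad/glad.h>   // assumed GL loader; use whatever your project already uses

// Hypothetical per-group mesh: one VAO/VBO pair per o/g section of the OBJ file.
struct GroupMesh {
    GLuint vao = 0;
    GLuint vbo = 0;
    GLsizei vertexCount = 0;   // vertices after expanding the face indices
};

// Upload one group's already-expanded, interleaved vertex data (3 pos + 3 normal + 2 uv).
GroupMesh uploadGroup(const std::vector<float>& interleaved, GLsizei vertexCount)
{
    GroupMesh mesh;
    mesh.vertexCount = vertexCount;

    glGenVertexArrays(1, &mesh.vao);
    glBindVertexArray(mesh.vao);

    glGenBuffers(1, &mesh.vbo);
    glBindBuffer(GL_ARRAY_BUFFER, mesh.vbo);
    glBufferData(GL_ARRAY_BUFFER, interleaved.size() * sizeof(float),
                 interleaved.data(), GL_STATIC_DRAW);

    const GLsizei stride = 8 * sizeof(float);
    glEnableVertexAttribArray(0);   // position
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, (void*)0);
    glEnableVertexAttribArray(1);   // normal
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, stride, (void*)(3 * sizeof(float)));
    glEnableVertexAttribArray(2);   // texture coordinate
    glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, stride, (void*)(6 * sizeof(float)));

    glBindVertexArray(0);
    return mesh;
}

Each group then gets drawn with its own glBindVertexArray + draw call, all sharing the same model transform (see below).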
How to make the multiple sub-objects rotate in unison like a single object and still have independent control over each whenever necessary?
This is more of a design question. Assuming you have some sort of tree-like scene-graph then you would create a root node for the entire OBJ model and attach separate nodes for each group. To rotate the whole model you apply a transform to the root node. As far as I know the OBJ format does not specify transformations, rotations, etc.
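To make that concrete, here is a minimal scene-graph sketch (type and field names are my own, using GLM for the math): rotating only the root's local matrix spins every sub-object around the model's origin in unison, while changing a child's local matrix moves just that sub-object.

#include <vector>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Hypothetical node type: each OBJ group becomes a child of one root node.
struct Node {
    glm::mat4 local{1.0f};        // transform relative to the parent
    std::vector<Node*> children;  // sub-objects / groups

    // world = parent's world * local, propagated down the tree.
    void collectWorldMatrices(const glm::mat4& parentWorld,
                              std::vector<glm::mat4>& outWorlds) const {
        glm::mat4 world = parentWorld * local;
        outWorlds.push_back(world);               // use this matrix when drawing this node's mesh
        for (const Node* child : children)
            child->collectWorldMatrices(world, outWorlds);
    }
};

// Rotate the whole model: root.local = glm::rotate(glm::mat4(1.0f), angle, glm::vec3(0, 1, 0));
// Rotate one sub-object independently: change that group node's `local` instead.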
I have multiple geometries (D3D12_RAYTRACING_GEOMETRY_DESC) inside of a single DXR bottom level acceleration structure (BLAS). How can I determine which of those was hit inside of a closest hit shader?
The following HLSL intrinsics do something different:
PrimitiveIndex() returns the triangle index for the current geometry, but it restarts for each new geometry inside of the BLAS, so I don't know which one was hit.
InstanceIndex() returns the index of the top-level instance, but nothing about the bottom level.
InstanceID(), again, is only defined for the top level.
Starting with D3D12_RAYTRACING_TIER_1_1 there is a new intrinsic, uint GeometryIndex(), which returns the index of the geometry within the bottom-level acceleration structure (see the specification).
I am wondering about that as well. Unfortunately, I can't give you a definitive answer but on this page I found the following statement:
Since hit groups need information about the geometry that was hit – its vertex data and material properties – you typically need a local root table for them. To avoid passing data through the local root table, you may also use the instance id field in the top-level instance descriptor and utilize the instance index, which are implicitly available in the hit group shaders. Remember, though, that these values are shared between all geometries in the instance when the bottom level structure contains more than one geometry. To have unique data for each geometry, a local root table must be used.
So if I understood that correctly, you either have to use a local root table or you have to restrict yourself to only one geometry per bottom level structure.
There is a MultiplierForGeometryContributionToShaderIndex parameter in TraceRay which you can set to 1 to get a different hit group per geometry. If you store a list of materials per hit group, you probably need only one hit group per geometry.
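For reference, the DXR specification computes which hit-group record is used roughly like this (parameter names paraphrased; the exact identifiers are in the spec's shader-table addressing section):

hit group record index =
      RayContributionToHitGroupIndex                        (TraceRay argument)
    + GeometryMultiplier * geometry index within the BLAS   (GeometryMultiplier is the TraceRay argument quoted above)
    + InstanceContributionToHitGroupIndex                   (from the top-level instance descriptor)

So with the multiplier set to 1, consecutive geometries in a BLAS select consecutive hit-group records, which is how you can attach per-geometry data (via each record's local root table) even without GeometryIndex().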
See also RaytracingMiniEngineSample.
I'm developing a simple rendering engine as a pet project.
So far I'm able to load geometry data from Wavefront .obj files and render the objects onscreen separately. I know that the vertex coordinates stored in these files are defined in Model space, and that to place them correctly in the scene I need to apply a Model-to-world transform matrix to each vertex position (am I even correct here?).
But how do I define those matrices for each object? Do I need to develop a separate tool for scene composition, in which I will move objects around and the "tool" will calculate the appropriate Model-to-world matrices based on translations, rotations and so on?
I would look into the "Scene Graph" data structure. It's essentially a tree, where nodes (may) define their transformations relative to their parent. Think of it this way. Each of your fingers moves relative to your hand. Moving your hand, rotating or scaling it also involves doing the same transformation on your fingers.
It is therefore beneficial to express all these transformations relative to one another and combine them to determine the overall transformation of each individual part of your model. So you don't define each part's full model-to-world transformation directly, but rather a transformation from each part to its parent.
This saves you having to define a whole bunch of transformations yourself, which in the vast majority of cases are related in the way I described anyway. As such you save yourself a lot of work by representing your models/scene in this manner.
Each of these relative transformations is usually a 4x4 affine transformation matrix. Combining these is just a matter of multiplying them together to obtain the combination of all of them.
A description of Scene Graphs
In order to animate objects within a scene graph, you need to specify transformations relative to their parent in the tree. For instance, spinning wheels of a car need to rotate relative to the car's chassis. These transformations largely depend on what kind of animations you'd like to show.
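To make the wheel example concrete, here is a small sketch (using GLM as a stand-in for whatever math library the engine uses; all values are made up): combining a parent's matrix with a child's local matrix is a single multiplication, so moving the car automatically moves its wheels.

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Hypothetical relative transforms.
float wheelSpin = 0.3f;  // radians of wheel rotation this frame

glm::mat4 carLocal   = glm::translate(glm::mat4(1.0f), glm::vec3(10.0f, 0.0f, 0.0f));
glm::mat4 wheelLocal = glm::translate(glm::mat4(1.0f), glm::vec3(1.5f, -0.5f, 2.0f))   // wheel offset on the chassis
                     * glm::rotate(glm::mat4(1.0f), wheelSpin, glm::vec3(1, 0, 0));    // spin about the axle

// The wheel's model-to-world matrix is just the product of its chain of parents:
glm::mat4 wheelWorld = carLocal * wheelLocal;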
So I guess the answer to your question is "mostly yes". You do need to define transformations for every single object in your scene if things are going to look good. However, organising the scene into a tree structure makes this process a lot easier to handle.
Regarding the creation of those matrices, what you have to do is export a scene from an authoring package.
That software can be the same one you used to model the objects in the first place: Maya, Lightwave...
Right now you have your objects independent of each other.
So, using the package of your choice, either find a file format that lets you export a scene you have assembled by positioning each of your meshes where you want them, like FBX or glTF, or make your own.
Either way there is a scene structure, containing models, transforms, lights, cameras, everything you want in your engine.
After that you have to parse that structure.
You'll find here some explanations regarding how you could architect that:
https://nlguillemot.wordpress.com/2016/11/18/opengl-renderer-design/
Good luck,
I know OpenGL only slightly, and all these docs and tutorials are damn hard to read, so they don't help much. I do have some vision of how it could work, though, and would just like some clarification or validation of that vision.
I assume the 3D world is built from 3D meshes, and each mesh may be held in one array or a few arrays (storing the geometry for that mesh). I also assume that some meshes may be cloned, so to speak, and used more than once in the scene. So in my vision I have, say, 50 meshes, but some of them are used more than once. Let's call those clones instances of a mesh (each mesh may have zero, one, or more instances).
Is this vision okay? Is there more that should be added?
I understand that each instance should have its own position and orientation, so do we have some array of instances, each element containing one position-orientation matrix? Or do those matrices exist only in the code branches (you know what I mean: I set such a matrix, then send a mesh, then modify this position matrix, then send the mesh again, until all instances are sent)?
Does this exhaust the geometry (non-shader) part of things?
(Then comes the shader part, which I also don't quite understand; there is a tremendous amount of fuss about shaders, whereas this geometry part seems more important to me. Well, whatever.)
Can someone validate the vision I've laid out here?
So you have a model which will contain one or more meshes, a mesh that will contain one or more groups, and a group that will contain vertex data.
There is only a small difference between a model and a mesh: a model will contain additional data, such as textures, which will be used by a mesh (or meshes).
A mesh will also contain data on how to draw the groups such as a matrix.
A group is a part of the mesh which is generally used to move a part of the model using sub matrices. Take a look at "skeletal animation".
So, as traditional fixed pipelines suggest, you will usually have a stack of matrices that can be pushed and popped to define, in a sense, "sub-positions". Imagine having a model representing a dragon. The model would most likely consist of a single mesh, a texture, and maybe some other data for drawing. At runtime this model would have some matrix defining its basic position, rotation, even scale. Then, when the dragon needs to fly, you would move its wings. Since the wings may be identical, there may be only one group, but the mesh would contain data to draw it twice with different matrices. So the model has a matrix which is then multiplied with the wing group matrix to draw the wing itself:
// In legacy-OpenGL terms; dragonMatrix/wingMatrix/secondWingMatrix are column-major
// GLfloat[16] arrays and drawWing() stands in for whatever draws the wing group:
glPushMatrix();                      // push model matrix
glMultMatrixf(dragonMatrix);         // multiply with the dragon matrix
glPushMatrix();                      // push model matrix
glMultMatrixf(wingMatrix);           // multiply with the wing matrix
drawWing();                          // draw wing
glPopMatrix();                       // pop matrix
glPushMatrix();                      // push matrix
glMultMatrixf(secondWingMatrix);     // multiply with the second wing matrix
drawWing();                          // draw second wing
glPopMatrix();                       // pop matrix
// ... draw other parts of the dragon
glPopMatrix();                       // pop matrix
You can probably imagine that the wing is then divided into multiple parts, each again containing an internal relative matrix, achieving a deeper level of matrix nesting and drawing.
The same procedures would then be used on other parts of the model/mesh.
So the idea is to put as little data as possible on the GPU and reuse it. When a model is loaded, all the textures and vertex data should be sent to the GPU and be ready to use. The CPU must be aware of those buffers and how they are used. A whole model may have a single vertex buffer where each draw call reuses a different part of the buffer, but for now just imagine there is a buffer for every major part of the model, such as a wing, a head, a body, a leg...
In the end we usually come up with something like a shared object containing all the data needed to draw a dragon, which would be the textures and vertex buffers. Then we have another dragon object which points to that model and contains all the data needed to draw a specific dragon in the scene. That includes the matrix for its position in the scene, the matrices for the groups to animate the wings and other parts, maybe some size or even a basic color to combine with the original model... Some state is usually stored here as well, such as speed, some AI parameters, or maybe even hit points.
So in the end what we want to do is something like foreach(dragon in dragons) dragon.draw(), where each dragon uses its internal data to set up the basic model matrices plus any additional data needed. The draw method then calls out to all the groups and meshes in the model to be drawn as well, until the "recursion" is done and the whole model is drawn.
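A very condensed sketch of that split (all names are hypothetical) could look like this: one shared object owning the GPU resources, and lightweight per-dragon objects holding only per-instance state.

#include <vector>
#include <glm/glm.hpp>

// Shared, loaded once: GPU resources for the dragon model.
struct DragonModel {
    unsigned int vao = 0;       // vertex data for all parts
    unsigned int texture = 0;   // shared texture
    // ... per-group index ranges, group base matrices, etc.
};

// Per-dragon state: points at the shared model, owns only its own transforms and gameplay data.
struct Dragon {
    const DragonModel* model = nullptr;
    glm::mat4 worldMatrix{1.0f};     // position/orientation in the scene
    glm::mat4 leftWingMatrix{1.0f};  // per-group animation matrices
    glm::mat4 rightWingMatrix{1.0f};
    float hitPoints = 100.0f;

    void draw() const {
        // Bind model->vao and model->texture, upload worldMatrix and the group
        // matrices as uniforms, then issue the draw calls for each group ...
    }
};

// The render loop then really is just:
// for (const Dragon& dragon : dragons) dragon.draw();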
So yes, the structure of the data is quite complicated in the end but if you start with the smaller parts and continue outwards it all fits together quite epic.
There are other runtime systems that need to be handled as well to have smooth loading. For instance, if you are in a game and there are no dragons in the vicinity, you will not have the dragon model loaded. When a dragon enters the vicinity, the model should be loaded in the background if possible but drawn only when needed (in visual range). Then, when the dragon is gone, you may not simply unload the model; you must be sure all of the dragons are gone, and maybe even wait a little in case one returns. This then leads to something much like a garbage collector.
I hope this will help you to a better understanding.
I have an OBJ file that I've parsed, and, not surprisingly, the indexing for vertex positions and vertex texture coordinates is separate.
Here are a couple of OBJ lines to illustrate what I mean by separate indexing. These are quads, where the first index references the XYZ position and the second index references the UV coordinates:
f 3899/8605 3896/8606 720/8607 3897/8608
f 3898/8609 3899/8610 3897/8611 721/8612
I know that a solution is to do some duplication, but what's the cleverest way to proceed?
As of now I have these two options in mind:
1) Use the indexing to create two big sets of vertex positions and texture coordinates. This means I duplicate everything, so that I blindly end up with one vertex for each v/vt pair in the faces. If I have, for example, 1/3 in the first face and the same 1/3 in a different face, I will end up with two separate vertices. Then proceed with glDrawArrays, no longer using indices, but the newly created sets (full of duplicates).
2) Examine each face vertex to come up with unique "GL vertices" (in my specific case a GL vertex is just position + texture coordinate) and figure out a way of indexing with these. Unlike 1), here I do not treat the same pair found multiple times as separate vertices. I then create a new index list for these vertices and finally use glDrawElements for the draw call with the new indices.
Now, I believe the first option is way easier, but I guess each drawArrays call will be a bit slower than a drawElements call, right? How big is the advantage I'd gain?
The second option, at first thought, looks pretty slow as a preprocessing step and more complicated to implement. But will it grant me much better performance overall?
Is there any other way to deal with this issue?
If you have a few low-poly models, go for option #1; it's way easier to implement and the performance difference will be unnoticeable.
Option #2 would be the proper way if you have some high-poly models (looking at the sample, you have at least 9k vertices in there).
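A common way to implement option #2 (sketch only; the container choice and names below are mine) is to map each unique v/vt pair to a new index with a std::map while walking the faces (after triangulating the quads):

#include <cstdint>
#include <map>
#include <utility>
#include <vector>

// Builds de-duplicated GL vertices (position + uv interleaved) and an index list
// from OBJ-style face corners given as 0-based (positionIndex, texcoordIndex) pairs.
void buildIndexedMesh(const std::vector<float>& objPositions,              // x,y,z per OBJ v
                      const std::vector<float>& objTexcoords,              // u,v per OBJ vt
                      const std::vector<std::pair<int,int>>& faceCorners,  // v/vt pair per corner
                      std::vector<float>& outVertices,                     // 5 floats per GL vertex
                      std::vector<uint32_t>& outIndices)
{
    std::map<std::pair<int,int>, uint32_t> remap;   // v/vt pair -> new GL index

    for (const auto& corner : faceCorners) {
        auto it = remap.find(corner);
        if (it == remap.end()) {
            // First time this v/vt combination appears: emit a new GL vertex.
            uint32_t newIndex = static_cast<uint32_t>(outVertices.size() / 5);
            outVertices.push_back(objPositions[corner.first * 3 + 0]);
            outVertices.push_back(objPositions[corner.first * 3 + 1]);
            outVertices.push_back(objPositions[corner.first * 3 + 2]);
            outVertices.push_back(objTexcoords[corner.second * 2 + 0]);
            outVertices.push_back(objTexcoords[corner.second * 2 + 1]);
            it = remap.emplace(corner, newIndex).first;
        }
        outIndices.push_back(it->second);   // reuse the existing GL vertex
    }
    // outIndices can now feed glDrawElements(GL_TRIANGLES, ..., GL_UNSIGNED_INT, ...).
}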
Generally you should not worry about model loading time, since that is done only once; after that you can convert/save the model in whatever optimal format you need (serialize it just the way it is stored in your code).
Where's the dividing line between these two approaches? It's impossible to say without real-life profiling on the target hardware and your vertex rendering pipe (skeletal animation, shadows, everything adds its toll).
I know that glVertexAttribDivisor can be used to modify the rate at which generic vertex attributes advance during instanced rendering, but I was wondering if there was any way to advance attributes at a specific rate WITHOUT instancing.
Here is an example of what I mean:
Let's say you are defining a list of vertex positions that make up a series of lines, and with each line you wish to associate an ID. So you create two VBOs that each house the data related to one of those attributes (either all the vertex positions or all the vertex IDs). Traditionally, this means each VBO must be the size (in elements) of the number of lines x 2 (as each line consists of two points). This of course means I am duplicating the same ID value for both points of a line.
What I would like to do instead is specify that the IDs advance 1 element for every 2 elements the vertex position buffer advances. I know this requires that my vertex position buffer is declared first (so that I may reference it to tell OpenGL how often to advance the ID buffer) but it still seems like it would be possible. However, I cannot find any functions in the OpenGL specification that allow such a maneuver.
What you want is not generally possible in OpenGL. It's effectively a restricted form of multi-indexed rendering, so you'll have to use one of those techniques to get it.
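One such technique, as a hedged sketch (requires GL 4.3+ for shader storage buffers; the binding point, names, and shader snippet are illustrative): drop the per-vertex ID attribute and instead fetch the ID in the vertex shader from a buffer indexed by gl_VertexID / 2, which works for non-indexed GL_LINES draws where every two vertices form one line.

#include <vector>
#include <glad/glad.h>   // assumed GL loader; use whatever your project already uses

// C++ side: upload one ID per line into a shader storage buffer.
GLuint uploadLineIds(const std::vector<int>& lineIds)
{
    GLuint ssbo = 0;
    glGenBuffers(1, &ssbo);
    glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
    glBufferData(GL_SHADER_STORAGE_BUFFER, lineIds.size() * sizeof(int),
                 lineIds.data(), GL_STATIC_DRAW);
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo);   // binding point 0
    return ssbo;
}

// Vertex-shader fragment (GLSL), kept here as a string for illustration:
const char* vsSnippet = R"(
    layout(std430, binding = 0) readonly buffer LineIds { int lineId[]; };
    flat out int vLineId;
    void main() {
        vLineId = lineId[gl_VertexID / 2];   // two vertices per line share one ID
        // ... transform the position as usual and write gl_Position ...
    }
)";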