Loading models and their materials in Assimp robustly with OpenGL? - c++

I realise that models can be loaded in many different ways: some might have no textures and just use vertex colours, while others have materials with textures. I currently have one shader that draws 3D models with lighting, but how would I go about loading models that need to be rendered differently, and have them all rendered correctly in the scene?
One method I thought of is having hard-defined attribute pointers in the vertex shader, where a model that doesn't need an attribute simply doesn't bind it. As far as I can tell, if no array is enabled for an attribute, OpenGL falls back to the current generic attribute value (which defaults to (0, 0, 0, 1)), so an unbound input won't contribute or offset any values, especially if you do the calculations separately and sum them at the end (for example, calculating the fragment's colour from the vertex colour and from the texture, and summing the two).
You can imagine the vertex shader looking like:
#version 330 core
layout (location = 0) in vec3 aPos; // Vertex positions
layout (location = 1) in vec3 aNormal; // Normals
layout (location = 2) in vec2 aTexCoords; // Optional Texture coordinates
layout (location = 3) in vec3 aColors; // Optional vertex colours
out vec3 FragPos;
out vec3 Normal;
out vec2 TexCoords;
out vec3 Colors;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
void main()
{
    FragPos = vec3(model * vec4(aPos, 1.0));
    Normal = mat3(transpose(inverse(model))) * aNormal;
    TexCoords = aTexCoords;
    Colors = aColors;
    gl_Position = projection * view * vec4(FragPos, 1.0);
}
An issue I can see with this is that I'm not sure how I could tell whether to use the texture coordinates, the vertex colours, or both. Perhaps using a uniform as a flag and keeping that data in the model object?
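For illustration, the flag idea could look roughly like this on the C++ side when importing with Assimp (a minimal sketch; the MeshInfo struct and the useTexCoords/useColors uniform names are hypothetical, and an OpenGL function loader is assumed):
#include <assimp/mesh.h>

// Per-mesh capability flags, recorded once at load time
struct MeshInfo {
    bool hasTexCoords;
    bool hasColors;
};

MeshInfo inspectMesh(const aiMesh* mesh) {
    // Assimp reports whether a given UV/colour channel is present
    return { mesh->HasTextureCoords(0), mesh->HasVertexColors(0) };
}

// At draw time (with the program already bound via glUseProgram),
// tell the shader which inputs are valid
void setMaterialFlags(GLuint program, const MeshInfo& info) {
    glUniform1i(glGetUniformLocation(program, "useTexCoords"), info.hasTexCoords ? 1 : 0);
    glUniform1i(glGetUniformLocation(program, "useColors"),    info.hasColors   ? 1 : 0);
}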
Another method would be defining shaders for different lighting techniques and then determining which shader is used for which model. This is my least favourite, as it doesn't allow for dynamic combinations (e.g. both vertex colours and textures at the same time).
What would be the approach taken in a professional setting so that any model can be loaded and drawn into a scene with lighting, in the same vein as game engines manage it? (I'm using forward rendering right now, so I don't need any deferred-shading concepts; that's above my pay grade at the moment.) I'm not seeking specific solutions, more an expert's view of how this issue is approached.
Also, I'm using OpenGL 3.3

What would be the approach taken in a professional setting so that any model can be loaded and drawn into a scene with lighting, in the same vein as game engines manage it?
Either this:
defining shaders for different lighting techniques and then determining which shader is used for which model
or this:
having hard-defined attribute pointers in the vertex shader, where a model that doesn't need an attribute simply doesn't bind it
Pick one. With the second approach, you don't need flags: you can just bind dummy values, such as a plain white texture if the model has no texture, and a white vertex colour if the model has no vertex colours.
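For example, the dummy bindings could be set up like this for the shader above (a minimal sketch; makeWhiteTexture and bindNeutralVertexColour are hypothetical helpers):
// 1x1 opaque white texture, bound whenever a mesh has no texture of its own
GLuint makeWhiteTexture() {
    GLuint tex;
    const unsigned char white[4] = {255, 255, 255, 255};
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1, 1, 0, GL_RGBA, GL_UNSIGNED_BYTE, white);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    return tex;
}

// For a mesh without vertex colours: leave attribute 3 (aColors) disabled and
// give it a constant value. When no array is enabled for an attribute,
// OpenGL uses the current generic attribute value set here instead.
void bindNeutralVertexColour() {
    glDisableVertexAttribArray(3);
    glVertexAttrib3f(3, 1.0f, 1.0f, 1.0f);
}
White is the identity value if the fragment shader combines the inputs multiplicatively (e.g. texture(...) * vec4(Colors, 1.0)), so every combination of present and missing inputs shades correctly with the one shader.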

Related

GLSL only render fragments with Z-position between uniforms minZ and maxZ

I have a mesh with many thousands of vertices. The model represents a complex structure that needs to be visualized both in its entirety and in part. The user of my application should be able to define a minimum and a maximum value for the Z axis; only fragments with a position between these limits should be rendered.
My naive solution would be to write a fragment shader somewhat like this:
#extension GL_EXT_texture_array : enable
uniform sampler2DArray m_ColorMap;
uniform float m_minZ;
uniform float m_maxZ;
in vec4 fragPosition;
in vec3 texCoord;
void main(){
    if (fragPosition.z < m_minZ || fragPosition.z > m_maxZ) {
        discard;
    }
    gl_FragColor = texture2DArray(m_ColorMap, texCoord);
}
Alternatively I could try to somehow filter out vertices in the vertex shader. Perhaps by setting their position values to (0,0,0,0) if they fall out of range.
I am fairly certain both of these approaches can work. But I would like to know if there is some better way of doing this that I am not aware of. Some kind of standard approach for slicing models along an axis.
Please keep in mind that I do not want to use separate VBOs for each slice, since the limits can be set dynamically by the user.
Thank you very much.
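For what it's worth, the naive shader above already satisfies that constraint: the limits are plain uniforms, so they can be re-uploaded every frame without touching any VBOs. A minimal sketch, assuming program is the linked shader program:
// Re-upload the slice limits whenever the user moves the sliders;
// the geometry itself never changes.
void setSliceRange(GLuint program, float minZ, float maxZ) {
    glUseProgram(program);
    glUniform1f(glGetUniformLocation(program, "m_minZ"), minZ);
    glUniform1f(glGetUniformLocation(program, "m_maxZ"), maxZ);
}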

how to pass GL_PATCHES

I am trying to create an example of an interpolated surface.
First I created an example of an interpolated trefoil.
Here is the source of my example.
Then I noticed that the animation is pretty slow, around 20-30 FPS.
After reading some papers, I learned that I have to "move" the evaluation of the trefoil onto the GPU, so I studied some papers about tessellation shaders.
At the moment I bind the following simple vertex shader:
#version 130
in vec4 Position;
in vec3 Normal;
uniform mat4 Projection;
uniform mat4 Modelview;
uniform mat3 NormalMatrix;
uniform vec3 DiffuseMaterial;
out vec3 EyespaceNormal;
out vec3 Diffuse;
void main()
{
    EyespaceNormal = NormalMatrix * Normal;
    gl_Position = Projection * Modelview * Position;
    Diffuse = DiffuseMaterial;
}
Now I have multiple questions:
Do I use an array of vertices to pass GL_PATCHES, like I already did with triangle strips? Which way is faster? DrawElements?
glDrawElements(GL_TRIANGLE_STRIP, Indices.Length, OpenGL.GL_UNSIGNED_SHORT, IntPtr.Zero);
or should I use
glPatchParameteri(GL_PATCH_VERTICES,16);
glBegin(GL_PATCHES);
glVertex3f(x0, y0, z0);
...
glEnd();
What about the array of indices? How can I determine the path, i.e. the order in which the patches will be passed?
Do I calculate the normals in the shader as well?
I found some examples of tessellation shaders, but only in #version 400.
Can I use this version on mobile devices as well (OpenGL ES)?
Can I pass multiple patches to the GPU via multithreading?
Many many thanks in advance.
In essence, I don't believe you have to send anything to the GPU in terms of indices (or vertices), as everything can be synthesized. I don't know if the evaluation of the trefoil knot directly maps onto the connectivity of the resulting tessellated mesh of a bilinear patch, but this could work.
You could make do with a simple vertex buffer where each vertex is the position of a single trefoil knot. Set glPatchParameteri(GL_PATCH_VERTICES, 1). Then you could draw multiple knots with a single call to glDrawArrays:
glDrawArrays(GL_PATCHES, 0, numKnots);
The tessellation control stage can be a simple pass-through stage. In the tessellation evaluation shader you can use the abstract patch type quads, move the evaluation of the trefoil knot (or any other biparametric shape) into it using the supplied [u, v] coordinates, and translate every generated trefoil by its input vertex. The normals can be calculated in the shader as well.
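On the application side, the setup described above could look roughly like this (a sketch; it requires an OpenGL 4.0+ context, and the usual compile/attach/link boilerplate is omitted):
// Tessellation stages are created like any other shader stage (GL 4.0+)
GLuint tcs = glCreateShader(GL_TESS_CONTROL_SHADER);
GLuint tes = glCreateShader(GL_TESS_EVALUATION_SHADER);
// ... compile both, attach them to the program, link ...

// One control point per patch: each patch is just one knot position
glPatchParameteri(GL_PATCH_VERTICES, 1);
glDrawArrays(GL_PATCHES, 0, numKnots);  // numKnots as in the draw call above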
Alternatively, you could use the geometry shader to synthesize the trefoil from a single input vertex position, using points as the input primitive and triangle strips as the output primitive. Then you could again call
glDrawArrays(GL_POINTS, 0, numKnots);
and create the trefoil in the geometry shader, using the same function that generates the indices to describe the order of evaluation, and translating the generated vertices by the input vertex.
In both cases there would be no need to multithread draw calls, which is ineffective with OpenGL anyway. You are limited by the maximum number of vertices that can be generated per patch, which should be 64 x 64 for tessellation, and by GL_MAX_GEOMETRY_OUTPUT_VERTICES for geometry shaders.

opengl vertex color interpolations

I'm just starting to learn graphics using OpenGL and barely grasp the ideas of shaders and so forth. Following a set of tutorials, I've drawn a triangle on screen and assigned a color attribute to each vertex.
Using a vertex shader I forwarded the color values to a fragment shader which then simply assigned the vertex color to the fragment.
Vertex shader:
[.....]
layout(location = 1) in vec3 vertexColor;
out vec3 fragmentColor;
void main(){
    [.....]
    fragmentColor = vertexColor;
}
Fragment shader:
[.....]
out vec3 color;
in vec3 fragmentColor;
void main()
{
    color = fragmentColor;
}
So I assigned a different colour to each vertex of the triangle. The result was a smoothly interpolated coloured triangle.
My question is: since I send a specific colour for each vertex to the fragment shader, where did the smooth interpolation happen? Is it a state enabled by default in OpenGL? What other values can this state have, and how do I switch among them? I would expect to have total control over the pixel colours via the fragment shader, but there seem to be calculations behind the scenes that alter the result. There are clearly things I don't understand; can anyone help on this matter?
Within the OpenGL pipeline, between the vertex shading stages (vertex, tessellation, and geometry shading) and fragment shading, sits the rasterizer. Its job is to determine which screen locations are covered by a particular piece of geometry (point, line, or triangle). Knowing those locations, along with the input vertex data, the rasterizer linearly interpolates the data values for each varying variable in the fragment shader and sends those values as inputs into your fragment shader. When applied to color values, this is called Gouraud shading.
Source: OpenGL Programming Guide, Eighth Edition.
If you want to see what happens without interpolation, call glShadeModel(GL_FLAT) before you draw; the default value is GL_SMOOTH. Note that glShadeModel only affects the legacy fixed-function pipeline: with user-defined varyings like yours, the equivalent is declaring the variable with the flat interpolation qualifier.

Fragment shader color interpolation: details and hardware support

I know using a very simple vertex shader like
attribute vec3 aVertexPosition;
attribute vec4 aVertexColor;
uniform mat4 uMVMatrix;
uniform mat4 uPMatrix;
varying vec4 vColor;
void main(void) {
    gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
    vColor = aVertexColor;
}
and a very simple fragment shader like
precision mediump float;
varying vec4 vColor;
void main(void) {
    gl_FragColor = vColor;
}
to draw a triangle with red, blue, and green vertices will end up producing a triangle whose colors are smoothly blended between the three vertices.
My questions are:
Do calculations for interpolating fragment colors belonging to one triangle (or a primitive) happen in parallel on the GPU?
What are the algorithm and also hardware support for interpolating fragment colors inside the triangle?
The interpolation happens in the step that feeds the fragment processor.
The algorithm is very simple: the color is just interpolated according to the fragment's position within the triangle.
Yes, absolutely.
Triangle color interpolation is part of the fixed-function pipeline (it's actually part of the rasterization step, which happens before fragment processing), so it is carried out entirely in hardware on probably all video cards. The equations for interpolating vertex data can be found in, e.g., the OpenGL 4.3 specification, section 14.6.1 (pp. 405-406). The algorithm defines barycentric coordinates for the triangle and uses them to interpolate between the vertices.
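Concretely: if a fragment has barycentric coordinates (a, b, c) with respect to the triangle's three vertices, with a + b + c = 1, the (non-perspective-corrected) interpolated color is C = a*C0 + b*C1 + c*C2; the full formula in the spec additionally weights each term by the corresponding vertex's 1/w for perspective correction.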
Besides the answers given here, I wanted to add that there doesn't have to be dedicated fixed-function hardware for the interpolations. Modern GPUs tend to use "pull-model interpolation", where the interpolation is actually done via the shader units.
I recommend reading Fabian Giesen's blog article about the topic (and the whole series of articles about the graphics pipeline in general).
On the first question: though there are parallel units on the GPU, it depends on the size of the triangle under consideration. For most GPUs, drawing happens on a tile-by-tile basis, and depending on the screen size of the triangle, if it falls completely within just one tile, it will be processed by only one tile processor. If it is split, it can be done in parallel by different units.
The second question has been answered by other posters before me.

Texture with alpha channel covers background objects

While implementing billboard objects in my engine, I encountered a problem (screenshot below).
As you can see, the billboard object covers everything in the background (the skybox seems to be an exception), and this is not exactly how I would like it to work. I have no idea where the problem is.
My fragment shader is pretty simple:
#version 330
uniform sampler2D tex;
in vec2 TexCoord;
out vec4 FragColor;
void main()
{
    // texture() replaces the deprecated texture2D() in GLSL 330
    FragColor = texture(tex, TexCoord);
}
and the billboard is just a triangle strip made in a geometry shader.
Any ideas would be nice.
This is probably a draw-order issue: you need to draw opaque objects first, and then draw alpha-blended objects back to front. Alternatively, you can enable alpha testing, or discard fragments in your shader if their alpha is below a certain threshold.
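A sketch of that draw order on the C++ side (drawOpaque and drawBillboardsBackToFront stand in for your own drawing functions):
glEnable(GL_DEPTH_TEST);
drawOpaque();                                      // 1. opaque geometry, depth writes on

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // standard alpha blending
glDepthMask(GL_FALSE);                             // 2. keep depth testing, but stop depth writes
drawBillboardsBackToFront();                       //    blended objects, farthest first
glDepthMask(GL_TRUE);
glDisable(GL_BLEND);
The discard alternative is a one-liner at the end of your fragment shader, e.g. if (FragColor.a < 0.1) discard;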