What is the purpose of semantics?
If I had a vertex layout like this:
struct VS_Input
{
float4 position : COLOR;
float4 color : POSITION;
};
Would it actually matter that I reversed the semantics on the two members?
If I have to send Direct3D a struct per vertex, why couldn't it just copy my data as is?
If I provide Direct3D with a vertex whose layout doesn't match that of the shader, what will happen? For example, what if I pass the following vertex into the above shader?
struct MyVertex
{
Vec4 pos;
Vec2 tex;
Vec4 col;
};
The D3D documentation says that a warning will be produced and that my data will be "reinterpreted".
Does that mean "reinterpreted" as in reinterpret_cast<>? That is, will my shader try to use the texture coordinates and half of the color as the color in the shader? Or will it search my vertex layout for the element that matches each semantic and shuffle the input into the right places to make the shader work?
And if the above is not true, then why does D3D require an explicit vertex layout?
Semantics are used to bind your vertex buffers to your shader inputs. In D3D11 you have buffers, which are just chunks of memory to store data in; shaders, which have an input signature describing the inputs they expect; and input layouts, which represent the binding between buffers and shaders and describe how the data in your buffers is to be interpreted. The role of the semantic is just to match elements in the buffer layout description with the corresponding shader inputs; the names are not really important as long as they match up.
It's up to you to correctly specify the layout of your vertex data when you create an input layout object. If your input layout doesn't match the actual in-memory layout of your data, the effect is like a reinterpret_cast and you'll render garbage. Provided your semantics match up correctly between your input elements and your shader inputs, they will be bound correctly, and things like the order of elements don't matter. It's the semantics that describe how data elements from the vertex buffer are passed to the inputs of a shader.
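For illustration, here is a minimal sketch of creating an input layout that matches MyVertex, assuming the shader declares inputs with the POSITION, TEXCOORD and COLOR semantics (the CPU-side field sizes and the semantic names are assumptions of this example, not taken from your shader):
#include <cstddef>
#include <d3d11.h>
struct MyVertex
{
    float pos[4];   // bound to the POSITION input
    float tex[2];   // bound to the TEXCOORD input
    float col[4];   // bound to the COLOR input
};
// The order of the elements here does not have to match the order of the
// shader inputs; only the semantic names have to match.
static const D3D11_INPUT_ELEMENT_DESC kLayout[] =
{
    { "POSITION", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, offsetof(MyVertex, pos), D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,       0, offsetof(MyVertex, tex), D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "COLOR",    0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, offsetof(MyVertex, col), D3D11_INPUT_PER_VERTEX_DATA, 0 },
};
HRESULT CreateLayout(ID3D11Device* device, const void* vsBytecode, SIZE_T vsBytecodeSize,
                     ID3D11InputLayout** layout)
{
    // The vertex shader bytecode carries the input signature the layout is
    // validated against.
    return device->CreateInputLayout(kLayout, 3, vsBytecode, vsBytecodeSize, layout);
}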
I know that each time a vertex shader runs, it basically accesses part of the buffer (VBO) being drawn; when drawing vertex number 7, for example, it's basically indexing 7 vertices into that VBO, based on the vertex attributes and so on.
layout (location = 0) in vec3 position;
layout (location = 1) in vec3 normal;
layout (location = 2) in vec3 texCoords; // This may be running on the 7th vertex for example.
What I want to do is have access to an earlier part of the VBO. For example, when it's drawing the 7th vertex I would like to have access to vertex number 1, so that I can interpolate with it.
Seeing that the shader is already indexing into the VBO when it runs, I would think that this is possible, but I don't know how to do it.
Thank you.
As you can see in the documentation, vertex attributes are expected to change on every shader run. So no, you can't access attributes defined for other vertices in a vertex shader.
You can probably do this:
Define a uniform array and pass in the values you need. But keep in mind that you are using more memory this way, you need to pass more data, etc.
As @Reaper said, you can use a uniform buffer, which can be accessed freely. But the GPU doesn't like random access; it's usually more efficient to stream the data.
You can also solve this by just adding the data for the later/earlier vertices into the array, because in C++ all vertices are at your disposal.
For example if this is the "normal" array:
{
vertex1_x, vertex1_y, vertex1_z, normal1_x, normal1_y, normal1_z, texCoord1_x, texCoord1_y,
...
}
Then you could extend it with data for the other vertex to interpolate with:
{
vertex1_x, vertex1_y, vertex1_z, normal1_x, normal1_y, normal1_z, texCoord1_x, texCoord1_y, vertex2_x, vertex2_y, vertex2_z, normal2_x, normal2_y, normal2_z, texCoord2_x, texCoord2_y,
...
}
Actually you can pass any data per vertex. Just make sure that the stride size and offsets are adjusted accordingly in the glVertexAttribPointer calls.
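For example (a sketch, not taken from your code), the attribute setup for such an extended layout could look like this, where locations 3-5 are hypothetical extra vertex shader inputs that receive the duplicated neighbour vertex. It assumes the VAO and the VBO holding the data are already bound:
#include <GL/glew.h>
// 8 floats of own data + 8 floats of duplicated neighbour data per vertex.
void setupExtendedAttributes()
{
    const GLsizei stride = 16 * sizeof(float);
    // own data
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, (void*)(0 * sizeof(float)));   // position
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, stride, (void*)(3 * sizeof(float)));   // normal
    glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, stride, (void*)(6 * sizeof(float)));   // texCoords
    // the other vertex, duplicated into this vertex
    glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, stride, (void*)(8 * sizeof(float)));   // otherPosition
    glVertexAttribPointer(4, 3, GL_FLOAT, GL_FALSE, stride, (void*)(11 * sizeof(float)));  // otherNormal
    glVertexAttribPointer(5, 2, GL_FLOAT, GL_FALSE, stride, (void*)(14 * sizeof(float)));  // otherTexCoords
    for (GLuint i = 0; i < 6; ++i)
        glEnableVertexAttribArray(i);
}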
So I was reading "The Official OpenGL Guide" and in a section where they taught material lighting, they suddenly used the "flat" qualifier for an input variable in the fragment shader.
I googled the matter and all I came up with was "flat shading" and "smooth shading" and the differences between them, and I cannot understand how that is related to a simple "MatIndex" variable.
Here's the example code from the book:
struct MaterialProperties {
vec3 emission;
vec3 ambient;
vec3 diffuse;
vec3 specular;
float shininess;
};
// a set of materials to select between, per shader invocation
const int NumMaterials = 14;
uniform MaterialProperties Material[NumMaterials];
flat in int MatIndex; // input material index from vertex shader
What is this all about?
In the general case, there is not a 1:1 mapping between a vertex and a fragment. By default, the associated data per vertex is interpolated across the primitive to generate the corresponding associated data per fragment, which is what smooth shading does.
Using the flat keyword, no interpolation is done, so every fragment generated during the rasterization of that particular primitive will get the same data. Since primitives are usually defined by more than one vertex, this means that the data from only one vertex is used in that case. This is called the provoking vertex in OpenGL.
Also note that integer types are never interpolated. You must declare them as flat in any case.
In your specific example, the code means that each primitive can only have one material, which is defined by the material ID of the provoking vertex of each primitive.
It's part of how the attribute is interpolated for the fragment shader; the default is perspective-correct interpolation.
From the GLSL spec, section 4.5:
A variable qualified as flat will not be interpolated. Instead, it will have the same value for every fragment within a triangle. This value will come from a single provoking vertex, as described by the OpenGL Graphics System Specification. A variable may be qualified as flat can also be qualified as centroid or sample, which will mean the same thing as qualifying it only as flat.
Integral attributes need to be flat.
You can find the table of which vertex is the provoking vertex in the OpenGL spec, table 13.2 in section 13.4.
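If you need the other convention, the provoking vertex can be switched on the C++ side. A small sketch (core since OpenGL 3.2; the default is GL_LAST_VERTEX_CONVENTION):
#include <GL/glew.h>
void useFirstVertexForFlatAttributes()
{
    // Flat-qualified attributes now take their value from the first vertex
    // of each primitive instead of the last one.
    glProvokingVertex(GL_FIRST_VERTEX_CONVENTION);
}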
I want to keep multiple different meshes in the same VBO, so I can draw them with glDrawElementsBaseVertex. How different can vertex specifications for different meshes be in such a VBO in order to be able to draw them like that?
To be more specific:
1. Can vertex arrays for different meshes have different layouts (interleaved (VNCVNCVNCVNC) vs. batched (VVVVNNNNCCCC))?
2. Can vertices of different meshes have different numbers of attributes?
3. Can attributes at the same shader locations have different sizes for different meshes (vec3, vec4, ...)?
4. Can attributes at the same shader locations have different types for different meshes (GL_FLOAT, GL_HALF_FLOAT, ...)?
P.S.
When I say mesh, I mean an array of vertices, where each vertex has some attributes (position, color, normal, uv, ...).
OpenGL doesn't care what is in each buffer; all it looks at is how the attributes are specified, and if they happen to use the same buffer or even overlap, that's fine. It assumes you know what you are doing.
1. Yes. If you use a VAO for each mesh, then the layout of each is stored in its VAO and binding the other VAO will set the attributes correctly. This way you can define the offset from the start of the buffer, so you don't even need the glDraw*BaseVertex variants.
2. Yes.
3. Not sure.
4. Yes, they will be automatically converted to the correct type as defined in the glVertexAttribPointer call.
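To illustrate point 1, here is a sketch of two meshes with different layouts living in the same VBO, each described by its own VAO. The offsets, strides and vertex counts are hypothetical:
#include <GL/glew.h>
void setupSharedBuffer(GLuint vbo, GLintptr meshBOffset, GLsizei meshBVertexCount)
{
    GLuint vaoA, vaoB;
    glGenVertexArrays(1, &vaoA);
    glGenVertexArrays(1, &vaoB);
    // Mesh A: interleaved VNC VNC VNC ... starting at offset 0.
    glBindVertexArray(vaoA);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    const GLsizei strideA = 10 * sizeof(float);  // 3 position + 3 normal + 4 color floats
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, strideA, (void*)0);
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, strideA, (void*)(3 * sizeof(float)));
    glVertexAttribPointer(2, 4, GL_FLOAT, GL_FALSE, strideA, (void*)(6 * sizeof(float)));
    for (GLuint i = 0; i < 3; ++i) glEnableVertexAttribArray(i);
    // Mesh B: batched VVVV NNNN CCCC starting at meshBOffset, tightly packed.
    glBindVertexArray(vaoB);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    GLintptr nOffset = meshBOffset + meshBVertexCount * 3 * (GLintptr)sizeof(float);
    GLintptr cOffset = nOffset + meshBVertexCount * 3 * (GLintptr)sizeof(float);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)meshBOffset);
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, (void*)nOffset);
    glVertexAttribPointer(2, 4, GL_FLOAT, GL_FALSE, 0, (void*)cOffset);
    for (GLuint i = 0; i < 3; ++i) glEnableVertexAttribArray(i);
    glBindVertexArray(0);
}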
In addition to ratchet freak's answer, I'll only elaborate on point 3:
Yes, you can do that. If you set up your attribute pointers to specify more elements than your shader uses, the additional values are just never used.
If you do it the other way around and read more elements in the shader than are specified in your array, the missing elements are automatically extended to build a vector of the form (0, 0, 0, 1), so the fourth component will implicitly be 1 and all other (unspecified) ones 0. This makes it possible to use the vectors directly as homogeneous coordinates or RGBA colors in many cases.
In many shaders, one sees something like
in vec3 pos;
...
gl_Position = matrix * vec4(pos, 1.0);
This is actually not necessary, one could directly use:
in vec4 pos;
...
gl_Position = matrix * pos;
while still storing only 3-component vectors in the attribute array. As a side effect, one now has a shader which can also deal with full 4-component homogeneous coordinates.
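On the C++ side nothing changes in that case. A sketch, assuming tightly packed vec3 positions at location 0:
#include <GL/glew.h>
void setupPositionAttribute(GLuint vbo)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    // Only 3 floats per vertex are supplied; a shader input declared as
    // "in vec4 pos;" receives w = 1.0 automatically.
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
    glEnableVertexAttribArray(0);
}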
My (fragment) shader has a uniform array containing 12 structs:
struct LightSource
{
vec3 position;
vec4 color;
float dist;
};
uniform LightSource lightSources[12];
In my program I have 12 buffer objects that each contain the data for one light source. (They need to be separate buffers.)
How can I bind these buffers to their respective position inside the shader?
I'm not even sure how to retrieve the location of the array.
glGetUniformLocation(program,"lightSources");
glGetUniformLocation(program,"lightSources[0]");
These run without invoking an error, but the location is definitely wrong (4294967295). (The array is being used inside the shader, so I don't think it's being optimized out.)
As the glGetUniformLocation docs say:
name must be an active uniform variable name in program that is not a structure, an array of structures, or a subcomponent of a vector or a matrix.
...
Uniform variables that are structures or arrays of structures may be queried by calling glGetUniformLocation for each field within the structure.
So, you can only query one field at a time.
Like this:
glGetUniformLocation(program,"lightSources[0].position")
glGetUniformLocation(program,"lightSources[0].color")
glGetUniformLocation(program,"lightSources[0].dist")
Hope it helps.
Edit:
You can make your life easier (at the cost of old hardware/driver compatibility) by using Interface Blocks, Uniform Buffer Objects and glGetUniformBlockIndex. This will be more like DirectX constant buffers. Required hardware/driver support for that: either OpenGL 3.1 core or the ARB_uniform_buffer_object extension.
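A sketch of the C++ side of that approach, assuming the shader declares a block such as layout(std140) uniform LightBlock { LightSource lightSources[12]; }; (the block name "LightBlock" and the std140 CPU mirror below are assumptions of this example):
#include <GL/glew.h>
// std140 pads a vec3 to 16 bytes and rounds the struct size up to a
// multiple of 16, hence the explicit padding fields.
struct LightSourceStd140
{
    float position[3]; float pad0;
    float color[4];
    float dist;        float pad1[3];
};
void setupLightBlock(GLuint program, GLuint ubo, const LightSourceStd140* lights)
{
    const GLuint bindingPoint = 0;
    // Upload all 12 lights in one buffer.
    glBindBuffer(GL_UNIFORM_BUFFER, ubo);
    glBufferData(GL_UNIFORM_BUFFER, 12 * sizeof(LightSourceStd140), lights, GL_DYNAMIC_DRAW);
    // Point the shader's block and the buffer at the same binding point.
    GLuint blockIndex = glGetUniformBlockIndex(program, "LightBlock");
    glUniformBlockBinding(program, blockIndex, bindingPoint);
    glBindBufferBase(GL_UNIFORM_BUFFER, bindingPoint, ubo);
}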
Is it possible to have two layouts with the same location equate to two different input variables of different types in a shader? Currently, my program is not explicitly assigning any locations for the vertex, texture and normal vertex arrays, but in my shader, when I have selected location 0 for both my vertex position and texture coords, it gives me a perfect output. I wanted to know if this is just a coincidence or whether it is really possible to assign two variables to the same location. Here is the definition of the input variables in the vertex shader:
#version 440
layout (location = 0) in vec4 VertexPosition;
layout (location = 2) in vec4 VertexNormal;
layout (location = 0) in vec2 VertexTexCoord;
Technically... yes, you can. For Vertex Shader inputs (and only for vertex shader inputs), you can assign two variables to the same location. However, you may not attempt to read from both of them. You can dynamically select which one to read from, but it's undefined behavior if your shader takes a path that reads from both variables.
The relevant quote from the standard is:
The one exception where component aliasing is permitted is for two input variables (not block members) to a vertex shader, which are allowed to have component aliasing. This vertex-variable component aliasing is intended only to support vertex shaders where each execution path accesses at most one input per each aliased component.
But this is stupid and pointless. Don't do it.