I am still kind of confused by the position input in the vertex shader, because you never actually assign that variable. Does OpenGL just know it?
It's the same with sampler2D. You never assign that variable either, and in all of the tutorials I've seen/read that's never addressed.
layout(location = 0) in vec3 position;
uniform sampler2D textureSampler;
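For the sampler2D part in particular, the variable is never assigned a texture directly in GLSL: the application sets the uniform to a texture unit index and binds a texture to that unit. A minimal sketch of the application side, assuming a linked `program` and an existing 2D `texture` (both names are illustrative, not from the tutorial):
#include <GL/glew.h>  // or any other GL function loader

// Hedged sketch: `program` and `texture` are assumed to already exist.
void bindTextureSampler(GLuint program, GLuint texture)
{
    glUseProgram(program);
    // The sampler uniform only stores a texture unit index (here: unit 0)...
    glUniform1i(glGetUniformLocation(program, "textureSampler"), 0);
    // ...and the actual texture is bound to that unit before drawing.
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, texture);
}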
Related
I am new to OpenGL and was doing this tutorial.
This was in the vertex shader code:
layout(location = 0) in vec3 vertexPosition_modelspace;
I understand what layout is doing, but could not find anything about vertexPosition_modelspace.
Explained by m-zayan:
It's an attribute, not a keyword (syntax); it's the same as defining a variable. The line could be interpreted as follows: a buffer, specified by the layout location, will supply a value (of vec3 type) to vertexPosition_modelspace.
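Concretely, the application attaches a buffer to that layout location before drawing, and each invocation of the vertex shader receives one vec3 from it as vertexPosition_modelspace. A minimal sketch of that setup, assuming a VBO of tightly packed vec3 positions (variable names are illustrative):
#include <GL/glew.h>  // or any other GL function loader

// Hedged sketch: `vbo` is assumed to hold tightly packed vec3 positions.
void attachPositionBuffer(GLuint vbo)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    // Attribute location 0 matches `layout(location = 0)` in the shader.
    glEnableVertexAttribArray(0);
    // 3 floats per vertex, no normalization, tightly packed, offset 0.
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
    // During glDrawArrays / glDrawElements, vertex i receives the i-th vec3
    // from this buffer as vertexPosition_modelspace.
}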
In a vertex shader, how can a function within the shader access a specific attribute array value after its vertex data has been buffered to a VBO?
In the shader below the cmp() function is supposed to compare a uniform variable with vertex i.
#version 150 core
in vec2 vertices;
in vec3 color;
out vec3 Color;
uniform mat4 projection;
uniform mat4 view;
uniform mat4 model;
uniform vec2 cmp_vertex; // Vertex to compare
out int isEqual; // Output variable for cmp()
// Comparator
vec2 cmp(){
int i = 3;
return (cmp_vertex == vertices[i]);
}
void main() {
Color = color;
gl_Position = projection * view * model * vec4(vertices, 0.0, 1.0);
isEqual = cmp();
}
Also, can cmp() be modified so that it does the comparison in parallel?
Based on the naming in your shader code, and the wording of your question, it looks like you misunderstood the concept of vertex shaders.
The vertex shader is invoked once for each vertex. So when your vertex shader code executes, it always operates on a single vertex. This means that the name of your in variable is misleading:
in vec2 vertices;
This variable gives you the position of the one and only vertex your shader is working on. So it would probably be clearer if you used a name in singular form:
in vec2 vertex;
Once you realize that you're operating on a single vertex, the rest becomes easy. For the comparison:
bool cmp() {
return (cmp_vertex == vertex);
}
Vertex shaders are typically already invoked in parallel, meaning that many instances can execute at the same time, each one on its own vertex. So there is no need for parallelism within a single shader instance.
You'll probably have more issues achieving what you're after. But I hope that this gets you at least over the initial hurdle.
For example, the following out variable is problematic:
out int isEqual;
out variables of the vertex shader have matching in variables in the fragment shader. By default, the value written by the vertex shader is linearly interpolated across triangles, and the fragment shader gets the interpolated values. This is not supported for variables of type int. They only support flat interpolation:
flat out int isEqual;
But this will probably not give you what you're after, since the value you see in the fragment shader will always be the same across an entire triangle.
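If you do keep the int output, the fragment shader also has to declare a matching flat input. A minimal sketch of such a fragment shader, written here as a C++ string suitable for glShaderSource(); the names Color and isEqual follow the shaders above, while the output name and the coloring logic are assumptions:
// Hedged sketch: matching fragment shader for the `flat out int isEqual` case.
const char* fragmentShaderSrc = R"(
    #version 150 core
    in vec3 Color;
    flat in int isEqual;   // must be declared flat here as well
    out vec4 outColor;

    void main() {
        // Illustrative only: tint the whole triangle red when the comparison
        // succeeded; with flat interpolation the value is constant per triangle.
        outColor = (isEqual != 0) ? vec4(1.0, 0.0, 0.0, 1.0) : vec4(Color, 1.0);
    }
)";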
As far as I know, if the vertex buffer has an attribute that the shader does not use, there is no problem.
What happens in OpenGL if the vertex buffer does not have an attribute that the vertex shader uses?
I know that in Direct3D 11, nothing is drawn if an attribute the shader needs is not provided in the vertex buffer.
Example
vb only has: position
vertex shader:
attribute vec3 position;
attribute vec4 color;
varying vec4 out_color;
void main()
{
gl_Position = vec4(position, 1.0);
out_color = color;
}
pixel shader:
varying vec4 out_color;
void main()
{
gl_FragColor = out_color;
}
What is the pixel color after the shaders execute?
There are two scenarios:
If the attribute array is enabled (i.e. glEnableVertexAttribArray() was called for the attribute), but you didn't make a glVertexAttribPointer() call that points the attribute at data in a valid VBO, bad things can happen. I believe the exact outcome is implementation dependent. For example, the draw call could crash, or there could be garbage rendering. The best thing I can find in the spec, which still sounds somewhat vague to me, is:
Most, but not all GL commands operating on buffer objects will detect attempts to read from or write to a location in a bound buffer object at an offset less than zero, or greater than or equal to the buffer’s size. When such an attempt is detected, a GL error will be generated. Any command which does not detect these attempts, and performs such an invalid read or write, has undefined results, and may result in GL interruption or termination.
If the attribute array is not enabled, the current attribute value is used for all vertices. This is the value set with glVertexAttrib4fv() and similar calls. If no such call was made, the default for the current attribute value is (0.0, 0.0, 0.0, 1.0).
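As a rough sketch of the second scenario, the missing color attribute can be given a constant value for all vertices with glVertexAttrib4f(); only the position has an enabled array. The attribute names match the shader above, everything else is an assumption:
#include <GL/glew.h>  // or any other GL function loader

// Hedged sketch: only `position` comes from a buffer; `color` uses the
// current attribute value because its array stays disabled.
void drawWithConstantColor(GLuint program, GLuint positionVbo, GLsizei vertexCount)
{
    GLint posLoc   = glGetAttribLocation(program, "position");
    GLint colorLoc = glGetAttribLocation(program, "color");

    glUseProgram(program);

    glBindBuffer(GL_ARRAY_BUFFER, positionVbo);
    glEnableVertexAttribArray(posLoc);
    glVertexAttribPointer(posLoc, 3, GL_FLOAT, GL_FALSE, 0, nullptr);

    // No glEnableVertexAttribArray(colorLoc): every vertex gets this value.
    // Without this call, the default current value would be (0.0, 0.0, 0.0, 1.0).
    glVertexAttrib4f(colorLoc, 1.0f, 0.5f, 0.0f, 1.0f);

    glDrawArrays(GL_TRIANGLES, 0, vertexCount);
}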
In HLSL I can write:
struct vertex_in
{
float3 pos : POSITION;
float3 normal : NORMAL;
float2 tex : TEXCOORD;
};
and use this struct as an input of a vertex shader:
vertex_out VS(vertex_in v) { ... }
Is there something similar in GLSL? Or do I need to write something like:
layout(location = 0) in vec4 aPosition;
layout(location = 1) in vec4 aNormal;
...
What you are looking for are known as interface blocks in GLSL.
Unfortunately, they are not supported for the input to the Vertex Shader stage or for the output from the Fragment Shader stage. Those must have explicitly bound data locations at program link-time (or they will be automatically assigned, and you can query them later).
Regarding the use of HLSL semantics like : POSITION, : NORMAL, : TEXCOORD, modern GLSL does not have anything like that. Vertex attributes are bound to a generic (numbered) location by name either using glBindAttribLocation (...) prior to linking or as in your example, layout (location = ...).
For input / output between shader stages, that is matched entirely on the basis of variable / interface block name during GLSL program linking (except if you use the relatively new Separate Shader Objects extension). In no case will you have constant pre-defined named semantics like POSITION, NORMAL, etc. though; locations are all application- or linker-defined numbers.
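For completeness, the glBindAttribLocation route looks roughly like this; a sketch only, reusing the attribute names from the example above and assuming the program object already has its shaders attached:
#include <GL/glew.h>  // or any other GL function loader

// Hedged sketch: bind attribute names to fixed locations *before* linking,
// as an alternative to `layout(location = ...)` in the shader source.
void bindAttribLocationsAndLink(GLuint program)
{
    glBindAttribLocation(program, 0, "aPosition");
    glBindAttribLocation(program, 1, "aNormal");
    glLinkProgram(program);  // the bindings take effect at link time
}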
When specifying the vertex attribute location in the shader code using layout(location = ...) I do not need to fetch the locations using glGetAttribLocation in my C++ program.
If I neither define the location in the shader using the layout qualifiers nor fetch them in my C++ program, they are assigned automatically. The question is about this automatic assignment. Does the order equal the order of definition in the shader code?
For example, in first shader code, is the location guaranteed to be the same as in the second shader code?
// first shader code
#version 330
in vec3 position;
in vec3 normal;
in vec2 texcoord;
// second shader code
#version 330
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 normal;
layout(location = 2) in vec2 texcoord;
Moreover, does the same rule apply to fragment shader outputs? For now I use glBindFragDataLocation to fetch them.
Automatic attribute assignments are arbitrary; they follow whatever algorithm the implementation chooses.
If you didn't assign an attribute location, then you cannot assume anything about it.
No -- at least one driver I'm aware of sorts the attributes into alphabetical order before assigning locations to them if they don't have explicit layout directives. In addition, any attribute that is unused in the program will almost certainly be left out and not assigned a location.
You can find out the values that were assigned to your attributes by querying them after the program has been linked:
positionAttribute = glGetAttribLocation(shaderProgram, "position");
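A slightly fuller sketch, querying every attribute from the shaders above after linking, and doing the same for a fragment shader output via glGetFragDataLocation; the output name `fragColor` is an assumption, not taken from the question:
#include <GL/glew.h>  // or any other GL function loader
#include <cstdio>

// Hedged sketch: query the automatically assigned locations after linking.
void printAssignedLocations(GLuint shaderProgram)
{
    // A result of -1 means the attribute/output was optimized away or does not exist.
    std::printf("position:  %d\n", glGetAttribLocation(shaderProgram, "position"));
    std::printf("normal:    %d\n", glGetAttribLocation(shaderProgram, "normal"));
    std::printf("texcoord:  %d\n", glGetAttribLocation(shaderProgram, "texcoord"));

    // Fragment shader outputs can be queried the same way.
    std::printf("fragColor: %d\n", glGetFragDataLocation(shaderProgram, "fragColor"));
}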