GLSL / OpenGL 3.x: how to specify the mapping between generic vertex attribute indices and semantics?

I'm switching from HLSL to GLSL.
When defining the vertex attributes of a vertex buffer, one has to call
glVertexAttribPointer(GLuint index,
                      GLint size,
                      GLenum type,
                      GLboolean normalized,
                      GLsizei stride,
                      const GLvoid *pointer);
and pass an index. But how do I specify which index maps to which semantic in the shader? For example, gl_Normal: how can I specify that when using gl_Normal in a vertex shader, I want it to be the generic vertex attribute with index 1?

There is no such thing as a "semantic" in GLSL. There are just attribute indices and vertex shader inputs.
There are two kinds of vertex shader inputs. The kind that were removed in 3.1 (the ones that start with "gl_") and the user-defined kind. The removed kind cannot be set with glVertexAttribPointer; each of these variables had its own special function. gl_Normal had glNormalPointer, gl_Color had glColorPointer, etc. But those functions aren't around in core OpenGL anymore.
User-defined vertex shader inputs are associated with an attribute index. Each named input is assigned an index in one of the following ways, in order from most overriding to the default:
Through the use of the GLSL 3.30 or ARB_explicit_attrib_location extension syntax layout(location = #), where # is the attribute index. So if I have an input called position, I would give it index 3 like this:
layout(location = 3) in vec4 position;
This is my preferred method of handling this. Explicit_attrib_location is available on pretty much any hardware that is still being supported (that isn't Intel).
Explicit association via glBindAttribLocation. You call this function before linking the program. To do the above, we would do this:
GLuint program = glCreateProgram();
glAttachShader(program, some_shader);
glBindAttribLocation(program, 3, "position");
glLinkProgram(program);
You can set multiple attributes. Indeed, you can set multiple attribute names to the same index. The idea with that is to be able to just set a bunch of mappings automatically and let OpenGL figure out which one works with the actual shader code. So you could have "position" and "axis" map to index 3, and as long as you don't put a shader into this system that has both of those inputs, you'll be fine.
Let OpenGL assign it. If you don't assign an attribute index to an attribute in one of the other ways, the GLSL linker will assign it for you. You can fetch the attribute post-linking with glGetAttribLocation.
I really don't advise this, because OpenGL will assign the indices arbitrarily. So every shader that uses an attribute named position may have the position in a different index. I don't think it's a good idea. So if you can't explicitly set it in the shader, then at least explicitly set it in your OpenGL code before linking. That way, you can have a convention about what attribute index 0 means, what index 1 means, etc.
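To connect method 1 back to the original question (making what used to be gl_Normal arrive on index 1), a minimal sketch; the attribute names are illustrative, not required by OpenGL:

```glsl
#version 330
// What used to be gl_Normal becomes a user-defined input, pinned to
// generic attribute index 1; position is pinned to index 0.
layout(location = 0) in vec4 position;
layout(location = 1) in vec3 normal;

void main()
{
    // Use both inputs (illustrative only) so neither is optimized
    // away as inactive by the linker.
    gl_Position = position + vec4(normal, 0.0) * 0.0001;
}
```

On the application side, glVertexAttribPointer(1, 3, GL_FLOAT, ...) then feeds normal, with no glBindAttribLocation call required.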

Related

Is it legal to bind gl_InstanceID as uniform location?

I have a GLSL shader that makes use of the gl_InstanceID input variable, which is set by a glDrawArraysInstanced call. I want this shader to work with a draw call that doesn't set gl_InstanceID. For those cases, I want to set gl_InstanceID manually, the way I would set a uniform.
Is it legal / defined behavior to bind gl_InstanceID as a uniform for these cases?
GLint const instanceIdx = glGetUniformLocation(pid, "gl_InstanceID");
If you're trying to manually provide a value for gl_InstanceID, querying its location as a uniform isn't going to work, for the simple reason that it's not a uniform. It is a built-in vertex shader input variable, which is different from a vertex attribute (those are user-provided, not built-in).
The value of gl_InstanceID will be zero for any draw call that doesn't use instancing. If you want it to be non-zero for such calls, that's not possible. However, gl_BaseInstance from GLSL 4.60/ARB_shader_draw_parameters will track the base instance value from the *BaseInstance draw calls (such as glDrawArraysInstancedBaseInstance). So you could use that to effectively set the instance index.
But without that, there's nothing you can do.
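For completeness, a sketch of the gl_BaseInstance route (core in GLSL 4.60; the extension spelling is gl_BaseInstanceARB):

```glsl
#version 460
// gl_InstanceID does not include the base instance, so adding
// gl_BaseInstance yields an index you control from *BaseInstance
// draw calls, even when only one instance is drawn.
flat out int vInstanceIndex;

void main()
{
    vInstanceIndex = gl_InstanceID + gl_BaseInstance;
    gl_Position = vec4(0.0); // placeholder; real transform goes here
}
```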

How does OpenGL differentiate binding points in VAO from ones defined with glBindBufferBase?

I am writing a particle simulation which uses OpenGL >= 4.3 and came upon a "problem" (or rather the lack of one), which confuses me.
For the compute shader part, I use various GL_SHADER_STORAGE_BUFFERs which are bound to binding points via glBindBufferBase().
One of these GL_SHADER_STORAGE_BUFFERs is also used in the vertex shader to supply normals needed for rendering.
The binding in both the compute and vertex shader GLSL (these are called shaders 1 below) looks like this:
OpenGL part:
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 1, normals_ssbo);
GLSL part:
...
layout(std430, binding = 1) buffer normals_ssbo
{
vec4 normals[];
};
...
The interesting part is that in a separate shader program with a different vertex shader (below called shader 2), binding point 1 is (re-)used like this:
GLSL:
layout(location = 1) in vec4 Normal;
but in this case, the normals come from a different buffer object and the binding is done using a VAO, like this:
OpenGL:
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, 0, 0);
As you can see, the binding point and the layout of the data (both are vec4) are the same, but the actual buffer objects differ.
Now to my questions:
Why does the VAO of shader 2, which is created and used after setting up shaders 1 (which use glBindBufferBase for binding), seemingly overwrite the binding point, while shaders 1 still remember the SSBO binding and work fine without calling glBindBufferBase again before using them?
How does OpenGL know which of those two buffer objects the binding point (which in both cases is 1) should use? Are binding points created via VAO and glBindBufferBase simply completely separate things? If that's the case, why does something like this NOT work:
layout(std430, binding = 1) buffer normals_ssbo
{
vec4 normals[];
};
layout(location = 1) in vec4 Normal;
Are binding points created via VAO and glBindBufferBase simply completely separate things?
Yes, they are. That's why they're set by two different functions.
If that's the case, why does something like this NOT work:
Two possibilities present themselves: you implemented it incorrectly on the rendering side, or your driver has a bug. Which it is cannot be determined without seeing your actual code.

When using glVertexAttribPointer, what index should I use for the gl_Normal attribute?

I buffer normal data to a VBO, then point to it using glVertexAttribPointer:
glVertexAttribPointer(<INDEX?>, 3, GL_FLOAT, GL_FALSE, 0, NULL);
However, what value should I use for the first parameter, the index, if I wish the data to be bound to the gl_Normal attribute in the shaders?
I am using an NVidia card, and I read here https://www.opengl.org/sdk/docs/tutorials/ClockworkCoders/attributes.php that gl_Normal is always at index 2 for these types of cards. But how do I know that gl_Normal is at this index for other cards?
Additionally, using an index of 2 doesn't seem to be working, and the gl_Normal data in the shader is all (0,0,0).
I am aware of glGetAttribLocation and glBindAttribLocation, however the documentation specifically says the function will throw an error if attempted with one of the built in vertex attributes that begin with 'gl_'.
EDIT:
Using OpenGL 3.0 with GLSL 130.
You don't. When using the core profile and VAOs, none of the fixed-function vertex attributes exist.
Define your own vertex attribute for normals in your shader:
in vec3 myVertexNormal;
Get the attribute location (or bind it to a location of your choice):
normalsLocation = glGetAttribLocation(program, "myVertexNormal");
Then use glVertexAttribPointer with the location:
glVertexAttribPointer(normalsLocation, 3, GL_FLOAT, GL_FALSE, 0, NULL);
In the core profile, you must do this for positions, texture coordinates, etc. as well. OpenGL doesn't actually care what the data is, as long as your vertex shader assigns something to gl_Position and your fragment shader assigns something to its output(s).
If you insist on using the deprecated fixed-function attributes and gl_Normal, use glNormalPointer instead.
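Pulled together, the setup might look like the following sketch (it assumes a linked program object, a VBO normalVbo of tightly packed vec3 normals, and a bound VAO; error handling omitted):

```c
/* Query the attribute's location after linking, then point it at the VBO. */
GLint normalsLocation = glGetAttribLocation(program, "myVertexNormal");
if (normalsLocation >= 0) { /* -1 means the attribute is not active */
    glBindBuffer(GL_ARRAY_BUFFER, normalVbo);
    glEnableVertexAttribArray((GLuint)normalsLocation);
    glVertexAttribPointer((GLuint)normalsLocation, 3, GL_FLOAT, GL_FALSE,
                          0, NULL);
}
```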

Usage of custom and generic vertex shader attributes in OpenGL and OpenGL ES

Since the built-in vertex attributes are deprecated in OpenGL, I tried to rewrite my vertex shader using only custom attributes. And it didn't work for me. Here is the vertex shader:
attribute vec3 aPosition;
attribute vec3 aNormal;
uniform mat4 uProjectionMatrix;
uniform mat4 uWorldViewMatrix;
varying vec4 vColor;
vec4 calculateLight(vec4 normal) {
// ...
}
void main(void) {
gl_Position = uProjectionMatrix * uWorldViewMatrix * vec4(aPosition, 1);
vec4 rotatedNormal = normalize(uWorldViewMatrix * vec4(aNormal, 0));
vColor = calculateLight(rotatedNormal);
}
This works perfectly in OpenGL ES 2.0. However, when I try to use it with OpenGL, I see a black screen. If I change aNormal to the built-in gl_Normal, everything works fine as well (note that aPosition works fine in both contexts and I don't have to use gl_Vertex).
What am I doing wrong?
I use RenderMonkey to test shaders, and I've set up stream mapping in it with the appropriate attribute names (aPosition and aNormal). Maybe it has something to do with attribute indices, because I have all of them set to 0? Also, here's what the RenderMonkey documentation says about setting custom attribute names in "Stream Mapping":
The “Attribute Name” field displays the default name that can be
used in the shader editor to refer to that stream. In an OpenGL ES effect, the changed
name should be used to reference the stream; however, in a DirectX or OpenGL effect,
the new name has no affect in the shader editor
I wonder is this issue specific to RenderMonkey or OpenGL itself? And why aPosition still works then?
Attribute indices should be unique. It is possible to tell OpenGL to use specific indices via glBindAttribLocation before linking the program. Otherwise, the normal way is to query the index with glGetAttribLocation after linking. It sounds like RenderMonkey lets you choose the indices; in that case, have you tried making them distinct?
I've seen fixed-function rendering cross over to vertex attributes before, where glVertexPointer can wind up binding to the first attribute if it's left unbound (I don't know if this is still reproducible).
I also see some strange things when experimenting with attributes and fixed function names. Without calling glBindAttribLocation, I compile the following shader:
attribute vec4 a;
attribute vec4 b;
void main()
{
gl_Position = gl_Vertex + vec4(gl_Normal, 0) + a + b;
}
and I get the following locations (via glGetActiveAttrib):
a: 1
b: 3
gl_Vertex: -1
gl_Normal: -1
When experimenting, it seems the use of gl_Vertex takes up index 0 and gl_Normal takes index 2 (even if it's not reported). I wonder whether throwing in a padding attribute between aPosition and aNormal (don't forget to use it in the output, or it'll be compiled away) makes it work.
In this case it's possible the position data is simply bound to location zero last. However, the black screen with aNormal points to nothing being bound (in which case it will always be {0, 0, 0}). This is a little less consistent - if the normal was bound to the same data as the position you'd expect some colour, if not correct colour, as the normal would have the position data.
Applications are allowed to bind more than one user-defined attribute
variable to the same generic vertex attribute index. This is called
aliasing, and it is allowed only if just one of the aliased attributes
is active in the executable program, or if no path through the shader
consumes more than one attribute of a set of attributes aliased to the
same location.
My feeling is then that RenderMonkey is using just glVertexPointer/glNormalPointer instead of attributes, which I would have thought would bind both normal and position to either the normal or the position data, since you say both indices are zero.
in a DirectX or OpenGL effect, the new name has no affect in the shader editor
Maybe this means "named streams" are simply not available in the non-ES OpenGL version?
This is unrelated, but in the more recent OpenGL-GLSL versions, a #version number is needed and attributes use the keyword in.

GLSL - Uniform locations in GLSL 1.2 and depth testing in shaders

Two questions:
I am rendering elements in a large VBO with different shaders. GLSL 1.2 (which, if I am correct, I must use, as it is the most current version supported on OS X) does not support explicit attribute locations, which I assume means that the location of each attribute is wherever the compiler decides to put it. Is there any way around this? For instance, as my VBO is set up with interleaved (x,y,z,nx,ny,nz,texU,texV) data, I need multiple shaders to be able to access these attributes in the same place every time. I am finding, however, that the compiler is giving them different locations, leading to the position being read as the normals, and so on. I need their locations to be consistent with my VBO attribute layout.
I just got my first GLSL rendering completed and it looks exactly like I forgot to enable the depth test with various polygons rendered on top of one another. I enabled depth testing with:
glEnable(GL_DEPTH_TEST);
And the problem persists. Is there a different way to enable them with shaders? I thought the depth buffer took care of this?
Problem 2 Solved. Turned out to be an SFML issue where I needed to specify the OpenGL settings when creating the window.
Attribute locations are specified in one of 3 places, in order from highest priority to lowest:
Through the use of the GLSL 3.30 (or better) or ARB_explicit_attrib_location extension syntax layout(location = #), where # is the attribute index. So if I have an input called position, I would give it index 3 like this:
layout(location = 3) in vec4 position;
This is my preferred method of handling this. Explicit_attrib_location is available on pretty much any hardware that is still being supported (that isn't Intel).
Explicit association via glBindAttribLocation. You call this function before linking the program. To do the above, we would do this:
GLuint program = glCreateProgram();
glAttachShader(program, some_shader);
glBindAttribLocation(program, 3, "position");
glLinkProgram(program);
You can set multiple attributes. Indeed, you can set multiple attribute names to the same index. The idea with that is to be able to just set a bunch of mappings automatically and let OpenGL figure out which one works with the actual shader code. So you could have "position" and "axis" map to index 3, and as long as you don't put a shader into this system that has both of those inputs, you'll be fine.
Note that you can also bind attribute names that don't exist. You could assign "normal" an index even though no such attribute is specified in the shader. That is fine; the linker only cares about attributes that actually exist. So you can establish a complex convention for this sort of thing, and just run every program through it before linking:
void AttribConvention(GLuint prog)
{
    glBindAttribLocation(prog, 0, "position");
    glBindAttribLocation(prog, 1, "color");
    glBindAttribLocation(prog, 2, "normal");
    glBindAttribLocation(prog, 3, "tangent");
    glBindAttribLocation(prog, 4, "bitangent");
    glBindAttribLocation(prog, 5, "texCoord");
}
GLuint program = glCreateProgram();
glAttachShader(program, some_shader);
AttribConvention(program);
glLinkProgram(program);
Even if a particular shader doesn't have all of these attributes, it will still work.
Let OpenGL assign it. If you don't assign an attribute index to an attribute in one of the other ways, the GLSL linker will assign it for you. You can fetch the attribute post-linking with glGetAttribLocation.
I really don't advise this, because OpenGL will assign the indices arbitrarily. So every shader that uses an attribute named position may have the position in a different index. I don't think it's a good idea. So if you can't explicitly set it in the shader, then at least explicitly set it in your OpenGL code before linking. That way, you can have a convention about what attribute index 0 means, what index 1 means, etc.
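As an aside on question 1: whichever binding method you use, the stride and offsets of the interleaved (x,y,z,nx,ny,nz,texU,texV) layout are fixed by the format itself. A sketch of the arithmetic (the attribute indices 0/1/2 in the comments are an assumed convention, not mandated by OpenGL):

```c
#include <assert.h>

/* Byte layout of the interleaved vertex format from the question:
   x,y,z, nx,ny,nz, texU,texV -- eight 32-bit floats per vertex. */
enum {
    VERTEX_STRIDE   = 8 * sizeof(float), /* 32 bytes per vertex        */
    POSITION_OFFSET = 0 * sizeof(float), /* x,y,z    start at byte 0   */
    NORMAL_OFFSET   = 3 * sizeof(float), /* nx,ny,nz start at byte 12  */
    TEXCOORD_OFFSET = 6 * sizeof(float)  /* texU,texV start at byte 24 */
};

/* Every shader sharing this VBO would then use the same calls, e.g.:
   glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, VERTEX_STRIDE,
                         (const void *)POSITION_OFFSET);
   glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, VERTEX_STRIDE,
                         (const void *)NORMAL_OFFSET);
   glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, VERTEX_STRIDE,
                         (const void *)TEXCOORD_OFFSET); */
```

Pinning each shader's position/normal/texCoord inputs to 0/1/2 via glBindAttribLocation before linking then makes every program agree with this one layout.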
On OpenGL 3.3+ you have VAOs; when you use them, you bind VBOs to them and you can define attributes in a custom order: http://www.opengl.org/sdk/docs/man3/xhtml/glEnableVertexAttribArray.xml (remember that attributes must be contiguous).
A nice/easy implementation of this can be found on XNA : VertexDeclaration, you might want to see all the Vertex* types as well.
Some hints on getting v3 to work with SFML:
http://en.sfml-dev.org/forums/index.php?topic=6314.0
An example on how to create and use VAOs : http://www.opentk.com/files/issues/HelloGL3.cs
(It's C# but I guess you'll get it)
Update:
On v2.1 you have it too (http://www.opengl.org/sdk/docs/man/xhtml/glEnableVertexAttribArray.xml), though you can't create VAOs. Almost the same functionality can be achieved, but you will have to bind the attributes every time, since the state lives in the fixed pipeline.
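For reference, the VAO setup the links above demonstrate boils down to only a few calls (this sketch assumes a GL 3.0+ context and an existing buffer vbo; attribute index 0 is an assumed convention):

```c
/* Create a VAO; while it is bound, vertex attribute state is
   recorded into it. */
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
/* Later, a single glBindVertexArray(vao) restores all of this state. */
```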