When using glVertexAttribPointer, what index should I use for the gl_Normal attribute? - c++

I buffer normal data to a VBO, then point to it using glVertexAttribPointer:
glVertexAttribPointer(<INDEX?>, 3, GL_FLOAT, GL_FALSE, 0, NULL);
However, what value should I use for the first parameter, the index, if I wish the data to be bound to the gl_Normal attribute in the shaders?
I am using an NVidia card, and I read here https://www.opengl.org/sdk/docs/tutorials/ClockworkCoders/attributes.php that gl_Normal is always at index 2 for these types of cards. But how do I know that gl_Normal is at this index for other cards?
Additionally, using an index of 2 doesn't seem to be working, and the gl_Normal data in the shader is all (0,0,0).
I am aware of glGetAttribLocation and glBindAttribLocation, however the documentation specifically says the function will throw an error if attempted with one of the built in vertex attributes that begin with 'gl_'.
EDIT:
Using OpenGL 3.0 with GLSL 1.30.

You don't. When using the core profile and VAOs, none of the fixed-function vertex attributes exist.
Define your own vertex attribute for normals in your shader:
in vec3 myVertexNormal;
Get the attribute location (or bind it to a location of your choice):
normalsLocation = glGetAttribLocation(program, "myVertexNormal");
Then use glVertexAttribPointer with the location:
glVertexAttribPointer(normalsLocation, 3, GL_FLOAT, GL_FALSE, 0, NULL);
In the core profile, you must do this for positions, texture coordinates, etc. as well. OpenGL doesn't actually care what the data is, as long as your vertex shader assigns something to gl_Position and your fragment shader assigns something to its output(s).
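For illustration, a minimal GLSL 1.30 vertex shader using such user-defined attributes might look like the sketch below. The attribute and uniform names (myVertexPosition, myVertexNormal, modelViewProjection, normalMatrix) are placeholders, not names OpenGL requires:

```glsl
#version 130

in vec3 myVertexPosition; // replaces gl_Vertex
in vec3 myVertexNormal;   // replaces gl_Normal

uniform mat4 modelViewProjection; // placeholder uniform name
uniform mat3 normalMatrix;        // placeholder uniform name

out vec3 interpolatedNormal;

void main()
{
    interpolatedNormal = normalMatrix * myVertexNormal;
    gl_Position = modelViewProjection * vec4(myVertexPosition, 1.0);
}
```

The matching glGetAttribLocation and glVertexAttribPointer calls on the application side then use whatever names you declared here.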
If you insist on using the deprecated fixed-function attributes and gl_Normal, use glNormalPointer instead.

Related

glEnableClientState vs shader's attribute location

I can connect VBO data with the shader's attribute using the following functions:
GLint attribVertexPosition = glGetAttribLocation(progId, "vertexPosition");
glEnableVertexAttribArray(attribVertexPosition);
glVertexAttribPointer(attribVertexPosition, 3, GL_FLOAT, false, 0, 0);
I am analyzing legacy OpenGL code where a VBO is used with the following functions:
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, 0);
How does that legacy code make a connection between the VBO data and the shader's attribute:
attribute vec3 vertexPosition;
?
For these fixed-function functions, the binding between the buffer and the vertex attribute is fixed. You don't tell glVertexPointer which attribute it feeds; it always feeds the built-in attribute gl_Vertex, which is defined for you in the shader. It cannot feed any user-defined attribute.
User-defined attributes are a separate set of attributes from the fixed-functioned ones.
Note that NVIDIA hardware is known to violate this rule. It aliases certain attribute locations with certain built-in vertex arrays, which allows user-defined attributes to receive data from fixed-function arrays. Perhaps you are looking at code that relies on this non-standard behavior that is only available on NVIDIA's implementation.

Why does this shader that draws a triangle in OpenGL not get run multiple times?

I am learning to use OpenGL through some YouTube tutorials online. At 24:43 in this video is the code I am talking about: https://www.youtube.com/watch?v=71BLZwRGUJE&list=PLlrATfBNZ98foTJPJ_Ev03o2oq3-GGOS2&index=7
In the previous video of the series, the guy says that the vertex shader is run 3 times (for a triangle) and the fragment shader is run once for every pixel within the shape. However, in the video I have linked, there is nothing telling the vertex shader to run 3 times, and nothing telling the fragment shader to run multiple times either. Can someone please explain why?
Also, I am struggling to understand the terminology being used. For example, in the vertex shader there is the code in vec4 position, and in the fragment shader there is the code out vec4 color. I searched around Google a lot for what this means, but I couldn't find it explained anywhere.
1.
A vertex shader is executed for each vertex of the primitives that need to be drawn. Since only a triangle (i.e. a primitive with three vertices) is being drawn in the example, the vertex shader is executed exactly three times, once for each vertex of that triangle. The scheduling of the vertex shader invocations is done by OpenGL itself; the user does not need to take care of this.
A fragment shader is executed for each fragment generated by the rasterizer (the rasterizer breaks primitives down into discrete elements called fragments). A fragment roughly corresponds to a pixel, but the correspondence is not one-to-one: depending on the scene, some pixels receive no fragment and some receive more than one. The scheduling of the fragment shader invocations is likewise done by OpenGL itself; the user does not need to take care of this.
The user effectively only configures the configurable stages of the pipeline, binds the programmable shaders, binds the shader input and output resources, and binds the geometry resources (vertex and index buffers, topology). The latter corresponds in the example to the vertex buffer containing the three vertices of the triangle, and the GL_TRIANGLES topology.
So given the example:
// The buffer ID.
unsigned int buffer;
// Generate one buffer object:
glGenBuffers(1, &buffer);
// Bind the newly created buffer to the GL_ARRAY_BUFFER target:
glBindBuffer(GL_ARRAY_BUFFER, buffer);
// Copies the previously defined vertex data into the buffer's memory:
glBufferData(GL_ARRAY_BUFFER, 6 * sizeof(float), positions, GL_STATIC_DRAW);
// Set the vertex attributes pointers
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(float) * 2, 0);
...
// Bind the buffer as a vertex buffer again (note: glBindBuffer, since "buffer" is a buffer object, not a VAO):
glBindBuffer(GL_ARRAY_BUFFER, buffer);
...
// Draw a triangle list for the triangles with the vertices at indices [0,3) = 1 triangle:
glDrawArrays(GL_TRIANGLES, 0, 3);
A similar, well-explained "how to draw a triangle" tutorial.
2.
layout(location = 0) in vec4 position;
A user-defined input value to a vertex shader (i.e. a vertex attribute) of type vec4 (a vector of 4 floats) with the name position. In the example, each vertex has a position which needs to be transformed properly in the vertex shader before eventually being passed to the rasterizer (via the assignment to gl_Position).
3.
layout(location = 0) out vec4 color;
A user-defined output value of a fragment shader, of type vec4 (a vector of 4 floats), with the name color. In the example, the fragment shader outputs a constant color (e.g., red) for each fragment, which is eventually written to the back buffer.
References
Some useful OpenGL/GLSL references:
Learn OpenGL
And if you want to skip all the CPU boilerplate and just focus on the shaders themselves, you can take a look at ShaderToy to facilitate prototyping.

Do uniform and vertex attribute values remain when a shader is unbound?

I want to know whether uniform and vertex attribute values remain if the shader program is unbound and then rebound.
Basically I want to ask this question: Do uniform values remain in GLSL shader if unbound? But I want to know whether this applies to both uniforms and attribute variables.
If I do this
glUseProgram(shader1);
// Now set uniforms.
glUniform4fv(m_uniforms[COLOR_HANDLE], 1, color.AsFloat());
glUniformMatrix4fv(m_uniforms[MVP_HANDLE], 1, false, matrix);
glBindBuffer(GL_ARRAY_BUFFER, bufferIndex);
glEnableVertexAttribArray(m_attributes[POSITION_HANDLE]);
glVertexAttribPointer(m_attributes[POSITION_HANDLE], 3, GL_FLOAT, false, 3 * sizeof(GLfloat), 0);
Now save the currently bound program, VAO and VBO.
Then use the second program:
glUseProgram(shader2);
//bind some new vao, vbo, set some uniform, vertex attribute variable.
element.draw();
Then use the first shader program again, and rebind the VBO and VAO:
glUseProgram(shader1); //Here, do the uniforms and attributes set in first shader program remain?
element.draw();
Does this mean that the complete state is restored and the draw calls will work? I think this should work if the uniform and attribute values are retained, so that when I restore the program with glUseProgram, all uniforms and attributes set by the client are restored.
If not, then how can I save the complete state? One option is for the client to set them all again, but if that is not possible, what is the alternative? How can I save the full state and restore it later? Is it even possible?
PS: I need to do this for OpenGL 2.0, OpenGL 3.2+, OpenGL ES 2.0 and OpenGL ES 3.0.
Uniforms
Uniforms are part of the shader program object, so they remain stored even when the program object is unbound. The OpenGL 4.5 specification says:
7.6 Uniform Variables
Uniforms in the default uniform block, except for subroutine uniforms, are program object-specific state. They retain their values once loaded, and their values are restored whenever a program object is used, as long as the program object has not been re-linked.
Attributes
Attribute bindings are part of the VAO state. When no VAO is bound, the default VAO (object 0) is used, which, by the way, is not allowed in the core profile. When using VAOs, restoring the attribute bindings is quite simple, since rebinding the VAO is sufficient. Otherwise, have a look at the "Associated Gets" section here.

What's the proper way to do multitexturing in GLSL with independent texture coordinates?

Multitexturing used to be easy and straightforward. You would bind your textures, call glBegin, and do your rendering, except that instead of glTexCoord you would call glMultiTexCoord for each texture. Then all of that got deprecated.
I'm looking around trying to figure out the Right Way to do it now, but all the tutorials I find, both from official Khronos Group sources and on blogs, all assume that you want to use the same set of texture coordinates for all of your textures, which is a highly simplistic assumption that does not hold true for my use case.
Let's say I have texture A and texture B, and I want to render the colors from texture B, in the rect rB, using the alpha values in texture A, in the rect rA, (which has the same height and width as rB, for simplicity's sake, but not the same Left and Top values), using OpenGL 3, without any deprecated functionality. What would be the correct way to do this?
In the shaders you simply declare (and use) an extra set of texture coordinates and a second sampler.
vec4 sample1 = texture(texture1, texCoord1);
vec4 sample2 = texture(texture2, texCoord2);
When specifying the model you add the second set of texCoords to the attributes:
glVertexAttribPointer(tex1Loc, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, texCoord1));
glVertexAttribPointer(tex2Loc, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, texCoord2));

GLSL OpenGL 3.x how to specify the mapping between generic vertex attribute indices and semantics?

I'm switching from HLSL to GLSL
When defining the vertex attributes of a vertex buffer, one has to call
glVertexAttribPointer( GLuint index,
GLint size,
GLenum type,
GLboolean normalized,
GLsizei stride,
const GLvoid * pointer);
and pass an index. But how do I specify which index maps to which semantic in the shader?
For example, gl_Normal: how can I specify that, when using gl_Normal in a vertex shader, I want it to be the generic vertex attribute with index 1?
There is no such thing as a "semantic" in GLSL. There are just attribute indices and vertex shader inputs.
There are two kinds of vertex shader inputs: the kind that was removed in 3.1 (the ones that start with "gl_") and the user-defined kind. The removed kind cannot be set with glVertexAttribPointer; each of these variables had its own special function: gl_Normal had glNormalPointer, gl_Color had glColorPointer, etc. But those functions aren't around in core OpenGL anymore.
User-defined vertex shader inputs are associated with an attribute index. Each named input is assigned an index in one of the following ways, listed from highest priority down to the default:
Through the use of the GLSL 3.30 or ARB_explicit_attrib_location extension syntax layout(location = #), where # is the attribute index. So if I have an input called position, I would give it index 3 like this:
layout(location = 3) in vec4 position;
This is my preferred method of handling this. ARB_explicit_attrib_location is available on pretty much any hardware that is still being supported (that isn't Intel).
Explicit association via glBindAttribLocation. You call this function before linking the program. To do the above, we would do this:
GLuint program = glCreateProgram();
glAttachShader(program, some_shader);
glBindAttribLocation(program, 3, "position");
glLinkProgram(program);
You can set multiple attributes. Indeed, you can set multiple attribute names to the same index. The idea with that is to be able to just set a bunch of mappings automatically and let OpenGL figure out which one works with the actual shader code. So you could have "position" and "axis" map to index 3, and as long as you don't put a shader into this system that has both of those inputs, you'll be fine.
Let OpenGL assign it. If you don't assign an attribute index to an attribute in one of the other ways, the GLSL linker will assign it for you. You can fetch the attribute post-linking with glGetAttribLocation.
I really don't advise this, because OpenGL will assign the indices arbitrarily. So every shader that uses an attribute named position may have the position in a different index. I don't think it's a good idea. So if you can't explicitly set it in the shader, then at least explicitly set it in your OpenGL code before linking. That way, you can have a convention about what attribute index 0 means, what index 1 means, etc.