Why is glUseProgram called every frame with glUniform? - c++

I am following an OpenGL v3.3 tutorial that instructs me to modify a uniform variable in a fragment shader using glUniform4f (refer to the code below). As far as I understand, OpenGL is a state machine: we don't unbind the current shaderProgram being used, we just modify a uniform in one of the shaders attached to the program. So why do we need to call glUseProgram on every frame?
I understand that this is not the case for later versions of OpenGL, but I'd still like to understand why it's the case for v3.3
OpenGL Program:
while (!glfwWindowShouldClose(window))
{
    processInput(window);
    glClearColor(0.2f, 1.0f, 0.3f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glUseProgram(shaderProgram); // the function in question
    float redValue = (sin(glfwGetTime()) / 2.0f) + 0.5f;
    int colorUniformLocation = glGetUniformLocation(shaderProgram, "ourColor");
    glUniform4f(colorUniformLocation, redValue, 0.0f, 0.0f, 1.0f);
    std::cout << colorUniformLocation << std::endl;
    glBindVertexArray(VAO[0]);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glBindVertexArray(VAO[1]);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glfwSwapBuffers(window);
    glfwPollEvents();
}
Fragment Shader
#version 330 core
out vec4 FragColor;
uniform vec4 ourColor;
void main()
{
    FragColor = ourColor;
}
Edit: I forgot to point out that glUniform4f sets a new color (in a periodic fashion) each frame; the final output of the code is two triangles with an animating color. Removing glUseProgram from the while loop will result in a static image, which isn't the intended goal of the code.

In your case you probably don't have to set it every frame.
However, in a bigger program you'll use multiple shaders, so you'll need to set the one you want before each use; the samples are likely just written with that in mind.

Mutable global variables (which is effectively what state is with OpenGL) are inherently dangerous. One of the most important dangers of mutable globals is making assumptions about their current state which turn out to be wrong. These kinds of failures make it incredibly difficult to understand whether or not a piece of code will work correctly, since its behavior is dependent on something external. Something that is assumed about the nature of the world rather than defined by the function that expects it.
Your code wants to issue two drawing commands that use a particular shader. By binding that shader at the point of use, this code is not bound to any assumptions about the current shader. It doesn't matter what the previous shader was when you start the loop; you're setting it to what it needs to be.
This makes this code insulated to any later changes you might make. If you want to render a third thing that uses a different shader, your code continues to work: you reset the shader at the start of each loop. If you had only set the shader outside of the loop, and didn't reset it each time, then your code would be broken by any subsequent shader changes.
Yes, in a tiny toy program like this, that's probably an easy problem to track down and fix. But when you're dealing with code that spans hundreds of files with tens of thousands of lines of code, with dependency relationships scattered across 3rd party libraries, all of which might modify any particular OpenGL state? Yeah, it's probably best not to assume too much about the nature of the world.
Learning good habits early on is a good thing.
Now to be fair, re-specifying a bunch of OpenGL state at every point in the program is also a bad idea. Making assumptions/expectations about the nature of the OpenGL context as part of a function is not a-priori bad. If you have some rendering function for a mesh, it's OK for that function to assume that the user has bound the shader it intends to use. It's not the job of this function to specify all of the other state that needs to be specified for rendering. And indeed, it would be a bad mesh class/function if it did that, since you would be unable to render the same mesh with different state.
But at the beginning of each frame, or the start of each major part of your rendering process, specifying a baseline of OpenGL state is perfectly valid. When you loop back to the beginning of a new frame, you should basically assume nothing about OpenGL's state. Not because OpenGL won't remember, but because you might be wrong.
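As a minimal sketch of what such a per-frame baseline could look like (the beginFrame helper and the particular state chosen here are hypothetical, not something the question's code requires):
void beginFrame(GLFWwindow* window, GLuint program)
{
    // Re-establish the state this frame's rendering relies on instead of
    // assuming it survived from the previous frame or from other code.
    glBindFramebuffer(GL_FRAMEBUFFER, 0);        // render to the default framebuffer
    int width = 0, height = 0;
    glfwGetFramebufferSize(window, &width, &height);
    glViewport(0, 0, width, height);
    glClearColor(0.2f, 1.0f, 0.3f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glUseProgram(program);                       // the shader this frame's draws expect
}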

As the answers and comments demonstrated, in the example stated in my question, glUseProgram only needs to be called once, outside the while loop, to produce the intended output, which is two triangles with colors animating periodically. The misunderstanding I had is a result of the following chapter of the learnopengl.com e-book, https://learnopengl.com/Getting-started/Shaders , where it states:
"updating a uniform does require you to first use the program (by calling glUseProgram), because it sets the uniform on the currently active shader program."
I thought that every time I wanted to update the uniform via glUniform* I also had to issue a call to glUseProgram, which is an incorrect understanding.
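For reference, here is a sketch of the loop from the question with glUseProgram (and the glGetUniformLocation query) hoisted out of the loop; since the same program stays bound, glUniform4f alone is enough to animate the color each frame:
glUseProgram(shaderProgram);   // bound once; no other program is ever bound
int colorUniformLocation = glGetUniformLocation(shaderProgram, "ourColor");
while (!glfwWindowShouldClose(window))
{
    processInput(window);
    glClearColor(0.2f, 1.0f, 0.3f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    float redValue = (sin(glfwGetTime()) / 2.0f) + 0.5f;
    glUniform4f(colorUniformLocation, redValue, 0.0f, 0.0f, 1.0f); // affects the bound program
    glBindVertexArray(VAO[0]);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glBindVertexArray(VAO[1]);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glfwSwapBuffers(window);
    glfwPollEvents();
}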

Related

Texture used as FBO color attachment and sampler2D of a shader program at the same time

I have created an FBO and have a texture bound as its color attachment, and I have multiple shader programs that do some post processing on the texture. Everything works great, but it does not make sense to me that the texture can be used as the input (sampler2D) as well as the output of the shaders at the same time.
Following are the steps I have taken:
1. Create an FBO fboA.
2. Create a texture textureA, and bind it as the color attachment of fboA.
3. Call glBindFramebuffer to bind fboA to the framebuffer target.
4. Call glUseProgram to use shader program shaderA.
5. Call glDrawArrays to draw something (eventually drawn to textureA because fboA is currently bound).
6. Call glUseProgram to use shader program shaderB, which has a sampler2D uniform in the fragment shader.
7. Bind textureA as the sampler2D uniform of shader program shaderB.
8. In the fragment shader, textureA is used to set the fragColor.
What I am confused about is the last two steps, where textureA is used as the input of the fragment shader while it is still bound to the current framebuffer. This appears to me as if the fragment shader is reading from and writing to the same piece of memory; isn't this some kind of undefined behavior, and why does it still work correctly?
isn't this some kind of undefined behavior, and why does it still work correctly?
Because "undefined behavior" does not preclude the possibility that the behavior may appear to "work correctly". UB means anything can happen, including the thing you actually wanted to happen.
Of course, it might suddenly stop "working" tomorrow. Or when you take your code to a different GPU. Or if you start rendering more stuff. Or if you breathe on your computer really hard.
Undefined behavior is undefined.
If you want to make it well-defined, then you need to use texture barrier functionality and abide by its rules: no more than one read/modify/write per-fragment between barriers, or just read from and write to non-overlapping sets of fragments.
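A minimal sketch of what that might look like with glTextureBarrier, assuming a context with GL 4.5 or ARB_texture_barrier (fboA, textureA, shaderA and shaderB are the objects from the question; vertexCount is a placeholder):
glBindFramebuffer(GL_FRAMEBUFFER, fboA);
glUseProgram(shaderA);
glDrawArrays(GL_TRIANGLES, 0, vertexCount);    // pass 1: writes into textureA

glTextureBarrier();                            // makes the writes above visible to the reads below

glUseProgram(shaderB);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureA);        // pass 2: reads textureA while it is still attached;
glDrawArrays(GL_TRIANGLES, 0, vertexCount);    // each fragment may be read/modified/written at most
                                               // once before the next barrier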

How Many Shader Programs Do I Really Need?

Let's say I have a shader set up to use 3 textures, and that I need to render some polygon that needs all the same shader attributes except that it requires only 1 texture. I have noticed on my own graphics card that I can simply call glDisableVertexAttribArray() to disable the other two textures, and that doing so apparently causes the disabled texture data received by the fragment shader to be all white (1.0f). In other words, if I have a fragment shader instruction (pseudo-code)...
final_red = tex0.red * tex1.red * tex2.red
...the operation produces the desired final value regardless whether I have 1, 2, or 3 textures enabled. From this comes a number of questions:
1. Is it legit to disable expected textures like this, or is it a coincidence that my particular graphics card has this apparent mathematical safeguard?
2. Is the "best practice" to create a separate shader program that only expects a single texture for single-texture rendering?
3. If either approach is valid, is there a benefit to creating a second shader program? I'm thinking it would cost less time to make 2 glDisableVertexAttribArray() calls than to make a glUseProgram() + 5-6 glGetUniformLocation() calls, but maybe #4 addresses that issue.
4. When changing the active shader program with glUseProgram(), do I need to call glGetUniformLocation() every time to re-establish the location of each uniform in the program, or is the location of each expected to be consistent until the shader program is deallocated?
Disabling vertex attributes would not really disable your textures, it would just give you undefined texture coordinates. That might produce an effect similar to disabling a certain texture, but to do this properly you should use a uniform or possibly subroutines (if you have dozens of variations of the same shader).
As for the time taken to disable vertex array state, that is probably going to be slower than changing a uniform value. Setting uniform values doesn't really affect the render pipeline state; they're just small changes to memory. Likewise, constantly swapping the current GLSL program does things like invalidate the shader cache, so that's also significantly more expensive than setting a uniform value.
If you're on a modern GL implementation (GL 4.1+ or one that implements GL_ARB_separate_shader_objects) you can even set uniform values without binding a GLSL program at all, simply by calling glProgramUniform* (...)
I am most concerned with the fact that you think you need to call glGetUniformLocation (...) each time you set a uniform's value. The only time the location of a uniform in a GLSL program changes is when you link it. Assuming you don't constantly re-link your GLSL program, you only need to query those locations once and store them persistently.
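As a sketch under those assumptions (the uniform name "tex_count" and the variable names are made up for illustration):
// Once, right after glLinkProgram succeeds:
GLint locTexCount = glGetUniformLocation(program, "tex_count");  // stays valid until the next link

// Every frame / every draw that needs a different value:
glUseProgram(program);
glUniform1i(locTexCount, 1);          // cheap: a small memory write, no pipeline state change

// On GL 4.1+ (or with GL_ARB_separate_shader_objects) you can skip the bind entirely:
glProgramUniform1i(program, locTexCount, 1);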

Understanding the shader workflow in OpenGL?

I'm having a little bit of trouble conceptualizing the workflow used in a shader-based OpenGL program. While I've never really done any major projects using either the fixed-function or shader-based pipelines, I've started learning and experimenting, and it's become quite clear to me that shaders are the way to go.
However, the fixed-function pipeline makes much more sense to me from an intuitive perspective. Rendering a scene with that method is simple and procedural—like painting a picture. If I want to draw a box, I tell the graphics card to draw a box. If I want a lot of boxes, I draw my box in a loop. The fixed-function pipeline fits well with my established programming tendencies.
These all seem to go out the window with shaders, and this is where I'm hitting a block. A lot of shader-based tutorials show how to, for example, draw a triangle or a cube on the screen, which works fine. However, they don't seem to go into at all how I would apply these concepts in, for example, a game. If I wanted to draw three procedurally generated triangles, would I need three shaders? Obviously not, since that would be infeasible. Still, it's clearly not as simple as just sticking the drawing code in a loop that runs three times.
Therefore, I'm wondering what the "best practices" are for using shaders in game development environments. How many shaders should I have for a simple game? How do I switch between them and use them to render a real scene?
I'm not looking for specifics, just a general understanding. For example, if I had a shader that rendered a circle, how would I reuse that shader to draw different sized circles at different points on the screen? If I want each circle to be a different color, how can I pass some information to the fragment shader for each individual circle?
There is really no conceptual difference between the fixed-function pipeline and the programmable pipeline. The only thing shaders introduce is the ability to program certain stages of the pipeline.
On current hardware you have (for the most part) control over the vertex, primitive assembly, tessellation and fragment stages. Some operations that occur in between and after these stages are still fixed-function, such as depth/stencil testing, blending, perspective divide, etc.
Because shaders are actually nothing more than programs that you drop-in to define the input and output of a particular stage, you should think of input to a fragment shader as coming from the output of one of the previous stages. Vertex outputs are interpolated during rasterization and these are often what you're dealing with when you have an in variable in a fragment shader.
You can also have program-wide variables, known as uniforms. These variables can be accessed by any stage simply by using the same name in each stage. They do not vary across invocations of a shader, hence the name uniform.
Now you should have enough information to figure out this circle example... you can use a uniform to scale your circle (likely a simple scaling matrix) and you can either rely on per-vertex color or a uniform that defines the color.
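A rough sketch of that idea, reusing one program for every circle (the uniform names, the Circle struct, circleProgram, circleVAO, circles, circleVertexCount and the use of GLM are illustrative assumptions, not something the question prescribes):
struct Circle { float x, y, radius, r, g, b; };   // hypothetical app-side data

// Assumes the vertex shader declares "uniform mat4 u_model;" and the fragment
// shader declares "uniform vec4 u_color;". Requires GLM (<glm/glm.hpp>,
// <glm/gtc/matrix_transform.hpp>, <glm/gtc/type_ptr.hpp>).
glUseProgram(circleProgram);
GLint locModel = glGetUniformLocation(circleProgram, "u_model");
GLint locColor = glGetUniformLocation(circleProgram, "u_color");
glBindVertexArray(circleVAO);                     // one unit-circle mesh, reused for all circles

for (const Circle& c : circles)
{
    glm::mat4 model = glm::translate(glm::mat4(1.0f), glm::vec3(c.x, c.y, 0.0f));
    model = glm::scale(model, glm::vec3(c.radius));
    glUniformMatrix4fv(locModel, 1, GL_FALSE, glm::value_ptr(model)); // per-circle position/size
    glUniform4f(locColor, c.r, c.g, c.b, 1.0f);                       // per-circle color
    glDrawArrays(GL_TRIANGLE_FAN, 0, circleVertexCount);
}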
You don't have shaders that draw circles (ok, you may with the right tricks, but let's forget that for now, because it is misleading and has very rare and specific uses). Shaders are little programs you write to take care of certain stages of the graphics pipeline, and are more specific than "drawing a circle".
Generally speaking, every time you make a draw call, you have to tell OpenGL which shaders to use (with a call to glUseProgram). You have to use at least a Vertex Shader and a Fragment Shader. The resulting pipeline will be something like:
Vertex Shader: the code that is going to be executed for each of the vertices you send to OpenGL. It will be executed for each index you send in the element array, and it will use as input data the corresponding vertex attributes, such as the vertex position, its normal, its uv coordinates, maybe its tangent (if you are doing normal mapping), or whatever else you send to it. Generally you want to do your geometric calculations here. You can also access the uniform variables you set up for your draw call, which are global variables that are not going to change per vertex. A typical uniform variable you might want to use in a vertex shader is the PVM matrix. If you don't use tessellation, the vertex shader will write gl_Position, the position the rasterizer is going to use to create fragments. You can also have the vertex shader output different things (such as the uv coordinates, and the normals after you have dealt with their geometry), hand them to the rasterizer and use them later.
Rasterization
Fragment Shader: the code that is going to be executed for each fragment (for each pixel, if that is clearer). Generally you do texture sampling and lighting calculations here. You will use the data coming from the vertex shader and the rasterizer, such as the normals (to evaluate diffuse and specular terms) and the uv coordinates (to fetch the right colors from the textures). The textures are going to be uniforms, and probably also the parameters of the lights you are evaluating.
Depth Test, Stencil Test (which you can move before the fragment shader with the early fragment test optimization: http://www.opengl.org/wiki/Early_Fragment_Test ).
Blending.
I suggest you look at this nice program for developing simple shaders, http://sourceforge.net/projects/quickshader/ , which has very good examples, including some more advanced things you won't find in every tutorial.
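To make the pipeline above concrete, here is a minimal GLSL 3.30 vertex/fragment pair along those lines (the attribute, uniform and varying names are just illustrative):
Vertex shader:
#version 330 core
layout(location = 0) in vec3 aPos;   // per-vertex attribute
layout(location = 1) in vec2 aUV;
uniform mat4 uPVM;                   // per-draw uniform: projection * view * model
out vec2 vUV;                        // handed to the rasterizer, interpolated per fragment
void main()
{
    vUV = aUV;
    gl_Position = uPVM * vec4(aPos, 1.0);
}
Fragment shader:
#version 330 core
in vec2 vUV;                         // interpolated vertex-shader output
uniform sampler2D uTexture;          // uniform sampler, set once for the draw call
out vec4 FragColor;
void main()
{
    FragColor = texture(uTexture, vUV);
}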

Use only a Fragment Shader in libGDX?

I am porting my project from pure LWJGL to libGDX, and I'd like to know if there is a way to create a program with only a Fragment Shader?
What I'd like to do is to change the color of my texture where it is gray to a color I receive as a parameter. The shader worked perfectly before, but now it seems that I need to add a Vertex Shader, which does not make any sense to me. So I wrote this:
void main() {
    gl_Position = ftransform();
}
That, according to the internet, is the simplest possible VS: it does nothing, and it really shouldn't. But it doesn't work: nothing is displayed, and no compilation errors or warnings are thrown. I tried to replace my Fragment Shader with a simpler one, but the results were even stranger:
void main() {
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
This should only paint everything in red. But when I run it, the program crashes; no error is thrown, no Exception, very odd behavior. I know that libGDX adds tinting, but I don't think that would work either, because I need to replace only the gray-scale colors, and depending on the intensity, I need to modulate the correct color (which is varying). Another shader I am using (Fragment only, again) is to set the entire scene to gray-scale (when on the pause menu). It won't work either.
When it comes to shaders in modern OpenGL you always have to supply at least a vertex and a fragment shader. OpenGL-2 gave you some leeway, with the drawback that you then still had to struggle with the arcane fixed-function state machine.
That, according to the internet, is the simplest possible VS: it does nothing, and it really shouldn't.
What makes you think it does "nothing"? Of course it does something: it translates incoming, raw numbers into something sensible, namely vertices in clip space.
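For instance, a vertex shader along these lines would do that minimal translation and also pass along the texture coordinates your fragment shader needs. The a_position/a_texCoord0/u_projTrans names follow libGDX's default SpriteBatch conventions; treat them as an assumption to check against your own setup:
attribute vec4 a_position;
attribute vec2 a_texCoord0;
uniform mat4 u_projTrans;      // projection/transform matrix supplied by the application
varying vec2 v_texCoords;      // consumed by the fragment shader

void main()
{
    v_texCoords = a_texCoord0;
    gl_Position = u_projTrans * a_position;
}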

ATI glsl point sprite problems

I've just moved my rendering code onto my laptop and am having issues with opengl and glsl.
I have a vertex shader like this (simplified):
uniform float tile_size;
void main(void) {
    gl_PointSize = tile_size;
    // gl_PointSize = 12;
}
and a fragment shader which uses gl_PointCoord to read a texture and set the fragment colour.
In my c++ program I'm trying to bind tile_size as follows:
glEnable(GL_TEXTURE_2D);
glEnable(GL_POINT_SPRITE);
glEnable(GL_VERTEX_PROGRAM_POINT_SIZE);
GLint unif_tilesize = glGetUniformLocation(*shader program*, "tile_size");
glUniform1f(unif_tilesize, 12);
(Just to clarify: I've already set up a program using glUseProgram; shown is just the snippet regarding this particular uniform.)
Set up like this, I get one-pixel points and have discovered that OpenGL is failing to bind unif_tilesize (it gets set to -1).
If I swap the comments round in my vertex shader I get 12px point sprites fine.
Peculiarly the exact same code on my other computer works absolutely fine. The opengl version on my laptop is 2.1.8304 and it's running an ATI radeon x1200 (cf an nvidia 8800gt in my desktop) (if this is relevant...).
EDIT I've changed the question title to better reflect the problem.
You forgot to call glUseProgram before setting the uniform.
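Applied to the snippet in the question, the ordering would be (using shaderProgram as a stand-in for the actual program handle):
glUseProgram(shaderProgram);    // make the program current first
GLint unif_tilesize = glGetUniformLocation(shaderProgram, "tile_size");
glUniform1f(unif_tilesize, 12.0f);   // glUniform* affects the currently bound program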
So after another day of playing around I've come to a point where, although I haven't solved my original problem of not being able to bind a uniform to gl_PointSize, I have modified my existing point sprite renderer to work on my ATI card (an old x1200) and thought I'd share some of the things I'd learned.
I think that something about gl_PointSize is broken (at least on my card); in the vertex shader I was able to get 8px point sprites using gl_PointSize=8.0;, but using gl_PointSize=tile_size; gave me 1px sprites whatever I tried to bind to the uniform tile_size.
Luckily I don't need different sized tiles for each vertex so I called glPointSize(tile_size) in my main.cpp instead and this worked fine.
In order to get gl_PointCoord to work (i.e. return values other than (0,0)) in my fragment shader, I had to call glTexEnvf( GL_POINT_SPRITE_ARB, GL_COORD_REPLACE_ARB, GL_TRUE ); in my main.cpp.
There persisted a ridiculous problem in which my varyings were being messed up somewhere between my vertex and fragment shaders. After a long game of 'guess what to type into google to get relevant information', I found (and promptly lost) a forum where someone said that in some cases, if you don't use gl_TexCoord[0] in at least one of your shaders, your varyings will be corrupted.
In order to fix that I added a line at the end of my fragment shader:
_coord = gl_TexCoord[0].xy;
where _coord is an otherwise unused vec2. (Note gl_TexCoord is not used anywhere else.)
Without this line all my colours went blue and my texture lookup broke.
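Putting the workarounds above together, the setup on the C++ side ended up looking roughly like this (shaderProgram and tile_size stand in for the question's actual variables; this reflects the GL 2.1-era calls described above, not a general recommendation):
glEnable(GL_TEXTURE_2D);
glEnable(GL_POINT_SPRITE);
// GL_VERTEX_PROGRAM_POINT_SIZE is left disabled so the fixed-function
// glPointSize() below takes effect instead of gl_PointSize in the shader.
glTexEnvf(GL_POINT_SPRITE_ARB, GL_COORD_REPLACE_ARB, GL_TRUE);  // makes gl_PointCoord vary across the sprite
glPointSize(tile_size);
glUseProgram(shaderProgram);
// ... bind vertex data and draw with glDrawArrays(GL_POINTS, ...) as before ...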