Passing a uniform float into a vertex shader (OpenGL, C++)

I am having a problem passing a float from C++ into my vertex shader as a uniform. The float is just meant to keep adding to itself.
Within my header I have:
float pauto;
void pautoupdate(int program);
Now within my .cpp:
void World::pautoupdate(int program)
{
pauto += 0.1f;
glUniform1f(glGetUniformLocation(program, "pauto"), pauto);
}
Within my vertex shader it is declared as follows, and it just offsets the x values:
uniform float pauto;
terrx = Position.x;
terrx += pauto;
From here on I am not getting any results from doing this; I am not sure if I am referencing the uniform incorrectly or something else is wrong.

Try the following:
I assume that the GLSL program you show is not the whole code, since there is no main or anything.
- Check that program is set when you enter the function.
- Store the output of glGetUniformLocation and check that it is a valid location (not -1).
- Call glGetError before and after to see if GL detects an issue.
If you want to quickly test and set up shaders in a variety of situations, there are several tools to help you with that. Online, there's ShaderToy (http://www.shadertoy.com), for example. Offline, let me recommend one I developed, Tao3D (http://tao3d.sf.net).

glUniform* should be called between glUseProgram(program) and glUseProgram(0):
1. Use the shader
2. Set uniforms
3. Render stuff
4. Dispose of the shader
This is because OpenGL has to know which program you are sending the uniform to.

Related

Why is glUseProgram called every frame with glUniform?

I am following an OpenGL v3.3 tutorial that instructs me to modify a uniform in a fragment shader using glUniform4f (refer to the code below). As far as I understand, OpenGL is a state machine. We don't unbind the current shaderProgram being used; we just modify a uniform in one of the shaders attached to the program. So why do we need to call glUseProgram on every frame?
I understand that this is not the case for later versions of OpenGL, but I'd still like to understand why it is for v3.3.
OpenGL Program:
while (!glfwWindowShouldClose(window))
{
processInput(window);
glClearColor(0.2f, 1.0f, 0.3f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
glUseProgram(shaderProgram); // the function in question
float redValue = (sin(glfwGetTime()) / 2.0f) + 0.5f;
int colorUniformLocation = glGetUniformLocation(shaderProgram, "ourColor");
glUniform4f(colorUniformLocation, redValue, 0.0f, 0.0f, 1.0f);
std::cout << colorUniformLocation << std::endl;
glBindVertexArray(VAO[0]);
glDrawArrays(GL_TRIANGLES, 0, 3);
glBindVertexArray(VAO[1]);
glDrawArrays(GL_TRIANGLES, 0, 3);
glfwSwapBuffers(window);
glfwPollEvents();
}
Fragment Shader
#version 330 core
out vec4 FragColor;
uniform vec4 ourColor;
void main()
{
FragColor = ourColor;
}
Edit: I forgot to point out that glUniform4f sets a new color (in a periodic fashion) each frame. The final output of the code is two triangles with animating color; removing glUseProgram from the while loop will result in a static image, which isn't the intended goal of the code.
In your case you probably don't have to set it every frame.
However, in a bigger program you'll use multiple shaders, so you'll need to bind the one you want before each use; the samples are likely just written that way.
Mutable global variables (which is effectively what state is with OpenGL) are inherently dangerous. One of the most important dangers of mutable globals is making assumptions about their current state which turn out to be wrong. These kinds of failures make it incredibly difficult to understand whether or not a piece of code will work correctly, since its behavior is dependent on something external. Something that is assumed about the nature of the world rather than defined by the function that expects it.
Your code wants to issue two drawing commands that use a particular shader. By binding that shader at the point of use, this code is not bound to any assumptions about the current shader. It doesn't matter what the previous shader was when you start the loop; you're setting it to what it needs to be.
This makes the code insulated from any later changes you might make. If you want to render a third thing that uses a different shader, your code continues to work: you reset the shader at the start of each loop. If you had only set the shader outside of the loop, and didn't reset it each time, then your code would be broken by any subsequent shader changes.
Yes, in a tiny toy program like this, that's probably an easy problem to track down and fix. But when you're dealing with code that spans hundreds of files with tens of thousands of lines of code, with dependency relationships scattered across third-party libraries, any of which might modify any particular piece of OpenGL state? Yeah, it's probably best not to assume too much about the nature of the world.
Learning good habits early on is a good thing.
Now to be fair, re-specifying a bunch of OpenGL state at every point in the program is also a bad idea. Making assumptions/expectations about the nature of the OpenGL context as part of a function is not a priori bad. If you have some rendering function for a mesh, it's OK for that function to assume that the user has bound the shader it intends to use. It's not the job of this function to specify all of the other state that needs to be specified for rendering. And indeed, it would be a bad mesh class/function if it did that, since you would be unable to render the same mesh with different state.
But at the beginning of each frame, or the start of each major part of your rendering process, specifying a baseline of OpenGL state is perfectly valid. When you loop back to the beginning of a new frame, you should basically assume nothing about OpenGL's state. Not because OpenGL won't remember, but because you might be wrong.
As the answers and comments demonstrated, in the example stated in my question, glUseProgram need only be called once, outside the while loop, to produce the intended output, which is two triangles with colors animating periodically. The misunderstanding I had is a result of the following chapter of the learnopengl.com e-book, https://learnopengl.com/Getting-started/Shaders, where it states:
"updating a uniform does require you to first use the program (by calling glUseProgram), because it sets the uniform on the currently active shader program."
I thought that every time I wanted to update the uniform via glUniform* I had to also issue a call to glUseProgram which is an incorrect understanding.

GLSL shader does not draw object when including unused parameters

I set up a Phong shader with GLSL, which works fine.
When I render my object without "this line", it works. But when I uncomment "this line", the world is still built but the object is not rendered anymore, even though "LVN2" is not used anywhere else in the GLSL code. The shader compiles without throwing errors. I think my problem is a rather general GLSL question about how shaders work.
The main code is written in java.
Vertex shader snippet:
// Light Vector 1
vec3 lightCamSpace = vec4(viewMatrix * modelMatrix * lightPosition).xyz;
out_LightVec = vec3(lightCamSpace - vertexCamSpace).xyz;
// Light Vector 2
vec3 lightCamSpace2 = vec4(viewMatrix * modelMatrix * lightPosition2).xyz;
out_LightVec2 = vec3(lightCamSpace2 - vertexCamSpace).xyz;
Fragment shader snippet:
vec3 LVN = normalize(out_LightVec);
//vec3 LVN2 = normalize(out_LightVec2); // <---- this line
EDIT 1:
GL_MAX_VERTEX_ATTRIBS is 29 and glGetError is already implemented but not throwing any errors.
If I change
vec3 LVN2 = normalize(out_LightVec2);
to
vec3 LVN2 = normalize(out_LightVec);
it actually renders the object again. So it really seems like something is maxed out. (LVN2 is still not used at any point in the shader)
I actually found my absolutely stupid mistake. In the main program I was giving the shader the wrong viewMatrix location... But I'm not sure why it sometimes worked.
I can't spot an error in your shaders. One possibility is that you are exceeding GL_MAX_VERTEX_ATTRIBS by using a fifth four-component out slot. (Although a limit of 4 would be weird: according to this answer the minimum supported amount should be 16, and it shouldn't even link in that case. Then again, you are using GLSL 1.50, which implies OpenGL 3.2, which is pretty old. I couldn't find a specification stating the minimum requirements for the attribute count.)
The reason for it working with the line commented out could be the shader compiler being able to optimize the unused in/out parameter away, but being unable to do so when it's referenced in the fragment shader body.
You could test my guess by querying the limit:
int limit;
glGetIntegerv(GL_MAX_VERTEX_ATTRIBS, &limit);
Beyond this, I would suggest inserting glGetError queries before and after your draw call to see if there's something else going on.

GLSL shader for each situation

In my game I want to create separate GLSL shaders for each situation. For example, if I had 3 models (character, shiny sword, and blurry ghost), I would like to set renderShader, animationShader, and lightingShader on the character; then renderShader, lightingShader, and specularShader on the shiny sword; and finally renderShader, lightingShader, and blurShader on the blurry ghost.
The renderShader should multiply the positions of vertices by projection, world, and other matrices, and its fragment shader should simply apply the texture to the model.
animationShader should transform vertices by given bone transforms.
lightingShader should do the lighting and specularShader should do the specular lighting.
blurShader should do the blur effect.
Now, first of all, how can I do multiple vertex transforms in different shaders? Because the animationShader should calculate the animated positions of vertices, and then renderShader should get that position and transform it by some matrices.
Secondly, how can I change the color of fragments in different shaders?
The basic idea is that I want to be able to use different shaders for each situation/effect, and I don't know how to achieve it.
I need to know how I should use these shaders in OpenGL, and how I should write the GLSL so that the shaders complement each other and no shader cares whether another shader is used or not.
What you're asking for is decidedly non-trivial, and is probably extreme overkill for the relatively limited number of "shader" types you describe.
Doing what you want will require developing what is effectively your own shading language. It may be a highly #defined version of GLSL, but the shaders you write would not be pure GLSL. They would have specialized hooks and be written in ways that code could be expected to flow into other code.
You'll need to have your own way of specifying the inputs and outputs of your language. When you want to connect shaders together, you have to say whose outputs go to which shader's inputs. Some inputs can come from actual shader stage inputs, while others come from other shaders. Some outputs written by a shader will be actual shader stage outputs, while others will feed other shaders.
Therefore, a shader that needs an input from another shader must execute after that other shader. Your system will have to work out the dependency graph.
Once you've figured out all of the inputs and outputs for a specific sequence of shaders, you have to take all of those shader text files and compile them into GLSL, as appropriate. Obviously, this is a non-trivial process.
Your shader language might look like this:
INPUT vec4 modelSpacePosition;
OUTPUT vec4 clipSpacePosition;
uniform mat4 modelToClipMatrix;
void main()
{
clipSpacePosition = modelToClipMatrix * modelSpacePosition;
}
Your "compiler" will need to do textual transformations on this, converting references to modelSpacePosition into an actual vertex shader input or a variable written by another shader, as appropriate. Similarly, if clipSpacePosition is to be written to gl_Position, you will need to convert all uses of clipSpacePosition to gl_Position. Also, you will need to remove the explicit output declaration.
In short, this will be a lot of work.
If you're going to do this, I would strongly urge you to avoid trying to merge the concepts of vertex and fragment shaders. Keep this shader system working within the well-defined shader stages. So your "lightingShader" would need to be either a vertex shader or a fragment shader. If it's a fragment shader, then one of the vertex shader components that feeds into it will need to provide a normal in some way, or you'll need the fragment shader component to compute the normal via some mechanism.
Effectively, for every combination of shader stages you'll have to create an individual shader program. To save work and avoid redundancy, use a caching structure that creates a program for each requested combination only once and reuses it whenever that combination is requested again.
You can do something similar with the shader stages. However, a shader stage cannot be linked from several compilation units (yet; this is an ongoing effort in OpenGL development, and the separable shaders of OpenGL 4 are a stepping stone there). But you can compile a shader from several sources. So you'd write the functions for each desired effect in a separate source, combine them at compilation time, and again use a caching structure to map source-module combinations to shader objects.
Update due to comment
Let's say you want to have some modularity. For this we can exploit the fact that glShaderSource accepts multiple source strings and simply concatenates them. You write a number of shader modules, one doing the per-vertex illumination calculations:
uniform vec3 light_positions[N_LIGHT_SOURCES];
out vec3 light_directions[N_LIGHT_SOURCES];
out vec3 light_halfdirections[N_LIGHT_SOURCES];
void illum_calculation()
{
for(int i = 0; i < N_LIGHT_SOURCES; i++) {
light_directions[i] = ...;
light_halfdirections[i] = ...;
}
}
You put this into illum_calculation.vs.glslmod (the filename and extension are arbitrary). Next you have a small module that does bone animation:
uniform vec4 armature_pose[N_ARMATURE_BONES];
uniform vec3 armature_bones[N_ARMATURE_BONES];
in vec3 vertex_position;
void skeletal_animation()
{
/* ...*/
}
put this into illum_skeletal_anim.vs.glslmod. Then you have some common header
#version 330
uniform ...;
in ...;
and a common tail which contains the main function, invoking all the different stages:
void main() {
skeletal_animation();
illum_calculation();
}
and so on. Now you can load all those modules, in the right order, into a single shader stage, and you can do the same with all the shader stages. The fragment shader is special, since it can write to several framebuffer targets at the same time (in sufficiently recent OpenGL versions). And technically you can pass a lot of varyings between the stages, so you could pass your own set of varyings between shader stages for each framebuffer target. However, the geometry and the transformed vertex positions are common to all of them.
You have to provide different shader programs for each Model you want to render.
You can switch between different shader combinations using the glUseProgram function.
So before rendering your character or shiny sword or whatever you have to initialize the appropriate shader attributes and uniforms.
So it is just a question of how you design your game's code,
because you need to provide all uniforms to the shader, for example light information and texture samplers, and you must enable all the necessary vertex attributes of the shader in order to assign position, color, and so on.
These attributes can differ between shaders, and your client-side models can have different kinds of vertex attribute structures.
That means your model code directly influences the assigned shader and depends on it.
If you want to share common code between different shader programs, e.g. illuminateDiffuse,
you have to factor this function out and provide it to your shaders by simply inserting the string literal representing the function into your shader code, which is nothing more than a string itself. In this way you can achieve a kind of modularity, or include-like behavior, through string manipulation of your shader code.
In any case, the shader compiler will tell you what's wrong.

ATI glsl point sprite problems

I've just moved my rendering code onto my laptop and am having issues with OpenGL and GLSL.
I have a vertex shader like this (simplified):
uniform float tile_size;
void main(void) {
gl_PointSize = tile_size;
// gl_PointSize = 12;
}
and a fragment shader which uses gl_PointCoord to read a texture and set the fragment colour.
In my C++ program I'm trying to set tile_size as follows:
glEnable(GL_TEXTURE_2D);
glEnable(GL_POINT_SPRITE);
glEnable(GL_VERTEX_PROGRAM_POINT_SIZE);
GLint unif_tilesize = glGetUniformLocation(*shader program*, "tile_size");
glUniform1f(unif_tilesize, 12);
(Just to clarify: I've already set up a program and called glUseProgram; shown is just the snippet regarding this particular uniform.)
Set up like this I get one-pixel points, and I have discovered that OpenGL is failing to locate tile_size (unif_tilesize gets set to -1).
If I swap the comments round in my vertex shader I get 12px point sprites fine.
Peculiarly, the exact same code works absolutely fine on my other computer. The OpenGL version on my laptop is 2.1.8304 and it's running an ATI Radeon X1200 (vs. an NVIDIA 8800 GT in my desktop), if this is relevant...
EDIT I've changed the question title to better reflect the problem.
You forgot to call glUseProgram before setting the uniform.
So after another day of playing around I've come to a point where, although I haven't solved my original problem of not being able to bind a uniform to gl_PointSize, I have modified my existing point sprite renderer to work on my ATI card (an old x1200) and thought I'd share some of the things I'd learned.
I think that something about gl_PointSize is broken (at least on my card); in the vertex shader I was able to get 8px point sprites using gl_PointSize=8.0;, but using gl_PointSize=tile_size; gave me 1px sprites whatever I tried to bind to the uniform tile_size.
Luckily I don't need different sized tiles for each vertex so I called glPointSize(tile_size) in my main.cpp instead and this worked fine.
In order to get gl_PointCoord to work (i.e. return values other than (0,0)) in my fragment shader, I had to call glTexEnvf( GL_POINT_SPRITE_ARB, GL_COORD_REPLACE_ARB, GL_TRUE ); in my main.cpp.
There persisted a ridiculous problem in which my varyings were being messed up somewhere between my vertex and fragment shaders. After a long game of 'guess what to type into Google to get relevant information', I found (and promptly lost) a forum where someone said that in some cases, if you don't use gl_TexCoord[0] in at least one of your shaders, your varyings will be corrupted.
In order to fix that I added a line at the end of my fragment shader:
_coord = gl_TexCoord[0].xy;
where _coord is an otherwise unused vec2. (Note: gl_TexCoord is not used anywhere else.)
Without this line all my colours went blue and my texture lookup broke.

Strange shader corruption

I'm working with shaders just now and I'm observing some very strange behavior. I've loaded this vertex shader:
#version 150
uniform float r;
uniform float g;
uniform float b;
varying float retVal;
attribute float dummyAttrib;
void main(){
retVal = dummyAttrib+r+g+b; //deleting dummyAttrib = corruption
gl_Position = gl_ModelViewProjectionMatrix*vec4(100,100,0,1);
}
First of all, I render with glDrawArrays(GL_POINTS, 0, 1000) using this shader, nothing special, just the shader program. If you run this shader and set the point size to something visible, you should see a white square in the middle of the screen (I'm using glOrtho2d(0, 200, 0, 200)). dummyAttrib is just some attribute; my shaders won't run if there is none. I also need to actually use that attribute, so normally I do something like float c = dummyAttrib. That is the first thing I would like to ask about: why is it that way?
This would be fine, but when you change the line with the comment (retVal = ...) to retVal = r+g+b; and add the mentioned line that uses the attribute (float c = dummyAttrib), strange things happen. First of all, you won't see the square anymore, so I had to set up transform feedback to watch what's happening.
I've set dummyAttrib to 5 in each element of the array, and r = g = b = 1. With the current code the result of the transform feedback is 8, exactly what you'd expect. However, changing it as above gives strange values like 250.128, and every time I modify the code somehow (just reordering calls), this value changes. As soon as I return dummyAttrib to the calculation of retVal, everything is magically fixed.
This is why I think there's some sort of shader corruption. I'm using the same loading interface for shaders as I did in previous projects, and those were flawless; however, they used attributes in the normal way, not just a dummy to make the shader run.
These two problems may be connected. To sum up: the shader won't run without any attribute, and the shader is corrupted if that attribute isn't used for setting a varying that is used either in the fragment shader or for transform feedback.
PS: It came to my mind while writing this that it looks like every variable that isn't used for passing data to the next stage is optimized out. This could optimize out the attribute as well, and then this shader would be without an attribute and wouldn't work properly. Could this be a driver fault? I have a Radeon 3870HD with current Catalyst version 2010.1105.19.41785.
In the case of your artificial usage (float c = dummyAttrib), the attribute will still be optimized out. The question is what your mesh-preparation logic does in this case: if it queries the used attributes from GL, it will get nothing, and with no vertex attributes passed the primitive will not be drawn (the behavior of my Radeon 2400HD on any Catalyst).
So, basically, you should pass an artificial unused attribute (something like 1 byte per vertex from some uninitialized buffer) if GL reports no attributes at all.