Strange shader corruption - C++

I'm working with shaders just now and I'm observing some very strange behavior. I've loaded this vertex shader:
#version 150
uniform float r;
uniform float g;
uniform float b;
varying float retVal;
attribute float dummyAttrib;
void main(){
    retVal = dummyAttrib+r+g+b; //deleting dummyAttrib = corruption
    gl_Position = gl_ModelViewProjectionMatrix*vec4(100,100,0,1);
}
First of all, I render with glDrawArrays(GL_POINTS,0,1000) using this shader and nothing special, just the shader program. If you run this shader and set the point size to something visible, you should see a white square in the middle of the screen (I'm using gluOrtho2D(0,200,0,200)). dummyAttrib is just some attribute - my shaders won't run if there is none at all. I also need to actually use that attribute, so normally I do something like float c = dummyAttrib;. That is the first thing I'd like to ask: why is it that way?
This would all be fine, but when you change the commented line (retVal = ...) to retVal = r+g+b; and add the line mentioned above to use the attribute (float c = dummyAttrib;), strange things happen. First of all, you won't see that square anymore, so I had to set up transform feedback to watch what's happening.
I've set dummyAttrib to 5 in each element of the attribute array and r = g = b = 1. With the current code the result of transform feedback is 8 - exactly what you'd expect. However, changing it as described above gives strange values like 250.128, and every time I modify the code somehow (just reordering calls), this value changes. As soon as I put dummyAttrib back into the calculation of retVal, everything is magically fixed.
This is why I think there's some sort of shader corruption. I'm using the same shader-loading interface as in my previous projects, and those were flawless; however, they used attributes in the normal way, not just a dummy one for getting the shader to run at all.
These two problems could be connected. To sum up: the shader won't run without any attribute, and the shader is corrupted if that attribute isn't used to set a varying that is consumed either in the fragment shader or by transform feedback.
PS: While writing this it occurred to me that it looks like every variable that isn't used to pass data to the next stage gets optimized out. This could optimize out the attribute as well, and then the shader would effectively have no attribute and wouldn't work properly. Could this be a driver fault? I have a Radeon HD 3870 with the current Catalyst version 2010.1105.19.41785.

In the case of your artificial usage (float c = dummyAttrib) the attribute will be optimized out. The question is what your mesh-preparation logic does in this case. If it queries the active attributes from GL, it will get nothing. And with no vertex attributes passed, the primitive will not be drawn (the behavior of my Radeon HD 2400 on any Catalyst).
So, basically, you should pass an artificial unused attribute (something like 1 byte per vertex from some uninitialized buffer) if GL reports that no attributes are active at all.
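A minimal sketch of that fallback, assuming a linked program object named program and a vertex count vertexCount (both names are only illustrative):

GLint activeAttribs = 0;
glGetProgramiv(program, GL_ACTIVE_ATTRIBUTES, &activeAttribs);
if (activeAttribs == 0) {
    // Feed a dummy 1-byte-per-vertex attribute so the draw call still has
    // at least one enabled vertex attribute array.
    GLuint dummyBuffer;
    glGenBuffers(1, &dummyBuffer);
    glBindBuffer(GL_ARRAY_BUFFER, dummyBuffer);
    glBufferData(GL_ARRAY_BUFFER, vertexCount, NULL, GL_STATIC_DRAW); // contents are never read
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 1, GL_UNSIGNED_BYTE, GL_FALSE, 0, (void*)0);
}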

Related

GLSL shader does not draw object when including unused parameters

I set up a Phong shader with GLSL which works fine.
When I render my object without "this line", it works. But when I uncomment "this line", the world is still built but the object is not rendered anymore, even though "LVN2" is not used anywhere else in the GLSL code. The shader executes without throwing errors. I think my problem is a rather general GLSL question about how the shader works.
The main code is written in java.
Vertex shader snippet:
// Light Vector 1
vec3 lightCamSpace = vec4(viewMatrix * modelMatrix * lightPosition).xyz;
out_LightVec = vec3(lightCamSpace - vertexCamSpace).xyz;
// Light Vector 2
vec3 lightCamSpace2 = vec4(viewMatrix * modelMatrix * lightPosition2).xyz;
out_LightVec2 = vec3(lightCamSpace2 - vertexCamSpace).xyz;
Fragment shader snippet:
vec3 LVN = normalize(out_LightVec);
//vec3 LVN2 = normalize(out_LightVec2); // <---- this line
EDIT 1:
GL_MAX_VERTEX_ATTRIBS is 29 and glGetError is already implemented but not throwing any errors.
If I change
vec3 LVN2 = normalize(out_LightVec2);
to
vec3 LVN2 = normalize(out_LightVec);
it actually renders the object again. So it really seems like something is maxed out. (LVN2 is still not used at any point in the shader)
I actually found my absolutely stupid mistake. In the main program I was giving the shader the wrong viewMatrix location... but I'm not sure why it sometimes worked.
I can't spot an error in your shaders. One possibility is that you are exceeding GL_MAX_VERTEX_ATTRIBS by using a fifth four-component out slot. (Although a limit of 4 would be weird; according to this answer the minimum supported amount should be 16, and the program shouldn't even link in that case. Then again, you are using GLSL 1.50, which implies OpenGL 3.2, which is pretty old. I couldn't find a specification stating the minimum requirement for the attribute count.)
The reason for it working with the line commented out could be the shader compiler being able to optimize the unused in/out variable away, and being unable to do so when it's referenced in the fragment shader body.
You could test my guess by querying the limit:
GLint limit = 0;
glGetIntegerv(GL_MAX_VERTEX_ATTRIBS, &limit);
Beyond this, I would suggest inserting glGetError queries before and after your draw call to see if there's something else going on.
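A hedged sketch of such a check around a draw call (the printf-style logging and vertexCount are only illustrative):

while (glGetError() != GL_NO_ERROR) { /* discard stale errors */ }
glDrawArrays(GL_TRIANGLES, 0, vertexCount);
GLenum err = glGetError();
if (err != GL_NO_ERROR)
    fprintf(stderr, "GL error after draw: 0x%04X\n", err);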

Passing a uniform float into a vertex shader in OpenGL

I am having a problem passing a float from C++ into my vertex shader using a uniform. This float is just meant to keep adding to itself.
Within my header I have:
float pauto;
void pautoupdate(int program);
Now, within my cpp:
void World::pautoupdate(int program)
{
    pauto += 0.1f;
    glUniform1f(glGetUniformLocation(program, "pauto"), pauto);
}
Within my vertex shader it is declared as follows, and it is just used to increment the x values:
uniform float pauto;
terrx = Position.x;
terrx += pauto;
From here on I am not getting any results from doing this; I am not sure if I am pointing to it incorrectly or something.
Try the following:
I assume that the GLSL program you show is not the whole code, since there is no main or anything.
Check that program is set when you enter the function.
Store the output of glGetUniformLocation and check that it is valid (not -1).
Do a call to glGetError before/after to see if GL detects an issue (see the sketch below).
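A minimal sketch of those checks, assuming the uniform is named "pauto" as in the question and that logging goes to stderr:

glUseProgram(program);                               // make sure the program is current
GLint loc = glGetUniformLocation(program, "pauto");
if (loc == -1)
    fprintf(stderr, "uniform 'pauto' not found (inactive or misspelled)\n");
glUniform1f(loc, pauto);
GLenum err = glGetError();
if (err != GL_NO_ERROR)
    fprintf(stderr, "GL error after glUniform1f: 0x%04X\n", err);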
If you want to quickly test and setup shaders in a variety of situations, there are several tools to help you with that. On-line, there's ShaderToy (http://www.shadertoy.com) for example. Off-line, let me recommend one I developed, Tao3D (http://tao3d.sf.net).
glUniform* calls should be made between glUseProgram(program) and glUseProgram(0):
1. Use the shader
2. Set uniforms
3. Render stuff
4. Dispose the shader
This is because OpenGL has to know which program you are sending the uniform to.
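A hedged sketch of that order, reusing the names from the question (the draw call and vertexCount are only illustrative):

glUseProgram(program);                                       // 1. use the shader
glUniform1f(glGetUniformLocation(program, "pauto"), pauto);  // 2. set uniforms
glDrawArrays(GL_TRIANGLES, 0, vertexCount);                  // 3. render stuff
glUseProgram(0);                                             // 4. dispose (unbind) the shader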

How Many Shader Programs Do I Really Need?

Let's say I have a shader set up to use 3 textures, and that I need to render some polygon that needs all the same shader attributes except that it requires only 1 texture. I have noticed on my own graphics card that I can simply call glDisableVertexAttribArray() to disable the other two textures, and that doing so apparently causes the disabled texture data received by the fragment shader to be all white (1.0f). In other words, if I have a fragment shader instruction (pseudo-code)...
final_red = tex0.red * tex1.red * tex2.red
...the operation produces the desired final value regardless of whether I have 1, 2, or 3 textures enabled. From this comes a number of questions:
Is it legit to disable expected textures like this, or is it a coincidence that my particular graphics card has this apparent mathematical safeguard?
Is the "best practice" to create a separate shader program that only expects a single texture for single texture rendering?
If either approach is valid, is there a benefit to creating a second shader program? I'm thinking it would cost less time to make 2 glDisableVertexAttribArray() calls than to make a glUseProgram() call plus 5-6 glGetUniformLocation() calls, but maybe #4 addresses that issue.
When changing the active shader program with glUseProgram() do I need to call glGetUniform... functions every time to re-establish the location of each uniform in the program, or is the location of each expected to be consistent until the shader program is deallocated?
Disabling vertex attributes would not really disable your textures; it would just give you undefined texture coordinates. That might produce an effect similar to disabling a certain texture, but to do this properly you should use a uniform or possibly subroutines (if you have dozens of variations of the same shader).
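A hedged sketch of such a uniform-based toggle in the fragment shader (the names useTex1 and useTex2 are only illustrative):

uniform sampler2D tex0, tex1, tex2;
uniform bool useTex1, useTex2;   // set from the application per draw call
varying vec2 texCoord;
void main()
{
    vec4 color = texture2D(tex0, texCoord);
    if (useTex1) color *= texture2D(tex1, texCoord);  // multiply only when enabled
    if (useTex2) color *= texture2D(tex2, texCoord);
    gl_FragColor = color;
}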
As far as the time taken to disable a vertex array state goes, that's probably going to be slower than changing a uniform value. Setting uniform values doesn't really affect the render pipeline state; they are just small writes to memory. Likewise, constantly swapping the current GLSL program does things like invalidating the shader cache, so that's also significantly more expensive than setting a uniform value.
If you're on a modern GL implementation (GL 4.1+ or one that implements GL_ARB_separate_shader_objects) you can even set uniform values without binding a GLSL program at all, simply by calling glProgramUniform* (...)
I am most concerned with the fact that you think you need to call glGetUniformLocation (...) each time you set a uniform's value. The only time the location of a uniform in a GLSL program changes is when you link it. Assuming you don't constantly re-link your GLSL program, you only need to query those locations once and store them persistently.
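A minimal sketch of that caching, assuming a single program with a uniform named "diffuseColor" purely for illustration:

struct ProgramInfo {
    GLuint id;
    GLint  diffuseColorLoc;   // cached once after linking
};

ProgramInfo setupProgram(GLuint linkedProgram)
{
    ProgramInfo info;
    info.id = linkedProgram;
    // Locations only change on re-link, so query them once and reuse them.
    info.diffuseColorLoc = glGetUniformLocation(linkedProgram, "diffuseColor");
    return info;
}

// Later, per draw call:
//   glUseProgram(info.id);
//   glUniform3f(info.diffuseColorLoc, r, g, b);
// or, with GL 4.1 / ARB_separate_shader_objects, without binding:
//   glProgramUniform3f(info.id, info.diffuseColorLoc, r, g, b);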

Usage of custom and generic vertex shader attributes in OpenGL and OpenGL ES

Since the built-in fixed-function vertex attributes are deprecated in OpenGL, I tried to rewrite my vertex shader using only custom attributes. And it didn't work for me. Here is the vertex shader:
attribute vec3 aPosition;
attribute vec3 aNormal;
varying vec4 vColor;
vec4 calculateLight(vec4 normal) {
    // ...
}
void main(void) {
    gl_Position = uProjectionMatrix * uWorldViewMatrix * vec4(aPosition, 1);
    vec4 rotatedNormal = normalize(uWorldViewMatrix * vec4(aNormal, 0));
    vColor = calculateLight(rotatedNormal);
}
This works perfectly in OpenGL ES 2.0. However, when I try to use it with OpenGL I see a black screen. If I change aNormal to the built-in gl_Normal, everything works fine as well (note that aPosition works fine in both contexts and I don't have to use gl_Vertex).
What am I doing wrong?
I use RenderMonkey to test shaders, and I've set up stream mapping in it with the appropriate attribute names (aPosition and aNormal). Maybe it has something to do with attribute indices, because I have all of them set to 0? Also, here's what the RenderMonkey documentation says about setting custom attribute names in "Stream Mapping":
The “Attribute Name” field displays the default name that can be used in the shader editor to refer to that stream. In an OpenGL ES effect, the changed name should be used to reference the stream; however, in a DirectX or OpenGL effect, the new name has no effect in the shader editor.
I wonder: is this issue specific to RenderMonkey or to OpenGL itself? And why does aPosition still work, then?
Attribute indices should be unique. It is possible to tell OpenGL to use specific indices via glBindAttribLocation before linking the program. Otherwise, the normal way is to query the index with glGetAttribLocation after linking. It sounds like RenderMonkey lets you choose, in which case have you tried making them separate?
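A minimal sketch of both options, using the attribute names from the question (shader creation and compilation omitted):

// Option 1: choose the indices yourself, before linking
glBindAttribLocation(program, 0, "aPosition");
glBindAttribLocation(program, 1, "aNormal");
glLinkProgram(program);

// Option 2: link first, then ask GL where the attributes ended up
GLint posLoc  = glGetAttribLocation(program, "aPosition");
GLint normLoc = glGetAttribLocation(program, "aNormal");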
I've seen fixed-function rendering cross over to vertex attributes before, where glVertexPointer can wind up binding to the first attribute if it's left unbound (I don't know if this is reproducible any more).
I also see some strange things when experimenting with attributes and fixed function names. Without calling glBindAttribLocation, I compile the following shader:
attribute vec4 a;
attribute vec4 b;
void main()
{
    gl_Position = gl_Vertex + vec4(gl_Normal, 0) + a + b;
}
and I get the following locations (via glGetActiveAttrib):
a: 1
b: 3
gl_Vertex: -1
gl_Normal: -1
When experimenting, it seems the use of gl_Vertex takes up index 0 and gl_Normal takes index 2 (even if it's not reported). I wonder if throwing in a padding attribute between aPosition and aNormal (don't forget to use it in the output or it'll be compiled away) makes it work.
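For reference, a hedged sketch of how such a report can be produced for a linked program (the printf logging is only illustrative):

GLint count = 0;
glGetProgramiv(program, GL_ACTIVE_ATTRIBUTES, &count);
for (GLint i = 0; i < count; ++i) {
    char   name[256];
    GLint  size;
    GLenum type;
    glGetActiveAttrib(program, i, sizeof(name), NULL, &size, &type, name);
    // Built-ins like gl_Vertex report -1 here; user attributes report their slot.
    printf("%s: %d\n", name, glGetAttribLocation(program, name));
}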
In this case it's possible the position data is simply bound to location zero last. However, the black screen with aNormal points to nothing being bound (in which case it will always be {0, 0, 0}). This is a little less consistent: if the normal were bound to the same data as the position, you'd expect some colour, if not the correct colour, since the normal would contain the position data.
Applications are allowed to bind more than one user-defined attribute variable to the same generic vertex attribute index. This is called aliasing, and it is allowed only if just one of the aliased attributes is active in the executable program, or if no path through the shader consumes more than one attribute of a set of attributes aliased to the same location.
My feeling, then, is that RenderMonkey is using just glVertexPointer/glNormalPointer instead of attributes, which I would have thought would bind both normal and position to either the normal or the position data, since you say both indices are zero.
in a DirectX or OpenGL effect, the new name has no effect in the shader editor
Maybe this means "named streams" are simply not available in the non-ES OpenGL version?
This is unrelated, but in the more recent OpenGL-GLSL versions, a #version number is needed and attributes use the keyword in.
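For instance, the top of the vertex shader in modern GLSL would look roughly like this (the version number is just an example):

#version 330
in vec3 aPosition;
in vec3 aNormal;
out vec4 vColor;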

GLSL shader for each situation

In my game I want to create separate GLSL shaders for each situation. For example, if I had three models - a character, a shiny sword, and a blurry ghost - I would like to apply renderShader, animationShader, and lightingShader to the character, then renderShader, lightingShader, and specularShader to the shiny sword, and finally renderShader, lightingShader, and blurShader to the blurry ghost.
The renderShader should multiply the vertex positions by the projection, world, and other matrices, and its fragment shader should simply apply the texture to the model.
animationShader should transform vertices by given bone transforms.
lightingShader should do the lighting and specularShader should do the specular lighting.
blurShader should do the blur effect.
Now, first of all, how can I do multiple vertex transforms in different shaders? Because the animationShader should calculate the animated positions of the vertices, and then renderShader should take those positions and transform them by some matrices.
Secondly, how can I change the color of fragments in different shaders?
The basic idea is that I want to be able to use different shaders for each situation/effect, and I don't know how to achieve it.
I need to know how I should use these shaders in OpenGL, and how I should use GLSL, so that all the shaders complement each other and a shader does not care whether another shader is used or not.
What you're asking for is decidedly non-trivial, and is probably extreme overkill for the relatively limited number of "shader" types you describe.
Doing what you want will require developing what is effectively your own shading language. It may be a highly #defined version of GLSL, but the shaders you write would not be pure GLSL. They would have specialized hooks and be written in ways that code could be expected to flow into other code.
You'll need to have your own way of specifying the inputs and outputs of your language. When you want to connect shaders together, you have to say whose outputs go to which shader's inputs. Some inputs can come from actual shader stage inputs, while others come from other shaders. Some outputs written by a shader will be actual shader stage outputs, while others will feed other shaders.
Therefore, a shader who needs an input from another shader must execute after that other shader. Your system will have to work out the dependency graph.
Once you've figured out all of the inputs and outputs for a specific sequence of shaders, you have to take all of those shader text files and compile them into GLSL, as appropriate. Obviously, this is a non-trivial process.
Your shader language might look like this:
INPUT vec4 modelSpacePosition;
OUTPUT vec4 clipSpacePosition;
uniform mat4 modelToClipMatrix;
void main()
{
    clipSpacePosition = modelToClipMatrix * modelSpacePosition;
}
Your "compiler" will need to do textual transformations on this, converting references to modelSpacePosition into an actual vertex shader input or a variable written by another shader, as appropriate. Similarly, if clipSpacePosition is to be written to gl_Position, you will need to convert all uses of clipSpacePosition to gl_Position. Also, you will need to remove the explicit output declaration.
In short, this will be a lot of work.
If you're going to do this, I would strongly urge you to avoid trying to merge the concepts of vertex and fragment shaders. Keep this shader system working within the well-defined shader stages. So your "lightingShader" would need to be either a vertex shader or a fragment shader. If it's a fragment shader, then one of the shaders in the vertex stage that feeds into it will need to provide a normal in some way, or the fragment shader component will need to compute the normal via some mechanism.
Effectively, for every combination of the shader stages you'll have to create an individual shader program. To save work and avoid redundancy, you'd use some caching structure to create a program for each requested combination only once and reuse it whenever it is requested.
You can do something similar with the shader stages. However, a shader stage cannot be linked from several compilation units (yet; this is an ongoing effort in OpenGL development, and the separable shaders of OpenGL 4 are a stepping stone there). But you can compile a shader from several sources. So you'd write the functions for each desired effect into separate sources and then combine them at compile time. And again, use a caching structure to map source-module combinations to shader objects.
Update due to comment
Let's say you want to have some modularity. For this we can exploit the fact that glShaderSource accepts multiple source strings, which it simply concatenates. You write a number of shader modules. One does the per-vertex illumination calculations:
uniform vec3 light_positions[N_LIGHT_SOURCES];
out vec3 light_directions[N_LIGHT_SOURCES];
out vec3 light_halfdirections[N_LIGHT_SOURCES];
void illum_calculation()
{
    for (int i = 0; i < N_LIGHT_SOURCES; i++) {
        light_directions[i] = ...;
        light_halfdirections[i] = ...;
    }
}
you put this into illum_calculation.vs.glslmod (the filename and extensions are arbitrary). Next you have a small module that does bone animation
uniform vec4 armature_pose[N_ARMATURE_BONES];
uniform vec3 armature_bones[N_ARMATURE_BONES];
in vec3 vertex_position;
void skeletal_animation()
{
/* ...*/
}
put this into illum_skeletal_anim.vs.glslmod. Then you have some common header
#version 330
uniform ...;
in ...;
and some common tail which contains the main function, which invokes all the different stages
void main() {
    skeletal_animation();
    illum_calculation();
}
and so on. Now you can load all those modules, in the right order, into a single shader stage. You can do the same with all shader stages. The fragment shader is special, since it can write to several framebuffer targets at the same time (in sufficiently recent OpenGL versions). And technically you can pass a lot of varyings between the stages, so you could pass a separate set of varyings between shader stages for each framebuffer target. However, the geometry and the transformed vertex positions are common to all of them.
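A minimal sketch of feeding those modules to glShaderSource in order (file loading and error checking omitted; the *Src variables are only illustrative):

// Each string holds one module loaded from disk: the common header,
// illum_calculation.vs.glslmod, the skeletal animation module, and the tail.
const char *modules[] = { headerSrc, illumSrc, skeletalSrc, tailSrc };

GLuint vs = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vs, 4, modules, NULL);   // the sources are simply concatenated
glCompileShader(vs);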
You have to provide different shader programs for each Model you want to render.
You can switch between different shader combinations using the glUseProgram function.
So before rendering your character or shiny sword or whatever you have to initialize the appropriate shader attributes and uniforms.
So it is just a question of how you design your game's code, because you need to provide all uniforms to the shader (for example, light information and texture samplers), and you must enable all necessary vertex attributes of the shader in order to assign position, color, and so on.
These attributes can differ between the shaders, and your client-side models can also have different kinds of vertex attribute structures.
That means your model code directly influences the assigned shader and depends on it.
If you want to share common code between different shader programs, e.g. illuminateDiffuse,
you have to factor this function out and provide it to your shaders by simply inserting the string literal that represents the function into your shader code, which is nothing more than a string itself. So you can achieve a kind of modularity or include-like behavior through string manipulation of your shader code (as sketched below).
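A hedged sketch of that kind of string-level sharing (the source-string names are only illustrative, and <string> is assumed to be included):

// Shared GLSL function kept as a plain C++ string.
const char *illuminateDiffuseSrc =
    "vec3 illuminateDiffuse(vec3 n, vec3 l, vec3 albedo) {\n"
    "    return albedo * max(dot(n, l), 0.0);\n"
    "}\n";

// Reused in two different fragment shaders via string concatenation.
std::string swordFragment = std::string(headerSrc) + illuminateDiffuseSrc + swordBodySrc;
std::string ghostFragment = std::string(headerSrc) + illuminateDiffuseSrc + ghostBodySrc;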
In any case, the shader compiler will tell you what's wrong.
Best regards