GLSL shader does not draw object when including unused parameters - opengl

I set up a Phong shader with GLSL, and it works fine.
When I render my object with "this line" commented out, it works. But when I uncomment "this line", the world is still built but the object is not rendered anymore, even though "LVN2" is not used anywhere in the GLSL code. The shader executes without reporting errors. I think my problem is a rather general GLSL question about how a shader actually works.
The main code is written in Java.
Vertex shader snippet:
// Light Vector 1
vec3 lightCamSpace = vec4(viewMatrix * modelMatrix * lightPosition).xyz;
out_LightVec = vec3(lightCamSpace - vertexCamSpace).xyz;
// Light Vector 2
vec3 lightCamSpace2 = vec4(viewMatrix * modelMatrix * lightPosition2).xyz;
out_LightVec2 = vec3(lightCamSpace2 - vertexCamSpace).xyz;
Fragment shader snippet:
vec3 LVN = normalize(out_LightVec);
//vec3 LVN2 = normalize(out_LightVec2); // <---- this line
EDIT 1:
GL_MAX_VERTEX_ATTRIBS is 29, and glGetError checks are already in place but are not returning any errors.
If I change
vec3 LVN2 = normalize(out_LightVec2);
to
vec3 LVN2 = normalize(out_LightVec);
it actually renders the object again. So it really seems like something is maxed out. (LVN2 is still not used at any point in the shader)

I actually found my absolutely stupid mistake. In the main program I was giving the shader the wrong viewMatrix location... But I'm not sure why it sometimes worked.

I can't spot an error in your shaders. One thing that's possible is that you are exceeding GL_MAX_VERTEX_ATTRIBS by using a fifth four-component out slot. (Although a limit of 4 would be weird; according to this answer the minimum supported amount should be 16, and it shouldn't even link in that case. Then again you are using GLSL 1.50, which implies OpenGL 3.2, which is pretty old. I couldn't find a specification stating minimum requirements for the attribute count.)
The reason for it working with the line commented out could be the shader compiler being able to optimize the unused in/out parameter away, and being unable to do so when it's referenced in the fragment shader body.
You could test my guess by querying the limit:
int limit;
glGetIntegerv(GL_MAX_VERTEX_ATTRIBS, &limit);
Beyond this, I would suggest inserting glGetError queries before and after your draw call to see if there's something else going on.
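For example, a minimal check along those lines could look like this (a C++-style sketch for illustration; the same calls exist in the Java GL bindings, and the draw call and vertexCount are placeholders, not the asker's actual code):
// Sketch: drain any pre-existing errors, issue the draw, then check again.
// Assumes <cstdio> is included; replace the draw call with your own.
for (GLenum err; (err = glGetError()) != GL_NO_ERROR; )
    fprintf(stderr, "GL error before draw: 0x%x\n", err);
glDrawArrays(GL_TRIANGLES, 0, vertexCount);
for (GLenum err; (err = glGetError()) != GL_NO_ERROR; )
    fprintf(stderr, "GL error after draw: 0x%x\n", err);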

Related

passing uniform float into vertex shader opengl

I am having a problem passing a float from C++ into my vertex shader using a uniform. This float is just meant to keep adding to itself.
Within my header I have
float pauto;
void pautoupdate(int program);
Now within my cpp
void World::pautoupdate(int program)
{
pauto += 0.1f;
glUniform1f(glGetUniformLocation(program, "pauto"), pauto);
}
Within my vertex shader it is declared as follows, and it just increments the x values:
uniform float pauto;
terrx = Position.x;
terrx += pauto;
From here on I am not getting any results from doing this; I am not sure if I am incorrectly pointing to it or something.
Try the following:
I assume that the GLSL program you show is not the whole code, since there is no main or anything.
Check that program is set when you enter the function
Store the output of glGetUniformLocation and check that it is valid (i.e. not -1)
Do a call to glGetError before/after to see if GL detects an issue.
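A rough sketch of those checks (hypothetical; assumes <cstdio> is included and that program holds your linked program object):
GLint current = 0;
glGetIntegerv(GL_CURRENT_PROGRAM, &current);           // is the right program actually bound?
fprintf(stderr, "current program: %d (expected %d)\n", current, program);
GLint loc = glGetUniformLocation(program, "pauto");
if (loc == -1)
    fprintf(stderr, "uniform 'pauto' not found (misspelled or optimized out)\n");
glUniform1f(loc, pauto);
for (GLenum err; (err = glGetError()) != GL_NO_ERROR; )
    fprintf(stderr, "GL error after glUniform1f: 0x%x\n", err);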
If you want to quickly test and setup shaders in a variety of situations, there are several tools to help you with that. On-line, there's ShaderToy (http://www.shadertoy.com) for example. Off-line, let me recommend one I developed, Tao3D (http://tao3d.sf.net).
glUniform* should be called between glUseProgram(program) and glUseProgram(0):
1. Use the shader
2. Set uniforms
3. Render stuff
4. Dispose of the shader
This is because OpenGL has to know which program you are sending the uniform to.
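As a minimal sketch of that ordering applied to the code above (the render step is left as a comment):
glUseProgram(program);                                       // 1. use the shader
glUniform1f(glGetUniformLocation(program, "pauto"), pauto);  // 2. set uniforms while it is bound
// 3. render stuff here, e.g. your terrain draw call
glUseProgram(0);                                             // 4. unbind ("dispose of") the shader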

Usage of custom and generic vertex shader attributes in OpenGL and OpenGL ES

Since generic vertex attributes are deprecated in OpenGL, I tried to rewrite my vertex shader using only custom attributes. And it didn't work for me. Here is the vertex shader:
attribute vec3 aPosition;
attribute vec3 aNormal;
varying vec4 vColor;
vec4 calculateLight(vec4 normal) {
// ...
}
void main(void) {
gl_Position = uProjectionMatrix * uWorldViewMatrix * vec4(aPosition, 1);
vec4 rotatedNormal = normalize(uWorldViewMatrix * vec4(aNormal, 0));
vColor = calculateLight(rotatedNormal);
}
This works perfectly in OpenGL ES 2.0. However, when I try to use it with OpenGL I see a black screen. If I change aNormal to the generic gl_Normal, everything works fine as well (note that aPosition works fine in both contexts and I don't have to use gl_Vertex).
What am I doing wrong?
I use RenderMonkey to test shaders, and I've set up stream mapping in it with appropriate attribute names (aPosition and aNormal). Maybe it has something to do with attribute indices, because I have all of them set to 0? Also, here's what the RenderMonkey documentation says about setting custom attribute names in "Stream Mapping":
The “Attribute Name” field displays the default name that can be used in the shader editor to refer to that stream. In an OpenGL ES effect, the changed name should be used to reference the stream; however, in a DirectX or OpenGL effect, the new name has no affect in the shader editor
I wonder, is this issue specific to RenderMonkey or to OpenGL itself? And why does aPosition still work then?
Attribute indices should be unique. It is possible to tell OpenGL to use specific indices via glBindAttribLocation before linking the program; otherwise, the normal way is to query the index with glGetAttribLocation. It sounds like RenderMonkey lets you choose, in which case have you tried making them separate?
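Outside RenderMonkey, in plain C++ the two approaches look roughly like this (a sketch; shader compilation, program creation and error checks are omitted):
// Option A: choose the indices yourself before linking.
glBindAttribLocation(program, 0, "aPosition");
glBindAttribLocation(program, 1, "aNormal");
glLinkProgram(program);
// Option B: let the linker assign them, then query the result after linking.
GLint posLoc  = glGetAttribLocation(program, "aPosition");
GLint normLoc = glGetAttribLocation(program, "aNormal");
// Feed each location with its own glEnableVertexAttribArray/glVertexAttribPointer call.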
I've seen fixed function rendering cross over to vertex attributes before, where glVertexPointer can wind up binding to the first attribute if it's left unbound (I don't know if this is reproducible any more).
I also see some strange things when experimenting with attributes and fixed function names. Without calling glBindAttribLocation, I compile the following shader:
attribute vec4 a;
attribute vec4 b;
void main()
{
gl_Position = gl_Vertex + vec4(gl_Normal, 0) + a + b;
}
and I get the following locations (via glGetActiveAttrib):
a: 1
b: 3
gl_Vertex: -1
gl_Normal: -1
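A query along these lines lists the active attributes and their locations (a sketch, assuming <cstdio>; not necessarily the exact code used above):
GLint count = 0;
glGetProgramiv(program, GL_ACTIVE_ATTRIBUTES, &count);
for (GLint i = 0; i < count; ++i) {
    char name[256];
    GLint size;
    GLenum type;
    glGetActiveAttrib(program, (GLuint)i, sizeof(name), NULL, &size, &type, name);
    printf("%s: %d\n", name, glGetAttribLocation(program, name));
}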
When experimenting, it seems the use of gl_Vertex takes up index 0 and gl_Normal takes index 2 (even if it's not reported). I wonder if throwing in a padding attribute between aPosition and aNormal (don't forget to use it in the output or it'll be compiled away) makes it work.
In this case it's possible the position data is simply bound to location zero last. However, the black screen with aNormal points to nothing being bound (in which case it will always be {0, 0, 0}). This is a little less consistent - if the normal was bound to the same data as the position you'd expect some colour, if not correct colour, as the normal would have the position data.
Applications are allowed to bind more than one user-defined attribute variable to the same generic vertex attribute index. This is called aliasing, and it is allowed only if just one of the aliased attributes is active in the executable program, or if no path through the shader consumes more than one attribute of a set of attributes aliased to the same location.
My feeling is then that RenderMonkey is using just glVertexPointer/glNormalPointer instead of attributes, which I would have thought would bind both normal and position to either the normal or position data since you say both indices are zero.
in a DirectX or OpenGL effect, the new name has no affect in the shader editor
Maybe this means "named streams" are simply not available in the non-ES OpenGL version?
This is unrelated, but in the more recent OpenGL-GLSL versions, a #version number is needed and attributes use the keyword in.

Use only a Fragment Shader in libGDX?

I am porting my project from pure LWJGL to libGDX, and I'd like to know if there is a way to create a program only with the Fragment Shader?
What I'd like to do is to change the color of my texture where it is gray to a color I receive as a parameter. The shader worked perfectly before, but now it seems that I need to add a Vertex Shader, which does not make any sense to me. So I wrote this:
void main() {
gl_Position = ftransform();
}
That, according to the internet, is the simplest possible VS: it does nothing, and it really shouldn't. But it doesn't work: nothing is displayed, and no compilation errors or warnings are thrown. I tried to replace my Fragment Shader with a simpler one, but the results were even stranger:
void main() {
gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
This should only paint everything in red. But when I run it, the program crashes; no error is thrown, no Exception, very odd behavior. I know that libGDX adds tinting, but I don't think that would work either, because I need to replace only the gray-scale colors, and depending on the intensity, I need to modulate to the correct color (which varies). Another shader I am using (Fragment only, again) sets the entire scene to gray-scale (when on the pause menu). It won't work either.
When it comes to shaders in modern OpenGL, you always have to supply at least a vertex and a fragment shader. OpenGL-2 gave you some leeway, with the drawback that you then still have to struggle with the arcane fixed-function state machine.
That, according to the internet, is the simplest possible VS: it does nothing, and it really shouldn't.
What makes you think it does "nothing"? Of course it does something: it translates incoming raw numbers into something sensible, namely vertices in clip space.

ATI glsl point sprite problems

I've just moved my rendering code onto my laptop and am having issues with OpenGL and GLSL.
I have a vertex shader like this (simplified):
uniform float tile_size;
void main(void) {
gl_PointSize = tile_size;
// gl_PointSize = 12;
}
and a fragment shader which uses gl_PointCoord to read a texture and set the fragment colour.
In my C++ program I'm trying to bind tile_size as follows:
glEnable(GL_TEXTURE_2D);
glEnable(GL_POINT_SPRITE);
glEnable(GL_VERTEX_PROGRAM_POINT_SIZE);
GLint unif_tilesize = glGetUniformLocation(*shader program*, "tile_size");
glUniform1f(unif_tilesize, 12);
(Just to clarify: I've already set up a program and used glUseProgram; shown is just the snippet regarding this particular uniform.)
Set up like this, I get one-pixel points and have discovered that OpenGL is failing to locate unif_tilesize (it gets set to -1).
If I swap the comments round in my vertex shader I get 12px point sprites fine.
Peculiarly, the exact same code works absolutely fine on my other computer. The OpenGL version on my laptop is 2.1.8304 and it's running an ATI Radeon X1200 (cf. an NVIDIA 8800GT in my desktop), if this is relevant...
EDIT I've changed the question title to better reflect the problem.
You forgot to call glUseProgram before setting the uniform.
So after another day of playing around I've come to a point where, although I haven't solved my original problem of not being able to bind a uniform to gl_PointSize, I have modified my existing point sprite renderer to work on my ATI card (an old x1200) and thought I'd share some of the things I'd learned.
I think that something about gl_PointSize is broken (at least on my card); in the vertex shader I was able to get 8px point sprites using gl_PointSize=8.0;, but using gl_PointSize=tile_size; gave me 1px sprites whatever I tried to bind to the uniform tile_size.
Luckily I don't need different sized tiles for each vertex so I called glPointSize(tile_size) in my main.cpp instead and this worked fine.
In order to get gl_PointCoord to work (i.e. return values other than (0,0)) in my fragment shader, I had to call glTexEnvf( GL_POINT_SPRITE_ARB, GL_COORD_REPLACE_ARB, GL_TRUE ); in my main.cpp.
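Putting those pieces together, the host-side setup described above boils down to something like the following sketch (tile_size here is just a float on the C++ side, a placeholder for whatever size you use):
// Workaround sketch consolidating the calls mentioned above.
glEnable(GL_POINT_SPRITE);
glTexEnvf(GL_POINT_SPRITE_ARB, GL_COORD_REPLACE_ARB, GL_TRUE); // makes gl_PointCoord vary across each sprite
glPointSize(tile_size);                                        // fixed size from the host instead of the gl_PointSize uniform
// glEnable(GL_VERTEX_PROGRAM_POINT_SIZE) is only needed if the shader itself writes gl_PointSize.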
There persisted a ridiculous problem in which my varyings were being messed up somewhere between my vertex and fragment shaders. After a long game of 'guess what to type into Google to get relevant information', I found (and promptly lost) a forum where someone said that in some cases, if you don't use gl_TexCoord[0] in at least one of your shaders, your varyings can be corrupted.
In order to fix that I added a line at the end of my fragment shader:
_coord = gl_TexCoord[0].xy;
where _coord is an otherwise unused vec2 (note that gl_TexCoord is not used anywhere else).
Without this line all my colours went blue and my texture lookup broke.

Strange shader corruption

I'm working with shaders just now and I'm observing some very strange behavior. I've loaded this vertex shader:
#version 150
uniform float r;
uniform float g;
uniform float b;
varying float retVal;
attribute float dummyAttrib;
void main(){
retVal = dummyAttrib+r+g+b; //deleting dummyAttrib = corruption
gl_Position = gl_ModelViewProjectionMatrix*vec4(100,100,0,1);
}
First of all, I render with glDrawArrays(GL_POINTS,0,1000) with this shader and nothing special, just using the shader program. If you run this shader and set the point size to something visible, you should see a white square in the middle of the screen (I'm using glOrtho2d(0,200,0,200)). dummyAttrib is just some attribute; my shaders won't run if there's none. Also, I need to actually use that attribute, so normally I do something like float c = dummyAttrib. That is also the first question I would like to ask: why is it that way?
However, this would be fine, but when you change the line with the comment (retVal = ...) to retVal = r+g+b; and add the mentioned line to use the attribute (float c = dummyAttrib), strange things happen. First of all, you won't see that square anymore, so I had to set up transform feedback to watch what's happening.
I've set dummyAttrib to 5 in each element of the field and r = g = b = 1. With the current code the result of the transform feedback is 8 - exactly what you'd expect. However, changing it as described above gives strange values like 250.128, and every time I modify the code somehow (just reordering calls), this value changes. As soon as I return dummyAttrib to the calculation of retVal, everything is magically fixed.
This is why I think there's some sort of shader corruption. I'm using the same loading interface for shaders as I did in projects before, and those were flawless; however, they were using attributes in the normal way, not just a dummy attribute to get the shader to run.
These two problems may be connected. To sum up: the shader won't run without any attribute, and the shader is corrupted if that attribute isn't used for setting a varying that is used either in the fragment shader or for transform feedback.
PS: It came to my mind while writing this that it looks like every variable that isn't used for passing data into the next stage is optimized out. This could optimize out the attribute as well, and then this shader would be without an attribute and wouldn't work properly. Could this be a driver fault? I have a Radeon 3870HD with the current Catalyst version 2010.1105.19.41785.
In the case of your artificial usage (float c = dummyAttrib) the attribute will be optimized out. The question is what your mesh-preparation logic does in this case. If it queries the used attributes from GL, it will get nothing. And with no vertex attributes passed, the primitive will not be drawn (the behavior of my Radeon 2400HD on any Catalyst).
So, basically, you should pass an artificial unused attribute (something like 1 byte per vertex from some uninitialized buffer) if GL reports no attributes at all.
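A hedged sketch of such a dummy attribute (the buffer handle, attribute name and vertexCount are illustrative assumptions, not code from the question):
// One byte per vertex from an uninitialized buffer, just so one attribute array is enabled.
GLuint dummyVBO;
glGenBuffers(1, &dummyVBO);
glBindBuffer(GL_ARRAY_BUFFER, dummyVBO);
glBufferData(GL_ARRAY_BUFFER, vertexCount, NULL, GL_STATIC_DRAW);   // contents are never meaningfully read
GLint loc = glGetAttribLocation(program, "dummyAttrib");            // may be -1 if the attribute was optimized out
GLuint index = (loc >= 0) ? (GLuint)loc : 0;                        // fall back to generic index 0
glEnableVertexAttribArray(index);
glVertexAttribPointer(index, 1, GL_UNSIGNED_BYTE, GL_FALSE, 0, (const void*)0);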