ATI GLSL point sprite problems - OpenGL

I've just moved my rendering code onto my laptop and am having issues with OpenGL and GLSL.
I have a vertex shader like this (simplified):
uniform float tile_size;
void main(void) {
gl_PointSize = tile_size;
// gl_PointSize = 12;
}
and a fragment shader which uses gl_PointCoord to read a texture and set the fragment colour.
In my C++ program I'm trying to bind tile_size as follows:
glEnable(GL_TEXTURE_2D);
glEnable(GL_POINT_SPRITE);
glEnable(GL_VERTEX_PROGRAM_POINT_SIZE);
GLint unif_tilesize = glGetUniformLocation(*shader program*, "tile_size");
glUniform1f(unif_tilesize, 12);
(Just to clarify: I've already set up the program and called glUseProgram; shown is just the snippet regarding this particular uniform.)
Set up like this, I get one-pixel points, and I've discovered that OpenGL is failing to find unif_tilesize (glGetUniformLocation returns -1).
If I swap the comments round in my vertex shader I get 12px point sprites fine.
Peculiarly, the exact same code works absolutely fine on my other computer. The OpenGL version on my laptop is 2.1.8304 and it's running an ATI Radeon X1200 (versus an NVIDIA 8800 GT in my desktop), if that's relevant.
EDIT I've changed the question title to better reflect the problem.

You forgot to call glUseProgram before setting the uniform.
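In other words, glUniform* calls apply to the currently active program, so the order matters. A minimal sketch (prog stands in for your linked program handle):
glUseProgram(prog); // make the program active before setting uniforms
GLint unif_tilesize = glGetUniformLocation(prog, "tile_size");
if (unif_tilesize == -1) {
// -1 means the uniform wasn't found: misspelled, or optimized out as unused
}
glUniform1f(unif_tilesize, 12.0f); // now affects prog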

So after another day of playing around I've come to a point where, although I haven't solved my original problem of not being able to bind a uniform to gl_PointSize, I have modified my existing point sprite renderer to work on my ATI card (an old X1200), and I thought I'd share some of the things I've learned.
I think that something about gl_PointSize is broken (at least on my card): in the vertex shader I was able to get 8px point sprites using gl_PointSize = 8.0;, but gl_PointSize = tile_size; gave me 1px sprites whatever I tried to bind to the uniform tile_size.
Luckily I don't need different-sized tiles for each vertex, so I called glPointSize(tile_size) in my main.cpp instead, and this worked fine.
In order to get gl_PointCoord to work (i.e. return values other than (0,0)) in my fragment shader, I had to call glTexEnvf( GL_POINT_SPRITE_ARB, GL_COORD_REPLACE_ARB, GL_TRUE ); in my main.cpp.
There persisted a ridiculous problem in which my varyings were being messed up somewhere between my vertex and fragment shaders. After a long game of 'guess what to type into Google to get relevant information', I found (and promptly lost) a forum where someone said that in some cases, if you don't use gl_TexCoord[0] in at least one of your shaders, your varyings will be corrupted.
In order to fix that I added a line at the end of my fragment shader:
_coord = gl_TexCoord[0].xy;
where _coord is an otherwise unused vec2 (note that gl_TexCoord is not used anywhere else).
Without this line all my colours went blue and my texture lookup broke.
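Putting those workarounds together, the host-side setup ended up looking roughly like this (a sketch assembled from the points above; tile_size is the desired sprite size in pixels):
glEnable(GL_TEXTURE_2D);
glEnable(GL_POINT_SPRITE);
// replace gl_PointCoord's constant (0,0) with real per-point coordinates
glTexEnvf(GL_POINT_SPRITE_ARB, GL_COORD_REPLACE_ARB, GL_TRUE);
// set the point size on the host instead of via gl_PointSize in the shader
glPointSize(tile_size);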

Related

Alternative to gl_TexCoord.xy to get texture coordinate

I always wrote my shaders in GLSL 3 (with the #version 330 line), but it's starting to get pretty old, so I recently tried to write a shader in GLSL 4 and use it with the SFML library for rendering, instead of pure OpenGL.
For now, my goal is a basic shader for a 2D game that takes the color of each pixel of a texture and modifies it. I always did that with gl_TexCoord[0].xy, but it seems to be deprecated now, so I searched and read that I should use in and out variables with a vertex shader, so I tried.
 
Fragment shader
#version 400
in vec2 fragCoord;
out vec4 fragColor;
uniform sampler2D image;
void main(){
// Get the color
vec4 color = texture( image, fragCoord );
/*
* Do things with the color
*/
// Return the color
fragColor = color;
}
 
Vertex shader
#version 400
in vec3 position;
in vec2 textureCoord;
out vec2 fragCoord;
void main(){
// Set the position of the pixel and vertex (I guess)
fragCoord = textureCoord;
gl_Position = vec4( position, 1.0 );
}
I've also seen that you can add the projection, model, and view matrices, but I don't know how to do that with SFML (I don't even think you can), and I don't want to learn complex OpenGL or SFML internals just to change some colors in a 2D game. So here is my question:
Is there an easy way to just get the coordinates of the pixel we're working on? Maybe get rid of the vertex shader, or use it without using matrices?
Unless you really want to learn a lot of nasty OpenGL, writing your own shaders just for textures is a little overkill. SFML can handle textures and shaders for you behind the scenes (here is a good article on how to use them), so you don't need to worry about shaders at all. Also note that you can change the color of SFML sprites (which is, I believe, what you are trying to do) with sprite.setColor(sf::Color(*whatever*));. Plus, there's no problem with using version 330; that's what I usually use, albeit with the in and out stuff as well.
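For the sprite-tinting route, a minimal sketch (the file name and colour values are just examples):
sf::Texture texture;
texture.loadFromFile("sprite.png"); // hypothetical asset
sf::Sprite sprite(texture);
sprite.setColor(sf::Color(255, 128, 0)); // tint the sprite orange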
If you really want to use your own shaders for fancy effects like pixellation, blurring, etc., I can't help you much, since I've only ever worked with pure OpenGL and I don't know how the vertex information is handled by SFML, but this is some interesting example code you can check out, here is a tutorial, and here is a reference.
To more directly answer your question: gl_FragCoord is a built-in GLSL variable that holds the fragment's window-space position, but you still have to set gl_Position in the vertex shader. You can't get rid of the vertex shader if you are doing anything OpenGL related. You'd have to do fancy matrix stuff (this is a wonderful library) and probably buffer stuff (like this) to tell GLSL yourself where everything is.
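For example, a fragment shader can derive a 0-1 texture coordinate from gl_FragCoord alone if you pass in the render-target size; u_resolution here is a made-up uniform name you would have to set from the host:
#version 400
out vec4 fragColor;
uniform sampler2D image;
uniform vec2 u_resolution; // hypothetical: target size in pixels
void main() {
// gl_FragCoord.xy is the fragment's position in window space (pixels)
vec2 uv = gl_FragCoord.xy / u_resolution;
fragColor = texture(image, uv);
}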

GLSL shader does not draw obj when including unused parameters

I set up a Phong shader with GLSL, which works fine.
When I render my object without "this line", it works. But when I uncomment "this line", the world is still built but the object is no longer rendered, even though "LVN2" is not used anywhere in the GLSL code. The shader executes without throwing errors. I think my problem is a rather general GLSL question about how a shader works.
The main code is written in Java.
Vertex shader snippet:
// Light Vector 1
vec3 lightCamSpace = vec4(viewMatrix * modelMatrix * lightPosition).xyz;
out_LightVec = vec3(lightCamSpace - vertexCamSpace).xyz;
// Light Vector 2
vec3 lightCamSpace2 = vec4(viewMatrix * modelMatrix * lightPosition2).xyz;
out_LightVec2 = vec3(lightCamSpace2 - vertexCamSpace).xyz;
Fragment shader snippet:
vec3 LVN = normalize(out_LightVec);
//vec3 LVN2 = normalize(out_LightVec2); // <---- this line
EDIT 1:
GL_MAX_VERTEX_ATTRIBS is 29 and glGetError is already implemented but not throwing any errors.
If I change
vec3 LVN2 = normalize(out_LightVec2);
to
vec3 LVN2 = normalize(out_LightVec);
it actually renders the object again. So it really seems like something is maxed out. (LVN2 is still not used at any point in the shader)
I actually found my absolutely stupid mistake: in the main program I was giving the shader the wrong viewMatrix location... but I'm not sure why it sometimes worked.
I can't spot an error in your shaders. One possibility is that you are exceeding GL_MAX_VERTEX_ATTRIBS by using a fifth four-component out slot. (Although a limit of 4 would be weird; according to this answer the minimum supported amount should be 16, and in that case the program shouldn't even link. Then again you are using GLSL 1.50, which implies OpenGL 3.2, which is pretty old. I couldn't find a specification stating minimum requirements for the attribute count.)
The reason for it working with the line commented out could be that the shader compiler is able to optimize the unused in/out parameter away, but unable to do so when it's referenced in the fragment shader body.
You could test my guess by querying the limit:
int limit;
glGetIntegerv(GL_MAX_VERTEX_ATTRIBS, &limit);
Beyond this, I would suggest inserting glGetError queries before and after your draw call to see if there's something else going on.
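A sketch of that check (indexCount is a placeholder for your own draw parameters):
// drain any stale errors first, so the next query reflects the draw call
while (glGetError() != GL_NO_ERROR) {}
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0); // your draw call here
GLenum err = glGetError();
if (err != GL_NO_ERROR) {
// e.g. GL_INVALID_OPERATION or GL_INVALID_VALUE
}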

Using sampler2DShadow with multisampled deferred rendering breaks

As the title states, using a sampler2DShadow causes an error in the lighting shader of my multisampling FBO, but I cannot pin down the problem, because a very similar configuration using a standard deferred rendering setup without multisampling works fine.
Is there a compatibility issue with sampler2DShadow and multisampling in OpenGL, or some alternative I should be using?
The shaders compile fine.
The code works fine until I run this line:
texture(gShadowMap2D, vec3(pCoord.xy, (pCoord.z) / pCoord.w));
and retrieve the result. I then get GL_INVALID_OPERATION.
The shadow map is from a directional light (the depth map is valid and visible) and uses GL_COMPARE_R_TO_TEXTURE, set on a standard texture (GL_TEXTURE_2D).
The multisampling deferred FBO textures use GL_TEXTURE_2D_MULTISAMPLE.
I'm using GLSL 330 (OpenGL 3.3 core profile).
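For reference, the compare-mode setup on such a shadow map typically looks like this (a sketch; shadowMapTex is a placeholder, and note that in the 3.3 core profile the enum is spelled GL_COMPARE_REF_TO_TEXTURE, GL_COMPARE_R_TO_TEXTURE being the older name for the same value):
glBindTexture(GL_TEXTURE_2D, shadowMapTex); // the depth texture rendered from the light
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);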
UPDATE
I think the problem is related to getting the world position from the position map in the multisampled fragment shader.
The standard way:
vec3 worldPos = texture(gPositionMap, texCoord).xyz;
The multisampled way:
vec2 texCoordMS = floor(vertTextureSize * texCoord.xy);
vec3 worldPos = vec3(0.0); // must be zero-initialised before accumulating
for(int i = 0; i < samples; i++)
{
worldPos += texelFetch(gPositionMapMS, ivec2(texCoordMS), i).xyz;
}
worldPos = worldPos / samples;
(I omitted the other samplers.)
I'm guessing I am going out of bounds, which throws the error when trying to access the sampler2DShadow (pCoord is calculated using worldPos).
Now to figure out how to get this multisampled worldPos to give the same result as the standard way...
Standard way (mDepthVP is the light's depth view-projection matrix):
vec4 coord = gLight.mDepthVP * vec4(worldPos, 1.0);
Well, after almost pulling my hair out desperately searching for a single hint as to why this problem was happening I finally figured it out, but I'm not entirely sure why it was causing the problem.
During the geometry pass (before the lighting pass) the models are rendered to the position, colour (diffuse), normal and depth-stencil targets, as you would expect. During this pass a texture is bound (the diffuse texture of a mesh), but only as a standard texture (GL_TEXTURE_2D) at unit zero (GL_TEXTURE0) (I'm only using diffuse for now).
I left it like that because the system worked: the lighting pass overrides that unit when it binds the four FBO textures for reading. However, in the multisampling FBO they were being bound as multisample textures (GL_TEXTURE_2D_MULTISAMPLE), and it just happens that the 'position' map was using unit zero (GL_TEXTURE0).
For some reason this didn't overwrite the previously bound unit from the geometry pass and caused the GL_INVALID_OPERATION error. After calling:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, 0);
straight after the geometry pass the problem went away.
The question I ask comes down to "why didn't it overwrite?" (One plausible factor: each texture unit keeps a separate binding point per target, so binding a GL_TEXTURE_2D_MULTISAMPLE texture does not replace an existing GL_TEXTURE_2D binding on the same unit.)

Use only a Fragment Shader in libGDX?

I am porting my project from pure LWJGL to libGDX, and I'd like to know if there is a way to create a program with only a fragment shader.
What I'd like to do is change the color of my texture where it is gray to a color I receive as a parameter. The shader worked perfectly before, but now it seems that I need to add a vertex shader, which makes no sense to me. So I wrote this:
void main() {
gl_Position = ftransform();
}
That, according to the internet, is the simplest possible VS: it does nothing, and it really shouldn't. But it doesn't work: nothing is displayed, and no compilation errors or warnings are thrown. I tried to replace my Fragment Shader with a simpler one, but the results were even stranger:
void main() {
gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
This should only paint everything red. But when I run it, the program crashes: no error is thrown, no exception, very odd behavior. I know that libGDX adds tinting, but I don't think that would work either, because I need to replace only the gray-scale colors, and depending on the intensity I need to modulate the correct colour (which varies). Another shader I am using (fragment only, again) sets the entire scene to gray-scale (for the pause menu). It won't work either.
When it comes to shaders in modern OpenGL, you always have to supply at least a vertex and a fragment shader. OpenGL 2 gave you some leeway, with the drawback that you then still had to struggle with the arcane fixed-function state machine.
"That, according to the internet, is the simplest possible VS: it does nothing, and it really shouldn't."
What makes you think it does "nothing"? Of course it does something: it translates incoming raw numbers into something sensible, namely vertices in clip space.
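For what it's worth, libGDX's SpriteBatch binds its vertex data to fixed attribute names (a_position, a_color, a_texCoord0) and supplies the combined projection matrix as u_projTrans, so a compatible shader pair looks roughly like this (a sketch modeled on the default SpriteBatch shader, not a verbatim copy):
Vertex shader
attribute vec4 a_position;
attribute vec4 a_color;
attribute vec2 a_texCoord0;
uniform mat4 u_projTrans; // set by SpriteBatch
varying vec4 v_color;
varying vec2 v_texCoords;
void main() {
v_color = a_color;
v_texCoords = a_texCoord0;
gl_Position = u_projTrans * a_position;
}
Fragment shader
#ifdef GL_ES
precision mediump float;
#endif
varying vec4 v_color;
varying vec2 v_texCoords;
uniform sampler2D u_texture; // set by SpriteBatch
void main() {
gl_FragColor = v_color * texture2D(u_texture, v_texCoords);
}
The fragment shader is then free to remap gray-scale values however it likes before writing gl_FragColor.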

How to apply a fragment shader to only one object in OpenGL?

I'm just beginning to learn OpenGL. All of the tutorials I've seen demonstrate using a fragment shader to set the color of every object in view. What I haven't found yet is how you would use a fragment shader on just one of the objects, giving different objects different colors. How do you do that?
To provide background to the question, I'm drawing a simple scene with a house and a road in 2d. I have discovered how to set the colors of each of my objects (the main body of the house, the window, etc) using the fixed graphics pipeline, I just don't understand how to set the colors using fragment shaders.
Any clarification would be greatly appreciated, including correction if I'm misunderstanding something.
"To provide background to the question, I'm drawing a simple scene with a house and a road in 2d. I have discovered how to set the colors of each of my objects (the main body of the house, the window, etc) using the fixed graphics pipeline, I just don't understand how to set the colors using fragment shaders."
As RobertRouhani said, make the color a uniform and change it for each object.
"How to apply a fragment shader to only one object in OpenGL?"
You can simply change the shader program with glUseProgram, and rendering calls made after it will use the new shader.
See this: https://gamedev.stackexchange.com/questions/22216/using-multiple-shaders
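A sketch of that idea (progHouse and progRoad are hypothetical program handles):
glUseProgram(progHouse); // everything drawn now uses the house shader
glDrawArrays(GL_TRIANGLES, 0, houseVertexCount);
glUseProgram(progRoad); // switch programs, then draw the road
glDrawArrays(GL_TRIANGLES, 0, roadVertexCount);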
Before you draw an object with glDrawArrays or glDrawElements, pass the color to the shader as a uniform variable.
http://www.opengl.org/sdk/docs/man/xhtml/glUniform.xml
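To sketch the calling side (prog and the vertex offsets/counts are placeholders for your own setup):
GLint colorLoc = glGetUniformLocation(prog, "u_color");
glUseProgram(prog);
// house body in one color...
glUniform4f(colorLoc, 0.8f, 0.6f, 0.4f, 1.0f);
glDrawArrays(GL_TRIANGLES, 0, houseVertexCount);
// ...then the window with a different color, using the same shader
glUniform4f(colorLoc, 0.5f, 0.7f, 1.0f, 1.0f);
glDrawArrays(GL_TRIANGLES, windowFirstVertex, windowVertexCount);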
Sample GLSL fragment shader:
uniform vec4 u_color;
void main(void)
{
gl_FragColor = u_color;
}
I would expand on this answer but I am being lazy. Hope it helps somewhat. There are a lot of tutorials online, just search for glsl, glUniform4f, etc.