Using sampler2DShadow with multisampled deferred rendering breaks - C++

As the title states, using a sampler2DShadow causes an error in the lighting shader of my multisampled FBO. I can't pin the problem down, because a very similar configuration using a standard deferred rendering setup without multisampling works fine.
Is there a compatibility issue between sampler2DShadow and multisampling in OpenGL, or is there some alternative I should be using?
The shaders compile fine.
The code works fine until I run this line:
texture(gShadowMap2D, vec3(pCoord.xy, (pCoord.z) / pCoord.w));
and retrieve the result. I then get GL_INVALID_OPERATION.
The shadow map comes from a directional light (the depth map is valid and visible), uses GL_COMPARE_R_TO_TEXTURE, and is a standard texture (GL_TEXTURE_2D).
The multisampled deferred FBO's textures use GL_TEXTURE_2D_MULTISAMPLE.
I'm using GLSL 330 (OpenGL 3.3 core profile).
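For context, the comparison mode is set on the shadow texture roughly like this on the C++ side (a sketch; the handle name is illustrative):
glBindTexture(GL_TEXTURE_2D, shadowMapTex); // hypothetical handle
// GL_COMPARE_R_TO_TEXTURE is the pre-3.0 name; core profiles call the same value GL_COMPARE_REF_TO_TEXTURE.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);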
UPDATE
I think the problem is related to getting the world position from the position map in the multisampled fragment shader.
The standard way:
vec3 worldPos = texture(gPositionMap, texCoord).xyz;
The multisampled way:
vec3 worldPos = vec3(0.0);
vec2 texCoordMS = floor(vertTextureSize * texCoord.xy);
for (int i = 0; i < samples; i++)
{
    worldPos += texelFetch(gPositionMapMS, ivec2(texCoordMS), i).xyz;
}
worldPos /= float(samples);
(I omitted the other samplers.)
I'm guessing I'm reading out of bounds, which throws the error when I access the sampler2DShadow (pCoord is calculated from worldPos).
Now to figure out how to make this multisampled worldPos give the same result as the standard way.
Standard way (mDepthVP is the light's depth view-projection matrix, a mat4):
vec4 coord = gLight.mDepthVP * vec4(worldPos, 1.0);

Well, after almost pulling my hair out desperately searching for a single hint as to why this was happening, I finally figured it out, though I'm not entirely sure why it caused the problem.
During the geometry pass (before the lighting pass) the models are rendered to the position, colour (diffuse), normal and depth-stencil attachments as you would expect. During this pass a texture is bound (the diffuse texture of a mesh), but only as a standard texture (GL_TEXTURE_2D) on unit zero (GL_TEXTURE0) (I'm only using diffuse for now).
I left it like that because the system worked: the lighting pass overrides that unit when it binds the four FBO textures for reading. However, in the multisampled FBO they were being bound as multisample textures (GL_TEXTURE_2D_MULTISAMPLE), and it just happens that the position map was using unit zero (GL_TEXTURE0).
For some reason this didn't overwrite the previously bound unit from the geometry pass, which caused the GL_INVALID_OPERATION error. After calling:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, 0);
straight after the geometry pass the problem went away.
So the question comes down to: why didn't it overwrite?
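A plausible explanation: each texture unit keeps a separate binding point per target, so binding a GL_TEXTURE_2D_MULTISAMPLE texture to unit zero does not replace the GL_TEXTURE_2D binding left over from the geometry pass; the two coexist, and the driver rejects the resulting sampler/binding conflict with GL_INVALID_OPERATION at draw time. In context, the cleanup looks like this (a sketch; the texture name is illustrative):
// After the geometry pass: clear the leftover GL_TEXTURE_2D binding on unit 0.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, 0);
// Lighting pass: the multisampled G-buffer texture now has the unit to itself.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, positionTexMS); // illustrative handle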

Related

Perspective-correct shader rendering

I want to put a texture on a rectangle which has been transformed by a non-affine transform (more specifically, a perspective transform).
I have a very complex implementation based on OpenSceneGraph that loads my own vertex and fragment shaders.
The problem starts with the fact that the shaders were written quite a long time ago and use GLSL 120.
The OpenGL side is written in C++ and, in its simplest form, loads a texture and applies it to a quad. Until recently everything was working fine, because the quad was at most affine-transformed (rotation + translation), so the rendering of the texture on it was correct.
Now however we want to support quads of any shape, including something like this:
http://ibin.co/1dbsGPpzbkOX
As you can see in the picture above, the texture is incorrect in the middle (shown by arrows).
After hours of research I found out that this is due to OpenGL splitting quads into triangles and rendering each triangle independently. This is of course incorrect if my quad is as shown, because the 4th point influences the texture stretch.
I then even found that this issue has a name: it's a "perspectively incorrect interpolation of texture coordinates", as explained here:
[1]
Looking for solutions to this, I came across this article which mentions the use of the "smooth" attribute in later GLSL versions: [2]
but this means updating my shaders to a newer version.
An alternative I found was to use GL hints, as described here: [3]
but the disadvantage is that it is only a hint, and there is no way to make sure it is honoured.
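For reference, that call is a one-liner on the C++ side (a sketch, assuming the standard perspective-correction hint is what the article means):
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST); // only a hint; drivers may ignore it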
Now that I have shown my research, here is my question:
Updating my (complex) shaders and all the OpenGL that goes with them to the new OpenGL pipeline paradigm would be too time-consuming. So I tried using GLSL's "#version 330 compatibility", changing the "varying" declarations to "smooth out" and "smooth in", and adding the GL_NICEST perspective-correction hint on the C++ side, but these changes did not solve my problem. Is this expected because compatibility mode somehow doesn't support correct perspective interpolation, or is there something more that I need to do?
Or is there a better way for me to get this functionality without needing to refactor everything?
Here is my vertex shader:
#version 330 compatibility
smooth out vec4 texel;
void main(void) {
    gl_Position = ftransform();
    texel = gl_TextureMatrix[0] * gl_MultiTexCoord0;
}
and the fragment shader is much too complex, but it starts with
#version 330 compatibility
smooth in vec4 texel;
Using derhass's hint, I solved the problem in a rather different way.
It turned out that the "smooth" keyword was not the problem; the projective texture mapping was.
To solve it, I passed the perspective transform matrix directly from my C++ code to the fragment shader and calculated the "correct" texture coordinate there myself, without relying on GLSL's per-triangle (barycentric) interpolation.
To help anyone with the same problem, here is a cut-down version of my shaders:
.vert
#version 330 compatibility
smooth out vec4 inQuadPos; // Used for the frag shader to know where each pixel is to be drawn
void main(void) {
    gl_Position = ftransform();
    inQuadPos = gl_Vertex;
}
.frag
#version 330 compatibility
uniform mat3 transformMat; // maps final quad coordinates to texture coordinates (passed in from C++)
uniform sampler2DRect source;
smooth in vec4 inQuadPos;
void main(void)
{
    // Calculate the correct texel coordinate using the transformation matrix
    vec3 real_texel = transformMat * vec3(inQuadPos.x / inQuadPos.w, inQuadPos.y / inQuadPos.w, 1);
    vec2 tex = vec2(real_texel.x / real_texel.z, real_texel.y / real_texel.z);
    gl_FragColor = texture2DRect(source, tex).rgba;
}
Note that the fragment shader code above has not been tested exactly as written, so I cannot guarantee it will work out of the box, but it should be mostly there.
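For anyone wondering about the C++ side: transformMat is a 3x3 homography that maps the quad's corner positions (gl_Vertex.xy) to the texture's corner coordinates (in pixels, since source is a sampler2DRect). My own code isn't shown here, but a sketch of how such a matrix can be computed via the standard direct linear transform follows (all names are illustrative; the row-major result would be uploaded with transpose = GL_TRUE in glUniformMatrix3fv):
#include <array>
#include <cmath>
#include <utility>

// Solve for the 3x3 homography H with H * (x, y, 1)^T ~ (u, v, 1)^T for four
// correspondences (quad corner -> texture corner). Standard 8x8 DLT system
// with the bottom-right entry fixed to 1; assumes a non-degenerate quad.
std::array<double, 9> computeHomography(const double x[4], const double y[4],
                                        const double u[4], const double v[4])
{
    double A[8][9] = {}; // augmented system, two rows per correspondence
    for (int i = 0; i < 4; ++i) {
        // u = (h0*x + h1*y + h2) / (h6*x + h7*y + 1)
        double* r0 = A[2 * i];
        r0[0] = x[i]; r0[1] = y[i]; r0[2] = 1.0;
        r0[6] = -u[i] * x[i]; r0[7] = -u[i] * y[i]; r0[8] = u[i];
        // v = (h3*x + h4*y + h5) / (h6*x + h7*y + 1)
        double* r1 = A[2 * i + 1];
        r1[3] = x[i]; r1[4] = y[i]; r1[5] = 1.0;
        r1[6] = -v[i] * x[i]; r1[7] = -v[i] * y[i]; r1[8] = v[i];
    }
    // Gauss-Jordan elimination with partial pivoting.
    for (int c = 0; c < 8; ++c) {
        int pivot = c;
        for (int r = c + 1; r < 8; ++r)
            if (std::fabs(A[r][c]) > std::fabs(A[pivot][c])) pivot = r;
        for (int k = 0; k < 9; ++k) std::swap(A[c][k], A[pivot][k]);
        for (int r = 0; r < 8; ++r) {
            if (r == c) continue;
            double f = A[r][c] / A[c][c];
            for (int k = c; k < 9; ++k) A[r][k] -= f * A[c][k];
        }
    }
    return { A[0][8] / A[0][0], A[1][8] / A[1][1], A[2][8] / A[2][2],
             A[3][8] / A[3][3], A[4][8] / A[4][4], A[5][8] / A[5][5],
             A[6][8] / A[6][6], A[7][8] / A[7][7], 1.0 };
}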

ATI glsl point sprite problems

I've just moved my rendering code onto my laptop and am having issues with OpenGL and GLSL.
I have a vertex shader like this (simplified):
uniform float tile_size;
void main(void) {
    gl_PointSize = tile_size;
    // gl_PointSize = 12;
}
and a fragment shader which uses gl_PointCoord to read a texture and set the fragment colour.
In my C++ program I'm trying to bind tile_size as follows:
glEnable(GL_TEXTURE_2D);
glEnable(GL_POINT_SPRITE);
glEnable(GL_VERTEX_PROGRAM_POINT_SIZE);
GLint unif_tilesize = glGetUniformLocation(*shader program*, "tile_size");
glUniform1f(unif_tilesize, 12);
(Just to clarify: I've already set up a program and called glUseProgram; shown is just the snippet regarding this particular uniform.)
Set up like this I get one-pixel points, and I've discovered that OpenGL is failing to find tile_size (glGetUniformLocation returns -1 for unif_tilesize).
If I swap the comments round in my vertex shader, I get 12px point sprites fine.
Peculiarly, the exact same code works absolutely fine on my other computer. The OpenGL version on my laptop is 2.1.8304 and it's running an ATI Radeon X1200 (vs. an NVIDIA 8800 GT in my desktop), if this is relevant...
EDIT I've changed the question title to better reflect the problem.
You forgot to call glUseProgram before setting the uniform.
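In other words, the order should be (a minimal sketch; the handle names are placeholders):
glUseProgram(shaderProgram); // must be current before any glUniform* call
GLint unif_tilesize = glGetUniformLocation(shaderProgram, "tile_size");
// -1 means the name was not found -- or the compiler optimised the uniform
// away because it is unused (which happens when the gl_PointSize = 12 line is active).
if (unif_tilesize != -1)
    glUniform1f(unif_tilesize, 12.0f); // affects the *current* program only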
So after another day of playing around, I've come to a point where, although I haven't solved my original problem of not being able to bind a uniform to gl_PointSize, I have modified my existing point sprite renderer to work on my ATI card (an old X1200) and thought I'd share some of the things I learned.
I think that something about gl_PointSize is broken (at least on my card): in the vertex shader I was able to get 8px point sprites using gl_PointSize = 8.0;, but using gl_PointSize = tile_size; gave me 1px sprites whatever I tried to bind to the uniform tile_size.
Luckily I don't need different sized tiles for each vertex, so I called glPointSize(tile_size) in my main.cpp instead, and this worked fine.
In order to get gl_PointCoord to work (i.e. return values other than (0,0)) in my fragment shader, I had to call glTexEnvf(GL_POINT_SPRITE_ARB, GL_COORD_REPLACE_ARB, GL_TRUE); in my main.cpp.
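Pulled together, the ATI-friendly setup amounted to this on the C++ side (a sketch of the calls just described; tile_size is the desired sprite size in pixels):
glEnable(GL_POINT_SPRITE);
glEnable(GL_TEXTURE_2D);
glTexEnvf(GL_POINT_SPRITE_ARB, GL_COORD_REPLACE_ARB, GL_TRUE); // make gl_PointCoord vary across the sprite
glPointSize(tile_size); // fixed-function point size instead of gl_PointSize in the shader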
There persisted a ridiculous problem in which my varyings were being messed up somewhere between my vertex and fragment shaders. After a long game of 'guess what to type into Google to get relevant information', I found (and promptly lost) a forum post where someone said that in some cases, if you don't use gl_TexCoord[0] in at least one of your shaders, your varyings will be corrupted.
In order to fix that I added a line at the end of my fragment shader:
_coord = gl_TexCoord[0].xy;
where _coord is an otherwise unused vec2. (Note that gl_TexCoord is not used anywhere else.)
Without this line all my colours went blue and my texture lookup broke.

OpenGL 3.x core sampler2DRect (GL_TEXTURE_RECTANGLE) example

I'm trying to figure out how to make a semi-transparent 2D overlay over my 3D scene, reading the OpenGL SuperBible 5th edition for reference.
It has an example which overlays the OpenGL logo over a scene (in Chapter 7) using the texture target GL_TEXTURE_RECTANGLE and a GLSL sampler type called sampler2DRect. The texture is supposed to be read in the fragment shader using the texture() function.
The example in this book uses many source files, and I'm having a really hard time implementing it in a simple program, so I'm wondering if anyone could point me to a simpler example of sampler2DRect.
I have no trouble with the part about switching to an orthographic projection; rather, when I try to display the texture, the surface just renders white. My code is getting really messy at this point and I can't seem to pinpoint the problem, so I'd rather start over from scratch following a simpler example if one is available anywhere.
P.S. I'm using SFML 2.0rc for loading the image file, in case it matters.
error C1101: ambiguous overloaded function reference "mul(mat4, vec3)"
(0) : mat3x4 mul(mat3x1, mat1x4)
(0) : mat3 mul(mat3x1, mat1x3)
(0) : mat3x2 mul(mat3x1, mat1x2)
(0) : mat3x1 mul(mat3x1, mat1)
(0) : mat2x4 mul(mat2x1, mat1x4)
.....
This is a very wordy way of telling you that there's no such function that multiplies a mat4 with a vec3. It's then listing all of the legal variants of mul.
Your dimensions must match when you multiply matrices; what you likely want is to multiply a mat4 with a vec4. If this is your position coordinate, then add a 1.0 as the final component of the vector:
uniform mat4 mvpMatrix;
in vec3 position;

void main() {
    gl_Position = mvpMatrix * vec4(position, 1.0);
}
In addition to Tim's answer, make sure that:
Your texture is bound: glBindTexture(GL_TEXTURE_2D, textureID);
The vertex shader outputs UV coords: out vec2 UV;
The fragment shader gets UV coords: in vec2 UV;
The VBO with the UVs exists, is enabled, bound and set (glEnableVertexAttribArray, glBindBuffer, glVertexAttribPointer)
glEnable(GL_TEXTURE_2D)
And special items for rectangle textures:
The UV coords are in [0,width] x [0,height] (special case for rectangle textures).
Make sure that your quad has approximately the same size as the texture (rectangle textures don't have mipmaps).
Use standard textures instead. They can be NPOT.
Also: use gDebugger.
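For a bare-bones starting point, a minimal pair of GLSL 3.30 core shaders for a rectangle-texture overlay might look like this (a sketch, not the SuperBible code; all names are illustrative):
---overlay.vert---
#version 330 core
in vec2 position; // already in clip/orthographic space
in vec2 texcoord; // in pixels: [0, width] x [0, height], not [0, 1]
out vec2 vTexcoord;
void main(void)
{
    gl_Position = vec4(position, 0.0, 1.0);
    vTexcoord = texcoord;
}
---overlay.frag---
#version 330 core
uniform sampler2DRect overlay;
in vec2 vTexcoord;
out vec4 fragColor;
void main(void)
{
    fragColor = texture(overlay, vTexcoord); // rectangle textures use unnormalised coordinates
}
On the C++ side, create and bind the texture with the GL_TEXTURE_RECTANGLE target (glBindTexture(GL_TEXTURE_RECTANGLE, ...)), set the sampler uniform to the texture unit with glUniform1i, and enable blending (glEnable(GL_BLEND) plus a suitable glBlendFunc) for the semi-transparent overlay.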

GLSL 4.10 Texture Mapping

I'm trying to figure out how to do texture mapping using GLSL version 4.10. I'm pretty new to GLSL and was happy to get a triangle rendering today with colors fading based on sin(time) using shaders. Now I'm interested in using shaders with a single texture.
A lot of tutorials and even Stack Overflow answers suggest using gl_MultiTexCoord0. However, this has been deprecated since GLSL 1.30 and the latest version is now 4.20. My graphics card doesn't support 4.20 which is why I'm trying to use 4.10.
I know I'm generating and binding my texture appropriately, and I have proper vertex coordinates and texture coordinates because my heightmap rendered perfectly when I was using the fixed-function pipeline, and it renders fine with color rather than the texture.
Here are my GLSL shaders and some of my C++ draw code:
---heightmap.vert (GLSL)---
in vec3 position;
in vec2 texcoord;
out vec2 p_texcoord;
uniform mat4 projection;
uniform mat4 modelview;
void main(void)
{
    gl_Position = projection * modelview * vec4(position, 1.0);
    p_texcoord = texcoord;
}
---heightmap.frag (GLSL)---
in vec2 p_texcoord;
out vec4 color;
uniform sampler2D texture;
void main(void)
{
    color = texture2D(texture, p_texcoord);
}
---Heightmap::Draw() (C++)---
// Bind Shader
// Bind VBO + IBO
// Enable Vertex and Texcoord client state
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureId);
// glVertexPointer(...)
// glTexCoordPointer(...)
glUniform4fv(projLoc, projection);
glUniform4fv(modelviewLoc, modelview);
glUniform1i(textureId, 0);
// glDrawElements(...)
// glDisable/unbind everything
Something else I'm unsure about is whether I have to pass the texture coordinates to the fragment shader as a varying, since I'm not touching them in the vertex shader. Also, I have no idea how it will get interpolated texcoords from that; it seems like it would just get 0.0 or 1.0, not the interpolated coordinate. I don't know enough about shaders to understand how that works. If somebody could enlighten me I would be thrilled!
Edit 1:
@Bahbar: So sorry, that was a typo; I'm typing code on one machine while reading it off another. Like I said, it all worked with the fixed-function pipeline. Although glEnableClientState and gl[Vertex|TexCoord]Pointer are deprecated, they should still work with shaders, no? glVertexPointer rather than glVertexAttribPointer worked with colors rather than textures. Also, I am using glBindAttribLocation (position to 0 and texcoord to 1).
The reason I am still using glVertexPointer is that I am trying to un-deprecate one thing at a time.
glBindTexture takes a texture object as a second parameter.
// Enable Vertex and Texcoord client state
I assume you meant the generic vertex attributes? Where are your position and texcoord attributes set up? To do that, you need some calls to glEnableVertexAttribArray and glVertexAttribPointer instead of glEnableClientState and glVertex/TexCoordPointer (all of those are deprecated in the same way that gl_MultiTexCoord is in GLSL).
And of course, to figure out where the attributes are bound, you need to either call glGetAttribLocation to figure out where the GL chose to put the attrib, or define it yourself with glBindAttribLocation (before linking the program).
Edit to add, following your addition:
Well, 0 might end up pulling data from glVertexPointer (for reasons you should not rely on: attrib 0 is special, and most IHVs make it work just like Vertex), but 1 very likely won't be pulling data from glTexCoordPointer.
In theory, there is no overlap between the generic attributes (like your texcoord, which gets its data from glVertexAttribPointer(1, ...), 1 here being your chosen location) and the built-in attributes (like gl_MultiTexCoord[0], which gets its data from glTexCoordPointer).
Now, NVIDIA is known to not follow the spec and indeed aliases attributes (this comes from the Cg model, as far as I know), going so far as to say to use a specific attribute location for glTexCoord (the Cg spec suggests it uses location 8 for TexCoord0, and location 1 is the blendweight attribute; see table 39, p. 242), but really you should just bite the bullet and switch your TexCoordPointer calls to VertexAttribPointer calls.
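In code, the switch this answer describes looks roughly like this (a sketch; buffer names, stride and offsets are illustrative):
// Before linking: pin the generic attribute locations the shaders will use.
glBindAttribLocation(program, 0, "position");
glBindAttribLocation(program, 1, "texcoord");
glLinkProgram(program);
// At draw time: generic attribute arrays replace gl{Vertex,TexCoord}Pointer.
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, (void*)0);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, stride, (void*)texcoordOffset);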

GLSL passing texture coordinates from vertex shader

What I'm trying to accomplish: Drawing the depth map of my scene on top of my scene (so that objects closer are darker, and further away are lighter)
Problem: I don't seem to understand how to pass the right texture coordinates from my vertex shader to my fragment shader.
So I created my FBO and the texture that the depth map gets drawn to... not that I'm entirely sure what I was doing, but whatever, it works. I tested drawing the texture using the fixed-function pipeline, and it looks just like it's supposed to (the depth map, that is).
But trying to use it in my shaders just isn't working...
Here's the part from my render method that binds the texture:
glActiveTexture(GL_TEXTURE7);
glBindTexture(GL_TEXTURE_2D, depthTextureId);
glUniform1i(depthMapUniform, 7);
glUseProgram(shaderProgram);
look(); //updates my viewing matrix
box.render(); //renders box VBO
So... I think that's sort of right? Maybe? No clue why texture 7, that was just something that was in a tutorial I was checking...
And here's the important stuff from my vertex shader:
out vec4 ShadowCoord;
void main() {
    gl_Position = PMatrix * (VMatrix * MMatrix) * gl_Vertex; // projection, view and model matrices
    ShadowCoord = gl_MultiTexCoord0; // something I kept seeing in examples; was hoping it would work
}
Aaand, fragment shader:
in vec4 ShadowCoord;
in vec3 Color; // passed from the vertex shader (code not included); just the vertex colour
out vec4 FragColor;
uniform sampler2D ShadowMap; // (sampler uniform; not shown in the original snippet)

void main() {
    FragColor = vec4(texture2D(ShadowMap, ShadowCoord.st).x * vec3(Color), 1.0);
}
Now the problem is that the coordinate the fragment shader receives for the texture is always (0,0), the bottom-left corner. I tried changing it to ShadowCoord = gl_MultiTexCoord7, because I figured maybe it had something to do with me putting the texture in slot number 7... but alas, the problem persisted. When the colour at (0,0) changes, so does the colour of the entire scene, rather than only the appropriate pixel/fragment changing.
And that's what I'm hoping to get some insight on... how to pass the correct coordinates (I'd like for the corners of the texture to be the same coordinates as the corners of my screen). And yes, this is a beginners question... but I have been looking in the Orange Book, and the problem with it is that it's great on the GLSL side of things, but the OpenGL side of things is severely lacking in the examples that I could really use...
The input variable gl_MultiTexCoord0 (or 7) is the built-in per-vertex texture coordinate for the 0th (or 7th) texture coordinate set, filled by gl(Multi)TexCoord (when using immediate mode) or by glTexCoordPointer (when using arrays/VBOs).
But as your depth buffer is already in screen space, what you want is not a usual texture laid onto the object, but just the value in the texture for a specific pixel/fragment. So the vertex shader isn't involved in any way. Instead you just use the current fragment's screen space position as texture coordinate, that can be read in the fragment shader using gl_FragCoord. But keep in mind that this coordinate is in [0,w]x[0,h] and textures are accessed by normalized texture coordinates in [0,1]. So you have to divide the fragment's coordinate by the screen size:
uniform vec2 screenSize;
...
... texture2D(ShadowMap, gl_FragCoord.st/screenSize) ...
But you actually don't need two passes for this effect anyway, as you can just use the fragment's depth directly, without writing it into a texture. Instead of
texture2D(ShadowMap, gl_FragCoord.st/screenSize).x
you can just use
gl_FragCoord.z
which is nothing else than the fragment's depth value, that would have been written into the texture in the first pass. This way you completely spare the first depth-writing pass and the texture access in the second pass.
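A cut-down single-pass fragment shader along those lines might look like this (a sketch; Color is the vertex colour from the question's own shaders):
in vec3 Color;
out vec4 FragColor;

void main() {
    // gl_FragCoord.z is this fragment's depth in [0,1]: near fragments get a
    // small factor (darker), far ones a factor near 1 (lighter) -- no FBO,
    // texture or second pass needed.
    FragColor = vec4(vec3(Color) * gl_FragCoord.z, 1.0);
}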