Incorrect normal texture using FBO - opengl

I have a strange problem with multiple render targets. I attached three textures to my FBO: color, normal and position. I can render color and position correctly, but rendering the normal texture yields the result shown below (the green and red faces are part of a spinning cube):
In the lower-left corner is the result of rendering the normal texture to a quad.
In my vertex shader, I'm computing normal as: normal = gl_NormalMatrix * gl_Normal, and in my fragment shader, I'm emitting it as: gl_FragData[1] = vec4(normal, 1);.
What's the issue here?

It turns out I forgot to supply normals for the rendered quads. Adding glNormal3f() calls fixed the problem.
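For reference, a minimal immediate-mode sketch of the missing piece (the quad coordinates and the +Z facing here are made up for illustration):

glBegin(GL_QUADS);
    glNormal3f(0.0f, 0.0f, 1.0f);    // one normal for the whole face; without it gl_Normal is stale or undefined
    glVertex3f(-1.0f, -1.0f, 0.0f);
    glVertex3f( 1.0f, -1.0f, 0.0f);
    glVertex3f( 1.0f,  1.0f, 0.0f);
    glVertex3f(-1.0f,  1.0f, 0.0f);
glEnd();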

Related

OpenGL implementing skybox in a deferred renderer

I am trying to figure out how to render a skybox in a deferred renderer so that it can be included in post-processing effects. However, my geometry stage is in view space, and unfortunately the skybox in this stage is affected by its position relative to the light just like any other object would be (it behaves like a large box located very far from the light source and shows up very dark).
My setup, before trying to incorporate the skybox into post-processing, is as follows:
1: (bind FBO) Render geometry to the color, normal and position FBO texture attachments. (unbind FBO)
2: (bind FBO) Render the scene and calculate lighting in screen space. (unbind FBO)
3: (bind FBO) Apply post-processing effects. (unbind FBO)
4: Blit the geometry FBO's depth buffer to the default framebuffer.
5: Render the skybox.
I've tried moving step 5 so that it comes before step 3, like this:
2: (bind FBO) Render the scene and calculate lighting in screen space.
5: Render the skybox.
(unbind FBO)
3: (bind FBO) Apply post-processing effects. (unbind FBO)
4: Blit the geometry FBO's depth buffer to the default framebuffer.
But obviously the skybox then has no depth information about the scene and renders on top of the lighting stage. And if I try to do any depth blitting between steps 2 and 5, I believe I am making invalid GL calls, because I'm already bound to an FBO while calling:
GL30.glBindFramebuffer(GL30.GL_READ_FRAMEBUFFER, DeferredFBO.fbo_handle);
GL30.glBindFramebuffer(GL30.GL_DRAW_FRAMEBUFFER, 0); // Write to default framebuffer
                                                     // or a skybox framebuffer
GL30.glBlitFramebuffer(0, 0, DisplayManager.Width, DisplayManager.Height,
                       0, 0, DisplayManager.Width, DisplayManager.Height,
                       GL11.GL_DEPTH_BUFFER_BIT, GL11.GL_NEAREST);
So I came up with a really easy, hacky solution to this problem without having to use any texture barriers or mess with the depth or color buffers.
I render the skybox geometry in the geometry pass of the deferred renderer: I draw the skybox with a uniform flag set in the fragment shader to color it, remembering to remove the translation from the view matrix via another uniform flag in the vertex shader. In the fragment shader I set the skybox color as shown below. Here is a basic summary without pasting all of the code:
layout (binding = 4) uniform samplerCube cubeMap;
uniform float SkyRender;

void main(){
    if (SkyRender > 0.5) {   // flag set when drawing the skybox
        vec4 SkyColor = texture(cubeMap, skyTexCoords);
        gAlbedoSpec.rgb = SkyColor.rgb;
        gAlbedoSpec.a = -1.0;
    } else {
        gAlbedoSpec.rgb = texture(DiffuseTexture, TexCoords).rgb;
        gAlbedoSpec.a = texture(SpecularTexture, TexCoords).r;
    }
}
I set the alpha component of my skybox in the color buffer as a flag for my lighting pass; here I set it to -1.
In my lighting pass I simply choose to color the skybox with the diffuse value only, instead of adding lighting calculations, when the gAlbedoSpec alpha value is -1.
if (Diffuse.a > -1.0) {
    FragColor = SphereNormal * vec4(Dlighting, 1.0) + vec4(Slighting, 1.0);
} else {
    FragColor = Diffuse;
}
It's fairly simple, doesn't require much code, and gets the job done.
Then give the skybox the depth information it lacks.
When you rendered your scene in step 1, you used a depth buffer. So when you draw your skybox, you need an FBO that uses that same depth buffer. But this FBO also needs to use the color image that you rendered to in step 2.
Now, this FBO cannot be the same FBO you used in step 2. Why?
Because that would be undefined behavior. Presumably, step 2 reads from your depth buffer to reconstruct positions (if that is not the case, then you could just attach the depth buffer to the FBO from step 2, though you would also be wasting a lot of performance). But that depth buffer is also attached to that same FBO, and sampling from a texture that is attached to the currently bound framebuffer is what makes it undefined behavior. Even if you're not writing to the depth, it is still undefined under OpenGL.
So you will need another FBO, which has the depth buffer from step 1 with the color buffer from step 2.
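A sketch of what that extra FBO could look like (all names here, such as skyboxFBO, lightingColorTex and geometryDepthTex, are placeholders for your own handles):

GLuint skyboxFBO;
glGenFramebuffers(1, &skyboxFBO);
glBindFramebuffer(GL_FRAMEBUFFER, skyboxFBO);
// color: the attachment the lighting pass (step 2) renders into
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, lightingColorTex, 0);
// depth: the attachment the geometry pass (step 1) wrote its depth into
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, geometryDepthTex, 0);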
Unless you have access to OpenGL 4.5/ARB_texture_barrier/NV_texture_barrier. With that feature, it becomes defined behavior if you use write masks to turn off writes to the depth buffer. All you need to do is issue a glTextureBarrier before performing step 2. So you don't need another FBO if you have that.
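Roughly (a sketch, assuming GL 4.5 or ARB_texture_barrier and that depth writes are already masked off):

glTextureBarrier();   // make the geometry pass's depth writes visible to later texture reads
// ... then run step 2, which samples the depth texture while it is still attached ...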
In either case, keep the depth test enabled when rendering your skybox, but turn off depth writing. This will allow fragments behind your actual world to be culled, but the depth of the skybox fragments will be infinitely far away.
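Either way, the skybox draw itself might look roughly like this (a sketch; drawSkybox() stands in for your own draw call):

glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);   // commonly needed so fragments at the far plane are not rejected
glDepthMask(GL_FALSE);    // test against the scene's depth, but never write depth
drawSkybox();
glDepthMask(GL_TRUE);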

libgdx heightmap shader distortion artefact

I'm currently making a game and I want to add a nice shader effect like water distortion.
I am rendering the scene to an FBO and then applying a heightmap distortion shader to it.
The distortion is applied by the fragment shader.
normalMapPosition is the color vector at the current position of the normal map.
vec2 normalCoord = v_texCoord0;
vec4 normalMapPosition = 2.0 * texture2D(u_normals, v_texCoord0);
vec2 distortedCoord = normalCoord + (normalMapPosition.xz * 0.05);
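Presumably the distorted coordinate is then used to sample the scene texture, roughly (the sampler name u_texture is an assumption, not taken from the question):

gl_FragColor = texture2D(u_texture, distortedCoord);   // u_texture: the scene FBO's color texture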
I then render it to the screen and obtain the following result:
The problem is that there is a diagonal artefact traversing the whole image.
I think this is due to OpenGL treating the textured quad as two triangles.
Is there a nice way to handle this kind of issue?
I finally found the solution. The problem came from using the same FBO to draw the scene and then to render the shader, like this:
batch.setShader(waterfallShaderProgram);
fbo.begin();
batch.begin();
// here is the problem: fbo is bound for drawing and
// sampled from at the same time
batch.draw(fbo.getColorBufferTexture());
batch.end();
fbo.end();
The scene is rendered to the FBO fbo in order to apply other effects on top.
Introducing a new FBO fbo2 solved the issue.
batch.setShader(waterfallShaderProgram);
fbo2.begin();
batch.begin();
batch.draw(fbo.getColorBufferTexture());
batch.end();
fbo2.end();

OpenGL render-to-texture-via-FBO -- incorrect display vs. normal Texture

Off-screen rendering to a texture-bound offscreen framebuffer object should be trivial, but I'm having a problem I cannot wrap my head around.
My full sample program (2D only for now!) is here:
http://pastebin.com/hSvXzhJT
See below for some descriptions.
I'm creating a 512x512 RGBA texture object and binding it to an FBO. No depth or other renderbuffers are needed at this point; this is strictly 2D.
The following extremely simple shaders render to this texture:
Vertex shader:
attribute vec2 aPos;
varying vec2 vPos;

void main (void) {
    vPos = (aPos + 1.0) / 2.0;
    gl_Position = vec4(aPos, 0.0, 1.0);
}
In aPos this just gets a VBO containing 4 xy coords for a quad (-1, -1 :: 1, -1 :: 1, 1 :: -1, 1).
So although the framebuffer resolution should theoretically be 512x512, the shader obviously renders its "texture" onto a "full-(off)screen quad", following GL's -1..1 coordinate paradigm.
Fragment shader:
varying vec2 vPos;

void main (void) {
    gl_FragColor = vec4(0.25, vPos, 1.0);
}
So it sets a fully opaque color with red fixed at 0.25 and green/blue depending on x/y anywhere between 0 and 1.
At this point my assumption is that a 512x512 texture is rendered showing only the -1..1 full-(off)screen quad, fragment-shaded for green/blue from 0..1.
So this is my off-screen setup. On-screen, I have another real visible full-screen quad with 4 xyz coords { -1, -1, 1 ::: 1, -1, 1 ::: 1, 1, 1 ::: -1, 1, 1 }. Again, for now this is 2D, so there are no matrices and z is always 1.
This quad is drawn by a different shader, simply rendering a given texture, text-book GL-101 style. In my sample program linked above I have a simple boolean toggle doRtt; when this is false (the default), render-to-texture is not performed at all and this shader simply uses texture.jpg from the current directory.
This doRtt=false mode shows that the second on-screen quad-renderer is "correct" for my current requirements and performs the texturing as I want it to: repeated twice vertically and twice horizontally (later will be clamped, repeat is just for testing here), otherwise scaling with NO texture filtering or mipmapping.
So no matter how the window (and thus view port) is resized, we always see a full-screen quad with a single texture repeated twice horizontally, twice vertically.
Now, with doRtt=true, the second shader still does its job, but the texture is never fully correctly scaled -- or perhaps never correctly drawn; I can't be sure, since unfortunately we can't just say "hey GL, save this FBO to disk for debugging purposes".
The RTT shader DOES perform some partial rendering (or maybe a full one; again, I can't be sure what's happening off-screen...). Especially when you resize the viewport to be a lot smaller than the default size, you see the breaks between the texture repeats, and not all the colors expected from our very simple RTT fragment shader are actually shown.
(A) either: the 512x512 texture is created correctly but not mapped correctly by my code (but then why does any given texture.jpg file show up just fine with doRtt=false, using the exact same simple textured-quad shader?)
(B) or: the 512x512 texture is not rendered correctly, and somehow the RTT frag shader changes its output depending on the window resolution -- but why? The off-screen quad is always at -1..1 for x and y, the vertex shader always maps this to fragment coords 0..1, and the RTT texture always stays at 512x512 for this simple test!
Note, BOTH the off-screen quad AND the on-screen quad never change their coords and are always "full-screen" (-1..1 in both dimensions).
Again, this should be so simple. What on earth am I missing?
Specs: OpenGL 4.2 (but the code doesn't need any 4.2 features obviously!), Nvidia Quadro 5010M, openSuse 12.1 64bit, Golang Weekly 22-Feb-2012.
First of all, try checking for OpenGL errors: call glGetError() after each OpenGL function. You must also set the correct viewport for drawing. Before drawing to the FBO, call glViewport(0, 0, 512, 512); before drawing to the screen, call glViewport(0, 0, display_width, display_height).
Also, there is no need to bind rttFrameTex while you are rendering to it through the FBO. Binding the texture is needed only when you are reading it in a shader.
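Put together, the suggested fix looks roughly like this (a sketch; rttFbo, drawOffscreenQuad() and drawOnscreenQuad() are placeholder names, not from the linked program):

glBindFramebuffer(GL_FRAMEBUFFER, rttFbo);
glViewport(0, 0, 512, 512);                        // match the 512x512 render target
drawOffscreenQuad();                               // pass using the RTT shaders
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glViewport(0, 0, display_width, display_height);   // back to the window size
drawOnscreenQuad();                                // pass using the textured-quad shader
// and, while debugging, check glGetError() after each call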

GLSL passing texture coordinates from vertex shader

What I'm trying to accomplish: Drawing the depth map of my scene on top of my scene (so that objects closer are darker, and further away are lighter)
Problem: I don't seem to understand how to pass the right texture coordinates from my vertex shader to my fragment shader.
So I created my FBO, and the texture that the depth map gets drawn to... not that I'm entirely sure what I was doing, but whatever, it works. I tested drawing the texture using the fixed functionality pipeline, and it looks just like it's supposed to (the depth map that is).
But trying to use it in my shaders just isn't working...
Here's the part from my render method that binds the texture:
glActiveTexture(GL_TEXTURE7);
glBindTexture(GL_TEXTURE_2D, depthTextureId);
glUniform1i(depthMapUniform, 7);
glUseProgram(shaderProgram);
look(); //updates my viewing matrix
box.render(); //renders box VBO
So... I think that's sort of right? Maybe? No clue why texture 7, that was just something that was in a tutorial I was checking...
And here's the important stuff from my vertex shader:
out vec4 ShadowCoord;

void main() {
    gl_Position = PMatrix * (VMatrix * MMatrix) * gl_Vertex; //projection, view and model matrices
    ShadowCoord = gl_MultiTexCoord0; //something I kept seeing in examples, was hoping it would work.
}
Aaand, fragment shader:
uniform sampler2D ShadowMap;
in vec4 ShadowCoord;
in vec3 Color; //passed from vertex shader, didn't include the code for it though. Just the vertex color.
out vec4 FragColor;

void main() {
    FragColor = vec4(texture2D(ShadowMap, ShadowCoord.st).x * vec3(Color), 1.0);
}
Now the problem is that the coordinate the fragment shader receives for the texture is always (0, 0), the bottom-left corner. I tried changing it to ShadowCoord = gl_MultiTexCoord7, because I figured maybe it had something to do with putting the texture in slot number 7, but alas, the problem persisted. When the color at (0, 0) changes, so does the color of the entire scene, rather than only the appropriate pixel/fragment changing.
And that's what I'm hoping to get some insight on: how to pass the correct coordinates (I'd like the corners of the texture to be the same coordinates as the corners of my screen). And yes, this is a beginner's question... but I have been looking in the Orange Book, and the problem with it is that it's great on the GLSL side of things, but the OpenGL side is severely lacking in the examples I could really use...
The input variable gl_MultiTexCoord0 (or 7) is the builtin per-vertex texture coordinate for the 0th (or 7th) texture coordinate, set by gl(Multi)TexCoord (when using immediate mode) or by glTexCoordPointer (when using arrays/VBOs).
But as your depth buffer is already in screen space, what you want is not a usual texture laid onto the object, but just the value in the texture for a specific pixel/fragment. So the vertex shader isn't involved in any way. Instead you just use the current fragment's screen space position as texture coordinate, that can be read in the fragment shader using gl_FragCoord. But keep in mind that this coordinate is in [0,w]x[0,h] and textures are accessed by normalized texture coordinates in [0,1]. So you have to divide the fragment's coordinate by the screen size:
uniform vec2 screenSize;
...
... texture2D(ShadowMap, gl_FragCoord.st/screenSize) ...
But you actually don't need two passes for this effect anyway, as you can just use the fragment's depth directly, without writing it into a texture. Instead of
texture2D(ShadowMap, gl_FragCoord.st/screenSize).x
you can just use
gl_FragCoord.z
which is nothing else than the fragment's depth value, that would have been written into the texture in the first pass. This way you completely spare the first depth-writing pass and the texture access in the second pass.
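As a concrete sketch, the whole effect then collapses into a single fragment shader along these lines (the in/out variable names are assumed from the question):

in vec3 Color;      // per-vertex color from the vertex shader
out vec4 FragColor;

void main() {
    // gl_FragCoord.z is the fragment's window-space depth in [0,1]:
    // near fragments are close to 0 (darker), far fragments close to 1 (lighter)
    FragColor = vec4(gl_FragCoord.z * Color, 1.0);
}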

What does setting the GL color before doing a texture mapping operation do?

I am looking at some sample code in a book that creates a jittered antialiasing effect by repeatedly rendering a scene (at different offsets) onto an offscreen texture, then using that texture to repeatedly draw a quad in the main view with some blend stuff set up.
To accumulate the color "correctly", the code is setting the color like so:
glColor4f(f, f, f, 1);
where f is 1.0/number_of_samples, and then binding the offscreen texture and rendering it.
Since textures come with their own color and alpha data, what is the effect (mathematically and intuitively) that setting the overall "color" in advance achieves?
Thanks.
The default texture environment is GL_MODULATE, which means that the vertex color (set with glColor) is multiplied with the texture color.
So, mathematically, it does:
fragmentColor.rgb = texColor.rgb / numberOfSamples (the alpha stays texColor.a, since it is multiplied by the vertex alpha of 1.0)
Hope that explains it.
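For illustration, the accumulation loop from the book probably looks something like this sketch (the helper names renderSceneToTexture, drawFullscreenQuad, jitterOffsets and offscreenTexture are made up; the point is only the glColor4f/GL_MODULATE interaction):

glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);  // the default anyway
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);                                  // sum the weighted passes
glClear(GL_COLOR_BUFFER_BIT);
for (int i = 0; i < numSamples; ++i) {
    renderSceneToTexture(jitterOffsets[i]);                   // render the jittered scene into the offscreen texture
    glBindTexture(GL_TEXTURE_2D, offscreenTexture);
    glColor4f(1.0f/numSamples, 1.0f/numSamples, 1.0f/numSamples, 1.0f);
    drawFullscreenQuad();                                     // textured quad over the main view
}
// each pass contributes texColor/numSamples, so the sum approximates the average of all samples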