Render from fbo texture to another within same fbo - c++

I'm trying to set up deferred rendering, and have successfully managed to output data to the various g-buffer textures (position, normal, albedo, specular).
I am now attempting to sample from the albedo texture into a fifth colour attachment of the same FBO (for the purposes of further post-process sampling), by rendering a full-screen quad with simple texture coordinates.
I have checked via Nsight that the vertex/texcoord data is good, and confirmed that the shader can "see" the texture to sample from, but all I see in the target texture when I examine it in the Nsight debugger is the clear colour.
At the moment, the shader is basically just a simple pass through shader:
vertex shader:
#version 430

in vec3 MSVertex;
in vec2 MSTexCoord;

out xferBlock
{
    vec3 VSVertex;
    vec2 VSTexCoord;
} outdata;

void main()
{
    outdata.VSVertex = MSVertex;
    outdata.VSTexCoord = MSTexCoord;
    gl_Position = vec4(MSVertex, 1.0);
}
fragment shader:
#version 430

layout (location = 0) uniform sampler2D colourMap;
layout (location = 0) out vec4 colour;

in xferBlock
{
    vec3 VSVertex;
    vec2 VSTexCoord;
} indata;

void main()
{
    colour = texture(colourMap, indata.VSTexCoord).rgba;
}
As you can see, there is nothing fancy about the shader.
The GL code is as follows:
//bind frame buffer for writing to texture #5
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glDrawBuffer(GL_COLOR_ATTACHMENT4); //5 textures total
glClear(GL_COLOR_BUFFER_BIT);
//activate shader
glUseProgram(second_stage_program);
//bind texture
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, fbo_buffers[2]); //third attachment: albedo
//bind and draw fs quad (two array buffers: vertices, and texture coordinates)
glBindVertexArray(quad_vao);
glDrawArrays(GL_TRIANGLES,0,6);
I'm trying to work out what is preventing rendering to the texture. I'm using a core context with OpenGL v4.3.
I've tried outputting a single white colour for all fragments, generating a colour from the texture coordinates (colour = vec4(indata.VSTexCoord, 1.0, 1.0);), and sampling the texture itself, as shown in the shader code, but nothing changes the resultant texture, which just shows the clear colour.
What am I doing wrong?
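
For reference, here is a minimal sketch of the extra state a full-screen pass like this usually depends on; second_stage_program, fbo_buffers, and quad_vao come from the listing above, while the uniform name lookup and filter settings are assumptions about setup code that isn't shown:

glUseProgram(second_stage_program);
//sampler uniforms default to unit 0, but setting it explicitly removes one variable
glUniform1i(glGetUniformLocation(second_stage_program, "colourMap"), 0);
//without mipmaps, the default GL_NEAREST_MIPMAP_LINEAR min filter leaves the
//texture incomplete, and every sample returns black
glBindTexture(GL_TEXTURE_2D, fbo_buffers[2]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
//leftover depth state from the g-buffer pass can reject the quad outright
glDisable(GL_DEPTH_TEST);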

Related

shader replaces texture of all meshes - opengl

Image:
http://i.imgur.com/rtlnKUO.png
Hi, I have 3 objects/modelinstances that are being rendered in my scene:
Model A
-Rectangle
-Blue box texture
Model B
-Rectangle
-Red box texture
Model C
-Square
-smiley face texture
But my code draws all of them with the texture of the "latest" drawn mesh, which is why all three of them appear with the same texture in the picture.
My fragment shader:
#ifdef GL_ES
precision mediump float;
#endif

varying vec2 v_texCoord0;
uniform sampler2D sampler;

void main() {
    gl_FragColor = vec4(v_texCoord0, 0.0, 1.0); // debug output, overwritten below
    vec4 tex = texture2D(sampler, v_texCoord0);
    gl_FragColor = tex;
}
So here are my questions about shaders:
What causes this?
Do I need to have 1 instance of a shader per 1 instance of a modelInstance?
Can a single shader render multiple meshes, where each mesh has a different texture?
If so, how do you get past the 32-texture limit for a shader?
edit:
Just found out that whenever I bind a texture, all of the models use that texture. So the most recently bound texture is the one that everything is drawn with.
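
That is exactly how OpenGL texture binding works: a draw call samples whatever texture is bound to the sampler's unit at the moment of the draw, so a single shader can render any number of meshes with different textures as long as you rebind between draws. A desktop-GL sketch of the idea (the question itself is libGDX/Java, so blueTexture, redTexture, smileyTexture, and drawMesh are hypothetical names):

glUseProgram(program); //one shader for every mesh
glUniform1i(glGetUniformLocation(program, "sampler"), 0);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, blueTexture); //Model A
drawMesh(modelA);
glBindTexture(GL_TEXTURE_2D, redTexture); //Model B
drawMesh(modelB);
glBindTexture(GL_TEXTURE_2D, smileyTexture); //Model C
drawMesh(modelC);

The 32-texture limit only applies to textures bound simultaneously for a single draw call; rebinding between draws lets one shader consume any number of textures over a frame.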

Sampling a GL_TEXTURE_3D in the Fragment Shader

I have a GL_TEXTURE_3D of size 16x16x6; it has been populated with floats in a compute shader, and I am trying to sample it in the fragment shader.
To make it available to the fragment shader I have this code just before the draw call:
//Set the active texture and bind it
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_3D, textureID);
//Add the sampler uniform
glUniform1i(glGetUniformLocation(textureID, "TextureSampler"), 0);
Then in the fragment shader itself:
uniform sampler3D TextureSampler;
Then to test that the texture has come through correctly:
vec4 test = texture(TextureSampler, ivec3(0,0,0));
color = vec4(1.0,0.0,0.0,1.0) * test.x; //or test.y, test.z, test.w
Based on this, it looks like every texel's value is (0.0, 0.0, 0.0, 1.0).
I can't work out why this is. I would expect the value at coords (0,0,0) to be either (16.0, 0.0, 0.0, 0.0) or (16.0, 16.0, 16.0, 16.0), based on what I set them to in the compute shader.
P.S. It might be worth noting that the values are being written to the texture correctly; I check in between the compute and fragment shader calls using glGetTexImage().
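
One thing worth double-checking in the snippet above: glGetUniformLocation takes a program object as its first argument, not a texture ID. A corrected sketch, assuming the linked program is stored in a variable named program (not shown in the question):

//set the active texture unit and bind the 3D texture to it
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_3D, textureID);
//the first argument must be the shader program, not the texture object
glUniform1i(glGetUniformLocation(program, "TextureSampler"), 0);

Note also that texture() expects normalized coordinates; to read the texel at integer coordinates (0,0,0) directly, texelFetch(TextureSampler, ivec3(0,0,0), 0) avoids both the implicit ivec3-to-vec3 conversion and filtering.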

Add radial gradient texture to each white part of another texture in shader

Recently, I read an article about a sun shader (XNA Sun Shader) and decided to implement it using OpenGL ES 2.0. But I ran into a problem with the shader:
I have two textures. One of them is a fire gradient texture:
The other is a texture whose every white part must be coloured by the first texture:
So I'm aiming for a result like the one below (never mind that the result texture is rendered on a sphere mesh):
I really hope that somebody knows how to implement this shader.
You can first sample the original texture; if the color is white, then sample the gradient texture.
uniform sampler2D Texture0; // original texture
uniform sampler2D Texture1; // gradient texture
varying vec2 texCoord;

void main(void)
{
    gl_FragColor = texture2D(Texture0, texCoord);
    // If the color in the original texture is white,
    // use the color from the gradient texture.
    if (gl_FragColor == vec4(1.0, 1.0, 1.0, 1.0)) {
        gl_FragColor = texture2D(Texture1, texCoord);
    }
}
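
One caveat with this approach: an exact comparison against pure white is fragile once the texture goes through filtering or compression, so a thresholded test is usually safer. The same idea with a tolerance (the 0.95 cutoff is an arbitrary assumption):

vec4 base = texture2D(Texture0, texCoord);
// treat "nearly white" as white so filtered edge texels still qualify
if (all(greaterThan(base.rgb, vec3(0.95)))) {
    base = texture2D(Texture1, texCoord);
}
gl_FragColor = base;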

deriving screen-space coordinates in glsl shader

I'm trying to write a simple application for baking a texture from a paint buffer. Right now I have a mesh, a mesh texture, and a paint texture. When I render the mesh, the mesh shader looks up the mesh texture and then, based on the fragment's screen position, looks up the paint texture value. I then composite the paint lookup with the mesh lookup.
Here's a screenshot with nothing in the paint buffer and just the mesh texture.
Here's a screenshot with something in the paint buffer composited over the mesh texture.
So that all works great, but I'd like to bake the paint texture into my mesh texture. Right now I send the mesh's UVs down as the position, with an ortho projection set to (0,1)x(0,1), so I'm actually doing everything in texture space. The mesh texture lookup also uses the position. The problem I'm having, though, is computing the screen-space position of the fragment under the original projection, to figure out where to sample the paint texture. I'm passing the bake shader my original camera projection matrices and the object transform, so the vertex shader can send the fragment shader the device-normalized position of the fragment (again, from my original camera projection) to do the lookup, but it's coming out wrong.
Here's what the bake texture is generating if I render half the output using the paint texture and screen position I've derived.
I would expect that block line to be right down the middle.
Am I calculating the screen position incorrectly in my vertex shader? Or am I going about this in a fundamentally wrong way?
// vertex shader
uniform mat4 orthoPV;
uniform mat4 cameraPV;
uniform mat4 objToWorld;

varying vec2 uv;
varying vec2 screenPos;

void main() {
    uv = gl_Vertex.xy;
    screenPos = 0.5 * (vec2(1,1) + (cameraPV * objToWorld * vec4(gl_MultiTexCoord0.xyz, 1)).xy);
    screenPos = gl_MultiTexCoord0.xy; // note: overwrites the computed value above
    gl_Position = orthoPV * gl_Vertex;
    gl_FrontColor = vec4(1,0,0,1);
}
// fragment shader
uniform sampler2D meshTexture;
uniform sampler2D paintTexture;

varying vec2 uv;
varying vec2 screenPos;

void main() {
    gl_FragColor = texture2D(meshTexture, uv);
    if (screenPos.x > 0.5)
        gl_FragColor = texture2D(paintTexture, uv);
}
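
One likely culprit in the vertex shader above: the result of cameraPV * objToWorld * position is in clip space, so its xy must be divided by w before the remap to [0,1], and the second assignment to screenPos throws the computed value away entirely. A sketch of the divide, keeping the question's convention that gl_MultiTexCoord0 carries the object-space position:

vec4 clip = cameraPV * objToWorld * vec4(gl_MultiTexCoord0.xyz, 1.0);
// perspective divide takes clip space to NDC [-1,1]; then remap to [0,1]
screenPos = 0.5 * (clip.xy / clip.w + vec2(1.0));

In the fragment shader, the paint lookup then presumably wants texture2D(paintTexture, screenPos) rather than uv.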

GLSL blending input texture with target FBO color attachment

I have a system which allows me to set different blending modes (those found in Photoshop) for each renderable object. Currently what I do is:
1. Render the renderable object into FBO B normally.
2. Attach the blending-mode shader program and FBO C, and blend the color attachment from FBO B with the color attachment from FBO A (FBO A contains the previous draws' final result).
3. Blit the result from FBO C into FBO A and proceed with the rest of the pipeline.
While this works fine, I would like to save some frame time that is currently wasted on this ping-pong. I know that by default it is not possible to read pixels from a texture at the same time as writing to them, so it is not possible to set a texture to read from and write to. Ideally, what I would like to do is, in stage 1, render the geometry right into FBO A, processing the blend between FBO A's color attachment texture and the input material texture.
To make it clear here is the example.
Let's assume all the previously rendered geometry is accumulated in FBO A, and each new object that needs to be blended is rendered into FBO B (just as I wrote above). Then in the blend pass (drawn into FBO C) the following shader is used (here it is darken blending):
uniform sampler2D bottomSampler;
uniform sampler2D topSampler;
uniform float Opacity;

// utility function that assumes NON-pre-multiplied RGB...
vec4 final_mix(
    vec4 NewColor,
    vec4 BaseColor,
    vec4 BlendColor
) {
    float A2 = BlendColor.a * Opacity;
    vec3 mixRGB = A2 * NewColor.rgb;
    mixRGB += ((1.0 - A2) * BaseColor.rgb);
    return vec4(mixRGB, BaseColor.a + BlendColor.a);
}

void main(void) // fragment
{
    vec4 botColor = texture2D(bottomSampler, gl_TexCoord[0].st);
    vec4 topColor = texture2D(topSampler, gl_TexCoord[0].st);
    vec4 comp = final_mix(min(botColor, topColor), botColor, topColor);
    gl_FragColor = comp;
}
Here:
uniform sampler2D bottomSampler; - FBO A's texture attachment.
uniform sampler2D topSampler; - FBO B's texture attachment.
I use only plane geometry objects.
The output from this shader is FBO C's texture attachment, which is blitted into FBO A for the next iteration.
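
For reference, the ping-pong step the question wants to eliminate looks roughly like this on the host side; fboA, fboC, width, and height are hypothetical names, and the attachments are assumed to have matching dimensions:

//copy the blend result (FBO C) back into the accumulation target (FBO A)
glBindFramebuffer(GL_READ_FRAMEBUFFER, fboC);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fboA);
glBlitFramebuffer(0, 0, width, height,
                  0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);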