I would like to pass two textures to my fragment shader. I succeed with two 2D textures, but not with one 1D and one 2D.
Here is a piece of the fragment shader code:
uniform sampler2D heights;
uniform sampler1D bboxes;
....
vec4 t2 = texture2D(heights, vec2(0.0, 0.0));
vec4 t1 = texture1D(bboxes, 0.0);
Here is a piece of the main program code (note the print 'here'):
sh = shaders.compileProgram(VERTEX_SHADER, FRAGMENT_SHADER)
shaders.glUseProgram(sh)
print 'here'
glEnable(GL_TEXTURE_2D)
glEnable(GL_TEXTURE_1D)
glActiveTexture(GL_TEXTURE0)
glPixelStorei(GL_UNPACK_ALIGNMENT,1)
t_h = glGenTextures(1)
glBindTexture(GL_TEXTURE_2D, t_h)
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB32F, 800, 600, 0, GL_RGB, GL_FLOAT, pts)
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP)
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP)
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST)
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST)
glActiveTexture(GL_TEXTURE0 + 2)
glPixelStorei(GL_UNPACK_ALIGNMENT,1)
t_bb = glGenTextures(1)
glBindTexture(GL_TEXTURE_1D, t_bb)
glTexImage1D(GL_TEXTURE_1D, 0, GL_RGB32F, len(bboxes), 0, GL_RGB, GL_FLOAT, bboxes)
glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_CLAMP)
glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_WRAP_T, GL_CLAMP)
glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST)
glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST)
loc_h = glGetUniformLocation(sh, "heights")
loc_bb = glGetUniformLocation(sh, "bboxes")
glUniform1i(loc_h, 0)
glUniform1i(loc_bb, 1)
The error I have is:
File "forth.py", line 282, in <module>
shader = setup()
File "forth.py", line 128, in setup
sh = shaders.compileProgram(VERTEX_SHADER, FRAGMENT_SHADER)
File "/Library/Python/2.7/site-packages/PyOpenGL-3.0.2-py2.7.egg/OpenGL/GL/shaders.py", line 196, in compileProgram
program.check_validate()
File "/Library/Python/2.7/site-packages/PyOpenGL-3.0.2-py2.7.egg/OpenGL/GL/shaders.py", line 108, in check_validate
glGetProgramInfoLog( self ),
RuntimeError: Validation failure (0): Validation Failed: Sampler error:
Samplers of different types use the same texture image unit.
- or -
A sampler's texture unit is out of range (greater than max allowed or negative).
And the 'here' is not printed.
Please, any ideas?
The first issue is that you bind the 1D texture to texture unit 2:
glActiveTexture(GL_TEXTURE0 + 2)
but then point the sampler uniform at unit 1:
glUniform1i(loc_bb, 1)
Make them consistent: either bind with GL_TEXTURE0 + 1, or pass 2 to glUniform1i.
The second problem is that the PyOpenGL wrapper validates the program right after linking. That can't work here, because at that point you haven't set the sampler uniforms yet, so both samplers still default to texture unit 0, which is exactly the "samplers of different types use the same texture image unit" conflict being reported. And if your GLSL version doesn't support layout qualifiers to set a default binding, validation at that stage can't possibly succeed.
If that's the case, the only way I see around this is to create your own helper function that compiles the program but skips the validation step:
from OpenGL.GL.shaders import ShaderProgram

def myCompileProgram(*shaders):
    program = glCreateProgram()
    for shader in shaders:
        glAttachShader(program, shader)
    program = ShaderProgram(program)
    glLinkProgram(program)
    program.check_linked()
    for shader in shaders:
        glDeleteShader(shader)
    return program
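You would then call myCompileProgram(VERTEX_SHADER, FRAGMENT_SHADER) in place of shaders.compileProgram. Alternatively, as hinted above, if your GLSL version is 4.20 or newer (or ARB_shading_language_420pack is available), you can give each sampler an explicit binding so that validation sees distinct texture units even before any uniforms are set; a minimal sketch of the declarations:
layout(binding = 0) uniform sampler2D heights;
layout(binding = 1) uniform sampler1D bboxes;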
I'm working on a deferred shading pipeline, and I store some IDs in a texture. This is the texture attached to my G-buffer:
// objectID, drawID, primitiveID
glGenTextures(1, &_gPixelIDsTex);
glBindTexture(GL_TEXTURE_2D, _gPixelIDsTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB32UI, _width, _height, 0, GL_RGB_INTEGER, GL_UNSIGNED_INT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
And this is how I write IDs into it:
// some other gbuffer textures...
layout (location = 4) out uvec3 gPixelIDs;
gPixelIDs = uvec3(objectID, drawID, gl_PrimitiveID + 1);
After the geometry pass, I can read from it using the following code:
struct PixelIDs {
    GLuint ObjectID, DrawID, PrimitiveID;
} pixel;
glBindFramebuffer(GL_READ_FRAMEBUFFER, _gBufferFBO);
glReadBuffer(GL_COLOR_ATTACHMENT4);
glReadPixels(x, y, 1, 1, GL_RGB_INTEGER, GL_UNSIGNED_INT, &pixel);
glReadBuffer(GL_NONE);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
So far, so good. The output is what i need.
But when I try to use this shader to display the object ID on the screen (just for debugging purposes),
uniform sampler2D gPixelIDsTex;
uint objID = uint(texture(gPixelIDsTex, fragData.TexCoords).r);
FragColor = vec4(objID, objID, objID, 1);
the result is 0 (I used Snipaste to read the pixel color), which means I can't use the data in my subsequent passes.
The other G-buffer textures, which use floating-point formats (e.g. vec4), are all fine, so I don't know why texture() always returns 0 for this one.
uniform sampler2D gPixelIDsTex;
Your texture is not a floating-point texture. It's an unsigned integer texture. So your sampler declaration needs to express that. Just as you write to a uvec3, so too must you read from a usampler2D.
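For example, a debug version of the lookup might look like this (just a sketch; dividing by 255.0 is an arbitrary choice to make small IDs visible as a grey value):
uniform usampler2D gPixelIDsTex;
...
uvec3 ids = texture(gPixelIDsTex, fragData.TexCoords).rgb;
FragColor = vec4(vec3(ids) / 255.0, 1.0);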
As the title suggests, I am rendering a scene onto a framebuffer and I am trying to extract the color histogram from that framebuffer inside a compute shader. I am totally new to using compute shaders and the lack of tutorials/examples/keywords has overwhelmed me.
In particular, I am struggling to properly set up the input and output images of the compute shader. Here's what I have:
computeShaderProgram = loadComputeShader("test.computeshader");
int bin_size = 1;
int num_bins = 256 / bin_size;
tex_w = 1024;
tex_h = 768;
GLuint renderFBO, renderTexture;
GLuint tex_output;
//defining output image that will contain the histogram
glGenTextures(1, &tex_output);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex_output);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R16, num_bins, 3, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
glBindImageTexture(0, tex_output, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_R16UI);
//defining the framebuffer the scene will be rendered on
glGenFramebuffers(1, &renderFBO);
glGenTextures(1, &renderTexture);
glBindTexture(GL_TEXTURE_2D, renderTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, W_WIDTH, W_HEIGHT, 0, GL_RGBA, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glBindFramebuffer(GL_FRAMEBUFFER, renderFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, renderTexture, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
In the main loop I draw a simple square onto the framebuffer and attempt to pass the framebuffer as input image to the compute shader:
glBindFramebuffer(GL_FRAMEBUFFER, renderFBO);
glDrawArrays(GL_TRIANGLES, 0, 6);
glUseProgram(computeShaderProgram);
//use as input the drawn framebuffer
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, renderFBO);
//use as output a pre-defined texture image
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, tex_output);
//run compute shader
glDispatchCompute((GLuint)tex_w, (GLuint)tex_h, 1);
GLuint *outBuffer = new GLuint[num_bins * 3];
glGetTexImage(GL_TEXTURE_2D, 0, GL_R16, GL_UNSIGNED_INT, outBuffer);
Finally, inside the compute shader I have:
#version 450
layout(local_size_x = 1, local_size_y = 1) in;
layout(rgba32f, binding = 0) uniform readonly image2D img_input;
layout(r16ui, binding = 1) uniform writeonly image2D img_output;
void main() {
    // grabbing pixel value from input image
    vec4 pixel_color = imageLoad(img_input, ivec2(gl_GlobalInvocationID.xy));
    vec3 rgb = round(pixel_color.rgb * 255);
    ivec2 r = ivec2(rgb.r, 0);
    ivec2 g = ivec2(rgb.g, 1);
    ivec2 b = ivec2(rgb.b, 2);
    imageAtomicAdd(img_output, r, 1);
    imageAtomicAdd(img_output, g, 1);
    imageAtomicAdd(img_output, b, 1);
}
I defined the output as a 2D texture image of size N x 3, where N is the number of bins and the 3 accounts for the individual color components. Inside the shader I grab a pixel value from the input image, scale it into the 0-255 range, and increment the appropriate location in the histogram.
I cannot verify that this works as intended because the compute shader produces compilation errors, namely:
can't apply layout(r16ui) to image type "image2D"
unable to find compatible overloaded function "imageAtomicAdd(struct image2D1x16_bindless, ivec2, int)"
EDIT: after changing to r32ui the previous error now becomes: qualified actual parameter #1 cannot be converted to less qualified parameter ("im")
How can I properly configure my compute shader?
Is my process correct (at least in theory) and if not, why?
As for your questions:
can't apply layout(r16ui) to image type "image2D"
r16ui can only be applied to unsigned image types, thus you should use uimage2D.
unable to find compatible overloaded function ...
The spec explicitly says that atomic operations can only be applied to 32-bit types (r32i, r32ui, or r32f). Thus you must use a 32-bit texture instead.
You have other issues in your code too.
glBindTexture(GL_TEXTURE_2D, renderFBO);
You cannot bind an FBO as a texture; framebuffers and textures are different kinds of objects. You should instead bind the texture that backs the FBO (renderTexture).
Also, you intend to bind a texture to an image uniform rather than a sampler, thus you must use glBindImageTexture or glBindImageTextures rather than glBindTexture. With the latter you can bind both images in one call:
GLuint images[] = { renderTexture, tex_output };
glBindImageTextures(0, 2, images);
Your img_output uniform is marked as writeonly. However the atomic image functions expect an unqualified uniform. So remove the writeonly.
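Putting those fixes together, the image declarations and the atomic add would look roughly like this (a sketch only; the histogram texture itself would then also need to be allocated with a matching GL_R32UI internal format and GL_RED_INTEGER pixel format on the C++ side):
layout(rgba32f, binding = 0) uniform readonly image2D img_input;
layout(r32ui, binding = 1) uniform uimage2D img_output; // 32-bit unsigned, no writeonly
...
imageAtomicAdd(img_output, ivec2(rgb.r, 0), 1u);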
You can find all the above information in the OpenGL and GLSL specs, freely accessible from the OpenGL registry.
[Edit2]: Nothing wrong with this code. My shader class didn't load the uniforms correctly.
[Edit]: It seems like I can only use GL_TEXTURE0 / texture unit 0, for some reason.
What I want is to use both a 2D texture and a 3D texture, but only the texture on texture unit 0 (GL_TEXTURE0) works. And if I use both of them at the same time in the shader, I can't see anything drawn by that shader.
This is the fragment shader code I want to use:
#version 330 core
// Interpolated values from the vertex shaders
in vec3 fragmentColor;
in vec3 fragmentPosition;
// Output data
out vec3 color;
uniform sampler3D textureSampler3D;
uniform sampler2D textureSampler2D;
float getInputLight(vec3 pos);
void main() {
    // Get the nearest corner
    vec3 cornerPosition = vec3(round(fragmentPosition.x), round(fragmentPosition.y), round(fragmentPosition.z));
    float light = getInputLight(cornerPosition);
    color = (0.5 + 16 * light) * fragmentColor;
}

float getInputLight(vec3 pos) {
    if (pos.z <= 0.f)
        return texture2D(textureSampler2D, vec2(pos.x/16, pos.y/16)).r;
    return texture(textureSampler3D, vec3(pos.x/16, pos.y/16, pos.z/16)).r;
}
But with that I can't see anything drawn by that shader. If I use this version, I can see what the 2D texture does:
float getInputLight(vec3 pos) {
    if (pos.z <= 0.f)
        return texture(textureSampler2D, vec2(pos.x/16, pos.y/16)).r;
    return 0.f;
}
If I use this version, it works perfectly, except that I then only have the 3D texture:
float getInputLight(vec3 pos) {
    return texture(textureSampler3D, pos/16).r;
}
That means I can only use one of the textures in the shader. When I say I use the 3D texture, I mean I change getInputLight to the version that only reads the 3D texture; I do the same thing for the 2D texture by swapping in the other version.
This is the C++ code I use to load the 3D texture:
glGenTextures(1, &m_3dTextureBuffer);
glBindTexture(GL_TEXTURE_3D, m_3dTextureBuffer);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage3D(GL_TEXTURE_3D, 0, GL_R8, voxelMatrixWidth, voxelMatrixHeight, voxelMatrixDepth, 0, GL_RED, GL_UNSIGNED_BYTE, (GLvoid*)m_lightData);
This is the code I use to load the 2D texture:
GLuint buffer;
unsigned char *voidData = new unsigned char[256];
// With this I can see if the shader has right data.
for (int i = 0; i < 256; ++i)
voidData[i] = i%16;
glGenTextures(1, &buffer);
glBindTexture(GL_TEXTURE_2D, buffer);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, 16, 16, 0, GL_RED, GL_UNSIGNED_BYTE, (GLvoid*)voidData);
m_3dTextureBuffer = buffer;
This is the code I run before it draws the vertex buffer:
GLint texture3dId = shader->getUniform(1);
GLint texture2dId = shader->getUniform(2);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_3D, m_3dTextureBuffer);
glUniform1i(texture3dId, 0);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, m_2dTextureBuffer);
//glUniform1i(texture3dId,1);
If I use texture unit 0 (GL_TEXTURE0) for both the 2D texture and the 3D texture, I get the data/pixels I expect.
This is a picture of it and it is what I expect:
http://oi57.tinypic.com/3wrbd.jpg
If I use different units, I get this random data, and it sometimes flashes (every pixel turns black/0 for a frame). The random data doesn't change either. Looking in some directions it doesn't flash, and in some directions it flashes faster than in others.
http://oi58.tinypic.com/2ltiqoo.jpg
When I swap the texture units of the 3D texture and the 2D texture, the same thing happens, except that the 2D texture works and the 3D texture fails.
Do you have any idea what it could be?
There was nothing wrong with this code. My shader class didn't load the uniforms correctly.
I'm trying to sample a depth texture in a compute shader and copy it into another texture.
The problem is that I don't get correct values when I read from the depth texture:
I've checked whether the initial values of the depth texture are correct (with gDEBugger), and they are. So it's the imageLoad GLSL function that retrieves wrong values.
This is my GLSL Compute shader:
layout (binding=0, r32f) readonly uniform image2D depthBuffer;
layout (binding=1, rgba8) writeonly uniform image2D colorBuffer;
// we use 16 * 16 threads groups
layout (local_size_x = 16, local_size_y = 16) in;
void main()
{
    ivec2 position = ivec2(gl_GlobalInvocationID.xy);
    // Sampling from the depth texture
    vec4 depthSample = imageLoad(depthBuffer, position);
    // We linearize the depth value
    float f = 1000.0;
    float n = 0.1;
    float z = (2 * n) / (f + n - depthSample.r * (f - n));
    // even if I call memoryBarrier(), barrier() or memoryBarrierShared() here, I still have the same bug
    // and finally, we try to create a grayscale image of the depth values
    imageStore(colorBuffer, position, vec4(z, z, z, 1));
}
and this is how I'm creating the depth texture and the color texture:
// generate the depth texture
glGenTextures(1, &_depthTexture);
glBindTexture(GL_TEXTURE_2D, _depthTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, wDimensions.x, wDimensions.y, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
// generate the color texture
glGenTextures(1, &_colorTexture);
glBindTexture(GL_TEXTURE_2D, _colorTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, wDimensions.x, wDimensions.y, 0, GL_RGBA, GL_FLOAT, NULL);
I fill the depth texture with depth values (bind it to a frame buffer and render the scene) and then I call my compute shader this way:
_computeShader.use();
// try to synchronize with the previous pass
glMemoryBarrier(GL_ALL_BARRIER_BITS);
// even if i call glFinish() here, the result is the same
glBindImageTexture(0, _depthTexture, 0, GL_FALSE, 0, GL_READ_ONLY, GL_R32F);
glBindImageTexture(1, _colorTexture, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA8);
glDispatchCompute((wDimensions.x + WORK_GROUP_SIZE - 1) / WORK_GROUP_SIZE,
(wDimensions.y + WORK_GROUP_SIZE - 1) / WORK_GROUP_SIZE, 1); // we divide the compute into groups of 16 threads
// try to synchronize with the next pass
glMemoryBarrier(GL_ALL_BARRIER_BITS);
with:
wDimensions = size of the context (and of the framebuffer)
WORK_GROUP_SIZE = 16
Do you have any idea of why I don't get valid depth values?
EDIT:
This is what the color texture looks like when I render a sphere:
and it seems that glClear(GL_DEPTH_BUFFER_BIT) doesn't do anything:
Even if I call it just before the glDispatchCompute() I still have the same image...
How can this be possible?
Actually, I discovered that you cannot send a depth texture as an image to a compute shader, even with the readonly keyword.
So I've replaced:
glBindImageTexture(0, _depthTexture, 0, GL_FALSE, 0, GL_READ_ONLY, GL_R32F);
by:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, _depthTexture);
and in my compute shader:
layout (binding=0, r32f) readonly uniform image2D depthBuffer;
by:
layout (binding = 0) uniform sampler2D depthBuffer;
and to sample it I just write:
ivec2 position = ivec2(gl_GlobalInvocationID.xy);
vec2 screenNormalized = vec2(position) / vec2(ctxSize); // ctxSize is the size of the depth and color textures
vec4 depthSample = texture2D(depthBuffer, screenNormalized);
and it works very well like this.
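(A small aside that is not part of the original fix: since position is already an integer texel coordinate, texelFetch would let you skip the normalized-coordinate computation, assuming you want mip level 0:)
vec4 depthSample = texelFetch(depthBuffer, position, 0);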
I've been stuck with this problem for about four days now. I'm trying to get my geometry to rendered into an FBO (G-buffer) with three textures (albedo, normal, depth). So far, I've 'somewhat' implemented MRT functionality, but when I use gDEBugger to inspect the textures, they just appear black. No matter what I change, they result in solid black. The actual values outputted are correct, I checked by disabling MRT to make the fragment shader output to back buffer. The textures are being initialized properly, gDEBugger properly displays the parameters I have put for them. But they all just have a solid black (0, 0, 0, 255) fill.
There's hardly any elaborate information on MRTs for GLSL 3.30. I've relied entirely on answered questions here, along with the OpenGL/GLSL docs and tutorials across the web (outdated, but I updated the code). I've probably spent a full day looking for a solution for this problem on Google. If there's something wrong with the ordering of the code, or syntax, please point it out. I don't even know if this implementation is correct anymore...
I'm using Visual C++ 2010, OpenGL 3.30 and GLSL 3.30 (as said in the title). For my libraries, GLFW 3.0 is being used for the windows, input, and OpenGL context, and GLEW 1.10.0 for extensions.
Keep in mind that all of this code is taken from my wrapper class. The ordering of the code is how it is all run at runtime (in other words, it's like as if I didn't have a wrapper class, and all of the code was in main ()).
Initialization Stage
// Initialize textures
glGenTextures (3, tex_ids);
glEnable (GL_TEXTURE_2D);
glBindTexture (GL_TEXTURE_2D, tex_ids[0]); // Diffuse
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D (GL_TEXTURE_2D, 0, GL_RGB8, res.x, res.y, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
glBindTexture (GL_TEXTURE_2D, tex_ids[1]); // Normal
glTexParameterf (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameterf (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D (GL_TEXTURE_2D, 0, GL_RGB8, res.x, res.y, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
glBindTexture (GL_TEXTURE_2D, tex_ids[2]); // Depth
glTexParameterf (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameterf (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D (GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32, res.x, res.y, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glBindTexture (GL_TEXTURE_2D, 0);
glDisable (GL_TEXTURE_2D);
// Initialize FBO
glEnable (GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, tex_ids[0]);
glFramebufferTexture2D ( GL_FRAMEBUFFER,
GL_COLOR_ATTACHMENT0,
GL_TEXTURE_2D,
tex_ids[0],
0 ); // diffuse
glBindTexture(GL_TEXTURE_2D, tex_ids[1]);
glFramebufferTexture2D ( GL_FRAMEBUFFER,
GL_COLOR_ATTACHMENT1,
GL_TEXTURE_2D,
tex_ids[1],
0 ); // normal
glBindTexture(GL_TEXTURE_2D, tex_ids[2]);
glFramebufferTexture2D ( GL_FRAMEBUFFER,
GL_DEPTH_ATTACHMENT,
GL_TEXTURE_2D,
tex_ids[2], 0 ); // depth
glBindFramebuffer (GL_FRAMEBUFFER, 0);
glDisable (GL_TEXTURE_2D);
// Initialize shaders
// Snipped out irrelevant code relating to getting shader source & compiling shaders
glBindFragDataLocation (renderer_1prog, GL_COLOR_ATTACHMENT0, "diffuse_out");
glBindFragDataLocation (renderer_1prog, GL_COLOR_ATTACHMENT1, "normal_out");
glBindFragDataLocation (renderer_1prog, GL_DEPTH_ATTACHMENT, "depth_out");
// More snipped out code relating to linking programs and finalizing
Draw Stage - called on every frame
// Bind everything
glUseProgram (renderer_1prog);
glBindFramebuffer (GL_DRAW_FRAMEBUFFER, fbo_id);
GLenum targ [3] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_DEPTH_ATTACHMENT };
glDrawBuffers (3, targ);
// Draw mesh
glEnable (GL_CULL_FACE);
glEnable (GL_DEPTH_TEST);
teshmesh.draw ();
// Unbind fbo
glDisable (GL_CULL_FACE);
glDisable (GL_DEPTH_TEST);
glBindFramebuffer (GL_READ_FRAMEBUFFER, 0);
glBindFramebuffer (GL_DRAW_FRAMEBUFFER, 0);
Vertex Shader
#version 330
layout(location = 0)in vec4 v;
layout(location = 1)in vec3 c;
layout(location = 2)in vec3 n;
out vec4 pos;
out vec3 col;
out vec3 nrm;
uniform mat4 projection;
uniform mat4 view;
uniform mat4 world;
void main () {
    gl_Position = projection * view * world * v;
    pos = view * world * v;
    pos.z = -pos.z / 500.0;
    col = c.xyz;
    nrm = n;
}
Fragment Shader
#version 330
in vec3 col;
in vec3 nrm;
in vec4 pos;
layout(location = 0) out vec3 diffuse_out;
layout(location = 1) out vec3 normal_out;
layout(location = 2) out vec3 depth_out;
out vec3 o;
void main () {
    diffuse_out = col;
    normal_out = (nrm / 2.0) + 0.5;
    depth_out = vec3(pos.z, pos.z, pos.z);
}
There are a few problems here. Starting with the smallest:
glBindFragDataLocation (renderer_1prog, GL_COLOR_ATTACHMENT0, "diffuse_out");
glBindFragDataLocation (renderer_1prog, GL_COLOR_ATTACHMENT1, "normal_out");
glBindFragDataLocation (renderer_1prog, GL_DEPTH_ATTACHMENT, "depth_out");
These are pointless. You used layout(location) syntax in your shader to specify this, and that takes priority over OpenGL-provided location settings.
Also, these are wrong. You don't put the FBO buffer attachment name into the location; you put an index into the location. So even if you didn't use layout(location), this is simply incorrect. glBindFragDataLocation will emit an OpenGL error, since the location will most assuredly be larger than GL_MAX_DRAW_BUFFERS.
Considering how many OpenGL errors your code should emit, I'm rather surprised that your use of gDEBugger didn't tell you about any of these.
glTexImage2D (GL_TEXTURE_2D, 0, GL_RGB8, res.x, res.y, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
GL_RGB8 is not one of the required image formats for render targets. Therefore, the implementation is not required to support it; it may but it doesn't have to. And since you never bothered to check the completeness of the FBO (FYI: you should always do that), you didn't test that this combination of formats is valid.
Never render to a 3-component image. Pick 4, 2, or 1 instead.
GLenum targ [3] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_DEPTH_ATTACHMENT };
glDrawBuffers (3, targ);
This is probably your main problem: your glDrawBuffers call fails. glDrawBuffers sets the color buffer outputs. The depth buffer is not a color buffer. There's only one depth buffer, so there's no need to set it.
To write to the depth buffer... well, you shouldn't be writing a user-calculated value to the depth buffer. Just let the regular depth buffer writing handle it. But if you want to (and let me remind you again, you do not), you write to gl_FragDepth. That's what it's for.
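For reference, the corrected fragment shader outputs might look roughly like this (a sketch, assuming you also switch the color attachments to four-component formats as suggested above):
#version 330
in vec3 col;
in vec3 nrm;
in vec4 pos;
layout(location = 0) out vec4 diffuse_out;
layout(location = 1) out vec4 normal_out;
void main () {
    diffuse_out = vec4(col, 1.0);
    normal_out = vec4((nrm / 2.0) + 0.5, 1.0);
    // Only if you really need a custom depth value (normally you don't):
    // gl_FragDepth = pos.z;
}
The glDrawBuffers call would then list only the two color attachments.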