I have several meshes (~100) of the same complex object in various poses, with slightly different rotation and translation parameters. The object consists of multiple rigid components, like arms and legs.
The goal is to generate a single grayscale picture showing the accumulation of these poses for a particular body part. The resulting heat-map gives an idea of the probable pixel locations for that body part, where white represents maximum probability and black minimum (the lighter, the higher the probability). Say I'm interested in the accumulation of the legs: if many leg pose samples fall on the same (x,y) pixel location, then I expect to see light pixels there. Naturally, the leg poses will not overlap exactly, so I also expect to see a smooth transition to low-probability black around the leg silhouette boundaries.
To solve this task I decided to render into OpenGL frame buffers, since that is known to be computationally cheap and I need to run this accumulation procedure very often.
What I did is the following. I accumulate the renderings of the body part I'm interested in (let's keep the leg example) into the same frame buffer, 'fboLegsId', using GL_BLEND. To discriminate between the legs and the rest of the body, I texture the mesh with two colors:
rgba(gray, gray, gray, 255) for the legs, where gray = 255 / number of samples = 255 / 100
rgba(0, 0, 0, 0) for the rest of the body
Then I accumulate the 100 renderings (which for the legs should sum up to white = 255) as follows:
glBindFramebuffer(GL_FRAMEBUFFER, fboLegsId);
glClearColor(0, 0, 0, 255);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glBlendFunc(GL_ONE, GL_ONE);
glEnable(GL_BLEND);

for each sample s = 0...100
    mesh.render(pose s);
end

glReadPixels(...)
This performs almost as I expected: I do obtain the smooth grayscale heat-map I wanted. However, there are self-occlusion problems,
which arise even when I use only one sample. Say that, for a single pose sample, one of the arms moves in front of the leg, partially occluding it. I expect the contribution of the occluded leg parts to be cancelled during rendering. Instead, the arm renders as if it were invisible/translucent, letting the pixels behind it show fully. This leads to wrong renderings and therefore wrong accumulations.
If I simply disable blending, I see the correct self-occlusion-aware result. So, apparently, the problem lies somewhere in the blending.
I also tried different blending functions, and so far the following one produced the closest results to a self-occlusion-aware accumulation:
glBlendFunc(GL_ONE, GL_SRC_ALPHA);
However, there is still a problem: a single sample now looks correct, but two or more accumulated samples show overlapping artefacts. It looks like each accumulation replaces the current buffer pixel whenever the pixel is not part of the legs. And if the leg is found many times in front of, say, the arm, then it becomes darker and darker instead of lighter and lighter.
I tried to fix this by clearing the depth buffer at each rendering iteration and enabling depth testing, but this did not solve the problem.
I feel like there is either something conceptually wrong in my approach, or a small mistake somewhere.
Edit: I've tried a different approach, based on the suggestions, which performs as expected. I now work with two frame buffers. The first (SingleFBO) is used to render a single sample with correct self-occlusion handling. The second (AccFBO) is used to accumulate the 2D textures from the first buffer using blending. Please check my code below:
// clear the accumulation buffer
glBindFramebuffer(GL_FRAMEBUFFER, AccFBO);
glClearColor(0.f, 0.f, 0.f, 1.f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

for each sample s = 0...100
{
    // set rendering destination to SingleFBO
    glBindFramebuffer(GL_FRAMEBUFFER, SingleFBO);
    glClearColor(0.f, 0.f, 0.f, 1.f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glEnable(GL_DEPTH_TEST);
    glDisable(GL_LIGHTING);

    mesh->render(pose s);

    glDisable(GL_DEPTH_TEST);
    glEnable(GL_LIGHTING);

    // set rendering destination to the accumulation buffer
    glBindFramebuffer(GL_FRAMEBUFFER, AccFBO);
    glClear(GL_DEPTH_BUFFER_BIT);
    glBlendFunc(GL_ONE, GL_ONE);
    glEnable(GL_BLEND);

    // draw the texture from the previous buffer onto a full-screen quad
    glBindTexture(GL_TEXTURE_2D, textureLeg);
    glEnable(GL_TEXTURE_2D);
    glDisable(GL_DEPTH_TEST);
    glDisable(GL_LIGHTING);
    glDepthMask(GL_FALSE);

    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();

    glBegin(GL_QUADS);
    {
        glTexCoord2f(0, 0); glVertex2f(-1.0f, -1.0f);
        glTexCoord2f(1, 0); glVertex2f( 1.0f, -1.0f);
        glTexCoord2f(1, 1); glVertex2f( 1.0f,  1.0f);
        glTexCoord2f(0, 1); glVertex2f(-1.0f,  1.0f);
    }
    glEnd();

    glPopMatrix();              // restore the modelview matrix
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();              // restore the projection matrix
    glMatrixMode(GL_MODELVIEW);

    // restore state
    glDisable(GL_TEXTURE_2D);
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_LIGHTING);
    glDepthMask(GL_TRUE);
    glDisable(GL_BLEND);
}

glBindFramebuffer(GL_FRAMEBUFFER, AccFBO);
glReadPixels(...)
Please also check my (standard) code for initializing SingleFBO (AccFBO is initialized similarly):
// create a texture object
glGenTextures(1, &textureLeg);
glBindTexture(GL_TEXTURE_2D, textureLeg);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
             GL_RGB, GL_UNSIGNED_BYTE, 0);
glBindTexture(GL_TEXTURE_2D, 0);

// create a renderbuffer object to store depth info
glGenRenderbuffers(1, &rboLeg);
glBindRenderbuffer(GL_RENDERBUFFER, rboLeg);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, width, height);
glBindRenderbuffer(GL_RENDERBUFFER, 0);

// create a framebuffer object
glGenFramebuffers(1, &SingleFBO);
glBindFramebuffer(GL_FRAMEBUFFER, SingleFBO);

// attach the texture to the FBO color attachment point
glFramebufferTexture2D(GL_FRAMEBUFFER,        // 1. fbo target: GL_FRAMEBUFFER
                       GL_COLOR_ATTACHMENT0,  // 2. attachment point
                       GL_TEXTURE_2D,         // 3. tex target: GL_TEXTURE_2D
                       textureLeg,            // 4. tex ID
                       0);                    // 5. mipmap level: 0 (base)

// attach the renderbuffer to the depth attachment point
glFramebufferRenderbuffer(GL_FRAMEBUFFER,      // 1. fbo target: GL_FRAMEBUFFER
                          GL_DEPTH_ATTACHMENT, // 2. attachment point
                          GL_RENDERBUFFER,     // 3. rbo target: GL_RENDERBUFFER
                          rboLeg);             // 4. rbo ID

// check FBO status
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE)
    error(...);

// switch back to the window-system-provided framebuffer
glBindFramebuffer(GL_FRAMEBUFFER, 0);
Here's a different approach:
Create two frame buffers: normal and acc. The normal frame buffer should have a texture storage (attached with glFramebufferTexture2D).
Here's the basic algorithm:
1. Clear acc to black
2. Bind normal, clear it to black, and render the scene with the legs white and all other parts black
3. Bind acc, and render a full-screen rectangle textured with normal's texture, with blend mode GL_ONE, GL_ONE
4. Advance the animation, and if it hasn't finished, go to step 2
5. You have the result in acc
So, basically, acc will contain the individual frames summed.
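A minimal sketch of that loop, where renderSceneWhiteLegs and drawFullscreenQuad are hypothetical helpers standing in for the parts described in steps 2 and 3:

// 1. clear acc to black
glBindFramebuffer(GL_FRAMEBUFFER, accFbo);
glClearColor(0.f, 0.f, 0.f, 1.f);
glClear(GL_COLOR_BUFFER_BIT);

for (int s = 0; s < numSamples; ++s) {
    // 2. render one pose with depth testing, so self-occlusion is resolved here
    glBindFramebuffer(GL_FRAMEBUFFER, normalFbo);
    glClearColor(0.f, 0.f, 0.f, 1.f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glEnable(GL_DEPTH_TEST);
    renderSceneWhiteLegs(s);            // legs white, everything else black

    // 3. additively blend the resolved frame into the accumulator
    glBindFramebuffer(GL_FRAMEBUFFER, accFbo);
    glDisable(GL_DEPTH_TEST);
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);        // acc += frame
    drawFullscreenQuad(normalTexture);  // color texture attached to normalFbo
    glDisable(GL_BLEND);
}
// 5. the result is in accFbo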
I noticed a big problem in my OpenGL texture rendering:
supposedly transparent pixels are rendered as solid white. According to most solutions to similar issues discussed on Stack Overflow, I need to enable blending and set the proper blend function, but I have already set the necessary GL state and am positive that the textures are loaded correctly, as far as I can tell. My texture load function is below:
GLboolean GL_texture_load(Texture* texture_id, const char* const path,
                          const GLboolean alpha, const GLint param_edge_x,
                          const GLint param_edge_y)
{
    // load image
    SDL_Surface* img = nullptr;
    if (!(img = IMG_Load(path))) {
        fprintf(stderr, "SDL_image could not be loaded %s, SDL_image Error: %s\n",
                path, IMG_GetError());
        return GL_FALSE;
    }

    glBindTexture(GL_TEXTURE_2D, *texture_id);

    // image assignment
    GLuint format = (alpha) ? GL_RGBA : GL_RGB;
    glTexImage2D(GL_TEXTURE_2D, 0, format, img->w, img->h, 0, format, GL_UNSIGNED_BYTE, img->pixels);

    // wrapping behavior
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, param_edge_x);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, param_edge_y);

    // texture filtering
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    glBindTexture(GL_TEXTURE_2D, 0);

    // free the surface
    SDL_FreeSurface(img);

    return GL_TRUE;
}
I use Adobe Photoshop to export "for the web" 24-bit + transparency .png files (72 pixels/inch, 6400 x 720). I am not sure how to set the color mode (8, 16, 32), but this might have something to do with the issue. I also use the default sRGB color profile; at one point I tried removing the color profile, but that didn't change anything.
No matter what, a png exported from Photoshop displays as solid white over transparent pixels.
If I create an image in e.g. Gimp, I have correct transparency. Importing the Adobe .psd or .png does not seem to work, and in any case I prefer to use Photoshop for editing purposes.
Has anyone experienced this issue? I imagine that Photoshop must add some strange metadata or I am not using the correct color modes--or both.
(I am concerned that this goes beyond the scope of Stack Overflow, but my issue intersects image editing and programming. Regardless, please let me know if this is not the right place.)
EDIT:
In both Photoshop and Gimp I created a test case: 8 pixels (red, green, transparent, blue) clockwise.
In Photoshop, the transparent square is read as 1, 1, 1, 0 and displays as white.
In Gimp, the transparent square is 0, 0, 0, 0.
I also checked my fragment shader to see whether transparency works at all. Varying the alpha over time does increase transparency, so the alpha isn't outright ignored. For some reason 1, 1, 1, 0 counts as solid.
In addition, setting the background color to black with glClearColor seems to prevent the alpha from increasing transparency.
I don't know how to explain some of these behaviors, but something seems off. 0 alpha should be the same regardless of color, shouldn't it?
(Note that I render a few shapes on top of each other, but I've tried just rendering one for testing purposes.)
The best I can do is post more of my setup code (with bits omitted):
// vertex array and buffers setup

glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

glViewport(0, 0, SCREEN_WIDTH, SCREEN_HEIGHT);

glEnable(GL_DEPTH_TEST);
glEnable(GL_BLEND);
// I think that the blend function may be wrong (GL_ONE that is).
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glDepthRange(0, 1);
glDepthFunc(GL_LEQUAL);

Texture tex0;
// same function as above, but generates one texture id for me
if (GL_texture_gen_and_load_1(&tex0, "./textures/sq2.png", GL_TRUE,
                              GL_CLAMP_TO_EDGE, GL_CLAMP_TO_EDGE) == GL_FALSE) {
    return EXIT_FAILURE;
}

glUseProgram(shader_2d);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex0);
glUniform1i(glGetUniformLocation(shader_2d, "tex0"), 0);

bool active = true;
while (active) {
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // uniforms, game logic, etc.

    glDrawElements(GL_TRIANGLES, tri_data.i_count, GL_UNSIGNED_INT, (void*)0);
}
I don't know how to explain some of these behaviors, but something seems off. 0 alpha should be the same regardless of color, shouldn't it?
If you want to get an identical result for an alpha channel of 0.0, independent of the red, green, and blue channels, then you have to change the blend function. See glBlendFunc.
Use:
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
This causes the red, green, and blue channels to be multiplied by the alpha channel.
If the alpha channel is 0.0, the resulting RGB color is (0, 0, 0).
If the alpha channel is 1.0, the RGB color channels are kept unchanged.
See further Alpha Compositing, OpenGL Blending and Premultiplied Alpha.
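As a concrete check against the test case above, this is the per-channel arithmetic the blend stage performs (a sketch; the numbers come from the question's Photoshop texel):

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// per channel: out = src * src.a + dst * (1.0 - src.a)
// Photoshop's transparent texel (1, 1, 1, 0):
//   out = 1*0 + dst*(1 - 0) = dst       -> the background shows through
// with the original glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA):
//   out = 1 + dst*(1 - 0) = 1 + dst     -> clamps toward solid white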
Here is a description of the problem:
I want to render some VBO shapes (rectangles, circles, etc) to an off screen framebuffer object. This could be any arbitrary shape.
Then I want to draw the result on a simple sprite surface as a texture, but not on the entire screen itself.
I can't seem to get this to work correctly.
When I run the code, I see the shapes being drawn all over the screen, but not on the sprite in the middle, which remains blank. Even though I seem to have set up the FBO with one color texture, everything still renders only to the screen, even when I bind the FBO in the current context.
What I want to achieve is these shapes being drawn to an off-screen texture (using an FBO, obviously) and then rendered on the surface of a sprite (or a cube, or whatever) drawn somewhere on the screen. Yet whatever I draw appears on the screen itself.
The tex(tex_object_ID); function is just a short-hand wrapper for OpenGL's standard texture bind. It selects a texture into the current rendering context.
No matter what I try I get this result: The sprite is blank, but all these shapes should appear there, not on the main screen. (Didn't I bind rendering to FBO? Why is it still rendering on screen?)
I think it is just a logistics of setting up FBO in the right order that I am missing. Can anyone tell what's wrong with my code?
Not sure why the background is red, as I clear it after I select the FBO. It is the sprite that should get the red background & shapes drawn on it.
/*-- Initialization --*/

GLuint texture = 0;
GLuint Framebuffer = 0;

GLuint GenerateFrameBuffer(int dimension)
{
    glEnable(GL_TEXTURE_2D);
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, dimension, dimension, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

    glGenFramebuffers(1, &Framebuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, Framebuffer);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texture, 0);
    glDrawBuffer(GL_COLOR);
    glReadBuffer(GL_COLOR);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        console_log("GL_FRAMEBUFFER != GL_FRAMEBUFFER_COMPLETE\n");

    return texture;
}

// Store framebuffer texture (should I store the texture here or the Framebuffer object?)
GLuint FramebufferHandle = GenerateFrameBuffer(256);
Standard OpenGL initialization code follows: memory is allocated, VBOs are created and bound, etc. This works correctly and there are no errors in initialization. I can render VBOs, polygons, textured polygons, lines, etc., to the standard double buffer with success.
Next, in my render loop I do the following:
// Possible problem?
// Should FramebufferHandle be passed here?
// I tried "texture" and "Framebuffer" as well, to no effect:
glBindFramebuffer(GL_FRAMEBUFFER, FramebufferHandle);

// Correct projection, just calculates the view based on current zoom
Projection = setOrthoFrustum(-config.zoomed_width/2, config.zoomed_width/2,
                             -config.zoomed_height/2, config.zoomed_height/2, 0, 100);
View.identity();
Model.identity();

// Mini shader, 100% *guaranteed* to work, there are no errors in it (works normally on the screen)
shaderProgramMini.use();

// Clear frame buffer with blue color
glClearColor(0.0f, 0.0f, 1.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT); // | GL_DEPTH_BUFFER_BIT);

// Set yellow to draw different shapes on the framebuffer
color = {1.0f, 1.0f, 0.0f};

// Draw several shapes (already correctly stored in VBO objects)
Memory.select(VBO_RECTANGLES); // updates uniforms
glDrawArrays(GL_QUADS, 0, Memory.renderable[VBO_RECTANGLES].indexIndex);
Memory.select(VBO_CIRCLES);    // updates uniforms
glDrawArrays(GL_LINES, 0, Memory.renderable[VBO_CIRCLES].indexIndex);
Memory.select(VBO_2D_LIGHT);   // updates uniforms
glDrawArrays(GL_LINES, 0, Memory.renderable[VBO_2D_LIGHT].indexIndex);

// Done writing to framebuffer
glBindFramebuffer(GL_FRAMEBUFFER, 0);

// Correct projection, just calculates the view based on current zoom
Projection = setOrthoFrustum(-config.zoomed_width/2, config.zoomed_width/2,
                             -config.zoomed_height/2, config.zoomed_height/2, 0, 100);
View.identity();
Model.identity();
Model.scale(10.0);

// Select texture shader to draw what was drawn to the offscreen framebuffer / texture.
// Standard texture shader, 100% *guaranteed* to work, there are no errors in it (works normally on the screen)
shaderProgramTexture.use();

// This is a wrapper that binds a texture to an ID, just a shorthand function name
tex(texture); // FramebufferHandle; // ? // maybe the mistake is binding to the wrong target object?

color = {0.5f, 0.2f, 0.0f};
Memory.select(VBO_SPRITE); // select a square VBO for rendering sprites (works if any other texture is assigned to it)

// finally draw the sprite with the framebuffer's texture:
glDrawArrays(GL_TRIANGLES, 0, Memory.renderable[VBO_SPRITE].indexIndex);
I may have gotten the order of something completely wrong, or the FramebufferHandle/Framebuffer/texture object is not passed correctly somewhere. But I have spent all day on this, and I hope someone more experienced than me can spot the mistake.
GL_COLOR is not an accepted value for glDrawBuffer
See OpenGL 4.6 API Compatibility Profile Specification, 17.4.1 Selecting Buffers for Writing, Table 17.4 and Table 17.5, page 628
Table 17.4: NONE, FRONT_LEFT, FRONT_RIGHT, BACK_LEFT, BACK_RIGHT, FRONT, BACK, LEFT, RIGHT, FRONT_AND_BACK, AUXi: arguments to DrawBuffer when the context is bound to a default framebuffer, and the buffers they indicate. The same arguments are valid for ReadBuffer, but only a single buffer is selected.
Table 17.5: COLOR_ATTACHMENTi: arguments to DrawBuffer(s) and ReadBuffer when the context is bound to a framebuffer object, and the buffers they indicate. i in COLOR_ATTACHMENTi may range from zero to the value of MAX_COLOR_ATTACHMENTS minus one.
This means that glDrawBuffer(GL_COLOR); and glReadBuffer(GL_COLOR); will generate a GL_INVALID_ENUM error.
Use GL_COLOR_ATTACHMENT0 instead.
Furthermore, glCheckFramebufferStatus(GL_FRAMEBUFFER) checks the completeness of the framebuffer object which is bound to the target.
This means that
glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE
has to be done before
glBindFramebuffer(GL_FRAMEBUFFER, 0);
Or you have to use:
glCheckNamedFramebufferStatus(Framebuffer, GL_FRAMEBUFFER);
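Applied to the question's GenerateFrameBuffer(), the fixed portion might look like this (a sketch reusing the question's names):

glGenFramebuffers(1, &Framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, Framebuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texture, 0);
glDrawBuffer(GL_COLOR_ATTACHMENT0);  // valid enum when an FBO is bound
glReadBuffer(GL_COLOR_ATTACHMENT0);

// check completeness while the FBO is still bound
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    console_log("GL_FRAMEBUFFER != GL_FRAMEBUFFER_COMPLETE\n");

glBindFramebuffer(GL_FRAMEBUFFER, 0);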
I have implemented deferred rendering and am trying to use multisample textures for anti-aliasing.
I render the scene into an FBO with multisample textures, use glBlitFramebuffer to resolve them into regular textures in a second FBO, and finally bind those textures to the lighting shader that produces the final image.
// draw to textures
mMultiGeometryFBO->bind();
glViewport(0, 0, mWidth, mHeight);
glEnable(GL_DEPTH_TEST);
glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);

// calling all modules to draw to the FBO
for (auto r : mRenderer)
    r->renderMaterial(camera);

glBindFramebuffer(GL_READ_FRAMEBUFFER, mMultiGeometryFBO->fbo());
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, mGeometryFBO->fbo());

glReadBuffer(GL_COLOR_ATTACHMENT0);
glDrawBuffer(GL_COLOR_ATTACHMENT0);
glBlitFramebuffer(0, 0, mWidth, mHeight,
                  0, 0, mWidth, mHeight,
                  GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT, GL_LINEAR);

glReadBuffer(GL_COLOR_ATTACHMENT1);
glDrawBuffer(GL_COLOR_ATTACHMENT1);
glBlitFramebuffer(0, 0, mWidth, mHeight,
                  0, 0, mWidth, mHeight, GL_COLOR_BUFFER_BIT, GL_LINEAR);

glReadBuffer(GL_COLOR_ATTACHMENT2);
glDrawBuffer(GL_COLOR_ATTACHMENT2);
glBlitFramebuffer(0, 0, mWidth, mHeight,
                  0, 0, mWidth, mHeight, GL_COLOR_BUFFER_BIT, GL_LINEAR);

// draw to screen
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glClear(GL_DEPTH_BUFFER_BIT);
mSkybox->renderMaterial(camera);

mShader->use();
mShader->setTexture("tDiffuse", mDiffuseColor, 0);
mShader->setTexture("tNormal", mNormals, 1);
mShader->setTexture("tMaterial", mMaterialParams, 2);
mShader->setTexture("tDepth", mDepthBuffer, 3);
mShader->setTexture("tLights", mLightColor, 4);
mQuad->draw();
This produces a visible line at the horizon (between the geometry and the skybox).
The line's color is the clear color. Clearing only the depth merely reduces the problem when moving. Rendering the skybox into the FBO before rendering the geometry produces less visible artifacts, but the line is still there.
Edit: forgot the picture
Resolving the multisample target before the lighting pass does not make sense conceptually. What you get is that the values in your g-buffers are averaged at the edges of objects. This is especially bad for the normal directions. Think about it: if you have a pixel which contains 50% of your ground plane and 50% of your sky, you get a normal direction of (normal_ground + normal_sky)/2. This is totally different from calculating the final color of each of these parts with its original normal and mixing the resulting colors.
If you want to do multisampling with deferred rendering, you have to use the multisampled target for the lighting pass, enable per-sample shading, access and light each sample individually, and only blit the final result to a non-multisampled target. However, that will be exorbitantly expensive. You especially lose the benefits of multisampling over supersampling.
I don't know if there are some neat tricks to still work with multisampling in a more efficient way, but the usual approach is to not use multisampling at all and to do the anti-aliasing via some image-based postprocessing pass.
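For reference, if one did take the expensive per-sample route, the host-side setup might look roughly like this (a sketch, GL 4.0+; mDiffuseColorMS is a hypothetical multisampled g-buffer texture id, and the lighting shader would declare the uniform as sampler2DMS and read samples with texelFetch(tDiffuse, ivec2(gl_FragCoord.xy), gl_SampleID)):

// force the lighting pass to run once per sample rather than once per pixel
glEnable(GL_SAMPLE_SHADING);
glMinSampleShading(1.0f);  // shade every sample

// bind the multisampled g-buffer texture directly; no resolve blit beforehand
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, mDiffuseColorMS);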
The Problem
I have been trying to implement shadows in OpenGL for some time. I have finally gotten it to a semi-working state: the shadow appears, but it covers the scene in strange places [i.e. it is not relative to the light].
To further explain the above gif: as I move the light source further away from the scene (to the left), the shadow stretches further. Why? If anything, it should show more of the scene.
Update - I messed around with the light's position and am now getting this (confusing) result:
Depth Map
Here it is:
The Code
Because this is a difficult issue to pinpoint - I will post a large chunk of the code I am using in this application.
The Framebuffer and Depth Texture - The first thing I needed was a framebuffer to record the depth values of all the drawn objects and then I needed to dump these values into a depth texture (the shadow-map):
// Create Framebuffer
FramebufferName = 0;
glGenFramebuffers(1, &FramebufferName);
glBindFramebuffer(GL_FRAMEBUFFER, FramebufferName);

// Create and Load Depth Texture
glGenTextures(1, &depthTexture);
glBindTexture(GL_TEXTURE_2D, depthTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 1024, 1024, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, 0);
glBindTexture(GL_TEXTURE_2D, 0);

// Attach Texture To Framebuffer
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTexture, 0);
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);

// Check for errors
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    Falcon::Debug::error("ShadowBuffer [Framebuffer] could not be initialized.");
Rendering The Scene - First I do the shadow pass, which just runs through some basic shaders and outputs to the framebuffer, and then I do a second, regular pass that actually draws the scene and does the GLSL shadow-map sampling:
//Clear
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
//Select Main Shader
normalShader->useShader();
//Bind + Update + Draw
/* Render Shadows */
shadowShader->useShader();
glBindFramebuffer(GL_FRAMEBUFFER, Shadows::framebuffer());
//Viewport
glViewport(0,0,640,480);
//GLM Matrix Definitions
glm::mat4 shadow_matrix_view;
glm::mat4 shadow_matrix_projection;
//View And Projection Calculations
shadow_matrix_view = glm::lookAt(glm::vec3(light.x,light.y,light.z), glm::vec3(0,0,0), glm::vec3(0,1,0));
shadow_matrix_projection = glm::perspective(45.0f, 1.0f, 0.1f, 1000.0f);
//Calculate MVP(s)
glm::mat4 shadow_depth_mvp = shadow_matrix_projection * shadow_matrix_view * glm::mat4(1.0);
glm::mat4 shadow_depth_bias = glm::mat4(0.5,0,0,0,0,0.5,0,0,0,0,0.5,0,0.5,0.5,0.5,1) * shadow_depth_mvp;
//Send Data To The GPU
glUniformMatrix4fv(glGetUniformLocation(shadowShader->getShader(),"depth_matrix"), 1, GL_FALSE, &shadow_depth_mvp[0][0]);
glUniformMatrix4fv(glGetUniformLocation(normalShader->getShader(),"depth_matrix_bias"), 1, GL_FALSE, &shadow_depth_bias[0][0]);
renderScene();
glBindFramebuffer(GL_FRAMEBUFFER, 0);
/* Clear */
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
/* Shader */
normalShader->useShader();
/* Shadow-map */
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, Shadows::shadowmap());
glUniform1f(glGetUniformLocation(normalShader->getShader(),"shadowMap"),0);
/* Render Scene */
glViewport(0,0,640,480);
renderScene();
Fragment Shader - This is where I calculate the final color to be output and do the depth-texture / shadow-map sampling. It could be where I am going wrong:
// Shadows
uniform sampler2DShadow shadowMap;
in vec4 shadowCoord;

void main()
{
    // Lighting calculations...

    // Shadow sampling:
    float visibility = 1.0;
    if (texture(shadowMap, shadowCoord.xyz) < shadowCoord.z) {
        visibility = 0.1;
    }

    // Final output
    outColor = finalColor * visibility;
}
Edits
<1> AMD Hardware Issue - It was also suggested that this could be a GPU issue, but I find this hard to believe given that it's a Radeon HD 6670. Would it be worth putting in an Nvidia card to test this theory?
<2> Suggest Changes - I made some suggested changes from the comments and answers:
Firstly, I changed the light's perspective projection to an orthographic one, which gave me the accuracy I needed in the shadow-map so that I can now see the depth clearly (i.e. it's not all white). In addition, it removes the need for the perspective division, so I am using 3-dimensional coordinates for testing this. Below is a screenshot:
Secondly, I changed my texture sampling to this: visibility = texture(shadowMap, shadowCoord.xyz); which now always returns 0, and thus I cannot see the scene, as it is considered ENTIRELY shadowed.
Thirdly and finally, I swapped GL_LEQUAL for GL_LESS as suggested, and no changes occurred.
There is something fundamentally wrong with your shader:
uniform sampler2DShadow shadowMap; // NOTE: Shadow samplers perform comparison !!
...
if (texture(shadowMap, shadowCoord.xyz) < shadowCoord.z)
You have texture compare vs. reference enabled. That means the 3rd texture coordinate is going to be compared by the texture(...) function, and the returned value is going to be the result of the test function (GL_LEQUAL in this case).
In other words, texture(...) will return either 0.0 (fail) or 1.0 (pass) by comparing the depth looked up at shadowCoord.xy against the value of shadowCoord.z. You are doing this test twice.
Consider using this altered code instead:
float visibility = texture(shadowMap, shadowCoord.xyz);
That is not going to produce quite the results you want because your comparison function is GL_LEQUAL, but it is a start. Consider changing the comparison function to GL_LESS to get an exact functional match.
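Since the comparison function is part of the shadow map's texture state (it was set to GL_LEQUAL during texture creation above), the change would go there; a one-line sketch against the question's setup code:

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LESS);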
I am seeing a problem where the textures disappear after the application has been used for a minute or two. Why would the textures be disappearing? The 3D cube remains on the screen at all times; the places where the textures were appear as white boxes once the textures disappear.
My DrawGLScene method looks like this:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // Clear Screen And Depth Buffer
glLoadIdentity();                                   // Reset The Current Modelview Matrix
glTranslatef(0.0f, 0.0f, -7.0f);                    // Translate Into The Screen 7.0 Units

// rotquad is a value that is updated as the user interacts with the UI by +/-9 to rotate the cube
glRotatef(rotquad, 0.0f, 1.0f, 0.0f);

// cube code here

RECT desktop;
const HWND hDesktop = GetDesktopWindow();
GetWindowRect(hDesktop, &desktop);
long horizontal = desktop.right;
long vertical = desktop.bottom;

glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glOrtho(-5.0, 3, 3, -5.0, -1.0, 10.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

glDisable(GL_CULL_FACE);
glEnable(GL_TEXTURE_2D);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glClear(GL_DEPTH_BUFFER_BIT);
glColor4f(255.0f, 255.0f, 255.0f, 0.0f);

if (hoverRight) {
    imageLoaderOut(outImage);
    imageLoaderIn(inImage);
    imageLoaderUp(upImage);
    imageLoaderLeft(leftHover);
    imageLoaderDown(upImage);
    imageLoaderRight(rightImage);
}
// code for hover left, up and down is the same as the hover right code above

glDisable(GL_TEXTURE_2D);
return TRUE; // Keep Going
}
This method is one of the imageLoader methods (the others being called are almost identical, except for location/position):
void imageLoaderOut(const char* value)
{
    FIBITMAP* bitmap60 = FreeImage_Load(
        FreeImage_GetFileType(value, 0),
        value, PNG_DEFAULT);
    FIBITMAP* pImage60 = FreeImage_ConvertTo32Bits(bitmap60);
    int nWidth60 = FreeImage_GetWidth(pImage60);
    int nHeight60 = FreeImage_GetHeight(pImage60);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, nWidth60, nHeight60, 0, GL_RGBA, GL_UNSIGNED_BYTE, (void*)FreeImage_GetBits(pImage60));
    FreeImage_Unload(pImage60);

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(2.8f, -1.1f); // moves BOTTOM EDGE UP or DOWN - stretches length of image
    glTexCoord2f(0.0f, 1.0f); glVertex2f(2.8f, -1.9f);
    glTexCoord2f(1.0f, 1.0f); glVertex2f(2.1f, -1.9f);
    glTexCoord2f(1.0f, 0.0f); glVertex2f(2.1f, -1.1f); // moves BOTTOM EDGE UP or DOWN - stretches length of image
    glEnd();
}
It's just a guess, but you have a severe design issue in your code, combined with a memory leak, that can lead to undefined results like the ones you've described.
First, in imageLoaderOut() you are reading the texture from the HDD, converting it to 32 bpp and sending the data to OpenGL every frame, since you call it from DrawGLScene. That is really not a valid way to do things: you don't need to load resources every frame. Do it once and for all in some kind of Initialize() function, and just use the GL resource when drawing.
Then, I think you have a memory leak here, because you never unload bitmap60. As you load it every frame, possibly thousands of times per second, this unreleased memory accumulates. So, after some time, something goes really bad and FreeImage refuses to load textures.
So, the possible solution is to:
move resource loading to the initialization phase of your application
free the leaked resource: FreeImage_Unload(bitmap60) in each loading function
Hope it helps.
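A minimal sketch of those two fixes (the init function and texture id are hypothetical names; note that FreeImage hands back BGRA-ordered pixels on little-endian systems):

GLuint outTexture = 0;

// called once at startup, not from DrawGLScene
void initImageOut(const char* value)
{
    FIBITMAP* bitmap60 = FreeImage_Load(FreeImage_GetFileType(value, 0), value, PNG_DEFAULT);
    FIBITMAP* pImage60 = FreeImage_ConvertTo32Bits(bitmap60);

    glGenTextures(1, &outTexture);
    glBindTexture(GL_TEXTURE_2D, outTexture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
                 FreeImage_GetWidth(pImage60), FreeImage_GetHeight(pImage60),
                 0, GL_BGRA, GL_UNSIGNED_BYTE, (void*)FreeImage_GetBits(pImage60));

    FreeImage_Unload(pImage60);
    FreeImage_Unload(bitmap60);   // the bitmap that was previously leaking
}

// per frame, in place of the old imageLoaderOut upload: just bind and draw
glBindTexture(GL_TEXTURE_2D, outTexture);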
The problem seems to be in glTexImage2D. The manual can be found here: http://www.opengl.org/sdk/docs/man/xhtml/glTexImage2D.xml
In particular, it says:
glTexImage2D specifies the two-dimensional texture for the current texture unit, specified with glActiveTexture.
Since you are calling glTexImage2D multiple times, it seems that you are overwriting the same texture multiple times.
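In other words, each image needs its own texture object; a short sketch with hypothetical ids:

// once at startup: one texture object per image
GLuint texOut = 0, texIn = 0;
glGenTextures(1, &texOut);
glGenTextures(1, &texIn);
// ... upload each image into its own object with glBindTexture + glTexImage2D ...

// per frame: bind the texture that belongs to the quad being drawn
glBindTexture(GL_TEXTURE_2D, texOut);
// draw the 'out' quad
glBindTexture(GL_TEXTURE_2D, texIn);
// draw the 'in' quad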