OpenGL Frame Buffer Object for rendering to textures, renders weirdly - opengl

I'm using Python, but OpenGL works pretty much the same way as in any other language.
The problem is that when I try to render a texture or a line to a texture by means of a frame buffer object, it is rendered upside down and too small, in the bottom-left corner. Very weird. I have these pictures to demonstrate:
This is how it looks,
www.godofgod.co.uk/my_files/Incorrect_operation.png
This is how it did look when I was using pygame instead. Pygame is too slow, I've learnt. My game would be unplayable without OpenGL's speed. Ignore the curved corners. I haven't implemented those in OpenGL yet. I need to solve this issue first.
www.godofgod.co.uk/my_files/Correct_operation.png
I'm not using depth.
What could cause this erratic behaviour? Here's the code you may find useful:
def texture_to_texture(target,surface,offset): #Target is an object of a class which contains texture data. This texture should be the target. Surface is the same but is the texture which should be drawn onto the target. offset is the offset where the surface texture will be drawn on the target texture.
    #This will create the textures if not already created. It creates textures from image data or block colour. Seems to work fine, as direct rendering of textures to the screen works brilliantly.
    if target.texture == None:
        create_texture(target)
    if surface.texture == None:
        create_texture(surface)
    frame_buffer = glGenFramebuffersEXT(1)
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, frame_buffer)
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, target.texture, 0) #target.texture is the texture id from the object
    glPushAttrib(GL_VIEWPORT_BIT)
    glViewport(0,0,target.surface_size[0],target.surface_size[1])
    draw_texture(surface.texture,offset,surface.surface_size,[float(c)/255.0 for c in surface.colour]) #The last part converts the 0-255 colours to 0-1. The textures appear with the correct colour when drawn, so don't worry about that.
    glPopAttrib()
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0)
    glDeleteFramebuffersEXT(1, [int(frame_buffer)]) #Requires a sequence of ints rather than the raw ctypes value, hence the odd [int(frame_buffer)] way of passing the frame buffer id.
This function may also be useful,
def draw_texture(texture,offset,size,c):
    glMatrixMode(GL_MODELVIEW)
    glLoadIdentity() #Loads model matrix
    glColor4fv(c)
    glBegin(GL_QUADS)
    glVertex2i(*offset) #Top Left
    glVertex2i(offset[0],offset[1] + size[1]) #Bottom Left
    glVertex2i(offset[0] + size[0],offset[1] + size[1]) #Bottom Right
    glVertex2i(offset[0] + size[0],offset[1]) #Top Right
    glEnd()
    glColor4fv((1,1,1,1))
    glBindTexture(GL_TEXTURE_2D, texture)
    glBegin(GL_QUADS)
    glTexCoord2f(0.0, 0.0)
    glVertex2i(*offset) #Top Left
    glTexCoord2f(0.0, 1.0)
    glVertex2i(offset[0],offset[1] + size[1]) #Bottom Left
    glTexCoord2f(1.0, 1.0)
    glVertex2i(offset[0] + size[0],offset[1] + size[1]) #Bottom Right
    glTexCoord2f(1.0, 0.0)
    glVertex2i(offset[0] + size[0],offset[1]) #Top Right
    glEnd()

You don't show your projection matrix, so I'll assume it's the identity too.
The OpenGL framebuffer origin is at the bottom left, not the top left.
The size issue is more difficult to explain. What is your projection matrix, after all?
Also, you don't show how you use the texture, and I'm not sure what we're looking at in your "incorrect" image.
Some unrelated comments:
Creating a framebuffer every frame is not the right way to go about it; create it once and reuse it.
Come to think of it, why use a framebuffer at all? It seems the only thing you're after is blending into the frame buffer, and glEnable(GL_BLEND) does that just fine.
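If you do keep the FBO, here is a minimal sketch of the viewport/projection setup while it is bound, written with the same legacy EXT calls the question uses; frame_buffer is assumed to be created once at start-up, and target_width, target_height and DrawTexturedQuad() are placeholder names standing in for the question's target size and draw_texture() call:

glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, frame_buffer);   // reused, not recreated per call
glViewport(0, 0, target_width, target_height);

glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
// Map pixel coordinates onto the whole target texture, with (0,0) at the top
// left like typical 2D code expects; this compensates for the bottom-left
// framebuffer origin and stops the result from coming out tiny.
glOrtho(0.0, target_width, target_height, 0.0, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

DrawTexturedQuad();   // placeholder for the question's draw_texture() call

glMatrixMode(GL_PROJECTION);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);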

Related

loading textures in C++ OpenGL using DevIL

Using C++, I'm trying to load a texture into OpenGL using DevIL. After scrounging around for different code segments, I have a bit of code done (shown below), but it doesn't seem to work completely.
Loading a texture (Part of a Texture2D class):
void Texture2D::LoadTexture(const char *file_name)
{
    unsigned int image_ID;

    ilInit();
    iluInit();
    ilutInit();
    ilutRenderer(ILUT_OPENGL);

    image_ID = ilutGLLoadImage((char*)file_name);

    sheet.texture_ID = image_ID;
    sheet.width = ilGetInteger(IL_IMAGE_WIDTH);
    sheet.height = ilGetInteger(IL_IMAGE_HEIGHT);
}
This compiles and works fine. I do realise that I should only call ilInit(), iluInit(), and ilutInit() once, but if I remove those lines the program instantly breaks upon loading any image (it compiles fine, but errors at runtime).
Displaying the texture in OpenGL (Part of the same class):
void Texture2D::Draw()
{
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glPushMatrix();

    u = v = 0;
    // this is the origin point (the position of the button)
    VectorXYZ point_TL; // Top Left
    VectorXYZ point_BL; // Bottom Left
    VectorXYZ point_BR; // Bottom Right
    VectorXYZ point_TR; // Top Right

    /* For the sake of simplicity, I've removed the code calculating the 4 points
       of the Quad. Assume that they are found correctly. */

    glColor3f(1, 1, 1);

    // bind the appropriate texture frame
    glBindTexture(GL_TEXTURE_2D, sheet.texture_ID);

    // draw the image as a quad the size of the first loaded image
    glEnable(GL_TEXTURE_2D);
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0);
    glVertex3f(point_TL.x, point_TL.y, point_TL.z); // Top Left
    glTexCoord2f(0, 1);
    glVertex3f(point_BL.x, point_BL.y, point_BL.z); // Bottom Left
    glTexCoord2f(1, 1);
    glVertex3f(point_BR.x, point_BR.y, point_BR.z); // Bottom Right
    glTexCoord2f(1, 0);
    glVertex3f(point_TR.x, point_TR.y, point_TR.z); // Top Right
    glEnd();

    glPopMatrix();
    glDisable(GL_BLEND);
    glDisable(GL_TEXTURE_2D);
}
Currently, the quad shows up, but it's completely white (the background colour it's been given). The image I'm loading exists and loads fine (verified using the loaded size values).
Another few things I should note:
1) I am using a depth buffer. I've heard this doesn't go well with GL_BLEND?
2) I would really like to use the ilutGLLoadImage function.
3) I appreciate example code, as I'm a newbie to OpenGL and DevIL as a whole.
Yes, you have a problem, and there might well be issues with ilutGLLoadImage().
Try doing things manually (a rough sketch follows below):
Load the image using ilLoadImage
Generate the OpenGL texture handle using glGenTextures
Upload the image to OpenGL with glTexImage2D and ilGetData()
See this link for a working solution
http://r3dux.org/tag/ilutglloadimage/
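Roughly, the manual path could look like the sketch below; the IL_RGBA conversion, filter settings and error handling are my assumptions rather than part of the answer, and ilInit() is assumed to have been called already:

GLuint LoadTextureManually(const char *file_name)
{
    ILuint image_ID;
    ilGenImages(1, &image_ID);
    ilBindImage(image_ID);
    if (!ilLoadImage(file_name))                 // may need an ILstring cast on Unicode builds
        return 0;                                // load failed
    ilConvertImage(IL_RGBA, IL_UNSIGNED_BYTE);   // force a known pixel layout

    GLuint texture_ID;
    glGenTextures(1, &texture_ID);
    glBindTexture(GL_TEXTURE_2D, texture_ID);
    // Non-mipmapped filters, so the texture is complete without mipmaps
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
                 ilGetInteger(IL_IMAGE_WIDTH), ilGetInteger(IL_IMAGE_HEIGHT),
                 0, GL_RGBA, GL_UNSIGNED_BYTE, ilGetData());

    ilDeleteImages(1, &image_ID);                // GL keeps its own copy of the pixels
    return texture_ID;
}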
I know, this solution seems "a little" complicated, but nobody knows how much time you would spend fighting a bug hidden deep inside DevIL.
Another way of fixing things: check your GL texture setup code. Anything wrong in the filtering setup can be a reason for GL_INVALID_OPERATION.
We've run into the "white texture" issue many times while programming on old ATI cards.
Oh! The biggest guess: non-power-of-two textures. Is your texture file 2^N by 2^N, or something different?
To use non-power-of-two textures you have to use the appropriate GL extensions.
And one more: are you using the textures in the same thread or in another one? Remember that glGenTextures() and glBindTexture()/glBegin()/glEnd() should be called from the same thread, the one that owns the GL context.

OpenGL draw GL_LINES on top of GL_QUADS

I am drawing a cube and a line grid in 3D in OpenGL:
glBegin(GL_QUADS);
...
glEnd();
glBegin(GL_LINES);
...
glEnd();
Now, independent of the order (if I draw the lines first or the quads first) and independent of the position, it always happens that the lines are drawn over the cube. I thought OpenGL draws front to back.
What I tried is to use:
glEnable(GL_BLEND);
glBlendFunc (GL_ONE, GL_ONE);
which does work but part of the cube is transparent now.
I also tried glDepthFunc(GL_NEVER) with glEnable(GL_DEPTH_TEST) disabled, but I get the same problem: the cube appears transparent.
Does anyone have a hint to overcome this problem?
If you want to draw the lines in the background, just draw them (and the rest of the background) first, clear the depth buffer, then render the rest of the scene.
Or you can just give the lines a depth such that they will always be behind everything else, but then you have to make sure that none of your gameworld objects go behind it.
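As a rough illustration of that first suggestion (drawBackgroundLines() and drawScene() are placeholder names):

drawBackgroundLines();          // background pass: writes colour and depth
glClear(GL_DEPTH_BUFFER_BIT);   // forget the background's depth values
drawScene();                    // the cube and everything else now render on top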
Now, independent of the order (if I draw the lines first or the quads first) and independent of the position, it always happens that the lines are drawn over the cube.
If your lines have proper depth values, then you forgot to enable the depth buffer. If you did enable the depth buffer, then you must also make sure that the library you used to initialize OpenGL actually requested a depth buffer.
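For example (shown here with GLUT purely as an illustration; other windowing libraries have equivalent flags):

// at start-up, after glutInit():
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH);   // actually request a depth buffer
glutCreateWindow("scene");
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);

// each frame:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);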
I thought OpenGL draws front to back.
It does not. OpenGL draws polygons in the same order you specify them. There is no automatic sorting of any kind.
Does anyone have a hint to overcome this problem?
Well, you could clear the depth buffer, but this would be slow and inefficient. Technically, you should almost never do that.
glDepthMask(GL_FALSE) will disable writing to the depth buffer, i.e. any object drawn after that call will not update the depth buffer but will still use the data that is already stored in it. It is frequently used for particle systems. So call glDepthMask(GL_FALSE), draw the "lines", call glDepthMask(GL_TRUE), then draw the cube.
If you combine glDepthMask(GL_FALSE) with glDepthFunc(GL_ALWAYS), the object will always be drawn, completely ignoring the depth buffer (but the depth buffer won't be changed).
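A minimal sketch of that ordering, with drawGrid() and drawCube() as placeholder names:

glEnable(GL_DEPTH_TEST);

glDepthMask(GL_FALSE);   // the grid is depth-tested but writes no depth
drawGrid();
glDepthMask(GL_TRUE);

drawCube();              // the cube covers the grid wherever it is drawn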

How to make fading-to-black effect with OpenGL?

I'm trying to achieve a fade-to-black effect, but I don't know how to do it. I tried several things, but they fail because of how OpenGL works.
I will explain how it would work:
If I draw 1 white pixel and move it one pixel in some direction each frame, then each frame the screen pixels lose one R/G/B value (on a 0-255 scale), so after 255 frames the white pixel is fully black. So as I move the white pixel around, I would see a gradient trail going from white to black, with each pixel exactly one colour value darker than the previous one.
Edit: I would prefer a non-shader way of doing this, but if that's not possible I can accept a shader way too.
Edit2: Since there is some confusion here: I can already do this kind of effect by drawing a black transparent quad over my whole scene. BUT, this does not work the way I want it to; there is a limit on how dark the pixels can get, so some pixels always stay "visible" (above zero colour value), because 1*0.9 = 0.9 gets rounded back up to 1, and so on. I can "fix" this by making the trail shorter, but I want to be able to adjust the trail length as much as possible, and instead of a proportional (percentage-based) fade I want a linear one, so it always subtracts 1 from each R, G and B value on the 0-255 scale.
Edit3: There is still some confusion, so let's be clear: I want to improve on the effect you get by leaving GL_COLOR_BUFFER_BIT out of glClear(). I don't want to see the pixels on my screen FOREVER, so I want to darken them over time by drawing a quad over my scene that reduces each pixel's colour value by 1 (on the 0-255 scale).
Edit4: I'll make it simple: I want an OpenGL method for this, and the effect should use as little power, memory and bandwidth as possible. The effect is supposed to work without clearing the screen pixels, so if I draw a transparent quad over my scene, the previously drawn pixels get darker, and so on. But as explained a few times above, that is not working very well. The big NOs are: 1) reading the pixels back from the screen, modifying them one by one in a loop and uploading them back; 2) rendering my objects X times with different darknesses to emulate the trail effect; 3) multiplying the colour values, since that will never turn the pixels black; they stay on the screen forever at a certain brightness (see the explanation somewhere above).
If I draw 1 white pixel and move it one pixel in some direction each frame, then each frame the screen pixels lose one R/G/B value (on a 0-255 scale), so after 255 frames the white pixel is fully black. So as I move the white pixel around, I would see a gradient trail going from white to black, with each pixel exactly one colour value darker than the previous one.
Before I explain how to do this, I would like to say that the visual effect you're going for is a terrible visual effect and you should not use it. Subtracting a value from each of the RGB components will produce a different color, not a darker version of the same color. The RGB color (255,128,0), if you subtract 1 from each component 128 times, will become (127, 0, 0). The first color is orange, the second is a dark red. These are not the same.
Now, since you haven't really explained this very well, I have to make some guesses. I am assuming that there are no "objects" in what you are rendering. There is no state. You're simply drawing stuff at arbitrary locations, and you don't remember what you drew where, nor do you want to remember what was drawn where.
To do what you want, you need two off-screen buffers. I recommend using FBOs and screen-sized textures for these. The basic algorithm is simple. You render the previous frame's image to the current image, using a blend mode that "subtracts 1" from the colors you write. Then you render the new stuff you want to the current image. Then you display that image. After that, you switch which image is previous and which is current, and do the process all over again.
Note: The following code will assume OpenGL 3.3 functionality.
Initialization
So first, during initialization (after OpenGL is initialized), you must create your screen-sized textures. You also need two screen-sized depth buffers.
GLuint screenTextures[2];
GLuint screenDepthBuffers[2];
GLuint fbos[2]; //Put these definitions somewhere useful.

glGenTextures(2, screenTextures);
glGenRenderbuffers(2, screenDepthBuffers);
glGenFramebuffers(2, fbos);
for(int i = 0; i < 2; ++i)
{
    glBindTexture(GL_TEXTURE_2D, screenTextures[i]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, SCREEN_WIDTH, SCREEN_HEIGHT, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glBindTexture(GL_TEXTURE_2D, 0);

    glBindRenderbuffer(GL_RENDERBUFFER, screenDepthBuffers[i]);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, SCREEN_WIDTH, SCREEN_HEIGHT);
    glBindRenderbuffer(GL_RENDERBUFFER, 0);

    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbos[i]);
    glFramebufferTexture(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, screenTextures[i], 0);
    glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER, screenDepthBuffers[i]);
    if(glCheckFramebufferStatus(GL_DRAW_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    {
        //Error out here.
    }
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
}
Drawing Previous Frame
The next step will be drawing the previous frame's image to the current image.
To do this, we need to have the concept of a previous and current FBO. This is done by having two variables: currIndex and prevIndex. These values are indices into our GLuint arrays for textures, renderbuffers, and FBOs. They should be initialized (during initialization, not for each frame) as follows:
currIndex = 0;
prevIndex = 1;
In your drawing routine, the first step is to draw the previous frame, subtracting one (again, I strongly suggest using a real blend here).
This won't be full code; there will be pseudo-code that I expect you to fill in.
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbos[currIndex]);
glClearColor(...);
glClearDepth(...);
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT|GL_STENCIL_BUFFER_BIT);
glActiveTexture(GL_TEXTURE0 + 0);
glBindTexture(GL_TEXTURE_2D, screenTextures[prevIndex]);
glUseProgram(BlenderProgramObject); //The shader will be talked about later.
RenderFullscreenQuadWithTexture();
glUseProgram(0);
glBindTexture(GL_TEXTURE_2D, 0);
The RenderFullscreenQuadWithTexture function does exactly what it says: renders a quad the size of the screen, using the currently bound texture. The program object BlenderProgramObject is a GLSL shader that does our blend operation. It fetches from the texture and does the blend. Again, I'm assuming you know how to set up a shader and so forth.
The fragment shader would have a main function that looks something like this:
shaderOutput = texture(prevImage, texCoord) - (1.0/255.0);
Again, I strongly advise this:
shaderOutput = texture(prevImage, texCoord) * (0.05);
If you don't know how to use shaders, then you should learn. But if you don't want to, then you can get the same effect using a glTexEnv function. And if you don't know what those are, I suggest learning shaders; it's so much easier in the long run.
Draw Stuff As Normal
Now, you just render everything you would as normal. Just don't unbind the FBO; we still want to render to it.
Display the Rendered Image on Screen
Normally, you would use a swapbuffer call to display the results of your rendering. But since we rendered to an FBO, we can't do that. Instead, we have to do something different. We must blit our image to the backbuffer and then swap buffers.
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbos[currIndex]);
glBlitFramebuffer(0, 0, SCREEN_WIDTH, SCREEN_HEIGHT, 0, 0, SCREEN_WIDTH, SCREEN_HEIGHT, GL_COLOR_BUFFER_BIT, GL_NEAREST);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
//Do OpenGL swap buffers as normal
Switch Images
Now we need to do one more thing: switch the images that we're using. The previous image becomes current and vice versa:
std::swap(currIndex, prevIndex);
And you're done.
You may want to render a black rectangle with alpha going from 1.0 to 0.0 using glBlendFunc (GL_ONE, GL_SRC_ALPHA).
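As a rough sketch of that suggestion (the function name and the assumption of a pixel-aligned orthographic projection are mine, not part of the answer):

void drawFadeQuad(float width, float height, float fadeAlpha)
{
    // A black quad over the whole view; with glBlendFunc(GL_ONE, GL_SRC_ALPHA)
    // the result is dst * fadeAlpha, so the existing pixels are scaled down.
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_SRC_ALPHA);
    glDisable(GL_TEXTURE_2D);
    glColor4f(0.0f, 0.0f, 0.0f, fadeAlpha);   // fadeAlpha in [0,1]; smaller fades faster
    glBegin(GL_QUADS);
    glVertex2f(0.0f, 0.0f);
    glVertex2f(width, 0.0f);
    glVertex2f(width, height);
    glVertex2f(0.0f, height);
    glEnd();
    glDisable(GL_BLEND);
}

Note that this is the multiplicative fade the question's Edit2 complains about: dark values are scaled towards zero but may never quite reach it, which is exactly why the asker wants a subtractive (linear) fade instead.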
Edit in response to your comment (reply doesn't fit in a comment):
You cannot fade single pixels depending on their age with a simple fade-to-black operation, because a render target normally does not "remember" what was drawn to it in previous frames. I could think of a way to do this by alternately rendering into one of a pair of FBOs and using their alpha channel for the age, but you would need a shader there. What you would do is: first render the FBO containing the pixels at their previous positions, decreasing their alpha value by one, dropping them when alpha == 0 and otherwise darkening them as their alpha decreases; then render the pixels at their current positions with alpha == 255.
If you only have moving pixels:
render FBO 2 to FBO 1, darkening each pixel in it by a scale (skip during first pass)
render moving pixels to FBO 1
render FBO 1 to FBO 2 (FBO 2 is the "age" buffer)
render FBO 2 to screen
If you want to modify some scene (i.e. have a scene and moving pixels in it):
set glBlendFunc (GL_ONE, GL_ZERO)
render FBO 2 to FBO 1, reducing each alpha > 0.0 in it by a scale (skip during first pass)
render moving pixels to FBO 1
render FBO 1 to FBO 2 (FBO 2 is the "age" buffer)
render the scene to screen
set glBlendFunc (GL_ONE, GL_SRC_ALPHA)
render FBO 2 to screen
Actually the scale should be (float) / 255.0 / 255.0 to make the components equally fade away (and not one that started at a lower value become zero before the others do).
If you only have a few moving pixels, you could re-render the pixel at all previous positions up to 255 "ticks" back.
Since you need to re-render each of the pixels anyway, just render each one with the proper colour gradient: the older the pixel, the darker. If you have a really large number of pixels, the dual-FBO approach might work better.
I am writing ticks, not frames, because frames can take a varying amount of time depending on the renderer and hardware, but you probably want the pixel trail to fade away within a constant time. That means you need to dim each pixel only after so-and-so many milliseconds, keeping its colour for the frames in between.
One non-shader way of doing this, especially if the fade to black is the only thing going on on the screen, is to grab the contents of the screen with glReadPixels, put them into a texture, and draw a screen-sized rectangle with that texture; you can then modulate the rectangle's colour towards black to achieve the effect you want.
It is the drivers; Windows itself does not support OpenGL, or only a low version (I think 1.5). All newer versions come with the drivers from ATI, NVIDIA, Intel, etc.
Are you using different cards?
What version of OpenGL are you effectively using?
It's situations like this that make it so I cannot use pure OpenGL. I am not sure if your project has room for it (it may not if you're using another windowing API), or whether the added complexity would be worth it, but adding a 2D library like SDL, which works with OpenGL, would allow you to work with the display surface's pixels directly in a reasonable fashion, as well as with pixels in general, which OpenGL generally doesn't make easy.
Then all you would need to do is run through the display surface's pixels before OpenGL renders its geometry and subtract 1 from each RGB component.
That's the easiest solution I can see, anyway, if using additional libraries with OpenGL is an option.

Reading depth value of transparent plane with glReadPixels and gluUnProject

I am trying to create a billiards simulation and have been using glReadPixels along with gluUnProject to project my mouse pointer into the scene.
This works fine if the mouse is pointing at an object in the scene (the table for instance) but when it points to the background it messes up the gluUnProject due to the glReadPixels call returning 1.0.
I'm trying to figure out how to draw a transparent plane at the same level of the table so that no matter where I point the mouse in my scene, it will get the depth as if it were pointing onto the same plane as the table.
If I draw a transparent quad without glAlphaFunc(GL_GREATER, 0.01f); it will draw the quad as white and the depth testing will work as I planned, but when I add in the call to alphaFunc to get the quad to be transparent, the depth goes back to what it was before. From what I've seen, glReadPixels reads the pixels from the frame buffer, so this makes sense, I'm just wondering how I can work around this.
I've also tried reversing the winding on the quad so that it wouldn't be visible from above, but this has the same problem of glReadPixels taking its measurements from the framebuffer.
In short, how do I get glReadPixels to read the depth component of an object without drawing that object to the screen?
Here's the calls to glReadPixels and gluUnProject:
winX = (float)x;
winY = (float)viewport[3] - (float)y;
glReadPixels(x, (int) winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winZ);
gluUnProject(winX, winY, winZ, modelview, projection, viewport, &posX, &posY, &posZ);
That is because alpha testing fully discards pixels that don't pass the test.
However, gluUnProject should not choke on a depth buffer value of 1.0; unprojecting a depth of 1.0 simply gives you a point at the far clipping plane's distance.
You can completely disable writes to the color buffer using
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
If you draw something after this, it will still get depth tested (if that's turned on) and its depth values will still be written to the depth buffer (if that's turned on) but none of its fragments will be visible in the color buffer.
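A rough sketch of how that fits the billiards case, with drawTablePlane() as a placeholder for rendering an invisible quad at table height:

glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);   // no colour output
glDepthMask(GL_TRUE);                                  // but depth is still written
drawTablePlane();
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);       // restore colour writes

// later, read back exactly as in the question:
glReadPixels(x, (int) winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winZ);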

OpenGL : How can I put the skybox in the infinity

I need to know how I can make the skybox appear as if it's at infinity.
I know that it's something related to depth, but I don't know exactly what to disable or enable.
First, turn off depth writes/testing (you don't need to bother with turning off depth testing if you draw the skybox first and clear your depth buffer):
glDisable(GL_DEPTH_TEST);
glDepthMask(false);
Then, move the camera to the origin and rotate it the inverse of the modelview matrix:
// assume we're working with the modelview
glPushMatrix();
// inverseModelView is a 4x4 matrix with no translation and a transposed
// upper 3x3 portion from the regular modelview
glLoadMatrixf(inverseModelView);
Now, draw your sky box and turn depth writes back on:
DrawSkybox();
glPopMatrix();
glDepthMask(true);
glEnable(GL_DEPTH_TEST);
You'll probably want to use glPush/PopAttrib() to ensure your other states get correctly set after you draw the skybox too (make sure to turn off things like lighting or blending if necessary).
You should do this before drawing anything so all color buffer writes happen on top of your sky box.
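A rough sketch of that state handling (DrawSkybox() is a placeholder, and which states you disable depends on what your renderer normally enables):

glPushAttrib(GL_DEPTH_BUFFER_BIT | GL_ENABLE_BIT);
glDisable(GL_DEPTH_TEST);
glDisable(GL_LIGHTING);
glDisable(GL_BLEND);
glDepthMask(GL_FALSE);

DrawSkybox();

glPopAttrib();   // restores the depth mask/test and the enable flags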
First, Clear the buffer.
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
Then, save your current modelview matrix and load the identity.
glPushMatrix();
glLoadIdentity();
Then render your skybox.
Skybox.render();
Then, clear the depth buffer and continue normally with rendering
glClear(GL_DEPTH_BUFFER_BIT);
OtherStuff.render();
glutSwapBuffers();
The only problem with drawing the sky box first is that your pixel shader will execute for every pixel of the sky box, just to be overwritten by other objects in your world later on. Your best bet is to render all opaque objects first and then render your sky box. That way the sky box's pixel shader only executes for the pixels that pass the z-buffer test.
There is no infinity. A skybox is just a textured box, normally centred on 0,0,0.
Here is a short tut: link text
The best approach I can think of is to draw it in a first pass (or layer), then clear only the depth buffer. After that, just draw the rest of the scene in another pass. This way the skybox will always remain "behind" the scene. Just remember to use the same camera for both passes and to keep the skybox snapped to the camera.